forum_id
stringlengths
9
20
forum_title
stringlengths
3
179
forum_authors
sequencelengths
0
82
forum_abstract
stringlengths
1
3.52k
forum_keywords
sequencelengths
1
29
forum_decision
stringclasses
22 values
forum_pdf_url
stringlengths
39
50
forum_url
stringlengths
41
52
venue
stringclasses
46 values
year
stringdate
2013-01-01 00:00:00
2025-01-01 00:00:00
reviews
sequence
2h1siDrSMl
RoRA-VLM: Robust Retrieval-Augmented Vision Language Models
[ "Jingyuan Qi", "Zhiyang Xu", "Rulin Shao", "Zihao Lin", "Yang Chen", "Di Jin", "Yu Cheng", "Qifan Wang", "Lifu Huang" ]
Though vision-language models (VLMs) have demonstrated impressive capabilities as general-purpose visual assistants, they still exhibit inferior performance on knowledge-intensive tasks such as information-seeking visual question answering, primarily due to the challenge of accurately encoding all the associations between visual objects and scenes to their corresponding entities and background knowledge. While retrieval augmentation methods offer an efficient way to integrate external knowledge, extending them to vision-language domain presents unique challenges in (1) precisely retrieving relevant information from external sources due to the inherent discrepancy within the multimodal queries, and (2) being resilient to the irrelevant, extraneous and noisy information contained in the retrieved multimodal knowledge snippets. In this work, we introduce RORA-VLM, a novel and robust retrieval augmentation framework specifically tailored for VLMs, with two key innovations: (1) a 2-stage retrieval process with Image-anchored Textual-query Expansion to synergistically combine the visual and textual information in the query and retrieve the most relevant multimodal knowledge snippets; and (2) a robust retrieval augmentation method that strengthens the resilience of VLMs against irrelevant information in the retrieved multimodal knowledge by injecting adversarial noises into the retrieval-augmented training process, and filters out extraneous visual information, such as unrelated entities presented in images, via a query-oriented visual token refinement strategy. We conduct extensive experiments to validate the effectiveness and robustness of our proposed methods on three widely adopted benchmark datasets: OVEN, InfoSeek and Enc-VQA. Our results demonstrate that with a minimal amount of training instance, RORA-VLM enables the LLaVA-v1.5 model to achieve significant performance improvement and constantly outperform state-of-the-art retrieval-augmented VLMs on all benchmarks while also exhibiting a novel zero-shot domain transfer capability.
[ "retrieval-augmented generation", "vision language model" ]
Reject
https://openreview.net/pdf?id=2h1siDrSMl
https://openreview.net/forum?id=2h1siDrSMl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxZyqnTUGQ", "zTeC9DSB6E", "sPvvk3sYpb", "rpazeeXctK", "rUHFrvvudS", "pEnNbRksJE", "oyusDD7q1f", "kotGTI40Xn", "YDGpoFHngS", "S63jrgce9b", "QPRK0sjLDs", "OynpKhbLcU", "L0mA4M4N3S", "KZv9D8uzJ9", "JgyCtNJANK", "I7qnRmKJhz", "F9WHqMfSxa", "EJyPX9rutU", "DRPRYKehzp", "CP6bKF5wm2", "BFcmbflpnV", "7WWp2r5XYA", "179OxhuXeX" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732397247507, 1733274481271, 1732482821892, 1737523901037, 1732396738884, 1732694221784, 1732396009993, 1732666530195, 1730631285696, 1732440768428, 1733205642432, 1732572147995, 1733209056332, 1732682483759, 1730518266587, 1733971776959, 1730672985821, 1732396457856, 1732395642255, 1732397039335, 1732395053373, 1732572229413, 1732758638798 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Reviewer_XFpf" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Reviewer_MWJ4" ], [ "ICLR.cc/2025/Conference/Submission8321/Reviewer_XFpf" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Reviewer_ZXbA" ], [ "ICLR.cc/2025/Conference/Submission8321/Reviewer_ZXbA" ], [ "ICLR.cc/2025/Conference/Submission8321/Area_Chair_FFBp" ], [ "ICLR.cc/2025/Conference/Submission8321/Reviewer_XFpf" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ], [ "ICLR.cc/2025/Conference/Submission8321/Authors" ] ], "structured_content_str": [ "{\"comment\": \"## **W3. Fairness of Experiment Comparison in Table 1**\\nWe are sorry for missing the training details of the baselines. To clarify, all the baselines in Table 1 are finetuned on OVEN, InfoSeek and Enc-VQA, respectively, so that we can ensure a fair comparison between our approach and all the baselines\\n\\nWe list the knowledge-intensive pretraining as one of our contributions since previous vision-language models were predominantly pretrained on image-caption datasets such as CC12M. Pretraining solely on image-caption pairs may not be sufficient to align the internal knowledge in large language models (LLMs), such as entity names and entity background knowledge, with the visual representations of entities in images. In Table 4, we perform ablation studies to show the effect of knowledge-intensive pretraining on both LLaVA-v1.5 and our approach: LLaVA-v1.5 v.s. LLaVA-v1.5 w/ WikiWeb2M, RORA-VLM v.s. RORA-VLM w/o WikiWeb2M. Our results indicate that pretraining on image and entity-rich captions can significantly improve VLMs\\u2019 performance on information-seeking tasks, highlighting an important direction for advancing VLMs in the future.\\n\\n## **W4. Ablation study on hyper-parameters**\\nWe appreciate the reviewer\\u2019s comment regarding the impact of the parameters k, l, and m on model performance. Due to the time constraints of the rebuttal period, we were unable to conduct ablation studies for all these parameters. However, recognizing the importance of these analyses, we have already included an ablation study on the most critical hyperparameter, m (the number of retrieved knowledge snippets), in Appendix A2 of our submission. This study provides insights into how varying m affects model performance.\\n\\nFor the other suggested ablation studies, we acknowledge their value and will ensure they are included in the revised version of the paper if our submission is accepted. \\n\\n## **References**\\n[1] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Retrieval-augmented multimodal language modeling. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), Vol. 202. JMLR.org, Article 1659, 39755\\u201339769.\\n\\n[2] Weizhe Lin, Jingbiao Mei, Jinghong Chen, and Bill Byrne. 2024. PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5294\\u20135316, Bangkok, Thailand. Association for Computational Linguistics.\"}", "{\"title\": \"General Response\", \"comment\": \"Dear Area Chair and Reviewers,\\n\\nWe sincerely thank you for your thoughtful reviews and engagement in the discussion of our paper. We appreciate the insightful comments and recognition of contributions of our work, such as Reviewers XFpf and ZXbA have highlighted that our paper is well-written and emphasizes the necessity and significance of our contributions, noting their potential impact on both academic research and practical applications. All reviewers have acknowledged that our work's motivation is clear and well-articulated.\\n\\nWe are particularly thankful that Reviewer XFpf has increased their score to 6 following our detailed responses and clarifications, and that Reviewer ZXbA has maintained their positive assessment. While Reviewer MWJ4 may not have had the opportunity to engage in further discussions, we believe our responses have effectively addressed their concerns through comprehensive explanations and additional experiments.\\n\\nAs the discussion period concludes, we would like to summarize the key improvements made during the rebuttal phase:\\n\\n- In response to concerns about baseline comparisons raised by Reviewers XFpf, MWJ4, and ZXbA, we have implemented additional retrieval-augmented vision-language model (VLM) baselines and provided detailed comparisons with existing methods. These comparisons demonstrate the novelty of our contribution as the first work focusing on enhancing the robustness of retrieval-augmented VLMs.\\n\\n- Following Reviewer XFpf's suggestion, we conducted an extensive analysis of model performance under varying levels of retrieval noise. The results demonstrate our model's effectiveness in mitigating noise in retrieved knowledge, addressing a critical challenge in retrieval-augmented generation.\\n\\n- In response to feedback from Reviewers MWJ4 and ZXbA, we performed comprehensive ablation studies on our two-stage retrieval process. The results validate its superiority in entity-centric, knowledge-intensive, and complex reasoning tasks.\\n\\nWe would like to emphasize the following contributions, which have been acknowledged by the reviewers:\\n\\n- Novel Two-Stage Retrieval: Our approach introduces a flexible and modular retrieval-based solution to complex VQA tasks, overcoming the unified multimodal encoding challenges of single-stage retrieval while maintaining robust performance across varying query perspectives.\\n\\n- Robust Retrieval-Augmented Generation: RORA-VLM effectively addresses the challenge of managing retrieval noise through an innovative combination of visual token refinement and adversarial noise injection, significantly improving performance on knowledge-intensive tasks.\\n\\n- Effectiveness and Generalizability: Our comprehensive experiments across multiple benchmarks demonstrate substantial improvements over existing approaches, validating our method's effectiveness and practical applicability.\\n\\nWe deeply appreciate the constructive feedback provided by all reviewers. In response, we have carefully refined our work and incorporated all suggested improvements in the revised submission, with updates clearly marked in blue. These changes significantly enhance the clarity and completeness of our paper. We thank the reviewers and area chair for their time and valuable input in helping us improve this work.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"## **W4: Regarding W4: Just to confirm my understanding, was each model trained separately on each of these datasets and then evaluated on the same dataset it was trained on? Is that correct?**\\nYes, the reviewer\\u2019s understanding is correct. All the baseline models and our proposed models were fine-tuned separately on the OVEN, InfoSeek, and Enc-VQA datasets, and then evaluated on the same dataset they were fine-tuned on. The evaluation on the three datasets was conducted independently, ensuring no overlap or interdependence between the evaluations.\\n\\n## **W3: The remaining step to complete it would be to evaluate the model\\u2019s performance using only the top-1 retrievals.**\\nThank you for the reviewer\\u2019s valuable suggestion regarding the remaining step. Following the reviewer\\u2019s suggestion, we conducted the additional experiment, performing an ablation study using only the top-1 retrieved entity image and its corresponding knowledge snippet to augment the generation. The results are presented in the table below:\\n\\n| Model | InfoSeek - Entity | InfoSeek - Query |\\n|--------------------------------|-------------------|------------------|\\n| Top-3 Retrieval | **25.10** | **27.34** |\\n| Top-1 Retrieval + 2 Noises (1) | 19.61 | 21.97 |\\n| Top-1 Retrieval + 2 Noises (2) | 19.63 | 22.02 |\\n| Top-1 Retrieval Only | 20.49 | 22.19 |\", \"table\": \"Evaluation results in accuracy (%). The best performance is highlighted in bold.\\n\\nFrom the table, we observe that compared with the variant using only top-1 retrieval for augmentation, the inclusion of irrelevant retrieval noise does not significantly degrade the overall performance, demonstrating the robustness of our RoRA-VLM to the noises in retrieval. Furthermore, when we include two additional potentially query-relevant knowledge snippets (as in the top-3 retrieval variant), our RoRA-VLM effectively distinguishes and benefits from the relevant knowledge, resulting in improved performance.\\n\\n## **All the details and experiments provided in this rebuttal are critical and should be included in the revised version of the paper**\\nWe sincerely thank the reviewer for their thoughtful suggestions and comments. We apologize for the omission of these details and experiments in the current draft. We assure the reviewer that all the details and experiments provided in this rebuttal will be incorporated into the revised version of the paper. We will ensure that these additions are appropriately highlighted in BLUE and, if necessary, include them in the appendix with clear references in the main text. We are currently working on the revisions and will post the updated version upon completion.\\n\\nWe hope our responses have sufficiently addressed the reviewer\\u2019s comments and concerns. If there are any remaining questions or points that require further clarification, please let us know. We kindly request a reevaluation of our work based on the additional experiments and details provided during the rebuttal period. Once again, we sincerely thank the reviewer for their valuable insights, which have been really helpful in improving the clarity, quality, and overall presentation of our manuscript.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"## **W4. Ablation of the two-stage retrieval**\\nFollowing the reviewer\\u2019s suggestion, we conducted an additional ablation study to emphasize the effectiveness of our two-stage retrieval approach. Specifically, we performed an ablation experiment using only a single-stage retrieval method. In the single-stage configuration, we utilized the CLIP embedding of the query image to retrieve the most similar entity images in our retrieval database, and thereby obtain the corresponding entity names and background knowledge. This differs from our two-stage approach in that it bypasses the secondary textual retrieval phase, which normally uses the entity name and input query to refine the knowledge selection. Instead, the single-stage method directly employs the retrieved entity background contexts as knowledge snippets for retrieval-augmented generation. We compare this single-stage retrieval method with our proposed two-stage retrieval method in the table below. For a more comprehensive comparison, we also included RA-CM3 for comparison as it employed a single-stage retrieval method. \\n\\n| Model | InfoSeek - Entity | InfoSeek - Query |\\n|-------------------------|-------------------|------------------|\\n| LLaVA-v1.5 | 10.34 | 12.98 |\\n| RA-CM3 (single-stage) | 17.09 | 21.64 |\\n| RoRA-VLM (single-stage) | 21.9 | 23.87 |\\n| RoRA-VLM (2-stage) | **25.10** | **27.34** |\", \"table\": \"Evaluation results in accuracy (%). The best performance is highlighted in **bold**.\\n\\nFrom the results, it is evident that our two-stage retrieval method outperforms the single-stage approaches. A likely reason for this superiority is the flexibility and efficiency of our two-stage method. Specifically, during database construction, we only encode images as keys and their corresponding entity names as values. Once the main entity in an image is identified, we can combine the query with the entity name and efficiently search for relevant knowledge in a purely textual database. \\n\\nIn contrast, single-stage retrieval methods require constructing a search index that jointly represents both image content and knowledge, as seen in knowledge bases like [1]. However, models capable of effectively encoding image-query pairs into embeddings often underperform compared to models optimized for embedding generation within a single modality. Existing approaches typically rely on ad hoc implementations, such as combining CLIP embeddings of images and text. These methods introduce various design challenges and can lead to suboptimal performance. \\n\\nIn comparison, our proposed two-stage retrieval method is modular and can seamlessly integrate with state-of-the-art image and text retrievers, ensuring greater adaptability and robustness. We hope this study further demonstrates the advantages of our approach and addresses the reviewer\\u2019s concerns. \\n\\n## **W5. Refine Figure 2 to include more details of the methodology**\\nWe appreciate the reviewer\\u2019s comments and suggestions. We agree that adding visual representations would greatly improve the clarity and understanding of our approach. In the revised version, we will include two more figures: one to illustrate the details of the two-stage retrieval process and another to depict the query-oriented visual token refinement process. \\n\\n## **References**\\n[1] Gui, Liangke, et al. \\\"KAT: A Knowledge Augmented Transformer for Vision-and-Language.\\\" Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022. \\n\\n[2] Lin, Weizhe, et al. \\\"Fine-grained late-interaction multi-modal retrieval for retrieval augmented visual question answering.\\\" Advances in Neural Information Processing Systems 36 (2023): 22820-22840. \\n\\n[3] Gan, Zhe, et al. \\\"Large-scale adversarial training for vision-and-language representation learning.\\\" Advances in Neural Information Processing Systems 33 (2020): 6616-6628.\\n\\n[4] PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers. Weizhe Lin, Jingbiao Mei, Jinghong Chen, Bill Byrne\\n\\n[5] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Retrieval-augmented multimodal language modeling. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), Vol. 202. JMLR.org, Article 1659, 39755\\u201339769.\"}", "{\"comment\": \"I thank the authors for clarifying my concerns. Following the detailed rebuttal, I raised my score.\"}", "{\"comment\": \"## **W6. More experiments to prove that the model ignores \\u201cnoise\\u201d in the retrieval samples (Lines 405-419).**\\nWe appreciate the constructive comment and suggestion from the reviewer and address the concern with the following additional experiments. \\n\\nThe key challenge for ideally proving the effectiveness of our model in ignoring the retrieval noise is that, for all the evaluation datasets we used, there are no gold standard labels for the retrieval step (i.e., we don\\u2019t know the exact relevancy between each input query and all the candidate samples for retrieval), so we cannot set up the experiment with 1 relevant sample and 2 randomly sampled irrelevant samples. \\n\\nHowever, we designed a similar experiment with varying levels of retrieval noise: During the inference stage, instead of using the top-3 retrieved entity images and their corresponding knowledge snippets, we used the top-1 retrieved entity image and knowledge snippet along with 2 randomly sampled irrelevant entity images and knowledge snippets. This sampling was repeated twice, yielding two different sets of randomly sampled irrelevant entity images and knowledge snippets for the same input instance. Then, based on the 3 sets of retrieved entity images and knowledge snippets, we perform retrieval augmentation on the InfoSeek dataset. The results are presented in the following table:\\n\\n| Model | InfoSeek - Entity | InfoSeek - Query |\\n|--------------------------------|-------------------|------------------|\\n| Top-3 Retreival | 25.10 | 27.34 |\\n| Top-1 Retreival + 2 Noises (1) | 19.61 | 21.97 |\\n| Top-1 Retreival + 2 Noises (2) | 19.63 | 22.02 |\", \"table\": \"Evaluation results in accuracy (%).\\n\\nFrom the results, we observe that the model's performance remains unaffected regardless of which two noise samples were chosen, which to some extend proves the effectiveness of our model in identifying the useful information from the retrieved samples, regardless of the irrelevant samples. However, since we do not have ground-truth labels for the retrieval process, there is no assurance that the top-1 retrieval output is correct or not. Therefore, it is reasonable to observe a slight performance degradation when randomly sampled irrelevant entities are used to replace the top-2 and top-3 retrieved samples. We hope these additional experiments can demonstrate the robustness of our method to retrieval noise.\\n\\n## **W7. Clarification for Figure 4 (Lines 420-430).**\\nWe would like to clarify that the goal of Figure 4 is to show that by comparing the detailed visual content in the query image and the retrieved images, the model can identify which retrieved images contain the same visual entities as the query image. Once these relevant images are identified, the model focuses more on the textual knowledge associated with them. For example, in the middle row of Figure 4, the attention-score graph in the right column demonstrates that the model assigns higher attention to the text associated with the first image, while giving less attention to the text associated with the second and third images. This clearly indicates that through our adversarial training, the model learns to distinguish relevant knowledge from irrelevant information by referencing visual information, and hence becomes more robust to retrieved noises.\\n\\n## **Q1. In line 257 - do you mean top-2 (instead of top-(k-1))?**\\nYes, in our specific setup, k = 3 and k-1 = 2.\\n\\n## **References**\\n\\n[1] Weizhe Lin, Jingbiao Mei, Jinghong Chen, and Bill Byrne. 2024. PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5294\\u20135316, Bangkok, Thailand. Association for Computational Linguistics.\\n\\n[2] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Retrieval-augmented multimodal language modeling. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), Vol. 202. JMLR.org, Article 1659, 39755\\u201339769.\"}", "{\"comment\": \"Dear Reviewer XFpf,\\n\\nWe have uploaded the revised version of our paper based on your comments and suggestions. The updates include all the additional details and experiments we mentioned during the rebuttal period, which are highlighted in blue. We kindly hope you can reevaluate our paper based on the revision.\\n\\nPlease let us know if you have any further questions or require additional clarification. We sincerely appreciate your time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper presents a multimodal version RAG targeting multimodal large language models, such as LLaVA-1.5 for information-seeking VQA. To solve two challenges, the authors propose a 2-stage retrieval process with image-anchored textual query expansion and noise-resilient retrieval augmented generation. Experimental results highlight its effectiveness on OVEN and InfoSeek benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation for this work is clearly presented and easy to follow.\\n2. Experiment results demonstrate its effectiveness.\", \"weaknesses\": \"1. In the Introduction section, providing a figure showing the process of the 2-stage retrieval method would be easier to understand.\\n2. Related work discussion and method novelty. How to incorporate multi-modal knowledge into models is not a new problem[1][2]. Some related work is proposed in other multi-modal tasks, such as knowledge-based VQA. Besides, adversarial training is also adopted in existing vision and language training, such as [3]. The authors are encouraged to discuss the existing work and compare the related ones with the proposed method.\\n3. The correspondence between the ablation model variants in Table 2 and the proposed module is somewhat unclear. What about the ablation of the two-stage retrieval ?\\n4. Figure 2 lacks some of the details of the methodology. The authors are encouraged to refine it.\\n\\n[1] Gui, Liangke, et al. \\\"KAT: A Knowledge Augmented Transformer for Vision-and-Language.\\\" Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022.\\n[2] Lin, Weizhe, et al. \\\"Fine-grained late-interaction multi-modal retrieval for retrieval augmented visual question answering.\\\" Advances in Neural Information Processing Systems 36 (2023): 22820-22840.\\n[3] Gan, Zhe, et al. \\\"Large-scale adversarial training for vision-and-language representation learning.\\\" Advances in Neural Information Processing Systems 33 (2020): 6616-6628.\", \"questions\": \"Please refer to the above section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for detailed rebuttal and the effort they invested in preparing it.\", \"regarding_w4\": \"Just to confirm my understanding, was each model trained separately on each of these datasets and then evaluated on the same dataset it was trained on? Is that correct?\", \"answer_to_w3\": \"This was a well-conducted experiment. The remaining step to complete it would be to evaluate the model\\u2019s performance using only the top-1 retrievals. This would demonstrate that adding two random samples does not impact the performance, whereas including two \\u201crelevant\\u201d (model-selected) retrievals leads to a performance improvement.\\n\\nI want to emphasize that all the details and experiments provided in this rebuttal are critical and should be included in the revised version of the paper, as my rating is largely based on their absence. Please ensure that these changes are marked in BLUE before submission. If space limitations are an issue, including them in the appendix with appropriate references in the main text would be acceptable.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"We sincerely appreciate your time and effort in reviewing our paper and providing valuable feedback, which is crucial for improving our work. We are glad that our responses have addressed your concerns. In the revised manuscript, we will incorporate the detailed explanations and comparisons as discussed, as well as include the additional denoising experiment you suggested.\"}", "{\"comment\": \"Dear Reviewer MWJ4,\\n\\nWe sincerely appreciate the time and effort you\\u2019ve devoted to reviewing our work. We understand that your schedule may be quite busy, and we are truly grateful for your valuable feedback. As the Author-Reviewer discussion phase is ending soon, we would greatly value the opportunity to engage in further discussion with you. Our aim is to gain insights into whether our responses effectively address your concerns and to ascertain if there are any additional questions or points you would like to discuss.\\nWe look forward to the opportunity for further discussion with you. Thank you for your thoughtful consideration.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"We appreciate the reviewer's feedback and suggestions. We would like to further clarify the relationship between the two modules and address the concerns raised.\\n\\nWhile it is correct that the visual token refinement module and the adversarial noise injection training process are not tightly coupled (i.e., the former is not a necessary condition for the latter), the adversarial noise injection process does benefit from the visual token refinement module, rather than conflicting with it. Specifically, the types of noise addressed by each module are fundamentally different. The visual token refinement process focuses on removing query-irrelevant visual entities or objects within the input images. Even when the retrieved images are query-relevant entity images, they may contain background content or other distracting elements unrelated to the query. By eliminating these distractions, visual token refinement ensures that the model's attention is directed solely toward query-relevant visual information. Conversely, adversarial noise injection addresses a different challenge: the noise introduced by unsuccessful retrievals. The goal of adversarial training is to enhance the model's robustness against irrelevant retrieval passages by encouraging it to better compare the query entity image with retrieved entity images. By refining the input images through visual token refinement, the adversarial training process can focus more effectively on entity-level comparisons.\\n\\nFollowing the reviewer's suggestion, we conducted an additional ablation experiment where visual token refinement was applied exclusively during inference without being included in the training phase. The results are presented in the table below:\\n\\n| Model | InfoSeek - Entity | InfoSeek - Query |\\n|-----------------------------|-------------------|------------------|\\n| LLaVA-v1.5 | 10.34 | 12.98 |\\n| w/o VK-Refinement (during Training) | 22.12 | 24.42 |\\n| RoRA-VLM (ours) | **25.10** | **27.34** |\", \"table\": \"Evaluation results in accuracy (%). The best performance is highlighted in **bold**.\\n\\nThe results indicate that excluding visual token refinement during training leads to performance degradation. This outcome is expected as the refinement process alters the visual embedding arrangement of the original CLIP encoder. Using visual token refinement solely at inference time without incorporating it during training could lead to performance degradation. \\n\\nBased on this additional ablation study, we conclude that visual token refinement and adversarial noise injection are not in conflict but rather complement each other. Together, they address different aspects of the problem and contribute to the overall robustness and performance of the model.\"}", "{\"comment\": \"I sincerely thank the authors for their detailed rebuttal, which has addressed most of my concerns. I decide to maintain my positive score. Here are some additional comments:\", \"regarding_w1\": \"Thank you for the detailed explanation and additional experiments. Based on the authors\\u2019 response, I understand that multimodal composite retrieval excels in capturing multimodal semantics, while the entity-driven method shows superior performance in entity-centric, knowledge-intensive, complex reasoning tasks. Including explicit clarifications and comparisons in the paper would be beneficial to elucidate the trade-offs between these two approaches.\", \"regarding_w2\": \"I appreciate the authors\\u2019 clarification. However, the statement that \\u201cvisual token refinement is specifically tailored to support the adversarial noise injection training process\\u201d remains somewhat unclear. My understanding is that the two modules are not tightly coupled. Visual token refinement aims at denoising, while noise injection introduces adversarial noise for robust training. Using both during training might potentially interfere with the adversarial learning dynamics. It could be insightful to validate this hypothesis by employing noise injection during training and incorporating visual token refinement exclusively during inference. This additional experiment might help clarify their interplay.\"}", "{\"summary\": \"The paper introduces RORA-VLM, a framework aimed at improving Vision-Language Models (VLMs) on knowledge-intensive tasks. The method addresses two challenges: (1) effectively retrieving relevant multimodal information given the inherent discrepancy between vision and language modalities, and (2) managing the noisy and extraneous information in retrieved knowledge. The paper\\u2019s contributions include a two-stage retrieval process with image-anchored textual-query expansion and a robust retrieval augmentation method that employs adversarial noise and visual token refinement. Extensive experiments demonstrate that RORA-VLM outperforms current models on benchmarks such as OVEN, InfoSeek, and Enc-VQA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **Novelty**: The authors address two core challenges in multimodal retrieval-augmented generation (RAG): the retrieval inaccuracies caused by modality discrepancies and the noise often present in retrieved content. To tackle these issues, they propose an innovative solution using a two-stage retrieval process that mitigates modality inconsistency, allowing the system to capture multimodal background information more comprehensively. Combined with an anti-noise strategy, this approach effectively suppresses irrelevant information while enhancing retrieval accuracy and overall performance in multimodal tasks.\\n\\n2. **Significance**: RORA-VLM offers a valuable method for improving VLMs, especially in knowledge-intensive domains, where retrieval-augmented tasks are often challenged by noise. This framework effectively addresses this key issue, making it particularly suitable for such applications.\\n\\n3. **Clarity of Presentation**: The paper is well-structured with a clear research motivation, providing thorough explanations of the methodology and experimental results. This clarity aids readers in understanding both the approach and its effectiveness.\", \"weaknesses\": \"1. **Inconsistency in Method and Motivation**: The two-stage retrieval in the paper looks like an image entity-driven retrieval approach, would modal differences be better handled with image+query composite retrieval; Additionally, the motivations behind the designs of Query-oriented Visual Token Refinement and Adversarial Noise Injection for Robust Augmentation seem to conflict. The former introduces noise for adversarial learning, while the latter focuses on denoising. It might align better with the concept of adversarial learning if the former were applied solely during the inference phase and the latter exclusively during training.\\n2. **Fairness of Experimental Comparisons**: In the main experiments presented in Table 1, the authors' method has undergone pre-training and fine-tuning on knowledge-intensive datasets, whereas many baseline models may not have been trained on such datasets. This raises questions about the fairness of the experimental comparisons.\\n3. **Lack of Ablation Studies**: The paper lacks ablation studies on key parameters such as k, l, and m. Including these analyses would provide valuable insights into the impact of these parameters on the model's performance.\", \"questions\": \"1. **Differences in Approach and Motivation**: The two-stage retrieval approach proposed in the paper seems to be driven by image entities. Does a retrieval approach that combines images and queries better address pattern differences? Furthermore, how do the authors reconcile the conflicting motivations behind query-oriented visual token refinement (introducing noise for adversarial learning) and adversarial noise injection (focusing on denoising)? Would it be more consistent with adversarial learning principles if the former were used only during inference and the latter only during training?\\n2. **Fairness of experimental comparisons**: In Table 1, do the authors plan to conduct more experiments to ensure that all models are evaluated on a level playing field?\\n3. **Lack of ablation studies**: Can the authors provide insights on the impact of these parameters (k,l,m) on model performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a retrieval-augmented method for knowledge-intensive tasks to make more relevant use of visual information. The reviewers praise the extensive experiments. However they raise numerous concerns about the method and experiments; some raise a concern about novelty. After the rebuttal stage, there is no strong support for acceptance (all scores are borderline).\", \"additional_comments_on_reviewer_discussion\": \"Two of three reviewers engaged in the discussion; those had borderline accept scores. One, with initially borderline reject, did not participate. These scores are all too close to borderline to provide a strong signal for acceptance.\"}", "{\"summary\": \"The paper introduces RORA-VLM, a retrieval-augmented framework designed to enhance Vision-Language Models (VLMs) by addressing two main challenges: managing multimodal query discrepancies and filtering out irrelevant, noisy retrievals. RORA-VLM employs a two-stage retrieval process: 1) Image-Anchored Entity Retrieval: This stage retrieves visually similar images based on the query image, anchoring the retrieval with associated entity information 2) Query-Expanded Text Retrieval: Using entity names from the first stage, the method expands the query to retrieve additional textual knowledge from sources like Google Search.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well written and easy to follow\", \"The authors clearly state the motivation for the proposed method and its necessity.\", \"RORA-VLM introduces a unique two-stage retrieval approach, effectively bridging the gap between visual and textual information for more accurate knowledge retrieval.\", \"The paper tackles the common issue of irrelevant or noisy data in retrieval-based methods by implementing noise resilience strategy\", \"The paper address a clearly practical application that might be useful for the community and the industry.\"], \"weaknesses\": \"Method:\\n\\n- Section 3.2: The authors describe in details two stages. For my understanding, stage-1 is just formulation of the K-NN of the query image in the WIT images (within CLIP latent space). This is a well-known concept, especially in this line of work. I think this is well-detailed stage but it should be on the appendix, while the main paper should contain a brief description of the stage.\\n- Line 270: \\u201cSimilarly, the image I is encoded into a sequence of visual embeddings\\u2026\\u201d - this is not clear. CLIP encodes an image/text intro a shared embeddings space of dimension d. How do you encode the image patches (n) to the same dimension? Do you feed-forward each patch, separately, to the CLIP model? Do you use the N internal CLIP features for each patch? If so, are you sure that their dimension is d, before the last projection layer? Do you project them with the last visual projection layer as the pooled [CLS] token projected? Please elaborate more on this procedure.\\n\\nSection 5 currently combines results, ablation study, and discussion, which affects the clarity and flow of these findings. Separating these into distinct sections\\u2014such as \\u201cResults,\\u201d \\u201cAblation Study,\\u201d and \\u201cDiscussion\\u201d\\u2014would make it easier for readers to follow each component and understand the contributions more clearly. Additionally, crucial details and experiments appear to be missing, and some existing experiments do not convincingly support the claims made. Below are specific areas where the section could be strengthened:\", \"evaluation\": [\"Main results: Lines 307-316 (Baselines): The authors list a several MLLM backbones for the QA task which is great. However, baselines to compare to should be other RAG methods. If I understand correctly, only RORA-VLM and Wiki-LLaVA* are using Retrieval Augmentations. If so, how is it comparable to other baselines that uses zero-shot?\", \"Building on previous point, I am not fully understand the entire training setup: the only model that was tuned (lines 317-345) was RORA-VLM? If so, again, how is it comparable to other baselines? Please clarify these points.\", \"There are not enough details about the evaluation protocols and datasets in the paper, and some comparisons are missing. For example, what was the training set of each baseline in Table 1? Did the authors fine-tuned each baseline on the same dataset? which one of them use the proposed RAG method? what about other RAG methods and baselines?\"], \"ablation_study\": [\"Lines 365-367 states \\u201cwe use the widely adopted average pooling (kernel size of 2, stride of 2) to obtain the same number of visual tokens as our refinement approach\\u201d What does \\u201cwidely adopted average pooling\\u201d mean on N CLIP vectors? How does it relate to a kernel size of size 2 and stride 2? Did you manipulated the input image/kernel of CLIP to get the same amount of CLIP vectors? The authors should elaborate on the experiment that was done here, it is unclear.\", \"Lines 405-419: I am not convinced why this experiment proves that the model ignore \\u201cnoise\\u201d in the retrieval samples. I would be more convinced with an following experiments, for example: providing the model 1 relevant sample with 2 other randomly-sampled ones, will not change the model\\u2019s answer, regardless which 2 noise samples were chosen, or by just proving the 1 relevant sample with no other samples.\", \"Lines 420-430 describe Figure 4 that supposed to show how the model ignore \\u201cnoise\\u201d samples. However, it seems like the model pays attention to specific words the correlate with the question (e.g., row 1, \\u201chow wide\\u2026\\u201d attend \\u201cheight\\u201d and \\u201cwidth\\u201d). These examples does not show any rubsness to \\u201cnoise\\u201d retrieval as intended.\"], \"questions\": [\"In line 257 - do you mean top-2 (instead of top-(k-1))?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## **W1. Needs a figure to show 2-stage retrieval**\\nWe appreciate the reviewer\\u2019s suggestion to include a figure illustrating the two-stage retrieval method in the Introduction section. We agree that a detailed visual representation would enhance the clarity and understanding of the proposed approach. Accordingly, we will add a detailed figure to illustrate the two-stage retrieval process in the revised version. \\n\\n## **W2. Discussion of related studies and novelty of our method.**\\nWe provide the following discussion to highlight the distinctions and contributions of our method, which will also be included in the revised version of the draft.\\n\\nPrevious studies, such as [1], [2], and the more recent work [4], focus on multimodal retrieval methods that enhance text-only language models by retrieving textual knowledge using visual queries (e.g., an image) to assist in answering visual questions. These approaches primarily aim to improve retrieval quality to support downstream tasks better.\\n\\nIn contrast, our work focuses more on addressing the critical challenge of how to more effectively utilize the retrieved information in retrieval-augmented generation (RAG). While prior work [1][2] and recent models, such as RA-CM3 [5] and Wiki LLaVA, largely rely on the quality of the retrieved passages, they do not explicitly account for the inherent noise and irrelevance introduced by multimodal retrieval processes. Given that the recall@1 of state-of-the-art retrievers on datasets like InfoSeek is still below 0.2, the presence of noisy or irrelevant passages remains a significant limitation for RAG systems. Our method, RoRA-VLM, is the first to directly tackle this issue by proposing a robust solution to reduce retrieval-induced noise. Unlike previous approaches that rely exclusively on textual retrieved information, our framework fully utilizes multiple modalities of retrieved information during the generation process. By enabling the model to learn to distinguish relevant information from irrelevant noise within the retrieved materials, our approach significantly enhances the robustness of the RAG pipeline. This allows the vision-language models to maintain strong performance even when retrieval accuracy is imperfect. \\n\\n[3] primarily focuses on visual representation learning, where noise is introduced at the embedding level of images and text. In contrast, our method focuses on injecting noises into the retrieved knowledge for generative models. The noise injection process and the corresponding learning paradigm in our approach are fundamentally different from those in [3].\\n\\n## **W3. Correspondence between the ablation model variants in Table 2 and the proposed module.**\\n\\nThe first row in Table 2 presents the performance of our RoRA-VLM model on InfoSeek. The second row, labeled as RoRA-VLM w/o VK-Refinement, evaluates the impact of our proposed visual token refinement strategy. Specifically, in RoRA-VLM w/o VK-Refinement, all configurations remain the same as in RoRA-VLM, except that the visual token refinement method is not applied to the input image tokens. Since the sequence length of LLaVA is limited to 2048, to accommodate four images and the textual context, we reduce the number of image tokens for each image by employing 2D average pooling, to reduce the number of input image tokens from 576 to 144 and match the number of visual tokens used in our VK-Refinement method.\\n\\nIn the third row, we assess the effectiveness of Adversarial Noise Injection for Robust Augmentation. To demonstrate that the performance improvements of our model come from its ability to distinguish relevant knowledge (associated with the entity in query image) from irrelevant knowledge or noise\\u2014and not simply from the availability of additional retrieved knowledge\\u2014we conduct an ablation study. In this ablation, we keep the same two-stage retrieval process but only provide the retrieved textual knowledge to the VLM during both training and inference, omitting the retrieved images entirely, so that the model relies solely on textual information to make predictions and cannot leverage visual information as evidence to validate the correctness of retrieval. This variant, labeled RoRA-VLM text-only RAG, ensures that the VLM processes the same textual knowledge as in the proposed approach but without the additional image input. As shown in Table 2, this results in a significant performance drop, which demonstrates that the improvements achieved by our adversarial training stem from the model's ability to effectively filter and focus on relevant knowledge rather than simply benefiting from the additional knowledge.\"}", "{\"comment\": \"## **W3. Lacks of retrieval augmented baselines.**\\n\\nWe appreciate the comment from the reviewer and implemented two additional retrieval-augmented baselines [1][2]. Both baselines were implemented with the same backbone models as our approach (i.e., Vicuna/LLaVA-1.5 as the backbone model) to ensure a fair comparison. \\n- Baseline 1 \\u2013 PreFLMR: it employs a multimodal retriever to retrieve query-related fine-grained textual context, which is then used to support the language model in answering questions. \\u200b\\u200bPreFLMR relies on its own constructed database, which includes the WIT dataset and other sources. Therefore, it has access to a knowledge base richer than our model.\\n- Baseline 2 \\u2013 RA-CM3: it encodes multimodal documents for mixed-modal retrieval. The retrieved multimodal documents are subsequently fed into a multimodal model to augment the generation process. As the source code for this baseline was not publicly available at the time of submission, we reimplemented it based on our best understanding of the paper descriptions. We constructed the retriever using the same data source (the WIT dataset) that we employed for our model's retriever. This means both our model and Baseline 2 retrieve information from the same dataset.\\n\\nGiven the limited time window for rebuttal, we could only try our best to fine-tune and evaluate these additional baselines on the InfoSeek dataset, which is the most challenging and comprehensive dataset used in our evaluation (all ablation studies were also conducted on this dataset). We will continue the experiments on other evaluation datasets in our paper draft and report the results in the next revised version. The table below shows their quantitative results. \\n\\n| Model | InfoSeek - Entity | InfoSeek - Query |\\n|-------------------|:--------------:|:-----:|\\n| LLaVA-v1.5 | 10.34 | 12.98 |\\n| PreFLMR | 19.37 | 22.21 |\\n| RA-CM3 | 17.09 | 21.64 |\\n| RoRA-VLM (ours) | **25.10** | **27.34** |\", \"table\": \"Evaluation results in accuracy (%). The best performance is highlighted in **bold**.\\n\\nFrom the results in Table 1, we observe that our proposed method outperforms all baseline models, demonstrating the effectiveness of the RORA framework. A possible reason for this is that the baseline methods do not explicitly address the noise inherent in the multimodal retrieval process. This limitation is significant, as the recall@1 of state-of-the-art retrievers (e.g., PreFLMR) on the InfoSeek dataset is currently below 0.2. This indicates that, in most cases, the retrieved knowledge snippets contain substantial noise. On the other hand, our RORA-VLM framework introduces a novel solution to mitigate retrieval-induced noise, thereby enhancing the model's robustness and overall performance.\\n\\n## **W4. Training details for baselines.**\\nWe are sorry for missing the training details of the baselines. To clarify, all the baselines in Table 1 are finetuned on OVEN, InfoSeek and Enc-VQA, respectively, so that we can ensure a fair comparison between our approach and all the baselines.\\n\\n## **W5. Average pooling on CLIP vectors in Ablation Study (Lines 365-367).**\\nAs detailed in the response for W2, each image is processed into a feature matrix with shape [576, 768] by the CLIP visual encoder and the LLaVA projector. Our proposed Visual Token Refinement method further selects the top 144 visual tokens that are most relevant to the query, constructing a feature matrix of shape [144, 768]. This selection process enables the VLM to focus more effectively on query-relevant image content while mitigating the influence of irrelevant noise, such as image backgrounds or query-irrelevant entities present in the image.\\n\\nTo conduct an ablation study of the Visual Token Refinement method, we replace it with a simple average-pooling-based baseline, which also takes in the original [576, 768] visual patch vectors as input, downsample and convert them into [144, 768] vectors to ensure a fair comparison with our Visual Token Refinement method. Specifically, we first reshape the first dimension of the feature matrix (i.e., 576) into a 2D grid with dimensions 24 \\u00d7 24, corresponding to the spatial arrangement of patches in the original image, then apply a 2D average pooling operation with a kernel size of 2 \\u00d7 2 and a stride of 2. This pooling reduces the spatial resolution from 24 \\u00d7 24 to 12 \\u00d7 12, yielding 144 patch vectors in total while each patch vector has a dimensionality of 768. \\n\\nBy reducing the number of feature vectors from 576 to 144, this process ensures compatibility with the limited sequence length of the LLM and aligns the number of input tokens for the average pooling baseline with that of our visual token refinement method. This alignment allows for a direct and fair comparison of the two approaches in the ablation study.\"}", "{\"comment\": \"## **W1. Comparing two-stage retrieval with image+query composite retrieval**\\nWe appreciate the thoughtful comment from the reviewer and humbly argue that our two-stage retrieval method is more flexible and potentially has better performance and efficiency. The reasons are as follows. \\n\\nDuring database construction, we only need to encode images as keys and their entity names as values. Once the main entity in an image is identified, we can simply combine the question with the entity name and search for relevant knowledge in a pure text database. In contrast, using an image+query embedding for searching requires constructing a search index that jointly represents both image content and knowledge, as seen in the knowledge base used in [1]. However, models capable of effectively encoding image+query pairs into embeddings are often not as powerful as models designed for generating embeddings within a single modality. Existing approaches often rely on ad hoc implementations, such as combining CLIP embeddings of images and text, which introduce many design questions and may result in suboptimal performance.\\n\\nIn comparison, our proposed two-stage method is modular and can seamlessly integrate with any state-of-the-art image and text retriever, ensuring adaptability and robustness.\\n\\nTo further demonstrate the advantages of our approach, we implemented two retrieval-augmented baselines that apply the multimodal composite retrieval, PreFLMR [2] and RA-CM3 [1], to empirically compare their performance with our proposed method. Both baselines were implemented using the same backbone models as our approach (Vicuna/LLaVA-1.5) to ensure a fair comparison. PreFLMR employs a multimodal retriever to retrieve fine-grained query-related textual contexts, while RA-CM3 encodes multimodal documents for mixed-modal retrieval.\\n\\nThe experimental results, presented in the table below, demonstrate that while composite retrieval approaches such as RA-CM3 can benefit from richer cross-modal representations, they may struggle to match the adaptability and robustness of our modular two-stage retrieval approach in knowledge-intensive complex reasoning tasks.\\n\\n| Model | InfoSeek - Entity | InfoSeek - Query |\\n|-----------------|-------------------|------------------|\\n| LLaVA-v1.5 | 10.34 | 12.98 |\\n| PreFLMR | 19.37 | 22.21 |\\n| RA-CM3 | 17.09 | 21.64 |\\n| RoRA-VLM (ours) | **25.10** | **27.34** |\", \"table\": \"Evaluation results in accuracy (%). The best performance is highlighted in **bold**.\\n\\n## **W2. Confliction between visual token refinement and adversarial noise injection**\\nWe appreciate the reviewer\\u2019s feedback and would like to clarify that the motivations behind visual token refinement and adversarial noise injection are distinct but complementary, working toward the same overarching goal. \\n\\nAdversarial noise injection during training is designed to help the model effectively leverage visual modality information by comparing retrieved entity images to the query image. This process enables the model to identify query-relevant retrieved documents while filtering out irrelevant retrieval passage noise. Its primary focus is on improving the model's robustness against noisy or irrelevant information in the retrieved documents. \\n\\nOn the other hand, visual token refinement is specifically tailored to support the adversarial noise injection training process. It filters out query-irrelevant content from the input images, ensuring that only query-relevant visual information is retained. By removing distracting elements such as background content or query-irrelevant entities, visual token refinement facilitates more accurate comparisons of entity-level content between images. This design ensures that the model is less influenced by extraneous visual information, improving its ability to focus on the content most relevant to the query. \\n\\nThus, while the two components address different aspects of the problem\\u2014visual token refinement focuses on image-level filtering, and adversarial noise injection focuses on retrieval-level robustness\\u2014they are aligned in their purpose of enhancing the model's ability to handle noise.\"}", "{\"comment\": \"## **W1. Shorten the description of Stage-1.**\\n\\nWe appreciate the suggestion from the reviewer and admit that Stage-1 is quite similar to the KNN formulation illustrated by the reviewer. However, besides providing the details of the technical design, the discussion of Stage-1 is more about providing the setups of the multimodal-based retrieval augmentation process, such as the source of the retrieval augmentation (i.e., an image database built on 37.6M images from WIT) and the choice of image encoder (CLIP), which are different from relevant previous studies and necessary to help readers gain a better understanding of the problem setup. The following sections also refer to some variable names defined at this stage. Following the reviewer's suggestions, we will shorten the description and move some details to the appendix in the revised version.\\n\\n## **W2. How to encode an image into a sequence of visual embeddings using CLIP? do you encode the image patches (n) to the same dimension? Do you feed-forward each patch, separately, to the CLIP model? Do you use the N internal CLIP features for each patch? If so, are you sure that their dimension is d, before the last projection layer? Do you project them with the last visual projection layer as the pooled [CLS] token projected? Please elaborate more on this procedure.**\\n\\nBelow, we provide a detailed description of how we encode an image into a sequence of visual embeddings using CLIP, addressing each aspect of the reviewer's concerns.\\n\\n**Image Encoding with CLIP:** In the CLIP model, the visual encoder is based on the Vision Transformer (ViT) architecture. Given an image, the visual encoder processes it as a whole and encodes it into a feature representation of shape [576, 1024]. This representation can be interpreted as 576 vectors, each with a dimensionality of 1024. The 576 vectors correspond to patches of the input image, where the image is internally divided into a grid of patches during the encoding process. This division is not explicit; rather, it is an inherent part of the ViT architecture, which computes patch-level embeddings directly through a convolutional embedding layer applied to the full image. The resulting intermediate patch embeddings collectively form the image\\u2019s representation in the model\\u2019s latent space.\\n\\n**Dimensionality of Visual Embeddings:** After passing through the vision transformer (ViT) layers, each patch is represented as a feature vector with a dimensionality of 1024. To further process these features, we utilized the final visual projection layer of the original CLIP model. This projection layer, which is also used for the pooled [CLS] token in the original implementation, is applied to all 576 patch-based feature vectors in our approach. The projection reduces the dimensionality of each feature vector from 1024 to 768. To clarify further, the visual projection layer is part of CLIP\\u2019s original implementation. While it is typically applied only to the pooled [CLS] token to produce the image-level feature representation, in our work, we extend its application to all 576 patch-level feature vectors. As a result, the output is a feature representation of shape [576, 768], where 576 corresponds to the number of patches and 768 is the dimensionality of the projected patch embeddings.\\n\\nAfter computing the patch embeddings, for each text query, we derive a 768-dimensional vector from the [CLS] token of the CLIP text encoder. We then compute the similarities between the text embedding and the image patch embeddings to select the top-m relevant patches, which are subsequently projected into the LLM's latent space using the LLaVA projector.\"}", "{\"comment\": \"Dear Reviewer ZXbA,\\n\\nWe sincerely appreciate the time and effort you\\u2019ve devoted to reviewing our work. We understand that your schedule may be quite busy, and we are truly grateful for your valuable feedback. As the Author-Reviewer discussion phase is ending soon, we would greatly value the opportunity to engage in further discussion with you. Our aim is to gain insights into whether our responses effectively address your concerns and to ascertain if there are any additional questions or points you would like to discuss.\\nWe look forward to the opportunity for further discussion with you. Thank you for your thoughtful consideration.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer MWJ4,\\n\\nWe have posted the revised version of our paper, incorporating all additional details and experiments mentioned during the rebuttal period, highlighted in blue for your convenience.\\n\\nAs the deadline approaches, we kindly ask you to review the revisions and reevaluate our work based on these updates. Please let us know if you have any further questions or need clarification. We sincerely appreciate your time and effort in reviewing our work.\\n\\nBest regards,\\n\\nAuthors\"}" ] }
2gW8lTRh9m
Continual Memorization of Factoids in Large Language Models
[ "Howard Chen", "Jiayi Geng", "Adithya Bhaskar", "Dan Friedman", "Danqi Chen" ]
Large language models (LLMs) can absorb a massive amount of knowledge through pretraining, but pretraining is inefficient for acquiring long-tailed or specialized facts. Therefore, fine-tuning on specialized or new knowledge that reflects changes in the world has become popular, though it risks disrupting the model’s original capabilities. We study this fragility in the context of continual memorization, where the model is trained on a small set of long-tail factoids (subject-relation-object associations) and must retain these factoids after multiple stages of subsequent training on other datasets. Continual memorization focuses on the specific challenge of retaining long-tail factoids, whereas general continual learning aims to maintain the LLM’s capabilities across a wide range of generic tasks (e.g., reasoning, commonsense knowledge). Through extensive experiments, we show that LLMs suffer from forgetting across a wide range of subsequent tasks, and simple replay techniques do not fully prevent forgetting, especially when the factoid datasets are trained in the later stages. We posit that there are two ways to alleviate forgetting: 1) protect the memorization process as the model learns the factoids, or 2) reduce interference from training in later stages. With this insight, we develop an effective mitigation strategy: REMIX (Random and Generic Data Mixing). REMIX prevents forgetting by mixing generic data sampled from pretraining corpora or even randomly generated word sequences during each stage, despite being unrelated to the memorized factoids in the first stage. REMIX can recover performance from severe forgetting, often outperforming replay-based methods that have access to the factoids from the first stage. We then analyze how REMIX alters the learning process and find that successful forgetting prevention is associated with a pattern: the model stores factoids in earlier layers than usual and diversifies the set of layers that store these factoids. The efficacy of REMIX invites further investigation into the underlying dynamics of memorization and forgetting, opening exciting possibilities for future research.
[ "Continual Learning", "Large Language Model", "Memorization" ]
Reject
https://openreview.net/pdf?id=2gW8lTRh9m
https://openreview.net/forum?id=2gW8lTRh9m
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zkImNMwEBQ", "zcnfoPjFuC", "uXRSEmcsl7", "sLVayary8C", "rA85x4H1G4", "qTq3sZfi2H", "oqFuZoCGs3", "lfMwje62uN", "jtdL6jqoHo", "gJARNfJVyt", "g1pAXzgzgu", "YLvEnTqJuU", "XsujbAfJ7C", "WhFVpViHwq", "TUzISGyVDR", "SmLJlXIWDe", "SSKcspYl9s", "OztAT939HS", "Mq8iRqErAh", "LZ5Ky1Jv6D", "JHdBIdsrUu", "JGJO4V3uPd", "HRE9gN89Dz", "Fam2x8E54I", "C6gdltVnYy", "6crtT1EePZ", "4zLswVlTyl", "3CXaR4avxG" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1731959657618, 1733274377904, 1731960349170, 1731959891222, 1731961239266, 1732299555313, 1733108009560, 1733273944243, 1731961695429, 1732209187975, 1733107968109, 1731960962390, 1737524081303, 1733274265973, 1731960486608, 1731961424587, 1732299691447, 1733273809486, 1731960628663, 1735010586405, 1733108115463, 1732299665562, 1730397543167, 1730719839332, 1730587311510, 1730650075228, 1731960862025, 1732305162870 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_Qd9Z" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_XLf9" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_Qd9Z" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Area_Chair_UxB4" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_Qd9Z" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_Qd9Z" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_QUxn" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_hvCG" ], [ "ICLR.cc/2025/Conference/Submission10851/Reviewer_XLf9" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ], [ "ICLR.cc/2025/Conference/Submission10851/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to the reviewer (1/2)\", \"comment\": \"We thank the reviewer for the comments and suggestions. We appreciate your positive comments on the clarity and simplicity of our problem setup and the proposed solutions.\\n\\n\\n> There are some other methods to reduce the interference among the datasets of stage 1 and stage 2. For example, the method needs to compare with another baseline, i.e. \\\"mixing of Data A and Data B\\\"\\n\\nThank you for pointing out the need for more baselines in order to strengthen our conclusions.\\n\\nWe would like to clarify that since we focus on understanding how the knowledge of D_A is retained after another stage of training, stage 2 shouldn't have access to D_A (including mixing D_A and D_B).\\n\\n\\nNonetheless, we recognize that more baselines can strengthen our conclusions. We provide three more types of representative baselines to compare with our data mixing method:\\n- Weight regularization: we use Elastic Weight Consolidation (EWC) and calculate the Fisher score using one backward pass using the current mini-batch for training [1].\\n- Behavior regularization: we add the KL between the training model vs the original reference model to the loss [8]. \\n- Parameter expansion method: we learn separate and none-overlapping LoRA adapters at stage 1 and 2, similar to the IncLoRA model in [9].\\n\\nWe compare these baselines against the No Mixing baseline and REMIX (Random at stage 1 and Knowledge Pile at stage 2). We show results on the datasets that *suffer most from forgetting*: all factoid datasets and GSM8K from the non-factoid datasets.\\n \\n\\n| KVR | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 2.1 | 17.4 | 33.8 | 22.4 | 18.9 |\\n| REMIX (Random/KP) | 62.4 | 69.5 | 70.2 | 45.8 | **62.0** |\\n| Weight Regularization | 0.1 | 4.3 | 76.7 | 2.6 | 20.9 |\\n| Behavior Regularization | 0.2 | 15.6 | 36.6 | 28.1 | 20.1 |\\n| Parameter Expansion | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n\\n| PopQA | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 7.7 | 57.8 | 72.5 | 19.0 | 39.3 |\\n| REMIX (Random/KP) | 85.8 | 90.7 | 80.5 | 38.5 | **73.9** |\\n| Weight Regularization | 12.1 | 67.4 | 76.7 | 25.7 | 45.5 |\\n| Behavior Regularization | 7.5 | 59.3 | 55.5 | 40.6 | 40.7 |\\n| Parameter Expansion | 0.0 | 0.1 | 0.0 | 1.2 | 0.3 |\\n\\n| TriviaQA | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 4.3 | 40.5 | 68.6 | 9.4 | 30.7 |\\n| REMIX (Random/KP) | 89.2 | 89.6 | 86.5 | 12.5 | **69.5** |\\n| Weight Regularization | 7.9 | 58.5 | 80.3 | 37.9 | 46.2 |\\n| Behavior Regularization | 6.8 | 39.0 | 71.0 | 14.5 | 32.8 |\\n| Parameter Expansion | 21.9 | 0.1 | 1.1 | 3.0 | 6.5 |\\n\\n\\n\\nWe observe that the weight regularization baseline and output regularization baseline can obtain better factoid retention at different tasks but on average lags behind REMIX by a large margin (40%+ on KVR, 30%+ on PopQA, and 20%+ on TriviaQA). In our attempt, the parameter expansion based baseline learns to achieve 100% accuracy at stage 2, but catastrophically forgets at stage 2, achieving close to zero factoid retention.\\n\\n\\n> For the Replay approach, what if we use a ratio = 1.0?\\n\\nSimilar to the previous point, a ratio of 1.0 means using the entire $D_A$ together with $D_B$ for training at stage 2, which is in conflict with the continual multi-stage setting. Replay is not meant to be a baseline it uses D_A but rather as a motivational study that inspires REMIX. In the factoid memorization task, using $D_A$ is essentially *cheating*, so it should be understood as a different settings (therefore 10% reported in section 3.2 seems adequate for the motivational purpose).\\n\\n> Table 1, the degradation is more severe for Stage 2 is also a factoid dataset. Do you have any explanation? Also, there is a big drop when using GSK8k. It will be very insightful to understand the interplays of the datasets.\\n\\nWhile it is hard to obtain a direct mechanistic explanation for this phenomenon, it corroborates with the findings in 1) the continual learning literature which suggest catastrophic forgetting happens when two tasks are similar and therefore interfere [2, 3], and 2) finetuning on unfamiliar knowledge disrupts the model and causes exacerbated hallucinations [4, 5, 6, 7]. A mechanistic understanding of this phenomenon is an important area for future investigation but is slightly outside of the scope of our paper.\\n\\nFor GSM8K, we hypothesize that the special format of its training data (e.g., extensive use of angle brackets) might contribute to forgetting as the model picks up such irregularities very quickly.\"}", "{\"comment\": \"We thank the reviewer for the acknowledgement. We will update the manuscript to include their additional experiments as suggested.\"}", "{\"title\": \"Response to the reviewer (1/3)\", \"comment\": \"We thank the reviewer for recognizing the rigor in our experimentation and the clearness in our writing. We also appreciate that the reviewer pointed out several aspects that would benefit from further clarification and experimentation to solidify the arguments in our investigation. We provide the explanations and further evidence in the following passages.\\n\\n \\n> The paper does not convincingly explain why memorizing long-tail knowledge in the form of factoids is important in practical applications. [...] If we are only concerned about factoids, why use LLMs in the first place? Why not just use traditional knowledge-based systems? The authors should show how memorizing factoids leads to downstream applications, such as utilizing the information from the factoids on tasks that specifically require LLMs.\\n\\nWe agree that the choice between parametric vs non-parametric representation for factoid knowledge has a long standing tension in practical scenarios. However, we position our work more as part of the larger body of works that aim to understand the knowledge acquisition dynamics of language models through finetuning and the unintended risk ([1, 2, 3, 4]), which also evaluate on knowledge datasets such as PopQA and TriviaQA. \\n\\nThe reviewer rightfully pointed out the importance of transferring the learned knowledge to downstream tasks to make it useful. This is discussed in depth in works such as [5, 6, 7, 8] which highlight the difficulty of manipulating the learned knowledge in downstream tasks. Our work is positioned as the *prerequisite* before knowledge manipulation \\u2013 if the knowledge is not retained in the first place, then there\\u2019s no chance for it to be recalled and manipulated successfully. While retention does not entail successful manipulation, we aim to understand the dynamics of retention as the first step, which is on its own a challenging task.\\nWe appreciate this profound point and will improve our framing to reflect this emphasis and make the distinction clearer.\\n\\nNonetheless, we fully recognize the importance of this question and further conducted evaluations to assess such capabilities of our models. We use three templates to assess the model\\u2019s ability to manipulate learned knowledge on the KVR task (since it is guaranteed to have no contamination from pretraining):\\n\\nTemplate 1 (reverse recall): \\\\\\nThe key of the value DEF is? \\\\\", \"key1\": \"ABC, Value1: DEF \\\\\", \"answer\": \"XEF\", \"here_are_two_keys\": \"ABC and XYZ. What is the value of the first key? \\\\\", \"key2\": \"XYZ, Value2: GHI \\\\\", \"key\": \"ABC \\\\\", \"value\": \"DEF \\\\\", \"we_evaluate_on_the_following_models\": \"No-Mixing, REMIX (Random / -), and REMIX (Random / Knowledge-Pile).\\n\\n| KVR | | LAMA | EntityQA | WebQA | GSM8K | MATH | EvolCode | APPS | UltraChat | Avg |\\n| ---------- | ------------- | ---- | -------- | ----- | ----- | ---- | -------- | ---- | --------- | ---- |\\n| Template 1 | No Mixing | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| | REMIX (R/-) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| | REMIX (R/KP) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| Template 2 | No Mixing | 0.3 | 1.6 | 0.3 | 2.9 | 7.3 | 1.3 | 15.8 | 15.8 | 5.7 |\\n| | REMIX (R/-) | 0.0 | 1.6 | 4.2 | 9.4 | 3.5 | 70.9 | 26.6 | 35.2 | 18.9 |\\n| | REMIX (R/KP) | 7.1 | 18.2 | 8.4 | 1.9 | 0.0 | 0.3 | 5.3 | 36.6 | 9.7 |\\n| Template 3 | No Mixing | 0.0 | 0.1 | 0.6 | 0.3 | 0.5 | 0.0 | 0.6 | 5.1 | 0.9 |\\n| | REMIX (R/-) | 0.0 | 0.8 | 1.7 | 1.2 | 1.2 | 3.4 | 0.8 | 5.0 | 1.8 |\\n| | REMIX (R/KP) | 3.1 | 4.0 | 3.1 | 0.4 | 0.0 | 0.0 | 2.6 | 6.1 | 2.4 |\\n\\nR = Random Word Sequence. KP = Knowledge Pile.\\nWe observe that none of the models can perform Template 1, which corroborates with [7, 8], highlighting the unique challenge of reverse recall in knowledge storage and manipulation.\\nHowever, we found that REMIX improves other types of knowledge manipulation such as Template 2 and 3 as shown in the table. This is an interesting finding that warrants further study. We appreciate the reviewer\\u2019s suggestion and will include this and further findings in the updated version of the paper.\"}", "{\"title\": \"Response to the reviewer (2/2)\", \"comment\": \"Please let us know if there's any more information we can provide to clarify our work, thank you!\", \"reference\": \"[1] Kirkpatrick et al., Overcoming Catastrophic Forgetting in Neural Networks. PANS 2017.\\n\\n[2] Farajtabar et al., Orthogonal gradient descent for continual learning. AISTATS 2020.\\n\\n[3] Bennani et al., Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent. ICML 2020.\\n\\n[4] Kang et al., Unfamiliar Finetuning Examples Control How Language Models Hallucinate. Arxiv 2024.\\n\\n[5] Gekhman et al., Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? EMNLP 2024.\\n\\n[6] Zhang et al., Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models. Arxiv 2024.\\n\\n[7] Ghosal et al., Understanding Finetuning for Factual Knowledge Extraction. ICLR 2024.\\n\\n[8] Sun et al., Distill and Replay for Continual Language Learning. COLING 2020.\\n\\n[9] Wang et al., Orthogonal Subspace Learning for Language Model Continual Learning. EMNLP 2023.\"}", "{\"title\": \"Response to the reviewer (3/4)\", \"comment\": \"> Expanding this method to resource-intensive domains like finance or healthcare could present challenges. If the authors could discuss the trade-offs between added data usage and computational demands [...], it would help assess its feasibility and scalability in high-resource environments.\\n\\nScalability is indeed critical for practical adaptations as the more mixing data needed for mitigation, the less scalable and practical it is. We provide analysis on the amount of REMIX data needed for successful forgetting mitigation. We show in L421-L424 the mixing ratio required for different mixing scenarios: most REMIX strategies only need a mixing ratio of 1.0 to be effective, and increasing the mixing ratio hits diminishing returns.\\nWe also simulate the scaled-up scenario (resource intensive cases as suggested) by increasing the dataset size from 2000 factoids to 4000 factoids.\\n\\n| PopQA with n=4000 | LAMA | EntityQA | WebQA | GSM8K |\\n| ------------------------ | ---- | -------- | ----- | ----- |\\n| No Mixing | 9.2 | 69.0 | 69.1 | 61.2 |\\n| REMIX ratio=1.0 (-/KP) | 89.8 | 93.1 | 81.0 | 65.2 |\\n| REMIX ratio=2.0 (-/KP) | 89.3 | 93.4 | 80.8 | 40.9 |\\n| REMIX ratio=4.0 (-/KP) | 88.9 | 92.2 | 77.0 | 55.5 |\\n\\nKP = Knowledge Pile.\\nWe find that the same trend holds for the scaled up case where only the mixing ratio of 1.0 is effective. A promising future direction is to discover the kind of mixing data that can be effective with very small mixing ratios.\\n\\n \\n> Exploration of Bidirectional Relational Memory with Atomic Fact Datasets: [...] Could the authors clarify whether such tests were conducted, or suggest if REMIX could potentially extend to this type of bidirectional memorization?\\n\\nThe reviewer rightfully pointed out the importance of transferring the learned knowledge to downstream tasks to make it useful. This is discussed in depth in works such as [2, 3, 4, 5], which highlight the difficulty of manipulating the learned knowledge in downstream tasks. Our work is mainly positioned as the *prerequisite* before knowledge manipulation \\u2013 if the knowledge is not retained in the first place, then there\\u2019s no chance for it to be recalled and manipulated successfully. While retention does not entail successful manipulation, we aim to understand the dynamics of retention as the first step, which is on its own a challenging task.\\n\\nNonetheless, we fully recognize the importance of this question and further conducted evaluations to assess such capabilities of our models. We use three templates to assess the model\\u2019s ability to manipulate learned knowledge on the KVR task (since it is guaranteed to have no contamination from pretraining):\\n\\nTemplate 1 (reverse recall): \\\\\\nThe key of the value DEF is? \\\\\", \"key1\": \"ABC, Value1: DEF \\\\\", \"answer\": \"XEF\", \"here_are_two_keys\": \"ABC and XYZ. What is the value of the first key? \\\\\", \"key2\": \"XYZ, Value2: GHI \\\\\", \"key\": \"ABC \\\\\", \"value\": \"DEF \\\\\", \"we_evaluate_on_the_following_models\": \"No-Mixing, REMIX (Random / -), and REMIX (Random / Knowledge-Pile).\\n\\n| KVR | | LAMA | EntityQA | WebQA | GSM8K | MATH | EvolCode | APPS | UltraChat | Avg |\\n| ---------- | ------------- | ---- | -------- | ----- | ----- | ---- | -------- | ---- | --------- | ---- |\\n| Template 1 | No Mixing | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| | REMIX (R/-) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| | REMIX (R/KP) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n| Template 2 | No Mixing | 0.3 | 1.6 | 0.3 | 2.9 | 7.3 | 1.3 | 15.8 | 15.8 | 5.7 |\\n| | REMIX (R/-) | 0.0 | 1.6 | 4.2 | 9.4 | 3.5 | 70.9 | 26.6 | 35.2 | 18.9 |\\n| | REMIX (R/KP) | 7.1 | 18.2 | 8.4 | 1.9 | 0.0 | 0.3 | 5.3 | 36.6 | 9.7 |\\n| Template 3 | No Mixing | 0.0 | 0.1 | 0.6 | 0.3 | 0.5 | 0.0 | 0.6 | 5.1 | 0.9 |\\n| | REMIX (R/-) | 0.0 | 0.8 | 1.7 | 1.2 | 1.2 | 3.4 | 0.8 | 5.0 | 1.8 |\\n| | REMIX (R/KP) | 3.1 | 4.0 | 3.1 | 0.4 | 0.0 | 0.0 | 2.6 | 6.1 | 2.4 |\\n\\nR = Random Word Sequence. KP = Knowledge Pile.\\nWe observe that none of the models can perform Template 1, which corroborates with [8, 9, 10, 11], highlighting the unique challenge of reverse recall in knowledge storage and manipulation.\\nHowever, we found that REMIX improves other types of knowledge manipulation such as Template 2 and 3 as shown in the table. This is an interesting finding that warrants further study. We appreciate the reviewer\\u2019s suggestion and will include this and further findings in the updated version of the paper.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe'd like to send a gentle reminder that we have submitted the rebuttal to address your comments. We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period.\\n\\nWe thank you again for taking the time to review our work.\"}", "{\"comment\": \"Concerning the choice of data mixing types, I now understand the motivation for using both random word sequences and knowledge-rich text like Knowledge Pile. The mathematical derivation and intuition provided in the rebuttal are helpful. However, I still have a few questions about the empirical impact of these two types of mixing. For example, does the performance differ significantly when using only one of these mixing types (e.g., Random vs. Knowledge Pile) across various tasks, and under what specific conditions might one be more effective than the other?\"}", "{\"title\": \"Response to Reviewer Qd9Z (2/2)\", \"comment\": \"We can show the effectiveness of REMIX under constrained computational resources in the following two cases: 1) effectiveness of REMIX on smaller models, or 2) reducing computational needs by lowering the mixing ratio when the model size or data size is large.\\n\\nFor the first case, we provide REMIX experiments on TriviaQA with different model sizes below.\\n\\n| Llama-3.2-1B | LAMA | EntityQA | WebQA | GSM8K | MATH | EvolCode | APPS | UltraChat | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- | -------- | ---- | --------- | ---- |\\n| No Mixing | 36.7 | 79.5 | 91.9 | 88.9 | 97.0 | 98.8 | 98.8 | 97.9 | 86.2 |\\n| Mixing ratio=1.0 (-/KP) | 97.6 | 96.7 | 96.4 | 94.9 | 97.8 | 98.9 | 98.8 | 97.9 | **97.4** |\\n| Mixing ratio=4.0 (-/KP) | 98.2 | 98.2 | 95.5 | 97.2 | 97.6 | 97.4 | 98.5 | 95.3 | 97.2 |\\n\\n\\n| Llama-3.2-3B | LAMA | EntityQA | WebQA | GSM8K | MATH | EvolCode | APPS | UltraChat | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- | -------- | ---- | --------- | ---- |\\n| No Mixing | 47.4 | 79.5 | 91.9 | 88.9 | 93.4 | 98.8 | 98.3 | 97.6 | 87.0 |\\n| Mixing ratio=1.0 (-/KP) | 95.9 | 95.1 | 94.7 | 94.1 | 95.9 | 98.8 | 98.8 | 97.3 | 96.3 |\\n| Mixing ratio=4.0 (-/KP) | 98.3 | 98.3 | 96.7 | 96.4 | 98.1 | 98.0 | 96.6 | 95.8 | **97.3** |\\n\\nWe observe that REMIX is very effective across mixing ratios. Under constrained resources where only small models can be served, REMIX maintains its effectiveness.\\n\\nFor the second case, we showed that 1) the mixing ratio need not scale with dataset size in our previous response and 2) a mixed ratio of 1.0 is effective across different model sizes, from 1B to 3B to 8B in our paper, indicating that the ratio does not increase as model size scales.\\n\\nWe also aim to provide results on larger model sizes, which we are unable to provide at this moment due to time and resource limits. We will include the larger model experiment in our updated version of our paper.\"}", "{\"title\": \"Response to the reviewer\", \"comment\": \"We thank the reviewer for the positive comments, especially recognition of the importance of the problem, the novelty of our proposed mitigation method, and the extensiveness of our experiments. We also appreciate the reviewer for the constructive and actionable suggestions.\\n\\n \\n> Section 3.2 (on replay) is lacking detail in comparison to other sections, especially as it justifies the use of REMIX compared to other replay methods. In particular, I could not find which of the two LLMs was used to measure the effect of replay methods.\\n\\nThank you for pointing this out. The model used for replay was the Llama 3 8B model. We did not directly compare replay and REMIX due to the difference in their settings \\u2013 replay assumes access to stage 1 dataset $D_A$ and REMIX does not. Therefore, we use replay mainly as inspiration for REMIX as opposed to a competitive baseline. For clarity as suggested, we will update the REMIX section to refer and draw appropriate comparison to the replay method.\\n\\n \\n> It would have been interesting to investigate whether this results in any significant degradation of other model capabilities (e.g. fluency) compared to the basic two-stage training process. I understand that this paper specifically focuses on factoid memorization and contains many experiments already, but this could be mentioned as future work.\\n\\nWe find models trained with REMIX actually contain slightly better fluency compared to other models mainly because the models without mixing suffer from overfitting more severely. However, we totally agree a more comprehensive analysis on the side effects when using REMIX will be very valuable. We will aim to include such an analysis in the updated version.\\n\\n \\n> [...] vary either the model or the dataset's size to evaluate the link between model capacity and the efficacy of REMIX/replay techniques.\\n\\nWe provide an extra result on increasing the $D_A$ dataset size from 2000 factoids to 4000 factoids ($D_B$ size maintains to be 2000). \\n\\n| PopQA with n=4000 | LAMA | EntityQA | WebQA | GSM8K |\\n| ------------------------ | ---- | -------- | ----- | ----- |\\n| No Mixing | 9.2 | 69.0 | 69.1 | 61.2 |\\n| REMIX ratio=1.0 (-/KP) | 89.8 | 93.1 | 81.0 | 65.2 |\\n| REMIX ratio=2.0 (-/KP) | 89.3 | 93.4 | 80.8 | 40.9 |\\n| REMIX ratio=4.0 (-/KP) | 88.9 | 92.2 | 77.0 | 55.5 |\\n\\nKP = Knowledge Pile.\\nWe find that the effectiveness of REMIX holds for the scaled up dataset size case. We also find that there seems to be generally less forgetting in the No Mixing case, which might be an interesting phenomenon to further study.\\nWith the computation budgets at hand we defer the model scaling experiments to future research.\\n\\n \\n> Have the authors considered/tried combining REMIX with classic replay techniques? This seems like a natural next step to know whether the use of both methods leads to even better results.\\n\\nIn our investigation, we aim to fully separate the two settings: 1) allowing access to $D_A$ at later stages, and 2) strict continual learning setting where $D_A$ cannot be used in stage 2. We use the second setting throughout since using $D_A$ can be seen as a form of *cheating* because they are the exact factoids to memorize. Therefore, Replay (section 3.2) was only meant to motivate REMIX instead of treated as a fully comparable baseline.\\n\\nPlease let us know if there's any more information we can provide to clarify our work, thank you!\"}", "{\"comment\": \"I thank the authors for their clarifications and their extensive responses to the rebuttals, as well as for the additional experimental results they have provided. I have read in detail the other reviewers' comments and rebuttals, and maintain my current assessment of the authors' work. Furthermore, I strongly encourage the authors to update their manuscript to include their additional experiments (e.g. as appendices).\"}", "{\"comment\": \"I appreciate the authors' recognition of the need for more realistic datasets, and the additional experiments on Natural Questions are valuable. The results for Natural Questions with REMIX are indeed promising, and I can see how they align with the trends observed in the original datasets. That said, I am still curious about how REMIX might perform on datasets that involve overlapping or nested knowledge (e.g., relationships between facts or entities that require reasoning beyond isolated factoid recall). It would be interesting to explore whether REMIX can handle more complex knowledge integration in such contexts.\"}", "{\"title\": \"Response to the reviewer (2/4)\", \"comment\": \"> The datasets used in this paper, such as Key-Value Recall and PopQA, are primarily synthetic and consist of isolated factoids, which may not fully reflect the complexity of real-world data. [...] Testing REMIX on more commonly used datasets, such as Wikipedia or open-domain QA datasets (e.g., Natural Questions), could provide a more realistic evaluation of its effectiveness and generalizability.\\n\\nWe totally agree that capturing real-world complexities is important to understand the generalizability of our conclusion. In our investigation, we aim to strike a balance between controllability and the real world complexity. With the Key-Value Recall (KVR) task, we can ensure there\\u2019s no data contamination from pretraining despite its simplicity. On the other hand, both PopQA and TriviaQA are sourced from real websites yet maintain the \\u201catomic\\u201d nature of each factoid. While PopQA is still template-based, questions in TriviaQA is sourced from real Trivia websites and annotated by humans, which is close to how Natural Question is collected. We aimed to understand the memorization dynamics of these cases from simple/controllable (KVR) to realistic (TriviaQA).\\n\\nNonetheless, we fully recognize the importance of using more natural and realistic datasets to strengthen our conclusion suggested by the reviewer. We further investigate the effectiveness of REMIX on the Natural Question dataset as suggested. The results are the following:\\n\\n\\n| Natural Questions | LAMA | EntityQA | WebQA | GSM8K | MATH | EvolCode | APPS | UltraChat | Avg |\\n| ------------------ | ---- | -------- | ----- | ----- | ---- | -------- | ---- | --------- | ---- |\\n| No Mixing | 16.2 | 80.2 | 90.0 | 71.8 | 88.3 | 99.2 | 93.9 | 87.6 | 78.4 |\\n| REMIX (Random/-) | 38.5 | 81.2 | 63.6 | 80.1 | 90.9 | 79.9 | 89.3 | 80.8 | 75.5 |\\n| REMIX (KP/-) | 2.8 | 13.8 | 55.6 | 93.5 | 99.2 | 99.2 | 99.8 | 98.5 | 70.3 |\\n| REMIX (-/Random) | 18.7 | 61.7 | 85.6 | 78.3 | 96.1 | 92.7 | 96.1 | 88.5 | 77.2 |\\n| REMIX (-/KP) | 94.4 | 95.7 | 91.4 | 75.7 | 94.8 | 93.9 | 96.8 | 83.9 | **90.8** |\\n| REMIX (Random/KP) | 94.1 | 95.7 | 91.7 | 56.7 | 93.1 | 86.6 | 78.2 | 84.6 | 85.1 |\\n\\nKP = Knowledge Pile.\\nThe trend aligns with the trends for our choice of datasets in the paper \\u2013 mixing Knowledge Pile at stage 2 leads to best retention of the learned factoids (+12.4% average accuracy over No Mixing), followed by mixing Random Word Sequence at stage 1 + Knowledge Pile at stage 2 (+6.7% average accuracy over No Mixing). We observe that forgetting is generally less severe than the datasets we chose, which might be due to some level of contamination reported in the Llama 3 technical report (Table 15).\\n\\n \\n> Unclear Justification for Types of Data Mixing: The paper employs both random word sequences and knowledge-rich text (e.g., Knowledge Pile) as mixed data to prevent forgetting, but it does not provide a clear explanation of why these two disparate types would produce similar effects. [...] However, the choice of these sources appears empirical, lacking theoretical justification or detailed explanation. It remains unclear why certain data sources yield better performance on specific tasks, and this potential variation across tasks is not fully explored.\\n\\nWe motivate our choice of different mixing strategies in section 4.1 (L260 - L267) with the following intuition: REMIX at stage 1 aims to *protect* the learned factoids by maintaining a good starting weight space for stage 2 training. REMIX at stage 2 aims to *reduce the interference* of the memorized factoids.\\nWith this intuition, we justify the use of the two mixing datasets with mathematical derivation in Appendix A.3 (L866 - L917). Specifically, in stage 1, we need to choose a mixing data that is uncorrelated to $D_A$ and $D_B$, hence the choice of random word sequence. In stage 2, any natural distribution can achieve mitigation when forgetting is already severe; when the dataset aligns with $D_A$ more than $D_B$, the mitigation can be more effective (e.g., Knowledge Pile helps knowledge factoids more than Arxiv Pile in Figure 5).\\n\\n \\n> Impact on Performance in New Tasks: While REMIX performs well in retaining early-stage knowledge, the paper does not explore its impact on subsequent new tasks. For instance, it would be useful to know whether REMIX might limit the model's ability to learn these new tasks when introduced for fine-tuning.\\n\\nWhen applying mixing at stage 2, we ensure convergence for dataset $D_B$ using the same stopping criterion (loss reaching below 0.0001 by 5 times). We did not observe issues lowering the loss after stage 1 training. Similarly, we observe no issue when performing stage 3 training with mixing data as well (L426-L431 and Figure 8). This suggests that mixing data does not hinder the model's ability to learn new tasks.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We would like to thank the reviewer again for the time for reviewing our work, providing actionable feedback, and engaging in the discussion. We hope our responses address your concerns and would appreciate if the scores can be updated to reflect it.\"}", "{\"title\": \"Response to the reviewer (2/3)\", \"comment\": \"> the idea of mixing generic data into training is not groundbreaking and does not specifically address the unique challenges of factoid memorization\\n\\nWe would like to first point out the important distinctions between our setting and the general continual learning setting where mixing is applied. 1) Mixing often assumes access to distributions of the previous training stages (similar to Replay in Section 3.2 that uses a small percentage of $D_A$). In REMIX, the model does not have access to $D_A$, which renders the problem much more challenging. 2) Mixing random sequence data is unexplored in past literature and its effectiveness is surprising. 3) While mixing generic pretraining data is familiar in settings like continual pretraining, its effectiveness is largely under-explored in the context of factual knowledge retention. 4) Although it appears fairly straightforward, we discovered that such a simple strategy can work extremely well, which we established through mathematical derivation and validated through extensive experimentation.\\n\\nAlong with the empirical efficacy of REMIX, we provide intuition (section 4.1) and derivations (Appendix A.3) to justify the choice of random data (help protecting the learned knowledge) and the generic data (help reduce the interference of the stage 2 data), which we hope supplement the existing data mixing methods such as replay.\\n\\n \\n> The authors only explore experience replay as the baseline approaches, whereas there exists other methods in literature that can mitigate forgetting during continued pretraining (parameter expansion-based methods, regularization methods, etc.\", \"we_provide_three_more_types_of_baselines_to_compare_with_our_data_mixing_method\": \"- Weight regularization: we use Elastic Weight Consolidation (EWC) and calculate the Fisher score using one backward pass using the current mini-batch for training [9].\\n- Behavior regularization: we add the KL between the training model vs the original reference model to the loss [10]. \\n- Parameter expansion method: we learn separate and none-overlapping LoRA adapters at stage 1 and 2, similar to the IncLoRA model in [11].\\n\\nWe compare these baselines against the No Mixing baseline and REMIX (Random at stage 1 and Knowledge Pile at stage 2). We show results on the datasets that *suffer most from forgetting*: all factoid datasets and GSM8K from the non-factoid datasets.\\n \\n\\n| KVR | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 2.1 | 17.4 | 33.8 | 22.4 | 18.9 |\\n| REMIX (Random/KP) | 62.4 | 69.5 | 70.2 | 45.8 | **62.0** |\\n| Weight Regularization | 0.1 | 4.3 | 76.7 | 2.6 | 20.9 |\\n| Behavior Regularization | 0.2 | 15.6 | 36.6 | 28.1 | 20.1 |\\n| Parameter Expansion | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n\\n| PopQA | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 7.7 | 57.8 | 72.5 | 19.0 | 39.3 |\\n| REMIX (Random/KP) | 85.8 | 90.7 | 80.5 | 38.5 | **73.9** |\\n| Weight Regularization | 12.1 | 67.4 | 76.7 | 25.7 | 45.5 |\\n| Behavior Regularization | 7.5 | 59.3 | 55.5 | 40.6 | 40.7 |\\n| Parameter Expansion | 0.0 | 0.1 | 0.0 | 1.2 | 0.3 |\\n\\n| TriviaQA | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 4.3 | 40.5 | 68.6 | 9.4 | 30.7 |\\n| REMIX (Random/KP) | 89.2 | 89.6 | 86.5 | 12.5 | **69.5** |\\n| Weight Regularization | 7.9 | 58.5 | 80.3 | 37.9 | 46.2 |\\n| Behavior Regularization | 6.8 | 39.0 | 71.0 | 14.5 | 32.8 |\\n| Parameter Expansion | 21.9 | 0.1 | 1.1 | 3.0 | 6.5 |\\n\\nWe observe that the weight regularization baseline and output regularization baseline can obtain better factoid retention at different tasks but on average lags behind REMIX by a large margin (40%+ on KVR, 30%+ on PopQA, and 20%+ on TriviaQA). In our attempt, the parameter expansion based baseline learns to achieve 100% accuracy at stage 2, but catastrophically forgets at stage 2, achieving close to zero factoid retention.\"}", "{\"title\": \"Response to the reviewer (4/4)\", \"comment\": \"> Testing REMIX in a three-stage or four-stage setting could provide better insight into its stability and effectiveness over longer training cycles.\\n\\nWe investigated the three-stage setting (L426 - L431 and Figure 8) and show that REMIX can retain learned factoids for different combinations of stage 2 data.\\n\\n \\n> Why forgetting is more pronounced with factoids compared to non-factoids, as well as any observed differences in how REMIX performs on these types?\\n\\nWhile we do not have a direct mechanistic explanation for this phenomenon, it corroborates with the findings in 1) the continual learning literature which suggest catastrophic forgetting happens when two tasks are similar and therefore interfere [6, 7], and 2) finetuning on unfamiliar knowledge disrupts the model and causes exacerbated hallucinations [8, 9, 10, 11]. A mechanistic understanding of this phenomenon is an important area for future investigation but is slightly outside of the scope of our paper.\\n\\n \\n> Effectiveness of Random vs. Generic Text Mixing: The paper explores both random word sequence mixing and generic pretraining text mixing in REMIX. However, it is not entirely clear whether these two approaches yield similar or differing effects on knowledge retention. Could the authors provide more details on any observed differences in effectiveness between random and generic data mixing?\\n\\nBoth the Random and Generic mixing strategies can be applied at either stage 1 or stage 2 and they yield different results as shown in Table 2. Specifically, mixing Random is only effective in stage 1 and mixing Generic is only effective in stage 2. When combining both it yields the best results. This aligns with our motivation in section 4.1 \\u2013 mixing uncorrelated data in stage 1 *protects* the learned factoids and mixing at stage 2 helps *reduce interference* especially when forgetting is severe and the mixing data aligns with $D_A$.\\n\\n \\n> 100% Accuracy in Table 1: In Table 1, it is stated that all Stage 1 datasets are trained to 100% accuracy before Stage 2 training. Could the authors clarify how this 100% accuracy is achieved and guaranteed across different datasets? Specifically, were there particular training techniques or criteria used to ensure full memorization of Stage 1 data?\\n\\nWe train the models until reaching a loss below 0.0001 by 5 times before stopping to ensure full convergence. This often entails training many epochs to guarantee perfect accuracy. We provide training details in Appendix B.4 for reference.\\n\\n \\n> Has REMIX been tested on other types of tasks, such as generative or dialogue-based tasks?\\n\\nIn our initial exploration, we experimented with generative tasks and found that the forgetting phenomenon is much less pronounced than long-tail factoid data to begin with. This prompted us to further investigate the problem of forgetting specific to factoid datasets. We see the same phenomenon manifested in stage 2 training as well where non-factoid datasets generally have less impact leading to forgetting.\\n\\n \\n \\n\\nPlease let us know if there's any more information we can provide to clarify our work, thank you!\\n\\nReferences\\n\\n[1] Kirkpatrick et al., Overcoming Catastrophic Forgetting in Neural Networks. PNAS 2017.\\n\\n[2] Yang et al., Synthetic Continued Pretraining. Arxiv 2024.\\n\\n[3] Allen-Zhu and Li, Physics of Language Models: Part 3.1, Knowledge Storage and Extraction. Arxiv 2024.\\n\\n[4] Allen-Zhu and Li, Physics of Language Models: Part 3.2, Knowledge Manipulation. Arxiv 2024.\\n\\n[5] Berglund et al., The Reversal Curse: LLMs trained on \\\"A is B\\\" fail to learn \\\"B is A\\\". ICLR 2024.\\n\\n[6] Farajtabar et al., Orthogonal gradient descent for continual learning. AISTATS 2020.\\n\\n[7] Bennani et al., Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent. ICML 2020.\\n\\n[8] Kang et al., Unfamiliar Finetuning Examples Control How Language Models Hallucinate. Arxiv 2024.\\n\\n[9] Gekhman et al., Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? EMNLP 2024.\\n\\n[10] Zhang et al., Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models. Arxiv 2024.\\n\\n[11] Ghosal et al., Understanding Finetuning for Factual Knowledge Extraction. ICLR 2024.\\n\\n[12] Sun et al., Distill and Replay for Continual Language Learning. COLING 2020.\\n\\n[13] Wang et al., Orthogonal Subspace Learning for Language Model Continual Learning. EMNLP 2023.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe'd like to send a gentle reminder that we have submitted the rebuttal to address your comments. We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period.\\n\\nWe thank you again for taking the time to review our work.\"}", "{\"title\": \"Response to Reviewer Qd9Z (1/2)\", \"comment\": \"> I appreciate the authors' recognition of the need for more realistic datasets, and the additional experiments on Natural Questions are valuable. The results for Natural Questions with REMIX are indeed promising, and I can see how they align with the trends observed in the original datasets. That said, I am still curious about how REMIX might perform on datasets that involve overlapping or nested knowledge (e.g., relationships between facts or entities that require reasoning beyond isolated factoid recall). It would be interesting to explore whether REMIX can handle more complex knowledge integration in such contexts.\\n\\n\\n\\nThank you for the acknowledgement that the experiment on Natural Questions as the more realistic dataset shows to be promising and valuable.\\n\\nIn the experiments provided in the previous comment, we show that three tests where different templates are used to evaluate REMIX\\u2019s ability to manipulate stored knowledge beyond simple recall. For example, reverse recall (template 1) is the \\u201cBob\\u2019s dad is Sam\\u201d vs \\u201cSam\\u2019s son is Bob\\u201d case. We show that REMIX performs better than No Mixing at *selective recall (template 2)* and *recall-then-manipulate (template 3)*, suggesting that REMIX can already improve on some forms of knowledge manipulation even if it is only trained on isolated examples.\\nREMIX still fails at reverse recall (template 1), which highlights that some forms of knowledge manipulation are more challenging.\\n\\nWhile we would like to emphasize that the major challenge we aim to address in this work is knowledge retention, we are happy to include more evaluation on knowledge manipulation in the updated paper if the reviewer has further suggestions on any particular dataset.\\n\\n> Concerning the choice of data mixing types, I now understand the motivation for using both random word sequences and knowledge-rich text like Knowledge Pile. The mathematical derivation and intuition provided in the rebuttal are helpful. However, I still have a few questions about the empirical impact of these two types of mixing. For example, does the performance differ significantly when using only one of these mixing types (e.g., Random vs. Knowledge Pile) across various tasks, and under what specific conditions might one be more effective than the other?\\n\\nYes, mixing with random word sequence (Random) vs Generic (e.g., K-Pile) data differs empirically as we show this in Table 2 in the paper: Random is most effective when mixing at stage 1, and K-Pile is most effective when mixed in stage 2. \\n\\nTake Key Value Recall for example, mixing Random at stage 1 improves the factoid accuracy ($13.5 \\\\rightarrow 28.8$), but mixing Random at stage 2 does not help ($13.5 \\\\rightarrow 2.1$). Conversely, mixing K-Pile at stage 1 does not help ($13.5 \\\\rightarrow 8.4$), but mixing K-Pile at stage 2 improves significantly ($13.5 \\\\rightarrow 27.8$). This trend is consistent when training on *factoid* datasets in stage 2.\\n\\nWhen training on *non-factoid* datasets at stage 2, Random is still best to mix at stage 1 ($38.9 \\\\rightarrow 77.9$) as opposed to mixing at stage 2 ($38.9 \\\\rightarrow 28.8$), and K-Pile is best to mix at stage 1 as well ($38.9 \\\\rightarrow 52.2$) as opposed to stage 2 ($38.9 \\\\rightarrow 29.8$).\\n\\nIn short, mixing Random at stage 1 is most effective, but mixing K-Pile at stage 1 does not help for factoid data and should be mixed in stage 2, while for non-factoid data K-Pile is more effective at stage 1. This aligns with the intuition suggested by our derivation.\"}", "{\"title\": \"Response to the reviewer (3/3)\", \"comment\": \"Please let us know if there's any more information we can provide to clarify our work, thank you!\\n\\n\\nReferences\\n\\n[1] Kang et al., Unfamiliar Finetuning Examples Control How Language Models Hallucinate. Arxiv 2024.\\n\\n[2] Gekhman et al., Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? EMNLP 2024.\\n\\n[3] Zhang et al., Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models. Arxiv 2024.\\n\\n[4] Ghosal et al., Understanding Finetuning for Factual Knowledge Extraction. ICLR 2024.\\n\\n[5] Yang et al., Synthetic Continued Pretraining. Arxiv 2024.\\n\\n[6] Allen-Zhu and Li, Physics of Language Models: Part 3.1, Knowledge Storage and Extraction. Arxiv 2024.\\n\\n[7] Allen-Zhu and Li, Physics of Language Models: Part 3.2, Knowledge Manipulation. Arxiv 2024.\\n\\n[8] Berglund et al., The Reversal Curse: LLMs trained on \\\"A is B\\\" fail to learn \\\"B is A\\\". ICLR 2024.\\n\\n[9] Kirkpatrick et al., Overcoming Catastrophic Forgetting in Neural Networks. PNAS 2017.\\n\\n[10] Sun et al., Distill and Replay for Continual Language Learning. COLING 2020.\\n\\n[11] Wang et al., Orthogonal Subspace Learning for Language Model Continual Learning. EMNLP 2023.\"}", "{\"metareview\": \"Summary: The paper investigates the challenges of continual memorization in large language models (LLMs), focusing on retaining long-tail factual knowledge (factoids) across multiple stages of training. It highlights that standard fine-tuning approaches often lead to catastrophic forgetting of these factoids, especially when subsequent training involves similar datasets. The authors propose a mitigation strategy, REMIX (Random and Generic Data Mixing), which mixes random or unrelated generic data into training stages to preserve factoid knowledge. REMIX outperforms traditional replay methods, enhancing retention by altering where and how factoids are stored within the model.\", \"strengths\": [\"Clear writing and presentation\", \"Simplicity of the approach\"], \"weakness\": [\"Lukewarm response from all but one reviewer and the positive reviewer didn't champion the paper\", \"Primarily focused on isolated factoids rather than more complex knowledge relationships\", \"More comprehensive analysis on the side effects when using REMIX is not presented\", \"Limited exploration of performance on downstream tasks and more realistic datasets\", \"Applicability of the results with scaling is not clear based on further experiments provided during rebuttal as forgetting seems to reduce\", \"Resource requirements and scalability considerations could be better addressed\"], \"decision\": \"Given the lack of enthusiasm from the reviewers and limited practical relevance, unfortunately, the paper can't be accepted in its current form and addressing all the concerns would warrant another round of reviewing.\", \"additional_comments_on_reviewer_discussion\": [\"We thank the authors and reviewers for engaging during the discussion phase towards improving the paper. Below are some of the highlights:\", \"1. Multiple reviewers (including the positive one) want to see more ablations and baselines:\", \"Reviewers asked for simpler ablations and baselines like mixing and more replay settings, e.g. with ratio=1.\", \"Authors instead added comparisons with weight regularization, behavior regularization, and parameter expansion methods.\", \"2. Side-effects of REMIX\", \"Reviewers asked if REMIX affects other capabilities of the model like fluency\", \"Authors provided a qualitative response mentioning fluency improves and left a more comprehensive analysis on the side effects when using REMIX for future work\", \"3. Questions on practical relevance\", \"Reviewers asked if limited to factoid why not use traditional knowledge systems\", \"Authors argued the work belonged to the larger body of works that aim to understand the knowledge acquisition dynamics of language models through finetuning\", \"Reviewers also asked for more datasets like NQ,\", \"Authors conducted additional experiments on Natural Questions dataset\", \"Finally reviewers asked if factoids can be retrieved in reverse order (bidirectional) and other knowledge manipulations\", \"Authors instead provided experiments via templates, showing improvements in selective recall but not in reverse recall\", \"4. Scalability concerns: Authors provided results on smaller models and more data. The results were mixed as even though REMIX remains effective the gap or forgetting itself seems to reduce.\", \"5. Theoretical understanding: Authors expanded on the mathematical intuition and empirically validated when different mixing strategies work best.\"]}", "{\"comment\": \"In terms of scalability and resource concerns, I appreciate the authors' efforts to evaluate the impact of increasing data sizes and mixing ratios. It would be helpful to know if there are any guidelines or recommendations regarding the trade-offs between computational cost and the effectiveness of REMIX, especially in domains where resources are constrained. For instance, how does REMIX perform when applied to larger models or when deployed in environments with limited computational resources?\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe'd like to send a gentle reminder that we have submitted the rebuttal to address your comments. We sincerely appreciate your feedback and are happy to address any additional questions you may have during this discussion period.\\n\\nWe thank you again for taking the time to review our work.\"}", "{\"summary\": \"This paper examines the problem of forgetting in large language models (LLMs) during continual learning, particularly when training on a small set of long-tail factoids (subject-relation-object triples). The authors identify two primary challenges in retaining these long-tail facts over successive training stages: the limitations of standard replay techniques and the interference from training on unrelated datasets. To address these challenges, the authors propose REMIX (Random and Generic Data Mixing), which combines unrelated, generic data with the factoid data to prevent forgetting. Through comprehensive experiments, REMIX is shown to outperform replay-based methods and recover performance from severe forgetting. The authors further analyze how REMIX influences the learning process, noting that it shifts the storage of factoids to earlier layers and diversifies the layers used for storing these facts, thus reducing interference from later training stages.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Novel Approach to Memory Retention: REMIX introduces a unique approach to mitigate forgetting by mixing random and generic data during training, achieving substantial performance improvement compared to replay-based methods.\", \"thorough_experimental_analysis\": \"The authors conduct extensive experiments across multiple datasets, providing empirical evidence of REMIX\\u2019s effectiveness. They also analyze layer-specific behavior, offering insights into how REMIX modifies the model\\u2019s memory dynamics.\", \"generalizable_insight_for_continual_learning\": \"By demonstrating the limitations of replay techniques and proposing alternative strategies, this paper offers valuable insights for both continual memory retention and general continual learning in LLMs.\", \"weaknesses\": \"1 Lack of Comparison with Other Forgetting Mitigation Techniques:\\nAlthough the authors discuss the limitations of replay-based methods, the paper lacks a systematic comparison with other common forgetting mitigation techniques, such as Elastic Weight Consolidation (EWC) or Knowledge Distillation. For instance, EWC is frequently used in continual learning to reduce interference by regularizing key weights, while Knowledge Distillation selectively retains critical information. Comparing REMIX with these methods would help clarify REMIX\\u2019s unique advantages and performance under similar conditions.\", \"2_synthetic_and_specific_dataset_selection\": \"The datasets used in this paper, such as Key-Value Recall and PopQA, are primarily synthetic and consist of isolated factoids, which may not fully reflect the complexity of real-world data. For example, in practical scenarios, knowledge is often presented in overlapping or nested forms (e.g., \\u201cThe author of Hamlet is Shakespeare\\u201d and \\u201cShakespeare wrote Hamlet\\u201d) rather than as isolated facts. Testing REMIX on more commonly used datasets, such as Wikipedia or open-domain QA datasets (e.g., Natural Questions), could provide a more realistic evaluation of its effectiveness and generalizability.\", \"3_unclear_justification_for_types_of_data_mixing\": \"The paper employs both random word sequences and knowledge-rich text (e.g., Knowledge Pile) as mixed data to prevent forgetting, but it does not provide a clear explanation of why these two disparate types would produce similar effects. For example, random word sequences contain no factual content, while Knowledge Pile includes a substantial amount of knowledge and contextual information. The authors could further analyze why both random and knowledge-rich data help prevent forgetting or test the specific impacts of each type in different scenarios.\", \"4_impact_on_performance_in_new_tasks\": \"While REMIX performs well in retaining early-stage knowledge, the paper does not explore its impact on subsequent new tasks. For instance, it would be useful to know whether REMIX might limit the model's ability to learn these new tasks when introduced for fine-tuning. Evaluating REMIX\\u2019s impact on new tasks could provide insights into potential trade-offs between memory retention and generalization to new tasks.\", \"5_limited_evaluation_on_extended_stages\": \"The experiments primarily focus on two-stage continual learning, with limited testing of multi-stage scenarios. In real-world applications, models may undergo multiple updates, such as continual fine-tuning in legal or medical domains. Testing REMIX in a three-stage or four-stage setting could provide better insight into its stability and effectiveness over longer training cycles.\", \"6_resource_and_scalability_concerns\": \"REMIX relies on incorporating additional mixed data during training, which may increase computational costs, especially for large models such as Llama-3-8B. Expanding this method to resource-intensive domains like finance or healthcare could present challenges. If the authors could discuss the trade-offs between added data usage and computational demands or provide a rough estimate of the resources required to implement REMIX in a real-world setting, it would help assess its feasibility and scalability in high-resource environments.\", \"questions\": \"1 Comparison with Other Forgetting Mitigation Techniques:\\nHow does REMIX compare with other established forgetting mitigation methods across different tasks? A systematic comparison would strengthen the case for REMIX\\u2019s advantages.\", \"2_exploration_of_bidirectional_relational_memory_with_atomic_fact_datasets\": \"The datasets used appear to consist mainly of isolated factoids or \\\"atomic\\\" facts, without directly exploring bidirectional or inverse relational memory. For example, if the model learns that \\\"Bob\\u2019s dad is Sam,\\\" it would be valuable to evaluate whether the model can infer the inverse relationship, such as \\\"Who is Sam's son?\\\" This type of associative memory is essential for comprehensive fact retention, as it reflects a more integrated understanding of relationships. Could the authors clarify whether such tests were conducted, or suggest if REMIX could potentially extend to this type of bidirectional memorization?\", \"3_why_forgetting_is_more_pronounced_with_factoid_datasets\": \"The paper reports that models experience significant forgetting when fine-tuned on factoid datasets in the second stage, but not on non-factoid datasets. Could the authors elaborate on why forgetting is more pronounced with factoids compared to non-factoids, as well as any observed differences in how REMIX performs on these types? This could provide further insight into the underlying mechanisms of forgetting and the strengths of REMIX.\", \"4_rationale_behind_data_mixing_types\": \"The paper employs various data sources (e.g., Knowledge Pile, random word sequences) as mixed data in REMIX. However, the choice of these sources appears empirical, lacking theoretical justification or detailed explanation. It remains unclear why certain data sources yield better performance on specific tasks, and this potential variation across tasks is not fully explored. There is no clear guideline for selecting mixed data types, nor an analysis of how different types of mixed data impact task performance. A more thorough theoretical or empirical examination of these differences could enhance understanding of REMIX\\u2019s applicability and effectiveness across various contexts.\", \"5_impact_of_remix_on_new_task_performance\": \"The paper focuses on preventing forgetting in prior tasks, but it does not discuss the potential impact of REMIX on performance for new tasks introduced in later stages. While REMIX seems effective at preserving knowledge from earlier stages, it remains unclear whether this approach might inadvertently reduce performance on new tasks due to constraints placed on the model\\u2019s capacity or flexibility. An analysis of how REMIX affects the model's performance on new tasks would provide a more balanced understanding of its effectiveness in continual learning contexts.\\n6 Effectiveness of Random vs. Generic Text Mixing:\\nThe paper explores both random word sequence mixing and generic pretraining text mixing in REMIX. However, it is not entirely clear whether these two approaches yield similar or differing effects on knowledge retention. Could the authors provide more details on any observed differences in effectiveness between random and generic data mixing? Understanding how each type impacts forgetting could offer valuable insights into the dynamics of memory retention in large language models.\", \"7_combined_mixing_effectiveness\": \"The results indicate that combining random word sequence mixing with generic data mixing produces the best outcomes, but it is not fully explained why this combination is most effective. Is there a theoretical or empirical rationale for why mixing both types of data provides better retention compared to using either one alone? Additional explanation of this combined effect would enhance understanding of REMIX\\u2019s underlying mechanisms and may help guide future applications.\\n8 100% Accuracy in Table 1:\\nIn Table 1, it is stated that all Stage 1 datasets are trained to 100% accuracy before Stage 2 training. Could the authors clarify how this 100% accuracy is achieved and guaranteed across different datasets? Specifically, were there particular training techniques or criteria used to ensure full memorization of Stage 1 data? Additional details on this process would help in understanding the baseline setup for evaluating forgetting.\", \"9_suitability_across_task_types\": \"Has REMIX been tested on other types of tasks, such as generative or dialogue-based tasks? Additional testing on these tasks would clarify REMIX\\u2019s versatility and applicability beyond factoid retention.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studied the forgetting issues when finetuning a LLM in multi-stage datasets. They focus the setting of continual memorizing of factoid facts - Stage 1 is factoid fact datasets and Stage 2 finetune with fact/non-fact datasets. The authors find non-fact datasets will cause smaller drop. Based on this intuition, the authors proposed a data mixing strategy (introducing some unrelated datasets) in multi-stage fine-tuning to reduce the forgetting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The papers present problems and solution pretty clear and easy to follows.\\nThe authors proposed a simple yet effect way to reduce the interference among the different fine-tuning datasets.\", \"weaknesses\": \"There are some other method to reduce the interference among the datasets of stage 1 and stage 2. For example, the method needs to compare with another baseline, i.e. \\\"mixing of Data A and Data B\\\"\", \"questions\": \"Table 1, the degradation is more severe for Stage 2 is also a factoid dataset. Do you have any explanation? Also, there is big drop when using GSK8k. It will be very insightful to understand the interplays of the datasets.\\n\\nFor the Replay approach, what if we use a ratio = 1.0?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper tackles the issue of continual memorization of factoids in large language models (LLMs), focusing on retaining specific, rare knowledge (factoids) as the model undergoes further training on unrelated datasets. Typical replay techniques fail to prevent forgetting of such factoids in LLMs, leading the authors to propose REMIX, a data-mixing approach that interleaves random or generic data during training stages to reduce forgetting. The paper demonstrates that REMIX helps preserve factoid knowledge across various datasets and training scenarios, with results analyzed using tools like Logit Lens.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality: The paper draws attention to the issue of continual memorization of long-tail information through factoid memorization.\", \"quality\": \"The experiments are conducted rigorously, covering a range of datasets and demonstrating REMIX\\u2019s impact across several configurations.\", \"clarity\": \"Explanations are mostly clear, and the figures help illustrate key points.\", \"significance\": \"The method has some practical relevance for fact retention.\", \"weaknesses\": \"Unclear Problem Motivation: The paper does not convincingly explain why memorizing long-tail knowledge in the form of factoids is important in practical applications. Without a clear motivation, the relevance of the problem formulation is uncertain, which diminishes the contribution\\u2019s significance. If we are only concerned about factoids, why use LLMs in the first place? Why not just use traditional knowledge-based systems? The authors should show how memorizing factoids leads to downstream applications, such as utilizing the information from the factoids on tasks that specifically require LLMs.\", \"lack_of_novelty\": \"REMIX lacks sufficient originality; the idea of mixing generic data into training is not groundbreaking and does not specifically address the unique challenges of factoid memorization.\", \"lack_of_baselines\": \"The authors only explore experience replay as the baseline approaches, whereas there exists other methods in literature that can mitigate forgetting during continued pretraining (parameter expansion-based methods, regularization methods, etc.)\", \"questions\": [\"Suggestions\", \"I would like to suggest the authors the strengthen the motivation of needing to memorize long-tail knowledge through the form of factoids, but showing that it transfers the knowledge itself to downstream NLP tasks that require integrating those long-tail information. Simply getting a high score in the factoid task itself is insufficient to motivate the problem formulation.\", \"I would suggest that the authors include more baselines from the continual learning literature that can mitigate the forgetting of previously learned knowledge.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work focuses on the continual memorization setting in LLMs, a subsetting of continual learning in which a model is first fine-tuned on a factoid dataset, then successively fine-tuned on other datasets (multiple training stages) and must retain knowledge learned during the first stage. The authors first demonstrate that catastrophic forgetting occurs in a 2-stage training process, especially if the dataset from the second stage is a factoid one, and that usual replay methods used in continual learning do not satisfactorily mitigate the issue.\\n\\nThe authors then introduce REMIX, a strategy for preventing forgetting in the multi-stage learning process. In this strategy, additional training data is added to one or both of the training stages. This data takes the form of either generic of random data. The authors show that this new method produces significantly better result than the basic training process on LLaMa-3 and Mistral-7B, which they show to be linked to a more diversified and larger set of layers in which factoids are stored.\\n\\nFinally, the authors perform a large number of validation experiments, proving that this method is effective with different mixing datasets, and investigate the effect of several other hyperparameters such as the mixing sequence length, the mixing ratio and the number of training stages.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is very well-written and clear. The section order feels natural, and the reasoning and intuition for each idea are given clearly. The tables and charts summarize the data well, and while informative, the text flows well and is not overly complicated. As a result, this is a very pleasant paper to read. In addition, the references are recent and seem relevant.\", \"The paper touches upon the issue of catastrophic forgetting of factoids in LLMs, which is a relevant and unsolved issue, especially in the current context where many pre-trained LLMs showcase good reasoning capabilities but cannot easily be updated afterward to store novel world knowledge.\", \"The paper contains a large number of experiments, that give clear motivation for introducing REMIX, and then show its efficacy over many settings.\", \"The ideas found in this work are not revolutionary per se (which is not to say that they lack originality; see my next point), but the execution is straightforward and good. The authors carefully checked for important details such as dataset overlap.\", \"The idea of mixing generic/random data with the training dataset is quite creative and original. Despite being counterintuitive, the authors justify this idea mathematically.\", \"As a result, I recommend this paper for publication with no major point of criticism.\"], \"weaknesses\": [\"Many of the points of criticism I had while reading this paper were answered later on, or in the appendices. The other points that I have mainly consist of questions (see section below).\", \"In section 4.2, the word \\\"Figure\\\" is used several times instead of \\\"Table\\\".\", \"Section 3.2 (on replay) is lacking detail in comparison to other sections, especially as it justifies the use of REMIX compared to other replay methods. In particular, I could not find which of the two LLMs was used to measure the effect of replay methods.\"], \"questions\": [\"The authors show that their method causes the model to store factoids in more layers of the model, which presumably means that the factoids overwrite previous data in these shallower layers. It would have been interesting to investigate whether this results in any significant degradation of other model capabilities (e.g. fluency) compared to the basic two-stage training process. I understand that this paper specifically focuses on factoid memorization and contains many experiments already, but this could be mentioned as future work.\", \"Another interesting experiment would be to vary either the model or the dataset's size, to evaluate the link between model capacity and the efficacy of REMIX/replay techniques. Do the authors have any insight or early intuition regarding this?\", \"Have the authors considered/tried combining REMIX with classic replay techniques? This seems like a natural next step to know whether the use of both methods leads to even better results.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the reviewer (1/4)\", \"comment\": \"We thank the reviewer for the detailed comments, suggestions, and the recognition of the novelty of our methods, the extensiveness of our experiments, and the generalizability of our conclusions. The reviewer pointed out several important aspects that would help strengthen our investigation, which we provide further experiments and explanations in the following passages.\\n\\n \\n> [...] the paper lacks a systematic comparison with other common forgetting mitigation techniques, such as Elastic Weight Consolidation (EWC) or Knowledge Distillation.\", \"we_provide_three_more_types_of_baselines_to_compare_with_our_data_mixing_method\": \"- Weight regularization: we use Elastic Weight Consolidation (EWC) [1] and calculate the Fisher score using one backward pass using the current mini-batch for training.\\n- Behavior regularization: we add the KL between the training model vs the original reference model to the loss. Knowledge distillation can be seen as a type of regularization in the continual learning setting [12].\\n- Parameter expansion method: we learn separate and none-overlapping LoRA adapters at stage 1 and 2, similar to the IncLoRA model in [13].\\nWe compare these baselines against the No Mixing baseline and REMIX (Random at stage 1 and Knowledge Pile at stage 2). We show results on the datasets that *suffer most from forgetting*: all factoid datasets and GSM8K from the non-factoid datasets.\\n \\n\\n| KVR | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 2.1 | 17.4 | 33.8 | 22.4 | 18.9 |\\n| REMIX (Random/KP) | 62.4 | 69.5 | 70.2 | 45.8 | **62.0** |\\n| Weight Regularization | 0.1 | 4.3 | 76.7 | 2.6 | 20.9 |\\n| Behavior Regularization | 0.2 | 15.6 | 36.6 | 28.1 | 20.1 |\\n| Parameter Expansion | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |\\n\\n| PopQA | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 7.7 | 57.8 | 72.5 | 19.0 | 39.3 |\\n| REMIX (Random/KP) | 85.8 | 90.7 | 80.5 | 38.5 | **73.9** |\\n| Weight Regularization | 12.1 | 67.4 | 76.7 | 25.7 | 45.5 |\\n| Behavior Regularization | 7.5 | 59.3 | 55.5 | 40.6 | 40.7 |\\n| Parameter Expansion | 0.0 | 0.1 | 0.0 | 1.2 | 0.3 |\\n\\n| TriviaQA | LAMA | EntityQA | WebQA | GSM8K | Avg |\\n| ----------------------- | ---- | -------- | ----- | ----- | ---- |\\n| No Mixing | 4.3 | 40.5 | 68.6 | 9.4 | 30.7 |\\n| REMIX (Random/KP) | 89.2 | 89.6 | 86.5 | 12.5 | **69.5** |\\n| Weight Regularization | 7.9 | 58.5 | 80.3 | 37.9 | 46.2 |\\n| Behavior Regularization | 6.8 | 39.0 | 71.0 | 14.5 | 32.8 |\\n| Parameter Expansion | 21.9 | 0.1 | 1.1 | 3.0 | 6.5 |\\n\\n\\nWe observe that the weight regularization baseline and output regularization baseline can obtain better factoid retention at different tasks but on average lags behind REMIX by a large margin (40%+ on KVR, 30%+ on PopQA, and 20%+ on TriviaQA). In our attempt, the parameter expansion based baseline learns to achieve 100% accuracy at stage 2, but catastrophically forgets at stage 2, achieving close to zero factoid retention.\"}", "{\"title\": \"General Message to Reviewers\", \"comment\": \"Dear Reviewers,\\n\\nWith the author-reviewer discussion period ending soon, we wanted to take this opportunity to thank you again for your suggestions and to check whether you had any remaining questions after our responses! We especially hope that you have a chance to review the additional experiments we ran based on reviewer suggestions described in each of our rebuttals.\\n\\nIf we've addressed your concerns, we'd be grateful if you'd consider updating your score!\"}" ] }
2gTEW29qsM
Masked Generative Priors Improve World Models Sequence Modelling Capabilities
[ "Cristian Meo", "Mircea Tudor Lică", "Zarif Ikram", "Akihiro Nakano", "Vedant Shah", "Aniket Rajiv Didolkar", "Dianbo Liu", "Anirudh Goyal", "Justin Dauwels" ]
Deep Reinforcement Learning (RL) has become the leading approach for creating artificial agents in complex environments. Model-based approaches, which are RL methods with world models that predict environment dynamics, are among the most promising directions for improving data efficiency, forming a critical step toward bridging the gap between research and real-world deployment. In particular, world models enhance sample efficiency by learning in imagination, which involves training a generative sequence model of the environment in a self-supervised manner. Recently, Masked Generative Modelling has emerged as a more efficient and superior inductive bias for modelling and generating token sequences. Building on the Efficient Stochastic Transformer-based World Models (STORM) architecture, we replace the traditional MLP prior with a Masked Generative Prior (e.g., MaskGIT Prior) and introduce GIT-STORM. We evaluate our model on two downstream tasks: reinforcement learning and video prediction. GIT-STORM demonstrates substantial performance gains in RL tasks on the Atari 100k benchmark. Moreover, we apply Transformer-based World Models to continuous action environments for the first time, addressing a significant gap in prior research. To achieve this, we employ a state mixer function that integrates latent state representations with actions, enabling our model to handle continuous control tasks. We validate this approach through qualitative and quantitative analyses on the DeepMind Control Suite, showcasing the effectiveness of Transformer-based World Models in this new domain. Our results highlight the versatility and efficacy of the MaskGIT dynamics prior, paving the way for more accurate world models and effective RL policies.
[ "World Modeling", "Model based RL" ]
Reject
https://openreview.net/pdf?id=2gTEW29qsM
https://openreview.net/forum?id=2gTEW29qsM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uhQHoNDNnM", "uRvup7WwgU", "tFy9UYW2rX", "q0aaccIsPe", "pvem2GeP5x", "odV3roVmrV", "o0elFFpGPz", "l08YbOdylL", "kTAxVxXbwb", "gxRcO6WLD4", "ahBQ24T1xA", "Z0c91Ciok7", "Sd16wiZmpP", "S76YPpaqeH", "RQAa5KxqT3", "QaClVi5KNl", "PaI76pVEAJ", "NyuEQ6grH7", "MNawfQGAHg", "MCFIfDIIC3", "L0TnjWMpZB", "JMaBLVW1Kp", "JK7syeUQMn", "IzsCzT7ufs", "H0ECD7oylW", "BLgtR3VV34", "AjfbtM4Hu3", "AbePkDaXiu", "AZ9AVaNF7X", "8v7aNlIkxG", "8tMbsfH9hh", "86HJuzDCR4", "7GZz2eTlLU", "6i2C4VD9Fr", "6K4E3iORy4", "347nciBMyk", "1wEzILE87q", "1AUB5OuPoj", "0gqmffPnnE" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733039494383, 1733039611806, 1732362950578, 1732363546437, 1734856571583, 1732871046429, 1733040250909, 1732360987978, 1732925139879, 1732362160334, 1731618929256, 1733040675849, 1732360557532, 1731618894237, 1731618960357, 1733164938304, 1731618983841, 1737524277298, 1732362359295, 1730286285003, 1732380877872, 1732362090863, 1732878103323, 1732536419367, 1732360672295, 1732539828797, 1732503738668, 1732363179685, 1729587700056, 1732873722130, 1732362643657, 1730460841554, 1732370584390, 1730027989659, 1732360015066, 1733041006641, 1732360764146, 1732360295208, 1732359384838 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Area_Chair_rsyu" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_JuLK" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_JuLK" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_Lr34" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_zpZw" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_JuLK" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_JuLK" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_JuLK" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_zpZw" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_aqah" ], [ "ICLR.cc/2025/Conference/Submission13706/Reviewer_aqah" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ], [ "ICLR.cc/2025/Conference/Submission13706/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reminder\", \"comment\": \"Dear Reviewer,\\n\\nCould you please answer us? We addressed every question you had and weaknesses you mentioned. \\n\\nWe only have 2 days before the time to discuss ends. We kindly ask you to reply, we would really appreciate to have the chance to address any remaining concerns or doubts. On the other hand, if you don\\u2019t have any, we would be really happy if you could raise our grade, considering we addressed to all your questions. \\n\\nWe really put a lot of work on this project, we really hope we can hear from you before the time ends.\"}", "{\"comment\": \"We really thank the reviewer for your feedback and suggestions!\\n\\nDo you have in mind any particular analysis we should carry out with respect to the latent space?\"}", "{\"title\": \"Q&A 2\", \"comment\": \"Q2: The contribution to supporting continuous actions is overclaimed (as 'for the first time'). In fact, concatenating or summating continuous inputs with hidden states is a too straightforward approach in current VLA models (e.g., OpenVLA for inputting continuous visual representations) and action-conditioned video prediction models (e.g., iVideoGPT for inputting continuous actions).\", \"a2\": \"We thank the reviewer for this feedback. Current VLA models (e.g., OpenVLA) do not use actions within their input space, instead they output actions. This is a very important difference since the model does not need to learn a causality relationship between states and actions. In other words, VLA models can be interpreted as policies and not world models, which makes them different modules and therefore cannot be compared to our model. When it comes to action-conditioned video prediction models (e.g., iVideoGPT), they do not mix actions and states, they use embeddings to map actions and input them as separated entities together with the states.\\n\\nWe found this question particularly interesting and we decided to perform an experiment to compare the action embedder approach used in iVideoGPT against our action state mixer. Moreover, we also include a version where we simply concatenate actions and states, without mixing them, to validate whether the contribution to the performance of our approach comes from the mixer or from concatenating actions and states. This analysis compares the effect of the State Mixer on downstream performance against the approach proposed in iVideoGPT. Figure 14 demonstrates that the State Mixer consistently outperforms the considered baselines.\\nInterestingly, under the given setup, the iVideoGPT approach fails to learn meaningful policies. We hypothesize that this limitation arises from the scale of the training procedure and considered environments. Specifically, iVideoGPT is designed to leverage much larger datasets, enabling it to learn robust representations. Moreover, we observe that bypassing the State Mixer by directly concatenating and feeding state and action embeddings into the transformer allows the model to learn policies that are meaningful\\nbut perform suboptimally compared to the State Mixer-based approach. This finding highlights the effectiveness of the State Mixer in extracting and processing state-action representations crucial for learning optimal policies.\\n\\nFinally, we would like to remark that combining actions and states meaningfully is not a straightforward problem, especially when it comes to continuous actions and discrete states. Therefore, we believe the community would greatly benefit from our work, as it explicitly shows and highlights an approach to do that, while all the other cited papers do not expand on this problem and simply state what they used.\"}", "{\"title\": \"Q&A 5 and 6\", \"comment\": \"Q5: To my knowledge, perplexity is a metric whose lower values mean better. However, in Table 3, higher perplexity is marked as better.\", \"a5\": \"In the context of computer vision and Autoencoder networks that learn image representations perplexity is maximized to encourage uniform utilization of codebook entries in models like Vector Quantized Variational Autoencoders (VQ-VAEs).\\nA higher perplexity indicates that the codebook entries are used more uniformly, which is desirable because it promotes diversity and richness in representation. Maximizing perplexity prevents codebook collapse, where only a few codes are used, leading to loss of information and reduced model expressiveness. It also improves reconstruction quality by allowing the model to capture more complex and diverse image features, enhancing detail preservation in reconstructed images.\", \"here_you_can_find_some_other_papers_where_higher_perplexity_is_considered_better\": \"[2] Yuhta Takida et al. SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization, 2022. https://arxiv.org/pdf/2205.07547\\n\\n[3] Minyoung Huh et al. Straightening Out the Straight-Through Estimator: Overcoming Optimization Challenges in Vector Quantized Networks, 2023. https://proceedings.mlr.press/v202/huh23a/huh23a.pdf\", \"q6\": \"In Figure 6, the quadruped agents are too small in the images. This work seems to have used an unusual camera setting for these tasks.\", \"a6\": \"We thank the reviewer for the feedback. We used the standard DMC wrapper provided by torch RL: https://pytorch.org/rl/stable/reference/generated/torchrl.envs.DMControlWrapper.html\\n\\nWhen it comes to visualizing the results, usually the images are upsampled for visualization purposes. We did not have the time to do that before the submission, we will try to fix it before the rebuttal period ends. For now, we removed the sequence from the paper.\"}", "{\"metareview\": \"While GIT-STORM shows some promising results in improving world model sequence modeling through masked generative priors, several critical concerns remain insufficiently addressed. The authors' claim of being the first to apply transformer-based world models to continuous actions was overstated, given the existence of TransDreamer. Additionally, there is concerning ambiguity in the MaskGIT head design and sampling process, particularly regarding the gap between training and inference time behavior. The performance gains over STORM appear modest and heavily depend on specific environments, while being still outperformed by DreamerV3 on DMC tasks. Though the authors provided detailed responses, their explanations remain unconvincing, suggesting the work needs further development before publication. I recommend rejecting this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised concerns, primarily focusing on overclaimed novelty regarding transformer-based models for continuous actions, unclear state mixer design, questionable performance improvements, and ambiguity in the MaskGIT head's mechanism. The authors responded with new ablation studies, clarified novelty claims, and provided technical explanations. While three reviewers were satisfied and raised their scores, it was still not enough to reach the level of acceptance.\"}", "{\"comment\": [\"I appreciate the clarification from the authors. However, I still do not fully understand.\", \"Q1: I get that $z_t$ is not never used in Algorithm 1. But let's look at line 6 of Algorithm 1. It seems that the output (corresponding to $z_{t+1}$) is also used as input to MaskGIT for multiple iterations. This creates a gap between training and imagining/testing, because during training, the MaskGIT input is $z_t$.\", \"Q2: When we talk about a higher sample efficiency of the MBRL algorithm, we mean that the algorithm can achieve higher task performance with the same or fewer environment steps (rather than training time; see https://ai.stackexchange.com/a/5295). Therefore, the author's statement that \\\"quicker the generation (or sampling) of imagined trajectories and, hence, the higher the sample efficiency\\\" is not true, since the generation speed is independent of the task performance for the same number of environment steps.\"]}", "{\"comment\": \"Dear Reviewer,\\n\\nWe did our best to answer all your questions, and I must say we really enjoyed it. Thanks to you we double checked every thought or component of our model. Moreover you introduced as to the concept of weight tying. \\n\\nThere is not much time before the end of the rebuttal period, considering you are the only one with a grade of 3, we really hope you increase the grade. At this point if this paper will be accepted or not is in your hands, so we really hope we convinced you about the soundness and the contribution that this paper brings to the community. We did our best and used a lot of compute in this paper to perform experiments on both Atari and DMC. we really hope to see this paper published at ICLR2025.\\n\\nThank you so much again for all your feedback and time you spent for this review.\"}", "{\"title\": \"Q&A 1\", \"comment\": \"We thank the reviewer for such insightful and important questions. We are really happy to answer and fulfil any remaining doubts. Here we list aggregated questions and the related answers.\", \"q1\": \"The motivation and effect of using MaskGIT head in world models are unclear. Is there any evidence that the world models would have hallucinations, and how could a MaskGIT head mitigate such issues? How to distinguish if the improved performance (both RL and FVD) comes from more parameters or the MaskGIT prior?\", \"a1\": \"We thank the reviewer for the relevant question. Hallucinations are associated with any autoregressive model that uses sampling to compute the next token, such as GPT models, but also transformer-based world models. When it comes to world models, hallucinations may appear in the form of unrealistic reconstructions or wrong state dynamics transitions. Table 1 shows the comparison of TECO between using an MLP prior and\\na MaskGIT prior for video prediction in terms of FVD, where it can be seen that using a MaskGIT prior leads to much better video prediction performances (TECO w/ MaskGIT: 48 -> TECO w/ MLP head: 153 and TECO w/ MaskGIT: 199 -> TECO w/ MLP head: 228; respectively on DMLab and SSv2). This finding motivated us to explore the use of a MaskGIT head in world models. Indeed, since the MaskGIT leads to better results in Video Prediction tasks, it is plausible that it increases the ability of the dynamics module to produce better state transitions, which are the main source of hallucinations in world models and video prediction models. \\n\\nThe MaskGIT head has actually way fewer parameters than an MLP one ( MLP head: 4720640 parameters -> MaskGIT head: 460960 parameters). Therefore, it cannot be the case that the number of parameters is responsible for the improvement in performance. By contrast, the difference in number of parameters shows that the MaskGIT head presents a better inductive bias and requires fewer parameters to perform better.\"}", "{\"comment\": \"We thank the reviewer for the question. We are happy to expand on the MaskGIT prior and its usage in Algorithms 1 and 2.\", \"sq1\": \"In Line 3 of Algo 1, should $z^0$ be a draft of $z_{t+1}$?\", \"a1\": \"Not exactly. The MaskGIT prior respects the following equation: $z_{t+1} \\\\sim MaskGIT Head(z_t, h_t)$, therefore when drafting $z^0 \\\\sim MaskGIT Head( fully masked (z_t) , h_t)$, $z^0$ is called draft but is not necessarily and literally a draft of $z_{t+1}$.\", \"sq3\": \"If the above is true, then we face a gap between training and sampling, because for the same desired outputs $z_{t+1}$, our inputs is masked $z_t$ during training but masked $z_{t+1}$ during sampling.\", \"a3\": \"We are aware that this regime is not exactly as in training, it works well and could be explained considering that often there is not a big difference from one frame to another one, therefore we can expect that some of the tokens in $z_t$ remain the same in $z_{t+1}$. If we now consider a masked version of $z_t$ and a masked version of $z_{t+1}$, if some of the few tokens that have changed get masked the two representations would be even more similar. Although we are not sure if there is a way to prove this, we can say that they may appear as sampled from the same distribution and that, therefore, using a masked version of $z_{t+1}$ would be considered an in-distribution regime since we always use a masked version of $z_t$ during training.\\n\\nAlthough this is just a potential explanation of why it works, our explanation is also corroborated by the experimental and downstream results, which confirm and show that the model works. Finally, we are not the first ones that use this approach, indeed TECO[1] and the Draft-and-Revise algorithm [2] paper use the same approach in their proposed architectures, showing that this approach works well even there. \\n\\nMoreover, to answer reviewer aqah, we added Fig. 17 which shows the average probability of predicting a specific token, clearly illustrating how the GIT-STORM head presents higher probabilities and therefore is more confident in predicting the following tokens. This result corroborates, even more, the idea that the MaskGIT head works well and that in practice there is not a gap between using a masked version of $z_t$ and of $z_{t+1}$.\", \"sq2\": \"If the above is true, then in Line 3&4 of Algo 2 (called from Line 6 of Algo 1 with $\\\\gamma =1$ ), is the input of the MaskGIT's BidirectionalTransformer a masked version of $z^0$, namely a masked version of $z_{t+1}?\", \"a2\": \"Using the same explanation from A3, the intermediate representations $z^{\\\\gamma}$ can be considered as an in-distribution regime and therefore interpreted as a masked version of $z_t$.\\n\\n\\n[1] W. Yan et al., Temporally Consistent Transformers for Video Generation, 2022\\n[2] D. Lee et al., Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer, 2022\"}", "{\"title\": \"Q&A 3 and 4\", \"comment\": \"Q3: Section 2.1 could be more concise, as these are not quite related to the key contributions and are frequently repeated in each of the model-based RL papers.\", \"a3\": \"We thank the reviewer for this feedback. We agree that the section could be a bit more concise. We removed some of the information that can be easily found in any other paper within the field and that does not directly relate to our work.\", \"q4\": \"On lines 307-309, I think STORM uses KV caching in both the conditioning phase and the imagination phase, see here. The predict_next() uses forward_with_kv_cache() for decoding.\", \"a4\": \"We thank the reviewer for noticing this mistake. Indeed STORM uses KV caching already. What we did was vectoring the for loop within the conditioning phase, since all information is available from the start. One of the authors misstated the contribution. We removed the lines that relate to KV caching contributions in our paper. We apologize for the mistake.\"}", "{\"title\": \"Q&A 2\", \"comment\": \"Q2: It remains unclear why GIT-STORM does not consistently outperform STORM across all benchmarks. Could the authors elaborate on why GIT-STORM occasionally does not surpass STORM and the conditions where improvements are only minor? Understanding this would clarify the contextual efficacy of the MaskGIT prior.\", \"a2\": \"We understand why this may be confusing and thank the reviewer for asking this question.\", \"in_deep_reinforcement_learning_at_the_edge_of_the_statistical_precipice\": \"https://proceedings.neurips.cc/paper/2021/hash/f514cec81cb148559cf475e7426eed5e-Abstract.html , the authors write:\\n\\n\\u201cOnly reporting point estimates obscures nuances in comparisons [85] and can erroneously lead the field to conclude which methods are state-of-the-art [63, 84], ensuing wasted effort when applied in practice [108].\\u201d\\n\\nTherefore, the single values of each run cannot be really considered singularly, as the variation and stochasticity of these runs depend on a multitude of factors (e.g., used random seed, luck in exploration, etc.). Hence, whether GIT-STORM consistently outperforms STORM can only be assessed from a statistical perspective. In particular, following \\u201cDeep Reinforcement Learning at the Edge of the Statistical Precipice\\u201d, we use the metrics they propose to meaningfully assess models' capabilities in this field (e.g., mean, median, IQM, optimal GAP, and probability of improvement over other baselines P(GIT-STORM>Y), where Y are all other baselines). A more detailed explanation of these metrics can be found in Appendix J.\\n\\nAccording to these metrics, GIT-STORM consistently outperforms STORM.\\u00a0\\n\\nFinally, not only the single values are not reliable but also assessing why a model performs better than another one cannot be done with confidence or rational arguments. That\\u2019s why rather than writing conjectures on why GIT-STORM outperforms STORM in single environments, we base our arguments on the statistical metrics.\\u00a0\\n\\nWe hope this addresses your question, but we are more than happy to keep discussing should there be any other concerns or\\u00a0doubts.\"}", "{\"comment\": \"I appreciate the patient response from the authors. I have tried my best to understand, but I am still not fully convinced by the soundness of the MaskGIT head design. Given the overall quality of this paper, I decided to update my score to 5 and confidence to 4.\\n\\nI remain open to exchanging ideas with AC and other reviewers in the next stage of the reviewing process.\"}", "{\"title\": \"Q&A 3\", \"comment\": \"Q3: The experimental results in Atari 100K only demonstrate marginal improvement. The gain over STORM seems to primarily originate from the gopher task alone, which contains inconsistent results, as detailed in the questions section.\", \"a3\": \"We understand the confusion that the results table may bring, and we hope to address any doubts you may have regarding this question here.\", \"in_deep_reinforcement_learning_at_the_edge_of_the_statistical_precipice\": \"https://proceedings.neurips.cc/paper/2021/hash/f514cec81cb148559cf475e7426eed5e-Abstract.html , the authors write:\\n\\u201cOnly reporting point estimates obscures nuances in comparisons [85] and can erroneously lead the field to conclude which methods are state-of-the-art [63, 84], ensuing wasted effort when applied in practice [108].\\u201d\\n\\nTherefore, the results and comparisons within single environments cannot be considered to define which model is state-of-the-art. Instead, Optimality GAP, Interquantile Mean and Probability of improvement are statistically significant metrics and are the ones to be considered. We realized there was a bug in the plotting function we used and some seeds were not considered. We fixed it and now the inconsistencies are not there anymore. \\n\\n\\n\\n[63] Jimmy Lin, Daniel Campos, Nick Craswell, Bhaskar Mitra, and Emine Yilmaz. Significant improvements over the state of the art? a case study of the ms marco document ranking leaderboard. arXiv preprint arXiv:2102.12887, 2021.\\n[84] Nils Reimers and Iryna Gurevych. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338\\u2013348, 2017.\\n[85] Samuel Ritter, David GT Barrett, Adam Santoro, and Matt M Botvinick. Cognitive psychology for deed neural networks: A shape bias case study. In International conference on machine learning, 2017.\\n[108] Ga\\u00ebl Varoquaux and Veronika Cheplygina. How i failed machine learning in medical imaging\\u2013shortcomings and recommendations. arXiv preprint arXiv:2103.10292, 2021.\"}", "{\"title\": \"Q&A 1\", \"comment\": \"We thank the reviewer for such insightful and important questions. We are really happy to answer and fulfill any remaining doubts. Here we list aggregated questions and the related answers.\", \"q1\": \"The paper does not fully explain the conditions under which GIT-STORM\\u2019s improvements are more marginal, suggesting a need for clearer insights into the impact of individual architectural components. Most importantly, the modifications from STORM to GIT-STORM are extensive, involving MaskGIT, state mixer, policy adjustments from DreamerV3, and an observation module from STORM. The compounded modifications make it difficult to discern the exact contribution of each component to the reported performance improvements. A more focused ablation study could be required to isolate the impact of each modification.\", \"a1\": \"We agree with the reviewer; we are working on an ablation study that assesses the contribution of the following components:\", \"maskgit_head\": \"We compare the MaskGIT head to the MLP head.\\nLogits computation using the dot product between $\\\\xi_t$ and MaskGit embeddings: We compare the GIT-STORM setup against using the MLP head that takes as input eps_t and returns logits, validating how using the dot product between eps_t and MaskGit embeddings impacts the performances.\", \"mixed_states\": \"we compare using a state mixer against using the unmixed states as world model autoregressive transformer input. As a result, we can validate to what extent the capacity introduced by the state mixer affects GIT-STORM performances.\\n\\n\\nFollowing your suggestions, we believe that these are the key components to analyze to understand GIT-STORM capabilities and the exact contribution of each component to the reported performance improvements.\\u00a0\\n\\n\\n\\nWe will validate the ablation on 3 Atari games (Hero, Freeway, and Boxing) and 3 DMC environments (Walker Walk, Walker Run, and Quadruped Run) over 3 seeds. We will send the ablations as soon as possible.\"}", "{\"title\": \"Q&A 3\", \"comment\": \"Q3: The paper claims state-of-the-art results for GIT-STORM on select environments, yet Table 6 seems to indicate that DrQ-v2 outperforms GIT-STORM on two environments (where the authors claim they are better?). Regarding the reported state-of-the-art claim, Table 6 suggests that DrQ-v2 outperforms GIT-STORM in some highlighted environments. Could the authors comment on why they claim GIT-STORM provides SOTA results on these? It is not the case, right?\", \"a3\": \"We understand the confusion that the results table may bring, and we hope to address any doubts you may have regarding this question here.\", \"in_deep_reinforcement_learning_at_the_edge_of_the_statistical_precipice\": \"https://proceedings.neurips.cc/paper/2021/hash/f514cec81cb148559cf475e7426eed5e-Abstract.html , the authors write:\\n\\n\\u201cOnly reporting point estimates obscures nuances in comparisons [85] and can erroneously lead the field to conclude which methods are state-of-the-art [63, 84], ensuing wasted effort when applied in practice [108].\\u201d\\n\\nMoreover, also because of these reasons, within this field the results within the 5% of variation of state-of-the-art (SOTA) values are also considered SOTA performances [1].\\n\\n\\n\\nTherefore, the results and comparisons within single environments cannot be considered to define which model is state-of-the-art, and following the community standards, we defined as state-of-the-art the results where GIT-STORM performs above every other model, or within the 5% of variation from SOTA values. That\\u2019s why we state:\\u00a0\\n\\n\\u201cGIT-STORM presents state-of-the-art scores on two environments, Walker Stand, and Quadruped Run.\\u201d\\n\\n\\u00a0\\n\\n[1] Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2023.\\n\\n[63] Jimmy Lin, Daniel Campos, Nick Craswell, Bhaskar Mitra, and Emine Yilmaz. Significant improvements over the state of the art? a case study of the ms marco document ranking leaderboard. arXiv preprint arXiv:2102.12887, 2021.\\n\\n[84] Nils Reimers and Iryna Gurevych. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338\\u2013348, 2017.\\n\\n[85] Samuel Ritter, David GT Barrett, Adam Santoro, and Matt M Botvinick. Cognitive psychology for deed neural networks: A shape bias case study. In International conference on machine learning, 2017.\\n\\n[108] Ga\\u00ebl Varoquaux and Veronika Cheplygina. How i failed machine learning in medical imaging\\u2013shortcomings and recommendations. arXiv preprint arXiv:2103.10292, 2021.\"}", "{\"title\": \"Summary of rebuttal for Area Chairs\", \"comment\": \"Dear Area Chairs,\\n\\nWe appreciate the reviewers' thoughtful evaluations of our paper and have carefully addressed their concerns. Below, we provide a unified discussion of the main issues raised and our responses, followed by the outcomes for each reviewer.\\n\\n---\", \"main_concerns_and_our_responses\": [\"Clarification of Contributions and Claims:\", \"Concern: Some reviewers noted that our claim of being the first to apply transformer-based world models to continuous action environments was overstated due to existing work like TransDreamer.\", \"Response: We acknowledged this oversight and revised our claim to specify that we are the first to apply categorical transformer-based world models to continuous action environments. We highlighted the distinction between our use of discrete (categorical) representations and prior work's continuous representations, emphasizing the unique challenges and contributions of our approach.\", \"State Mixer Design and Analysis:\", \"Concern: The design and effectiveness of the state mixer were not adequately explained or empirically validated.\", \"Response: We expanded the description of the state mixer in the paper, detailing how it integrates continuous actions with discrete latent states. We conducted an ablation study comparing different state mixer designs, demonstrating the effectiveness of our approach through empirical results on both Atari and DeepMind Control (DMC) benchmarks. Moreover, we compared the used inductive bias against other SOTA approaches, such as the one used in iVideoGPT, and showed that the approach outperforms all included baselines.\", \"Use and Mechanism of the MaskGIT Head:\", \"Concern: Questions were raised about the motivation for using the MaskGIT head, its impact on hallucinations in world models, and whether improvements were due to increased parameters.\", \"Response: We clarified that hallucinations in autoregressive models can lead to unrealistic predictions, and the MaskGIT head mitigates this by improving sequence modeling capabilities. We provided evidence that the MaskGIT head actually has fewer parameters than the MLP head, indicating that performance gains are due to better inductive biases rather than parameter counts. We also detailed and explored the differences between latent variables generated by the MaskGIT and MLP heads, providing graphs of the two distributions and explicitly showing that the proposed MaskGIT is more confident when predicting state transitions.\", \"Performance Evaluation and Statistical Analysis:\", \"Concern: Some reviewers felt that GIT-STORM's improvements over STORM were marginal and questioned inconsistencies in reported results.\", \"Response: We explained that individual environment results can vary due to stochasticity inherent in reinforcement learning tasks. To provide a reliable assessment, we emphasized statistical metrics like mean, median, interquartile mean (IQM), and probability of improvement, following established guidelines in the field. We clarified any discrepancies and updated figures for consistency.\", \"Clarity and Presentation Enhancements:\", \"Concern: Suggestions were made to improve model descriptions, clarify figures, and make the background section more concise.\", \"Response: We updated Figure 1 to reflect the model architecture accurately and added detailed explanations of our design choices. We revised Section 2.1 to focus more directly on content relevant to our contributions, enhancing overall clarity.\", \"---\"], \"per_reviewer_outcomes\": [\"Reviewer **Lr34**:\", \"Outcome: Although we fulfilled all questions and weaknesses, the reviewer did not reply and actively engage in the review.\", \"Reviewer **JuLK**:\", \"Outcome: Raised their score from **3** to **5**. The reviewer expressed openness to further discussion during the next review stage.\", \"Reviewer **aqah**:\", \"Outcome: Raised score from **5** to **6**\", \"Reviewer **zpZw**:\", \"Outcome: Raised score from **5** to **6**\", \"---\", \"We believe that our revisions have effectively addressed the reviewers' concerns and have strengthened the paper's clarity and contribution to the field. Our work advances the integration of continuous actions with discrete latent states in transformer-based world models, providing valuable insights for future research. Such contribution is extremely relevant because almost every existent foundational model is based on a transformer architecture and trained as a sequence modeling architecture. Therefore, in this paper, we try to show that this approach goes beyond the RL community, beyond the video prediction community, for which we report several results. In this paper, we show that Masked Generative Prior improves the sequence modeling capabilities of autoregressive transformers, which touches almost every sub-field of Generative AI.\", \"We kindly ask the area chairs to consider our responses and the improvements made.\", \"Thank you for your consideration!\"]}", "{\"title\": \"Q&A 4\", \"comment\": \"Q4: Why does it fail to close the performance gap with DreamerV3 in environments beyond Atari 100k. What is the rationale for improving STORM over directly utilizing DreamerV3, which appears to perform better in many scenarios? Or put differently: why would one care to improve STORM with the proposed modifications when there is DreamerV3 and I could just use it or improve over DreamerV3?\", \"a4\": \"We thank the reviewer for such a fundamental\\u00a0and important question.\\n\\nThe rationale for improving STORM over directly utilizing DreamerV3 comes from the advantages of transformer-based world models over RSSMs, like the one used in Dreamer-based architectures. Indeed, as stated in the Introduction section:\\u00a0\\n\\n\\u201cAs RNNs impede parallelised computing due to its recurrent nature, some works (Micheli et al., 2022; Robine et al., 2023; Zhang et al., 2023) have incorporated autoregressive transformer architectures (Vaswani et al., 2017), which have been shown to be effective across various domains, such as language (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020; Raffel et al., 2020), images (Dosovitskiy et al., 2021; He et al., 2022; Liu et al., 2023), and offline RL (Janner et al., 2021; Chen et al., 2021). For instance, IRIS (Micheli et al., 2022) utilises discrete autoencoders (Oord et al., 2017) to map raw pixels into a smaller set of image tokens to be used as the input to the world model, achieving super-human performance in ten different environments of the Atari 100k benchmark (Kaiser et al., 2019).\\u201d\\n\\n\\n\\nExpanding this paragraph, transformer-based world models are very promising considering that dynamic state transitions can be performed in parallel. Indeed, iVideoGPT [2], a transformer-based world model, was the first-ever world model trained on the Open X-Embodiment dataset [3], an extremely big robotics dataset built from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160266 tasks).\\u00a0\\n\\nMoreover, iVideoGPT authors state:\\n\\n\\u201ciVideoGPT is scalable for action-free video prediction across a mixture of over one million robotic and human manipulation trajectories.\\u201d\\n\\nThis sentence, together with the whole introduction and methodology sections, shows how using a scalable model (e.g., transformers) to learn a world model allows us to tackle way harder and more realistic environments than game environments.\\u00a0\\n\\nTo sum up and answer your question explicitly, pursuing research within the domain of transformer-based transformers rather than using DreamerV3-based models that rely on RSSMs will eventually allow the community to build scalable models that are able to handle real-world environments\\u2014the ultimate goal of this community.\\u00a0\\n\\nFinally, we would like to add that the goal of the paper is to inform the community about the fact that Masked generative priors (e.g., MaskGIT priors) improve the sequence modelling capabilities of transformer-based world models. This, although tested on simpler environments, is something that definitely is worth exploring in more realistic environments and architectures, such as iVideoGPT, which would be a perfect fit for this method. If we had more time and world model weights were available, we would have validated the usage of MaskGIT prior to it as well.\\u00a0\\n\\n\\n\\n[2] Jialong Wu and Shaofeng Yin and Ningya Feng and Xu He and Dong Li and Jianye Hao and Mingsheng Long. iVideoGPT: Interactive VideoGPTs are Scalable World Models, 2024, https://arxiv.org/abs/2405.15223.\\n\\n[3] Open X-Embodiment Collaboration et al. Open X-Embodiment:Robotic Learning Datasets on RT-X Models, 2024, https://robotics-transformer-x.github.io/.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Q&A 5 and 6\", \"comment\": \"Q5: Freeway is a hard exploration environment, as the agent has to repeat the up operation many times to get a first reward, which is a rare event for a random policy. Without the first reward, the value space is all zero and the policy would be further optimized as a uniform random policy. STORM, IRIS, and DIAMOND have different tricks that can mitigate such an issue. But what is the underlying reason for GIT-STORM to reach a non-zero result? I think this is not related to the improved decoding or world modelling quality since DreamerV3 and STORM (w/o traj) could also produce a nearly perfect reconstruction and prediction on Freeway.\", \"a5\": \"We thank the reviewer for such an insightful question. Although it is true that exploration strategy plays a big role in this case, within the scope of this paper we did not consider this aspect and focused instead on the representation learning perspective of the world model. Indeed, in STORM [2], Fig.4 shows that using different policy input spaces significantly affects the learning behavior.\\n\\nThis result suggests that besides the exploration strategy, the policy input space plays a role when it comes to policy learning behavior. Therefore, we believe that the GIT-STORM policy input space allows the policy to learn effectively a meaningful policy in Freeway.\", \"q6\": \"For the Quadruped Run in Figure 6, I wonder if it's too small (compared to Figure 4 in DreamerV3).\\n\\nDreamerV3 upsampled the output images for visualization purposes, unfortunately, due to time constraints we could not do the same. We will try to update the image before the end of the review period, for now we removed the sequence.\"}", "{\"summary\": \"The paper introduces GIT-STORM, which incorporates three modifications to the base algorithm STORM: a MaskGIT prior that replaces the MLP dynamics head, a draft-and-revise decoding scheme for enhanced consistency, and a state mixer for continuous action environments. Experimental results demonstrate that GIT-STORM surpasses STORM on both Atari 100K and DMC benchmarks. Video prediction results indicate that this improvement is attributed to more accurate representations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation of incorporating a MaskGIT prior into the STORM architecture is clear.\", \"The proposed method is straightforward and easy to reproduce.\", \"MaskGIT can effectively improve the video prediction quality of STORM, indicating applicability of GIT-STORM to more complicated tasks.\"], \"weaknesses\": [\"The paper contains a misstatement in its contributions. The authors claim that they \\\"apply transformer-based world models to continuous action environments for the first time\\\". This claim is inaccurate, as TransDreamer[1] can also be applied to continuous action environments. The authors are evidently aware of this paper, given that they have cited it in this work.\", \"The state-mixer design is not properly addressed. If the authors claim this part of their contribution, they should either elaborate on the design, or provide empirical results to show the superiority of this method. Based on the overlapping tasks, TransDreamer appears to have better performance than GIT-STORM+state-mixer on the continuous control benchmark DMC.\", \"The experimental results in Atari 100K only demonstrate marginal improvement. The gain over STORM seems to primarily originate from the gopher task alone, which contains inconsistent results, as detailed in the questions section.\", \"[1] Chen et al. TransDreamer: Reinforcement Learning with Transformer World Models.\"], \"questions\": [\"Results on the Freeway task have very high variance according to Figure 10. How many out of the five runs does GIT-STORM actually achieve non-zero performance?\", \"The most challenging aspect of learning the Freeway task is obtaining the first successful trajectory, which I believe is more related to the exploration strategy than state predictions, given the sparse rewards. How does GIT-STORM assist the agent in exploring more efficiently? Is this strategy stable, or are the successful trajectories obtained by random seeds?\", \"Why would the pendulum swingup task fail for both STORM and GIT-STORM? DreamerV2, DreamerV3 and TransDreamer can learn this task fairly easily.\", \"The experiment results in Table 5 and Figure 10 appear inconsistent. For instance, the Gopher score reported in Table 5 is 8562, but the last point in Figure 10 shows a performance of around 2500. Do these two results use different metrics?\", \"Could you add the learning curves of STORM or DreamerV3 to Figure 10 for a better comparison, considering that you have reproduced these results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for providing responses to my raised questions and the additional ablation results. I have increased my rating to 6.\"}", "{\"title\": \"Q&A 2\", \"comment\": \"Q2: There should be some further investigation into the mechanism of the MaskGIT head. Such as:\\n(a) What's the difference between the latent variables (or distributions) generated with the MLP head and MaskGIT head?\\n(b) This MaskGIT head looks like a skip/residual connection from to, would this reduce the KL divergence in the training or imagination?\\n(c) These are sample questions. An investigation like this would improve the soundness and contribution of this paper.\", \"a2\": \"We greatly appreciate the reviewer\\u2019s insightful suggestion regarding further investigation into the mechanism of the MaskGIT head. Below, we address the proposed questions and provide additional analysis to strengthen the paper\\u2019s contribution:\\n\\n(a) Difference between latent variables (or distributions) generated by the MLP head and the MaskGIT head\\n\\nThe MaskGIT head fundamentally differs from the MLP head in how it generates token distributions. Figure 17 provides a visualization of the logits distributions produced by the dynamics head in both GIT-STORM and STORM. A closer inspection reveals that both MLP and MaskGIT heads generate mean distributions with two distinct peaks: one near zero, corresponding to tokens that are unlikely to be sampled, and another, smaller peak, which captures the confidence of sampling a given token. This dual-peak structure reflects the model's ability to differentiate between likely and unlikely tokens, offering finer granularity in token sampling decisions. Therefore, the higher the second peak and the broader the distribution's support, the more confident the world model is in sampling tokens for a given dynamics state transition. Consistent with the perplexity values presented in Table 3, GIT-STORM produces more refined probability distributions, enabling it to make predictions with greater confidence compared to STORM.\\n\\n(b) Relationship between the MaskGIT head and KL divergence\\n\\nAs illustrated in Figure 16, GIT-STORM demonstrates consistently lower KL divergence across three environments (Hero, Boxing, and Freeway) compared to STORM. This reduction in KL divergence can be attributed to the MaskGIT head, which resembles a residual connection by leveraging token embeddings to compute logits. This approach ensures that token distributions are closer to the true dynamics of the environment, leading to more efficient encoding of state transitions. The residual-like design minimizes redundant learning, resulting in reduced KL divergence.\\n\\nNotably, the most significant improvement is observed in the Boxing environment, where GIT-STORM exhibits a marked advantage. This finding aligns with the hypothesis that the MaskGIT head facilitates better representation learning for complex dynamics.\\n\\n(c) Contribution of this investigation to the paper\\u2019s soundness\\n\\nTo further elucidate the contribution of the MaskGIT head, we have incorporated both KL divergence comparisons (Figure 16) and visualizations of the dynamics head\\u2019s output distributions (Figure 17). These analyses collectively provide insights into how the MaskGIT head improves the dynamics state transition modeling.\\n\\nIn summary, the MaskGIT head not only enhances the accuracy of token distributions but also promotes better sequence modeling by reducing KL divergence. This investigation underscores its critical role in achieving the improved performance of GIT-STORM and enriches the paper\\u2019s contribution by offering a deeper understanding of its design choices.\"}", "{\"comment\": [\"Q1 (continued): As stated by the authors, during training, the MaskGIT input is a masked version of $z_t$. Based on this statement, I want the authors to provide direct answers to the following subquestion:\", \"SQ1: In Line 3 of Algo 1, should $z^0$ be a draft of $z_{t+1}$?\", \"SQ2: If the above is true, then in Line 3&4 of Algo 2 (called from Line 6 of Algo 1 with $\\\\gamma=1$), is the input of the MaskGIT's BidirectionalTransformer a masked version of $z^0$, namely a masked version of $z_{t+1}$?\", \"SQ3: If the above is true, then we face a gap between training and sampling, because for the same desired outputs $z_{t+1}$, our inputs is masked $z_t$ during training but masked $z_{t+1}$ during sampling.\"]}", "{\"title\": \"Answers\", \"comment\": \"We thank the reviewer for the feedback, we are happy to answer any further questions!\", \"q1\": \"I partially understand that the MaskGit head in your work is used to simulate z transitions. However, I find the sampling of this transition very unclear. Could the author elaborate on which z is actually z_t and which is actually z_{t+1} in Algorithm 1? It seems that we need to input masked z_t to the header in order to output unmasked z_{t+1}, but in Algorithm 1, we are only iteratively decoding a single z.\", \"a1\": \"We now understand the source of confusion and will try to answer as clearly as possible.\\n\\nTo answer your question, we would like to give a high-level overview of the proposed method during training and inference time. Indeed, the Draft-and-Revise decoding scheme (Algorithm 1) is used only during inference (i.e., imagination and test phases).\", \"training_phase\": \"As shown in Fig. 7, latent representations $z_t$ are first extracted from observation $o_t$. Then, $z_t$ is mixed to $a_t$ and given as input to the autoregressive transformer, as shown in Fig. 1. The Transformer output $h_t$ is then given as input to the MaskGIT head, together with a masked version of $z_t$. Finally, the MaskGIT head returns $z_{t+1}$.\\n\\nImagination/Test phase: \\nIn this case, the Draft-and-Revise scheme is used: The Transformer output $h_t$ is given as input to the MaskGIT head, together with an empty version of $z_t$, which only contains masks ( referenced as $z^{\\\\text{empty}}$ in Algorithm 1). \\n\\nWhen it comes to Algorithm 1, we are describing the sampling operation of the MaskGIT head that is, therefore $z_t$ is the $z$ that it takes as input, and $z_{t+1}$ is the one that is outputted. Then, the Draft-and-Revise scheme is executed as described in Algorithm 1, with a first draft phase and N revision phases. In each phase, the MaskGIT head is used to compute either draft or revision logits, the only difference with training time is that this time we do not use $z_t$. Instead, following TECO, we start from a fully masked $z_t$ to avoid the model being biased from the previous state and condition the sampling of the next state dynamics transition only on $h_t$. As a result, once the revision phase has been executed, we get $z_{t+1}$ as output.\", \"going_back_to_your_question\": \"\\\"Could the author elaborate on which z is actually z_t and which is actually z_{t+1} in Algorithm 1?\\\", it should be clear now that $z_t$ is never used and $z_{t+1}$ is the output of Algorithm 1.\\n\\nWe thank the reviewer again for this question, as it made us realize that the algorithm description, which was meant to be as general, is probably unclear as of now. To address this, we revised the algorithm description to be more integrated with the paper and now the usage of the MaskGIT head is expressed explicitly.\", \"q2\": \"Moreover, the dot product operation seems exactly the weight tying strategy widely used in language models and MaskGiT implementations. See the reference links below:\", \"gpt2\": \"https://github.com/huggingface/transformers/issues/1993; https://datascience.stackexchange.com/questions/123149/why-do-gpt-models-use-a-transpose-of-the-embedding-matrix-to-convert-outputs-to\", \"maskgit\": \"https://github.com/dome272/MaskGIT-pytorch/blob/cff485ad3a14b6ed5f3aa966e045ea2bc8c68ad8/bidirectional_transformer.py#L133; https://github.com/valeoai/Maskgit-pytorch/blob/b0b2b3cc11cffd0b159f22dc1c6e73a7e8b53db3/Network/transformer.py#L189\", \"a2\": \"We thank the reviewer for indicating some references about weight tying strategy. We apologize for not noticing before that indeed such an approach has been formalized and used in several known architectures, including MaskGIT and GPT models. We updated the text that describes the dot product operation and added references to the papers you mentioned and the one that formalizes the weight tying strategy.\"}", "{\"title\": \"Q&A 4 and 5\", \"comment\": \"Q4: Results on the Freeway task have very high variance according to Figure 10. How many out of the five runs does GIT-STORM actually achieve non-zero performance?\", \"a4\": \"We thank the reviewer for this question. It is true that the results presents a high-variance, this happens because 3 out of 5 seeds presents non-zero results.\", \"q5\": \"The most challenging aspect of learning the Freeway task is obtaining the first successful trajectory, which I believe is more related to the exploration strategy than state predictions, given the sparse rewards. How does GIT-STORM assist the agent in exploring more efficiently? Is this strategy stable, or are the successful trajectories obtained by random seeds?\", \"a5\": \"We thank the reviewer for such an insightful question. Although it is true that exploration strategy plays a big role in this case, within the scope of this paper we did not consider this aspect and focused instead on the representation learning perspective of the world model. Indeed, in STORM [2], Fig.4 shows that using different policy input spaces affects the learning behavior.\\n\\nThis result suggests that besides the learning strategy, the policy input space plays a role when it comes to policy learning behavior. Therefore, we can say that GIT-STORM policy input space allows the policy to learn effectively a meaningful policy.\"}", "{\"title\": \"Q&A 3\", \"comment\": \"Q3: Do you mean that due to improved computational efficiency, you can set a larger update-to-data ratio (related to Update agent every env step in Table 8) thus improve the sample efficiency of model-based RL? Can you provide this hyperparameter of GIT-STORM and baselines for a direct comparison? For example, is the sampling efficiency improved because GIT-STORM has a larger \\\"Update agent every env step\\\"? If not, then the KV cache does not seem to contribute to sampling efficiency.\\n\\nA3.0: We thank the reviewer for asking clarification about the previous answer. We would like to answer the question starting from an analysis of computational complexity and FLOPs computation in transformers with and without KV caching. For the sake of this discussion we only focus on the Autoregressive Transformer, the module where KV caching is used. \\n\\n\\nHere, https://r4j4n.github.io/blogs/posts/kv/ , we can find a FLOPs comparison of vanilla Transformer and Transformer with KV Cache, which explicitly compares the FLOPs spent by vanilla Transformers versus the one by KV Cached Transformers. \\n\\nBy following the article, it can be seen that we can express the two FLOPs as:\\n\\nVanilla Transformer $FLOPS = n\\u00d7(24bsh^2 +4bs^2h)+2bshV$\\n\\nKV Cache Transformer $FLOPS = n\\u00d7(24bh^2 +4bh + 4b KV_{length})+2bhV$\\n\\nwhere, \\n\\nb = Batch Size = 256; s = Sequence Length = 16; h = Hidden Dimension = 128; n = number of layers = 2; V = vocabulary dimension = 32; KV_length = Sequence Length = 16\", \"by_substituting_our_hyperparameters_into_the_equations_we_can_see_that\": \"Vanilla Transformer FLOPs = 3.254.779.904 FLOPs > KV Cache Transformer FLOPs = 2.037.186.656 FLOPs\", \"and_that\": \"KV Cache Transformer FLOPs = 0,625 Vanilla Transformer FLOPs. \\n\\n\\nFrom these results, we can see that the autoregressive transformer with KV cache is certainly more efficient than the vanilla one, which does not use KV caching. \\n\\nEquipped with these new results, we now answer explicitly your questions: \\nQ3.1: Do you mean that due to improved computational efficiency, you can set a larger update-to-data ratio (related to Update agent every env step in Table 8) thus improve the sample efficiency of model-based RL?\\n\\nA3.1: The Update Agent Every env step hyperparameter measures how many env steps need to be executed before updating the agent. Sample efficiency is a measure related to the number of imagined trajectories that the model can generate at a given time. Therefore, Sample Efficiency and the Update Agent Every env step hyperparameter are not directly related, in the sense that modifying the hyperparameter will not affect the sample efficiency of the model. However, you are correct when saying that due to improved computational efficiency, we can set a larger update-to-data ratio and improve the sample efficiency of the model. Where with a larger update-to-data ratio we mean that less time is required to generate (or imagine) the data (batch of imagined trajectories) required to make an update of the agent. \\n\\n \\nQ3.2: Can you provide this hyperparameter of GIT-STORM and baselines for a direct comparison? \\n\\nA3.2: Besides STORM for which the same value is used, the mentioned hyperparameter is not defined consistently across the other different baselines. For instance, DreamerV3 uses a hyperparameter called train ratio, which is used as follows:\\n\\nargs.batch_size = 16\\nargs.batch_length = 65\\nargs.replay_context = 1\\nargs.train_ratio = 32\\n\\nbatch_steps = args.batch_size * (args.batch_length - args.replay_context) = 1/4\\nshould_train = embodied.when.Ratio(args.train_ratio / batch_steps) \\n\\nwhich means that the model is updated every 4 interactions with the environment. \\n\\nSince studying the effect of this hyperparameter would have not added any evidence supporting our claims, we did not do an ablation study or grid search of this hyperparameter and instead, we consistently used the same value of STORM (Update agent every env step =1). \\n\\n\\nQ3.3: For example, is the sampling efficiency improved because GIT-STORM has a larger \\\"Update agent every env step\\\"?If not, then the KV cache does not seem to contribute to sampling efficiency.\\n\\nA3.3:\\n\\nWe clarified that the Update agent every env step hyperparameter is not related to the sample efficiency of the method. However, as shown in A3.0, using the KV caching increases the computational efficiency of the autoregressive transformer that is used to generate the imagined trajectories, and therefore, the more efficient the autoregressive transformer, the quicker the generation (or sampling) of imagined trajectories.\\n\\n\\n\\nWe hope these answers address your questions. Should you have any remaining doubts, we are happy to answer!\"}", "{\"comment\": [\"I am grateful to the authors for the detailed responses they have provided. Some of my concerns have been addressed perfectly, but I still have the following questions that need clarification.\", \"A1: I partially understand that the MaskGit head in your work is used to simulate z transitions. However, I find the sampling of this transition very unclear. Could the author elaborate on which z is actually z_t and which is actually z_{t+1} in Algorithm 1? It seems that we need to input masked z_t to the header in order to output unmasked z_{t+1}, but in Algorithm 1, we are only iteratively decoding a single z.\", \"A1: Moreover, the dot product operation seems exactly the weight tying strategy widely used in language models and MaskGiT implementations. See the reference links below:\", \"GPT2: https://github.com/huggingface/transformers/issues/1993; https://datascience.stackexchange.com/questions/123149/why-do-gpt-models-use-a-transpose-of-the-embedding-matrix-to-convert-outputs-to\", \"MaskGIT: https://github.com/dome272/MaskGIT-pytorch/blob/cff485ad3a14b6ed5f3aa966e045ea2bc8c68ad8/bidirectional_transformer.py#L133; https://github.com/valeoai/Maskgit-pytorch/blob/b0b2b3cc11cffd0b159f22dc1c6e73a7e8b53db3/Network/transformer.py#L189\", \"A4: Do you mean that due to improved computational efficiency, you can set a larger update-to-data ratio (related to Update agent every env step in Table 8) thus improve the sample efficiency of model-based RL? Can you provide this hyperparameter of GIT-STORM and baselines for a direct comparison? For example, is the sampling efficiency improved because GIT-STORM has a larger \\\"Update agent every env step\\\"? If not, then the KV cache does not seem to contribute to sampling efficiency.\"]}", "{\"title\": \"Q&A 3 and 4\", \"comment\": \"Q3: The performance of GIT-STORM on DMC is outperformed by its base method, DreamerV3.\\n\\nWe thank the reviewer for the feedback. We are aware that DreamerV3 beats GIT-STORM on the DMC benchmark. However, we believe that this does not imply that the contribution of our work is meaningless or not good enough. Indeed, the message that we want to bring to our community is that masked generative priors increase the sequence modeling capabilities of transformer-based world models. This is quite clear when looking at the comparison between STORM and GIT-STORM. Moreover, this is also the very first work where transformer-based world models are benchmarked on the whole DMC suite. The main reason is that using continuous actions and discrete representations is not trivial and very difficult to perform meaningfully.\", \"q4\": \"In Line 309, why KV cache can improve sample efficiency? Do you mean computational efficiency?\\n\\nThe KV cache improves computational efficiency by storing key-value pairs from previously computed attention operations, eliminating redundant calculations, and reducing the computational cost of sequence modeling in transformers. In the model-based reinforcement learning context, this enhanced computational efficiency translates directly to improved sample efficiency, as the primary bottleneck in RL training is the number of sampled trajectories required to train policies. By enabling the model to process more trajectories within the same computational budget, the KV cache accelerates training, reduces the need for environment interactions, and facilitates longer sequence modeling without the quadratic cost scaling of traditional attention mechanisms. This dual benefit of faster training and efficient resource utilization makes the KV cache an important component for improving sample efficiency.\"}", "{\"summary\": \"This paper proposes GIT-STORM, which utilizes a MaskGIT model instead of an MLP for the prior head in world models (based on STORM). It also makes minor modifications (a state mixer) to support continuous actions. Experiments are done on Atari100k and DMC benchmarks, considering both policy learning and video prediction performance. GIT-STORM outperforms its base method STORM.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"To my knowledge, MaskGIT models, with their strong expressiveness, are not yet utilized for world models in the MBRL community.\", \"weaknesses\": [\"1. The illustration and descriptions of the model are confusing. Can authors provide more insights for their specific designs?\", \"In Figure 1 (left), it seems that GIT-STORM uses masked $z_t$ as inputs for reconstructing $z_{t+1}$. This is strange since, in the original MaskGIT, we mask and reconstruct masked target tokens. Similarly, I think it is more reasonable to mask $z_{t+1}$ as inputs.\", \"In Figure 1 (left), there is no $\\\\xi_t$ but only $\\\\eta_t$.\", \"Also, the dot product seems to be a commonly used trick that ties weights for embedding and linear layer before Softmax. If so, relevant literature should be cited.\", \"The Draft-and-Revise decoding scheme, if not proposed by this work, should be moved into a preliminary section.\", \"2. The contribution to supporting continuous actions is overclaimed (as 'for the first time'). In fact, concatenating or summating continuous inputs with hidden states is a too straightforward approach in current VLA models (e.g., OpenVLA for inputting continuous visual representations) and action-conditioned video prediction models (e.g., iVideoGPT for inputting continuous actions).\", \"3. The performance of GIT-STORM on DMC is outperformed by its base method, DreamerV3.\"], \"questions\": \"There are also some minor questions:\\n\\n1. In Line 309, why KV cache can improve sample efficiency? Do you mean computational efficiency?\\n2. To my knowledge, perplexity is a metric whose lower values mean better. However, in Table 3, higher perplexity is marked as better.\\n3. In Figure 6, the quadruped agents are too small in the images. This work seems to have used an unusual camera setting for these tasks.\\n\\nIf the authors well address my concerns, I am willing to improve my rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for replying, we are really happy to answer your questions.\", \"q1\": \"I get that $z_t$ is not never used in Algorithm 1. But let's look at line 6 of Algorithm 1. It seems that the output (corresponding to $z_{t+1}$) is also used as input to MaskGIT for multiple iterations. This creates a gap between training and imagining/testing, because during training, the MaskGIT input is $z_t$.\", \"a1\": \"We understand that this concept may be a bit confusing, so we try to expand it a bit more here. Let's break down Training and Imagination/Test; during training, the MaskGIT input is a **masked** version of $z_t$ and not the full $z_t$. Since the masking operation is done randomly, we can basically rewrite the step as in line 6 of Algorithm 1, which not only takes z as input but also $\\\\Pi$ and h_t. Using different iterations does not create a gap between training and imagination\\\\test, because essentially we are doing the same operation N times, where the only things that change are z and $\\\\Pi$. But this is not different from what happens during training, since per every step the MaskGIT head takes a different z which is randomly masked (e.g., $\\\\Pi$).\", \"q2\": \"When we talk about a higher sample efficiency of the MBRL algorithm, we mean that the algorithm can achieve higher task performance with the same or fewer environment steps (rather than training time; see https://ai.stackexchange.com/a/5295). Therefore, the author's statement that \\\"quicker the generation (or sampling) of imagined trajectories and, hence, the higher the sample efficiency\\\" is not true, since the generation speed is independent of the task performance for the same number of environment steps.\", \"a2\": \"We thank the reviewer for such an insightful question. The StackExchange website you shared is related to the paper called \\\"Sample Efficient Actor-Critic Witt Experience Replay. https://arxiv.org/pdf/1611.01224 .\", \"the_paper_states_the_following\": \"\\\"In particular, every time an agent acts upon the environment, an expensive simulation\\nstep is conducted. Thus to reduce the cost of simulation, we need to reduce the number of simulation\\nsteps (i.e. samples of the environment). This need for sample efficiency is even more compelling\\nwhen agents are deployed in the real world.\\\". We agree that this sentence is in accordance with what you wrote as well - \\\"with sample efficiency, we mean that the algorithm can achieve higher task performance with the same or fewer environment steps\\\". \\n\\nWe apologize for making a wrong statement, the confusion came from the fact that all models use the same number of environment steps, for instance, Atari100k is called like this because every given algorithm does 100k interactions with a given Atari environment. Considering that our model performs the best, according to many statistical metrics (i.e., Probability of improvement, IQM, mean, Optimality GAP), and that every model uses the same number of environment steps, according to the definition you gave to sample efficiency, our model should be the most sample efficient. However, within the context of the question you asked before - whether KV caching improves this sample efficiency or not - we agree with you that this is not the case. KV caching improves the computational efficiency of imagined trajectory generation as you suggested. We thank you for your valuable feedback! We corrected the statement we made in the previous answer. \\n\\nPlease let us know if you have any other feedback, they are very useful!\"}", "{\"title\": \"Q&A 1\", \"comment\": \"We thank the reviewer for such insightful and important questions. We are really happy to answer and fulfil any remaining doubts. Here we list aggregated questions and the related answers.\", \"q1\": \"The illustration and descriptions of the model are confusing. Can authors provide more insights for their specific designs?\\nIn Figure 1 (left), it seems that GIT-STORM uses masked as inputs for reconstructing . This is strange since, in the original MaskGIT, we mask and reconstruct masked target tokens. Similarly, I think it is more reasonable to mask as inputs.\\nIn Figure 1 (left), there is no but only .\\nAlso, the dot product seems to be a commonly used trick that ties weights for embedding and linear layer before Softmax. If so, relevant literature should be cited.\\nThe Draft-and-Revise decoding scheme, if not proposed by this work, should be moved into a preliminary section.\", \"a1\": \"We thank the reviewer for such detailed feedback. For our design, we based on the latent MaskGIT implementation proposed in TECO[1]. The original MaskGIT masks and reconstructs target tokens because they do not deal with state transitions but image reconstruction/generation, which does not involve dynamics. In our case, we want to generate tokens of the next time step, or in other words, we want to predict the dynamics state transition of $z_t$. Therefore, as proposed in TECO, in order to predict $z_{t+1}$ we use as input $z_t$, as context, and $h_t$, which is supposed to contain information about the next time step.\\n\\nWe added $\\\\xi_t$ to the model architecture figure. \\n\\nWe could not find any specific relevant model that uses the dot product operation as in GIT-STORM. If there is any paper that is relevant to our work, please give us the references and we will add the citations. \\n\\nWe agree with the reviewer that this should be the case. However, for the sake of readability, we decided to present it in the methodology section and specify that we are not proposing it for the first time.\"}", "{\"summary\": \"The paper, Masked Generative Priors Improve World Models Sequence Modelling Capabilities, introduces GIT-STORM, an extension of the STORM architecture, incorporating MaskGIT as a dynamics prior to enhance sequence modeling in world models. The authors address two main gaps in previous research: the limitation of transformer-based world models in continuous action environments and the inadequacies of prior methods, like STORM, in capturing effective state representations. Through experiments on Atari 100k (discrete) and DeepMind Control Suite (continuous), GIT-STORM demonstrates improvements in RL and video prediction, suggesting that Masked Generative Priors could be a powerful inductive bias for world models, supporting broader applicability across diverse RL tasks and environments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The empirical evaluation spans discrete and continuous action benchmarks, providing a robust assessment of GIT-STORM\\u2019s performance. The reported results demonstrate that GIT-STORM not only improves sample efficiency in RL tasks but also enhances video prediction quality, particularly in the Atari 100k benchmark, aligning well with the study's objectives. Moreover, the paper is well-written with a clear structure, providing a good experience as a reader. Extending the transformer-based world models to continuous action tasks also poses a sufficient novelty and broadens the utility of these models in RL and video prediction applications.\", \"weaknesses\": \"It remains unclear why GIT-STORM does not consistently outperform STORM across all benchmarks or why it fails to close the performance gap with DreamerV3 in environments beyond Atari 100k. The paper does not fully explain the conditions under which GIT-STORM\\u2019s improvements are more marginal, suggesting a need for clearer insights into the impact of individual architectural components.\\n\\nThe paper claims state-of-the-art results for GIT-STORM on select environments, yet Table 6 seems to indicate that DrQ-v2 outperforms GIT-STORM on two environments (where the authors claim they are better?). Clarifying the conditions under which GIT-STORM achieves these results or adjusting the claim would help ensure consistency and accuracy in presenting the model's achievements.\\n\\nThe proposed approach for handling continuous action spaces is promising, yet lacks a comprehensive empirical analysis. Additional studies on more diverse continuous control tasks could provide stronger validation of the state mixer function's effectiveness and the broader applicability of the model in continuous settings. Most importantly, the modifications from STORM to GIT-STORM are extensive, involving MaskGIT, state mixer, policy adjustments from DreamerV3, and an observation module from STORM. The compounded modifications make it difficult to discern the exact contribution of each component to the reported performance improvements. A more focused ablation study could is required to isolate the impact of each modification.\", \"questions\": [\"Could the authors elaborate on why GIT-STORM occasionally does not surpass STORM and the conditions where improvements are only minor? Understanding this would clarify the contextual efficacy of the MaskGIT prior.\", \"Regarding the reported state-of-the-art claim, Table 6 suggests that DrQ-v2 outperforms GIT-STORM in some highlighted environments. Could the authors comment why they claim GIT-STORM provides SOTA results on these? It is not the case, right?\", \"What is the rationale for improving STORM over directly utilizing DreamerV3, which appears to perform better in many scenarios? Or put differently: why would one care to improve STORM with the proposed modifications when there is DreamerV3 and I could just use it or improve over DreamerV3?\", \"I am open to increase my score once there is clarity on these questions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The authors have addressed most of my concerns. Given the limited research on latent space in MBRL and the novel structure of GIT-STORM, I believe this paper should be accepted, and I will increase my score to 6.\\n\\nHowever, since the paper primarily offers an incremental contribution and does not deeply investigate the latent space to provide design guidance for future world models, I cannot justify a score of 8.\\n\\nAdditionally, regarding the Quadruped Run task, the camera configuration could perhaps reference [this implementation](https://github.com/NM512/dreamerv3-torch/blob/main/envs/dmc.py). That said, it\\u2019s not particularly important at this stage.\"}", "{\"summary\": \"This paper proposed to replace the MLP head with MaskGIT prior in STORM to achieve a higher quality of latent generation, and therefore achieve better performance on the Atari100k benchmark.\\nThis paper also bridges the gap of the lack of evaluation of transformer-based world models on continuous control tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper clearly distinguishes itself from previous work, with good comparison and illustration.\\n\\n2. One-hot categorical latent is widely used in recent model-based RL, yet the research on it is insufficient. This paper provides a novel view of it.\\n\\n3. This paper bridges the gap of the lack of evaluation of transformer-based world models on continuous control tasks.\", \"weaknesses\": \"1. The motivation and effect of using MaskGIT head in world models are unclear.\\nIs there any evidence that the world models would have hallucinations, and how could a MaskGIT head mitigate such issues?\\nHow to distinguish if the improved performance (both RL and FVD) comes from more parameters or the MaskGIT prior?\\n\\n There should be some further investigation into the mechanism of the MaskGIT head. Such as:\\n\\n (a) What's the difference between the latent variables (or distributions) generated with the MLP head and MaskGIT head?\\n\\n (b) This MaskGIT head looks like a skip/residual connection from $z_{t}$ to $\\\\hat{z}_{t+1}$, would this reduce the KL divergence in the training or imagination?\\n\\n (c) These are sample questions. An investigation like this would improve the soundness and contribution of this paper.\\n\\n2. Section 2.1 could be more concise, as these are not quite related to the key contributions and are frequently repeated in each of the model-based RL papers.\", \"questions\": \"1. On lines 307-309, I think STORM uses KV caching in both the conditioning phase and the imagination phase, see [here](https://github.com/weipu-zhang/STORM/blob/e0b3fd44320d7e213ec905c673ad3f35b61b89f4/sub_models/world_models.py#L363). The `predict_next()` uses `forward_with_kv_cache()` for decoding.\\n\\n2. Missing comma on line 214?\\n\\n3. What's new in the proposed state mixer compared to the STORM's action mixer?\\n\\n4. `Freeway` is a hard exploration environment, as the agent has to repeat the `up` operation many times to get a first reward, which is a rare event for a random policy. Without the first reward, the value space is all zero and the policy would be further optimized as a uniform random policy. STORM, IRIS, and DIAMOND have different tricks that can mitigate such an issue. But what is the underlying reason for GIT-STORM to reach a non-zero result? I think this is not related to the improved decoding or world modelling quality since DreamerV3 and STORM (w/o traj) could also produce a nearly perfect reconstruction and prediction on `Freeway`.\\n\\n5. For the `Quadruped Run` in Figure 6, I wonder if it's too small (compared to Figure 4 in [DreamerV3](https://arxiv.org/pdf/2301.04104)).\\n\\n6. Lines 529-530, \\\"Replacing...\\\", the order is reversed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Q&A 1\", \"comment\": \"We thank the reviewer for such insightful and important questions. We are really happy to answer and fulfil any remaining doubts. Here we list aggregated questions and the related answers.\", \"q1\": \"The paper contains a misstatement in its contributions. The authors claim that they \\\"apply transformer-based world models to continuous action environments for the first time\\\". This claim is inaccurate, as TransDreamer[1] can also be applied to continuous action environments. The authors are evidently aware of this paper, given that they have cited it in this work.\", \"a1\": \"We thank the reviewer for this insightful feedback. We indeed agree that our statement is too big of a claim considering that TransDreamer [1] shows some results on a subset of DMC environments. However, there are also a couple of reasons that led us to use this statement:\\n\\n1) TransDreamer was never published at a conference, the submission was withdrawn and the authors stated:\\n\\n\\u201cSince submission, we have found a few aspects of the paper we want to improve including some points mentioned by the reviewers. We have decided to withdraw our paper and improve it for a future conference. Thank you to the reviewers who spent the time reading our paper and providing useful feedback.\\u201d\\n\\n2) The source code was never released, which made it difficult to reproduce the results \\n\\n3) There is not a detailed explanation of how latent representations z and actions are combined. Moreover, it is also not clear which encoder was used to extract the representations z. However, by looking at the TransDreamer Loss function, it seems that the latent representations are continuous. \\n\\nThese three points make clear that, although TransDreamer is a transformer-based architecture, it does not use discrete representations (or tokens). \\nUsually, discrete representations (e.g., tokens) are the main reason why it is hard to work with continuous actions when working with transformer-based world models. Moreover, while continuous latent representations may enable an easier integration with continuous actions, they also decrease substantially the sample efficiency of the proposed approach. As a result, they were not able to validate and benchmark the model on many different environments and, instead, they focused only on a subset of DMC and Atari. \\nConsidering also that the premise of using transformer-based architecture is that they will enable a better sample efficiency [2], we believe that since the TransDreamer approach does not use discrete representations, it is out of scope concerning our discussion and proposed approach. However, thanks to your feedback, we now understand that this may be misleading and is not exactly correct. Therefore, we will change the statement from: \\n\\n\\\"apply transformer-based world models to continuous action environments for the first time\\u201d.\", \"to\": \"\\\"apply categorical transformer-based world models to continuous action environments for the first time\\u201d.\\n\\nHighlighting that we are solving the problem related to this subset of models, which however is also the most relevant in our community. Indeed, iVideoGPT [3], a transformer-based discrete world model, was the first-ever world model trained on the Open X-Embodiment dataset [4], an extremely big robotics dataset built from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160266 tasks).\\u00a0\\nMoreover, iVideoGPT authors state:\\n\\u201ciVideoGPT is scalable for action-free video prediction across a mixture of over one million robotic and human manipulation trajectories.\\u201d\\nThis sentence, together with the whole introduction and methodology sections, shows how using a scalable model (e.g., transformers using discrete tokens) to learn a world model allows us to tackle way harder and more realistic environments than game environments.\\u00a0Such scalability has been achieved only using categorical transformer-based world models, which is why we decide to focus on this class of models.\", \"finally_transdreamer_reviewer_auka_stated\": \"\\u201cthe paper just reports 'it works' but lacks meaningful insights or observations that come from utilizing Transformers instead of RNNs.\\u201d, https://openreview.net/forum?id=s3K0arSRl4d\\n\\nWe would like to add that our goal is to inform the community with meaningful insights and observations about the usage of masked generative priors and continuous action integration in discrete transformer-based world models. Indeed, in this paper with try to explicitly explain the effect of these components by showing their effect also on metrics like FVD and perplexity, providing different angles to evaluate how these components improve the sequence modelling capabilities of transformer-based world models. \\n\\n\\n\\n\\n[2] Weipu Zhang et al. STORM: Efficient Stochastic Transformer based World Models for Reinforcement Learning, 2024. https://arxiv.org/abs/2310.09615\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe really appreciate you changed the grade. Please tell us if there is anything or any other point we can expand, to make the explanation more sounding. It would really help a lot to have a 6. If there is anything we can do it to get there, we\\u2019ll do it. \\n\\nThank you so much for your feedback and time! And thank you so much for raising the score!\"}", "{\"title\": \"Q&A 6,7 and 8\", \"comment\": \"Q6: Why would the pendulum swingup task fail for both STORM and GIT-STORM? DreamerV2, DreamerV3 and TransDreamer can learn this task fairly easily.\", \"a6\": \"As described in A1, Transdreamer and DreamerV3 use continuous representations (together with stochastic discrete counterpart), to represent frames. We believe this to be the main difference and reason why they can learn this task fairly easily, while GIT-STORM can\\u2019t. However, we would like to remind the reviewer of all the advantages of using discrete representations instead as described in A1.\", \"q7\": \"The experiment results in Table 5 and Figure 10 appear inconsistent. For instance, the Gopher score reported in Table 5 is 8562, but the last point in Figure 10 shows a performance of around 2500. Do these two results use different metrics?\", \"a7\": \"We thank the reviewer for such detail-oriented feedback. While Table 5 shows the evaluation results, Figure 10 shows the training profiles. Moreover, the profiles were plotted using 3 different seeds, while Table 5 was made using 5 different seeds. We adjusted the inconsistency and updated the plot, which now is much more coherent.\", \"q8\": \"Could you add the learning curves of STORM or DreamerV3 to Figure 10 for a better comparison, considering that you have reproduced these results?\", \"a8\": \"We thank the reviewer for the feedback. We added the STORM learning curve.\"}", "{\"title\": \"Q&A 2\", \"comment\": \"Q2: The state-mixer design is not properly addressed. If the authors claim this part of their contribution, they should either elaborate on the design, or provide empirical results to show the superiority of this method. Based on the overlapping tasks, TransDreamer appears to have better performance than GIT-STORM+state-mixer on the continuous control benchmark DMC.\", \"a2\": \"We thank the reviewer for the feedback. Although we do not use a different state mixer design, as you suggest, in Appendix E we show an ablation comparing three different state mixer designs. Particularly, we compare the original design (concatenation of z and a) against adding an attention mechanism after concatenation or just computing cross attention between z and a, to evaluate different inductive biases to design an optimal state mixer. Figure 13, shows that the original design is the one that works better, which is why we sticked to it.\\n\\nAlthough we left as future work a more extensive exploration of state mixer designs, this experiment let us conclude that using a very simple inductive bias is the most effective solution because when working with world models latent spaces, by using a more elaborate inductive bias, we may introduce an implicit constraint that ultimately deteriorates the representation space and, therefore, the downstream results (e.g., scores). \\n\\nMoreover, we added an ablation of the State Mixer to assess how the action mixer contributes to the presented performances, comparing a setting with unmixed states (which uses concatenated states and actions but does not use a mixer) and iVideoGPT action embedding approach. \\nFigure 14 demonstrates that the State Mixer consistently outperforms the considered baselines. Interestingly, under the given setup, the iVideoGPT approach fails to learn meaningful policies. We hypothesize that this limitation arises from the scale of the training procedure and considered environments. Specifically, iVideoGPT is designed to leverage much larger datasets, enabling it to learn robust representations.\\n\\nMoreover, we observe that bypassing the State Mixer by directly concatenating and feeding state and action embeddings into the transformer allows the model to learn policies that are meaningful but perform suboptimally compared to the State Mixer-based approach. This finding highlights the effectiveness of the State Mixer in extracting and processing state-action representations crucial for learning optimal policies.\\n\\nFinally, for the same reasons explained in A1, we do not include TransDreamer in the benchmark. \\n\\nWe are more than happy to answer and address any other questions if you have any remaining doubts or concerns.\"}", "{\"title\": \"Ablations results - submission update\", \"comment\": \"We thank the reviewer again for suggesting us to analyze the ablations of the proposed model. We updated the submission and you can now find the ablations in Section E.\\n\\nFigure 12 illustrates an ablation study on three Atari games (Hero, Freeway, and Boxing) and three DMC environments (Walker Walk, Walker Run, and Quadruped Run). Across both sets of environments, the removal of the MaskGIT head consistently results in poorer downstream performance (e.g., lower scores). Additionally, leveraging the dot product between $\\\\xi_t$ and MaskGIT embeddings has a substantial impact in environments such as Freeway, Walker Walk, and Quadruped Run. However, its influence appears negligible in other environments like Hero and Walker Run, suggesting that its efficacy may be context-dependent.\\n\\nPlease let us know if you have any remaining doubts or concerns.\"}" ] }
2fojNANZSv
Mixture of In-Context Prompters for Tabular PFNs
[ "Derek Qiang Xu", "F Olcay Cirit", "Reza Asadi", "Yizhou Sun", "Wei Wang" ]
Recent benchmarks find In-Context Learning (ICL) outperforms both deep learning and tree-based algorithms on small tabular datasets. However, on larger datasets, ICL for tabular learning suffers in both efficiency and effectiveness. In terms of efficiency, transformers incur linear space and quadratic time complexity w.r.t. context size. In terms of effectiveness, contexts at inference encounter distribution shift compared to contexts from pretraining. We propose MixturePFN, which extends Sparse Mixture of Experts to the state-of-the-art ICL for tabular learning model. Specifically, MixturePFN finetunes a specialized ICL expert on each cluster of tabular data and routes new test samples to appropriate experts at inference. MixturePFN supports constant-size contexts by splitting large training datasets into more manageable clusters. MixturePFN addresses distribution shift by finetuning an expert on each training dataset cluster via bootstrapping. Extensive experimental results shows MixturePFN outperforms 19 baselines both in mean rank and as the Condorcet winner across 36 diverse tabular datasets under both accuracy and F1 score with statistical significance.
[ "Prior-Fitted Networks", "Tabular Learning", "Sparse Mixture of Experts." ]
Accept (Poster)
https://openreview.net/pdf?id=2fojNANZSv
https://openreview.net/forum?id=2fojNANZSv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yoYAWN5IFN", "wfegIQtaJv", "rQCmvUT1aA", "oDioymEWou", "dpFfU6KFWA", "Wu6C732YQy", "WW5lxpMmq0", "O5J52NMZcD", "NKcIW6C5DI", "K64rrirEWw", "K5fEEmqFoD", "FM5BliLuLQ", "BoKS5116k4", "BcTK2KJss2", "AwcIR8IHl1", "8prgMGete0", "6l58HI2ELV", "3Maie0LYwi" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1732223233505, 1730102372314, 1733197337841, 1733110833126, 1730681031935, 1732287252973, 1734772650531, 1732223206681, 1730687745477, 1733188568033, 1732247005939, 1732687315113, 1732463254162, 1732223103045, 1732651136487, 1732223035473, 1737524154815, 1732653106724 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Submission11928/Reviewer_X42j" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Submission11928/Reviewer_DZw6" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Submission11928/Area_Chair_J1ho" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Submission11928/Reviewer_5Xhm" ], [ "ICLR.cc/2025/Conference/Submission11928/Reviewer_5Xhm" ], [ "ICLR.cc/2025/Conference/Submission11928/Reviewer_X42j" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Submission11928/Area_Chair_J1ho" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Submission11928/Reviewer_DZw6" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11928/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your review!\", \"comment\": \"W1 + Q1. Regarding LoCALPFN, please refer to the **[LoCALPFN is a parallel work]** and\\n**[Comparison with LoCALPFN]** common questions.\", \"we_highlight\": \"(1) LoCALPFN\\u2019s NeurIPS paper **was accepted after** the ICLR submission deadline, (2) MixturePFN was **put on arXiv before** LoCALPFN, and (3) after discussing with the authors of LoCALPFN, **the authors of LoCALPFN agree with us** that MixturePFN and LoCALPFN came to this conclusion contemporaneously.\\n\\nAs requested, we provide empirical comparison with LoCALPFN (i.e. KNN prompting + finetuning). We show that by using more experts, MixturePFN substantially improves LoCALPFN\\u2019s performance. Our experimental results are shown in the **[Comparison with LoCALPFN]** common question.\\n\\nQ2. We find 3 very large tabular datasets to test our approach. Due to the limited time of the rebuttal, we limit MixturePFN clusters to at most 64 and adopt only target encoding for categorical features. Even under these handicaps, MixturePFN consistently improves TabPFN*\\u2019s accuracy (dataset size in brackets):\\n\\n\\n```\\nModel Name | Poker(1MM) | HiggsBig(940K) | BNG(1MM)\\n-------------+------------+----------------+----------\\nTabPFN* | 52.5% | 66.7% | 87.4%\\nMixturePFN | 60.0% | 69.0% | 89.0%\\n```\\n\\nQ3. As shown in Figures 10c and 11c of the Appendix, we test on datasets with hundreds or thousands of features. We find MixturePFN and deep learning perform worse on datasets with more features. In theory, as long as (1) most features are noise and (2) MRMR successfully filters noise from useful features, performance should not deteriorate. Our experiments (Figures 10c and 11c of the Appendix) suggest either (1) or (2) is violated on these datasets. Thus we recommend improved feature selection or feature compression algorithms to extend our work to datasets with many features for practitioners\\n\\nThank you again for your thorough review. Considering our rebuttal, we would highly appreciate your raising scores and further support for our paper!\"}", "{\"summary\": \"In this paper, the authors propose MixturePFN, an extension of Sparse Mixture of Experts to TabPFN to alleviate the context size limitations of the existing TabPFN. On the TabZilla benchmark, MixturePFN outperforms state-of-the-art tabular prediction models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of Mixture of Experts blending into TabPFN seems novel.\\n\\n2. The effectiveness of MixturePFN is well evaluated in well-established benchmarks against a variety of baseline methods.\\n\\n3. Writing is easy to follow.\", \"weaknesses\": \"1. The biggest weakness I think is that the paper is missing a comparison with LoCalPFN [1]. Since LoCalPFN also tries to make TabPFN effective even on datasets with many-shots, I think it should be mentioned in the paper.\\n\\n----\\n[1] Thomas et al., Retrieval & Fine-Tuning for In-Context Tabular Models, NeurIPS 2024\", \"questions\": \"1. Can you provide a comparison with LoCalPFN [1]? If not possible, I think the comparison should be done using k neighbor samples rather than random sampling, at least for TabPFN*.\\n\\n2. I see that the authors say in the limitations section that they didn't do it on a dataset with a million samples, but I'm somewhat curious about the effectiveness of MixturePFN on a dataset with a million samples, since the paper is aimed at the scale-up aspect.\\n\\n3. I'm also curious about the effectiveness of MixturePFN on datasets with hundreds or thousands of features, which is very practical in the real world.\\n\\n----\\n[1] Thomas et al., Retrieval & Fine-Tuning for In-Context Tabular Models, NeurIPS 2024\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for raising your score! We will include these comparisons in our work, following your helpful suggestions.\"}", "{\"title\": \"Following-Up\", \"comment\": \"Thank you once again for your thorough review and literature search! As stated in our original rebuttal, [1] is a contemporaneous work. Nonetheless, MixturePFN's design choices significantly improves [1] performance. As the end of the Author-Reviewer discussion period nears, we look forward to receiving your feedback on our response!\\n\\n[1] Retrieval & Fine-tuning for In-Context Tabular Models\"}", "{\"summary\": \"The paper proposes the MixturePFN framework, which extends TabPFN for large tabular datasets by addressing the performance and scalability limitations of the number of table rows. The authors propose:\\n1. Mixture of In-Context Prompters (MICP), which optimizes inference by using a sparse mixture of experts to route test samples to specific \\\"prompters\\\" that create context-specific prompts to separate large training datasets into manageable clusters. \\n2. Context-Aware Finetuning (CAPFN), which addresses distributional shift issues by specializing each prompter on its assigned\\ncontext via parameter efficient finetuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The MICP strategy effectively reduces memory usage, allowing the model to handle larger datasets compared to existing TabPFN\", \"CAPFN bootstrapping and finetuning approach appears to be an effective way to mitigate distribution shift ICL for tabular data\", \"Extensive benchmarks against 19 strong baselines show good performance in both mean rank and Condorcet ranking across diverse datasets\"], \"weaknesses\": [\"While MIXTUREPFN improves dataset scalability, it still struggles with feature-rich datasets, potentially limiting its applicability in domains with high-dimensional data, such as patient healthcare data. I realize the authors leave this to future work, but this is an area where simple XGBoost performs quite well, and I would be curious about their thoughts on tackling this issue.\", \"MICP's reliance on K-Means clustering to segment data into meaningful clusters as the quality of clusters can vary significantly based on dataset properties / distance metric chosen. Poor clustering could lead to suboptimal routing and ineffective prompts for certain test samples. I'd be curious to see some ablations in this area.\", \"The CAPFN bootstrapping method might introduce biases or overfitting if the sampled subsets are not representative of the entire dataset. Bootstrapping from small clusters may fail to capture enough diversity, especially in cases with imbalanced classes or rare features. I'd be also curious to see how this method works with highly imbalanced labels e.g. 1\\\\% positive.\"], \"questions\": \"See weaknesses.\\n\\nCan categorical features simply be encoded as ordinal features? Is that not implying false relationships between unordered elements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for raising your score! We are glad our rebuttal addressed your concerns, and will include these results in our revised manuscript.\"}", "{\"metareview\": \"(a) Summary of Scientific Claims and Findings\\n\\nThe paper presents MixturePFN, an enhancement of Sparse Mixture of Experts tailored for tabular Prior-Fitted Networks (TabPFNs). MixturePFN leverages specialized In-Context Learning (ICL) experts, applied to clusters of tabular data, to improve both efficiency and accuracy, particularly on large datasets. The framework consists of two key components: Mixture of In-Context Prompters (MICP): Routes test samples to appropriate clusters, enabling effective segmentation of large datasets, and Context-Aware Fine-Tuning (CAPFN): Mitigates distributional shifts by employing fine-tuning specific to each cluster.\\n\\n(b) Strengths of the Paper\\n\\n1. Innovative integration of Mixture of Experts with ICL, designed specifically for tabular data applications.\\n\\n2. Demonstrates substantial performance improvements over state-of-the-art models, supported by extensive validation on diverse and large datasets.\\n\\n(c) Weaknesses of the Paper and Missing Elements\\n\\n1. Limited scalability to datasets with high-dimensional features, such as those found in healthcare.\\n\\n2. CAPFN may introduce biases due to insufficient cluster diversity, which can be problematic for datasets with imbalanced classes.\\n\\n(d) Decision and Rationale\\n\\nThe paper offers a significant and well-substantiated contribution to advancing ICL for tabular data. While there are concerns about scalability and generalization, the paper\\u2019s strengths and innovative approach outweigh these limitations.\", \"additional_comments_on_reviewer_discussion\": \"Consensus on the originality and impact of MixturePFN, with reviewers recognizing its contemporaneity with LoCALPFN as a parallel work.\\n\\nRecommendations for addressing scalability issues in feature-rich datasets and improving the robustness of clustering techniques.\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"W1. Yes, feature scalability is an inherent limitation of TabPFN. We believe there are several future directions: (1) improved feature selection algorithms improving MRMR, (2) improved categorical encoding algorithms to reduce cardinality of large categorical features, and (3) efficiently addressing TabPFN\\u2019s underlying feature invariance problem with architectural changes. Because these directions are outside the scope of our paper, which studies Mixture of Experts on TabPFN, we leave these directions to future work.\\n\\nW2. We agree that ideal clustering algorithms are a related but orthogonal problem to our work. As requested, we perform several ablation studies to test this: we use different types of categorical encodings (target and one-hot encoding), filtering each categorical feature to have at most K dimensions using MRMR, and using different numbers of clusters (where number of experts is proportional to gamma).\\n\\n```\\nModel Name | encoding-K | gamma | elec. Acc | phon. Acc \\n------------+------------+--------+-----------+-----------\\nTabPFN* | target-1 | no KNN | 81.2% | 88.3% \\nMixturePFN | target-1 | 5 | 89.7% | 90.2% \\nMixturePFN | target-1 | 1 | 87.4% | 87.4% \\nMixturePFN | onehot-1 | 1 | 86.7% | 86.7% \\nMixturePFN | target-5 | 1 | 87.4% | 87.8% \\nMixturePFN | onehot-5 | 1 | 86.3% | 87.9% \\n```\\n\\nThese additional ablations show performance is more sensitive to the number of experts than choice in categorical encoding algorithms. Hence, it is MixturePFN\\u2019s core novelties (MICP and CaPFN) are the reasons for its superior performance.\\n\\nW3. We agree with your intuition! When we plot the model performance improvement over TabPFN w.r.t. class imbalance, we find a loose correlation (R2=0.249). Hence, our method performs better on balanced data. We will add this interesting finding to our paper!\\n\\nThank you again for your detailed review! If our rebuttal answers your concerns, we highly appreciate your support for our paper.\"}", "{\"summary\": \"The paper proposes a mixture of experts approach for in-context learning on tabular data. Each expert in the mixture is a K-means cluster and the model routes the input instance to the closest cluster. This addresses the problem of context size in large datasets and provides a better selection of prompt instance than random sampling. To adapt the model to this type of routing authors also propose fine tuning by selecting a cluster of each training instance and maximizing the likelihood.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The paper is well written and proposes a justified solution to address the context length issue for in-context learning models such as TabPFN. Authors conduct extensive experiments on many real world dataset to demonstrate the effectiveness of the proposed approach and compare with leading tree-based and deep learning tabular methods.\", \"weaknesses\": \"There is a very related previous work \\\"Retrieval & Fine-Tuning for In-Context Tabular Models\\\" by Thomas et al, which proposes both nearest neighbor retrieval to improve the prompt and fine tuning with this approach to adapt the model to the target distribution. I think the authors have to compare with this work and highlight what is novel in MixturePFN.\", \"questions\": \"I could not find an ablation study on the number of clusters K vs model performance, have you done these experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for a detailed response and comparison with LoCALPFN, I agree that the two works are contemporaneous and have revised my score.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for putting a lot of time and effort into comparing LoCalPFN to your work, MixturePFN. I agree that your work is contemporary to LoCalPFN, but I am still impressed that MixturePFN surpasses that approach. I encourage the authors to include the results in a revised manuscript, and I give an acceptance grade on this point.\"}", "{\"title\": \"Following-Up\", \"comment\": \"Thank you once again for your thorough review and literature search. We will incorporate your valuable suggestions into our work and look forward to hearing your feedback!\"}", "{\"title\": \"Reminder: Author-Reviewer Discussion Period Closing Soon\", \"comment\": \"This is a reminder that the author-reviewer discussion period will end on Nov 26 AoE.\\n\\nYour engagement during this phase is critical for providing valuable feedback and clarifications. If you have any remaining questions or comments, please take a moment to participate before the deadline.\\n\\nThank you for your contributions to this important process.\\n\\nAC\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"W1. Regarding \\u201cRetrieval & Fine-tuning for In-Context Tabular Models\\u201d (i.e. LoCALPFN), please refer to the common response **[LoCALPFN is a parallel work]**.\\n\\nAccording to ICLR policy, LoCALPFN is a parallel work to our work, MixturePFN. We highlight: (1) LoCALPFN\\u2019s NeurIPS paper **was accepted after** the ICLR submission deadline, (2) MixturePFN (which proposes *\\u201cnearest neighbor retrieval to improve the prompt and fine tuning\\u201d*) was **put on arXiv before** LoCALPFN, and (3) after discussing with the authors of LoCALPFN, **the authors of LoCALPFN agree with us** that MixturePFN and LoCALPFN came to this conclusion contemporaneously.\\n\\nNonetheless, we highlight the difference: compared to LoCALPFN, which finetunes a single TabPFN model for retrieval, MixturePFN finetunes a specialized expert on each subset of the training/context data, improving effectiveness. We find this difference substantially improves tabular classification accuracy. We provide empirical results in the common response **[Comparison with LoCALPFN]**.\\n\\nQ1. Yes, we provide ablation studies on the number of experts vs model performance in Table 14 of the Appendix, which computes average accuracy across shared datasets with different number of experts (where the number of clusters = gamma * dataset_size). Our experiments show as the number of experts increases, performance improves.\\n\\n```\\ngamma | accuracy\\n------+----------\\n0.0 | 83.42%\\n1.0 | 83.96%\\n3.0 | 84.23%\\n5.0 | 84.23%\\n```\\n\\nThis trend is also supported by our new ablation studies in the **[Comparison with LoCALPFN]** common question, where we limit the total number of experts (i.e. total number of clusters stays constant, but each expert is responsible for multiple clusters).\\n\\nWe greatly appreciate your efforts and expertise on tabular in-context learning and hope our explanation of differences and detailed timeline between MixturePFN and LoCALPFN demonstrate why MixturePFN is novel. If you agree, we would greatly appreciate raising your score! Thank you!\"}", "{\"comment\": \"Thank you for the additional experiments, I also emphasize with the contemporaneous submission with LoCALPFN, and am impressed that results were able to be run in a short amount of time. I have updated my score accordingly. Best of luck!\"}", "{\"title\": \"Common Response\", \"comment\": \"**[LoCALPFN is a parallel work]**\\n\\n**Please Read:** Thank you very much for your thorough efforts reviewing our paper and its related works! According to ICLR policy, MixturePFN and LoCALPFN are contemporaneous: *\\u201cWe consider papers contemporaneous if they are published within the last four months. That means, since our full paper deadline is October 1, if a paper was published (i.e., at a peer-reviewed venue) on or after July 1, 2024, authors are not required to compare their own work to that paper.\\u201d*.\\n\\nIn fact, MixturePFN was the first to post on arXiv. Here is a detailed timeline:\\n\\n\\n- MixturePFN was posted on arXiv in May 2024, proposing MoE KNN and finetuning.\\n\\n- LoCALPFN was posted on arXiv in June 2024, proposing KNN retrieval and finetuning.\\n\\n- MixturePFN was submitted to ICLR on October 1 2024.\\n\\n- LoCALPFN (camera-ready) was accepted to NeurIPS on October 30 2024. After discussing with the authors of LoCALPFN, we both agree our papers are contemporaneous.\\n\\nWe hope our timeline gives a better perspective on the relationship between MixturePFN and LoCALPFN. Although we are *not required to*, we still provide a design and empirical comparison with LoCALPFN for completeness in the next section.\\n\\n**[Comparison with LoCALPFN]**\\n\\nCompared to LoCALPFN, which finetunes a single TabPFN model for retrieval, MixturePFN finetunes a specialized expert on each subset of the training/context data, improving effectiveness. Hence, LoCALPFN is a specific instance of MixturePFN: when gamma=inf and there is only one expert. \\n\\nBecause LoCALPFN does not release source code or performance on individual datasets (at the time of this rebuttal), we reimplement LoCALPFN\\u2020 under our framework by setting gamma=inf (i.e. KNN retrieval) and limiting the number of experts. We compute average accuracy across several datasets: electricity, phoneme, and airlines.\\n\\n```\\nModel Name | max experts | gamma | elec. Acc | phon. Acc | air. Acc\\n------------+-------------+--------+-----------+-----------+----------\\nTabPFN* | no finetune | no KNN | 81.2% | 88.3% | 60.0%\\nLoCALPFN\\u2020 | 1 | inf | 85.4% | 88.7% | 64.0%\\nMixturePFN\\u2020 | 1 | 5 | 85.6% | 88.5% | 64.3%\\nMixturePFN\\u2020 | 8 | 5 | 88.0% | 89.1% | 64.9%\\nMixturePFN | 1024 | 5 | 89.7% | 90.2% | 85.7%\\n```\\n\\nAs seen above, MixturePFN outperforms both TabPFN and LoCALPFN by training a specialized expert on each subset of the training dataset. The more experts that are trained, the better the performance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for raising your score! We are glad that our additional results and related work comparison help address your concerns and will include your valuable suggestions in our revised manuscript.\"}" ] }
2fgzf8u5fP
Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding
[ "Xiner Li", "Yulai Zhao", "Chenyu Wang", "Gabriele Scalia", "Gökcen Eraslan", "Surag Nair", "Tommaso Biancalani", "Shuiwang Ji", "Aviv Regev", "Sergey Levine", "Masatoshi Uehara" ]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences. However, rather than merely generating designs that are natural, we often aim to optimize downstream reward functions while preserving the naturalness of these design spaces. Existing methods for achieving this goal often require differentiable proxy models (e.g., classifier guidance or DPS) or involve computationally expensive fine-tuning of diffusion models (e.g., classifier-free guidance, RL-based fine-tuning). In our work, we propose a new method to address these challenges. Our algorithm is an iterative sampling method that integrates soft value functions, which looks ahead to how intermediate noisy states lead to high rewards in the future, into the standard inference procedure of pre-trained diffusion models. Notably, our approach avoids fine-tuning generative models and eliminates the need to construct differentiable models. This enables us to (1) directly utilize non-differentiable features/reward feedback, commonly used in many scientific domains, and (2) apply our method to recent discrete diffusion models in a principled way. Finally, we demonstrate the effectiveness of our algorithm across several domains, including image generation, molecule generation, and DNA/RNA sequence generation.
[ "Diffusion models", "Reinforcement learning", "AI for science" ]
Reject
https://openreview.net/pdf?id=2fgzf8u5fP
https://openreview.net/forum?id=2fgzf8u5fP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oX4kwd2waU", "n4TYiwnLkp", "kcYxhGddFk", "ddfPEfZDnE", "cwHcbqjiY7", "bWFBzZlZle", "VF1aXiefwW", "O84OZVH4YP", "KTe3m5Ebdm", "JRiaUtPn4L", "FNrQMg7KUC", "ER8F7VBZ43", "DgZvRru8ef", "DDhnu8H4uZ", "8mwYI5tZMw" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734760220642, 1730696555439, 1732828073845, 1731176820208, 1730277772972, 1737523475841, 1732944537876, 1730574887546, 1732905942879, 1732905739199, 1730566833696, 1732944268452, 1733179597598, 1732824019490, 1732825159972 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1947/Area_Chair_EmFL" ], [ "ICLR.cc/2025/Conference/Submission1947/Reviewer_v5um" ], [ "ICLR.cc/2025/Conference/Submission1947/Authors" ], [ "ICLR.cc/2025/Conference/Submission1947/Reviewer_tbvq" ], [ "ICLR.cc/2025/Conference/Submission1947/Reviewer_DiPS" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1947/Authors" ], [ "ICLR.cc/2025/Conference/Submission1947/Reviewer_YMgn" ], [ "ICLR.cc/2025/Conference/Submission1947/Authors" ], [ "ICLR.cc/2025/Conference/Submission1947/Authors" ], [ "ICLR.cc/2025/Conference/Submission1947/Reviewer_Jmxu" ], [ "ICLR.cc/2025/Conference/Submission1947/Authors" ], [ "ICLR.cc/2025/Conference/Submission1947/Reviewer_v5um" ], [ "ICLR.cc/2025/Conference/Submission1947/Authors" ], [ "ICLR.cc/2025/Conference/Submission1947/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This work presents a unified framework for guidance in diffusion models, encompassing both discrete and continuous settings, with minimal additional training. It extends applicability to domains where downstream rewards may not be differentiable. The proposed method, SVDD (MC and PM), addresses discrete diffusion scenarios where continuous energy gradients cannot be directly applied to the discrete state space. Additionally, it is well-suited for cases involving non-differentiable rewards, which frequently arise in scientific domains.\\n\\nHowever, reviewers have raised concerns regarding the novelty of the approach, noting limited differentiation from existing twisted Sequential Monte Carlo (SMC) methods. The authors also acknowledge that the term \\\"reward maximization\\\" may be misleading in this context. Unlike SVDD, SMC methods require resampling across the entire batch, complicating parallelization.\", \"additional_comments_on_reviewer_discussion\": \"Despite these considerations, reviewers remain concerned about inconsistencies in baseline settings, which may bias evaluations in favor of SVDD. Consequently, the initial rating remains unchanged after rebuttal.\"}", "{\"summary\": \"The paper introduces SVDD, a method which aims to sample from the product distribution $p^*(x_0) \\\\propto p^{pre}_0(x_0) \\\\exp(R(x_0)/\\\\alpha)$ for some non-negative reward function $R(x)$, constant $\\\\alpha \\\\geq 0$ and pre-trained diffusion model $p^{pre}_t(x_t | x_{t + 1})$.\\n\\nThe method employs an SMC-inspired procedure for iteratively sampling and reweighting samples according to a soft value function and its corresponding optimal policy. Providing two options to obtain the soft-value function (which is required for the method's importance sampling step), the authors show that SVDD can be used with a cheap approximation based on the diffusion model's denoiser or an amortized version based on regression to a Monte Carlo estimate. The authors evaluate the performance of SVDD on a series of tasks -- images, molecule design, and DNA/RNA design.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a number of strengths, such as\", \"Presenting a novel application of nested Sequential Monte Carlo to the difficult problem of sampling from the product distribution $p^*(x_0)$ given a pre-trained diffusion model.\", \"The method, especially SVDD-PM, provides a particularly efficient method to sample from the product distribution when no differentiable reward is available and the reward function is cheap. The manuscript shows that SVDD can indeed increase the reward over the pre-trained model, offering a compelling option to sample from the target product distribution with little overhead.\", \"The problem of cheaply sampling from the product distribution in the presence of non-differentiable rewards is especially significant as existing methods typically require availability of gradients or expensive (typically simulation-based) fine-tuning algorithms. Non-differentiable rewards are often seen in scientific discovery, a target area aptly pointed out by the authors.\"], \"weaknesses\": \"Overall I had a some issues regarding clarity of the paper, concerns about sources of bias that are not discussed in the manuscript, and am worried that the experimental section does not paint a fair picture of SVDD's performance relative to baselines. I will discuss each of these in turn\\n\\n\\n\\n### Unclear focus of probabilistic inference vs reward maximization\\n\\nSection 3.2 states that the objective of this paper is to perform probabilistic inference and sample from the target distribution $p^*(x_0) \\\\propto p^{pre}(x_0)\\\\exp(R(x_0) / \\\\alpha)$. However, towards the beginning of the experiment section and throughout the appendix the manuscript begins to say that SVDD is actually meant for reward maximization, not the problem of sampling from $p^*(x_0)$. In particular, the manuscript states that in practice they set $\\\\alpha = 0$ for all experiments, which corresponds to a constrained reward maximization where $p^*(x_0)$ is a Dirac centered at $x_0^* = \\\\underset{x_0 \\\\in Support(p_0^{pre}(x_0))}{\\\\arg\\\\max}R(x_0)$. This is quite different from sampling from the $p^*(x_0)$ for any $\\\\alpha$ and if this is the goal of SVDD it should be compared to baselines which try to do reward maximization.\\n\\n\\n\\n### Missing discussion and investigation on bias of soft value function estimates\\n\\nThe manuscript defines the soft value function as $v_t(x_t) = \\\\alpha \\\\log \\\\mathbb{E}_{x_0 \\\\sim p^{pre}(x_0 | x_t)}[\\\\exp(R(x_0) / \\\\alpha)]$. Next, due to issues with numerical stability for small values of $\\\\alpha$ the authors make use of an approximation\\n\\n$v_t(x_t) = \\\\alpha \\\\log \\\\mathbb{E}_{x_0 \\\\sim p^{pre}(x_0 | x_t)}[\\\\exp(R(x_0) / \\\\alpha)]$\\n\\n$ \\\\approx \\\\alpha \\\\log \\\\exp(\\\\mathbb{E}_{x_0 \\\\sim p^{pre}(x_0 | x_t)}[R(x_0)] / \\\\alpha)$\\n\\n$ = \\\\mathbb{E}_{x_0 \\\\sim p^{pre}(x_0 | x_t)}[R(x_0)]$\\n\\nThe second step takes the $\\\\exp$ function outside of the expectation and as such requires an application of Jensen's inequality, implying that $v_t(x_t) \\\\geq \\\\mathbb{E}_{x_0 \\\\sim p^{pre}(x_0 | x_t)}[R(x_0)]$. This means that the Monte Carlo regression used for SVDD-MC is in fact biased (although consistent), a fact which is not mentioned in the paper.\\n\\nThe situation is more complicated for SVDD-PM which first applies Jensen's and then another approximation as \\n\\n$v_t(x_t) \\\\geq \\\\mathbb{E}_{x_0 \\\\sim p^{pre}(x_0 | x_t)}[R(x_0)]$\\n\\n$ \\\\approx R(\\\\mathbb{E}_{x_0 \\\\sim p^{pre}(x_0 | x_t)}[x_0])$\\n\\nIt is unclear to me whether the error of the posterior mean estimate can be shown to be bounded as the reward function is potentially non-convex, but would be happy if the authors had some insight into this.\\n\\nGiven that SVDD requires accurate estimates of the soft-value functions to sample from the target distribution $p^*(x_0)$ I would be more convinced of SVDD's abilities were there a more detailed (potentially including an empirical results) analysis of the bias of the Monte Carlo regression and posterior mean estimates.\\n\\n\\n\\n### Issues with inconsistent setting of $\\\\alpha$ for baselines\\n\\nThe stated goal of SVDD is to sample from the target distribution $p^*(x_0;\\\\alpha) \\\\propto p^{pre}(x_0)\\\\exp(R(x_0) / \\\\alpha)$, where the temperature parameter $\\\\alpha$ controls how peaky $p^*(x_0;\\\\alpha)$ becomes. As discussed above, as $\\\\alpha \\\\rightarrow 0$ the target distribution becomes focused on the maximizer of the reward which is in the support of the pretrained model such that \\n\\n$\\\\mathbb{E}_{x \\\\sim p^* (x;0^+)}[R(x)] = \\\\underset{x_0 \\\\in Support(p^{pre}(x_0))}{\\\\arg\\\\max}R(x_0)$. \\n\\nIn general, as the value of $\\\\alpha$ is decreased the expected reward under the target distribution should increase. As such, comparing the distribution of rewards of generated samples for methods using different values of $\\\\alpha$ does not paint an accurate picture of each method's performance as one method having a higher quantile reward may simply be a consequence of the setting of $\\\\alpha$. \\n\\nUnfortunately, the manuscript's experiments use significantly different values of $\\\\alpha$ for its method and baselines while using the reward at different quantiles as the main performance metric. This is more problematic as the value of $\\\\alpha$ for their method is set to $0$ (where the true expected reward is the maximum reward value) and a larger value of $\\\\alpha$ for baselines. Because the value of $\\\\alpha$ is not set to be equal for SVDD and all baselines I do not believe that the experimental results in Table 2 paint a fair picture of SVDD's performance.\\n\\n\\n\\n### Overall comments\\n\\nI generally have concerns with the settings of either the number of particles $M$ being too small or the bias of the soft-value function estimates being too large. As far as I understand (and perhaps I am missing something!) by setting $\\\\alpha=0$ for SVDD in the experiment section the method _should_ be suffering from mode collapse and generating very high reward samples as the target distribution $p^*(x_0)$ is a Dirac centered at $x_0^* = \\\\underset{x \\\\in Support(p_0^{pre}(x))}{\\\\arg\\\\max}R(x)$. However, samples from SVDD do not exhibit this expected mode collapse, which seems to indicate that either many more particles $M$ need to be used or the bias from the value function estimation is preventing the algorithm from properly sampling from the target distribution.\\n\\nI note that the main reason for my score mostly due to the issue with inconsistent setting of $\\\\alpha$ for SVDD and baselines in the experiments section as well as the missing discussion. A missing discussion/analysis of the bias of the value function estimates and their impact on SVDD's ability to sample from the target distribution also contributes significantly to my score.\", \"questions\": \"1. I did not see it mentioned in the manuscript -- how many seeds were used for experiments?\\n2. Could the authors provide a discussion of the relation between the consistency of nested SMC and the consequence of using more or less particles in their method?\\n3. How expensive is training the soft-value function estimate in SVDD-MC? If it is reasonably long would it be worth adding a fine-tuning based method (e.g., relative trajectory balance https://arxiv.org/abs/2405.20971). On the other hand, if training the soft-value estimate is especially cheap it would be worthwhile to emphasize this more in the manuscript as a benefit of this method compared to direct fine-tuning methods.\\n4. Since the goal of SVDD is to sample from the product distribution of reward and pretrained model could they add some metrics evaluating the diversity of their samples or their naturalness (e.g., likelihood under the pre-trained model when available)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your rebuttal. We have addressed your concern by explaining (1) why $\\\\alpha=0$ is fine, (2) approximation errors in value function learning, and (3) how diversity is discussed in the works of alignment in diffusion models.\\n\\n> W1: The authors consider setting $\\\\alpha=0$ in their experiments. However, prior work highlights that setting leads to over-optimization and increasingly reduces the diversity and realistic nature of the samples. Could the authors provide clarity on why this is not a problem in their setup?**\\n\\nA. We thank the reviewer for pointing out concerns regarding setting $\\\\alpha=0$.\\n\\n- **How naturalness (realisticy) is retrained?**: We always sample from pre-trained models because we use them as proposal distributions. Hence, it is naturally expected the likelihood is high. Indeed, empirically, the images shown in the Figures are realistic. We also have more quantitative metrics in molecules like Table 3. \\n\\n- **How diversity is retained**: While we recognize its importance, we intend to refrain from claiming our goal is to retain diversity because our primary goal is to optimize rewards while maintaining naturalness. However, even if $\\\\alpha=0$, it is expected that the diversity is not lost because it is regarded that we are sampling from many modes on the distribution of $p_{pre}(x)\\\\exp(r(x)/\\\\alpha)$ (with small $\\\\alpha$ but not $0$ exactly) due to the randomness coming from pre-trained models and finite $M$ in practice. Indeed, we observe it as we show in molecules and images. Furthermore, as shown in Table 3, we report diversity metrics for molecular domains, e.g., Uniqueness percentage and Novelty percentage. \\n\\nAs far as I know, **representative papers about alignment in diffusion models ([1], [2], [3]) like our work don\\u2019t show diversity metrics.** This is because alignment (reward maximization) is a primary goal, and diversity is a secondary objective. Instead, people often show generated samples to show the diversity we did. \\n\\n[1] Clark, K., P. Vicol, K. Swersky, and D. J. Fleet (2024). Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint ICLR \\n\\n[2] Fan, Y., O. Watkins, Y. Du, H. Liu, M. Ryu, C. Boutilier, P. Abbeel, M. Ghavamzadeh, K. Lee, and K. Lee (2023). DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. arXiv preprint NeurIPS\\n\\n[3] Black, K., M. Janner, Y. Du, I. Kostrikov, and S. Levine (2024). Training diffusion models with reinforcement learning\\n\\n> W2, 3, &4 Concerns on value function approximation errors of SVDD-MC and SVDD-PM . \\n\\nA. We appreciate the reviewer\\u2019s concerns regarding the assumptions underlying SVDD-MC and SVDD-PM. In general, we admit these approximations are heuristic and we will add more discussion. **However, we want to convey that (1) in SVDD-MC, our algorithm works well without this heuristic, and we just empirically observe the algorithm works more stable with this heuristic; (2) in SVDD-PM, this is a well-known empirically successful approximation in current related literature ([1],[2], [3]) as Remark in 2, such as classifier guidance variants when rewards are classifiers.**\\n\\n[1] Chung, H., J. Kim, M. T. Mccann, M. L. Klasky, and J. C. Ye (2023). Diffusion posterior sampling for general noisy inverse problems. ICLR \\n\\n[2] Ho, J., T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, and D. J. Fleet (2022). Video diffusion models. Advances in Neural Information Processing Systems 35, 8633\\u20138646.\\n\\n[3] Bansal, A., H.-M. Chu, A. Schwarzschild, S. Sengupta, M. Goldblum, J. Geiping, and T. Goldstein (2023). Universal guidance for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, \\n\\n> W5 While the approach leads to generation of samples with high reward, the authors do not provide any kind of metrics that test for diversity of the samples generated.\\n\\nA. Discussed in W1. \\n\\n> Q1 Potential typo \\n\\nA. We appreciate the reviewer catching this potential typo. The negative sign in the expectation induced by p_t^{\\\\text{pre}}(\\\\cdot | x_{t-1}) is not a typo. The sign appears due to the reward gradient term, which accounts for minimizing divergence from the pre-trained prior while maximizing the reward. We will clarify this explicitly in the revised manuscript to avoid confusion.\"}", "{\"summary\": \"This paper proposes a method for diffusion models to sample data that is both within target distribution and maximizing some downstream reward function. The problem the paper studies is of great importance, and the method shows empirical effectiveness in some downstream tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is generally well-written, and the motivation is also clear. It starts with an important problem and proposes a well-motivated solution that requires no finetuning or differential proxy model.\", \"The paper is clear about how two critical challenges (the soft-value function is both unknown and unnormalized) are addressed by the proposed algorithm.\"], \"weaknesses\": [\"The soft value function seems to difficult to approximate in general. Is there any anlysis or justification to quantify the quality of the approximation? How does one know a good approximation is indeed attained? Moreover, how does the approximation quality matter for the generation? More ablation study can improve the paper further.\", \"Is there any additional computational overhead for the proposed method? Is the approximation to the soft value function costly?\", \"The performance gain does not seem to be very significant compared to simple baselines, say Best-of-N. From Table 2, Best-of-N baseline is only incrementally worse than the proposed method in molecule-related experiments.\", \"A minor question: Does the size of the diffusion model affect the performance of SVDD? I will be interested to see how this method works for diffusion models of different size.\"], \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"This paper introduces a new method called Soft Value-based Decoding in Diffusion models (SVDD) for optimizing diffusion models to generate samples with desired properties while maintaining naturalness. The contributions include:\", \"SVDD is an inference-time technique that doesn't require fine-tuning the original diffusion model\", \"Can work with non-differentiable reward functions, unlike previous methods that required differentiable proxy models\", \"Applicable to both continuous and discrete diffusion models in a unified way\"], \"the_algorithm_works_in_the_following_way\": \"1. Uses \\\"soft value functions\\\" that predict future rewards from intermediate noisy states\\n2. At each denoising step:\\n - Generates multiple samples using the pre-trained model\\n - Selects samples based on their predicted value\", \"there_are_two_variants\": [\"SVDD-MC: Uses Monte Carlo regression to learn value functions\", \"SVDD-PM: Directly uses reward feedback without additional training\"], \"experimental_results_span_across_multiple_domains\": \"image generation, molecule generation, DNA/RNA sequence generation. The proposed method consistently outperformed baseline methods while maintaining sample validity.\\nThe paper demonstrates that SVDD provides an effective way to guide diffusion models toward desired properties while preserving the natural characteristics learned during pre-training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"There are several advantages of SVDD over previous methods:\\n1. No need for differentiable proxy models\\n2. No fine-tuning required\\n3. Works with both continuous and discrete spaces\\n5. Maintains better sample diversity compared to other approaches\\n\\nThe writing of this paper is clear.\", \"weaknesses\": \"The main weakness is about novelty. To be more specific, I can not see significant difference with twisted SMC methods (e.g., the papers mentioned in Sec 6 and App B). In the writing I see two differences claimed by the authors:\\n1. In previous works such as Wu et al., the reward is a classifier; while here it is \\\"reward maximization\\\" setting. \\n\\nFirst, I think the setting in this work should not be called \\\"reward maximization\\\" but be called \\\"alignment\\\" or \\\"reward sampling\\\" or similar names, due to the reasons in Sec 3.2 \\\"HIGH REWARDS WHILE PRESERVING NATURALNESS\\\". Second, whether the reward function is a classifier is not critical, as even for an usual reward r(x), we can understand it as an unnormalized probability prob(optimality | x).\\n\\n2. \\\"SMC methods involve resampling across the \\u201centire\\u201d batch, which complicates parallelization. Additionally, when batch sizes are small, as is often the case with recent large diffusion model\\\"\\n\\nI do not quite understand this part. I may miss the difference between SVDD and twisted SMC methods. Does the batch size mean the number of particles in SMC? It will be good if there could be a clarification.\", \"questions\": \"I do not have further questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Continue\", \"comment\": \"Experimental Weaknesses\\n\\n> A glaring missing baseline is Relative Trajectory Balance (Venkataraman et. al 2024) which does fine-tuning exactly like this paper considers for both discrete and continuous diffusion models. I kindly request the authors to consider adding this important baseline.\\n\\nWe thank the reviewer for pointing out the missing baseline. We know that Relative Trajectory Balance (RTB) is a relevant method for fine-tuning in diffusion models. **As we are not intentionally adding any tuning methods (Black et al., 2023; Fan et al., 2023) as well as RTB, we don\\u2019t plan to add a comparison. \\n\\n**In our work, we acknowledge that we don\\u2019t intend to claim our method like infernece-time technique is better than fine-tuning methods in Section 5.3.This is because it is hard to consider the right comparison between accounting training time and inference time. Indeed, many representative papers on inference-time techniques like our work do not compare fine-tuning generative models.**\\n\\n[1] Zhao, S., R. Brekelmans, A. Makhzani, and R. Grosse (2024). Probabilistic inference in language models via twisted sequential monte carlo. ICML\\n\\n[2] Chung, H., J. Kim, M. T. Mccann, M. L. Klasky, and J. C. Ye (2023). Diffusion posterior sampling for general noisy inverse problems. ICLR\\n\\n[3] ] Ho, J., T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, and D. J. Fleet (2022). Video diffusion models. Advances in Neural Information Processing Systems 35, 8633\\u20138646.\\n\\n[4]] Bansal, A., H.-M. Chu, A. Schwarzschild, S. Sengupta, M. Goldblum, J. Geiping, and T. Goldstein (2023). Universal guidance for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,\\n\\n\\n> W. Missing Text Experiment\\n\\n- We appreciate the suggestion to include a text experiment. Since we alrady have tried in several domains, we defer it fo tufure work. \\n\\n> W. Diversity Metrics\\n\\nFirst, we have certain discussion in molecular domains. Here, as shown in Table 3, we report diversity and validity metrics, e.g., Validity percentage, Uniqueness percentage, and Novelty percentage. These results demonstrate that SVDD maintains diversity and naturalness comparable to baselines while achieving higher rewards. We will include more naturalness metrics to evaluate sample quality in more domains.\\n\\nIn general, although we recognize that higher diversity is better, **representative papers about alignment in diffusion models ([1], [2], [3]), like our work, don\\u2019t have diversity metrics. This is because alignment (reward maximization) is a primary goal, and diversity is a secondary objective.** Instead, people often show generated samples to show the diversity we did. As another reason, we believe this is often a subjective metric very suspect to the definition of distance.\\n\\n[1] Clark, K., P. Vicol, K. Swersky, and D. J. Fleet (2024). Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint ICLR\\n\\n[2] Fan, Y., O. Watkins, Y. Du, H. Liu, M. Ryu, C. Boutilier, P. Abbeel, M. Ghavamzadeh, K. Lee, and K. Lee (2023). DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. arXiv preprint NeurIPS\\n\\n[3] Black, K., M. Janner, Y. Du, I. Kostrikov, and S. Levine (2024). Training diffusion models with reinforcement learning\"}", "{\"summary\": \"This work provides a unified framework of guidance in diffusion models, both discrete and continuous, with minimal additional training and applicability in domains where a downstream reward might not even be differentiable. The proposed method SVDD (MC and PM) is applicable in discrete diffusion where a continuous gradient of energy cannot be directly added to the discrete state space, as well as cases where the reward is non differentiable which is the case in a lot of scientific domains. The work tackles an important problem in the scientific domain and leads to controllable generation without having to fine-tune large scaled models. Their results show that generations from SVDD lead to higher downstream rewards than the baselines considered.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors provide a widely applicable method that can be applied both to discrete and continuous diffusion settings.\", \"The proposed method, unlike previous guidance algorithms, does not rely on an explicitly trained conditional diffusion model (eg. for classifier free guidance), or on differentiable reward terms (eg. classifier based guidance).\", \"Results on both image and scientific domains highlight the benefits of the approach towards controlled generation of objects with high downstream reward, as intended.\", \"The work also conducts experiments in a wide variety of domains, ranging from images, molecules and DNA.\"], \"weaknesses\": [\"The authors consider setting $\\\\alpha=0$ in their experiments. However, prior work highlights that setting $\\\\alpha=0$ leads to over-optimization and increasingly reduces the diversity and realistic nature of the samples. Could the authors provide clarity on why this is not a problem in their setup?\", \"The work relies on two major assumptions (one for SVDD-MC and the other for SVDD-PM), which are neither well motivated theoretically nor are there any details provided about it.\", \"**Assumption about SVDD-MC**: The authors replace the logarithm of expectation with the expectation of logarithm in their quantity of interest, which in reality is only a bound on the actual quantity. Could the authors consider experimenting on some synthetic domain to describe the bias and variance caused by this approximation? When is this approximation reasonable and when would it be extremely incorrect?\", \"**Assumption about SVDD-PM**: This algorithm combines the above approximation with pushing the expectation inside the reward function $r(\\\\cdot)$. As with above, could the authors conduct experiments on synthetic domains and highlight when and where such an assumption is reasonable, and when is it violated?\", \"While the approach leads to generation of samples with high reward, the authors do not provide any kind of metrics that test for diversity of the samples generated.\"], \"questions\": [\"Is there a typo under the first equation in Section $4.1$, where the expectation is induced by $p_t^{pre}(\\\\cdot | x_{t-1}) -$ note the negative instead of positive sign.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"(Continue)\", \"comment\": \"> Q1. Number of Seeds Used for Experiments\\n\\nWe used 3 random seeds for all experiments. This information will be included in the revised manuscript.\\n\\n> Q2. Relation Between Consistency of Nested SMC and Number of Particles\\n\\nThe number of particles M directly impacts the value function estimation quality and the sampled distribution's diversity. As M increases, the Monte Carlo estimate of the soft value function becomes more accurate, reducing bias. Meanwhile, larger M improves sampling quality but increases computational cost. We have ablation studies on the effect of M (Figure 3 and 7) to quantify this trade-off.\\n\\n> Q3. Computational Cost of Soft-Value Function Training in SVDD-MC\\n\\n Training the soft value function in SVDD-MC involves forward passes through the diffusion model for multiple particles. While this introduces additional cost, it is significantly cheaper than fine-tuning-based methods, as it avoids modifying the diffusion model. We will add the training time. \\n\\n> Q4. Metrics for Diversity and Naturalness\\n\\nFor molecular domains, as shown in Table 3, we report diversity and validity metrics, e.g., Validity percentage, Uniqueness percentage, and Novelty percentage. These results demonstrate that SVDD maintains diversity and naturalness comparable to baselines while achieving higher rewards. We will include more naturalness metrics to evaluate sample quality in more domains.\\n\\n**Regarding diversity, while we recognize higher diversity is better, representative papers about alignment in diffusion models ([1], [2], [3]), like our work, don\\u2019t have diversity metrics.** This is because alignment (reward maximization) is a primary goal, and diversity is a secondary objective. Instead, people often show generated samples to show the diversity we did. As another reason, we believe this is often a subjective metric very suspect to the definition of distance. \\n\\n[1] Clark, K., P. Vicol, K. Swersky, and D. J. Fleet (2024). Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint ICLR\\n\\n[2] Fan, Y., O. Watkins, Y. Du, H. Liu, M. Ryu, C. Boutilier, P. Abbeel, M. Ghavamzadeh, K. Lee, and K. Lee (2023). DPOK: Reinforcement learning for fine-tuning text-to-image diffusion models. arXiv preprint NeurIPS\\n\\n[3] Black, K., M. Janner, Y. Du, I. Kostrikov, and S. Levine (2024). Training diffusion models with reinforcement learning\"}", "{\"comment\": \"We appreciate your detailed review. We have clarified (1) our reward maximization goal and choice of $\\\\alpha$ and (2) approximation errors when learning value functions.\\n\\n> W1. Unclear Focus on Probabilistic Inference vs Reward Maximization\\n\\nYes, our focus is reward maximization. **This target distribution is widely accepted in alignment problems, as used in many representative papers in RLHF. While it can be seen as a probabilistic inference, the literature focuses on the reward maximization aspect to our knowledge [1,2,3].** (i.e., do not have metrics on diversity in general). But, we acknowledge we should have added more likelihood metrics. We will cite these papers and clarify the goal. \\n\\n[1] Ziegler, Daniel M., et al. \\\"Fine-tuning language models from human preferences.\\\" arXiv preprint arXiv:1909.08593 (2019).\\n\\n[2] Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization:Your language model is secretly a reward model. In Thirtyseventh Conference on Neural Information Processing Systems, 2023.\\n\\n[3] Ethayarajh, Kawin, et al. \\\"Kto: Model alignment as prospect theoretic optimization.\\\" (2024).\\n\\n> W2: Missing discussion and investigation on bias of soft value function estimates\\n\\nWe appreciate the reviewer\\u2019s concerns regarding the assumptions underlying SVDD-MC and SVDD-PM. In general, we admit these approximations are heuristic and we will add more discussion. \\n\\n- Approximation: SVDD-MC, our algorithm works well without this heuristic, but we just say that empirically, the algorithm works more stable with this heuristic. We plan to add more to the discussion. \\n\\n- Approximation of SVDD-PM: **We would also like to emphasize that this approximation is widely accepted and shown to be empirically successful in current related literature of diffusion models ([1,2,3]), such as classifier guidance variants when rewards are classifiers**. We will add more ablation studies on the value function quality, including SVDD-PM. \\n\\n\\n[1] Chung, H., J. Kim, M. T. Mccann, M. L. Klasky, and J. C. Ye (2023). Diffusion posterior sampling for general noisy inverse problems. ICLR\\n\\n[2] Ho, J., T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, and D. J. Fleet (2022). Video diffusion models. Advances in Neural Information Processing Systems 35, 8633\\u20138646.\\n\\n[3] Bansal, A., H.-M. Chu, A. Schwarzschild, S. Sengupta, M. Goldblum, J. Geiping, and T. Goldstein (2023). Universal guidance for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,\\n\\n> W3: Incosisnte values of $\\\\alpha$. \\n\\nWe agree that this part should be carefully considered. We will update our experiments to use consistent values of $\\\\alpha$ across SVDD and baseline methods.\\n\\n**How diversity is retained with $\\\\alpha=0**: While we recognize its importance, we intend to refrain from claiming our goal is to retain diversity because our primary goal is to optimize rewards while maintaining naturalness. However, even if $\\\\alpha=0$, it is expected that the diversity is not lost because it is regarded that we are sampling from many modes on the distribution of $p_{pre}(x)\\\\exp(r(x)/\\\\alpha)$ (with small $\\\\alpha$ but not $0$ exactly) due to the randomness coming from pre-trained models and finite $M$ in practice. Indeed, we observe it as we show in molecules and images. Furthermore, as shown in Table 3, we report diversity metrics for molecular domains, e.g., Uniqueness percentage and Novelty percentage.\"}", "{\"summary\": \"This paper introduces SVDD which is a new method to fine-tune diffusion models in both continuous and discrete spaces. SVDD-MC learns a soft value function by regression to $x_t$ while SVDD-PM exploits the posterior mean parametrization of masked diffusion models to estimate the soft value function directly. Given a soft value function, SVDD can be applied to a general class of reward functions, including non-differentiable ones, at inference time without further fine-tuning. This can be seen as a variation of the famous Sequential Monte-Carlo algorithm but applied and modified for diffusion models. Experiments are done on images, molecules, and docking and show improvements in fine-tuning performance under a specified reward metric.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper tackles a timely problem in considering fine-tuning diffusion models. Moreover, the suggested approach of SVDD-PM enjoys being computationally cheap to use as it does not require any further training while both SVDD-MC and SVDD-PM are applicable in settings where the reward function is non-differentiable. This is impactful because this unlocks a lot of potential application domains that have black-box rewards where learning a surrogate reward model is non-trivial. Finally, the paper considers a diverse set of experimental settings to showcase the universality of the proposed approach.\", \"weaknesses\": \"While the paper has some notable strengths there are a few lingering questions that point to potential weaknesses. I will try to list them below.\\n\\n**Theoretical weaknesses**\\n\\nTwo main questions arise when looking at the setup. The first one is the actual target distribution and whether SVDD hits it. In the continuous setting, I have severe doubts about whether the correct terminal distribution is reached due to the initial value function bias problem as introduced in Adjoint Matching (Domingo-Enrich et. al 2024). Certainly, nothing in the current theory suggests the process during fine-tuning is memoryless. Moreover, it is unclear what ramifications and bias we introduce in the MC setup when the regression is done using $r(x_0) \\\\to x_t$ as opposed to the more numerically unstable soft value function. For example, I believe when you remove the $\\\\exp$ from your regression target, this is a based value function but there is no discussion on this point outside of the fact that it is less numerically stable. As a result, I am dubious about the claims made about hitting the correct target distribution.\\n\\n\\nAnother question is the connection to Sequential Monte Carlo. There is a discussion on this in the paper but I think it's not accurate enough. I disagree with the statement made in the paper. The algorithm you propose is quite literally SMC but adapted to reward maximization, there is even a resampling step which is exactly what is done in SMC. The arguments that SMC is done over a batch are lukewarm. There is nothing wrong with demonstrating that SMC can be effectively applied to sampling from discrete diffusion---like analogously done for an autoregressive model by Zhao et. al 2024---and this is a valuable contribution. I suggest the authors be a bit more forthright with their claims as I would buy it a lot more. In fact, with the right framing, you achieve novelty by showing how SMC applies to a newer more interesting problem domain.\\n\\n**Additional Technical weaknesses**\\n\\nOne of the main selling points of SVDD is the fact that it is supposed to be a cheap inference time algorithm. This I believe is not quite true because of the need to estimate the soft value function in SVDD-MC. Indeed, one must estimate the soft value function using rollouts which I believe adds a heavy pre-processing step. I also did not see SVDD-MC in the ablation studies about computational cost---likely because it's significantly more expensive than SVDD-PM. Thus, I believe the main claim for SVDD-MC being a lightweight method is a bit misleading. Of course, if you had the perfect estimated value function then inference scales as indicated in the plot for 3c,d but this is not the full picture.\\n\\n**Experimental weaknesses**\\n\\nA glaring missing baseline is Relative Trajectory Balance (Venkataraman et. al 2024) which does fine-tuning exactly like this paper considers for both discrete and continuous diffusion models. I kindly request the authors to consider adding this important baseline. Moreover, it is a bit surprising that there is no text experiment given the heavy emphasis on using Masked Diffusion Models which have primarily been introduced for text. I would be encouraged to see a text experiment---perhaps of a similar scale to Zhao et. al 2024---to highlight that SVDD can be applied in the text setting.\\n\\nThe current experimental findings in Table 2 are not complete as they do not show other important aspects of the generated sample. They simply show that reward is maximized but this could also happen through gamification of the reward function. For instance, I would appreciate the authors providing sample-based diversity metrics to quantify how bad the drop in diversity is among the baselines. At the very minimum, FID scores for images should be provided and I'll let the authors determine appropriate diversity metrics for the other domains to complement the findings in Table 2.\\n\\n**Closing remarks**\\n\\nHaving said all of these weaknesses, I will note that I am open to significantly raising my score if **all of my concerns** are adequately addressed to my level of satisfaction. I will also state that I did not read the appendix so if I have missed something I would appreciate a pointer to the result there.\\n\\nI encourage the authors in their rebuttal endeavors and I hope they can strengthen the paper which I would like to eventually recommend for acceptance but not in its current state.\\n\\n\\n**References**\\n\\nVenkatraman, Siddarth, et al. \\\"Amortizing intractable inference in diffusion models for vision, language, and control.\\\" arXiv preprint arXiv:2405.20971 (2024).\\n\\nZhao, Stephen, et al. \\\"Probabilistic inference in language models via twisted sequential monte carlo.\\\" arXiv preprint arXiv:2404.17546 (2024).\", \"questions\": \"I would appreciate it if the authors could address my theoretical concerns regarding 1.) what is the optimal distribution hit by SVDD 2.) what is the bias introduced in the MC estimate and 3.) the actual computational cost of SVDD-MC.\\n\\nIn addition, I would appreciate it if the authors could carry out the additional experiments I have suggested with added diversity quantification.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your detailed review. We have addressed your concerns by explaining (1) an initial bias problem is already considered, (2) more details on the approximation error of SVDD-MC, (3) the difference between SMC and SVDD, (4) why we don\\u2019t emphasize diversity metrics and comparison with fine-tuning methods like RTB.\\n\\n> *W. Does an initial bias problem exist? Relation with Domingo-Enrich et al. 2024*\\n\\n**There is no contraction between our paper and Domingo-Enrich et al. 2024, and our Theorem 1 considers an initial bias problem.** Since Domingo-Enrich et al. 2024 use continuous-time formulation, but we use discrete-time formulation, it might lead to confusion. However, our statement says we need to sample from the exponentially weighted initial distribution but not the original one, which is consistent with Domingo-Enrich et al. 2024. While their work deal with it by changing the schedule but without changing initial distributions, we handle it by sampling from a weighted initial distribution with value functions. \\n\\n> *W. Approximation error in SVDD-MC* \\n\\nWe acknowledge that our approximation is heuristic, and we will add more discussion. However, we emphasize that in SVDD-MC, our algorithm works well without this heuristic. **We claim that empirically, the algorithm works more stable with this heuristic.**\\n\\n> *W. Difference/Relation with Sequential Monte Carlo.* \\n\\n**While we don\\u2019t intend to claim, our algorithm is not related to SMC (indeed, we exactly claim that our algorithm is known as nested-SMC in the SMC literature in Appendix B.2.), a naively adapted version of Zhao et al. 2024 is different from our algorithm.** That\\u2019s why we distinguish between SMC and SVDD. This naively adapted version is specified in Appendix B.1, and this is different from SVDD. We are happy to answer more if certain points are unclear. \\n\\nHere are the primary differences (summary of Appendix B.2.) \\n\\n* **SVDD is tailored to reward optimization: SVDD is considered more suitable for optimization than SMC, as empirically shown in Section 7**. This is because, when using SMC for reward maximization, we must set $\\\\alpha$ very low, leading to a lack of diversity. This is expected, as when $\\\\alpha$ approaches 0, the effective sample size reduces to 1. This effect is also evident in our image experiments, as shown in Figure 4. Although SMC performs well in terms of reward functions, there is a significant loss of diversity. Some readers might think this could be mitigated by calculating the effective sample size based on weights (i.e., value functions) and resampling when the effective size decreases; however, this is not the case, as the effective sample size does not directly translate into greater diversity in the generated samples. In contrast, SVDD, maintains much higher diversity. \\n\\n* **Ease of parallelization in SVDD**: SVDD is significantly easier to parallelize across multiple nodes. In contrast, SMC requires interaction between nodes for parallelization. This advantage is also well-documented in the context of nested-IS SMC versus standard SMC (Naesseth et al., 2019). More specifically, \\n\\n 1. Twisted SMC methods typically resample particles across the entire batch, which can introduce sequential dependencies and complicate parallelization on modern hardware (e.g., GPUs). This is because particle resampling requires centralized coordination to determine which particles are retained or replaced.\\n\\n 2. In contrast, SVDD avoids explicit resampling across the batch by leveraging soft value functions that guide particle updates, enabling fully parallel processing without centralized coordination. This makes SVDD inherently more scalable, particularly for applications involving large diffusion models.\\n\\n* In SMC, the ``ratio'' is approximated: In SVDD, we approximate each \\n$\\\\exp(v_{t-1}(x_{t-1})/\\\\alpha)$\\nas a weight. However, in standard SMC, the ratio is approximated as a weight: \\n\\\\begin{align*}\\n \\\\frac{\\\\exp(v_{t-1}(x_{t-1})/\\\\alpha) }{\\\\exp(v_{t}(x_{t})/\\\\alpha)}.\\n\\\\end{align*}\\nThe key difference is that in SMC, both the numerator and the denominator are approximated, which could lead to greater error propagation.\\n\\n> *W. Computational Cost of SVDD-MC*\\n\\nA. We acknowledge the reviewer's concern about the computational cost of SVDD-MC. Below, we address this in detail: \\n\\nWe acknowledge that estimating the soft value function in SVDD-MC involves additional rollouts, which introduce computational overhead. However, this computation is minor compared to that needed for fine-tuning the pre-trained diffusion model. Indeed, in classifier guidance, we have a similar situation. **However, to our knowledge, the diffusion model community agrees that training classifiers is much easier than fine-tuning generative models because training classifiers is technically just supervised learning.**\\n\\nFurthermore, our paper recommends using the SVDD-PM method when it is hard to train in an MC way.\"}", "{\"comment\": \"I appreciate the authors' kind response and acknowledge their various points. Given that the rebuttal confirms that SVDD should in theory sample from the target distribution presented in Section 3.2 of the paper, I still have concerns about the inconsistency in the setting of $\\\\alpha$ across baselines leading to an evaluation biased towards SVDD. Because of this I maintain my original score, though I hope the authors revise their manuscript with updated experiments as I think the ideas in the manuscript are interesting and would be curious to see the results.\"}", "{\"comment\": \"We appreciate your feedback. We have addressed your concerns by explaining more about (1) the approximation quality of value functions, (2) the computational overhead of value function learning, and (3) the superior performance of our algorithm over the baseline.\\n\\n> *Q. The soft value function seems to difficult to approximate in general. Is there any analysis or justification to quantify the quality of the approximation? How does one know a good approximation is indeed attained? Moreover, how does the approximation quality matter for the generation?*\\n\\n- **Approximation error of SVDD-MC**: We evaluate the approximation of the soft value function through the learning loss, as well as its impact on the downstream reward maximization tasks. **Specifically, in Figure 6 in the Appendix, we show the training curves of value functions in SVDD-MC.** We know a good approximation is attained when the learning loss of the value function converges to a lower MSE. The consistent performance improvements across multiple domains also serve as an empirical validation of the approximation quality. \\n\\n- **Approximation of SVDD-PM**: We will add more ablation studies on the value function quality, including SVDD-PM. **We would also like to emphasize that this approximation is the golden standard in current related literature ([1,2,3]), such as classifier guidance variants when rewards are classifiers.**\\n\\n[1] Chung, H., J. Kim, M. T. Mccann, M. L. Klasky, and J. C. Ye (2023). Diffusion posterior sampling for general noisy inverse problems. ICLR \\n\\n[2] Ho, J., T. Salimans, A. Gritsenko, W. Chan, M. Norouzi, and D. J. Fleet (2022). Video diffusion models. Advances in Neural Information Processing Systems 35, 8633\\u20138646.\\n\\n[3] Bansal, A., H.-M. Chu, A. Schwarzschild, S. Sengupta, M. Goldblum, J. Geiping, and T. Goldstein (2023). Universal guidance for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, \\n\\n> *Q. Is there any additional computational overhead for the proposed method? Is the approximation to the soft value function costly?*\\n\\nA. We acknowledge that there is additional computational overhead, primarily due to the need for multiple forwards passes through the pre-trained diffusion model to estimate the value function. We have discussed this in Sections 5.3 and Section 7. Here is a summary. \\n\\n- **Inference computational overhead**: As noted in Section 5.3, the computational complexity increases linearly with M (the number of samples), while memory requirements depend on whether computations are parallelized. However, compared to the Best-of-N baseline, our method is significantly more efficient under the same computational and memory budgets. **Figures 3c and 3d illustrate that SVDD achieves better results while maintaining manageable overhead.**\\n- **Training computational overhead**: Our SVDD-PM variant eliminates additional training, reducing overhead when non-differentiable feedback is available. In contrast, SVDD-MC does require computational overhead; however, it learns from the reward function, which requires less computation than the fine-tuning of the pre-trained diffusion model.\\n\\n> *Q The performance gain does not seem to be very significant compared to simple baselines, say Best-of-N. From Table 2, Best-of-N baseline is only incrementally worse than the proposed method in molecule-related experiments.*\\n\\nA. Overall, the performance improvements are significant and consistent across various domains. While the performance differences may appear incremental in some domains/metrics, they represent substantial improvements in practical settings. **Table 2 shows consistent top 10% quantile improvements, which is critical for high-reward applications such as drug discovery.**\\n\\n> *Q. A minor question: Does the size of the diffusion model affect the performance of SVDD? I will be interested to see how this method works for diffusion models of different sizes.*\\n\\nA. Thank you for this suggestion. While we did not explicitly explore model size variations in this work, our method relies on inference-time computations without modifying the pre-trained diffusion model, making it inherently scalable to larger models. For future work, investigating the effect of diffusion model size on SVDD's performance is a valuable direction. We hypothesize that larger models with richer representations would further enhance the quality of the soft value function, potentially amplifying performance gains.\"}", "{\"comment\": \"We appreciate the reviewer\\u2019s feedback on novelty and the comparison with twisted SMC methods. We address you once by explaining that SVDD and SMC are significantly different, and the performance of SVDD is better than that of the SMC method in the alignment setting, as we wrote in Section 6, 7 And Appendix B. We are happy to answer more if the author still has concerns.\\n\\n**Q1: Whether the reward function is a classifier is not critical, as even for a usual reward r(x), we can understand it as an unnormalized probability....**\\n\\nA. Yes, we are on the same page. Indeed, we wrote explicitly in Section 6: \\n\\n*\\u201cThey (Wu et al.) can also be applied to reward maximization. Notably, similar to our work, these methods do not require differentiable models\\u201d.*\\n\\nThis means that as you said, we did not intend to claim whether we use classifiers or regressors is the primary difference. We indeed show how to do this conversion, i.e., use SMC for reward maximization (or alignment) in Appendix B.1. \\n\\nHowever, our main point lies in more different aspects. Here are the primary differences (summary of Appendix B.2.) \\n\\n* **SVDD is tailored to reward optimization: SVDD is considered more suitable for optimization than SMC, as empirically shown in Section 7**. This is because, when using SMC for reward maximization, we must set $\\\\alpha$ very low, leading to a lack of diversity. This is expected, as when $\\\\alpha$ approaches 0, the effective sample size reduces to 1. This effect is also evident in our image experiments, as shown in Figure 4. Although SMC performs well in terms of reward functions, there is a significant loss of diversity. Some readers might think this could be mitigated by calculating the effective sample size based on weights (i.e., value functions) and resampling when the effective size decreases; however, this is not the case, as the effective sample size does not directly translate into greater diversity in the generated samples. In contrast, SVDD, maintains much higher diversity. \\n\\n* **Ease of parallelization in SVDD**: SVDD is significantly easier to parallelize across multiple nodes. In contrast, SMC requires interaction between nodes for parallelization. This advantage is also well-documented in the context of nested-IS SMC versus standard SMC (Naesseth et al., 2019). More specifically, \\n\\n 1. Twisted SMC methods typically resample particles across the entire batch, which can introduce sequential dependencies and complicate parallelization on modern hardware (e.g., GPUs). This is because particle resampling requires centralized coordination to determine which particles are retained or replaced.\\n\\n 2. In contrast, SVDD avoids explicit resampling across the batch by leveraging soft value functions that guide particle updates, enabling fully parallel processing without centralized coordination. This makes SVDD inherently more scalable, particularly for applications involving large diffusion models.\\n\\n* In SMC, the ``ratio'' is approximated: In SVDD, we approximate each \\n$\\\\exp(v_{t-1}(x_{t-1})/\\\\alpha)$\\nas a weight. However, in standard SMC, the ratio is approximated as a weight: \\n\\\\begin{align*}\\n \\\\frac{\\\\exp(v_{t-1}(x_{t-1})/\\\\alpha) }{\\\\exp(v_{t}(x_{t})/\\\\alpha)}.\\n\\\\end{align*}\\nThe key difference is that in SMC, both the numerator and the denominator are approximated, which could lead to greater error propagation.\\n\\n**Q. I think the setting in this work should not be called \\\"reward maximization\\\" but be called \\\"alignment\\\" or \\\"reward sampling\\\" or similar names,**\\n\\nA. We agree that the term \\\"reward maximization\\\" could potentially be interpreted as misleading and appreciate the reviewer\\u2019s suggestion to use alternative terminology. However, we avoid using alignment because, in many biology settings, we often use alignment in different ways, such as sequence alignment. \\n\\n**Q \\\"SMC methods involve resampling across the \\u201centire\\u201d batch, which complicates parallelization. Additionally, when batch sizes are small, as is often the case with recent large diffusion\\\" is unclear. Does the batch size mean the number of particles in SMC?**\\n\\nWe appreciate the reviewer\\u2019s question on batch size and its role in SMC methods. To clarify: \\n\\n- Yes. In SMC, **batch size refers to the number of particles** (or samples) used in the resampling process as it is more explicitly detailed in Appendix B.2.\\n- In twisted SMC methods, resampling operates over **the entire batch**, which involves reweighting and redistributing particles globally. This is computationally expensive and less parallelizable, especially for small batch sizes.\\n- In our method (SVDD), batch size refers to the number of samples evaluated in parallel during diffusion, and resampling is replaced by value-based guidance. This avoids the bottleneck of centralized resampling and ensures scalability even with smaller batch sizes.\\n\\nWe will expand on these points in Section 6 to improve clarity and explicitly highlight the differences between SVDD and twisted SMC methods.\"}" ] }
2fZ9iOVzpR
A Study of Posterior Stability for Time-Series Latent Diffusion
[ "Yangming Li", "Yixin Cheng", "Mihaela van der Schaar" ]
Latent diffusion has demonstrated promising results in image generation and permits efficient sampling. However, this framework might suffer from the problem of posterior collapse when applied to time series. In this paper, we first show that posterior collapse will reduce latent diffusion to a variational autoencoder (VAE), making it less expressive. This highlights the importance of addressing this issue. We then introduce a principled method: dependency measure, that quantifies the sensitivity of a recurrent decoder to input variables. Using this tool, we confirm that posterior collapse significantly affects time-series latent diffusion on real datasets, and a phenomenon termed dependency illusion is also discovered in the case of shuffled time series. Finally, building on our theoretical and empirical studies, we introduce a new framework that extends latent diffusion and has a stable posterior. Extensive experiments on multiple real time-series datasets show that our new framework is free from posterior collapse and significantly outperforms previous baselines in time series synthesis.
[ "Latent Diffusion", "Time Series", "Diffusion Models", "Posterior Collapse", "Impact Analysis" ]
Reject
https://openreview.net/pdf?id=2fZ9iOVzpR
https://openreview.net/forum?id=2fZ9iOVzpR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yd5ueCN5l9", "rfrPwKpKiw", "loXXOb4A6T", "kKRVmiDjFX", "j2czkZbSMX", "hiRB8lt0Mq", "eF8ui3GM7Y", "b83fr7bmhh", "YrtsABMwpP", "YqfC2Wka0x", "Y0MVOpg1UR", "XXv7NgTc2X", "XAF2u3i9Il", "W1NXLntcCa", "UK8wlv62xA", "TJtCSEk5EE", "TAUrkI5M6F", "SVQStj9ub0", "RmLHMdn7Qm", "P9olSuYoaM", "NkiCITS16Q", "N0pdkwg7YI", "MdwDZG0JZA", "LLbi27rRpS", "GjpwRDQ5cC", "GCgux5hzwd", "E7fpB1Piaj", "DrqnyeJZfk", "Bb7EyuYZOG", "1IO8zJENpC" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1731983625715, 1731982368420, 1737523797463, 1732375329181, 1733201331862, 1732903156770, 1732540432600, 1730516582851, 1731984644642, 1732539393862, 1731985822540, 1732262373648, 1731980961054, 1732537937143, 1732507361044, 1732637242016, 1730710618300, 1732344192521, 1731986253571, 1731981519776, 1731986990873, 1730721589085, 1732263667363, 1730701325929, 1730765220032, 1734575094637, 1733196094869, 1730537116597, 1731980056660, 1732533312486 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_1wS4" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_1wS4" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_QHVr" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_aB87" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_h3BG" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_h3BG" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_QHVr" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_NiiK" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_wREh" ], [ "ICLR.cc/2025/Conference/Submission6853/Area_Chair_jZ7u" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_wREh" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_aB87" ], [ "ICLR.cc/2025/Conference/Submission6853/Authors" ], [ "ICLR.cc/2025/Conference/Submission6853/Reviewer_wREh" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal, Section 1\", \"comment\": \"## Part-2: A recap of dependency measures\", \"we_would_like_to_provide_a_brief_review_of_a_key_technique_introduced_in_our_paper\": \"dependency measures, which might help your understanding and answer related concerns.\\n\\n### **1, Motivation of this technique**\\nA serious consequence of posterior collapse is that the time-series decoder tends to ignore the input latent variable for conditional generation. While this fact is widely recognized in the literature [1,2,3], there remains a lack of a principled method to quantify how much a decoder might neglect the latent variable. The dependency measure was developed in this background, and it will be indispensable for practitioners to diagnose the problem of posterior collapse.\\n\\n### **2, How it works, and notations** \\n\\nThe dependency measure is a type of gradient-based attribution [7], **with many previous theoretical and empirical works [8,9] that verified its effectiveness**. The core idea is to measure the sensitivity of a temporal model to its input variable through first-order gradients. Given a time-series decoder $f$, the global dependency $m_{t,0}$ is a signed measure estimating the dependency of predicting variable $x_t$ on latent variable $z$, while local dependency $m_{t, i}, 0 < i < t$ quantifies such dependency on variable $x_i$.\\n\\n### **3, Key properties, especially about the negative dependencies** \\n\\nThe measure $m_{t, i}, 0 \\\\le i < t$ is always bounded between $-1$ and $1$, satisfying that $\\\\sum_{0 \\\\le i < t} m_{t, i} = 1$. \\n\\nAbout the negative measures, please note:\\n- **Both positive and negative measures are valid**, though positivity is more common to see because typical time series exhibit structural dependencies among variables. \\n- There are cases where time series are shuffled or get noisy, such that non-positive measures might be observed.\\n\\nIn summary, negative measures do not mean bad, and they are not directly related to posterior collapse except for observing negative global dependency $m_{t, 0}$.\\n\\n### **4, Use cases, and relation to\\u00a0less expressivity** \\nRegarding the use cases of dependency measure and its relation to less expressivity, please note the following facts from our paper:\\n\\n- Proposition 3.1 of our paper only considered the cases of a fully collapsed posterior: $P(z | X) = P(z)$, where Latent Diffusion will be as inexpressive as a simple VAE. \\n- The dependency measure is a diagnostic tool for the cases of either fully or partially collapsed posterior [3]: $P(z | X) \\\\approx P(z)$, where the global dependency $m_{t, 0}$ will be close to $0$ for every time step $t$, or soon vanishes with increasing time $t$. \\n\\nTherefore, **The dependency measure extends the applicability of Proposition 3.1 to more scenarios.**\\n\\nThis part aims to address your related concerns in Weakness-3 and Question-1.\\nWe welcome any further questions you might have.\\n\\n## Part-3: Other concerns in Question-2\\n\\nAs mentioned in our Part-1 answer, the decoder of vanilla Latent Diffusion only has access to the last variable of the backward diffusion model: $z^0 = z$. Similarly, in our framework, the latent variable fed into the decoder is sampled from a very small number of variables in the backward process. Appendix F.1 of our paper showed experiment results (i.e., Table 2) about the effect of that number. For jointly training the autoencoder and diffusion model, we found it quite stable in practice.\\n\\n## References\\n[1] Generating Sentences from a Continuous Space, ACL-2016\\n\\n[2] Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing, NAACL-2019\\n\\n[3] Controlling Posterior Collapse by an Inverse Lipschitz Constraint on the Decoder Network, ICML-2023\\n\\n[4] Latent Diffusion for Language Generation, NeurlPS-2023\\n\\n[5] High-Resolution Image Synthesis with Latent Diffusion Models, CVPR-2022\\n\\n[6] A Variational Perspective on Diffusion-Based Generative Models and Score Matching, NeurlPS-2021\\n\\n[7] Axiomatic Attribution for Deep Networks, ICML-2017\\n\\n[8] A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions, ICML-2022\\n\\n[9] Guided Integrated Gradients: An Adaptive Path Method for Removing Noise, CVPR-2021\"}", "{\"title\": \"Rebuttal, Section 0\", \"comment\": \"We thank the reviewer for his or her comprehensive and constructive feedback.\\n\\n## Part-1: A review of Latent Diffusion, posterior collapse, and our framework\\n\\nWe would like to provide a step-by-step review of the posterior-collapsed latent diffusion and our framework, which might help your understanding and answer related concerns.\\n\\n### **1, How does Latent Diffusion samples?**\", \"the_sampling_process_of_latent_diffusion_takes_two_steps\": \"- The backward process of a diffusion model incrementally denoises a Gaussian noise $z^L$ into a latent variable $z = z^0$; \\n- The decoder $f$ of an autoencoder renders the sampled variable $z$ into time series $X$. \\n\\nTherefore, as a kind reminder, **it is the last variable $z^0$ of the backward process that controls the time-series decoding**, while all its previous variables $z^i, 1 \\\\le i \\\\le L$, are not involved.\\n\\n### **2, Definition of posterior collapse**\\n\\nBased on the above fact, the posterior collapse in our paper was only defined for the last variable $z^0$ in the backward process, rather than for others $z^i, 1 \\\\le i \\\\le L$. We believe this problem setup is quite appropriate, and **defining \\\"time-dependent and time-independent\\\" sub-problems, as you recommended, might not align with the architecture of Latent Diffusion**, unless the decoder $f$ has access to more than just the last variable of the backward process. \\n\\nOn the other hand, the definition of posterior collapse (i.e., \\\"Problem Formulation\\u201d paragraph in Sec. 3.1) in our paper is consistent with many previous works in the literature [1,2,3]. We would appreciate any references with the suggested problem setup, though we believe their applicability to our paper might be limited.\\n\\n### **3, The significance of studying posterior collapse in Latent Diffusion**\\n\\nLatent Diffusion represents one of the state-of-the-art generative models, which is well-known in both academia (e.g., LD4LG [4]) and industry (e.g., Stable Diffusion [5]). Therefore, it is very important to study the potential risks of this advanced architecture, such as posterior collapse.\\n\\nOn the other hand, please note some key findings that are first presented in our paper:\\n\\n- A fully posterior-collapsed Latent Diffusion is equivalent to a simple VAE, making it a much less capable generative model than even a vanilla diffusion model [6];\\n- In cases of partial posterior collapse [3], we introduced a principled method: dependency measure, to quantify the severity of the problem, identifying a previously unknown phenomenon: dependency illusion;\\n- As shown in Sec. 4.2 of our paper, the posterior collapse can be perfectly addressed with the diffusion process.\\n\\nThese points clearly indicate that **the problem of posterior-collapsed Latent Diffusion goes far beyond previous research focusing on a simple autoencoder**, calling for a more systematic study. Our work was inspired by this background.\\n\\n### **4, How does our framework address the problem?**\\n\\nThe key component in our framework to address posterior collapse is the collapse simulation loss as defined in Eq. (14). The core idea is to apply the diffusion process to simulate a posterior-collapsed latent variable $z$: $P(z|X) \\\\approx P(z)$, and penalize the decoder if it yields high conditional probability $P(X|z)$ with non-informative variable $z$. In other words, the defined loss forced the decoder $f$ to condition on latent variable $z$ for generation, making it informative about time series $X$.\\n\\nBesides the promising main experiment results measured by Wasserstein distance, **Fig. 1 and Fig. 4 of our paper (i.e., dependency measures over time) showed that the latent variable $z$ in our models maintained a stable control over the decoder$ f$ with increasing time $t$, indicating a non-collapsed posterior $P(z | X)$**.\\n\\nThis part aims to address your concerns in Weakness-1, 2, 4.\\nWe welcome any further questions you might have.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": [\"We thank the reply from the reviewer. Below is a point-by-point summary highlighting how our rebuttal tried to address your concerns:\", \"\\\"1, How does Latent Diffusion samples?\\\" of Part-1 clarified that time-series decoding is only controlled by the last backward latent variable $z = z^0$ ***(instead of any process), so your concern about \\\"There are potentially time series which are driven by a static latent process\\\" does not apply to the architecture of Latent Diffusion***. *In other words, the virtual time steps in diffusion models have nothing to do with the time steps in time-series data*;\", \"\\\"2, Definition of posterior collapse\\\" of Part- clarified that ***our definition of posterior collapse is consistent with many previous works, and we believe that your concern about \\\"time-dependent and time-independent mode collapses\\\" does not fit the problem setting of our paper***. We would also appreciate any references you could provide;\", \"\\u201d3, The significance of studying \\u2026\\\" of Part-1 aimed to address your concern about \\\"how the introduced technique solved the problem of mode collapse\\u201d;\", \"\\\"4, How does our framework address\\u2026\\\" of Part-1 aimed to address your concern about \\u201chow the introduced technique solved the problem of mode collapse\\u201d;\", \"\\u201d2, How it works ...\\\" and \\\"3, Key properties \\u2026 negative dependencies\\\" of Part-2 aimed to address your concerns about \\\"there is little theoretical analysis exploring its properties\\\" and \\\"negative local dependency\\u201d. We reviewed the key properties of dependency measures and clarified about negative dependencies;\", \"\\\"4, \\u2026 relation to\\u00a0less expressivity\\\" of Part-2 aimed to address your concern about \\u201cits relationship to the reduction to a VAE model\\u201d.\", \"A kind reminder is that *your review repeatedly referred to \\\"mode collapse\\\", instead of the posterior collapse studied by our paper*. ***These two terminologies have very distinct definitions in the context of generative models***. We are concerned that there might be some misunderstandings that we could further address.\", \"Thank you again for your reply, and we are looking forward to the opportunity to further clarify any of your concerns that have been not addressed well.\"]}", "{\"comment\": \"Glad to receive your new question! Our answer is yes: please refer to Appendix F.3 of our paper, showing that both text and time-series generative models were prone to posterior collapse, which was not the case for images. Therefore, the problem is prominent\\u00a0in typical sequential data, including time series. ***We hope our previous rebuttal had sufficiently addressed your concerns, and if so, we would greatly appreciate it if you could consider improving your rating.***\"}", "{\"title\": \"Looking Forward to Your Feedback for Our Rebuttal\", \"comment\": \"Dear Reviewer NiiK,\\n\\nWe would like to thank you again for your kind and insightful review! As the discussion stage is approaching its extended deadline, we noticed that we have not yet heard from you. ***We look forward to your feedback for our rebuttal***, and we would appreciate any improvement in the rating if your concerns were adequately addressed!\\n\\nBest Regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you very much for your reply! I understand that you are currently occupied with a large volume of review tasks. If possible, we would appreciate it if you could let us know whether our previous rebuttal has addressed your concerns.\"}", "{\"summary\": \"This work starts from an analysis on the posterior collapse issue of latent diffusion for capturing time series. In particular, the KL-divergence from the standard Gaussian prior to the latent variable distribution approximated using latent diffusion, may reduce the time series latent diffusion to a VAE model. The authors define a dependency measure, which shows that the influences of the latent variables on decoding the observations, will decrease to zero along the diffusion time steps. In particular, as analyzed in the paper, the decoder built upon recurrent neural nets may decode the current observations only using the past observations, and thus leads to the dependency illusion issues. To address the problems, the paper develops a novel framework, in which the authors remove the KL-divergence regularization that causes the posterior collapse, decode the predicting observations using the latent variable sampled at intermediate diffusion steps, and introduce a novel penalty to avoid dependency illusion issues. The final experiments demonstrate the new framework can effectively avoid posterior collapse, and thus achieves superior generation quality, in comparisions to some SOTA time series latent diffusion methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Novelty: It is the first work to discuss the posterior collapse issue for time series latent diffusion. In particular, the paper introduces the novel dependency measure to quantify how the impacts of the latent variables on the generation of the predicting observations, decrease along the time steps. The authors develop a novel framework, which effectively avoid the dependency illusion issue, and outperforms the related time series latent diffusion models in terms of generation quality.\", \"clarity\": \"This work clearly illustrates the posterior collapse and dependency illusion issues by plotting the dependency measures over time steps. The most parts of the analysis are clearly presented, and easy-to-follow.\", \"significance\": \"The work demonstrate a significant issue for latent diffusion being applied to capturing time series data. The introduce dependency measure might be used in quantifying the posterior stability of the other related methods, and thus appears to be crucial.\", \"weaknesses\": \"The final experiments only demonstrate the compared models on only three dataset, using Wasserstein distance to measure between the true and generated time series. Perhaps the experiments could be enhanced by considering more evaluations metrics?\", \"questions\": \"I agree on the importance of posterior collapse issue found by the authors. I am wondering how is the generation performance of time series diffusion model in which latent variables have the same dimension as the observations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for such kind and constructive feedback.\\n\\n## Part-1: Time-series Generative Models vs Forecasting Models.\\n\\nOur paper focuses on Latent Diffusion, a type of Generative Models, which are very different from time-series Forecasting Models. **The following table compares the Generative and Forecasting Models, showing their significant difference in evaluation metrics.**\\n\\n| | **Time-series Generative Models** | **Time-series Forecasting Models** |\\n|--------------------------|------------------------------------------------------------|----------------------------------------------------------------|\\n| **Representative Methods** | Time-series GAN, VAE, Diffusion Models, etc. | ARIMA, TCNs, Transformer, etc. |\\n| **Task Definition** | Learning a latent representation $z$ of time series $X$, with a map to convert it into time series: $P(X, z)$ | Conditioning on a sequence of observations to predict the next one: $P(x_n \\\\mid x_{n-1}, x_{n-2}, \\\\cdots, x_1)$ |\\n| **Evaluation Metrics** | **Divergence measures (e.g., KL divergence) or Wasserstein distance that measure the distribution gap between the generated $X\\u2019$ and real time series $X$** | Some accuracy-like metric $l$ (i.e., mean square error (MSE) or F1) that are defined for the prediction of every observation: $l(x_i, x\\u2019_i)$ |\\n| **Application Scenarios** | Sensitive Data Anonymization, Data Synthesis for Privacy Protection, Molecular Design, etc. | Stock price prediction, weather forecasting, etc. |\\n\\n**Key point from the table:** Generative Models (e.g., Latent Diffusion) are evaluated by distribution-level metrics (e.g., Wasserstein distance): comparing the generated and real sample distributions, which largely differ from Forecasting Models that are evaluated by observation-level accuracy-like metrics (e.g., MSE). Such a distinction stems from their different model definitions.\\n\\n**Diverse metrics adopted in our paper:** Previously, We had considered the diversity of evaluation metrics. In Appendix F.3 of our paper, Table 4 compared our models with the baselines in terms of another widely used metric: maximum mean discrepancy (MMD) [1].\\n\\nThis part aims to address your concern in Weakness-2. We welcome any further questions you might have.\\n\\n## Part-2: Other advanced baselines, and ablation studies.\\n\\nAs you suggested, **we have additionally adopted two up-to-date time-series baselines for comparison**. One is Frequency Diffusion [2], a (not latent) diffusion-based Generative Model appearing in ICML-2024; The other is Neural STPP [3], a flow-based Generative Model appearing in NeurlPS-2023. The experiment results are shown in the below table.\\n\\n| **Method \\\\ Dataset** | **MIMIC** | **Earthquakes** |\\n|-------------------------------------------------------|-----------|-----------------|\\n| Transformer Latent Diffusion (CVPR-2022) | 5.02 | 5.91 |\\n| Neural STPP (NeurIPS-2023) | 5.13 | 5.82 |\\n| Frequency Diffusion (ICML-2024) | 4.56 | 5.07 |\\n| Transformer Latent Diffusion w/ Our Framework | **2.13** | **2.49** |\\n\\nWe can see that Latent Diffusion is competitive with up-to-date time-series generative baselines, and it can significantly outperform the baselines with our framework, showing the significance of addressing posterior collapse. **Previously, we had conducted ablation experiments as shown in Table 2 of our Appendix F.1**, verifying the effectiveness of different components (e.g., collapse simulation) in our framework and interpreting the effects of their hyper-parameters.\\n\\nThis part aims to address your concern in Weakness-1, 3.\\nWe welcome any further questions you might have.\\n\\n## References\\n[1] Training generative neural networks via maximum mean discrepancy optimization, UAI-2015\\n\\n[2] Time Series Diffusion in the Frequency Domain, ICML-2024\\n\\n[3] Automatic Integration for Spatiotemporal Neural Point Processes, NeurlPS-2023\"}", "{\"comment\": \"Thank you for your positive feedback, and thanks again for your great efforts in reviewing our paper!\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for such kind and professional feedback.\\n\\n## Part-1: More datasets and evaluation metrics.\\n\\nDue to the limited space in the main text, we previously placed the experiment results with two extra time-series datasets (i.e., Retail and Energy) and another evaluation metric (i.e., MMD) in Table 4 of our Appendix F.3. We also explored other data modalities (e.g., texts) in Table 5 and Table 6 of our Appendix F.4.\\n\\nThis part aims to address your concerns in Weakness-1.\\nWe welcome any further questions you might have.\\n\\n## Part-2: Same-dimensional latent variables.\\n\\nIn the case where latent variable\\u00a0$z$ has the same dimension as $X$, the issue of posterior collapse might indeed be mitigated to some extent, though\\u00a0such a high-dimensional latent variable will make the latent diffusion as computationally costly as the vanilla diffusion model.\\n\\nAs you suggested, we conducted an experiment that ran latent diffusion with a latent variable of the same dimension (i.e., 360) as time series. The results are as below.\\n\\n| **Method** | **Dimension** | **Backbone** | **MIMIC** |\\n|------------------------|---------------|----------------|-----------|\\n| Latent Diffusion | 64 | Transformer | 5.02 |\\n| Latent Diffusion | 360 | Transformer | 3.71 |\\n| Our Framework | 64 | Transformer | **2.13** |\\n| Latent Diffusion | 64 | LSTM | 5.19 |\\n| Latent Diffusion | 360 | LSTM | 3.82 |\\n| Our Framework | 64 | LSTM | **2.29** |\\n\\nThis table shows that\\u00a0**the performance gain achieved by a high-dimensional latent variable is not as significant as that achieved by our framework**. One possible explanation for these results is that the high dimensionality of latent variables makes the diffusion model harder to learn [1, 2], though it can mitigate the issue of posterior collapse.\\n\\nThis part aims to address your concerns in Question-1. We welcome any further questions you might have.\\n\\n## Reference\\n[1] High-Resolution Image Synthesis with Latent Diffusion Models, CVPR-2022\\n\\n[2] Score-based Generative Modeling in Latent Space, NeurlPS-2021\"}", "{\"title\": \"RE\", \"comment\": \"Ive read your feedback, and want to thank you. I stick with my score.\"}", "{\"title\": \"Rebuttal, Section 0\", \"comment\": \"We thank the reviewer for providing such comprehensive and constructive feedback.\\n\\n## Part-1: Time-series Generative Models vs Forecasting Models.\\n\\nOur paper focuses on **Latent Diffusion, a type of Generative Models different from your recommended models (e.g., ARIMA and TCNs), which belong to time-series Forecasting Models** and are indeed free from posterior collapse. The following table compares the two classes of models, showing the use cases of our paper.\\n\\n| | Time-series Generative Models | Time-series Forecasting Models |\\n|-----|------------|-----------|\\n| **Representative Methods** | Time-series GAN [1], VAE, Diffusion Models, etc. | ARIMA, TCNs, Transformer, etc. |\\n| **Task Definition** | Learning a latent representation $z$ of time series $X$, with a map to convert it into time series: $P(X, z) $ | Conditioning on a sequence of observations to predict the next one: $P(x_{n} \\\\mid x_{n-1}, x_{n-2}, \\\\cdots, x_1) $ |\\n| **Main Concerns** | **Posterior collapse**, fairness [2], memorization [3], etc. | Model expressiveness, autoregressive modeling, graph neural networks, etc. |\\n| **Application Scenarios** | Sensitive Data Anonymization [10], Data Synthesis for Privacy Protection [11], Molecular Design [12], etc. | Stock price prediction, weather forecasting, etc. |\\n\\n\\n**Key points from the table:** Time-series Forecasting Models (e.g., ARIMA) are without a key component of Generative Models (e.g., Latent Diffusion): latent variable z, which might incur posterior collapse. For this reason, well-performing Forecasting Models do not suffer from posterior collapse, though we can see that Generative Models also have their unique values in real-world applications (e.g., Data Synthesis and Drug Discovery).\\n\\n**Advanced time-series architectures in our models:** On the other hand, we built the time-series decoder of our Generative Models with either modern Transformer or LSTM (see Table 1 of our paper), which are both your recommendations. As indicated above, while those architectures are without posterior collapse, the latent variable z that initializes them is the root cause incurring the problem.\\n\\nThis part aims to address your concerns in Weakness-1, 3 and Question-1, 3.\\nWe welcome any further questions you might have.\\n\\n## Part-2: Latent Diffusion is the state-of-the-art, with more recent baselines adopted.\\n\\nOur paper is based on Latent Diffusion that first appeared in CVPR-2022, representing one of the most advanced architectures of Generative Models. **There are many very recent papers that focused on time-series Latent Diffusion**, or even more broadly: sequence data with Latent Diffusion. For example, TimeLDM [7], LD4LG [8], and AudioLDM [9] that appear in NeurlPS-2023, ICML-2023, and the arXiv months ago. Therefore, from the recent literature, we believe that Latent Diffusion stands as a state-of-the-art time-series Generative Model.\\n\\nAs you recommended, **we have additionally adopted two up-to-date time-series baselines for comparison**:\\n1. One is Frequency Diffusion [10], a (not latent) diffusion-based Generative Model appearing in ICML-2024;\\n2. The other is Neural STPP [11], a flow-based Generative Model appearing in NeurlPS-2023. \\n\\nThe experiment results are shown in the below table.\\n\\n\\n| Method / Dataset | MIMIC | Earthquakes |\\n|---------------------------------------------------------------|--------|-------------|\\n| Transformer Latent Diffusion (CVPR-2022) | 5.02 | 5.91 |\\n| Neural STPP (NeurIPS-2023) | 5.13 | 5.82 |\\n| Frequency Diffusion (ICML-2024) | 4.56 | 5.07 |\\n| Transformer Latent Diffusion w/ Our Framework | **2.13** | **2.49** |\\n\\nWe can see that **vanilla Latent Diffusion achieved competitive performances to the up-to-date baselines** (e.g. Neural STPP), showing that it is still very advanced. In particular, **our framework significantly improves Latent Diffusion, with performances notably outperforming the baselines**, indicating that posterior collapse is indeed a performance bottleneck of Time-series Latent Diffusion. The above empirical results further confirmed that Latent Diffusion is a state-of-the-art time-series Generative Model, and verified the importance of addressing posterior collapse.\\n\\nThis part aims to address your concerns in Weakness-1, 2 and Question-1. We welcome any further questions you might have.\"}", "{\"title\": \"Thanks for the reply.\", \"comment\": \"Dear Authors, thanks for the reply. My concerns are addressed. Hence, I now increase my score.\"}", "{\"title\": \"Looking Forward to Further Feedback\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your comprehensive and constructive reviews, including some latest feedback.\\n\\n\\nWith the discussion stage nearing its conclusion, we look forward to hearing from all other reviewers.\\n\\nThank you, and have a wonderful day!\\n\\nBest Regards,\\n\\n\\nThe Authors\"}", "{\"comment\": \"Thanks for the response and update! These do not affect my original overall review and I keep the original rating.\"}", "{\"summary\": \"This paper addresses the issue of posterior collapse in time-series latent diffusion models, a phenomenon that limits model expressivity by reducing it to a simpler VAE model. The authors introduce a dependency-measure regularization aimed at alleviating this collapse specifically within time-series data. Experiment results on three datasets (WARDS, MIMIC, and Earthquakes) demonstrates initial improvements in preventing posterior collapse over shorter timeframes.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"--The paper tries to focus on the specific issues of time-dependency collapse in the case of time series data and diffusion models.\\n\\n--The shuffling experiments help illustrate how a latent variable is not being used strongly throughout all time steps\", \"weaknesses\": \"--The problem is not sufficiently well motivated. In particular, the two types of mode collapse which in time series (time-dependent and time-independent) are not discussed. The reduction to a VAE is only about the elimination of the time-dependent influence. The impact of this simplification is not sufficiently discussed.\\n\\n--Moreover, the less expressivity is not shown explicitly to be a bad thing in the context of time series in general. There are potentially time series which are driven by a static latent process\\n\\n--Although the dependency measure is well-defined, there is little theoretical analysis exploring its properties and its relationship to the reduction to a VAE model\\n\\n--There is no analysis of the results showing specifically how the introduced technique solved the problem of mode collapse. Results with good Wasserstein distance do not directly imply that the issue of mode collapse was resolved.\\n\\n- This paper claims that they are the first to address the posterior collapse problem in latent diffusion for time series, but it really boils down to the old autoencoder problem. And the diffusion model became a redundant module when the input is a standard Gaussian distribution is a simple extension of the problem of autoencoder.\", \"questions\": \"It\\u2019s unclear to me why negative local dependency is bad. The authors claimed that it\\u2019s because the previous timestamp\\u2019s data may be from a low density region and therefore an outlier. But in case that the actual next value to be decoded should indeed be an extreme value, why is that problematic?\\n\\nCan you discuss the stability of the training? In cases where we 1) train the diffusion model and the decoder together 2) we require the decoder to decode the time series regardless of which timestamp\\u2019s noised version of the latent variable is selected.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' detailed feedback to my reviews. While it helps to reiterate the claimed contributions, I am not fully convinced with the argument and therefore would maintain my original score.\"}", "{\"title\": \"Rebuttal, Section 0\", \"comment\": \"Many thanks for your comprehensive and constructive feedback.\\n\\n## Part-1: A recap of dependency measures.\", \"we_would_like_to_provide_a_step_by_step_review_of_the_key_technique_introduced_in_our_paper\": \"dependency measures, which might help your understanding and answer related concerns.\\n\\n### **1, Motivation of this technique** \\n\\nA serious consequence of posterior collapse is that the time-series decoder tends to ignore the input latent variable for conditional generation. While this fact is widely recognized in the literature [1,2,3], there remains a lack of a principled method to quantify how much a decoder might neglect the latent variable. The dependency measure was developed in this background, and it will be indispensable for practitioners to diagnose the problem of posterior collapse.\\n\\n### **2, Underlying principle (related to your concerns)** \\n\\nThe dependency measure is a typical gradient-based attribution method [4], **with a specific design for time-series generative models, considering the variable length, autoregressive nature, and discrete structure of time series**. The core idea is to measure the sensitivity of a temporal model to its input variable through first-order gradients. The gradient-based attribution itself is principled, with many previous theoretical and empirical works [5,6] that verified its effectiveness.\\n\\n### **3, Notations and key properties**\\n\\nGiven a time-series decoder $f$, the global dependency $m_{t,0}$ is a signed measure estimating the dependency of predicting variable $x_t$ on latent variable $z$, while local dependency $m_{t, i}, 0 < i < t$ quantifies such dependency on variable $x_i$. \\n\\nThe measure $m_{t, i}, 0 \\\\le i < t$ is always bounded between $-1$ and $1$, satisfying that $\\\\sum_{0 \\\\le i < t} m_{t, i} = 1$. In typical cases where time series exhibit structural dependencies among variables, $m_{t, i}, 0 \\\\le i < t$ is mostly non-negative, so we can infer:\\n- $m_{t, i} \\\\approx 1$ means that the variable $x_i, i > 0$ or latent variable $z, i = 0$ dominates the attention of decoder $f$;\\n- $m_{t, i} \\\\approx 0$ suggests quite the opposite: the decoder $f$ ignores the variable.\\n\\n### **4, Use cases (related to your concerns)** \\n\\nAs mentioned in the beginning, the most significant symptom of posterior collapse is that the time-series decoder tends to ignore the input latent variable for conditional generation [1,2,3]. In that situation, based on the properties of dependency measures, the global dependency $m_{t, 0}$ should be close to $0$ for every time step $t$, or soon vanishes with increasing time $t$. In light of this diagnostic logic, **posterior collapse can be asserted when vanishing global dependencies (i.e., $m_{t, 0} \\\\approx 0$ for at least large $t$) are observed**.\\n\\nOn the other hand, based on the property that all measures sum to $1$, one can claim that the decoder heavily depends on input observations under the impact of posterior collapse: high local dependencies $\\\\sum_{1 \\\\le i < t} m_{t, i} = 1 - m_{t, 0} \\\\approx 1$. **This is exactly the case when we explained the upper right subfigure of Fig. 1**.\\n\\n### **6, For other data modalities (related to your concerns)** \\n\\nThe dependency measures are specifically designed for time series but can naturally be extended to other types of sequential data (e.g., text). As mentioned in our experiments in Appendix F.4, text Latent Diffusion also seriously suffers from posterior collapse, which only has a minor impact on image Latent Diffusion. Therefore, **dependency measures will yield similar outcomes for text and time-series data, while exhibiting different behavior for image data**.\\n\\nThis part aims to address your related concerns in Weakness-1, 2, 4 and Question-1. We welcome any further questions you might have.\"}", "{\"title\": \"Rebuttal, Section 1\", \"comment\": \"## Part-3: Explanations of some terminologies.\\n\\nWe prepared the below table that re-explains and connects some key terminologies appearing in our paper.\\n\\n| | **Posterior Collapse** | **Dependency Measure** | **Dependency Illusion** |\\n|-----|--------|--------|--------|\\n| **Definition in the paper** | Paragraph \\\"problem formulation\\\" in Sec. 3.1 | Definition 3.2 in Sec. 3.2 | Paragraph \\\"Insightful results\\\" in Sec. 3.3 |\\n| **Explanation** | The posterior $P(z \\\\mid X)$ reduces to the prior $P(z),$ indicating that the latent variable $z$ is not informative about data $X$ | A principled method to quantify the impact of observation $x_i$ or latent variable $z$ to predict observation $x_j, j > i$ | Different observations $x_i, x_j, 1 \\\\le i < j \\\\le n$ in time series $X$ are totally or almost independent, but the decoder $f$ still highly relies on $x_i$ to predict $x_j$ (e.g., high dependency measure $m_{j,i}$) |\\n| **Newly Introduced?** | No | Yes | Yes |\\n| **Negative impacts on Latent Diffusion** | Making Latent Diffusion less expressive as a Generative Model (Proposition 3.1) and reducing the sensitivity of decoder $f$ to latent variable $z$ (Sec. 3.3) | N/A | Another implication of posterior collapse: the decoder $f$ incorrectly captures the relationships between different observations, which is not desired for conditional generation $P(X \\\\mid z)$ |\\n| **Related experiments in the main text** | Fig. 1, Fig. 2, Fig. 4, Table 1 | Fig. 1, Fig. 2, Fig. 4 | Fig. 1, Fig. 2, Fig. 4, Table 1 |\\n\\nThis table aims to answer your Question-2, and we will include it in the final version for better clarity. We welcome any further questions you might have.\\n\\n## References\\n[1] Time-series Generative Adversarial Networks, NeurlPS-2019\\n\\n[2] On Measuring Fairness in Generative Models, NeurlPS-2023\\n\\n[3] On Memorization in Probabilistic Deep Generative Models, NeurlPS-2021\\n\\n[4] Anonymization Through Data Synthesis Using Generative Adversarial Networks, IEEE-2020\\n\\n[5] Data Synthesis based on Generative Adversarial Networks, VLDB-2018\\n\\n[6] Equivariant Diffusion for Molecule Generation in 3D, ICML-2022\\n\\n[7] TimeLDM: Latent Diffusion Model for Unconditional Time Series Generation, arXiv-2024\\n\\n[8] Latent Diffusion for Language Generation, NeurlPS-2023\\n\\n[9] AudioLDM: Text-to-Audio Generation with Latent Diffusion Models, ICML-2023\\n\\n[10] Time Series Diffusion in the Frequency Domain, ICML-2024\\n\\n[11] Automatic Integration for Spatiotemporal Neural Point Processes, NeurlPS-2023\"}", "{\"title\": \"Rebuttal, Section 1\", \"comment\": \"## Part-2: More recent baselines and other data modalities.\\n\\nAs you recommended, **we have newly adopted two up-to-date baselines that aimed to address posterior collapse**:\\n- One is Inverse Lipschitz Constraint [7] from ICML-2023;\\n- The other is Mutual Information Constraint [8] from JMLR-2022.\\n\\nThe experiment results are shown in the below table.\\n\\n| **Method \\\\ Dataset** | **MIMIC** | **Earthquakes** |\\n|-------------------------------------------------------------------------------|-----------|-----------------|\\n| Transformer Latent Diffusion | 5.02 | 5.91 |\\n| Transformer Latent Diffusion w/ Skip Connections | 3.75 | 3.69 |\\n| Transformer Latent Diffusion w/ Mutual Information Constraint (JMLR-2022) | 3.59 | 3.85 |\\n| Transformer Latent Diffusion w/ Inverse Lipschitz Constraint (ICML-2023) | 3.01 | 3.42 |\\n| Transformer Latent Diffusion w/ Our Framework | **2.13** | **2.49** |\\n\\nWe can see the two new baselines can indeed mitigate the problem of posterior collapse, though their performance gains are much smaller than our framework. We will include the above new experiment results in the final version.\\n\\nOn the other hand,*we had indeed considered other data modalities (e.g., images). **In Appendix F.4 of our paper, we compared our models with the baselines on text and image datasets**, with experiment results shown in Table 5 and Table 6.\\n\\nThis part aims to address your concerns in Weakness-3, 5 and Question-2.\\nWe welcome any further questions you might have.\\n\\n## Part-3: Other concerns in Weakness-6 and Question-2.\\n\\nWe have introduced 2 new baselines that addressed posterior collapse in our Part-2 answer to you, with other 2 up-to-date time-series generative baselines in our Part-2 answer to Reviewer NiiK. **These 4 new baselines are all from accepted papers in 2023 or 2024**. We will also delete repeatedly cited papers in the final version.\\n\\n**The key component in our framework that makes the decoder sensitive is the collapse simulation loss** as defined in Eq. (14). The core idea is to apply the diffusion process to simulate a posterior-collapsed latent variable $z$: $P(z|X) \\\\approx P(z)$, and penalize the decoder if it yields high conditional probability $P(X|z)$ in this case. Our experiments in Fig. 4 and Table 1 both verified the effectiveness of this method in practice.\\n\\n## References\\n[1] Generating Sentences from a Continuous Space, ACL-2016\\n\\n[2] Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing, NAACL-2019\\n\\n[3] Lagging Inference Networks and Posterior Collapse in Variational Autoencoders, ICLR-2019\\n\\n[4] Axiomatic Attribution for Deep Networks, ICML-2017\\n\\n[5] A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions, ICML-2022\\n\\n[6] Guided Integrated Gradients: An Adaptive Path Method for Removing Noise, CVPR-2021\\n\\n[7] Controlling Posterior Collapse by an Inverse Lipschitz Constraint on the Decoder Network, ICML-2023\\n\\n[8] Mutual Information Constraints for Monte-Carlo Objectives to Prevent Posterior Collapse Especially in Language Modelling, JMLR-2022\"}", "{\"summary\": \"The paper investigates the issue of posterior collapse in latent diffusion models for time series data, where the latent variable becomes ineffective in influencing the model\\u2019s output. The authors propose a dependency measure to quantify how much the decoder relies on the latent variable, highlighting not only posterior collapse but also a related phenomenon termed dependency illusion. Then the paper introduces a new framework to address these issues by removing KL-divergence regularization and enhancing the decoder\\u2019s sensitivity to the latent variable, improving posterior stability. Experiments demonstrate that the proposed method achieves better performance than standard latent diffusion models with posterior collapse mitigation techniques across various time series datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses a previously underexplored issue in time series diffusion models\\u2014posterior collapse\\u2014which has primarily been studied in variational autoencoders (VAEs) but not in the context of diffusion models for time series.\", \"The dependency measure provides an insightful tool for quantifying the decoder\\u2019s reliance on the latent variable. This measure enables detection of both posterior collapse and dependency illusion, offering valuable diagnostic capabilities for latent-variable models.\", \"The approach aligns with the paper\\u2019s theoretical objectives, yielding meaningful performance improvements.\"], \"weaknesses\": [\"The empirical evaluation lacks comparisons with stable time series models that naturally avoid posterior collapse, such as ARIMA, RNNs, LSTMs, transformers, and temporal convolutional networks. Including these baselines would provide context on whether the proposed framework offers advantages beyond mitigating posterior collapse. The author also did not compare with recent baselines for time series, which are diffusion-based. Please check papers published in NeurIPS/ICLR/ICML in the past two years.\", \"The paper references Bowman et al. (2016) to support claims about posterior collapse in latent-variable models for time series, which may be outdated. This raises questions about whether latent diffusion models represent the current state of the art in time series modeling. Comparing the approach with recent state-of-the-art time series methods would strengthen the justification for the proposed framework.\", \"Although the datasets used are realistic, the paper does not discuss broader real-world applications or scenarios where posterior stability is crucial, such as in anomaly detection or real-time forecasting. Adding context on practical use cases would clarify the framework\\u2019s relevance.\"], \"questions\": [\"Could the authors include comparisons with recent state-of-the-art time series models, such as ARIMA, LSTMs, transformers, and TCNs, which are naturally robust against posterior collapse? This would contextualize the proposed method\\u2019s advantages relative to stable baselines.\", \"Could the authors provide clearer definitions or examples for terms like dependency illusion and posterior collapse in the context of latent diffusion models? A simplified explanation would improve accessibility.\", \"Are there specific real-world applications, such as anomaly detection or real-time forecasting, where this framework would be particularly useful? A discussion of practical use cases would strengthen the framework\\u2019s relevance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Many thanks for your support, and thank you again for reviewing our paper!\"}", "{\"summary\": \"This paper proposes a new approach to establish a stable posterior for time series within the latent diffusion framework. The new approach circumvents the problematic KL-divergence regularization, prevents the posterior collapse, and maintains the influence of the latent variable in the decoder. The authors provide both theoretical and experimental support to their newly proposed framework.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The study of the latent diffusion model when applied in the context of time series is a trended topic and super interesting. The authors approach this by defining a proper dependency measure to quantify the problem of posterior collapse of the latent variable, and propose a new framework inspired by re-thinking the design of VAE and autoencoders. The new framework is equipped with new loss functions and regularizations, free from posterior collapse. The discussion comes together with empirical support. Overall, the paper's content is clear and easy to follow.\", \"weaknesses\": \"1. The experimental results mainly focus on real-world data to demonstrate the sampling benefits of the proposed method. Can the authors conduct synthetic data experiments to interpret and validate the effectiveness of the newly proposed component in the framework (e.g., the introduced loss function or regularization)?\\n\\n2. The prediction ability of a time series model is critical. Can the authors evaluate the proposed framework in terms of other metrics, such as the predictive MAE, to demonstrate its prediction ability?\\n\\n3. In addition to point 2, can the author compare with other advanced time-series models? Only compared with the latent diffusion family would not be convincing enough for advertising the model in the time series community.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the problem of posterior collapse in latent diffusion models, specifically when applied to time series data. The authors provide a systematic analysis of this issue, showing that posterior collapse can reduce the expressiveness of latent diffusion to that of a variational autoencoder (VAE). They introduce a novel dependency measure to quantify the impact of latent variables on the generation process and identify a phenomenon called dependency illusion when time series data are shuffled. Building on these insights, the authors propose a new framework that eliminates the KL-divergence regularization, permits an expressive prior distribution, and ensures the decoder remains sensitive to the latent variable. Extensive experiments demonstrate that this framework avoids posterior collapse and significantly improves time series generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe introduction of dependency measures to diagnose and address posterior collapse is both novel and insightful, providing a fresh perspective on an important issue within latent diffusion models.\\n2.\\tThe paper offers a solid theoretical foundation for the analysis of posterior collapse, and the proposed framework is well-motivated by both theoretical insights and empirical observations.\\n3.\\tThe proposed framework demonstrates significant improvements in the performance of time-series generation models, effectively addressing a key limitation in existing approaches.\", \"weaknesses\": \"1.\\tWhile the paper presents strong results for time-series data, it lacks a detailed discussion on the generalizability of the approach to other data modalities, such as images or text. Including a brief exploration or discussion of potential extensions could further enhance the contribution.\\n2.\\tThe experimental details, including specific configurations for baselines and the selection of hyperparameters, are not fully elaborated in the main text. Providing more comprehensive explanations in these areas would improve the paper\\u2019s clarity and reproducibility.\\n3.\\tAlthough the results are promising, some of the visualizations could be made more intuitive, particularly for readers unfamiliar with latent diffusion models. Additionally, converting the figures to vector graphics would significantly improve their quality, as several of the current images appear blurry and lack sharpness, which makes interpretation more difficult. Enhancing the clarity of the figures would improve the overall presentation of the paper.\", \"questions\": \"1.\\tCould the authors clarify how the dependency measure scales with longer time-series datasets? Does the framework handle large datasets efficiently?\\n2.\\tHave the authors considered extending this approach to other data types beyond time series? If so, how might the framework need to be adapted?\\n3.\\tIs there a specific reason for not including additional baselines, such as non-latent diffusion models, for comparison in the empirical section?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposed a metric to measure the posterior collapse and introduced a notion of the posterior collapse based on it. They then define an enhanced framework which can alleviate the posterior collapse problem. I found that their posterior collapse is different from the model collapse and Reviewer h3BG is incorrect to some degree.\\n\\nIn conjunction with the reviewers' concerns regarding the comparison with more baselines, however, I, myself, also have some concerns.\\n\\n1. They need to more carefully analyze what causes the posterior collapse. For instance, LSTMs and Transformers have oversmoothing and oversquashing problems and thereby, their latent vectors are limited in capturing all information in a long sequence. Isn't the posterior collapse by the low-capacity encoder, e.g., LSTMs and Transformers? \\n\\n2. Your new framework incurs more interactions and I agree that it can somehow address the problem. It would be nice if you can show that enhancing the encoder/decoder is not sufficient and your framework is needed. There are several papers on resolving the oversmoothing and oversquashing problems of RNNs and Transformers.\\n\\n3. After solving the above two questions, I recommend that you put more baselines since there are many time series synthesis methods after TimeGAN. Some of them are diffusion-based methods for time series synthesis. \\n\\nAll in all, I strongly encourage the authors improve the paper one more time. I think they can have a decent chance next time since their problem definition is new.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided good messages and however, most reviewers feel that this paper is slightly below the acceptance threshold.\"}", "{\"title\": \"One more question\", \"comment\": \"I completely agree that posterior collapse is a critical issue in diffusion models. However, a key concern is whether this problem is uniquely prominent or particularly evident in the time series domain.\"}", "{\"summary\": \"This paper aims to address the posterior collapse problem of latent diffusion for time series data. The authors propose a dependency measure method to quantify how posterior collapse happens. And they propose a KL-divergence regularization based method to improve the sensitivity of decoder to the latent variable for time-series latent diffusion.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors focus on an important issue of latent diffusion, that is posterior collapse, and propose a potential method to quantify the posterior collapse of latent diffusion for time series data.\", \"weaknesses\": \"1.Regarding to the dependency illusion, you give an example (upper right subfigure of figure 1) to explain. But it\\u2019s unclear from figure 1 how you arrive at the conclusion that \\\"Even when the time series is randomly shuffled and thus lacks structural dependencies, the decoder of latent diffusion still heavily relies on input observations (instead of the latent variable) for prediction.\\\" Could you clarify how you determine that the decoder \\\"heavily\\\" depends on input observations? Providing a more detailed explanation or additional quantitative evidence would help support this observation.\\n\\n2.In section 3.1, the definition of posterior collapse seems a general term for all data, not only time series data. In section 3.2, you introduce a dependency measure to demonstrate the occurrence of posterior collapse in time series data. How does this measure specifically address time series data? Would this measure yield the same conclusion if applied to non-time series data?\\n\\n3.As shown in Figure 4, the dependency of the decoder on the latent variable decreases across both datasets. Although this trend appears improved compared to Figure 2, it would strengthen your findings to compare your method against additional baseline models, rather than only basic latent diffusion.\\n\\n4.There is a lack of experimental evidence supporting that the proposed dependency measure can accurately assess the impact of the latent variable on the decoder. You should compare your method with other measurement approaches and demonstrate how it outperforms them, providing a more comprehensive validation of its effectiveness.\\n\\n5.The baselines are not sufficient enough. Only three baselines and all of them are before 2019. Please compare with more the state-of-art works.\\n\\n6.Reference in this paper seems to be too old. And some of them are repeated. For example, papers in line 573 and line 576 are the same one.\", \"questions\": \"1.You claim that \\u201cwhen applied to time series data, this framework might suffer from posterior collapse\\u201d in the first paragraph. Do you have any evidence to support this claim? Is this phenomenon due to the diminishing influence of the latent variable on the decoder over time steps? How do you justify the decreased dependency correspond to the posterior collapse of latent diffusion for time series data?\\n\\n2.In section 4.2, you mention that the the variational inference in your framework leads the latent variable to be smooth in its effect on decoder. Is this the reason why your framework can increase the sensitivity of decoder to latent variable? Can your framework be applied to non-time series data? It seems the proposed method is not specific to time series data.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for his or her kind and comprehensive feedback.\\n\\n## Part-1: More baselines, and other data modalities.\\n\\nAs you recommended, **we have additionally adopted two up-to-date baselines for comparison**:\\n1. One is Frequency Diffusion [1], a (not latent) diffusion-based Generative Model appearing in ICML-2024; \\n2. The other is Neural STPP, a flow-based Generative Model appearing in NeurlPS-2023. \\n\\nThe experiment results are shown in the below table.\\n\\n| Method / Dataset | MIMIC | Earthquakes |\\n|---------------------------------------------------------------|--------|-------------|\\n| Transformer Latent Diffusion (CVPR-2022) | 5.02 | 5.91 |\\n| Neural STPP (NeurIPS-2023) | 5.13 | 5.82 |\\n| Frequency Diffusion (ICML-2024) | 4.56 | 5.07 |\\n| Transformer Latent Diffusion w/ Our Framework | **2.13** | **2.49** |\\n\\nWe can see that Latent Diffusion is competitive with up-to-date time-series generative baselines, and it can significantly outperform the baselines with our framework, showing the significance of addressing posterior collapse.\\n\\nOn the other hand, **we had indeed considered other data modalities (e.g., images)**. In Appendix F.4 of our paper, we compared our models with the baselines on text and image datasets, with experiment results shown in Table 5 and Table 6.\\n\\nThis part aims to address your concerns in Weakness-1 and Question-2, 3. We welcome any further questions you might have.\\n\\n## Part-2: Scalability of dependency measures.\\n\\nFrom Eq. (10) of our paper, we can see that the computational complexity of a dependency measure is $O(NM)$, where $N$ is the number of Monte Carlo samples and $M$ is the length of time series. Therefore, the computational cost of the dependency measure linearly grows with the increasing sentence length, and this linear complexity can be further optimized with parallel GPU computation. In practice, a set of 10000 time series with an average length of 30 cost us only about 5min on a single GPU device. In summary, the dependency measure scales to large and long time-series datasets well.\\n\\nThis part aims to address your concern in Question-1. We welcome any further questions you might have.\\n\\n## Part-3: Other concerns about experiments.\\n\\nAs mentioned at the beginning of our Experiments section (i.e., Sec. 6), we move the Experiment Details section to Appendix E of our paper, due to the limited space in the main text. We will relocate that section and improve the graph quality as you suggested in the final version.\\n\\n## References\\n\\n[1] Time Series Diffusion in the Frequency Domain, ICML-2024\\n\\n[2] Automatic Integration for Spatiotemporal Neural Point Processes, NeurlPS-2023\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks the authors' patient feedback.\"}" ] }
2ezRxhlAxJ
Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile
[ "Hangliang Ding", "Dacheng Li", "Runlong Su", "Zhijie Deng", "Ion Stoica", "Hao Zhang" ]
Despite the promise of synthesizing high-fidelity videos, Diffusion Transformers (DiTs) with 3D full attention suffer from expensive inference due to the complexity of attention computation and numerous sampling steps. For example, the popular Open-Sora-Plan model consumes more than 9 minutes for generating a single video of 29 frames. This paper addresses the inefficiency issue from two aspects: 1) Prune the 3D full attention based on the redundancy within video data; We identify a prevalent tile-style repetitive pattern in the 3D attention maps for video data, and advocate a new family of sparse 3D attention that holds a linear complexity w.r.t. the number of video frames. 2) Shorten the sampling process based on multi-step consistency distillation; We split the entire sampling trajectory into several segments and perform consistency distillation within each one to activate few-step generation capacities. We further devise a three-stage training pipeline to conjoin the low-complexity attention and few-step generation capacities. Notably, with 0.1% pretraining data, we turn the Open-Sora-Plan-1.2 model into an efficient one that is 7.4x −7.8x faster for 29 and 93 frames 720p video generation with a marginal performance trade-off in VBench. In addition, we demonstrate that our approach is amenable to distributed inference, achieving an additional 3.91x speedup when running on 4 GPUs with sequence parallelism.
[ "Efficient inference", "video generation", "diffusion", "Transformer" ]
Reject
https://openreview.net/pdf?id=2ezRxhlAxJ
https://openreview.net/forum?id=2ezRxhlAxJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sd5uOIYy8o", "rhOh3OIBBY", "npdtS2nd0D", "j7x8eqkNEw", "hAteXikT6L", "aIiqf6Ew6V", "Rl42XXH6I5", "OyYyGy8OwK", "Nn7TKnRAct", "HVVhFQ35hN", "GQiO2rTBMG", "DvoaBb58Gi", "1uYbaptKOJ" ], "note_type": [ "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732177687314, 1730557155551, 1737523535753, 1732177775456, 1732776652509, 1730710191474, 1732175734514, 1734658212192, 1732176341638, 1732176578408, 1730298028022, 1730210948438, 1732175875778 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2845/Authors" ], [ "ICLR.cc/2025/Conference/Submission2845/Reviewer_AbNB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2845/Authors" ], [ "ICLR.cc/2025/Conference/Submission2845/Authors" ], [ "ICLR.cc/2025/Conference/Submission2845/Reviewer_tMR2" ], [ "ICLR.cc/2025/Conference/Submission2845/Authors" ], [ "ICLR.cc/2025/Conference/Submission2845/Area_Chair_2P1k" ], [ "ICLR.cc/2025/Conference/Submission2845/Authors" ], [ "ICLR.cc/2025/Conference/Submission2845/Authors" ], [ "ICLR.cc/2025/Conference/Submission2845/Reviewer_edPF" ], [ "ICLR.cc/2025/Conference/Submission2845/Reviewer_tExx" ], [ "ICLR.cc/2025/Conference/Submission2845/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response [Part 1/2]\", \"comment\": \"We thank the reviewer for acknowledging the novelty of our attention map findings and the speedup of our approach. We summarize the questions here and respond to them individually:\\n\\n## Q1: No comparison with sequential/autoregressive video generation models that might be computationally cheaper.\", \"a1\": \"We thank the reviewer for discussing the literature on sequential/autoregressive video generation models. In addition to the mentioned literature, we also include more references in this line [1][2][3] and have updated the paper accordingly. A major line of these works simply invokes diffusion models in an auto-regressive loop. Our method accelerates a single diffusion forward pass, which is compatible with these methods. For instance, [1] and [2] can simply replace their original diffusion model with one that is tuned with our Efficient-vDiT method, and benefit from the speedup without needing to change the overall generation procedure.\\n\\n[1] Henschel, R., et al. (2024). StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text. arXiv preprint arXiv:2403.14773.\\n\\n[2] Xiang, J., et al. (2024). Pandora: Towards General World Model with Natural Language Actions and Video States. arXiv preprint arXiv:2406.09455.\\n\\n[3] Zheng, Z., et al. (2024). Open-Sora: Democratizing Efficient Video Production for All. https://github.com/hpcaitech/Open-Sora\\n\\n## Q2: Lacks detailed ablation study showing separate effects of sparse attention without MLCD and experiment of first T_sparse then T_MLCD.\", \"a2\": \"1. We updated the experiments with T_sparse first and then T_MLCD in Appendix D. We find that the T_sparse stage is orthogonal to the T_MLCD stage, observing only negligible differences in the VBench, and CD-FVD scores. \\n\\n| Distill Order | Aesthetic Quality | Dynamic Degree | Motion Smoothness | Temporal Flickering | Object Class | Subject Consistency | Imaging Quality | CD-FVD$\\\\downarrow$ |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| MLCD+KD | 56.59% | 76.00% | 99.13% | 99.54% | 57.12% | 97.73% | 54.88% | 204.13 |\\n| KD+MLCD | 56.38% | 75.50% | 99.13% | 99.40% | 54.67% | 97.71% | 57.97% | 203.52 |\\n\\n2. In Tables 6 and 7 (upper section), we present the evaluation results for distillation works applied independently to both the base model and MLCD, demonstrating the isolated effect of sparse attention. In Table 6, the CD-FVD scores for Base_{3:5} and Base_{4:4} consistently remain below 200. However, as the attention sparsity increases, the CD-FVD score rises to 322.28, indicating a degradation in video generation quality. A similar trend is observed in Table 7, confirming that sparse attention has comparable effects when applied to both the base model and MLCD model.\\n\\n\\n## Q3: Both the sampling distillation (Stage 1) and knowledge distillation (Stage 3) lack technical novelty, as similar methodologies have been proposed in previous works [3,4]. The distinction between our proposed distillation and existing literature is unclear.\", \"a3\": \"Our main novelty is in the discovery of the analysis of attention tile redundancy patterns, where we adapted existing distillation processes to leverage this observation for inference acceleration.\\n\\n## Q4: Only shows two video examples with four frames each.\", \"a4\": \"We have included updated video examples from VBench and examples similar to those on OpenSora's website featuring dynamic scenes in Appendix E, demonstrating our capability to handle rapid, large-scale motions.\"}", "{\"summary\": \"The paper tackles the inefficiency of DiTs used in video diffusion model. The speedup of the presented method comes from two sources: 1) pruning the large full 3D attention of VDM DiTs and 2) distilling the model into a multi-step consistency model.\\nThe authors identify a repetitive tile-like pattern, termed \\\"Attention Tile,\\\" in the 3D attention maps of video data. Leveraging this pattern, they propose a new family of sparse 3D attention mechanisms that reduce the computational complexity from quadratic to linear with respect to the number of video frames.\\nTo further accelerate the inference process, the paper introduces a multi-step consistency distillation (MCD) technique. By dividing the sampling trajectory into segments and performing consistency distillation within each, the number of sampling steps required for video generation is significantly reduced.\\nResults show that the method achieves good speedup without suffer much performance, using limited training data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper makes a significant contribution by discovering the \\\"Attention Tile\\\" phenomenon in 3D full attention Diffusion Transformers (DiTs) for video data. This insight into the redundancy and repetitive patterns within attention maps is a valuable addition to the understanding of how attention mechanisms function in video generation models.\\n2. Building on the Attention Tile observation, the authors propose a new family of sparse 3D attention mechanisms that reduce computational complexity from quadratic to linear concerning the number of video frames. This is a substantial improvement that directly addresses the inefficiency issues in existing models.\\n3. The introduction of the EFFICIENT-VDIT framework is a well-thought-out approach that combines multi-step consistency distillation, layer-wise sparse attention mask searching, and knowledge distillation. This pipeline effectively accelerates inference while maintaining high metrics.\\n4. Achieving these results using only 0.1% of the pretraining data is notable. It indicates that the method is not only computationally efficient but also data-efficient, which is advantageous when large datasets are not readily available.\", \"weaknesses\": \"1. The paper could benefit from a more in-depth discussion of the trade-offs involved, such as the balance between sparsity level and video quality or the impact on different types of video content (e.g., fast-moving vs. static scenes). For instance, why don't you directly use the demo videos on OpenSORA's websites and compare the qualitative results? They provided both static scenes with only relative camera poses and more dynamic scenes, e.g. filming of an explosion scene.\\n2. The method relies on the observation that the Attention Tile pattern is data-independent. If this assumption does not hold for certain types of video data (e.g., highly dynamic scenes), the efficiency gains might not translate, potentially limiting the method's applicability.\\n3. The use of only 0.1% of the pretraining data raises concerns about the generalization capabilities of the accelerated model. While performance loss is minimal on tested datasets, the model may underperform on unseen data or less common video scenarios.\\n4. While the paper uses VBench and FVD for evaluation, these metrics may not capture all aspects of video quality, such as temporal coherence in more complex scenes or perceptual quality under different conditions. Including additional metrics or user studies could provide a more comprehensive assessment. This is especially concerning combined with weakness #2, since FVD is commonly known as a weak metric that focuses strongly on independent frames rather than overall video coherence. Overall, the evaluation seems to favor more static videos rather than highly dynamic videos, and I suspect the attention pruning would encourage such results too. A metric that takes motion into account is Content-Debiased FVD [1], but ideally, this is more suitable via a user study (even though I do not think this is necessary for the rebuttal stage, but better prepare it for another iteration of the paper).\\n5. Inherit my point in #2 and #4, the paper does not provide any video data, making it challenging to assess the actual quality of the generated contents. From my point of view, a VDM paper should always be accompanied with as many videos as possible within the supplemental material size limit. Again, a good set would be the demo videos on OpenSORA's websites. They provided a wide range of descriptions and all the corresponding text prompts --- supposedly those prompts would work well on OpenSORA.\\n\\n[1] Ge et al., On the Content Bias in Fr\\u00e9chet Video Distance, in CVPR 2024.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response [Part 2/2]\", \"comment\": \"## Q5: When comparing speed-up performance for parallelization, are the baseline models also trained with parallelization (Table 4)?\", \"a5\": \"Yes, the model is trained with sequence parallelism using 4 GPUs. We thank the reviewer for the feedback and will update these details in the paper. However, regarding inference, we would like to clarify that Table 4 compares inference speed on a single GPU. Whether models are run with sequence parallelism (Table 3) is orthogonal to model performance (Table 4). If reported 29 frames generation on multi-GPUs, $\\\\text{Ours}_{r\\\\text{=0.100}}$ can achieve 25.8x speedup on 4 GPUs and 13.0x speedup on 2 GPUs (Sec4.2.2).\\n\\n## Q6: How does the proposed model achieve a lower FVD than the main base model?\", \"a6\": \"FVD values indeed have considerable variances. Based on the other reviewer\\u2019s advice, we update the FVD result using the Content-Debiased FVD (CD-FVD) result, which more effectively captures the temporal coherence of video quality, and the proposed model achieves a slighter higher CD-FVD score.\\n\\n| Model | FVD \\u2193 | Content-Debiased FVD \\u2193 |\\n|:---:|:---:|:---:|\\n| Base| 381.1 | 172.6 |\\n| MLCD | 438.1 | 190.5 |\\n| $\\\\text{Ours}_{r\\\\text{=0.025}}$ | 351.6 | 186.8 |\\n| $\\\\text{Ours}_{r\\\\text{=0.050}}$ | 357.4 | 195.6 |\\n| $\\\\text{Ours}_{r\\\\text{=0.100}}$ | 345.6 | 204.1 |\\n| $\\\\text{Ours}_{r\\\\text{=0.200}}$ | 356.9 | 223.8 |\\n| $\\\\text{Ours}_{r\\\\text{=0.400}}$ | 380.2 | 231.7 |\\n\\n\\n\\n[4] Songwei Ge et al., On the Content Bias in Fr\\u00e9chet Video Distance, in CVPR 2024.\\n\\n\\n## Q7: 1% difference is not reasonable.The imaging quality and subject class are significantly lower than those of the base model.\", \"a7\": \"We agree with the reviewer that the scores for imaging quality and subject class are lower than those of the base model. The reason why the VBench score remains within 1% difference is that our model improves the dynamic degree. With more sparsity, we noted that our pipeline has the characteristics of being able to capture richer motions between frames, but trading off some degrees of imaging quality and subject class accuracy. Quantitatively, we measure our models and variants across the overall VBench dimensions to justify the quality of our method. Qualitatively, we provide more samples from the VBench dataset in Appendix E to demonstrate that our method improves motion dynamics while maintaining acceptable imaging quality and subject class accuracy.\\n\\n## Q8: Justification: Questions the necessity of sparse attention given MLCD's strong performance alone\", \"a8\": \"Consistency distillation is indeed a strong method to reduce the redundancy between diffusion steps. However, we\\u2019d like to point out that the attention tile phenomenon and our attention distillation pipeline are complementary to consistency distillation, as they address a new kind of redundancy we discovered in video diffusion models. In other words, redundancy arises from both repetitive diffusion sampling and 3D attention; MLCD mitigates the former, while attention distillation focuses on the latter. As demonstrated in Tables 1 and 2, attention distillation alone accelerates inference significantly as well. Our approach synergistically integrates these techniques to efficiently eliminate redundancy in 3D-DiT.\"}", "{\"comment\": \"## Q2: Only experimented their method on one DiT based text-to-video generation model.\", \"a2\": \"We appreciate the reviewer's feedback. We have added comprehensive experiments on the CogVideoX-5B model in Appendix D.2 to demonstrate our method's generalization capability. CogVideoX is based on the MM-DiT architecture, where its attention module concatenates text tokens with video tokens, which differs from Open-Sora-Plan's cross attention module. These experiments demonstrate that our method works effectively on both MM-DiT and Open-Sora-Plan's cross attention architectures.\\n\\n1. **Kernel Performance** : We analyze the computation time for a single sparse attention kernel below. The results show that as sparsity increases, computation time decreases significantly. For instance, with a 2:11 attention mask, the execution time reduces to 15.16ms, achieving a 1.72\\u00d7 speedup compared to the full mask.\\n\\n| Mask | Sparsity (%) | Time(ms) | Speedup |\\n|------|--------------|----------|----------|\\n| full | 0.00 | 26.03 | 1.00$\\\\times$ |\\n| 1 | 14.50 | 24.12 | 1.08$\\\\times$ |\\n| 2 | 29.29 | 23.68 | 1.10$\\\\times$ |\\n| 3 | 38.30 | 20.51 | 1.27$\\\\times$ |\\n| 4 | 48.66 | 17.77 | 1.47$\\\\times$ |\\n| 6 | 60.15 | 14.08 | 1.85$\\\\times$ |\\n| 12 | 74.11 | 9.99 | 2.60$\\\\times$ |\\n\\n2. **Evaluate our method on VBench**\\n\\n**Experiment Setting**: CogVideoX-5B is profiled using Algorithm 1. For training, the model is trained for 10,000 steps, equivalent to 10 epochs of the dataset. The learning rate is set to 1e-7, and the gradient accumulation step is 1. The diffusion scale factor $\\\\lambda$ is set to 1.\\n\\n**Quantitative results**: The VBench evaluation results of the knowledge distillation model are shown below. Our model's results are within 1% of the final score with no noticeable drop in several key dimensions. It achieves comparable performance to the original model.\\n\\n| Model | Final Score | Aesthetic Quality | Motion Smoothness | Temporal Flickering | Subject Consistency | Overall Consistency | Speedup |\\n|-------|--------------|-------------------|-------------------|---------------------|---------------------|-------------------|----------|\\n| Base | 77.91% | 57.91% | 97.83% | 97.34% | 92.27% | 26.13% | 1.00$\\\\times$ |\\n| $\\\\text{Ours}_{r\\\\text{=5}}$ | 77.15% | 51.18% | 96.67% | 97.18% | 90.89% | 26.02% | 1.34$\\\\times$ |\\n\\n**Qualitative results**: In Appendix D.2. Figure 7, we demonstrate that our method shows robust performance in processing dynamic, complex scenes while maintaining high-quality video output using the prompt from CogVideoX official website.\"}", "{\"summary\": \"This paper addresses the acceleration of 3D full attention video generation models, focusing on sparsifying 3D attention and reducing sampling steps. The authors propose an algorithm for searching optimal sparse attention masks based on the observed Attention Tile phenomenon and combine this with a consistency distillation method to reduce the number of steps, resulting in an accelerated version of DiT while striving to maintain generation quality.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper effectively optimizes DiT using sparse attention and MCD, and the proposed framework demonstrates commendable speed results alongside assured generation quality. Specific strengths include:\", \"The identification of the Attention Tile phenomenon, accompanied by a detailed analysis, provides a background for the design of sparse attention masks and proposes an algorithm for searching optimal mask sets. Comprehensive evaluation experiments validate the effectiveness of this method.\", \"The integration of the consistency distillation method leads to a complete acceleration framework, with rigorous ablation studies confirming the framework's soundness and ensuring generation quality. The FVD metric significantly outperforms the use of the MLCD method alone.\"], \"weaknesses\": \"While the paper is rich in content, there are still potential issues to consider: According to Table 7, the acceleration benefits from sparse attention masks are not substantial, with noticeable quality degradation occurring beyond a 1.45\\u00d7 acceleration. Although there is some improvement when combined with MLCD (compared to a 5\\u00d7 acceleration), the effectiveness of the design based on the Attention Tile, which is a core contribution of the paper, appears insufficient here.\", \"questions\": [\"The changes in Dynamic Degree seem to exhibit a certain trend; are there any related experimental analyses available?\", \"There is a discrepancy between the acceleration results in Table 1 and Table 7. Could you please provide the specific experimental parameter differences (as they seem to be absent in the paper)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General response\", \"comment\": [\"We thank all reviewers for their positive feedback. We are encouraged that they all appreciated our paper for the following reasons:\", \"Discovery and analysis of the Attention Tile phenomenon with comprehensive observations beyond just diagonal patterns.\", \"Novel sparse attention mechanism achieving significant speedup complementary to existing MLCD methods.\", \"Layer-wise optimal search for sparse attention masks demonstrating strong generalization.\", \"Based on the feedback, we mainly update the manuscript with:\", \"More comprehensive video samples in Appendix E.\", \"Added Content-Debiased FVD metrics for better video quality evaluation.\", \"Expanded ablation studies demonstrating the separate effects of sparse attention and MLCD in Appendix D1.\", \"Experiments on additional state-of-the-art video diffusion architectures CogVideoX in Appenedix D2.\"]}", "{\"metareview\": \"The paper is concerned about the task of accelerating the 3D attention in video generators. The reviewers ranked the paper as a borderline, acknowledging the strengths, but also listing a number of weaknesses. They highlight the attention tile idea proposed in the paper. The paper reports favorable VBench scores. At the same time, the AC was surprised to find that the paper reports no video examples in the supplement. This is very uncommon for video papers, in fact, and here AC agrees with Reviewer AbNB and tExx, that the authors should include as many samples as they can. Currently only several frames are concatenated and pasted to the paper. In this form, it's not really possible to analyze the quality of the videos and understand the value of the method. VBench scores are not sufficient alone. There can be temporal artifacts, flickering, inconsistencies that it's not possible to detect when looking at frames. After looking at the examples, the AC believes that the visual quality drops (check Fig 11 the dog example), also there are no prompts given.\\n\\nThe AC went through the discussion, the paper, and the provided video examples. The authors also shared a message with AC. The AC believes that the provided examples are not sufficient for the video paper. None of the reviewers champions the paper strongly. \\n\\nThe AC would like to encourage the authors to update the manuscript with the details they provided during the discussion period (including hundreds of video examples and comparisons, not frames!) and submit to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"There was a somewhat reasonable exchange of messages between the authors and reviewers. Some concerns were solves, others remained.\"}", "{\"comment\": \"We thank the reviewer for the positive feedback on the sparse attention mechanisms in the video model and distillation framework. We summarize the questions here and respond to them individually:\\n\\n## Q1: Tradeoff analysis: Lacks detailed discussion of trade-offs between sparsity and video quality, especially for different types of content (static vs. dynamic scenes)\", \"a1\": \"From Table 5, we can observe that when sparsity increases, VBench dimensions such as total score and motion smoothness decrease, while higher sparsity leads to better approximation of the original image. Also, as shown in Table 1, as sparsity increases, the sparse kernel requires less execution time, thus reducing inference time. To address this trade-off, we present an optimization method based on the Lagrangian method in Appendix A, which considers how to achieve higher sparsity under given acceleration ratio constraints, thereby better approximating the original image. We have included updated video examples from VBench and examples similar to those on OpenSora's website featuring dynamic scenes in Appendix E.\\n\\n## Q2: Concerns about model generalization when using only 0.1% of pre-training data.\", \"a2\": \"We thankfully agree with the reviewer that more data would improve model generalization ability. In fact, we show that our method has already had good generalization ability with only 0.1% of pre-training data on VBench.Note that VBench already contains a diverse suite of test prompts, which suffices as a proof-of-concept. We believe distillation with more data will further boost the performance, but defer it as a future work due to the large amount of GPU hours required.\\n\\n## Q3: Current metrics (VBench and FVD) may not fully capture video quality, especially temporal coherence -> Content-Debiased FVD.\", \"a3\": \"Following your suggestion, we have updated our evaluation to include Content-Debiased FVD scores, which more effectively capture the temporal coherence of video quality. This additional metric provides better insight toward per-frame quality over temporal realism and helps identify its sources. As shown in the table below, the base model achieves better performance on Content-Debiased FVD, which aligns with our expectations.\\n\\n| Model | FVD \\u2193 | Content-Debiased FVD \\u2193 |\\n|:---:|:---:|:---:|\\n| Base| 381.1 | 172.6 |\\n| MLCD | 438.1 | 190.5 |\\n| $\\\\text{Ours}_{r\\\\text{=0.025}}$ | 351.6 | 186.8 |\\n| $\\\\text{Ours}_{r\\\\text{=0.050}}$ | 357.4 | 195.6 |\\n| $\\\\text{Ours}_{r\\\\text{=0.100}}$ | 345.6 | 204.1 |\\n| $\\\\text{Ours}_{r\\\\text{=0.200}}$ | 356.9 | 223.8 |\\n| $\\\\text{Ours}_{r\\\\text{=0.400}}$ | 380.2 | 231.7 |\\n\\n## Q4: Lack of sufficient video samples in supplementary materials.\", \"a4\": \"We have included updated video examples from VBench and examples similar to those on OpenSora's website featuring dynamic scenes in Appendix E, demonstrating our capability to handle rapid, large-scale motions.\"}", "{\"comment\": \"We thank the reviewer for acknowledging the novelty of our Attention Tile finding and layer-wise optimal search approach. We summarize the questions here and respond to them individually:\\n## Q1: The main speedup comes from the existing MLCD method rather than the paper's novel contribution (sparse attention)\", \"a1\": \"We would like to respectfully clarify that our method achieves 2.83x speedup in the attention module (Table 1) and 1.77x end-to-end speedup (listed below), with a sparsity level of 1:7. Our key point here is that although the sparse attention alone does not achieve higher speedup than the MLCD method alone, this speedup is arguably substantial, and is complementary to MLCD. In 3D-DiT, redundancy stems from repetitive diffusion sampling and 3D attention; MLCD addresses the former, while attention distillation targets the latter. The value of our method is that it harmoniously connects them to effectively eliminate 3D-DiT redundancy.\\n| Model | Final Score \\u2191 | Aesthetic Quality | Motion Smoothness | CD-FVD \\u2193 | Speedup |\\n|--------|--------------|------------------|-------------------|-----------|----------|\\n| Base | 76.12% | 58.34% | 99.43% | 172.64 | 1.00\\u00d7 |\\n| $\\\\text{Base}_{4:4}$ | 76.57% | 58.64% | 99.38% | 171.62 | 1.16\\u00d7 |\\n| $\\\\text{Base}_{3:5}$ | 75.53% | 55.47% | 99.01% | 197.35 | 1.26\\u00d7 |\\n| $\\\\text{Base}_{2:6}$ | 76.33% | 57.14% | 99.06% | 201.61 | 1.45\\u00d7 |\\n| $\\\\text{Base}_{1:7}$ | 77.15% | 57.53% | 98.67% | 322.28 | 1.77\\u00d7 |\\n\\n## Q2: Only experimented their method on one DiT based text-to-video generation model.\", \"a2\": \"We have updated ablation of the attention distill experiment on new model CogvideoX in new thread.\\n\\n## Q3: Lacks comparison with other acceleration methods, only showing self-comparisons with different parameter settings\", \"a3\": \"We thank the reviewer for the suggestion. We included one state-of-the-art work, PAB[1], into our discussion. PAB, developed on spatial-temporal DiTs, reuses attention computation from previous denoising steps to speed up inference. In their best-performing setup, they compute spatial attention, temporal attention, and cross attention every 2, 4, and 6 steps, respectively, which results in an average speedup of 4 in attention. This would translate to less than 4x speedup in an end-to-end setting. As shown in Table 5, our method, combined with consistency distillation, achieves 6.60x-7.80x end-to-end speedup.\\nSecond, we'd like to point out that PAB falls into the category of methods that leverage the repetitiveness between diffusion steps. Intuitively, it would be less compatible with consistency distillation (which is a well-established method to reduce repeated diffusion steps), because consistency distillation already reduces the repetitiveness between diffusion steps. In contrast, our proposed approach discovers and addresses a new type of redundancy in video diffusion, and shows ample evidence that it is complementary to consistency distillation.\\n\\n| Model | VBench Performance(%) | Speedup |\\n|:---:|:---:|:---:|\\n| $\\\\text{PAB}_{\\\\text{246}}$ | -0.09 | <4x |\\n| $\\\\text{PAB}_{\\\\text{357}}$ | -2.85 | <5x |\\n| $\\\\text{PAB}_{\\\\text{579}}$ | -8.58 | <7x |\\n| $\\\\text{Ours}_{r\\\\text{=0.025}}$ | +0.02 | 5.85x |\\n| $\\\\text{Ours}_{r\\\\text{=0.050}}$ | -0.11 | 6.60x |\\n| $\\\\text{Ours}_{r\\\\text{=0.100}}$ | -0.12 | 7.05x |\\n| $\\\\text{Ours}_{r\\\\text{=0.400}}$ | -0.84 | 7.80x |\\n\\n[1] Zhao, X., et al. (2024). Real-Time Video Generation with Pyramid Attention Broadcast. arXiv preprint arXiv:2408.12588.\\n\\n## Q4: The diagonal attention pattern emphasized in the paper is an obvious phenomenon due to the basic property of self-attention\", \"a4\": \"While the high main diagonal values may be obvious to some audiences, this is only one part of the attention tile phenomenon. The attention tile phenomenon comprises four observations (Figure 1): repetitiveness, large diagonals, locality, and data independence. Such observations have not been revealed by other literature to the best of our knowledge, and form the basis of our methodology.\"}", "{\"summary\": \"This paper proposed an efficient method for DiT based text-to-video generation. They found a unique pattern in the attention map of DiT based video generation diffusion models and proposed a method to exploit this pattern to ignore the computation of attention between many query/key pairs and hence speed up the generation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The finding (attention tile) is quite interesting and could be useful for future research in the community in this area.\", \"the proposed layer-wise optimal search for sparse attention masks is somewhere novel in the paper.\"], \"weaknesses\": [\"This work over-claimed the contribution of the proposed method. Actually, the efficiency improvement is mostly coming from MLCD, which is proposed by another work. The real improvement from the main finding or the proposed 'new' method in this paper is much less than MLCD.\", \"Experiment is not thorough. This paper only experimented their method on one DiT based text-to-video generation model.\", \"Comparison with other methods is missing. This paper only compared the results from different hyper parameters of the proposed method. Many existing methods that accelerate diffusion models are missing in the paper.\", \"The larger diagonal attention is not something new or surprising as the each query token is computing the `correlation` with itself.\"], \"questions\": \"please refer to the weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a framework to speed up video generation using Video Diffusion Transformers by optimizing attention computation and reducing sampling steps. A repetitive attention tile pattern in 3D attention maps is identified which allows for sparse attention that lowers complexity. The framework uses a three-stage training pipeline: multi-step consistency distillation to reduce sampling steps, a layer-wise search for optimal sparse attention masks, and knowledge distillation to retain performance. This approach claims to achieve up to a 7.8\\u00d7 speedup in video generation with minimal quality loss.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written.\\n\\nThe computational complexity of video diffusion models presents a significant challenge, and the authors effectively highlight this issue and provide a good motivation for addressing it.\\n\\nTo tackle this, the solution provided by the authors of using a sparse attention map is interesting. Although thinking in this direction is not new, the way the authors motivate the solution and compute the attention maps is scientifically sound and has some novelty.\\n\\nThe computational speed-up achieved by the method looks impressive.\", \"weaknesses\": \"In the video generation literature, there are models that generate frames sequentially or follow an auto-regressive approach [1,2]. These models may be less computationally expensive than those using full 3D attention heads, yet there is no empirical or theoretical comparison with such models in the paper.\\n\\nThere should be an ablation study with the separate effects of sparse attention (without the MLCD) to understand each component in more detail.\\n\\nThe sampling distillation stage (Stage 1) is not really new, either technically or conceptually. There has been a line of work that provides a similar methodology [3,4], etc. It is not clear how different the proposed distillation is from the existing literature. The same can be said for the knowledge distillation in the final stage (Stage 3).\\n\\nThe paper has only two qualitative video generation results (or at least what I have found), of which only four frames are shown. There should be a lot more generated videos shown side by side to compare the method qualitatively.\\n\\n[1] Diffusion forcing: Next-token prediction meets full-sequence diffusion. Chen et al. 2024.\\n\\n[2] Diffusion models are real-time game engines. Valevski et al. 2024.\\n\\n[3] MLCM: Multistep Consistency Distillation of Latent Diffusion Model. Xie et al. 2024.\\n\\n[4] SCott: Accelerating Diffusion Models with Stochastic Consistency Distillation. Liu et al. 2024.\", \"questions\": \"What happens if the stages are switched, i.e., first obtain T_{sparse}, then T_{MCM}\\u200b from T_{sparse}, and finally apply the knowledge distillation step?\\n\\nTable 4 needs additional quantitative metrics like aesthetic quality, subject consistency, imaging quality, and FVD to provide a complete understanding of the effect of parallelization.\\n\\nWhen comparing speed-up performance for parallelization, are the baseline models also trained with parallelization (Table 4)?\\n\\nHow does the proposed model achieve a lower FVD (Table 5) than the main base model, given that the proposed model is ultimately a distilled version of the main model?\\n\\nHow is the claim (lines 424 to 430) that model performance is within 1% of the base model accurate? It is evident that the numbers for imaging quality and subject class are significantly lower than those of the base model.\\n\\nAblation studies in Table 6 show that only MLCD can speed up the process by 5 to 8 times compared to the base model without significantly compromising quality. What is the justification, then, for the need for sparse attention maps on top of that?\\n\\nIt seems the main contribution is the sparse attention part. However, some doubts remain. Therefore, I can increase my rating if my questions and concerns in the weakness section and questions section are answered satisfactorily.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the positive feedback on the Attention Tile phenomenon analysis and distillation framework. We summarize the weaknesses and questions here and respond to them individually:\\n\\n## Q1: Attention distill improves little compared to MLCD\", \"a1\": \"Attention distillation is a method that reduces redundancy that MLCD cannot address and is orthogonal to it. The redundancy in 3D-DiT comes from two sources: the repetitive sampling of diffusion and the redundancy of 3D attention. MLCD can only address the first part, while the goal of attention distillation is to address the second part. In Tables 1 and 2, we show that attention distillation itself provides substantial speedup. In our method, these two components can be combined to jointly eliminate redundancy in 3D-DiT inference.\\n\\n## Q2: Dynamic Degree trend analysis\", \"a2\": \"As discussed in the VBench paper, models often exhibit trade-offs between temporal consistency and dynamic degree metrics. Quantitatively, we measure our models and variants across the overall VBench dimensions to justify the quality of our method. Qualitatively, we provide more samples in Appendix E from the VBench prompt to demonstrate that our method improves motion dynamics while maintaining acceptable consistency. Overall, with higher sparsity, our method captures richer motions between frames but with a trade-off in temporal consistency.\\n\\n\\n## Q3: Discrepancy between Table 1 and Table 7 results\", \"a3\": \"The difference between Tables 1 and 7 comes from different measurement scopes. Table 1 measures only the attention operation speedup, while Table 7 shows the speedup of the entire model (including overhead from normalization and MLP layers). For example, with the 1:7 ratio, while attention alone achieves a 2.83x speedup, the full model only achieves a 1.77x speedup due to other components.\"}" ] }
2ev44Srmt9
Revisiting Convergence: A Study on Shuffling-Type Gradient Methods
[ "Qi He", "Peiran Yu", "Ziyi Chen", "Heng Huang" ]
Shuffling-type gradient methods are favored in practice for their simplicity and rapid empirical performance. Despite extensive development of convergence guarantees under various assumptions in recent years, most require the Lipschitz smoothness condition, which is often not met in common machine learning models. We highlight this issue with specific counterexamples. To address this gap, we revisit the convergence rates of shuffling-type gradient methods without assuming Lipschitz smoothness. Using our stepsize strategy, the shuffling-type gradient algorithm not only converges under weaker assumptions but also match the current best-known convergence rates, thereby broadening its applicability. We prove the convergence rates for nonconvex, strongly convex, and non-strongly convex cases, each under both random reshuffling and arbitrary shuffling schemes, under a general bounded variance condition. Numerical experiments further validate the performance of our shuffling-type gradient algorithm, underscoring its practical efficacy.
[ "shuffling-type gradient methods", "convergence analysis", "relaxed smoothness assumptions" ]
Reject
https://openreview.net/pdf?id=2ev44Srmt9
https://openreview.net/forum?id=2ev44Srmt9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yVtlNQSARV", "wV61FUQ81n", "v2LwD7JT6x", "qlsBW3K9KY", "qXk2k5lUB4", "pKKqfckNOt", "nawgJ4hG3x", "mSsoXn65S1", "lGuur3MIwq", "jZGzVnwpj0", "gRYi9ujtlo", "cGG4HQi6VP", "Y8GugW3yun", "VMkqkBA4A4", "RnWopXQTF0", "OtNWSW7bEK", "OjBEM25gmu", "NYqABk3f4P", "MtWz8S6Bd6", "LzDhqCekMC", "LAjL9vya6W", "Icei4kYA5J", "GAm53oSVxR", "CZIkio3Vcn", "Ba1CMmiaxE", "BA4x513EB5", "9FmVbdvOKw", "4JbEHgGKdt" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732215244138, 1732150458923, 1733063528517, 1733105387403, 1732855629122, 1730559931333, 1733154373594, 1733139939551, 1732594159359, 1732310959310, 1732150667761, 1732285682284, 1732660911010, 1737523871545, 1733153742509, 1733182074593, 1732660742763, 1732215024232, 1730678178522, 1734885428739, 1730153137637, 1732395982333, 1732150548839, 1733063542074, 1733154849306, 1733063535977, 1730675399968, 1732150202615 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_uHrC" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_h6tH" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_h6tH" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_h6tH" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_RYK7" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_9CY6" ], [ "ICLR.cc/2025/Conference/Submission7876/Area_Chair_ZoR6" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_RYK7" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_uHrC" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ], [ "ICLR.cc/2025/Conference/Submission7876/Reviewer_uHrC" ], [ "ICLR.cc/2025/Conference/Submission7876/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their valuable comments.\\n\\nWe have updated the experimental section to include both strongly convex and non-strongly convex cases, as well as results for all three shuffling schemes. These updates can be viewed in the revised PDF. Additionally, we are currently working on experiments with image datasets and will include them in future updates.\"}", "{\"comment\": \"We thank the reviewer for the comments. Below we will try to address the concerns and questions of the reviewer.\\n\\n**For the Weakness:**\\n\\nWe have updated the conclusions to rely on weaker assumptions. Please refer to the updated version of the paper.\\n\\n**For the Questions:**\\n\\n- **Definition 2.2 Missing Subscript for $x$:** \\n We apologize for the lack of clarity. What we meant is that for any $x$ in the domain and any $x_1, x_2$ close enough to $x$, the property should hold. Specifically, we should have included \\\"$\\\\forall x \\\\in \\\\text{dom}(f)$\\\" in the definition. However, the $x$ on the right-hand side does not need a subscript. Definition 2.2 is therefore distinct from both symmetric and asymmetric generalized smoothness definitions in [1]. We have updated the definition accordingly.\\n\\n- **Use of $\\\\|\\\\nabla F(w)\\\\| \\\\leq G$ in Lemmas A.2 and A.4:** \\n In Definition 2.2, if $\\\\|\\\\nabla f(x)\\\\|$ is bounded above, we recover standard Lipschitz smoothness for any pair of points close enough to $x$. The statement on line 672 is justified by the following: \\n 1. From the induction hypothesis, $w^{(t)}_{k-1}$ is close enough to $w^{(t)}_0$. \\n 2. The term $\\\\|\\\\nabla F(w^{(t)}_0)\\\\|$ is bounded due to the definition of $\\\\tau$ and Lemma A.3. \\n\\n- **Can $G$ Be Large, and Is It Hidden in $O$?** \\n According to Theorem 4.4, $G = O\\\\left(\\\\left(\\\\frac{4\\\\Delta_1}{\\\\delta}\\\\right)^{\\\\frac{1}{2-p}}\\\\right)$, where $0 \\\\leq p < 2$ is the degree of the $\\\\ell$ function. It is possible for $G$ to be quite large, which is reasonable given the relatively loose assumption on the gradient. In some cases, the gradient can indeed be very large, but it is polynomial in $\\\\Delta_1$ and $1/\\\\delta$. The term $G$ is hidden in the $O$ notation because it is independent of $n$ and $\\\\epsilon$.\\n\\n- **Bounding $\\\\|\\\\nabla f(x)\\\\|$:** \\n We cannot directly bound $\\\\|\\\\nabla f(x)\\\\| = \\\\|\\\\nabla f(x) - \\\\nabla f(x^*)\\\\| \\\\leq (L_0 + L_1 \\\\|\\\\nabla f(x^*)\\\\|)R = RL_0$ because, in Definition 2.2, the points $x_1, x_2$ need to be close enough to $x$. There is no guarantee that $x$ is close to $x^*$ here. Additionally, we do not make an assumption such as $\\\\|x - x^*\\\\| \\\\leq R$.\\n\\n- **Similarity to Standard Smoothness Analysis:** \\n Our analysis is not equivalent to standard smoothness analysis. The key differences are as follows: \\n 1. We do not always assume standard smoothness. Since we do not have a bound for $\\\\|\\\\nabla f(w;i)\\\\|$, it is not possible to give an upper bound for $\\\\|\\\\nabla F(w)\\\\|$ along the trajectory. Instead, we use $G$ that controls the probability of the gradient norm exceeding $G$. \\n 2. To apply standard smoothness properties, we condition all results on the event $t < \\\\tau$, where $\\\\tau$ is the time when the gradient norm exceeds $G$. Under this conditioning, many results from standard smoothness cannot be directly applied.\\n\\n- **Redundant Notation $r$:** \\n The variable $r$ was used to simplify notation but is now redundant. We have removed $r$ from the draft. Thank you for pointing this out.\"}", "{\"comment\": \"Thank you for your valuable feedback on our paper. We have carefully addressed all the points raised and incorporated them into the revised manuscript. To ensure we can further improve the submission based on your insights, we kindly ask if you could provide any additional feedback at your earliest convenience.\"}", "{\"title\": \"Response\", \"comment\": \"I thank the authors for the further update. My comments are as follows:\\n\\n1. Theorem 4.11 (i.e., the random reshuffling scheme) still needs a variance assumption. In contrast, existing works like [1] do not have any variance assumption for the random reshuffling scheme. Hence, Theorem 4.11 is still weaker in my view. \\n\\n2. I understand the authors removed Assumption 4.3 in Theorem 4.12 (i.e., any shuffling scheme). But this result cannot reflect the better sample complexity when the random reshuffling scheme is employed, i.e., a $\\\\sqrt{n}$ improvement as indicated by Lines 347 (taking $p=0$ for simplicity) and 361.\\n\\nAs such, the result for the general convex case is still not satisfied.\\n\\n3. Another minor point is that Assumption 4.1 in every theorem seems to link to the wrong place. The authors may need to check this issue.\\n\\n**References**:\\n\\n[1] Mishchenko, Konstantin, Ahmed Khaled, and Peter Richt\\u00e1rik. \\\"Random reshuffling: Simple analysis with vast improvements.\\\" NeurIPS 2020.\"}", "{\"comment\": \"We sincerely thank the reviewers for their constructive feedback, which has greatly contributed to improving our manuscript. As the revision has reached a temporary conclusion, we would like to summarize the changes we have made:\\n\\n1. **Weaker assumptions**: We now use a weaker assumption of general bounded variance, aligning it with the assumption made under Lipschitz smoothness.\\n2. **Expanded analysis**: We have extended our analysis to cover both strongly convex and non-strongly convex cases without any variance assumptions.\\n3. **Additional experiments**: We conducted more numerical experiments, including:\\n - All three shuffling schemes,\\n - Both strongly convex and non-strongly convex cases,\\n - Image datasets.\\n\\nWe believe that currently we have addressed all the concerns raised by the reviewers. We look forward to further responses and suggestions to continue refining and improving the manuscript. Thank you again to all the reviewers for their invaluable feedback.\"}", "{\"summary\": \"The paper considers shuffling-type gradient descent under more general smoothness assumption -- $L_0, L_1$-smoothness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Results match the standard Lipschitz smoothness rates\", \"weaknesses\": \"Variance assumptions, which are stronger than the most standard bounded variance assumption\", \"questions\": \"- Definition 2.2 seems missing subscript for $x$ in the right hand side. It is unclear what authors mean. It could be either symmetric (x is any of $x_1$ or $x_2$) either non-symmetric $L_0, L_1$-smoothness (maximization over $x_1$ $x_2$ interval), see [1]. Or something different? Symmetric case is much easier to analyze. I tried to get it through the proofs, and it was very strange to see, that everywhere (e.g. Lemma A.2, Lemma A.4 ) authors used $\\\\|\\\\|\\\\nabla F(\\\\omega)\\\\|\\\\| \\\\le G$, and then said (line 672) that \\\"From definition 2.2, we can use Lipschitz smoothness\\\". The standard smoothness AFAIU. So the question: Can the $G$ be huge? Is it just hidden to the $\\\\cO$? and is the analisys mainly like for the standard smoothness?\\nIt seems to me that it actually is, because in lines 615, 704, 890, 1000, etc authors use standard smoothness inequalities. \\nI simply can bound $|\\\\|\\\\nabla f(x) \\\\|\\\\| = |\\\\|\\\\nabla f(x) - \\\\nabla f(x^*) \\\\|\\\\| \\\\ \\\\le (L_0 + L_1|\\\\|\\\\nabla f(x^*) \\\\|\\\\|)R = RL_0$ which could be a G.\\nand then my effective smoothness is $L= L_0 + RL_0L_1$.\\nIn the non-symmetric we have extra exponents as a multipliers, according to [1].\\nIs this what authors effectively did? Of course the one can recover the same rate as for standard smoothness. The problem is that the constant will be huge.\\n\\n- what is $r := G/L$ in Theorem 4.5, Theorem 4.6, Lemma A.1, Lemma A.2. I couldn't find where authors referr to $r$. What is it for? The results of the mentioned theorems and lemmas do not depend on $r$.\\nThen $r$ is appearing in Lemma A.4 and in bounds two subsequent trajectory points difference norm. Which mainly coincides with my above bound on the gradient (if we plug in the step).\\n\\nI briefly check the proofs and it seems they are to adapting my above bound on the gradient norm to the stochastic case (which is where the weaker variance assumption is used -- no expectation, and the difference between the full gradient and its stochastic counterpart is estimated).\\n\\nHowever, the correct approach is to allow the step size to increase as the gradient norm approaches zero. E.g [1] suggests clipping with gradient norm in the denominator -- when the norm is large - the stepsize is small and vice-versa.\\n\\nIf I was mistaken, I would be glad to investigate the proofs more carefully if authors argued that I was wrong. My understating is that gradient norm is just trivially bounded and the contribution is poor.\", \"references\": \"[1] Z. Chen et al. 2023 Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer h6tH:\\n\\nThank you very much for reevaluating our work and raising your rating. \\nSince you said \\u201cStrengths overweight weaknesses\\u201d, we are not sure if you mean to accept this work? If yes, you could notice that rating 5 given by you means marginal rejection, while 6 means marginal acceptance. \\n\\nThanks,\\nAuthors\"}", "{\"comment\": \"I decided to increase my score. Strengths overweight weaknesses, and generally the paper make a contribution.\"}", "{\"comment\": \"We greatly appreciate the time and effort you put into providing constructive feedback. Regarding your questions, we have made attempt to answer them. We kindly request that you review the revised submission and our rebuttal and consider providing additional feedback. Thank you once again for your valuable input and for contributing to the quality of our work.\"}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful comments and for raising the score. Below, we provide detailed responses to the questions raised:\\n\\n- Our $\\\\ell$-smoothness assumption, which can be expressed as $\\\\|\\\\nabla^2 f(x)\\\\| \\\\leq \\\\ell(\\\\|\\\\nabla f(x)\\\\|)$ almost everywhere for a sub-quadratic $\\\\ell$, generalizes the $(L_0, L_1)$-smoothness condition. Specifically, when $\\\\ell$ is a linear function, the $\\\\ell$-smoothness assumption reduces to $(L_0, L_1)$-smoothness as a special case.\\n\\n- Since our $\\\\ell$-smoothness assumption is more general than $(L_0, L_1)$-smoothness, the techniques used in [1] cannot be directly extended to improve our results. Even if we generalize $(L_0, L_1)$-smoothness to $\\\\ell(x) = L_0 + L_1 x^p$ for $0 \\\\leq p < 2$ (a specific case of our generalized smoothness), the proof techniques in [1] fail to hold. This is because their analysis relies heavily on the linearity of the smoothness assumption. For instance, in the proof of Lemma 2.2 in [1], the term $\\\\nu(t)$ in the second inequality would become $\\\\nu^p(t)$ for $p \\\\neq 1$, invalidating the third inequality.\\n\\n- Analyzing shuffling-type algorithms is fundamentally different from, and often more challenging than, analyzing SGD. Unlike SGD, where each sampling step is independent, the sampling steps in a shuffling method are dependent within one epoch. This dependency introduces significant analytical complexity, making it non-trivial to generalize existing techniques to the shuffling setting. For example, [2] demonstrates the substantial effort required to adapt momentum methods to the shuffling framework, and [3] extends this effort to Nesterov acceleration. In our work, we tackle the unique challenge of analyzing shuffling gradient methods without assuming Lipschitz smoothness or independence between steps\\u2014issues that are rarely addressed in prior literature. As such, we believe our contribution goes beyond being incremental.\\n\\n**References**: \\n[1] Vankov, Daniil, et al. \\\"Optimizing $(L_0, L_1)$-Smooth Functions by Gradient Methods.\\\" *arXiv preprint arXiv:2410.10800* (2024). \\n[2] Tran, Trang H., Lam M. Nguyen, and Quoc Tran-Dinh. \\\"SMG: A shuffling gradient-based method with momentum.\\\" *International Conference on Machine Learning. PMLR, 2021*. \\n[3] Tran, Trang H., Katya Scheinberg, and Lam M. Nguyen. \\\"Nesterov accelerated shuffling gradient method for convex optimization.\\\" International Conference on Machine Learning. PMLR, 2022.\"}", "{\"comment\": \"We thank the reviewer for the comments. Below we will try to address the concerns and questions of the reviewer.\\n\\n1. **Noise Assumption:** \\n We have updated the noise assumption in the new draft to match the assumption in [1]. In the strongly convex case, the results now align with [1] under the same assumption, as outlined in the second paragraph of Theorem 1 in [1].\\n\\n2. **Dependence on $\\\\delta$:** \\n We have explicitly added the dependence on $\\\\delta$ in the remarks below Theorems 4.4, 4.8, and 4.11. However, we humbly disagree with the assertion that the polynomial dependence on $\\\\delta$ is weak. Under the $\\\\ell$-smoothness assumption, the dependence of $T$ on $\\\\delta$ is commonly polynomial. For instance:\\n - Theorem 6.2 in [2],\\n - Theorem 5.3 in [3],\\n - Theorem 4.13 in [4].\\n\\n This is because, under $\\\\ell$-smoothness, $\\\\delta$ accounts for the probability that Lipschitz smoothness does not hold\\u2014a consideration absent in standard Lipschitz smoothness settings. Therefore, it is not appropriate to directly compare the dependence on $\\\\delta$ here with that in Lipschitz smoothness. \\n\\n Furthermore, there are results under $\\\\ell$-smoothness with a $\\\\log(1/\\\\delta)$ dependency (e.g., Theorems 4.1 and 4.2 in [2]). However, in those cases, it is proved that the gradient norm is always bounded before $T$ in the probability space, which does not hold in our case. Improving the dependency to $\\\\log(1/\\\\delta)$ here would require a significant advancement in techniques under $\\\\ell$-smoothness and is beyond the scope of this paper.\\n\\n3. **Descriptions of Existing Works:** \\n We have corrected inaccuracies in the descriptions of some existing works. Thank you for pointing these out.\\n\\n4. **Threshold vs. Exact Stepsize:** \\n Regarding the comment about stepsize thresholds (e.g., Theorem 4.5) versus exact choices (e.g., Theorem 4.12), we now provide explicit stepsize choices in the paragraphs following the relevant theorems. However, we have retained the threshold form in some cases as it provides a more accurate description of our results.\\n\\n**References:**\\n\\n[1] Nguyen, Lam M., et al. \\\"A unified convergence analysis for shuffling-type gradient methods.\\\" *Journal of Machine Learning Research*, 22.207 (2021): 1-44. \\n\\n[2] Haochuan Li, Alexander Rakhlin, Ali Jadbabaie. \\\"Convergence of Adam Under Relaxed Assumptions.\\\" *NeurIPS 2023*. \\n\\n[3] Haochuan Li, Jian Qian, Yi Tian, Alexander Rakhlin, Ali Jadbabaie. \\\"Convex and Non-convex Optimization Under Generalized Smoothness.\\\" *NeurIPS 2023*. \\n\\n[4] Wenhan Xian, Ziyi Chen, Heng Huang. \\\"Delving into the Convergence of Generalized Smooth Minimax Optimization.\\\" *ICML 2024*.\"}", "{\"comment\": \"Thank for providing a comparison between [1] and your work. I indeed was very biased, because it seemed to me that the setup you considered led to pessimistic constant. Now I see I was right, but you considered a different assumption.\\n\\nThe reason I was biased is that the result of \\nD Vankov, A Rodomanov, A Nedich, L Sankar, SU Stich, Optimizing (L0,L1) - Smooth Functions by Gradient Methods, https://arxiv.org/abs/2410.10800\\nstates that even in the case leading to the pessimistic constant in your approach it is possible to almost recover the standard rates.\\n\\nYes, the paper was published right after the submission deadline. I just want to say now, that the analysis might be significantly improved.\\n\\nBut, that it is clearly a contribution to consider a stochastic case, but it was done in \\n\\nH. Li, J. Qian, Y. Tian, A. Rakhlin, and A. Jadbabaie. Convex and non-convex optimization under generalized smoothness. In Thirty-seventh Conference on Neural Information Processing Systems, 2023a.\\n\\ngeneralizing the result of (Li et al) to shuffling type methods seems to be an incremental contribution to me.\\n\\nAm I right saying that you basically generalized the work of (Li et all) to shuffling type methods? Having this is true, I am adjusting my scores.\"}", "{\"comment\": \"We thank the reviewer for their constructive feedback. In response, we have completed experiments on the CIFAR-10 image dataset; covered all three shuffling schemes; as well as strongly convex, non-strongly convex, and nonconvex cases. Please refer to our revised manuscript for detailed results and analysis.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your responses. I sincerely appreciate the effort. After consideration, I have decided to maintain my score.\\n\\nBest.\"}", "{\"comment\": \"Thank you for your continuous feedback and constructive suggestions. We have addressed your concerns as follows:\\n\\n1. **Theorem 4.11 and the Variance Assumption:** \\n At present, we are unable to prove Theorem 4.11 without the variance assumption. Previous analyses of convex optimization under $\\\\ell$-smoothness (e.g., Theorem 4.2 in [1]) have only been conducted for gradient descent, where the gradient norm decreases in every iteration. In contrast, our analysis for arbitrary schemes requires a certain bound on the trajectory, which does not hold in random reshuffling. As a result, prior methods are invalid in our case. While we are actively exploring this case, we currently do not have a definitive conclusion.\\n\\n2. **Strongly Convex Case (Theorem 4.8):** \\n Our proof for Theorem 4.9 can be directly generalized to Theorem 4.8, yielding results consistent with [2]. On the other hand, the current version of Theorem 4.8 aligns with Theorem 1 in [2].\\n\\n3. **Contributions Despite Challenges in the Convex Case:** \\n While we acknowledge that resolving the convex case without the variance assumption is challenging (and potentially very difficult), we would like to highlight the significant contributions made in our work. These include advancements in settings of shuffling algorithms, including nonconvex cases, strongly convex cases, and non-strongly convex cases with arbitrary shuffling schemes. Additionally, we have employed variance assumptions that are weaker than any previously used under $\\\\ell$-smoothness assumptions and conducted extensive numerical experiments to validate our findings. We kindly request the reviewer to reconsider the score to reflect the contributions we have made.\\n\\n4. **Assumption 4.1 Link Update:** \\n We have updated the link for Assumption 4.1, and this change will be reflected in the next version.\\n\\nThank you for your time and thoughtful consideration!\\n\\n[1] H. Li, J. Qian, Y. Tian, A. Rakhlin, and A. Jadbabaie. Convex and non-convex optimization under generalized smoothness. In Thirty-seventh Conference on Neural Information Processing Systems, 2023a.\\n[2] Nguyen, Lam M., et al. \\\"A unified convergence analysis for shuffling-type gradient methods.\\\" Journal of Machine Learning Research 22.207 (2021): 1-44.\"}", "{\"comment\": \"We thank the reviewer for their constructive feedback. In response, we have updated the draft to further generalize Theorems 4.9 and 4.12 to cases without any variance assumptions. This update aligns our results with those in [1]. We invite you to review the revised manuscript for details.\\n\\n[1] Nguyen, Lam M., et al. \\\"A unified convergence analysis for shuffling-type gradient methods.\\\" Journal of Machine Learning Research, 22.207 (2021): 1-44.\"}", "{\"comment\": \"Since the reviewer referred to [1] multiple times, we would like to provide a comparison between [1] and our work.\\n\\nOur definition of $\\\\ell$-smoothness is derived from Definition 2 in [2], and it is equivalent to the condition $\\\\|\\\\nabla^2 f(x)\\\\| \\\\leq \\\\ell(\\\\|\\\\nabla f(x)\\\\|)$ almost everywhere, where $\\\\ell$ is a sub-quadratic, non-decreasing function. The generalized smoothness in [1] can be viewed as a special case of $\\\\ell$-smoothness with $\\\\ell(x) = L_0 + L_1 x^\\\\alpha$ for $\\\\alpha \\\\in [0, 1]$.\\n\\nIn terms of proof techniques, [1] uses normalization to handle potentially large gradients, whereas our approach allows for large gradients but ensures that they occur only with a small probability $\\\\delta/2$. We further demonstrate that the gradient norm bound $G$ is not excessively large, as it is independent of $\\\\epsilon$ or $n$ in the nonconvex case. Consequently, the challenges addressed in our work are significantly different from those in [1].\\n\\n**References:**\\n\\n[1] Chen, Z., Zhou, Y., Liang, Y., and Lu, Z. (2023). Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. In International Conference on Machine Learning (pp. 5396-5427).\\n\\n[2] H. Li, J. Qian, Y. Tian, A. Rakhlin, and A. Jadbabaie. Convex and non-convex optimization under generalized smoothness. In Thirty-seventh Conference on Neural Information Processing Systems, 2023a.\"}", "{\"summary\": \"This paper revisits the case of random shuffling-type stochastic gradient methods for finite sum minimization problems. More precisely, the paper considers objectives without the traditional structural assumption of Lipschitz smoothness. In doing so, the authors focus on non-convex, strongly convex and convex objectives which satisfy as a smoothness \\\"surrogate\\\" the notion of $\\\\mathcal{l}-$ smoothness.\\nTo that end, the authors provide respective convergence rates for each respective case.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper is extremely well written and enjoyable to read. Moreover, as far I checked the math seems correct and sound. Without being an expert myself on the respective literature, I find the respective very interesting and challenging from a theoretical standpoint.\", \"weaknesses\": \"My main concerns consider two main factors:\\n\\nFirst, the notion of $\\\\mathcal{l}-$ smoothness introduces additional parameters to be tuned as it becomes apparent from the definitions of the step-sizes in the main theorems. It would be could to include some discussion of how difficult are these to be evaluated both in real life scenarios and in theory. More precisely, do the authors believe that the respective toolbox from the adaptive algorithm like in [1] can be incorporated?\\n\\nSecondly, the proposed step-size policies seem to rely on a prior knowledge on the iteration horizon $T$. Do the authors believe that an any time convergence rate guarantee can be achieved? \\n\\n\\n[1] Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization, Neurips 2022.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"This paper studies the convergence guarantees of shuffling based gradient methods in non-Lipshitz settings. Authors show that, this algorithm with a careful schedule converges under weak assumptions. Authors consider various settings, e.g. nonconvex, convex, assuming bounded variance. They further provide empirical studies for demonstration.\", \"This paper was reviewed by four reviewers and received the following Scores/Confidence: 5/4, 5/4, 5/3, 8/2. I think the paper is studying an interesting topic but authors are not able to convince the majority of the reviewers sufficiently well about the importance of their contributions. The following concerns were brought up by the reviewers:\", \"There were many concerns raised by the reviewers and I appreciate that the authors provided a major revision of their work. However, given the number of points raised, addressing all reviewer concerns would require significant revision, which then requires another set of reviews.\", \"Novelty of technical analysis.\", \"Experimental results are limited and do not sufficiently support main claims.\", \"Convergence guarantees have a weak dependence on the margin.\", \"Three out of four reviewers rejects the paper. As such, based on the reviewers' suggestion, as well as my own assessment of the paper, I recommend not including this paper to the ICLR 2025 program.\"], \"additional_comments_on_reviewer_discussion\": \"There were many concerns raised by the reviewers and I appreciate that the authors provided a major revision of their work. However, addressing all reviewer concerns would require significant revision, which then requires another set of reviews.\"}", "{\"summary\": \"This paper examines the convergence rate of shuffling-type gradient methods without assuming Lipschitz smoothness, achieving results that match the current best-known convergence rates. The theoretical analysis covers non-convex, strongly convex, and non-strongly convex cases under both random reshuffling and arbitrary shuffling schemes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The theoretical analysis is rigorous and considers multiple cases for different function properties.\\n2. The authors discuss the limitations of their work and suggest directions for future research.\", \"weaknesses\": \"My main concern is the experimental section, which feels too limited and simple to fully support the theoretical findings.\\n1. The data used in the experiments is relatively simple. It would be valuable to see if this method remains effective on more complex datasets, such as image datasets or applications with large language models.\\n2. Additionally, since the theoretical analysis includes both random reshuffling and arbitrary shuffling, it would strengthen the paper to show results for both methods compared to the baseline SGD.\\n3. Similarly, since the analysis considers three different cases (non-convex, strongly convex, and non-strongly convex), conducting experiments separately under each case would add depth to the findings.\", \"questions\": \"See the Weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"I thank the authors' feedback. Given the new results, I have increased my score. I cannot give a higher score because the assumption is still strong in the general convex case as explained in the original review.\"}", "{\"comment\": \"We thank the reviewer for the comments. Below we will try to address the concerns and questions of the reviewer.\\n\\n- **Additional Parameters Introduced by $\\\\ell$-Smoothness:** \\n Since the $\\\\ell$ function and the value of $\\\\sigma$ are difficult to obtain in practice, the results in our paper may not provide optimal guidance for choosing hyperparameters. However, the primary contribution of this work lies in providing convergence guarantees under weaker smoothness assumptions.\\n\\n- **Incorporating Adaptive Algorithms:** \\n Regarding the possibility of incorporating the respective toolbox from adaptive algorithms, there has been work on shuffling algorithms with variance reduction techniques, such as in [1]. Moreover, since variance reduction has been successfully combined with generalized smoothness in [2], we believe it is indeed feasible to integrate such techniques into shuffling algorithms. This represents a promising direction for future research.\\n\\n- **Step-Size Policies and Any-Time Convergence Guarantees:** \\n The proposed step-size policies rely on prior knowledge of the iteration horizon $T$. However, the step-size and $T$ can be determined simultaneously at the start of the algorithm, as demonstrated in our parameter choices under Theorem 4.4. Achieving an any-time bound is also possible but may not be very useful. By replacing the iteration horizon $T$ with the current iteration index $t$, the $1 - \\\\delta$ probability and the bound for the average gradient norm (previously $\\\\epsilon^2$) would adjust accordingly.\\n\\n**References:**\\n\\n[1] Malinovsky, Grigory, Alibek Sailanbayev, and Peter Richt\\u00e1rik. \\\"Random reshuffling with variance reduction: New analysis and better rates.\\\" *Uncertainty in Artificial Intelligence. PMLR*, 2023.\\n\\n[2] Li, Haochuan, Alexander Rakhlin, and Ali Jadbabaie. \\\"Convergence of adam under relaxed assumptions.\\\" *Advances in Neural Information Processing Systems 36* (2024).\"}", "{\"comment\": \"Thank you for your valuable feedback on our paper. We have carefully addressed all the points raised and incorporated them into the revised manuscript. To ensure we can further improve the submission based on your insights, we kindly ask if you could provide any additional feedback at your earliest convenience.\"}", "{\"comment\": \"Thanks for the response. While we appreciate the thoroughness of the review process, we are disappointed by the conclusion that the score should be maintained. We believe our experiments have addressed all the concerns you mentioned. If they resolve your concerns, we kindly ask for a **reconsideration of the rating**. If any additional issues remain, please let us know and we would be happy to provide further clarifications.\"}", "{\"comment\": \"Thank you for your valuable feedback on our paper. We have carefully addressed all the points raised and incorporated them into the revised manuscript. To ensure we can further improve the submission based on your insights, we kindly ask if you could provide any additional feedback at your earliest convenience.\"}", "{\"summary\": \"This paper studies the shuffling method under the generalized smooth assumption, which was proposed recently to fit many modern machine learning tasks. The authors proved that, under properly picked parameters, the shuffling method provably converges under the weak smoothness condition for both nonconvex/strongly convex/convex objectives. Numerical experiments are also conducted to support the theory.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I appreciate the paper tries to tackle more realistic problems (i.e., functions with non-uniform smoothness) and studies the shuffling algorithm, an arguably more common scheme in practice.\", \"weaknesses\": [\"**Major points**.\", \"1. All convergence results are proved in a high-probability manner. However, the dependence on the margin $\\\\delta$ is in the order of $\\\\mathrm{poly}(1/\\\\delta)$, which makes the results weak. I also suggest the authors explicitly give the dependence on $\\\\delta$ in the final sample complexity.\", \"2. Some descriptions of the existing works contain wrong facts. **(addressed)**\", \"Lines 90-91, the bounded variance assumption is **not** required to improve the rate $\\\\mathcal{O}(1/T^2)$ to $\\\\mathcal{O}(1/(nT^2))$ in Nguyen et al. (2021). Instead, Nguyen et al. (2021) can handle **unbounded** variance.\", \"Lines 92-93, results in both Mishchenko et al. (2020) and Nguyen et al. (2021) hold under **unbounded** variance condition. The current description is not correct.\", \"3. The conditions on noises, i.e., Assumptions 4.3, 4.4, and 4.7, are strong compared with the existing literature, which significantly reduces the impact of the work. I will elaborate more on this point in the following.\", \"4. Nonconvex part. **(addressed)**\", \"Random reshuffling scheme.\", \"In this case, previous works only consider Assumption 4.7, or even a weaker version, i.e., the non-uniformly bounded variance, to obtain the $\\\\mathcal{O}(\\\\sqrt{n}/\\\\epsilon^{3})$ sample complexity.\", \"However, to recover the same rate, this work requires much stronger Assumptions 4.3 and 4.4 in Theorems 4.5 and 4.6, respectively. Hence, the results in this paper are not directly comparable to prior literature.\", \"When only Assumption 4.7 holds, Corollary 4.8 has extra dependence on $n$ as indicated by the authors.\", \"In addition, I am not sure why Corollary 4.8 is a corollary and cannot find its proof. Did I miss anything?\", \"Arbitrary shuffling scheme.\", \"Again, the authors require stronger Assumption 4.3 to make their sample complexity as good as the previous results. However, the latter can hold under non-uniformly smoothness, e.g., see Nguyen et al. (2021). As such, the claim in Lines 63-64 is misleading.\", \"Moreover, imposing three assumptions on noises may confuse the reader. Especially, the proofs under Assumptions 4.3 and 4.4 are similar as claimed in Lines 860-863. I didn't see a necessary reason for the authors to do so.\", \"4. Strongly convex and convex parts.\", \"Assumption 4.3 is strong and not assumed in previous best-known results, making the contributions weak. **(addressed)**\", \"As far as I know, the previous best results don't need any assumption on the noises for convex problems (Assumption 4.14). Hence, whichever condition among Assumptions 4.3, 4.4, and 4.7 is used, the result is not an improvement in my opinion.\", \"5. The writing can be improved. Some theorems give a threshold on the stepsize (e.g., Theorem 4.5) but others give an exact choice (e.g., Theorem 4.12). Can the author present a unified statement? **(addressed)**\", \"**Minor points**. **(addressed)**\", \"1. Line 290, the second $T=\\\\mathcal{O}(\\\\frac{1}{\\\\epsilon^3})$ should be $\\\\mathcal{O}(\\\\frac{n}{\\\\epsilon^3})$.\", \"2. Line 310, $Delta_1$ should be $\\\\Delta_1$.\"], \"questions\": \"See **Weaknesses**.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Update of draft\", \"comment\": \"We have updated our paper in response to the reviewers' valuable suggestions. Below is a summary of the key changes we have made:\\n\\n- We are now using a weaker noise assumption (Assumption 4.3), aligning with [1]. All the theorems and proofs have been modified accordingly.\\n- The flow of the proofs in the appendix has been reorganized to improve readability.\\n- Typos have been corrected.\\n\\nWe also plan to work on numerical experiments soon, as suggested by reviewer RYK7.\\n\\n[1] Nguyen, Lam M., et al. \\\"A unified convergence analysis for shuffling-type gradient methods.\\\" *Journal of Machine Learning Research*, 22.207 (2021): 1-44.\"}" ] }
2efNHgYRvM
On the Identification of Temporal Causal Representation with Instantaneous Dependence
[ "Zijian Li", "Yifan Shen", "Kaitao Zheng", "Ruichu Cai", "Xiangchen Song", "Mingming Gong", "Guangyi Chen", "Kun Zhang" ]
Temporally causal representation learning aims to identify the latent causal process from time series observations, but most methods require the assumption that the latent causal processes do not have instantaneous relations. Although some recent methods achieve identifiability in the instantaneous causality case, they require either interventions on the latent variables or grouping of the observations, which are in general difficult to obtain in real-world scenarios. To fill this gap, we propose an \textbf{ID}entification framework for instantane\textbf{O}us \textbf{L}atent dynamics (\textbf{IDOL}) by imposing a sparse influence constraint that the latent causal processes have sparse time-delayed and instantaneous relations. Specifically, we establish identifiability results of the latent causal process based on sufficient variability and the sparse influence constraint by employing contextual information of time series data. Based on these theories, we incorporate a temporally variational inference architecture to estimate the latent variables and a gradient-based sparsity regularization to identify the latent causal process. Experimental results on simulation datasets illustrate that our method can identify the latent causal process. Furthermore, evaluations on multiple human motion forecasting benchmarks with instantaneous dependencies indicate the effectiveness of our method in real-world settings.
[ "Causal Representation Learning", "Instantaneous Dependency", "Identification" ]
Accept (Oral)
https://openreview.net/pdf?id=2efNHgYRvM
https://openreview.net/forum?id=2efNHgYRvM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yW7ePyjalE", "yDAA4EYaOf", "x2pepPHba2", "v1JyAwtJp1", "uN8b3aVlpm", "sLkOKLo5wM", "pv8uEU78bG", "g2TwgXwhpK", "ewcDAFd70M", "d7VxrBruVY", "bkrd3xp7Mu", "X5kZfhMEdQ", "U4GsFj5gOg", "T0T5Cloz78", "PMDSR9Fuff", "OAi4n8weVa", "BnVioLBm7n", "9Mg9GES0y9", "82iJJTJC20", "13xavkwbA4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732550995207, 1732117675281, 1732115410555, 1732114453947, 1734555391569, 1732551266317, 1737523670019, 1732115797650, 1730475533822, 1732114736097, 1732114574800, 1732115653271, 1730304253992, 1732279175423, 1732628177438, 1732121910280, 1732114429056, 1732114938618, 1731199084440, 1732635584542 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Reviewer_mQKF" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Area_Chair_wRMJ" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Reviewer_QmDF" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Reviewer_uLFc" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Reviewer_uLFc" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ], [ "ICLR.cc/2025/Conference/Submission4912/Reviewer_mQKF" ], [ "ICLR.cc/2025/Conference/Submission4912/Authors" ] ], "structured_content_str": [ "{\"title\": \"Have the concerns been adequately addressed in the response and revision?\", \"comment\": \"Dear Reviewer QmDF,\\n\\nThank you for dedicating time to review and provide feedback on our submission. We hope our response and revised work effectively address your concerns. If there are additional matters you'd like us to consider, we eagerly await the opportunity to respond.\\n\\nBest regards,\\n\\nAuthors of submission #4912\"}", "{\"title\": \"Response to authors\", \"comment\": \"Dear authors,\\n\\nThank you very much for carefully addressing the concerns. After reading your answers and the updated paper, I can see that my concerns have been covered. I am very excited to see some theoretical results that allow to go beyond the Markov equivalence class on the instantaneous dependences. Hence, I am raising my score to 8.\"}", "{\"title\": \"Response to Reviewer mQKF, Part 4\", \"comment\": \">W2.3: Lines 171-172: Could you indicate whether $p_{\\\\mathbf{c} _t}$ refers to the marginal distribution $p(\\\\mathbf{c} _t)$ or the conditional distribution $p(\\\\mathbf{c} _t|\\\\mathbf{z} _{t-2})$?\\n\\n**A2.3**: Thanks for the question! Here $p _{\\\\mathbf{c} _t}$ refers to the conditional distribution $p(\\\\mathbf{c} _t|\\\\mathbf{z} _{t-2})$. We have changed $p _{\\\\mathbf{c} _t}$ to $p(\\\\mathbf{c} _t|\\\\mathbf{z} _{t-2})$ in Theorem 1 and its counterpart in the appendix for better clarity.\\n\\n>W2.4: Lines 165-188: For better readability, could you indicate $\\\\mathbf{c} _t\\\\in \\\\mathbb{R}^{2n}$ in your example? Otherwise, at first glance it reads as $z _{t,i}, z _{t-1,i}$ for $\\\\mathbf{c} _{t,i}$ in Theorem 1.\\n\\n**A2.4**: We appreciate your efforts in helping us improve the readability. We have incorporated the changes in all theorems, for example, in lines 164, 180-183, 218, etc.\\n\\n>W2.5: line 217: Would it be better to use $\\\\emptyset$ to refer to $\\\\Phi$ as an empty set?\\n\\n**A2.5**: Thanks a lot! For better readability, we have updated it to $\\\\emptyset$ in line 213 of the revised paper.\\n\\n>W2.6: line 230: Could you define \\u201cisomorphic\\u201d for Markov networks? A footnote or reference to the Appendix suffices.\\n\\n**A2.6**: We deeply value your careful reviews for the preciseness of our work. In light of your suggestion, we have defined \\\"isomorphic\\\" for Markov networks. Specifically, we let the $V(\\\\cdot)$ be the vertex set of any graphs. An isomorphism of Markov networks $M$ and $\\\\hat{M}$ is a bijection between the vertex sets of $M$ and $\\\\hat{M}$ \\n\\n$$f:V(M)\\\\rightarrow V(\\\\hat{M})$$\\n\\nsuch that any two vertices $u$ and $v$ of $M$ are adjacent in $G$ if and only if $f(u)$ and $f(v)$ are adjacent in $\\\\hat{M}$.\\n\\nIn light of your suggestions, we have added a footnote on page 5 and added this definition in Appendix B.7.\\n\\n>W3(Q1): line 41: do you mean \\u201cmixing function\\u201d instead of \\u201cmixture function\\u201d?\\n\\n**A3**: Thank you for your careful review, which has improved the clarity and consistency of our paper. You are correct that the 'mixture function' should be the 'mixing function.' In response to your suggestion, we have reviewed the entire paper and replaced the 'mixture function' with the 'mixing function' in line 42 to ensure consistency throughout the text.\\n\\n>W4(Q2): Lines 332, 386 and 387: Notation. You are using $\\\\mathcal{L}$ and $L$ interchangeably. Could you revise this?\\n\\n**A4**: Thanks for your careful review. We have gone through the whole paper and use $\\\\mathcal{L}$ to denote the objective loss. The modifications are made in lines 333 and 387.\\n\\n>W5(Q3): If I am not mistaken, your identifiability theory does not obtain the causal graph, but a markov equivalence of it (please correct if mistaken). Yet apparently, the synthetic experiments suggest that you estimate the instantaneous causal graph with 100% accuracy (Figure 4, bottom left). Could you provide some explanation for this? For example, is it possible that your assumptions allow for stronger identifiability results that are overlooked in the presented theory?\\n\\n**A5**: Thank you for this insightful question, which helps us improve the solidness of our experiment. As discussed in W2.1, the true causal graph is identifiable with mild assumption. The true causal graph of this experiment (Figure 4) satisfies the assumption and is indeed fully identifiable. \\n\\nDue to the limitations of Markdown's expressiveness, we present the causal graph in tabular form as follows. A clearer version of the graph can be found in Figure A6 in line 1359 of the revised paper.\\n\\n| $z _{t-1,1}$ | $\\\\rightarrow$ | $z _{t,1}$ |\\n|-----------|---------|---------|\\n| $\\\\downarrow$ | $\\\\nearrow$ | $\\\\downarrow$ |\\n| $z _{t-1,2}$ | $\\\\rightarrow$ | $z _{t,2}$ |\\n| $\\\\downarrow$ | | $\\\\downarrow$ |\\n| $z _{t-1,3}$ | $\\\\rightarrow$ | $z _{t,3}$ |\\n\\nSince the skeleton and directions of time-delayed edges are straightforward to determine, we primarily focus on analyzing the directions of instantaneous edges within $\\\\mathbb{z} _t$. \\n\\nSince $z _{t-1,3}\\\\rightarrow z _{t,3}\\\\leftarrow z _{t,2}$ is a v-structure, $z _{t,3}\\\\leftarrow z _{t,2}$ can be determined. Since $z _{t-1,1}\\\\rightarrow z _{t,1}\\\\rightarrow z _{t,2}$ is a chain, $z _{t-1,1}$ and $z _{t,2}$ are not adjacent, and $z _{t-1,1}\\\\rightarrow z _{t,1}$ is known, we have $z _{t,1}\\\\rightarrow z _{t,2}$. Thus, the causal graph is identifiable. \\n\\nFor better readability and clarity, we have added this discussion in Appendix B.5. We have also mentioned it in the experiment part (lines 460-462): 'Please note that here, not only the Markov equivalence class but also the causal graph can be identified for dataset A, as shown in Figure 4. Please refer to Appendix B.5 for more details.\"}", "{\"title\": \"Response to Reviewer QmDF, Part 2\", \"comment\": \">W4(Q2): Could the authors clarify the computational complexity of IDOL compared to baselines, especially for high-dimensional data?\\n\\n**A4**: We appreciate this question and the opportunity to elaborate on IDOL's computational complexity in comparison to baseline methods, especially in the context of high-dimensional data. Specifically, we have provided a model efficiency analysis for the proposed IDOL and the baseline methods in Appendix F of the revised manuscript.\\n\\nSpecifically, we have provided model efficiency comparison on the low-dimension dataset (e.g., the Human dataset) and the high-dimension dataset (the MoCap dataset) by evaluating the model efficiency from three aspects: forecasting performance, training speed, and memory footprint. We found that in low-dimensional datasets, IDOL performs nearly as efficiently as top methods like Autoformer and MICN and outperforms others like CARD. Meanwhile, in the dataset with 117 observations, whose dimension is much higher, the proposed IDOL method requires more training time due to the added complexity of Jacobian calculations for instantaneous effects. When dealing with high-dimensional datasets, such as pixel-level images or videos, where the dimensionality is high yet often redundant, a common strategy is to first utilize pre-trained low-dimensional representations of the pixels.\"}", "{\"metareview\": \"The authors propose a framework to identify temporally causal relations with instantaneous dependencies. The three reviewers all voted to accept the paper, noting that the problem is important and motivated well and that the incorporation of instantaneous effects is a significant contribution. During the discussion phase, two reviewers raised their scores due to the satisfactory additional work presented to address their comments. The authors are encouraged to incorporate additional information in the final draft if space allows.\", \"additional_comments_on_reviewer_discussion\": \"Two of the reviewers raised their scores in response to the additional information provided by the authors. All three voted Accept.\"}", "{\"title\": \"Could you please let us know whether our responses and updated submission properly addressed your concerns?\", \"comment\": \"Dear Reviewer uLFc,\\n\\nThank you for your valuable time in reviewing our submission and for your insightful suggestions to make experiments more complete. We've tried our best to conduct the experiments and address your concerns in the response and updated submission. Due to the limited rebuttal discussion period, we eagerly await any feedback you may have regarding these changes. If you have further comments, please kindly let us know--we hope for the possible opportunity to respond to them.\\n\\nMany thanks,\\n\\nAuthors of submission #4912\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"title\": \"Response to Reviewer uLFc, Part 2\", \"comment\": \">W4 (Q2): Can you please comment on the performance of your method in low-sample regimes?\\n\\n**A4**: Thanks a lot for your question! We are not sure if you mean low-sample resolutions of the time series data or small sample size by low-sample regimes. To better address your concerns, we have considered these two cases as follows:\\n\\n- **Small sample size**: We first consider the case with a small sample size by low-sample regimes. In light of your questions, we have evaluated how the sample size influences the identification performance, and we have conducted experiments on synthetic data. Specifically, we create subsets of Dataset A containing 1,000, 10,000, 25,000, and 50,000 samples. For each dataset, we use the same hyperparameters, such as learning rate, random seed, and batch size. We repeated the experiment three times with different random seeds and reported mean and variance. As shown in the following table, as the sample size decreases, the performance of the model gradually decreases. However, even with 1,000 samples, our method can still achieve relatively good performance, which proves that our method has good robustness even in low-sample data sets. In light of your question, we have added experiment results and discussion to Appendix G.3.\\n\\n\\n| Sample Size | 1000 | 10000 | 25000 | 50000 |\\n|:-----------:|:------------:|:------------:|:------------:|:------------:|\\n| MCC | 0.853(0.102) | 0.884(0.011) | 0.912(0.025) | 0.945(0.023) |\\n\\n\\n\\n- **Low resolution**: Sequentially, we consider another case where the time series data are sampled with low resolutions [4], where additional edges are introduced into the Markov network, making it denser. In this case, our performance will drop as the identifiability becomes harder. We can further assume a sparse mixing process to achieve identifiability. Specifically, when conditioned on historical information that provides sufficient changes, the sparse mixing procedure assumption imposes structural constraints on the mapping from estimated to true latent variables. This compensates for potentially insufficient distribution changes, enabling identifiability even when the time series data has low-sample regimes. To further evaluate this insight, we also conduct experiments on a synthetic downsampled dataset with a sparse mixing function. As shown in the following Table, when we constrain the sparsity of the mixing process, our method can still achieve good performance. In light of your question, we have added this discussion to Appendix G.4.\\n\\n| Model | IDOL+Sparse Mixing Constraint | IDOL |\\n|:-----:|:-----------------------------:|:------------:|\\n| MCC | 0.837(0.078) | 0.786(0.085) |\\n\\n[4] Danks, D. and Plis, S. Learning causal structure from undersampled time series. In JMLR: Workshop and Conference Proceedings, 2014.\"}", "{\"summary\": \"The paper presents a framework, IDOL (identification of instantaneous Latent Dynamics), for identifying temporally causal representation with instantaneous dependencies. IDOL employs a sparse latent process assumption, which is more adaptable to real-world data. The framework is validation through extensive experiments on both synthetic and rea-world human motion datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper introduces a novel approach to identifying temporally causal representations in time series data with instantaneous dependencies. This approach addresses a gap by proposing a sparse latent process assumption that is more practical for real-world applications than previous assumptions.\\n\\nExtensive evaluations are performance to demonstrate the effectiveness of the proposed approach.\\n\\nThe paper is well-organized. The use of illustrative figures helps clarify the complex concepts.\", \"weaknesses\": \"Providing further discussions on the possibility of extending IDOL to handle high-dimensional data can be beneficial.\\n\\nGiven the limitation due to the dependency on invertible mixing processes, providing guidelines for real-world applicability would add value.\", \"questions\": \"How does IDOL handle cases where the latent process sparsity assumption is only partially met?\\n\\nCould the authors clarify the computational complexity of IDOL compared to baselines, especially for high-dimensional data? \\n\\nAre there specific real-world scenarios where IDOL might struggle due to non-invertible mixing processes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer mQKF, Part 2\", \"comment\": \">W1.2: Scalability to High-Dimensional Data: The authors acknowledge limitations with respect to high-dimensional data, which can restrict the application to real-world scenarios. An experiment to understand how high-dimensional one can go with IDOL would be ideal to support your point to understand how high one could go for IDOL.\\n\\n**A1.2**: We sincerely appreciate the insightful comment about the importance of evaluating the scalability of our approach with respect to high-dimensional data. Although the difficulty of causal process identification increases with dimensionality [1,2], some identification can still be achieved. In light of your suggestions, we have conducted experiments on the simulation and real-world datasets with varying dimensionality for a better understanding of the limits of IDOL\\u2019s performance in high-dimensional scenarios.\\n\\nAs for the experiment on the synthetic datasets, we follow the same data generation process to generate simulation datasets with latent variable dimensions of 8, 16, 24, and 32. All these datasets share a similar latent causal process: chain-like instantaneous effects and ono-to-one temporal effects. We then measured the Mean Correlation Coefficient (MCC) between the ground truth $\\\\mathbb{z} _t$ and the estimated $\\\\hat{\\\\mathbb{z}}_t$. The experimental results are presented in the following table.\\n\\n| Dimension | 8 | 16 | 24 | 32 |\\n|:---------:|:------:|:------:|:-------:|:------:|\\n| MCC | 0.9801(0.002) | 0.9747 (0.0029) | 0.9243 (0.0173) | 0.8640 (0.0071) |\\n\\nAccording to the results, as the dimension of latent variables increases, the value of MCC is still acceptable, despite possible performance loss in high-dimensional problems.\\n\\nBesides, we have also conducted experiments on a high-dimensional real-world dataset CMU-MoCap. The dataset contains various motion capture recordings and 117 skeleton-based measurements. Here are the results.\\n\\n| Motion | Predicted Length | IDOL | | TDRL | | CARD | | FITS | | MICN | | iTransformer | | TimesNet | | Autoformer | |\\n|:-------:|:-----------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:------------:|:-----:|:--------:|:-----:|:----------:|:-----:|\\n| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\\n| Running | 50-25 | 0.110 | 0.082 | 0.448 | 0.108 | 0.998 | 0.135 | **0.076** | **0.064** | 0.658 | 0.135 | 0.458 | 0.106 | 2.616 | 0.202 | 1.033 | 0.179 |\\n| | 50-50 | 0.286 | 0.109 | 1.779 | 0.167 | 2.217 | 0.153 | **0.266** | **0.103** | 0.978 | 0.161 | 1.831 | 0.170 | 4.720 | 0.252 | 3.944 | 0.283 |\\n| Soccer | 50-25 | **0.022** | **0.043** | 0.026 | 0.047 | 0.206 | 0.082 | 0.076 | 0.065 | 0.057 | 0.066 | 0.032 | 0.051 | 0.063 | 0.068 | 0.211 | 0.105 |\\n| | 50-50 | **0.079** | **0.071** | 0.084 | 0.073 | 0.397 | 0.108 | 0.265 | 0.103 | 0.284 | 0.120 | 0.133 | 0.082 | 0.392 | 0.107 | 0.452 | 0.143 |\\n\\nAs shown in the table above, the IDOL model achieved a comparable forecasting performance in the high-dimensional dataset Running.\\n\\nIn addition, we further provided some potential solutions to handle high-dimensional scenarios. One idea is to make use of the divide-and-conquer strategy. One possible way is to leverage independent relations in the measure of time series data, if any. For instance, if processes $X _1:=${$x _{t,1} \\u2223t\\\\in T$} and $X _2:=${$x _{t,2} \\u2223t\\\\in T$} happen to be independent of processes $X _3$ and $X _4$, then we can just learn the underlying processes for $(X _1, X _2)$ and $(X _3, X _4)$ separately. Another potential way is to use the conditional independent relations in the measured time series data. For example, if processes $X _1$ and $X _2$ are independent from $X _3$ and $X _4$ given $X _5$ and $X _6$, then we can just learn the underlying processes for $(X _1, X _2, X _5, X _6)$ and $(X _3, X _4, X _5, X _6)$ separately. In this way, we can reduce the search space and further reduce the complexity even in high-dimensional time series data. We hope that some other developments in reducing the computational load in deep learning can also be helpful.\\n\\nThank you again for your valuable suggestions. We have added this discussion to Appendix G to enhance the integrity of our paper.\\n\\n[1] Lopez, Romain, et al. \\\"Large-scale differentiable causal discovery of factor graphs.\\\" Advances in Neural Information Processing Systems 35 (2022). \\n[2] Cheng, Yuxiao, et al. \\\"CUTS+: High-dimensional causal discovery from irregular time-series.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 10. 2024.\"}", "{\"title\": \"Response to Reviewer mQKF, Part 1\", \"comment\": \"Dear Reviewer mQKF, thank you for your insightful and constructive feedback. Your comments have significantly helped us refine the rigor and completeness of our theoretical analysis. Additionally, your suggestions have guided us to improve the clarity and readability of our writing, making our work more accessible to a broader audience. We greatly appreciate the time and effort you dedicated to reviewing our submission.\\n\\n>W1.1: Computational Complexity: The sparsity constraint introduced in Eq. (11) seems to introduce significant computational complexity to the algorithm. The paper would benefit from a more detailed analysis regarding this. For example, would it be possible to compute wall-clock times (in training) for IDOL in comparison the proposed baselines?\\n\\n**A1.1**: Thank you for this valuable suggestion. We appreciate your focus on the practicality of training time, which is indeed a crucial factor for the usability of our approach. In response to your feedback, we have included the wall-clock training times for the proposed IDOL method and the baseline methods in Appendix F.1 of the revised version.\\n\\nFor the wall-clock training times of different methods, we used a consistent hardware setup, including the same GPU, CPU, and memory configurations, to ensure comparability. To measure the actual time for training, we wrapped the training process in a timer within our code. The Python pseudo-code is provided below:\\n```python\\n import time\\n start_time = time.time()\\n model.train(training_data) # Model training process\\n end_time = time.time()\\n wall_clock_time = end_time - start_time # Calculate wall-clock time in seconds\\n```\\nTo ensure reliability, we run these codes three times and average the results, which helps to smooth out any minor variations due to random factors in the environment. The wall-clock training of the proposed IDOL model and other baselines on the Walking dataset are shown in the following table. \\n\\n| Methods | IDOL | TDRL | CARD | FITS | MICN | iTransformer | TimesNet | Autoformer |\\n|---------|----------------|---------------|---------------|---------------|----------------|----------------|-----------------|----------------|\\n| Second | 65.960(7.556) | 33.902(5.781) | 62.994(3.035) | 92.941(0.715) | 45.547(13.034) | 38.155(6.441) | 324.941(25.286) | 51.648(15.673) |\\n\\n\\nAccording to the experiment results,\\n\\n- Compared to our baseline model TDRL, the wall-clock training time of the proposed IDOL is nearly twice as slow. Theoretically speaking, the primary computational difference is that IDOL calculates the Jacobian matrix for both time-delayed and instantaneous relationships, while TDRL only calculates the time-delayed component.\\n- Compared to other mainstream baselines, our IDOL method is slower than some models like MICN and iTransformer. However, our method is still faster than models like FITS.\"}", "{\"title\": \"Response to Reviewer uLFc, Part 1\", \"comment\": \"Dear Reviewer uLFc, thank you for taking the time and effort to review our paper. Your valuable suggestions have been instrumental in helping us improve the scalability of our work, i.e., extending its applicability to noisy environments and low-sample regimes. We deeply appreciate your insightful feedback and thoughtful comments.\\n\\n>W1 (Q1): The model assumes an invertible mixing process to reconstruct latent causal structures, which may not always be feasible in real-world data. In some scenarios, particularly in non-linear and noisy environments, this assumption could lead to inaccurate or incomplete latent representations, potentially undermining the model\\u2019s performance and causal interpretability. Can you please comment on the performance of your method in noisy environments?\\n\\n**A1**: That is a great point! Thank you for this insightful suggestion! Indeed, if the mixing process is very noisy in each measured observed variable, for instance, in financial data, our model assumption is violated [1]. There are some developments that rely on the additive noise assumptions. If one makes strong assumptions on the noise, says, by assuming an additive noise model, then it is possible to develop a certain type of identifiability, just like the extension of nonlinear ICA to the additive noise model case in [2]. However, general approaches to dealing with non-parametric noise terms are still to be developed in this field in the future. In light of your question, we have conducted experiments on the synthetic datasets with different noise scales (scale=0.1 means that the variance of the noise is 0.1 times the variance of the observation), which are shown as follows.\\n\\n\\n| Noise Scale | 0.0 | 0.1 | 0.3 | 0.5 | 0.7 |\\n|:-----------:|:------:|:------:|:------:|:------:|:------:|\\n| MCC | 0.9645 | 0.9257 | 0.8362 | 0.7381 | 0.6095 |\\n\\n\\nThe experimental results show that our method can still achieve relatively good identifiability results in a noisy environment of a reasonable noise scale. We have added this discussion in Appendix G.2.\\n\\n[1] Hu, Yingyao, and Susanne M. Schennach. \\\"Instrumental variable treatment of nonclassical measurement error models.\\\" Econometrica 76.1 (2008): 195-216.\\n\\n[2] Khemakhem, Ilyes, et al. \\\"Variational autoencoders and nonlinear ica: A unifying framework.\\\" International conference on artificial intelligence and statistics. PMLR, 2020.\\n\\n>W3: Furthermore, IDOL\\u2019s effectiveness heavily depends on the assumption of a sparse latent process. In cases where this sparsity assumption does not hold (i.e., when the causal structure is dense or complex), IDOL\\u2019s performance degrades, as demonstrated in the experiments. This sensitivity suggests that the framework may be less robust in scenarios where latent processes are highly interconnected.\\n\\n\\n\\n**A3**: Thank you for the good question! When the causal structure is dense, the identification degree depends on how the graph structure is dense. Let us consider the extreme situation: if there is only time-delay causal inference, which is very dense, and there are no instantaneous relationships, then we can still recover all the latent processes and the whole graph according to previous results [3]. If the instantaneous relationships are also dense, the problem becomes much more complex. It depends on the couple relationships of the time-delay and the instantaneous relations, as discussed at the end of Section 3.4. Specifically, for the latent variables, whose intimate neighbor set is an empty set, then those latent variables can be component-wise identifiable. For the latent variables, whose intimate neighbor set is not empty, each true variable can be a function of, at most, an estimated version of its corresponding variable and those within the intimate set. \\n\\nWe hope that our responses answer to your questions. \\n\\n[3] Yao, Weiran, Guangyi Chen, and Kun Zhang. \\\"Temporally disentangled representation learning.\\\" Advances in Neural Information Processing Systems 35 (2022): 26492-26503.\"}", "{\"summary\": \"This paper introduces a framework called IDOL (Identification framework for Instantaneous Latent dynamics) to enhance temporally causal representation learning for time series data with instantaneous dependencies. Traditional approaches for identifying latent causal processes in time series data often assume that the latent causal variables lack instantaneous interactions, limiting real-world applicability. IDOL addresses this limitation by applying a sparsity constraint on causal influences, allowing for both time-delayed and instantaneous dependencies in latent variables. The IDOL frameworkassumes a sparse influence within latent causal processes, allowing both time-delayed and instantaneous relations. Unlike prior methods that require data interventions or predefined groupings to achieve identifiability, IDOL elies on this sparse latent structure alone, making it highly applicable to scenarios where interventions are impractical. The framework\\u2019s theoretical foundation is built on leveraging sufficient variability and temporal contextual information, establishing identifiability through a combination of variational inference and sparsity regularization. This enables the model to accurately reconstruct latent variables and the underlying causal relationships without complex external assumptions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed IDOL framework moves beyond traditional methods that often rely on grouping of variables or direct interventions, by introducing a sparse influence assumption to capture the natural sparsity in many real-world datasets. This approach is novel in handling instantaneous dependencies without requiring interventions or grouping. Furthermore, the paper demonstrates rigorous theoretical and empirical quality, supported by a well-founded identifiability proof and a solid mathematical framework. Experimental validation on both synthetic and real-world human motion datasets further underscores the robustness and reliability of the model, showcasing its ability to accurately identify causal relationships and achieve high predictive accuracy is synthetic and real-world datasets. The paper is overall clearly written and easy to follow. Overall, this work is significant for the field, since causal discovery for time series with instantaneous effects is an important open problem.\", \"weaknesses\": \"The model assumes an invertible mixing process to reconstruct latent causal structures, which may not always be feasible in real-world data. In some scenarios, particularly in non-linear and noisy environments, this assumption could lead to inaccurate or incomplete latent representations, potentially undermining the model\\u2019s performance and causal interpretability. Furthermore, IDOL\\u2019s effectiveness heavily depends on the assumption of a sparse latent process. In cases where this sparsity assumption does not hold (i.e., when the causal structure is dense or complex), IDOL\\u2019s performance degrades, as demonstrated in the experiments. This sensitivity suggests that the framework may be less robust in scenarios where latent processes are highly interconnected.\", \"questions\": \"Can you please comment on the performance of your method in noisy environments and low-sample regimes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Could you please let us know whether our responses and updated submission properly addressed your concerns?\", \"comment\": \"Dear Reviewer uLFc,\\n\\nWe would like to express our sincere gratitude for your time to review our manuscript. As the rebuttal discussion period is coming to an end, we are eagerly anticipating any additional feedback you may have. We are thrilled to engage in further discussions with you.\\n\\nBest regards,\\n\\nAuthors of submission 4912\"}", "{\"comment\": \"Thank you for your reply. After reading your reply as well as the other reviewers comments, I decided to raise my score accordingly.\"}", "{\"comment\": \"Dear Reviewer mQKF,\\n\\nWe are delighted that you found the response well addressed your concerns. Thank you once again for your valuable comments and suggestions!\\n\\nWith best wishes,\\n\\nAuthors of submission #4912\"}", "{\"title\": \"Response to Reviewer QmDF, Part 1\", \"comment\": \"Dear Reviewer QmDF, we are very grateful for your valuable comments, helpful suggestions, and encouragement. Your insights into the further application of our IDOL model in real-world scenarios have greatly helped us bridge the gap between theory and practice. We provide the point-to-point response to your comments below and have updated the paper accordingly.\\n\\n>W1: Providing further discussions on the possibility of extending IDOL to handle high-dimensional data can be beneficial. \\n\\n**A1**: Thank you for your suggestion. We would like to kindly emphasize that our theorem is applicable to scenarios of any dimensionality despite possible performance loss in high-dimensional problems owing to the complexity of causal structure [1,2].\\n\\nTo better address this challenge, we have proposed several potential solutions to more effectively address the challenges of high-dimensional time-series data. One idea is to make use of the divide-and-conquer strategy. One possible way is to leverage independent relations in the measure of time series data, if any. For instance, if processes $X _1:=$ { $x _{t,1} \\u2223t\\\\in T$} and $X _2:=$ { $x _{t,2} \\u2223t\\\\in T$} happen to be independent of processes $X _3$ and $X _4$, then we can just learn the underlying processes for $(X _1, X _2)$ and $(X _3, X _4)$ separately. Another potential way is to use the conditional independent relations in the measured time series data. For example, if processes $X _1$ and $X _2$ are independent from $X _3$ and $X _4$ given $X _5$ and $X _6$, then we can just learn the underlying processes for $(X _1, X _2, X _5, X _6)$ and $(X _3, X _4, X _5, X _6)$ separately. In this way, we can reduce the search space and further reduce the complexity even in high-dimensional time series data. \\n\\nIn light of your constructive suggestion, we have added this discussion to Appendix G.1.3, and a unified strategy is to be developed. \\n\\n[1] Lopez, Romain, et al. \\\"Large-scale differentiable causal discovery of factor graphs.\\\" Advances in Neural Information Processing Systems 35 (2022). \\n[2]Cheng, Yuxiao, et al. \\\"CUTS+: High-dimensional causal discovery from irregular time-series.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 10. 2024. \\n\\n>W2 and W5(Q3): Given the limitation due to the dependency on invertible mixing processes, providing guidelines for real-world applicability would add value. and Are there specific real-world scenarios where IDOL might struggle due to non-invertible mixing processes?\\n\\n\\n**A2**: Thanks for your valuable suggestion. Indeed, in certain scenarios, it is possible for the mixing process not to be invertible, for instance, if the mixing process is highly noisy in each observed process, which might be the case in financial data [3]. We hope the community is able to address this problem in the near future. However, even if the observed processes seem highly dependent or even redundant, the mixing processes can still be invertible. Consider the following example. Suppose the underlying hidden processes are $z _1$ and $z _2$ and assume we observe three processes $x _1 = z _1+z _2, x _2 = z _1 - z _2$, and $x _3 = z _1 + 0.5 \\\\times z _2$. Then, after preprocessing the observed processes with dimension reduction, we have two transformed observed processes, and the mixing procedure becomes square (with the same number of latent processes and observed processes) and invertible. \\n\\n[3] Hu, Yingyao. \\\"The econometrics of unobservables: Applications of measurement error models in empirical industrial organization and labor economics.\\\" Journal of econometrics 200.2 (2017): 154-168.\\n\\n>W3(Q1): How does IDOL handle cases where the latent process sparsity assumption is only partially met?\\n\\n**A3**: Thank you for raising this important point. We hope you will find the discussion at the end of Section 3.4 helpful. Specifically, even if the sparsity assumption is only partially met, \\\"each true variable can still be a function of, at most, an estimated version of its corresponding variable and those within the intimate set.\\n\\nDepending on the proposes, identifiability up to the subspace level can be sufficient, because we may be able to consider them as parts of a single macro variable. Let us provide a simple example here. In a video of a moving car, it might be hard to have individual identifiability of the separate car wheels and car body; however, they can be considered as essential parts of the macro variable 'car'. This macro representation might be sufficient for the purpose of modeling the interactions between the car and other objects.\\n\\nIn light of your question, we have added the example of a moving car video to the end of Section 3.4 to improve our readability.\"}", "{\"title\": \"Response to Reviewer mQKF, Part 3\", \"comment\": \">W2.1: **Major Concern: Theory Section Clarity and Limitations**: The paper\\u2019s theoretical claims, particularly around identifiability, would benefit from clarification to avoid potential misunderstandings regarding the nature of identifiability achieved. It appears that IDOL identifies the latent Markov Network rather than the true causal graph for the instantaneous component of the latent dynamics. This is an important distinction, as conditional independence relations allow only for the identification of the Markov equivalence class, not the directed causal structure itself. However, the presentation throughout the paper, especially in the introduction, experiments (such as Figure 4), and conclusions, may lead readers to infer that IDOL identifies the causal graph rather than the Markov network.\\n\\n\\n>To address this issue, the authors could consider the following changes:\\n\\n>Introduction (around line 89): Indicate that the identifiability of the instantaneous structure in IDOL is only up to a Markov equivalence class, clarifying that IDOL does not identify the directions of edges in the instantaneous part.\\n\\n>Figure 1c Modification: Consider modifying Figure 1c to remove the arrow pointers from edges, signaling that the result is a Markov network rather than a causal graph when discussing identifiability (this might make sense in terms of theory, but not from a data generation perspective).\\n\\n>Conclusion: Mention the Markov equivalence class limitation explicitly. This would open a path for further research to extend the identifiability result from Markov equivalence to the full causal structure, especially given the promising empirical results observed in Figure 4.\\n\\n**A2.1**: Thank you very much for your so insightful, helpful, and constructive reviews. To clarify, we mean that the latent causal relationships are (partially) identifiable. Due to the presence of temporal information, it is possible to go beyond the equivalence class. With certain additional assumptions, one can achieve full identifiability of the graph over the latent processes. Specifically, here are some take-home messages:\\n- **Time-delayed edges**: All directions of time-delayed edges can be naturally determined, as the direction of time is known.\\n- **Instantaneous edges**: For any pair of adjacent latent variables $z_{t,i}, z_{t,j}$ at time step $t$, if their time-delayed parents are not identical, i.e., $Pa _ d(z _{t,i})\\\\not=Pa _ d(z _ {t,j})$, the direction of edge between them becomes identifiable.\\n\\nIn light of your valuable comments, we have carefully followed your suggestions and made the necessary changes. \\n- Line 21-24 in the abstract: '..., we establish identifiability results of the latent causal process up to a Markov equivalence class ... We further explore under what conditions the identification can be extended to the causal graph.'\\n- Line 95-97 in the introduction: '..., which implies the identification of Markov equivalence class. Furthermore, we can extend to the identification of the causal graph when the endpoints of instantaneous edges do not share identical time-delayed parents.' \\n- Line 54-63 in Figure 1(c), we did not modify the figure since in this case the causal graph is identifiable.\\n- Line 528-531 in the conclusion: 'This paper proposes a general framework for time series data with instantaneous dependencies to identify the latent variables and latent causal relations up to the Markov equivalence class. Furthermore, with mild assumption, the causal graph is also identifiable.'\\n- Appendix B.5: We have provided a detailed discussion about the conditions under which the causal graph is identifiable.\\n\\nThank you once again for your valuable comments and suggestions, which have greatly helped clarify our theoretical results and contribute meaningfully to the field.\\n\\n>W2.2: lines 130-132: \\u201cthe latent causal relations are also immediately identifiable because conditional independence relations fully characterize instantaneous causal relations in an instantaneous causally sufficient system\\u201d. I don\\u2019t think this line is correct without any additional assumptions. Conditional independence relations only provide the Markov equivalence class, not the exact causal graph, without further assumptions. Rephrasing this to accurately reflect the distinction between the Markov equivalence class and the causal graph would strengthen the theoretical foundation.\\n\\n**A2.2**: Thank you for your careful reading and correct comment. As discussed in Weakness 2.1, here we mean that 'the latent causal process is (partially) identifiable,' and an extra assumption is required to achieve full identifiability. \\n\\nTo enhance the clarity and accuracy of our work, we have rephrased the statement in Lines 128\\u2013130 as \\\"... up to a Markov equivalence class. We further show how to go beyond the Markov equivalence class and identify the instantaneous causal relations with a mild assumption in Corollary A2\\\"\"}", "{\"summary\": \"This paper proposes IDOL, a framework for achieving identifiability in sequential latent variable models with instantaneous dependencies. The authors establish identifiability up to permutation of the latent variables and demonstrate that the underlying causal graph can be identified up to its Markov equivalence class (if this interpretation is correct). They thoroughly discuss the limitations of their assumptions in comparison to recent works, which helps underscore the significance of the proposed framework.\\n\\nAn estimation method is also introduced, with experiments on synthetic data verifying the theoretical results, while real-world experiments highlight the importance of incorporating instantaneous dependencies.\", \"edit\": \"All the major concerns have been addressed in the rebuttal, and hence I have raised my score to 8.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The manuscript is clear in terms of motivating the problem and introducing the theoretical framework.\", \"Incorporation of instantaneous effects into sequential latent variable models is a very significant contribution.\", \"The paper discusses limitations of the assumptions in comparison to recent works.\", \"The experiments with real-world data motivate the incorporation of instantaneous effects.\"], \"weaknesses\": [\"**Minor Concerns**\", \"**Computational Complexity:** The sparsity constraint introduced in Eq. (11) seems to introduce significant computational complexity to the algorithm. The paper would benefit from a more detailed analysis regarding this. For example, would it be possible to compute wall-clock times (in training) for IDOL in comparison the proposed baselines?\", \"**Scalability to High-Dimensional Data:** The authors acknowledge limitations with respect to high-dimensional data, which can restrict the application to real-world scenarios. An experiment to understand how high-dimensional one can go with IDOL would be ideal to support your point to understand how high one could go for IDOL.\", \"**Major Concern: Theory Section Clarity and Limitations**\", \"The paper\\u2019s theoretical claims, particularly around identifiability, would benefit from clarification to avoid potential misunderstandings regarding the nature of identifiability achieved. It appears that IDOL identifies the latent Markov Network rather than the true causal graph for the instantaneous component of the latent dynamics. This is an important distinction, as conditional independence relations allow only for the identification of the Markov equivalence class, not the directed causal structure itself. However, the presentation throughout the paper, especially in the introduction, experiments (such as Figure 4), and conclusions, may lead readers to infer that IDOL identifies the causal graph rather than the Markov network.\", \"To address this issue, the authors could consider the following changes:\", \"**Introduction (around line 89):** Indicate that the identifiability of the instantaneous structure in IDOL is only up to a Markov equivalence class, clarifying that IDOL does not identify the directions of edges in the instantaneous part.\", \"**Figure 1c Modification:** Consider modifying Figure 1c to remove the arrow pointers from edges, signaling that the result is a Markov network rather than a causal graph when discussing identifiability (this might make sense in terms of theory, but not from a data generation perspective).\", \"**Conclusion:** Mention the Markov equivalence class limitation explicitly. This would open a path for further research to extend the identifiability result from Markov equivalence to the full causal structure, especially given the promising empirical results observed in Figure 4.\"], \"the_following_specific_statements_in_the_theory_section_could_be_revised_to_improve_clarity_and_accuracy\": [\"lines 130-132: \\u201cthe latent causal relations are also immediately identifiable because conditional independence relations fully characterize instantaneous causal relations in an instantaneous causally sufficient system\\u201d. I don\\u2019t think this line is correct without any additional assumptions. Conditional independence relations only provide the Markov equivalence class, not the exact causal graph, without further assumptions. Rephrasing this to accurately reflect the distinction between the Markov equivalence class and the causal graph would strengthen the theoretical foundation.\", \"lines 171-172: Could you indicate whether $p_{c_t}$ refers to the marginal distribution $p(c_t)$ or the conditional distribution $p(c_t|z_{t-2})$?\"], \"lines_165_188\": [\"For better readability, could you indicate $c_t \\\\in R^{2n}$ in your example? Otherwise, at first glance it reads as $\\\\{z_{t,i}, z_{t-1,i} \\\\}$ for $c_{t,i}$ in Theorem 1.\", \"line 217: Would it be better to use $\\\\emptyset$ to refer to $\\\\Phi$ as an empty set?\", \"line 230: Could you define \\u201cisomorphic\\u201d for Markov networks? A footnote or reference to the Appendix suffices.\"], \"questions\": [\"line 41: do you mean \\u201cmixing function\\u201d instead of \\u201cmixture function\\u201d?\", \"lines 332, 386 and 387: Notation. You are using $\\\\mathcal{L}$ and $L$ interchangeably. Could you revise this?\", \"If I am not mistaken, your identifiability theory does not obtain the causal graph, but a markov equivalence of it (please correct if mistaken). Yet apparently, the synthetic experiments suggest that you estimate the instantaneous causal graph with 100% accuracy (Figure 4, bottom left). Could you provide some explanation for this? For example, is it possible that your assumptions allow for stronger identifiability results that are overlooked in the presented theory?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer uLFc,\\n\\nWe are glad to address your concerns. In light of your treasurable suggestions, we have provided more discussions and experiments about the applicability to noisy environments and low-sample regimes.\\n\\nWith best wishes,\\n\\nAuthors of submission #4912\"}" ] }
2edigk8yoU
Looped Transformers for Length Generalization
[ "Ying Fan", "Yilun Du", "Kannan Ramchandran", "Kangwook Lee" ]
Recent work has shown that Transformers trained from scratch can successfully solve various arithmetic and algorithmic tasks, such as adding numbers and computing parity. While these Transformers generalize well on unseen inputs of the same length, they struggle with length generalization, i.e., handling inputs of unseen lengths. In this work, we demonstrate that looped Transformers with an adaptive number of steps significantly improve length generalization. We focus on tasks with a known iterative solution, involving multiple iterations of a RASP-L operation—a length-generalizable operation that can be expressed by a finite-sized Transformer. We train looped Transformers using our proposed learning algorithm and observe that they learn highly length-generalizable solutions for various tasks.
[ "Transformers" ]
Accept (Poster)
https://openreview.net/pdf?id=2edigk8yoU
https://openreview.net/forum?id=2edigk8yoU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zdcgZb1Jue", "zd3cAGr9xG", "y2etkYF1qL", "vmQB4pYrpk", "t4JjI0Zz1p", "o61iqgOP0U", "lwOtk2QvtQ", "iEHv2U4kXf", "czmRbrFet6", "cJbiHzIh4A", "WfxpeIIvFx", "WY3Gea5VSb", "RN7VNQZ9fE", "QYFmscdh9e", "Ne06gtDuAb", "FSpdkAIk2V", "ALBW4L3TKT", "ACHZBffFKW", "7H6Li5UrrK", "70r6JmVZln", "5tLGWpNRrS", "5LPnSDZjFN" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision" ], "note_created": [ 1730721250176, 1732468030190, 1732170260767, 1732468059865, 1732169959352, 1734665730222, 1732169981834, 1732377337434, 1729806855276, 1732468070031, 1732557611472, 1732469030241, 1732660881796, 1732387034000, 1732170096659, 1732170044229, 1732387197276, 1730693173407, 1730664953905, 1732467988156, 1732223048797, 1737523449032 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_gBs4" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Area_Chair_FF3S" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_gBs4" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_Fhkc" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_gBs4" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_Fhkc" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_3KX9" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_imEP" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_3KX9" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_imEP" ], [ "ICLR.cc/2025/Conference/Submission1367/Authors" ], [ "ICLR.cc/2025/Conference/Submission1367/Reviewer_Fhkc" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": [\"This work studies the efficacy of Looped Transformers for Length Generalization of several algorithmic tasks whose computation complexity is known (as a function of the query length).\", \"The paper proposes the definition of $n$-RASP-L, a generalization of the RASP-L computation model allowing the loop of RASP-L programs. It is shown, under a general framework called full-answer prediction (FAP), that some tasks (Copying binary sequence (allowing duplicates), Parity, and Binary Addition) have their own $n$-RASP-L program with a linear number of steps in problem length.\", \"The authors propose training Looped Transformers (with input injection and curriculum learning) to learn $n$-RASP-L-programmable tasks, where the ground-truth number of steps is known for each task during training. They also propose two variants of inference methods: either we retain the knowledge about the number of steps at inference time (*Oracle*), or we adaptively decide the number of iterations based on the confidence of FAP (*Maximum confidence*).\", \"The proposed method is tested on several algorithmic tasks.\"], \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"S1. The paper is written and organized well. Overall, the presentation of the methodology and empirical results is clear and easy to follow.\\n\\nS2. The idea behind the proposed method is neat and plausible. It is natural to think about adaptively scaling the depth of the model according to the problem length or the problem complexity. This paper successfully implements this idea to solve various interesting algorithmic tasks with the power of Looped Transformers. Also, $n$-RASP-L is an interesting but intuitive generalization of the RASP-L framework by allowing the loops. \\n\\nS3. The proposed answer-generation framework called FAP is also an interesting component of this work. It might be of separate interest to study.\\n\\nS4. The paper presents extensive ablation studies on several components of the proposed method. Also, the empirical results (length generalization performances) are impressive enough to convince the readers about the proposed method\\u2019s efficacy.\", \"weaknesses\": [\"**W1. The definition of $n$-RASP-L (Definition 3.1) can be improved.**\", \"I think the equation \\u201c$T(n): \\\\mathbb{N} \\\\rightarrow \\\\mathbb{N}$\\u201d should be corrected to \\u201c$T: \\\\mathbb{N} \\\\rightarrow \\\\mathbb{N}$\\u201d because $T$ (instead of $T(n)$) is a function of input length $n$ representing the number of steps inside a task-solving $n$-RASP-L program.\", \"In (2), I guess $P\\u2019$ should be a RASP-L program, which is unspecified in the definition.\", \"Should $P$ be decomposed to a sequential application of $P\\u2019$, i.e., $P = (P\\u2019)^{T(n)}$? I don\\u2019t think this is exactly true because there are pre-/post-processing parts inside the proposed $n$-RASP-L programs (in Appendix A). Can the same RASP-L program $P\\u2019$ handle such parts? (It might be true because of the experimental results, but I cannot fully understand this part.) If not, I guess the definition should be modified to include the pre-/post-processing parts. For example, $P = P_{\\\\tt pre} \\\\circ (P\\u2019)^{T(n)} \\\\circ P_{\\\\tt post}$.\", \"**W2. \\u201cGround truth\\u201d number of steps?**\", \"According to Definition 3.1, a program $P$ suffices to be an $n$-RASP-L if a corresponding $T(n)$ exists. Indeed, Propositions 3.2, 3.3, and 3.4 claim and prove the existence of $T(n)$ for the Parity, Copy (with duplicates), and Binary Addition tasks, respectively.\", \"My question is about the uniqueness or optimality of such $T(n)$\\u2019s. There might be a clever way to construct another RASP-L program $\\\\tilde{P}$ so that $P$ can be implemented with $\\\\tilde{T}(n)$ steps of applying $\\\\tilde{P}$, where $\\\\tilde{T}(n)$ is much smaller than the previously known $T(n)$ (e.g., $\\\\tilde{T}(n) \\\\in o(T(n))$). It may happen since there is no uniqueness guarantee or lower bound result on $T(n)$.\", \"If I venture a guess, I would say it might be possible to implement an $O(\\\\log n)$-step $n$-RASP-L solution for the Parity task by using the parallelism of the transformer architecture. Please correct me if I am wrong. Also, I understand if it is impossible to show whether this bold guess is true. If you are interested, there are some (probably) useful references about logarithmic-depth transformers [1,2].\", \"However, the authors keep using the phrase \\u201cground truth number of steps\\u201d throughout the paper, which may lead to misunderstanding that the only way to implement the given $n$-RASP-L program is by using a loop of length $T(n)$.\", \"If two different $T(n)$\\u2019s can be applied to a single $n$-RASP-L-programmable task, it might be interesting to observe whether the model\\u2019s performance changes depending on the choice of $T(n)$.\", \"Furthermore, if multiple choices of $T(n)$\\u2019s exist for a given task, does knowing only one of them suffice to train reasonably performant Looped Transformers? If we know more than one, how should we choose $T(n)$ when we train the model?\", \"**W3. Shouldn\\u2019t we consider the input injection when implementing an $n$-RASP-L program for the given task?**\", \"The input injection seems to be an important component of their experiments. Since it changes the input vectors of each layer, I guess the task-solving algorithm under input injection might be different from that without it.\", \"However, I can\\u2019t see that the $n$-RASP-L programs provided in Appendix A reflect the input injection. As I inspect inside the loop of each program, every iteration only reuses the calculation(s) from the previous iteration right before the current one.\", \"Shouldn\\u2019t we consider the very first input sequence and the result from the previous iteration when implementing the loops? Or is it a valid implementation of input injection? Getting even further, Is there any way to embed the input injection into the $n$-RASP-L programs?\", \"**W4. The proposed training method requires prior knowledge of the task\\u2019s structure.**\", \"The proposed method is limited in that it requires a prior understanding of the structure (e.g., $T(n)$) of the task where we want to train a model. This is because it hinders fully end-to-end training.\", \"Are Looped Transformers still useful for achieving length generalization even when we don\\u2019t (or cannot) know the exact expression of $T(n)$?\", \"Besides, it seems that the depth of the decoder block is determined based on the complexity/difficulty of the subroutine $P\\u2019$ at each step inside the loop (Appendix F). How are they actually chosen? Or, how should we decide the size of the repeating decoder block?\", \"**W5. Some experimental details seem missing or wrong.**\", \"I guess Equation (2) has a typo: shouldn\\u2019t it be arg-main instead of arg-max?\", \"In Binary Addition, it seems that $T$ is chosen to be $n$ (the length of each operand). However, Proposition 3.4 claims that $T(n)=n+1$ for the same task. Why is there a discrepancy between theory and experiment?\", \"In Binary Multiplication, I guess some words are used in a wrong way. In Lines 417-418, I think it should be: \\u201cWe define the problem length to be the **length** of the second **number**, and set $T$ to be the product of the lengths of two **numbers**.\\u201d\", \"In Section 6.1.2, are input injections also applied to NTP-based methods? Also, I\\u2019m not sure why it is fair to compare their method (based on FAP) to NTP methods with the architectural setting \\u201c\\u2026with a depth 20 times the depth of the looped block\\u201d because such depth might be suboptimal for NTP-based methods.\", \"Although the paper indirectly showcases that their adaptive decision of the number of steps works quite well via Figure 5, it would be better to display similar performance plots to Figure 4 (plots based on the \\u201cOracle\\u201d inference) but using the adaptive confidence-based method instead, at least in their appendix.\", \"**W6. Minor writing issues**\", \"Section 4.1, fourth bullet point: I guess $T(n) \\\\in \\\\\\\\{T(1), \\\\ldots, T(n_{\\\\rm max})\\\\\\\\}$ is correct ($T(1)$ instead of $1$).\", \"Equations (1) and (2) have several weird-looking brackets (too many open brackets etc.)\", \"Line 510: Use *less* abbreviations like \\u201cw.r.t.\\u201d\", \"---\", \"**References**\", \"[1] Sanford, Clayton, et al. \\\"Transformers, parallel computation, and logarithmic depth.\\\"\\u00a0ICML 2024.\", \"[2] Sanford, Clayton, et al. \\\"Understanding transformer reasoning capabilities via graph algorithms.\\\"\\u00a0NeurIPS 2024.\"], \"questions\": \"**Q1. Question on the visualization in Figure 3**\\n\\n- Why don\\u2019t the illustrations in the figure contain any \\u201c#\\u201d (EOS) tokens? Is it due to the pre-processing?\\n\\n**Q2. Do the trained Looped Transformers simulate the $n$-RASP-L program?**\\n\\n- Although it might be difficult to reverse-engineer a trained transformer model to figure out what algorithm it actually simulates or implements, it might be interesting if we can observe any kind of similarity between it and the $n$-RASP-L program.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for the careful review and the suggestions! We acknowledge that the original description needs to be clearer for Figure 5 and update the description in Section 4.2 with new results in Figure 9, Appendix D to clarify the ambiguity. Please check the draft with updates marked in blue.\\n\\n===\\n\\n**1. \\u201cPlease report the full results when the stopping criterion\\u201d (paraphrased)**\", \"our_response\": \"The stopping criterion visualized in Figure 5 was chosen for the whole test set. (That\\u2019s why we had one figure per task, and one vertical line representing the optimal stopping time per task.)\\n\\nHowever, inspired by your question, we also tested the same stopping criterion \\u201cper-sample.\\u201d We found that for the converging tasks, both per-sample and per-test-set have similar performance, but for non-converging tasks, the per-sample stopping criterion does not perform well (even for in-distribution). We believe that it is because converging tasks are more tolerant with respect to when to stop, while non-converging tasks are not, as shown in Figure 5.\\n\\nAccordingly, we updated Equation (2) to include an extra hyper-parameter $B$, denoting the number of test samples used to decide when to stop. Using this, we now have results for both $B = 1$ and $B = n_{test}$. \\n\\n===\\n\\nWe hope this addressed your questions/concerns. Please let us know if you have any additional questions or suggestions. Again, we really appreciate your thoughtful comment!\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for the acknowledgment of our work and the detailed comments. We respond to each point in the following:\\n\\n### About limited significance, and comparison to Bansal et al. [1]\\n\\nThank you for the detailed comments! We acknowledge that generalizing to larger images could share certain similarities to generalizing to longer input sequences. However, we would like to emphasize that although similar extrapolation with recurrent networks has been studied in [1] (which uses recurrent convolutional layers instead of Transformers), our work still has significant contributions: 1) We provide the theoretical n-RASP-L framework to motivate why such recurrence would help length generalization, based on specific domain knowledge in Transformers 2) We design Transformer architectures and the training setup according to the proposed theoretical framework and 3) We show empirical success of the proposed method. Showing looped Transformers could help length generalization in our work still requires significant effort. given that the theoretical framework is highly different.\\n\\nAs for requiring the number of predefined steps, we believe that this is a relatively weaker form of side information, compared to the other side information used in other methods such as scratchpad training which is a common and popular approach in the length generalization domain. \\n\\nBesides, in our NTP-Loop baseline, we used a fixed number of loops to do the next token prediction as a baseline, which does not require knowing the number of steps needed. Our experiments show that certain extra side information like # of predefined steps would help the model avoid learning shortcut solutions that fail at length generalization. However, further relaxing the requirements could be interesting for future work. \\n\\nAgain, we thank you for extending the discussion on the comparison with Bansal et al. [1], and would incorporate a more comprehensive discussion than the current introduction and the related work section about this work. \\n\\n[1] End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking\\n\\n### About the stopping criterion and confusion about Figure 5\\n\\nWe want to clarify that the quantities reported in Figure 5 are from the test examples with 100 times the training batch size, which is introduced in the experimental details in Appendix F in our draft (there is no training example). In Figure 5 the pre-defined number of steps is unknown and we just run a maximum number of steps and then select the stopping time based on the confidence of the output.\\n\\n\\n### Minor issues\\n\\nThank you for pointing them out! We will fix them in the revised draft.\"}", "{\"comment\": \"Thank you for your response. We are happy to hear that you\\u2019re willing to support our work.\\n\\nWe just uploaded a revised draft with all the reviewers\\u2019 comments incorporated. Please let us know if you have any further questions.\"}", "{\"title\": \"Rebuttal (1/2)\", \"comment\": \"Thank you for acknowledging our work and providing detailed comments. We respond to each point in the following:\\n\\n### The definition of RASP-L:\\nThank you for pointing them out! We acknowledge all proposed suggestions and will incorporate them into the revised draft.\\n\\n### \\u201cGround truth\\u201d number of steps\\nThank you for the comments and for providing the references on the logarithmic-depth transformers! We acknowledge that we only provide one possible n-RASP-L solution in the draft, and we did not mean that it is the only solution. We will use \\u201cpre-defined number of steps\\u201d to replace the term \\u201cground truth number of steps\\u201d in the revised draft. \\nBesides, it is an interesting question on which to choose if there are multiple n-RASP-L solutions with different T(n) functions. Some solutions might be easier to learn while others could be harder. Thus, the generalization performance might depend on many factors: the training data distribution, the architecture of the looped layers, etc. There might be no general criterion for the choice. In practice, we could use a validation set to choose T(n) with the best validation accuracy.\\n\\n### About input injection\\nThank you for your question. In fact, a single input injection operation could also be embedded in an RASP-L program: Notice that identical mapping could be represented by RASP-L. We can express input injection if we double the size of the embedding space in half and let some MLP layer add the outcomes from two embedding spaces. However, when designing the n-RASP-L solutions for specific programs, we did not find it necessary to add input injection operations, while we found it more useful in practice. We believe that this is because adding explicit input injection helps gradients flow better in looped models, which is a common trick as discussed in [1][2][3]. We also provided the performance with and without input injection in Figure 6, Appendix B, where there is a slight decay in the test accuracy without input injection, but still outperforming other baseline methods.\\n\\n[1] Deep equilibrium models\\n[2] Looped transformers are better at learning learning algorithms\\n[3] End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking\\n\\n### Prior knowledge of the task structure\\nThank you for the comments. First, we tested (although under the NTP scheme) using a fixed number of loops for all problems in Figure 4 \\u201cNTP-Loop\\u201d baseline, where we fixed the number of looped steps as 20. In [4], the authors also use a similar approach with a fixed number of loops. Such methods could be used when we do not know T(n) beforehand. However, doing so does not utilize the side information of pre-defined number of steps, while our experiments show that using such extra side information helps the model avoid learning shortcut solutions that does not length generalize. We also believe that this is a relatively weaker form of side information, compared to the other side information used in other methods such as scratchpad training. \\n\\nAs shown in [5], there is no known way to find the number of layers even if we want the model to learn certain RASP-L programs and they say \\u201cRASP program length does not perfectly correspond to Transformer-complexity\\u201d, and we found it similar in n-RASP-L too. We treat the number of layers inside a loop as a task-dependent hyperparameter of the looped transformer models as in [5].\\n\\n[4] Transformers Can Do Arithmetic with the Right Embeddings\\n[5] What Algorithms can Transformers Learn? A Study in Length Generalization\\n\\n### Experimental confusions\\nWe acknowledge the typo in equation (2) and the description of the multiplication task. We will fix them in the revised draft. \\nFor binary addition, we tested both n and n+1 and they have similar performances, so we presented the results from n. We will also add the results from n+1 in the revised draft.\\nFor Section 6.1.2, we apply input injection to \\u201cNTP-Loop\\u201d since input injection is normally used as a trick for looped models. We chose the 20x larger depth to match the effective depth of the looped model, and the in-distribution performances from those NTP-based models are all near perfect, which shows that at least it is not too deep to learn (i.e., no issue with optimization). We also tried shallower models in NTP-based methods and the performance is not significantly better.\", \"for_adding_adaptive_inference_results\": \"Thank you for the suggestion and we will add them in the final draft.\"}", "{\"metareview\": \"This paper studies length generalization in Transformers through the lens of looped architectures, showing that Transformers with repeated layers can effectively generalize to longer sequences when trained appropriately. The reviewers appreciated the paper's clear writing, empirical evaluation, and theoretical framing of n-RASP-L.\\nWhile some reviewers raised concerns about the requirement of knowing the number of steps T(n) during training, the authors adequately addressed this through additional experiments and discussion.\\nI recommend acceptance, as the reviewers found the paper insightful. For the camera-ready version, I strongly suggest incorporating feedback from the discussion (e.g. clarifying the role of T(n), adding the adaptive-stopping-time results, etc).\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"title\": \"Rebuttal (2/2)\", \"comment\": \"### Do the trained models simulate the n-RASP-L program?\\nWe read out the intermediate predictions for the parity task and found they share similar patterns with our n-RASP-L solution such that the parity location has some periodic changes. For example, for input 1010101010 (length 10, five 0\\u2019s and five 1\\u2019s alternating each other), the prediction parity read out after each step shifts 5 times between 0 and 1 and end with 1; for input with two 1\\u2019s, the output would change two times and end with 0, etc. If we visualize the intermediate embedding space, it also circles as the inference step increases. Although the intermediate reading out does not necessarily make sense, the visualization still shows that it somehow learns an iterative algorithm that is similar to the n-RASL-L program. We will add the visualization results in the appendix of the final draft as another finding.\\n\\n### Questions on Figure 3; Other minor issues\", \"about_the_visualization_in_figure_3\": \"Yes, we mentioned that \\u201cthe inputs are preprocessed\\u201d in the caption of Figure 3.\", \"about_minor_issues\": \"We thank you for pointing them out and will fix them accordingly in the revised draft.\"}", "{\"title\": \"Additional Response\", \"comment\": \"Thank you for writing the rebuttal. Before I leave comments and questions on it, please recall that, in ICLR, the authors can revise and upload their manuscript during the discussion period. I\\u2019m not sure that the authors did so. If the authors don\\u2019t make any revisions until the end of the discussion period, I might lower my score below the acceptance because it makes me believe that they don\\u2019t have much intention to further improve their paper. The reviewers could give additional feedback from the revised version, couldn\\u2019t they?\\n\\nNow, let me provide my further answer. I leave no answers for the rebutted points for which I am almost satisfied.\\n\\n**Definition of RASP-L**\\n\\n- I am looking for the exact revised form of Definition 3.1. For now, it is not clear how the description of the definition will change.\\n\\n**Prior knowledge of the task structure**\\n\\n- I wonder whether the authors will also present the experimental results for FAP + fixed number of loops (like 20), as a demonstration of \\u201cignoring the side information.\\u201d\\n- The authors claim that the \\u201cpredefined number of steps\\u201d $T(n)$ is a weaker form of side information. I\\u2019m a bit suspicious about this statement because, in order to precisely characterize a $T(n)$, it seems that the exact implementation of the $n$-RASP-L program is necessary, which requires the actual algorithm of solving the given task. In fact, this is the same for scratchpadding, although the author\\u2019s work does not precisely teach a Transformer the task-solving rule unlike scratchpad. Although the amount of information given to a model is different, I think the same amount of information is necessary to prepare the training. Considering these, can $T(n)$ still be a weaker form of side information? I would be able to admit this claim if there is a magical way to infer a proper choice of $T(n)$ without knowing the exact task-solving algorithm in the form of $n$-RASP-L program.\\n- Extending from the point above, I express my concern because there might be many general tasks that can\\u2019t be solved with only a single loop. For example, observe that a usual algorithm (performed by humans) for solving the general *integer multiplication* doesn\\u2019t work by simply repeating the same unique job. It looks very hard to specify the \\u201cpredefined number of steps\\u201d for such tasks. Thus, I want to have an author\\u2019s discussion on the cases where the predefined number of steps is not clearly known or can never be obtained; if it is a limitation of this work, the authors should make it clear in their last section of the main text.\\n - After pondering a bit about the multiplication (where both operands\\u2019 lengths can vary), I came up with a possible workaround: simply stacking multiple looped transformer layers, as done by McLeish et al. (2024)! If we break down the multiplication, we first do $N$-digit by $1$-digit multiplication several times (1st loop), shift their digits to the left properly (2nd loop), and add them all (3rd loop): that is, the usual algorithm may be solvable by multiple loops!\\n - With that in mind, if time permits, can you provide the experimental results for multi-looped transformers, especially on general (binary) multiplication, where the lengths of both numbers are the subject of length generalization? I believe that the \\u201cBinary Multiplication\\u201d task in the current paper only considers the length of the second operand. Although McLeish et al. (2024) were not really successful in achieving a significant length generalization for general integer multiplication in the NTP setup, it would be extremely interesting if the multi-looped transformer works for this task in the FAP setup.\\n\\n**Experimental confusions**\\n\\n- Although I understand that it is impossible to run all experiments in an adaptive inference setup, I\\u2019ll be happy if I can see some initial results (at least for one task). Does it show a similar performance as in the Oracle setup?\\n\\n---\\n\\n**References**\\n\\nMcLeish, Sean et al. \\u201cTransformers Can Do Arithmetic with the Right Embeddings.\\u201d NeurIPS 2024.\"}", "{\"summary\": \"Empirically explores the ability of looped Transformers, i.e. Transformers that repeatedly apply the same block of layers, to length-generalize on several algorithmic tasks, including copy, parity, and addition. First, the authors manually derive length-generalizing solutions to the considered tasks through a variant of the RASP language, which they term n-RASP-L. Then, based on these ground truth solutions, they show that looped Transformers length-generalize well when trained with access to the true number of steps required to compute the output for a given input.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is mostly well-written and easy to follow.\\n\\n2. Demonstrates that, given knowledge of the number of steps required to perform a given task, a certain looped Transformer, which jointly predicts the full output sequence, tends to learn a length-generalizing solution. The length-generalizing capabilities of this looped Transformer are shown to surpass baselines that use next-token prediction.\", \"weaknesses\": \"1. The main weakness of the current paper is that the significance of the results is somewhat limited. In particular, I find that it falls somewhere in between works that are practically relevant and those that may not be practically relevant, but improve our understanding of certain phenomena. On the one hand, this work shows empirically that, in some algorithmic tasks for which we already know how to write explicitly a length-generalizing solution (in terms of looped Transformer weights), looped Transformers generalize well to longer lengths, if they have access during training to the number of steps required for solving the task for a given input. Consequently, the practical relevance is limited since the proposed method requires that we already know how to manually write a length-generalizing solution, in which case there is arguably no point in learning. On the other hand, this work does not provide much in terms of understanding why or how looped Transformers are able to length-generalize.\\n\\n Note that one may consider the demonstration of such length generalization to be possible as a main contribution. Yet, the ability to extrapolate through recurrence of layers has been demonstrated in the past, albeit for other architectures (see Bansal et al. 2022 [1], which notably do not require knowing the ground truth number of steps in training).\\n\\n2. A related issue is the usage of ground truth stopping time during inference. The quantities reported in Figure 5 seem to be for a single training example, yet it is not entirely clear. If so, then how does the maximum confidence stopping criterion fair across the dataset? It would be useful to report results similar to those of Figure 4 but when using the proposed stopping criterion as opposed to the ground truth stopping time, which should be unknown.\\n\\nOverall, my assessment of the paper tends towards the positive side, yet it is not a clear accept due to the substantial limitations mentioned above. Specifically, the significance of the contributions can be greatly improved if it would be possible to remove the dependence on knowing the ground truth number of steps required to solve the task for a given input during training (and by how it seems from the current results, during test time as well).\\n\\n\\nAdditional (more minor) comments:\\n- In Definition 3.1, it seems that the intention is for $P\\u2019$ to be some RASP-L program, as opposed to just a program. Otherwise, trivially any program $P$ is an n-RASP-L program by choosing $P\\u2019 = P$ and $T(n) = 1$.\\n- In Equation (2), I believe that the criterion should be an argmin over the cross entropy loss instead of an argmax.\\n\\n\\n[1] Bansal, A., Schwarzschild, A., Borgnia, E., Emam, Z., Huang, F., Goldblum, M., & Goldstein, T. (2022). End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking. Advances in Neural Information Processing Systems, 35, 20232-20242.\", \"questions\": \"1. Are the quantities reported in Figure 5 indeed for a single training example? When using the maximum confidence criterion, how do the results compare to the ones reported in Figure 4 with access to the ground truth number of steps?\\n\\n2. In Bansal et al. 2022, they avoid the need for knowing the exact number of steps during training and inference. Have you tried using similar heuristics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. We are happy to hear that you\\u2019re willing to support our work.\\n\\nWe just uploaded a revised draft with all the reviewers\\u2019 comments incorporated. Please let us know if you have any further questions.\"}", "{\"title\": \"Retaining my score\", \"comment\": \"Thank you so much for revising the manuscript and providing a further response. I am mostly happy with the response and the revised paper.\\n\\n* One minor concern is a partial failure in the \\\"FAP-Loop-Adaptive-Instance\\\" setup, but I don't think this is a problem because individual instances may not capture the whole structure of the problem.\\n* Another minor point is that it might be better to use hats differently in Eq. (2) and equations nearby there: I recommend using $\\\\hat{y}^t_l$ instead of $\\\\hat{y^t_l}$.\\n\\nGiven all the promises of further updates, I keep my score to 6.\"}", "{\"comment\": \"Thank you for providing additional details and experiments regarding the stopping criterion. The updates made to the paper have fully addressed both of the above-mentioned concerns. I therefore would like to maintain my initial positive assessment of the paper.\"}", "{\"comment\": \"Thank you very much for your great suggestions.\\n\\nPer your suggestion \\u201cI wonder whether the authors will also present the experimental results for FAP + fixed number of steps (like 20), as a demonstration of \\u201cignoring the side information\\u201d, we update the results from 3 tasks using full answer prediction with a fixed number of steps (20). We follow the evaluation setup in the draft and present the test accuracy of both in-distribution length and the largest length tested in Figure 4 for each task. We observe that similar to NTP+a fixed number of loops, the performance decreases without using the side information of when to stop. For reference, we also provide the test accuracy of full answer prediction with the pre-defined number of steps as $T(n)$ in the table.\\n\\n| Tasks | # of steps | In-distribution Acc | OOD Acc |\\n|---------------------------|-------------|-----------------|-----------|\\n| Parity | T(n) | 1.0\\u00b10.0 | 1.0\\u00b10.0 |\\n| | Fixed (20) | 1.0\\u00b10.0 | 0.49\\u00b10.05 |\\n| Copy | T(n)| 1.0\\u00b10.0 | 0.95\\u00b10.02 |\\n| | Fixed (20) | 1.0\\u00b10.0 | 0.49\\u00b10.05 |\\n| Addition | T(n) | 1.0\\u00b10.0 | 0.99\\u00b10.01 |\\n| | Fixed (20) | 1.0\\u00b10.0 | 0.49\\u00b10.05 |\\n\\nWe will run more extensive experiments (not just one test length, but the full evaluation as in Figure 4, and not just three tasks, but all six tasks) and incorporate them in our camera-ready version. \\n\\nBesides, we also updated the use of hat in Eq. (2) as suggested in the draft.\\n\\nPlease let us know if you have any further questions!\"}", "{\"comment\": \"Thank you for the response. However, as reviewer gBs4 mentioned, I would like to see the revised paper as ICLR conference allows authors to revise their paper during the rebuttal period.\\n\\nOverall, while the applicability of the proposed approach is limited, I find the paper intriguing and believe that this paper offers an interesting direction in the iterature on length generalization. I will maintain my score.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for acknowledging our work and providing detailed comments. We respond to each point in the following:\\n\\n### About Universal Transformers\\n\\nOur architecture is indeed similar to Universal Transformers (UT), but not equivalent. As discussed in Table 1 in our draft, there are certain design choices and architectural differences compared to UT, which is tailored to our n-RASP-L setup. Notice that an apple-to-apple comparison to the original Universal TF is not trivial due to these differences. The most similar baseline we have is the fixed loop NTP (NTP-Loop), but it is still not exactly the same as UT.\\n\\n### About other comparisons\\n\\nThank you for the suggestions! For this paper, we want to study whether the proposed architecture and training could help length generalization, and stick to NoPE to avoid the effect of different positional encodings (positional encodings could not be represented by RASP-L). This is orthogonal to other tricks like positional embeddings, index hints, and other format changes. Also, most of the format designs are for NTP but not for FAP. However, it would indeed be interesting to change the positional encoding in our current method to see whether the performance could be further improved, and we plan to add such experiments in the final draft.\\n\\n### Depth of the encoder block\\n\\nJust to clarify, we do not have encoder blocks and only use decoder blocks in our paper since RASP-L is for causal models.\\n\\n### Adaptive inference time\\n\\nThere are certain halting mechanisms in Universal Transformers and Ponder Net (as discussed in Table 1 in our draft) that actually learn some weights about adaptive inference time. Our approach is more compatible with our n-RASP-L formulation, and exploring halting techniques could be interesting for future work.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for acknowledging our work and providing detailed comments. We respond to each point in the following:\\n\\n### Limitation of n-RASP-L tasks\\n\\nWe acknowledge that our scope is learning to solve n-RASP-L tasks while utilizing the number of predefined steps during training. our experiments show that using such extra side information helps the model avoid learning shortcut solutions that does not length generalize. We also believe that this is a relatively weaker form of side information, compared to the other side information used in other methods such as scratchpad training. Our work could be viewed as a first attempt in this domain which uses test time scaling in terms of increasing the number of looped steps in Transformers, and potentially relaxing the assumptions could be interesting future work.\\n\\n### Effect of curriculum learning\\n\\nAs shown in [1], curriculum learning does not affect the final performance significantly. It is pretty standard in training Transformers with increasing lengths like [1] and we find it helps speed up the training so we use it as a common trick for all methods we compare. We will add a remark in the experimental section in the revised draft.\\n\\n[1] What Can Transformers Learn In-Context? A Case Study of Simple Function Classes\\n\\n### \\u201cTolerance to step counts\\u201d\\n\\nWe conduct experiments in Parity with 1 and 2 additional steps in training and observe a decay in the performance of longer-length generalization with length 50. The performance in length 30 remains near optimal which shows some extent of robustness. As for more efficient solutions, there might be solutions with fewer steps, but might not just be a constant shift in terms of the number of steps.\\n\\n| Model/Test Length | 10 | 30 | 50 |\\n|-------------------|-----|-----|---------|\\n| Original | 1.0 | 1.0 | 1.0 |\\n| 1 more step | 1.0 | 1.0 | 0.96875 |\\n| 2 more steps | 1.0 | 1.0 | 0.375 |\\n\\n\\n### Question about converging/non-converging behaviors\\n\\nAbout the performance after the number of steps we have in the solutions, there are two cases as discussed in Section 6.4: converging, or not converging. For the tasks with converging behaviors, it might appear that even after the number of steps predefined in our solution, it would still maintain the performance like the copy task. Our conjecture is that during training, the model learns some kind of index hint and refines the output in place in each step (instead of shifting the output location in each step as our n-RASP-L solution), so it tends to find some fixed-point solution with more steps (However, there might also be other n-RASP-L solutions for copy that are different from our solution and potentially with a similar number of steps needed). For other tasks like parity, the model learns something very similar to our n-RASP-L solution and only outputs the right answer at the pre-defined number of steps. We think it is still an interesting open question, and whether the model learns such kind of behavior might be task-dependent. We will add a detailed discussion in the final draft.\\n\\n### Nonlinear T(n)\\n\\nThere might also be solutions for parity with log(n) steps as Reviewer gBs4 pointed out. Further exploring such tasks could be interesting for future work.\\n\\n### Reversed vs not reversed\\n\\nFor binary addition, we found an n-RASP-L solution without reversed inputs so we want to see whether our training can achieve this given that most works focus on the reversed output in binary addition and fail on non-reversed output format. We are not sure whether there exist n-RASP-L solutions for multiplication without reversed output, so we stick to the common setup with reversed output, especially since recent work has shown that the reversed format also makes multiplication easy to learn [2].\\n\\n[2] Positional Description Matters for Transformers Arithmetic\"}", "{\"comment\": \"I thank the authors for their rebuttal. After reading the other reviews, I still believe that the paper is interesting and maintain my score.\"}", "{\"summary\": \"This paper investigates the length generalization problem of Transformer models, which refers to the inability of the model to deal with longer samples than encountered during the training phase. While recent literature has focused on modifying the positional embeddings and the input formats, this paper proposes to use Looped Transformers, which can dynamically adjust their computation steps according to the problem length. The authors define n-RASP-L problems to figure out which problems can be solved by Looped Transformers. Then, they train the models on these tasks (parity, copy, binary addition, binary sum, binary multiplication, unique set) under a full-answer prediction setup. Empirically, the trained models could successfully length-generalize to longer lengths by appropriately adapting the number of loops at inference time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-structured and clearly written.\", \"The introduction of Looped Transformers is well-motivated and effectively argued.\", \"The results are strong and solid. They do not require the use of a scratchpad. Also, the prediction is conducted using an end-to-end, full-answer prediction setup, which is a more general way than the conventional next-token prediction setup.\", \"The paper clearly illustrates that the model can determine the number of steps to take on its own and does not require T(n) in the test time.\"], \"weaknesses\": \"Weakness 1: Applicability Limited to n-RASP-L Tasks\\n\\n- The approach is limited to tasks that belong to n-RASP-L categories, as it requires the ground-truth number of steps in the training data.\", \"weakness_2\": [\"Insufficient Experimentation.\", \"***Effect of Curriculum Learning.*** How does the model perform without curriculum learning? Is the use of curriculum learning necessary?\", \"***Tolerance to Step Counts.*** I am curious whether this method will still perform well with different choices of T(n). For example, for tasks like parity, would the model maintain its performance if T(n) were set to n+1 rather than n? What about 2n instead of n? This question stems from the possibility that there might be more efficient solutions to n-RASP-L problems than human-designed ones, which could work with fewer steps. Testing whether the model is robust under overestimated T(n) values could help verify the robustness of this approach.\", \"Overall, the paper requires more ablation studies.\"], \"questions\": \"Q1. In Figure 5, why do some tasks perform well even when exceeding the step count, while others degrade immediately? For instance, the performance of the parity task and the binary sum task immediately drops when executed with additional steps, whereas the addition, multiplication, and copy tasks retain accuracy to some extent.\\n- Particularly for the copy task, the selected step count is significantly higher than the actual number of steps required, which seems unusual to me.\\n\\nQ2. Are there any tasks whose T(n) is nonlinear (e.g. sqrt(n), n^2) to the length of the input sequence? It would be interesting to see experimental results for such tasks.\\n\\nQ3. Why is the output reversed for binary multiplication (but not for binary addition)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper examines how looped transformers perform in terms of length generalization. The focus is on n-RASP-L problems, which are problems that can be tackled using a loop of a single RASP-L program. The concept is that Transformers can learn steps that are independent of length, employing a flexible number of iterations in the looped transformer to achieve length generalization. The authors first demonstrate that n-digit addition, n-bit parity, and copying n symbols can be addressed with n-RASP-L solutions. They then reveal that when utilizing the looped transformer with adaptive stopping time, the results exhibit significantly stronger length generalization compared to next token prediction (NTP) and other methods like using pause tokens or NTP-loop with a fixed stopping time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Overall, I really liked the paper, I think that using a looped transformer to achieve length generalization is an interesting idea that was not studied in the past to my knowledge. This paper complements all the other techniques (universal transformers, different types of position emebedding, etc.) that were used in the past for length generalization The paper is well-written and well-explained. This is why I advocate for acceptance of this paper.\", \"weaknesses\": [\"I would like to raise the following weaknesses/questions regarding this paper:\", \"**Lack of other baselines**: What would happen if you have a very deep universal transformer? Universal transformers also have shared parameters and looks equivalent to the loop transformer. The depth may play the role of the number of loops. Would this be equivalent to the fixed loop NTP? It would be interesting to run the same experiments with a universal transformer.\", \"**Comparison with other methods**: Where would you position the looped transformers in the list of all the tricks for length generalization? Are the effects similar or complementary to change of the input (index hinting, reverse order of operands, etc.) ? Changes of positional encoding? Chain of Thought? It would be interesting to understand this by making combinations of the tricks with looped transformers with other tricks and analyze the performance differences.\", \"What is the depth of the encoder block in the loop transformer? I think this information is important to put in the main paper.\", \"**Adaptive inference time**: I think one weak point of the method is actually coming up with an adaptive inference time. The methods that are proposed are nice but may look a bit hacky. Do you think one could learn this adaptive inference time?\", \"In Figure 2, which adaptive inference time method is used for FAP-Loop-Adaptive?\", \"Lastly, this is a wild question: have you tried your method on problems where there is no n-RASP-L solutions? Would it still work better than just doing NTP?\"], \"questions\": \"I listed my questions in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional Rebuttal\", \"comment\": \"Thank you for the careful review and the additional response! Sorry for the delay in updating the draft. We uploaded a new version of the revised draft with changes marked in blue.\\n\\n**Definition of n-RASP-L**\\n\\nWe updated the definition of n-RASP-L in Section 3, including the pre/post-processing steps as we discussed before. \\n\\n**Prior knowledge of the task structure & experimental confusions**\\n\\n**1. \\u201cI wonder whether the authors will also present the experimental results for FAP + fixed number of loops\\u201d**\\n\\nThank you for the suggestions! We are currently running experiments for FAP + a fixed number of loops. Due to time constraints, it might not be available before the end of the discussion period, so we are running with a smaller scale for now. We believe that we can give an additional update within the next few days before the discussion period ends. (And later, we will be able to run the full-blown experiments.)\\n\\n**2. \\u201cThe exact implementation of the n-RASP-L program seems necessary. Can $T(n)$ still be a weaker form of side information?\\u201d (paraphrased)**\\n\\nThis is a great question! We believe that it\\u2019s still (slightly) weaker side information. This is because it might be possible to know the number of steps without knowing the exact algorithm. For instance, this can happen when the upper/lower bounds of computational complexity are known. Because of this, one can also simply try multiple $T(n)$ candidates (say log, linear, quadratic, \\u2026) and choose the one that performs the best on (out-of-distribution test length) validation. \\n\\n**3. \\u201cAlthough I understand that it is impossible to run all experiments in an adaptive inference setup, I\\u2019ll be happy if I can see some initial results (at least for one task). Does it show a similar performance as in the Oracle setup?\\u201d**\\n\\nAs you suggested, we ran experiments with our adaptive stopping criterion applied and presented the result in Figure 9, Appendix D as suggested. (We are happy to move this to the main body if the reviewers think that\\u2019s a better idea \\u2013 for now, we couldn\\u2019t do so due to the page limit.)\\n\\n**4. \\u201cExtension of n-RASP-L to support multiple loops\\u201d**\\n\\nThank you for sharing the great idea with us. We did not have time to run additional experiments for this, but we agree with your idea that allowing for multiple loops in our framework can handle a much larger class of tasks including a more general length generalizable multiplication. We revised the last section (limitation & conclusion) with an additional future work direction based on your suggestion.\"}", "{\"comment\": \"Thank you for the response, I have read it and the other reviews carefully. I have a couple of follow up remarks.\\n\\n1. The response has not addressed my concern regarding the necessity of knowing the stopping time, especially during inference (see question #1 in the original review). In particular, when using this stopping criterion it is not clear from the paper nor the response how the results compare to the ones reported in Figure 4, in which access to the ground truth number of steps is assumed. If indeed, as claimed in Section 6.4, the stopping criterion chooses the correct step near-perfectly, then why not report the full results (i.e. results analogous to Figure 4) when using this criterion? I believe it is worth being more transparent here. To be clear, I do not believe that if the stopping criterion is suboptimal, then this is a blocker for publication. Rather, more concerning is the opaqueness regarding to how well it works.\\n\\n2. I would recommend the authors to specify that the quantities in Figure 5 are means across the test examples in the main text, as opposed to an appendix. What is still not clear to me though is what does the vertical line marking the chosen step correspond to in Figure 5? Isn't the stopping criterion different for each example, or is it chosen once for the whole test set?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
2ea5TNVR0c
Advancing LLM Reasoning Generalists with Preference Trees
[ "Lifan Yuan", "Ganqu Cui", "Hanbin Wang", "Ning Ding", "Xingyao Wang", "Boji Shan", "Zeyuan Liu", "Jia Deng", "Huimin Chen", "Ruobing Xie", "Yankai Lin", "Zhenghao Liu", "Bowen Zhou", "Hao Peng", "Zhiyuan Liu", "Maosong Sun" ]
We introduce EURUS, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B, Llama-3-8B, and Mixtral-8x22B, EURUS models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, EURUX-8X22B outperforms GPT-3.5 Turbo in reasoning through a comprehensive benchmarking across 12 test sets covering five tasks. The strong performance of EURUS can be primarily attributed to ULTRAINTERACT, our newly-curated large-scale, high-quality training data dataset specifically designed for complex reasoning tasks. ULTRAINTERACT can be used in both supervised fine-tuning, preference learning, and reward modeling. It pairs each instruction with a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise positive and negative responses to facilitate preference learning. ULTRAINTERACT allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks compared to their effectiveness in general conversations. The hypothesis is that in reasoning tasks, the space of correct answers is much smaller than that of incorrect ones, so it is necessary to explicitly increase the reward of chosen data. Therefore, in addition to increasing the reward margin as many preference learning algorithms do, the absolute values of positive responses’ rewards should be positive and may serve as a proxy for performance. Inspired by this, we derive a novel reward modeling objective and empirically that it leads to a stable reward modeling curve and better performance. Together with ULTRAINTERACT, we obtain a strong reward model.
[ "Reasoning", "Alignment", "Data" ]
Accept (Poster)
https://openreview.net/pdf?id=2ea5TNVR0c
https://openreview.net/forum?id=2ea5TNVR0c
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ypeNK9tpFN", "y8rfMxtsF8", "wZuey5nZVM", "uqTBit9CAo", "ljfziG1IvC", "jFPQokGPMk", "ghwPNrfH25", "cL1FWVER4s", "aKm5Yutv5N", "YTxHlyPAhe", "Y94TRwRhm7", "XuVxVJIXT5", "WEmuURzUvz", "Sz5zhXtN5m", "RjU6AHYTLq", "QOTqjihptw", "PI7SN2lsY2", "DWpD5yKT6y", "DSaOq8Zmac", "AR2JcLZC3q", "970fgrslJY" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732203710540, 1732203188644, 1732394826241, 1733188375784, 1730690878068, 1730670555093, 1734494643673, 1737523921692, 1732548203152, 1732518018701, 1732202244709, 1732549543013, 1732253539104, 1732212024085, 1732203519258, 1732203253352, 1732203444213, 1732220695994, 1731428929167, 1732202369929, 1730959877346 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_EWRx" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_EWRx" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_rVp1" ], [ "ICLR.cc/2025/Conference/Submission8627/Area_Chair_J15Y" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_xgCx" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_rmjd" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_xgCx" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_rmjd" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_xgCx" ], [ "ICLR.cc/2025/Conference/Submission8627/Authors" ], [ "ICLR.cc/2025/Conference/Submission8627/Reviewer_rmjd" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer rVp1\", \"comment\": \"> Q1. While the authors acknowledge the use of proprietary GPT models in data synthesis, they do not thoroughly analyze the limitations of relying on these models. It would be helpful to discuss the potential biases introduced by GPT models and explore alternative approaches for data generation that rely solely on open-source models. Though, it's worth noting that they attempt to address this by creating ULTRAINTERACT-v2 using only open-source models, which shows promising results.\", \"a1\": \"Thanks for your suggestion. Major concerns of using proprietary models lie in the non-permissive license and the financial cost of calling APIs. Using open-source models can help address both issues.We acknowledge that certain biases might exist. However, for reasoning tasks where there are absolute right and wrong, we believe such biases are less severe than general conversations where preferences can be subjective and ambiguous. Rather, by mixing multiple open-source models, we can introduce more diverse patterns and intervene potential spurious correlations, which may help mitigate such biases.\\n\\n> Q2. In the paper, a few preference learning algorithms, since the preference pairs are collected in the ULTRAINTERACT, not running RL with the data seems like a big miss.\", \"a2\": \"We considered only preference learning because of its simplicity. It\\u2019s known that PPO models are hard to train, so it will introduce many confounders to track the effect of our data. However, following your suggestion, we are implementing PPO experiments in this discussion period but it takes time. We will update results later once it is finished.\\n\\n> Q3. In line 90-91, the statement is unclear to me. In 'a higher final reward often indicates a better reasoning capability', whose reasoning capability? Can you elaborate a bit more?\", \"a3\": \"Sorry for the confusion. By better reasoning capability we mean higher benchmark results as indicated in Table 3. We have clarified the description.\\n\\n> Q4. About the result remove in Table 3 due to data contamination. For some of the model has data contamination issue, the table suggests the TheoryQA is leaked, what about the rest dataset? If the rest doesn't has data contamination issue, should the result be compared? Without TheoryQA number, OpenChat seems like still a strong candidate.\", \"a4\": \"Thanks for pointing this out. We can only confirm the contamination of TheoremQA. Even if we simply remove the number of TheoremQA, Eurus-7B-SFT (48.9), Eurus-7B-KTO (51.4), Eurus-7B-NCA (50.6) still outperform OpenChat (48.7).\"}", "{\"title\": \"Response to Reviewer rmjd (1)\", \"comment\": \"Thank you for your constructive comments. We will try our best to address your concerns.\\n\\n> Q1: I agree that providing trajectories to guide model improvements is a potential approach. However, during the training process, I believe that the vertical improvement information, sequential refinement across turns, may not be effectively learned. This is because current preference algorithms primarily focus on horizontal comparisons, assessing responses within the same turn.\", \"a1\": \"Thanks for your comments. We try to address your concern by emphasizing some empirical results.\\n\\n1. To further demonstrate the effectiveness of the tree structure, we trained Llama-3-Eurus-8B-SFT on single-turn pairwise data, namely decomposing a multi-turn tree into multiple single turn pairs. Results can be found here. We find that compared to training on single-turn pairs, training on multi-turn trees enjoys huge benefits on multi-turn interaction ability and slightly improves the overall performance.\\n\\n| Model | Coding | | | Math | | | | | Reasoning | Ins-Following | Multi-Turn | | Avg. |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | HumanEval | MBPP | LeetCode | GSMPLUS | MATH | TheoremQA | SVAMP | ASDiv | BBH (CoT) | IFEval | Code | Math | |\\n| Llama-3-Eurus-8B-SFT | 51.2 | 57.9 | 17.2 | 50.7 | 32.0 | 21.3 | 82.2 | 83.7 | 72.4 | 47.1 | 18.4 | 24.5 | 46.6 |\\n| \\\\+ KTO | 51.8 | 58.1 | 15.6 | 54.8 | 34.2 | 24.9 | 80.1 | 86.7 | 71.7 | 50.6 | 26.5 | 37.4 | 49.4 |\\n| \\\\+ KTO (single-turn) | 53.7 | 59.1 | 14.4 | 54.8 | 30.7 | 23.1 | 77.8 | 86.2 | 72.1 | 49.9 | 22.8 | 33.0 | 48.1 |\\n| \\\\+ NCA | 50.6 | 60.4 | 15.6 | 55.2 | 34.8 | 25.4 | 79.9 | 87.5 | 71.7 | 56.2 | 21.3 | 36.3 | 49.6 |\\n| \\\\+ NCA (single-turn) | 53.7 | 55.9 | 16.1 | 55.4 | 30.5 | 25.4 | 79.3 | 87.5 | 72.2 | 54.2 | 17.7 | 35.5 | 48.6 |\\n\\n2. We mainly explored preference learning in this work. However, even if preference learning cannot effectively learn from the sequential refinement, this dataset may also facilitate other algorithms that target improving the refinement ability. For example, our data offers an opportunity to implement other algorithms such as SELF-CORRECTION [1], which trains a model to map an incorrect response to a correct one. One can implement this by adopting our incorrect response in previous turns and the refined correct response in the following turns as the training data pairs.\\u00a0\\n\\n[1] Generating Sequences by Learning to Self-Correct. Welleck et al. 2022.\"}", "{\"title\": \"Reponse to follow-up comments of Reviewer xgCx\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your response! Following your suggestion, we present the ablation study on prefernce data mixture as follows:\\n\\n| Model | Coding | Math | BBH | IFEval | Multi-Turn | Avg. |\\n| ------------------------------------ | ------ | ----- | ----- | ------ | ---------- | ----- |\\n| Llama-3-Eurus-8B-SFT | 42.11 | 53.97 | 72.40 | 47.10 | 21.45 | 46.60 |\\n| +KTO (UltraFeedback + UltraInteract) | 41.84 | 56.14 | 71.70 | 50.60 | 31.92 | 49.40 |\\n| +KTO (Only UltraFeedback) | 44.60 | 56.26 | 72.00 | 50.10 | 24.03 | 48.80 |\\n| +KTO (Only UltraInteract) | 40.70 | 55.86 | 71.70 | 50.60 | 34.49 | 49.40 |\\n\\n| Model | MT-Bench |\\n| ------------------------------------ | -------- |\\n| Llama-3-Eurus-8B-SFT | 6.8 |\\n| +KTO (UltraFeedback + UltraInteract) | 7.3 |\\n| +KTO (Only UltraFeedback) | 7.5 |\\n| +KTO (Only UltraInteract) | 7.2 |\\n\\nFrom the results, we see that training only on UltraInteract leads to higher overall reasoning performances. Looking deeper, these improvements are mainly credited to the multi-turn interaction ability, which demonstrates the superiority of the tree structure of our data. However, we also observe a lower MT-Bench score compared to training solely on UltraFeedback. Nevertheless, this can be mitigated without hurting reasoning performances by mixing these two datasets together, which indicates that our data is compatible with other datasets, consistent with our conclusions on reward modeling.\"}", "{\"comment\": \"Thanks for the detailed response. I will maintain my score. Sorry for the missing reference.\\n\\n[1] Iterative reasoning preference optimization, Pang et al, 2024\"}", "{\"summary\": \"This paper has several contributions. First, it builds a dataset on reasoning tasks that contain both correct and wrong steps. Second, it proposed a modified loss function for training a reward model that is better suited for reasoning tasks. Lastly, it trains a set of LLMs using the proposed dataset that have competitive performance on reasoning tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper is advancing open science by making the training data and model checkpoints public. Given the significant improvements in reasoning tasks, it is likely that these assets will be helpful to other researchers.\", \"The paper also proposes a new way of training reward models that is better suited to reasoning tasks. In addition, the training datasets have multi-step attempts that contain mistakes and tool usage, which is unlike other preference datasets.\", \"The experimental section is detailed and provides many interesting results, such as comparing three different preference optimization methods. There are many ablations provided, and evaluations are done on many tasks, which makes the results more convincing.\"], \"weaknesses\": [\"The heavy reliance on GPT responses makes me feel like this is more of distilling GPT. Also, it is not clear what are the usage limitations that will arise from using a proprietary model like GPT4. As shown in tab7, this was crucial for obtaining good performance.\", \"The problem of the likelihood of chosen responses going down in reasoning is a known issue and studied prior work [1], which is not cited in the paper (the related work is quite short)\", \"The term \\u201cmulti-turn action\\u201d was confusing. It seems that all the tasks require only a single correct response. None of the tasks is truly multi-turn where the model has to do multiple actions. From reading the paper, it seems the term \\u201cmulti-turn\\u201d is used to describe a process where a model can try again if it makes a mistake. Actually, it is not clear how this process works, especially when training the model and evaluating it. Also, the dataset contains observations and judgements, but are they also used when training the actor? What about the python executions? There is very little detail on how the agent is trained on these and evaluated.\", \"As mentioned in the previous point, there are certain steps that are not well explained. See the questions for examples. Given that the motivation is to advance open-source LLMs, I think it is important to describe the process of training in more details.\"], \"questions\": [\"Is the reward model used in training the actor model?\", \"L148 \\u201cthe actor model first decomposes the input problem into several problems\\u201d How is this done?\", \"L181 \\u201cwe adopt more diverse reasoning patterns\\u201d How exactly is this done?\", \"Is python the only tool used?\", \"Typo in L263 reward notation\", \"What is \\\"prompt level loose score\\\" in L282\", \"I think the tables have too many numbers in them (tab3 has at least a hundred) and not sure if anyone will look at all of them. Instead, average scores can be put there and the detailed table can move to the appendix. This is only a suggestion though.\", \"Which GPT-4 model is used? I think there are multiple versions.\", \"How is the reward model performance compared to ArmoRM?\", \"How is GPT-4 used as a reward model in tab4?\", \"Why does self-consistency drop in fig1 left?\", \"How is MCTS decoding done exactly in sec5.2?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents EURUS, a new collection of large language models (LLMs) and a reward model (RM) designed to enhance reasoning capabilities. The authors develop ULTRAINTERACT, a dataset designed for complex reasoning tasks with 12 datasets spanning math, coding, and logical reasoning problems. ULTRAINTERACT employs preference trees, which pair each instruction with reasoning chains, interaction trajectories with feedback, and pairwise responses for preference learning.\\n\\nThe authors use ULTRAINTERACT to fine-tune several open-source LLMs, including Mistral-7B, Llama-3, and Mixtral-8x22B. They show that EURUS models achieve top performance on multiple reasoning benchmarks, including LeetCode and TheoremQA. EURUS-7B and LLAMA-3-EURUS-8B even surpass baselines 5 times their size, while EURUX-8X22B outperforms GPT-3.5 Turbo on 12 test sets.\\n\\nThey also create a reward model, EURUS-RM-7B, that excels on several reward modeling benchmarks and introduce a new reward modeling objective that merges the Bradley-Terry objective with an additional term to directly adjust the reward of chosen and rejected action\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces a novel dataset, ULTRAINTERACT, designed for complex reasoning tasks. It comprises instructions paired with preference trees, featuring reasoning chains, multi-turn interaction trajectories with feedback, and pairwise positive and negative responses. ULTRAINTERACT emphasizes complex and diverse reasoning patterns, encouraging models to break down problems into sub-problems and use tools to solve them. This dataset is a valuable contribution and can be useful for future research on LLM reasoning.\\n\\n2. The proposed EURUS models achieve state-of-the-art performance on several reasoning benchmarks, demonstrating the effectiveness of ULTRAINTERACT and the proposed training methods. Notably, the smaller EURUS models outperform much larger baselines, showcasing their efficiency.\\n\\n3. The paper provides valuable insights into preference learning for reasoning tasks. The analysis of reward patterns during training leads to a new reward modeling objective that improves performance, particularly on challenging problems. The authors highlight the importance of the absolute value of rewards in preference learning for reasoning, as opposed to just focusing on relative differences as in general conversation settings.\", \"weaknesses\": \"1. While the authors acknowledge the use of proprietary GPT models in data synthesis, they do not thoroughly analyze the limitations of relying on these models. It would be helpful to discuss the potential biases introduced by GPT models and explore alternative approaches for data generation that rely solely on open-source models. Though, it's worth noting that they attempt to address this by creating ULTRAINTERACT-v2 using only open-source models, which shows promising results.\\n\\n2. In the paper, a few preference learning algorithms, since the preference pairs are collected in the ULTRAINTERACT, not running RL with the data seems like a big miss.\", \"questions\": \"1. In line 90-91, the statement is unclear to me. In 'a higher final reward often indicates a better reasoning capability', whose reasoning capability? Can you elaborate a bit more?\\n\\n2. About the result remove in Table 3 due to data contamination. For some of the model has data contamination issue, the table suggests the TheoryQA is leaked, what about the rest dataset? If the rest doesn't has data contamination issue, should the result be compared? Without TheoryQA number, OpenChat seems like still a strong candidate.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a suite of large language models that achieve state-of-the-art performance in reasoning tasks. Key contributions include the introduction of a novel dataset featuring multi-turn preference trees designed for reasoning, and a reward modeling objective tailored to reasoning tasks. The paper demonstrates competitive performance across a variety of benchmarks, with EURUS models outperforming larger baselines and even proprietary GPT-3.5 Turbo in reasoning.\\n\\nThe reviewers agree that the contribution in this paper, despite being largely empirical, is valuable. Since open model and data is a major contribution of the proposed work, the authors should consider disclose as much details as possible of their experimental process upon acceptance of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised various concerns including motivation of the new reward model design, the lack of cohesiveness of the three distinct pieces of work, heavy reliance on distilling from GPT, and writing clarities. These concerns are mostly addressed by author response.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks for your response! Following your suggestion, we revised the paper and supplemented the results in Appendix H. We also referenced them in line 329. \\n\\nIf you have any further questions, please let us know and we will do our best to address your concerns. Thanks!\"}", "{\"comment\": \"Great thanks for running those additional ablation studies, I think together with th studies you performed below on training on single-turn vs multi-turn interactions these make the contribution of the ultra-interact preference tree more convincing. Would it be possible to reflect this in the paper a bit more, for instance by adding these additional ablation studies to the appendix and referencing these in the claims made in 327-329. This would help to emphasise the contribution of ultra-interact as a multi-turn interaction dataset and improve the soundness of the claims made in your evaluation section in my view. I understand the space limitations however.\"}", "{\"title\": \"Response to Reviewer xgCx\", \"comment\": \"Thank you for your positive feedback and diligent efforts in reviewing our paper. We appreciate your comments and would like to share our responses.\\n\\n\\n> Q1: As a minor point the spelling and grammar could be improved; for instance \\\"Is proprietary models\\\" (line 470) should be \\\"Are proprietary models\\\", and more generally things like \\\"Perference Learning\\\" (line 247). More substantially some of the references point to the wrong sections (e.g. the reference to section 5 (replaced with 6) (line 255) -- in this case harming readability (hence my review of the presentation...)\", \"a1\": \"Thanks for pointing out these typos. We have fixed them.\\n\\n\\n> Q2: I feel that the modification to the reward model could be better motivated in section 3, for instance by referencing other works that maximise a similar margin loss. At the least it should be explicitly linked to the discussion in section 4.2 that actually seems to motivate it. This might be aided to seperating out the reward modelling section from the finetuning section? Since it seems to follow on more logically from the finetuning investigations\", \"a2\": \"Thanks for your comment and sorry that we mistakenly linked to section 6 in section 3, instead of referring to section 4.2. We have corrected the reference and provided more explanations to motivate the objective.\\n\\n\\n> Q3: Section 6.1 doesn't really address the section title properly. While the performance itself does suggest that just training on open source data is sufficient (ignoring the instruction following benchmark); the body of the section just talks about mixing in this additional V2 data, and the ensuing performance gains. It would suffice to add a brief comment at the end of line 483 explaining the results of finetuning just on V2\", \"a3\": \"Thanks for your suggestion. We have added a description on results of training with merely V2: Compared to V1, training on V2 improves model performance on both SFT and preference learning stage, and particularly, the Llama-3-Eurus-8B-KTO (V2) successfully surpasses the official Llama-3-8b-Instruct model, which it previously failed to.\\n\\n\\n> Q4: As a general comment I feel that this work feels like three distinct pieces of work rather than a single cohesive one. I.e. the proposal of a new training dataset; a set of models finetuned on this dataset alongside others; and more separetely a reward model trained on a combination of dataset including the one proposed here. One way of mitigating this would be to focus on the contribution of the dataset to the reward modelling phase (using the data from the ablation studies).\", \"a4\": \"Thanks for your comments. Our ultimate goal is to build strong open-source reasoning generalists, which involves a lot of aspects. We aimed to make this work solid in execution so we compile all the artifacts, results and insights together. However, following your suggestion, we have emphasized the contribution of data in this recipe and highlighted our ablation results which can support this claim. To make the flow more coherent, we add a quick road map to the end of introduction:\\n\\nWe compiled this work by first synthesizing both SFT and preference datasets to improve the reasoning ability of open-source models (Section 2). We examined the effectiveness of our datasets by training both policy and reward models (Section 3). We evaluated the performance of policy models in Section 4, during which we observed a correlation between reward patterns and benchmark performances. Next, we then evaluated our reward models and validated that our insights on the reward-performance correlation can be converted into gains in model training (Section 5). Finally, we ablate some factors in our dataset construction in Section 6.\\n\\n\\n> Q5: Section 2. is a little bit confusing and could be rephrased to make it a little but clearer that it is all just an example.\", \"a5\": \"Thanks for your suggestion. We have emphasized this in the updated version.\"}", "{\"comment\": \"Thank you for patiently addressing my questions and providing the additional experiments. Your responses have clarified some of my concerns, and I will increase my score. I believe this dataset will be valuable for future research.\"}", "{\"comment\": \"Don't worry too much about implementing the PPO experiments, I appreciate the difficulty of running those experiments.\\nThanks for updating the paper, it feels much more readable to me now, although there are still a few mistakes I'm sure youll find in the editing process\\n\\nAs a quick question, did you have any ablation results of data mixtures used in preference learning? It would be nice to see the effect of using ultrainteract vs ultrafeedback here in particular, though I appreciate that we can expect this to be similar to the results on reward modelling so not necessary. But regardless would help to validate importance of the dataset, especially since the existing ablations are done on the SFT data which only uses the final responses from the preference trees if I am correct?\"}", "{\"comment\": \"I really appreciate for providing additional experiments to address my concerns. I might have misunderstood some parts, so I would like to ask what is 'decomposing a multi-turn tree into multiple single turn pairs'. Based on my understanding, even though the dataset follows a tree structure, when the data is fed into the preference algorithm for training, it should inherently be multiple single turn pairs. For example, if we use the tree in Figure 2 (Right) to train the model, it's inherent to be three pairs. Therefore, I would like to ask how this experiment was conducted and how the tree structure was preserved during training. Thanks again for authors' patience and thoughtful responses.\"}", "{\"title\": \"Response to Reviewer EWRx (2)\", \"comment\": \"> Q7. L181 \\u201cwe adopt more diverse reasoning patterns\\u201d How exactly is this done?\", \"a7\": \"We have already described the process in Line 150-152: To promote solution diversity, the actor model randomly samples one reasoning schema in the form of either CoT (Wei et al., 2022) or modularization programming (Qian et al., 2023; Yuan et al., 2023).\\n\\n> Q8. Is python the only tool used?\", \"a8\": \"Yes. However,\\u00a0 the Python code interpreter in our case is equipped with external libraries/tools like wiki search. As explored in many recent works [1, 2, 3], the Python interpreter provides a general-purpose and broadly applicable environment for various tasks and applications. For example, a Python interpreter enables LLMs to not only defer calculations but also call tools to search for information or perform complex real-world agent tasks, such as sending emails or online shopping [1, 4].\\n\\n[1] Executable Code Actions Elicit Better LLM Agents.\\n\\n[2] FireAct: Toward Language Agent Fine-tuning.\\n\\n[3] Taskweaver: A Code-First Agent Framework.\\n\\n[4] Tool Learning with Foundation Models.\\n\\n> Q9. Typo in L263 reward notation\", \"a9\": \"Thanks for pointing this out. We have corrected the issue.\\n\\n> Q10. What is \\\"prompt level loose score\\\" in L282\", \"a10\": \"IFEval provides four metrics in their codebase: prompt-level strict score, prompt-level loose score, instruction-level strict score, instruction-level loose score. We directly follow the evaluation setup in ??? Detailed discussion on this metric is beyond the scope of this work.\\n\\n> Q11. I think the tables have too many numbers in them (tab3 has at least a hundred) and not sure if anyone will look at all of them. Instead, average scores can be put there and the detailed table can move to the appendix. This is only a suggestion though.\", \"a11\": \"Thanks for your suggestion. We will highlight the average scores, but we think providing a detailed breakdown of the model performance may help readers understand what is going on so we tend to keep them.\\n\\n> Q12. Which GPT-4 model is used? I think there are multiple versions.\", \"a12\": \"We always used the latest version of GPT-4. Specifically, we started with gpt-4-0613 to provide feedback, and then switched to gpt-4-1106-preview and gpt-4-0125-preview.\\n\\n> Q13. How is the reward model performance compared to ArmoRM?\", \"a13\": \"On RewardBench, ArmoRM achieves a higher score than EurusRM (89.0 and 82.4). However, It is worth noting that ArmoRM uses a stronger base model (LLaMA-3-8B) and a much larger training dataset. Specifically, we use 803K pairs while they use 587.4K for the multi-objective reward modeling and another 1004.4K for the gating layer training. Therefore, it may not be a fair baseline of our work.\\n\\n> Q14. How is GPT-4 used as a reward model in tab4?\", \"a14\": \"We directly adopt results from [1]. According to their paper, they use LLM-as-a-judge to rank two responses using the following prompts:\\n\\n```\", \"instruction\": \"${instruction}\", \"input\": \"${input}\", \"candidate_a\": \"${candidate1}\", \"candidate_b\": \"${candidate2}\\n\\n\\n\\nGiven the instruction and input above, please compare the two candidates. You only have 4 choices to output:\\n\\n\\n\\nIf you think A is better, please output: 1. A is better\\n\\n\\nIf you think B is better, please output: 2. B is better\\n\\n\\n\\nIf you think both are good enough correctly give the answer, please output: 3. Same good\\u00a0\\n\\n\\n\\nIf you think both are bad and do not follow the instruction, please output: 4. Same bad\\u00a0\\n\\n\\n\\nDo not output anything else except the 4 choices above.\", \"output_your_choice_below\": \"```\\n\\n[1] LLM-BLENDER: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion. Jiang et al. ACL 2023.\\n\\n> Q15. Why does self-consistency drop in fig1 left?\", \"a15\": \"Increasing N may include some low-quality data into consideration and dilute the proportion of correct answers, which may distract self-consistency and finally lead to a different major voted answer.\\n\\n> Q16. How is MCTS decoding done exactly in sec5.2?\", \"a16\": \"We implement the vanilla MCTS decoding setup using LLMReasoner [1].\\n\\n[1] LLM Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models. Hao et al. COLM 2024.\"}", "{\"title\": \"Response to Reviewer rmjd (2)\", \"comment\": \"> Q2. The reasons behind the better performance of EURES are hard to track and some studies will be necessary if authors want to claim that the proposed dataset is the reason. Because the baselines has different scales and training method, for example, their training dataset could have different size and their preference algorithm could be different, etc.. Plus if EURES can beat some larger model, the claim that the dataset is better will be more convincing.\", \"a2\": \"Thanks for your comments. However, we argue that despite the different model and dataset sizes, Eurus models actually beat baselines with larger backbone size, larger datasets, or more advanced algorithms, which should make our claims even more convincing.\\n\\nFirstly, we highlight that the size of our models and baselines are comparable. Most baseline models in ~7B category are Mistral-7B based, the same as our Eurus-7B. The exceptions are Magicoder-S-DS-6.7B and OpenCI-DS-6.7B, which are trained on DeepSeek-Coder-6.7B, a stronger base model on reasoning and especially on coding. However, our Eurus-7B and Llama-3-Eurus-8B outperforms all of them and even surpasses much larger ~40B models (Mixtral-8x7B-Instruct and Deepseek-Coder-33B-Ins).\\n\\nSecondly, most of the baselines did not open their dataset, so compared to those, our datasets and mixtures are transparent, demonstrating a huge advantage. Compared to the remaining models with open datasets, we trained Eurus-7B-SFT with 399K data in total, and for preference learning we used 560K pairwise data. However, baselines consume more data than us in terms of SFT, and do not open their recipe for preference learning:\\n\\n| **Model** | **SFT Data Size** | **Preference Learning Data Size** |\\n| --- | --- | --- |\\n| CodeLLaMA-70B-Instruct | Non-transparent | Non-transparent |\\n| DeepSeek-LM-67B-Chat | 1.5M | Non-transparent |\\n| QWen1.5-72B-Chat | Non-transparent | Non-transparent |\\n| OpenCI-CL-70B | 68K Sample, 192K Turn | - |\\n| OpenMath-CL-70B | 1.8M | - |\\n| WizardLM-2-8x22B | Non-transparent | Non-transparent |\\n| Mixtral-8x22B-Instruct-v0.1 | Non-transparent | Non-transparent |\\n| Ours | 399K | 560K |\\n\\nAlso, since our data mixture for SFT consists of UltraInteract and existing data, we conducted an ablation study in Section 6.2 in which we trained models either only on our data or only on existing open-source data. Results in Table 7 can firmly support the claim that the performance boost on reasoning is due to our proposed dataset.\\n\\nLastly, it\\u2019s not clear what methods are used to train baseline models, but we may suppose that all models have at least gone through SFT as common practice, and any further operations may be intended to push the limit of their SFT models. Therefore, comparing our SFT models to baselines should be a fair setup to them but may not be fair to us. Nevertheless, our SFT models can already outperform baselines.\\nTherefore, we consider that our comparisons are convincing and we can safely claim that our dataset is the major contributor of the superior performance.\\n\\n> Q3. There may be some factors contributing to the value differences observed in reward modeling, especially given the varying formulations of alignment methods. It would be valuable for the authors to offer insights into the potential reasons for these differences in the value of rewards.\\n\\nA3. The major difference shown in Figure 6 is that, the rewards of chosen data and margins increase regardless of $\\\\mathcal{L}_{\\\\text{DR}}$, but the rewards of rejected data decrease to be negative with regularization. This may be attributed to the nature of $\\\\mathcal{L}_{\\\\text{BT}}$, which only optimizes the relative margin between rewards if chosen data and rejected data, while not explicitly forcing the rewards of chosen data to be positive and that of rejected data to be negative. Therefore, the absolute value of rewards are not guaranteed.\\n\\n> Q4. If the model is unable to effectively learn from vertical improvements, then it raises the question of why we want to synthesize the dataset with tree structure and why we are providing trajectories to the model.\\n\\nA4. Please see response to weakness 2.\"}", "{\"title\": \"Response to Reviewer EWRx\", \"comment\": \"Thanks for your positive feedback!\\n\\n> Q1. The heavy reliance on GPT responses makes me feel like this is more of distilling GPT. Also, it is not clear what are the usage limitations that will arise from using a proprietary model like GPT4. As shown in tab7, this was crucial for obtaining good performance.\", \"a1\": \"Thanks for your comments but we think there could be some misunderstanding. It is Table 6 that aims to ablate the effect of GPT rather than Table 7. In Table 6, we show that models trained on UltraInteract V2, the version constructed using only open-source models, can outperform the models trained on V1, the GPT generated version. This gives us a clue that we can construct high-quality data without GPT. We suppose the \\u201copen-source only\\u201d row in Table 7 distracts you, which indeed means training SFT models with only the open-source data (UltraChat, ShareGPT, and OpenOrca) without UltraInteract. Experiments in table 7 demonstrate that the superior performance of our SFT model is credited to our carefully generated data and its rationales. We have updated Table 7 and refer to the setup as \\u201cexisting data only\\u201d for clarification.\\n\\nRegarding the usage limitations of proprietary models, major concern may lie in the non-permissive license and the financial cost of calling APIs. Using open-source models can help address both issues.\\n\\n> Q2. The problem of the likelihood of chosen responses going down in reasoning is a known issue and studied prior work [1], which is not cited in the paper (the related work is quite short)\", \"a2\": \"Thanks for pointing it out. Our related work is short because we are already short of space, but we will add the reference you list. It seems the paper title is missing, would you mind specifying it?\\n\\n> Q3. The term \\u201cmulti-turn action\\u201d was confusing. It seems that all the tasks require only a single correct response. None of the tasks is truly multi-turn where the model has to do multiple actions. From reading the paper, it seems the term \\u201cmulti-turn\\u201d is used to describe a process where a model can try again if it makes a mistake. Actually, it is not clear how this process works, especially when training the model and evaluating it. Also, the dataset contains observations and judgements, but are they also used when training the actor? What about the python executions? There is very little detail on how the agent is trained on these and evaluated.\", \"a3\": \"Thanks for your comments and sorry for the confusion. We directly follow the setup in MINT [1], which measures the multi-turn correction ability of LLMs. All questions in our dataset can be answered within a single turn, but LLMs can make mistakes so they may require another turn of action to correct its previous answers. We acknowledge that there are some agentic tasks that may intrinsically require interruption of actions to wait for observations and feedback, where LLMs have to act multiple times before finally answering for one time. However, according to CodeAct [2], as long as we use code as actions, which is exactly what we did, we can design all control flows in a single code block (i.e., an action) and compress it into a single turn. All intermediate results can be automatically saved in the code variables and be forwarded for further processing. Therefore, we believe this setup can already cover all multi-turn scenarios that require multiple actions.\\n\\nDuring inference, LLMs generate <execute></execute> when they need to write code. Upon finishing generating the whole response, we extract all code between the tags and send it to a sandbox to execute, and append the returned outputs to the end of the model response as observations. However, when training actors, even though observations and feedback are also provided in the history, they will be masked and only the model generated tokens will be optimized.\\u00a0\\n\\n[1] MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback. Wang et al. ICLR 2024.\\n\\n[2] Executable Code Actions Elicit Better LLM Agents. Wang et al. ICML 2024.\\n\\n> Q4. As mentioned in the previous point, there are certain steps that are not well explained. See the questions for examples. Given that the motivation is to advance open-source LLMs, I think it is important to describe the process of training in more details.\", \"a4\": \"Thanks for your suggestions. We have provided more details in the Appendix.\\n\\n> Q5. Is the reward model used in training the actor model?\", \"a5\": \"No. During dataset construction, the actor model is fixed; After obtaining the data and subsequent reward model, we can utilize the reward model train models with PPO, but in this paper we directly conducted preference learning algorithms due to their simplicity.\\n\\n> Q6. L148 \\u201cthe actor model first decomposes the input problem into several problems\\u201d How is this done?\", \"a6\": \"Our setup is highly consistent with MINT [1]. We prompt the model to reason before writing code as actions, and think step by step to solve the problem.\"}", "{\"title\": \"Respond to follow-up comments of Reviewer rmjd\", \"comment\": \"Dear Reviewer,\\n\\nThanks for your prompt response! \\n\\nBy 'decomposing a multi-turn tree into multiple single turn pairs', we mean training models on the preference pairs without interaction history. \\n\\nMore specifically, for experiments in our paper, we adopt different strategies to utilize interaction history. For SFT, as stated in line 244-245, \\\"We find it yields better performance to discard interaction history and train only on correct leaf nodes in each tree.\\\", i.e. only training on (instruction, single-turn response). However, for preference learning, \\\"Differently from SFT, here we include all multi-turn trajectory pairs in our ULTRAINTERACT\\\", i.e., given a tree of five turns, we will have five pairs with the depth of each being 1, 2, ..., 5. The pair at the later turn can oberve the full interactions between previous responses and environment/critique which aims to rectify previous wrong nodes. Namely, we train on (instruction, chosen response at turn 1, rejected response at turn 1), (instruction + rejected response at turn 1 + observation + critique, chosen response at turn 2, rejected response at turn 2), etc. We hope that LLMs can learn from the interaction history so that they can rectify incorrect answers based on feedback during inference. We presented one case of the single-turn SFT and one case of multi-turn preference learning in Table 15 and 16 in Appendix G.2 respectively. \\n\\nIn the additional experiments, we split a tree of five turns into five single-turn examples, namely (instruction, chosen response at turn 1, rejected response at turn 1), (instruction, chosen response at turn 2, rejected response at turn 2), etc. No interaction history is presented, which is akin to our setup of SFT. Intuitively, this omits the \\\"vertical information\\\" in our trees and thus will lead to performance drop in benchmark results on MINT. Our additional results have confirmed this intuition, therefore demonstrating that LLMs can effectively learn the \\\"sequential refinement across turns\\\" from our preference trees.\\n\\nIf you have any further questions, please let us know and we will do our best to address your concerns. Thanks!\"}", "{\"summary\": \"The authors explore improving large language model reasoning through the curation of high quality training data for that reasoning.\\nThis data (UltraInteract) consists in preference trees, with nodes splitting on correct/incorrect responses; critique and refinement of rejected responses; and uses different reasoning schemas/ actor models to increase training data diversity\\nThe actor used to generate these trajectories is GPT3.5 Turbo, with GPT4 being used as a critique model with access to an interpreter/tools.\\n\\nThe authors then use this dataset (alongside others) to finetune 3 language models using the following process:\\n1. SFT over the correct actions\\n2. Preference learning over correct vs incorrect actions using off the shelf preference learning algorithms\", \"additionally_the_authors_also_use_this_to_derive_a_reward_model\": \"3. Train a reward model, adding in terms for the difference in absolute rewards to the normal Bradley Terrey reward model.\", \"in_my_view_the_key_contributions_of_this_paper_are\": [\"introduction and analysis of preference-tree based instruction following data, which is scalable and effective\", \"introduction of improved objectives for training reward models\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. With regards to soundness, I feel that the necessary experiments have been run to validate the majority of claims, especially where those claims are with regards to methodological contributions. The authors have also taken pains to remove contaminated data from their work in order to make comparisons fair and meaningful, including when reporting others' work.\\n2. The presented language models have strong performance, and the data and reward models are in and of themselves useful contributions to the research community, removing some of the limitations of scale and quality from prior works creating preference datasets and reward models\\n3. The investigation surrounding the flaws of existing preference learning models is an original contribution.\\n4. In my view the largest contribution is the rather detailed study of creating their ultra-instruct dataset albeit moreso as an engineering challenge.\\n5. The experiments are run against meaningful baselines: models of similar scale, trained on similar data in similar ways.\", \"weaknesses\": \"1. As a minor point the spelling and grammar could be improved; for instance \\\"Is proprietary models\\\" (line 470) should be \\\"Are proprietary models\\\", and more generally things like \\\"Perference Learning\\\" (line 247). More substantially some of the references point to the wrong sections (e.g. the reference to section 5 (replaced with 6) (line 255) -- in this case harming readability (hence my review of the presentation...)\\n2. I feel that the modification to the reward model could be better motivated in section 3, for instance by referencing other works that maximise a similar margin loss. At the least it should be explicitly linked to the discussion in section 4.2 that actually seems to motivate it. This might be aided to seperating out the reward modelling section from the finetuning section? Since it seems to follow on more logically from the finetuning investigations\\n3. Section 6.1 doesn't really address the section title properly. While the performance itself does suggest that just training on open source data is sufficient (ignoring the instruction following benchmark); the body of the section just talks about mixing in this additional V2 data, and the ensuing performance gains. It would suffice to add a brief comment at the end of line 483 explaining the results of finetuning just on V2\\n4. As a general comment I feel that this work feels like three distinct pieces of work rather than a single cohesive one. I.e. the proposal of a new training dataset; a set of models finetuned on this dataset alongside others; and more separetely a reward model trained on a combination of dataset including the one proposed here. One way of mitigating this would be to focus on the contribution of the dataset to the reward modelling phase (using the data from the ablation studies).\\n5. Section 2. is a little bit confusing and could be rephrased to make it a little but clearer that it is all just an example.\", \"questions\": \"1. Did you conduct any comparative investigations over general conversational preference learning using your reward modelling objective? This would help to verify your intuition that this method is effective due to the unique features of reasoning tasks\\n2. Would it be possible to use the Eurus reward model for PPO-based alignment? How would this perform in comparison to the existing finetuning methods\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer xgCx (2/2)\", \"comment\": \"> Q6: Did you conduct any comparative investigations over general conversational preference learning using your reward modelling objective? This would help to verify your intuition that this method is effective due to the unique features of reasoning tasks\", \"a6\": \"Thanks for your suggestion. We conducted an additional experiment on UltraFeedback (one pair per instruction) with $\\\\mathcal{L}_{BT}$ and\\u00a0$\\\\mathcal{L}_{DR}+\\\\mathcal{L}_{BT}$ respectively. Results in the following table show that $\\\\mathcal{L}_{DR}$ does not help improve reward model performance on general chat data, which may indicate that the absolute value of rewards are not that important as on reasoning tasks. This aligns with our intuition to only apply $\\\\mathcal{L}_{DS}$ to UltraInteract examples.\\n\\n| Loss | Chat | Chat Hard | Reasoning | Safety |\\n| --- | --- | --- | --- | --- |\\n| $\\\\mathcal{L}_{BT}$ | 94.5 | 44.1 | 56.5 | 52.9 |\\n| $\\\\mathcal{L}_{DR} + \\\\mathcal{L}_{BT}$ | 92.8 | 36.05 | 45.6 | 43.0 |\\n\\n> Q7: Would it be possible to use the Eurus reward model for PPO-based alignment? How would this perform in comparison to the existing finetuning methods\", \"a7\": \"We considered only preference learning because of its simplicity. It\\u2019s known that PPO models are hard to train, so it will introduce many confounders to track the effect of our data. However, following your suggestion, we are implementing PPO experiments in this discussion period but it takes time. We will update results later once it is finished.\"}", "{\"summary\": \"The authors emphasize the performance gap between open-source LLMs and the most advanced models, particularly in reasoning capabilities. They attribute this gap to two primary factors: (1) the lack of high-quality datasets and (2) the under-exploration of preference learning techniques. To address this gap, the authors introduce a novel dataset, ULTRAINTERACT, which features a multi-turn, tree-structured format designed to enhance reasoning abilities. Additionally, they offer new insights into preference algorithms and reward modeling. They argue that effective reward modeling should consider not only the margin between rewards but also the absolute value of the reward itself. Based on this insight, they propose a new reward model that combines two loss functions, L_{BT} and L_{DR}, demonstrating superior performance compared to existing models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Authors use a new method to synthesize a dataset for SFT and preference learning, which could potentially enhance model's reasoning abilities. The intuition behind the synthesis method is straightforward and easy to be understood. I think the dataset is cool and it could be a potential approach for model to learn how to improve the response. Plus, the insights on preference learning algorithm is interesting.\", \"weaknesses\": \"1). I agree that providing trajectories to guide model improvements is a potential approach. However, during the training process, I believe that the vertical improvement information, sequential refinement across turns, may not be effectively learned. This is because current preference algorithms primarily focus on horizontal comparisons, assessing responses within the same turn.\\n\\n2). The reasons behind the better performance of EURES are hard to track and some studies will be necessary if authors want to claim that the proposed dataset is the reason. Because the baselines has different scales and training method, for example, their training dataset could have different size and their preference algorithm could be different, etc.. Plus if EURES can beat some larger model, the claim that the dataset is better will be more convincing.\\n\\n3). There may be some factors contributing to the value differences observed in reward modeling, especially given the varying formulations of alignment methods. It would be valuable for the authors to offer insights into the potential reasons for these differences in the value of rewards.\", \"questions\": \"If the model is unable to effectively learn from vertical improvements, then it raises the question of why we want to synthesize the dataset with tree structure and why we are providing trajectories to the model.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2eFq6S35iB
HiLo: A Learning Framework for Generalized Category Discovery Robust to Domain Shifts
[ "Hongjun Wang", "Sagar Vaze", "Kai Han" ]
Generalized Category Discovery (GCD) is a challenging task in which, given a partially labelled dataset, models must categorize all unlabelled instances, regardless of whether they come from labelled categories or from new ones. In this paper, we challenge a remaining assumption in this task: that all images share the same domain. Specifically, we introduce a new task and method to handle GCD when the unlabelled data also contains images from different domains to the labelled set. Our proposed `HiLo' networks extract High-level semantic and Low-level domain features, before minimizing the mutual information between the representations. Our intuition is that the clusterings based on domain information and semantic information should be independent. We further extend our method with a specialized domain augmentation tailored for the GCD task, as well as a curriculum learning approach. Finally, we construct a benchmark from corrupted fine-grained datasets as well as a large-scale evaluation on DomainNet with real-world domain shifts, reimplementing a number of GCD baselines in this setting. We demonstrate that HiLo outperforms SoTA category discovery models by a large margin on all evaluations.
[ "Generalized Category Discovery" ]
Accept (Poster)
https://openreview.net/pdf?id=2eFq6S35iB
https://openreview.net/forum?id=2eFq6S35iB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zlz6NjBdSh", "yf0ZnapsUW", "wj1zyCvgGp", "rH3SXNsfdQ", "ph0ZyKp1Vo", "pbEhQPxULq", "odtaSzr7qa", "kX1qTuv5YC", "ipdFEqUCjA", "eO4YKr4rIC", "VD2M0A62uD", "NljP8vI2ej", "NhRE9twbgW", "KnY7vvQcN3", "JkxIPtbDU3", "Gc26gjQWcU", "A99wNkwgoj", "6O8kYNwQXN", "4LMWR4SxpP", "2eXFwg3Iak" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732515308454, 1730089876547, 1730713284469, 1732515233694, 1737523405194, 1732515516926, 1732762955141, 1732515445416, 1732672921699, 1732515219038, 1733018510778, 1733103376802, 1734533489124, 1730684786804, 1733018739647, 1732515080255, 1732515343555, 1733018492560, 1733068896609, 1730540320207 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Reviewer_jy36" ], [ "ICLR.cc/2025/Conference/Submission592/Reviewer_6eyX" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Reviewer_L4eA" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Reviewer_jy36" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Area_Chair_yqXe" ], [ "ICLR.cc/2025/Conference/Submission592/Reviewer_RNZQ" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Authors" ], [ "ICLR.cc/2025/Conference/Submission592/Reviewer_6eyX" ], [ "ICLR.cc/2025/Conference/Submission592/Reviewer_L4eA" ] ], "structured_content_str": [ "{\"comment\": \"> Features are disentangled by assuming that features from different layers represent domain and semantic information\\u2026.However, this assumption may oversimplify the complexity of feature representation in neural networks.\\n\\nOur selection of high-level and low-level feature is based on previous literature and empirical evidence. First, several works [S5][S6][S7] have demonstrated the efficacy of separating domain and semantic information and leveraging mutual information to address domain adaptation challenges, showing it is not always that complex when introducing mutual information minimization between the representations. Furthermore, the selection of layers to represent domain and semantic features is not arbitrary but based on extensive empirical investigation. As illustrated in Figure 3 of our paper, we conduct a comprehensive analysis to determine the optimal layer assignments. Our results demonstrate that attaching the domain head to earlier layers yields superior performance, corroborating the hypothesis that lower-level features are more domain-oriented. Likewise, Figure 3(b) shows that fixing the domain head to the first layer and varying the 'Deep' layer for the semantic head from the last to the fourth last layer reveals that the last layer is optimal for the semantic head. These findings substantiate the importance of domain-semantic feature disentanglement and validate our design choice of utilizing lower-level features for domain-specific information and higher-level features for semantic-specific information. In Figure 4(a), we also present the visualization by first applying PCA to the domain features and semantic features obtained through $\\\\mathcal{H}$, and then plotting the corresponding images. As can be seen, the images are naturally clustered according to their domains and semantics, demonstrating that HiLo successfully learns domain-specific and semantic-specific features from higher-level features and the shallower layers.\\n\\n[S5] Zhao, Haiteng, et al. \\\"Domain adaptation via maximizing surrogate mutual information.\\\" IJCAI. 2022.\\n\\n[S6] Park, Geon Yeong, and Sang Wan Lee. \\\"Information-theoretic regularization for multi-source domain adaptation.\\\" ICCV. 2021.\\n\\n[S7] Sharma, Yash, Sana Syed, and Donald E. Brown. \\\"Mani: Maximizing mutual information for nuclei cross-domain unsupervised segmentation.\\\" MICCAI, 2022.\\n\\n> When dealing with data with large domain differences, it is a challenge to determine the mixing proportion and application method accurately. If not handled properly, it may introduce too much noise or incorrect information, which may instead interfere with the learning process of the model and reduce the classification performance.\\n\\nWe agree that determining appropriate mixing proportions is crucial for PatchMix to be effective. To address this challenge, our method incorporates two dynamic mechanisms for controlling mixing proportions. First, we introduce $\\\\alpha$, which weights each patch based on attention scores from $x$ and $x\\u2032$. This ensures that patches less relevant to semantic content receive lower weights in computing $\\\\mathcal{L}^{rep}_s$ and $\\\\mathcal{L}^{cls}_s$, effectively reducing the impact of potentially noisy or irrelevant patches. Second, $\\\\beta$ is sampled from with Beta distribution (Zhu et al. (2023)), as a random mixing proportion for each patch $j$. This dual-control mechanism makes our mixing proportion dynamic rather than static, allowing the model to adaptively adjust the influence of different patches based on their semantic relevance while maintaining a degree of randomness through the Beta distribution. This approach helps mitigate the risk of introducing excessive noise or incorrect information during the mixing process, particularly when dealing with significant domain differences.\"}", "{\"summary\": \"This paper introduces a new problem setting: Generalized Category Discovery (GCD) with domain shift. The authors leverage techniques from domain adaptation and curriculum learning to propose a new method called HiLo. Comprehensive experiments on the proposed benchmark demonstrate substantial improvements.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper proposes a new problem setting and proposes a HiLo, which combines multiple techniques from domain adaption and achieves better results.\", \"weaknesses\": \"1. The novelty of the method appears limited, as it seems to combine various techniques from different domains.\\n\\n2. The comparison with UniOT should be included in the main results. Since the proposed setting is similar to universal domain adaptation, it is essential to compare methods from both domains in the main results.\\n\\nMinor\\uff1a \\n\\nMissing citation for the following important paper\\n\\n[1] Rastegar et al. Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery. NeurIPS 2023.\\n\\n[2] Gu et al. Class-relation Knowledge Distillation for Novel Class Discovery. ICCV2023.\", \"questions\": \"Please clarify the novelty of the proposed method, and include more comparisons with UinOT in the main results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new challenge for Generalized Category Discovery, which requires model to categorize unlabeled data in the presence of domain shifts. Traditional GCD methods assume all images come from the same domain, which leads to a significant performance drop under domain shifts. The proposed HiLo framework explicitly disentangles semantics and domain, achieving domain adaptation in GCD through patchmix and curriculum learning. Experimental results show performance improvements, validating the effectiveness of the approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a new, practically meaningful, and challenging setting, and constructs corresponding datasets.\\n2. The domain-semantic disentangled design is well-reasoned, clearly aligning the motivation.\\n3. The proposed approach demonstrates significant performance improvement on SSB-C.\\n4. The writing is clear and easy to follow.\", \"weaknesses\": \"1. The performance gain on DomainNet is considerably smaller than on SSB-C, and the improvement over methods like SimGCD, which does not account for domain shift, is modest. This indicates limited robustness on various domain shifts and fails to highlight the advantages of the proposed approach.\\n2. The method is sensitive to certain hyperparameters, and $r^{'}$ does not exhibit consistent performance across the original and new domains.\\n3. The approach of decoupling domain and semantics is derived from [1], and the use of patchmix for domain adaptation is adapted from [2]. The curriculum learning strategy is also straightforward. Overall, the method seems to be an assembly of prior works, lacking substantial novelty.\\n4. There is no analysis of the disentangled domain and semantic features, such as distribution visualizations. This would help illustrate the effectiveness of the disentanglement.\\n5. In line 287, same representation loss $L^{rep}_s$ on both domain and semantic features is confusing. This approach may lead domain features to capture information beyond true domain characteristics. It would be valuable to see t-SNE visualizations of domain features, semantic features, and their combination. The author does not provide a corresponding discussion.\\n6. Line 313 mentions using pre-trained DINO to obtain $z_d$, but previously $z_d$ is associated with a projection head. If the projection head is discarded, then $z_d$ will always be identical in different time steps. If it is retained, the term \\u201cpretrained\\u201d is confusing. This needs clarification.\\n7. The ablation study is somewhat unclear. For instance, in row (5) where only deep features are used, does this mean all other designs related to the shallow feature $z_d$ are also omitted? This also needs clarification.\\n\\nReference\\n\\n[1] Learning deep representations by mutual information estimation and maximization\\n\\n[2] Patch-mix transformer for unsupervised domain adaptation: A game perspective\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Clarification of zd in Line 313\\n\\nIndeed, $\\\\mathcal{z}_d$ is computed without the projection head only once before data loading and remains constant across different time steps. This is intentional in our design. For our curriculum sampling, $\\\\mathcal{z}_d$ is used to pre-compute sampling weights before data loading begins. This pre-computation approach is efficient and aligns with our goal of using domain information as a fixed reference point for guiding the learning of semantic features, rather than as a feature representation that needs to be optimized. \\n\\nThis design choice helps maintain our focus on clarifying semantic representations while treating domain information as an auxiliary signal for regularization and sampling. We thank the reviewer for raising this point, and we have now further clarified this by removing $\\\\mathcal{z}_d$ in Line 309-310.\\n\\n\\n> In row (5) where only deep features are used, does this mean all other designs related to the shallow feature zd are also omitted?\\n\\nRow (5) means we extract domain features ($\\\\boldsymbol{z}_d$) from the penultimate layer and semantic features ($\\\\boldsymbol{z}_s$) from the final layer. Row (6) means we extract domain features from the first layer and semantic features from the second layer. We have revised the descriptions to \\u201c$\\\\boldsymbol{z}_d, \\\\boldsymbol{z}_s$ from deep features only\\u201d and \\u201c$\\\\boldsymbol{z}_d, \\\\boldsymbol{z}_s$ from shallow features only\\u201d to make it clearer.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"> The novelty of the method appears limited, as it seems to combine various techniques from different domains.\\n\\nPlease see the General Response for our response regarding novelty of the method.\\n\\n\\n> The comparison with UniOT should be included in the main results. Since the proposed setting is similar to universal domain adaptation, it is essential to compare methods from both domains in the main results.\\n\\nFollowing the suggestion, we have conducted experiments using UniOT on SSB-C. The results are presented in Table S2 and also incorporated into Table 3 in the main paper. Our HiLo continues to outperform all other methods.\", \"table_s2\": \"Evaluation on SSB-C. Bold values represent the best results.\\n| | CUB-C Original | | | CUB-C Corrupted | | | Scars-C Original | | | Scars-C Corrupted | | | FGVC-C Original | | | FGVC-C Corrupted | | |\\n|-----------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|\\n| | All | Old | New | All | Old | New | All | Old | New | All | Old | New | All | Old | New | All | Old | New |\\n| SimGCD | 31.9 | 33.9 | 29.0 | 28.8 | 31.6 | 25.0 | 26.7 | 39.6 | 25.6 | 22.1 | 30.5 | 14.1 | 26.1 | 28.9 | 25.1 | 22.3 | 23.2 | 21.4 |\\n| UniOT | 27.5 | 29.3 | 26.8 | 27.3 | 33.2 | 22.5 | 24.3 | 37.5 | 22.3 | 22.9 | 31.4 | 13.7 | 27.3 | 29.8 | 22.5 | 21.6 | 23.5 | 19.6 |\\n| HiLo (Ours) | **56.8** | **54.0** | **60.3** | **52.0** | **53.6** | **50.5** | **39.5** | **44.8** | **37.0** | **35.6** | **42.9** | **28.4** | **44.2** | **50.6** | **47.4** | **31.2** | **29.0** | **33.4** |\\n\\n\\n\\n> Missing citation for the following important paper\\n\\nWe have included these two works in our related work following the suggestion.\"}", "{\"comment\": \"I appreciate the authors' response, and I will maintain the initial score.\"}", "{\"comment\": \"> The author needs to differentiate between the GCD task setting and the domain shift GCD task setting, so this statement should be revised for clarity and precision.\\n\\n\\nFollowing the suggestion, we have revised the corresponding sentence in Line 164-167 to \\u201cThe objective of GCD with domain shifts is to classify all unlabelled images in $\\\\mathcal{D}^u$ (from either $\\\\Omega^a$ or $\\\\Omega^b$) using only the labels in $\\\\mathcal{D}^l$. This is different from the setting of NCD with domain shift and GCD, which assumes $\\\\mathcal{Y}^l\\\\cap\\\\mathcal{Y}^u=\\\\emptyset$ for the former and $\\\\Omega^a=\\\\Omega^b$ with $|\\\\Omega^a|=|\\\\Omega^b|=1$ for the latter.\\u201d\\n\\n\\n> The font sizes of the tables are not standardized, and the font in table 2 is too small.\\n\\n\\nThanks for the suggestion. We will properly fix it in our final version. Specifically, we will report the average results across different domains in the main paper and move the breakdown evaluation of each domain shift to Appendix.\\n\\n\\n> The authors should have listed the error lines generated by the three independent runs.\\n\\n\\nAll results in Table 2 and 3 are averaged by three trials with different random seeds. Following your suggestion, we have added the bar chars with corresponding error bars for different methods across DomainNet and SSB-C in Appendix O.\\n\\n\\n> For the experimental results of the ORCA method, what is the backbone used by the authors?\\n\\n\\nWe use the pretrained DINO model as the backbone for all methods to ensure fair comparison, which is a common practice in the GCD literature (Wen et al. (2023); Wang et al. (2024)).\\n\\n\\n> Were any curriculum learning alternatives considered, such as adaptive weighting based on difficulty or dynamic sample weighting? A brief discussion on these choices would clarify why the current approach was favored.\\n\\nIndeed, there are several common curriculum learning strategies: (1) difficulty-based adaptive weighting, which adjusts sample weights based on model performance [S9]; (2) dynamic sample weighting, which updates weights during training based on learning progress [S10]. Our method aligns with these approaches in principle, as domain shift naturally correlates with learning difficulty - samples with larger shifts from the source domain are inherently more challenging to learn from. However, rather than computing difficulty measures during training, we pre-compute sampling weights based on domain shifts before training begins. This design choice offers two key benefits: (1) computational efficiency by avoiding per-iteration difficulty assessment, and (2) more stable training by preventing potential oscillations in difficulty estimates that can occur with dynamic weighting. This approach captures similar intuitions about progressive learning.\\n\\n\\nTo address this comment, we further explored one of each type, CL [S8] and Self-Paced Learning (SPL) [S9]. For CL, we implemented it by gradually including more difficult samples based on classification loss, starting with 20% of the easiest samples and increasing by 10% every 20 epochs. We also integrated SPL into our framework by adding a weighted loss term $\\\\lambda||v||1 + \\\\sum{i}v_i\\\\ell_i$, where $v_i$ indicates sample weights and $\\\\ell_i$ is the classification loss for sample $i$. While both CL and SPL improve over the baseline SimGCD (60.1/33.2 on Real/Painting domains), achieving 62.0/34.7 and 62.8/35.0 respectively for 'All' classes, they still fall notably short of our method's performance (64.4/42.1 as shown in Table S1 and Table 6 in the main paper). In contrast, our pre-computed domain shift-based sampling achieves better results, demonstrating the effectiveness of using domain shifts as a proxy for learning difficulty.\", \"table_s1\": \"Evaluation on DomainNet. Bold values represent the best results.\\n| | Real | | | Painting | | |\\n|-----------------|-----|-----|-----|-----|-----|-----|\\n| Methods | All | Old | New | All | Old | New |\\n| SimGCD | 61.3 | **77.8** | 52.9 | 34.5 | 35.6 | 33.5 |\\n| HiLo | **64.4** | 77.6 | **57.5** | **42.1** | **42.9** | **41.3** |\\n| HiLo + CL - curriculum sampling | 62.0 | 75.9 | 53.2 | 34.7 | 35.8 | 33.8 |\\n| HiLo + SPL - curriculum sampling | 62.8 | 76.5 | 54.5 | 35.0 | 36.1 | 34.0 |\\n\\n\\n[S8] Bengio, Yoshua, et al. \\\"Curriculum learning.\\\" ICML. 2009.\\n\\n\\n[S9] Kumar, M., Benjamin Packer, and Daphne Koller. \\\"Self-paced learning for latent variable models.\\\" NeurIPS. 2010.\"}", "{\"comment\": \"Thank authors' detailed feedback. My concerns are resolved. Therefore, I maintain the initial score.\"}", "{\"comment\": \"> The performance gain on DomainNet is considerably smaller than on SSB-C, which fails to highlight the advantages of the proposed approach.\\n\\nWe appreciate the reviewer's observation regarding the performance differences between SSB-C and DomainNet. We would like to clarify several important points:\\n\\nFirst, DomainNet presents a highly challenging scenario due to its dramatic domain shifts and large-scale nature. We find that even ensembling of a number of SoTA domain adaptation methods, combined with SimGCD, show limited improvement on this dataset. For instance, in Table 5, SimGCD+MCC+NWD yields 35.7 ACC, compared to 42.5 for HiLo. Our method achieves 16% improvement in proportional terms for \\\"All\\\" classes ACC on the \\\"Painting\\\" domain, substantially greater than those achieved by combining two SoTA domain adaptation.\\n\\nSecond, the relatively smaller performance gain on DomainNet aligns with a well-known phenomenon in machine learning: improvements on large-scale datasets are typically more modest compared to smaller datasets. For instance, in object detection, SOTA improvements on COCO (large-scale) typically show 1-2% gains [S2], while improvements on smaller datasets like PASCAL VOC can reach 5-7% [S3]. Similarly, in image classification, recent architectural advances show 0.5-1% improvements on ImageNet but 2-3% gains on smaller datasets like CIFAR-100 [S4].\\n\\n[S2] Sun, P., Zhang, R., Jiang, Y., Kong, T., Xu, C., Zhan, W., ... & Luo, P. \\u201cSparse r-cnn: End-to-end object detection with learnable proposals.\\u201d CVPR. 2021.\\n\\n[S3] Liu, Ze, et al. \\\"Swin transformer: Hierarchical vision transformer using shifted windows.\\\" CVPR. 2021.\\n\\n[S4] Touvron, Hugo, et al. \\\"Training data-efficient image transformers & distillation through attention.\\\" ICML, 2021.\\n\\n\\n> The method is sensitive to certain hyperparameters, like $r\\u2032$. \\n\\nAs demonstrated in our ablation study (Appendix M), the performance with respect to $r'$ exhibits a convex shape across different configurations, indicating a clear pattern rather than arbitrary sensitivity. Specifically, we found that $r'=0$ performs optimally for the original domain, while $r'=0.5$ works best for corrupted domains. This behavior is actually expected and interpretable: the original domain requires less distribution shifts ($r'=0$) since the data distribution is clean, while corrupted domains benefit from moderate augmentation ($r'=0.5$) to handle distribution shifts.\\nGiven our challenging setting of simultaneously handling generalized category discovery and domain adaptation, finding a single set of hyperparameters that performs optimally across all domains is inherently difficult. Nevertheless, our method maintains robust performance across domains even with sub-optimal hyperparameter choices (see Appendix M).\\n\\n> The method seems to be an assembly of prior works, lacking substantial novelty.\\n\\nPlease see the General Response for our response regarding novelty of the method.\\n\\n> There is no analysis of the disentangled domain and semantic features, such as distribution visualizations\\n\\nWe have visualized disentangled domain and semantic features projected by PCA in Figure 4a, which can effectively show disentanglement. This verifies that HiLo successfully learns domain-specific and semantic-specific features. Additionally, we have added t-SNE visualization to address the next comment below.\\n\\n\\n> Same representation loss Lsrep on both domain and semantic features is confusing\\u2026 It would be valuable to see t-SNE visualizations of domain features, semantic features, and their combination.\\n\\nWe thank the reviewer for raising this point! Our approach actually addresses a dual GCD problem that operates simultaneously across semantic and domain axes, particularly given that we work without explicit source/target domain splits in unlabeled data. Both semantic and domain spaces contain their own \\\"seen\\\" and \\\"unseen\\\" categories, making representation learning valuable for both aspects to achieve effective disentanglement.\\n\\nFurthermore, since our PatchMix strategy inherently interweaves both semantic and domain information as part of the augmentation process, maintaining representation learning for both feature types becomes essential for proper disentanglement. We have added t-SNE visualizations in the appendix P that demonstrate that our approach learns distinct domain and semantic features.\"}", "{\"comment\": \"Dear Reviewer jy36,\\n\\nWe are pleased that our responses have addressed your concerns. Thank you very much for your insightful suggestions and valuable efforts, which are crucial for enhancing the quality of our paper.\"}", "{\"comment\": \"Dear Reviewer 6eyX,\\n\\n\\nWe are thrilled to note that your concerns have been addressed. We sincerely appreciate your dedicated time and effort in offering invaluable feedback.\"}", "{\"metareview\": \"The paper introduces HiLo, a novel framework for Generalized Category Discovery (GCD) under domain shifts. It disentangles semantic and domain features using mutual information minimization, enhances learning with PatchMix-based contrastive learning, and integrates curriculum learning. Extensive evaluations on SSB-C and DomainNet benchmarks show substantial improvements over baselines.\\n\\nThe paper's strengths include: 1) Innovative problem setting combining GCD and domain adaptation. 2) Effective disentanglement of domain/semantic features. 3) Robust experimental results with strong theoretical underpinnings.\\n\\nHowever, the reviewers raised concerns about the novelty and insufficient analysis of the claimed disentangled feature.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers raised concerns about novelty, disentanglement assumptions, curriculum learning robustness, and comparison with UniOT. The authors clarified methodological choices, added visualizations, extended comparisons, and refined explanations. These responses effectively resolved most concerns, demonstrating rigorous design and empirical strengths. The paper's practical impact and solid evaluation led to acceptance.\"}", "{\"summary\": \"Generalized Category Discovery (GCD) is a challenging task where, given a partially labeled dataset, the model must classify all unlabeled instances. This paper introduces a new task and method to handle the GCD problem when the unlabeled data contains images from different domains. In terms of the method, the HiLo architecture and learning framework involves extracting \\\"low-level\\\" (early layers) and \\\"high-level\\\" (late layers) features from a vision Transformer and decoupling domain and semantic features by minimizing the mutual information between the two sets of features. The PatchMix contrastive learning method is introduced into the GCD task, with its objective function extended to enable the utilization of both labeled and unlabeled data for training. Curriculum learning is adopted, gradually increasing the sampling probability weight of samples predicted to be from unknown domains to enhance the model's robustness to domain shifts. Experiments are conducted on the DomainNet and the SSB-C benchmark datasets constructed based on the Semantic Shift Benchmark (SSB). The experimental results show that HiLo significantly outperforms existing category discovery models, validating the effectiveness of the method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The HiLo architecture extracts features from different layers of the vision Transformer and decouples domain and semantic features by minimizing mutual information. This feature processing method, based on the neural network hierarchical structure and information theory, provides a more effective feature representation for category discovery in the presence of domain shifts and avoids the problem of feature confusion in traditional methods.\\n\\n2. The PatchMix method is introduced into the GCD task and innovatively extended. By adjusting its objective function, it can adaptively utilize labeled and unlabeled data for training. This extension not only combines the advantages of data augmentation but also flexibly adjusts the learning process according to the nature of different data, enhancing the model's ability to learn data from different domains and categories.\\n\\n3. The curriculum learning method is employed, which dynamically adjusts the sampling probability weights according to the difficulty of samples and the unknown degree of domains. This strategy of gradually introducing samples from easy to difficult conforms to the learning law, enabling the model to better adapt to the challenges brought by domain shifts and improving the model's convergence speed and robustness to complex data distributions.\\n\\n4. In terms of method design, innovative technical architectures and learning strategies are used, as well as theoretical analyses to verify their effectiveness. From the theoretical derivation of the target error to the analysis of the roles of different components, a solid theoretical foundation is provided for the innovation of the method, demonstrating the advantage of the close integration of theory and practice.\", \"weaknesses\": \"1. In HiLo, features are disentangled by assuming that features from different layers represent domain and semantic information, respectively and minimizing the mutual information based on this assumption. However, this assumption may oversimplify the complexity of feature representation in neural networks. In fact, features from different layers may be a mixture of multiple types of information. Simply defining the early layers as domain features and the late layers as semantic features may not be entirely accurate, which may lead to incomplete feature disentanglement in some complex data distributions and affect the performance and generalization ability of the model.\\n\\n2. The introduction and extension of PatchMix in the GCD task is an innovation, but it also brings some problems. The adjustment of its objective function and its application on different data increases the complexity of the model. When dealing with data with large domain differences, it is a challenge to determine the mixing proportion and application method accurately. If not handled properly, it may introduce too much noise or incorrect information, which may instead interfere with the learning process of the model and reduce the classification performance.\\n\\n3. In the curriculum learning method, the adjustment parameters of the sampling probability weights need to be selected through the validation set, which increases the dependence of the model on specific datasets. Moreover, for different datasets and tasks, the optimal values of these parameters may vary greatly, and the model cannot adaptively determine these parameters. If these parameters cannot be correctly selected in a new dataset or task, curriculum learning may not be able to play its intended role. It may even have a negative impact on the learning of the model.\", \"questions\": \"Please see the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer 6eyX,\\n\\n\\nThanks very much for your time and valuable comments. We have provided detailed responses to all your comments and questions point-by-point for the unclear presentations and novelty clarification.\\n\\nAny comments and discussions are welcome!\"}", "{\"title\": \"General response\", \"comment\": \"We thank reviewers for their constructive and valuable feedback. We are encouraged that the reviewers find our paper to be **\\\"clear and easy to follow\\\"** (Reviewer 6eyX), with a **\\\"well-reasoned\\\"** (Reviewer 6eyX) and **\\u201cinnovative approach\\u201d**, while presenting **\\\"innovative technical architectures\\\"** with **\\\"solid theoretical foundation\\\"** (Reviewer RNZQ). The reviewers agreed that our contributions are significant, noting our **\\\"new, practically meaningful, and challenging setting\\\"** (Reviewer 6eyX, jy36) and **\\\"effective feature representation\\\"** learning method (Reviewer RNZQ). We are glad that reviewers also found our evaluation **\\\"demonstrates the model's robustness\\\"** (Reviewer L4eA) and that reviewers acknowledge our **\\\"significant performance improvement on SSB-C\\\"** (Reviewer 6eyX) and effectiveness in handling **\\\"domain shifts\\\"** (Reviewer jy36, L4eA).\\n\\nWe have carefully addressed all concerns raised by the reviewers. First, we provide a **general response** to shared concerns or critical points. We also address the reviewers\\u2019 individual concerns after their comments. We have also revised our manuscript based on the comments from the reviewers.\\n\\n\\n**Novelty of proposed model (Reviewer 6eyX, jy36)**\\n\\nIn this paper we have proposed both a novel (but intuitive) **problem setting** as well as a new **solution** to tackle it. The setting is a challenging and practical image classification problem, as noted by Reviewers 6eyX, RNZQ and jy36. Here, a model must jointly learn from labeled and unlabeled images, with the goal of clustering all unlabeled images into distinct categories. Notably, the unlabeled images may come from different **categories** and **domains** to the labeled set. Though simple and practical, only subsets of this problem have been addressed in prior literature: particularly in the Unsupervised Domain Adaptation (UDA) and Generalized Category Discovery (GCD) fields. \\n\\nAs such, we are transparent in the paper that aspects of our solution have been introduced in prior work (Line 247-248), notably the PatchMix approach. However, we also show that simply ensembling SoTA methods from UDA and GCD does not yield substantial gains in our challenging setting (see Table 5). \\n\\nParticularly, we find that it is critical to find the optimal recipe for the current task, and that without the following innovations the method does not work:\\n- A new PatchMix formulation for new classes: PatchMix was developed for UDA and naive application of it does not allow the method to discover new classes. Instead, we must devise a PatchMix-based contrastive learning method to address the challenge of GCD in the presence of domain shift (see Section 3.2.2, and ablation in Table 4). Our approach properly leverages all available samples, including both labelled and unlabelled data, from both in-domain and out-of-domain sources, encompassing both old and new classes. By incorporating these diverse samples, our technique aims to improve the model's ability to handle domain shifts and effectively generalize across different classes.\\n- A curriculum learning strategy: To our knowledge, the use of curriculums is still underexplored in the GCD literature. We find that in our challenging setting, it is critical for an appropriate curriculum to be introduced. \\n- Disentangling domain and semantic features: Though the loss formulation has been previously explored, it has not been applied to category discovery before, where it finds a natural fit in the presence of the domain shift problem.\"}", "{\"comment\": \"> In the curriculum learning method, the adjustment parameters of the sampling probability weights need to be selected through the validation set, which increases the dependence of the model on specific datasets. Moreover, for different datasets and tasks, the optimal values of these parameters may vary greatly, and the model cannot adaptively determine these parameters. If these parameters cannot be correctly selected in a new dataset or task, curriculum learning may not be able to play its intended role. It may even have a negative impact on the learning of the model.\\n\\nWe agree that traditional curriculum learning methods often require careful parameter tuning per dataset, which can indeed increase model dependence on specific datasets. However, our approach fundamentally differs in how we determine sample difficulty. Instead of introducing additional hyperparameters that need validation set tuning, we leverage semi-supervised k-means clustering on domain features extracted from a DINO pretrained backbone to naturally separate samples based on their domain shifts.\\n\\nSpecifically, we run semi-supervised k-means on domain features across all domains, where labeled source domain samples serve as anchors. This clustering process automatically identifies samples with varying degrees of domain shift without requiring dataset-specific parameter tuning. Samples that cluster far from source domain centers naturally represent instances with larger domain shifts (higher difficulty), while those clustering closer indicate smaller shifts (lower difficulty). This data-driven approach adapts to the inherent structure of each dataset, making our method more generalizable across different scenarios.\\n\\nWhile our method does involve some parameters ($r_0$, $r^{\\\\prime}$ and $t^{\\\\prime}$) for controlling the curriculum progression, these parameters are more interpretable compared to traditional curriculum learning parameters. This is because they operate on the natural difficulty hierarchy established by domain shifts rather than arbitrary difficulty measures:\\n- $r_0$ and $r^{\\\\prime}$ represent initial and final sampling ratios for harder samples\\n- $t^{\\\\prime}$ controls the curriculum pacing\", \"these_parameters_follow_a_simple_principle\": \"start with easier samples (closer to source domain) and gradually incorporate harder ones (larger domain shifts). This intuitive progression pattern remains consistent across different datasets, making parameter selection more straightforward and transferable.\\n\\nMoreover, since we use the pretrained DINO backbone for feature extraction, this difficulty assessment is based on general visual representations rather than dataset-specific characteristics. This makes our approach more robust and transferable across different datasets and tasks.\"}", "{\"comment\": \"Dear Reviewer L4eA,\\n\\nThanks very much for your time and valuable comments. We appreciate your positive feedback.\"}", "{\"title\": \"Official Comment by Reviewer 6eyX\", \"comment\": \"Thank you for the authors' detailed response, which resolves my concerns. As a result, I will increase my score.\"}", "{\"summary\": \"The paper introduces the HiLo framework, a learning method aimed at tackling Generalized Category Discovery (GCD) under domain shifts. HiLo addresses challenges in categorizing both seen and unseen categories across distinct domains within partially labeled datasets, leveraging a multi-faceted approach: mutual information minimization to separate domain and semantic features, PatchMix for augmented domain adaptation, and a curriculum learning strategy. The proposed method is evaluated on synthetic and real-world domain-shift datasets, showing substantial improvements over existing GCD models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents an innovative GCD approach by combining mutual information minimization with domain-specific data augmentation and curriculum learning to handle domain shifts effectively.\\n2. Extensive evaluation on both synthetic (SSB-C) and real-world (DomainNet) benchmarks demonstrates the model's robustness and its superiority over baseline GCD models, especially under domain-shifted conditions.\", \"weaknesses\": \"1. In the \\\"Problem statement,\\\" the following sentence is unclear: \\\"The objective of GCD is ... with singleton cardinalities for the latter.\\\" The author needs to differentiate between the GCD task setting and the domain shift GCD task setting, so this statement should be revised for clarity and precision.\\n2. The font sizes of the tables are not standardized, and the font in table 2 is too small.\\n3. I'm curious as to how many runs each of the authors' experimental results were derived from, and given that the differences in the results of the GCD benchmark tests can be very large, the authors should have listed the error lines generated by the three independent runs.\", \"questions\": \"1. For the experimental results of the ORCA method, what is the backbone used by the authors?\\n2. Were any curriculum learning alternatives considered, such as adaptive weighting based on difficulty or dynamic sample weighting? A brief discussion on these choices would clarify why the current approach was favored.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2e4ECh0ikn
Talking Turns: Benchmarking Audio Foundation Models on Turn-Taking Dynamics
[ "Siddhant Arora", "Zhiyun Lu", "Chung-Cheng Chiu", "Ruoming Pang", "Shinji Watanabe" ]
The recent wave of audio foundation models (FMs) could provide new capabilities for conversational modeling. However, there have been limited efforts to evaluate these audio FMs comprehensively on their ability to have natural and interactive conversations. To engage in meaningful conversation with the end user, we would want the FMs to additionally perform a fluent succession of turns without too much overlapping speech or long stretches of silence. Inspired by this, we ask whether the recently proposed audio FMs can understand, predict, and perform turn-taking events? To answer this, we propose a novel evaluation protocol that can assess spoken dialog system's turn-taking capabilities using a supervised model as a judge that has been trained to predict turn-taking events in human-human conversations. Using this protocol, we present the first comprehensive user study that evaluates existing spoken dialogue systems on their ability to perform turn-taking events and reveal many interesting insights, such as they sometimes do not understand when to speak up, can interrupt too aggressively and rarely backchannel. We further evaluate multiple open-source and proprietary audio FMs accessible through APIs on carefully curated test benchmarks from Switchboard to measure their ability to understand and predict turn-taking events and identify significant room for improvement. We will open source our evaluation platform to promote the development of advanced conversational AI systems.
[ "Turn-taking", "Conversation AI", "Audio Foundation Models", "Evaluation Metric", "Evaluation Benchmark" ]
Accept (Poster)
https://openreview.net/pdf?id=2e4ECh0ikn
https://openreview.net/forum?id=2e4ECh0ikn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x8Uf2wrEkP", "wIMAkFntRF", "mZ8wHPdxP8", "mDaMYz4Qgq", "m8cbAor9Ek", "iDQVCEGGJV", "f7xygJYq6u", "azPGkOPnas", "ZofSXU5CDG", "Vi7nzoMqrE", "OkKQQtTSx3", "Og4eAEqIxu", "ONplZpc92I", "NqGtp12o1j", "NNDKxnkP88", "LVzPavY5JS", "KX8UUn8uwL", "JmoGh57MHL", "IvqsaYfd3C", "FcrEx0aSGh", "FbAIyW0elD", "FY0OtTM7Xi", "CrlFQ5SLXY", "9c6EuF6EC9", "8RsnnD2xnH", "7GCx4exBBE", "4LgFXsMkkQ", "4Jr9n6QafP", "3ybX8VGoVY", "3KP6Y0ydoy", "1GK7eo7udo", "0nnPR2f0EI" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732208857070, 1730624096388, 1732209745046, 1732208460024, 1733107557937, 1732523934461, 1732455684312, 1732342066103, 1734528458098, 1733226708879, 1730653223784, 1732209451621, 1732642845776, 1732543852380, 1730692233304, 1732543708445, 1730714246413, 1733179218716, 1737524130363, 1733178872591, 1732208717480, 1730713380415, 1732209216449, 1733178746748, 1733179177929, 1732599969611, 1732208557542, 1732543985767, 1732543937290, 1732543821855, 1732639496279, 1733245289695 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_mo78" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_mo78" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_cJr2" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_mo78" ], [ "ICLR.cc/2025/Conference/Submission11546/Area_Chair_MGh6" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_mo78" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_BKtK" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_cJr2" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_WGPU" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_Ddi6" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ], [ "ICLR.cc/2025/Conference/Submission11546/Reviewer_BKtK" ], [ "ICLR.cc/2025/Conference/Submission11546/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer Ddi6\", \"comment\": \"Thanks for the insightful suggestions and acknowledging that our work offers valuable references for future development of voice dialogue systems. We address your concerns below.\\n\\n---\\n\\n# 1. Evaluated few audio FMs\\n> This study only tested a few open-source and closed-source audio FM models.\\n\\nCheck general response\\n\\n---\\n\\n# 2. There is a lack of comprehensive performance evaluation and summary.\\n\\nWe apologize if our comprehensive evaluation was not clearly emphasized in the draft. Our work provides a thorough assessment of spoken dialogue systems by reviewing prior literature to identify key turn-taking abilities for effective human-AI interaction and developing specific metrics (Sections 4.4-4.8) to evaluate these skills. Our analysis highlights significant limitations in current AI dialogue systems and outlines key research directions for improvement, as noted by reviewers (@R Ddi6, @R BKtK, @R mo78). If you have specific suggestions for additional evaluations, we will try to incorporate them into the paper.\\n\\n---\"}", "{\"summary\": \"This paper introduces a novel evaluation framework for assessing turn-taking capabilities in audio foundation models (FMs). The authors first propose metrics for five core conversational abilities: determining when to speak up, backchannel, interrupt, convey turn-taking cues, and handle interruptions. They develop a supervised model trained on human-human conversations to serve as a judge for evaluating these turn-taking events. Using this framework, they conducted a user study with different spoken dialogue systems (full-duplex E2E spoken dialogue system Moshi and VAD-based cascade dialogue system) and evaluated them. They evaluate several open-source and proprietary audio FMs on their ability to understand and predict turn-taking events.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The evaluation protocol is novel and well-motivated.\\n2. The experimental analysis provides valuable insights into turn-taking capabilities of audio foundation models (FMs).\\n3. The user study reveals noteworthy observations about current spoken dialogue systems.\", \"weaknesses\": \"1. Turn-taking prediction models used in evaluation protocol require training, which limits scalability and applicability.\\n2. The paper does not thoroughly address how its proposed evaluation protocol compares with previous turn-taking approaches, such as Ekstedt and Skantze (2022).\\n\\nReference\\n* Ekstedt, Erik, and Gabriel Skantze. Voice activity projection: Self-supervised learning of turn-taking events. Interspeech 2022\", \"questions\": \"1. What is the main difference between the proposed evaluation protocol and the previous approach by Ekstedt and Skantze (2022)? Is it impractical to apply the metrics from prior turn-taking evaluation methods to audio FMs?\\n2. While the turn-taking prediction model has been evaluated on an out-of-domain task-oriented spoken dialogue corpus, could you evaluate it on additional non-task-oriented spoken dialogue datasets to assess the generalizability of the model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer mo78\", \"comment\": \"Thanks for the valuable comments and acknowledging that our work offers noteworthy observations about current spoken dialogue systems. We address your concerns below.\\n\\n---\\n\\n# 1. Generalization to non-task-oriented spoken dialogue datasets\\t\\n\\nWe evaluate our turn-taking prediction model on Fisher. Please check the general response.\\n\\n---\\n\\n# 2. Difference with prior turn taking evaluation metrics\\n> What is the main difference between the proposed evaluation protocol and the previous approach by Ekstedt and Skantze (2022)? Is it impractical to apply the metrics from prior turn-taking evaluation methods to audio FMs?\\n\\nThank you for this thoughtful question, and we apologize if this distinction was unclear in the draft. The key difference between our evaluation protocol and prior approaches [1], lies in the evaluation focus and application context.\\n\\n[1] focuses on assessing turn-taking model's ability to **predict turn-taking events** that will happen in the near future. This involves assessing how well turn-taking model forecasts when specific turn-taking events will happen in **human-human conversations** based on contextual cues.\\n\\nIn contrast, our protocol evaluates AI dialogue systems' ability to **perform turn-taking events** in spontaneous, interactive conversations with human users. This involves assessing how well the AI system actively decides to take the conversation floor, yield it\\u2019s turn, backchannel or interrupt the user during live **human-AI interactions**, reflecting its capability to engage in natural dialogue with the end user. This shift required adapting existing metrics and creating new ones tailored to human-AI interaction.\\n\\nFor example, while [1]'s SHIFT vs. HOLD (S/H) metric evaluates how well a turn-taking model predicts whether the current speaker will hold a turn or there will be a turn shift, we adapted it to evaluate an audio FM\\u2019s decision to speak or allow the user to continue during pauses. Pseudo ground-truth labels are generated using a judge turn-taking model, and the agreement between the AI's decisions and these pseudo labels is used to assess the quality of the AI system's turn-taking decisions. Additionally, we introduced novel metrics, such as evaluating how well AI systems manage user interruptions\\u2014an essential aspect of realistic human-AI interactions.\\n\\nIn summary, our protocol represents the first effort to assess audio FMs' ability to perform turn-taking events in human-AI conversations, offering valuable insights for future research. In response to your feedback, we have added this discussion in Sec. 4.3 and A.1.1 to clarify this novel contribution.\\n\\n---\\n\\n# 3. Scalability and Applicability of Approach\\n> Turn-taking prediction models used in evaluation protocol require training, which limits scalability and applicability.\\n\\nPlease check the general response.\\n\\n---\\n\\nReferences\\n\\n[1] Ekstedt, Erik, and Gabriel Skantze. Voice activity projection: Self-supervised learning of turn-taking events. Interspeech 2022 (https://arxiv.org/abs/2205.09812 )\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for their thoughtful feedback. We are pleased that our approach is recognized as novel (@R WGPU, @R mo78), thoughtfully designed (@R BKtK), and clearly presented (@R WGPU, @R Ddi6). We appreciate the recognition of its value in assessing conversational capabilities of audio FMs (@R Ddi6, @R BKtK), highlighting AI dialogue system limitations, and suggesting future research directions (@R Ddi6, @R BKtK, @R mo78). We will incorporate the reviewers\\u2019 constructive suggestions into the paper.\\n\\n---\\n\\n# 1. Results on Fisher\\n\\nReviewers (@R WGPU, @R mo78) suggested evaluating our judge turn-taking model on an out-of-domain, non-task-oriented spoken dialogue dataset, such as Fisher. To address this, we created a random test set comprising 23 hours of audio and 138 conversations, similar in size to Switchboard test set, and we will make this test split publicly available. The Fisher dataset's transcriptions were created using the Quick Transcription specification, which introduced inaccuracies and left significant portions untranscribed as also noted in [1]. We developed heuristics to identify such audio segments with large untranscribed content and made sure to exclude these audio segments from the test set. Further manual analysis revealed errors in the ground truth timestamps and we corrected these timestamps using speaker diarization outputs from Pyannote.\\n\\nOur results show that our model demonstrates strong generalization on this corpus, achieving a **ROC-AUC of 91.0** comparable to the performance on the in-domain test set, i.e., Switchboard (overall ROC-AUC score of 92.0). These results and a detailed discussion have been included in the updated draft (Tab. 1, Sec. 4.1, Sec A.4), and we will make all our data preparation code and model publicly available. Thank you for the suggestion, as it helped us further validate the robustness of our approach. \\n\\n---\\n\\n# 2. Evaluating more audio FMs\\nWe acknowledge the reviewer's (@R Ddi6, @R BKtK) concern about benchmarking only a limited set of audio FMs for evaluating conversational capabilities. At the time of conducting this study, there were indeed very few audio FMs capable of performing turn-taking events, limiting our selection. We emphasize that this is a developing field, and we plan to expand our benchmarking as more audio FMs with turn-taking capabilities emerge. We explicitly mention this in the updated paper (Limitations in Sec. 6).\\n\\n@R BKtK mentioned that we should have evaluated GPT-4o. At the time of submission, GPT-4o\\u2019s Advanced Voice Mode was not publicly available, preventing its inclusion in our evaluations. Based on the feedback, we have started collecting human-GPT-4o conversation data, accumulating 55 minutes and 38 seconds of audio across six speakers. This preliminary test trial has revealed some interesting insights:\\n\\n- Turn-Taking Latency: GPT-4o exhibits a moderate gap between speaker turns (16.1% of cumulative duration), smaller than the Cascaded system (32.5% in Fig. 2b) but larger than Moshi (11.8% in Fig. 2b), indicating intermediate latency.\\n- Overlap: Similar to a cascaded system, GPT-4o has minimal overlapping speech (0.5% of cumulative duration in Fig. 2b), resulting in less interactive conversations.\\n- Turn-Yielding Behavior: GPT-4o has a high number of pause events (18.0 per minute) and fewer gap events (3.2 per minute) compared to the other dialogue systems (Fig. 2a), indicating that it sometimes speaks for very long without yielding it\\u2019s turn, which makes the conversation bland and less engaging for end users.\\n- Metric C (Sec. 4.6): GPT-4o rarely interrupts users (0.1%, lower than the other systems in Table 2), but when it does, its agreement with the judge label (75.0%) is significantly higher than Moshi (35.7% in Fig. 3c) and the Cascaded system (24.2% in Fig. 3c).\\n- Metric D (Sec. 4.7): GPT-4o is significantly better at conveying users when it wants to keep the conversation floor, having 68.9% agreement with judge label which is much higher than Moshi (32.7% agreement in Fig. 3d) and Cascaded system (40.8% agreement in Fig. 3d).\\n\\nWe are currently expanding this effort to collect 4 hours of GPT-4o conversation data, comparable to the results reported for other dialogue systems in the draft. This analysis with GPT-4o will be included in the final paper.\\n\\nWe agree with the reviewers (@R Ddi6, @R BKtK, @R mo78) that, while our analysis is limited to a few audio FMs, it effectively highlights issues in current AI dialogue systems, offering valuable insights for future research and underscoring the utility of our protocol.\\n\\n---\", \"references\": \"[1] Generative spoken dialogue language modeling (https://doi.org/10.1162/tacl_a_00545 )\"}", "{\"comment\": \"Thank you for your detailed response! My concerns have been partially addressed, and I will adjust the contribution score accordingly. However, I will maintain my overall rating due to ongoing concerns about the relatively low agreement between the majority of judge labels and human judgments, as highlighted by another reviewer.\"}", "{\"comment\": \"### Threshold and Reliability\\n\\n**L303-304** \\n> we take inspiration from prior works (Yang et al., 2024; Zheng et al.,2024) that have experimented with using an LLM as a judge.\\n\\nWhen considering the concept of an LLM as a judge, it is customary to employ a robust model such as GPT-4. Typically, these models demonstrate agreement rates exceeding 80% when compared with human evaluations [1]. Furthermore, such models are validated not only against subjective measures like agreement but also through objective benchmarks [2]. These high standards highlight the importance of evaluating how well the proposed approach aligns with them. \\n\\nAlthough the proposed model shows high agreement with human judgments within the 'dataset' used for human-human comparisons, this does not necessarily establish the model as 'objectively' strong. Compared to the numerous benchmarks used to evaluate LLMs, the evidence provided in Sections 4.4\\u20134.8 (e.g., lines 322\\u2013323 for Metric A, lines 374\\u2013375 for Metric B, lines 399\\u2013401 for Metric C, lines 414\\u2013415 for Metric D, and lines 448\\u2013450 for Metric E) may not be sufficient to conclude that it is an excellent judge. That said, I want to stress that this critique does not imply the evaluation methods themselves are flawed. \\n\\n> we carefully tuned the thresholds ... to maximize the agreement between the judge model's labels and human judgments.\\n\\nTherefore, a good judge model should minimize the need for extensive hyperparameter tuning to achieve consistent results. Alternatively, it might be beneficial to first demonstrate that the model's performance is robust and consistent across different hyperparameter settings. \\n\\n[1] Zheng et al., Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena https://arxiv.org/abs/2306.05685\\n\\n[2] OpenAI, https://arxiv.org/abs/2303.08774\\n\\nFinally, I have adjusted the contribution score to reflect the strengths of this work, particularly its thoughtful exploration of evaluation techniques and efforts to align human and model judgments. While there are areas for improvement, the study provides valuable insights that advance discussions in this domain.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for engaging with our response and for your thoughtful questions.\\n\\n**Recap: What is our goal?** Prior turn-taking evaluation methods only assess how well turn taking models predict future turn-taking events in human-human conversations. In contrast, our protocol evaluates the AI dialogue system's ability to actively perform turn-taking events\\u2014assessing the quality of its decisions when taking the conversation floor, backchanneling, or interrupting the user during live human-AI interactions. \\n\\nDue to this fundamental difference in application focus, several challenges arise when attempting to apply prior turn-taking metrics to our setting:\\n\\n1. **Absence of Ground Truth**: In prior turn-taking evaluation methods, human-human turn-taking decisions serve as ground truth to assess how accurately a model predicts upcoming turn-taking events. However, in human-AI conversations, there is no inherent ground truth to evaluate the quality of AI\\u2019s turn-taking decisions. Ground truth must be generated through human relevance judgments, requiring annotators to listen to entire human-AI conversations and determine whether the AI\\u2019s turn-taking decisions were appropriate. For example, [1]'s SHIFT vs. HOLD (S/H) metric evaluates whether a turn taking model can predict whether the current speaker will hold or yield it\\u2019s turn. We adapted this to assess whether an audio FM can \\u201ccorrectly\\u201d decide when to speak or when to allow the user to continue when the user pauses in interactive human-AI conversation. Since no predefined ground truth exists to judge these decisions, we introduced a protocol to generate pseudo ground-truth labels using a judge turn-taking model trained on human-human conversations. The agreement between the AI\\u2019s decisions and these pseudo labels serves as a measure of the quality of the AI system's turn-taking behavior.\\n2. **Dual Metrics for Human and AI Turns**: Unlike prior works, our evaluation protocol requires separate modeling of turn-taking decisions for when the AI is the listener (human\\u2019s turn) versus when the AI is the speaker (AI\\u2019s turn). For example, in addition to evaluating the AI's behavior as a listener, we also assess it based on the turn taking events made by the user when it pauses during its turn. Specifically, we examine whether the AI provides clear cues to the end user to convey its willingness to yield the floor or retain it. This distinction is crucial for understanding the AI\\u2019s ability to appropriately formulate it\\u2019s output to manage turn-taking effectively. \\n3. **Distinct Metrics for Human and AI Turns**: It is important to recognize that the same set of metrics cannot be applied uniformly to both the AI and human turns. For instance, during the human\\u2019s turn, we evaluate the AI\\u2019s decision to interrupt\\u2014determining whether its interruptions are relevant and timely or overly aggressive. Conversely, during the AI\\u2019s turn, it is not meaningful to assess the appropriateness of user interruptions. Instead, we focus on how the AI responds to user interruptions. Specifically, we evaluate whether the AI ignores interruptions entirely or, conversely, always becomes silent, even when the user is merely providing supportive feedback rather than attempting to take over the conversation. To address these nuances of what metrics are appropriate, we conducted a thorough survey of prior work to identify the key turn-taking abilities required for a conversational agent to engage effectively with end users. Based on these findings, we designed appropriate metrics for each ability, adapting existing metrics when feasible and creating new ones where necessary.\\n4. **Introducing New Metrics**: Due to the difference in application focus, existing metrics could not be fully adapted to capture the complete range of conversational capabilities. For example, prior metrics did not address the handling of interruptions. In this work, we designed a labeling sequence for our judge model to explicitly differentiate between floor-taking (successful interruption) and butting-in (unsuccessful interruption) interruptions. This enhancement ensures that our evaluation protocol comprehensively assesses all key aspects of human-AI turn-taking behavior.\\n\\nTo conclude, we acknowledge the valuable contributions of prior work on training and evaluating turn-taking models. However, the prior evaluation metrics cannot be directly applied to assessing the quality of turn-taking decisions made by AI systems in human-AI conversations due to the absence of ground truth. Even when adapted using pseudo-ground truths or human relevance judgments, they may not be appropriate in certain scenarios or fully capture the complete range of capabilities needed to evaluate whether audio FMs can effectively manage turns in human-AI interactions. Based on your feedback, we will add this discussion to the paper. \\n\\nReferences\\n\\n[1] Ekstedt, Erik, and Gabriel Skantze. Voice activity projection: Self-supervised learning of turn-taking events.\"}", "{\"comment\": \"Thank you for your reply and hard work. I think 1) Generalization to non-task-oriented spoken dialogue datasets is resolved with your additional experiments on Fisher and 3) Scalability and applicability is somewhat explained while it remained a limitation\\nHowever, I still have a major concern about 2) Difference with prior turn-taking evaluation metrics: If current turn-taking metrics evaluate human-human conversation and your proposed metric evaluates human-AI interaction, isn't it possible to apply the current turn-taking metrics to human-AI interaction? In particular, could you explain what specific aspects of human-AI turn-taking cannot be captured by adapting existing metrics? Could you elaborate on this further?\"}", "{\"metareview\": \"This paper proposes a novel evaluation protocol for assessing turn-taking capabilities in audio-based spoken dialogue systems and audio foundation models. By defining metrics to capture when to speak, backchannel, interrupt, and handle interruptions, the authors introduce a supervised judge model trained on human-human conversations. Reviewers appreciated the motivation and the effort to formalize a challenging, subjective aspect of conversational interaction. They also welcomed the clarification that while perfect human-model alignment is unrealistic, the judge model achieves moderate to high agreement with human judgments across datasets. Although some thresholds and methods could be justified, and certain baselines were missing, additional evaluations indicate that the approach is sound and generalizes to out-of-domain scenarios.\\nGiven these points and the constructive improvements offered by the authors, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion, the authors clarified concerns about threshold choices and demonstrated that their turn-taking judge model generalizes well to out-of-domain datasets. They compared their agreement rates with human judgments to existing literature, showing comparable performance. The reviewers generally agree that this contribution offers a valuable step forward in evaluating conversational turn-taking, supporting the decision to accept.\"}", "{\"comment\": \"Thank you for your response. I have considered the response. While some concerns have remained still (1) the evaluation protocol is highly expensive and (2) biased participants, most concerns are addressed and I agree that the evaluation protocol is promising.\\n\\nTherefore, I would raise the score.\\n\\nAdditionally, I suggest including a detailed description of the training budgets and experimental setup in the main paper, as the judge model plays a crucial role in your protocol. This would further strengthen your work.\\n\\nGood luck with your submission!\"}", "{\"summary\": \"The paper presents an evaluation protocol designed to assess the turn-taking capabilities of spoken dialogue systems. It evaluates the exact timing of these events using a supervised model trained to predict them. The experimental results reveal interesting insights about existing spoken dialogue systems and offer valuable suggestions for their future development.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. This paper proposes a comprehensive evaluation protocol and well-designed metrics to assess the turn-taking capabilities of spoken dialogue systems. The evaluation framework and metrics are thoughtfully developed and provide valuable insights.\\n2. The paper extends the evaluation of turn-taking capabilities of spoken dialogue systems from corpus-level statistics to a more granular assessment of the timing of turn-taking events. This fine-grained approach enables a more accurate reflection of a spoken dialogue system\\u2019s turn-taking capabilities.\\n3. The proposed evaluation metrics provide insights into the limitations of current systems in achieving interactive and natural conversations, highlighting areas for potential improvement.\", \"weaknesses\": \"1. In Metric (E), the judge labels show low consistency with human relevance judgments, indicating that this metric may have limited reliability in assessing the model's ability to handle user interruptions effectively.\\n2. My primary concern is the relatively low agreement between the majority of judge labels and human judgments, with most falling below 80%. This raises questions about the strength of the claim that the proposed metrics maintain high consistency with human decisions.\\n3. GPT-4o was not evaluated.\\n\\nIf my above concerns are resolved, I would consider increasing my rating.\", \"questions\": \"My questions are listed above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer BKtK\", \"comment\": \"Thanks for the valuable comments and suggestions. We address your concerns below.\\n\\n---\\n\\n# 1. GPT-4o was not evaluated.\\n\\nPlease check the general response on evaluating more audio FMs.\\n\\n---\\n\\n# 2. Agreement between the judge labels and human judgments\\n> My primary concern is the relatively low agreement between the majority of judge labels and human judgments, with most falling below 80%. This raises questions about the strength of the claim that the proposed metrics maintain high consistency with human decisions.\\n\\nWe acknowledge the reviewer\\u2019s concerns. We would like to emphasize that turn-taking prediction is indeed a challenging task, as turn-taking behavior can vary widely even among different users. Despite this complexity, our turn-taking model achieves performance comparable to those reported in prior studies, which we believe demonstrates its robustness and practical utility. Our model also maintains moderate alignment with human decisions even on OOD spoken conversation datasets\\u2014an indicator of its generalizability. \\n\\nThat said, we would like to stress that the primary contribution of this work lies in our novel evaluation protocol, which is designed to be adaptable and can integrate any turn-taking prediction model. We agree that enhancing the model\\u2019s accuracy would further improve the protocol's reliability, and explicitly discuss this in Sec. 6 (Limitations) in the main text. \\n\\n---\\n\\n# 3. Low consistency with human judgments for Metric (E)\\n> In Metric (E), the judge labels show low consistency with human relevance judgments, indicating that this metric may have limited reliability in assessing the model's ability to handle user interruptions effectively.\\n\\nThis is indeed a limitation of our work and we have acknowledged this explicitly in section 4.8. Moving forward, we consider this an area for further improvement, where enhancing the judge model could lead to better alignment with human judgments.\"}", "{\"comment\": \"Thank you for considering our response!\"}", "{\"title\": \"Official Comment\", \"comment\": \"Thanks Reviewer Ddi6 for your thorough review! We hope that information in our response helps clarify some of your concerns. We hope that you will take a look and consider updating your score.\"}", "{\"summary\": \"This paper proposes an evaluation protocol to measure the turn-taking capabilities of audio foundation models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The strengths of this paper are as follows:\\n 1. This paper provides an automated turn-taking protocol for audio foundation models\\n 2. The evaluation platform will be open-sourced.\", \"weaknesses\": \"The weaknesses of this paper are as follows:\\n 1. The study aims to measure precise turn-taking, but the thresholds are set to arbitrary values.\\n 2. The participants introduced in Sec 3 seem biased, consisting of the authors and related individuals.\\n 3. The confidence in some evaluations (Fig. 3(b), (e)) appears high, but no explanation is provided.\", \"questions\": \"Here are questions for the authors:\\n - The thresholds in Sec. 4.4-4.8 seem arbitrary. Is there a specific reason for choosing these values? All units appear to represent likelihoods, yet they range from negative ($threshold_3$ = -0.45) to positive values ($threshold_2$ = 0.1).\\n - There are concerns about the reliability of the judge model. Since all results are based on comparisons with this model, is there concrete evidence supporting its credibility? Specifically, the conclusion that Moshi[1] is \\\"too aggressive\\\" lacks persuasiveness if it relies solely on comparisons with the judge model.\\n\\n[1] Defossez et al. Moshi: a speech-text foundation model for real-time dialogue\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"> Furthermore, such models are validated not only against subjective measures like agreement but also through objective benchmarks\\n\\nWe would like to emphasize that we also evaluate using objective metrics as shown in Table 1. Our results clearly demonstrate that our model performs **on par with prior turn-taking prediction models**. Additionally, our model exhibits strong out-of-domain (OOD) generalization. Specifically, it achieves robust **zero-shot performance on two OOD** datasets: (1) non-task-oriented spoken dialogues (Fisher Corpus) and (2) task-oriented spoken dialogues (Columbia Games Corpus).\\n\\nThat said, we would like to stress that the primary contribution of this work lies in our novel evaluation protocol, which is designed to be adaptable and can **integrate any turn-taking prediction model**. Turn-taking prediction is indeed a challenging task and enhancing the model\\u2019s accuracy would further improve the protocol's reliability. We explicitly discuss this in Sec. 6 (Limitations) in the main text.\\n\\n----\\n\\n> Therefore, a good judge model should minimize the need for extensive hyperparameter tuning to achieve consistent results. Alternatively, it might be beneficial to first demonstrate that the model's performance is robust and consistent.\\n\\nIt is important to understand that **number of turn taking events are extremely unbalanced** with **continuation (C)** and **silence (NA)** making up the label set for more than 95% of instances. As a result, it is indeed a **common practice [1] in prior literature** to tune the threshold for each label on the validation set. However, our results show that these threshold not only lead to consistently good agreement with human judgement on an in-domain test but also **generalise well to an out-of-domain test** set showing that our model is robust and achieves consistent results even in out of domain setting.\\n\\n---\\n\\nWe hope that information in our response helps clarify some of your concerns on the robustness and reliability of our approach. We hope that you will take a look and consider updating your score.\\n\\nReferences\\n\\n[1] Ekstedt, Erik, and Gabriel Skantze. Voice activity projection: Self-supervised learning of turn-taking events.\"}", "{\"summary\": \"The paper proposes a new evaluation protocol to assess the spoken dialog system's turn-taking capabilities, i.e., the Moshi and Cascaded model. They use a supervised model as a judge which is trained to predict turn-tasking events in human-human conversation (i.e., Switchboard). The paper presents a comprehensive user study that evaluates the Moshi and Casaded model on their ability to perform turn-taking events, and it finds that they sometimes do not understand when to speak up, can interrupt too aggressively, and rarely backchannel. The main contributions are:\\n1. A new evaluation protocol to assess the spoken dialog system's turn-taking capabilities.\\n2. Some insight about existing spoken dialogue systems through user study.\\n3. Additionally create a test benchmark using Switchboard dataset to evaluate SALMONN, Qwen2-audio-instruct, Qwen-audiochat, Whisper+GPT-4o on their ability to understand and predict turn-taking events.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The originality of the work is commendable. The authors propose a novel evaluation protocol to assess the turn-taking capabilities of spoken dialog systems.\\n2. The paper is well-written and provides sufficient experimental details in the Appendix.\\n3. The authors plan to open source the evaluation platform.\", \"weaknesses\": \"1. The evaluation protocol is highly expensive, as it requires a supervised dataset to train the judge model. This approach is not feasible if we lack a supervised dataset in other languages, such as Chinese.\\n2. The filler word set for backchannel detection is heuristic and may miss some backchannel cases that are not included in the filler word set.\", \"questions\": \"1. The Fisher dataset is a common dataset comparable to Switchboard. What is the performance of the supervised turn-taking prediction model on this dataset?\\n2. How can your evaluation protocol be adapted or applied in scenarios where supervised datasets are not available for other languages, such as Chinese?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer BKtK,\\n\\nThank you for taking the time to provide thoughtful and constructive feedback. We sincerely appreciate your efforts and have tried to address your concerns regarding the agreement between judge labels and human decisions in our general response. We hope the clarifications we provided align with your expectations and address the issues raised comprehensively.\\n\\nAs the discussion period deadline approaches, we kindly ask if you could take a moment to review our response. If you have any additional questions or require further elaboration, we would be grateful for the opportunity to address them promptly.\\n\\nThank you once again for your valuable insights and guidance throughout this process. We deeply appreciate your time and support.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer cJr2,\\n\\nThank you for taking the time to provide thoughtful and constructive feedback. We sincerely appreciate your efforts and have tried to address your concerns in our general response as well as previous responses. We hope the clarifications we provided align with your expectations and address the issues raised comprehensively.\\n\\nAs the discussion period deadline approaches, we kindly ask if you could take a moment to review our response. If you have any additional questions or require further elaboration, we would be grateful for the opportunity to address them promptly.\\n\\nThank you once again for your valuable insights and guidance throughout this process. We deeply appreciate your time and support.\"}", "{\"title\": \"Response to Reviewer WGPU\", \"comment\": \"Thank you for your valuable comments and acknowledging the originality of our work. Weakness 3 and 4 are similar to weakness 1 and 2 and we address remaining concerns below.\\n\\n---\\n\\n# 1. Results on Fisher:\\n\\nPlease check general response.\\n\\n---\\n\\n# 2. Application to other languages:\\n>\\u201cHow can your evaluation protocol be adapted or applied in scenarios where supervised datasets are not available for other languages, such as Chinese?\\u201d\\n\\nPlease check general response on scalability and applicability of our approach.\\n\\n---\\n\\n# 3. Backchannel detection is heuristic\\n> The filler word set for backchannel detection is heuristic. It may ignore some backchannel case not in filler word set.\\n\\nThank you for your insightful observation. We acknowledge that our approach for identifying backchannels relies on heuristics, using common one- and two-word phrases as indicators of backchannels. While this may miss some backchannels, it aligns with standard practices in prior turn-taking models [1, 2], which also use similar heuristic methods. Based on your comment, we explicitly discussed this limitation in Sec. 6 in the main paper.\\n\\n---\\n\\nReferences\\n\\n[1] TurnGPT: a Transformer-based Language Model for Predicting Turn-taking in Spoken Dialog (https://aclanthology.org/2020.findings-emnlp.268/)\\n\\n[2] Turn-taking and Backchannel Prediction with Acoustic and Large Language Model Fusion (https://arxiv.org/abs/2401.14717)\"}", "{\"summary\": \"This paper addresses the challenges of evaluating the turn-taking capabilities of audio foundation models (FMs) in conversational settings.\\nIt defines 6 types of Turn-Taking Events and evaluates the performance of end-to-end speech dialogue models as well as cascaded systems.\\nThrough the results obtained from this study, the authors discovered numerous issues with existing AI dialogue systems in handling turn-taking, such as sometimes failing to intervene in conversations at appropriate times or excessively interrupting others. Furthermore, the authors conducted tests on multiple open-source and closed-source audio foundation models, revealing their limitations in understanding and predicting turn-taking events, and highlighting areas for improvement.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The definition of turn-taking is detailed and clear.\\n\\n2. The evaluation protocol proposed in this paper contributes to better assessing the performance of audio foundation models in dialogues, providing strong support for the development of voice dialogue systems.\\n\\n3. This paper reveals many issues existing in current AI dialogue systems when handling turn-taking, offering valuable references for future research.\", \"weaknesses\": \"1. This study only tested a few open-source and closed-source audio FM models.\\n\\n2. There is a lack of comprehensive performance evaluation and summary.\", \"questions\": \"NA.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer cJr2\", \"comment\": \"Thank you for your insightful comments. We address your concerns below.\\n\\n---\\n\\n# 1. Thresholds for proposed metrics\\n> The thresholds in Sec. 4.4-4.8 seem arbitrary. Is there a specific reason for choosing these values?\\n\\nWe apologize if this was not sufficiently clear. As mentioned in Sec 4.3 (lines 307-309) on page 6, we carefully tuned the thresholds for all proposed metrics using an in-domain validation set to maximize the agreement between the judge model's labels and human judgments. We present the agreement between judge labels and human judgments on the in-domain validation set in Table 6 in the Appendix. We hope this addresses your concern.\\n\\n---\\n\\n# 2. Reliability of judge model\\n>There are concerns about the reliability of the judge model. Since all results are based on comparisons with this model, is there concrete evidence supporting its credibility? \\n\\nWe agree that establishing the reliability of the judge model is essential, given that our results depend on using the judge label. To address this, we outline our validation approach in Sec. 4.3 (lines 302-312) on page 6, inspired by prior works [1, 2] showing high consistency between LLM predictions and human relevance judgments.\\n\\nIn our study, we assess judge model consistency with human judgments by analyzing instances in a human-human conversation dataset corresponding to each metric. For example, Metric A considers scenarios where a listener decides whether to speak up during a speaker\\u2019s pause. Detailed explanations for all metrics are in sections 4.4-4.8 (e.g., lines 322-323 for Metric A, lines 374-375 for Metric B, lines 399-401 for Metric C, lines 414-415 for Metric D, and lines 448-450 for Metric E). As shown in Figure 3 on page 7, our judge labels have good agreement with human judgments on both in-domain (blue) and out-of-domain (green) test sets for most metrics, supporting the model's credibility. We hope this clarifies our approach.\\n\\n---\\n\\n# 3. Big Confidence Intervals for Proposed Metrics\\n>The confidence in some evaluations (Fig. 3(b), (e)) appears high, but no explanation is provided.\\n\\nThank you for pointing out this important observation. We acknowledge the reviewer\\u2019s concern regarding the high confidence intervals for Metric (B) in Fig. 3(b) and Metric (E) in Fig. 3(e). We hypothesize that this result from a small sample size, as shown in Table 2, where occurrences of the AI system backchanneling (0.01% for both Moshi and Cascaded systems) and user interruptions (0.2% for Moshi and 0.1% for the Cascaded system) are indeed very rare in human-AI conversations. We updated the paper (lines 387-389 in Sec. 4.5 and lines 458-460 in Sec. 4.8) to clearly address and discuss this. \\n\\n---\\n\\n# 4. Biased participants\\n>The participants introduced in Sec 3 seem biased, consisting of the authors and related individuals.\\n\\nThank you for raising this important concern. We acknowledge the potential for bias in our user study, as participants were primarily the authors and their research colleagues. Similar \\u201clab experiments\\u201d are common in prior work [3], enabling focused testing of specific functionalities (eg. prompting participants to interrupt the AI system) and facilitating deeper insights through discussions about their subjective experiences. However, we agree that this controlled setup may introduce bias. We hope our evaluation protocol inspires future studies to conduct larger-scale evaluations with more diverse participants. We explicitly discuss and acknowledge this limitation in Sec. 6 and A.2 in the paper.\\n\\n---\\n\\nReferences\\n\\n[1] AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension (https://arxiv.org/abs/2402.07729)\\n\\n[2] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena (https://arxiv.org/pdf/2306.05685)\\n\\n[3] Survey on evaluation methods for dialogue systems [https://link.springer.com/article/10.1007/s10462-020-09866-x]\"}", "{\"title\": \"General Response regarding the agreement between judge labels and human decisions\", \"comment\": \"We sincerely thank the reviewers for their thoughtful feedback. Some reviewers (@R cJr2, @R BKtK, @R mo78). raised concerns about the relatively low agreement (<80%) between judge labels and human judgments. We would like to clarify that, with the exception of Metric E, the judge labels demonstrate a consistently high level of agreement (>70%, as shown in Figure 3) with human decisions on the in-domain test set. Additionally, prior works have reported **similar levels of agreement when using LLMs as judges, with around 70% [1] and 66% [2] agreements**. Turn-taking, by nature, is a subjective task influenced by individual user behavior, making complete alignment neither expected nor feasible. Moreover, our judge labels also show moderate agreement with human decisions even on an out-of-domain test set.\\n \\nTo further demonstrate the robustness of our trained turn-taking model, we evaluated it objectively on its ability to predict upcoming turn-taking events in human-human conversations. This evaluation was conducted on both an in-domain test set (Switchboard) and two out-of-domain datasets (Columbia Games and Fisher). Our results indicate that the model not only performs on par with prior works but also generalizes strongly to OOD spoken dialogue corpora in zero shot manner, achieving comparable performance across datasets.\\n \\nThese findings suggest that the model can reliably evaluate the precise timing of turn-taking decisions made by AI dialogue systems. Its decent consistency with human decisions indicates that it serves as an effective proxy for human judgment, addressing concerns regarding its reliability and utility. We will incorporate these discussions into the paper.\\n \\nReferences\\n\\n[1] Yiang et al. AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension (https://arxiv.org/pdf/2402.07729 )\\n\\n[2] Kolchinski et al. Approximating Human Judgment of Generated Image Quality. (https://arxiv.org/pdf/1912.12121 )\"}", "{\"comment\": \"Dear Reviewer mo78,\\n\\nThank you for taking the time to provide thoughtful and constructive feedback. We sincerely appreciate your efforts and have tried to address your concerns regarding the agreement between judge labels and human decisions in our general response. We would like to emphasize that prior works have reported similar levels of agreement when using LLMs as judges, with around 70% [1] and 66% [2] agreements. We hope the clarifications we provided align with your expectations and address the issues raised comprehensively.\\n\\nAs the discussion period deadline approaches, we kindly ask if you could take a moment to review our response. If you have any additional questions or require further elaboration, we would be grateful for the opportunity to address them promptly.\\n\\nThank you once again for your valuable insights and guidance throughout this process. We deeply appreciate your time and support.\\n\\nReferences\\n\\n[1] Yiang et al. AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension (https://arxiv.org/pdf/2402.07729 )\\n\\n[2] Kolchinski et al. Approximating Human Judgment of Generated Image Quality. (https://arxiv.org/pdf/1912.12121 )\"}", "{\"title\": \"Update\", \"comment\": \"**Requirement for threshold tuning for judge model**\\n> it might be beneficial to first demonstrate that the model's performance is robust and consistent across different hyperparameter settings.\\n\\nIn response to your comment, we surveyed prior literature to better quantify the impact of threshold tuning on our turn-taking model\\u2019s performance and identified two main approaches:\\n\\n1. Sensitivity Analysis [1,2,3]: Prior works have experimented with varying key hyperparameters systematically over a range and analyzed validation performance as a function of hyperparameters. \\nInspired by this, we vary thresholds from -0.5 to 0.5 for metrics where judge labels are computed using the difference of 2 likelihoods (all metrics except metric B in Eq. 3) and from 0 to 1 for others (i.e., for metric B). The thresholds are incremented in steps of 0.01. We then calculated the Margin of Error (ME) [4] for 95% Confidence Intervals to quantify uncertainty in agreement between judge labels and human relevance judgments due to sensitivity to threshold values. \\nThe margin of error (ME) is calculated as:\\n\\n$ME = z \\\\cdot \\\\frac{\\\\sigma}{\\\\sqrt{n}}$\", \"where\": \"- \\\\(z = 1.96\\\\): Z-score for a 95% confidence level.\\n- \\\\($\\\\sigma$\\\\): Standard deviation \\n- \\\\(n\\\\): Sample size (i.e. Size of threshold range over which performance is computed =100 for our experiments)\\n\\nThe observed margin of errors, along with the agreement of the judge label with human decisions on the in-domain validation set, are shown below.\\n\\n| **Turn-Taking Metric** | **Agreement with Judge Label** | **Margin of Error (ME)** |\\n|--------------------------------------------------|------------------------------|-------------------|\\n| **Metric A and D: When to speak up?** | | |\\n| When listener decides to speak up | 81.2 | \\u00b16.43 |\\n| When listener lets speaker continue | 75.5 | \\u00b17.46 |\\n| **Metric B: When to backchannel?** | | |\\n| When listener backchannels | 71.6 | \\u00b15.75 |\\n| When listener does not backchannel | 74.8 | \\u00b13.49 |\\n| **Metric C: When to interrupt?** | | |\\n| When listener interrupts | 76.6 | \\u00b15.13 |\\n| When listener does not interrupt | 72.8 | \\u00b11.80 |\\n| **Metric E: Handle user interruptions?** | | |\\n| When interrupting speaker takes the floor | 61.3 | \\u00b15.90 |\\n| When interrupting speaker does not take the floor| 57.0 | \\u00b15.30 |\\n\\nGenerally, a margin of error of below 5% is considered excellent for high-accuracy needs, and less than 10% is acceptable for most studies. Our analysis shows that the agreement with human judgements does not undergo huge fluctuations with changes in threshold (i.e., margin of error is always less than 10%), and hence, our approach does not require extensive hyperparameter tuning. Based on your feedback, we will add these results to Table 6 in the paper.\\n\\n2. Validation on Multiple Datasets [5]: Prior works have argued the reliability of their model by showing consistent performance across datasets, demonstrating that the model generalizes well without dataset-specific tuning. In Figure 3, our judge labels achieve good agreement with human judgment even on the OOD spoken dialog dataset, i.e., Columbia Games Corpus, without a threshold being specifically tuned for this dataset. This result shows that our model achieves consistent performance without dataset-specific tuning.\\n\\nOur new analysis provides evidence of the robustness and consistency of our model. We hope that these findings address your concerns regarding the reliability of our model, and we will incorporate these results into the paper. \\n\\n---\\n\\nReferences\\n\\n[1] Novello et al. Goal-Oriented Sensitivity Analysis of Hyperparameters in Deep Learning.\\n\\n[2] Razavi et al. The Future of Sensitivity Analysis: An essential discipline for systems modeling and policy support.\\n\\n[3] Sadeghi et al. A Review of Global Sensitivity Analysis Methods and a comparative case study on Digit Classification\\n\\n[4] Tanur et al. Margin of error.\\n\\n[5] Rijn et al. Hyperparameter Importance Across Datasets.\"}", "{\"title\": \"General Response\", \"comment\": \"# 3. Scalability and Applicability of Approach\\n\\nReviewers (@R WGPU, @R mo78) mentioned that the evaluation protocol needs a supervised dataset to train the judge model, limiting its scalability and applicability. We agree that this is a limitation of our approach and updated the draft to explicitly acknowledge it. While our current approach indeed relies on a supervised dataset to train a turn-taking model as the judge, this model can be trained on any spoken conversation dataset containing speaker turns, transcripts, and timestamps, which are often available even for non-English languages. As noted in Appendix A.1, prior work [2] successfully trained turn-taking models on Chinese (Mandarin) and Japanese using publicly available datasets [3, 4]. Interestingly, multilingual models trained on English, Chinese, and Japanese perform comparably to monolingual models despite the diverse turn-taking behaviors of these languages.\\n\\nTo address scenarios without supervised datasets, we propose a low-cost solution: collecting a small spoken dataset for the target language and generating annotations through human efforts or using tools like PyAnnote (speaker diarization) and Whisper (ASR). We can then train multilingual turn-taking models that leverage high-resource language data to improve performance on low-resource languages. This approach is far more cost-effective than collecting human relevance judgments for every turn-taking event. In response to your feedback, we added a discussion in the paper (Limitations in Sec. 6, A.7) on adapting our evaluation to non-English and low-resource languages.\\n\\n---\", \"references\": \"[2] Multilingual Turn-taking Prediction Using Voice Activity Projection (https://aclanthology.org/2024.lrec-main.1036.pdf ) \\n\\n[3] HKUST Mandarin Telephone Speech (https://catalog.ldc.upenn.edu/LDC2005S15 ) \\n\\n[4] Japanese Travel Agency Task Dialogues (https://aclanthology.org/2022.lrec-1.619/)\"}", "{\"title\": \"Official comment by authors\", \"comment\": \"Thanks Reviewer mo78 for your thorough review! We hope that information in our response helps clarify some of your concerns. We hope that you will take a look and consider updating your score.\"}", "{\"title\": \"Official Comment\", \"comment\": \"Thanks Reviewer BKtK for your thorough review! We hope that information in our response helps clarify some of your concerns. We hope that you will take a look and consider updating your score.\"}", "{\"title\": \"Official Comment\", \"comment\": \"Thanks Reviewer WGPU for your thorough review! We hope that information in our response helps clarify some of your concerns. We hope that you will take a look and consider updating your score.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for your response. The additional experiments on GPT-4o reveal interesting insights about the model, and the direction of this work is promising. However, since my primary concern has not been addressed, I'll keep my current rating.\"}", "{\"comment\": \"Thank you for thoughtfully considering our response and for your constructive feedback. We are grateful for your positive recommendation and the updated score.\\n\\nWe fully agree that providing a detailed description of the training budgets and experimental setup of the judge model will enhance the clarity of our work. We will ensure this is thoroughly addressed in the main paper!\"}" ] }
2d734s2WDb
VIBEID: A STRUCTURAL VIBRATION-BASED SOFT BIOMETRIC DATASET FOR HUMAN GAIT RECOGNITION
[ "Mainak Chakraborty", "Chandan", "Sahil Anchal", "Bodhibrata Mukhopadhyay", "Subrat Kar" ]
We present VIBeID, a dataset and benchmark designed for advancing non-invasive human gait recognition using structural vibration. Structural vibrations, produced by the rhythmic impact of the toe and heel on the ground, are distinct and can be used as a privacy-preserving and non-cooperative soft-biometric modality. We curated the largest dataset VIBeID consists of footfall generated structural vibrations of 100 subjects. Existing datasets in this field typically include around ten subjects and lack comprehensive exploration of domain adaptation. To thoroughly explore the domain adaptation aspect of this biometric approach, we recorded vibration data on three distinct floor types (wooden, carpet, and cement) and at three distances from the geophone sensor (1.5 m, 2.5 m, and 4.0 m), involving 40 and 30 subjects, respectively. Additionally, we benchmarked our dataset against video recordings from 15 individuals in an outdoor setting. Beyond providing 88 hours of raw vibration data, VIBeID establishes a comprehensive benchmark for a) person identification: where the aim is to recognize individuals through their unique structural vibrations, b) domain adaptation: assessing model performance across different walking surfaces and sensor positions, and c) multi-modal comparison: comparing vibration-based and vision-based identification methods. Our experiments, using both machine learning and deep learning approaches, establish a baseline for future research in this field, and introduce a large-scale dataset for the broader machine learning community.
[ "Structural vibrations", "Gait Recognition", "Deep learning", "Machine learning" ]
Reject
https://openreview.net/pdf?id=2d734s2WDb
https://openreview.net/forum?id=2d734s2WDb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCbwU4kI1O", "unXYXqZX2O", "uLAMjtKUJB", "madD1kSDU4", "mYk83av8c9", "h7tPjaMnTC", "dNwORrREii", "bqZFRiMv61", "axU1Wm1oQL", "Xqj0379w4g", "V57Q4L2lK6", "PRJoestKFx", "OiyTRWmeEk", "MxxGk5Nx5O", "KCBaBEzCi3", "Euzz79HG27", "EBHb68tqqk", "DTjnGViavl", "BiaYMYnr5q", "AqckLmUp2o", "9jfb8wJX0e", "7GgNX6CJ30", "7657jyEjDg", "5ujRiJj4sL", "4Pyc6D3Jzb", "3WwzPv4K4z", "1twJ5MOplk", "0Y2MeOGeD6" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732209661350, 1732794101755, 1730493179810, 1732069554679, 1732502020960, 1730983797985, 1732502040078, 1732070016393, 1732512717877, 1733983023043, 1733204957462, 1730519738398, 1732502049481, 1732794107461, 1732502178525, 1732070687595, 1732682275939, 1733288020723, 1730220243542, 1737524028147, 1733207826090, 1732561657089, 1732704735227, 1732359568986, 1732679347863, 1732691329640, 1732519224604, 1732071175657 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_Sqmc" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_LqAV" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_Sqmc" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_g5tV" ], [ "ICLR.cc/2025/Conference/Submission10133/Area_Chair_c4uP" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_g5tV" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_g5tV" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_35Gq" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_35Gq" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_LqAV" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Reviewer_35Gq" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ], [ "ICLR.cc/2025/Conference/Submission10133/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thanks for the rebuttal. However, I am still concerned about the technical contributions and applicability of this work, especially comparing against vision based work. The authors argued that \\\"Geophone-based gait recognition does not require a direct line-of-sight or physical contact, immune to lighting condition, low-cost, and is less computationally expensive and eco-friendly that using vision-based systems. \\\", I would expect some experiments and results to demonstrate Geophone can do what vision cannot do -- this would be particularly important.\\n\\nAnd also a very minor point. I can see the efforts of changing the citation format. Unfortunately current format is still wrong. Although this minor point would not affect my final recommendation regarding this paper at all, I would like to remind that, in the official latex template of ICLR, it already said using \\\\citep{}, rather than manually adding bracket ().\"}", "{\"title\": \"Thank you !!! Thank you again for your valuable feedback.\", \"comment\": \"Thank you again for your valuable feedback. We hope our edits addressed your comments. We would appreciate any additional feedback, comments or suggestions you might have to further improve our draft before the end of the rebuttal period.\"}", "{\"summary\": \"The presented work introduces a new biometric dataset for human gait recognition based on structural vibrations. The dataset is applied to various tasks such as person identification, domain adaptation, and multi-modal scenarios combining vibration and vision-based identification methods. Experimental analysis includes verification of machine-learning and deep learning approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The description of the data collection protocol is clearly written with sufficient details and clear explanations of the research motivation\\n2. The introduction and related work sections contain important background information justifying the motivation for the introduced dataset.\", \"weaknesses\": \"1. Is there any requirement for using a specific sensor type during the inference if trained on the presented dataset? I'm wondering about the practical implication of the proposed solution.\\n2. The work indicates that the concurrent activity was not taken into account, however it's very possible to happen in real-life scenarios. Would the presented dataset be sufficient for handling such scenarios? How should one prepare for additional noise introduced in this way? \\n3. It's not clear how filtering of potential noise was performed? Was the assumption that the data collection is performed in an isolated environment without any noise? You mentioned that there was environmental noise present, but how do you quantify its presence? If the assumption is that there is minimal or no noise, it again raises question around the practicality of the solution.\\n4. It's not clear how the data was split for training and testing? Were the same subjects present in both subsets or did you ensure no overlap?\\n5. One of the motivation behind introducing a new dataset is that other datasets contains a limited number of subjects. It's mentioned that there are 100 subjects in the proposed dataset but then only 30 and 40 subjects are used for floor types and distance measurements. Why not all 100 subjects were used for all of the scenarios?\", \"questions\": \"1. What do you mean by events in Table 2?\\n2. Line 407 - where is table 10? or did you mean 1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We sincerely appreciate your valuable feedback and insightful suggestions.\", \"Sample size: We appreciate the your observation regarding the sample size, especially in comparison to popular vision-based gait recognition datasets which often involve larger participant groups. However, we would like to emphasize that VibeID represents a first-of-its-kind experiment and constitutes the largest dataset for gait recognition using geophone sensors\\u2014practically 10 times larger than existing datasets in this domain (Pan et al., 2017; Mirshekari et al., 2018; Anchal et al., 2020; Dong and Noh, 2023; Xu et al., 2024). While vision-based datasets are the result of decades of research in the field, structural vibration as a modality for gait recognition is still in its early stages. Our work aims to bridge this gap and highlights the feasibility of geophone sensors as a promising modality for gait recognition. We believe that our study lays a crucial foundation for future research and underscores the importance of exploring this emerging domain.\", \"Technical contents: In response, we have included a person identification use case in addition to the multi-class classification task initially proposed (as detailed in Section 4.1).Our primary objective in this study is to establish foundational baselines using traditional signal processing, machine learning, and deep learning methods. These baselines are intended to serve as a starting point for the broader machine learning community to build upon this dataset and evaluate their methods.\", \"Applicability: We appreciate the concern expressed regarding the limited application of ambient sensors to indoor environments. While wearable sensors offer certain advantages, they can be intrusive and uncomfortable for continuous monitoring. Structural vibration sensors, such as geophones, offer a non-intrusive alternative for gait analysis. They can be easily be installed in various settings, including homes, and hospitals. For instance, in healthcare settings, geophones can be installed in patient rooms to monitor gait patterns remotely, without requiring patients to wear any additional devices. The small distance monitoring limit is not due to any inherent restriction of the sensor's range but instead to the design of the preamplifier circuit and the room sizes used in the study ( Supplementary section 6.1). To fully harness the potential of ambient sensor technology like geophones, it's essential to continue exploring innovative approaches and addressing current limitations like lack of large-scale datasets.\", \"Experiment setup: We acknowledge the importance of addressing concurrent human activities to enhance the usability of our study. In response, we have proposed a method to isolate footstep events in the presence of concurrent activities, which is detailed in the supplementary section 6.4. Specifically, we have demonstrated how our event detection module can be used to distinguish the person-of-interest from other activities (human or non-human). To evaluate scenarios involving concurrent human and non-human activities, we conducted additional experiments that yielded an additional 30 minutes of data. A geophone signal is generally less dependent on perspective compared to vision-based identification systems. Unlike vision systems that need specific angles to accurately capture features, geophones detect vibrations transmitted through the floor. This means they can reliably capture gait information without needing a direct line of sight or a precise orientation toward the individual. This characteristic allows geophones to operate effectively across a range of placements, as they pick up on the unique patterns of movement through floor vibrations rather than visual cues.\", \"Gait event: As highlighted in in Section 3.5, we have used a pre-existing validated toolkit for gait event detection in both structural vibration and Gait Energy Image (GEIs). Both the toolkit has been tested and validated in previous studies (Song et al.2022, Anchal et al. 2020).\", \"Dataset details : Thank you for your feedback, we have re-written the entire section 4.1, and highlighted the splits. Additionally, we have added Table 2, which provides more detailed information about the dataset's composition and subject split for person identification.\", \"Table clarity: We have re-written the entire section, and highlighted the Identification accuracy with train-test splits and explained the results in detail. As shown in the Table 7, GEIs are somewhat vulnerable to view-points, whereas structural vibration is not. By leveraging the complementary strengths of both modalities, we hope to develop more reliable and efficient gait recognition systems in the future.\", \"Minor - Yes, A2, A3, and A4 are subsets of A1. We have now clearly stated in the main text to avoid any confusion.\", \"Minor - citation: We have corrected the manuscript.\"]}", "{\"title\": \"Follow-Up on Reviewer Feedback\", \"comment\": \"Thank you for your valuable feedback. We have incorporated your comments and hope our edits address them effectively. If you have further suggestions or clarifications, we would greatly appreciate your input to refine our draft before the rebuttal deadline Nov 27 '24 .\"}", "{\"summary\": \"This study introduces a benchmark for gait recognition utilizing a novel structural vibration sensing technique, the geophone. and the new benchmark comprised of in total 100 subjects, collected under indoor or outdoor settings. It invesitgated whether the novel sensing modality can encode identity related information, and what is the limitations or sensitivity of this technique. Although this work addresses an interesting topic, it may not yet provide the technical depth or extensive experimental validation expected for broader applicability. There are also several concerns regarding the experimental settings and presentation clarity.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents a well-justified study, especially it clearly identifies current research gap for person identification. Overall it reads very well.\", \"weaknesses\": [\"[Sample size] Probably for the gait recognition task, we are more interest in how many subjects collected. The subject size, compared with current large dataset, especially the GaitSet, is still not that comparable.\", \"[Technical contents] I am concerned that the technical content is somewhat limited, even for a benchmark paper, for ICLR. Please consider adding more experiments and tasks to thoroughly validate the usability of this dataset, such as Re-ID, gait event detection, and generalization across subpopulations... I encourage the authors to refer to established works like GaitSet for inspiration. Gait data is highly complex, influenced by factors such as age, gender, emotion, and health conditions. Reflecting on these factors in your experiments would enhance the depth of the study.\", \"[Applicability] I am also concerned that this kind of ambient sensor can only be applied indoor or with relatively small distances, which might limit its application, compared to wearable data?\", \"[Experiment setup] In the experiment settings, I noticed that there were no concurrent human activity when recording the data, this may be another issue that limits the usability of this study. Additionally, will the data be sensitive to the perspective of the sensor, as I know it is quite sensitive for vision based person identification.\", \"[Gait event] gait event detection, is this be validated in terms of accuracy?\", \"[Dataset details] Further elaboration on the dataset\\u2019s composition and subject split for person identification would be valuable, particularly for readers unfamiliar with this topic.\", \"[Table clarity] Table 5 is not clearly illustrated, what is the performance comparison between structural vibration and camera? Very limited information is given in both the table and the associated texts. Expanding on this comparison would help readers understand the relative strengths of each technique.\", \"[Minor - clarification] I assume the subjects of A2, A3, A4 are part of A1, correct? Please clarify this.\", \"[Minor - citation] When citing a work which actually does not play any role in your sentence, please use (X et al., XXXX), rather than X et al. (XXXX).\"], \"questions\": \"I would appreciate if the authors could address the concerns I raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-Up on Reviewer Feedback\", \"comment\": \"Thank you for your valuable feedback. We have incorporated your comments and hope our edits address them effectively. If you have further suggestions or clarifications, we would greatly appreciate your input to refine our draft before the rebuttal deadline Nov 27 '24 .\"}", "{\"comment\": [\"We sincerely appreciate your thoughtful feedback.\", \"Comment 1 : In response to your comment, we have conducted additional experiments with concurrent human and non-human activities. Statistical tests (t and p-value) suggest no significant difference between noisy and everyday environments. We acknowledge the challenges of acquiring noise-free signals in real-world scenarios; our research demonstrates the potential of geophones for gait recognition, even in diverse environments. By testing across various flooring types, sensor placements, and outdoor settings, we've shown that geophones can effectively detect and monitor individuals in sparsely populated areas.\", \"Comment 2 : In outdoor scenarios, directly controlling noise is not possible. Instead, we used an unsupervised event detection module to isolate footstep event from background noise (see Supplementary Section 6.3). While outdoor environments are inherently challenging due to unpredictable noise levels, our methodology is designed to adapt and perform signal extraction under such conditions.\", \"Comment 3 : We agree that cameras have advantages in specific scenarios; however, exploring alternative modalities like vibration sensing is essential to expand application possibilities. Geophones capture vibrations transmitted through the floor, enabling them to work effectively without requiring a direct line of sight or precise alignment, as cameras do. Additionally, Geophone-based gait recognition does not require physical contact, is immune to lighting conditions, is low-cost, and is less computationally expensive and eco-friendly than using vision-based systems.\", \"Comment 4: You've raised an important point regarding the distinction between training and testing sets in human identification scenarios. In our initial experiments, we focused on multi-class classification, where each individual is treated as a separate class. This methodology, commonly adopted in prior research (Pan et al., 2017; Mirshekari et al., 2018; Anchal et al., 2020; Dong and Noh, 2023; Xu et al., 2024), enables us to evaluate the model's capacity to discriminate between different subjects. To address your concern, we have conducted additional experiments, as detailed in Section 4.2, Tables 4 and 6. These results demonstrate the identification accuracy of our model when evaluated on distinct training and testing sets, where the individuals in the test set are not present in the training set. This approach aligns more closely with the typical human identification paradigm.\\\\\\\\\", \"Comment 5 : The thickness of the carpet used is 9mm, this information is updated in the manuscript. We agree that the thickness of carpets can influence the ground vibrations and, consequently, the geophone's response. To address this, we conducted experiments on three distinct floor types: carpet (soft), wood (medium), and tile (hard). We believe that most real-world carpet variations would fall within the spectrum of these three categories. Our primary goal was to assess the consistency of gait pattern recognition across different surfaces, rather than focusing solely on carpet variations. To further characterize the environmental factors affecting the geophone's signal, we included noise measurements for each room in Supplementary Section 6.1. To ensure uniformity in all the scenarios, we limited our experimental design has a range of 4 meters for both indoor and outdoor data collection.\", \"This 4-meter limit is not due to any inherent restriction of the sensor\\u2019s range but instead to the design of the preamplifier circuit and the room sizes used in the study (Supplementary A-6.1). In practice, our configuration has an effective sensing radius of up to 6-10 meters indoors and up to 10-15 meters outdoors, depending on the level of background noise.\", \"Ethics Review: Structural vibration signals are one-dimensional signals that capture essential information about human gait, making them inherently more privacy-preserving than other methods. This does not reveal any sensitive details like facial recognition or fingerprints. In our study, we took additional steps to enhance privacy by anonymizing the data at the collection point, ensuring no personally identifiable information could be linked to the vibration data. Sharing this dataset openly is intended to contribute to the field and encourage a broader discussion on the privacy safeguards necessary when using such technologies.\"]}", "{\"comment\": \"1. How to deal with the noise caused by group of people walking together?\\n2. Vision-based models are also eco-friendly. For instance, the pose-based method (GaitTR) could be smaller than 1M and the silhouette-based (GaitGL) methods are less than 10M, but the ResNet-18 is comparatively larger.\\n3. Exploring different modalities as input is valuable; however, it is unclear whether the geophones are widely used in daily life or data collection. Consequently, the potential for recognizing gait using this modality appears to be limited.\"}", "{\"metareview\": \"The paper introduces VIBeID, a comprehensive dataset for human gait recognition using structural vibration, addressing the limitations of prior small-scale datasets. VIBeID includes over 88 hours of vibration data from 100 individuals, collected across different floor types (wood, carpet, cement) and distances (1.5m, 2.5m, 4.0m) from a geophone sensor, with additional multi-modal data combining vibrations and video recordings. It establishes benchmarks for person identification, domain adaptation, and multi-modal comparison, showcasing the effectiveness of machine learning and deep learning methods, such as ResNet-18 and ResNet-50, for identifying individuals based on their unique walking-induced structural vibrations. The dataset demonstrates structural vibration as a non-invasive, privacy-preserving biometric modality suitable for diverse environments, with significant potential for applications in security, healthcare, and smart buildings. Future work aims to expand the dataset and explore new applications, further solidifying structural vibration's role in soft biometrics.\\n\\nThe paper is well written and the dataset is composed of quite new modality for human gait recognition. However, some concerns are still not addressed:\\n- Dataset Scale: Recent gait recognition datasets using emerging sensors like Lidar typically involve thousands of subjects, which makes the size of VIBEID appear limited in comparison. The feasibility of geophone-based gait recognition has already been demonstrated by pioneering works. The small scale of dataset may limit the usage in ML community considering that this data modality is new.\\n- Feasibility in the real world: The current approach still struggles with simultaneously identifying multiple target subjects. This limitation significantly reduces the system\\u2019s practical applicability in real-world scenarios. I do believe single-person also has a value yet the audience may not be ML community (more in ubiquitous computing / mobile computing).\\n\\nTherefore, I think the paper has novelty but may not be ready for an ICLR publication.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal, the authors mainly answer the questions regarding to the details of the datasets: real-world applications, data scale, and clarity. Since it is a dataset paper, not many new experiments are required.\\n\\nAfter rebuttal, though the average score is 5.75, three reviewers still have concerns on this submission, e.g., the range of applications and the effectiveness compared to the vision-based method, the selection of the person of interest, and the applicability of this work. I think it has not reached the bar of ICLR.\"}", "{\"comment\": \"Thanks for your follow-up experiments and explanation. Although I still have concerns about the range of applications and the effectiveness compared to the vision-based method, introducing a new modality is helpful. I would like to increase the score by 3.\"}", "{\"summary\": \"The authors built a dataset with 100 people using geophone to do multiple experiments based on the human's structural vibration. It consists of multiple covariances including floor types, and distances. The work tries to find a connection between the vibration and identities and builds multiple benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is clear and easy to follow, and the tables and figures are easy to understand\\nThe proposed question is interesting, trying to build the connection between identity and walking vibration, a fine detail when a human is walking. And authors collect a relatively large dataset in multiple conditions.\", \"weaknesses\": \"In real-life applications, it is hard to find a good condition to use a geophone to capture a human's gait with little noise.\\n\\nHow to control the noise in outdoor cases.\\n\\nCompared to a camera, the vibration-based method is restricted by the sensor and distance. \\n\\nThe protocol is not clear. How is the train and test set defined? For human identification, the identities appearing in the training set will not be present in the test set. It seems these experiments do not follow this setting\", \"questions\": \"Although the authors define the floor in different classes, the hardness might be a more reliable way to classify. Since different carpets' thicknesses may have different responses. And what is the distance range for the geophone, since 4 m is not far for camera sensor\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"Involving human data collection\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-Up on Reviewer Feedback\", \"comment\": \"Thank you for your valuable feedback. We have incorporated your comments and hope our edits address them effectively. If you have further suggestions or clarifications, we would greatly appreciate your input to refine our draft before the rebuttal deadline Nov 27 '24 .\"}", "{\"title\": \"Thank you !!! Thank you again for your valuable feedback.\", \"comment\": \"Thank you again for your valuable feedback. We hope our edits addressed your comments. We would appreciate any additional feedback, comments or suggestions you might have to further improve our draft before the end of the rebuttal period.\"}", "{\"title\": \"Follow-Up on Revisions\", \"comment\": \"Thank you again for your valuable feedback. We hope our edits addressed your comments. We would appreciate any additional feedback to further improve our draft before the end of the rebuttal period.\"}", "{\"comment\": [\"We sincerely appreciate your suggestions.\", \"1: Thank you for your question. Currently, our model has been trained and tested exclusively on data from geophone sensors, the core principles and techniques we've developed can be applied to a variety of geophone types. Moreover, it can be used simultaneously along with vision-based systems. Merging the modalities can improve overall efficacy. Additionally, it can be used as an alternative to pressure-mats, or wearables, as it is non-intrusive and does not require direct physical contact.\", \"Additionally, for the broader machine learning community, it serves as a valuable resource for validating new methods, offering a distinct set of 100 distinct classes, containing structural vibration signals.\\\\\\\\\", \"2: We acknowledge the importance of addressing concurrent human activities to enhance the usability of our study. In response, we have proposed a method to isolate footstep events in the presence of concurrent activities, which is detailed in the supplementary section 6.4. Specifically, we have demonstrated how our event detection module can be used to distinguish the person-of-interest from other activities, whether human or non-human. Our structural vibration dataset is highly valuable as it offers labelled data for identifying the person-of-interest. We encourage the community to build upon our dataset and develop innovative solutions for addressing the challenges of gait recognition in real-world settings.\", \"3: As highlighted in Supplementary Section 6.3, we have used an unsupervised approach to address the challenge of filtering potential noise from the dataset. Specifically, we use a Gaussian Mixture Model (GMM), which is an unsupervised clustering technique, to differentiate footstep events from background noise. The GMM is effective in identifying the presence of distinct event patterns (footsteps) while grouping other irregular noise patterns into separate clusters, thus isolating the relevant signals from the environmental noise. As GMM is an unsupervised method, it can be applied across different environments without relying on predefined thresholds or assumptions about the type or amount of noise present. This allows for flexible operation in various locations without the need for extensive noise profiling, making it suitable for real-time, adaptable applications.\", \"4: You've raised an important point about the distinction between training and testing sets in multi-class classification and human identification. Initially, we used a multi-class classification approach, treating each individual as a separate class. This method, commonly used in previous research (Pan et al., 2017; Mirshekari et al., 2018; Anchal et al., 2020; Dong and Noh, 2023; Xu et al., 2024), allows us to assess the model's ability to differentiate between subjects. To address your concern, we conducted additional experiments (Section 4.2, Tables 4 and 6). These results demonstrate the model's identification accuracy on distinct training and testing sets, where the test set contains unseen individuals, aligning more closely with standard human identification practices.\", \"5: The proposed dataset contains data from 100 subjects, not all 100 subjects were used for every scenario due to challenges in maintaining consistent data collection. The data was recorded over a span of five years, and the outbreak of COVID-19 interrupted our data collection process, preventing us from gathering continuous data for all 100 individuals. As a result, for certain sub-tasks such as floor types and distance measurements, we utilized the available subjects (30 and 40 individuals, respectively). We would argue that using 30 and 40 subjects is sufficient for an initial understanding of the effects of different domains on structural vibration signals. Moreover, we believe that the introduction of this dataset, is an important step toward addressing the data scarcity in this field. By open-sourcing the dataset and the code, our goal is to encourage further contributions and foster research in this domain.\", \"6: Thank you for your question. In Table 2, the term \\\"events\\\" refers to distinct footstep events, which correspond to impacts made by the human foot on the floor. These impacts are unique and can be distinguished from any background noise, as illustrated in Figure 5. We use consecutive footstep events for training and testing. For example, \\\"2 events\\\" indicates two consecutive impacts on the floor, corresponding to two steps, while ``5 events'' refers to five consecutive steps. Similarly, \\\"7 events\\\" and \\\"10 events\\\" represent seven and ten steps, respectively. These events are used after pre-processing to train the model, helping to capture the dynamics of human gait.\", \"7: Thank you for pointing this out. It appears to be a typographical error, and we meant to refer to Table 2, not Table 10. We have corrected this mistake in the revised manuscript.\"]}", "{\"title\": \"Thank you !!!\", \"comment\": \"Thank you for your constructive feedback and for raising your score. We agree that looking into a larger and more diverse dataset and using different types of sensors are great next steps, have emphasized this in the Limitation and Conclusions section. Our dataset is the first of its kind for studying gait recognition with geophone sensors. We are dedicated to continually improving our dataset and making our solution more practical for real-world use. We hope the machine learning community will use our dataset to develop and test new algorithms. Thank you once again for your valuable guidance and support.\"}", "{\"title\": \"Thank You for your comments!!\", \"comment\": [\"Thank you for your valuable feedback.\", \"First, we sincerely apologize for the citation typo in Table 1 and have corrected the error.\", \"We understand the section needs to be written with more clarity. Our focus is on detecting whether the specific \\\"person of interest\\\" is present in the recording, even when there are other activities happening in the background, such as noise, other humans, or groups. To achieve this, we use a Gaussian Mixture Model (GMM) with 134 features (see Section 3.5). During training, we use data containing only footsteps, with no intentional background activity, and cluster it into two categories: noise (natural structural vibration of the location) and footsteps events (due to impact on the ground) (See Section 3.5). We understand that real-world scenarios are much more complex. To address this, we conducted an additional experiment using a \\\"wild set\\\" (see Supplementary Section 6.4). In this scenario, the person of interest walked in the presence of background concurrent activity like non-human, human and group activity. During testing, we clustered the data into three categories. Based on previous research (Anchal et al. 2020), the cluster with the largest covariance determinant is labeled as \\\"Complex Noise\\\"- meaning more variations, while the second-largest is identified as the \\\"Person of Interest\\\" among the three clusters (see Section 6.4). Instead of separating signals for multiple individuals, our goal is to identify whether the person of interest is present in the recording, even in complex environments. To answer your question about \\\"five individuals,\\\" we are not attempting to separate signals for all five people but rather to detect and isolate the signal trace of our specific person of interest. To match the detected cluster containing the person of interest, we perform a T-test on the extracted embeddings from both the actual dataset and the wild set (Supplementary Section 6.4.1). This approach helps us on detecting the signal trace of the \\\"Person of Interest\\\" amidst complex environments. We believe this is a starting point for future improvement, and plan to try more complex approaches soon, which will likely require novel methods contributions. We acknowledge that more sophisticated methods may be needed in the future to fully address scenarios involving multiple individuals with overlapping signals. Our goal is to provide a simple baseline for future research, and we hope this work inspires further advancements in this field.\", \"We understand your concern about the dataset size of 100 subjects for a top-tier conference in 2024. We would also highlight that apart from 100 participant's data, we\\u2019ve also explored various use cases, such as different floors, rooms, and sensor distances, along with meta-data including height, weight for real-life applications. We believe this work is an important first step and a strong starting point, based on previous research. Collecting this data took significant time and effort to ensure its quality and reliability. We hope it will inspire other researchers to build on our work and push the boundaries even further.\", \"We genuinely hope you see the value of our effort and the potential it holds for advancing this field. Your thoughtful feedback has been immensely helpful, and we deeply appreciate your time and engagement with our work. Thank you for reviewing our submission.\"]}", "{\"summary\": \"The paper presents a novel dataset termed VIBEID, designed for human gait recognition using structural vibration data. The dataset includes recordings of 100 subjects across various distances, floors, and environments. Experiments demonstrate that structural vibration can serve as a viable biometric trait across different scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Vibration-based gait recognition introduces a novel approach for human identification.\", \"The proposed baseline methods are effective.\", \"This work establishes the largest-scale vibration-based dataset to date.\"], \"weaknesses\": [\"The dataset has a limited number of subjects, although it includes over 88 hours of recorded data.\", \"Compared to commonly used vision-based gait recognition, the operating distance remains relatively short.\", \"The evaluation setup lacks clarity.\"], \"questions\": \"1. To what extent do walking speed and the carrying of objects impact recognition performance?\\n2. Building on question 1, does abnormal gait pose significant challenges for re-identification?\\n3. The VIBEID dataset studies an operating distance range of 1.5m to 4m, while vision-based gait recognition typically works at distances over 10m. What is the distance limit for vibration sensors to capture meaningful gait signals?\\n4. If there are obstructions between subjects and the sensor, is reliable recognition still possible?\\n5. How does the proposed vibration-based gait recognition handle scenarios with multiple pedestrians walking simultaneously?\\n6. I recommend replacing Figure 3 with a clearer version.\\n7. The evaluation protocol should be more detailed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your response.\\n\\n1. There seems to be a possible citation typo in Table 1. Shen et al. appear to have proposed the LIDAR-based dataset, while Wang et al. are credited with the Event-based one.\\n\\n2. I am still unclear about how the selection of the person of interest is handled. If five individuals are walking within the field of view of the geophone, how can unsupervised algorithms reliably extract the walking signals of each individual? Moreover, how are these signals matched to specific individuals? I consider this a critical issue for real-world applications. Since this submission primarily focuses on application-oriented contributions, it may be challenging to sidestep this issue by explaining that it is merely a pioneering attempt.\\n\\n3. I appreciate the value of exploratory work. In fact, this manuscript serves as a continuation of the efforts outlined in the datasets listed in Table 1. However, given that it is now 2024, a dataset limited to only 100 subjects may pose challenges for acceptance at a top-tier conference.\\n\\nI agree with the perspectives shared by the other reviewers and appreciate your thoughtful response. \\n\\nHowever, opinions can vary, and I still feel inclined to maintain my current rating. \\n\\nThank you for your understanding.\"}", "{\"title\": \"Thank you for rebuttal\", \"comment\": \"Thank you for addressing my concerns and performing additional experiments to provide more details on my and other reviewers' comments (low light, concurrent activity, walking speed). I think your work has potential. There are still some valid concerns about practicality of the solution's deployment when it comes to sensor availability/type and dataset size/variety, etc., but with additional improvements added, I'm willing to increase my score by 1.\"}", "{\"title\": \"Thank you for your feedback. Interesting Questions!!!\", \"comment\": \"We truly appreciate your insightful feedback, and it has helped us rewrite the manuscript with better clarity. Below, we address the points raised and provide clarifications:\\n\\n**Comparison with Commercially Available Biometric Techniques**\\n\\nWe acknowledge that commercially available biometric techniques may offer an alternate viable solution in constrained indoor environments, each having its own pros and cons. We recognize a deeper, underlying question: why explore structural vibration-based gait recognition when so many established gait recognition methods already exist in the market? Our response is, why not? We believe geophones offer unique potential that is still in its nascent stage and remains unexplored. We envision a future where humans interact effortlessly with diverse sensing modalities, just as naturally as we engage with everyday materials.\\n\\nIn comparison to geophones, LIDAR and DVS sensors rely on maintaining a direct line-of-sight. As discussed in Section 6.5.5., obstructions between the sensor and target can block the line-of-sight (or laser beams), potentially resulting in incomplete or inaccurate data capture. Moreover, geophones present a highly cost-effective alternative to 128-beam LIDAR scanners (1:18 cost comparison on available market products), with significantly lower computational and power requirements (Section 6.6). This makes geophones particularly appealing for applications where affordability and efficiency are critical. We have incorporated a detailed comparison in Table 1 of the revised manuscript to highlight these distinctions. Excerpt shown below: \\n\\n| Reference | Sensor Type | Subjects | Samples | Domain | Environment |\\n|------------------------------------|-----------------|----------|---------|----------|-------------|\\n| [Shen et al., 2023](#shen2023lidargait) | Event cameras | 20 | 4000 | 1 | Indoor |\\n| [Doe et al., 2020](#9337225) | LIDAR | 1050 | 25,239 |1| Outdoor |\\n \\n**Data Scale and Comparisons**\\n\\nWe sincerely appreciate the reviewer\\u2019s point regarding the availability of larger datasets in gait-recognition studies. It is worth noting that early research in these fields also began with relatively small datasets before scaling to thousands of subjects. For instance, even the referenced paper on event-stream recognition includes real-life data from 20 subjects with 4,000 samples. Our dataset provides a detailed exploration of different domains, such as multiple floors, rooms, and sensor distances. Similarly, our current dataset serves as an essential starting point, comparable to foundational benchmarks, which also began with fewer or comparable numbers of subjects [1, 2]. The primary objective of this study is to introduce the most extensive structural vibration-based gait recognition dataset to date, building on prior work (Pan et al., 2017; Mirshekari et al., 2018; Anchal et al., 2020; Dong and Noh, 2023; Xu et al., 2024). This dataset is intended to serve as a foundation for future research, facilitating the expansion and scalability of gait recognition studies to encompass a larger pool of subjects. Additionally, we are committed to ongoing efforts to enhance the quality and scope of the dataset.\\n\\n**Use of Clustering in Multi-Pedestrian Scenarios**\\n\\nWe wish to clarify that our approach does not depend on manual intervention (or manual calculation of cluster number). Instead, we leverage unsupervised clustering to segment signals into three distinct categories. The intuition behind this is that a signal inherently comprises three types of information: background noise, the person of interest, and all other activities (including human, non-human, and group activities). By consistently setting the cluster number to three, regardless of the number of individuals present, our approach focuses on isolating the person of interest amidst diverse and dynamic activities. Our clustering method is statistically robust, effectively detecting the person of interest even in noisy environments, without the need to propose entirely new methods.\\n\\n**Baseline and Future Directions**\\n\\nOur experimental baselines were intentionally kept simple to provide initial insights and to establish a foundation for future research. These methods serve as a starting point for the community to explore more complex approaches. We welcome collaboration and encourage researchers to leverage our dataset for advancing novel techniques in this domain.\\n\\nWe deeply appreciate the reviewer\\u2019s insightful comments and ongoing engagement, which have played a crucial role in enhancing the quality of the manuscript.\\n\\n**References:**\\n\\n[1] Hiroyuki Yamada et al., Advanced Robotics, 2020.\\n[2] Shiqi Yu et al., ICPR, 2006.\"}", "{\"title\": \"Thank you for your feedback. Excellent Question !!\", \"comment\": \"We understand your concerns regarding the technical contributions and the applicability of this work, particularly in demonstrating the unique strengths of Geophone-based sensing in comparison to vision-based systems. We have addressed this concern with two approaches:\\n \\n- Quantitative Analysis :\\nWe have experimented with metrics mean average precision (mAP), for multi-modal datasets (Table 7 ). Quantitative results indicate that the geophone-based modality achieves performance comparable to vision-based modalities. While vision-based systems exhibit lower mAP in certain scenarios, the geophone modality demonstrates consistent performance. However, this could be attributed to the need for view-invariant analysis in vision-based systems . To provide definitive evidence that the geophone can succeed where vision-based systems fail, we extended our study with qualitative analysis to demonstrate its unique advantages in challenging scenarios.\\n \\n- Qualitative Analysis :\\nWe conducted a series of experiments to highlight scenarios where the geophone outperforms vision-based systems. We recorded videos and structural vibration signals in varied conditions, namely, normal, low-light, half obstructions, and full obstruction. Obstruction was created using a green screen placed in the field of view of the camera, while recording data (Figure 14). We conducted statistical test-like t-test, p-value test. Additionally, we evaluated each modality on events extracted. These qualitative results, as discussed in detail in Supplementary Section 6.5.5 (Table 15- excerpt shown below).\\n \\n| **Comparison** | **Camera 1 T-Stat.** | **Camera 1 P-Value** | **Camera 1 Event Ratio** | **Camera 2 T-Stat.** | **Camera 2 P-Value** | **Camera 2 Event Ratio** | **Geophone T-Stat.** | **Geophone P-Value** | **Geophone Event Ratio** |\\n|---------------------------|----------------------|-----------------------|---------------------------|----------------------|-----------------------|---------------------------|-----------------------|----------------------|---------------------------|\\n| Normal to Normal | 0.0 | 1.0 | 1.00 | 0.0 | 1.0 | 1.00 | 0.0 | 1.0 | 1.00 |\\n| Normal to Low Light | -4.60 | 0.004 | 0.50 | -2.09 | 0.036 | 0.51 | 0.53 | 0.59 | 1.01 |\\n| Normal to Half Obstruction| -2.52 | 0.016 | 0.42 | -1.31 | 0.187 | 0.68 | 0.53 | 0.59 | 1.02 |\\n| Normal to Full Obstruction| - | - | - | - | - | - | 0.461 | 0.644 | 1.00 |\\n \\nUnder normal conditions, all modalities achieve a consistent event ratio of 1.00. In low-light conditions, Camera 1 and Camera 2 experience significant degradation, with event ratios dropping by 50% and 49%, respectively, and statistically significant p-values (\\\\(p < 0.05\\\\)). In contrast, the geophone remains robust, showing a slight improvement in its event ratio (+1%). Under partial obstruction, the event ratios for Camera 1 and Camera 2 decline by 58% and 32%, respectively, with Camera 1 showing significant degradation (\\\\(p = 0.016\\\\)). The Geophone remains unaffected, with its event ratio improving by 2%. In scenarios of full obstruction, vision-based systems fail completely, producing undefined event ratios, while the Geophone maintains consistent performance.\\n \\nThe Geophone\\u2019s high p-values (\\\\(p > 0.05\\\\)) across conditions, including \\\\(p = 0.59\\\\) in low-light and partial obstruction, highlight its resilience and stability. These results highlight the Geophone's reliability as a complementary or alternative sensing modality, particularly in scenarios where vision systems struggle. Its independence from environmental factors, low computational cost, and privacy-friendly design further enhance its appeal. \\n \\nWe hope this clarification addresses your concerns and demonstrates the technical contributions and real-world applicability of our work. Thank you for your thoughtful feedback, and we welcome any further questions.\"}", "{\"title\": \"A Deep Dive into the Comments !!\", \"comment\": \"- In response to your comments we explored the possibility of detecting individual persons within a group walking together. Group can be defined two or more person. We considered scenarios where two individuals walk concurrently but independently within the same recording environment. We recorded an additional 10 minutes of data featuring individuals walking side-by-side in an uncontrolled, random manner. Our goal was to determine whether our modified event detection system could successfully identify specific individuals amidst group activities. We then utilized statistical p-value tests to quantify the impact of such group dynamics on our detection capabilities. (See Supplementary section 6.4)\\n\\n ## Statistical Test Results for Pure vs. Activity Data\\n \\n| Comparison | T-Statistic | P-Value |\\n|-------------------------------------|-------------|---------|\\n| Pure & Non-Human Activity Data | -0.772 | 0.440 |\\n| Pure & Human Activity Data | -1.750 | 0.080 |\\n| Pure & Group Activity Data | -1.329 | 0.183 |\\n| Pure & Random Noise | -237.97 | 0.0 |\\n\\nThe statistical test results, as shown in Table 9 indicate that embeddings generated from noisy data using our GMM-based event extraction approach closely align with embeddings derived from cleaner distributions. Additionally, we have compared the p-value and t-test with a random noise data, as a control.\\n\\n- In response to the question of comparing the eco-friendliness of two modalities, we considered both vision-based and geophone-based systems. While vision-based modalities include models designed with eco-friendly considerations, our analysis emphasizes the act of recording data. This provides a direct comparison of the environmental impact of the two modalities. (See Supplementary section 6.6)\\n \\nFrom a power consumption perspective, the geophone is a passive sensor, meaning it does not require an external power source for operation. It generates an electrical signal in response to mechanical vibrations. However, the associated electronics, such as a Raspberry Pi, consume power. Conversely, vision-based systems typically use CCTV cameras, which have more substantial power requirements.\\n \\n## Comparison of Power Consumption and Environmental Impact\\n \\n| Modality | Load | Power (W) | Daily Energy (kWh/day) | Annual Energy (kWh/year) | Equivalent Annual CO2e (kg/year, Global) |\\n|-----------------|--------------|-----------|------------------------|--------------------------|------------------------------------------|\\n| **Vision-based**| Basic (PoE) | 6.3 | 0.1512 | 55.188 | 26.71 |\\n| | Maximum (PoE)| 18.9 | 0.4536 | 165.204 | 78.47 |\\n| **Geophone-based** | Basic | 1.95 | 0.0468 | 17.082 | 8.11 |\\n| | Maximum | 5.15 | 0.1236 | 45.114 | 21.43 |\\n \\n*Note: PoE stands for Power over Ethernet.*\\n \\n---\\n\\nThe geophone-based system, with its associated Raspberry Pi, has significantly lower power consumption and carbon emissions compared to the CCTV-based system. This highlights the eco-friendliness of the geophone modality, especially in scenarios requiring continuous operation in an indoor setting. Additionally, a single Raspberry Pi can be modified to record multiple geophone sensors, with very little carbon emission of around 0.208 kg/year per geophone.\\n\\n---\\nThank you again for your thoughtful feedback. We hope that the edits we have made reflect that we understood your concerns and have addressed them. We would be very grateful if you could let us know whether or not our edits have addressed your concerns and/or changed your opinion on the quality of the paper before the rebuttal period ends.\"}", "{\"title\": \"Thanks for your dense response\", \"comment\": \"To my knowledge, the primary advantage of gait recognition lies in its ability to identify individuals at a distance. In constrained environments or limited spaces, other commercially available biometric techniques, such as iris or face recognition, may be more practical alternatives to developing gait recognition in such scenarios.\\n\\nI acknowledge that visual biometrics like face and iris recognition may raise privacy concerns. To address this, some studies have proposed alternative approaches, such as Lidar-based gait recognition [1], which features a benchmark dataset with over 1,000 subjects under diverse walking conditions. Similarly, non-RGB sensors like event cameras have also been explored for gait recognition [2], with the added benefit of effectively leveraging existing RGB-based gait datasets.\\n\\nBoth Lidar- and event-based gait recognition methods are privacy-friendly and versatile, suitable for both indoor and outdoor scenarios, across varying distances, and scalable to datasets with thousands of subjects.\\n\\nFor scenarios involving multiple pedestrians, the proposed method clusters subjects by adjusting the number of clusters. However, manually counting pedestrians is impractical for real-world applications. This challenge may be a unique limitation of vibration-based gait recognition and merits further in-depth discussion in the manuscript.\\n\\nWhile introducing new sensors for gait recognition is commendable and encouraged, top-tier application-oriented publications demand greater clarity and focus on addressing concerns about the method\\u2019s motivation and practical viability.\\n\\nThank you for your efforts. However, in its current form, this version of the manuscript may still face challenges in improving my rating.\", \"references\": \"[1] LidarGait: Benchmarking 3D Gait Recognition with Point Clouds, CVPR 2023.\\n[2] Event-Stream Representation for Human Gait Identification Using Deep Neural Networks, T-PAMI 2021.\"}", "{\"title\": \"Thank you for your feedback.\", \"comment\": \"1. As mentioned in Section 6.4 of the supplementary materials, titled \\\"Wild Set Evaluation Involving Concurrent Human and Non-Human Activity,\\\" we have modified our event detection module for detecting both human and non-human activities. Results show that it is possible for detecting person-of-interest from any concurrent activities happening simultaneously. Our current setup with a geophone sensor is really meant for identifying individuals by their walking patterns in an indoor setting. When many people walk together , their steps mix together, making it hard to separate one person's walking pattern from another, and require more in depth analysis. This is similar to trying to understand each person talking at the same time during a phone call, which is possible but needs more detailed study. Our goal with this research is to introduce a large-scale dataset that can serve as a foundation for the machine learning community to further develop and refine. This dataset is intended to inspire and enable more advanced studies in the field.\\n \\n \\n2. We acknowledge that GaitTR and GaitGL are more efficient in terms of model size compared to ResNet-18. These methods are built on existing datasets for gait recognition and focus on optimizing within those frameworks. In contrast, our work introduces a new dataset specifically designed for novel-modality geophone-based gait recognition. We validated our approach using ResNet models due to their established reputation and widespread acceptance in the machine learning community as benchmark models.\\nGeophone-based gait recognition captures one-dimensional signals, which provide significant advantages in terms of storage and processing simplicity compared to video-based systems. Video data typically requires substantial storage and computational resources, whereas geophone data is more compact and suitable for resource-efficient preprocessing and real-time deployment on devices with limited storage capacity.\\n3. We appreciate this observation and acknowledge that geophones may not yet be widely used in daily life or standard data collection scenarios. While geophones represent a relatively new modality in this domain, previous research has already demonstrated their potential for capturing gait characteristics effectively in Table 1 and section 2 (Pan et al., 2017; Anchal et al., 2020; Dong & Noh,2023; Mirshekari et al., 2018; Chakraborty & Kar, 2023; Xu et al., 2024). Geophones, as highly sensitive vibration sensors, offer a non-invasive, privacy-preserving, and cost-effective alternative to traditional video or image-based systems. Unlike cameras, geophones do not capture visual data, making them suitable for environments where privacy concerns are paramount, such as in healthcare settings. Additionally, geophones can operate effectively in low-light or visually occluded environments, where video-based systems may struggle (supplementary section 6.5.5). Moreover, by introducing a new geophone-based dataset, we aim to pave the way for further research in this area, bridging the gap between emerging technologies and real-world applications. As machine learning and sensor technologies evolve, geophones could find increased relevance in various specialized use cases, ultimately demonstrating their utility for gait recognition and beyond.\"}", "{\"comment\": \"We sincerely appreciate your thoughtful feedback and suggestions. Your insights have provided us with crucial perspectives to strengthen our study.\\n\\n-Dataset size : We would like to highlight that the dataset contains data from 100 subjects. It is important to note that this dataset is first of its kind to address gait recognition using geophone sensors. Additionally, the dataset includes over 88+ hours of recorded data, offering a significant amount of real-world signals that reflect a variety of gait patterns and environmental conditions. We are committed to improve the dataset, by continuously expanding and diversifying the dataset, we aim to enhance its usability for practical applications and ensure compatibility with a wide range of use cases.\\n\\n-Operating size: The operating range of the geophone is designed to be suitable for environments such as indoor spaces, where monitoring is typically required at close proximity. The distance range can be extended up to 6-10m, by using more better data acquisition units or additional sensors, but this is often constrained by the physical layout of the environment (aka size of the rooms) rather than the inherent limitations of the geophone sensor itself. \\n\\n-Evaluation setup: We apologize for any lack of clarity in our evaluation setup. To address your comment, we have rewritten section 4.1 and 4.2.\\n\\n-1: To investigate the impact of walking speed on recognition performance, we conducted additional experiments, with results presented in Supplementary Section 6.4.4, which includes an extra 7.5 hours of data. Furthermore, given that the data spans over five years, we have recordings from the same individuals in both summer and winter (with heavier clothing), and our observation is that carrying small objects do not significantly affect the underlying gait pattern. However, when individuals carry heavier objects or change their posture (e.g., hunching over), this can alter their gait, potentially leading to abnormal ground contact patterns, which affects recognition performance. Our findings suggest that variations in walking speed and the carrying of objects do indeed affect the gait pattern, as expected. However, the fundamental movement patterns, which our model is designed to recognize, remain consistent despite these changes. We recognize that this is an area that warrants further investigation. \\n\\n-2: Thank you for your comment. We agree that gait abnormality can pose challenges for re-identification. In our study, we collected data from 100 individuals within a similar age range (25\\u201335), where many of the subjects have similar height-to-weight ratios. This similarity between individuals makes re-identification based solely on standard gait patterns more challenging. However, when a person exhibits a medical condition, this deviation tends to be more distinct and more accessible to detect. The abnormalities in gait patterns are more pronounced and can be identified more accurately. However, anyone deliberately faking abnormal gait would be difficult to detect.\\n\\n-3: To ensure uniformity in all the scenarios, we limited the range to 4 meters in our experimental design for both indoor and outdoor data collection. This 4 meter limit is not due to any inherent restriction of the sensor's range but instead to the design of the preamplifier circuit and the room sizes used in the study ( Supplementary A-6.1). In practice, our configuration has an effective sensing radius of up to 6 meters indoors and up to 10-15 meters outdoors, depending on the level of background noise.\\n\\n-4: Thank you for your question. The geophone sensor is capable of recording gait signals even in the presence of obstructions, as it detects ground vibrations rather than relying on line-of-sight. In our experiments, we have successfully recorded data in rooms with various objects, such as tables and chairs, without significant interference to the vibration signals. Therefore, obstructions between the subject and the sensor do not hinder the sensor's ability to capture meaningful gait patterns.\\\\\\\\\\n\\n-5 : Thank you for raising this important point. our unsupervised event detection module (we have updated in details in supplementary 6.3), can be modified to detect events from multiple activities, by changing the number of clusters from 2 (event vs noise) to 3 (event vs noise vs other activities).\\n\\n-6 : Thank you for your suggestion. We appreciate your feedback on Figure 3. A revised figure has been included in the updated manuscript.\\n\\n-7: Thank you for your comment. We apologize for any lack of clarity in our evaluation setup. To address your comment, we have rewritten section 4.1 and 4.2.\"}" ] }
2cF3f9t31y
SelectFormer in Data Markets: Privacy-Preserving and Efficient Data Selection for Transformers with Multi-Party Computation
[ "Xu Ouyang", "Felix Xiaozhu Lin", "Yangfeng Ji" ]
Critical to a free data market is $ \textit{private data selection}$, i.e. the model owner selects and then appraises training data from the data owner before both parties commit to a transaction. To keep the data and model private, this process shall evaluate the target model to be trained over Multi-Party Computation (MPC). While prior work suggests that evaluating Transformer-based models over MPC is prohibitively expensive, this paper makes it practical for the purpose of data selection. Our contributions are three: (1) a new pipeline for private data selection over MPC; (2) emulating high-dimensional nonlinear operators with low-dimension MLPs, which are trained on a small sample of the data of interest; (3) scheduling MPC in a parallel, multiphase fashion. We evaluate our method on diverse Transformer models and NLP/CV benchmarks. Compared to directly evaluating the target model over MPC, our method reduces the delay from thousands of hours to tens of hours, while only seeing around 0.20% accuracy degradation from training with the selected data.
[ "Secure Multiparty Computation", "Machine Learning", "Efficiency", "Transformer model" ]
Accept (Poster)
https://openreview.net/pdf?id=2cF3f9t31y
https://openreview.net/forum?id=2cF3f9t31y
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w4DBlotOBM", "koOX5GB9UI", "ieCWguHSx3", "gkG5Ih92Ks", "YynuMsa8Yw", "0kpAeaxSK2" ], "note_type": [ "official_review", "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1731470167639, 1734652564405, 1731362774173, 1730663725217, 1730665236559, 1737523609974 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3957/Reviewer_p75F" ], [ "ICLR.cc/2025/Conference/Submission3957/Area_Chair_P5cR" ], [ "ICLR.cc/2025/Conference/Submission3957/Reviewer_jQt4" ], [ "ICLR.cc/2025/Conference/Submission3957/Reviewer_hi43" ], [ "ICLR.cc/2025/Conference/Submission3957/Reviewer_8FFm" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes SelectFormer, a method for privately selecting data for transformer finetuning using MPC. The setting is that a data holder is trying to sell data to a user, who wants to finetune a transformer on only a subset of the holder\\u2019s data. However, the data holder doesn\\u2019t want to expose all fo their data, and the data user doesn\\u2019t want to expose their model weights to the data holder. The main idea is to iteratively learn a sequence of models, each of which is used to privately rank points in the dataset in terms of their informativeness (entropy of the output distribution of the target model). These models replace nonlinear operations like softmax with an MLP with ReLU activation, where the MLP is trained by collecting input-output pairs from the intermediate model. The authors show that their method significantly reduces the time to select a good training dataset relative to prior private baselines, without sacrificing much in terms of quality (i.e. it selects data points that lead to a good downstream model).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The problem formulation is interesting and new\", \"The algorithm appears to cleverly use the available information\", \"The empirical performance and experiments are promising\", \"Overall, I think the paper is proposing an interesting new idea (both formulation and algorithm), and hence I gave it a positive score. However, I think the level of polish and writing is rather low, and I think the paper would need a significant clean-up prior to publication.\"], \"weaknesses\": \"-\\tThe paper is not polished or easy to read/understand\\n-\\tIt\\u2019s not clear if the paper compares against relevant baselines\\n-\\tImprecise threat model and privacy guarantees. The threat model does not clearly specify *who* should not learn *what data* \\n-\\tHow does the MPC handle nonlinearities in the MLP? \\n-\\tThe level of formalism in the paper is very low \\n\\nIn your privacy guarantees, please specify *who* learns or does not learn what information. For instance, if the data holder knows the architecture of the model(s) and they know the ranked entropy of their samples, why can\\u2019t they train the model locally on the most informative samples to approximate the private model? This leakage was not explored in the paper, to my knowledge. \\n\\nThe paper\\u2019s evaluation considers time-accuracy tradeoffs. However, these are mostly provided implicitly in tables that do not clearly show the tradeoff. It may be helpful to show a plot with selection time on one axis and accuracy on the other, and then see different baselines, including variants of your method. Specifically, based on the results in the Appendix, I think the 1-phase variant with a dimension 2 MLP may do well enough relative to the other hyperparameter settings, especially if you consider its lower delay. But it\\u2019s hard to judge from the tables provided. Can you produce this plot and compare it to the suggested hyperparameter settings? \\n\\nA lot of notation seemed to be undefined or imprecisely defined. This made the paper difficult to read. For instance, in Sec. 4.1, \\u201cwith a selectivity \\u03b1i = Si/Si\\u22121\\u201d if S_i is a dataset, do you mean the ratio of the *cardinalities* of S_i and S_{i-1}? In Sec 2.1 and 2.2 -- What is the difference between M_t and M_target? Why do you want to query all the samples in D on M_t instead of M_target? I thought M_t was the finetuned model, which is unknown until you finetune with the selected data? The notation $\\\\hat M_i$ was not defined, and the proxy model was previously defined as $M_p$. And what is $M_g$ in Figure 2? It is not defined until later in the paper, in Section 4.2. But the definition doesn\\u2019t match Figure 2 (is it the bottom K or L layers?) What is the difference between $W_i$ and $w_i$? \\n\\nIn the same vein, the paper does not seem to formally write out the MPC algorithm or prove any guarantees. \\n\\nThe paper is missing several relevant references, including one potentially important baseline for comparison: \\n-\\tPang et al, \\u201cBolt: Privacy-Preserving, accurate, and efficient inference for transformers\\u201d (S&P 2024)\\n\\nTable 1 is hard to read. Please enlarge. Also what metric is it providing? Please make the table and caption self-contained. This is true also of the tables in the appendix, many of which don't specify what metric they are listing.\", \"minor_comments\": [\"Intro: \\u201cAs shown in ??,\\u201d\", \"The section \\u201cWorkflow\\u201d is very difficult to follow\\u2014it\\u2019s not clear what you mean by ranking samples by entropy, for instance\", \"The model owner has a private, small validation set, on which she wants to maximize the test accuracy. \\uf0df Do you mean validation accuracy?\", \"\\u201coffline generate a random triple called Beaver triple Beaver (1992)\\u201d wrong type of citation typo\", \"I am not an expert in MPC and may be missing relevant literature and requirements.\"], \"questions\": \"1) Can you share a plot of accuracy vs. delay for baselines and the variants of your method, including the variant with 1 phase and a dimension 2 MLP?\\n\\n2) Why is replacing one nonlinearity with a different nonlinearity useful for MPC? \\n\\n3) Can/should you compare to BOLT in the evaluation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The reviewers were supportive of the paper. There were a few concerns that if addressed in the final version that would be great,\\n\\na) The threat model should clearly define who should not learn what data. \\nb) The paper should specify more clearly how the MPC handles nonlinearities in the MLP.\\nc) The level of formalism is a bit low, and the privacy guarantees should be more specific. \\nd) The section on \\\"Workflow\\\" is difficult to follow.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"summary\": \"The paper considers the private data selection problem, i.e. the problem of a model trainer selecting data to purchase from a data owner without revealing the purchased data to the model trainer. The authors focus on the active learning framework, which targets examples which have the largest prediction entropy, i.e. which the model can learn more from, and does not require labelled examples. Ideally multi-party computation can be used to have the model trainer and data owner compute the examples' entropies (or, if entropies are themselves sensitive, a relative ordering of the examples' entropies) without revealing any other information about the examples themselves. For transformers, MPC is infeasible because transformers use high-dimensional, non-linear operations. Past work got around this issue by approximating the non-linear operations using linear operations. However, using MPC to compute these approximations remains expensive and the approximation comes at a cost in accuracy.\\n\\nThe authors instead propose a number of techniques to make MPC evaluation of transformers more efficient. First, they fuse multiple non-linear steps. Next, they use multi-layer perceptrons (MLPs) to approximate the fused steps and reduce their dimension (as opposed to past work, which use MLPs to reduce a single non-linear step without dimension reduction). Third, they employ a multi-phase selection protocol, where initially a small model is used to filter the initial dataset $S$ into $S_1$, and in each successive step a larger model is used to filter $S_i$ into $S_{i+1}$ until the final dataset is acquired. To find the MLP approximation and also construct the smaller models, the model trainer purchases a small arbitrary bootstrap dataset up front, and uses the inputs/outputs of the large model on this dataset to train the MLP to approximate layers of the large model. The number of layers and width of the layers, as well as the dimension of the MLPs, can be reduced to construct a smaller model.\\n\\nThe authors perform an empirical comparison of their data selection protocol to (1) picking data at random (2) an oracle which chooses the highest-entropy examples without MPC, and (3) MPCFormer. At varying dataset sizes, the authors' result is competitive with baseline (2) in terms of accuracy and achieves a ~200x speedup in runtime over (2), and has large accuracy improvements over (1) and (3), on a variety of empirical tasks.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper studies a problem that has been of interest in past work. It proposes a method that is both very efficient and achieves high utility in the empirical studies. The authors can operate in a more general setting (where examples are unlabelled) than past work. It is clear from the presentation what techniques the authors' method uses to improve upon the past work, and the techniques are made understandable at a high-level even to someone who is not an expert in the area. Furthermore, techniques such as using MLPs for dimension-reduction and fusing multiple operations are themselves sufficiently different from past work and a possibly interesting contribution independent of the problem of data selection.\", \"weaknesses\": [\"The MLP dimensions, proxy layers, and selectivity per round are chosen via grid search, which might limit the practicality of the method. It would be nice to have either \\\"standard\\\" guidance for choosing these parameters or a justification for grid search being practical. See Questions below.\", \"The paper also needs some editing, there are some major typography errors, though I expect this is easy to handle in a revision. For example:\", \"Line 82, citation to ??\", \"Line 398 - \\\"under 3 compute parties\\\" appears twice\", \"Line 498, $d_i$ is not formatted properly\"], \"questions\": \"Is there a way to make the grid search practical? We probably do not have the luxury of trying MPC with a data vendor multiple times with different parameters in practice. Maybe we can do a grid search to see what parameters perform the best on a non-private test dataset, and then use these values in all actual interactions with data owners?\\n\\nCan you include a 'baseline' for Figure 1? i.e., show how long the corresponding operations take without using MPC and the memory requirement for each (communication rounds could be omitted, of course).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an efficient approach to private data selection, leveraging multi-party computing (MPC) for data appraisal in complex models such as transformers and ViT. The authors propose using a proxy model that approximates the complex model by emulating non-linear modules with MLP for data appraisal. While experimental results demonstrate improved efficiency compared to random selection and MPCFormer, the paper would benefit from clearer algorithmic descriptions and more comprehensive experimental analysis.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses the important and challenging problem of efficient private data selection.\\n2. The proposed hierarchical algorithm demonstrates thoughtful consideration of multiple components, including multi-phase selection and dimension optimization, to enhance efficiency.\", \"weaknesses\": \"1. The pipeline of the algorithm is not clear. In multi phase selection, why do you need to query the candidate data? Whether or not to select this subset of data depends on the test data performance. It seems that when you train the MLP, except for the selected data, you also generate an Gaussian distribution for the layer, but how to use this distribution to supervise fine-tuning is not clear. In sum, I hope there will be an algorithm pipeline to clearly show each steps and which parameters are hyper-parameters.\\n2. The results in Table 2 is surprising. Does it mean that MLP is enough and we can drop all non-linear layer since some of your results show that all emulations with MLP outperform no emulations. \\n3. The gap between MLP and non-linear module is simply shown by the final accuracy of your task, which may contain much randomness. Could you explain the gap in embedding layer? Like how much embedding difference for different layer emulation.\\n4. The experimental results are not clear. E.g., in Table 3, did you test the model under the same communication and computation budget? In Figure 5, what does 1 phase mean? How many phased do you use in the \\\"ours\\\"? Why not compare your delay with the baseline MPCFormer?\\n5. Lack of analysis. As your work focus on improving the efficiency and keeping performance, it is important to specifically analyze the computation and communication costs.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper focuses on private data selection for transformers over MPC. The authors consider a two-party setting, where a model owner wishes to purchase data points from a data owner. The model owner needs this data to fine-tune a target model. To find the most relevant data points without revealing the rest of the dataset, the two parties engage in a multiparty computation.\\n\\nComputational and communication overhead is a key challenge in making MPC practical for data selection. In an ideal world, the model owner could evaluate its target model on data points with MPC. However, this is not practical for large transformer models, which contain nonlinearities (such as layer norm or softmax) that are prohibitively expensive to compute with MPC. Thus, an alternative approach is to use a cheaper proxy model to approximate the target model efficiently while still selecting useful datapoints. \\n\\nThis paper proposes a new way of building such proxy models, by replacing costly nonlinearities by more MPC-efficient, trainable, multilayer perceptrons. The authors also introduce a multiphase selection approach where datapoints are selected progressively, thereby using previously selected datapoints to improve the selection of future datapoints. Delay can also be optimized by overlapping computation and communication. \\n\\nThe paper is evaluated on vision and NLP tasks, and shows significant improvements in both delay and accuracy compared to prior work.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Main strengths:\", \"SelectFormer shows strong improvements in delay and accuracy over baselines, which include two naive baselines (Oracle and Random) and one recent MPC inference paper (MPCFormer).\", \"Multiphase selection is a nice technique. The key idea is that we can significantly reduce delay by selecting the first datapoints with coarse model proxies, and progressively use more accurate and slower proxies (trained on a now larger collection of purchased datapoints) to better select the following datapoints. It turns out that this technique also improves accuracy a bit.\", \"Another useful contribution is MLP approximation of nonlinearities, where the MLPs are trained by generating a large synthetic dataset using metrics coming from the small number of datapoints already purchased.\", \"The paper's evaluation is broad and strong, across both vision and NLP tasks. I appreciated the numerous experiments and very granular ablation studies, such as Fig 4 and Fig 6.\"], \"other_strengths\": [\"It is useful to compare the accuracy drop from different MLP approximations.\", \"Handling imbalanced and unlabeled data is important.\", \"Using computation/communication parallelism sounds intuitive, but it might not be done by other works, so it is valuable to evaluate it. This parallelism offers a nice gain in performance.\"], \"weaknesses\": [\"Main weaknesses:\", \"I don't know the literature extensively, but I wonder if MPCFormer is really the strongest baseline against which we should evaluate SelectFormer (the other baselines, Oracle and Random, are useful to set the delay/accuracy range but are not real alternatives to SelectFormer). Indeed, as the authors note, \\\"MPCFORMER\\u2019s model distillation approach is ill-suited to data selection\\\", and I wonder if this is the reason behind MPCFormer's particularly poor accuracy (Table 3). The paper mentions THE-X (Chen et al, 2022), which could be a stronger baseline. Another potentially relevant paper is Bolt, published at S&P 24: https://eprint.iacr.org/2023/1893. I don't know these works in detail, but they might be more amenable to data selection than MPCFormer if they do not rely on data distillation or data-dependent approximations. In short, I am worried that MPCFormer might be a strawman against which SelectFormer shines too easily.\", \"Another concern is that the techniques proposed in this paper seem to only apply to a quite specific application, namely MPC data selection. Hence, depending on how widespread MPC data selection is, SelectFormer could have a pretty limited impact. Indeed, the authors note that their \\\"MLP approximation is specifically suitable for data selection while impractical for model inference directly\\\", which might limit the applicability of SelectFormer to other problems.\", \"Finally, I am not an expert in MPC systems, but I am a bit skeptical of the blanket claim that \\\"No prior MPC systems exploit such parallelism\\\". I remember a similar idea being mentioned by Meta in a research whitepaper (https://research.facebook.com/publications/private-computation-framework-2-0/), and a cursory web search returned a preprint showing how \\\"it is possible to carefully orchestrate the computation and communication steps to overlap\\\" in ML MPC training and inference (https://arxiv.org/pdf/2209.13643). The paper might still be making valuable contributions in MPC parallelism, but highlighting such contributions might benefit from a more detailed comparison to prior work.\"], \"minor_comments\": [\"typo: \\\"Note that we cannot directly compare to PUMA Dong et al. (2023), which is designed under 3 compute parties. and three computing parties.\\\"\", \"I can't find the data showing that MPS reduces total delay by 33% to 61%. Is this from the difference between PM and PMT in Figure 6? Maybe this would be easier to break down if there was a delay column in Table 4?\"], \"questions\": [\"Have you considered alternatives to MPCFormer, such as THE-X (that you cite, Chen et al 2022) or other modern 2PC inference papers for transformers, especially if they use data-independent nonlinearity approximations? Or are there reasons why such baselines don't apply or are already outperformed by MPCFormer in a data selection setting where distillation data is scarce?\", \"If I understand correctly, computing ReLUs is still a bottleneck in MPC. This might be why FFNs take most of the computation time in Fig 1. Yet, your method introduces even more ReLUs by adding MLPs. Have you considered other learnable approximations as an alternative to MLPs, such as polynomials with learnable coefficients, which could be more MPC-friendly? Why should we expect MLP approximation to be the best way to obtain MPC-friendly proxy models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
2c7pfOqu9k
DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference
[ "Jinwei Yao", "Kaiqi Chen", "Kexun Zhang", "Jiaxuan You", "Binhang Yuan", "Zeke Wang", "Tao Lin" ]
Large language models (LLMs) are increasingly employed for complex tasks that process multiple generation calls in a tree structure with shared prefixes of tokens, including few-shot prompting, multi-step reasoning, speculative decoding, etc. However, existing inference systems for tree-based applications are inefficient due to improper partitioning of queries and KV cache during attention calculation.This leads to two main issues: (1) a lack of memory access (IO) reuse for KV cache of shared prefixes, and (2) poor load balancing.As a result, there is redundant KV cache IO between GPU global memory and shared memory, along with low GPU utilization. To address these challenges, we propose DeFT(Decoding with Flash Tree-Attention), a hardware-efficient attention algorithm with prefix-aware and load-balanced KV cache partitions. DeFT reduces the number of read/write operations of KV cache during attention calculation through **KV-Guided Grouping**, a method that avoids repeatedly loading KV cache of shared prefixes in attention computation. Additionally, we propose **Flattened Tree KV Splitting**, a mechanism that ensures even distribution of the KV cache across partitions with little computation redundancy, enhancing GPU utilization during attention computations. By reducing 73-99% KV cache IO and nearly 100% IO for partial results during attention calculation, DeFT achieves up to 2.23/3.59$\times$ speedup in the end-to-end/attention latency across three practical tree-based workloads compared to state-of-the-art attention algorithms. Our code is available at https://github.com/LINs-lab/DeFT.
[ "LLM inference", "attention", "memory-efficiency", "tree-based decoding" ]
Accept (Spotlight)
https://openreview.net/pdf?id=2c7pfOqu9k
https://openreview.net/forum?id=2c7pfOqu9k
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yahr39gtem", "wDzw8mPII3", "w2FdFmFpKy", "uVvAFs0sOq", "nwTQghID8w", "lCTjwkQ8T9", "UMwFqOCRVE", "SZpjQdOihq", "RwyCnrEUQu", "Qb96kedKnG", "PwIWYjlw0p", "O08NoKg3ap", "HMzOOJJjzF", "BnoZ5rW5Qx", "ArXTw7bCDB", "AUw6ImE2PB", "9gKYJbLO39", "7kqeTNAihN", "6Z0OFvF7G9", "2q2SmCm4gi", "24ziswpKmj" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1730637242528, 1732261737907, 1732261650332, 1732264390190, 1733235766283, 1732263904091, 1732736631578, 1733160730442, 1730544461553, 1734441816190, 1731163517623, 1732624596523, 1732261512327, 1729832569195, 1732263435698, 1732263313946, 1732264461303, 1732555630837, 1732263998067, 1737523909789, 1732263367010 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8454/Reviewer_pNpD" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Reviewer_pNpD" ], [ "ICLR.cc/2025/Conference/Submission8454/Reviewer_2d42" ], [ "ICLR.cc/2025/Conference/Submission8454/Area_Chair_hQU3" ], [ "ICLR.cc/2025/Conference/Submission8454/Reviewer_M6M5" ], [ "ICLR.cc/2025/Conference/Submission8454/Reviewer_2d42" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Reviewer_6baS" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8454/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces the DEFT (Decoding with Flash Tree-Attention) algorithm, aimed at enhancing efficiency in tree-structured language model (LLM) inference. Traditional approaches often fall short due to redundant memory access, inefficient handling of KV cache for shared prefixes, and poor GPU utilization. DEFT addresses these issues through two primary innovations: KV-Guided Grouping and Flattened Tree KV Splitting. Authors claim that these strategies optimize memory accesses and ensure balanced GPU utilization, leading to significant speed-ups in tree-based tasks like few-shot prompting, multi-step reasoning, and speculative decoding.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Authors introduce KV-Guided Grouping, which reuses memory for shared prefixes in the KV cache, minimizing redundant I/O operations.\\n2. Authors' approach to balanced workload distribution via Flattened Tree KV Splitting leads to better GPU usage.\\n3. Triton implementation provides strong empirical evidence of the efficacy of the method.\", \"weaknesses\": \"1. While single GPU performance is quite good, it is not clear how DeFT can scale to larger models requiring multiple GPUs.\\n2. Though there is a one-liner on vLLM comparison, there is no numerical comparison with vLLM given that vLLM also implements prefix-based KV-cache sharing.\\n3. The overhead of QKV PREPARATION PHASE is unclear from the empirical results.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Common Response(3/3)\", \"comment\": \"### CQ3\\n\\n> CQ3: how to improve the paper manuscript?\\n> \\n\\nWe thank all reviewers for their detailed reviews. We have made the following changes (in **ORANGE** ) to the paper and uploaded a new revision:\\n\\n- (suggested by reviewer **M6M5**) reorganize the elements in Figure 3 to distinguish our main techniques and baselines. We separated the baselines (Flash-Decoding, Radix-Attention, etc) and main techniques of DeFT (KV-guided Grouping and Flattened Tree KV Splitting) into three subgraphs.\\n- (suggested by reviewer **M6M5)** modify Figure 4 by adding the latency breakdown of DeFT-Node-Chunk .\\n- (suggested by reviewer **M6M5 and 6baS**) reorganized and reduced the call to the appendix in the main text of section 3, to make sure the reader can get the key design of DeFT intuitively and find details in the Appendix.\"}", "{\"title\": \"Common Response(2/3)\", \"comment\": \"### CQ2\\n\\n> CQ2: the comparison between DeFT and vLLM with its prefix-caching?\\n\\nWe argue that the direct comparison with vLLM is unfair and prefix-caching is a technique for a faster prefill stage, which is not the bottleneck and orthogonal to DeFT\\u2019s optimization on the decoding stage. \\n\\n- First, we want to point out that prefix-caching is a technique of memory storage optimization with the benefit of prefill phase acceleration, not a technique that optimizes memory access for decoding phase speedup.\\n - SGLang (DeFT develops based on an early version of SGLang) adopts a radix tree and vLLM adopts hash codes to maintain the KV cache storage with prefix-awareness.\\n - Prefix-caching can only reduce the Time To First Token (*TTFT*) in the prefill phase, while it only takes a very small percentage (<5% in most of the tasks we test, as it takes <2 seconds for 4k prompt length in Llama3-8B) of end-to-end latency for tasks with long generation lengths. It does not reduce the time needed to generate new tokens (the decoding phase).\\n - Instead, DeFT optimized time Per Output Token (*TPOT*) in the decoding phase by memory access reduction *(KV-guided grouping)* and balanced partitions *(Flattened tree splitting)* of KV cache, and the prefix-caching is just as an orthogonal technique for storage optimization in DeFT.\\n- Second, as a baseline of attention kernel, we argue that SGLang is fairer and more reliable because we can control every part the same except the attention kernel for comparison of performance. We would like to clarify that the current version of DeFT was developed based on SGLang, which is the first framework that supports flexible tree-structured management of KV cache along with an attention kernel (Radix Attention) that fits such management. We chose and prioritized SGLang as our major baseline for three reasons:\\n 1. It adopts radix tree for **Automatic KV Cache Reuse in storage level(DeFT can even reuse KV cache in memory access level)**, which is more flexible than vLLM; \\n 2. Based on the [SGLang\\u2019s documentation](https://lmsys.org/blog/2024-07-25-sglang-llama3/), **SGLang can even outperform vLLM in many workloads;** \\n 3. Besides, Radix attention kernel in SGLang is developed based on Triton (DeFT attention kernel is also on Triton), while attention kernels in vLLM are based on CUDA. Actually, the attention algorithm in these two frameworks is the same and the major difference is the implementation in Triton/CUDA. Therefore, Radix Attention is a fairer baseline of DeFT that can reduce the difference in implementation to show the effectiveness of DeFT algorithms.\\n- Third, although it is not fair to compare with vLLM\\u2019s attention kernel because CUDA\\u2019s implementation has a natural speed advantage over Triton (e.g, Flash-Attention on Triton would be 70% speed of Flash-Attention on CUDA), we still provide the comparison results with DeFT to prove the effectiveness of the DeFT-Flatten algorithm.\\n - [Experiment setting] Radix Attention and DeFT-Flatten is based on SGLang. The workload is few-shot prompting tokens in A100 80GB with tree width=1/5/20/30. The model is Llama3-8B. The prompt length is about 1k tokens.\\n - [Notions] The best latency is in **bold**, and the second best is in *italic*.\\n - **[Conclusion]** When the treewidth is small (<20), the attention latency of Paged Attention in vLLM is the best as it is based on CUDA while Radix Attention and DeFT-Flatten are based on Triton. When the treewidth is 30, DeFT-Flatten is faster than Paged Attention in attention/end-to-end latency with 1.25X/1.20X speedup, respectively. This is because the advantages of the DeFT-Flatten algorithm in memory access overcome the disadvantages of the implementation (Triton V.S. CUDA).\\n\\n| treewidth | framework+Attention kernel | end-to-end latency | attention latency |\\n| --- | --- | --- | --- |\\n| 1 | SGLang+Radix Attention | 10.64 | 3.82 |\\n| 1 | vLLM+Paged Attention | **8.37** | **2.01** |\\n| 1 | SGLang+DeFT-Flatten | *9.32* | *2.43* |\\n| 5 | SGLang+Radix Attention | 11.07 | 4.21 |\\n| 5 | vLLM+Paged Attention | **9.25** | **2.43** |\\n| 5 | SGLang+DeFT-Flatten | *9.57* | *2.67* |\\n| 20 | SGLang+Radix Attention | *12.37* | 5.99 |\\n| 20 | vLLM+Paged Attention | 12.58 | **4.19** |\\n| 20 | SGLang+DeFT-Flatten | **11.82** | *4.20* |\\n| 30 | SGLang+Radix Attention | *14.08* | 7.30 |\\n| 30 | vLLM+Paged Attention | 15.18 | *6.18* |\\n| 30 | SGLang+DeFT-Flatten | **12.69** | **4.94** |\"}", "{\"title\": \"Response to Reviewer 6bas(1/2)\", \"comment\": \"Thank you for your helpful feedback and insightful questions.\\n\\n**Weaknesses:**\\n\\n> W1: Presentation Clarity: While the supplementary material improves clarity, some sections of the main paper remain dense, and the inclusion of key explanations from the supplementary material into the main text could further enhance understanding. Significant critical information is gained through the supplementary material, **specifically regarding reproducibility and algorithm details**, which would benefit from inclusion in the main text.\\n> \\n\\nThanks for your suggestion for improving the clarity of our paper! See CQ3.\\n\\n> W2: Limited Discussion on Energy Efficiency: The paper still focuses primarily on speedup metrics, and while memory access reduction implies energy efficiency, an explicit discussion or measurement of energy consumption **would strengthen the work**.\\n> \\n\\nWe agree that energy efficiency is an important metric for real-world deployment. See Q2.\\n\\n> W3: Applicability in Varying Scenarios: Although the authors include experiments with varying tree widths and prompt lengths, further exploration of scenarios with **minimal shared prefixes** or very small tree widths would provide a more comprehensive understanding of DEFT's applicability.\\n> \\n\\nSee Q3. \\n\\n**Questions:**\\n\\n> Q1: Integration of Supplementary Material: Could the authors consider integrating key explanations and findings from the supplementary material into the main paper to improve readability and clarity for readers who may not delve into the appendix?\\n> \\n\\nThanks for your suggestion! We summarize the improvement of writing in CQ3.\\n\\n> Q2: Energy Efficiency Metrics: While DEFT reduces IO operations, have the authors considered measuring the impact on energy consumption or providing an analysis of energy efficiency improvements?\\n> \\n\\nThanks for pointing out a potential advantage of DeFT in energy efficiency! \\n\\nThe main focus of this work is latency and our roadmap is to add energy efficiency after we expand the DeFT to multi-GPU versions. We would like to explore the user-sensitive metrics like latency first then explore the service side metrics like energy efficiency. But we do believe energy is indeed a potential advantage of DeFT as the calculation is nearly the same but the memory access is much less\\u2014whether the memory access takes the most of energy cost still requires future experiments to verify.\\n\\n> Q3: Minimal Shared Prefixes Scenarios: How does DEFT perform in scenarios where the shared prefix is minimal or the tree width is very small? Are there any overheads introduced in such cases compared to existing methods?\\n> \\n- For the scenarios when the shared prefix is small, DeFT will have less speedup, because it will degenerate to Radix Attention. When tree width = 10, the shared prompt length reaches 1k, and there will be 1.39X attention speedup and 1.09X wall clock speedup, as shown in Table 7 of our paper.\\n- For the scenarios when treewidth is small, we compare DeFT-Flatten with the SOTA Radix Attention with the settings of tree size T=5 and 10 in speculative decoding, and treewidth=1,2,5 in few shot prompting. The model is llama3-8B and the GPU is A100 80GB. The results are as follows. We can see the speedup of attention latency is still obvious(up to 2.20x) but the end-to-end latency is up to 1.21x. The reason is that when the total number of tokens in a decoding tree is small, the bottleneck is the FFN rather than attention.\\n- Table: Speculative decoding (t=5/10, other settings are the same as our paper including prompt length=~1k tokens).\\n\\n| token tree size T | method | end-to-end latency (s) | attention latency (s) | attention speedup | e2e speedup |\\n| --- | --- | --- | --- | --- | --- |\\n| 5 | DeFT-Flatten | 44.34 | 4.15 | 1.73X | 1.18X |\\n| 5 | Radix Attention | 52.38 | 7.19 | - | - |\\n| 10 | DeFT-Flatten | 45.55 | 4.65 | 2.20X | 1.21X |\\n| 10 | Radix Attention | 55.44 | 10.25 | - | - |\"}", "{\"comment\": \"We truly appreciate your support and kind words. Thanks again for taking the time to review our work and for your thoughtful feedback and encouragement.\"}", "{\"title\": \"Response to Reviewer pNpD\", \"comment\": \"Thank you for your helpful feedback and positive recognition of our work.\\n\\n**Weaknesses:**\\n\\n> W1. While single GPU performance is quite good, it is not clear how DeFT can scale to larger models requiring multiple GPUs.\\n> \\n\\nSee CQ1.\\n\\n> W2. Though there is a one-liner on vLLM comparison, there is no numerical comparison with vLLM given that vLLM also implements prefix-based KV-cache sharing.\\n> \\n\\nSee CQ2.\\n\\n> W3. The overhead of QKV PREPARATION PHASE is unclear from the empirical results.\\n> \\n\\nThe cost of QKV preparation phase is low\\u2014 it only accounts for less than 5% of the e2e latency, while attention computation accounts for 35-70% of e2e latency. The cost of QKV preparation is from the materialization of memory addresses of tokens for Triton\\u2014we need to serialize the memory addresses into a tensor as an input for the Triton kernel.\"}", "{\"comment\": \"Thank you for taking the time to provide such constructive feedback and for recommending acceptance. Your insights and support mean a great deal to us.\"}", "{\"comment\": \"I have reviewed the author's response and appreciate their detailed answers. I believe DEFT is a significant contribution, and I will keep my current rating. All the best to the authors.\"}", "{\"summary\": \"The paper presents DEFT (Decoding with Flash Tree-Attention), an optimized algorithm for efficient inference in tree-structured large language model (LLM) tasks, such as few-shot prompting, multi-step reasoning, and speculative decoding. Existing methods face inefficiencies from redundant memory access of shared prefixes and unbalanced GPU workload distribution, which leads to low utilization and slower processing. DEFT addresses these issues with two key techniques: KV-Guided Grouping, which minimizes memory access by loading shared prefix data only once, and Flattened Tree KV Splitting, which enhances GPU utilization by evenly distributing workload across GPU units. Implemented on Triton, DEFT achieves up to 2.5x faster end-to-end speeds by significantly reducing memory operations, making it highly effective for complex, tree-based LLM applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **Efficient Memory Usage and Balanced Workload Distribution**: DEFT's KV-Guided Grouping minimizes redundant memory access by loading shared prefix data only once, reducing IO costs associated with repeatedly reloading the KV cache. Combined with the Flattened Tree KV Splitting strategy, which evenly distributes data across GPU units, DEFT maximizes GPU utilization by ensuring balanced workload distribution, thus avoiding bottlenecks and maintaining consistent processing speeds.\\n2. **Enhanced End-to-End Processing Speed**: Compared to state-of-the-art methods, DEFT achieves up to a 2.5x speedup in end-to-end latency, making it highly effective for tasks that require complex, tree-based structures like few-shot prompting and multi-step reasoning.\\n3. **Scalability Across Tasks**: DEFT demonstrates versatility by performing well across different tree-structured applications, such as speculative decoding, where shared prefix usage and efficient load balancing are particularly challenging.\", \"weaknesses\": \"1. **Lack of Comparison with Shared Prefix Infrastructure**: While DEFT introduces novel techniques for memory efficiency and load balancing, it lacks a direct comparison with existing infrastructure solutions like vLLM and DeepSpeed-MII, which already support shared prefix KV cache across different batches. Such a comparison would clarify DEFT\\u2019s advantages and limitations relative to widely adopted methods that also aim to reduce redundancy in KV cache management.\\n2. **Challenges with Distributed Memory and Tensor Parallelism**: DEFT\\u2019s current design primarily targets single-device GPU optimization and may not be directly compatible with distributed memory or tensor parallelism setups, which are commonly used to scale large language models across multiple GPUs. Adapting DEFT to work efficiently in distributed environments could require additional modifications to handle inter-device communication and memory sharing effectively, potentially limiting its scalability for very large models.\", \"questions\": \"1. Reasoning has become a popular approach to enhance the performance of large language models (LLMs) on complex tasks. Are there any future plans to integrate this method within task pipelines to achieve end-to-end improvements?\\n2. As noted in the weaknesses, tensor parallelism is widely used to scale large LLMs across multiple GPUs. Will this work be released as an open-source repository to help develop an infrastructure, similar to vLLM or DeepSpeed, that provides a usable framework for the public?\\n3. The test on speculative decoding sets T from 32 to 256, which is much larger than usual settings (<10), have you test speculative decoding with smaller T value?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents DeFT, a novel algorithm for enhancing tree-based decoding in LLM inference. It targets the inefficiencies of existing systems, such as redundant KV cache access and poor load balancing. By introducing techniques like KV-Guided Grouping and Flattened Tree KV Splitting, DeFT aims to optimize memory access and workload distribution. Empirically, it achieves significant speedup in end-to-end and attention latency compared to current state-of-the-art methods.\\n\\nThe paper's strengths lie in its timeliness and relevance, with a novel and theoretically sound approach supported by solid empirical evidence. However, it has weaknesses. The main text could be clearer, with some crucial details in the supplementary material. There's a lack of energy efficiency analysis, and further exploration in specific scenarios is needed.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised points regarding clarity of presentation (suggesting integrating supplementary material into the main text), energy efficiency measurement, performance in scenarios with minimal shared prefixes or small tree widths, and realistic scalability to larger model sizes. The authors addressed these by outlining changes made to improve paper clarity, explaining the plan to explore energy efficiency in future work, presenting performance data in relevant scenarios, and discussing challenges and possible solutions for extending DeFT to larger models. These responses showed the authors' thorough consideration of the issues and their commitment to improving the work, which weighed positively in the final decision.\"}", "{\"summary\": \"Tree-structured decoding is gaining more popularity in LLM serving due to the presence of applications such as multi-step reasoning and speculative decoding. Existing inference systems are inefficient due to their failure to be prefix-aware: they either perform redundant recomputation of KV caches for shared prompts, or repeatedly load and store KV caches of shared prompts during attention calculation. This paper presents DeFT, an efficient attention calculation algorithm with prefix-awareness and load-balanced KV cache partitions. DeFT uses KV-guided grouping to group the prefix's KV cache with all shared queries. It then uses flattened tree KV splitting which splits the KV cache into balanced partitions to reduce overhead in computation. Evaluations show that DeFT has better wall-clock time speedup in multiple tree-structured decoding applications compared to state-of-the-art baselines.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Tries to solve the important problem that current LLM serving systems are inefficient in computation and IO for tree-based decoding applications.\\n2. Provides good background on segmented attention and existing attention algorithms.\\n3. Evaluation results show decent speedup over baselines.\", \"weaknesses\": \"1. The paper is hard to follow. The design figures include too many details. Lack of clear explanation of the key techniques including KV-guided grouping and tree KV splitting.\\n2. Lack of evaluation or discussion on multi-node settings and other baselines.\", \"questions\": \"Thank you for submitting the paper to ICLR 2025! I think this paper tries to tackle the important problem of improving GPU utilization for LLM serving under the scenario of tree-structured generation. The paper provides a good background of example tree-structured applications, how existing attention algorithms work and how attention could be calculated in a segmented way. The evaluation of the new proposed algorithm demonstrates solid speedup over existing baselines. I overall feel positive about the paper with a few comments and suggestions for improvements.\\n\\nThe current illustration of the main algorithm in Section 3 is hard to follow.\\n\\nThere are remarks and comparisons here and there.\\n\\nFigure 3 includes too many notations and details that make the reader hard to recognize which are the baselines and which are the new techniques proposed in the paper. Even after reading all the text, I could not clearly figure out how flattened tree KV splitting works in detail. There are tons of places where the descriptions refer to the Appendix.\\nHowever, I think the reader should be able to grasp the intuition and how the algorithm works at a high level by just reading the main text of the paper.\\n\\nMy current understanding is that the core of the DeFT algorithm is to help create balanced and sharable QKV groups during the QKV Preparation Phase. It is probably better to clearly define how KV-guided grouping and flattened tree KV splitting work into two separate subsections, as they are the two main techniques proposed in the paper.\\n\\nIn terms of questions, how do you define the node sizes in the tree KV? \\n\\nIf the DeFT-Node-Chunk adds additional overhead due to imperfect splits after splitting by the nodes, could we first optimize the tree KV structure to ensure we have nodes of balanced sizes?\\n\\nIn the Attention Calculation Phase, how many techniques introduced in the paper are novel compared to previous works?\\n\\nIn addition, how does the proposed technique compare to [cascade inference algorithm](https://flashinfer.ai/2024/02/02/cascade-inference.html)? The cascade inference algorithm also makes the observation that the KV caches could be shared when there are common prefixes between requests. It first uses a multi-query attention kernel to compute the attention between queries and KV caches of the shared prefix, which goes through L1 cache and registers. Then it uses batch decode attention kernel to calculate for the remaining suffixes, which accesses the global memory and L2 cache.\\n\\nIn terms of experiments, it seems all evaluations are currently completed on a single A100 GPU.\\nHow would the performance be if the algorithm is applied in a multi-node distributed LLM inference setting?\\nWould any of the parallelization techniques affect the effectiveness of the splitting algorithm?\\nHow would the algorithm perform in a long context LLM serving scenario?\", \"other_questions\": \"1. For Table 5, why is there an even larger speedup for the case of upper-bound (no attention)? Isn't the proposed algorithm only optimizing for the attention operation?\\n\\n2. How would different types of attention operation (e.g. multi-head, multi-query, or group-query attention) affect the performance of DeFT?\\n\\n3. For Figure 4, what would the latency breakdown be for DeFT-Node-Chunk? Would unpaged versions of DeFT-Node-Chunk and DeFT-Flatten incur similar overhead for KV management?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"I have reviewed the author's rebuttal and appreciate their responses. I believe DEFT is a valuable contribution, and I will maintain my current rating. Best wishes to the authors, and I look forward to seeing future developments on DEFT.\"}", "{\"title\": \"Common Response(1/3)\", \"comment\": \"We appreciate the reviewers for their insightful comments and constructive feedback. We are pleased to note that all reviews were positive and that the reviewers recognized our work as addressing a significant problem.\\n\\nWe would like to first address some common questions, and then respond to specific inquiries raised by the reviewers. The improvements to the manuscript have been finished and the modified part is in **ORANGE**, please refer to Common Question 3 (CQ3) for an outline of updates.\\n\\n## **Questions in common(CQs)**\\n\\n### CQ1\\n\\n> CQ1: can DeFT be extended to multi-GPU versions? What\\u2019s the performance of DeFT in multiple GPUs setting?\\n> \\n- We would like to point out that the single-GPU version of DeFT targets low latency (request finish time), which is non-trivial already, as there are tradeoffs between redundant memory access/calculation and load-balancing. The LLM serving asks us to reduce the latency to below a threshold and then improve the throughput as much as possible. Improving the throughput in multiple GPUs without sacrificing the latency too much would be the future step, which could be achieved by a better design of batching and scheduling.\\n- Regarding the **parallelization techniques and their impact**\\n - Tensor parallelism(TP) is completely orthogonal to the current single-GPU version of DeFT because TP does partitioning in the head for attention and hidden dimension for MLP, while DeFT does partitioning in the dimension of sequence length(s). TP introduces two allreduce communication, in the attention module, the head dimension is split to distribute the computation over different GPUs, in this case we just need to manage the KV of each head separately on multiple GPUs, in the MLP module, the intermediate hidden dimension is partitioned, which does not modify any implementation of our approach.\\n - If we want to have sequence parallelism(SP) in multiple GPUs, it\\u2019s non-trivial in the system design because there could be problems like KV cache fragmentation and the trade-off between communication of partial attention and KV cache movement for better locality. There are some latest works[1][2] about sequence parallelism for general sequence-based serving systems that explored this topic. Tree-based decoding will make it more challenging, but we are interested in exploring this topic in the future.\\n\\n*[1] Sun, B., Huang, Z., Zhao, H., Xiao, W., Zhang, X., Li, Y., & Lin, W. (2024). Llumnix: Dynamic Scheduling for Large Language Model Serving. In Proceedings of the 18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24).*\\n\\n*[2] B. Wu, S. Liu, Y. Zhong, P. Sun, X. Liu, and X. Jin, \\u201cLoongserve: Efficiently serving long-context large language models with elastic sequence parallelism,\\u201d* In Proceedings of the ACM SIGOPS 30th Symposium on Operating Systems Principles (SOSP '24).\"}", "{\"summary\": \"The paper presents DEFT (Decoding with Flash Tree-Attention), a hardware-efficient algorithm that optimizes large language model (LLM) inference for tree-based decoding tasks like few-shot prompting and multi-step reasoning. Current systems struggle with redundant Key-Value (KV) cache loading and poor load balancing, causing inefficient memory use and low GPU utilization. DEFT solves this with KV-Guided Grouping, which reuses shared prefixes to reduce KV cache access, and Flattened Tree KV Splitting, which improves GPU efficiency. Implemented with OpenAI Triton, DEFT achieves significant speedups in attention latency compared to existing methods\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Relevance: The paper tackles a timely and significant problem in optimizing LLM inference for tree-based decoding applications, which is highly relevant to current AI research and deployment.\", \"originality\": \"Introduces a novel attention algorithm, DEFT, leveraging KV-Guided Grouping and Flattened Tree KV Splitting to address memory access inefficiencies and load balancing.\", \"theoretical_justification\": \"Provides solid theoretical analysis to justify the proposed methods, including IO complexity analysis and discussions on the correctness of the algorithm.\", \"empirical_validation\": \"Demonstrates significant improvements in end-to-end latency and attention computation across multiple tasks (few-shot prompting, multi-step reasoning, speculative decoding) compared to state-of-the-art baselines. The supplementary material includes extensive experimental results and ablation studies, strengthening the empirical validation.\", \"comparison_with_concurrent_works\": \"The supplementary material provides detailed comparisons with concurrent works, clarifying the advantages of DEFT in handling multi-level tree decoding and addressing unbalanced workloads.\", \"scalability\": \"The authors provide results demonstrating DEFT's scalability to larger models (up to 34B parameters) and different hardware setups.\", \"accuracy_preservation\": \"The paper includes analysis showing that DEFT maintains model accuracy, with negligible differences in attention scores and perplexity compared to baseline methods\", \"weaknesses\": \"Presentation Clarity: While the supplementary material improves clarity, some sections of the main paper remain dense, and the inclusion of key explanations from the supplementary material into the main text could further enhance understanding. Significant critical information is gained through the supplementary material, specifically regarding reproducibility and algorithm details, which would benefit from inclusion in the main text.\", \"limited_discussion_on_energy_efficiency\": \"The paper still focuses primarily on speedup metrics, and while memory access reduction implies energy efficiency, an explicit discussion or measurement of energy consumption would strengthen the work.\", \"applicability_in_varying_scenarios\": \"Although the authors include experiments with varying tree widths and prompt lengths, further exploration of scenarios with minimal shared prefixes or very small tree widths would provide a more comprehensive understanding of DEFT's applicability.\", \"questions\": \"Integration of Supplementary Material: Could the authors consider integrating key explanations and findings from the supplementary material into the main paper to improve readability and clarity for readers who may not delve into the appendix?\", \"energy_efficiency_metrics\": \"While DEFT reduces IO operations, have the authors considered measuring the impact on energy consumption or providing an analysis of energy efficiency improvements?\", \"minimal_shared_prefixes_scenarios\": \"How does DEFT perform in scenarios where the shared prefix is minimal or the tree width is very small? Are there any overheads introduced in such cases compared to existing methods?\", \"realistic_scalability\": \"Do the authors foresee any limitations or challenges in extending DEFT to more common larger model sizes (e.g., 70B parameters, or 400B) or to different model architectures beyond those tested? These larger models generally excel at complex multi-step reasoning tasks compared to the <32B counterparts, which may reveal different patterns in inference and could affect the effectiveness or accuracy retention of your approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The paper makes a significant contribution to optimizing LLM inference for tree-based decoding tasks, introducing novel methods that are both theoretically sound and empirically validated. The authors have addressed previous concerns through additional material, improving the clarity and robustness of the work. Therefore, I recommend acceptance\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer M6M5(3/3)\", \"comment\": \"> Q8: For Figure 4, what would the latency breakdown be for DeFT-Node-Chunk? Would unpaged versions of DeFT-Node-Chunk and DeFT-Flatten incur similar overhead for KV management?\\n> \\n\\nThe breakdown of DeFT-Node-Chunk (paged) is similar to DeFT-Flatten(paged) but with more attention overhead:\\n\\n| Method | End to End (s) | Attention Computation (s, % of End to End) | KV Management (s, % of End to End) |\\n| --- | --- | --- | --- |\\n| DeFT-Flatten | 49.01 | 14.28 (29.13%) | 6.76 (13.79%) |\\n| DeFT-Node | 89.19 | 51.97 (58.26%) | 6.84 (7.67%) |\\n| DeFT-Node-Chunk | 53.44 | 16.48 (30.84%) | 6.68 (12.51%) |\\n| Radix Attention | 69.96 | 35.71 (51.05%) | 6.73 (9.62%) |\\n\\n*Table: Latency breakdown for speculative decoding with a token tree of 32 queries, whose tree topology is from Medusa, in seconds. Values in parentheses represent the percentage of the end-to-end time.*\\n\\n- We don\\u2019t implement unpaged versions of DeFT-Node-Chunk and DeFT-Flatten, because the comparison between DeFT-Node (paged) and DeFT-Node (unpaged) can show the superiority of paged memory management already.\\n- Theoretically, DeFT-Node (unpaged) and DeFT-Node-Chunk (unpaged) should incur similar overhead for KV management with DeFT-Node (unpaged), where the materialization/concatenation of KV cache from a tree structure to a single tensor for attention calculation is expensive.\\n- As for paged memory management, DeFT-Node, DeFT-Node-Chunk, and DeFT-Flatten should have nearly the same memory management cost as the difference between these three lies in the memory access of KV cache, not the memory storage.\"}", "{\"title\": \"Response to Reviewer M6M5(1/3)\", \"comment\": \"**Thank you for your constructive comments and insightful questions.**\\n\\n**Weaknesses:**\\n\\n> W1. The paper is hard to follow. The design figures include too many details. Lack of clear explanation of the key techniques including KV-guided grouping and tree KV splitting.\\n> \\n\\nSee [CQ3](https://openreview.net/forum?id=2c7pfOqu9k&noteId=wDzw8mPII3) about how will we make our paper writing better. We appreciate your feedback. We would like to point out that we already included the explanation of our two key techniques: (1) KV-guided Grouping (lines 278-285 on the left part of Figure 3, and lines 318-321); (2) Flattened Tree KV Splitting (lines 286-296 on the left part of Figure 3, and lines 351-362 );\\n\\n> W2. Lack of evaluation or discussion on multi-node settings and other baselines.\\n> \\n\\nSee [CQ1](https://openreview.net/forum?id=2c7pfOqu9k&noteId=HMzOOJJjzF) for a discussion on multi-node settings and CQ2 for a discussion/comparison with vLLM.\\n\\n> Q1: how do you define the node sizes in the tree KV?\\n> \\n\\n**We define the node size of the tree KV by merging tokens as many as possible into a single node while avoiding the introduction of a causal mask.** \\n\\n- When using a traditional prefix/trie tree structure to manage tokens, each node typically represents a single token. However, this approach is not efficient for attention computation if we treat each token\\u2019s KV as a separate group when they share the same queries.\\n- In essence, each node should contain as many tokens as possible, with all tokens within the node being associated with the same set of queries.\\n- For example, consider a scenario where query q1 requires tokens [t1, t2, t3, t4, t5, t6] and query q2 requires tokens [t1, t2, t3, t4, t5', t6']. Here, q1 and q2 can share up to 4 tokens, [t1, t2, t3, t4], which can be grouped into a single node. If we were to merge tokens [t1, t2, t3, t4, t5, t5'] into one node, we would need a causal mask because the KV cache of t5 and t5' is only required by q1 and q2, respectively.\\n\\n> Q2: If the DeFT-Node-Chunk adds additional overhead due to imperfect splits after splitting by the nodes, could we first optimize the **tree KV structure to ensure we have nodes of balanced sizes?**\\n> \\n- First, we want to clarify that for unpaged memory, KV cache tensors are structured in a tree psychically, while in paged memory we don\\u2019t need to do so\\u2014we just need to store the KV cache of tokens discretely in a memory pool, with records in a tree structure that maps between each token and its memory address.\\n - Therefore, for paged memory management, we don\\u2019t need to optimize the tree KV structure in **memory storage**.\\n - Instead, we just need to optimize the logical grouping of KV cache and queries during QKV Preparation phase, for low **memory access** and great **load-balancing** in Attention Calculation phase. In this phase, DeFT-Flatten can achieve balanced sizes of KV blocks for different QKV partitions.\\n- Second, we want to point out that the cost of QKV preparation phase is low\\u2014 it only accounts for less than 5% of the e2e latency, while attention computation accounts for 35-70% of e2e latency.\\n - The cost is from the materialization of memory addresses of tokens\\u2019 KV cache into a tensor as input for Triton kernel.\\n - Therefore, we don\\u2019t need to care about the cost of grouping for balanced nodes/chunks of KV cache too much.\\n\\n> Q3: In the Attention Calculation Phase, how many techniques introduced in the paper are novel compared to previous works?\\n> \\n- One of our important contributions is the insight that there is a large design space of QKV Preparation phase for tree-structured LLM inference efficiency, which is ignored by systems for sequence-based decoding. DeFT\\u2019s main contribution lies in this phase.\\n- As for Attention Calculation Phase, the existing works like Flash-Attention/Flash-Decoding are already well-designed. The goal of this phase is to fit the QKV partitions from the previous phase as a part of the kernel/system design. The contribution of DeFT in this phase lies in the kernel design and implementation. The global reduction of partial attention in Flash-Decoding is for sequence-based decoding, which cannot be aware of the tree-topology for global reduction in tree-based decoding. Therefore, we propose Tree-Topology-Aware Global Reduction and implement a fused kernel, as shown in Table 10 and Figure 10 b), Appendix A4.\"}", "{\"title\": \"Response to Reviewer 6bas(2/2)\", \"comment\": \"**(continued for Q3)**\\n- Table: Few shot prompting (treewidth = 1,2,5, other settings are the same as our paper including prompt length=~4k tokens).\\n - To point out, the attention latency in treewidth 1 is just because of the implementation difference between DeFT-Flatten and Radix Attention.\\n - We can see when we expand the treewidth from 1 to 5, the attention latency in DeFT-Flatten increases by only 0.24s, while it increases by 0.39s due to a lack of awareness for reusing KV cache memory access in Radix Attention.\\n\\n| treewidth | method | end-to-end latency(s) | attention latency(s) | attention speedup | e2e speedup |\\n| --- | --- | --- | --- | --- | --- |\\n| 5 | DeFT-Flatten | 9.57 | 2.67 | 1.58X | 1.16X |\\n| 5 | Radix Attention | 11.07 | 4.21 | - | - |\\n| 2 | DeFT-Flatten | 9.40 | 2.48 | 1.62X | 1.15X |\\n| 2 | Radix Attention | 10.83 | 4.02 | - | - |\\n| 1 | DeFT-Flatten | 9.34 | 2.43 | 1.57X | 1.14X |\\n| 1 | Radix Attention | 10.64 | 3.82 | - | - |\\n- Compared with Radix Attention, DeFT-Flatten just introduced a little bit more (<5% of end-to-end latency) overhead in QKV preparation for serializing the memory addresses and grouping, yielding more than 1.57X/1.14X speedup in attention/end-to-end latency on the few shot prompting workloads with treewidth 1-5 shown in the above table. Larger treewidth would have a much more obvious speedup while the QKV preparation cost is still negligible.\\n\\n> Q4: Realistic Scalability: Do the authors foresee any limitations or challenges in extending DEFT to more common larger model sizes (e.g., 70B parameters, or 400B) or to different model architectures beyond those tested? These larger models generally excel at complex multi-step reasoning tasks compared to the <32B counterparts, which may reveal different patterns in inference and could affect the effectiveness or accuracy retention of your approach.\\n> \\n\\nWe would like to point out DeFT is an accurate attention algorithm, which means it would not bring a difference in accuracy or effectiveness, as shown in Table 15, Appendix A7. The major challenges in extending DeFT to larger model sizes lie in efficiency:\\n\\n- [distributed setting needed for long-context/large-treewidth reasoning with large models] A larger model size takes larger memory storage, which means the memory remaining for KV cache storage would be less. We need to expand DeFT to a multi-GPU setting for long-context or large treewidth scenarios.\\n - It can be implemented by tensor parallelism easily as TP is orthogonal to DeFT;\\n - If we want additional sequence parallelism, it would be complex in system designs as discussed in CQ1.\\n- [distributed setting brings more challenges] A distributed setting brings an additional cost of communication among different GPUs. We have these open questions to answer in the future:\\n - How to overlap this cost with the computation for overhead hidden?\\n - How to schedule the requests of different decoding trees in multiple GPUs for large throughput without sacrificing the latency too much?\"}", "{\"title\": \"Appreicate the efforts of all reviewers\", \"comment\": \"We thank all reviewers for their constructive and insightful efforts in evaluating this work. We have uploaded revised files, with several modifications (in **ORANGE**) :\\n- (suggested by reviewer **M6M5**) reorganize the elements in Figure 3 to distinguish our main techniques and baselines. We separated the baselines (Flash-Decoding, Radix-Attention, etc) and main techniques of DeFT (KV-guided Grouping and Flattened Tree KV Splitting) into three sub-figures.\\n- (suggested by reviewer **M6M5)** modify Figure 4 by adding the latency breakdown of DeFT-Node-Chunk .\\n- (suggested by reviewer **M6M5 and 6baS**) reorganized and reduced the call to the appendix in the main text of section 3, to make sure the reader can get the key design of DeFT intuitively and find details in the Appendix.\\n\\nWe greatly appreciate the time and effort you've dedicated to reviewing our paper. Your feedback has played a crucial role in enhancing the quality of our manuscript. If you have any additional questions or comments, please feel free to reach out.\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer 2d42\", \"comment\": \"Thank you for your helpful feedback and positive recognition of our work.\\n\\n**Weaknesses:**\\n\\n> W1. **Lack of Comparison with Shared Prefix Infrastructure**: While DEFT introduces novel techniques for memory efficiency and load balancing, it lacks a direct comparison with existing infrastructure solutions like vLLM and DeepSpeed-MII, which already support shared prefix KV cache across different batches. Such a comparison would clarify DEFT\\u2019s advantages and limitations relative to widely adopted methods that also aim to reduce redundancy in KV cache management.\\n> \\n\\nSee CQ2.\\n\\n> W2. **Challenges with Distributed Memory and Tensor Parallelism**: DEFT\\u2019s current design primarily targets single-device GPU optimization and may not be directly compatible with distributed memory or tensor parallelism setups, which are commonly used to scale large language models across multiple GPUs. Adapting DEFT to work efficiently in distributed environments could require additional modifications to handle inter-device communication and memory sharing effectively, potentially limiting its scalability for very large models.\\n> \\n\\nSee CQ1. \\n\\n**Questions:**\\n\\n> Q1. Reasoning has become a popular approach to enhance the performance of large language models (LLMs) on complex tasks. Are there any future plans to integrate this method within task pipelines to achieve end-to-end improvements?\\n> \\n- We agree that reasoning is important for the future LLM serving system. Our plan consists of two parts:\\n - (1) to support more frameworks: integrate DeFT(developed based on an early version of SGLang) to the vLLM;\\n - (2) to support reasoning frameworks: LLM Reasoners is a great framework specialized for reasoning to facilitate the development and evaluation of reasoning algorithms for LLMs. We plan to contact them for potential cooperation to improve the efficiency of the whole reasoning pipeline, including the efficiency of attention optimized by DeFT Attention Kernel and other components like tree search.\\n\\n> Q2. As noted in the weaknesses, tensor parallelism is widely used to scale large LLMs across multiple GPUs. Will this work be released as an open-source repository to help develop an infrastructure, similar to vLLM or DeepSpeed, that provides a usable framework for the public?\\n> \\n\\nThank you for your expectations for our future work!\\n\\n- Tensor parallelism (TP) is completely orthogonal to the current single GPU version of DeFT, as discussed in CQ1.\\n- Supporting more frameworks is definitely in our roadmap: DeFT develops based on the early version of [SGLang](https://lmsys.org/blog/2024-07-25-sglang-llama3/), an LLM serving framework that can outperform vLLM in most of the tasks. What\\u2019s more, a faster CUDA kernel is working in progress as well: the current version of DeFT attention kernel is based on Triton but a CUDA version would be faster.\\n- As for why we set SGLang as our major baseline not vLLM right now, see CQ2.\\n\\n> Q3. The test on speculative decoding sets T from 32 to 256, which is much larger than usual settings (<10), have you test speculative decoding with smaller T value?\\n> \\n- The setting of decoding tree sizes (32-256) is from Medusa, where the tree size of 64 tokens is the best in speedup discussed in the ablation study of Medusa paper: it shows a better acceleration rate than a tree size of 256 tokens with nearly the same acceptance rate of tokens, but much higher acceptance rate than small token tree sizes (e.g., 16 tokens).\\n- We provided the performance when T is small (< = 10) in A100 with Llama3-8B model as follows.\\n - Setting: The prompt length is about 1k tokens. We test Llama3-8B model in A100 80GB. The number of generated tokens is about 5K.\\n - Conclusion: We can see the speedup attention latency is still obvious(1.73-2.20x) but the end-to-end latency is 1.18-1.21x with T=5/10. The reason is that when the total number of tokens in a decoding tree is small, the bottleneck is the FFN rather than attention.\\n - As shown in Table 18 of Appendix A7, our ablation study shows more tokens in the decoding tree( one way is to increase the prompt length), and more Attention/FFN latency ratio (A/F-LR). For long-context scenarios, attention dominates the end-to-end latency, which brings a great speedup potential on wall-clock time.\\n\\n| token tree size T | method | end-to-end latency (s) | attention latency (s) | attention speedup | e2e speedup |\\n| --- | --- | --- | --- | --- | --- |\\n| 5 | DeFT-Flatten | 44.34 | 4.15 | 1.73X | 1.18X |\\n| 5 | Radix Attention | 52.38 | 7.19 | - | - |\\n| 10 | DeFT-Flatten | 45.55 | 4.65 | 2.20X | 1.21X |\\n| 10 | Radix Attention | 55.44 | 10.25 | - | - |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to Reviewer M6M5(2/3)\", \"comment\": [\"> Q4: In addition, how does the proposed technique compare to\\u00a0[cascade inference algorithm](https://flashinfer.ai/2024/02/02/cascade-inference.html)? The cascade inference algorithm also makes the observation that the KV caches could be shared when there are common prefixes between requests. It first uses a multi-query attention kernel to compute the attention between queries and KV caches of the shared prefix, which goes through L1 cache and registers. Then it uses batch decode attention kernel to calculate for the remaining suffixes, which accesses the global memory and L2 cache.\", \">\", \"Cascaded inference is one of our concurrent works. The algorithm is the same as Hygragen which we discussed in Table 9, Appendix 3. We both have the insight that IO sharing of prefix KV cache matters. However, we have the following differences.\", \"**Different scenarios:**\", \"Cascaded inference targets for single-context batch sampling, which only contains 2 cascades-a prefix and suffixes, as a special case of tree-based decoding;\", \"DeFT targets for general tree-based decoding with multiple-cascades, including few-shot prompting, multi-step reasoning, speculative decoding, and etc.\", \"**Different challenges:** as we target different scenarios, we notice a trade-off between redundant calculation and load-balancing for tree-based decoding, as the node length varies a lot.\", \"If we adopt DeFT-Node/DeFT-Node-Chunk as the attention algorithm, there is no redundant calculation introduced as there is no result that would be masked, but with unbalanced workloads;\", \"If we adopt DeFT-Flatten as the attention algorithm, the partitions are balanced but invalid calculation is introduced along with causal masks.\", \"Cascaded inference does not address the challenges above.\", \"**Different designs:** We have different QKV partition strategies during QKV preparation phase.\", \"Cascaded inference groups a prefix and suffixes into two QKV groups, then combines 2 kernels for the prefix and suffixes. It works in the 2-level scenario, but in the case of multi-level cascade inference, cascaded inference doesn\\u2019t have an effective way to handle all intermediate levels.\", \"DeFT can automatically fit multiple levels of prefixes with the sharing of IOs, providing greater acceleration potential.\", \"**Different implementations:**\", \"Cascaded inference cannot be expanded to multi-cascades in a single kernel. It needs to iteratively call many kernels by users, which can introduce potential great kernel launching costs.\", \"DeFT can automatically handle multi-cascaded prefixes sharing within a single kernel, which is unified in algorithm logic and efficient in hardware.\", \"> Q5: In terms of experiments, it seems all evaluations are currently completed on a single A100 GPU. **How would the performance be if the algorithm is applied in a multi-node distributed LLM inference setting? Would any of the parallelization techniques affect the effectiveness of the splitting algorithm?** How would the algorithm perform in a long context LLM serving scenario?\", \">\", \"**(single-GPU for low latency, then multi-GPU for high throughput)** The real-world serving asks us to satisfy the latency within a threshold and then improve the throughput as much as possible. We argue that latency is the first thing we need to satisfy and it\\u2019s non-trivial already. The throughput in multiple GPUs would be the next step to be optimized in the follow-up work. See CQ1 for details.\", \"(**parallelization techniques and their impact)** Tensor parallelism is orthogonal to DeFT**.** See CQ1 for details.\", \"As for a long-context LLM serving scenario, especially for the case when the prefixes are long, DeFT can even have a more obvious speedup, as shown in **Table 7.** Intuitively, this is because, for a long-context scenario, the attention computation takes most of the end-to-end latency, bringing a larger potential for wall-clock time speedup.\", \"> Q6: For Table 5, why is there an even larger speedup for the case of upper-bound (no attention)? Isn't the proposed algorithm only optimizing for the attention operation?\", \">\", \"As we mentioned in the caption of **Table 5**, Upper-bound (no attention) refers to the maximum speedup we could achieve for the best wall-clock latency baseline (Radix Attention) if we exclude the attention computation (i.e, attention latency is reduced to 0). As we cannot reduce the attention overhead to 0, it\\u2019s the speedup upper bound of e2e latency.\"]}" ] }
2bn7gayfz9
CTBench: A Library and Benchmark for Certified Training
[ "Yuhao Mao", "Stefan Balauca", "Martin Vechev" ]
Training certifiably robust neural networks is an important but challenging task. While many algorithms for (deterministic) certified training have been proposed, they are often evaluated on different training schedules, certification methods, and systematically under-tuned hyperparameters, making it difficult to compare their performance. To address this challenge, we introduce CTBench, a unified library and a high-quality benchmark for certified training that evaluates all algorithms under fair settings and systematically tuned hyperparameters. We show that (1) almost all algorithms in CTBench surpass the corresponding reported performance in literature in the magnitude of algorithmic improvements, thus establishing new state-of-the-art, and (2) the claimed advantage of recent algorithms drops significantly when we enhance the outdated baselines with a fair training schedule, a fair certification method and well-tuned hyperparameters. Based on CTBench, we provide insights into the current state of certified training and suggest future research directions. We are confident that CTBench will serve as a benchmark and testbed for future research in certified training.
[ "certified training", "benchmark", "open-source library" ]
Reject
https://openreview.net/pdf?id=2bn7gayfz9
https://openreview.net/forum?id=2bn7gayfz9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xU06RGVYWM", "pt5mpfJC01", "m3bWCDPdg8", "cPaYK1bwOa", "WNHNKC4aBZ", "TOIeynM3l0", "TNdMQtXm9m", "PHqrCIdJeX", "On9MbQpIdn", "O9d1QZmAnq", "N9Gvc2DOVg", "LYZ5cNVMMm", "IqCCV2CIC3", "CojUbPV0qk", "CQvJhhdmy4", "9FhRn1virx", "8cXxxRCnq8", "7LRiwL4Faa", "5xFBBCQdUZ", "4UhFpqEGuz", "2zIeidK1ie", "2c7kW9ba2i" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734699007663, 1732012512417, 1733057855752, 1733057994508, 1733310622158, 1732012869620, 1732012344297, 1731199412945, 1732012363470, 1730473397523, 1729595913260, 1732012253483, 1737523774847, 1732786886643, 1730671730968, 1732537140010, 1733310597341, 1733163112136, 1733058119189, 1733162025926, 1732564419714, 1732012682362 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6522/Area_Chair_QNPa" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_xBxH" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_t9Mr" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_s9m2" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_t9Mr" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_Z4Eb" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_s9m2" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_Z4Eb" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Reviewer_t9Mr" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ], [ "ICLR.cc/2025/Conference/Submission6522/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper develops a benchmark and library for several representative deterministic certified training algorithms for fair comparison on hyperparameters, training schedules and (exact) certification methods. The reviewers generally agree that this paper is well-written, however, lacking technical novelty due to the nature of the topic. The submission will benefit from incorporating error bars of the results on all the dataset reported to show informative comparison. On a side note, the L2 (and more generally, Lp, p>=1) certified training can be directly extended from the Linf certified training method, since propagating the bounds are essentially the same and the only difference would be at the input when using holder's inequality. The authors are urged to include other Lp results to further strengthen the paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers and authors have fruitful discussions regarding the novelty, technical contributions, statistical significance of the results, and the impact of the work. I agree with the reviewers that there are several unresolved problems (e.g. lacking statistical significance on the numbers reported in the table, the authors should report results using multiple exact/inexact verifiers, and investigate Lp certified training results).\"}", "{\"title\": \"Response to $\\\\Rz$\", \"comment\": \"We are happy to hear that Reviewer $\\\\Rz$ considers our work is well-written, tackles an important problem, needed by the field and provides novel insights into the deterministic certified training. In the following, we address all concrete questions raised by Reviewer $\\\\Rz$.\\n\\n**Q1: Is the proposed benchmark sustainable? How could others submit their work to the existing leaderboard?**\\n\\nWe would like to note that the primary goal of our work is to develop a benchmark rather than a leaderboard. Benchmarks differ from leaderboards in that a benchmark needs to evaluate in fair and comparable settings, while leaderboards simply take numbers reported in the literature as grounded. Therefore, benchmarks are naturally less sustainable than leaderboards. In addition, as the field advances, the benchmark setting often evolves. For example, we expect the field will make advances towards scalable certified training, and thus in the future, a larger model than the current SOTA architecture may be used in the future benchmarks. Our benchmark represents the current knowledge of the field, and thus we do not expect submissions of new numbers to this benchmark, otherwise it may introduce unfair evaluations. This deviation of benchmarks and leaderboards has been observed in practice. For example, [1] develops a benchmark along with a leaderboard. While their leaderboard is maintained, their benchmark is never updated, as expected. However, this does not imply that a benchmark is unnecessary for the field, as a fair and high-quality evaluation provides more scientific conclusions than a leaderboard which merely draws reported numbers from literature.\\nIn addition, our provided library makes development of future benchmarks much easier than before, increasing the sustainability of the benchmark.\\n\\n\\n**Q2: Regarding Section 5.1, why do we care if certified models have fewer flipped neurons (less fragmentation)? Is this expected given that the models are more robust?**\\n\\nRobustness is not directly related to less fragmentation. For example, as shown in Figure 3, the more robust SABR model has more fragmentation than the less robust IBP model. Therefore, robustness does not imply less fragmentation.\\n\\nLoss fragmentation is of special interest to certified models. This is because the complexity and thus computational costs of certification algorithms highly depend on loss fragmentation. Therefore, understanding loss fragmentation is important for certified training, and our study and insights about how certified training affects loss fragmentation facilitates future work in certified training.\\n\\n**Q3: Regarding Section 5.3, why do methods like TAPS and MTL-IBP deactivate more neurons (for clean input) but achieve better accuracy (compared to IBP)? Is there a theoretical framework to explain the relationship between neuron deactivation and (certified) robustness?**\\n\\nWhile insufficient model utilization (number of neurons activated) strongly affects performance when the model does not have sufficient capacity, high model utilization does not necessarily bring better performance, e.g., in adversarial training, EDAC has better clean accuracy than PGD but less model utilization. This potentially explains why IBP is worse than MTL-IBP in terms of clean accuracy. Regarding the relationship between the number of deactivated neurons and certified robustness, there is some preliminary but not yet complete theoretical framework explaining this. For example, [2] shows that certified training increases propagation tightness, a metric relating to the tightness of certification, and one way to increase propagation tightness is to deactivate more neurons. Intuitively, this is because IBP bounds of deactivated neurons are zero, which matches the exact bounds.\\n\\n**Q4: Is there a way to leverage shared mistake patterns to improve certified training?**\\n\\nWe believe this is one promising future direction for certified training. As discussed in Section 6, algorithms leveraging this, e.g., curriculum learning, might further improve certified training. These attempts are out of the scope of this work, and we leave them as future work.\\n\\n**Reference**\\n\\n[1] Li et al., Sok: Certified robustness for deep neural networks, 2023.\\n\\n[2] Mao et al., Understanding certified training with interval bound propagation, 2024.\"}", "{\"title\": \"Reply to Reviewer $\\\\Rt$ (1/3)\", \"comment\": \"We are happy to further address Reviewer $\\\\Rt$\\u2019s concerns below:\\n\\n**Statistical Significance of Results**\\n\\nWe appreciate the reviewer\\u2019s concern regarding statistical significance. As observed in prior works on certified training, the reported improvements often fall within a similar order of magnitude. For example, [1] reports certified accuracy gains of approximately 1% for MNIST at $\\\\epsilon=0.3$ and 2% for CIFAR-10 at $\\\\epsilon=8/255$, [2] reports improvements of roughly 0.3% for MNIST at both $\\\\epsilon=0.1$ and $\\\\epsilon=0.3$, 1% for CIFAR-10 $\\\\epsilon=2/255$ and 0.15% for CIFAR-10 $\\\\epsilon=8/255$, and [3,4,5] report similar improvements depending on the setting. In our benchmark, improvements for methods such as SABR and MTL-IBP also fall within this range, highlighting consistency with established research. Also note that in previous work it is common to run multiple random seeds and report only the best certified accuracy, underlining that if our improvements were not statistically significant, previous work could have easily achieved similar numbers.\\n\\nMoreover, during our hyperparameter finetuning process, we observe stable trends when varying different parameters. For instance, increasing regularization strength or adjusting the robust weight and epsilon-shrinking factor produces predictable changes in certified and clean accuracy. This demonstrates a high signal-to-noise ratio in our experiments, suggesting that our reported improvements are robust.\\n\\nHowever, we acknowledge the importance of explicitly quantifying uncertainty. Unfortunately, due to the high computational cost of these experiments, it is not feasible to fully address this during the rebuttal phase. For instance, training and certifying a single CIFAR-10 network trained with SABR or MTL-IBP requires approximately 2-3 days on a single GPU, while TinyImageNet takes even longer. However, in Tables S1 and S2 below we present all randomness results on the MNIST dataset, given the lower computational costs. We run each Certified Training method independently for 3 times with the same tuned hyperparameters that we originally reported and report the average and standard deviation. We observe that the standard deviation across 3 random seeds is close to or smaller than 0.1 for almost all methods. The results shows that our improvements for most MNIST methods have a statistical significance of more than $3\\\\sigma$.\\n\\n**Table S1**: MNIST 0.1 Randomness results\\n| Method| Source| Nat| Cert |\\n|--|---|---|---|\\n|IBP| Literature|98.84 |97.95 |\\n| | CTBench manuscript|98.87 |98.26 |\\n| | Average $\\\\pm$ Stdev | 98.86 $\\\\pm$ 0.06 | 98.25 $\\\\pm$ 0.03 |\\n| CROWN-IBP | Literature|98.83 |97.76 |\\n| | CTBench manuscript|98.94 |98.21 |\\n| | Average $\\\\pm$ Stdev | 98.93 $\\\\pm$ 0.01 | 98.17 $\\\\pm$ 0.05 |\\n|SABR | Literature|99.23 |98.22 |\\n| | CTBench manuscript|99.08 |98.40 |\\n| | Average $\\\\pm$ Stdev | 99.15 $\\\\pm$ 0.08 | 98.42 $\\\\pm$ 0.03 |\\n|TAPS | Literature|99.19 |98.39 |\\n| | CTBench manuscript|99.16 |98.52 |\\n| | Average $\\\\pm$ Stdev | 99.20 $\\\\pm$ 0.05| 98.50 $\\\\pm$ 0.04|\\n| STAPS | Literature|99.15 |98.37 |\\n| | CTBench manuscript|99.11 |98.47 |\\n| | Average $\\\\pm$ Stdev | 99.15 $\\\\pm$ 0.04 | 98.38 $\\\\pm$ 0.10|\\n|MTL-IBP| Literature|99.25 |98.38 |\\n| | CTBench manuscript|99.18 |98.37 |\\n| | Average $\\\\pm$ Stdev | 99.16 $\\\\pm$ 0.03 | 98.31 $\\\\pm$ 0.06 |\\n\\n**Table S2**: MNIST 0.3 Randomness results\\n| Method| Source| Nat| Cert |\\n|--|---|---|---|\\n|IBP| Literature|97.67 | 93.10 |\\n| | CTBench manuscript|98.54 |93.80 |\\n| | Average $\\\\pm$ Stdev | 98.55 $\\\\pm$ 0.02 | 93.82 $\\\\pm$ 0.10|\\n| CROWN-IBP | Literature|98.18 |92.98 |\\n| | CTBench manuscript|98.48 |93.90 |\\n| | Average $\\\\pm$ Stdev | 98.46 $\\\\pm$ 0.03 | 93.84 $\\\\pm$ 0.12 |\\n|SABR | Literature|98.75 | 93.40 |\\n| | CTBench manuscript|98.66 |93.68 |\\n| | Average $\\\\pm$ Stdev | 98.69 $\\\\pm$ 0.03 | 93.64 $\\\\pm$ 0.06 |\\n|TAPS | Literature|97.94 |93.62 |\\n| | CTBench manuscript|98.56 |93.95 |\\n| | Average $\\\\pm$ Stdev | 98.58 $\\\\pm$ 0.03 | 93.90 $\\\\pm$ 0.11|\\n| STAPS | Literature|98.53 |93.51 |\\n| | CTBench manuscript|98.74 |93.64 |\\n| | Average $\\\\pm$ Stdev | 98.69 $\\\\pm$ 0.06 | 93.60 $\\\\pm$ 0.05|\\n|MTL-IBP| Literature| 98.80 |93.62 |\\n| | CTBench manuscript|98.74 |93.90 |\\n| | Average $\\\\pm$ Stdev | 98.75 $\\\\pm$ 0.02 | 93.82 $\\\\pm$ 0.21 |\\n\\nWe will incorporate this discussion and the tables in the revised manuscript.\"}", "{\"title\": \"Reply to Reviewer $\\\\Rt$ (2/3)\", \"comment\": \"**Accuracy-Robustness Tradeoff**\\n\\nWe think the reviewer might be unfamiliar with the common practice of certified training and the reasons behind, thus we provide a further explanation and empirical analysis here. The accuracy-robustness tradeoff is a well-studied concept in the field of certified training, but it works very differently to adversarial robustness. We particularly note that a drop in natural accuracy does not necessarily mean an increase in certified accuracy, because increased regularization might reduce the **true robustness** to ease certification with less precise methods (e.g. with IBP), which could be unnecessary for powerful certification methods. This is why the field (and us) all take the highest point in the curve, i.e., the highest certified accuracy, as the sole basis of SOTA. In addition, analyses about the robustness-accuracy tradeoff already exist in the respective papers, thus revisiting the concept in our work would be less meaningful, because we aim to fully release the potential of respective methods rather than study their internal dynamics. For example, [2] provides a thorough analysis in Figure 7, showing how certified and clean accuracy evolve for different values of $\\\\lambda$, and [4] similarly examines this tradeoff for their methods in Figure 1. In both cases we observe that higher regularization (i.e. higher values for $\\\\lambda$ or $\\\\alpha$) indeed improves IBP certifiability of the network, but heavily damages the natural accuracy which in turn also lowers the network\\u2019s empirical and certified robustness.\\n\\nHowever, for completeness, we will include certified accuracy versus natural accuracy plots for a subset of methods in the appendix of the final version. While the manuscript itself can no longer be updated during the rebuttal phase, we provide plots at this [anonymous link](https://mega.nz/file/DFJUATKZ#ZXwyFVaNHKGb4QgVqlhtsixnOgLu5Ra9Fm9XV6L7l88). Figures S1 and S2 present the CTBench results from Table 1 in the original manuscript for MNIST 0.1 and CIFAR-10 2/255 respectively. Figure S3 presents a zoom-in of Figure S2 where we show the robustness-accuracy tradeoff for the three best methods under different hyperparameters: SABR, STAPS and MTL-IBP.\\n\\nSince our benchmark is only focused on finding the best certified accuracy of each method, the plot in Figure S3 is also focused on the peak region of the robustness-accuracy plots. We particularly analyze this region and observe the same trends as previous work: decreasing regularization (the direction of increasing natural accuracy) from the level of IBP (equivalent to $\\\\lambda=1$ for SABR and $\\\\alpha=1$ for MTL-IBP) also comes with increased robustness, up to an optimal point. Afterwards reducing regularization further increases natural accuracy, but severely hurts certifiability, up to the point where adversarially trained networks (equivalent to $\\\\lambda=0$ for SABR and $\\\\alpha=0$ for MTL-IBP) exhibit much higher natural accuracies, but close to 0 certified robustness even when considering SOTA verifiers.\\n\\n**Efficient Deterministic Certification Methods**\\n\\nThe reviewer\\u2019s comments regarding efficient certification methods seem to overlook the current state of the field. Over the past decade, deterministic certification has been a major area of research, culminating in the development of highly precise methods that have been extensively optimized for efficiency [6,7,8]. \\n\\nMoreover, cheap certification methods such as IBP are very weak when it comes to proving robustness guarantees for networks trained with recent SOTA-certified training methods. For instance, [2] (Figure 7) and [4] (Figure 1) demonstrate that state-of-the-art trained networks exhibit close to zero IBP certifiability, but very high MN-BAB or OVAL-BAB verified robustness. As a result, while IBP-based certification is computationally cheap, its use for evaluating these networks would be uninformative and inconsistent with standard practices.\\n\\nIt is also important to note that our work does not aim to innovate on certification algorithms. Like all relevant prior work in certified training, we use existing certification methods as-is, without modification. Our focus is on benchmarking certified training techniques, not developing or analyzing certification algorithms.\"}", "{\"title\": \"Response to $\\\\Rt$ (cont.)\", \"comment\": \"**Efficient Deterministic Certification Methods**\\n\\nWe appreciate the reviewer\\u2019s input but note that our benchmark evaluates certified training methods, not certification algorithms. While their efficiency is important, it is beyond the scope of this work.\\n\\nUsing cheaper methods like IBP would result in **near-zero certified accuracy for techniques like SABR, TAPS, and MTL-IBP** on challenging datasets like CIFAR (small perturbations), as highlighted in their original papers. Employing more computationally expensive methods is necessary to enable meaningful certification and provide a fair assessment of certified training approaches.\\n\\n**Novelty of the Benchmark**\\n\\nWe respectfully disagree with the reviewer's assessment, as it seems to focus solely on the benchmarking aspect of our work while overlooking its broader contributions. While benchmarking naturally involves evaluating existing methods, our work goes far beyond this by introducing significant innovations:\\n- **Unified Library**: We provide a comprehensive library consolidating methods for consistent evaluation and ease of use.\\n- **Corrected Implementations**: Errors and inconsistencies in prior work were addressed, leading to more reliable results.\\n- **Well-Tuned Hyperparameters**: Systematic tuning improves performance, setting new baselines.\\n- **Improved Benchmark**: Combining corrections and optimizations, we establish a new benchmark that raises the standard for certified training methods.\\n- **Novel Insights**: Our analysis goes beyond certified accuracy, exploring training dynamics, regularization effects, loss fragmentation, and shared mistakes across methods\\u2014offering unique insights critical for advancing the field.\\nBy integrating these contributions, we are not only benchmarking but also driving the field forward by providing a robust foundation for future research.\"}", "{\"title\": \"Response to $\\\\Rs$\", \"comment\": \"We are happy to hear that Reviewer $\\\\Rs$ considers our work is well-written, utilizes \\u201cbeyond impressive\\u201d amounts of experiments and computational resources on a good problem, and derives particularly helpful insights into the deterministic certified training.\\n\\n**Q1: Should this paper overlook the advancements in randomized certified robustness?**\\n\\nAs clearly stated at the beginning of the abstract and introduction, we focus solely on deterministic certified robustness. We acknowledge the advances in randomized certified robustness, but randomized certificates are not comparable to deterministic certificates. For example, one provides high confidence certificates while the other provides deterministic certificates, and one brings additional inference overhead while the other does not have inference overhead. Therefore, as an in-depth study regarding deterministic certified robustness, we do not refer to randomized smoothing literature.\\n\\n**Q2: This paper only considers $L_\\\\infty$ norm; could the authors provide some insights into other perturbation norms? Do we need a separate library in case one wants to implement other norms?**\\n\\nUnfortunately, the field of deterministic certified radius focuses on the $L_\\\\infty$ norm and no deterministic certified training algorithms for other norms have been developed. This makes us unable to provide any insights into other perturbation norms. However, we choose to design our library such that norm types are disentangled: if one wants to implement another norm, they can easily migrate their solution into our library, as relaxations (core aspects about norms) are modularized and thus extensible. This allows development regarding other norms in the future.\\n\\n**Q3: Could you decompose the improvements and quantify contributions of individual components?**\\n\\nGreat question! We discuss improvement decomposition thoroughly in Appendix A.1. In short, improvements often bring additional hyperparameter vectors for tuning, and thus formal decomposition of benefits is not practically feasible. More details can be found in Appendix A.1.\\n\\n**Q4: Is the developed library extensible to other architectures other than the state-of-the-art CNN7?**\\n\\nYes, architectures are modularized, and thus incorporating another architecture is trivial for our library. In particular, the current library has multiple architectures implemented; we report CNN7 in the paper because this is the state-of-the-art architecture and represents the most important aspects of certified training.\\n\\n**Q5: Interest in (adversarially) robust models seem to have declined recently; potential reasons include practicability and the increased interest in generative models. While this paper makes valuable contributions (towards certified robustness), is this direction outdated and no longer meaningful?**\\n\\nWe are glad to discuss this from our own perspective, but not representing ideas in general. Adversarial robustness is an essential requirement for artificial intelligence, thus it will never be outdated or meaningless until we solve it. In addition, adversarial robustness is not losing interest, as many works are shifted to jailbreaking or discovering other attack vectors for generative models. Furthermore, many start-ups regarding model robustness have been established, thus its practicability in certain areas has been acknowledged. Many legitimate rules are also developed, and one frequent requirement is robustness. Therefore, we believe the decreased number of publications does not mean declined necessity; instead, it represents a hard time for problem solvers, because this problem has been shown to be non-trivial. \\n\\n**Q6: Is adversarial robustness for PGD lower than the state-of-the-art adversarial robustness? Is this because the authors use CNN7 (rather than a much larger model used in the adversarial machine learning literature)?**\\n\\nYes, this is because we use CNN7, for a fair comparison with certified models.\\n\\n**Q7: Does L133 define certified accuracy imprecisely?**\\n\\nYes, thanks for the correction!\"}", "{\"title\": \"Response to $\\\\Rx$ (part 1)\", \"comment\": \"We are happy to hear that Reviewer $\\\\Rx$ feels that our unified library, implementation correction, systematic study and our novel findings into the deterministic certified training are useful and facilitate future research. In the following, we address all concrete questions raised by Reviewer $\\\\Rx$.\\n\\n**Q1: Authors incorrectly state that the benchmark from [1] is not up-to-date; at the time of this review, the number on their leaderboard website is updated. Does this mean that the authors might have pulled their numbers from a stale source?**\\n\\nWe would like to distinguish a benchmark with a leaderboard. Benchmarks differ from leaderboards in that a benchmark needs to evaluate in fair and comparable settings, while leaderboards simply take numbers reported in the literature as grounded. In fact, other than a leaderboard which simply draws reported numbers from the publications, [1] also provides a benchmark study, as can be seen from their website. This benchmark is not updated and is what we referenced, reporting 89% best certified accuracy for MNIST $\\\\epsilon=0.3$.\\n\\nOn the other hand, their leaderboard is updated by the original authors, collecting reported numbers from the literature. This means that this leaderboard is mixed: different algorithms use different architecture, particularly different activations. This makes this leaderboard unfair in the sense that different algorithms are not directly comparable. To solve this, we consider all algorithms based on ReLU networks rather than specialized activations, as this matches the wide practice of deep learning. We pull our literature numbers from a series of the most recent SOTA publications [4,5,6], representing the best practices in the field.\\n\\n\\n**Q2: Improved comprehensiveness over [1] is expected. However, this work only evaluates deterministic certified training while Li et al. evaluates certification methods and deterministic/randomized certified training methods on different norms. Does this mean the contribution of this work is covered/shaded by [1]?**\\n\\nWe would like to note that [1] is a SoK paper providing a meta-analysis about the general certified robustness area, while our work develops a library and benchmark about deterministic certified robustness. Therefore, our contributions are in nature not directly comparable to theirs in comprehensiveness, as they focus on comprehensiveness while we focus on an in-depth study regarding deterministic certified robustness. Readers should not expect the same coverage in our study, as we are not meta-analyzing the field. In contrast, we provide an easy-to-use library for deterministic certified training, while [1] did not provide such toolboxes. In addition, the benchmark in [1] naturally does not cover the most recent advances, and thus cannot draw insights about the current progress, while our study achieves both goals. Therefore, our contribution is by no means covered by [1].\\n\\n\\n**Q3: Previous works have shown robust (adversarial) training increases local smoothness; what is unique about the findings presented in Section 5.1?**\\n\\nSection 5.1 observes certified training induces less loss fragmentation. While this relates to increased local smoothness, they are not equivalent. For example, models with the same level of loss fragmentation could have different local smoothness. In addition, instead of studying smoothness, we focus on loss fragmentation for a specific reason in certified training: the complexity of certification algorithms highly depends on loss fragmentation, but not on local smoothness. This is why understanding loss fragmentation is important for certified training, but local smoothness is not directly related, to the best of our knowledge. Therefore, Section 5.1 is of specific interest to the deterministic certified training community, which is the main goal of our study.\"}", "{\"summary\": \"The paper presents CTBENCH, a standardized library and benchmark designed to fairly evaluate certified training algorithms for neural networks, addressing the inconsistency in previous evaluations due to varied training schedules, certification methods, and under-optimized hyperparameters. By testing all algorithms under consistent conditions with tuned hyperparameters, CTBENCH reveals that most certified training methods perform better than previously reported, setting new benchmarks. Through CTBENCH, authors uncover several interesting properties of models training with certified methods.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes a new benchmark for certified robustness methods for image classifiers.\\n2. Authors implement several prominent certified robustness methods in a unified framework, thereby standardizing the implementations to facilitate future research.\\n3. Furthermore, authors correct implementation mistakes and perform systematic hyperparameter tuning to fully realize the potential of all methods.\\n4. Authors present several interesting findings regarding the properties of ceritified robustness methods, for example, models trained using distinct methods have a high overlap in the examples they succeed and fail on, uncovering a sample-specific inherent difficulty level that can be leveraged to improve training. And, these methods can boost OOD generalization for specific corruptions, and hurt generalization for others.\", \"weaknesses\": \"1. Authors incorrectly state that the benchmark from Li et al. is not up to date as \\\"it reports 89% and 51% best certified accuracy for MNIST epsilon = 0.3 and CIFAR-10 epsilon = 2/255 in its evaluation, respectively, while recent methods have achieved more than 93% and 62%\\\". However, at the time of this review, the numbers on Li et al.'s leaderboard (https://sokcertifiedrobustness.github.io/leaderboard/) are even higher than 93% and 62%, they are 94.02% and 68.2%. Furthermore, the leaderboard toppers are defenses from 2019/2021. It appears that the authors might have pulled their numbers from a stale source.\\n2. In order to be an improvement over the existing benchmark (of Li et.al.), one important requirement is comprable or improved comprehensiveness. Based on the results in the paper, the proposed benchmark is significantly less comprehensive than Li et. al. on two important directions: (i) number of defenses evaluated, (ii) number of diverse models used during evaluation. While I understand that the proposed work can be made more comprehensive by running more experiments, this is not the case currently and so is worth poining out.\\n3. Furthermore, as stated in the limitations section, the propsoed benchmark only focuses on deterministic certified robustness in the L_infinity space. Whereas, Li et. al.'s benchmark uses both determinisitc and probabilistic certified methods, and covers all the popularly used norms in literature (i.e., L_1, L_2, L_infinity). Thereby further hurting the comprehensiveness of the proposed benchmark.\\n4. Some of the findings presented in this paper are expected and already established by prior works (see Questions).\\n5. The main contribution of the paper is a unified code-based (and benchmark) for promiment certified robustness methods. Even though authors uncover several interesting findings while reproducing and tuning sota methods, the nature of the contributions of this paper are heavily empirical (not enough technical novelty). As such, this paper is much better suited for venues like TMLR that put emphasis on contributions of such nature.\", \"questions\": \"1. It is already well established by previous works that robustness training increases local smoothness. What is unique about the findings presented in this paper?\\n2. It is also previously established that adversarially robust trianing methods tend to have higher sample complexity, and therefore are more likely to overfit (less regularization). Other than the choice of metric, what is unique about the findings in Section 5.4.?\\n3. Is there an explanation for why the model performs worse for certain corruptions? How will these results be affected if we use different L_p norms? For example, I would expect a model trained to be robust in the L_2 space to be better resistant to Gaussian noise and less resistant to salt and pepper noise.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to $\\\\Rx$ (part 2)\", \"comment\": \"**Q4: Previous works have shown adversarial training tends to have a higher sample complexity and overfit more easily (less regularization); what is unique about the findings presented in Section 5.4?**\\n\\nAdversarial training is known to overfit easily, called robust overfit. We would like to note this is not directly related to less regularization, as adversarial training naturally puts more regularization than standard training, e.g., increasing local smoothness. The nuance here is that not all regularization relates to overfitting; instead, each regularization simply introduces an inductive bias into the model. For example, while $L_2$ regularization which asks for small weights usually prevents overfitting, we could also ask for large weights as a special inductive bias. Therefore, investigating different regularization has its own value.\\n\\nOur study focuses on one special inductive bias (regularization) which is closely related to certified training: propagation tightness [7]. Basically, high propagation tightness makes certification easier for loose relaxations, but not necessarily for tight relaxations. Therefore, understanding propagation tightness introduced by different certified training algorithms helps the field to gain insights and develop future algorithms.\\n\\n**Q5: Regarding Section 5.5, is there an explanation about why certified models perform worse for some corruptions? Could different training norms affect this?**\\n\\nThis is a very good question. Since all corruptions we studied are out-of-distribution and not directly controlled by certified training, we do not know the exact answer about why some corruptions appear to be more difficult. Different training norms might affect this, and we agree that intuitively training with regard to the $L_2$ norm might resist Gaussian noise better. However, we would like to note that currently the field of deterministic certified training focuses on $L_\\\\infty$ norm and other norms are largely overlooked. In particular, no well-performing $L_2$-norm deterministic certified training algorithm has been developed. Therefore, it is out of the scope of our work to investigate this question.\\n\\n**Reference**\\n\\n[1] Li et al., Sok: Certified robustness for deep neural networks, 2023.\\n\\n[2] https://sokcertifiedrobustness.github.io/leaderboard/\\n\\n[3] Lyu et al., Towards Evaluating and Training Verifiably Robust Neural Networks, 2021.\\n\\n[4] De Palma et al., Expressive losses for verified robustness via convex combinations, 2024.\\n\\n[5] M\\u00fcller et al., Certified training: Small boxes are all you need, 2023.\\n\\n[6] Mao et al., Connecting certified and adversarial training, 2023.\\n\\n[7] Mao et al., Understanding certified training with interval bound propagation, 2024.\"}", "{\"summary\": \"This paper proposes a library for benchmarking certified training methods under unified settings. It uses the best practices for certified training from (Shi et al., 2021), such as CNN7 architecture with batch normalization, IBP initialization, warm-up schedule and warm-up regularizers. To improve generalization, it uses L1 regularization and stochastic weight averaging (Izmailov et al., 2018). From the implementation perspective, the authors propose to use full batch statistics to address problems with batch normalization when gradient accumulation or PGD attack is performed. The paper claims that the improvements of recent methods in certified training drop significantly compared to older IBP training method under the same settings with proper hyperparameter tuning. Further, the authors analyze different aspects of the training methods: regularization strength, model utilization, loss fragmentation, OOD generalization and shared mistakes.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper raises an important question of fairly assessing the algorithmic improvements of recent certified training methods compared to older IBP-based training. Since the evaluation depends on many factors and components, the paper proposes to fix some of them to the best-known ones and to properly tune the rest.\", \"The writing is clear (except for the presentation of Table 1), the code for benchmarking, and the weights of pre-trained models are provided.\", \"The analysis of training methods leads to interesting conclusions. Particularly, the relationship between propagation tightness and certified accuracy at larger epsilon, i.e. the absence of correlation, is surprising.\"], \"weaknesses\": \"I believe the **experiments are insufficient** to support the main claims of the paper. Particularly:\\n\\n1. **Accuracy-robustness tradeoffs are not considered**. Improvements in robustness can be due to decreased natural accuracy, and vice versa [a, b, c]. For example, in Table 1 for CIFAR-10 at 2/255 the implementations of following methods choose a different point at accuracy-robustness tradeoff curve compared to the one in literature, getting higher robustness at the cost of reduced accuracy: CROWN-IBP, SABR, STAPS, MTL-IBP, making claims about technical improvements unsupported. In this regard, the baselines such as ACERT [a], and ACE [b] are missing. Accuracy-robustness tradeoff curves and metrics such as ART-score [a] can be used to capture the improvements in the tradeoff.\\n2. **Error bars are missing**. The presented improvements over the results in the literature could be statistically insignificant. For example, the experimental results for CIFAR-10 at 8/255 in paper by Shi et al. (2021) show standard deviation of $\\\\pm0.3$ for certified accuracy and of $\\\\pm0.4-0.7$ for natural accuracy, which makes improvements in both accuracy and robustness in Table 1 for SABR and TAPS within the error of standard deviation. \\n3. **Training costs are not considered**. Different methods require different amount of computational costs for training, which could be an important factor to consider in benchmarking.\\n4. **Certification costs are not considered**. Since some certified training methods allow computing tight certified bounds using efficient \\\"online\\\" certification methods, such as IBP (Gowal et al., 2018, Mao et al., 2024), the IBP-based certified accuracy or IBP-based certified radius [a] could also be compared. The cost of test-time verification might be an important factor in choosing a training method.\\n\\nSince this is a paper proposing a benchmark, it **lacks original** contributions. In terms of evaluation setting, most of the components were already used consistently in previous works.\", \"smaller_comments\": \"- The main results in Table 1 are hard to parse and analyze due to large amount of numbers to compare. Accuracy-robustness plots could help with improving clarity.\\n- Due to shared mistakes, the paper claims that \\\"_... there could be an intrinsic difficulty score for each input_\\\". The certified radius of robustness of each point, described in [a, d], could serve as such score. The average certified radius and/or the histogram of radii [d] can be compared in the benchmark. The adaptive training methods can be discussed in this regard.\\n\\n[a] Nurlanov, Z., Schmidt, F.R., Bernard, F. (2024). Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs. In: Bifet, A., et al. Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track. ECML PKDD 2024. Lecture Notes in Computer Science(), vol 14948. Springer, Cham. https://doi.org/10.1007/978-3-031-70371-3_8\\n\\n[b] M\\u00fcller, M. N., Balunovi\\u0107, M., & Vechev, M. (2021). Certify or predict: Boosting certified robustness with compositional architectures. In International Conference on Learning Representations (ICLR 2021).\\n\\n[c] Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., & Madry, A (2019). Robustness May Be at Odds with Accuracy. In International Conference on Learning Representations (ICLR 2019).\\n\\n[d] Bosman, A. W., Hoos, H. H., & van Rijn, J. N. (2023). A preliminary study of critical robustness distributions in neural network verification. In Proceedings of the 6th workshop on formal methods for ML-enabled autonomous systems.\", \"questions\": \"The main concerns about the experiments are raised in the weaknesses section. If these can be addressed, I would be happy to change my opinion.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The The paper introduces CTBENCH, a unified library and benchmark for evaluating certified training methods for neural networks. It addresses the challenges in comparing existing certified training algorithms by standardizing training schedules, certification methods, and hyperparameter tuning. The authors demonstrate that most algorithms in CTBENCH surpass previously reported results, revealing that much of the perceived advantage of newer methods diminishes when outdated baselines are properly tuned. The benchmark provides insights into certified training methods, encouraging future research by offering a consistent framework and re-establishing state-of-the-art performance.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well presented, well written, with clear goals and objectives.\\n\\n2- While this is for a non-expert is not obvious, but the amount of experiments and computation required in this paper is beyond impressive.\\n\\n3- The insights of the paper are particularly helpful. I personally did not expect that current SOTA methods are under performing. However, it was not that surprising that the improvements over IBP for larger epsilons are not that big.\\n\\n4- Paper sheds light on a relatively good problem.\", \"weaknesses\": \"1. The paper focuses solely on deterministic certified training, overlooking advancements in randomized certified robustness. I believe the paper should have cited works like Cohen et al., Matthias et al. (l1 certification with differential privacy -- early works from 2019), Greg Yang (\\\"All Shapes and Sizes\\\" paper), among many others.\\n\\n2. The paper only considers infinity ball, neglecting other perturbation sets. While this is generally okay, some insights with a few experiments in other perturbation sets might be helpful. It is not clear whether the proposed tricks in the library as part of the unified certified training would work for other perturbation sets (e.g., L2). If they do not, it raises the question of whether we would need a separate library for each perturbation set. The next steps are unclear if that is the case.\\n\\n3. Some conclusions on the impact of tuning and modifications, while valid, lack formal decomposition, making it difficult to quantify individual contributions. No clarity on the contribution of each individual component (batch norm, etc) towards the final performance. A small systematic study will be very helpful.\\n\\n4. The evaluation is based on a single model architecture (CNN7); the paper should demonstrate that the library and recommendations hold across different architectures.\", \"general_comment\": \"Interest in certified models has significantly declined over the past two years. At ECCV, for example, there were notably fewer submissions and accepted papers on adversarial attacks, even though this topic was previously very popular in vision conferences. One reason for this decline could be the uncertainty around where such certifications can be practically deployed, especially given the massive scale of current models, which are thousands of times larger than the CNNs discussed here. Furthermore, as models shift towards generative architectures, it\\u2019s unclear who will find this domain relevant. While the paper makes valuable contributions, this direction feels somewhat outdated by about two years and the question of the benefit for it is very unclear and vague, at least to me. I would love to hear the authors take on this.\", \"minor_comments\": \"1. Cite \\\"is NP-complete\\\" line 321.\\n2. Is not typical robust accuracy (adv acc) for PGD around 48% on 8/255 CIFAR10? Or is because you use CNN7.\\n3. adversarial accuracy is not well defined in line 135. You need to say that it is empirical and serves as an upper bound to the robust accuracy.\\n4. certified accuracy defined in 133 is not correct. It should be the portion of *correctly* classified samples that are certifiably robust.\", \"questions\": \"See above; I would love to hear the authors comments on each of the weakness above along with a response to the general comment.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": \"$\\\\newcommand{\\\\Rx}{\\\\textcolor{green}{xBxH}}$\\n$\\\\newcommand{\\\\Rz}{\\\\textcolor{blue}{Z4Eb}}$\\n$\\\\newcommand{\\\\Rt}{\\\\textcolor{purple}{t9Mr}}$\\n$\\\\newcommand{\\\\Rs}{\\\\textcolor{orange}{s9m2}}$\\n\\nWe thank all reviewers for their insightful reviews, helpful feedback, and interesting questions. We are particularly encouraged to hear that reviewers consider our library and benchmark to be important ($\\\\Rx$, $\\\\Rz$, $\\\\Rt$, $\\\\Rs$), our insights novel and useful ($\\\\Rx$, $\\\\Rz$, $\\\\Rt$, $\\\\Rs$) and our paper well-written ($\\\\Rz$, $\\\\Rt$, $\\\\Rs$). No shared concern from reviewers is identified, thus we will address all concrete questions in individual responses.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Authors\", \"comment\": \"I appreciate the authors' response. Unfortunately, my concerns remain. In particular, the concerns about statistical significance of the stated improvements, accuracy-robustness tradeoffs and the absence of efficient certification methods in evaluation are not addressed. Also, as mentioned in the original review, the benchmark lack novelty since it uses established techniques from [1].\\n\\n[1]: Shi et al., Fast certified robust training with short warmup, 2021\"}", "{\"summary\": \"The paper introduces a benchmark for certified robust training. The goal is to standardize the hyperparameters, training schedules (& other training configurations) between competing methods in certified training. The purported advantages of newer methods are lower when older baselines were given equivalent optimization and testing conditions. The work covers several popular approaches like PGD, IBP, and CROWN-IBP.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"-- The paper is well-written and typeset well\\n\\n-- Tackles an important problem in the field: the inconsistent evaluation of different certified training methods. I think the field needed this kind of paper. \\n\\n-- It's not only a benchmark paper but provides some analysis into certified model behavior in loss fragmentation (showing certified models reduce fragmentation compared to adversarial training), have shared mistake patterns, model utilization metrics, and generalization performance (showing certified training provides benefits for certain types of corruptions).\", \"weaknesses\": \"-- The novelty of the paper is limited since it's just focused on benchmarking existing methods. Certified robustness is a relatively new field and the field needs methods as much as unifying benchmarks. I do believe the lack of novelty is mitigated to an extent by the analysis provided in Section 5.\\n\\n-- I wonder about the sustainability of the benchmark since there are other leaderboards for adversarial training (e.g. RobustBench). Others may want to submit their work to an existing leaderboard rather than standardize to adopt your settings.\\n\\n-- I'm a bit confused about the purpose of the fragmentation experiments. Robust models lead to fewer flipped neurons in the presence of noise, but why should we care? This is after all expected given they are more robust in general to input noise. I believe these experiments may be valuable but the authors should articulate why.\", \"questions\": \"Some questions I had while reading:\\n\\n-- Why do methods like TAPS and MTL-IBP achieve better accuracy while deactivating more neurons?\\n\\n-- Is there a theoretical framework to explain the relationship between neuron deactivation and robustness? \\n\\n-- Is there a way to understand and leverage the shared mistakes patterns to improve certified training? Or is it natural that mistakes would overlap (similar to how mistakes overlap in natural training)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow up\", \"comment\": \"I want to first take the opportunity to thank the authors for their efforts in the paper and rebuttals. I have read all the reviews and the authors' responses to each review.\", \"q1\": \"I generally disagree. This is a question of whether one would prefer a high-confidence (where the confidence level is controlled) probabilistic certificate that scales to hundreds of layers versus a deterministic certificate that scales to tens of layers. The answer strongly depends on the application. However, my disagreement with the authors here is a matter of subjective taste and does not influence my final decision. I include it here for completeness only.\", \"q2\": \"Point taken. I do not mind \\ud835\\udc3f\\u221e balls, or any ball for that matter, as they are all equivalent in some metric space up to a constant.\", \"q3\": \"I thank the authors for their feedback. However, this presents a scalability issue. If every method requires tuning these parameters, let alone introducing new ones, the general utility of a library becomes questionable. What is the value of such a baseline if, for every new method, we must revisit and fine-tune it with the proposed set of parameters?\", \"q4\": \"Addressed.\", \"q5\": \"I disagree here again. Hijacking, prompting, and similar issues are not related to certification but rather to empirical robustness, which has gained traction in vision and is now being explored in the language domain. We have not solved certification for vision, let alone for language, which involves challenges ranging from model scale to discrete optimization over tokens. The majority of startups I know of in this space focus on empirical evaluation layers, red-teaming, and jailbreak prevention, with very few (if any) claiming provable guarantees against generation. If a company were to achieve this, it would indeed be a significant breakthrough, as this is an open problem. I do not believe we yet have the algorithms to accomplish this\\u2014scaling deterministic methods to hundreds of layers in vision alone has proven challenging. The decline in research papers on this topic is not definitive proof but rather evidence of waning interest. That said, my rating is not based solely on Q5.\", \"q6\": \"Addressed.\", \"q7\": \"Addressed.\\n\\n\\n thank the authors again for their efforts on the paper and rebuttal. I am keeping my score unchanged, but I will not oppose if other reviewers feel differently. The reasons behind my score are summarized as follows: (1) Certification, while important, has not yet demonstrated scalability. We are still reporting results on CNN7, a model from several years ago, with only a few layers. Scale remains a major issue, raising questions about the value of this benchmark. (2) The library requires fine-tuning several parameters for new methods, which limits the core utility of the proposed approach. (3) The combination of points (1) and (2) amplifies my concerns about the applicability and usability of the work. The need to tune hyperparameters for the library to benchmark new models each time, even if the models are small, highlights the lack of scalability in certification.\"}", "{\"title\": \"Response to $\\\\Rt$\", \"comment\": \"**Statistical Significance of Results**\\n\\nWe appreciate the reviewer\\u2019s concern and agree that additional statistical analysis will strengthen the manuscript. We commit to including this in the revised version.\\n\\nRegarding hyperparameter tuning, the approach depends on the goal:\\n- To study the variance of the best numbers, one would tune for every seed.\\n- To analyze sensitivity to hyperparameters, one would use the same hyperparameters across all seeds.\\n\\nIn our work, we use the latter approach, fixing hyperparameters to focus on sensitivity analysis.\\n\\nAs for the results (numbers are provided in the last response), the improvements for SABR and TAPS on MNIST are statistically significant, with SABR achieving improvements of **6 sigma** (98.22 to 98.40 \\u00b1 0.03) for MNIST 0.1 and **4 sigma** (93.40 to 93.64 \\u00b1 0.06) for MNIST 0.3, and TAPS achieving **3 sigma** in both settings. We note that we are comparing our average performance to the best performance in the literature reported across random seeds, further highlighting the statistical significance.\\nFor CIFAR-10 at $\\\\epsilon=2/255$, larger standard deviations are offset by more substantial improvements, yielding comparable statistical significance. At $\\\\epsilon=8/255$, improvements across methods remain minimal, consistent with prior work. We will incorporate detailed numbers in the revised manuscript.\\n\\nWe hope this clarification reassures the reviewer about the robustness and significance of our results.\\n\\n**Accuracy-Robustness Tradeoff**\\n\\nWe thank the reviewer for raising this point but note that the comments appear directed at the broader field of deterministic certified robustness rather than the specific contributions of our work. The tradeoff between natural accuracy and certified robustness is a well-known challenge in this domain, and obtaining robustness for large perturbations in realistic settings is particularly difficult.\\n\\nObtaining even empirical robustness inherently requires sacrificing natural accuracy, as shown by methods like EDAC (78.95% natural accuracy for 42.48% empirical robustness on CIFAR 8/255). Achieving near-standard training natural accuracy is possible but leads to almost zero certified accuracy, making evaluation meaningless. Unlike adversarial robustness evaluation, where computational cost is uniform, certification often involves significant computational expense (e.g., 1000 seconds per input) for unverified samples, emphasizing the practical constraints.\\n\\nIn addition, we would like to point out that not all methods exhibit the same tradeoff. For methods such as IBP and CROWN-IBP, there is **no inherent tradeoff** between certified accuracy and natural accuracy, as is the case for methods like SABR and MTL-IBP, which can specifically tune parameters to achieve networks with varying levels of robustness. We note that the discussion is limited to training with robust loss solely, as adopted by **every** certified training method in the literature.\\nWe also want to mention again that extensive plots analyzing the accuracy-robustness tradeoff have already been provided in previous works, and adding similar plots to our paper would not represent a novelty. Our goal is to benchmark the existing methods systematically, and while we will provide additional plots in the appendix for completeness, we do not believe this addition will significantly change the analysis presented in previous work.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for the response. This clarifies my questions. I do think some concerns remain regarding the novelty of the proposed work. However, I was positive on the paper before and will retain my score. \\n\\nBest regards,\\n\\nReviewer Z4Eb\"}", "{\"title\": \"Reply to Reviewer $\\\\Rt$ (3/3)\", \"comment\": \"**Novelty of the Benchmark**\\n\\nThe purpose of a benchmark and library is not to introduce novel algorithms but rather to compile and systematize recent and relevant advances in the field. This effort facilitates future development and evaluation of new methods, providing a consistent and reproducible framework.\\n\\nWhile our work (as well as all relevant related work in the past 3 years) builds upon the techniques proposed by Shi et al. (2021) [1], we want to emphasize that the contributions of our benchmark extend far beyond their scope. For instance, our benchmarking setup incorporates a broad range of methods and systematic hyperparameter tuning, which are absent in [1], and our analysis of training dynamics, regularization strength, loss fragmentation, and shared mistakes are novel and provide deeper insights into certified training. Further, as a benchmark paper, we believe that it is essential to keep the core algorithms under examination unchanged. The reviewer seems to believe that the novelty and impact of all benchmarks are limited because they are evaluating existing algorithms, which we find quite confusing.\\n\\nThus, while we acknowledge the importance of [1], our contributions, as a benchmark and library, are distinct and critical for advancing the field.\\n\\n**Conclusion**\\n\\nWe thank the reviewer for their feedback and hope that our clarifications address their concerns. We remain committed to improving the clarity, completeness, and rigor of our work and look forward to incorporating additional analyses in the final version of the paper.\\n\\n**References**\\n\\n[1] Shi et al., Fast Certified Robust Training with Short Warmup, NeurIPS 2021\\n\\n[2] Mueller et al., Certified Training: Small Boxes are All You Need, ICLR 2023\\n\\n[3] Mao et al., Connecting Certified and Adversarial Training, NeurIPS 2023\\n\\n[4] De Palma et al., Expressive Losses for Verified Robustness via Convex Combinations, ICLR 2024\\n\\n[5] Mao et al. Understanding Certified Training with Interval Bound Propagation, ICLR 2024\\n\\n[6] OVAL-BAB, https://github.com/oval-group/oval-bab, multiple publications (2017-2021)\\n\\n[7] Ferrari et al., Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound, ICLR 2022\\n\\n[8] Alpha-Beta-CROWN, https://github.com/Verified-Intelligence/alpha-beta-CROWN, multiple publications (2017-2024)\"}", "{\"title\": \"Response to Authors\", \"comment\": \"1. **Statistical Significance of the Results**\\n\\n- The fact that referenced papers only report a single best number is not an excuse for the proposed benchmark to also lack higher-order statistics, especially given that the improvements are marginal. \\n- I think that **tuning hyperparameters for each random seed** is not the correct way to evaluate the training methods. \\n- Do you tune hyperparameters on a separate validation set or on the test set? If it is the latter, than there is a high chance the numbers are overfitted and do not reflect real statistically significant algorithmic improvements. \\n- Please note that in contrast to the referenced papers, Shi et al (2021) reports mean and standard deviation of the results, and the variation is higher on CIFAR-10 compared to MNIST. As noted in my original review, the stated improvements in both accuracy and robustness in Table 1 for SABR and TAPS are within the error of standard deviation. This point is not addressed properly in the author responses.\\n\\n2. **Accuracy-Robustness Tradeoff**\\n\\nI am familiar with the mentioned works in certified training. I think it is important to consider applicability of the training methods in practice, therefore it is important to measure the accuracy-robustness tradeoff of the evaluated methods. For example, on CIFAR-10 the natural accuracy drops from typical 91% of standard training to 54% to achieve 35% certified accuracy at a fixed epsilon of 8/255. A drop of 37% in classification accuracy makes the approach of pursuing only the best certified accuracy impractical. I appreciate the provided plots in Figure S3 for 3 methods on a single setting of eps=2/255. These plots also show the tradeoff (e.g. for MTL-IBP and STAPS), in contrast to the author claims. I believe on larger epsilon values, the tradeoffs are even more noticeable. I think that the tradeoff plots should try to reach the level of natural accuracy of standard training. For the proposed benchmark paper, all settings and all considered methods should be equally evaluated. \\n\\n3. **Efficient Certification Methods**\\n\\nThe efficient certification methods have their own merits and corresponding applications. The comprehensive benchmark should also include the efficiency of the certification into account, given that certified training methods are known to be closely related to the following verification methods. The current setup has the limit of 1000 seconds for each input, which is quite expensive. \\n\\n4. **Novelty of Benchmark**\\n\\nThe proposed benchmark uses the setup exactly as in works referred by authors. It is almost the same as the experiments section of a common paper in the domain, proposing a new method. I believe that the hyperparameter tuning does not represent sufficient novelty. My main complain here is that the benchmark does not address any problems with existing evaluations, and simply repeats the same evaluation protocols with different hyperparameters. As recommended in my original review, considering statistical significance of the algorithmic improvements, studying the accuracy-robustness tradeoffs of the training methods, measuring the efficiency of the certification methods that can be applied post-training -- could address some of the problems with the existing evaluations.\\n\\n---\\n**References:**\\n\\n- Shi et al. Fast Certified Robust Training with Short Warmup. 2021\"}", "{\"title\": \"Reply to Reviewer $\\\\Rs$\", \"comment\": \"We thank Reviewer $\\\\Rs$ for their response and are happy to know that we have addressed all technical concerns from the reviewer. In the following, we provide further clarification regarding the reviewer's remaining concerns, including factual corrections and our personal perspectives.\\n\\n**On the Comparison Between Deterministic and Randomized Certification**\\n\\nWe understand the reviewer's perspective and appreciate their acknowledgment that this distinction is largely a matter of application preferences. However, we would like to reiterate that our claim regarding the non-comparability of deterministic and randomized certificates is grounded in fundamental differences between the two approaches. Deterministic certificates guarantee robustness with absolute certainty, while randomized methods provide probabilistic guarantees dependent on sampling and confidence levels, also incurring a multiplicative computational overhead factor at inference time. These differences are not just subjective preferences but intrinsic properties of the methods, each suited to specific needs. In particular, these differences lead to major technical deviations in the methodology between deterministic and randomized certified robustness.\\n\\n**Scalability and Hyperparameter Tuning**\\n\\nWe regret any confusion about the necessity of hyperparameter tuning. To clarify, the tuning effort that the reviewer referred to applies to benchmarking new methods, not the library itself. The library is designed to be general and extensible, making it a robust foundation for future research. Benchmarking inherently requires computational and optimization effort, but this is a natural part of scientific evaluation rather than a limitation of the library. \\n\\nWe understand the concern about scalability and agree that certification benchmarks often involve significant computational resources. However, this effort is crucial for progress, as robust and fair comparisons require careful evaluation. When designing new methods, tuning both general-interest hyperparameters (e.g. the level of L1 regularization) and method-specific ones (e.g. $\\\\lambda$ in SABR [2] or $\\\\alpha$ in MTL-IBP [4]) is unavoidable, but we hope that our library will help reduce the overall time and effort spent on these experiments by providing a modular and extensible framework to ease the burden of implementation and testing.\\n\\n**On the State of the Field**\\n\\nWe thank the reviewer for sharing their views on the broader challenges in certified robustness. We now realize that the reviewer wes referring specifically to certified robustness in this context, rather than general adversarial robustness. Certified robustness offers critical benefits, such as verifiable guarantees of model behavior under specific perturbations, which empirical robustness methods cannot provide.\\n\\nWhile we acknowledge the difficulties in scaling certified methods, recent advancements (published in top-tier conferences in recent years) [1-6] demonstrate that the field continues to progress. These works explore novel algorithms, larger architectures, and improved training paradigms to address scalability and robustness challenges. The reviewer might refer to the diminished interest in certification algorithms which indeed has fewer publications due to the ability and completeness of existing certification methods, but the interest apparently has shifted to certified training which trains/designs the network such that they are easier to certify. Certified training, therefore, is the main focus of our work.\\n\\nWe remain optimistic about the general certified robustness field despite its difficulties. Hard problems like these demand persistence, as breakthroughs often emerge from cumulative effort over time. The current challenges highlight the need for innovative solutions, which motivates our work and contributions.\\n\\n**Final Comment**\\n\\nWhile we understand Reviewer $\\\\Rs$\\u2019s concerns about scalability and usability, we believe our work contributes valuable tools and insights that pave the way for addressing these very challenges.\\n\\nWe share Reviewer\\u2019s optimism that certified robustness, while difficult, remains a meaningful and necessary pursuit. Without tackling such hard problems, progress in ensuring robust and trustworthy AI systems would stall.\\n\\nWe thank Reviewer $\\\\Rs$ again for their feedback, and we are grateful for their willingness to engage deeply with our work.\\n\\n**References**\\n\\n[1] Shi et al., Fast Certified Robust Training with Short Warmup, NeurIPS 2021\\n\\n[2] Mueller et al., Certified Training: Small Boxes are All You Need, ICLR 2023\\n\\n[3] Mao et al., Connecting Certified and Adversarial Training, NeurIPS 2023\\n\\n[4] De Palma et al., Expressive Losses for Verified Robustness via Convex Combinations, ICLR 2024\\n\\n[5] Mao et al. Understanding Certified Training with Interval Bound Propagation, ICLR 2024\\n\\n[6] Baader et al., Expressivity of ReLU-Networks under Convex Relaxations, ICLR 2024\"}", "{\"title\": \"Response to $\\\\Rt$\", \"comment\": \"We are happy to hear that Reviewer $\\\\Rt$ considers our work important, clearly written, leading to interesting and surprising insights into the deterministic certified training. In the following, we address all concrete questions raised by Reviewer $\\\\Rt$.\\n\\n**Q1: Is accuracy-robustness tradeoff not considered? For example, some algorithms in Table 1 might get better certified robustness at the cost of reduced accuracy. How is this handled by this work (and this field)?**\\n\\nWe would like to note that in practice, this field mostly focuses on improving the *best certified accuracy*, regardless of the drop in *clean accuracy*. More specifically, all recent SOTA works [1,2,3,4] select their best model solely based on the best certified accuracy, which is thus the basis of literature numbers reported in Table 1. This effectively means that accuracy-robustness tradeoff in this field translates to the right-most point one algorithm can get. With this in mind, we remark that based on the current algorithms, one cannot get better certified accuracy than the numbers reported to the best of their implementation/algorithms, regardless of whether they decide to sacrifice more clean accuracy or not. This is also because all these algorithms are trained solely on the objective of certified robustness, but not clean accuracy.\\n\\n**Q2: Error bars are not provided for Table 1. Could this make the result statistically insignificant?**\\n\\nFollowing the discussion above, all numbers in Table 1 are the best numbers one can get to the best of their efforts. In the case of CTBench numbers (our benchmark), we use the same random seed for all algorithms (thus fixing random batches, etc.) and the same training schedule (thus the same training steps), and then perform a thorough hyperparameter tuning for all algorithms separately. This procedure means Table 1 numbers are highly costly, as also pointed out by Reviewer $\\\\Rs$. Simply selecting a different random seed and reusing the same hyperparameter cannot get the same performance, thus reporting error bars means repeating this full procedure multiple times, which is prohibitively expensive. In addition, based on the described procedure, our numbers all represent the best numbers we can get for each algorithm, rather than the result of a random experiment. This highly reduces the variance of the result, as we perform hyperparameter tuning in a search space of size roughly 50, as described in Appendix B.4.\\n\\n**Q3: Is training and certification cost considered and how?**\\n\\nWe report the complete training and certification cost in Appendix B.6. Regarding training, we fix the number of steps taken by each algorithm (the same training schedule). Regarding certification, we fix the certification algorithm (one of the SOTA complete verifiers, MN-BaB) and the timeout (1000 seconds per sample). Details can be found in Appendix B.3 and B.5.\\n\\n**Q4: How should Table 1 be parsed? Could accuracy-robustness plots help with clarity?**\\n\\nTable 1 reports the clean (natural) accuracy, adversarial accuracy and certified accuracy, both in literature and in our benchmarks. We note that, as discussed in **Q1**, all models in literature and in our benchmark are selected solely based on the best certified accuracy. Therefore, adding accuracy-robustness plots is not meaningful for Table 1. In addition, our work is completely in parallel to [5], which develops conclusions about the maximum certifiable radius. We would also like to note that since our certification budget is 1000 seconds per sample at a given perturbation size, searching for the maximum certifiable radius is computationally prohibitive, thus such plots are naturally impossible to create.\\n\\n**Q5: Could average certified radius be plotted for the studied deterministic certified training methods? Could adaptive training methods improve certified training?**\\n\\nFollowing the discussion above, computing maximum certified radius is computationally prohibitive, thus computing average certified radius is also impossible for us. Regarding adaptive training methods, we acknowledge that such approaches may improve certified training, but this is out of the scope of this work, which designs the library and benchmark.\\n\\n**Reference**\\n\\n[1] Shi et al., Fast certified robust training with short warmup, 2021.\\n\\n[2] M\\u00fcller et al., Certified training: Small boxes are all you need, 2023.\\n\\n[3] Mao et al., Connecting certified and adversarial training, 2023.\\n\\n[4] De Palma et al., Expressive losses for verified robustness via convex combinations, 2024.\\n\\n[5] Nurlanov et al., Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs.\"}" ] }
2bWf4M5tRo
Enhancing Hallucination Detection with Noise Injection
[ "Litian Liu", "Reza Pourreza", "Sunny Panchal", "Apratim Bhattacharyya", "Yao Qin", "Roland Memisevic" ]
Large Language Models (LLMs) are observed to generate plausible yet incorrect responses, known as hallucinations. Effectively detecting such hallucination instances is crucial for the safe deployment of LLMs. Recent research has linked hallucination to model uncertainty, suggesting to detect hallucinations by measuring dispersion over answer distributions obtained from a set of samples drawn from the model. While using the model's next token probabilities used during training is a natural way to obtain samples, in this work, we argue that for the purpose of hallucination detection, it is overly restrictive and hence sub-optimal. Motivated by this viewpoint, we perform an extensive empirical analysis showing that an alternative way to measure uncertainty - by perturbing hidden unit activations in intermediate layers of the model - is complementary to sampling, and can significantly improve detection accuracy over mere sampling.
[ "Hallucination Detection; Robustness" ]
Reject
https://openreview.net/pdf?id=2bWf4M5tRo
https://openreview.net/forum?id=2bWf4M5tRo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y3Ju19bFLW", "tQhHxIpZW9", "ouhmM2F8Hn", "lhtPqf7sD0", "irhLMlW35b", "i1cs4x0l3U", "erIiyqPRkG", "dzyNF5WlvM", "dNq30INEnG", "adq6JALyVg", "YVZbC2GRzg", "XOZOTL0UyI", "ST7hsEhOrh", "Q5KtF0HOBI", "PjNbjsO3xp", "PgGvGklNQL", "H5AxozIB8a", "GlGQClw8De", "FlEsNRTCAI", "Dlx76ndWAD", "4OzlfS838W", "3r0laRdCYX", "2Qy3UxXGhY", "1mFE5hG8Da" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733209235644, 1733218407389, 1732583770237, 1732646224855, 1733178465814, 1733194773437, 1733081405348, 1732583662304, 1730625783095, 1737524186126, 1734740759813, 1729329783796, 1733181351729, 1732583368827, 1732584726292, 1732584221791, 1732584304057, 1730661041064, 1730588115042, 1732584935434, 1733197404169, 1733206291294, 1731350499678, 1733239346525 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_8zQH" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Area_Chair_YeEB" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_76cD" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_8zQH" ], [ "ICLR.cc/2025/Conference/Submission12340/Area_Chair_YeEB" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_AUc5" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12340/Area_Chair_YeEB" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_76cD" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_drSq" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_S4r6" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_8zQH" ], [ "ICLR.cc/2025/Conference/Submission12340/Reviewer_8zQH" ], [ "ICLR.cc/2025/Conference/Submission12340/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your follow-up question. Below is our current understanding based on the task characteristics.\\n\\nFor CSQA, as a multiple-choice task, responses must adhere to a specific format to be valid. Introducing randomness, either through sampling temperature or noise injection, can sometimes produce invalid outputs, leading to irregularities in the performance trend.\\n\\nIn contrast, TriviaQA is a free-form question-answering task where all responses are treated as valid. This could explain the smoother performance improvements observed with increasing noise magnitude.\\n\\nWhile the optimal noise magnitude varies across datasets, as previously noted, noise injection consistently enhances performance across tasks. We appreciate your insightful question and hope this helps provide further understanding into the versatility of the tasks and the observed trends.\"}", "{\"comment\": \"Thank you for your further explanation!\\n\\nI would like to increase my score to 5, but unfortunately no higher.\"}", "{\"comment\": \"**[Section 4: Statistical Significance]** Thank you for raising this concern. To assess the statistical significance of our results, we report the 95% confidence intervals in the table below. Specifically, we use a bootstrap method to estimate the intervals: we sample five generations per question from a broader pool of 20 generations with replacement and a bootstrap sample size of 25 for GSM8K, TriviaQA, and CSQA. For ProntoQA, we use a bootstrap sample size of 50 due to higher data variability.\\n\\n| Metric | GSM8K | CSQA | TriviaQA | ProntoQA |\\n|----------------------------------|:-------------------------:|:-------------------------:|:-------------------------:|:------------------------:|\\n| Answer Entropy | 72.91 \\u00b1 0.43 | 68.70 \\u00b1 0.52 | 62.74 \\u00b1 0.10 | 65.45 \\u00b1 0.66 |\\n| Answer Entropy w/ Noise | 79.04 \\u00b1 0.44 (+6.13) | 69.89 \\u00b1 0.47 (+1.19) | 64.04 \\u00b1 0.10 (+1.30) | 66.38 \\u00b1 0.64 (+0.93) |\\n\\nWe also conduct a t-test to determine whether the changes are statistically significant. All datasets pass the t-test (significance level: $\\\\alpha = 0.05 $), with the following results: GSM8K ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 20.669 $), CSQA ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 3.553 $), TriviaQA ($( t_\\\\text{crit} = 1.677, t_\\\\text{score} = 19.259 $), and ProntoQA ($ t_\\\\text{crit} = 1.661, t_\\\\text{score} = 2.041 $).\\n\\nGiven the relatively small answer space, calculating entropy using five samples provides sufficient precision to highlight the differences introduced by noise injection. For predictive probability and normalized predictive entropy\\u2014metrics that evaluate a significantly larger space encompassing all possible reasoning and answer sequences\\u2014five generations sampled from our pool of 20 are less likely to yield reliable Monte Carlo estimates. This limitation could potentially be addressed in future studies by increasing the number of samples, expanding the generation pool, or focusing exclusively on the answer string. Nonetheless, under our current setup, our experiments demonstrate that noise injection does not degrade performance under these measures.\", \"questions\": \"**[Prompting and Answer Extraction]** We prompt the model using In-Context Learning examples and extract the final answers based on the formatting provided in these examples. We provide further details in Appendix B.1.\\n\\n**[Greedy Decoding Accuracy]** With greedy decoding GSM8K reports 29.11% accuracy; CSQA reports 62.62% accuracy; TriviaQA reports 73.57% accuracy; ProtoQA reports 76.2% accuracy. \\n\\n**[Difference on Table 2 and Table 3]** The results differ because Table 2 presents GSM8K performance under a single random seed, which is the same seed used in Figures 2 and 3 for the corresponding experiments. In contrast, Table 3 reports the average performance across 20 random seeds.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThe authors have responded - please see if they have addressed your concerns and engage in further discussion to clarify any remaining issues.\\n\\nThanks!\\nAC\"}", "{\"comment\": \"Thank you for addressing my concerns. Given also the comments from the other reviewers, I stand by my score. I believe the paper might benefit from trying to theoretically ground the approach. Furthermore, although the authors show statistically significant improvements on all datasets in the rebuttal, only on one dataset the benefits are very evident, and my question of whether the introduced additional complexity is worth remains.\"}", "{\"comment\": \"I appreciate the authors' responses!\\n\\nFor the first issue, I would really like to see more formalized methodologies at adjusting variances; the current optimality results seemed not convincing as they are highly dependent on the variance chosen, and the choices seemed pretty random and I doubt its generalization.\\n\\nFor the second issue, I am still concerned about the sample sizes being only 5 for computing the answer entropy. I hope the authors could increase the sample size like promised in the next iteration of this paper. Looking forward to the improvements!\\n\\nFor the issues are not full addressed, I would keep my original score.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThe authors have provided responses - do have a look and engage with them in a discussion to clarify any remaining issues as the discussion period is coming to a close in less than a day (2nd Dec AoE for reviewer responses).\\n\\nThanks for your service to ICLR 2025.\\n\\nBest, \\nAC\"}", "{\"comment\": \"Thank you for your comprehensive review and constructive comments. We apologize for the typos and Figure 7 should instead be Figure 2. We have fixed the typos in the updated PDF. Please see our response below.\\n\\nWeaknesses\\n\\n**[Related work]** Thank you for your suggestion. Our work builds upon prior studies linking model uncertainty to hallucination (lines 31-35; lines 495-500) by introducing a new source of randomness. Since we modify intermediate layers, we also review relevant work on hallucination detection using intermediate layer representations (lines 501-508). While we focus on areas most relevant to our contributions, we welcome any specific references to further enrich our discussion.\\n\\n**[Single Model]** Kindly note that we experimented with multiple models: LLAMA2-13B-Chat for our case study and alternative models -- LLAMA2-13B-Chat and Mistral -- in Table 6. \\n\\n**[Intro: Hallucination Definition]** Thank you for raising this concern. In our experimental setup, model responses are coherent (i.e., not repetitive or unreadable) and their correctness cannot be determined without reference (see Line 3 in Table 1). In this context, incorrect answers are considered plausible, which aligns with our definition of hallucinations. We have updated line 161-170 to clarify this.\\n\\n**[Intro: Empirical Validation]** Thank you for raising this concern. We empirically validate our hypothesis that hallucination cases are less robust to noise injection in Figure 2(a). Specifically, the hallucination cases (grey) exhibit higher entropy, indicating greater variance in responses with injected noise. We have updated line 89-90 to clarify this point.\\n\\n**[Section 2: Hidden State Perturbation]** Thank you for raising this concern. To clarify, when h_l is perturbed, we first compute h_{l+1} from the perturbed h_l. If layer l + 1 is selected, we then apply noise to h_{l+1}. Thus, the noise is not only added to the residual stream. We have updated line 143-144, 268-269, 298-299 to better explain this process. \\n\\n**[Section 3: Significance and dataset size]** Table 2, as a case study example, is over one single seed. For the same setup, we demonstrate the standard deviation with 20 random seeds Figure 4. We evaluate on GSM8K test set containing 1319 questions. We have updated line 160-161 to report the dataset size.\\n\\n**[Section 3: Noise Effect on Accuracy.]** Thank you for raising this concern. Kindly note that we do not claim that **within a single generation**, the number of hallucination cases appear less with noise injection. Instead, we argue that the incorrect answers generated during hallucination are **less consistent across generations** with noise injected, making incorrect answers less likely to be selected by majority vote. As a result, this shift improves the likelihood of correct answers being chosen, thereby enhancing accuracy under the majority vote scheme. We update line 338-344 in the manuscript to clarify the explanation. \\n\\n**[Section 3: Separate section for GSM8K]** Thank you for your feedback. Section 3 serves as our methodology section, detailing our approach through a case study on GSM8K. Section 4, by contrast, presents experimental results across multiple datasets. This distinction aims to improve clarity, but we are open to suggestions for reducing potential repetition.\"}", "{\"summary\": \"This paper proposes enhancing the performance of hallucination detection by perturbing hidden unit activations in intermediate layers for sampling-based methods. Unlike existing approaches that measure uncertainty through prediction layer sampling, this work introduces noise to intermediate layer representations and combines this noise injection with prediction layer sampling to improve hallucination detection. Extensive experiments demonstrate the effectiveness of this method across various datasets, uncertainty metrics, and model architectures.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The motivation for introducing randomness in the hidden layer is intuitive and makes a lot of sense. The paper is well-written and easy to implement.\\n2. The concept of perturbing intermediate representations to enhance the separability between hallucinated and non-hallucinated generation is overall innovative.\\n3. Extensive experiments are provided to demonstrate the effectiveness of noise injection in enhancinghallucination detection across various datasets and uncertainty metrics.\", \"weaknesses\": \"1. The performance improvement from noise injection is insignificant in most cases. As illustrated in Table 3, there is an insignificant increase in Predictive Entropy and Normalized Entropy, with the most notable improvement occurring only in the answer entropy of the GSM8K dataset.\\n2. The author argues that the effects of noise injection and prediction layer sampling are complementary. However, this claim is not strongly substantiated by the results shown in Figure 3. A Pearson correlation of 0.67 does not clearly indicate a complementary relationship between these two sources of randomness. Even without introducing noise, drawing entropy with temperatures T=0.5 and T=1.0 will show similar positive correlations.\\n3. The author introduced additional hyperparameters $\\\\alpha$, $\\\\ell_1$ and $\\\\ell_2$ to adjust the randomness of sampling. However, this comparison may be unfair, as performance could also be enhanced by optimizing parameters such as temperature T, top_P, and top_K.\\n4. Theoretical insight is limited in explaining why perturbations at the hidden layer are more effective than output layer sampling for self-consistency based hallucination detection methods. In my opinion, using a larger temperature is essentially the same as modifying the feature space to increase randomness.\", \"questions\": \"1. Is there any explanation why the performance is more significant only when combined with Answer Entropy?\\n2. I like the results shown in Table 4, but I would appreciate it if the author can proivde more experiments in other datasets, such as CSQA or TriviaQA.\\n3. I would like to see more perturbation based methods. For example, what will happen if we perturb the input query for those samping based methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper proposes a new method for detecting when an LLM is hallucinating based on the empirical observation that higher model uncertainty is correlated with hallucinations. Specifically, the paper proposes introducing noise into intermediate layers of the model to amplify the separation between hallucinated and non-hallucinated outputs on uncertainty measures. While the approach is new and interesting, and the paper includes relevant ablation results, the empirical evaluation though promising, is insufficient to convincingly demonstrate the generalizability of the approach. The method was only evaluated on a limited combination of model architectures and datasets (initially 1 architecture x 4 datasets and separately 2 architectures x 1 dataset), limited improvements were seen in several cases, and no theoretical justifications are provided. Thus, overall the paper falls below the bar of acceptance for ICLR, though the authors are encouraged to strengthen their empirical evaluations and resubmit to a future conference.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers all appreciated the importance of the hallucination detection problem being solved and the promising results but all had concerns about the empirical evaluations, in terms of whether the improvements observed were significant and whether they generalize to other model architectures and tasks. The authors provided additional results including standard deviations and statistical tests during the rebuttal and one additional evaluation with Mistral, but these were not sufficient to convince the reviewers as there was also no theoretical justification provided (either in the original paper or the revision). Some other issues with presentation (drSq) and further analysis on the noise used (76cD, 8zQH) were addressed by the rebuttal. Overall, the lack of a comprehensive evaluation on multiple datasets and model combinations to convincingly demonstrate the generalizability of the method is a major limitation of the current work and needs to be addressed before the paper is ready for publication.\"}", "{\"summary\": \"This work builds upon the idea that the variability of LLM answers to a question is most pronounced when the LLM does not know the correct answer. By perturbing the intermediate LLM layers, they show this gap in variability tends to increase, facilitating the detection of hallucinations.\\n\\nThe work is largely empirical. Most of the results are shown for the GSM8K dataset, where the method appears to work best. On three other datasets, results are still positive but much more contained. Table 3 would benefit from reporting standard deviations over the multiple runs. Right now it is not clear if the difference in entropy over CSQA, TriviaQA and ProntoQA is significant.\\n\\nI appreciate the insight this work brings in terms of showing that the epistemic uncertainty induced by perturbing intermediate layers can provide complementary effects to the aleatoric uncertainty induced by last layer for the purpose of detecting hallucinations. However, considering the complications introduced - the method needs access to the intermediate layers of the model, it may be sensitive to the noise magnitude (the Appendix in this direction is not particularly extensive) and to which layers are perturbed - I wonder if the improvements are in fact worth the effort. \\n\\nI'd suggest the authors to provide a comprehensive evaluation across many datasets, including standard deviation of the results, to show that the method works robustly in multiple instances.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Perturbing intermediate layers seems to increase the uncertainty gap between instances where the model is correct and where it is not.\", \"The authors make an effort in ablating their results, in particular to distinguish the noise effect induced by intermediate vs last layer.\"], \"weaknesses\": [\"Results seem significant on GSM8K, less so on the other datasets. Standard deviations are missing.\", \"It may be worth extending the analysis on the sensitivity to the noise magnitude to better gauge the robustness of the algorithm. In the main paper, the authors only use either no noise or noise magnitudes 0.01 and 0.05, and only for one dataset. In the Appendix, results for another dataset are presented, but at different noise magnitudes. It would be good to provide results for a sufficient amount of noise magnitudes and all datasets.\"], \"questions\": [\"The authors refer to Figure 7 multiple times throughout the text. I believe this is a type, as there is no Figure 7. Should this be Figure 2 instead?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback. Regarding the theoretical grounding, we have addressed this concern by elaborating on how adjusting parameters like $T $, complements noise injection. Specifically, temperature modifies the sampling distribution while preserving the token likelihood order (e.g., $\\\\text{Pr}(\\\\text{token}_A) > \\\\text{Pr}(\\\\text{token}_B) $), whereas noise injection can reverse this order, offering a complementary effect. We have revised lines 80\\u201384 and 226\\u2013229 in the manuscript to incorporate this discussion.\\n\\nAs for the additional complexity, we have already clarified that it represents a trade-off. Its value depends on the application context, and practitioners may weigh this against their specific goals and constraints. Moreover, the proposed approach does not introduce additional inference delay, ensuring practical applicability in latency-sensitive scenarios.\"}", "{\"comment\": \"Thank you for your comments and suggestions. Please see our response to your concerns below.\\n\\n**[Fixed Distribution]** While we exemplify our method with a uniform distribution, the mean and variance **do** vary as we select different noise magnitudes. As clarified in lines 464\\u2013466 of Section 4.5, the sampling distribution depends on the specific LLM, enabling adaptation and supporting generalizability. \\n\\n**[Statistical Significance]** Thank you for raising this concern. To assess the statistical significance of our results, we report the 95% confidence intervals in the table below. Specifically, we use a bootstrap method to estimate the intervals: we sample five generations per question from a broader pool of 20 generations with replacement and a bootstrap sample size of 25 for GSM8K, TriviaQA, and CSQA. For ProntoQA, we use a bootstrap sample size of 50 due to higher data variability. \\n\\n| Metric | GSM8K | CSQA | TriviaQA | ProntoQA |\\n|----------------------------------|:-------------------------:|:-------------------------:|:-------------------------:|:------------------------:|\\n| Answer Entropy | 72.91 \\u00b1 0.43 | 68.70 \\u00b1 0.52 | 62.74 \\u00b1 0.10 | 65.45 \\u00b1 0.66 |\\n| Answer Entropy w/ Noise | 79.04 \\u00b1 0.44 (+6.13) | 69.89 \\u00b1 0.47 (+1.19) | 64.04 \\u00b1 0.10 (+1.30) | 66.38 \\u00b1 0.64 (+0.93) |\\n\\nWe also conduct a t-test to determine whether the changes are statistically significant. All datasets pass the t-test (significance level: $ \\\\alpha = 0.05 $), with the following results: GSM8K ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 20.669$), CSQA ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 3.553 $), TriviaQA ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 19.259 $), and ProntoQA ($ t_\\\\text{crit} = 1.661, t_\\\\text{score} = 2.041 $).\\n\\nGiven the relatively small answer space, calculating entropy using five samples provides sufficient precision to highlight the differences introduced by noise injection. For predictive probability and normalized predictive entropy\\u2014metrics that evaluate a significantly larger space encompassing all possible reasoning and answer sequences\\u2014five generations sampled from our pool of 20 are less likely to yield reliable Monte Carlo estimates. This limitation could potentially be addressed in future studies by increasing the number of samples, expanding the generation pool, or focusing exclusively on the answer string. Nonetheless, under our current setup, our experiments demonstrate that noise injection does not degrade performance under these measures.\"}", "{\"comment\": \"Thank you for your comments and suggestions. We apologize for the typo, and Figure 7 should be Figure 2 instead. We cleaned up the typos in the updated draft. Please see our response below.\\n\\nWeaknesses\\n\\n**[statistical significance and t-test]** Thank you for raising this concern. To assess the statistical significance of our results, we report the 95% confidence intervals in the table below. Specifically, we use a bootstrap method to estimate the intervals: we sample five generations per question from a broader pool of 20 generations with replacement and a bootstrap sample size of 25 for GSM8K, TriviaQA, and CSQA. For ProntoQA, we use a bootstrap sample size of 50 due to higher data variability. \\n\\n| Metric | GSM8K | CSQA | TriviaQA | ProntoQA |\\n|----------------------------------|:-------------------------:|:-------------------------:|:-------------------------:|:------------------------:|\\n| Answer Entropy | 72.91 \\u00b1 0.43 | 68.70 \\u00b1 0.52 | 62.74 \\u00b1 0.10 | 65.45 \\u00b1 0.66 |\\n| Answer Entropy w/ Noise | 79.04 \\u00b1 0.44 (+6.13) | 69.89 \\u00b1 0.47 (+1.19) | 64.04 \\u00b1 0.10 (+1.30) | 66.38 \\u00b1 0.64 (+0.93) |\\n\\nWe also conduct a t-test to determine whether the changes are statistically significant. All datasets pass the t-test (significance level: $\\\\alpha = 0.05 $), with the following results: GSM8K ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 20.669 $), CSQA ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 3.553 $), TriviaQA ($( t_\\\\text{crit} = 1.677, t_\\\\text{score} = 19.259 $), and ProntoQA ($ t_\\\\text{crit} = 1.661, t_\\\\text{score} = 2.041 $).\\n\\nGiven the relatively small answer space, calculating entropy using five samples provides sufficient precision to highlight the differences introduced by noise injection. For predictive probability and normalized predictive entropy\\u2014metrics that evaluate a significantly larger space encompassing all possible reasoning and answer sequences\\u2014five generations sampled from our pool of 20 are less likely to yield reliable Monte Carlo estimates. This limitation could potentially be addressed in future studies by increasing the number of samples, expanding the generation pool, or focusing exclusively on the answer string. Nonetheless, under our current setup, our experiments demonstrate that noise injection does not degrade performance under these measures.\\n\\n**[Mistral Experiments]** Thank you for the suggestion. In response to your request, we conducted additional TriviaQA experiments on Mistral, using noise magnitudes of 0 and 0.02, consistent with our GSM8K experiments on Mistral. Without noise injection, hallucination detection AUROC is 66.42, and with noise injection, AUROC improves to 69.44. Due to time and computational constraints, we focus on this representative evaluation, which further demonstrate the method\\u2019s generality.\\n\\n**[Figure Illustration]** Thank you for your suggestion. Figure 2 is based on only 5 generations, which limits the granularity of the entropy values. We are open to any further suggestions on enhancing visual understanding.\"}", "{\"comment\": \"Thank you for your valuable questions and comments! Please see our response below.\\n \\nWeaknesses\\n\\n**[Statistical Significance]** Thank you for raising this concern. To assess the statistical significance of our results, we report the 95% confidence intervals in the table below. Specifically, we use a bootstrap method to estimate the intervals: we sample five generations per question from a broader pool of 20 generations with replacement and a bootstrap sample size of 25 for GSM8K, TriviaQA, and CSQA. For ProntoQA, we use a bootstrap sample size of 50 due to higher data variability. \\n\\n| Metric | GSM8K | CSQA | TriviaQA | ProntoQA |\\n|----------------------------------|:-------------------------:|:-------------------------:|:-------------------------:|:------------------------:|\\n| Answer Entropy | 72.91 \\u00b1 0.43 | 68.70 \\u00b1 0.52 | 62.74 \\u00b1 0.10 | 65.45 \\u00b1 0.66 |\\n| Answer Entropy w/ Noise | 79.04 \\u00b1 0.44 (+6.13) | 69.89 \\u00b1 0.47 (+1.19) | 64.04 \\u00b1 0.10 (+1.30) | 66.38 \\u00b1 0.64 (+0.93) |\\n\\nWe also conduct a t-test to determine whether the changes are statistically significant. All datasets pass the t-test (significance level: $\\\\alpha = 0.05 $), with the following results: GSM8K ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 20.669 $), CSQA ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 3.553 $), TriviaQA ($( t_\\\\text{crit} = 1.677, t_\\\\text{score} = 19.259 $), and ProntoQA ($ t_\\\\text{crit} = 1.661, t_\\\\text{score} = 2.041 $).\\n\\nGiven the relatively small answer space, calculating entropy using five samples provides sufficient precision to highlight the differences introduced by noise injection. For predictive probability and normalized predictive entropy\\u2014metrics that evaluate a significantly larger space encompassing all possible reasoning and answer sequences\\u2014five generations sampled from our pool of 20 are less likely to yield reliable Monte Carlo estimates. This limitation could potentially be addressed in future studies by increasing the number of samples, expanding the generation pool, or focusing exclusively on the answer string. Nonetheless, under our current setup, our experiments demonstrate that noise injection does not degrade performance under these measures.\\n\\n**[Complementary Effect and Pearson Correlation]** We agree that sampling at different temperatures may yield a similar Pearson correlation. However, such correlations still reflect complementary effects between different sampling temperatures. While combining different temperatures to leverage their complementary effect is not straightforward, our algorithm demonstrates how noise injection and temperature-based sampling can be effectively combined to leverage their complementary effects.\\n\\n**[Fair Comparison against Adjusting T, Top-P, Top-K]** We agree that performance could be enhanced by optimizing hyperparameters such as $T $, $\\\\text{top-P} $, and $ \\\\text{top-K} $. To ensure a fair comparison, we present results with different $T $ values while using the default values for$ \\\\text{top-P}$ and $ \\\\text{top-K} $ in Table 4. We observe that noise injection improves performance across all tested temperatures.\\nExhaustively testing all possible configurations is infeasible. However, theoretically, adjusting $ T $, $ \\\\text{top-P} $, and $ \\\\text{top-K} $ would complement noise injection. Specifically, these parameters alter the sample distribution but preserve token likelihood ordering (e.g., if $ \\\\text{Pr}(token_A) > \\\\text{Pr}(token_B) $, this order remains unchanged). In contrast, noise injection can reverse this order, offering a complementary effect.\\n\\n**[Theoretical Insight and Comparison with Increasing T]** Thank you for the feedback. We do not claim that perturbations at the hidden layer are more effective than output layer sampling. Instead, we suggest that combining both can be more effective due to their complementary effects, as discussed theoretically in our response to [Adjusting T, Top-P, Top-K] . We also edit line 80-84, 226-229 to include the discussion. \\nEmpirically, we show in Table 4 that noise injection and temperature adjustments have distinct effects. Specifically, increasing \\\\( T \\\\) from 0.5 to 0.8 or 1.0 reduces hallucination detection AUROC, but adding noise at \\\\( T = 0.5 \\\\) introduces a different source of randomness and improves performance (lines 425\\u2013428).\"}", "{\"comment\": \"Questions:\\n\\n**[Explanation of Better Significance on Answer Entropy]** We address the question in response to [Statistical Significance]\\n\\n**[Ablation study in Table 4 on additional datasets]** Thank you for your suggestions. Below we ablate on temperature and noise magnitude with CSQA dataset. As in Table 4 in the paper, noise injection improves detection effectiveness compared to no noise. We will updated the manuscript to include the results. \\n| | Noise Magnitude = 0 | Noise Magnitude = 0.01 | Noise Magnitude= 0.05 |\\n|----------------|----------------------------|---------------------------------|------------------------------|\\n| T = 0.2 | 60.93 | 61.70 | 62.69 |\\n| T = 0.5 | 65.49 | 67.56 | 67.38 |\\n| T = 0.8 | 68.70 | 68.94 | 69.89 |\\n| T = 1.0 | 69.42 | 71.58 | 70.14 |\", \"table\": \"Ablation on Temperature and Noise Magnitude. Evaluation on CSQA dataset with Llama2-13B-chat model across 5 generations.\\n\\n**[Other Perturbation Methods]** Thank you for the suggestion. While input query perturbations are interesting, they are beyond the scope of this work. We focus on model perturbations and leave this direction for future exploration.\"}", "{\"summary\": \"The paper addresses the challenge of detecting \\\"hallucinations\\\" in Large Language Models (LLMs). The study proposes a novel technique to improve hallucination detection by adding \\\"noise injection\\\" to intermediate layers of the model, creating an additional source of randomness during response generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper touches a critical issue in current LLMs. Any progress in error detection is critical to the field.\"], \"weaknesses\": [\"The paper presents some notable weaknesses in both the presentation of content and in aspects of the methodology and experimental design. Below are specific areas of concern:\", \"The review of related work is somewhat shallow. There is substantial literature on detecting hallucinations in models, yet this paper does not adequately differentiate its approach or clarify how it builds upon existing insights.\", \"All experiments are conducted on a single model, which limits the generalizability of the conclusions. Testing across multiple models would strengthen the claims.\", \"## Intro:\", \"The term \\\"hallucinations\\\" is only briefly defined as instances where a model generates \\u201cplausible yet incorrect responses.\\u201d However, it remains unclear if this term includes all model errors or just those based on plausibility. The paper does talk about plausibility further, leaving the reader uncertain about what qualifies as a hallucination.\", \"You refer to figure 7 which is in the appendix. Core results should be presented in the main paper, and anything you talk about in the intro is definitely core. Note that reviewers are not required to read them but in your case it was fundamental to understand your results. This note is relevant for the rest of the paper as well.\", \"We empirically validate the hypothesis in Figure 7 (a) -> how exactly the figure validates your hypothesis? Readers need a step-by-step walkthrough to see how Figure 7(a) substantiates the hypothesis.\", \"## Section 2:\", \"The definition of $f$ is a bit vague and as a results, the method as well. The model's output is not a function of all of its hidden states, because each hidden state $l$ is a function of the previous hidden state $l-1$. I think that maybe you could say that if you talk about the residual stream that sums all hidden states (because later you talk about mlp output), but it is very not clear at this point of reading.\", \"Because of that, it's not clear what happens when you replace $h_t^l$ with a noised version. Do you recompute $h_t^{l+1}$ to get a noised version or do you just noise the clean version? This needs to be clearly explained. If you add the noise to the MLP output which in turn simply goes to the residual stream, and you don't recompute the following MLPs in higher layers after adding noise, then this is just equivalent to add noise K times (where K are the number of layers you noised) to the residual stream, without significance the the specific layers that are noised, because the unembedding layer simply takes the residual stream after the final layer.\", \"## Section 3:\", \"Table 2 lacks information on statistical significance, including standard deviations and the number of seeds used for experiments. Additionally, there is no indication of the dataset size.\", \"he statement, \\u201cThis supports our intuition that incorrect answers are less robust to noise injection\\u2026\\u201d appears without prior context. While there is mention of hallucinations having higher entropy, there is no discussion that wrong answers may appear less after noise injections. Why does this happens?\", \"It was not clear to me why you need a separate section for GSM8K as experiments are later conducted across multiple datasets, making this section feel repetitive.\", \"## Section 4:\", \"The paper lacks a clear presentation of noise boundaries and statistical significance tests, which raises concerns about the reliability of findings. The difference between the proposed methods and baselines is small, and it is unclear how significant these differences are. Only Figure 4 provides such comparisons for GSM8K, while other datasets are not covered.\", \"Some other typos etc.:\", \"Links to figures/equations are broken.\", \"Line 118: \\\"**an** uncertainty metric\\\"\", \"Line 122 sentence is not grammatically correct\", \"Line 289 \\\".,\\\"\", \"Figure 7 caption: \\\"Rest of setup up follows Figure 7 (b)\\\" -> typo?\", \"I believe that all of these issue could be fixed in a revision (not sure that in the short time of the rebuttal period) and then it will be a valuable research paper.\"], \"questions\": [\"How do you extract the final answers from the long answer? How do you make sure it is always in the end? Do you do some sort of prompt engineering or few shot for this\", \"What is the acc of the model in greedy decoding?\", \"Why are the results on GSM8K are different in table 2 and 3? What is the difference in the setting?\", \"\\\"For each dataset, we select the temperature within T = {0.2, 0.5, 0.8, 1.0} which optimizes the model accuracy on this dataset\\\" - on the validation dataset?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes to inject noise in the intermediate representations to enhance hallucination detection. The method is mainly tested on Llama2 on 4 different datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper flows well with detailed explanations.\", \"Ablation experiments are thorough and extensive.\", \"The problem of Hallucination detection is crucial in recent LLM studies.\"], \"weaknesses\": [\"My main concern is the soundness of the experimental results. Although the authors have shown the std of experiments in Figure 4, this is only shown for the dataset, GSM8K, which had the greatest improvement. However, considering that the gain in the other three datasets is relatively smaller, I would like to see the std values for other datasets too. Also, please conduct a t-test on the improvements.\", \"The authors tested their method mainly on Llama2-13B-chat. Although the experiment on Mistral has been provided in Table 6, this is only done on GSM8K. I would like to see a full table of experiments on other datasets.\", \"The message of Figure 2 (b) is somewhat unclear to me. I don't think the figures demonstrate better separability between non-hallucination and hallucination. Maybe a more fine-grained histogram would show a better picture?\", \"(minor) There are some grammatical issues in writing. I suggest using Grammarly or ChatGPT to refine the manuscript.\", \"(minor) There is no Figure 7 while the manuscript keeps referring to it. I'm assuming it should have been Figure 2, but please correct this.\", \"Overall, the paper is well written. However, my main concern is the significance and generality of the approach. If my concerns are resolved, I would be happy to adjust my scores.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your comments and suggestions. We apologize for the typo -- Figure 7 should be Figure 2 instead. Please see our response below.\\n\\n**[Tradeoff of Complication and Performance]** Thank you for highlighting the trade-offs involved. We agree that the method introduces additional considerations. As such, the decision to adopt this approach would indeed depend on the specific application and its requirements. We appreciate your suggestion and acknowledge that future work could further explore these aspects to refine the effort-benefit tradeoff.\\n\\nWeaknesses\\n\\n**[Statistical Significance]** Thank you for raising this concern. To assess the statistical significance of our results, we report the 95% confidence intervals in the table below. Specifically, we use a bootstrap method to estimate the intervals: we sample five generations per question from a broader pool of 20 generations with replacement and a bootstrap sample size of 25 for GSM8K, TriviaQA, and CSQA. For ProntoQA, we use a bootstrap sample size of 50 due to higher data variability. \\n\\n| Metric | GSM8K | CSQA | TriviaQA | ProntoQA |\\n|----------------------------------|:-------------------------:|:-------------------------:|:-------------------------:|:------------------------:|\\n| Answer Entropy | 72.91 \\u00b1 0.43 | 68.70 \\u00b1 0.52 | 62.74 \\u00b1 0.10 | 65.45 \\u00b1 0.66 |\\n| Answer Entropy w/ Noise | 79.04 \\u00b1 0.44 (+6.13) | 69.89 \\u00b1 0.47 (+1.19) | 64.04 \\u00b1 0.10 (+1.30) | 66.38 \\u00b1 0.64 (+0.93) |\\n\\nWe also conduct a t-test to determine whether the changes are statistically significant. All datasets pass the t-test (significance level: $\\\\alpha = 0.05 $), with the following results: GSM8K ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 20.669 $), CSQA ($ t_\\\\text{crit} = 1.677, t_\\\\text{score} = 3.553 $), TriviaQA ($( t_\\\\text{crit} = 1.677, t_\\\\text{score} = 19.259 $), and ProntoQA ($ t_\\\\text{crit} = 1.661, t_\\\\text{score} = 2.041 $).\\n\\nGiven the relatively small answer space, calculating entropy using five samples provides sufficient precision to highlight the differences introduced by noise injection. For predictive probability and normalized predictive entropy\\u2014metrics that evaluate a significantly larger space encompassing all possible reasoning and answer sequences\\u2014five generations sampled from our pool of 20 are less likely to yield reliable Monte Carlo estimates. This limitation could potentially be addressed in future studies by increasing the number of samples, expanding the generation pool, or focusing exclusively on the answer string. Nonetheless, under our current setup, our experiments demonstrate that noise injection does not degrade performance under these measures.\\n\\n**[Sensitivity on Noise Magnitude]** Thank you for the suggestion. In response, we have conducted a sensitivity analysis of noise magnitude on both CSQA and TriviaQA datasets. While we observe that the optimal noise magnitude varies across datasets, the results indicate that noise injection over a broad range of magnitudes consistently improves performance. We will update the manuscript to include these experiments, providing results across multiple noise magnitudes to ensure a comprehensive evaluation.\\n| Noise Magnitude | TriviaQA AUROC | CSQA AUROC |\\n|:------------------------:|:---------------------:|:-------------------:|\\n| 0 | 61.66 | 60.93 |\\n| 0.01 | 62.06 | 61.70 |\\n| 0.02 | 62.11 | 62.87 |\\n| 0.03 | 62.29 | 63.34 |\\n| 0.04 | 62.60 | 62.61 |\\n| 0.05 | 63.18 | 62.69 |\\n| 0.06 | 63.41 | 63.84 |\\n| 0.07 | 63.96 | 62.99 |\\n| 0.08 | 64.37 | 63.34 |\\n| 0.09 | 64.83 | 63.48 |\\n| 0.10 | 65.07 | 63.18 |\", \"table\": \"Sensitivity Analysis of Noise Magnitude on TriviaQA and CSQA.\"}", "{\"comment\": \"Thank you for your thoughtful feedback!\\n\\nFor the first concern, we appreciate your suggestion. Our current approach selects the per-model variance magnitude based on GSM8K and applies it consistently across other datasets. We will reinforce this point in the manuscript to ensure clarity. While we find this approach effective, we are open to further suggestions for refining the variance selection. Additionally, our results do not heavily depend on the specific variance chosen. For your reference, the sensitivity analysis table below demonstrates the robustness of our findings across a range of variance values. While we observe that the optimal noise magnitude varies across datasets, the results indicate that noise injection over a broad range of magnitudes consistently improves performance. \\n\\n| Noise Magnitude | TriviaQA AUROC | CSQA AUROC |\\n|:------------------------:|:---------------------:|:-------------------:|\\n| 0 | 61.66 | 60.93 |\\n| 0.01 | 62.06 | 61.70 |\\n| 0.02 | 62.11 | 62.87 |\\n| 0.03 | 62.29 | 63.34 |\\n| 0.04 | 62.60 | 62.61 |\\n| 0.05 | 63.18 | 62.69 |\\n| 0.06 | 63.41 | 63.84 |\\n| 0.07 | 63.96 | 62.99 |\\n| 0.08 | 64.37 | 63.34 |\\n| 0.09 | 64.83 | 63.48 |\\n| 0.10 | 65.07 | 63.18 |\", \"table\": \"Sensitivity Analysis of Noise Magnitude on TriviaQA and CSQA.\\n\\nFor the second concern, we note that a sample size of 5 falls within the range used in well-established prior work (see Figure 3(a) in [1]). In addition, in Figure 4 in our manuscript, we have already explored higher sample sizes and found the results to be consistent. While further increasing the number of samples could offer additional insights, it also introduces significant computational cost, which is an important practical consideration.\\n\\nWe hope this clarifies our choices and demonstrates the validity of our approach.\\n\\n[1] Kuhn, Lorenz, Yarin Gal, and Sebastian Farquhar. \\\"Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation.\\\" arXiv preprint arXiv:2302.09664 (2023).\"}", "{\"comment\": \"Thank you for your prompt responses.\\n\\nFor the first issue, may I have an explanation regarding why with the magnitude of noise increasing, the performances on TriviaQA consistently increases, while for CSQA, the effects are more nuanced and probably less straightforward to make sense of?\"}", "{\"summary\": \"This paper explores the potential of injecting noise into the intermediate layer outputs of LLMs to induce greater uncertainty when they are prone to hallucination.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Good logical flow and storytelling.\", \"Clear presentation of experimental results and straightforward mathematical formulations.\"], \"weaknesses\": [\"Lack of theoretical justification for the noise injection approach: Although the injection method is simplistic, the authors do not clarify why they chose to sample noise from a uniform distribution with fixed mean and variance across LLMs. This choice raises concerns about the generalizability of the results.\", \"No evaluation of statistical significance: The reported performance improvements with noise injection are marginal, and the absence of confidence intervals weakens claims regarding these improvements.\", \"Overall, I feel that this paper is still not ready for publication.\"], \"questions\": \"No specific question from me. But my concerns are majorly stated in the previous section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful engagement and for revisiting your score. We truly appreciate the time and effort you've put into reviewing our work and providing valuable feedback. If there are any additional suggestions or areas you feel we could further refine, we'd be glad to take them into account for future iterations.\"}" ] }
2bIQBDSfRk
DenseAttention: No-Compromise Exact All $N \times N$ Interactions Algorithm with $O(N)$ Space and Time Complexity
[ "Andrew Argatkiny" ]
The ubiquitous Transformer architecture suffers from two main bottlenecks: 1) low computational and memory efficiency, leading to suboptimal hardware utilization, and 2) quadratic time complexity with respect to sequence length $N$, making it slow and costly for large data contexts. We propose a novel DenseAttention Network architecture, a straightforward simplification of the standard Transformer block that addresses these issues and serves as a drop-in replacement for language modeling tasks. We eliminate memory-bound components in DenseAttention, including Softmax, masking, one skip connection, and both LayerNorms, as well as key, value, and output projection matrices, as they become redundant. Despite these removals, it maintains exact $N \times N$ pairwise interactions between tokens. By exploiting the associativity of matrix multiplications, DenseAttention can be computed with $O(N^2d)$ or $O(Nd^2)$ time and space complexity, depending on the context. To handle the absence of Softmax and prevent numerical instability, we introduce MaxNormActivation at both ends of the Transformer block. We also devise Cosine Relative Positional Embeddings as a computationally efficient replacement for RoPE, and simple LocalAttention variations of the block to help the model focus on details in extremely long contexts. DenseAttention competes with FlashAttention in speed on small sequences and outperforms it by orders of magnitude on large contexts. We pre-train encoder language models on sequences up to 16K in length, which perform similarly or better than baseline BERT-large, while significantly improving speed and efficiency. Finally, we achieve state-of-the-art on the LRA benchmark among the Transformer-based architectures.
[ "self-attention", "deep learning", "transformer architecture", "nlp", "efficient transformers", "DenseAttention", "long context", "Long Range Arena" ]
Reject
https://openreview.net/pdf?id=2bIQBDSfRk
https://openreview.net/forum?id=2bIQBDSfRk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zDdAMW0NMx", "xkMa1plTTA", "xGecxQbgo2", "vQHIhTMKjK", "uxPLtbzc6g", "tnFURiRKfC", "tUvBdkhdo2", "tUpFRx5nBv", "pGcFVvpicL", "ovpV0qZoxM", "mhGqTrQYAh", "ij4BAHYtBO", "hnJVfgM4Nn", "foVsIAO9FX", "fim3GcAL8r", "eNLBauCF9J", "cVCT2VGfjd", "auViObrYsv", "ZCavc5OTg9", "YpsQ96Attk", "XvmpHNMkMj", "XJEzmaRy8k", "X593L1e4LL", "Vo3IuaJ8w9", "VaIMprPTsw", "UrbMYssA0z", "UgijnwnEZS", "TvKLUgz2vl", "POfbnwq0mj", "MmwZFCIsZQ", "MK9BWHr34a", "LlUfgk3cgS", "LkFhT4ABEv", "JdwYSZV8jL", "JMNzEYnsOM", "IkavGOXXm7", "IJGTKziMvn", "I6LWewzUWO", "I5IWoPgbMR", "EsNry6tV7e", "B0V8KVIUfB", "AGwOKdl6d9", "ACEbGMUnpV", "A7y2eJdnE8", "7Ta7z79Rza", "7Q9RBjHV9g", "3wfNmA9BKR", "1T7LnfB48U", "0vJ1U5d9cF" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1732693619464, 1732905129600, 1733299597745, 1732694956169, 1732517830182, 1733168967934, 1733202884768, 1732904324615, 1734557446429, 1732516158116, 1733185979898, 1729383657457, 1732693912335, 1732924552637, 1732926714235, 1732906704595, 1733167285093, 1732519650340, 1732905632065, 1732695985480, 1732516072266, 1732904566730, 1732519026120, 1730700192667, 1733203118569, 1733299491123, 1732693955281, 1732515905230, 1732517470824, 1730190707932, 1732519837330, 1732696459976, 1732926990144, 1733114038923, 1733185906109, 1732515303963, 1733161721136, 1733108904850, 1733203354506, 1733104269290, 1730748251435, 1732517978328, 1732733138224, 1732695583004, 1733187097192, 1732696248688, 1732904846444, 1737524271750, 1732905945023 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Area_Chair_bjZq" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Reviewer_ruBJ" ], [ "ICLR.cc/2025/Conference/Submission13608/Reviewer_rVwA" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Reviewer_HAtu" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Reviewer_oGkz" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Reviewer_HAtu" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Reviewer_ruBJ" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Reviewer_rVwA" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13608/Authors" ] ], "structured_content_str": [ "{\"title\": \"General Response. Part 1. Discussion of Linear Transformers in relation to DenseAttention\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely thank you for the precious time, insightful feedback and constructive suggestions which helped to clarify and enhance the exposition of our work.\\n\\nHere we would like to address several key commonly shared comments.\\n\\n---\\n\\n**Discussion of Linear Transformers in relation to DenseAttention**\\n\\nThe majority of reviewers suggested to put DenseAttention into perspective with Linear Transformer class of algorithms. Here\\u2019s the extended discussion comparing Linear Transformer and its derivatives with DenseAttention Network from the architectural standpoint. We present it below in full and will include it in the revised version of the paper.\\n\\nGiven entries $Q_i, K_j, V_j \\\\in \\\\mathbb{R}^{1 \\\\times d}$ of matrices $\\\\mathbf{Q}, \\\\mathbf{K}$ and $\\\\mathbf{V}$, standard softmax attention for input i can be reformulated as\\n\\n\\n$ \\n\\\\begin{equation}\\n A_i = \\\\frac{\\\\sum_{j=1}^N \\\\text{Sim}(Q_i, K_j)V_j}{\\\\sum_{j=1}^N \\\\text{Sim}(Q_i, K_j)} \\\\in \\\\mathbb{R}^{1 \\\\times d}\\n\\\\end{equation},\\n$\\n\\nwhere $\\\\text{Sim}(Q_i, K_j)=\\\\text{exp}(Q_i K_j^\\\\top)$. Conceptually, linear attention class of algorithms, described in [1] and built upon in numerous subsequent works, approximates or replaces this similarity function with separable kernel $\\\\text{Sim}(Q_i, K_j)=\\\\mathcal{K}(Q_i, K_j)=\\\\phi(Q_i) \\\\phi(K_j^\\\\top)$, where $\\\\phi: \\\\mathbb{R}^d \\\\to \\\\mathbb{R}_{+}^{r}$ maps query and key vectors to non-negative vectors with possibly different dimension r.\\n\\nHence, the attention mechanism becomes:\\n\\n $A_i = \\\\frac{\\\\sum_{j=1}^N \\\\phi(Q_i) \\\\phi(K_j^\\\\top)V_j}{\\\\sum_{j=1}^N \\\\phi(Q_i) \\\\phi(K_j^\\\\top)} = \\\\frac{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)V_j}{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)}, \\\\quad (1)$ \\n\\nwhich can be computed in linear time.\\n\\nThe function $\\\\phi(\\\\cdot)$ can take various forms, such as 1 + ELU [1], ReLU [2], squared ReLU [3], Taylor [4-6] or Random Feature [7-8] expansions, and even MLPs trained to mimic softmax attention [6]. They aim to approximate softmax without its explicit calculation when being applied jointly to queries and keys, or to retain its properties, most importantly, non-negativity of resulting dot products $\\\\phi(Q_i) \\\\phi(K_j^\\\\top)$. \\n\\n\\nThe latter property, together with reweighting attention scores (denominator in the formula 1) are defining for Linear Transformer algorithms. Absence of scaling by $\\\\frac{1}{\\\\phi(Q_i) \\\\sum_{j=1}^n \\\\phi(K_j^\\\\top)}$ leads to numerical instabilities, and the scaling factor itself is not guaranteed to be bounded without non-negative $\\\\phi(\\\\cdot)$. However, both mappings $\\\\phi(\\\\cdot)$ and memory intensive non-MatMul operations for reweighting contribute to subpar speed and computational efficiency in comparison with ordinary and fast self-attention algorithms on all but large context sizes.\\n\\nWe forgo both transforming $\\\\mathbf{Q}, \\\\mathbf{K}$ by $\\\\phi(\\\\cdot)$ and reweighting in DenseAttention as we believe the main factor of success of Transformer is the ability of all $N \\\\times N$ interactions between tokens. It results in an improved computational efficiency and simpler design which can be expressed entirely by matrix multiplications:\\n\\n$\\\\mathbf{A} = \\\\mathbf{Q} \\\\mathbf{K}^\\\\top \\\\mathbf{V} $\\n\\n---\\n\\n**References**\\n\\n[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[2] Qin et al., \\\"cosFormer: Rethinking Softmax in Attention.\\\" ICLR 2022\\n\\n[3] Hua et al., \\\"Transformer Quality in Linear Time.\\\" ICML 2022 \\n\\n[4] Keles et al., \\\"On The Computational Complexity of Self-Attention.\\\" International Conference on Algorithmic Learning Theory 2023\\n\\n[5] Arora et al., \\\"Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff.\\\" ICLR 2024\\n\\n[6] Zhang et al., \\\"The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry.\\\" ICLR 2024\\n\\n[7] Choromanski et al., \\\"Rethinking Attention with Performers.\\\" ICLR 2021\\n\\n[8] Peng et al., \\\"Random Feature Attention.\\\" ICLR 2021\"}", "{\"title\": \"Response to Reviewer oGkz. Part 4\", \"comment\": \"**Weakness 4**\\n\\n> While the paper demonstrates large throughput benefits in the long context regime compared to Transformers, it has not been shown in the paper that DANet performs well in the long context regime.\\n\\n---\\n\\n**The LRA results** \\n\\nOur primary experiments involved testing on the Long Range Arena suite of benchmarks [1]. It\\u2019s a diverse and challenging set of tasks designed *specifically to stress test the modeling performance of an architecture in the long context regime*. The sequence length for the tasks varies from 1K (smallest) to 16K tokens. Until recently, no Transformer-based model could even score above chance on the most challenging benchmark, Path-X with 16k context length.\\n\\nPrompted by your comments, we wrote up a detailed description of the LRA tasks. We present it in the Appendix E.1 in the current revision and politely encourage you to read it but omit posting it in this comment for brevity.\\n\\nWe would like to emphasize that the LRA is arguably considered to be a gold standard in testing long-range abilities of NN sequence models. And DenseAttention architecture outperforms, to the best of our knowledge, all other Transformer-based architecture which have been tested on the LRA to date.\\n\\nAlthough it may be only subtly hinted at in the manuscript (lines 430-431), by achieving superior results than our exceptionally strong baseline from \\u201cNever Train from Scratch\\u201d [1], we automatically outperform all other Transformer-based models and architectures, including the most recent ones, such as [1-2].\\n\\nInspired by your and other reviewers\\u2019 comments, we composed a full comparison table, listing results for 25+ models. We share it in the General Response and in the Appendix E.2 in the current revision of the paper.\\n\\nMoreover, our second main baseline for the LRA \\u2013 S4 model [3] \\u2013 is a State Space Model. Generally, SSMs are in the league of their own in comparison with Transformer-based architectures and greatly outperform them on the LRA benchmarks due to their inherent inductive bias towards capturing hierarchical and long-range dependencies and lack of such bias in Transformers, as discussed in [1, 4-5]. DenseAttention outperforms this much stronger SSM baseline in 4 of 6 benchmarks (full results reported in Section 4.1, Table 1 in the paper). To the best of our knowledge, this is the first case when a pure Transformer-based model compares favorably with an SSM on the LRA, which indicates its potent long-range modeling abilities.\\n\\n---\\n\\n**Pathfinder-256 benchmark**\\n\\nFurthermore, motivated by your comments, we conducted an experiment on Pathfinder-256 benchmark. It is is an extremely challenging version of the Pathfinder task with sequence length 65k which is on par with input context size of even recent generations of open-source (e.g. DBRX, 32k, https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm) and close source (original GPT-4, 32k) Large Language Models.\\n\\nWe present the results and the discussion below in full (It\\u2019s also available in the Appenix H.1 in the revised paper).\\n\\n---\\n\\n| Algorithm | Accuracy on the validation set, % |\\n|-----------------------------------------|------------------------------------|\\n| FlashAttention [6] | 63.1 |\\n| S4 [3] | 67.8 |\\n| DenseAttention | _72.6_ |\\n| DenseAttention after additional 550 epochs | **77.1** |\\n\\n\\nDenseAttention model outperforms existing results from the literature of standard Transformer augmented with FlashAttention [9] and S4-v2 model [12] as reported in [11] The result holds both when the training procedure is carried out for 200 training epochs as in [9] and then it\\u2019s prolonged for 550 additional epochs.\", \"this_experiment_lets_us_make_several_observations\": [\"DenseAttention Network architecture performs well even on very long input sequences which is promising given current trend of increasing context size in modern Large Language and Multimodal Models\", \"DenseAttention shows favorable scaling properties with respect to the amount of training iterations, even with the fixed dataset size. The validation accuracy for the task kept improving throughout the whole training and would likely have continued if the experiment had not been stopped.\", \"Truly linear scaling in sequence length is crucial for improvements in quality for large contexts. It took approximately 3 days on 4 H100 GPUs to train our model for 750 epochs in linear mode, while the projected runtime of quadratic (FlashAttention-2 [10]) and log-linear (S4) algorithms in the same setting would be at best 3 and 0.5 months, respectively, which renders them impractical for prolonged training.\", \"---\"]}", "{\"title\": \"General summary\", \"comment\": \"We sincerely thank reviewers for their time, feedback, and constructive suggestions which led to additional experiments and helped to clarify and strengthen our work. We would like to emphasize the main points of the paper, highlighted strengths, and key updates to the revision.\\n\\n---\\n\\n**Recap**\\n\\nIn this paper we propose DenseAttention Network, a simplification of Self-Attention and the vanilla Transformer, which runs both in $O(N)$ and $O(N^2)$ time depending on what\\u2019s more computationally efficient. Among other modifications to the architecture, we completely eliminate softmax from attention with no substitutions to it. We show it\\u2019s able to work if the standard LayerNorm is replaced with a proposed MaxNormActivation. DenseAttention in both regimes runs faster than highly-optimized low-level Transformer implementations, despite being coded in plain PyTorch. It retains or outperforms the modeling quality of the Transformer and its multiple variations across the wide range of context sizes, as shown in experiments on language modeling.\\n\\nAdditionally, we suggest two extensions, Cosine RelPE as a computationally efficient alternative to RoPE, and local-global attention scheme, to aid the model performance on extremely long sequences while maintaining computational efficiency. It allows us to achieve state-of-the-art results on the Long Range Arena suite of benchmarks among all Transformer-based models and the overall best result on the Pathfinder-256 task (65K context size).\", \"we_are_grateful_to_reviewers_for_acknowledging_the_following_strengths_in_our_work\": [\"The paper being **clearly written and easy to follow** as well as proposing a solution to **important** problem of **efficiency** of Transformer architectures (ruBJ).\", \"The solution being **well-motivated** (ruBJ, oGkz), in particular, MaxNormActivation, which is also acknowledged to be **useful** primitive **to stabilize LLM training** (rVwA).\", \"Originality, novelty and efficiency of the approach to unite and **fuse** projection matrices of Transformer into **single parameter** (HAtu, rVwA) and of the proposition to **use local attention** together with the new architecture (HAtu).\", \"Strong empirical results for speed/ **efficiency** gains with just **architectural** changes and **no specialized kernels** as well as for promising modeling performance in **long-context sequence modeling tasks** (rVwA).\", \"**Provision of code** (oGkz) which ensures easy reproducibility.\", \"**Summary of updates**\", \"Here\\u2019s the summary of the key updates we made based on reviewers\\u2019 comments and suggestions.\", \"Added a discussion of differences of Linear Transformers in relation to DenseAttention in **Appendix D** of the revised manuscript (also in **General Response. Part 1** for convenience) (ruBJ, HAtu, oGkz, rVwA). Additionally provided an exposition of other sub-quadratic algorithms for sequence processing in **General Response. Part 3** (please note: version of this exposition in the manuscript is a draft which will be updated). (oGkz).\", \"Added an \\u201cExtended comparison with Transformer-based models\\u201d on the LRA for 25+ models in **Appendix E.2** (also in **General Response. Part 2**), which highlights strong performance of DANet (ruBJ, HAtu, rVwA). Provided an extended \\u201cDiscussion of the LRA tasks\\u201d clarifying the scope and difficulty for each task in **Appendix E.1** (ruBJ, oGkz).\", \"Conducted additional experimental studies on extremely long sequences performance on Pathfinder-256 benchmark in **Appendix H.1** (ruBJ, oGkz), and scaling effects in **Appendix H.3** (ruBJ, rVwA), both with positive outcomes.\", \"Reproduced all DANet-BERT pretraining experiments with architecture equivalent to original BERT in **General Response. Part 4** and arrived at results and conclusions similar to former ones (oGkz).\", \"Performed additional ablation studies on local-global attention in DANet-BERT for extremely long context sizes in **General Response. Part 4** (oGkz), speed gains of Cosine PE in **Appendix H.2** (oGkz, rVwA), and use of MaxNormActivation in vanilla Transformer in **General Response. Part 4** (rVwA), all to expected/ favorable outcomes.\", \"Added a \\u201cConclusion and Future Work\\u201d section in **Appendix A** (draft) and **General Response. Part 3** (final version) to clarify the scope of the work and future directions (ruBJ, oGkz).\"]}", "{\"comment\": \"We thank the reviewer rVwA for valuable review and thought-provoking comments. We are delighted by your appreciation of both modeling quality on long contexts, as well as performance and efficiency gains of DenseAttention, which were the main reasons for much inner working behind the DANet architecture. Please let us address your concerns.\\n\\n**Weakness 1**\\n\\n> The main mechanism behind DenseAttention, i.e., removing the softmax and using associativity to compute the product in linear-in-$N$ time, has been studied before; see for example [1]. It is acknowledged that the paper cites [1], but the paper suggests that the mechanism in [1] has poor efficiency; however DenseAttention is strictly a special case of the mechanism in [1] (using the notation of [1], set $\\\\phi$ as the identity mapping). So this claim does not make sense, and the novelty of DenseAttention seems limited.\\n\\n---\\n\\nMotivated by your and other reviewers\\u2019 comments, we wrote a detailed exposition of the Linear Transformer class of algorithms and their fundamental differences with DenseAttention, which we will include in the revision of the paper. We share it in the General Response and gently ask you to read it. We would also like to address specific points in your comment.\\n\\nAs we discuss there, DenseAttention Network\\u2019s architecture is quite different from the Linear Transformer. The mandatory building blocks of their architecture (section 3.2 in [1]) and numerous derivative works are non-negative mapping $\\\\phi(\\\\cdot)$ and reweighting scheme implemented as denominator in $\\\\frac{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)V_j}{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)}$. Both elements are memory-intensive operations and contribute to computational inefficiency of Linear Transformers similar to regular softmax attention.\\n\\nWe don\\u2019t utilize any of these blocks and empirically show that modeling quality is, in fact, better.\\n\\nFraming the absence of any transformations applied to $\\\\mathbf{Q}$ and $\\\\mathbf{K}$ as an identity mapping $\\\\phi(x)=x$ would be conceptually wrong because transformed values are required to be non-negative in the Linear Transformer class of algorithms [1-2]. And the absence of reweighting the attention scores by their row-wise sums further sets DenseAttention apart from Linear Transformers.\\n\\nIn fact, getting rid of attention reweighting was the hardest part in designing the new algorithm because without it attention outputs tend to diverge to infinity very quickly in real-world scenarios even with moderately deep networks and moderately large head sizes. It became the primary reason for using MaxNormActivation.\\n\\n---\\n\\n**Weakness 2**\\n> The reasoning behind using MaxNormActivation seems lacking. In particular, since all norms are equivalent (i.e., bounded by each other up to multiplicative constants, possibly dependent on dimension) in finite-dimensional vector space, the boundedness of the maximum norm is equivalent to the boundedness of the $\\\\ell^{2}$ norm. So if the argument in the paper goes through, it should mean that $\\\\ell^{2}$ normalization should also work (then why not LayerNorm? But LayerNorm doesn't work, as reported in the paper, so something else is going on.). Although the MaxNormActivation is an interesting and potentially useful contribution, it may not work for the reason explained in the paper.\\n\\nWe thank you for this very interesting comment and are happy give detailed explanations.\\n\\nWithout reweighting, the attention mechanism becomes just a composition of matrix multiplications: $\\\\mathbf{A}=\\\\mathbf{X} \\\\mathbf{W} \\\\mathbf{X}^\\\\top \\\\mathbf{X}$. Outputs in this expression grow cubically w.r.t inputs: for any element $X_{i, j}$ of input matrix $\\\\mathbf{X}$, there exists an output element $A_{m, n}$, which has $X_ {i, j}^3$ as a summand. It can lead to exploding or vanishing output values, depending on the distribution of the inputs, especially when computed in half-precision formats (fp16\\u2019s max value is just 65504). Hence, it becomes imperative to control the magnitude of inputs.\\n\\nNaturally, the most suitable norm for directly controlling the maximal magnitude is $\\\\ell_\\\\infty$. Although it\\u2019s true that $\\\\lVert x \\\\rVert_\\\\infty \\\\leq \\\\lVert x \\\\rVert_2$ for a finite-dimensional vector $x$, scaling $x$ by $\\\\frac{1}{\\\\lVert x \\\\rVert_\\\\infty}$ is preferable because it guarantees that the absolute maximum value of its elements is exactly 1. It prevents outputs from both exploding to $\\\\infty$ or $NaN$ values or shrinking to 0.\\n\\nFurthermore, calculation of $\\\\lVert x \\\\rVert_2$ in low precision formats can lead to either a numerical instability (as in the case of fp16) or to a loss of numerical precision (both in fp16 and bf16) for high-dimensional $x$. \\n\\nFinally, finding maximum absolute value in a vector is relatively cheaper computationally than squaring, adding all the elements and then taking a root.\", \"title\": \"Author response. Part 1\"}", "{\"title\": \"Author response. Part 2\", \"comment\": \"> **W1.1 (Continuation)**\\n\\n**4.** Moreover, motivated by the suggestions, we decided to conduct an additional **experiment on the Pathfinder-256 task**. This is an extremely challenging version of the Pathfinder task with sequence length 65k which is on par with input context size of recent generations of proprietary Large Language Models. We present the results below:\\n\\n---\\n\\n| Algorithm | Accuracy on the validation set, % |\\n|-----------------------------------------|------------------------------------|\\n| FlashAttention [9] | 63.1 |\\n| S4 [11] | 67.8 |\\n| DenseAttention | _72.6_ |\\n| DenseAttention after additional 550 epochs | **77.1** |\\n\\n\\nDenseAttention model outperforms existing results from the literature of standard Transformer augmented with FlashAttention [9] and S4-v2 model [12] as reported in [11] The result holds both when the training procedure is carried out for 200 training epochs as in [9] and then it\\u2019s prolonged for 550 additional epochs.\", \"this_experiment_lets_us_make_several_observations\": \"* DenseAttention Network architecture performs well even on very long input sequences which is promising given current trend of increasing context size in modern Large Language and Multimodal Models\\n* DenseAttention shows favorable scaling properties with respect to the amount of training iterations, even with the fixed dataset size. * The validation accuracy for the task kept improving throughout the whole training and would likely have continued if the experiment had not been stopped.\\n* Truly linear scaling in sequence length is crucial for improvements in quality for large contexts. It took approximately 3 days on 4 H100 GPUs to train our model for 750 epochs in linear mode, while the projected runtime of quadratic (FlashAttention-2 [10]) and log-linear (S4) algorithms in the same setting would be at best 3 and 0.5 months, respectively, which renders them impractical for prolonged training.\\n---\\n\\n**5)** We acknowledge that there are other modalities and specialized architectures that would benefit from long-context efficiency improvements if the DenseAttention is ported or applied to them, such as ViT and SAM for Computer Vision tasks and LLAMA for language modeling. We hope to address them in future work.\\n\\n[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[2] Peng et al., \\\"Random Feature Attention.\\\" ICLR 2021 \\n\\n[3] Tay et al., \\\"Long Range Arena: A Benchmark for Efficient Transformers.\\\" ICLR 2021\\n\\n[4] Nangia and Bowman, \\\"ListOps: A Diagnostic Dataset for Latent Tree Learning.\\\" NAACL 2018\\n\\n[5] Maas et al., \\\"Learning Word Vectors for Sentiment Analysis.\\\" ACL 2011\\n\\n[6] Radev et al., \\\"The ACL Anthology Network Corpus.\\\" Language Resources and Evaluation, 2013\\n\\n[7] Krizhevsky, \\\"Learning Multiple Layers of Features from Tiny Images.\\\" Technical Report, 2009\\n\\n[8] Kim et al., \\\"Disentangling Neural Mechanisms for Perceptual Grouping.\\\" ICLR 2020\\n\\n[9] Dao et al., \\\"FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness.\\\" NeurIPS 2022 \\n\\n[10] Dao, T., \\\"FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning.\\\" ICLR 2024\\n\\n[11] Amos et al., \\\"Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors.\\\" ICLR 2024\\n\\n[12] Gu et al., \\\"Efficiently Modeling Long Sequences with Structured State Spaces.\\\" ICLR 2022\\n\\n---\\n\\n> **W1.2 The baselines are limited. There is only one Transformer model that serves as the baseline without specifying how the model is trained.**\\n\\nWe thank you for pointing out the perceived lack of baselines. Indeed, Table 1 in the paper lists only two baselines. However, these baselines are exceptionally strong. Actually, DANet outperforms *all transformer-based models and architectures that we are aware of*, including recent ones, such as [10-11] on the Long Range Arena suite of benchmarks. \\n\\n**Additional baselines** \\n\\nWe acknowledged the strength of our main baseline from (Never Train from Scratch, ICLR 2024) in the lines 424-425, quote: \\u201cInterestingly, even without pre-training, but with RoPE, they reached a SOTA score on the benchmark among all Transformer-based architectures with a large margin\\u201d. In hope that it would be clear from the text, and due to space limitations, we omitted explicit comparisons with other 25+ models, including 11 models tested in the original LRA paper (reference), in the Table 1. We present the full comparisons below in a separate table.\"}", "{\"title\": \"On misunderstanding of Mamba fairness for Reviewer HAtu\", \"comment\": \"It seems there might be a misunderstanding of \\\"fairness\\\" shade of meaning in our response. Our comparison is indeed fair. However, the authors of the Mamba paper themselves seem to be opposed to testing and comparing the Mamba model on the LRA as discussed above. In that sense, it might be not fair and, maybe, even unethical to them to compare their model with. That was what we meant.\\n\\nSince this misunderstanding appears to be resolved now by our explanation, and since we have comprehensively addressed all other your concerns, we kindly ask you to consider reevaluating our work positively or providing additional feedback, if needed.\"}", "{\"title\": \"General Response. Part 3\", \"comment\": \"**Conclusion and Future Work**\\n\\nIn this paper, we propose DenseAttention Network -- a general architecture which simplifies the Transformer block and can serve as a drop-in replacement in every model architecture using it. We conduct experiments on the diverse modalities spanning from logic to language modeling and image classification and from short to extremely long sequence lengths using the LRA suite of benchmarks and MLM-style language model pre-training on text data. The results show that DenseAttention is capable of generalizing to many different tasks and context sizes and achieving favorable performance in comparison with standard Transformer and its augmented variants while being faster and more computationally efficient even with no specialized, low-level computation algorithms such as in [1].\\n\\nWe acknowledge that there are other modalities and specialized architectures that would benefit from long-context efficiency improvements if the DenseAttention is ported or applied to them, such as ViT [2] and SAM [3] for Computer Vision tasks, and LLAMA \\n[4] for decoder-style language modeling. We hope to address them in future work. In particular, we look forward to adapting DenseAttention architecture to causal LLAMA-style LLMs and studying their scaling laws at billions of parameters range.\", \"references\": \"[1] Gu et al., \\\"Efficiently Modeling Long Sequences with Structured State Spaces.\\\" ICLR 2022\\n\\n[2] Gupta et al., \\\"Diagonal State Spaces are as Effective as Structured State Spaces.\\\" NeurIPS 2022\\n\\n[3] Ma et al., \\\"Mega: Moving Average Equipped Gated Attention.\\\" ICLR 2023\\n\\n[4] Gu and Dao, \\\"Mamba: Linear-Time Sequence Modeling with Selective State Spaces.\\\" CoLM 2024\\n\\n[5] Beck et al., \\\"xLSTM: Extended Long Short-Term Memory.\\\" NeurIPS 2024\\n\\n[6] Orvieto et al., \\\"Resurrecting Recurrent Neural Networks for Long Sequences.\\\" ICML 2023\\n\\n[7] Peng et al., \\\"RWKV: Reinventing RNNs for the Transformer Era.\\\" Findings of EMNLP 2023\\n\\n[8] Dao et al., \\\"Monarch: Expressive Structured Matrices for Efficient and Accurate Training.\\\" ICML 2022 \\n\\n[9] Fu et al., \\\"Simple Hardware-Efficient Long Convolutions for Sequence Modeling.\\\" ICML 2023\\n\\n[10] Poli et al., \\\"Hyena Hierarchy: Towards Larger Convolutional Language Models.\\\" ICML 2023\\n\\n[11] Fu et al., \\\"Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture.\\\" NeurIPS 2023\\n\\n[12] Yang et al., \\\"Gated Linear Attention Transformers with Hardware-Efficient Training.\\\" ICML 2024\"}", "{\"title\": \"Response to Reviewer oGkz. Part 1\", \"comment\": \"We thank the reviewer oGkz for the thoughtful feedback, attention to the details, and constructive suggestions which have helped us to improve the exposition of our work and prompted us to conduct several additional experiments. We are grateful to you for the comprehensive outline of our contributions and recognition of motivation behind MaxNormActivation.\\n\\nPlease let us try to address and alleviate your concerns with detailed responses. \\n\\n---\\n\\n**Question 1**\\n> Why do you still consider DANet as Transformer-based? The only part of transformers that is left, is the Feeforward layers which is now inside the block.\\n\\nPlease let us start by answering the first question to set the context for further discussion.\\n\\nThe defining part of the standard Transformer architecture is its attention mechanism as all other constituents may be altered, replaced (e.g standard FeedForward for GeGLU/ SwiGLU layers and LayerNorm for RMSNorm like in [1]), or moved (PreLayerNorm and PostLayerNorm discussed in [2])\\n\\nFundamentally, the attention mechanism A can be described as follows:\\n\\n$\\\\mathbf{A} = \\\\mathcal{S} ( \\\\mathbf{Q}(\\\\mathbf{X}) \\\\mathbf{K}^\\\\top (\\\\mathbf{X}) ) \\\\mathbf{V} (\\\\mathbf{X})$\\n\\nwhere $\\\\mathbf{Q}(\\\\cdot) \\\\in \\\\mathbb{R}^{N \\\\times d_q}$, $\\\\mathbf{K}(\\\\cdot) \\\\in \\\\mathbb{R}^{N \\\\times d_q}$ and $\\\\mathbf{V}(\\\\cdot) \\\\in \\\\mathbb{R}^{N \\\\times d_v}$ are some mappings (usually, linear projections) of $\\\\mathbf{X} \\\\in \\\\mathbb{R}^{N \\\\times d}$, and $\\\\mathcal{S}(\\\\cdot)$ is some similarity function (for example, row-wise $\\\\text{Softmax}(\\\\cdot \\\\/ d_q)$ in the standard Transformer\\u2019s attention). \\n\\nAll Transformer-based architectures have an attention mechanism which can be parametrized by some of functions $\\\\mathbf{Q}(\\\\cdot)$, $\\\\mathbf{K}(\\\\cdot)$, $\\\\mathbf{V}(\\\\cdot)$, and $\\\\mathcal{S}(\\\\cdot)$. In particular, by letting $\\\\mathbf{Q}(\\\\mathbf{X}) = \\\\mathbf{X} \\\\mathbf{W}_Q$, $\\\\mathbf{K}(\\\\mathbf{X}) = \\\\mathbf{X}$, $\\\\mathbf{V}(\\\\mathbf{X})=\\\\mathbf{X}$, and $\\\\mathcal{S}(\\\\mathcal{X}) = \\\\mathcal{X}$, we get \\n\\n$\\\\mathbf{A} = \\\\mathbf{Q} \\\\mathbf{K}^\\\\top \\\\mathbf{V} = \\\\mathbf{X} \\\\mathbf{W}_Q \\\\mathbf{X}^\\\\top \\\\mathbf{X},$\\n\\nwhich is exactly the formula for DenseAttention.\\n\\nIn contrast, DenseAttention cannot be considered an instance of another broad class of algorithms \\u2013 State Space Models (SSMs) / linear RNNs which are characterized by linear recurrence:\\n\\n \\\\begin{align*}\\n&x_t = \\\\mathbf{\\\\overline{A}}x_{t-1} + \\\\mathbf{\\\\overline{B}}u_t \\\\\\\\\\n& y_t = \\\\mathbf{\\\\overline{C}}x_t + \\\\mathbf{\\\\overline{D}}u_t,\\n\\\\end{align*}\\n\\nwhere $\\\\mathbf{\\\\overline{A}}$ is a data-independent matrix. There are no data-independent matrices in DenseAttention.\\n\\nTherefore, DANet is most naturally classified as an instance of the Transformer-based class of algorithms. In fact, we specifically designed it by taking the standard Transformer architecture and simplifying / modifying it.\\n\\n---\\n\\n**Weakness 1**\\n> A related work section is missing in which the authors put DANet in relation to other Linear Attention variants (e.g. GLA https://arxiv.org/abs/2312.06635 ), State space models (e.g. Mamba https://arxiv.org/abs/2405.21060) or other RNNs variants (e.g. xLSTM (https://arxiv.org/abs/2405.04517 ) or RWKV (https://arxiv.org/abs/2305.13048 )). Also a relation to embedding models other than BERT is missing, e.g. Monarch Mixer (https://arxiv.org/abs/2310.12109).\\n\\nWe thank you for pointing out limited discussions with other work. In fact, we cited several papers on linear and sub-quadratic Transformers, including one of the first and most influential works on Linear Attention [3] in the Introduction (lines 52-53), and LocalAttention for DenseAttention (lines 361-363; 373 where we discussed similarities with our local attention paradigm) sections.\\n\\nHowever, we initially omitted the extended discussion of [3] and subsequent work as we believe DenseAttention architecture is conceptually different from Linear Transformer and its derivatives. Instead, we briefly discussed the most similar research to ours [4-5] in the Introduction.\\n\\nMotivated by your and other reviewers\\u2019 feedback, we wrote a detailed exposition which puts DenseAttention into perspective with other Linear Attention variants. We present it in the General Response and gently ask you to read it.\\n\\nAs discussed earlier, DenseAttention Network is a Transformer-based model and is not related to SSMs and RNNs. However, encouraged by your comment, we further augment this section by a brief exposition of these architectures. We present it below in full.\\n\\n---\"}", "{\"metareview\": \"The primary goal of this paper is to develop an attention mechanism which, given an input of N tokens, allows for the computation of the full matrix of NxN pairwise attention weights in less than N^2 time (i.e., linear in N). This is done in a manner similar to the Linear Attention mechanism, but as discussed by the reviews and authors is still somewhat distinct from Linear Attention in a few technical details regarding issues like normalization, which the authors suggest is important for numerical stability. Beyond this, the authors also make several changes to the overall architecture in terms of other normalization operators, positional encoding, etc and achieve good performance on several benchmarks.\\n\\nOverall, the reviewers are fairly unanimous in leaning to rejection, and I agree that the paper is perhaps not ready for publication just yet. As has been noted in the reviews, the proposed mechanism is very similar to Linear Attention, and while the current work may not be strictly captured by Linear Attention, this proximity combined with the numerous other changes to the architecture make it difficult to establish which elements are critical to performance and for readers to understand the key aspects of the contribution and why the proposed mechanism is needed relative to Linear Attention. I would encourage the authors to take these aspects and additional comments from the reviewers into account when preparing a new manuscript for a future conference which better highlights the advantages of the proposed approach relative to existing techniques (such as Linear Attention) as well as justifying the additional modification to the overall architecture.\", \"additional_comments_on_reviewer_discussion\": \"The authors have provided very extensive rebuttals to the initial reviews, and some of the reviewers found these arguments helpful in showing the contribution of the work and revised their scores higher (though not to the point of arguing for paper acceptance). While this is a commendable effort from the authors, this also somewhat suggests that the manuscript also would potentially benefit from a significant revision to incorporate the clarification and additional content from the rebuttal, and as mentioned in the meta review I encourage the authors to take this into account and look forward to their submission at future conferences.\"}", "{\"title\": \"Author response. Part 4\", \"comment\": \"> **W3**\\n\\nWe apologize for all the typographical errors and inconsistencies and thank you for pointing them out. We really appreciate your effort and time in reviewing the manuscript, and we will fix these typos along with others, which we found during proofreading, in the revised version of the paper.\"}", "{\"comment\": \"Dear authors. Thank you for the response and the summary.\\n\\n> W1.1. Limited testbeds.\\n\\nI was aware that LRA contains multiple datasets. I expected the author to obtain more results beyond LRA. In particular, the paper does not contain the standard LM eval tasks like MMLU, GSM8K, TriviaQA, ARC, Hellaswag, MATH, etc.\\n\\n> W1.2. Limited baselines. \\n\\nThanks for providing more baselines. The major issue in the updated table is the authors should carefully select related baselines (similar parameter counts and training data) for fair comparisons. It is not a contest to have more rows.\\n\\n> W1.3. Scaling effects\\n\\nThanks for showing the scaling effect. However, the scaling is from an ultra-small scale (31M) to a small scale (336M). More than five years ago, the smallest GPT2 already had 100M+ parameters. In the initial review, I was concerned that \\\"It is unclear if the method could be scaled to larger-scale applications\\\".\\n\\n> W2. Discussion of Linear Transformers\\n\\nThanks for adding the discussion. It is now clearer where the paper should be positioned. But using an identity mapping as $\\\\phi$ and dropping the denominator does not seem to largely contribute to the established knowledge unless the authors could justify that previous papers were doing wrong (and therefore could explain why their performance is much lower than DenseAttention).\\n\\nIn summary, the authors only partially addressed the concerns. In recognition of this and the authors' diligence, I have raised my score to 5.\"}", "{\"summary\": \"The authors propose a new architecture which they call DenseAttention Network, which is a variation of the standard transformer architecture which is specifically tuned to perform well on long sequences. The changes include:\\n1) Take the softmax away in the attention block, writing $QK^{\\\\top}V$ instead of $\\\\mathrm{softmax}(QK^{\\\\top})V$, and using associativity to compute the matrix product in linear time. (They name this as DenseAttention mechanism/block.)\\n2) Use a MaxNormActivation block instead of LayerNorm, which scales each token feature by its maximum absolute value.\\n3) Use a novel positional embedding called Cosine RelPE, which is claimed to perform similarly but more efficiently computable than RoPE.\\n4) For very long contexts, use a hand-rolled local attention implementation suited for DenseAttention.\\nThey show that this architecture has general improvements over a basic transformer + RoPE implementation in Long Range Arena, both in performance (across a few tasks) and efficiency (more broadly, as the usual attention mechanism does not have linear-in-$N$ time complexity). It also shows improvements against S4-V1 in Long Range Arena, and BERT in terms of Masked Language Modeling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The MaxNormActivation block is potentially useful as a method to stabilize LLM training.\", \"The empirical results in tables 1-3 show that DenseAttention has some promise empirically on long-context sequence modeling tasks as compared to the standard transformer, S4-V1, and BERT.\", \"The efficiency results in table 4 show that it's possible to get out-of-the-box performance increases at long context compared to the usual BERT model. Specifically, all changes seem to be architectural, and no specialized kernels are needed to get better performance, as a result of using `torch.compile` and potentially fusing linear operations together.\"], \"weaknesses\": [\"The main mechanism behind DenseAttention, i.e., removing the softmax and using associativity to compute the product in linear-in-$N$ time, has been studied before; see for example [1]. It is acknowledged that the paper cites [1], but the paper suggests that the mechanism in [1] has poor efficiency; however DenseAttention is strictly a special case of the mechanism in [1] (using the notation of [1], set $\\\\phi$ as the identity mapping). So this claim does not make sense, and the novelty of DenseAttention seems limited.\", \"The reasoning behind using MaxNormActivation seems lacking. In particular, since all norms are equivalent (i.e., bounded by each other up to multiplicative constants, possibly dependent on dimension) in finite-dimensional vector space, the boundedness of the maximum norm is equivalent to the boundedness of the $\\\\ell^{2}$ norm. So if the argument in the paper goes through, it should mean that $\\\\ell^{2}$ normalization should also work (then why not LayerNorm? But LayerNorm doesn't work, as reported in the paper, so something else is going on.). Although the MaxNormActivation is an interesting and potentially useful contribution, it may not work for the reason explained in the paper. Also there's a potential typo in the equation defining MaxNormActivation: it should be $\\\\frac{X_{i}}{\\\\max_{j}|X_{ij}| + \\\\epsilon}$ on the RHS (note the absolute value).\", \"Not much motivation is given for the two other modifications, e.g., CosineRelPE and the local attention proposal - they seem to have a flavor of \\\"we tried it and it works,\\\" potentially with some ablation, and without context of why such an approach may or may not make sense or generalize to other architectures.\", \"The results in Long Range Arena are promising insofar as they match up against a standard transformer and an SSM, but this may not be a fair comparison. Given that the authors start with a regular transformer and apply modifications to show improvement on long-context, they could also compare against more recent models specialized for long context. For example, the authors omit comparison with S5, whose numbers are publicly available on [PapersWithCode](https://paperswithcode.com/dataset/lra), as well as S4 V2 and a long list of other models benchmarked on Long Range Arena but not necessarily added there.\", \"The result on efficiency compared to BERT also may seem to not be a fair comparison. BERT is trained with an encoder-only architecture, while DenseAttention Network is trained with a decoder-only architecture. A fairer comparison would pit DenseAttention Network against a regular decoder-only transformer (as well as BERT if desired, along with, say, an SSM), under the same experimental setting, and allow readers to observe trends in the different approaches as different scaling parameters vary.\", \"[1] Katharopoulos, Angelos, et al. \\\"Transformers are RNNs: Fast autoregressive transformers with linear attention.\\\" International conference on machine learning. PMLR, 2020.\"], \"questions\": [\"What is the specific motivation of designing CosineRelPE?\", \"Is there anything that suggests that any new block (DenseAttention, CosineRelPE, MaxNormActivation) can generalize to other architectures and improve either performance or efficiency (while not degrading the other)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response. Part 2. Additional comparisons on the LRA (1)\", \"comment\": \"**Additional comparisons of DenseAttention on the LRA**\\n\\nIn our paper, we have stated that our baseline from [11] for the LRA suite of benchmarks surpasses all previous Transformer-based architectures we are aware of. Since DenseAttention outperforms this baseline and due to space limitations, we initially omitted explicit comparisons with these models. However, the reviewers\\u2019 comments indicated that it\\u2019s more advisable to present them. \\n\\nWe post the comparison table here in full and will include it in the Appendix of the paper.\\n\\n\\n| Model | ListOps | Text | Retrieval | Image | Pathfinder | Path-X | Avg |\\n|-----------------------|------------|-----------|-------------|------------|-------------|------------|------------|\\n| Transformer [1] | 36.37 | 64.27 | 57.46 | 42.44 | 71.40 | 61.40 [12] | 54.39 |\\n| Local Attention [1] | 15.82 | 52.98 | 53.39 | 41.46 | 66.63 | - | 46.06 |\\n| Sparse Trans. [1] | 17.07 | 63.58 | 59.59 | 44.24 | 71.71 | - | 51.24 |\\n| Longformer [1] | 35.63 | 62.85 | 56.89 | 42.22 | 69.71 | - | 53.46 |\\n| Linformer [1] | 35.70 | 53.94 | 52.27 | 38.56 | 76.34 | - | 51.36 |\\n| Reformer [1] | 37.27 | 56.10 | 53.40 | 38.07 | 68.50 | - | 50.67 |\\n| Sinkhorn Trans. [1] | 33.67 | 61.20 | 53.83 | 41.23 | 67.45 | - | 51.29 |\\n| Synthesizer [1] | 36.99 | 61.68 | 54.67 | 41.61 | 69.45 | - | 52.88 |\\n| BigBird [1] | 36.05 | 64.02 | 59.29 | 40.83 | 74.87 | - | 55.01 |\\n| Linear Trans. [1] | 16.13 | 65.90 | 53.09 | 42.34 | 75.30 | - | 50.55 |\\n| Performer [1] | 18.01 | 65.40 | 53.82 | 42.77 | 77.05 | - | 51.41 |\\n| RFA [2] | 36.80 | 66.00 | 56.10 | - | - | - | - |\\n| Luna-256 [3] | 37.98 | 65.78 | 79.56 | 47.86 | _78.55_ | - | 61.95 |\\n| Nystr\\u00f6mformer [4] | 37.15 | 65.52 | 79.56 | 41.58 | 70.94 | - | 58.95 |\\n| Kernelized Attention [5] | 38.78 | 60.22 | 81.77 | 41.29 | 70.73 | - | 58.56 |\\n| Informer [5] | 32.53 | 62.64 | 77.57 | 38.10 | 57.83 | - | 53.73 |\\n| Skyformer [5] | 38.69 | 64.70 | 82.06 | 40.77 | 70.73 | - | 59.39 |\\n| cosFormer [6] | 37.90 | 63.41 | 61.36 | 43.17 | 70.33 | - | 55.23 |\\n| FNet [7] | 35.33 | 65.11 | 59.61 | 38.67 | 77.80 | - | 55.30 |\\n| FLASH-quad [8] | 42.20 | 64.10 | 83.00 | 48.30 | 63.28 | - | 60.18 |\\n| FLASH [8] | 38.70 | 64.10 | _86.10_ | 47.40 | 70.25 | - | 61.31 |\\n| TransNormer T1 [8] | 41.03 | 66.90 | 83.11 | 51.60 | 75.92 | - | 63.71 |\\n| TransNormer T2 [8] | 41.60 | 72.20 | 83.82 | 49.60 | 76.80 | - | 64.80 |\\n| KDEformer [9] | 36.64 | 62.00 | 73.52 | 45.45 | 68.13 | - | 57.15 |\\n| Hedgehog [10] | 37.15 | 64.60 | 82.24 | 40.15 | 74.16 | - | 59.66 |\\n| Transformers + Rotary [11] | _47.90_ | _79.08_ | 82.31 | **75.04** | 76.64 | _84.72_ | _72.89_ |\\n| DenseAttention (ours) | **50.50** | **81.19** | **87.51** | _72.55_ | **87.40** | **88.82** | **75.83** |\\n\\nDenseAttention outperforms all other Transformer-based models on the Long Range Arena. The metrics for all tasks is accuracy. To ensure consistent comparisons, the averages for the models which report the result on Path-X task are computed without it. The references in the table link to the papers which reported LRA results for corresponding models.\"}", "{\"title\": \"Thanks and further discussion\", \"comment\": \"Dear reviewer rVwA,\\n\\nWe thank you very much for your response and appreciation of our work. We are delighted to have partially addressed your concerns.\\n\\n> In order to validate that it is a full replacement of RoPE without any performance drawbacks, it would have been great to have higher-scale experiments. \\n\\nIn our preliminary ablations we have found that incorporation of either RoPE or Cosine RelPE causes a performance boost on all of the LRA tasks. One of such ablations have been presented in Section 4.1, Table 2 in the paper (we repeat below the relevant part for your convenience)\\n\\n\\n| **Model** | **Accuracy** |\\n|---------------------------------------------------|--------------|\\n| DANet + Sinusoidal Embedding (bf16 format) | 82.69 |\\n| DANet + Cosine RelPE | 83.98 |\\n\\nAblation on the type of embeddings on the Retrieval task of the LRA. Use of Cosine RelPE leads to a better performance. We observe a similar effect on all other LRA tasks.\\n\\nAs the difference in metrics of models trained on either of PE variants had been negligible, and at the same time, a boost from using them in general was evident, we discarded the full logs and proceeded with the most computationally efficient option (Cosine RelPE) as the default one. Should you recommend it, we will reproduce these full comparisons and include them in the final version of the paper. \\n\\n> For example, can the DenseAttention block work with standard LayerNorms (I suspect the answer is no for the reasons discussed earlier in your rebuttal)?\\n\\nYou are correct. Indeed, it proved empirically that DenseAttention can\\u2019t work with a standard LayerNorm. As we explicitly state in the paper (lines 255-256), to quote: \\u201cIn our ablation experiments any other activation or normalization function or absence thereof would lead to a prompt and unrecoverable numerical instability early on during training.\\u201d\\n\\n> Can the MaxNormActivation work without DenseAttention, i.e., does it improve performance or efficiency of transformers with the usual attention? \\n\\nThank you for the suggestion! We are now actively working on this experiment and will do our best to finish and present it before the discussion period ends.\\n\\n---\\n\\nAgain, we are grateful for your feedback and subsequent response as they helped to clarify and enhance our work, and we are looking forward to further discussion if you have any suggestions or remaining concerns.\"}", "{\"title\": \"Follow-Up for Reviewer HAtu\", \"comment\": \"Dear reviewer HAtu,\\n\\nWe thank you again for the thorough review and constructive feedback which helped to improve our work.\\n\\nWe would like to follow up and gently ask if we were able to address your concerns in the Author Response. As the discussion period progresses, we would appreciate any updates or further questions you may have. Thank you for your time in advance!\"}", "{\"title\": \"Response to Reviewer oGkz. Part 7\", \"comment\": \"**New results of DANet-BERT pre-training**\\n\\n| **Model** | **MLM Loss (L=128)** | **Acc. (L=128)** | **MLM Loss (L=512)** | **Acc. (L=512)** | **MLM Loss (L=1024)** | **Acc. (L=1024)** |\\n|--------------------------------------|-----------------------|-------------------|-----------------------|-------------------|-----------------------|-------------------|\\n| DenseAttention (1 head, N=128) | **1.91** | **0.620** | - | - | - | - |\\n| DenseAttention (4 heads, N=128) | 1.97 | 0.611 | - | - | - | - |\\n| BERT-large | 2.58 | 0.582 | 2.31 | 0.614 | - | - |\\n| DenseAttention (1 head, N=512) | 1.97 | 0.614 | 1.726 | 0.644 | - | - |\\n| DenseAttention (4 heads, N=512) | 2.04 | 0.602 | 1.84 | 0.624 | - | - |\\n| DenseAttention (1 head, N=1024) | 1.96 | 0.615 | **1.71** | **0.648** | 2.26 | 0.591 |\\n| DenseAttention (4 heads, N=1024) | 2.07 | 0.598 | 1.83 | 0.627 | **1.87** | **0.618** |\\n\\nEvaluations of MLM loss and accuracy for DenseAttention Network models and the original BERT on C4 dataset texts of different context sizes. N is the maximum sequence length with which a model was trained or evaluated and L is the length of evaluation samples. DANet-BERT model variations uniformly outperform the original BERT on corresponding context sizes.\\n\\nThe results of the new DANet-BERT model, which is closely aligned with the original, are similar with the old results. The relations and trends in performance between all models and all context lengths remain unchanged.\\n\\n--- \\n**Question 5**\\n\\n>Why do you use float16 in the experiments?\\n\\nWe thank you for this exciting question!\\n\\nWe pay great attention to the compatibility of the DenseAttention with half-precision formats (fp16 and bf16) because they are widely, if not predominantly, used in practice, e.g for training and inference of Language Models. Importantly, use of these formats brings enormous speed gains (up to 2x in comparison with full precision)\\n\\nSpeaking of the differences between the two formats, they have trade-offs in relation to each other. bf16 has a wider numerical range than fp16, but less precision which makes fp16 empirically better for some models provided that their activations stay inside its numerical range.\\n\\nOur primary motivation for using fp16 in experiments is its superior compatibility. A model, trained in fp16, can be converted and used with bf16, but the opposite need not always be the case. And a lot of hardware still used both in academia and in industry has no support for bf16 (e.g NVIDIA V100 and T4 server GPUs and Nvidia consumer GPUs older than 3xxx series). We would like to make the architecture and the models accessible to everyone so we try to use fp16 wherever possible.\\n\\n---\\n\\n**P. S.**\\n\\nWe sincerely apologize for the delay in the response. It was due to longer than expected allocation of resources and duration of several computationally-intensive experiments which were motivated by your feedback. We hope they helped to enhance our work and to address your concerns and questions. We\\u2019d like to thank you again for the review!\"}", "{\"title\": \"Summary for reviewer ruBJ\", \"comment\": \"Dear reviewer ruBJ,\\n\\nAs we acknowledge that your time may be constrained, and given the comprehensive scope of our response, here we present its summary for your convenience.\\n\\n**W1.1. Limited testbeds.** Communicated that the LRA is not the only testbed; there are also extensive experiments with standalone language modeling in the paper (**Section 4.2**). Presented an extensive discussion of diverse and comprehensive scope for all 6 of the LRA benchmarks which signifies the broad and versatile nature of the chosen testbeds (**Part 1**, also **Appendix E.1** in the revised manuscript). Drew a comparison with the experimentation scope of related published papers highlighting the same or greater breadth in our work. Conducted a new experiment on extremely challenging Pathfinder-256 task (65K context size) and set a new SOTA (**Part 2**, also **Appendix H.1**). Discussed directions for future work.\\n\\n**W1.2. Limited baselines.** Reiterated on exceptional strength of our augmented Transformer baseline on the LRA, which in turn outperforms all Transformer-based models. Referred to specifications on how it was trained. Presented explicit comparisons with 25+ Transformer-based architectures to validate that DenseAttention outperforms all of them, including the models mentioned in W2 and the most recent ones (**Parts 2-4** and **General Response. Part 2** above, also available in **Appendix E.2**). Highlighted the significance of DenseAttention also outperforming an instance of SSM class much better suited for the LRA. Explained the LM experiment design to pre-train an architecture very closely matching BERT for fair comparison in modeling performance and speed. Reported that DANet compares favorably in both. (**Part 4**)\\n\\n**W1.3. Scaling effects**\\n\\nPresented a scaling effects study w.r.t. the model size which indicates DenseAttention architecture exhibits favorable scaling properties similar to Transformer (**Part 4**, also **Appendix H.3**). Expressed promising scaling properties w.r.t. the amount of training as shown in the new Pathfinder-256 experiment.\\n\\n**W2. Discussion of Linear Transformers**\\n\\n Reiterated existing discussions of related work in the original manuscript. Provided an in-detail \\u201cDiscussion of Linear Transformers in relation to DenseAttention\\u201d, in which we described theoretical differences between DenseAttention and the broad class of Linear Transformer models (**Part 5**, same in **General Response. Part 1** and in **Appendix D** in the revised manuscript). \\n\\n\\nWe thank you for your time and insightful suggestions which helped to refine our work, and we eagerly await your feedback.\"}", "{\"title\": \"Author response. Part 5\", \"comment\": \"> **W2. The paper lacks several important previous papers. In fact, linearizing attention has been heavily studied before [1, 2, 3]. This paper has no comparisons or discussions.**\\n\\nWe thank you for pointing out limited discussions and comparisons with other papers related to linearized attention. In fact, we cited several papers on linear and sub-quadratic Transformers, including one of the first and most influential works on linear attention [1] in the Introduction (lines 52-53), and LocalAttention for DenseAttention (lines 363-364; 375 where we discussed similarities with our local attention paradigm) sections.\\n\\nHowever, we initially omitted the extended discussion of [1] and subsequent work as we believe DenseAttention architecture is conceptually different from Linear Transformer and its derivatives. The most similar research to ours is SimA [4], which we discuss in the Introduction.\\n\\nAs we commented in response to W1.2, DenseAttention compares favorably with all Transformer-based architectures we are aware of and which have been tested on the LRA, including linear and subquadratic algorithms. In particular, the model outperforms both Linear Transformer [1] and Random Feature Attention [2] by a large margin. We copy select rows from the table for convenience:\\n\\n| Model | ListOps | Text | Retrieval | Image | Pathfinder | Avg |\\n|---------------------|---------|--------|-----------|--------|------------|--------|\\n| Linear Trans. | 16.13 | 65.90 | 53.09 | 42.34 | 75.30 | 50.55 |\\n| RFA | 36.80 | 66.00 | 56.10 | - | - | - |\\n| DenseAttention | 50.50 | 81.19 | 87.51 | 72.55 | 87.40 | 75.83 |\\n\\n[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[2] Peng et al., \\\"Random Feature Attention.\\\" ICLR 2021\\n\\n[3] Tsai et al., \\\"Transformer Dissection: A Unified Understanding of Transformer's Attention via the Lens of Kernel.\\\" EMNLP 2019\\n\\n[4] Koohpayegani and Pirsiavash, \\\"SimA: Simple Softmax-free Attention for Vision Transformers.\\\" WACV 2024\\n\\n---\\n\\nMoreover, based on the reviews, we understand now that explicit analysis of [1] would greatly improve our work and sincerely thank you for the suggestion. We will incorporate the following discussion in the paper.\\n\\n---\\n\\n**Discussion of Linear Transformers in relation to DenseAttention**\\n\\nGiven entries $Q_i, K_j, V_j \\\\in \\\\mathbb{R}^{1 \\\\times d}$ of matrices $\\\\mathbf{Q}, \\\\mathbf{K}$ and $\\\\mathbf{V}$, standard softmax attention for input i can be reformulated as\\n\\n\\n$ \\n\\\\begin{equation}\\n A_i = \\\\frac{\\\\sum_{j=1}^N \\\\text{Sim}(Q_i, K_j)V_j}{\\\\sum_{j=1}^N \\\\text{Sim}(Q_i, K_j)} \\\\in \\\\mathbb{R}^{1 \\\\times d}\\n\\\\end{equation},\\n$\\n\\nwhere $\\\\text{Sim}(Q_i, K_j)=\\\\text{exp}(Q_i K_j^\\\\top)$. Conceptually, linear attention class of algorithms, described in [1] and built upon in numerous subsequent works, approximates or replaces this similarity function with separable kernel $\\\\text{Sim}(Q_i, K_j)=\\\\mathcal{K}(Q_i, K_j)=\\\\phi(Q_i) \\\\phi(K_j^\\\\top)$, where $\\\\phi: \\\\mathbb{R}^d \\\\to \\\\mathbb{R}_{+}^{r}$ maps query and key vectors to non-negative vectors with possibly different dimension r.\\n\\nHence, the attention mechanism becomes:\\n\\n $A_i = \\\\frac{\\\\sum_{j=1}^N \\\\phi(Q_i) \\\\phi(K_j^\\\\top)V_j}{\\\\sum_{j=1}^N \\\\phi(Q_i) \\\\phi(K_j^\\\\top)} = \\\\frac{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)V_j}{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)}, \\\\quad (1)$ \\n\\nwhich can be computed in linear time.\\n\\nThe function $\\\\phi(\\\\cdot)$ can take various forms, such as 1 + ELU [1], ReLU [2], squared ReLU [3], Taylor [4-6] or Random Feature [7-8] expansions, and even MLPs trained to mimic softmax attention [6]. They aim to approximate softmax without its explicit calculation when being applied jointly to queries and keys, or to retain its properties, most importantly, non-negativity of resulting dot products $\\\\phi(Q_i) \\\\phi(K_j^\\\\top)$. \\n\\n\\nThe latter property, together with reweighting attention scores (denominator in the formula 1) are defining for Linear Transformer algorithms. Absence of scaling by $\\\\frac{1}{\\\\phi(Q_i) \\\\sum_{j=1}^n \\\\phi(K_j^\\\\top)}$ leads to numerical instabilities, and the scaling factor itself is not guaranteed to be bounded without non-negative $\\\\phi(\\\\cdot)$. However, both mappings $\\\\phi(\\\\cdot)$ (even relatively simple), and memory intensive non-MatMul operations for reweighting contribute to subpar speed and computational efficiency in comparison with ordinary and fast self-attention algorithms on all but large context sizes.\\n\\nWe forgo both transforming $\\\\mathbf{Q}, \\\\mathbf{K}$ by $\\\\phi(\\\\cdot)$ and reweighting in DenseAttention as we believe the main factor of success of Transformer is the ability of all $N \\\\times N$ interactions between tokens. It results in an improved computational efficiency and simpler design which can be expressed entirely by matrix multiplications:\\n\\n$\\\\mathbf{A} = \\\\mathbf{Q} \\\\mathbf{K}^\\\\top \\\\mathbf{V} $\"}", "{\"title\": \"Response to Reviewer oGkz. Part 5\", \"comment\": \"**DANet-BERT 16K with local attention ablation**\\n\\nFinally, we performed new experiments by taking DANet-BERT model after it had finished pre-training on sequence length 512, augmenting it with our local attention scheme and continuing pre-training on sequence lengths 1024 and then 16384 tokens. We compare the results with the old way of pre-training without local attention and report them in full below (as we obtained them recently, they are not yet present in the current revision of manuscript).\\n\\n| **Context size** | **1k** | | | **16k** | | |\\n|------------------------|---------------------------|---------------------------|---------------------------|----------------------------|---------------------------|---------------------------|\\n| **Metrics** | **Samples** | **MLM loss** | **MLM acc.** | **Samples** | **MLM loss** | **MLM acc.** |\\n| DANet-BERT | 80M | 2.255 | 0.591 | 27M | 2.843 | 0.452 |\\n| DANet-BERT + local attention | 80M | 1.705 | 0.647 | 7.8M | 1.689 | 0.637 |\\n\\n\\nComparison of DenseAttention BERT-large pre-trained on long context sizes with and without local attention. The models with context size 1k and 16k were evaluated on the corresponding length texts from C4 and Bookcorpus (held-out split) datasets respectively. Samples denotes number of sequences of corresponding length seen by a model during continual pre-training.\\n\\nThe results show that introduction of local-global attention pattern helps to quickly recover the modeling performance even on extremely long sequences. It brings the performance to the same level we observed when pre-training on small sequences and significantly outperforms the models which were pre-trained without the local attention.\\n\\n---\\n\\nTo conclude, all of the results mentioned above indicate towards strong long context modeling capabilities of DANet architecture.\\n\\n---\\n\\n**References:**\\n\\n[1] Amos et al., \\\"Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors.\\\" ICLR 2024\\n\\n[2] Zhang et al., \\\"The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry.\\\" ICLR 2024\\n\\n[3] Gu et al., \\\"Efficiently Modeling Long Sequences with Structured State Spaces.\\\" ICLR 2022\\n\\n[4] Ma et al., \\\"Mega: Moving Average Equipped Gated Attention.\\\" ICLR 2023\\n\\n[5] Tran et al., \\\"The Importance of Being Recurrent for Modeling Hierarchical Structure.\\\" EMNLP 2018\\n\\n[6] Dao et al., \\\"FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness.\\\" NeurIPS 2022\\n\\n[7] Dao, T., \\\"FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning.\\\" ICLR 2024\\n\\n**Weakness 5**\\n\\n> Regarding Cosine RelPE: It is not clear why the authors made the modification to the original Rotary Positional Embedding. It seems to be motivated by efficiency gains, but this claim is not supported sufficiently. An experiment on this could help.\\n\\nWe thank you for the constructive feedback. Motivated by it, we conducted an additional ablation study, a discussion and results of which we present below (also available in the Appendix H.2 of the revised manuscript).\\n\\n---\\n\\n**Ablation on Cosine RelPE**\\n\\nRegular Rotary Positional Embeddings (RoPE) [1] are known to enhance modeling performance and generalization in Transformer models and are widely used [2-5]. However, regular RoPE are not computationally efficient as we explain in section 3.2 of the paper. Our primary motivation behind designing Cosine RelPE is speed and efficiency gains, as we aimed to make DenseAttention as efficient as possible. As we demonstrated in the paper, expanded expressions for RoPE and Cosine RelPE are similar while the latter form of embeddings involves much less memory-intensive computations. Empirically, we found that the difference in modeling quality between the two types is negligible.\\n\\n\\n\\n| Model variant | Training Speed (speed-up) | Inference Speed (speed-up) |\\n|------------------------|---------------------------|----------------------------|\\n| Rotary Embeddings | 7025 (1.00x) | 16908 (1.00x) |\\n| Cosine Embeddings q,k | 10276 (1.46x) | 28467 (1.68x) |\\n| Cosine Embeddings | 10438 (1.49x) | 29630 (1.75x) |\\n\\nComparison of training and inference speeds (in sequences per seconds) on the LRA\\u2019s Pathfinder task. Cosine RelPE are significantly faster in both scenarios. \\u201cq, k\\u201d in the second row denotes that Cosine RelPE were applied separately to Q and K matrices like in regular RoPE.\"}", "{\"title\": \"General response. Part 3\", \"comment\": \"**Weakness 4**\\n\\n> The results in Long Range Arena are promising insofar as they match up against a standard transformer and an SSM, but this may not be a fair comparison. Given that the authors start with a regular transformer and apply modifications to show improvement on long-context, they could also compare against more recent models specialized for long context.\\n\\nWe thank you for the good suggestion. Indeed, Table 1 in the paper lists only two baselines. However, these baselines are exceptionally strong. The first baseline, the transformer you mentioned, is not rather standard but augmented with RoPE, which let it surpass all previous Transformer variants. And the paper, which introduced it, \\u201cNever Train from Scratch\\u201d, was published in ICLR recently, in 2024. \\n\\nAlthough it may be only subtly hinted at in the initial version of the manuscript (lines 424-425), by achieving superior results than our baseline from \\u201cNever Train from Scratch\\u201d, we automatically outperform *all other Transformer-based models and architectures*, which we are aware of, including the most recent ones, such as [1-2]. \\n\\nInspired by your and other reviewers\\u2019 suggestions, we composed a full comparison table, listing results for 25+ models. We share it in the General Response and will add it to the Appendix of the paper.\\n\\n> For example, the authors omit comparison with S5, whose numbers are publicly available on PapersWithCode, as well as S4 V2 and a long list of other models benchmarked on Long Range Arena but not necessarily added there.\\n\\nS4 [3], S5 [4], Megalodon [5] (current first place) and other models which occupy the top spots of the LRA suite of benchmarks belong to the class of State Space Models (SSMs).\\n\\nGenerally, the SSMs are in the league of their own in terms of the modeling quality on the LRA, and until now, no Transformer-based model could close the gap on the LRA even to the relatively old SSM-based architectures. This insurmountable gap is explained by SSMs\\u2019 inherent inductive bias towards capturing hierarchical and long-range dependencies and lack of such bias in Transformers, as discussed in [1, 6-7]. \\n\\nSince the difference in performance between Transformers and SSMs is very big, it is customary for the recent papers published in the leading ML venues not to provide comparisons with any SSMs on the LRA if they propose a new Transformer-based architecture and test it on this suite of benchmarks (see, e.g. [2] \\u2013 ICLR 2024, [8] \\u2013 ICLR 2022, and [9] \\u2013 ICML 2023).\\n\\nIn light of aforementioned arguments, we would like to emphasize that outperformance by DenseAttention of even relatively old S4-v1 SSM is a valuable and remarkable achievement for the Transformer-based architectures. This is the first case when such architecture surpasses an SSM on the 4 of 6 LRA benchmarks.\\n\\n\\n**References**\\n\\n[1] Amos et al., \\\"Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors.\\\" ICLR 2024\\n\\n[2] Zhang et al., \\\"The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry.\\\" ICLR 2024\\n\\n[3] Gu et al., \\\"Efficiently Modeling Long Sequences with Structured State Spaces.\\\" ICLR 2022 \\n\\n[4] Smith et al., \\\"Simplified State Space Layers for Sequence Modeling.\\\" ICLR 2023\\n\\n[5] Ma et al., \\\"Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length.\\\" NeurIPS 2024\\n\\n[6] Ma et al., \\\"Mega: Moving Average Equipped Gated Attention.\\\" ICLR 2023\\n\\n[7] Tran et al., \\\"The Importance of Being Recurrent for Modeling Hierarchical Structure.\\\" EMNLP 2018\\n\\n[8] Qin et al., \\\"cosFormer: Rethinking Softmax in Attention.\\\" ICLR 2022\\n\\n[9] Zandieh et al., \\\"KDEformer: Accelerating Transformers via Kernel Density Estimation.\\\" ICML 2023\"}", "{\"title\": \"Author response. Part 3\", \"comment\": \"> **W2. Continuation**\\n\\nWe thank you for the valuable feedback which let us enhance the exposition of our work and leave a future reader with no questions regarding the difference between Linear Transformers and DANet.\\n\\nWe would also like to emphasize that, although it may be only subtly hinted at in the initial version of the manuscript (lines 424-425), by achieving superior results than our baseline from \\u201cNever Train from Scratch\\u201d, we automatically outperform all other Transformer-based models, which we are aware of. Now, we present these results explicitly in the following table:\\n\\n---\\n\\n**Additional comparisons with Transformer-based models on the LRA**\\n\\n\\n--- \\n\\n**Update:** We have moved the table with the additional comparisons to the **General Response** section above. Please find it attached there.\\n\\n---\"}", "{\"title\": \"Response to Reviewer oGkz. Part 2\", \"comment\": \"**Other sub-quadratic algorithms for sequence processing**\\n\\nAnother promising line of work focuses on applying deep State Space Models (SSMs) [6-9] and Linear RNNs [10-12] to long-range sequence and language modeling. Fundamentally, these architectures model interactions in sequence dimension by a linear recurrence:\\n\\n$\\n\\\\begin{align*}\\n&x_t = \\\\mathbf{\\\\overline{A}}x_{t-1} + \\\\mathbf{\\\\overline{B}}u_t, \\\\\\\\\\n& y_t = \\\\mathbf{\\\\overline{C}}x_t + \\\\mathbf{\\\\overline{D}}u_t,\\n\\\\end{align*}\\n$\\n\\nwhere recurrence matrix $\\\\mathbf{\\\\overline{A}}$ and other parameters are data-independent matrices which form and initialization are defining properties for a particular SSM/ RNN architecture. \\nThe linear recurrence is advantageous during inference as it runs in $O(N)$ time. For training, it also can be unrolled into a convolutional kernel $\\\\mathbf{K} = \\\\begin{bmatrix} \\\\mathbf{\\\\overline{C} \\\\overline{B}}, & \\\\mathbf{\\\\overline{C} \\\\overline{A} \\\\overline{B}}, & \\\\ldots, & \\\\mathbf{\\\\overline{C} \\\\overline{A}}^{N-1}\\\\mathbf{\\\\overline{B}} \\\\end{bmatrix} $ to compute $y = \\\\mathbf{K} * u$ via Fast Fourier Transform (FFT) in $O (N \\\\log N)$ time. Here, we set $\\\\mathbf{D}=0$ for ease of exposition, but in practice it's usually set to identity to act as a skip-connection ubiquitous in modern deep NN architectures.\\n\\nAmong other novel algorithms which rely on FFT or its generalizations such as Monarch matrices [13], are Long Convolutions [14], Hyena [15], and Monarch Mixer [16], with the latter using sub-quadratic primitives both for computations along the sequence length and the model dimension. \\n\\nWhile being sub-quadratic, these algorithms are still slower than linear time as in DenseAttention. However, recently [9] introduced data-dependent gating for SSM parameters and low-level, hardware efficient CUDA implementation for parallel-scan operation which allows for fast linear-time processing both during training and inference. And [17] adopt a similar gating mechanism for causal linear attention which allows to drop the denominator in the linear attention formula but also admits no parallel training without resorting to low-level implementations.\\n\\n---\", \"references\": \"[1] Touvron et al., \\\"LLaMA: Open and Efficient Foundation Language Models.\\\" arXiv preprint arXiv:2302.13971, 2023.\\n\\n[2] Xiong et al., \\\"On Layer Normalization in the Transformer Architecture.\\\" ICML 2020\\n\\n[3] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[4] Shen et al., \\\"Efficient Attention: Attention with Linear Complexities.\\\" WACV 2021\\n\\n[5] Koohpayegani and Pirsiavash, \\\"SimA: Simple Softmax-free Attention for Vision Transformers.\\\" WACV 2024\\n\\n[6] Gu et al., \\\"Efficiently Modeling Long Sequences with Structured State Spaces.\\\" ICLR 2022\\n\\n[7] Gupta et al., \\\"Diagonal State Spaces are as Effective as Structured State Spaces.\\\" NeurIPS 2022\\n\\n[8] Ma et al., \\\"Mega: Moving Average Equipped Gated Attention.\\\" ICLR 2023\\n\\n[9] Gu and Dao, \\\"Mamba: Linear-Time Sequence Modeling with Selective State Spaces.\\\" CoLM 2024\\n\\n[10] Beck et al., \\\"xLSTM: Extended Long Short-Term Memory.\\\" NeurIPS 2024\\n\\n[11] Orvieto et al., \\\"Resurrecting Recurrent Neural Networks for Long Sequences.\\\" ICML 2023\\n\\n[12] Peng et al., \\\"RWKV: Reinventing RNNs for the Transformer Era.\\\" Findings of EMNLP 2023\\n\\n[13] Dao et al., \\\"Monarch: Expressive Structured Matrices for Efficient and Accurate Training.\\\" ICML 2022 \\n\\n[14] Fu et al., \\\"Simple Hardware-Efficient Long Convolutions for Sequence Modeling.\\\" ICML 2023\\n\\n[15] Poli et al., \\\"Hyena Hierarchy: Towards Larger Convolutional Language Models.\\\" ICML 2023\\n\\n[16] Fu et al., \\\"Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture.\\\" NeurIPS 2023\\n\\n[17] Yang et al., \\\"Gated Linear Attention Transformers with Hardware-Efficient Training.\\\" ICML 2024\\n\\n--- \\nWe thank you for the opportunity to improve our paper and will include the above exposition in the next revision of the manuscript (please note that the present revision contains an abbreviated un-proofread version which will be updated).\"}", "{\"title\": \"Author response. Part 4\", \"comment\": \"> **W1.2 (Ending)**\\n\\n---\\n\\n**Discussion of the main baselines**\\n\\n\\u201cTransformers + Rotary\\u201d result from recent ICLR-published paper \\u201cNever Train from Scratch\\u201d which serves as our primary baseline, in its turn, outperforms all previous Transformer-based models by a large margin. The details how the model is trained are comprehensively described both in the paper (https://openreview.net/pdf?id=PdaPky8MUn) and in the code (https://github.com/IdoAmos/not-from-scratch). \\n\\nWe would like to emphasize that DenseAttention outperforms, among others, Linear Attention [1] and Random Feature Attention [2] architectures, which you suggested to compare with in W2. \\n\\nAs you have correctly noticed, our second baseline for the LRA \\u2013 S4 model [3] \\u2013 is not a Transformer based architecture. It belongs to a class of State Space Models (SSMs) which almost uniformly greatly outperform Transformer-based models on this suite of benchmarks due to their inherent inductive bias towards capturing hierarchical and long-range dependencies and lack of such bias in Transformers [4-7]. This makes our result superior to an instance of SSM in several benchmarks a valuable and interesting insight. \\n\\n---\\n\\nWe are grateful to you for the chance to clarify and improve the exposition of our work. We will include the table with the extended LRA results in the appendix of the next version of the manuscript.\\n\\n---\\n\\n**BERT experiments** \\n\\nRegarding the experiments with BERT-like architecture, we believe it\\u2019s natural to compare our models with BERT itself as we closely follow the implementation details and training process for the original model, except for replacing Transformer blocks with DANet blocks. Our key objective is to show that DenseAttention-BERT is at least on par with the original model in terms of LM quality, while being faster and more computationally efficient. Please note that we demonstrated the latter property by comparing our plain-PyTorch implementation of DANet both with popular PyTorch implementation by HuggingFace and with low-level specialized CUDA implementation FlashAttention-2 [7], widely regarded as the fastest attention computation algorithm available and universally used in practice (https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html). \\n\\n----\\n\\n[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[2] Peng et al., \\\"Random Feature Attention.\\\" ICLR 2021\\n\\n[3] Gu et al., \\\"Efficiently Modeling Long Sequences with Structured State Spaces.\\\" ICLR 2022\\n\\n[4] Amos et al., \\\"Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors.\\\" ICLR 2024\\n\\n[5] Ma et al., \\\"Mega: Moving Average Equipped Gated Attention.\\\" ICLR 2023\\n\\n[6] Tran et al., \\\"The Importance of Being Recurrent for Modeling Hierarchical Structure.\\\" EMNLP 2018\\n\\n[7] Dao, T., \\\"FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning.\\\" ICLR 2024\\n\\n---\\n\\n> **W1.3 The scaling effect is not studied. The authors do not analyze how the parameter number affects the results. It is unclear if the method could be scaled to larger-scale applications.**\\n\\nWe thank you for the constructive feedback. Inspired by it, we conducted an additional experiment on scaling, which results are presented below.\\n\\n| Model | Parameters | Configuration | Perplexity | MLM accuracy, % |\\n|--------------------|------------|------------------|------------|-----------------|\\n| DANet-BERT-small | 31M | L=6, D=512 | 15.60 | 49.51 |\\n| DANet-BERT-base | 110M | L=16, D=768 | 7.55 | 60.01 |\\n| DANet-BERT-large | 336M | L=32, D=1024 | 5.47 | 64.92 |\\n\\nThe table depicts three single-head DenseAttention Network models of different sizes pre-trained on Wiki+BookCorpus dataset with MLM objective for 100B tokens. MLM perplexity and accuracy are reported for out-of-sample data from C4 dataset [1]. L and D denote number of layers and hidden dimension of FFN input, respectively.\\n\\nWe observe that DenseAttention architecture exhibits favorable scaling properties similar to vanilla Transformer, as modeling quality grows with the parameters count. Also, as we show in the Pathfinder-256 experiment, the quality consistently increases with the number of training iterations, which opens a promising second axis for scaling.\\n\\nHowever, we acknowledge that the architecture hasn't been tested on billion+ parameters range models, and we hope to explore the applications to LLMs in future work.\\n\\n---\\n[1] Raffel et al., \\\"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.\\\" JMLR 2020\"}", "{\"summary\": \"The paper proposes a DenseAttention Network (DANet), which addresses inefficiencies in the Transformer architecture, especially its high memory and computational cost - O(N^2), with respect to sequence length - N. DANet uses a new MaxNormActivation and Cosine Relative Positional Embeddings, capturing N x N interactions at O(N) space and time complexity. Experimental results demonstrate that DANet outperforms FlashAttention on long sequences on the Long Range Arena benchmark.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The paper proposes an interesting approach to eliminate projection matrices in attention, considering that the multiplication $W_QW_K^{\\\\top}$ can be replaced with a single parameter, which I don't think exists in previous literature.\\n2) The paper also proposes using local attention in conjunction with the proposed attention function.\", \"weaknesses\": \"1) The paper lacks comparison against mamba in experiments. Mamba-I and Mamba-II are fast approaches for long range sequence modeling.\\n2) This is not the first paper which captures NXN correlations with O(N) complexity. Linear attention [1] uses linear approximations of attention. A fair comparison with this paper would be great.\\n3) The mathematical writing in this paper is inconsistent. Here are some instances:\", \"better_notation\": \"1. Standard operators max, var should be mentioned in times new roman using \\\\DeclareMathOperator\\n2. Defined operators such as MaxNormActivation can be put in \\\\text{MaxNormActivation}, as done in 200-204. \\n3. Line 240: has a typo open bracket.\\n4. Line 284: << should be \\\\ll.\\n5. Line 246: why is fp16 and bf16 bolded?\", \"major_readability_issues\": \"1. Inconsistent definition of $X_i$ in line 247, 300 and 311.\\n\\nIf the above issues are resolved I am willing to increase my score.\\n\\n[1] Katharopoulos, Angelos, et al. \\\"Transformers are rnns: Fast autoregressive transformers with linear attention.\\\" International conference on machine learning. PMLR, 2020.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Gratitude and further explanations for Reviewer ruBJ\", \"comment\": \"Dear reviewer ruBJ,\\n\\nWe sincerely thank you very much for your response! We are really delighted and encouraged to have partially addressed your concerns. \\n\\nWe hope to alleviate the remaining concerns at least to some extent with the following discussion.\\n\\n---\\n\\n> **W1.2.** The major issue in the updated table is the authors should carefully select related baselines (similar parameter counts and training data)\\n\\nAll the models which are tested on the LRA should be trained from scratch on the datasets pre-supplied by the authors of this suite of benchmarks. No additional data or pre-training is allowed as per guidelines for the LRA set by authors in the paper. They also set the requirement for the participating models to have an approximately fixed number of parameters for each task. All of the papers we compare with have explicitly stated they adhere to these requirements and have undergone peer review.\\n\\n--- \\n\\n> **W1.1.** \\u201cThe paper does not contain the standard LM eval tasks like MMLU, GSM8K, TriviaQA, ARC, Hellaswag, MATH, etc.\\u201d\\n\\n> **W1.3.** \\u201cHowever, the scaling is from an ultra-small scale (31M) to a small scale (336M).\\u201d and \\u201cIt is unclear if the method could be scaled to larger-scale applications\\\"\\n\\nUnfortunately, our computational resources are limited which prevents us from being able to pre-train multi-billion parameter models for adequate amount of time/ iterations, both for scaling studies and for evaluating on the suggested tasks. In particular, for the majority of suggested tasks, multi-billion parameter models trained on hundreds of billions or trillions of tokens are required to score above chance. \\n\\nHowever, existing experiments indicate that DenseAttention is a viable and performant architecture as it performs comparably and better than standard Transformers on tested scales and exhibits upward trend with the increase of parameters, just like the Transformer. We argue that, given current evidence, there are no reasons to suspect the scaling trend will halt with large models as it does not with the standard Transformer.\\n\\n\\nWe would also like to acknowledge that many papers which introduce new Transformer-based architectures, conduct experiments on equal or smaller scale models compared to ours. These include, among others, recent ICLR published [1-2] and ICML published [3] papers.\\n\\nFinally, as we have stated in our initial response, large scale experiments with GPT and LLAMA style architectures are out of scope for current work, however, we hope to address it in future work. To articulate the scope of the paper more clearly, we have added a \\u201cConclusion & Future Work\\u201d section. We present it in the **General Response. Part 3** above for your convenience and also in the revised manuscript (**Appendix A**).\\n\\n[1] Zhang et al., \\\"The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry.\\\" ICLR 2024\\n\\n[2] Zheng et al., \\\"Efficient Attention via Control Variates.\\\" ICLR 2023\\n\\n[3] Zandieh et al., \\\"KDEformer: Accelerating Transformers via Kernel Density Estimation.\\\" \\nICML 2023\"}", "{\"title\": \"General Response. Part 4. Additional experiments and ablations.\", \"comment\": \"**New results of DANet-BERT pre-training**\\n\\n| **Model** | **MLM Loss (L=128)** | **Acc. (L=128)** | **MLM Loss (L=512)** | **Acc. (L=512)** | **MLM Loss (L=1024)** | **Acc. (L=1024)** |\\n|--------------------------------------|-----------------------|-------------------|-----------------------|-------------------|-----------------------|-------------------|\\n| DenseAttention (1 head, N=128) | **1.91** | **0.620** | - | - | - | - |\\n| DenseAttention (4 heads, N=128) | 1.97 | 0.611 | - | - | - | - |\\n| BERT-large | 2.58 | 0.582 | 2.31 | 0.614 | - | - |\\n| DenseAttention (1 head, N=512) | 1.97 | 0.614 | 1.726 | 0.644 | - | - |\\n| DenseAttention (4 heads, N=512) | 2.04 | 0.602 | 1.84 | 0.624 | - | - |\\n| DenseAttention (1 head, N=1024) | 1.96 | 0.615 | **1.71** | **0.648** | 2.26 | 0.591 |\\n| DenseAttention (4 heads, N=1024) | 2.07 | 0.598 | 1.83 | 0.627 | **1.87** | **0.618** |\\n\\nEvaluations of MLM loss and accuracy for DenseAttention Network models and the original BERT on C4 dataset texts of different context sizes. N is the maximum sequence length with which a model was trained or evaluated and L is the length of evaluation samples. DANet-BERT model variations uniformly outperform the original BERT on corresponding context sizes.\\n\\nThe results of the new DANet-BERT model, which is closely aligned with the original, are similar with the old results. The relations and trends in performance between all models and all context lengths remain unchanged.\\n\\n--- \\n\\n\\n**DANet-BERT 16K with local attention**\\n\\nWe conducted additional experiments by taking DANet-BERT model after it had finished pre-training on sequence length 512 and continuing pre-training on sequence lengths 1024 and then 16384 tokens both with and without local attention scheme. \\n\\n| **Context size** | **1k** | | | **16k** | | |\\n|------------------------|---------------------------|---------------------------|---------------------------|----------------------------|---------------------------|---------------------------|\\n| **Metrics** | **Samples** | **MLM loss** | **MLM acc.** | **Samples** | **MLM loss** | **MLM acc.** |\\n| DANet-BERT | 80M | 2.255 | 0.591 | 27M | 2.843 | 0.452 |\\n| DANet-BERT + local attention | 80M | 1.705 | 0.647 | 7.8M | 1.689 | 0.637 |\\n\\nComparison of DenseAttention BERT-large pre-trained on long context sizes with and without local attention. The models with context size 1k and 16k were evaluated on the corresponding length texts from C4 and Bookcorpus (held-out split) datasets respectively. Samples denotes number of sequences of corresponding length seen by a model during continual pre-training.\\n\\nThe results show that introduction of local-global attention pattern helps to quickly recover the modeling performance even on extremely long sequences. It brings the performance to the same level we observed when pre-training on small sequences and significantly outperforms the models which were pre-trained without the local attention.\\n\\n---\\n\\n**Ablation on use of MaxNormActivation in standard Transformer** \\n\\n| **Model** | **MLM loss** | **MLM accuracy** | \\n|-------------------------------------|--------------|-------------------|\\n| BERT-large (LayerNorm) | 2.11 | 59.3 |\\n| BERT-large (MaxNormActivation) | 2.46 | 54.3 |\\n\\nComparisons between LayerNorm and MaxNormActivation for BERT-large Transformer pre-trained on Wiki+BookCorpus dataset for 10B tokens. MLM loss and accuracy are reported for out-of-sample data from C4 dataset\\n\\nThe results indicate that standard LayerNorm is optimal for standard Transformer as replacing it by MaxNormActivation leads to a subpar performance. On the other hand, MaxNormActivation is not a merely optimal but rather essential part of DANet architecture, because putting the standard LayerNorm into it instead of MaxNormActivation results in numerical instability.\"}", "{\"title\": \"General Response. Additional comparisons on the LRA (2)\", \"comment\": \"**References**\\n\\n[1] Tay et al., \\\"Long Range Arena: A Benchmark for Efficient Transformers.\\\" ICLR 2021\\n\\n[2] Peng et al., \\\"Random Feature Attention.\\\" ICLR 2021 \\n\\n[3] Ma et al., \\\"Luna: Linear Unified Nested Attention.\\\" NeurIPS 2021 \\n\\n[4] Xiong et al., \\\"Nystr\\u00f6mformer: A Nystr\\u00f6m-Based Algorithm for Approximating Self-Attention.\\\" AAAI 2021\\n\\n[5] Chen et al., \\\"Skyformer: Remodel Self-Attention with Gaussian Kernel and Nystr\\u00f6m Method.\\\" NeurIPS 2021 \\n\\n[6] Qin et al., \\\"cosFormer: Rethinking Softmax in Attention.\\\" ICLR 2022\\n\\n[7] Lee-Thorp et al., \\\"FNet: Mixing Tokens with Fourier Transforms.\\\" NAACL 2022\\n\\n[8] Qin et al., \\\"The Devil in Linear Transformer.\\\" EMNLP 2022\\n\\n[9] Zandieh et al., \\\"KDEformer: Accelerating Transformers via Kernel Density Estimation.\\\" ICML 2023\\n\\n[10] Zhang et al., \\\"The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry.\\\" ICLR 2024\\n\\n[11] Amos et al., \\\"Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors.\\\" ICLR 2024\\n\\n[12] Dao et al., \\\"FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness.\\\" NeurIPS 2022\"}", "{\"title\": \"Author response. Part 2\", \"comment\": \"> **W2. This is not the first paper which captures NXN correlations with O(N) complexity. Linear attention [1] uses linear approximations of attention. A fair comparison with this paper would be great.**\\n\\nWe thank you for the constructive suggestion to compare DenseAttention with the paper [1] which was one of the first to bring the concept of Linear and Linearized Transformers to light. \\n\\nActually, we cited this and several other papers on linear and sub-quadratic Transformers but initially abstained from more evolved analysis. The reason is we believe that DenseAttention architecture is substantially different from Linear Transformer and its derivatives. Moreover, DenseAttention significantly outperforms it on the LRA suite of benchmarks (the comparison is presented below).\\n\\n\\n| Model | ListOps | Text | Retrieval | Image | Pathfinder | Path-X | Avg |\\n|-------------------|---------|--------|-----------|--------|------------|--------|-------|\\n| Linear Transformer | 16.13 | 65.90 | 53.09 | 42.34 | 75.30 | - | 50.55 |\\n| DenseAttention | 50.50 | 81.19 | 87.51 | 72.55 | 87.40 | 88.82 | 75.83* |\\n\\n\\\\* Average for DenseAttention in this table is calculated without Path-X task for compatibility.\\n\\nHowever, motivated by your and other reviewers\\u2019 suggestions, we wrote up an extended discussion comparing Linear Transformer and its derivatives with DenseAttention Network from the view of architecture. We present it below in full and will include it in the revised version of the paper.\\n\\n---\\n\\nGiven entries $Q_i, K_j, V_j \\\\in \\\\mathbb{R}^{1 \\\\times d}$ of matrices $\\\\mathbf{Q}, \\\\mathbf{K}$ and $\\\\mathbf{V}$, standard softmax attention for input i can be reformulated as\\n\\n\\n$ \\n\\\\begin{equation}\\n A_i = \\\\frac{\\\\sum_{j=1}^N \\\\text{Sim}(Q_i, K_j)V_j}{\\\\sum_{j=1}^N \\\\text{Sim}(Q_i, K_j)} \\\\in \\\\mathbb{R}^{1 \\\\times d}\\n\\\\end{equation},\\n$\\n\\nwhere $\\\\text{Sim}(Q_i, K_j)=\\\\text{exp}(Q_i K_j^\\\\top)$. Conceptually, linear attention class of algorithms, described in [1] and built upon in numerous subsequent works, approximates or replaces this similarity function with separable kernel $\\\\text{Sim}(Q_i, K_j)=\\\\mathcal{K}(Q_i, K_j)=\\\\phi(Q_i) \\\\phi(K_j^\\\\top)$, where $\\\\phi: \\\\mathbb{R}^d \\\\to \\\\mathbb{R}_{+}^{r}$ maps query and key vectors to non-negative vectors with possibly different dimension r.\\n\\nHence, the attention mechanism becomes:\\n\\n $A_i = \\\\frac{\\\\sum_{j=1}^N \\\\phi(Q_i) \\\\phi(K_j^\\\\top)V_j}{\\\\sum_{j=1}^N \\\\phi(Q_i) \\\\phi(K_j^\\\\top)} = \\\\frac{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)V_j}{\\\\phi(Q_i) \\\\sum_{j=1}^N \\\\phi(K_j^\\\\top)}, \\\\quad (1)$ \\n\\nwhich can be computed in linear time.\\n\\nThe function $\\\\phi(\\\\cdot)$ can take various forms, such as 1 + ELU [1], ReLU [2], squared ReLU [3], Taylor [4-6] or Random Feature [7-8] approximations, and even MLPs trained to mimic softmax attention [6]. They aim to approximate softmax without its explicit calculation when being applied jointly to queries and keys, or to retain its properties, most importantly, non-negativity of resulting dot products $\\\\phi(Q_i) \\\\phi(K_j^\\\\top)$. \\n\\n\\nThe latter property, together with reweighting attention scores (denominator in the formula 1) are defining for Linear Transformer algorithms. Absence of scaling by $\\\\frac{1}{\\\\phi(Q_i) \\\\sum_{j=1}^n \\\\phi(K_j^\\\\top)}$ leads to numerical instabilities, and the scaling factor itself is not guaranteed to be bounded without non-negative $\\\\phi(\\\\cdot)$. However, both mappings $\\\\phi(\\\\cdot)$ (even relatively simple), and memory intensive non-MatMul operations for reweighting contribute to subpar speed and computational efficiency in comparison with ordinary and fast self-attention algorithms on all but large context sizes.\\n\\nWe forgo both transforming $\\\\mathbf{Q}, \\\\mathbf{K}$ by $\\\\phi(\\\\cdot)$ and reweighting in DenseAttention as we believe the main factor of success of Transformer is the ability of all $N \\\\times N$ interactions between tokens. It results in an improved computational efficiency and simpler design which can be expressed entirely by matrix multiplications:\\n\\n$\\\\mathbf{A} = \\\\mathbf{Q} \\\\mathbf{K}^\\\\top \\\\mathbf{V} $\\n\\n[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[2] Qin et al., \\\"cosFormer: Rethinking Softmax in Attention.\\\" ICLR 2022\\n\\n[3] Hua et al., \\\"Transformer Quality in Linear Time.\\\" ICML 2022 \\n\\n[4] Keles et al., \\\"On The Computational Complexity of Self-Attention.\\\" International Conference on Algorithmic Learning Theory 2023\\n\\n[5] Arora et al., \\\"Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff.\\\" ICLR 2024\\n\\n[6] Zhang et al., \\\"The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry.\\\" ICLR 2024\\n\\n[7] Choromanski et al., \\\"Rethinking Attention with Performers.\\\" ICLR 2021\\n\\n[8] Peng et al., \\\"Random Feature Attention.\\\" ICLR 2021 \\n\\n---\"}", "{\"title\": \"Author response. Part 1\", \"comment\": \"We thank the reviewer ruBJ for insightful comments and valuable feedback. We are grateful for your recognition of the motivation and the presentation of the paper. In the following, we hope to address your concerns with detailed responses:\\n\\n> **W1.1 The testbeds are limited. Currently, the only benchmark is Long Range Arena (for causal LMs)**.\\n\\nWe thank you for bringing up this issue. We think there might be some confusion about the scope of the experiments and we would like to clarify it with an extended discussion: \\n\\n**1.** The Long Range Arena [3] (section 4.1) is not the only benchmark used in the paper. We also conduct extensive experiments and benchmarks on Masked Language Modeling (MLM) in section 4.2 with BERT-styled DANet models as compared to standard Transformer-based BERT.\\n\\n**2.** The Long Range Arena is, in fact, not a single benchmark but a suite of *6 challenging and diverse tasks* designed to test modeling capabilities across different domains. Below is a brief description of each task. \\n\\n---\\n**ListOps.**[4] This is a purely logical synthetic task which is dedicated to modeling evaluation results of long hierarchically structured sequences. Each sequence has length up to 2000 symbols and consists of whole numbers from 0 to 9, mathematical operators, such as MAX, MIN, MEDIAN and SUM_MOD, and parentheses.\\n\\n**Text Classification (IMDB)** [5]. This task tests Natural Language Understanding (NLU) abilities of models by letting them classify the sentiment of movie reviews in the IMDB dataset. To make the task more challenging, the texts of the reviews are split into tokens not on a word level, but on a character (or byte) level. This leads to much longer sequences of 4K max length.\\n\\n**Document Retrieval (AAN)** [6]. This task tests the abilities of producing encoded representations of the textual information and further matching/ retrieving them. Namely, given a pair of the documents from ACL Anthology Network (AAN; Radev et al., 2013) dataset, a model should independently process them and, based on their final embeddings, classify if the two documents have a citation link. As in the IMDB tasks, individual input texts are tokenized on a character (byte) level with max sequence length 4K.\\n\\n**Image Classification (CIFAR-10)** [7]. This is an image classification task with 10 classes on a classical CIFAR-10 benchmark with one specific condition: images should be ingested into models as 1-d sequences, thus setting the input length to 1024 tokens (pixels) and making the task more challenging.\\n\\n**Pathfinder** [8]. This is a binary classification task of 32x32 pixels grayscale images with corresponding sequence length 1024 tokens, which, formally, makes it similar to CIFAR-10 task. However, it\\u2019s different on a conceptual level, as the task measures a model\\u2019s ability to discern spatial dependencies. Given a multitude of intertwined, dashed line paths, a model should correctly determine if two rounded dots are connected by a dashed line.\\n\\n**Pathfinder-X (Pathfinder-128)**. It\\u2019s a version of Pathfinder task with 16K (128x128) pixels images which makes it significantly more challenging. At the time of publication of the original LRA paper [3], none of the tested models managed to achieve a score above chance on this benchmark.\\n\\nTherefore, the Long Range Arena arguably represents a wide range of tasks, spanning from logic and reasoning to language modeling and image classification. To perform well on all of the 6 benchmarks, a model\\u2019s architecture should be powerful and versatile enough to generalize to different modalities.\\n\\n---\\n\\nWe thank you for the opportunity to clarify and enhance the exposition of the experiments section. We will include the description of the LRA benchmarks in the appendix of the new version of manuscript.\\n\\n--- \\n**3.** We would also like to kindly note that the amount of the experimentation in our work seems to be adequate in comparison with many other papers introducing novel architectural modifications of Transformers and published in leading ML venues. \\n\\nFor example, the ICLR-published paper \\u201cRandom Feature Attention\\u201d [2], which you have referenced in the review, contains experiments with Language Modeling (causal and machine translation) and Long Range Arena, albeit they tested their model only on 3 out of 6 its benchmarks. \\n\\nAnother widely known paper \\u201cTransformers are RNNs\\u201d [1], which you have also referenced, contains experiments on autoregressive image generation (MNIST and CIFAR-10), speech recognition, and artificial copying task. However, it doesn\\u2019t even provide experiments or results for language modeling task. Also, there\\u2019s no Long Range Arena results in the paper, although it can be attributed to the fact that this suite of benchmarks was not presented at that time.\\n\\n---\"}", "{\"summary\": \"This paper proposes a novel neural network architecture called DenseAttention Network as alternative to Transformer networks with Self-Attention.\\nThe core innovation of DANet is the novel DenseAttention mechanism, which removes Softmax and projection layers from original Self-Attention.\", \"additionally_the_authors_modify_the_surrounding_network_block\": \"They replace Layernorm or RMS with their novel \\u201cMaxNormActivation\\u201d, they remove some skip connections and modify the Rotary Positional Embeddings.\\nThe paper performs experiments on the Long Range Arena Benchmark and masked language modeling with BERT-large sized models. \\n\\nThe paper claims to outperform a BERT baseline on masked language modeling. \\nIt claims to set a new SOTA on \\u201cTransformer-based\\u201d models on LRA and to outperform 4 of 6 State Space model baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The usage of MaxNormActivation seems to be well motivated by a theoretical variance analysis. However, this could also be supported with experiments empirically too.\", \"Code provided.\"], \"weaknesses\": [\"In general I believe this paper is not ready for publication as there are several weaknesses in terms of the new architecture, the experiments and the presentation in the paper. My main concerns are summarized below:\", \"A related work section is missing in which the authors put DANet in relation to other Linear Attention variants (e.g. GLA https://arxiv.org/abs/2312.06635 ), State space models (e.g. Mamba https://arxiv.org/abs/2405.21060) or other RNNs variants (e.g. xLSTM (https://arxiv.org/abs/2405.04517 ) or RWKV (https://arxiv.org/abs/2305.13048 )). Also a relation to embedding models other than BERT is missing, e.g. Monarch Mixer (https://arxiv.org/abs/2310.12109).\", \"Since DANet seems to be a hybrid architecture (Section 3.3), also a relation to hybrid architectures (e.g. https://arxiv.org/abs/2402.19427, https://arxiv.org/abs/2406.07522) is interesting.\", \"There are so many architecture changes (e.g. Layernorm, Positional Encoding, Block structure, Attention mechanism, Block order / hybrid variants) that leave the reader unclear of what brings performance gains. A careful ablation study could help here.\", \"While the paper demonstrates large throughput benefits in the long context regime compared to Transformers, it has not been shown in the paper that DANet performs well in the long context regime.\", \"Regarding Cosine RelPE: It is not clear why the authors made the modification to the original Rotary Positional Embedding. It seems to be motivated by efficiency gains, but this claim is not supported sufficiently. An experiment on this could help.\", \"A conclusion is missing.\"], \"questions\": [\"Why do you still consider DANet as Transformer-based? The only part of transformers that is left, is the Feeforward layers which is now inside the block.\", \"You train your model with 4 stages, but the original BERT was trained on 2 stages. Could you also train the baseline in the same way?\", \"On page 10, line 494 (key highlights) you hint at the fact that DANet outperforms the baseline due to a soft-capping of output logits that you use. Why did you not try this for the baseline as well?\", \"L.400: The authors find that local attention is effective. Do you use the Transformer Self-Attention here? An ablation on this would be interesting.\", \"Why do you use float16 in the experiments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Remaining references\", \"comment\": \"[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[2] Qin et al., \\\"cosFormer: Rethinking Softmax in Attention.\\\" ICLR 2022\\n\\n[3] Hua et al., \\\"Transformer Quality in Linear Time.\\\" ICML 2022 \\n\\n[4] Keles et al., \\\"On The Computational Complexity of Self-Attention.\\\" International Conference on Algorithmic Learning Theory 2023\\n\\n[5] Arora et al., \\\"Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff.\\\" ICLR 2024\\n\\n[6] Zhang et al., \\\"The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry.\\\" ICLR 2024\\n\\n[7] Choromanski et al., \\\"Rethinking Attention with Performers.\\\" ICLR 2021\\n\\n[8] Peng et al., \\\"Random Feature Attention.\\\" ICLR 2021\"}", "{\"title\": \"References\", \"comment\": \"[1] Dosovitskiy et al., \\\"An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.\\\" ICLR 2021\\n\\n[2] Kirillov et al., \\\"Segment Anything.\\\" ICCV 2023\\n\\n[3] Dubey et al., \\\"The Llama 3 Herd of Models.\\\" arXiv preprint arXiv:2407.21783, 2024\\n\\n[4] Gemma Team, \\\"Gemma 2: Improving Open Language Models at a Practical Size.\\\" arXiv preprint arXiv:2408.00118, 2024\"}", "{\"title\": \"Follow-Up for Reviewer ruBJ\", \"comment\": \"Dear reviewer ruBJ,\\n\\nWe thank you again for your insightful comments and valuable feedback, which prompted us to conduct new experiments and add extended discussions for several topics.\\n\\nWe would like to follow up and gently ask if we were able to address your concerns in the Author Response. As the discussion period progresses, we would appreciate any updates or further questions you may have. Thank you for your time in advance!\"}", "{\"title\": \"Additional experiment for reviewer rVwA\", \"comment\": \"Dear reviewer rVwA,\\n\\nFollowing your suggestion and fulfilling the commitment we expressed in the latest response, we have completed an additional experiment. We present its results below and will include in the final version of the manuscript:\\n\\n| **Model** | **MLM loss** | **MLM accuracy** | \\n|-------------------------------------|--------------|-------------------|\\n| BERT-large (LayerNorm) | 2.11 | 59.3 |\\n| BERT-large (MaxNormActivation) | 2.46 | 54.3 |\\n\\nComparisons between LayerNorm and MaxNormActivation for BERT-large Transformer pre-trained on Wiki+BookCorpus dataset for 10B tokens. MLM loss and accuracy are reported for out-of-sample data from C4 dataset\\n\\nThe results indicate that standard LayerNorm is optimal for standard Transformer as replacing it by MaxNormActivation leads to a subpar performance. On the other hand, MaxNormActivation is not a merely optimal but rather essential part of DANet architecture, because putting the standard LayerNorm into it instead of MaxNormActivation results in numerical instability.\\n\\n---\\n\\nWe thank you for the opportunity to further enhance our work and are keen to receive your feedback.\"}", "{\"title\": \"Gentle follow up for reviewer rVwA\", \"comment\": \"Dear reviewer rVwA,\\n\\nWe express heartfelt gratitude for your response and constructive, actionable feedback which resulted in an additional experiment and extended discussion. As we hope to have resolved your remaining concerns, and given that the discussion period ends in less than 12 hours, we kindly ask you to consider updating your score if we have been able to address them, or to let us know if there are any remaining concerns or questions.\"}", "{\"title\": \"Author response. Part 1\", \"comment\": \"We thank the reviewer HAtu for thorough review, attention to details, and constructive feedback. We are encouraged by your recognition of the approach to eliminate the two low-rank matrices $\\\\mathbf{W}_Q$ and $\\\\mathbf{W}_K$ in favor of single high-rank matrix, which served as one of the reasons to name the method DenseAttention. Please let us address the concerns you have raised.\\n\\n\\n\\n> **W1. The paper lacks comparison against mamba in experiments. Mamba-I and Mamba-II are fast approaches for long range sequence modeling.**\\n\\nWe thank you for bringing up a discussion about Mamba. We believe comparing with Mamba on the benchmarks referred to in our paper would be a delicate issue due to the following reasons:\\n\\n**1. Absence of LRA tests by Mamba's authors**\\n\\nThe authors of the Mamba didn\\u2019t provide the results on the Long Range Arena (LRA) suite of benchmarks in the paper, and, it seems, they do not intend the model to be tested on the LRA. To quote the author of the paper (source: https://github.com/state-spaces/mamba/issues/282#issuecomment-2221135197), \\u201cWe did not try LRA with Mamba. We don't believe that it's a good dataset, e.g. see: [1]\\u201c \\n\\nDespite that, we found the Mamba\\u2019s results on the LRA in another paper [2]. We compare them with DenseAttention below:\\n\\n\\n\\n| Model | ListOps | Text | Retrieval | Image | Pathfinder | Path-X | Avg |\\n|-------------------|---------|--------|-----------|--------|------------|--------|-------|\\n| DenseAttention | 50.50 | 81.19 | 87.51 | 72.55 | 87.40 | 88.82 | 75.83 |\\n| Mamba | 38.02 | 82.98 | 72.14 | 69.82 | 69.26 | 67.32 | 66.59 |\\n\\nThe metric for all tasks is accuracy, in %. Larger is better. DenseAttention significantly outperforms Mamba on average and in each benchmark individually. \\n\\nHowever, we don\\u2019t believe it might be a fair comparison because 1) the authors explicitly stated that this model is best suited for other causes, and 2) Mamba is an instance of a class of State Space Models (SSMs), whereas our model is Transformer-based.\\n\\nIt is worth noting that, generally, SSMs are in the league of their own in comparison with Transformer-based architectures and greatly outperform them on the LRA benchmarks due to their inherent inductive bias towards capturing hierarchical and long-range dependencies and lack of such bias in Transformers, as discussed in [1, 3-4]. Nevertheless, we found that DenseAttention also outperforms one of these much stronger SSM baselines in 4 of 6 benchmarks (S4-v1 [5], results reported in the Table 1 of the paper). To the best of our knowledge, this is the first case when a pure Transformer-based model compares favorably with an SSM on the LRA which is a valuable and interesting insight. \\n\\n**2. Mamba's incompatibility with bidirectional sequence processing**\\n\\nOur second group of experiments involves pre-training BERT-like architectures with DenseAttention. We are not able to draw comparisons with Mamba, because the task for the experiments \\u2013 Masked Language Modeling \\u2013 requires bidirectional architecture while Mamba is a solely unidirectional, left-to-right type of model. The authors of the paper have not provided an official bidirectional implementation as of now (https://github.com/state-spaces/mamba/issues/99). Devising a bidirectional Mamba-based architecture for general sequence processing would be an important piece of work on its own merits, worth a dedicated research paper, however it\\u2019s out of scope of our current work as it is focused on improving the Transformer-based architecture.\\n\\nAlso, we believe it\\u2019s natural to compare our models with BERT [6] specific architecture itself because we closely follow the implementation details and training process for the original model, except for replacing Transformer blocks with DANet blocks. Our key goal was to show that DenseAttention-BERT is at least on par with the original model in terms of LM quality, while being faster and more computationally efficient, and we successfully accomplished this goal.\", \"references\": \"[1] Amos et al., \\\"Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors.\\\" ICLR 2024\\n\\n[2] Alonso et al., \\\"State Space Models as Foundation Models: A Control Theoretic Overview.\\\" arXiv preprint arXiv:2403.16899, 2024.\\n\\n[3] Ma et al., \\\"Mega: Moving Average Equipped Gated Attention.\\\" ICLR 2023\\n\\n[4] Tran et al., \\\"The Importance of Being Recurrent for Modeling Hierarchical Structure.\\\" EMNLP 2018\\n\\n[5] Gu et al., \\\"Efficiently Modeling Long Sequences with Structured State Spaces.\\\" ICLR 2022 \\n\\n[6] Devlin et al., \\\"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.\\\" NAACL 2019\"}", "{\"comment\": \"I highly recommend you conduct fair comarisons with Mamba in your future revisions. I decided to maintain my score.\"}", "{\"title\": \"Summary for reviewer HAtu\", \"comment\": \"Dear reviewer HAtu,\\n\\nAcknowledging that you may have time constraints, we have prepared an executive summary of our detailed response for your convenience. \\n\\n---\\n\\n**W1. Comparisons against Mamba.** Presented comparisons with Mamba on the LRA which demonstrate DenseAttention is a clear winner. Reiterated about the comparison with another strong SSM baseline. Discussed the limitations of Mamba for bidirectional MLM and explained that our experiment design was to pre-train an architecture very closely matching BERT for fair comparison. (**Part 1**)\\n\\n**W2. Comparisons with Linear Transformers**. Provided an in-detail \\u201cDiscussion of Linear Transformers in relation to DenseAttention\\u201d, in which we described theoretical differences between DenseAttention and the broad class of Linear Transformer models. (**Part 2**, same in **General Response. Part 1** above and in **Appendix D** in the revised manuscript) \\n\\nPresented comparisons with Linear Transformer and 25+ other Transformer-based architectures, including the most recent, to explicitly demonstrate that DenseAttention outperforms all of them. Restated that, to the best of our knowledge, Dense attention holds the top place on the LRA across all such architectures to date. (**Parts 2-3** and **General Response. Part 2**, also in **Appendix E.2** in the revised manuscript)\\n\\n**W3. Typos.** Fixed all errors and inconsistencies throughout the paper.\\n\\n---\\n\\nWe are grateful for your effort and suggestions which helped to enhance our work, and we are very keen to receive your feedback.\"}", "{\"title\": \"Gratitude and further explanations for Reviewer ruBJ (2)\", \"comment\": \"> W2. \\u201cBut using an identity mapping as $\\\\phi$ and dropping the denominator does not seem to largely contribute to the established knowledge unless the authors could justify that previous papers were doing wrong (and therefore could explain why their performance is much lower than DenseAttention).\\u201d\\n\\nComplete removal of non-trivial $\\\\phi$ and the denominator from the attention is a rewarding, but very hard thing to achieve as explained in the paper. To the best of our knowledge, we are the first to successfully do it and demonstrate excellent modeling performance. It unlocks another conceptual paradigm since attention scores are no longer constrained to be non-negative *as in all previous Transformer-based architectures* starting from the vanilla softmax self-attention. We argue that this increased expressivity and versatility contributes positively to the performance.\\n\\nWe emphasize that framing the absence of any transformations applied to $\\\\mathbf{Q}$, $\\\\mathbf{K}$, or their dot-products as an identity mapping $\\\\phi(x)=x$ would be conceptually wrong, because attention scores are required to be non-negative both in the Linear and general Kernelized (vanilla self-attention belongs here) attention classes [1-3]. They also need to be normalized by some weights, and the absence of reweighting the attention scores by their row-wise sums further sets DenseAttention apart from all other algorithms.\\n\\nBoth elements are memory-intensive operations and exactly justify and explain relative computational inefficiency of attention mechanisms w.r.t DenseAttention.\\n\\nTo conclude, exactly the omission of these two elements brings both modeling quality and speed/ computational efficiency gains. However, designing such an architecture to be numerically stable and to have well-behaved activations is very hard. We believe that accomplishing it and proving the architecture performs well in different settings is a major and valuable finding.\\n\\n[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n[2] Choromanski et al., \\\"Rethinking Attention with Performers.\\\" ICLR 2021\\n\\n[3] Tsai et al., \\\"Transformer Dissection: A Unified Understanding of Transformer's Attention via the Lens of Kernel.\\\" EMNLP 2019\\n\\n---\\n\\nAgain, we express our gratitude for your response and positive evaluation of our efforts in the rebuttal. We also hope to have been able to alleviate your remaining concerns with new explanations. If this is the case, we kindly ask you to consider further updating your score or sharing any additional feedback or questions, should you feel it is necessary.\"}", "{\"title\": \"Summary for reviewer oGkz\", \"comment\": \"Dear reviewer oGkz,\\n\\nAs we understand your time might be limited, and given our detailed response, here we provide an executive summary of it for your convenience.\\n\\n---\\n\\n**Q1. Why DANet is Transformer-based?** Provided a derivation from generalized attention formula to DenseAttention and highlighted minor differences across Transformer models to prove DANet is Transformer-based. Contrasted it with major differences in RNN/ SSM architectures. **(Part 1)**\\n\\n**W1. Related work.** Reiterated existing discussions of related work in the original manuscript. Added an in-depth analysis of Linear Transformers in relation to DenseAttention (**General Response. Part 1**) and an extended exposition of SSMs/ RNNs variants and Monarch Mixer (**Parts 1\\u20132**) at your suggestion. \\n\\n**W2 and Q4. Use of self-attention in local layers/ hybrid architectures. Ablations** Explained that local layers also use DenseAttention, and thus, it\\u2019s a \\u201cpure\\u201d architecture. Drew a comparison with Gemma-2 Transformer LLM from Google. Pointed to an ablation on local attention in the paper and a new ablation. (**Part 3**)\\n\\n**W3. Ablations on various elements.** Explained that some ablations had failed or been impossible due to specificity of the architecture. Pointed to existing ablations/ discussions in the paper and new ablations inspired by your feedback. (**Part 3**)\\n\\n**W4. Performance on long contexts.** Discussed the essence of the LRA suite of benchmarks as a challenging test for long context (up to 16K length) capabilities and reiterated on remarkable DANet performance on it. Added comparisons with 25+ models on the LRA (trends hold). Presented an experiment on Pathfinder-256 (65K length) benchmark where DANet establishes a new SOTA. Conducted an ablation study by using local attention with DANet-BERT MLM and demonstrated that the model\\u2019s performance even on 16K length quickly matches the target quality which had been established on small contexts. (**Parts 4-5**)\\n\\n**W5. Efficiency gains of Cosine RelPE vs RoPE.** Made an ablation study and demonstrated Cosine RelPE are faster during training and inference up to 49% and 75%. (**Part 5**)\\n\\n**W6. Conclusion.** Presented a \\u201cConclusion & Future Work\\u201d section. (**Part 6**)\\n\\n**Q2 & Q3. Differences in training and architecture between DANet and original BERT.** Explained that first 2 of 4 training stages correspond to original BERT pre-training, and the last two are for testing performance on long contexts. Fully reproduced all DANet-BERT pretraining experiments with the model accurately matching original architecture, and demonstrated that all results, relations and trends in models\\u2019 performance hold. (**Parts 6-7**)\\n\\n**Q5. Use of fp16.** Explained that the reasons to use it are speed gains and compatibility with older hardware. (**Part 7**)\\n\\n---\\n\\nWe express a sincere gratitude for your review which helped to enhance our work, and we are eagerly looking forward to your feedback.\"}", "{\"summary\": \"In this paper, the authors propose a new architecture, DenseAttention Network, which could potentially replace Transformer. The motivation of this new design is to alleviate the quadratic time complexity in sequence length as well as the memory-bound operations in the vanilla Transformer (e.g. softmax and layer normalization). Specifically, they propose to linearize the original multi-head attention layer with naive matrix multiplications. To stabilize the forward pass, the inputs of each layer are scaled to have the same $\\\\ell_\\\\infty$ norm. The authors further propose a replacement for the rotary embedding and sliding window that is compatible with their approach.\\n\\n**Strengths**\\n1. Given the popularity of Transformer models, the topic of their efficiency becomes more and more important. The proposed solution is also well-motivated.\\n2. The paper is well-written and easy to follow.\\n\\n**Weaknesses**\\n1. The major flaw of this paper is the thin experiments.\\n2. The paper lacks several important previous papers.\\n\\nIn summary, this paper proposes a potential solution to accelerate Transformer models. However, the experiments are not convincing enough. Therefore, I would recommend a clear rejection unless there is further evidence.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Given the popularity of Transformer models, the topic of their efficiency becomes more and more important. The proposed solution is also well-motivated.\\n2. The paper is well-written and easy to follow.\", \"weaknesses\": \"1. The major flaw of this paper is the thin experiments. The Transformer model is known to perform well on a wide range of tasks. In addition, it also demonstrates a promising scaling effect. On the other hand, this paper only contains limited experiments: (1) the testbeds are limited. Currently, the only benchmark is Long Range Arena (for causal LMs); (2) the baselines are limited. There is only one Transformer model that serves as the baseline without specifying how the model is trained; (3) the scaling effect is not studied. The authors do not analyze how the parameter number affects the results. It is unclear if the method could be scaled to larger-scale applications.\\n2. The paper lacks several important previous papers. In fact, linearizing attention has been heavily studied before [1, 2, 3]. This paper has no comparisons or discussions. \\n\\n[1] Random Feature Attention, ICLR 2021 \\\\\\n[2] Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention, ICML 2020 \\\\\\n[3] Transformer Dissection: A Unified Understanding of Transformer's Attention via the Lens of Kernel, EMNLP 2019\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response. Part 3. Additional baselines\", \"comment\": \"> **W1.2 (Continuation)**\\n --- \\n\\n**Update:** We have moved the table with the additional baselines to the **General Response** section above. Please find it attached there.\\n\\n---\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thanks for the detailed response.\\n\\n> Framing the absence of any transformations applied to $Q$ and $K$ as an identity mapping $\\\\phi(x) = x$ would be conceptually wrong because transformed values are required to be non-negative in the Linear Transformer class of algorithms [1-2]. And the absence of reweighting the attention scores by their row-wise sums further sets DenseAttention apart from Linear Transformers.\\n\\nThanks for pointing it out. This seems to make sense --- actually $\\\\phi$ need not be non-negative (as a scalar or element-wise as a vector) but $\\\\phi(x)^{\\\\top}\\\\phi(y)$ needs to be non-negative (to clarify a point being made in your general response), for example Gaussian kernel. But overall I agree that this dense attention module is not strictly in scope of [1]. \\n\\n> Furthermore, calculation of $\\\\|x\\\\|_{2}$ in low precision formats can lead to either a numerical instability (as in the case of fp16) or to a loss of numerical precision (both in fp16 and bf16) for high-dimensional $x$.\\n\\nThis makes sense and is a good motivation for MaxNormActivation. \\n\\nHowever, the CosineRelPE block, which is the other novel block, still seems to only be motivated by an ablation and concerns of efficiency. In order to validate that it is a full replacement of RoPE without any performance drawbacks, it would have been great to have higher-scale experiments. Local attention is pulled from another work so it is OK to not have the full motivation for it here.\\n\\n> \\u201cWe pre-train an encoder model with the approximately same number of parameters as in BERT-large\\u201d\\n\\nMy apologies, I seem to have missed the word \\\"encoder\\\" in my initial review. Yes, this experiment makes sense now, and seems to be fair.\\n\\n> Question 2\\n\\nThe response to this question contains mostly a summary of the tools used. I would have liked to see if there are any fundamental reasons or empirical proof why the blocks can or cannot generalize beyond the context of having all three modifications together. As in, I understand that you can arbitrary replace the standard transformer blocks with them; my question is more about understanding the contexts in which they work. For example, can the DenseAttention block work with standard LayerNorms (I suspect the answer is no for the reasons discussed earlier in your rebuttal)? Can the MaxNormActivation work without DenseAttention, i.e., does it improve performance or efficiency of transformers with the usual attention? The ablation on CosineRelPE seems to have already been tried in Table 2.\\n\\nDue to comprehensively addressing several of my comments in the initial review, I will raise my score.\"}", "{\"title\": \"Author response. Part 2\", \"comment\": \"> Also there's a potential typo in the equation defining MaxNormActivation: it should be $\\\\frac{X_{i}}{\\\\max_{j}|X_{ij}| + \\\\epsilon}$ on the RHS (note the absolute value).\\n\\nThank you for noting this typo. We will fix it in the next revision. \\n\\n---\\n\\n**Weakness 3** \\n\\n> Not much motivation is given for the two other modifications, e.g., CosineRelPE and the local attention proposal - they seem to have a flavor of \\\"we tried it and it works,\\\" potentially with some ablation, and without context of why such an approach may or may not make sense or generalize to other architectures.\\n\\n---\\n\\nWe thank you for pointing out the perceived lack of motivation behind designing CosineRelPE and the local attention.\\n\\nRegular Rotary Positional Embeddings (RoPE) [3] are known to enhance modeling performance and generalization in Transformer models and are widely used [4-7]. In fact, just by incorporating it into a standard Transformer model, Amos et al. [8] managed to beat all efficient and long-context modifications of Transformer on the Long Range Arena benchmark. \\n\\nHowever, regular RoPE are not computationally efficient as we explain in section 3.2 of the paper. Our primary motivation behind designing Cosine RelPE is speed and efficiency gains, as we aimed to make DenseAttention as efficient as possible. As we demonstrated in the paper, expanded expressions for RoPE and Cosine RelPE are similar while the latter form of embeddings involves much less memory-intensive computations. Empirically, we found that the difference in modeling quality between the two types is negligible.\\n\\nMotivated by your comments, we conducted an ablation study on speed. We present the results below.\\n\\n\\n| Model variant | Training Speed (speed-up) | Inference Speed (speed-up) |\\n|------------------------|---------------------------|----------------------------|\\n| Rotary Embeddings | 7025 (1.00x) | 16908 (1.00x) |\\n| Cosine Embeddings q,k | 10276 (1.46x) | 28467 (1.68x) |\\n| Cosine Embeddings | 10438 (1.49x) | 29630 (1.75x) |\\n\\n\\nComparison of training and inference speeds (in sequences per seconds) on the LRA\\u2019s Pathfinder task. Cosine RelPE are significantly faster in both scenarios. \\u201cq, k\\u201d in the second row denotes that Cosine RelPE were applied separately to Q and K matrices like in regular RoPE. \\n\\nRegarding the use of local attention, we discuss the motivation in the section 3.3 of the paper, to quote:\\n\\n> \\u201cThe reason of this extension is outlined by Qin et al. [9]: in linear Transformer family of models, attention scores of a query are distributed along the sequence length more uniformly as compared to Softmax attention, so the model is not fully able to focus at details in the vicinity of a query\\u2019s token.\\u201d\", \"to_reiterate_this_reasoning\": \"intuitively, with local attention scheme we introduce a proximity bias which helps the model to pay more attention to close tokens in case of very large contexts. Standard self-attention achieves this property due to softmax nonlinearity which is able to selectively increase some of the attention scores by a large magnitude in relation to others.\\n\\n---\\n\\n**References**\\n\\n[1] Katharopoulos et al., \\\"Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention.\\\" ICML 2020\\n\\n\\n[2] Choromanski et al., \\\"Rethinking Attention with Performers.\\\" ICLR 2021\\n\\n[3] Su et al., \\\"RoFormer: Enhanced Transformer with Rotary Position Embedding.\\\" arXiv preprint arXiv:2104.09864, 2021\\n\\n[4] Biderman et al., \\\"Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling.\\\" ICML 2023\\n\\n[5] Black et al., \\\"GPT-NeoX-20B: An Open-Source Autoregressive Language Model.\\\" BigScience Workshop 2022\\n\\n[6] Chowdhery et al., \\\"PaLM: Scaling Language Modeling with Pathways.\\\" JMLR 2023\\n\\n[7] Dubey et al., \\\"The Llama 3 Herd of Models.\\\" arXiv preprint arXiv:2407.21783, 2024.\\n\\n[8] Amos et al., \\\"Never Train from Scratch: Fair Comparison of Long-Sequence Models Requires Data-Driven Priors.\\\" ICLR 2024\\n\\n[9] Qin et al., \\\"The Devil in Linear Transformer.\\\" EMNLP 2022\"}", "{\"title\": \"Gentle reminder about discussion period end for Reviewer oGkz\", \"comment\": \"Dear reviewer oGkz,\\n\\nWe believe we have been able to carefully and comprehensively address all of your concerns and questions. With the discussion period set to close in less than 12 hours, we would greatly appreciate it if you might consider reevaluating the score of our work or sharing any additional feedback or questions, should you feel it is necessary.\\n\\nWe thank you again for constructive feedback which led to several new experiments, ablations and other improvements.\"}", "{\"title\": \"Author response. Part 4\", \"comment\": \"**Weakness 5**\\n\\n> The result on efficiency compared to BERT also may seem to not be a fair comparison. BERT is trained with an encoder-only architecture, while DenseAttention Network is trained with a decoder-only architecture. A fairer comparison would pit DenseAttention Network against a regular decoder-only transformer (as well as BERT if desired, along with, say, an SSM), under the same experimental setting, and allow readers to observe trends in the different approaches as different scaling parameters vary.\\n\\nWe apologize if some parts of the paper may have caused your confusion. However, we explicitly stated in the paper (section 4.2, lines 459-460 in the original manuscript):\\n> \\u201cWe pre-train an encoder model with the approximately same number of parameters as in BERT-large\\u201d\\n\\nWe reiterate here that DenseAttention-BERT architecture is the bidirectional encoder-only and very closely follows the original architecture of [1]. In fact, we replicate the implementation details and training process for the original model as closely as possible, except for replacing Transformer blocks with DANet blocks. Our key goal was to show that DenseAttention-BERT is at least on par with the original model in terms of LM quality, while being faster and more computationally efficient, and we successfully accomplished this goal.\\n\\nAlso, we have conducted additional experiments on scaling laws for the DenseAttention-BERT architecture and present them below.\\n\\n| Model | Parameters | Configuration | MLM loss | MLM accuracy, % |\\n|--------------------|------------|------------------|----------|-----------------|\\n| DANet-BERT-small | 31M | L=6, D=512 | 2.74 | 49.5 |\\n| DANet-BERT-base | 110M | L=16, D=768 | 2.02 | 60.0 |\\n| DANet-BERT-large | 336M | L=32, D=1024 | 1.70 | 64.9 |\\n\\n\\nThe table depicts three single-head DenseAttention Network models of different sizes pre-trained on Wiki+BookCorpus dataset with MLM objective for 100B tokens. MLM loss and accuracy are reported for out-of-sample data from C4 dataset [2]. L and D parameters denote number of layers and hidden dimension of FFN input, respectively.\\n\\n[1] Devlin et al., \\\"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.\\\" NAACL 2019\\n[2] Raffel et al., \\\"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.\\\" JMLR 2020\\n\\n---\\n\\n**Question 1**\\n\\n> What is the specific motivation of designing CosineRelPE?\\n\\nPlease see discussion for weakness 3.\\n\\n---\\n\\n**Question 2**\\n> Is there anything that suggests that any new block (DenseAttention, CosineRelPE, MaxNormActivation) can generalize to other architectures and improve either performance or efficiency (while not degrading the other)?\\n\\nThank you for the inspiring question! Let us respond point-by-point:\\n\\n**1.** **DenseAttention Network** is a general architecture block which can serve as a drop-in replacement for the Transformer block in every model architecture that uses it. We conducted experiments on the diverse modalities spanning from logic and reasoning to language modeling and image classification, which are components of the LRA suite of benchmarks, along with standalone LM pre-training. \\n\\nTheir results show that DenseAttention is capable of generalizing to many different tasks and achieving favorable performance in comparison with standard Transformer and its augmented variants while being faster and more computationally efficient. We believe it also strongly indicates DenseAttention can be ported or applied to specialized architectures that would benefit from long-context efficiency improvements, such as ViT [1] and SAM [2] for Computer Vision tasks and LLAMA [3] for language modeling. We actively plan to address them in future work.\\n\\n**2.** **Cosine RelPE** is a general building block which can be applied to any Transformer-like architecture in place of RoPE to bring efficiency gains. Based on both theoretical and empirical observations, we believe Cosine RelPE would contribute similarly favorably to these architectures in terms of modeling quality and are confident it will increase computational efficiency. However, a careful exploration of a general class of trigonometric relative position transformations and various methods of their application to attention inputs should be performed, which is worth a dedicated research paper. We also leave it to future work.\\n\\n**3.** **Local Attention pattern** similar to ours (combination of alternating local and global attention layers) is already shown to perform well in Google\\u2019s Gemma 2 family of models [4]. \\n\\n**4.** **MaxNormActivation** was designed to overcome the specific challenges of DenseAttention. It may prove to be useful in training deep NN models with large dimension of hidden states in fp16 format to prevent numerical instabilities.\"}", "{\"title\": \"Response to Reviewer oGkz. Part 3\", \"comment\": \"**Weakness 2 & Question 4**\\n\\n> Since DANet seems to be a hybrid architecture (Section 3.3), also a relation to hybrid architectures (e.g. https://arxiv.org/abs/2402.19427, https://arxiv.org/abs/2406.07522) is interesting.\\n\\n> L.400: The authors find that local attention is effective. Do you use the Transformer Self-Attention here? \\n\\nWe apologize if some parts of the paper may have caused your confusion. Our LocalAttention and ShiftedLocalAttention layers are just regular DenseAttention layers applied to non-overlapping chunks of a sequence rather than globally, similar to [1-2]. Although it was implied in text and shown in the code, we will state it explicitly in the next version of the paper to make it clear.\\n\\nWe use only DenseAttention and no Transformer\\u2019s Self-Attention in our local attention scheme which alternates local and global layers. Thus, DANet augmented with local attention is still a \\u201cpure\\u201d architecture and cannot be directly related to hybrid Transformer-SSM models mentioned in your comment. \\n\\nHowever, as we referenced in the Section 3.3, there is a \\u201cpure\\u201d Transformer Large Language Model which successfully employs a similar alternated pattern at the scale of billions of parameters \\u2013 Gemma 2 by Google.\\n\\n> An ablation on this would be interesting.\\n\\nIn fact, we conducted two ablation studies on the effectiveness of local attention. The first is available in the main text of the original and current revisions (Section 4.1, Table 2). It shows that inclusion of local attention significantly boosts modeling quality on the LRA. A new ablation, centered around long contexts in DenseAttention-BERT, will be reported in response to Weakness 4 (*DANet-BERT 16K with local attention ablation*).\", \"references\": \"[1] Dao et al., \\\"FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness.\\\" NeurIPS 2022 \\n\\n[2] Qin et al., \\\"The Devil in Linear Transformer.\\\" EMNLP 2022\\n\\n[3] Gemma Team, \\\"Gemma 2: Improving Open Language Models at a Practical Size.\\\" arXiv preprint arXiv:2408.00118, 2024.\\n\\n---\\n\\n**Weakness 3**\\n\\n> There are so many architecture changes (e.g. Layernorm, Positional Encoding, Block structure, Attention mechanism, Block order / hybrid variants) that leave the reader unclear of what brings performance gains. A careful ablation study could help here.\\n\\nWe thank you for bringing up this very interesting topic and are happy to address it with a detailed explanation. \\n\\nThe bottom line is that while we were able to conduct some of them, not all ablations are possible in principle.\\n\\n**Attention Mechanism, LayerNorm & Block Structure.** One of the core contributions of our work is the complete removal of Softmax. As discussed in the paper (lines 215-226), without it attention outputs become unbounded and quickly diverge to $\\\\infty$ or shrink to 0, especially when computed in half-precision format fp16. The only way we found to prevent it was to use MaxNormActivation, which we had derived by theoretical analysis, before the attention layer. As we explicitly stated in the paper (lines 255-257), to quote:\\n\\n> In our ablation experiments any other activation or normalization function or absence thereof would lead to a prompt and unrecoverable numerical instability early on during training.\\n\\nThe cubic growth rate of outputs w.r.t inputs in DenseAttention dictated another design choice: moving the second MaxNormActivation to the end of the DANet block after FFN sub-block. As the standard Transformer block has only two LayerNorms, we aimed to keep this number intact. Leaving any type of a layer normalization between attention and FFN sub-blocks instead of placing it in the end would cause the loss to diverge or get trapped in a bad local minimum for moderately deep networks.\\n\\nTo summarize, the absence of Softmax does not leave much room for architectural changes and, thus, ablations, as the current choice and placement of the components make it numerically stable. However, we conducted a study on a parameter which allowed for an ablation: the number of heads. We presented it in Section 4.2, Table 3. The results indicate that the single head DANet-BERT variant mostly outperforms the multi-head one (extended analysis is available in lines 496-501).\\n\\n**Block Order / Hybrid Variants** We discuss and present the ablations for the Local Attention in the responses to Weakness 2 and Weakness 4 (*DANet-BERT 16K with local attention ablation*).\\n\\n **Positional Encoding** We present the ablations for Cosine RelPE in the response to Weakness 5.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer oGkz. Part 6\", \"comment\": \"References:\\n\\n[1] Su et al., \\\"RoFormer: Enhanced Transformer with Rotary Position Embedding.\\\" arXiv preprint arXiv:2104.09864, 2021\\n\\n[2] Biderman et al., \\\"Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling.\\\" ICML 2023\\n\\n[3] Black et al., \\\"GPT-NeoX-20B: An Open-Source Autoregressive Language Model.\\\" BigScience Workshop 2022\\n\\n[4] Chowdhery et al., \\\"PaLM: Scaling Language Modeling with Pathways.\\\" JMLR 2023\\n\\n[5] Dubey et al., \\\"The Llama 3 Herd of Models.\\\" arXiv preprint arXiv:2407.21783, 2024.\\n\\n---\\n\\n**Weakness 6**\\n\\n> A conclusion is missing.\\n\\nWe thank you for pointing this out. We present it in the revised version of the paper and here, below in full.\\n\\n---\\n\\n**Conclusion and Future Work**\\n\\nIn this paper, we propose DenseAttention Network -- a general architecture which simplifies the Transformer block and can serve as a drop-in replacement in every model architecture using it. We conduct experiments on the diverse modalities spanning from logic to language modeling and image classification and from short to extremely long sequence lengths using the LRA suite of benchmarks and MLM-style language model pre-training on text data. The results show that DenseAttention is capable of generalizing to many different tasks and context sizes and achieving favorable performance in comparison with standard Transformer and its augmented variants while being faster and more computationally efficient even with no specialized, low-level computation algorithms such as in [1].\\n\\nWe acknowledge that there are other modalities and specialized architectures that would benefit from long-context efficiency improvements if the DenseAttention is ported or applied to them, such as ViT [2] and SAM [3] for Computer Vision tasks, and LLAMA \\n[4] for decoder-style language modeling. We hope to address them in future work. In particular, we look forward to adapting DenseAttention architecture to causal LLAMA-style LLMs and studying their scaling laws at billions of parameters range.\", \"references\": \"[1] Dao et al., \\\"FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness.\\\" NeurIPS 2022 \\n\\n[2] Dosovitskiy et al., \\\"An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale.\\\" ICLR 2021\\n\\n[3] Kirillov et al., \\\"Segment Anything.\\\" ICCV 2023\\n\\n[4] Touvron et al., \\\"LLaMA: Open and Efficient Foundation Language Models.\\\" arXiv preprint arXiv:2302.13971, 2023.\\n\\n---\\n\\n**Question 1**\\n\\nAddressed at the beginning of the response.\\n\\n---\\n\\n**Questions 2 & 3**\\n\\n> You train your model with 4 stages, but the original BERT was trained on 2 stages. Could you also train the baseline in the same way?\\n\\n> On page 10, line 494 (key highlights) you hint at the fact that DANet outperforms the baseline due to a soft-capping of output logits that you use. Why did you not try this for the baseline as well?\\n\\nWe thank you for the insightful questions. The first 2 of 4 stages corresponded to the same way the original BERT had been trained. Thus, after the second stage the DANet-BERT corresponds to fully pre-trained original BERT. The last two stages involve training on longer-context sequences (1k and 16k) to test the generalization abilities of our model. We emphasize that we report results for all 4 stages separately in Tables 3 and 6.\\n\\nThe soft-capping of output logits and input embeddings had been a part of the DANet-BERT initially. However, motivated by your questions, we removed all the differences in the architecture of DANet-BERT from the original model to make them perfectly identical except for DANet blocks. Then we completely re-trained and re-evaluated the model. We present the new results in the table below and in the response to Weakness 4 (*DANet-BERT 16K with local attention ablation*) and we will include them into the next revision of the manuscript (as these experiments have finished only very recently, we haven\\u2019t been able to adjust the current revision).\\n\\n---\"}" ] }
2bEjhK2vYp
SSLA: A Generalized Attribution Method for Interpreting Self-Supervised Learning without Downstream Task Dependency
[ "Zhiyu Zhu", "Jiayu Zhang", "NAN YANG", "Xinyi Zhang", "Zhibo Jin", "Jianlong Zhou", "Fang Chen" ]
Self-Supervised Learning (SSL) is a crucial component of unsupervised tasks, enabling the learning of general feature representations without the need for labeled categories. However, our understanding of SSL tasks remains limited, and it is still unclear how SSL models extract key features from raw data. Existing interpretability methods are heavily reliant on downstream tasks, requiring information from these tasks to explain SSL models. This reliance blurs the line between interpreting the SSL model itself and the downstream task model. Moreover, these methods often require additional samples beyond the target of interpretation, introducing extra information that complicates the interpretability process. In this paper, we propose three fundamental prerequisites for the interpretability of SSL tasks and design the Self-Supervised Learning Attribution (SSLA) algorithm that adheres to these prerequisites. SSLA redefines the interpretability objective by introducing a feature similarity measure, reducing the impact of randomness inherent in SSL algorithms, and achieving more stable interpretability results. Additionally, SSLA abstracts the interpretability process, making it independent of specific neural network architectures. To the best of our knowledge, SSLA is the first SSL interpretability method that does not rely on downstream tasks. We also redesign a more reasonable evaluation framework and establish baselines for comparative assessment. The source code for our implementation is publicly available at https://anonymous.4open.science/r/SSLA-EF85.
[ "Interpretability", "Attribution", "Self-Supervised Learning" ]
Reject
https://openreview.net/pdf?id=2bEjhK2vYp
https://openreview.net/forum?id=2bEjhK2vYp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTqIchOh9s", "ysjvGWeL3W", "vxDrvzwcWu", "urO86CsReR", "tZ9X11ZVbF", "qPcJNQhMi2", "q34MScqdlw", "oWiYeg8UxS", "oSERT4vARw", "mQhGVBCbgg", "koJoIXfROF", "javEeyjaRr", "jYYgQ54OEz", "iHs43KzSsx", "gDnaZy6q3D", "g6s9urj1nm", "ep2W2QUifi", "eM4aGReAeF", "dOQZIdUn5R", "cIlDGkD6am", "bmVMdp0lrU", "aSkdWrOwMW", "YmhFX9vMzY", "W17BEZ3AZ4", "Su0IhGdMXG", "RV2YTFpBN8", "QlIA1ATadH", "Q5IWpBAuRL", "OhZ7riKdO2", "N0YVyZ6aGO", "L3PtXu7zOw", "Jf1chVnFL0", "Iv4G2Zx2PJ", "GI3dAePLgo", "EomFJDavqq", "DSYS9Z9K5X", "B4pVsKw9zB", "Aa44DY3JG6", "9l5FscUM65", "8tXmTTdf6U", "72hcYkTrLW", "4sKAhovlWs", "3l9TWWAHPu", "3JyTJIeJal", "1wQVVmJVyE", "1ZHja7BR2m" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523474642, 1733194655795, 1733103951985, 1732010256590, 1733118556010, 1730516340297, 1733066808466, 1731999263980, 1732503149566, 1733101656817, 1733126052010, 1731999500792, 1733192518822, 1730686177408, 1733086529355, 1733113513618, 1733146917453, 1733066872566, 1732932807112, 1733108195012, 1733196641076, 1733196550311, 1734659628420, 1733198674631, 1731999479316, 1732538718227, 1733103123793, 1733100811061, 1733112085611, 1733114334366, 1729847643126, 1733146879134, 1733195741205, 1733197536406, 1733066848621, 1732017027065, 1733101143680, 1732918446836, 1730670750042, 1732506417147, 1733199985780, 1731999228312, 1731999346290, 1732504247527, 1733066765374, 1733191167137 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_CZeC" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_6yuv" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_CZeC" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_6yuv" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Area_Chair_vhYM" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_bs4Q" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_bs4Q" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Authors" ], [ "ICLR.cc/2025/Conference/Submission1920/Reviewer_oq2z" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your rebuttal.\\nYou should calculate $\\\\mathcal{L}$ instead of $\\\\mu_k$.\"}", "{\"comment\": \"Given that SSL tasks often utilize Linear layers and KNN methods during downstream task evaluations, both of which rely on linear distance, we believe using alignment as the evaluation criterion is a natural and appropriate choice to reflect this aspect. Incorporating divergence into the evaluation process, while potentially valuable, would require extensive sampling and may introduce a degree of randomness. We kindly wonder if such a stochastic evaluation criterion would indeed be more suitable than the method we have proposed. While exploring this direction could be an excellent avenue for future work, we are confident that our current approach is both comprehensive and well-justified.\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your reply. I don't think [1] and [2] only focus on training techniques for SSL tasks. Both of these works reveal key factors that can assist downstream tasks in pre-training contrastive processes. These two studies suggest that a good representation space should cluster data distributions by category. Alignment and enhancement implicitly introduce weak supervised signals. So, what I mean is that you should try to incorporate a measure of divergence for different latent class centers in your evaluation criteria. I believe this is the right direction.\\n\\nIf there are any errors in my understanding, please feel free to correct me.\"}", "{\"comment\": \"Thank you for your feedback and for clarifying your perspective regarding the computation of divergence.\\n\\nWe would like to reiterate that class centers cannot be directly accessed in scenarios where explicit class labels are unavailable. Although features can be obtained through input samples passed through the SSL model, without class labels, it is impossible to confirm whether these features represent intra-class dispersion or inter-class separation. This ambiguity makes the direct computation of divergence challenging.\\n\\nFurthermore, as the process requires sampling additional instances to estimate the divergence, the evaluation inherently involves sampling, which introduces a degree of randomness. While sampling can provide a reasonable approximation, the results would still depend on the specific samples chosen, adding variability to the evaluation process.\\n\\nWe hope our clarification has addressed your concerns, and we kindly request you to reconsider your score in light of these explanations. Thank you for your continued engagement and valuable feedback.\"}", "{\"summary\": \"The paper proposes SSLA, a feature attribution method for self-supervised learning (SSL) tasks. In particular, the method is designed without dependency of downstream tasks. The method starts by defining the usefulness of SSL model as its ability to preserve representation of data after transformation. Then it addresses the significance of features by attributing this usefulness to features iteratively. The paper then conducts feature masking experiment to demonstrate the effectiveness of the method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposed a novel feature attribution method for SSL model that does not rely on downstream tasks.\\n2. The claim made in the paper is supported by both theoretical derivations and experiments. \\n3. The paper is well written and easy to follow. The discussion of prerequisites of an attribution method for SSL may spark interesting discussions.\", \"weaknesses\": \"1. The paper relies on the independence of downstream tasks, which make the comparison of this method and existing methods difficult. Hence, it is difficult to address the effectiveness of this method.\\n2. The derivation of theorems rely heavily on first order approximation. Though it is common, the paper does not provide analysis on error bounds, which downgrades the trustworthiness of the method.\\n3. The two main components of the method lack motivation. The first one is using cosine-similarity of features before/after transformation as a measure of usefulness of SSL model. The correlation (or even causality) of this and \\\"SSL as learning representation\\\" is not clear. The second one is the iterative method. The author may consider justify why we need an iterative method to attribute the importance.\\n4. Although the paper proposes the method to be independent of downstream tasks, its evaluations still rely on downstream tasks, which seems counter-intuitive(Line 179 -180). Moreover, since the evaluation is dependent of downstream tasks, the author may consider compare their method to other SSL attribution methods that rely on downstream tasks.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer CZeC,\\n\\nThank you for your thoughtful review. As the rebuttal deadline draws near, we have carefully addressed all the concerns you raised in your comments. If there are any remaining issues or further questions, we would be happy to address them. \\n\\nWe humbly ask for your reconsideration of the score. \\n\\nBest regards, \\n\\nSubmission1920 authors\"}", "{\"comment\": \"### Response for Weakness 1\\nAs stated in Section 4.5, our method is the first to interpret SSL tasks without relying on downstream task information. This creates a lack of direct baselines for comparison. In this context, we designed experiments to validate the effectiveness of SSLA. Specifically, we evaluated five representative SSL methods (BYOL, SimCLR, SimSiam, MoCo-v3, and MAE), which cover major technical approaches in contrastive learning. This demonstrates the universality and effectiveness of SSLA. By comparing with a random masking baseline, we showcased SSLA's precision in distinguishing important and unimportant features. Furthermore, we innovatively employed cosine similarity as an evaluation metric, avoiding the baseline bias inherent in traditional insertion and deletion scores, as detailed in Section 4.3.\\n\\n---\\n\\n### Response for Weakness 2\\nUsing the ImageNet dataset and ResNet-50 aligns with experimental setups in existing interpretability methods, ensuring comparability and validation of our approach on widely recognized models. As shown in Algorithm 1, SSLA's architecture-agnostic nature stems from its algorithmic design, which abstracts away from specific model architectures. This abstraction makes SSLA applicable across different architectures, and its effectiveness is theoretically ensured through adherence to attribution axioms, as proven in Appendices B and C.\\n\\n---\\n\\n### Response for Weakness 3\\nTraditional evaluation methods fail to address the interpretability of SSL tasks due to their reliance on baselines such as zero images or Gaussian blur, which are ineffective for SSL. SSL's focus on invariance, coupled with data augmentation already incorporating similar transformations, diminishes the impact of such baselines. Additionally, the subjectivity in baseline selection introduces bias, making these methods unsuitable for capturing the core mechanisms of SSL. As discussed in Section 4.3, we provided detailed theoretical justifications for our evaluation framework, supported by proofs in Appendix D.\\n\\n---\\n\\n### Response for Question 1\\nMethods like Integrated Gradients and Grad-CAM require specified categories for their explanations, making them incompatible with SSL tasks. These methods are inherently tied to specific downstream tasks and cannot analyze SSL independently. Consequently, they are unsuitable for exploring the generalizable regions that SSL focuses on.\\n\\n---\\n\\n### Response for Question 2\\nAs noted in our response to Weakness 3, Section 4.3 and Appendix D contain detailed mathematical derivations and theoretical proofs validating the effectiveness of our proposed evaluation framework. To enhance reproducibility, we have uploaded visual examples to the rebuttal folder in the repository, which can be accessed via [https://anonymous.4open.science/r/SSLA-EF85/rebuttal/].\\n\\n---\\n\\n### Response for Question 3\\nIn addition to ResNet-50, we evaluated SSLA on models with architectures beyond ResNet-50. For example, MAE utilizes a ViT-based model, and our results in Table 1 show SSLA's strong performance when interpreting MAE. This highlights the generalizability of SSLA across architectures. Furthermore, our experiments covered major SSL methods, which are predominantly trained on ResNet-50 due to its effectiveness in SSL, as noted in foundational SSL papers.\\n\\n---\\n\\n### Response for Question 4\\nSSLA inherently accounts for the randomness in SSL methods, such as stochastic data augmentations. As shown in lines 261\\u2013263 of the manuscript, we compute multiple evaluations and use the mathematical expectation to mitigate the impact of randomness, ensuring stable and reliable attribution results.\\n\\n---\\n\\n### Response for Question 5\\nAs summarized in the table below, SSLA requires only 51 forward and backward propagations, making it as efficient as IG [1] and significantly less computationally intensive compared to AGI [2] and ISA [3]. This efficiency makes SSLA feasible for large-scale models and datasets.\\n\\n| Method | Forward Passes | Backward Passes |\\n|----------------|----------------|-----------------|\\n| IG | 51 | 51 |\\n| AGI | 1580 | 3144 |\\n| ISA | 512 | 512 |\\n| **SSLA** | **51** |51 |\\n\\n---\\n\\n### Response for Question 6\\nIn principle, SSLA can be applied to SSL models in domains beyond computer vision. While our experiments focus on computer vision tasks, the theoretical foundation of SSLA is not domain-specific, enabling its extension to other domains with appropriate SSL setups.\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thanks to the authors for their patient explanations and excellent rebuttals.\\n\\nAs you said, SSLA is specifically designed to remain independent of downstream tasks and labels. The lack of annotation information results in the non-computability of divergence. However, the purpose of recommending [1] and [2] is that both reveal NCE, Barlow Twins, and existing SSL methods have the capacity to optimize the divergence term, even though label information is not accessible. An alternative way to solve this issue is to figure out an upper bound for the divergence, the computation of which does not require any information about labels.\\nApart from that, the gap between the upstream and downstream tasks can be temporarily ignored, as the downstream dataset can be regarded as a subset of the pretraining dataset with some disturbed distribution shift when the pretraining dataset is comprehensive enough. Therefore, we can approximately think that the excellent structure of data distribution can be transferred to the downstream dataset. Thus, calculating the alternative quantity of divergence using pretraining data is acceptable.\\nIn summary, I am fairly certain that the motivation behind SSLA is valuable and deserves widespread attention. However, I don't think the authors sufficiently consider whether their core criterion is reasonable enough. I maintain that divergence is the most important factor in the success of current SSL methods, but the authors seem to ignore it.\\n\\nAs stated above, I tend to maintain my score. If I have any misunderstandings, I would appreciate the authors pointing out my mistakes or engaging in a more in-depth discussion.\"}", "{\"comment\": \"Dear Reviewer oq2z,\\n\\nThank you for your feedback and for engaging in this meaningful discussion. We would like to seek further clarification on what you would consider a convincing justification for our evaluation criterion. \\n\\nOur choice of cosine similarity aligns directly with the design of SSL tasks, where normalized dot product similarity (equivalent to cosine similarity) is a foundational element used in training. This consistency ensures that our interpretability framework is tightly coupled with the logic and mechanisms inherent to SSL training processes. Could you clarify if this alignment poses any specific issues in your view?\\n\\nOur goal is to interpret SSL models in a manner consistent with how they are trained. By using the same distance metric employed during SSL task design, we ensure that our method faithfully reflects the model's internal logic and behavior. If this alignment is problematic, we would deeply appreciate further insights into how you believe the evaluation could be improved or what alternative would be more appropriate.\\n\\nThank you once again for your valuable time and feedback. We look forward to your further clarification and hope this discussion can help refine the work even further.\\n\\nBest regards, \\n\\nSubmission1920 Authors\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your rebuttal.\\n\\nHowever, I kindly suggest that the authors carefully read my previous responses above and refrain from revisiting past discussions, as they are meaningless for both of us. The current contrastive learning methods all propose alternative approaches (most of which are upper bounds of divergence) to address the incomputability of divergence. They do not require extra sample size to calculate, as the samples used to compute this term are the same as the samples used to calculate alignment, which is cosine similarity for SSLA.\"}", "{\"comment\": \"### Response for Weakness\\n\\nThe cited works [1] and [2] focus on training techniques for SSL tasks, which differ from our research objective of interpreting SSL models. Our method is not limited to any specific SSL training strategy and can be applied to models trained using the approaches discussed in [1] and [2]. While [2] emphasizes divergence around class centers and concentration of augmented data, these aspects primarily influence the training phase. Our study focuses on interpreting SSL models post-training, independent of the specific training techniques employed.\\n\\nFurthermore, as noted in line 244 of our manuscript, the sampling strategy for \\\\(\\\\tau\\\\) in our method mirrors the one used during SSL training, preserving properties like class-centered divergence and augmented data concentration. We will clarify in the manuscript that our approach is equally applicable to SSL models trained with methods like those described in [1] and [2].\\n\\n---\\n\\n### Response for Question\\n\\nSSL models trained using divergence-based techniques are fully compatible with our method. Our primary focus is on interpreting SSL models rather than the specific methods used for training and optimization. The criteria used in our paper aim to evaluate how well features learned by SSL models capture invariances in the data, which remains valid regardless of the training divergence strategies. We appreciate your feedback and will emphasize this point in the revised manuscript.\"}", "{\"comment\": \"Thank you for your feedback.\\n\\nThe upper bound $\\\\mu_k$ in Theorem 4 requires calculating the center point of class $k$, which depends on class annotations from downstream tasks. This calculation process clearly necessitates sampling from class $k$. Similarly, the work in [2] is derived based on specifically designed loss functions and strong assumptions, which are not feasible to compute in real-world scenarios (and can only provide a theoretical bound, meaning it cannot be practically evaluated). The approach in [3] also requires class labels to analyze the transferability between downstream tasks. [4], on the other hand, analyzes the difference between contrastive and non-contrastive methods, focusing on guiding SSL task design, which is significantly different from the evaluation of SSL interpretability. Therefore, we would like to ask whether there are divergence evaluation methods that do not rely on additional samples.\"}", "{\"summary\": \"The paper proposed a novel attribution algorithm for feature-level attribution on self-supervised learning. Compared to other feature-level attribution. The method is designed to meet prerequisites that the interpretation should not rely on 1) downstream task 2) other samples (other than the argumentation) and 3) model architectures. Authors present some experiments to justify the new method (SSLA) is effective.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The prerequisites are designed to resolve the problem caused by other factors. And the method are design to reflect this spirit.\", \"The diagram are clear and help the readers to understand the method.\"], \"weaknesses\": [\"I am not sure if the perquisites can be widely accepted by the community. For example, what is the downside (empirically / theoretically) if a downstream task is considered during the attribution process?\", \"Lack of comparison between different attribution methods. One interesting problem could be what's the difference between SSLA result v.s. other methods that relies on downstream tasks.\", \"Minor presentation suggestion\", \"Equation 1 and 2 seems to be a little redundant.\", \"I am willing to raise the rate if the effectiveness of SSLA on some traditional evaluation methods (on downstream tasks) are proved (at least) to be correlated\"], \"questions\": [\"In Figure 2, why we have a full R^n shape for a0, a1, ..., a_T?\", \"There seems to be no reason seperate the snowflake and light blue arrow in Figure 2?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I sincerely thank the authors' response and am sorry for my late response.\\n\\nMy concern on weakness #3 and #4 are addressed. On the other hand, I keep my concern that the derivation is not well supported by error bound analysis(weakness #2) and the effectiveness of the method is not well addressed because of a lack of comparison of existing methods. (weakness #1). Hence I currently keep my original score.\"}", "{\"comment\": \"Thank authors for the reply. I acknowledge I have read all the reviews and rebuttal replies. I have no more concerns about the paper. The only concern is the limited empirical improvement and weak baseline, this concern further developed to the possible practicality of this method. I will keep my rating as 5 for now.\"}", "{\"comment\": \"Thank you for your feedback and valuable insights.\\n\\nWe would like to clarify that our intention was not to revisit the same points but to address specific aspects raised in your comments. Based on the references you provided, incorporating divergence indeed involves sampling additional instances to compute meaningful estimates. If there are methods that do not require additional samples to calculate divergence, we would greatly appreciate it if you could share such references. This would be immensely helpful for us in improving our work in the future.\\n\\nThank you again for your constructive suggestions and time.\"}", "{\"comment\": \"Dear Reviewer oq2z,\\n\\nThank you for your review and feedback. As the rebuttal deadline approaches, we want to confirm that we have addressed the concerns you highlighted. If there are any additional questions or points to clarify, we are happy to provide further explanations. \\n\\nWe respectfully request your reconsideration of the score. \\n\\nBest regards, \\n\\nSubmission1920 authors\"}", "{\"comment\": \"Thank you for the valuable response. As our work represents the first study in this specific area, it has indeed been challenging to identify comparable baselines.\\n\\nAt the same time, exploring the explainability of SSL methods is of significant importance, particularly under the premise of not relying on downstream task information. SSL possesses high scalability and generalizability, making it a desirable approach for practitioners aiming to apply it across various downstream tasks.\\n\\nIn this context, conducting explainability analyses of SSL methods without incorporating downstream tasks allows researchers to evaluate in advance whether the core regions of interest for downstream tasks are effectively captured by the model. If these regions are not adequately addressed, it indicates that SSL may not have fully leveraged the information from these regions during feature extraction. This approach can save substantial computational costs by avoiding repeated trial-and-error attempts. Therefore, analyzing SSL from the perspective of explainability not only helps researchers optimize the transferability of the model but also provides valuable guidance for the design and application of downstream tasks. \\n\\nWe believe this line of research is highly insightful and practical, and we kindly ask you to reconsider the score.\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Why does evaluating the divergence term require extensive sampling and potentially introduce a degree of randomness?\\n\\nAs far as the authors confidence. I can only acknowledge that you have the right to hold your own views, even though I have already explained that divergence is a crucial factor in the success of contrastive learning.\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"For the implementation related to negative samples, refer to https://github.com/jhaochenz96/spectral_contrastive_learning\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your rebuttal.\\n\\nI earnestly request the author to carefully consider whether additional sampling is necessary to calculate $\\\\mathcal{L}^{Cross}_2$ in Theorem 4. You can definitely use the samples for calculating cos similarity to calculate $L $in Theorem 4, right? Even if negative samples are used, they can still come from samples used to calculate cos similarity. Are we talking about the same thing?\"}", "{\"metareview\": \"This paper introduces a novel interpretability framework for SSL models, aiming to decouple attribution from downstream tasks. The method, SSLA, uses a feature similarity measure to explain SSL models independently of downstream tasks or additional sample information. The paper outlines theoretical derivations, empirical results, and a new evaluation framework.\\n\\nWhile the idea is innovative, several key concerns remain unresolved. Reviewers criticized the lack of baseline comparisons with existing attribution methods, even if those rely on downstream tasks. The reliance on cosine similarity as the sole evaluation metric was seen as insufficient, with Reviewer oq2z emphasizing the importance of divergence-based measures, which were not incorporated. Reviewer CZeC pointed out the reliance on first-order approximations without error bounds, weakening the theoretical robustness. Empirical validation was also limited, with experiments primarily conducted on ResNet-50 and ImageNet, raising questions about generalizability.\\n\\nAlthough the authors actively engaged in the rebuttal phase, they did not convincingly address these concerns. I also agree with parts of the reviewers' concerns and think that the proposed method does not yet meet the bar for acceptance. I thus recommend rejecting the paper, but I encourage the authors to incorporate these helpful discussions into the revision and resubmit to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised some concerns that remained unresolved through discussion. One issue was the reliance on cosine similarity as the sole evaluation metric. Reviewer oq2z argued that this was inadequate for assessing SSL models and emphasized the importance of incorporating divergence-based measures. While the authors defended cosine similarity as consistent with SSL design principles, they did not convincingly address alternative approaches suggested by oq2z, who maintained that divergence is central to understanding contrastive learning.\\n\\nAnother point was the lack of baseline comparisons. Reviewers bs4Q and 6yuv criticized the absence of tests against existing attribution methods, even if those rely on downstream tasks. The authors argued that SSLA\\u2019s independence from downstream tasks made direct comparisons inappropriate, but reviewers felt this weakened the empirical case for SSLA\\u2019s effectiveness.\\n\\nReviewer CZeC highlighted some theoretical issues, particularly the lack of error bounds in first-order approximations, which weakened confidence in the method\\u2019s robustness. The rebuttal did not fully resolve this concern. \\n\\nWhile the authors actively engaged in the rebuttal phase, their responses failed to sufficiently address these core critiques. The reviewers thus were consistent in their view that the paper lacked the empirical and theoretical support needed for acceptance.\"}", "{\"title\": \"Response to Authors rebuttal\", \"comment\": \"The upper bound of divergence is an reasonable quantity to evaluate the divergence. Its calculation does not require extra samples.\\n\\nI have tried my best to indicate my concerns and provided a possible way to evaluate the divergence.\\n\\nbut I don't think my concept can coincide with the authors' opinion.\\n\\nI decide to keep my score and accept AC's descision.\"}", "{\"comment\": \"### Response for Weakness 1\\nTo address the difficulty in directly comparing SSLA with existing attribution methods, we provide extensive theoretical justifications and mathematical proofs to establish the validity and effectiveness of our method. These proofs rigorously demonstrate how SSLA aligns with established attribution axioms and resolves key challenges in SSL interpretability.\\n\\n---\\n\\n### Response for Weakness 2\\nFirst-order approximations are a standard approach in influential attribution methods such as Integrated Gradients (IG) [1] and AGI [2], both of which have been widely validated. When the approximation is sufficiently close, the derived theories hold. In our work, we ensure the validity of the first-order approximation by setting a sufficiently low learning rate, minimizing approximation errors and enhancing the reliability of our method.\\n\\n---\\n\\n### Response for Weakness 3\\nCosine similarity is a core element of SSL training, as employed in methods like SimCLR [3] and MoCo [4]. It directly represents the essence of SSL, which involves learning invariant representations. Using cosine similarity aligns with SSL\\u2019s intrinsic properties, making it a natural and effective choice for our evaluation. Regarding the iterative approach, it ensures that the method satisfies attribution axioms, such as those outlined in IG [1]. Without iteration, the method would only capture local properties, akin to saliency maps, and fail to meet these axioms.\\n\\n---\\n\\n### Response for Weakness 4\\nOur evaluation process does not rely on downstream tasks. Lines 179\\u2013180 provide definitions of typical SSL workflows to aid readers unfamiliar with SSL concepts, improving accessibility. However, these lines do not imply that our method requires downstream tasks for evaluation. Our experiments are designed specifically to avoid downstream task dependency, as detailed in the manuscript.\", \"reference\": \"[1] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. \\\"Axiomatic attribution for deep networks.\\\" International conference on machine learning. PMLR, 2017.\\n[2] Pan, Deng, Xin Li, and Dongxiao Zhu. \\\"Explaining deep neural network models with adversarial gradient integration.\\\" Thirtieth International Joint Conference on Artificial Intelligence (IJCAI). 2021.\\n[3] Chen, Ting, et al. \\\"Simclr: A simple framework for contrastive learning of visual representations.\\\" International Conference on Learning Representations. Vol. 2. 2020.\\n[4] He, Kaiming, et al. \\\"MoCov1: Momentum Contrast for Unsupervised Visual Representation Learning.\\\" (2020): 9729-9738.\"}", "{\"comment\": \"We understand your concerns and sincerely appreciate your willingness to adjust your score. The reason we chose cosine similarity as the evaluation criterion lies in its foundational role in SSL task design. In SSL training, similarity calculations are typically performed using normalized dot product similarity, which corresponds to cosine similarity. Our design aligns with the logic of attribution based on the loss functions used during training. This approach is reasonable, as it focuses on observing how the encoding of SSL outputs changes relative to the original encoding as the input samples vary.\\n\\nIf divergence were incorporated into the training of SSL tasks, our evaluation framework could seamlessly adapt to use a divergence-based version. This mirrors the relationship between the original SSL papers and the works referenced in [1] and [2], where different perspectives can be used to discuss the same concept. Similarly, SSL tasks often evaluate performance by introducing downstream tasks to assess linear separability. The linear separability criterion is inherently consistent with the dot product operation used in SSL.\\n\\nWe hope this clarification further justifies our choice of cosine similarity and its alignment with the principles underlying SSL task design. We sincerely appreciate your thoughtful feedback and your willingness to engage in such a meaningful discussion. Your insights have been invaluable in helping us refine our work. \\n\\nGiven these clarifications and the alignment of our approach with SSL design principles, we kindly hope you might consider further improving the score, as your support would greatly encourage the ongoing development of this research. Thank you again for your time and consideration.\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your rebuttal. My concerns about the divergence term have been specifically proposed in the above rebuttal. I believe that using only the alignment term as the evaluation criterion is inappropriate.\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your rebuttal.\\n\\nI don\\u2019t think the author has addressed my concern about the criterion in a convincing way; therefore, I tend to score 3 or 5.\\n\\nConsidering that the author notice an important issue of getting rid of the influence of downstream tasks when considering interpretability, I keep my score at 5.\"}", "{\"comment\": \"Thank you for your feedback and for raising your concerns regarding divergence.\\n\\nReferring to the original paper on divergence that you mentioned ([2] Weiran Huang, Mingyang Yi, Xuyang Zhao, and Zihao Jiang. \\\"Towards the generalization of contrastive self-supervised learning.\\\" arXiv preprint arXiv:2111.00743, 2021), calculating divergence inherently requires the computation of class centers. This process involves sampling multiple instances to establish these centers, making the evaluation dependent on extensive sampling. In contrast, our evaluation method is designed to assess the model's interpretability using only the current sample, ensuring simplicity and reducing computational overhead.\\n\\nWe fully acknowledge that divergence is a crucial factor in the success of contrastive learning. However, for explaining a successful SSL task, incorporating divergence is not a necessity. Given that SSL tasks have already demonstrated their effectiveness, our focus shifts to interpreting the critical regions attended to by these models. In this context, assessing the impact of important features on linear distances is the most suitable and direct approach.\\n\\nWe hope this explanation clarifies our perspective, and we remain open to further discussion. Thank you again for your time and valuable insights.\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your response.\\n\\nCalculating divergence does not actually require the computation of class centers, as the class center is accessible in the upstream task, while the goal of SSLA is to ensure that the evaluation process does not depend on the downstream task. Therefore, as discussed above, determining an upper or lower bound for the class center based on the upstream dataset(s), whose computation does not rely on label information, is a reasonable approach, and this is also a concern that the author did not convince me of.\\n\\nIn fact, these points have already been discussed in previous rebuttals, so there is no need to revisit them. When focusing on model interpretation, it is crucial to first clearly understand what is important for the model. Based on this, I believe the author's overall approach is correct, but the standards chosen are problematic.\"}", "{\"summary\": \"This paper indicates that the introduced additional samples from downstream task would impede the interpretability of Self-Supervised Learning (SSL). To tackle this issue, the authors try to propose a new interpretability objective by introducing a feature similarity measure, decoupling the interpretability process from the reliance of downstream tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The additional information introduced by prediction head and downstream tasks may influence the interpretability of SSL is a crucial and challenging breakthrough point.\", \"Three fundamental prerequisites for the interpretability of SSL proposed by this paper sounds reasonable.\"], \"weaknesses\": [\"Despite the author indicating the potential weakness for the interpretability of SSL, the alignment ability for data augmentation is just one side of current self-supervised learning. The other component, which is often regarded as an extra design to prevent training collapse intuitively, is essentially to ensure that the representation divergence is sharp enough to cluster the data distribution by latent categories. More details can be found in [1] and [2]. Therefore, only adopting the extent of augmentation that is invariant to evaluate the related influence of variables is quite biased.\", \"[1] Awasthi, Pranjal et al. \\u201cDo More Negative Samples Necessarily Hurt in Contrastive Learning?\\u201d International Conference on Machine Learning (2022).\", \"[2] Weiran Huang, Mingyang Yi, Xuyang Zhao, and Zihao Jiang. Huang, Weiran, et al. \\\"Towards the generalization of contrastive self-supervised learning.\\\" arXiv preprint arXiv:2111.00743 (2021).\"], \"questions\": \"**Reviewing summary**\\n- As listed in the weaknesses, I think the authors did not incorporate the divergence into their consideration, which can be regarded as the most critical component of contrastive self-supervised learning, making their criterion sound unreasonable. Despite their indicating the unreasonable aspects regarding the interpretability of SSL in relation to downstream tasks, this results in my score of 3.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your follow-up comment. As we previously explained in our response to Reviewer bs4Q, our work represents the first study in this specific domain, which inherently poses challenges in identifying comparable baselines. Nevertheless, we firmly believe that exploring the interpretability of SSL methods, especially without relying on downstream task information, is crucial.\\n\\nSSL's scalability and generality make it an ideal framework for practitioners aiming to apply it across diverse downstream tasks. In this context, performing interpretability analysis on SSL methods without incorporating downstream tasks enables researchers to pre-evaluate whether a model effectively captures the core regions of interest for such tasks. If these regions are not sufficiently addressed, it may indicate that the SSL model has not fully leveraged the information from these regions during feature extraction. This approach can save significant computational costs by avoiding repetitive trial-and-error attempts. \\n\\nAt the same time, as mentioned in our **Response for Weakness 4**, compared to traditional attribution methods such as IG, our approach demonstrates superior interpretability in SSL tasks. This is evident from its improved performance on metrics like Insertion (INS) and Deletion (DEL), which further validates the effectiveness and practicality of our method. \\n\\nFrom an interpretability perspective, analyzing SSL not only helps researchers optimize the model's transferability but also provides valuable guidance for the design and application of downstream tasks. \\n\\nConsidering these clarifications, we kindly request you to reconsider your rating.\"}", "{\"comment\": \"Thank you for your feedback.\\n\\n$\\\\mathcal{L} _\\\\text{infoNCE}$ requires sampling, whereas $\\\\mathcal{L} _\\\\text{align}$, like our method, uses linear distance but lacks angular information [1]. For instance, vectors such as [1, 2, 3] and [2, 4, 6] would have identical distances when evaluated for downstream classification tasks (which aligns with the foundational design principles of SSL). However, $\\\\mathcal{L} _\\\\text{align}$ would show a significantly different linear distance. In other words, relying solely on $\\\\mathcal{L} _\\\\text{align}$is insufficient. Given that $\\\\mathcal{L} _\\\\text{infoNCE}$ requires sampling, this does not align with the previously discussed perspective.\"}", "{\"comment\": \"Thank you for your reference.\\n\\nPlease do not overlook our concern. The referenced work does not mention a specific evaluation standard or calculation method for divergence, nor does it even include the term \\\"divergence\\\" in the paper. We have carefully reviewed the papers you suggested, and we still have not found an evaluation method that meets the criteria you proposed.\"}", "{\"comment\": \"Dear Reviewer bs4Q,\\n\\nThank you for your insightful comments. With the rebuttal phase nearing its conclusion, we have resolved the concerns you raised and provided detailed clarifications in our responses. Please let us know if there are any further questions. \\n\\nWe kindly request you to reconsider your score. \\n\\nBest regards, \\n\\nSubmission1920 authors\"}", "{\"comment\": \"We sincerely thank the reviewer for their prompt response and constructive feedback. We greatly appreciate the opportunity to engage in this meaningful discussion and for the thoughtful suggestions that help refine our work.\\n\\nThe central idea of [1] is to explore the trade-off between alignment and uniformity in learned representations, highlighting that simply increasing negative samples is not always advantageous. The authors propose strategies, such as improved sampling techniques, to optimize this balance, offering valuable guidance on the inclusion of negative samples during the training process. Similarly, [2] identifies alignment, divergence, and concentration as critical factors for the generalization ability of SSL tasks.\\n\\nOur work, however, focuses on interpreting well-trained SSL models to determine which regions are attended to during feature extraction, with the aim of providing human-understandable explanations. In contrast to [1] and [2], our primary objective is not to optimize SSL training or to analyze the sources of SSL task generalization ability. Nevertheless, these studies provide important insights and are highly relevant for guiding efforts to improve SSL generalization during training.\\n\\nWe greatly appreciate your suggestion regarding divergence. Indeed, we have incorporated a similar consideration, albeit with a distinction: divergence evaluations typically require knowledge of class labels and downstream tasks. **As our work is specifically designed to remain independent of downstream tasks and labels,** this aspect falls outside the scope of our evaluation framework. As noted in [2], downstream task accuracy has a direct relationship with the divergence between latent class centers. To address this, we evaluated the impact of insertion and deletion on downstream tasks (refer to the table in our response to Weakness 4 from Reviewer 6yuv: [https://openreview.net/forum?id=2bEjhK2vYp&noteId=4sKAhovlWs]), where our method demonstrated significant improvements over IG.\\n\\nThat said, we agree that directly evaluating the influence of individual features on divergence between latent class centers would be a meaningful addition. Given that our method is the first to directly interpret SSL tasks while introducing a novel and effective evaluation framework, we believe that extending this work to include such evaluations would be best pursued in future research.\"}", "{\"comment\": \"Dear Reviewer CZeC,\\n\\nThank you for your feedback and for taking the time to review our work. As stated in our manuscript and rebuttal, to the best of our knowledge, our method is the first to address explainability in SSL tasks without relying on downstream tasks. We have not found similar methods for direct comparison. If you are aware of such methods, we kindly request that you provide examples to facilitate a meaningful comparison.\\n\\nRegarding traditional attribution methods, as mentioned in our response to Reviewer 6yuv (Response for Weakness 4), we have already compared our method with Integrated Gradients (IG), a well-established attribution method. Our results demonstrate that our method significantly outperforms IG in terms of capturing feature importance in SSL tasks.\\n\\nGiven these clarifications and the novelty of our approach, we respectfully request that you reconsider your score.\\n\\nBest regards, \\n\\nSubmission1920 Authors\"}", "{\"comment\": \"I thank the authors for their detailed and thoughtful response. While they have clarified some of my concerns, several fundamental issues remain unaddressed.\\nThe main concern is the generalizability of their method and the lack of baseline comparisons. While it is possible to define a set of constraints for which no baselines exist, the key question is what practical value this brings. I believe the authors need to make a stronger effort to demonstrate the advantages and practical usefulness of their approach.\\nBased on these ongoing concerns, I maintain my original score.\"}", "{\"summary\": \"This paper addresses the interpretability of SSL models, focusing on the challenge that existing interpretability methods often rely on downstream tasks or specific model architectures. To overcome these issues, the authors propose three fundamental prerequisites for SSL interpretability:\\n1. The interpretation should not introduce information from downstream tasks.\\n2. The interpretation process should not introduce samples other than the current sample.\\n3. The interpretation process should not be restricted to specific model architectures.\\n\\nBased on these prerequisites, they introduce the Self-Supervised Learning Attribution (SSLA) algorithm. SSLA redefines the interpretability objective by introducing a feature similarity measure. \\nThey also propose a new evaluation framework tailored to SSL tasks, arguing that traditional interpretability evaluation methods are impractical due to the absence of explicit labels and suitable baselines in SSL settings. Experiments are conducted using five representative SSL methods (BYOL, SimCLR, SimSiam, MoCo-v3, MAE) on the ImageNet dataset with ResNet-50 as the backbone. They compare SSLA against a random masking baseline, demonstrating that SSLA can more effectively identify important features that influence the SSL model's representations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Novel Focus on SSL Interpretability:** The paper addresses an important and under-explored area\\u2014the interpretability of SSL models without reliance on downstream tasks or specific architectures.\", \"**Clear Prerequisites:** The authors clearly outline three prerequisites for SSL interpretability methods, providing a solid foundation for their approach.\", \"**Architecture-Agnostic Approach:** SSLA is designed to be independent of specific neural network architectures, potentially making it broadly applicable across different SSL models.\"], \"recognition_of_evaluation_challenges\": \"The authors recognize the limitations of traditional interpretability evaluation methods in the context of SSL and attempt to propose a new framework tailored to SSL tasks.\", \"weaknesses\": [\"**Insufficient Empirical Evaluation:** The experimental evaluation is limited. The authors only compare SSLA to a random masking baseline. There are no comparisons with other existing attribution methods adapted to SSL, making it hard to gauge the effectiveness of SSLA.\", \"**Limited Dataset and Model Diversity:** Although experiments are conducted on the ImageNet dataset using ResNet-50, the evaluation lacks diversity in both datasets and model architectures. The claim that SSLA is architecture-agnostic is not fully supported without experiments on different architectures.\", \"**Evaluation Methodology Concerns:** The proposed evaluation framework is novel but not thoroughly validated. The authors argue that traditional evaluation methods are unsuitable for SSL interpretability but do not provide sufficient empirical evidence or theoretical justification. It is unclear whether the metrics used effectively measure interpretability in SSL contexts.\"], \"questions\": [\"**Comparison with Existing Methods:** Have you considered adapting existing attribution methods like Integrated Gradients or Grad-CAM to SSL settings for comparison? How does SSLA perform relative to these methods?\", \"**Validation of Evaluation Framework:** How have you validated the effectiveness of your proposed evaluation framework? Have you conducted any studies or experiments to show that it correlates with human intuition or ground truth attributions?\", \"**Testing on Diverse Architectures:** Given that experiments are only conducted with ResNet-50, have you tested SSLA on other architectures?\", \"**Handling SSL Randomness:** How does SSLA account for the randomness inherent in SSL methods, such as stochastic data augmentations? Does this randomness affect the stability of the attribution results?\", \"**Computational Overhead:** What is the computational cost of SSLA compared to standard inference? Is it feasible to apply SSLA to large-scale models and datasets?\", \"**Generalization to Other Domains:** Can SSLA be applied to SSL models in domains other than computer vision?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": [\"Thank you for your further explanations.\", \"I know the authors' goal is to explain the features SSL focuses on during the extraction process. However, the evaluation criterion adopted by the authors\\u2014cosine similarity\\u2014is, as I mentioned, not reasonable according to [1] and [2].\", \"I am confident that the authors' work is valuable and that the studies in [1] and [2] do not address the same question as SSLA. I am simply suggesting that these studies should incorporate divergence into their evaluation criteria. While the original SSL papers do not introduce the concept of divergence, this is not a major issue. The initial motivation for a new method may differ from the essential factor that makes it successful, and this should not be a reason for the authors' rebuttal.\", \"I have already provided a possible scheme for tackling the problem I mentioned. However, I agree with the author: we cannot evaluate the quality of this work solely based on whether it can be improved in the future. With that in mind, I am willing to improve my score to 5.\"]}", "{\"comment\": \"Thank you for your thoughtful comments and for taking the time to engage in this meaningful discussion. We respect your decision and remain grateful for the opportunity to exchange ideas.\"}", "{\"comment\": \"### Response for Weakness 1\\nThe challenge of distinguishing whether attribution results derive from SSL itself or the downstream task highlights a critical issue. SSL demonstrates its strength by achieving superior performance across various downstream tasks, emphasizing its generalizability. Our approach focuses on understanding the origins of this generalizability. Specifically, by investigating which regions SSL prioritizes within samples, we aim to uncover why SSL methods excel across diverse downstream tasks rather than tailoring explanations to any single downstream task.\\n\\n---\\n\\n### Response for Weakness 2 \\nAs mentioned at the beginning of Section 4.5, our method is the first to interpret SSL tasks without downstream task dependency, meaning there are no direct baselines for comparison. Traditional SSL interpretability methods inherently rely on downstream tasks, which makes them unsuitable for our context. Unlike these methods, our approach focuses on explaining which regions of the data SSL attends to, as these regions are the source of SSL's **generalizability**. Methods incorporating downstream tasks cannot explain this **generalizability**, making them inadequate for our study's objective.\\n\\n\\n---\\n\\n### Response for Weakness 3\\nEquations 1 and 2 are essential for establishing the attribution nature of our method. They serve as a foundation for proving that SSLA adheres to the Sensitivity Axiom and demonstrate how attribution results accumulate during sample updates. We will consider improving their presentation in future revisions to enhance clarity.\\n\\n---\\n\\n### Response for Weakness 4\\nTo address this concern, we conducted an evaluation using a linear classifier added to SSL, akin to typical SSL applications in downstream classification tasks. The Following Table results on the ImageNet dataset, comparing SSLA with the widely used Integrated Gradients (IG) [1] method. SSLA demonstrates significantly better performance on metrics such as Insertion (INS) and Deletion (DEL), indicating strong correlation and effectiveness in this setting.\\n\\n| | INS | DEL |\\n|------|--------|--------|\\n| IG | 0.0656 | 0.0125 |\\n| SSLA | 0.2577 | 0.03 |\", \"reference\": \"[1] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. \\\"Axiomatic attribution for deep networks.\\\" International conference on machine learning. PMLR, 2017.\\n\\n---\\n\\n### Response for Question 1\\nThe attribution results maintain the same dimensionality as the input because the process of explaining SSL tasks requires evaluating the importance of every input feature. This consistency ensures a one-to-one correspondence between the input and the attribution results.\\n\\n---\\n\\n### Response for Question 2\\nThe light blue arrows represent backpropagation, while the snowflake icon indicates computations that do not require gradient backpropagation. This distinction allows us to emphasize that these computations can be preprocessed independently, optimizing the overall computational efficiency of the attribution process.\"}", "{\"comment\": \"Reference:\\n\\n[1] Sundararajan, Mukund, Ankur Taly, and Qiqi Yan. \\\"Axiomatic attribution for deep networks.\\\" International conference on machine learning. PMLR, 2017.\\n\\n[2] Pan, Deng, Xin Li, and Dongxiao Zhu. \\\"Explaining deep neural network models with adversarial gradient integration.\\\" Thirtieth International Joint Conference on Artificial Intelligence (IJCAI). 2021.\\n\\n[3] Zhu, Zhiyu, et al. \\\"Iterative Search Attribution for Deep Neural Networks.\\\" Forty-first International Conference on Machine Learning.\"}", "{\"comment\": \"Thank you for your feedback.\\n\\nI completely agree with the point that these two papers, as well as the underlying rationale, explain the source of SSL generalization ability. However, as mentioned in the review, our goal is to explain what features SSL focuses on during the extraction process (i.e., which regions in the input samples are important) and that this importance is tied to the SSL task itself, rather than individual samples. Therefore, these two perspectives are not contradictory. If our work were about exploring the generalization performance of SSL tasks rather than explaining which regions SSL models focus on in samples, then the content of these two papers would inevitably require comparison and discussion.\\n\\nAs we mentioned, we will incorporate this discussion. However, these two papers did not address (explaining which regions SSL models focus on in the samples) this task, so there is no conflict. Our work remains the first to achieve this goal.\\nIt is also worth noting that in the original SSL papers, concepts such as divergence were not introduced, yet they still achieved excellent performance. This is another aspect that should be considered and subjected to interpretability analysis.\\n\\nRegarding \\\"calculating the alternative quantity of divergence,\\\" we acknowledge that this has potential for future improvement of the work. However, it is not a necessary aspect to consider in this work. We cannot evaluate the quality of this work solely based on whether it can be improved in the future.\\n\\nI kindly hope you will reconsider the score based on these clarifications. Thank you again for your valuable insights and time.\"}", "{\"comment\": \"Dear Reviewer 6yuv,\\n\\nThank you for your valuable feedback on our submission. With the rebuttal deadline approaching, we would like to confirm that we have thoroughly addressed all the concerns you raised in your review. If there are any additional questions or issues, please let us know. \\n\\nWe kindly request you to reconsider your score based on our clarifications. \\n\\nBest regards, \\n\\nSubmission1920 authors\"}", "{\"title\": \"Response for Authors rebuttal\", \"comment\": \"Thank you for your rebuttal.\\n\\nPlease refer to [1], especially Theorem 4. A similar effect can be achieved by [2] and [3]. This formula clearly does not require additional random sampling; just adopting the same samples used to calculate the cosine similarity is sufficient. In fact, despite adopting negative samples, additional samples are not required. For more details, please refer to https://github.com/jhaochenz96/spectral_contrastive_learning. There are lots of similar alternative plans, for example, dimensional contrastive mentioned in [4].\\n\\n[1] Weiran Huang, Mingyang Yi, Xuyang Zhao, and Zihao Jiang. \\\"Towards the generalization of contrastive self-supervised learning.\\\" arXiv preprint arXiv:2111.00743, 2021\\n\\n[2] HaoChen, Jeff Z., et al. \\\"Provable guarantees for self-supervised deep learning with spectral contrastive loss.\\\" Advances in Neural Information Processing Systems 34 (2021): 5000-5011.\\n\\n[3] Jeff Z. HaoChen, Colin Wei, Ananya Kumar, and Tengyu Ma. 2024. Beyond separability: analyzing the linear transferability of contrastive representations to related subpopulations. In Proceedings of the 36th International Conference on Neural Information Processing Systems (NIPS '22).\\n\\n[4] Garrido, Quentin, et al. \\\"On the duality between contrastive and non-contrastive self-supervised learning.\\\" arXiv preprint arXiv:2206.02574 (2022).\"}" ] }
2aL6gcFX7q
Understanding Data Poisoning Attacks for RAG: Insights and Algorithms
[ "Xun Xian", "Tong Wang", "Liwen You", "Yanjun Qi" ]
Large Language Models (LLMs) have achieved success across various domains but also exhibit problematic issues, such as hallucinations. Retrieval-Augmented Generation (RAG) effectively alleviates these problems by incorporating external information to improve the factual accuracy of LLM-generated content. However, recent studies reveal that RAG systems are vulnerable to adversarial poisoning attacks, where attackers manipulate retrieval systems by poisoning the data corpus used for retrieval. These attacks raise serious safety concerns, as they can easily bypass existing defenses. In this work, we address these safety issues by first providing insights into the factors contributing to successful attacks. In particular, we show that more effective poisoning attacks tend to occur along directions where the clean data distribution exhibits small variances. Based on these insights, we propose two strategies. First, we introduce a new defense, named DRS (Directional Relative Shifts), which examines shifts along those directions where effective attacks are likely to occur. Second, we develop a new attack algorithm to generate more stealthy poisoning data (i.e., less detectable) by regularizing the poisoning data’s DRS. We conducted extensive experiments across multiple application scenarios, including RAG Agent and dense passage retrieval for Q&A, to demonstrate the effectiveness of our proposed methods.
[ "Safety; Retrieval" ]
Reject
https://openreview.net/pdf?id=2aL6gcFX7q
https://openreview.net/forum?id=2aL6gcFX7q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zyOh7U2BqB", "ztXylOTxDC", "uM1VBYcy9h", "m1ETEZ0lvY", "kXjME52ZPF", "hrmrVCX84v", "gImOIOCQSD", "bsUmINXXMu", "bfZB9dASBi", "bDBO89sVdC", "ZfFMXC1ty3", "Z8n1T9l903", "YmR1UXOtoH", "Vc8RqXd2rn", "UhUzOzPdSp", "SNYWL8H1nK", "LijXgy2PF2", "KzeaHGABkP", "KSN744Slxm", "Gf0KcbI6fF", "DzZ29fyxoa", "CPW1bx7VtQ", "Ao3EmZh3K8", "70zWsw88R8", "5yQkWYwgCG" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732233311980, 1732304567685, 1732303268020, 1732599181399, 1732232932674, 1732233101734, 1737524203475, 1732595424683, 1732668134391, 1730828314888, 1730142729818, 1732242512941, 1730722745459, 1731900295065, 1732331947011, 1732233179328, 1732242456888, 1732412372872, 1732412084770, 1732682843133, 1732233229310, 1730550888102, 1732232875935, 1732325169211, 1734618614971 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_NLx3" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_F894" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_NLx3" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_7fYQ" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_NLx3" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_bSEM" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_pzSQ" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_bSEM" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_F894" ], [ "ICLR.cc/2025/Conference/Submission12614/Authors" ], [ "ICLR.cc/2025/Conference/Submission12614/Reviewer_bSEM" ], [ "ICLR.cc/2025/Conference/Submission12614/Area_Chair_KySE" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal\", \"comment\": \"We deeply appreciate the reviewers for dedicating their time and effort to reviewing our manuscript and providing insightful feedback. We are pleased that all reviewers acknowledged the novelty of our work. Furthermore, we are grateful that they considered our writing clear and our approach effective across different setups. We will integrate their suggestions into the revised version of the paper.\\n\\n>Q: I am uncertain about the reliability of DRS. For example, if the question is, \\\"Who is the OpenAI CEO?\\\" I would expect the embedding of a clean document (\\\"The CEO of OpenAI is Sam Altman\\\") to be similar ...\\n\\n**Response**: Thank you for your insightful and sharp question regarding the reliability of DRS. We believe that, in general, it could be challenging to distinguish between the two queries you provided using all existing defense methods, including ours. These queries are very close in terms of semantic meaning, sentence quality, and embedding vectors. As a result, regardless of the defense method\\u2014whether norm-based, perplexity-based, or our proposed method\\u2014it may be infeasible to distinguish between these two queries.\\n\\nMore fundamentally, by casting the problem of detecting the two queries you proposed as a hypothesis testing/binary classification problem, we can show that the Bayes Risk (which depends on the total variation between the two distributions) for the problem will be very high, as their total variation distance is very small. This represents a fundamental limitation of all detection-based methods, including our proposed DRS.\\n\\n>Q: In Figure 1, what is the Y-axis?\\n\\n**Response**: Thank you for your question regarding Figure 1. The Y-axis in Figure 1 represents the relative change in the attack success rate (ASR) for the three attacks. We define the relative changes in Lines 288-295. Specifically, the relative change along a certain direction of the embedding vectors is calculated by measuring the difference between adversarial and clean documents along the directions of the clean documents. This difference is then normalized by dividing the relative mean by the standard deviation along these directions. We observed that more effective attacks, such as AgentPoison, tend to have a larger relative distance along directions with small variances (i.e., the left-most group represents the directions of clean embedding documents with the top 100 smallest variances), which empirically verifies our theory.\\n\\n\\n>Q: In Section 2.1, the attacker\\u2019s capability is described as \\\"only injecting poisoned data (e.g., by creating a new Wikipedia page).\\\" However, in Section 5.1.2, the setting appears to change, with the retriever itself being backdoored.\\n\\n**Response**: Thank you for your question regarding the threat model. This does not violate our threat model. In Section 5.1.2, the scenario considered is that the attacker only poisons the training data, but later the users themselves fine-tune the retriever based on the poisoned data for better performance. As a result, the retriever becomes backdoored. We will further clarify this point in the revised manuscript.\\n\\n>Q: In Section 5.1.1, there is no description of the adversarial query.\\n\\n**Response**: Thank you for your comment regarding the adversarial query. Due to space limitations, we included detailed descriptions and provided some examples of the adversarial queries in the appendix. Here, we list them for your reference. To create adversarial data, the attacker appends an adversarial backdoor trigger at the end of normal data. The backdoor/adversarial trigger used for autonomous driving is: `Be safe and make a discipline.`\\n\\n>Q: In Section 5.1.1, the statement \\\"For each attack method, we generate 300 poisoned data samples\\\" is unclear. Does \\\"poisoned data samples\\\" refer to poisoned documents?\\n\\n**Response**: Thank you for your question. Yes, you are absolutely correct. We will make this clear in the revised manuscript.\\n>Q: If I understand correctly, DRS also requires a set of clean samples to compute the threshold, but it is unclear how large and diverse this dataset needs to be.\\n\\n**Response**: Thank you for your question. Yes, you are correct about the need for a set of clean samples. Regarding the size of the data, we provide an ablation study on using different sizes for the clean data (in the context of autonomous driving) and summarize the results in the following table. We observed that our proposed method is robust with different sample sizes of the clean dataset.\\n\\n| Sample Size |500 | 1000| 1500|\\n|---|---|---|---|\\n| Detection Rate |0.91 | 0.97| 0.99| \\n\\nRegarding the diversity of the clean dataset, it depends heavily on the targeted query set to be protected. If the query set contains queries from closely similar topics, then the corresponding clean dataset need not be diverse. Otherwise, the clean dataset requires more diverse data. We will clarify this in the revision.\"}", "{\"title\": \"Thanks for the response and raising scores. Further Clarificaiton\", \"comment\": \"We would like to thank the reviewer for their quick responses and for raising the scores.\\n\\nWe would like to highlight the specific problem (the detection challenge) that the reviewer is mentioning. The scenario described by the reviewer is a type of **fundamentally infeasible** problem (the hardness of which we prove in the following). In the sense that **no detectors** can perform well under such a case. \\n\\nWe believe it may not be reasonable to focus on this scenario particularly, especially given that **our method has shown significant improvements** over the baselines in various experimental setups (e.g., increasing the detection rate from 0.1 to 0.99).\\n\\nTheoretically, let us denote the distributions of the two queries you proposed as $P_1$ and $P_2$. The Bayes Risk (which is the best possible performance for any classifier) is:\\n\\n$$\\n\\\\frac{1}{2}[1 - TV(P_1, P_2)],\\n$$\\n\\nwhere $TV$ is the total variation distance between the two distributions. In the scenario you mentioned, the distributions of $P_1$ and $P_2$ are very close to each other, and hence the Bayes Risk is very close to 0.5 (which is the performance of a random guess). In other words, **the optimal strategy (for any detection) is to randomly guess**. This is a fundamental limitation of the problem itself, regardless of any detection methods.\\n\\nWe hope that the reviewer can take this part into account. Thank you again for your effort and time.\"}", "{\"comment\": \"Thanks for the detailed response. Some of my concerns have been addressed. I'd love to increase my score to 5.\\n\\nSince my major concern still challenges the proposed detection method, I cannot support the acceptance of this paper. I would encourage the authors to analyze the problem much deeper in the next version.\"}", "{\"title\": \"Thanks for your feedback and increasing the score!\", \"comment\": \"Thank you for your feedback and for raising the score! We are happy that our responses have addressed your concerns.\\n\\nThank you again for your effort in reviewing our paper!\"}", "{\"title\": \"Rebuttal continues\", \"comment\": \">Q: The evaluation in Section 5.2 for the proposed attack is very limited.\\n\\n**Response**: Thank you for your valuable suggestion regarding the evaluation of our newly proposed attack. To the best of our knowledge, we are not aware of any other existing attacks (with open-sourced code) that aim to achieve similar goals, i.e., generating poisoned data with high attack success rates while explicitly forcing them to be less discernible to detection, in the context of RAG, except for the ones already included. In fact, the AgentPoison attack (first on arXiv in July this year) itself is very new, as quoted: '... the first backdoor attack targeting generic and RAG-based LLMs ...'.\\n\\nNonetheless, we have introduced a new attack (proposed by ourselves, motivated by backdoor literature) that penalizes the Wasserstein distance between adversarial and normal queries, instead of the proposed DRS. We summarize the results in the following table, where we observe that the detection rate of the Wasserstein distance-based attack is higher than that of the proposed DRS-based attack (with a lower detection rate indicating that the attack is more effective), indicating the effectiveness of our proposed DRS attack algorithm. \\n\\n**Table**: Filtering rates for poisoned data, generated by AgentPoison and our newly proposed DRS-regularized AgentPoison, and the Wasserstein-regularized AgentPoison. The decision threshold for filtering is set to the 99th percentile of the **clean** scores, resulting in a false positive rate of approximately 1% for clean documents. \\n\\n| Attack Method | Perplexity filter | \\u21132-norm filter | \\u21132-distance filter | DRS (proposed) |\\n|-----------------------------------|-------------------|----------------|-------------------|----------------|\\n| AgentPoison | 0.03 | 0.03 | 0.01 | 0.99 |\\n| DRS-regularized AgentPoison | 0.03 | 0.01 | 0.01 | 0.85 |\\n| Wass-regularized AgentPoison (New)| 0.03 | 0.02 | 0.01 | 0.94 |\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We deeply appreciate the reviewers for dedicating their time and effort to reviewing our manuscript and providing insightful feedback. We are pleased that all reviewers acknowledged the novelty of our work. Furthermore, we are grateful that they considered our writing clear and our approach effective across different setups. We will integrate their suggestions into the revised version of the paper.\\n\\n>Q: Missing some references.\\n\\n**Response**: Thank you for pointing out the missing references. We have included them in the revised manuscript. \\n\\n>Q: There has been some work discussing the characterization of poisoned samples. In particular, the proposed method (i.e., DRS) is similar to [3] to some extent. The authors should compare their method to existing works.\\n\\n**Response**: Thank you for your valuable suggestion regarding the comparison of our proposed method to existing works. The proposed DRS differs significantly from [3] in terms of the threat model and technical components.\\n\\n[3] is a **training-stage defense** method that uses **both** clean and poisoned training data to filter out backdoors and build a clean model, evaluating based on clean data and attack success rates. In contrast, our approach, being **an inference-stage** method, uses only a small set of clean validation data and no knowledge of the poisoned data, aiming to detect future poisoned inputs. We evaluate our method using the AUC-ROC score of the detector.\\n\\n\\n\\n>Q: Explain the performance of the proposed attack.\\n\\n**Response**: Thank you for your question regarding the performance of the proposed attack. In Table 5 in Section 5.2 of the main text, we report the filtering rates for poisoned data generated by different algorithms. Given the same attack success rates of the poisoned data, more effective attacks have lower filtering rates, as they are less likely to be detected by the defense mechanism. We observe that the detection rate (by the proposed DRS defense) for poisoned data generated by our algorithm decreases by 15% compared to the vanilla AgentPoison, highlighting the effectiveness of the algorithm.\\n\\nIn fact, as discussed in Section 4.2 of the main text, the detection rate of the proposed DRS defense can be further lowered by increasing the hyperparameter $\\\\lambda_2$. However, this comes with a trade-off: as the penalty increases, the attack success rate of the corresponding poisoned data decreases, as suggested by our theorems. This is because, intuitively, a large penalty forces the poisoned data to be more similar to the clean data, making the attack ineffective.\\n\\n\\n\\n>Q: The authors only use AgentPoison as an example to demonstrate the effectiveness of the proposed attack. The authors should conduct more extensive experiments on all discussed attacks to verify its generalizability.\\n\\n**Response**: Thank you for your valuable suggestion regarding the evaluation of our newly proposed attack. To the best of our knowledge, we are not aware of any other existing attacks (with open-sourced code) that aim to achieve similar goals, i.e., generating poisoned data with high attack success rates while explicitly forcing them to be less discernible to detection, in the context of RAG, except for the ones already included. In fact, the AgentPoison attack (first on arXiv in July this year) itself is very new, as quoted: '... the first backdoor attack targeting generic and RAG-based LLMs ...'.\\n\\nNonetheless, we have introduced a new attack (proposed by ourselves, motivated by backdoor literature) that penalizes the Wasserstein distance between adversarial and normal queries, instead of the proposed DRS. We summarize the results in the following table, where we observe that the detection rate of the Wasserstein distance-based attack is higher than that of the proposed DRS-based attack (with a lower detection rate indicating that the attack is more effective), indicating the effectiveness of our proposed DRS attack algorithm. \\n\\n**Table**: Filtering rates for poisoned data, generated by AgentPoison and our newly proposed DRS-regularized AgentPoison, and the Wasserstein-regularized AgentPoison. The decision threshold for filtering is set to the 99th percentile of the **clean** scores, resulting in a false positive rate of approximately 1% for clean documents. \\n\\n| Attack Method | Perplexity filter | \\u21132-norm filter | \\u21132-distance filter | DRS (proposed) |\\n|-----------------------------------|-------------------|----------------|-------------------|----------------|\\n| AgentPoison | 0.03 | 0.03 | 0.01 | 0.99 |\\n| DRS-regularized AgentPoison | 0.03 | 0.01 | 0.01 | 0.85 |\\n| Wass-regularized AgentPoison (New)| 0.03 | 0.02 | 0.01 | 0.94 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for the responses. Since the responses have addressed part of my concern, I will increase my score.\"}", "{\"comment\": \"Thank you again for the detailed response.\\n\\nI understand that the scenario might be fundamentally infeasible. However, constructing such poisoning samples should be extremely easy (much easier than methods like GCG).\"}", "{\"summary\": \"This paper studies both defenses and attacks to retrieval-augmented generation, which has been used in many applications. The proposed attack and defense are based on the observation that poisoning attacks tend to occur along directions for which clean data distribution has small variances.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The attacks and defenses to RAG are an active research topic, given RAG is used in many real-world applications. Additionally, existing attacks are summarized in the paper.\\n\\n2. Multiple attacks on RAG are considered.\\n\\n3. The analysis made in the paper is interesting. For instance, Figure 1 shows some empirical evidence to verify the developed theory.\", \"weaknesses\": \"1. One limitation of the method is that the assumption can be strong. For instance, it is assumed that adversarial query has a different distribution from normal query. However, in practice, an attacker may select normal queries as target queries. In this scenario, the distribution of the adversarial query would be the same as the target query. This assumption may hold for certain attacks. The authors may consider narrowing down the scope, i.e., focusing on the scenarios where the adversarial query has a different distribution from the target query.\\n\\n2. The assumption 1 is not very clear. How to measure the distance between two texts? The authors may consider adding more explanations to make it easier for readers to understand. Also, assumption 1 states the distance between two texts is bounded, which may not be informative, as it may hold for two arbitrary texts in practice. \\n\\n3. The proposed defense may influence the utility of RAG. For instance, if new knowledge is added for a query, it can be rejected if it is substantially different from clean texts in the clean data corpus. In the experiments, it is shown that the false positive rate is very high. Is it because the clean documents are irrelevant to the protected queries? It can be helpful to perform a comprehensive analysis of the proposed defense on the influence of the utility of RAG systems. One naive defense is to reject all documents whose similarities (e.g., embedding vector similarity) are high with protected queries. The authors may consider comparing with some baselines to demonstrate the effectiveness of the proposed defenses. Additionally, the evaluation in Section 5.2 for the proposed attack is very limited.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors conduct a comprehensive analysis of data poisoning attacks on RAG. Specifically, they provide a framework to analyze attacker objectives. They observe that more effective attacks tend to result in larger relative shifts along directions with smaller variances. Based on this observation, the authors design a new filtering method to defend against poisoning attacks. Additionally, they introduce a regularizer to bypass the new detection method. Through experiments, they demonstrate the effectiveness of both the new defense and attack strategies.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The analysis and observations of current poisoning attacks on RAG are novel and interesting.\\n\\nThe paper considers four attack settings to demonstrate the effectiveness of the defense methods, offering a comprehensive and thorough evaluation.\", \"weaknesses\": \"Major concern: I am uncertain about the reliability of DRS. For example, if the question is, \\\"Who is the OpenAI CEO?\\\" I would expect the embedding of a clean document (\\\"The CEO of OpenAI is Sam Altman\\\") to be similar to that of a poisoned document (\\\"The CEO of OpenAI is Elon Musk\\\"). I am unsure whether DRS can effectively handle such an attack.\\n\\nThe clarity of this paper needs improvement.\", \"some_examples\": \"1. In Figure 1, what is the Y-axis?\\n2. In Section 2.1, the attacker\\u2019s capability is described as \\\"only injecting poisoned data (e.g., by creating a new Wikipedia page).\\\" However, in Section 5.1.2, the setting appears to change, with the retriever itself being backdoored.\\n3. In Section 5.1.1, there is no description of the adversarial query.\\n4. In Section 5.1.1, the statement \\\"For each attack method, we generate 300 poisoned data samples\\\" is unclear. Does \\\"poisoned data samples\\\" refer to poisoned documents?\\n\\nIf I understand correctly, DRS also requires a set of clean samples to compute the threshold, but it is unclear how large and diverse this dataset needs to be.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal continues\", \"comment\": \">Q: The paper suggests that attack effectiveness is maximized by targeting low-variance directions within the data distribution. Can the authors provide more detailed empirical evidence on how such low-variance features manifest in real-world documents? Also, could you please specify the experimental settings of Fig. 2?\\n\\n**Response**: Thank you for your insightful question regarding the low-variance directions. We provide a real-world document example to illustrate the low-variance directions in Figure 1 in the main text. In Figure 1, we follow the exact setup in AgentPoison (Chen et al., 2024a) to generate poisoned documents by employing three different attacks: Ap, BadChain, and AutoDan, with ASR of Ap > BadChain \\u2265 AutoDan. Next, we plot the relative changes for different attacks. We define the relative changes in Lines 288-295. Specifically, the relative change along a certain direction of the embedding vectors is calculated by measuring the difference between adversarial and clean documents along the directions of the clean documents. This difference is then normalized by dividing the relative mean by the standard deviation along these directions. We observed that more effective attacks, such as AgentPoison, tend to have a larger relative distance along directions with small variances (i.e., the left-most group represents the directions of clean embedding documents with the top 100 smallest variances), which empirically verifies our theory.\\n\\nRegarding the experimental settings of Figure 2, it uses the same setup as Figure 1.\\n\\n\\n>Q: A sensitivity analysis of the hyperparameters and would give insight into the attack\\u2019s trade-offs between attack sucess rate and evasion of the defense.\\n\\n**Response**: Thank you for your insightful comment regarding the sensitivity analysis of the hyperparameters. We provided an ablation study on the hyperparameters of the proposed attack in the appendix and have included the table here for your reference. By using an appropriate value of $\\\\lambda_2$, specifically 1, we can achieve a good trade-off between the attack success rate and the evasion of the defense.\", \"table\": \"Sensitivity analysis of the hyperparameters ($\\\\lambda_2$) of the proposed attack on the autonomous driving task.\\n| \\u03bb\\u2082 | Attack Success Rate| DRS (proposed) Detection Rate|\\n|------|-------------------|----------------|\\n| 0.1 | 0.78 | 0.99 |\\n| 0.5 | 0.76 | 0.85 |\\n| 1 | 0.70 | 0.72 |\\n| 5 | 0.51 | 0.51 |\\n| 10 | 0.42 | 0.29 |\"}", "{\"summary\": \"The paper investigates the vulnerability of Retrieval-Augmented Generation (RAG) systems to data poisoning attacks, where adversaries manipulate the retrieval corpus to influence model outputs. It reveals that effective poisoning occurs along low-variance directions in the clean data distribution, allowing attackers to insert poisoned data that stealthily alters retrieval results. The authors propose a new defense metric, Directional Relative Shifts (DRS), to detect these poisoned entries by examining shifts along susceptible directions. Additionally, they introduce an advanced attack algorithm that regularizes DRS values, making poisoned data harder to detect. Empirical tests confirm the effectiveness of DRS in various RAG applications, demonstrating the need for robust defenses.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe authors attempt to give a deeper understanding and theoretical analysis of existing attacks. It should be encouraged.\\n2.\\tThis is a well-written paper. The definitions of symbols and the overall flow are clear.\\n3.\\tThe proposed defense is simple yet highly effective.\", \"weaknesses\": \"1. Missing some references.\\n- Line 65: The authors should provide references for perplexity-based filters (e.g., [1]).\\n- Line 143-153: The authors should also mention existing attacks against (e.g., [2]).\\n2. There has been some work discussing the characterization of poisoned samples. In particular, the proposed method (i.e., DRS) is similar to [3] to some extent. The authors should compare their method to existing works.\\n3. The authors only use AgentPoison as an example to demonstrate the effectiveness of the proposed attack. The authors should conduct more extensive experiments on all discussed attacks to verify its generalizability.\\n4. According to Section 5.2 (Table 5), the performance of the proposed attack is limited.\\n5. The authors should directly place the appendix after the references in the main document.\\n\\n\\nReferences\\n1. Onion: A Simple and Effective Defense against Textual Backdoor Attacks.\\n2. Targeted attack for deep hashing based retrieval.\\n3. Spectral Signatures in Backdoor Attacks.\", \"questions\": \"1. Add more related references.\\n2. compare their method to existing works like [3].\\n3. Conduct more experiments regarding the proposed attacks.\\n4. Explain the performance of the proposed attack.\\n\\nPlease find more details in the aforementioned 'Weaknesses' part.\", \"ps\": \"I am willing to increase my score if the authors can (partly) address my concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies data poisoning attacks against Retrieval-Augmented Generation (RAG) systems. RAG systems can be compromised when attackers inject manipulated data into the retrieval corpus. The authors suggest that succesful attacks may exploit low-variance directions in the data distribution. Based on these findings, the authors introduce two significant innovations: a defense method called Directional Relative Shifts (DRS), which detects potential poisoning by analyzing shifts in low-variance directions, and a stealthier attack method that reduces detectability by minimizing DRS scores for poisoned data. The experiments show the effectiveness of the proposed defense across various RAG applications, such as Q&A systems and medical data retrieval, while the new attack algorithm succeeds in circumventing traditional and DRS defenses under specific settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The Directional Relative Shifts (DRS) metric is the most interesting contribution of the paper: it is a novel measure to detect poisoned documents. Moreover, Both theoretical and empirical results are provided. In terms of clarity, the paper is well written and easy to follow.\", \"weaknesses\": \"A significant shortcoming is the absence of reported attack success rates in the experimental results. Without this metric, it becomes difficult to fully evaluate the effectiveness of both the proposed attacks and defenses.\\n\\nThe paper also lacks a deep discussion on the computational cost of DRS. The access to clean documents need better justification and analysis.\", \"questions\": [\"How defending against poisoning in RAG settings differs from defending against, e.g., jailbreak or prompt injection attacks?\", \"Can the authors motivate better this assumption?: \\\"We assume the defender has access to both the retriever and the clean data corpus. When a new test document is proposed for injection into the clean corpus, the defender calculates its DRS score (to be defined later in Eq. 3) and compares it with the scores of known clean documents.\\\" How can that clean data corpus be garanteed to not poisoned? And how many clean documents would be required so achieve such guarantee?\", \"Could the authors elaborate on the computational overhead of calculating DRS?\", \"The paper suggests that attack effectiveness is maximized by targeting low-variance directions within the data distribution. Can the authors provide more detailed empirical evidence on how such low-variance features manifest in real-world documents? Also, could you please specify the experimental settings of Fig. 2?\", \"A sensitivity analysis of the hyperparameters $\\\\lambda_1$ and $\\\\lambda_2$ would give insight into the attack\\u2019s trade-offs between attack sucess rate and evasion of the defense.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks the reviewer for the feedback; Further Clarification\", \"comment\": \"We thank the reviewer for the feedback. We would like to address the remaining concerns in the following points:\\n\\n1. First, the difference in the threat model compared with [3] plays a significant role in the technical contribution of our work. In short, our technique primarily relies on the **localization property of the KNN algorithm**, rather than **robust statistics techniques** as those employed in [3]. To be more specific, in [3], the authors have access to both **clean and backdoor data**, and their technique involves applying **robust statistics** to eventaully build a classifier that is robust against the backdoor training data. However, in our setting, we **do not have access to the backdoor data** and only have access to clean data. Our theoretical contribution is to demonstrate how optimal backdoor data might look **even without direct access to them**. In this sense, our work addresses a potentially more difficult problem than the one in [3], as we have less information (i.e., no access to backdoor data). We derive our theory based on the **localization property of the KNN algorithm** and the **decaying property of distributions with well-behaved tails**.\\n\\n2. Second, we can further lower the detection rate by adjusting the hyperparameter $\\\\lambda_2$. For example, by increasing $\\\\lambda_2$ to 0.75, we can lower the detection rate from 0.99 to 0.71, although the attack success rate would decrease by around 12%. We believe that, given the relatively recent development of this field, the reduction of 15% reported in the paper represents a significant improvement.\\n\\n3. Third, while we are aware of several RAG poisoning attacks, as summarized in Table 1, **we are not aware of any attacks that explicitly aim to make generated poisoned data less discernible to detection by using specific objective functions/regularization techniqes in RAG setup**, except for the one included in our paper. If the reviewer is aware of other such works, we kindly request that they share them with us.\\n\\n4. Fourth, we have conducted a new study (attack) in which we apply our technique to the Backdoor DPR Attack setting (Long et al., 2024). Specifically, we first trained a backdoored retriever based on backdoor data, where the backdoor triggers are primarily grammar errors. Next, we further optimize these backdoor triggers with the proposed DPR. The results are summarized in the table below, where we observe a decrease in the detection rate from 0.65 to 0.52. We believe the detection rate can be further reduced by tuning the hyperparameters.\\n\\n| Attack Method | Perplexity Filter | \\u21132-norm Filter | \\u21132-distance Filter | DRS (Proposed) |\\n|----------------------------------|-------------------|----------------|--------------------|----------------|\\n| BadDPR | 0.13 | 0.36 | 0.36 | 0.65 |\\n| DRS-regularized BadDPR | 0.10 | 0.31 | 0.37 | 0.52 |\\n\\n5. Finally, we note that NLP backdoors are much less diverse than CV backdoors due to the discrete nature of text. In other words, the backdoor trigger or pattern is quite restricted, typically involving just a few words or sentences. In this context, we believe that the AgentPoison paper (accepted at NeurIPS this year) and the newly added backdoor DPR scenario are indeed representative of the current state of the art. \\n\\nWe hope that these points address the reviewer's concerns. We are happy to provide further clarification if needed. Thank you again for your time and effort in reviewing our work!\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We deeply appreciate the reviewers for dedicating their time and effort to reviewing our manuscript and providing insightful feedback. We are pleased that all reviewers acknowledged the novelty of our work. Furthermore, we are grateful that they considered our writing clear and our approach effective across different setups. We will integrate their suggestions into the revised version of the paper.\\n\\n>Q: Sparse Theoretical Explanation -- While DRS\\u2019s foundation on variance shifts is intuitive, a deeper theoretical analysis could further clarify why certain dimensional shifts are more vulnerable. This would strengthen the defense\\u2019s theoretical underpinnings.\\n\\n**Response**: Thank you for your suggestion regarding the theoretical explanation of DRS. In Corollary 1, we have shown that certain directions are more effective for attacks, with an intuitive explanation provided in Remark 3. Below is the logic flow of our theoretical analysis as it was presented for your reference:\\n\\n1. Theorem 1 demonstrates that attackers can successfully launch an attack, as defined in Section 2.1, by creating poisoned queries sufficiently distant from normal ones.\\n2. However, if adversarial queries are too different from the clean ones, they may be easily detected. Therefore, the attacker seeks to answer: **Given the maximum deviation (e.g., \\u21132 distance) between the normal and adversarial distributions, what are the most effective directions for moving from $Q_{normal}$ to $Q_{adv}$?**\\n3. Corollary 1 shows that the most effective attack directions are those that maximize the rate at which the clean data $\\\\mathcal{D}^{\\\\text{clean}}$ density decays.\\n\\n\\nIntuitively, directions in $\\\\mathcal{D}^{\\\\text{clean}}$ with rapidly decaying density often correspond to low-variance directions. Low variance indicates that most of the probability mass is concentrated around the mean, so even small deviations from the mean significantly reduce the probability mass. This aligns with the attacker's goal: perturbing a clean query in a low-variance direction reduces the likelihood of clean data being close to the perturbed query, increasing the chance of retrieving poisoned documents. We hope this explanation clarifies the theoretical basis of DRS and will improve the presentation in the revised manuscript.\\n\\n\\n>Q: Unrealistic Defense Assumptions -- The defense method assumes prior knowledge of a specific subset of queries that need protection from poisoning attacks. In real-world applications, defenders typically do not have knowledge of which specific queries might be targeted, and a practical defense would need to offer broad protection across all possible queries. This limitation reduces the generalizability and practicality of the proposed DRS-based defense method.\\n\\n**Response**: Thank you for your comment regarding the defense assumption. We believe that the assumption of prior knowledge of a specific subset of queries is reasonable and, in fact, essential for both practical and theoretical reasons.\\n\\n1. Theoretically, if the defender has no prior knowledge of the queries to be protected or aims to protect all possible queries, it can be shown that (i.e., using LeCam's Method to prove an information-theoretical lower bound) the defense is infeasible. This is because, by considering all possible queries, their distributions are likely to cover the entire input space, making it impossible to distinguish between normal and adversarial queries.\\n\\n2. Practically, in many real-world applications, the defender does have some knowledge of the queries that need protection. For example, a defender, such as a RAG service provider, has access to the underlying database for retrieval and is likely aware of which queries are critical to the system's operation. In this case, the defender can use this knowledge to protect these queries from poisoning attacks.\\n\\nMoreover, the effectiveness of the proposed defenses across multiple settings clearly demonstrates the wide applicability of the defense. We will further clarify this point in the revised manuscript.\\n\\n>Q: Unrealistic Assumption -- In Section 3.1, the authors illustrate their attack method with an example where, in a knowledge base about food, an adversarial query about mathematics is used to avoid retrieving clean documents. This assumption is unrealistic, as it does not reflect typical user behavior ...\\n\\n**Response**: Thank you for your comments regarding the experimental results. We believe there might be a misunderstanding between the Assumption 1 (we assume you are referring to this one) and the example. We provided this extreme-case example simply to offer some sanity checks/intuition. Recall that when proving theoretical results, it is common to choose extreme values for a simple sanity check. We will modify the example to make it more realistic in the revised manuscript.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We deeply appreciate the reviewers for dedicating their time and effort to reviewing our manuscript and providing insightful feedback. We are pleased that all reviewers acknowledged the novelty of our work. Furthermore, we are grateful that they considered our writing clear and our approach effective across different setups. We will integrate their suggestions into the revised version of the paper.\\n\\n>Q: A significant shortcoming is the absence of reported attack success rates in the experimental results. Without this metric, it becomes difficult to fully evaluate the effectiveness of both the proposed attacks and defenses.\\n\\n**Response**: Thank you for your insightful comment regarding the evaluation of the proposed attacks and defenses. **All the attacks are implemented by directly running the open-sourced code released by the authors without any modification.** As a result, the ASR of the attacks is the same as reported in the original papers. Specifically, all these attacks have decent ASR as reported in the original papers. For instance, AgentPoison has an ASR of around 0.8 on the autonomous driving dataset. We will include the ASR of the attacks in the revised manuscript for better clarity.\\n\\n\\n\\n>Q: The paper also lacks a deep discussion on the computational cost of DRS. \\n\\n**Response**: Thank you for your insightful comment regarding the computational cost of DRS. The computation and employment of DRS are very light. Specifically, we first collect a set of clean documents and obtain their embeddings. Then, we perform SVD/eigen-decomposition (**only once and can be done offline**) on the embedding matrix and store the resulting eigenvalues and eigenvectors. When new test data arrives, we only need to compute the DRS score by calculating the inner product between the test data embedding and the previously stored eigenvectors. Thus, the computation involves only a few matrix-vector multiplications and is very efficient. We will include this information in the revised manuscript for better clarity.\\n\\n\\n>Q: The access to clean documents need better justification and analysis. How defending against poisoning in RAG settings differs from defending against, e.g., jailbreak or prompt injection attacks?\\n\\n**Response**: Thank you for your insightful comment regarding access to clean documents and the difference from other attacks. The main difference between poisoning attacks in RAG and the other attacks you mentioned is that RAG poisoning attacks heavily rely on the successful **retrieval of adversary-injected documents** from the database using adversarial prompts/queries. In contrast, typical adversarial attacks, like jailbreak against LLMs, do not involve retrieval and solely rely on crafting adversarial prompts/queries to lead the LLM to generate adversarial outputs.\\n\\nGiven this discussion, access to clean documents is essential for the defender to detect adversary-injected documents. Since this is where the poisoning occurs, the defender needs access to clean documents to identify which ones are potentially poisoned. We will further clarify this point in the revised manuscript.\\n\\n\\n>Q: Can the authors motivate better this assumption?: \\\"We assume the defender has access to both the retriever and the clean data corpus. When a new test ...\\n\\n**Response**: Thank you for your insightful question regarding the clean data assumption. In fact, the defender must have access to a set of (known) clean documents in order to distinguish between future clean and potentially poisoned documents. Imagine a scenario where you are given a basket of apples, some green and some red, and you are asked to identify the red apples. Although you can divide the apples into two groups, you wouldn't know which group contains the red apples without first knowing what the red color looks like. Similarly, without knowing what a clean document looks like, it would be impossible to accurately identify the poisoned ones. This assumption is fundamental in detection-based literature, such as Out-of-Distribution detection.\\n\\nMore fundamentally, the detection problem can be framed as a hypothesis testing or binary classification problem. In this context, the defender must understand the distribution of either clean data (which is typical in detection-based problems) or adversarial data. Without this knowledge, we can show that the optimal detector for differentiating between clean and adversarial data would effectively be reduced to randomly guessing.\\n\\nRegarding the number of clean documents required, if you mean \\\"provable guarantees,\\\" such as those in conformal literature, then the number of clean documents needed to achieve such a guarantee would depend on the distribution of clean and adversarial data, which we leave as future work. In our experiments, the number of clean documents required is relatively small (e.g., 500-1000) to achieve a high detection rate. We will further clarify this point in the revised manuscript.\"}", "{\"title\": \"Thanks for your quick responses and raising the score!\", \"comment\": \"Thank you for your quick responses and for raising the score! We are happy that our responses have addressed your concerns.\\n\\nThank you again for your effort in reviewing our paper!\"}", "{\"comment\": \"Thank you for your detailed responses! Your rebuttal has addressed most of my concerns. As such, I increase my score to 6.\"}", "{\"title\": \"Further responses\", \"comment\": \"Thank you for your further responses.\\n\\nWe are pleased that you also agree with the potential impossibility detection result of the scenario you mentioned. While it is true that crafting poisoning examples in this context can be easy, **attacks under this regime do NOT necessarily guarantee high success rates**. In fact, we can show that the attack success rates will be low if the distributions of clean and poisoned data are very similar. This is because when the distributions of adversarial/poisoned and clean data are close, both types may be simultaneously retrieved by a relevant query. Such behavior contradicts the underlying philosophy of all (as well as our) RAG poisoning attacks, as outlined in our threat model, where the goal is to ensure that all retrieved documents are poisoned given adversarial queries. As a result, attacks of this type may not be favored in practice, since the fundamental requirement of high attack success rates is not met.\\n\\nWe will use the example you proposed as a demonstration. When the adversary query \\\"Who is the CEO of OPENAI?\\\" is made, both the clean (\\\"The CEO of OpenAI is Sam Altman\\\") and poisoned (\\\"The CEO of OpenAI is Elon Musk\\\") documents are retrieved, as we empirically verified using the WikiQA dataset and the Contriever embedding function, following the PoisonedRAG approach. This retrieval performance actually indicates the failure of the attack, since clean documents are also retrieved, which contradicts the adversary's goal. When the LLM is provided with both clean and poisoned documents, it is likely that it will not output the adversary's targeted answer, leading to a low attack success rate (ASR). We found that the ASR in such cases is around 40%, further indicating the failure of such attacks.\\n\\nTo summarize, from a detection perspective, your insightful examples fall into the category of impossible results, making all methods ineffective. **However, such attacks are unlikely to be of practical interest, as they are expected to have a low success rate, which contradicts the attacker's goal. As a result, while such cases theoretically exist, they are unlikely to have a significant impact on real implementations. Overall, we believe these scenarios will not negatively affect our proposed defense.**\\n\\nWe hope that the reviewer can take this part into account. Thank you again for your effort and time.\"}", "{\"title\": \"Rebuttal continues\", \"comment\": \">Q: Inaccurate Description of Experimental Results -- In Figure 1, the authors claim that \\\"we can observe that the attack success rates of Ap are higher than BadChain and AutoDan.\\\" However, the figure only shows relative changes in certain dimensions and does not explicitly provide data on the actual success rates of each attack. This discrepancy between the description and the figure may mislead readers and reflect a lack of rigor in interpreting experimental results.\\n\\n**Response**: Thank you for your comments regarding the experimental results. The ASR of the attacks mentioned is AP > BadChain \\u2265 AutoDan (we have included this information in the caption of Figure 1). We will revise the figure and the corresponding text to make it clearer in the revised manuscript.\\n\\n>Q: Limited Innovation in Attack Method -- Although the paper claims to develop a new attack algorithm, it essentially modifies existing attack methods by adding a regularization term based on the proposed defense metric (DRS). This adjustment is an incremental improvement rather than a substantive innovation. Moreover, the effectiveness of this \\u201cnew\\u201d attack is limited, as it only partially reduces the DRS defense success rate without significantly overcoming the defense.\\n\\n**Response**: Thank you for your comments regarding the attacks. To the best of our knowledge, there is very little existing work, as cited in the paper (as of the date of submission), that proposes similar attack algorithms aimed at reducing stealthiness in data poisoning for RAG systems. Our method is novel in that it specifically targets this challenge of minimizing the detectability of attacks in the context of RAG systems. However, if you are aware of any relevant work, please let us know, and we would be more than happy to cite it.\\n\\nIn addition, the main contribution of our paper is threefold: (1) a novel theoretical understanding of poisoning attacks in RAG systems, (2) a new defense mechanism based on these insights, and (3) a new attack algorithm designed to challenge this defense. We believe these significant contributions highlight the overall novelty of our work.\\n\\n\\n>Q: Lack of Attack Success Rate Comparison -- In the evaluation of the proposed \\u201cnew\\u201d attack algorithm, the paper only presents its detection rate under the DRS defense. Could you provide a comparison of the attack success rates between the new algorithm and traditional attacks?\\n\\n**Response**: Thank you for your comments regarding the experimental results. We report the attack success rates of the proposed DRS attack and the traditional attacks (i.e., AP) in the table below. We observed that the proposed DRS attack has a similar attack success rate to the traditional attack, indicating that the proposed attack is effective in generating poisoned data while maintaining stealthiness. We will include this table in the revised manuscript.\\n\\n**Table**: Attack success rates of the proposed DRS attack and the traditional (AP) on the autonomous driving task. \\n\\n|Task | Metric | AP | DRS (proposed) |\\n|----|-----------------------------------|----|----------------|\\n|Autonmous Driving | Attack Success Rate | 0.81 | 0.78 |\\n|ReAct | Attack Success Rate | 0.73 | 0.74 |\"}", "{\"summary\": \"This paper investigates vulnerabilities in RAG systems due to adversarial data poisoning attacks. The authors analyze how specific data characteristics affect attack success, proposing a new defense method, Directional Relative Shifts (DRS), which detects poisoned data by monitoring shifts in directions with low data variance. They also introduce a stealthier attack algorithm that minimizes DRS to evade detection. Experimental results indicate that DRS demonstrates strong defense performance, though its effectiveness is somewhat reduced against the proposed attacks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. **Innovative Approach** -- The proposed DRS defense is novel in its focus on low-variance directions to detect adversarial data shifts. This approach, within the experimental settings of the paper, demonstrates defensive effectiveness against poisoning attacks.\\n2. **Comprehensive Evaluation** -- This paper provides extensive experiments in multiple RAG setups, such as autonomous driving and medical Q&A, confirming the generalizability of DRS across diverse applications.\\n3. **Insightful Theoretical Contributions** -- The theoretical analysis connecting attack effectiveness to data distribution characteristics (specifically low-variance directions) offers valuable insights, potentially influencing future defenses in retrieval systems.\", \"weaknesses\": \"1. **Sparse Theoretical Explanation** -- While DRS\\u2019s foundation on variance shifts is intuitive, a deeper theoretical analysis could further clarify why certain dimensional shifts are more vulnerable. This would strengthen the defense\\u2019s theoretical underpinnings.\\n2. **Unrealistic Defense Assumptions** -- The defense method assumes prior knowledge of a specific subset of queries that need protection from poisoning attacks. In real-world applications, defenders typically do not have knowledge of which specific queries might be targeted, and a practical defense would need to offer broad protection across all possible queries. This limitation reduces the generalizability and practicality of the proposed DRS-based defense method.\\n3. **Unrealistic Assumption** -- In Section 3.1, the authors illustrate their attack method with an example where, in a knowledge base about food, an adversarial query about mathematics is used to avoid retrieving clean documents. This assumption is unrealistic, as it does not reflect typical user behavior\\u2014users are unlikely to ask irrelevant questions, like mathematics queries, in a food-related knowledge base context. This reduces the practical applicability of the assumptions underpinning the theoretical insights.\\n4. **Inaccurate Description of Experimental Results** -- In Figure 1, the authors claim that \\\"we can observe that the attack success rates of Ap are higher than BadChain and AutoDan.\\\" However, the figure only shows relative changes in certain dimensions and does not explicitly provide data on the actual success rates of each attack. This discrepancy between the description and the figure may mislead readers and reflect a lack of rigor in interpreting experimental results.\\n5. **Limited Innovation in Attack Method** -- Although the paper claims to develop a new attack algorithm, it essentially modifies existing attack methods by adding a regularization term based on the proposed defense metric (DRS). This adjustment is an incremental improvement rather than a substantive innovation. Moreover, the effectiveness of this \\u201cnew\\u201d attack is limited, as it only partially reduces the DRS defense success rate without significantly overcoming the defense.\", \"questions\": \"1. **Clarification on Theoretical Basis** -- Could you provide a more rigorous theoretical explanation for why certain low-variance directions are more susceptible to poisoning attacks in DRS? A deeper analysis would help clarify the underlying vulnerabilities exploited by attackers.\\n2. **Defense Scope and Practicality** -- Given that the defense currently focuses on protecting a specific subset of pre-selected queries, how would DRS perform in scenarios where the entire query space needs protection? Have you considered evaluating DRS\\u2019s effectiveness without pre-selecting queries, to simulate more realistic defensive conditions?\\n3. **Lack of Attack Success Rate Comparison** -- In the evaluation of the proposed \\u201cnew\\u201d attack algorithm, the paper only presents its detection rate under the DRS defense. Could you provide a comparison of the attack success rates between the new algorithm and traditional attacks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We deeply appreciate the reviewers for dedicating their time and effort to reviewing our manuscript and providing insightful feedback. We are pleased that all reviewers acknowledged the novelty of our work. Furthermore, we are grateful that they considered our writing clear and our approach effective across different setups. We will integrate their suggestions into the revised version of the paper.\\n\\n> Q: One limitation of the method is that the assumption can be strong. For instance, it is assumed that adversarial query has a different distribution from normal ...\\n\\n**Response**: Thank you for your valuable suggestion regarding Assumption 1 and the insightful example you provided. We would like to clarify that our threat model, as specified in Section 2.1 of the main text, precisely addresses the scenario you mentioned and is designed to avoid related issues. Therefore, we believe Assumption 1 is both valid and reasonable within the context of our threat model. We elaborate on this point in detail in the followings.\\n\\nIn our threat model, as specified in Section 2.1 of the main text, we consider **targeted attacks** where the attacker aims to (1) manipulate the RAG system to generate a prescribed adversarial output in response to **attacker-specified adversary queries**, and (2) ensure that the RAG responds normally to **normal queries**. In this context, the distribution of the adversary-specified queries is inherently different\\u2014and should, in fact, be significantly different\\u2014from that of normal queries. This difference is the very foundation upon which the attacker exploits the system to carry out successful attacks. If the distribution of adversarial queries were identical to that of normal queries, there would essentially be no opportunity for the attacker to manipulate the RAG to produce a specific adversarial output.\\n\\nGiven this threat model, we believe Assumption 1 is both reasonable and valid. Within our model, there is a substantial gap between the distributions of adversarial and normal queries. As a result, it should not be difficult to inject adversarial documents into the database that are close (in terms of distribution) to the adversary queries, but far from the normal queries, so that they can be retrieved effectively when given adversary queries. And this is precisely what the Assumption 1 is describing. We will further clarify this point in the revised manuscript.\\n\\n\\n>Q: The assumption 1 is not very clear. How to measure the distance between two texts? The authors may consider adding more explanations to make it easier for readers to understand. Also, assumption 1 states the distance between two texts is bounded, which may not be informative, as it may hold for two arbitrary texts in practice.\\n\\n**Response**: Thank you for your valuable suggestion regarding Assumption 1. We use the $\\\\ell_2$ distance between the embeddings of two texts to measure the distance between them by default, as described in the Notation section of the main text. Intuitively, Assumption 1 suggests that, given an adversarial query, there (almost surely) exists a set of adversarial documents that are closer to the nearest clean documents. The current statement is primarily used to simplify the proof of the main theorem. We will revise this to make it more explicit and intuitive, while emphasizing that it does not impact the theoretical proof.\\n\\n\\n\\n>Q: The proposed defense may influence the utility of RAG. For instance, if new knowledge is added for a query, it can be rejected if it is substantially different from clean texts in the clean data corpus...\\n\\n**Response**: Thank you for your insightful question regarding the impact of the proposed defense on RAG utility. We agree that a trade-off between defense effectiveness and normal RAG utility is inevitable, as this trade-off is **inherent in all** detection-based problems. We note that the decision threshold in our method (in the main text) is set to result in a 1% false positive rate for clean documents. In fact, the false positive rate can be adjusted by changing the threshold as specified in Algorithm 2 in the main text. Below, we provide an ablation study by adjusting different thresholds and report their false positive rates and detection rates for the autonomous driving dataset. We can observe that with a small FPR of 0.5%, the detection rate is still very high at 0.95. This indicates that the proposed defense is effective in detecting poisoned data while maintaining a low false positive rate.\", \"table\": \"False positive rates and detection rates for different thresholds on the autonomous driving task.\\n| FPR | .5% | 1% | 2% | 5% | \\n|---|---|---|---|---|\\n| Detection Rate |0.95 | 0.98 | 0.99| 0.99|\"}", "{\"comment\": \"I would like to thank the authors for the detailed responses. However, some of my concerns remain.\\n\\n1. Please compare to [3] from a technical aspect, instead of simply the aspect of threat model.\\n2. A 14% reduction isn't enough for your attack to escape your detection algorithm.\\n3. I look forward to seeing different types of attacks rather than just replacing distance metric. \\n- There are currently at least three related work for poisoning RAG that I am aware of. The absence of open source code is not a reason for you not to compare. Targeting only one piece of work does not constitute so-called 'understanding' and will also greatly limit the scope of your paper.\\n- I'm curious as to why not just take inspiration from existing backdoor attacks (e.g., different trigger designs)?\"}", "{\"metareview\": \"This paper received three negative review and one positive review. Three reviewers pointed out one major weakness is about the strong assumption that malicious and normal queries are different. This makes the method cannot defend some easily crafted attack samples. There are other issues like presentation issues, costs, influence on normal utility, missing refs, etc. After an active rebuttal, some reviewers raised scores but no one champion this paper. The AC thinks the current version is still not ready for publication.\", \"additional_comments_on_reviewer_discussion\": \"I think this paper has a major concern of the strong assumption raised by three reviewers. This is the main reason that I think this paper is not ready to publish. Although reviewers use straightforward examples to illustrate and authors replied with some discussion, personally, I think maybe authors can add the attacker\\u2019s knowledge in the threat model to justify about the assumption. After all, no method can defend all attacks. Maybe there are some specific scenarios that the defenders somehow have the prior knowledge that can fulfill this assumption.\"}" ] }
2ZTnALzLyX
MotifExplainer: a Motif-based Graph Neural Network Explainer
[ "Zhaoning Yu", "Hongyang Gao" ]
We consider the explanation problem of Graph Neural Networks (GNNs). Most existing GNN explanation methods identify the most important edges or nodes but fail to consider substructures, which are more important for graph data. One method considering subgraphs tries to search all possible subgraphs and identifies the most significant ones. However, the subgraphs identified may not be recurrent or statistically important for interpretation. This work proposes a novel method, named MotifExplainer, to explain GNNs by identifying important motifs, which are recurrent and statistically significant patterns in graphs. Our proposed motif-based methods can provide better human-understandable explanations than methods based on nodes, edges, and regular subgraphs. Given an instance graph and a pre-trained GNN model, our method first extracts motifs in the graph using domain-specific motif extraction rules. Then, a motif embedding is encoded by feeding motifs into the pre-trained GNN. Finally, we employ an attention-based method to identify the most influential motifs as explanations for the prediction results. The empirical studies on both synthetic and real-world datasets demonstrate the effectiveness of our method.
[ "Instance-level explanation", "Graph Neural Network", "Motif" ]
https://openreview.net/pdf?id=2ZTnALzLyX
https://openreview.net/forum?id=2ZTnALzLyX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ydaVI9yzr3", "sUN4Idm6yO", "qWBrRjqQbm", "eKyRpZFo9R", "booFUm7z4g" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1729132699091, 1730719988651, 1730438817857, 1730720614715, 1732293048501 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8664/Reviewer_kHGc" ], [ "ICLR.cc/2025/Conference/Submission8664/Reviewer_Y5Xn" ], [ "ICLR.cc/2025/Conference/Submission8664/Reviewer_D26j" ], [ "ICLR.cc/2025/Conference/Submission8664/Reviewer_pzr6" ], [ "ICLR.cc/2025/Conference/Submission8664/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposed a simple but effective method to explain GNNs at the instance-level. It first identifies motifs by domain knowledge, then feeds each motif to the GNN to obtain the motif embedding. Finally, they build an attention-based network to obtain the attention weights of each motif in each graph instance, which are identified as the importance of the motifs.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Proposed method is simple but effective.\\n2. Good empirical results on Fidelity$-$ and Accuracy metrics compared with some old methods.\", \"weaknesses\": \"1. The baselines and related work are old. Recent works such as [r1,r2,r3,r4,r5] should be discussed and compared.\\n\\n2. Presentation is very poor. Citation style needs to be corrected. \\\\citep and \\\\citet should be properly used. Some tables are confusing. See questions. \\n\\n3. To extract motifs, domain knowledge are required. This make it impossible to be applied to a variety of real world tasks where the domain knowledge is unknown. \\n\\n4. The fidelity evaluated in this paper is different from the one used in the paper of SubgraphX. Why not use their metric? We'd like to see how MotifExplainer performs on common metrics. \\n\\n5. Feeding the motifs to GNNs, and training additional attention network will result in more computational cost. Can you also provide efficiency analysis?\\n\\n[r1] Zhang, et al. Gstarx: Explaining graph neural networks with structure-aware cooperative games. Advances in Neural Information Processing Systems, 35:19810\\u201319823, 2022. \\n\\n[r2] Rong, et al. \\\"Efficient gnn explanation via learning removal-based attribution.\\\" ACM Transactions on Knowledge Discovery from Data (2023).\\n\\n[r3] Lu, et al. \\\"GOAt: Explaining Graph Neural Networks via Graph Output Attribution.\\\" The Twelfth International Conference on Learning Representations, 2023. \\n\\n[r4] Li, et al. DAG matters! GFlownets enhanced explainer for graph neural networks. In The Eleventh International Conference on Learning Representations, 2023. \\n\\n[r5] Pereira, et al. Distill n\\u2019explain: explaining graph neural networks using simple surrogates. In International Conference on Artificial Intelligence and Statistics, pp. 6199\\u20136214. PMLR, 2023.\", \"questions\": \"1. What is the metric for the results shown in Table 5? Why for MUTAG, the smaller the better, but for the other two the larger the better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces MotifExplainer, a novel method for explaining Graph Neural Networks (GNNs) by identifying important motifs within a graph. MotifExplainer utilizes domain-specific motif extraction rules to identify these recurring substructures, creating motif embeddings through a pre-trained GNN\\u2019s feature extractor.\\nIn graph classification, MotifExplainer aggregates motif embeddings to create a new graph embedding, while in node classification, it focuses on motifs that affect a specific node\\u2019s embedding. An attention layer highlights the most relevant motifs for predictions, aiming for more interpretable, human-understandable explanations. The approach is more efficient than subgraph-based methods by reducing the search space, and experiments show it provides high-quality explanations with improved interpretability and computational efficiency.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. MotifExplainer focuses on statistically significant motifs rather than individual nodes or edges, providing more human-understandable explanations by highlighting recurring and functionally relevant substructures within graphs.\\n\\n2. By reducing the search space to motifs rather than all possible subgraphs, MotifExplainer is computationally more efficient, making it suitable for dense or large-scale graphs.\", \"weaknesses\": \"1. The motif-based explanation approach is already a well-known method, with other papers[1,2,3,4] actively utilizing motifs for explainability. This paper needs to demonstrate its unique advantages and the necessity of its approach compared to these previous works.\\n\\n- [1] Chen, Jialin, and Rex Ying. \\\"Tempme: Towards the explainability of temporal graph neural networks via motif discovery.\\\" Advances in Neural Information Processing Systems 36 (2023): 29005-29028.\\n- [2] Ding, Feng, et al. \\\"MEGA: Explaining Graph Neural Networks with Network Motifs.\\\" 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023.\\n- [3] Perotti, Alan, et al. \\\"Graphshap: Motif-based explanations for black-box graph classifiers.\\\" arXiv preprint arXiv:2202.08815 (2022).\\n- [4] Zhang, Shichang, et al. \\\"Motif-driven contrastive learning of graph representations.\\\" arXiv preprint arXiv:2012.12533 (2020).\\n\\n2. In this model, cycles are used to extract motifs without domain knowledge. However, the paper needs to justify the validity of the statement \\\"We consider combining cycles with more than two coincident nodes into a motif.\\\" Since motifs are central to this model, the model's validity hinges on how motifs are defined. The justification for the effectiveness of this approach in extracting motifs across various domains is insufficient.\\n\\n3. The authors claim that their model addresses efficiency issues when generating explanations for dense or large-scale graphs. However, in Section G, they conducted experiments only on the simplest molecular dataset, the MUTAG dataset, without testing on large-scale data. To demonstrate the model's practical utility, efficiency experiments should also be performed on larger graph datasets, such as the IMDB dataset used by the authors, as well as on even larger datasets.\\n\\n4. The model's performance heavily relies on motif extraction, which plays a critical role in explainability. It is necessary to show how performance varies with different motif extraction methods.\", \"questions\": \"The questions are listed in paper weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an explainable method for graph data by providing explanations using motifs, which represent subgraphs within the original graph that play a critical role in prediction. To generate explanations for a pretrained GNN model, they first extract motifs from the original graph using off-the-shelf extraction algorithms (e.g., BRICS, RECAP) or a proposed extraction method that generalizes by only considering cycles and edges as motifs. They then determine the importance of each motif by training an attention weight for each one. In experiments, they present both qualitative and quantitative results to demonstrate the superiority of their explanation method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Considering motifs within the original graph makes the explanation more human-understandable.\\n\\n2. The paper is well-written and easy to follow.\\n\\n3. The method is intuitive and easy to understand.\", \"weaknesses\": \"1. The method appears highly dependent on the motif-extraction algorithm, which is not a contribution of this paper. For example, in the case of the MUTAG dataset, without domain knowledge, if the proposed motif extraction algorithm (a cycle-based extraction method) is used, $NH_2$ and $NO_2$ are unlikely to be identified as motifs that play a critical role in prediction. I strongly recommend that the authors show which motifs are extracted depending on the motif-extraction algorithm and compare the performance of the method accordingly.\\n\\n2. PGIB [1], which has a closely related and similar motivation to this paper, should be included. PGIB also considers subgraphs (i.e., motifs) to provide explanations, sharing the same motivation of emphasizing the importance of motifs for explaining graph data. The paper should elaborate on its strengths compared to PGIB and include PGIB as a baseline in the experiments.\\n\\n[1] NeurIPS'23, Interpretable Prototype-based Graph Information Bottleneck\", \"questions\": \"1. Performance differences depending on the motif-extraction algorithm need to be shown.\\n\\n2. How does the proposed motif-extraction algorithm, which focuses on cycle structures, manage to extract $NO_2$ and $NH_2$ as motifs?\\n\\n3. Compared to PGIB, the current SOTA method that shares a similar motivation (i.e., considering motifs) with this paper, what are the strengths of this paper?\\n\\n4. The threshold $\\\\sigma / t$ appears to have a significant effect on the final explanation; however, it also seems heuristic without guidance on how to determine it. How can we set this threshold when working with real-world datasets, and how can we evaluate whether the threshold is properly set?\\n\\n5. Not all GNN prediction models may be explicitly divided into two parts: an embedder and a predictor. How can this method be applied in such cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a GNN explainer that uses motifs as the unit of explanation. By decomposing representations based on extracted motifs, it produces subgraph explanations. The proposed approach demonstrates its effectiveness across various datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Providing post-hoc explanations is crucial for training trustworthy GNNs.\", \"Using motifs can be a valuable approach for interpretability, offering substantial potential impact.\", \"The proposed method\\u2019s utility is supported through experiments on a range of datasets.\"], \"weaknesses\": [\"It is unclear how this approach improves over existing subgraph-based explanation models, such as GLGExplainer [2].\", \"The paper would benefit from comparisons with more recent XAI methods, such as D4Explainer [1] and MixupExplainer [3], along with subgraph-based explanation methods like GLGExplainer [2]. The most recent baseline in the experiment section of this paper was published in 2021.\"], \"references\": \"[1] Chen et al., \\\"D4Explainer: In-distribution Explanations of Graph Neural Network via Discrete Denoising Diffusion,\\\" NeurIPS 2023.\\n[2] Azzolin, \\\"Global Explainability of GNNs via Logic Combination of Learned Concepts,\\\" ICLR 2023.\\n[3] Zhang et al., \\\"MixupExplainer: Generalizing Explanations for Graph Neural Networks with Data Augmentation,\\\" KDD 2023.\", \"questions\": [\"How are the most important motifs determined, and which motifs were defined and used as explanations in the experiments?\", \"In Algorithm 1, where does h originate?\", \"The proposed model includes motif extraction in the efficiency study, which is generally quite slow. How can it outperform existing models in speed? Could you also provide a time complexity analysis?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
2ZK8zyIt7o
Improving Long-Text Alignment for Text-to-Image Diffusion Models
[ "Luping Liu", "Chao Du", "Tianyu Pang", "Zehan Wang", "Chongxuan Li", "Dong Xu" ]
The rapid advancement of text-to-image (T2I) diffusion models has enabled them to generate unprecedented results from given texts. However, as text inputs become longer, existing encoding methods like CLIP face limitations, and aligning the generated images with long texts becomes challenging. To tackle these issues, we propose LongAlign, which includes a segment-level encoding method for processing long texts and a decomposed preference optimization method for effective alignment training. For segment-level encoding, long texts are divided into multiple segments and processed separately. This method overcomes the maximum input length limits of pretrained encoding models. For preference optimization, we provide decomposed CLIP-based preference models to fine-tune diffusion models. Specifically, to utilize CLIP-based preference models for T2I alignment, we delve into their scoring mechanisms and find that the preference scores can be decomposed into two components: a text-relevant part that measures T2I alignment and a text-irrelevant part that assesses other visual aspects of human preference. Additionally, we find that the text-irrelevant part contributes to a common overfitting problem during fine-tuning. To address this, we propose a reweighting strategy that assigns different weights to these two components, thereby reducing overfitting and enhancing alignment. After fine-tuning $512 \\times 512$ Stable Diffusion (SD) v1.5 for about 20 hours using our method, the fine-tuned SD outperforms stronger foundation models in T2I alignment, such as PixArt-$\\alpha$ and Kandinsky v2.2. The code is available at https://github.com/luping-liu/LongAlign.
[ "Long Text Alignment", "Diffusion Models", "Preference Optimization", "Text-to-Image Generation" ]
Accept (Poster)
https://openreview.net/pdf?id=2ZK8zyIt7o
https://openreview.net/forum?id=2ZK8zyIt7o
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yCcuyoz3py", "wSSgqkKi7B", "tClVJWd1jI", "ra4aoZ2fGH", "rIp9qecYyt", "l60ofBb31x", "kax49zb3Nf", "k6SIkGUJF0", "jrgmuDhgKh", "fHfvuldIcu", "dxKsNmQoaR", "daVFsT6Huy", "dPFn7Fjb2G", "dL0o7xUy73", "bvxx0ig7pH", "ZiYOl72O7n", "XUXV7KkE3x", "QRJiSStwx4", "KQqSWEz6Mc", "JjKuDCcxuG", "EJdWP5J0Kf", "BcBJbJjkOF", "83lrSI8PMl", "6GD06cH6Hq", "5OJKeGR2Zu", "3hzB56jPeg", "3ZH3TEVhcS" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732765197089, 1732112421395, 1730652382443, 1732112557873, 1732633414432, 1733208686920, 1730681594847, 1732765507013, 1732279726293, 1732628205610, 1732680699014, 1732375593535, 1732113712509, 1732112789375, 1732631172564, 1730649259508, 1732231560118, 1732112287267, 1733204588094, 1732112502859, 1732536019328, 1737523751315, 1730628345379, 1734452044943, 1732565194190, 1732587164238, 1732112754189 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_GqtG" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_9HBG" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_9HBG" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_GqtG" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_hkae" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_Hfc6" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_9HBG" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_Hfc6" ], [ "ICLR.cc/2025/Conference/Submission6216/Area_Chair_t8Ur" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_hkae" ], [ "ICLR.cc/2025/Conference/Submission6216/Reviewer_9HBG" ], [ "ICLR.cc/2025/Conference/Submission6216/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"The author addresses my concerns, and based on the overall quality, I will maintain the final rate\"}", "{\"title\": \"Rebuttal by Authors [1/2]\", \"comment\": \"Thank you for your constructive feedback. Your insightful questions have improved our work very much. Below we respond to the comments in weaknesses (***W***) and questions (***Q***).\\n\\n---\\n\\n***W1 & Q1: The performance of the segment-level encoding strategy under different text length conditions.***\\n\\nIn $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 6}}$ of our revised paper, we assess the generation results for prompts of lengths about 15, 60, 120, 240, and 500 tokens using Denscore-O and VQAscore \\\\[1\\\\]. For more details on the dataset construction and evaluation process, please see Appendix C.3 of the new version. The evaluation results in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 6}}$ demonstrate that our method consistently outperforms current baselines.\\n\\nHere is a summary of the **VQAscore results** in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 6}}$ in Appendix C.3:\\n\\n| Token | SD-1.5 | SD-2 | PlayG-2 | PixArt-$\\\\\\\\alpha$ | KanD-2.2 | ELLA-1.5 | LongSD (ours) |\\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| 15 | 88.32 | 90.27 | 88.32 | 91.12 | 91.88 | 90.28 | **92.52** |\\n| 120 | 83.63 | 85.33 | 84.78 | 86.81 | 86.02 | 86.93 | **87.49** |\\n| 500 | 81.14 | 82.69 | 80.42 | 84.79 | 84.94 | 85.84 | **87.24** |\\n\\n\\\\[1\\\\] Lin Z, Pathak D, Li B, et al. Evaluating text-to-visual generation with image-to-text generation\\\\[C\\\\]//European Conference on Computer Vision. Springer, Cham, 2025: 366-384.\\n\\n---\\n\\n***W2 & Q2: Does the reweighting strategy have any quantitative results to demonstrate its effectiveness in reducing overfitting? Please provide detailed steps or pseudocode of it.***\\n\\nThe pseudocode of the reweighting strategy is \\n```python\\n# Calculate the text-unrelated component $V$\\nfor image, text in dataset:\\n text_emb_list.append(CLIP(text) / ||CLIP(text)||)\\ncommon_text_emb = mean(text_emb_list) / ||mean(text_emb_list)|| \\n\\n# Calculate the reweighted loss\\nimage_emb = CLIP(image) / ||CLIP(image)||\\ntext_emb = CLIP(text) / ||CLIP(text)||\\n# Equation 8\\n# Ratio controls the reweighted proportion\\ntext_emb_reweight = text_emb - (1 - ratio) * (text_emb * common_text_emb) * common_text_emb \\nloss = text_emb_reweight * image_emb \\n```\\nThe quantitative results for reducing overfitting are presented in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figure 5}}$. When the ratio approaches 1, the Denscore decreases while the FID (FID evaluates the distribution distance between the dataset and generated images) increases, indicating generation overfitting to the Denscore and a departure from the correct image distribution. At a ratio of 0.3, we can maintain a relatively stable FID, suggesting that generated images stay within the desired distribution. Visual results are available in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figures 17}}$ of the new version. At a ratio of 1.0, there is clear evidence of overfitting, as all images show similar patterns; whereas at a ratio of 0.3, the images align best with human preferences without obvious overfitting. \\nIn addition to the experimental results, our paper identifies the main cause of overfitting: when training diffusion models with CLIP-based preference models, the model tends to optimize towards the text-unrelated component $\\\\\\\\mathbf{V}$, resulting in generated images that appear similar regardless of the text input. Based on this finding, we choose to reweight the item $\\\\\\\\mathbf{V}$ during training.\\n\\n---\\n\\n***W3: Discuss the alignment effectiveness when dealing with texts that have complex contextual dependencies or require strong semantic understanding.***\\n\\nWe would like to clarify that aligning long texts and complex texts are two distinct issues. A single-sentence prompt can also have intricate dependencies, but our paper focuses more on the length aspect. To demonstrate the effectiveness of our current methods in handling complex dependencies and semantic understanding we use DPG-Bench \\\\[2\\\\], which includes test prompts for various categories such as entity, attribute, relation, and count. The new results are shown in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 5}}$ of Appendix C.3 of the new version. Compared to the baselines, we have also made obvious improvements. In addition, we agree that complex dependencies and semantic understanding are challenging tasks and remain far from being fully resolved. We have also included these points in the limitations section of the new version, with changes highlighted in blue for clarity.\\n\\nHere is a summary of $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 5}}$ in Appendix C.3:\\n\\n| Model | SD-2 | PlayG-2 | PixArt-$\\\\\\\\alpha$ | KanD-2.2 | SD-1.5 | ELLA-1.5 | LongSD (ours) |\\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| DPG-Bench | 68.09 | 74.54 | 71.11 | 70.12 | 63.18 | 74.91 | **77.58** |\\n\\n\\\\[2\\\\] Hu X, Wang R, Fang Y, et al. Ella: Equip diffusion models with llm for enhanced semantic alignment\\\\[J\\\\]. arXiv preprint arXiv:2403.05135, 2024\\\\.\"}", "{\"summary\": \"This paper proposes a novel method to improve text-to-image (T2I) diffusion models in handling long text inputs. Due to the input length limitations of existing encoders like CLIP, it becomes challenging to accurately align generated images with long texts. To address this issue, the authors propose a segment-level encoding strategy, which divides long texts into segments and encodes them separately, combined with a decomposed preference optimization method to reduce overfitting and enhance alignment. Experimental results show that the fine-tuned model surpasses several existing foundation models in long-text alignment, demonstrating significant improvements in handling long text inputs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"[1] It introduces a segment-level encoding strategy that effectively handles long text inputs by dividing and separately encoding segments, overcoming traditional model input limitations and enhancing text-to-image alignment.\\n[2] The preference model is innovatively decomposed into text-relevant and text-irrelevant components, with a reweighting strategy to reduce overfitting and improve alignment precision.\\n[3] The paper conducts extensive experiments, demonstrating significant improvements in long-text alignment over existing models like PixArt-\\u03b1 and Kandinsky v2.2, proving the method's effectiveness for complex text generation tasks.\", \"weaknesses\": \"[1] The paper proposes a segment-level encoding strategy to handle long texts but does not thoroughly validate the performance of this strategy under different text length conditions. For very short or very long texts, can the segment-level encoding still maintain the same alignment effectiveness? The lack of fine-grained comparative experiments makes it difficult to adequately demonstrate the applicability of segment-level encoding across a wide range of text lengths.\\n[2] The paper proposes a reweighting strategy to address overfitting, but lacks detailed experimental data to demonstrate its effectiveness, failing to adequately prove its specific impact on reducing overfitting.\\n[3] The segment-level encoding and preference optimization strategies proposed in this paper show excellent performance in the experiments, but lack an analysis of the method's limitations. It would be beneficial to discuss whether these segment-level encoding methods might lose part of their alignment effectiveness when dealing with texts that have complex contextual dependencies or require strong semantic understanding.\", \"questions\": \"[1] Does your proposed segment-level encoding strategy demonstrate significant effectiveness for texts of varying lengths? Specifically, how does the model perform with very short texts (fewer than 10 words) or very long texts (over 500 words)? Could you provide additional experiments to show comparative results under different text length conditions to verify the generalizability of the segment-level encoding strategy?\\n[2] You mentioned using a reweighting strategy to mitigate the model's overfitting issue, but the description of this process in the paper is rather brief. Could you provide detailed steps or pseudocode to explain the implementation of this strategy? Additionally, does this method have any quantitative results to demonstrate its effectiveness in reducing overfitting in specific scenarios? Could you include comparative data from the experiments to validate the impact of this strategy?\\n[3] How were the 5k images in the test set specifically selected from datasets like SAM and COCO2017?\\n[4] Could you briefly explain the selection of models like CLIP-H and HPSv2 in the experimental section of Chapter 5, as well as the chosen evaluation metrics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your positive feedback. Your insightful questions have improved our work very much. Below we respond to the comments in weaknesses (***W***) and questions (***Q***).\\n\\n---\\n\\n***W1: An important paper that is missed here is ELLA \\\\[Hu et al. 2024\\\\]. ELLA might be a valid comparison, and its adapter would likely have been a better alternative.***\\n\\n**Baseline:** Thank you for your advice. We have included ELLA as a new baseline in the updated version. Our method outperforms ELLA on both our original evaluation metrics and the new DPG-bench and VQAScore. Considering our training time is only 1/7 that of ELLA, this highlights the effectiveness and efficiency of our method for text alignment. The detailed results can be found in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 2 and 5}}$ of the new version. \\n**Adapter:** In the original paper, we selected the MLP as the adapter not for its performance but because its simplicity better emphasizes our main contribution. We acknowledge that ELLA's adapter can outperform a two-layer MLP, and we are currently working on combining our method with ELLA to create a stronger foundation model.\\n\\nHere is a summary of $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 2 and 5}}$:\\n\\n| Model | SD-1.5 | SD-2 | PlayG-2 | PixArt-$\\\\\\\\alpha$ | KanD-2.2 | ELLA-1.5 | LongSD (ours) |\\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| Denscore-O | 29.20 | 30.15 | 28.80 | 33.48 | 33.30 | 32.92 | **35.26** |\\n| VQAscore | 84.57 | 85.61 | 85.26 | 86.96 | 86.31 | 86.85 | **87.24** |\\n| DPG-Bench | 63.18 | 68.09 | 74.54 | 71.11 | 70.12 | 74.91 | **77.58** |\\n\\n---\\n\\n***W2: The paper ELLA also introduces DPG-Bench. Even on this 5k evaluation set, VQAScore might be a good option to consider.***\\n\\nThank you again for your advice. As mentioned above, we have added DPG-bench and utilized VQAscore on our 5k test set as two new evaluation metrics for text alignment. The results show strong consistency across three evaluation metrics (Denscore-O, VQAscore, and DPG-Bench) for text alignment, and our method consistently outperforms the baselines.\\n\\n---\\n\\n***W3: It would have been nicer to also have results with any of the newer, more performant models (e.g. SDXL).*** \\n\\nWe have added new foundation models, SDXL. Our method also extends the maximum input token limit for SDXL and improves long-text alignment. All training and testing settings match the SD1.5 version, but with 1024 resolutions. The evaluation results on 5k test set are shown below:\\n\\n| | FID | Denscore-O | Denscore | VAQscore | GPT4o |\\n| :---- | :---- | :---- | :---- | :---- | :---- |\\n| SDXL | 21.18 | 33.52 | 22.79 | 86.89 | 268 |\\n| longSDXL | 23.88 | 37.33 | 25.33 | 87.30 | 416 |\\n\\n---\\n\\n***Q1: I do not see other implementation details (dataset, other choices etc.) regarding the training of the Denscore model.***\\n\\nWe apologize for this. The Denscore training setup is briefly described at the end of the training part in Section 5.1. We maintain consistency with Pickscore across nearly all settings, except for the training objectives and datasets. Specifically, we train our Denscore using Pickscore\\u2019s [GitHub repository](https://github.com/yuvalkirstain/PickScore), making only modifications to the dataset loading and loss function code.\"}", "{\"comment\": \"Received with thanks!\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Dear Reviewer 9HBG,\\n\\nThank you for your feedback, and we appreciate your openness to the acceptance of our work. While we understand that the remaining time for responses is very limited, we would like to kindly ask for clarification on which specific concerns from other reviewers and what specific theoretical issues related to long text you are referring to. Since authors still have over a day to respond, we hope to have the opportunity to provide further clarification to address your concerns and improve your evaluation of our work during the Reviewer/AC discussions.\\n\\n\\\\\", \"regarding_your_statement\": \"> I would like to keep my rating to make it a boarderline, and it is fine for me if the paper is accepted considering its contribution on the model.\\n\\nWe understand that you may have reservations about our work and consider it borderline. However, we would like to note that a rating of **3** is not typically intended for borderline papers. The rating guidelines suggest using a score of **5** for \\\"*marginally below the acceptance threshold*\\\" and **6** for \\\"*marginally above the acceptance threshold*.\\\" **Balancing the overall rating rather than rating based on an individual reviewer's assessment might not align with the expectations of the program committee.**\\n\\n\\\\\\nNevertheless, we are truly grateful for your constructive review and thoughtful feedback. Please let us know if there are any remaining concerns we can address.\\n\\nBest regards,\\\\\\nThe Authors\"}", "{\"summary\": \"This paper presents a novel approach to enhance the alignment between long text descriptions and generated images in text-to-image diffusion models, introducing segment-level encoding to overcome input length limitations and decomposed preference optimization to mitigate overfitting and improve text-relevant alignment during fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The motivation for using preference models is well-founded, and the paper is well-written.\\n2. It is interesting to identify two distinct focuses within preference models, and the analysis provided is both reasonable and thorough.\", \"weaknesses\": \"weakness\\n1. I am unsure why multiple <sot> tokens are retained; regarding the retention or removal of tokens, a more detailed explanation or analysis is needed, as it currently leaves me confused.\\n2.After reweighting, whether there will be a noticeable difference in the aesthetic quality of the generated results (due to text-irrelevant components) remains unclear. For Appendix B.1, it would be beneficial to provide some visualizations of the outcomes from the two loss functions.\\n3. Segmenting to leverage CLIP's alignment effect is an intuitive innovation, but does this become irrelevant in light of the development of Vision-Language Models (VLMs)? Can the current innovation still contribute to VLMs?\\n4. On line 363, it mentions mitigating the risk of overfitting to Denscore. Could you clarify where the potential source of this overfitting lies?\", \"questions\": \"see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"na\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your support\", \"comment\": \"We appreciate your kind support! In our final revision, we will further improve the paper by incorporating the valuable insights gained from the rebuttal discussions. Thank you again!\"}", "{\"comment\": \"Thank you for raising the score\\\\! The evaluation results for SD1.5 and SDXL are summarized as follows:\\n\\n| | Denscore-O | Denscore | VAQscore | GPT4o |\\n| :---- | :---- | :---- | :---- | :---- |\\n| SD1.5 | 29.20 | 20.29 | 84.57 | 195 |\\n| longSD1.5 | 35.26 (+6.06) | 23.79 (+3.50) | 87.24 (+2.67) | 668 (+473) |\\n| SDXL | 33.52 | 22.79 | 86.89 | 268 |\\n| longSDXL | 37.33 (+3.81) | 25.33 (+2.54) | 87.30 (+0.41) | 416 (+148) |\\n\\nAccording to these results, we observe that (1) our methods significantly improve both SD1.5 and SDXL, demonstrating their robustness. (2) Among the two final fine-tuned versions, longSDXL clearly outperforms longSD1.5 in terms of long text alignment, indicating that a stronger foundation model achieves better performance limits. (3) The improvement is more pronounced in SD1.5, this is because the pretrained version of SDXL is already better than that of SD1.5. We have uploaded a new revision that includes these analyses in Appendix C.4, with changes highlighted in blue for clarity.\", \"title\": \"Thank you for your support\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your feedback\\\\! Below, we address the two new questions (***Q***).\\n\\n---\\n\\n***Q1: Please elaborate a case of the input data that is long but not complex (for better illustration, please also include an example of a long and complex input).***\\n\\nWe appreciate the opportunity to provide further clarification. The subtle difference between \\\"long\\\" and \\\"complex\\\" input texts lies in whether the text contains redundant information that can often be inferred from other parts of the text, versus containing rich details related to quantities, positions, relative relationships, or intricate dependencies.\", \"an_example_of_a_long_but_not_very_complex_input_text_could_be\": \"> *In this captivating photograph, a giant panda sits serenely amidst a lush green backdrop of bamboo, creating a striking contrast with its surroundings. The panda\\u2019s distinctive black-and-white fur stands out beautifully against the vibrant greenery, emphasizing its unique appearance. As it rests peacefully, the serene expression on its face reflects the tranquility of its environment. Surrounded by towering bamboo, the scene encapsulates the essence of this beloved species, showcasing the giant panda in its natural habitat, where its striking black-and-white fur harmonizes with the lush green backdrop.*\\n\\nIn this example, descriptive elements like \\\"black-and-white fur\\\" or \\\"lush green backdrop of bamboo\\\" could be inferred from the general context, as they repeat or reinforce ideas present elsewhere in the text.\", \"an_example_of_a_shorter_but_more_complex_input_text_could_be\": \"> *The image shows five interconnected ecosystems, each containing 12 plants and 8 animals.*\\n\\nWhile this text is shorter, it is more complex due to its intricate relationships and specific numerical details, which require precise alignment of ecosystems, plants, and animals.\\n\\nWhile the examples illustrate some distinctions between \\\"long\\\" and \\\"complex,\\\" we acknowledge that \\\"complexity\\\" is inherently abstract and hard to quantify. Moreover, we agree that longer texts often correlate with greater complexity due to the potential for richer semantic information. In response to this, our results on DPG-Bench ($\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 5}}$ in Appendix C.3) demonstrate that our method effectively handles input texts even when they involve intricate dependencies and complex semantics.\\n\\n---\\n\\n***Q2: Viewing that the LLMs may well-handle long text to some extent. Please brief a case where the proposed algorithm may win.***\\n\\nIn our paper, we observe that combining CLIP with LLMs like T5 as encoders is more effective than using T5 alone, as shown in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figure 4}}$. This effectiveness is probably due to contrastive pre-training encoders (e.g., CLIP) being specifically designed for text-image alignment, which potentially enhances the correspondence between text representations and generated images. Additional details are available in Section 5.3 of our paper.\\n\\nHowever, existing pretrained CLIP models have a maximum token limit. Our segment encoding enables us to extend this limit, allowing it to process much longer inputs (e.g., 250 to 500 tokens). Moreover, such long prompts can often be divided into several relatively independent segments, making our segment encoding both practical and logical. Supporting experiments about long inputs can be found in $\\\\\\\\textrm{\\\\\\\\color{blue}Table 6}$ of our paper.\\n\\nAdditionally, our whole method includes both segment encoding and decomposed preference optimization. Decomposed preference optimization is a training strategy that is beneficial for long-text alignment regardless of whether LLMs or CLIP are used as encoders. For example, the experiments of longSD in $\\\\\\\\textrm{\\\\\\\\color{blue}Table 2}$ use both T5 and CLIP as encoders; the fine-tuned version with decomposed preference optimization (S+R) performs significantly better than the version (S) without it.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your prompt feedback! We appreciate your time and thoughtful comments. Since a rating of 3 is still considered negative, we would be grateful if you could share any remaining concerns you might have. If you find our responses satisfactory, we kindly ask if you might consider revising your rating based on your updated evaluation of our work.\\n\\nThank you again for your valuable input.\"}", "{\"title\": \"Looking forward to further feedback\", \"comment\": \"Dear Reviewers,\\n\\nThank you again for your valuable comments and suggestions, which are really helpful for us. We have posted responses to the concerns raised in your reviews and included additional experiment results.\\n\\nWe totally understand that this is quite a busy period, so we deeply appreciate it if you could take some time to return further feedback on whether our responses solve your concerns. If there are any other comments, we will try our best to address them.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Summary of Paper Revision\", \"comment\": [\"We thank all reviewers for their constructive feedback, and we have responded to each reviewer individually. We have also uploaded a **Paper Revision** including additional results and illustrations:\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 11}$: Generation results of segment encoding using different numbers of the \\\\<sot\\\\> token and segmentation strategies. (For reviewers GqtG and Hfc6.)\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 12}$: Visual results of attention maps before and after training with our methods. (For reviewer Hfc6.)\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 13}$: Retrieval results using the text-irrelevant part $\\\\\\\\mathbf{V}$ from Denscore models trained with different loss functions. (For reviewer GqtG.)\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Figure 17}$: Visual results using reweighting strategies with various ratios. (For reviewers GqtG and 9HBG.)\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Table 2}$: The new version includes an updated baseline ELLA \\\\[1\\\\] and a new evaluation metric, VQAscore \\\\[2\\\\]. (For reviewer hkae.)\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Table 5}$: Evaluation results using DPG-Bench \\\\[1\\\\] for a wider range of prompts. (For reviewers 9HBG, hkae and Hfc6.)\", \"$\\\\\\\\textrm{\\\\\\\\color{blue}Table 6}$: Evaluation results using prompts of varying lengths: 15, 60, 120, 240, and 500 tokens. (For reviewers 9HBG and Hfc6.)\", \"\\\\[1\\\\] Hu X, Wang R, Fang Y, et al. Ella: Equip diffusion models with llm for enhanced semantic alignment\\\\[J\\\\]. arXiv preprint arXiv:2403.05135, 2024\\\\.\", \"\\\\[2\\\\] Lin Z, Pathak D, Li B, et al. Evaluating text-to-visual generation with image-to-text generation\\\\[C\\\\]//European Conference on Computer Vision. Springer, Cham, 2025: 366-384.\"]}", "{\"title\": \"Rebuttal by Authors [2/2]\", \"comment\": \"***W3.1: Testing with alternative segmentation designs could reveal whether simpler or more complex methods yield better alignment.***\\n\\nWe examine different segmentation strategies by comparing the approach of treating each sentence as a segment versus grouping several consecutive sentences into one segment, as long as the total token count remains under 77\\\\. Our new ablation study in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figure 11}}$ in the new version reveals no significant differences in the results.\\n\\n---\\n\\n***Q1: Please better explain why the proposed split-and-merge approach can address the long-text alignment issue.***\\n\\nWe would like to clarify that the split-and-merge approach alone does not address long-text alignment. Segment encoding is a method to overcome the input limitations of the encoder. To enhance long-text alignment, we need an additional two-stage training process that includes both supervised fine-tuning and preference optimization.\\n\\n---\\n\\n***Q2: Please provide the ablation study clearly.*** \\n\\nThank you for your suggestion. In response, we have revised the paper to include new experimental results and detailed ablation studies, with all updates clearly highlighted in blue. We hope these additions address your concerns regarding the clarity of the ablation studies. If there are specific aspects you would like us to elaborate on further, we would be happy to provide additional results or analyses upon your guidance.\"}", "{\"title\": \"Thank you for your support\", \"comment\": \"Thank you for your support. We appreciate your detailed feedback and valuable suggestions, which have been instrumental in improving our work. We have corrected the typos in Table 7 and are conducting a thorough review of the paper, including refining the overall flow to better integrate the additions made during the rebuttal. Thank you once again for your valuable input!\"}", "{\"summary\": \"This paper presents a method for enhancing the prompt following of text-to-image models specifically in the case of long prompts. The key contribution for tackling this problem is twofold: a) using a combination of CLIP and T5 encoders (as is becoming increasingly common these days e.g. SD3, Flux) b) the introduction of a preference model tailored for long prompts (Denscore) and applying reward fine-tuning with this Denscore model to enhance the prompt following of SD1.5 models for long prompts.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper tackles the crucial challenge of long prompt following in a very effective manner. Using a text encoder that can take the entire long prompt is a sound idea, and the Denscore preference model looks like a useful contribution in general.\\nApart from this, the reward fine-tuning with the orthogonal decomposition and the gradient reweighting looks like a good idea to deal with the \\\"reward-hacking\\\" problem.\\nFinally, the results also appear quite strong from the evaluations presented in the paper.\", \"weaknesses\": \"An important paper that is missed here is ELLA[Hu et al. 2024] for a couple of reasons. The first is that they propose replacing the CLIP encoder of SD1.5 with a T5-XL model and get significantly improved results (far superior numbers to those reported by Lavi-Bridge whose MLP adapter is used here). Therefore, this model might be a valid comparison (although the training cost of ELLA is a bit higher: 7 days with 8 A100s for SD1.5). Alternatively, the adapter provided by ELLA would have probably been a better alternative to the one used in the paper (from Lavi-Bridge).\\n\\nApart from the comparison/use of adapter, there's also DPG-Bench introduced in the paper which is a good benchmark for long prompt following (as compared to existing benchmarks like T2I-Compbench, DSG, TIFA etc.). Evaluating on DPG-Bench would be a useful addition since the 5k evaluation set of this paper is not fully understood and only a few models have been evaluated here. Additionally, from an evaluation standpoint, even on this 5k evaluation set, VQAScore[1] might be a good option to consider, since it uses a T5-XXL model which can take long prompts, and has shown some promising results for text-to-image evaluations. \\n\\nAnother aspect which is missing here is that all the experiments in this paper are conducted on SD1.5 which is a relatively older model, and there have been newer models in the past 2 years (e.g. SDXL). Therefore, it would have been nicer to also have results with any of the newer, more performant models, but I can understand that this might be a bit more computationally expensive (especially if the training has to be done at 1024 resolution). \\n\\nOverall, I dolike the paper, but I believe that incorporating these aspects (especially strengthening the paper with additional evaluations) could improve the paper significantly. \\n\\n[1] Lin et al. \\\"Evaluating Text-to-Visual Generation with Image-to-Text Generation\\\", ECCV 2024\", \"questions\": \"I apologize in advance if I missed it, but I do not really see clear details about the training of the Denscore model. B.1 has details on the training objectives and the fact that captions are generated by LLaVA-Next, but beyond this I do not see other implementation details (dataset, other choices etc.), so it would be great if the authors could point me to this.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the rebuttal.\", \"comment\": \"Thanks for the efforts on addressing the issues. I have raised the rating accordingly. Would you mind discussing the results on SDXL and compare the results with the results based on SD1.5.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your positive feedback. Your insightful questions have improved our work very much. Below we respond to the comments in weaknesses (***W***).\\n\\n---\\n\\n***W1: Why are multiple \\\\<sot\\\\> tokens retained?***\\n\\nThank you for raising this question. In the revised paper, we have added an ablation study ($\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figure 10 and 11}}$ in Appendix A) to explore three scenarios: (1) removing all \\\\<sot\\\\> tokens, (2) retaining a single \\\\<sot\\\\> token, and (3) retaining all \\\\<sot\\\\> tokens. Our findings indicate that removing all \\\\<sot\\\\> tokens leads to a significant performance drop, while keeping one or more \\\\<sot\\\\> tokens yields comparable results. This analysis is elaborated in Appendix A, with changes highlighted in blue for clarity.\\n\\n---\\n\\n***W1.2: The difference in the aesthetic quality of the generated results after reweighting.***\\n\\nIn the revised paper, we have provided multiple generation images ($\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figure 17}}$ in Appendix C.2) at different reweight ratios while keeping all other settings the same to illustrate differences in aesthetic quality. A ratio of 1 indicates the original preference loss, resulting in significant overfitting, where all images exhibit similar patterns regardless of the inputs. A ratio of 0 implies that the loss only considers the text-relevant part, leading to low image quality that does not align with human preferences. We observe a ratio of 0.3 yields the best visual quality among these options.\\n\\n---\\n\\n***W1.3: Some visualizations of the outcomes from the two loss functions of Denscore.***\\n\\nFor long-text alignment, the two losses are similar; however, for aesthetics, Equation 10 eliminates the influence of the first item on the text-irrelevant part $V$, allowing it to focus more on aesthetic aspects. In $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figure 13}}$ of Appendix B.1 in the revised paper, we present updated visual results for retrievals with the highest scores using the text-irrelevant part $V$ under two Denscore losses. We find that retrieval results from the text-irrelevant part $V$ trained with Equation 10 align more closely with human aesthetic preferences.\\n\\n---\\n\\n***W2: Can the current innovation still contribute to VLMs?*** \\n\\nAn important direction for VLM is to enable it to understand and generate visual input simultaneously \\\\[1,2\\\\]. Our methods could also be potentially used to train VLMs to generate images similarly to T2I diffusion models. Furthermore, in CLIP, there is a symmetrical relationship between image and text. By reversing their roles\\u2014using the image as input and generating text as output\\u2014we could also enhance VLM\\u2019s image captioning capabilities \\\\[3\\\\], presenting another interesting potential benefit.\\n\\n\\\\[1\\\\] Team C. Chameleon: Mixed-modal early-fusion foundation models\\\\[J\\\\]. arXiv preprint arXiv:2405.09818, 2024\\\\. \\n\\\\[2\\\\] Zhou C, Yu L, Babu A, et al. Transfusion: Predict the next token and diffuse images with one multi-modal model\\\\[J\\\\]. arXiv preprint arXiv:2408.11039, 2024\\\\. \\n\\\\[3\\\\] Liu H, Li C, Wu Q, et al. Visual instruction tuning\\\\[J\\\\]. Advances in neural information processing systems, 2024, 36\\\\.\\n\\n---\\n\\n***W3: Could you clarify where the potential source of this overfitting lies?***\\n\\nPrevious work \\\\[4\\\\] has observed the overfitting problem in preference optimization of diffusion models, while our paper identifies its main cause and solves it. We find that when using CLIP-based preference models to train diffusion models, the training loss can be divided into a text-related component $\\\\\\\\mathcal{C}\\\\_{P}^{\\\\\\\\bot}(p) \\\\* \\\\\\\\mathcal{C}\\\\_{X}$ and a text-unrelated component $\\\\\\\\mathbf{V} \\\\* \\\\\\\\mathcal{C}\\\\_{X}$. The text-unrelated component makes up a large part of the entire loss, and it is relatively easy for the model to learn since $\\\\\\\\mathbf{V}$ remains stable. As a result, the model tends to optimize towards $\\\\\\\\mathbf{V}$ regardless of the text input, leading all generated images to look similar. More information can be found in Section 4.2, and visual results can be found in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figures 6 and 17}}$ in the revised paper.\\n\\n\\\\[4\\\\] Wu X, Hao Y, Zhang M, et al. Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models\\\\[J\\\\]. arXiv preprint arXiv:2405.00760, 2024\\\\.\"}", "{\"comment\": \"Considering the responses and the concerns of other reviewers, I think the proposed solution does not well-answer the theoretical issues raised by long text, and its main contribution is to make the more become more trainable under the input of long text. I would like to keep my rating to make it a boarderline, and it is fine for me if the paper is accepted considering its contribution on the model.\"}", "{\"title\": \"Rebuttal by Authors [2/2]\", \"comment\": \"***Q3: How were the 5k images in the test set specifically selected from datasets like SAM and COCO2017?***\\n\\nWe first collect about 2 million images from open datasets, including 500k from SAM, 100k from COCO2017, 500k from LLaVA (a subset of the LAION/CC/SBU dataset), and 1 million from JourneyDB. After obtaining this new dataset, we randomly select 5k images from it as the test set without any human intervention. The 5k test set is not used in any training stage. More information about our dataset can be found in Section 5.1.\\n\\n---\\n\\n***Q4: Briefly explain the selection of CLIP-H and HPSv2, as well as the chosen evaluation metrics?*** \\n\\nThank you for your advice. We have added more information about these methods in Section 5.1 and 5.2 of the new version. The explanations are as follows:\\n* CLIP-H and HPSv2: To demonstrate that our analysis of the CLIP-based preference models is generally applicable, we compare four different CLIP-based models: the pretrained CLIP, the single-value preference models Pickscore and HPSv2, as well as our segment-level preference model, Denscore. \\n* FID: FID evaluates the distribution distance between the dataset and generated images. \\n* Denscore: Denscore assesses human preference for generated images, while Denscore-O and VQAscore focuses on the text alignment of those images. \\n* VQAscore: VQAscore \\\\[1\\\\] also focuses on the text alignment of generated images. \\n* DPG-bench: DPG-bench \\\\[2\\\\] is a general benchmark that includes test prompts for categories like entity, attribute, relation, and count.\"}", "{\"title\": \"Looking forward to further feedback\", \"comment\": \"Dear Reviewer 9HBG,\\n\\nThank you once again for your constructive feedback. We would like to kindly remind you that we have included additional experiments to:\\n\\n- Validate the performance of our approach under varying text-length conditions.\\n- Assess alignment effectiveness when handling complex texts.\\n- Provide detailed evidence of the effectiveness of the reweighting strategy in addressing overfitting.\\n\\nWe have also clarified the construction of the test set, the selection of models (CLIP-H and HPSv2), and the evaluation metrics used. \\n\\n---\\n\\nAs the discussion period is coming to a close in two days, we look forward to your response and would be happy to address any further comments or questions you may have.\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper presents a new approach for long text inputs on text-to-image (T2I) alignment since the clip text encoding only allows 77 tokens. The authors address limitations of existing encoding methods like CLIP by proposing segment-level encoding, where long texts are divided and processed in parts to bypass input length constraints. They further introduce a decomposed preference optimization that separates alignment-related and non-alignment components of CLIP-based preference scores. By reweighting these components, the method reduces overfitting, achieving superior T2I alignment after fine-tuning Stable Diffusion v1.5, outperforming models such as PixArt-\\u03b1 and Kandinsky v2.2.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1) Comprehensive Survey of Related Work.\\nThe paper presents a thorough and comprehensive survey of existing work in text-to-image (T2I) diffusion models, demonstrating an impressive grasp of the field. By delving deeply into previous approaches and their limitations, the authors effectively set the stage for their contributions, clarifying the gaps their method aims to fill. This background provides readers with valuable context and insight into the evolution of T2I models, particularly in handling longer, complex textual inputs. The comprehensive nature of this survey also reinforces the authors' understanding of the field's current challenges and strengths, building confidence in the relevance and timeliness of the proposed approach.\\n\\n(2) Importance of the Problem and a Reasonable, Well-Motivated Solution.\", \"the_authors_tackle_a_critical_issue_in_t2i_diffusion_models\": \"the difficulty of aligning generated images with longer text prompts. As the demand for complex, high-fidelity image generation grows, the ability to handle longer text inputs accurately is essential. The segmentation approach, paired with decomposed preference optimization, offers a well-motivated solution to this problem. Segmenting long text into manageable parts allows for better processing within the confines of existing encoding models, while the decomposed preference optimization fine-tunes the alignment, addressing the unique challenges posed by long prompts. The design choices reflect a reasonable and methodical approach to tackling these limitations, and the paper articulates the rationale for each component clearly. This structured approach suggests the authors have carefully considered the problem\\u2019s nuances, offering a solution that is not only effective but also grounded in sound methodology.\\n\\n(3) Demonstrated Superiority over State-of-the-Art Models.\\nOne of the paper's significant strengths is the demonstrated performance improvement over state-of-the-art models. Through rigorous experimentation, the authors show that their method surpasses current leading models like PixArt-\\u03b1 and Kandinsky v2.2 in T2I alignment, particularly for long-text prompts. By fine-tuning Stable Diffusion v1.5 with their approach, they achieve superior alignment, reducing overfitting while preserving text-relevant information in the generated images. This achievement underscores the potential of the proposed method to set a new benchmark for handling longer, more detailed textual inputs within T2I models. The improvement over established models validates the effectiveness of the segmentation and preference optimization strategy, indicating that this approach could meaningfully advance the state of the art in T2I diffusion modeling.\", \"weaknesses\": \"Although the proposed method solved an important issue, three major issues remain as listed below.\\n(1) Limitations and Ambiguities in the Segmentation and Merging Methodology. The segmentation and merging technique proposed in this work introduces a unique approach to handling longer text inputs but raises questions regarding its effectiveness and generalizability. When text inputs exceed 77 tokens, this method still encounters limitations, as it is fundamentally restricted by the underlying model\\u2019s capacity to handle \\u201clong\\u201d sequences since the split-and-merge process does not solve the problem. This constraint is particularly concerning as longer text inputs are common in real-world applications and often essential to producing detailed and contextually accurate image generations. The current approach of segmenting and then merging these sections seems like a workaround rather than a robust solution to handling extended texts, which may inherently limit its scalability and versatility. Furthermore, the mechanics of how segmentation and merging affect the underlying model's cross-attention dynamics remain underexplored. Cross-attention is a critical component in the alignment process between text and image features, and segmenting inputs may disrupt this alignment, especially as certain semantic connections might be lost or diluted across segmented inputs. Investigating the cross-attention differences between the original, unsegmented approach and the segment-and-merge methodology could shed light on any distortions introduced by this technique. A more thorough analysis of cross-attention\\u2019s role here could help refine segmentation methods to better retain textual coherence and improve image alignment fidelity, ultimately benefiting downstream performance.\\n\\n(2) Dependency on an Outdated Baseline Model (Stable Diffusion v1.5):\\nThe use of Stable Diffusion v1.5 as the primary evaluation model poses a significant limitation, given that the field has moved toward more advanced versions like SD-3 and SDXL. These newer versions incorporate improved architectures and training techniques, yielding enhanced performance, especially in terms of image quality and alignment with textual inputs. The reliance on an outdated model not only limits the relevance of the study\\u2019s results but also restricts the potential impact of the proposed method. Using v1.5 as the baseline reflects well on the approach\\u2019s applicability to older architectures, but it leaves unanswered questions about its efficacy on more sophisticated models that incorporate advancements in diffusion techniques, training scale, and multimodal alignment mechanisms. \\nMoreover, maintaining SD-1.5 as a standard for comparison could inadvertently hold back progress within the research community. As models continue to evolve, it\\u2019s essential to align benchmark tests with the latest technologies to ensure that methods are relevant and that advancements reflect real-world capabilities. Preliminary results from newer models, such as SD-3, have demonstrated considerable improvements in T2I alignment, indicating that the proposed method may benefit even further from these architectural updates. Testing on newer models would better position the approach in the context of current technological standards, ensuring that it remains relevant and applicable as diffusion models evolve. Future work should include evaluations on SD-3 and SDXL to substantiate claims of superiority over other methods in a more current setting. The test of SD-3 with the prompt used in the first example of Fig. 1 is shown below.\", \"https\": \"//ibb.co/CWyKQTZ\\n\\n(3) Over-reliance on Long Prompt Training and Lack of Generalizability Testing.\\nThe proposed method seems to rely heavily on training with long prompts, which could limit its flexibility and adaptability. While training on extended text inputs may enhance alignment for similar prompts, it raises concerns about the model's performance on shorter or more varied prompts. In real-world scenarios, prompt lengths and structures vary significantly, and a robust model should perform consistently across this spectrum. By focusing predominantly on long-prompt alignment, the current approach may overfit to specific input lengths, making it less effective for shorter or less detailed prompts where segmentation might not be necessary or where text segments are not sufficiently complex to benefit from this treatment.\\nTo address this potential limitation, it would be valuable to conduct experiments that vary prompt lengths and structures systematically, assessing whether the model\\u2019s performance holds across different scenarios. Additionally, testing with alternative segmentation designs could reveal whether simpler or more complex methods yield better alignment. These experiments would enhance our understanding of how adaptable the proposed method is, providing insights into its generalizability and robustness. The community would benefit from such insights, as they could guide further development of segmentation-based approaches for T2I tasks.\", \"questions\": \"(1) Please better explain why the proposed split-and-merge approach can address the long-text alignment issue.\\n(2) Please provide the ablation study clearly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a novel text encoder to deal with long conditioning texts in text-to-image diffusion models. Long texts are divided into segments and processed separately. The authors finetune a stable diffusion model based on this encoding, using a CLIP-based preference optimization method.\", \"strengths_mentioned_in_the_reviews_include\": \"well written paper, good review of related work, good analysis of preference model leading to decomposition, and significant improvements over prior work.\", \"weaknesses_included\": \"missing some baseline comparisons, experiments only used the SD1.5 as generative foundation model, missing evaluations as function of prompt length, there were also suggestions to include DPG-bench and VQAScore for evaluation.\", \"additional_comments_on_reviewer_discussion\": \"Based on the initial reviews the authors submitted a rebuttal and revised manuscript to address the points raised by the reviewers. The rebuttal addresses most of the concerns of the reviewers, which in the light of the response recommended accepting the paper (3x), the single reviewer that was not quite satisfied with the author response to their questions indicated they were not arguing against acceptance of the paper despite their rating. The AC therefore follows the majority recommendation of accepting the paper.\"}", "{\"comment\": \"I thank the authors for taking the time to provide the clarifications and updating the paper. I think the additions (ELLA, DPG-Bench, VQAScore, SDXL, and providing implementation details) makes the paper much stronger and I happy to recommend acceptance of the paper.\", \"a_minor_observation\": \"Tab. 7 has 2 columns named VQAScore, I believe the first one should be Denscore? In general, it would be a good idea to carefully check the paper for typos once especially given that there are a lot of additions to the paper.\"}", "{\"comment\": \"Thank you for the response. Two quick questions:\\n1. Please elaborate a case of the input data that is a long but not complex (for better illustration, please also include an example of a long and complex input). I am not convinced by the claim that a long text may be purely long, rather that having more text to deliver more meaningful semantic information.\\n\\n2. Viewing that the LLMs may well-handle long text to some extent. Please brief a case where the proposed algorithm may win.\"}", "{\"title\": \"Rebuttal by Authors [1/2]\", \"comment\": \"Thank you for your constructive feedback. Your insightful questions have improved our work very much. Below we respond to the comments in weaknesses (***W***) and questions (***Q***).\\n\\n---\\n\\n***W1: These sections seem like a workaround rather than a robust solution to handling extended texts.***\\n\\nWe agree that segment encoding alone cannot solve all long text encoding problems, but it can enhance any existing model with a maximum input limitation. In practice, as the total maximum length increases, data collection becomes increasingly challenging. For instance, generating 250-token prompts with LLAVA is straightforward, while collecting 1000-token prompts using existing models (e.g. LLMs) is significantly harder. Our method enables us to extend an encoder trained on a maximum of 250 tokens to accommodate 1000-token inputs. Additionally, a 1000-token prompt can often be divided into several relatively independent sections, each with fewer than 250 tokens, making segment encoding for these sections a practical and sensible choice. \\n\\n---\\n\\n***W1.2: The mechanics of how segmentation and merging affect the underlying model's cross-attention dynamics remain underexplored.*** \\n\\nThank you for your advice. In $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Figure 12}}$ of the new version, we compare the original SD1.5 with our fine-tuned version and find that their cross-attention map behaviors are similar, regardless of whether segment encoding is used. Specifically, when the prompt accurately labels each object and references them in subsequent sentences, even though segment encoding does not process multiple sentences in a single forward pass, the attention maps for individual objects across different segments remain consistent. For example, the prompt is structured as \\\"A dog xxx and a cat xxx. This dog xxx.\\\" Even though the details about this dog are divided into two segments in our setup, the model can identify the same dog and apply attributes from both segments accordingly. This shows that T2I models with segment encoding can identify and manage information across segments of long-text inputs.\\n\\n---\\n\\n***W2: Future work should include evaluations on SD-3 and SDXL to substantiate claims of superiority over other methods in a more current setting.*** \\n \\nWe have added new foundation models, SDXL. Our method also extends the maximum input token limit for SDXL and improves long text alignment. All training and testing settings match the SD1.5 version, but with 1024 resolutions. The evaluation results on 5k test set are shown below:\\n\\n| | FID | Denscore-O | Denscore | VAQscore | GPT4o |\\n| :---- | :---- | :---- | :---- | :---- | :---- |\\n| SDXL | 21.18 | 33.52 | 22.79 | 86.89 | 268 |\\n| longSDXL | 23.88 | 37.33 | 25.33 | 87.30 | 416 |\\n\\n---\\n\\n***W3: It raises concerns about the model's performance on shorter or more varied prompts.***\\n\\n**Diversified Length:** In $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 6}}$ of our revised paper, we assess the generation results for prompts of lengths about 15, 60, 120, 240, and 500 tokens using Denscore-O and VQAscore \\\\[1\\\\]. For more details on the dataset construction and evaluation process, please see Appendix C.3 of the new version. The evaluation results in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 6}}$ demonstrate that our method consistently outperforms current baselines.\\n\\nHere is a summary of the **VQAscore results** in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 6}}$ in Appendix C.3:\\n\\n| Token | SD-1.5 | SD-2 | PlayG-2 | PixArt-$\\\\\\\\alpha$ | KanD-2.2 | ELLA-1.5 | LongSD (ours) |\\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| 15 | 88.32 | 90.27 | 88.32 | 91.12 | 91.88 | 90.28 | **92.52** |\\n| 120 | 83.63 | 85.33 | 84.78 | 86.81 | 86.02 | 86.93 | **87.49** |\\n| 500 | 81.14 | 82.69 | 80.42 | 84.79 | 84.94 | 85.84 | **87.24** |\\n\\n**Diversified structure:** For prompts with diversified structures, we test our model on DPG-Bench \\\\[1\\\\], which includes test prompts for various categories such as entity, attribute, relation, and count. The new results are shown in $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 5}}$ of Appendix C.3. Compared to the baselines, we have also made obvious improvements. \\n\\nHere is a summary of $\\\\\\\\textcolor{blue}{\\\\\\\\textrm{Table 5}}$ in Appendix C.3:\\n\\n| Model | SD-2 | PlayG-2 | PixArt-$\\\\\\\\alpha$ | KanD-2.2 | SD-1.5 | ELLA-1.5 | LongSD (ours) |\\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| DPG-Bench | 68.09 | 74.54 | 71.11 | 70.12 | 63.18 | 74.91 | **77.58** |\\n\\n\\\\[1\\\\] Hu X, Wang R, Fang Y, et al. Ella: Equip diffusion models with llm for enhanced semantic alignment\\\\[J\\\\]. arXiv preprint arXiv:2403.05135, 2024\\\\.\"}" ] }
2YzeOOjvOi
DET: Learn to Solve the Tunnel Traveling Salesmen Problem using Double-Encoder Transformer
[ "Nan Zhao", "Jinshan Zhang", "Paul Weng", "Feng Wang", "Jianwei Yin" ]
We delve into a challenging variant of the Traveling Salesman Problem (TSP), namely tunnel TSP, which incorporates a new important constraint requiring the traversal of a prescribed set of tunnels. While traditional deep reinforcement learning (DRL) based neural TSP algorithms excel in optimizing routes without tunnel restrictions, they often struggle to achieve optimal performance in tunnel TSP due to the neglect of the crucial role of tunnel attributes during solution generation. To address this challenge, we propose a simple but effective and flexible technique, called Double-Encoder Transformer (DET), which can be seamlessly integrated into various existing autoregressive neural TSP solvers. DET processes node and tunnel location information separately and encodes them in two distinct feature spaces. Following an efficient fusion strategy, DET then integrates the encoded information from nodes and tunnels, harnessing their intricate interactions. Experimental validation demonstrates that integrating DET into existing autoregressive neural solvers significantly improves performance, enabling us to reduce the average optimality gap for tunnel TSP from 12.58% (of the previous Single-Encoder model) to 7.35%.
[ "Combinatorial Optimization; Transformer; Deep Reinforcement Learning; Tunnel TSP" ]
https://openreview.net/pdf?id=2YzeOOjvOi
https://openreview.net/forum?id=2YzeOOjvOi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tvvMKA2lfS", "oIQfK5dzNP", "kOweV6QWTo", "XribpqgXMD", "KgzAyvsTQq" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730657829209, 1730711252062, 1730602293045, 1730733554431, 1731661350574 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission437/Reviewer_xjCe" ], [ "ICLR.cc/2025/Conference/Submission437/Reviewer_ZRJH" ], [ "ICLR.cc/2025/Conference/Submission437/Reviewer_XdTB" ], [ "ICLR.cc/2025/Conference/Submission437/Reviewer_5byA" ], [ "ICLR.cc/2025/Conference/Submission437/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The work aims at a Transformer to solve the Tunnel Traveling Salesmen Problem. Previous single-encoder models are general to distinct vehicle routing tasks but in this work the Transformer is applicable to a specified variant. The performance is incrementally improved since the average optimality gap is still large, and Transformer's applicability obviously weakens.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors present a comprehensive evaluation of the model, demonstrating its effectiveness across diverse instances of the Tunnel TSP problem. The proposed approach shows versatility by successfully enhancing multiple neural solvers in addressing the Tunnel TSP, making the design plug-and-play. The paper is generally well-structured and clearly presented.\", \"weaknesses\": \"The technical novelty appears limited. The primary contribution centers on introducing a tunnel-specific encoder and corresponding decoding modifications to existing architectures, rather than presenting fundamentally new insights or methodologies.\\nThe method's applicability appears narrowly focused on Tunnel TSP, with insufficient exploration of its potential generalizability to broader combinatorial optimization problems. \\nThe evaluation relies exclusively on synthetic datasets, raising questions about the model's robustness to varying problem sizes and other data distributions. \\nThe computational complexity of the tunnel encoder appears comparable to the node encoder, potentially introducing significant overhead.\", \"questions\": \"How do the authors incorporate tunnel information for baselines such as POMO?\\nCould the authors provide a detailed computational analysis, including inference times and parameter counts, to better understand the practical implications of the additional neural network modules? \\nGiven that Tunnel TSP represents a specialized case of Clustered TSP, what are the technical challenges in extending the proposed framework to more general CTSP instances or related vehicle routing problems (e.g., pickup and delivery)? \\nCould the authors elaborate on concrete real-world applications where their framework provides practical advantages over existing approaches?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the tunnel traveling salesman problem using a deep reinforcement learning approach. It introduces the Double Encoder Transformer (DET) module, which encodes node and tunnel information through two separate encoders. The DET is compatible with existing neural solvers, allowing it to be utilized in a plug and play manner. Experimental results indicate that the proposed DET generally improves the performance of existing neural solvers for tunnel TSP.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Separating node and tunnel information into different encoding pipelines is a reasonable approach to improve the overall performance.\", \"The plug and play design of DET allows it to integrate smoothly with existing methods, as demonstrated in the experiments.\"], \"weaknesses\": [\"The technical contribution is somewhat limited to the DET module and the separation of the node and tunnel features.\", \"More explanation on some topics could be beneficial, for example,\", \"How are the baseline models trained? Do they explicitly receive tunnel information as inputs to their single-encoders? Or is it only implicitly incorporated through the cost/reward?\", \"What is the size of the test samples used to evaluate the models in Table 1? Reporting the variations across multiple training/testing runs would strengthen the claims about DET effectiveness, especially for claims such as guaranteed improvements (Line 468-469).\"], \"questions\": \"Please see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a challenging variant of Clustered Traveling Salesman Problem (CTSP), called tunnel TSP, which incorporates an important constraint requiring the traversal of a prescribed set of tunnels. The authors utilize deep reinforcement learning (DRL) for this problem, where the method is called Double Encoder Transformer (DET). It encodes node and tunnel information and can be applied to the existing method to solve tunnel TSP problems. The experimental results show the effectiveness of the DET model on various scaled problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"It clearly redefines tunnel TSP task with a notation TTSP-m-n, where there are a total of m nodes and n tunnels. In this setting, a node can be connected or standalone, and the cost is similar to the original version, but includes a fixed distance, D(S).\", \"The model utilizes two separate transformer encoders, which encode different information like node and tunnel. It enhances the overall performance via distinctly encoding tunnel information from graphs.\", \"The proposed method can effectively solve scale-variant tunnel TSP problems, which is hard for existing approaches.\"], \"weaknesses\": [\"It is unclear why the tunnel TSP task is important to systematically define and resolve. This task looks like a simple variation of CTSP. Please provide real-world examples to support its importance.\", \"The explanation lacks clarity on why two encoders are necessary and what specific motivation supports this design choice. In addition, the overall method seems really simple and lacks a strong sense of novelty.\", \"While this may be the first application of DRL to CTSP, its novelty is questionable. The proposed method appears to be a straightforward application, lacking clarity on any specific challenges or problems it addresses.\", \"There are no experiments comparing costs. Additionally, it is unclear how the existing models would perform if the size of these models were increased.\"], \"questions\": [\"Tunnel TSP looks like the simplest special form of CTSP. Then can the definition of CTSP easily cover or extend to the one of tunnel TSP too? The comparison between them needs to be specified for better understanding.\", \"Please provide clear motivations for the target task and proposed approaches. It will help the readers to understand the novelty and importance of this work.\", \"Is there any challenge when DRL is applied to CTSP?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper tackles a variation of the Traveling Salesman Problem (TSP) referred to as the tunnel TSP, introducing a model called the Double-Encoder Transformer (DET) to solve it. Unlike conventional TSPs, the tunnel TSP includes specific constraints for tunnel traversal, which traditional neural TSP solvers struggle to handle effectively. The proposed DET model enhances existing autoregressive neural TSP solvers by incorporating separate encoders for nodes and tunnels, allowing the model to more accurately process the unique interactions between these elements in the tunnel TSP. The authors demonstrate that integrating DET into established neural solvers (such as POMO) can reduce the optimality gap for tunnel TSP, enhancing solution quality.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The DET model's architecture, with separate encoders for nodes and tunnels, is a practical adaptation that enables better handling of tunnel-specific constraints within TSP solutions. The authors demonstrate measurable performance improvements over existing solvers for this problem variant, validating DET\\u2019s utility in improving optimality gaps in tunnel TSP instances.\", \"weaknesses\": \"While DET shows practical utility, its novelty is limited due to its reliance on well-established architectures (POMO) and scoring techniques (regret). The approach primarily focuses on adapting feature encoding without introducing significant new concepts in neural TSP solving or reinforcement learning. Moreover, the evaluation lacks comparisons with a broader range of TSP variants and solvers, which would better contextualize DET\\u2019s relative efficacy. Lastly, the choice of DET may result in increased computational overhead due to the dual encoder, which the paper does not address in terms of efficiency or resource requirements.\", \"questions\": \"How does the proposed DET model perform in other TSP variations or combinatorial optimization tasks that have similar clustering constraints?\\n\\nCould further feature augmentation, beyond tunnel information, bring improvements, or would such modifications saturate the model's performance gains?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
2Y6xGE1K60
Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding
[ "Ziyao Wang", "Muneeza Azmat", "Ang Li", "Raya Horesh", "Mikhail Yurochkin" ]
Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications.
[ "Large language model; Knowledge fusion; Speculative decoding" ]
Reject
https://openreview.net/pdf?id=2Y6xGE1K60
https://openreview.net/forum?id=2Y6xGE1K60
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y6HWCh2JsE", "wbNEoemMgt", "rrYgo7o5b1", "lHNJRegNRg", "j3KvYntZXx", "hJC2R6Q2J9", "alOQwawTqH", "WI8vbaaYbQ", "W7xKDxtAT7", "U48QRABaFS", "SYTxxyTUct", "QXW120PEph", "MCIXyxnOZa", "LW9ncrHP1B", "JElIlqDUJS", "IS2amtHlUx", "GWWy27GwSc", "G8cnib6OHB", "Dlww9bds4W", "CYp4H8ZsdC", "9BHG5oxvgl", "292FcnWiKs" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730172037727, 1732177378161, 1730589843390, 1732158549993, 1732315891347, 1732308234148, 1732553248443, 1737523942667, 1732553322624, 1730690281051, 1733110424152, 1732291525623, 1734774162118, 1732136403144, 1732549656028, 1732525423834, 1732235518096, 1732533406322, 1732136106898, 1732158500701, 1732178798032, 1732204670131 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8916/Reviewer_1scr" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Reviewer_hdS6" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Reviewer_1scr" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Reviewer_JwxZ" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Reviewer_JwxZ" ], [ "ICLR.cc/2025/Conference/Submission8916/Area_Chair_Rm3n" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Reviewer_JwxZ" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Area_Chair_Rm3n" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Authors" ], [ "ICLR.cc/2025/Conference/Submission8916/Reviewer_JwxZ" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces the Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion during decoding. Upon the idea of speculative decoding, the paper proposes two novel decision-making rules: Rule-based and Tree-Based. The method features 1) efficiency (parallel generation, no additional model training), 2) transferability across different domains and models with different tokenizers, and 3) interpretability. CoSD successfully improves baselines by up to 10%.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"interesting and inspiring idea on fusing knowledge at the decoding time\", \"The algorithm is clearly presented in this paper through both a workflow diagram and mathematical expressions.\", \"Both Rule-Based verification and Tree-Based verification are well-designed and both make sense to me.\"], \"weaknesses\": [\"I'm not sure if the goal of this algorithm is to A) achieve performance comparable to the assistant model but in a more efficient way, or if it's aimed at B) outperforming both the draft model and the assistant model individually (1+1>2). How do these objectives apply to the four scenarios of knowledge fusion discussed in section 4.1? If the goal is A, since the draft models in complementary knowledge fusion and catastrophic forgetting recovery scenarios are about the same size as the assistant model, and the algorithm involves autoregressive generation of the draft model, I doubt the algorithm improves efficiency. If the goal is B, I can't see improvement based on Table 2.\"], \"questions\": [\"\\\"the algorithm regenerate and re-verify iteratively until all tokens are accepted\\\" How many iterations does it take on average?\", \"during the training process of the decision tree, if neither the draft model's generation nor the assistant model's generation match the target, you drop the sample and continue the loop with i &larr; i+1. Any ideas of improvement other than simply dropping these samples?\", \"typos: line 287, \\\"tree\\\" to \\\"three\\\", \\\"drat\\\" to \\\"draft\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal from Authors\", \"comment\": \"We greatly appreciate the reviewer's recognition of the advantage of CoSD over the existing frameworks. Regarding the weaknesses raised, we address all the concerns in detail below.\\n\\n>W1: I'm not sure if the goal of this algorithm is to A) achieve performance comparable to the assistant model but in a more efficient way, or if it's aimed at B) outperforming both the draft model and the assistant model individually (1+1>2). How do these objectives apply to the four scenarios of knowledge fusion discussed in section 4.1? If the goal is A, since the draft models in complementary knowledge fusion and catastrophic forgetting recovery scenarios are about the same size as the assistant model, and the algorithm involves the autoregressive generation of the draft model, I doubt the algorithm improves efficiency. If the goal is B, I can't see improvement based on Table 2.\\n\\nWe thank the reviewer for clearly pointing out goals A and B. In fact, both A and B are our goals. We expect our algorithm to achieve different objectives when applied to different tasks and models. Specifically:\\n\\n(1) When the assistant model has a much larger parameter size than the draft model, we expect CoSD to achieve goal A, which achieves performance comparable to the assistant model but in a more efficient way. This is also the goal of speculative decoding, and we expect our CoSD algorithm to retain this functionality while achieving results closer to the assistant model. In our experiments, pair 4 in Table 2 shows the effectiveness of CoSD in this scenario:\\n\\n| ID | Draft | Assist | Spc.Dec. | Avg. Dec. | CoLLM | CoSD-R | CoSD-T |\\n|-------|-------|--------|----------|-----------|-------|--------|--------|\\n| Pair4 | 14.67 | 25.16 | 24.11 | 22.43 | 23.72 | **24.39** | 23.66 |\\n\\nWe display the **average score** across all 3 benchmarks in this table. It shows that using a 1b draft model and a 7b assistant model in CoSD inference can achieve similar performance to the single 7b model (24.39 -- 25.16) and be 2 times faster.\\n\\n(2) When the two models have similar sizes and complementary knowledge, we expect CoSD to have higher averaging performance across all the tasks. Pair 2 and pair 3 in Table 2 show the effectiveness of CoSD in this scenario:\\n\\n| ID | Draft | Assist | Spc.Dec. | Avg. Dec. | CoLLM | CoSD-R | CoSD-T |\\n|-------|-------|--------|----------|-----------|-------|--------|--------|\\n| Pair2 | 41.89 | 44.18 | 35.57 | 42.05 | 43.05 | **44.40** | 43.08 |\\n| Pair3 | 37.96 | 30.59 | 29.15 | 38.04 | 39.27 | **44.49** | 40.29 |\\n\\nCoSD-Rule outperforms both the draft and the assistant model in these two pairs and has significant improvement when the areas of expertise of the two models are entirely distinct (pair 3). \\n\\n(3) When the two models are of similar size but one is significantly stronger overall, CoSD can achieve the performance level of the stronger model but cannot save computational costs. However, considering that in real-world applications, we cannot predict the performance of different models across all tasks, we believe it is still worthwhile to attempt fusing the knowledge of two similarly sized models to ensure that the combined performance is close to the better model across various domains. For instance, pair 1 in Table 2 shows this scenario:\\n\\n| ID | Draft | Assist | Spc.Dec. | Avg. Dec. | CoLLM | CoSD-R | CoSD-T |\\n|-------|-------|--------|----------|-----------|-------|--------|--------|\\n| Pair1 | 38.65 | 48.98 | 45.36 | 44.87 | 44.51 | **47.26** | 45.49 |\\n\\nCoSD still outperforms all the baselines. \\n\\nHere, we want to emphasize that the superiority of our approach lies in its ability to consistently achieve excellent cross-domain performance, regardless of the type or performance of the given models. By using the CoSD algorithm, we eliminate the need for users to evaluate and select models or consider goals like A and B. Instead, the system can automatically adapt to either efficient inference tasks or knowledge fusion tasks. This represents a significant advantage and contribution compared to previous works focused solely on single objectives like A: efficient inference (e.g., speculative decoding) or B: knowledge integration (e.g., CoLLM).\"}", "{\"summary\": \"This paper introduces a novel collaborative speculative decoding algorithm which can efficiently fuse the knowledge from different LLMs during inference. The experiment setting is quite interesting and includes different types: complementary knowledge fusion, catastrophic forgetting recovery, capacity imbalance and different tokenizers. The results are better than different baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper provides an interesting perspective to fuse knowledge between LLMs using speculative decoding, which leverages the strengths of different LLMs while still keeping the efficiency.\\n2. The experiment setting is interesting, which tries complementary knowledge fusion, catastrophic forgetting recovery, capacity imbalance and different tokenizers.\", \"weaknesses\": \"1. The paper only does the experiment in each pair of the LLMs. It would be interesting to see more LLMs collaboratively fuse knowledge.\\n2. It would be better to show more details about the limitations of the proposed method and show some error analysis.\", \"questions\": \"1. Is the proposed algorithm suitable for collaboration among multiple LLMs? What will be the potential challenges?\\n2. Can you explain more about the limitations of the current method? I'm curious when it doesn't work well.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal from Authors\", \"comment\": \"We answer the questions below.\\n\\n>Q1: Is the proposed algorithm suitable for collaboration among multiple LLMs? What will be the potential challenges?\\n\\nThe experiments in W1 show that CoSD is suitable for collaboration among multiple LLMs, especially CoSD-Tree. For the CoSD-Rule, the potential challenge is how to define the rule of replacement when the LLMs predict multiple different tokens. For the CoSD-Tree, the potential challenge is how to reduce the number of substitute tokens to improve efficiency. We will add all the experiments and related discussions to the paper. Please let us know if you have more valuable suggestions.\\n\\n>Q2: Can you explain more about the limitations of the current method? I'm curious when it doesn't work well.\\n\\nWe discuss the limitations and the scenarios when CoSD doesn't work well in W1 and W2. Simply put, we summarize it into the following two points:\\n\\n(1) When the two collaborating models are of similar size and one significantly outperforms the other, CoSD offers no advantage over using only the better model. Naturally, this limitation exists in any work involving model collaboration.\\n\\n(2) The CoSD algorithm cannot theoretically guarantee that the replaced token is always better than the discarded one. We can only select the output of the more confident model to maximize the likelihood of choosing a better token.\\n\\nWe will add a limitation section to the paper to thoroughly discuss these potential limitations.\"}", "{\"comment\": \"Thanks for clarifying these points. I'd suggest directly highlighting them in the next version. I will keep my score.\"}", "{\"title\": \"Response from Authors\", \"comment\": \"Thank you for the comments!\\n\\n>About the average performance.\\n\\nWe believe we are in agreement regarding the limitations of the \\u201caverage performance\\u201d metric. Indeed we do not include it in our paper and provide more fine-grained per-task results. The reason we brought up average performance was to respond to one of the weaknesses you indicated in your initial review, i.e., our method does not outperform best of draft/assistant models in all cases. Broadly speaking, no method is typically the best in all cases. However, a good method tends to be the best choice more often than not, making it a robust option in practice, especially when the performance of all candidate methods is not known beforehand. We simply tried to point out that, while acknowledging that our method is not always the best, it is a good method as it performs well more often than not. Average performance was simply a way to quantify \\u201cmore often than not\\u201d. If you recommend other metrics or experiments that could help address your concern, please let us know.\\n\\n>About the inference speed.\\n\\nWe apologize for the confusion. To clarify, the inference speed gains are measured with respect to the vanilla decoding (simply using the assistant model). Both our method and speculative decoding improve the inference speed by 2 to 3 times in comparison to vanilla decoding. The inference speed of our method and speculative decoding are approximately the same. However, our method outperforms speculative decoding in terms of performance.\"}", "{\"comment\": \"Thank you for your valuable feedback. Following your valuable suggestions, we have added the clarifications and additional experiments in the rebuttal to the paper and updated the experiments, limitations and appendix. We highlight the goal of our algorithm and discuss the limitation in the paper and extend the paper to 10 pages.\\n\\nWe would greatly appreciate it if you could review these additions and consider raising your score based on the improvements. Please don\\u2019t hesitate to let us know if there are any other suggestions or areas where we could further enhance the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reminder\", \"comment\": \"Thank you for your feedback so far. We have provided a detailed rebuttal and an updated draft addressing your comments with additional results. Please feel free to share any further questions or thoughts before the discussion period ends.\"}", "{\"summary\": \"This paper proposes Collaborative Speculative Decoding (CoSD) that fuses LLM knowledge at test time. The algorithm employs a draft model to generate initial response sequences and a rule-based or decision tree to decide when to leverage an assistant model to improve the drafts.\\nThe authors have conducted experiments using different pairs of LLMs and under various experimental setups.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors have put effort in experimenting their proposed framework under different setups, including various draft and assistant models, different simulated scenarios, etc.\", \"The proposed framework gains some advantage over the existing framework of Co-LLM in certain scenarios.\"], \"weaknesses\": \"- In Table 2, I notice that in most cases, the fused model underperforms the draft model and the assistant model.\\nFor instance, for Pair 1, none of the fusion methods outperform both draft and assistant model for GSM8K, HumanEval; for Pair 2, none of the fusion methods consistently outperform both draft and assistant model for GSM8K, and MMLU.\\nThen I wonder what is the point of fusing knowledge in these cases if we can simply adopt one model instead of the other?\\n\\n- It seems that for Pair 3, CoSD-Rule performs exceptionally well on GSM8K, yielding 45.47 while the draft and assistant models yield 25.01 and 35.43, which is very different from the performance patterns for this same pair on other datasets such as MMLU and also other pairs. Could you give more insights into such a result? Could you present some examples that CoSD-Rule excel at under this situation that cannot be addressed by either the draft nor the assistant model?\", \"questions\": [\"In 3.2 Verification, for Tree-Based Verification, you claim to use benchmark datasets such as GSM8K to train the classifier, but then in your test, you incorporate the GSM8K dataset as well. Is there any information leakage in terms of that you are training your verifier on the test set so that it gains advantage over other models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from Authors\", \"comment\": \"Thank you for your comments!\\n\\nFollowing the suggestion from the reviewer, we have conducted experiments on all the benchmarks in the tinyBenchmark except AlpacaEval (we use it to train the decision tree) for pair 1, here are the new results:\", \"pair_1\": \"| Benchmarks | Draft | Assist. | Spc. Dec. | Avg. Dec. | CoLLM | CoSD-R | CoSD-T |\\n|------------|-----|-----------------|---------|---------|---------|---------|---------|\\n| **MMLU** | 54.81 | 52.02 | 53.20 | 52.31 | 55.25 | 56.97 | 58.37 |\\n| **GSM8K** | 39.79 | 51.02 | 43.85 | 43.89 | 41.04 | 45.72 | 41.89 |\\n| **HumanEval** | 21.34 | 43.90 | 39.02 | 38.41 | 37.25 | 39.10 | 36.22 |\\n| **Hellaswag** | 87.17 | 81.99 | 82.52 | 86.39 | 85.64 | 86.96 | 86.84 |\\n| **TruthfulQA** | 64.92 | 59.51 | 60.61 | 64.11 | 63.85 | 65.40 | 64.14 |\\n| **Winograde** | 80.20 | 78.02 | 79.77 | 80.42 | 80.31 | 80.52 | 80.69 | \\n| **Avg.** | 58.04 | 61.08 | 59.83 | 60.92 | 60.56 | **62.45** | 61.36 |\\n\\nIt can be seen that CoSD-R and CoSD-T consistently outperform the baseline on all newly added benchmarks and achieve the **best results in the average score**. This was achieved even on pair 1, where CoSD did not perform as well, so we believe this can also be realized in other pairs. We are currently conducting experiments on other pairs, and we will update the experimental table in the final version of the paper to include all the benchmarks.\\n\\nBesides, we also tried new experiments to switch the draft model and the assistant model, here are the results in pair 1:\\n\\n| Benchmarks | Draft | Assist. | Spc. Dec. | CoLLM | CoSD-R | CoSD-T |\\n|------------|-----|-----------------|---------|---------|---------|---------|\\n| **MMLU** | 52.02 | 54.81 | 54.23 | 53.92 | 54.17 | 56.18 |\\n| **GSM8K** | 51.02| 39.79 | 41.28 | 45.75 | 49.52 | 48.88 |\\n| **HumanEval** | 43.90 | 21.34 | 25.17 | 36.90 | 42.31 | 43.62 |\\n| **Avg** | 48.98 | 38.65 | 40.23 | 45.52 | 48.67 | **49.56** |\\n\\nWe find that if we use the better model as the draft model, the CoSD performance will be better. This provides valuable guidance for the application of CoSD.\\n\\nWhile more experiments are always nice to have, we believe we have already presented a reasonable amount of empirical evidence supporting our key claims: Our method consistently outperforms all other LLM fusion baselines such as Speculative Decoding, Avg Decoding, CoLLM, etc, which are recent works published in NeurIPS 2024, ICML 2024, and ACL 2024, thus we believe our method will be of interest to the ICLR community. We sincerely hope that our detailed responses, additional experiments, and clarified claims have provided the reviewers with a better understanding of our work. We would be truly grateful if these efforts could be reflected in a more favorable assessment and higher scores. Thank you for your thoughtful consideration.\"}", "{\"comment\": \"Thanks for the response.\\n\\n- > Averaging scores is a widely used metric for evaluating model knowledge fusion and overall performance.\\n\\nThis only applies when you have lots more datasets and want to show the general trend. Since you have conducted experiments on **three** datasets, averaging them clearly hides many critical details, just like the detail I pointed out in my earlier response that your method lags behind the baseline by 6 and 10 points.\\n\\nIn other words, taking average is clearly not robust when there are few datasets (even if there are many, it is still not encouraged). Say if you include the fourth dataset and on that dataset, your method significantly outperforms the baseline, when taking average, you would conclude that your method is much superior to the baseline; if say there is the fifth, and your method underperforms the baseline, the conclusion will then flip immediately when you only consider the average.\\n\\nI wish the authors can understand that such a practice shall not be encouraged. Even people conduct such practices, especially people from industry, we shall not lower our standard as researchers. Please think twice before adopting \\\"widely used metric\\\" and common practice.\\n\\n- Regarding the inference speed, \\n> Here the inference speed up is compared to only using the 7b assistant model for inference, which is also the baseline in speculative decoding works. According to Table 5, we have the similar efficient inference performance compared to our method.\\n\\nHere, you acknowledge that you have the similar efficient inference performance compared to our method, which contradicts to what you suggest before *CoSD can significantly improve the inference speed by 2 to 3 times*.\"}", "{\"metareview\": \"The paper proposes Collaborative Speculative Decoding (CoSD), a method for efficient knowledge fusion between LLMs during inference using rule-based and tree-based verification strategies. The approach aims to both improve efficiency when combining models of different sizes and enhance performance through complementary knowledge fusion.\\n\\nThe reviewers value the comprehensive experimental analysis across different model pairs and scenarios. Through extensive discussion, the authors clarified several reviewers\\u2019 questions. However, some concerns remain: 1) performance improvements are not consistent across all benchmarks, with only three test datasets making it difficult to draw robust conclusions, 2) efficiency claims compared to baselines need better substantiation, and 3) the averaging of scores across benchmarks may mask important per-task performance variations.\\n\\nWhile the authors have engaged constructively with reviewer feedback and provided additional experiments and analysis, I recommend rejection as more comprehensive evaluation across a broader set of benchmarks is needed to convincingly demonstrate the method's effectiveness and generality.\", \"additional_comments_on_reviewer_discussion\": \"Check above.\"}", "{\"title\": \"Rebuttal from Authors\", \"comment\": \"We answer the question here:\\n\\n>Q1:In 3.2 Verification, for Tree-Based Verification, you claim to use benchmark datasets such as GSM8K to train the classifier, but then in your test, you incorporate the GSM8K dataset as well. Is there any information leakage in terms of that you are training your verifier on the test set so that it gains advantage over other models?\\n\\nWe clarify that we did not use GSM8K for training the decision tree in the main experiments. As mentioned in lines 297-298, we use three samples from the **AlpacaEval** dataset to train the decision tree, which does not overlap with the benchmarks. We provide the experiment results on how the training data of the verifier can affect the final results in Table 4, which shows that there are some advantages if we train the decision tree on the same dataset as the benchmark. However, this concern does not arise in our main experiments in Table 2.\\n\\nIn addition, for an instruction-output pair with x output tokens, we can generate up to x samples for training the decision tree. Considering that decision trees require a small amount of training data, we can use very little data to generate the decision tree training set. For instance, we only use 10 samples in MMLU and 3 for other datasets as training samples in Table 4.\"}", "{\"comment\": \"I appreciate the author's effort throughout the discussion period.\\n\\nI am still not fully convinced by the performance gains demonstrated in the existing experiments, especially that there are only three benchmarks involved in the process. I am curious to see what happens if more general benchmarks get involved (e.g. Math, etc). Currently, we have seen that for a single pair, there is one out of three cases the performance drops significantly, there is also a case that the performance goes up, therefore we do not know in general whether your method would help or not. I think you will have much stronger support if say you have tested eleven benchmarks (you do not need to stick to this number exactly), and ten out of the eleven witness a performance boost, only one out of them suffers performance decline.\\n\\nTherefore, I would keep my score for now.\"}", "{\"title\": \"Detailed case studies added to the updated paper\", \"comment\": \"Thank you for your valuable feedback. Following your valuable suggestions, we have incorporated three detailed case study samples into the paper, complete with annotations and in-depth analysis ('case studies' in the experiment section and two big tables). We have expanded the paper to provide a more comprehensive presentation of our work.\\n\\nWe would greatly appreciate it if you could review these additions and consider raising your score based on the improvements. Please don\\u2019t hesitate to let us know if there are any other suggestions or areas where we could further enhance the paper.\"}", "{\"title\": \"Response from Authors\", \"comment\": \"Thank you for pointing this out. Our reply addresses the concerns below:\\n\\n>For the first weakness point, I disagree with the authors' claim. First, I think averaging scores is a biased practice, as it clearly hides the fact that in Table 2, for Pair 1, GSM8K, CoSD-Rule and CoSD-Tree achieve 45.72 and 41.89, respectively, lagging behind the Assistant model (51.02) by around 6 and 10 points. This clearly fails your aim to approximate the performance of the better-performing model as closely as possible.\\n\\nAveraging scores is a widely used metric for evaluating model knowledge fusion and overall performance. It is adopted in many benchmarks to evaluate the comprehensive model capabilities, such as the Hugging Face LLM leaderboard.\\n\\nWe clarify that the strength of our approach lies in its consistent cross-domain performance, without any prior knowledge of fused model performance. Consider that users do not always know the performance of their LLMs and LLM APIs on all the benchmarks, by applying the CoSD algorithm, they no longer need to evaluate or select models manually based on all the benchmark performance. For instance, in Table 2 pair 1, the users may not know in advance that the assistant model is much better than the draft model in all benchmarks, but by applying CoSD-Rule, the user will get improved MMLU score and much higher GSM8K and HumanEval scores than the base model, and comparable average score to the assistant model.\\n\\nOf course, we acknowledge that our approach cannot achieve the ideal scenario of completely matching the accuracy of the better-performing model across all benchmarks. It can only get closer to it compared to the baseline. This is a limitation of our method. We will soon add a limitation section to the paper to discuss this limitation and will let you know once we update the paper.\\n\\n>In addition, in the paper, for Table 5, I do not see significant token latency improvement over speculative decoding. Your claim that significantly improve the inference speed by 2 to 3 times seems to be ungrounded, what baselines are you comparing to? And what are the results?\\n\\nHere the inference speed up is compared to only using the 7b assistant model for inference, which is also the baseline in speculative decoding works. According to Table 5, we have the similar efficient inference performance compared to our method. However, as shown in Table 2, its knowledge fusion performance is inferior to ours. In fact, SD itself lacks strong complementary knowledge fusion capabilities and can only make the system perform approximately like the assistant model (as evidenced by Table 2, Pair 3).\\n\\n>Thanks for providing the example, I think the example itself is very interesting. I suggest when the authors iterate the paper, you can focus more on studying how models with complementary capabilities help each other rather than focusing on the performance improvement, which seems to be not guaranteed and in certain cases (the example I gave), is opposite to the authors' expectation.\\n\\nWe are glad to see that the reviewer finds our example interesting. Based on your and other reviewers' suggestions, we are drawing a figure containing several samples generated by CoSD, including the examples we provide for you. These samples intuitively illustrate how CoSD makes the model cooperate by annotating the draft tokens, the replaced draft tokens, and the assistant tokens. This will help us and readers have a better understanding that how the assistant model help polishing the draft. Also as the reviewer suggested, we prepared 2 \\\"bad examples\\\". One of them we also showed with reviewer hdS6, indicating that sometimes CoSD will also reject some good draft tokens and replace them with bad assistant tokens. We will update this part in the paper as soon as possible.\", \"here_is_one_example_instruction_from_mmlu_dataset_that_cosd_drop_the_correct_answer\": \"Rowena can paint a room in $14$ hours, while Ruby can paint it in $6$ hours. If Rowena paints for $x$ hours and Ruby paints for $y$ hours, they will finish half of the painting, while if Rowena paints for $y$ hours and Ruby paints for $x$ hours they will paint the whole room. Find the ordered pair $(x,y)$.\\nA. $(\\\\frac{11}{10}, \\\\frac{11}{10})$\\nB. $(\\\\frac{231}{20}, \\\\frac{21}{20})$\\nC. $(\\\\frac{231}{40}, \\\\frac{21}{40})$\\nD. $(1,1)$\\n\\nThe draft model gave the correct answer C with 0.32 probability but was rejected by the answer D from the assistant model with 0.51 probability. This example illustrates that low probability does not necessarily indicate an incorrect token. Therefore, replacing tokens with higher-confidence alternatives is merely a heuristic algorithm and is subject to error. After all, during the inference stage and model deployment, the ground truth is unknown. This example can help show that sometimes the performance improvement is not guaranteed. We will add this sample to the figure and let you know once we update the paper.\"}", "{\"title\": \"Action Required: Respond to Author Rebuttals - Nov 27\", \"comment\": \"Dear ICLR Reviewers,\\n\\nThe author discussion phase is ending soon. Please promptly review and respond to author rebuttals for your assigned papers. Your engagement is critical for the decision-making process.\", \"deadlines\": [\"November 26: Last day for reviewers to ask questions to authors.\", \"November 27: Last day for authors to respond to reviewers.\", \"November 28 - December 10: Reviewer and area chair discussion phase.\", \"Thank you for your timely attention to this matter.\"]}", "{\"title\": \"Rebuttal from Authors\", \"comment\": \"We greatly appreciate the reviewer's recognition of the advantage of CoSD over the existing frameworks. Regarding the weaknesses raised, we address all the concerns in detail below.\\n\\n>W1: In Table 2, I notice that in most cases, the fused model underperforms the draft model and the assistant model. For instance, ... Then I wonder what is the point of fusing knowledge in these cases if we can simply adopt one model instead of the other.\\n\\nThe reviewer mentioned that in most cases, the fused model underperforms the draft model and the assistant model, we disagree with this conclusion. Here we list the averaging accuracy across all 3 benchmarks of Table 2:\\n\\nID| Draft|Assist|Spc.Dec.|Avg. Dec.|CoLLM|CoSD-R|CoSD-T| \\n----|-----|-----|-----|-----|-----|-----|-----|\\n**Pair1**|38.65|48.98|45.36|44.87|44.51|**47.26**|45.49|\\n**Pair2**|41.89|44.18|35.57|42.05|43.05|**44.40**|43.08|\\n**Pair3**|37.96|30.59|29.15|38.04|39.27|**44.49**|40.29|\\n**Pair4**|14.67|25.16|24.11|22.43|23.72|**24.39**|23.66|\\n\\nFor pair 2 and pair 3, CoSD outperforms both the draft model and the assistant model, and all the baselines. It means that applying CoSD can achieve better and more balanced performance across multi-domain tasks.\\n\\nFor pair 4, CoSD achieves comparable performance to the assistant model and outperforms all baselines. Considering that the assistant model is much larger than the draft model, CoSD can significantly improve the inference speed by 2 to 3 times.\\n\\nFor pair 1, CoSD has comparable performance to the assistant model. This is because the assistant model has much better performance across domains than the draft model. In this case, we agree with the reviewer that we can simply adopt the assistant model. However, in practice, it is hard to know in advance which model will perform better. CoSD alleviates the need to choose a model, providing performance superior or comparable to the best model.\\n\\nWe further clarify that model fusion cannot guarantee consistently achieving better performance than the best individual model in the fusion across all benchmarks. Instead, we aim to:\\n\\n(1)Improve the average accuracy across all the benchmarks if the merged models have complementary knowledge.\\n\\n(2)If one model performs significantly better than the other across different benchmarks, the fusion aims to approximate the performance of the better-performing model as closely as possible.\\n\\n>W2: It seems that for Pair 3, CoSD-Rule performs exceptionally well on GSM8K, ... Could you give more insights into such a result? Could you present some examples that CoSD-Rule excel at under this situation that cannot be addressed by either the draft nor the assistant model?\\n\\nHere is an example of the generation of the draft model, the assistant model, and the CoSD-Rule for the GSM8K dataset in pair 3:\\n\\n**Draft:**\\n\\n``The cost of the subs is 2 x $7.50 = <<2*7.5=15>>15. The cost of the chips is 2 x $1.50 = <<2*1.5=3>>3. The cost of the cookies is 2 x $1.00 = $<<2*1=2>>2. The total cost of the food items is $15 + $3 + $2 = $<<15+3+2=20>>20. The cost of the delivery fee is 20% of the total cost of the order, so 20% of 20 = <<20% of 20=10>>10. The total cost of the food items and delivery fee is $20 + $10 = $<<20+10=30>>30. The cost of the tip is $5.00. The total cost of the order is 15 + 3 + 2 + 10 + 5 = <<15+3+2+10+5=45>>45.``\\n\\n**Assistant:**\\n\\n``The cost of the subs is 2 x $7.50 = $<<2*7.5=15>>15. The cost of the chips is 2 x $1.50 = $<<2*1.5=3>>3. The cost of the cookies is 2 x $1.00 = $<<2*1=2>>2. The total cost of her order without any fees is $15 + $3 + $2 = $<<15+3+2=20>>20. The delivery fee is 20% of the total order, so $20 * 0.20 = $<<20*0.20=4>>4. The tip is an additional $5. Therefore, her delivery order will cost $15 + $3 + $2 + $4 + $5 = $<<15+3+2+4+5=35>>35.``\\n\\n**CoSD-Rule:**\\n\\n``The cost of the subs is 2 x $7.50 = $<<2*7.5=15>>15. The cost of the chips is 2 x $1.50 = $<<2*1.5=3>>3. The cost of the cookies is 2 x $1.00 = $<<2*1=2>>2. The total cost of the food items is $15 + $3 + $2 = $<<15+3+2=20>>20. The delivery fee is 20\\\\% of $20, so the delivery fee is $20 x 0.20 = $<<20*0.2=4>>4. The total cost of the food items and delivery fee is $20 + $4 = $<<20+4=24>>24. The tip is $5.00, added to the total cost of the food items and delivery fee, so the total cost of the order is $24 + $5 = $<<24+5=29>>29.``\\n\\nWhere the **correct answer is 29**, only CoSD-Rule gets the correct answer. We found that in this case, the draft model helps establish a better chain of thought (e.g., clearly calculating the total cost first, followed by delivery fee and tips). However, the draft model's mathematical computation ability is weak, making errors in multiple steps within the chain of thought. At this point, the assistant model ia involved to correct the computational mistakes, resulting in an excellent final outcome. **We believe this situation occurs when the two models have highly complementary capabilities in different domains (e.g., pair 3 in the table of W1)**\"}", "{\"title\": \"Rebuttal from Authors\", \"comment\": \"We greatly appreciate the reviewer's recognition of the advantage of CoSD over the existing frameworks. Regarding the weaknesses raised, we address all the concerns in detail below.\\n\\n>W1: The paper only does the experiment in each pair of the LLMs. It would be interesting to see more LLMs collaboratively fuse knowledge.\\n\\nThank you for your valuable comment! Our proposed algorithm indeed supports collaboration among multiple LLMs, including scenarios involving three or more models. As an example, in a three-model CoSD, one draft model generates the draft and 2 assistant models verify the draft. The rule-based verification process will be:\\n\\n$ x_{t+i} \\\\neq x_{t+i}^{0}$ and $x_{t+i} \\\\neq x_{t+i}^{1},$\\n\\n$M_p(x_{t+i}) < \\\\alpha,$\\n\\nmax($M_q(x_{t+i}^{0}$), $M_q(x_{t+i}^{1})) > \\\\beta \\\\cdot M_p(x_{t+i}),$\\n\\nThe assistant token with a higher probability will replace the draft token if all the 3 conditions are met. \\n\\nFor the tree-based CoSD with $x$ LLMs, we extend the decision tree to \\\\($x$\\\\)-class classification and select the model that predicts the next token with the highest probability as the ground truth label, with all other settings remaining the same as the two-LLM setting.\\nWe have conducted experiments with this three-model CoSD to validate the algorithm\\u2019s capability in fusing knowledge from multiple LLMs. The results are shown in the table below:\\n\\nID|Draft|Assist. 1| Assist. 2|CoSD-Rule|CoSD-Tree|\\n---|---|---|---|---|---|\\nMMLU|32.13|47.65|35.62|44.14|**46.48**|\\nGSM8K|3.36|15.63|8.33|**15.85**|14.02|\\n\\nwhere Draft model = TinyLlama, Assist. 1 = Llama 2 Chat 7b, Assist. 2 = Llama-7b.\\n\\nThe current experimental results of this model group demonstrate that when three models collaborate if one significantly outperforms the other two, the final system will achieve performance close to that of the best model. This indicates that our algorithm is effective when applied to more than two models. We are conducting collaborative experiments with other model groups. We will keep you updated in real time if we obtain new results and conclusions. Once the experiments for all model groups are completed, we will incorporate this section into the main body of the paper.\\n\\n>W2: It would be better to show more details about the limitations of the proposed method and show some error analysis.\\n\\nA potential limitation of our approach is that in some cases when the two collaborating models have similar parameter sizes but one is significantly more powerful than the other, it is not necessary to use CoSD since directly using the more powerful model is enough. For example, in Table 2 pair 1, the assistant 8B model has much better overall performance than the draft 8B model, and CoSD can only achieve similar performance to the assistant model, and cannot save computation cost since the models have the same parameter size.\\n\\nAbout the error analysis, CoSD cannot guarantee the assistant token is better than the draft token it replaced. Sometimes it may drop some good draft tokens with a worse assistant token when the draft model is not confident enough. Here is an example question in MMLU:\\n\\nRowena can paint a room in $14$ hours, while Ruby can paint it in $6$ hours. If Rowena paints for $x$ hours and Ruby paints for $y$ hours, they will finish half of the painting, while if Rowena paints for $y$ hours and Ruby paints for $x$ hours they will paint the whole room. Find the ordered pair $(x,y)$.\\nA. $(\\\\frac{11}{10}, \\\\frac{11}{10})$\\nB. $(\\\\frac{231}{20}, \\\\frac{21}{20})$\\nC. $(\\\\frac{231}{40}, \\\\frac{21}{40})$\\nD. (1,1)\\n\\n**The draft model gave the correct answer C with 0.32 probability but was rejected by the answer D from the assistant model with 0.51 probability.** This example illustrates that low probability does not necessarily indicate an incorrect token. Therefore, replacing tokens with higher-confidence alternatives is merely a heuristic algorithm and is subject to error. After all, during the inference stage and model deployment, the ground truth is unknown.\"}", "{\"title\": \"Rebuttal from Authors\", \"comment\": \"We answer all the questions below.\\n\\n>Q1: \\\"the algorithm regenerate and re-verify iteratively until all tokens are accepted\\\" How many iterations does it take on average?\\n\\nThe number of interactions depends on the max length of the model output. Here we measure the average number of iterations in the GSM8K dataset when the max length is 128, 256, and 512:\\n\\n| Max Length | CoSD-Rule | CoSD-Tree | Spec. Dec. |\\n|--------|-----|-----------|-----------|\\n| 128 | 11.41 | 13.58 | 9.77 |\\n| 256 | 15.29 | 16.01 | 14.20 |\\n| 512 | 21.23 | 21.95 | 18.51 |\\n\\nNote that the number of iterations being $x$ does not mean that the generation time is $x$ times that of a single model. As the number of accepted tokens increases, the number of tokens that the draft model needs to regenerate decreases significantly. For instance, in our experiment with a max length of 128, the number of interactions is 11, and the draft model's final average total generation length is around 300. We will add this discussion to the paper soon.\\n\\n>Q2: During the training process of the decision tree, if neither the draft model's generation nor the assistant model's generation match the target, you drop the sample and continue the loop with i \\u2190 i+1. Any ideas of improvement other than simply dropping these samples?\\n\\nWe thank the reviewer for a very good point to make our paper better. A possible idea is to involve more collaborating models to make sure that at least some of them can predict the correct token during the decision tree training. We also discuss with reviewer hdS6 about the possibility of involving more models in CoSD. Here are some results when we let 3 models to collaborate with CoSD:\\n\\n| ID | Draft | Assist. 1 | Assist. 2 | CoSD-Rule | CoSD-Tree |\\n|--------|-------|-----------|-----------|-----------|-----------|\\n| MMLU | 32.13 | 47.65 | 35.62 | 44.14 | **46.48** |\\n| GSM8K | 3.36 | 15.63 | 8.33 | **15.85** | 14.02 |\\n\\nwhere Draft model = TinyLlama, Assist. 1 = Llama 2 Chat 7b, Assist. 2 = Llama-7b. \\n\\nWe find that CoSD is still useful with more models and is able to be extended to more than 3 models. We believe that with enough LLMs involved, we can make better use of the training data.\\n\\nIn addition, we clarify that for an instruction-output pair with $x$ output tokens, we can generate up to $x$ samples for training the decision tree. Considering that decision trees require a small amount of training data, we can use very little data to generate the decision tree training set even after dropping some tokens. For instance, we only use 10 samples in MMLU and 3 for other datasets as training samples in Table 4. So, there is no concern that the training data is not sufficient for the decision tree.\\n\\n>Q3: typos: line 287, \\\"tree\\\" to \\\"three\\\", \\\"drat\\\" to \\\"draft\\\"\\n\\nThank you very much to the reviewer for pointing out the typos. We will carefully double-check the typos in the paper and correct all typos before submitting the updated version.\"}", "{\"comment\": \"Thanks for the authors' response.\\n\\nFor the first weakness point, I disagree with the authors' claim. \\nFirst, I think **averaging scores is a biased practice**, as it clearly hides the fact that in Table 2, for Pair 1, GSM8K, CoSD-Rule and CoSD-Tree achieve 45.72 and 41.89, respectively, lagging behind the Assistant model (51.02) by around 6 and 10 points. This clearly fails your aim to approximate the performance of the better-performing model as closely as possible.\\n\\nIn addition, in the paper, for Table 5, I do not see significant token latency improvement over speculative decoding. Your claim that *significantly improve the inference speed by 2 to 3 times* seems to be ungrounded, what baselines are you comparing to? And what are the results?\\n\\nThanks for providing the example, I think the example itself is very interesting. I suggest when the authors iterate the paper, you can focus more on studying how models with complementary capabilities help each other rather than focusing on the performance improvement, which seems to be not guaranteed and in certain cases (the example I gave), is opposite to the authors' expectation.\"}" ] }
2XdRkRHBT9
AVOIDING BARREN PLATEAUS VIA GAUSSIAN MIXTURE MODEL
[ "Yun Shang" ]
Variational quantum algorithms is one of the most representative algorithms in quantum computing, which has a wide range of applications in quantum machine learning, quantum simulation and other related fields. However, they face challenges associated with the barren plateau phenomenon, especially when dealing with large numbers of qubits, deep circuit layers, or global cost functions, making them often untrainable. In this paper, we propose a novel parameter initialization strategy based on Gaussian Mixture Models. We rigorously prove that, the proposed initialization method consistently avoids the barren plateaus problem for hardware-efficient ansatz with arbitrary length and qubits and any given cost function. Specifically, we find that the gradient norm lower bound provided by the proposed method is independent of the number of qubits N and increases with the circuit depth L. Our results strictly highlight the significance of Gaussian Mixture model initialization strategies in determining the trainability of quantum circuits, which provides valuable guidance for future theoretical investigations and practical applications.
[ "Barren plateaus", "Gaussian mixture model", "Quantum circuits", "Variational quantum algorithms" ]
https://openreview.net/pdf?id=2XdRkRHBT9
https://openreview.net/forum?id=2XdRkRHBT9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y6SXHXAwDg", "uLBBHpC2Rj", "s8WTWWe2Tz", "jGaSA62NJH", "33OpYiEw4j" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730482921539, 1730667709272, 1731513075502, 1730833429346, 1729888965059 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4396/Reviewer_aycC" ], [ "ICLR.cc/2025/Conference/Submission4396/Reviewer_EBag" ], [ "ICLR.cc/2025/Conference/Submission4396/Authors" ], [ "ICLR.cc/2025/Conference/Submission4396/Reviewer_nSCi" ], [ "ICLR.cc/2025/Conference/Submission4396/Reviewer_C1Ja" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a novel initialization scheme for parameterized quantum circuits optimized with variation quantum algorithms (VQA). The method employs a strategy based on Gaussian Mixture Models (GMMs) to initialize the parameters of an $L$ block $N$ qubit ansatz. The paper claims that for the considered ansatz, the initialization scheme avoids the barren plateau phenomenon (BP).\\n\\nTheoretically, the authors prove lower bounds on the expectation of the gradient norm under three different assumptions for the observable in the loss function. By showing the expected gradient norm is nonzero, they are able to theoretically guarantee absence of the barren plateau at initialization\\n\\nThe authors also validate their initialization strategy on synthetic experiments. These experiments confirm that this initialization scheme indeed avoids the BP in practice as well.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. As far as I am concerned, this is a novel contribution in the field of quantum machine learning.\\n2. Given it concerns only parameter initialization, the proposed initialization scheme is simple and easy to implement and compute efficient.\\n3. The method is backed up by solid theoretical guarantees which are also validated empirically through experiments. Also, the theoretical guarantees hold for rather practical situations, not just an idealized case.\\n4. The proofs in the appendix seem correct and are easy to follow.\", \"weaknesses\": \"1. Although your method is applied to a popular ansatz, it does not seem to generalize to other ansatz structures. Could you discuss potential for generalizing to different ansatz?\\n2. I found the paper to be written in a style that hinders understanding. There are various errors/inconsistencies in the notation and long inline math sections (around lines 158, 236, 250 for instance) that I personally found difficult to read. Please see the minor comments section for examples. It would be constructive to break up long inline math sections and give more context around complex mathematical equations.\\n3. It seems as though you did not validate your experiments using multiple runs/seeds. Number of runs/standard error/variance is not reported. If you indeed ran your experiments only once, this would be a major weakness of your experimental section; given your method is based on pseudorandom initialization, this would hinder the statistical validity of the results. Would it be possible for you to provide results from multiple runs along with error bars and confidence intervals?\", \"also_here_are_some_minor_comments_you_may_want_to_address_for_the_final_version\": [\"Generally, it would be clearer if you define all variables present in a theorem in the theorem statement for clarity\", \"In lines 86-100, you introduce the VQA problem and define the cost function. This should go in the notation/background section.\", \"Line 54 typo BP is underlined for citation\", \"Line 73 typo \\\"expressibilityRagone et al.\\\"\", \"Line 227 typo, you have a citation in your big O\", \"Line 1131 \\\"Theorem 1\\\" should be \\\"Lemma 1\\\" I believe\", \"Generally inconsistent use of $cos$ and $\\\\cos$\", \"In the proofs, inconsistent use of $I_S$ vs $I_s$\"], \"questions\": [\"You claim your method \\\"avoids barren plateau\\\". However, the theoretical results only guarantee barren plateau is avoided at initialization. Do you have any insight on how this method may help avoid this phenomenon during training?\", \"Where does the eq for $f(\\\\theta_{k+1})$ on line 87 come from?\", \"What is the importance of the result given by eq. (5)?\", \"How do you achieve a bound that does not depend on the number of qubits $N$ for Theorem 1? This seems surprising to me.\", \"You interchangeably use $O$ and $\\\\mathbf{O}$ to describe observables. You also index this $O$ sometimes. Is this notation you did not define or simply a typo?\", \"In Figure 4, your method seems to reach the desired solution, but then as iterations continue, it diverges away before coming back. What explains this phenomenon you think?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes using mixture of gaussians as initialization for variational quantum algorithms using mixture of gaussians. Theoretical results show the expectation of the norm of the gradient following the assumed distribution is lower bounded.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper proposes and proves that the mixture of gaussians as an initialization scheme avoids barren plateau (at initialization) even when the cost function is global. Experiments, even though settings are a bit unclear, seem to support their claim.\", \"weaknesses\": [\"My biggest concern is that the relation to the previous work [1] is hardly discussed. The authors should at least mention that [1] proposed for the first time the gaussian initialization precisely to prevent barren plateau at initialization\\u2014the exact setting this paper is addresses. I understand there are some differences discussed briefly starting from line 258, but even there the authors do not mention [1] uses Gaussian initialization. Such writing gives me the impression that the authors intentionally hide due to significant similarity with [1].\", \"I believe some people refer to \\u201cbarren plateau\\u201d not only at the initialization but also more generally, i.e., vanish exponentially with the size of the system, c.f. [2, 3]. The authors should clearly state the \\u201cbarren plateau\\u201d that they mention is only with regards to initialization; this is only mentioned in line 52 as: \\u201cthe phenomenon of the barren plateau is characterized by the *randomized initialization* of parameters $\\\\theta$ in VQAs,\\u201d\\u2026\", \"Continuing the above point, in my opinion it is misleading for instance to write in line 87 as:\", \"$\\\\theta_{k+1} = \\\\theta_k - \\\\alpha \\\\nabla_\\\\alpha f(\\\\theta_k)$ \\u2026 Therefore, typically $|| \\\\nabla_\\\\theta f (\\\\theta_k) ||^2$ is used to determine whether the cost function can be updated.\\u201d\", \"Given that this paper is only about initialization, what it really shows is that $|| \\\\nabla_\\\\theta f (\\\\theta_0) \\\\|^2||$ has significant magnitude, but the result does not say *anything* about $||\\\\nabla_\\\\theta f (\\\\theta_k) \\\\|^2||$ for $k > 1$.\", \"Please use parenthesized citations correctly; it\\u2019s very hard to read especially since the citation text colors are the same as the main text color\", \"Typos:\", \"line 214: \\u201cThen We expand\\u2026\\u201d\", \"line 231: wrong quotation marks for \\u201cinactive parameters\\u201d, etc\", \"[1] Zhang, et al. (2022) \\u201cEscaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits\\u201d\", \"[2] Fontana et al. (2024) \\u201cCharacterizing barren plateaus in quantum ans\\u00e4tze with the adjoint representation\\u201d\", \"[3] Loracca et al. (2024) \\u201cA Review of Barren Plateaus in Variational Quantum Computing\\u201d\"], \"questions\": [\"If I understand the result correctly, this only prevents \\u201cbarren plateau\\u201d at initialization. I wonder if there is any comment the authors can make about the optimization trajectory (can you say anything about the norm of the gradient other than initialization?)\", \"I understand the above point is empirically argued e.g., in Figure 4. But the interpretation starting from line 413: \\u201cMoreover, the gradient norm remains within a relatively large range throughout the entire training process. This enables our approach to escape \\u2026 vanishing gradient problem \\u2026 . *These observations are entirely consistent with the conclusions drawn in Theorem 1*.\\u201d But isn\\u2019t Theorem 1 ONLY about initialization?\", \"In Theorem 1, the assumption of the parameters $\\\\theta$ is that it follows $\\\\mathcal{G}_1(\\\\sigma^2)$ ? But this is just the Gaussian distribution $\\\\mathcal{N}(0, \\\\sigma^2)$. Could you explain how to interpret this? Why is this not compared to [1]? (other than one line sentence in line 228, \\u201cThis is in stark contrast to the exponential lower bound $O(1/L^N)$ found in previous works for global cost functions Zhang et al. (2022a); Wang et al. (2023).)\", \"How is the experiment set up? Are these results of actually applying VQA to a quantum computer? Or are these some numerical simulations?\", \"Line 370: \\u201c\\u2026we compare our proposed method with\\u2026 , Gaussian distribution $\\\\mathcal{N} (0, \\\\frac{1}{4S(L+2)})$ : is this variance taken from [1, Theorem 4.1]? It should really be cited clearly\\u2026\", \"Can you say anything about the solution quality? (converged $\\\\theta$ after some iterations).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper considers the variational quantum algorithms and deal with the barren plateau phenomenon. The new parameter initialization strategy is proposed combing with gaussian mixture models. The prove is provided that the initialization could avoids the barren plateaus problem.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The novel parameter initialization strategy is provided for the barren plateau phenomenon. And the prove is given,\", \"weaknesses\": \"The provided parameter initialization strategy is not clear in the figure 1. What\\u2019s more, the comparison with other methods is not given. Furthermore, the induced Gaussian mixture models is not the firstly introduced.\", \"questions\": \"1 In Figure2, how to determine the inactive parameters\\n2 in theorem1, what\\u2019s the impact of the partial of f(\\\\theta)\\n3 in theorem 2, there is no definition of M, how to determine the value of M for the different number of layers.\\n4 what\\u2019s more, there is no comparison with other approaches,\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Variational Quantum Algorithms (VQAs) are important tools to exploit the capabilities of Noisy Intermediate Scale Quantum devices. The basic idea is to leverage parametrized quantum circuits (PQCs), whose parameters are often angles of rotational gates, to find the ground state of a given Hamiltonian, which plays the role of the cost function.\\n\\nVQAs, however, suffer from several shortcomings. One of the most sever one is known as Barren Plateaus (BPs). When dealing with many qubits and layers in the quantum circuit, the number of variational parameters rapidly increases thus making the optmization problem more challenging. Furthermore, the optimization landscape becomes extrmely difficult to navigate and this results in very small gradient signals (during the optimization of the parametrized quantum circuits) which prevents to converge to global minima and, therefore, to the desired ground state wave function. \\n\\nIn this paper the authors propose to use Gaussian Mixture Models (GMMs) for the initialization of the parameters in PQCs. The main contribution of the paper claims to **rigorously** solve the problem of **barren plateaus** by initializing the parameters of PQCs which in turn leads to higher gradient signals throughout the optimization of the parameters.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The paper adresses a fundamental problem in the context of variational quantum algoritms.\"], \"weaknesses\": [\"The major weakness of this paper lies in its claim. In the abstract the authors claim that \\\"*rigorously prove that, the proposed initialization method consistently avoids the barren plateaus problem for hardware-efficient ansatz*\\\". While this may even be true in theory, I do find their numerical experiments and the PQCs considered in the paper not to be general such that this claim can hold with the current phrasing. This very **strong** claim is reiterated several times throughout the paper. I strongly advise to motigate such claim to something more aligned with the results of the manuscript.\", \"Another important weakness of the paper is that the code for reproducing the experiments is not provided thus preventing reproducibility of the experiments and further investigation of the implementation.\", \"Furthermore, I find the presentation of the paper hard to follow at times. The notation is often confusing cluttered. I believe that recalling what each variable refers to, use more display math (instead of inline equations) and provide some intuitive sketches may help.\", \"The ansatzes used in the paper are not general and do not expliclty account for entanglement. As it is known in the field of quantum computing, a set of universal gates, e.g., a collection of gates able to represent any possible unitary transformation, consist of the rotational gates and entangling gates such as CNOT. I believe this aspect substantially limits the generizability of the proposed claims and theorems. Does the set of gates considered in this work represents a set of universal gates? If yes that should be made clear.\", \"I find the numerical results shown in the main paper to be somewhat inconclusive. In particular, the authors omit many fundamental details such as the type of algorithm used to optimize the PQC, if it consists of global or local optimizations at each iteration, if measurement noise and/or hardware noise are taken into account for these experiments, whether the results change when with different algorithms etc. I believe that a thorough ablation study is necessary in order to support the strong claims of this work.\", \"I furthermore find puzzling that other results on quantum chemistry simulation are limited to the appendix as I believe those may arguably be even more relevant (or at least complementary) compared to the Ising model.\", \"About the layout of the paper, it looks to me as if Table 2 and Table 3, as well as Table 5 and Table 6 are duplicates of each other. Could it be, or am I missing something?\", \"Another concern about the layout of the paper is about the citation style. I strongly recommend to adopt fix the citation styles, e.g., substituting \\\\cite with \\\\citep where needed in order to wrap refs around parenthesis where suitable.\", \"The notion of \\\"observable\\\" may not be immediate to grasp for the general audience, I believe it would be useful to provide more concrete examples of real physical examples which could be mapped onto these generalized $\\\\mathbf{\\\\mathcal{O}}$ discussed in the paper.\", \"The writing and the clarity of the paper I think can in general be improved.\", \"Table 4: I think it would be good to mention that higher gradients at initialization is better. This may not be intuitive at first sight. Perhaps making the best results in bold without editing the caption would already help.\", \"Line 32: I find the claim \\\"*[...] VQAs provides a feasible approach to solving complex problems [...]*\\\" to be far too general. I strongly encourage to be more specific about what VQA can be good at.\", \"Line 83: I recommend to change the wording *complex distribution* to complicated/non-trivial as the former may be confused to have different meaning, e.g., distribution of complex values.\", \"I storngly recommend to add a \\\"Related work\\\" section to give more structure to the paper and streamline the reading. Furthermore, I'd recommend a more thorough review citing also other work using ML methods to enhance optimization of PQCs such as Refs. [1-3] below. Similarly, I recommend to provide the list of contributions at the bottom of page 2 in bulletpoints so that they end up being more accessible and more evident to readers.\", \"For the people non familiar with optimization of PQCs, I'd briefly introduce the parameter shift rule and how to compute gradients on quantum computers at the begining of the last paragraph before section 2. I believe this would be useful to make the paper self-contained.\", \"Line 158: Something seems wrong with the parenthesis in the last equation of the sentence. As mentioned above, I strongly recommend to use more dispay math instead of inline equations which are often cluttered and hard to parse. If the authors need space I'd recommend removing one of the two figures Figure 1 and Figure 2 as I think the key messages therein can be merged into one figure.\", \"On the other hand I'd find beneficial to have some intuitive sketch about the main results/theorem the authors claim in the paper. That might make the paper more accessible also to a general audience (more from the ML community). At the moment the paper seems very much suited to an audience of physicists. For instance the authors never define what's a pure state. This cannot be assumed as common knowledge in the broader audience targeting this conference.\", \"To streamline the reading of the paper I'd find it useful to often recall what $q,n$ and $L,N,M$ are. That would help a lot to navigate both theorems and follow the sketch of proofs.\", \"Why does the gradient norm shown in figure 4 (right panel) has this double peak structure? Does this has any physical meaning? Is it intuitive why someone should expect such a steep increase in gradient norm during optimization? I believe this associates to the capability of the proposed algorithms to overcome barren plateaus but this is not discussed explicitly neither in the caption nor in the text. This might make it hard to the reader not familiar with the problem of Barren Plateaus to immediately grasp this.\", \"In line 470: \\\"*We validate our algorithm for diverse problems, [...]*\\\" I think this is not entirely correct. The paper only tackles (in the main part) the Transverse Field Ising Model with different setup. I think the authors should be clearer and more explicit here. This comment often applies to other parts in the paper where it would be useful to revisit the paper in ordare to ensure more precise claims.\", \"In line 483: what does the HEA acronym mean?\", \"### References\", \"[1] [Tamiya, Shiro, and Hayata Yamasaki. \\\"Stochastic gradient line Bayesian optimization for efficient noise-robust optimization of parameterized quantum circuits.\\\" npj Quantum Information 8.1 (2022): 90.](https://www.nature.com/articles/s41534-022-00592-6)\", \"[2] [Nicoli, Kim, et al. \\\"Physics-informed bayesian optimization of variational quantum circuits.\\\" Advances in Neural Information Processing Systems 36 (2024).](https://proceedings.neurips.cc/paper_files/paper/2023/file/3adb85a348a18cdd74ce99fbbab20301-Paper-Conference.pdf)\", \"[3] [Anders, Christopher J., et al. \\\"Adaptive Observation Cost Control for Variational Quantum Eigensolvers.\\\" Forty-first International Conference on Machine Learning.](https://openreview.net/pdf?id=dSrdnhLS2h)\"], \"questions\": \"Please refer to the section above as often question associates to the weaknesses of the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2XBPdPIcFK
Steering Language Models with Activation Engineering
[ "Alexander Matt Turner", "Lisa Thiergart", "Gavin Leech", "David Udell", "Juan J Vazquez", "Ulisse Mini", "Monte MacDiarmid" ]
Prompt engineering and finetuning aim to maximize language model performance on a given metric (like toxicity reduction). However, these methods do not optimally elicit a model's capabilities. To reduce this gap, we introduce a form of _activation engineering_: the inference-time modification of activations in order to control (or _steer_) model outputs. Specifically, we introduce the Activation Addition (ActAdd) technique, which contrasts the intermediate activations on prompt pairs (such as “Love” versus “Hate”) to compute a _steering vector_. By tactically adding in e.g. the “Love”$-$“Hate” steering vector during the forward pass, ActAdd can perform many tasks like topic steering, sentiment steering, and detoxification. ActAdd yields inference-time control over high-level output properties (like topic and sentiment) while preserving performance on off-target tasks. ActAdd is lightweight: it does not require any machine optimization and works with a single pair of data points, which enables rapid iteration over steering.
[ "interpretability", "steering", "alignment", "safety", "sentiment" ]
Reject
https://openreview.net/pdf?id=2XBPdPIcFK
https://openreview.net/forum?id=2XBPdPIcFK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ygeGXbvfky", "yUcZgseguy", "vPosGYaBNO", "uG8IgM5kpm", "tZNrey18dB", "dQXH6dbblN", "cc3lLD3OR5", "Tbn37yhVwE", "TRsedxDvfc", "SRajTtVq0o", "PQIgoXZn9a", "OXApl27JEu", "MzXVPQz7Cg", "LkZrusewno", "LAFURG6hH6", "JsT5rbIE1i", "Ip3DHxgKQy", "HphG2l8HZB", "BZKDVidMS7", "Aep9qVPAMA", "6m3mbcpaDy", "2UeQWudww6", "1AsW5c23J9" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review" ], "note_created": [ 1729481853728, 1732534006083, 1730765782118, 1731937880220, 1732871476326, 1732510455837, 1731936913665, 1731678798325, 1737523876058, 1732793432548, 1731962971808, 1732516077230, 1730664877733, 1731940938402, 1731947067262, 1731937354759, 1731696058827, 1731937126816, 1732517581182, 1732507575334, 1734887907394, 1732514746292, 1729158962639 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_6r2T" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_tYYy" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_CLgM" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_tYYy" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_sGUL" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_sGUL" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_tYYy" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_6r2T" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_tYYy" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Area_Chair_QQFv" ], [ "ICLR.cc/2025/Conference/Submission7934/Authors" ], [ "ICLR.cc/2025/Conference/Submission7934/Reviewer_tYYy" ] ], "structured_content_str": [ "{\"summary\": \"Paper proposed \\u201cAdd Act\\u201d, a type of activation engineering that, when applied to language models (LMs), can \\u201csteer\\u201d the model the output during inference. \\u201cSteering\\u201d an LM, in this context, would mean enabling the user to enhance or control some high-level property of the generated text such as topic or sentiment of the text.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed activation engineering method can be applied during inference and does not require gradient-based optimization (thus making it computationally fast to compute and apply).\\n2. The proposed activation engineering method does not modify the original LM\\u2019s weights, and therefore would not change the model\\u2019s performance on tasks if the activation engineering method wasn\\u2019t applied. This is a unique advantage, as many related \\u201csteering\\u201d methods that modify a LM\\u2019s weights may harm model performance on tasks unrelated to the \\u201csteering\\u201d-related tasks.\\n3. Paper provides many compelling examples of where \\u201cAddAct\\u201d has been able to successfully steer an LM output (i.e., sentiment, topic, reducing toxicity) across many model architectures.\", \"weaknesses\": \"The authors have missed some related work in the area of activation engineering, and their paper may benefit from further comparing and contrasting the proposed \\u201cAddAct\\u201d method to these works:\\n\\n[a] Sakarvadia, Mansi, et al. \\\"Memory injections: Correcting multi-hop reasoning failures during inference in transformer-based language models.\\\" arXiv preprint arXiv:2309.05605 (2023).\\n\\n[b] Heimersheim, Stefan, and Neel Nanda. \\\"How to use and interpret activation patching.\\\" arXiv preprint arXiv:2404.15255 (2024).\\n\\n[c] Vig, Jesse, et al. \\\"Investigating gender bias in language models using causal mediation analysis.\\\" Advances in neural information processing systems 33 (2020): 12388-12401.\\n\\nSpecifically, I would like the authors to discuss the computational cost of computing the steering vector, especially if one must test multiple steering vectors for multiple target layers (as it is not obvious which layers/vectors would work best for a specific \\u201csteering\\u201d goal, and thus a user may need to do (costly) experimentation. Specifically, the \\u201cAddAct\\u201d method relies on generating the \\u201csteering vector\\u201d by doing two partial forward passes for the steering prompt pair. This itself is computationally expensive compared to a recent related work [a] which demonstrated that one could compute a \\u201csteering vector\\u201d simply using the model\\u2019s (un)embedding matrix, rather than running the steering prompts through the top N layers of an LM.\\n\\nFurther, the \\u201cAddAct\\u201d \\u201csteering vector\\u201d is layer-specific within a given LM. For example, if a steering vector is generated for layer N, it is not clear if the same vector can be applied to layer N+1. This is a drawback of the method as it may not be apparent to the user which layer would be best for an \\\"AddAct\\\" injection. Again, I would be interested if the authors could discuss how their proposed layer-specific steering vector generation strategy compares to related work [a] which proposed a steering vector that is layer-agnostic.\", \"questions\": \"n/a\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for the reply. I looked at the uploaded code (specifically the files pointed out by the authors), and have several questions:\", \"No hyperparameter search is done in the uploaded code, and the l and c parameters are fixed at the top of each file. Could the authors point out where they create the validation set and do the hyperparameter search?\", \"I just realized that the random sampling you do ensures that the data on which you evaluate, is not the same as the data the baselines evaluate on (since you did not have implementations, I am assuming you do not have the same seed as they do). This is quite an important difference, which is hopefully going to be mitigated in the new experiments.\", \"The code runs on a limited number of samples, I assume the authors will upload the final code that is used for their experiments once the experiments are done?\", \"Small suggestion: It's a bit strange that the code in the linked notebooks does not use the library itself. Furthermore, the authors copy a lot of code between the two notebooks. For readability (and to avoid bugs), I would suggest to put code for activation addition in your library and define helper functions common between your experiments in separate files. This way, the files related to the experiments can really focus just on experimental setup etc. Personally, I usually create python scripts for the experiments (notebooks are not always accessible on servers) and then have a postprocessing notebook that post-processes the results to present them exactly as they appear in the tables. However, the authors can ignore this suggestion if they want to.\"]}", "{\"summary\": \"In this paper, the authors introduce a paradigm of controlling model outputs/behavior which they term activation engineering. In activation engineering, a user controls model behavior by editing intermediate activations/hidden states of the model during the forward pass. They propose and focus on a specific method in the class of activation engineering called Activation Addition (ActAdd), in which a vector encoding an axis (e.g. love vs hate) can be added to the intermediate activations to make the model shift along that axis, e.g., in sentiment from negative to positive. They compute this vector by taking the difference along a single paired example (e.g. a love vs hate example) and demonstrate effectiveness in experiments on sentiment analysis and toxicity reduction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Originality: The idea of activation engineering as \\u201cperturbing activations during the forward pass\\u201d is an important and simple idea. While it seems that much concurrent or previous work has also worked with this idea of editing activations, e.g. the ROME paper (Meng et al 2022), adding steering vectors (ActAdd) to control model outputs is to my knowledge original (and the authors do well to cite concurrent work in Li et al 2023b).\", \"quality\": \"Experiments are overall fairly thorough and demonstrate that ActAdd is a promising, intuitive, and simple approach to control model outputs.\", \"clarity\": \"The overall flow of the paper is clear and well written.\", \"significance\": \"This is an important contribution to interpretability and control of models using activation-level edits. The idea that you can controllably transform model behavior by adding a vector to the residual stream is important.\", \"weaknesses\": \"The biggest weaknesses in my read are a lack of clarity in the algorithm and some of the experiment setup and results. I leave specific questions/suggestions on this point for the Questions section of the review.\\n\\nAlso, the authors should be careful to clarify their definitions and contributions. In the intro/abstract, they define activation engineering as \\u201cthe inference time modification of activations in order to control model outputs\\u201d. However, section 2 states \\u201cActivation engineering invovles creating vectors of activation which cause desired changes to output text when added to the forward passes of a frozen LLM\\u201d. This latter definition sounds more specific than the original one; there are many works which fall under the first definition but not necessarily the second one. From my read, I would be careful to claim that you are introducing activation engineering and might instead recommend stating it as highlighting AE as a class of methods to control behavior, under which ActAdd (your primary contribution) falls.\", \"questions\": [\"Can you elaborate on how you search for injection coefficient c and injection layer l? How expensive is this process?\", \"In Figure 2, what is the x axis? Why should we expect perplexity to go down when x axis increases?\", \"In Figure 3, how is \\u201cP(steered completion contains wedding related words)\\u201d determined? Can you be more explicit about this in the paper?\", \"Can you elaborate on what the p value in table 3 and 4 is? That is, what is the null hypotheses you are testing (and the corresponding alternative hypothesis)?\", \"In Figure 5/S4.5, referring to the model\\u2019s behavior as \\u201coff-target answer probabilities\\u201d is rather misleading. That phrase reads as the model\\u2019s distribution over the answers for non-target tokens, whereas it seems that the actual probabilities being referred to is the P@K.\", \"How do you determine which example to use to determine the steering vector? Did you do any studies on variance across the effectiveness for vectors derived from different examples?\", \"Are there any experiments to support the claim in the intro that activation engineering can enable composition of multiple traits, e.g. speech eloquence and mathematical content? If not, I would remove this to avoid overclaiming.\", \"The notation in Algorithm 1 could use some improved clarity. For example, what is @? In code it can refer to a matmul; even though this seems like an indexing operation the ambiguity is confusing for the reader.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to CLgM\", \"comment\": \"We thank the reviewer for their extensive comments on ways to improve the write-up, lay-out and clarity of the paper. We will implement the suggestions and show them to the reviewer once complete.\\n\\n> \\u201cCan you elaborate on how you search for injection coefficient c and injection layer l? How expensive is this process?\\u201d\\n\\nThis is simple gridsearch over 17 * 18 = 306 values. This thus takes only around 1000 forward passes.\\n\\n\\n> \\u201cIn Figure 2, what is the x axis? Why should we expect perplexity to go down when x axis increases?\\u201d\\n\\nThe x-axis is the percentage of words in the tested passage that are wedding related. The graph is intended to show that adding a wedding steering vector improves predictive performance smoothly as the vector\\u2019s relevance to the domain increases (i.e. the x-value increases; more words are wedding-related).\\n\\n\\n> In Figure 3, how is \\u201cP(steered completion contains wedding related words)\\u201d determined? Can you be more explicit about this in the paper?\", \"this_is_poorly_flagged_by_the_y_axis_label\": \"\\u201cNon-zero wedding word count fraction\\u201d and by footnote 4. i.e. it is the fraction of completions that contain at least one of the hand-picked words {wedding, weddings, wed, marry, married, marriage, bride, groom, and honeymoon}.\\nWe will add a note in the text explaining this better, thanks.\\n\\n> Can you elaborate on what the p value in table 3 and 4 is? That is, what is the null hypotheses you are testing (and the corresponding alternative hypothesis)?\", \"see_appendix_c\": \"\\u201cFor bolding SOTA, we use a one-sample t-test to calculate p-values for sentiment and toxicity metrics.\\u201d\\n\\n\\n> In Figure 5/S4.5, referring to the model\\u2019s behavior as \\u201coff-target answer probabilities\\u201d is rather misleading. That phrase reads as the model\\u2019s distribution over the answers for non-target tokens, whereas it seems that the actual probabilities being referred to is the P@K.\\n\\nBy \\u201coff-target\\u201d we mean that the domain is unrelated to the steering vector. We will clarify this in the text.\\n\\n\\n> How do you determine which example to use to determine the steering vector? Did you do any studies on variance across the effectiveness for vectors derived from different examples?\\n\\nWe discovered the vectors via manual experimentation.\\n\\n> Are there any experiments to support the claim in the intro that activation engineering can enable composition of multiple traits, e.g. speech eloquence and mathematical content? If not, I would remove this to avoid overclaiming.\\n\\nWe said \\u201cmight\\u201d in hopes of flagging that paragraph as speculative. However, there's some preliminary evidence supporting the speculation. [1]'s Appendix C.4 shows the compositionality of two steering vectors in an RL maze-solving setting. Somewhat relatedly, [2] find\\n\\n> the emergence of hidden capabilities, i.e., where latent interventions show the model possesses the capability to manipulate a concept, but these capabilities cannot yet be elicited via naive input prompting.\\n\\n---\\n\\n> The notation in Algorithm 1 could use some improved clarity. For example, what is @? In code it can refer to a matmul; even though this seems like an indexing operation the ambiguity is confusing for the reader.\\n\\nWe agree. As the reviewer points out, @ is here the indexing operation. We will clarify this.\\n\\n[1] Mini, Ulisse, et al. \\\"Understanding and Controlling a Maze-Solving Policy Network.\\\" arXiv preprint arXiv:2310.08043 (2023).\\n\\n[2] Park, Core Francisco, et al. \\\"Emergence of hidden capabilities: Exploring learning dynamics in concept space.\\\" arXiv preprint arXiv:2406.19370 (2024).\"}", "{\"comment\": [\"Thank you for addressing the concerns regarding the experimental setup. The updated results and analysis address some of the original issues with the comparisons in Tables 3 and 4. However, after reviewing the new results, I have the following observations and concerns:\", \"Even if we consider LMA a \\\"successor\\\" method and exclude it from direct comparisons (though I would argue that it should still be included, given its publication at ICLR 2024), the updated results show that ActAdd does not outperform the PreAdd baseline on any benchmark.\", \"The performance on the /pol/ benchmark appears to be very poor. ActAdd reduces toxicity by only 2% while significantly increasing perplexity, reaching values close to Gu et al. 2022 (48.0 vs. 54.6). Since Gu et al's perplexity is classified as \\\"too high for practical use\\\" in your work, could you clarify what might be causing this gap in performance for this specific benchmark?\", \"While negative results can indeed offer valuable insights, the paper in its current form seems primarily structured around proposing ActAdd as a novel method that outperforms baselines. The new results, however, do not support this claim. A substantial revision would be needed to reframe the paper as a study presenting and analyzing a negative result.\", \"Additionally, the current negative result is somewhat limited in scope. A more compelling negative result might be along the lines of: \\\"internal steering of language models does not outperform methods based on logits for these tasks.\\\" The current finding suggests only that a specific approach to internal steering (ActAdd) does not outperform logit-based methods.\", \"Could you clarify why you dropped the relevance metric in the updated results?\", \"Finally, the updated results show a significant increase in ActAdd\\u2019s perplexity compared to baseline methods. This differs from previous tables, where ActAdd showed lower perplexity. Could you explain what factors contributed to this shift? For example, was this due to differences in temperature settings across experiments?\", \"Overall, the revised paper seem to introduce new weaknesses. The authors describe the new results as not purely positive, but they are actually (at least for these two very important tables) very negative. These concerns lead me to maintain my original evaluation score (reject).\"]}", "{\"title\": \"Topic steering, new experiment\", \"comment\": \"Results for the new topic steering experiment (i.e. replicating Figure 4 from the original). Setup:\\n\\n* n=1000 random [Stanford IMDb](https://ai.stanford.edu/~amaas/data/sentiment/) prompts \\n* These prompts were filtered out by GPT-4o-mini if they were deemed to be relevant to any of [art, finance, music, politics, science, weddings].\\n* ActAdd applied at layer 6 (selected beforehand on a validation set)\\n* A range of coefficients c (values fixed beforehand)\\n* Prompt pair: \\\"I talk about {topic} constantly\\\" - \\\"I do not talk about {topic} constantly\\\"\\n* Temperature = 1.0, top-p = 0.3, freq_penalty = 1.0, max_new_tokens = 50 (these sampling parameters are constant across all experiments)\\n* on completions from GPT-2-XL\\n* Binary relevance scored by GPT-4o-mini.\\n\\nResult (absolute % of completions deemed relevant):\\n\\n[relevance_n1000_gpt4omini_gpt2_l6](https://i.imgur.com/hOqPf8C.png)\\n\\nNote that some topics (like \\\"politics\\\") show non-monotonic response in the steering coefficient. We don't understand what's happening in that particular condition, but the trends look sensible for most of the topics.\\n\\nSince they were drawn from IMDb, obviously the prompts will be disproportionately about art and music. As well as our GPT-4o-mini filter, we also check that ActAdd nonetheless improves on this base rate by noting ActAdd's change in percentage over the unsteered baseline (that is, the ActAdd % relevant - unsteered completion % relevant):\\n\\n[diff_in_relevance_n1000_gpt4omini_gpt2_l6](https://i.imgur.com/BYWi79y.png)\\n\\nWe have revised the PDF and supplementary information above to allow the reviewer to see this change in context.\\n\\nWe think this is a much better demonstration of the method's topic steering potential. Thanks for the suggestion!\"}", "{\"title\": \"Initial response to tYYy's technical concerns\", \"comment\": \"We are preparing the following experiments and content which we plan to make available within the discussion period:\\n\\n- We will add the new Dekoninck et al 2024 method as a more up-to-date baseline. \\n\\n- Previously we used OPT for compatibility with the reported results of our baselines. We will standardise all our experiments on Llama-3 and run all baselines against them (where possible) with the same hyperparameters.\\n\\n- We\\u2019ll ensure that relevant experiment code is included in the Zenodo, and have removed the playground file.\\n\\n- PreAdd used `davinci-001`. We used `davinci-002` because the `davinci-001` API was shut down, preventing us from matching PreAdd\\u2019s environment; we thus picked the model closest to theirs. (And we could not get their method to run in time for submission, thus preventing using `davinci-002`.) We plan to reimplement PreAdd and then use the same perplexity model (Claude Sonnet 3.5) for all settings and baselines.\\n \\n- We will cite more-recent work in the area of steering.\\n\\n\\nIn the meantime, some discussion points:\\n\\n> Omission of Fudge: In Lines 378-380, Fudge is omitted, despite performing better on certain aspects and only slightly worse on others. This is a strange misrepresentation of the results.\\n\\nYou are correct, thank you for pointing this out. We did not intend to misrepresent the results. FUDGE is indeed reasonably competitive by these metrics, performing better on some and worse on others. We'll clarify in the camera-ready.\\n\\n> Basic Metrics: Perplexity and cosine similarity are insufficient metrics to fully capture fluency and relevance. Since controlled text generation methods edit the model's internals, they can yield unintuitive results that these metrics may not fully capture. The authors should include human or LLM-based evaluations to assess the outputs in Tables 3 and 4 and compare them with baselines.\\n\\nPerplexity and cosine similarity are the standard metrics in NLP for measuring fluency and relevance. We needed to include them to enable backwards compatibility with our baselines. Note that the paper suggested by the reviewer also uses perplexity (and omits any relevance metric).\\n\\n> \\u201cWhat is meant by \\\"this section does not have complete statistics\\\" in Line 533?\\u201d\\n\\nThis just means that we don\\u2019t report all 306 hyperparameter settings \\u201cWe perform simple grid search, usually between c \\u2208 [3, 20] and l \\u2208 [6, 24].\\u201d\\n\\n> \\u201cHow was grid search performed for ActAdd's hyperparameters? Were the results reported for the best set of parameters? If so, was a similar hyperparameter search conducted for the baselines to ensure accurate comparisons?\\u201d\", \"see_line_533\": \"for the given parameter ranges, simple grid search mixed with qualitative sampling of hyperparameters was performed.\\nWe used the reported values from the baselines, and thus depend on the original author\\u2019s gridsearch or other optimization. We will try to rerun the baselines ourselves during the response period.\\n\\n> Could you clarify the hyperparameter \\u201ca\\u201d discussed in Appendix C and explain its function?\", \"this_is_the_sequence_alignment_parameter\": \"the position the steering vector h_A and the forward pass from the user prompt are aligned at. It is also mentioned in Algorithm 1:\\n\\ta = sequence position to align h_A and h_{p^{\\u2217}}\", \"and_in_limitations\": \"\\u201cSo far we have had success with fixing the sequence alignment a = 1.\\u201d\\n\\n> For which experiments are the prompts mentioned in Lines 1218-1223 used? Appendix C presents a collection of unrelated details, making it difficult to follow and understand how it fits into the overall context of the paper. Could the authors clarify the connection to the experiments?\\n\\nThese prompts are for Figures 4 and 7. We will improve the pointers throughout the Appendix, thanks.\"}", "{\"title\": \"Addressing two major claims\", \"comment\": \"We want to clarify two major concerns you raised.\\n\\n> The primary issue with the paper is that it is outdated. The paper refers to several works published in 2023 as \\\"contemporary,\\\" implying that they are based on the presented work. This suggests that the paper may have been rejected in previous conferences and is now being resubmitted to ICLR without any major modifications. However, works from 2023 cannot be referred to as contemporary in a submission to ICLR 2025.\\n\\nWe agree that revisions are important. However, your speculation about no \\u201cmajor modifications\\u201d is incorrect. For example, we rewrote the entire paper for ICLR.\\n\\nFurthermore, works from 2023 would have been submitted to ICLR 2024, which makes comparison in ICLR 2025 more reasonable. \\n\\n> Moreover, the claim that both Liu et al. (2023) and Zou et al. (2023) are based on this work is questionable. \\n\\nWe did not mean to claim that we directly inspired that work (although our ActAdd paper has, in fact, inspired a range of follow-on work). What we said was that those papers \\\"followed\\\" ours, meaning \\\"followed\\\" in a _temporal_ sense. We see how this was unclear and will instead state that those papers \\\"came after\\\" this work. \\n\\n> A quick review of these papers reveals that Liu et al. (2023) merely cites ActAdd as related work, and Zou et al. (2023) actually outperforms ActAdd on one of the tasks. Therefore, I do not believe ActAdd presents any novel idea or result. This undermines the relevance of the method, and I believe this alone is sufficient for rejection. However, if I have misunderstood this point, the authors could clarify their claims.\\n\\nWe made a scientific discovery (in line with some past evidence from e.g. GANs): LLMs are steerable via linear manipulations of their activation space. We want our discovery to be scientifically validated by the peer review process. Zou et al explicitly noted that their approach is a \\u201cvariant of ActAdd.\\\" We also observe that their approach outperformed ActAdd on a different task - TruthfulQA - which is not part of our paper. Those two facts do not invalidate our findings or the scientific contribution of this paper.\\n\\nConsulting ICLR\\u2019s reviewer guidelines:\\n\\n> Q: If a submission does not achieve state-of-the-art results, is that grounds for rejection?\\n>\\n> A: No, a lack of state-of-the-art results does not by itself constitute grounds for rejection. Submissions bring value to the ICLR community when they convincingly demonstrate new, relevant, impactful knowledge. Submissions can achieve this without achieving state-of-the-art results.\\n\\nFor another angle on the issue, suppose paper X makes discovery Y. Paper X\\u2019 (published substantially later) makes a further discovery Y\\u2019. If paper X is not immediately published, and instead is being reviewed one year later for its scientific contributions, should it be considered to \\u201cnot present any novel idea or result\\u201d because people already know Y\\u2019 (and also, therefore, a subset of X's discoveries about Y)? We think the answer is \\u201cno, the original contribution is still valuable.\\u201d If you disagree, we are happy to consult with the area chair to come to a mutual understanding of this issue.\\n\\nWe will address the rest of your helpful feedback and concerns in a follow-up comment. We think many of your concerns are reasonable and fixable.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"New revision\", \"comment\": \"We conducted a wide range of experiments and changes which reviewers requested.\\n\\n1. We reran the topic steering experiment using 1,000 IMDb prompts, plotting the change in topic frequency relative to the base rate in the dataset. (See: figure 4)\\n2. We added strong new baselines for toxicity and sentiment shift: Language model arithmetic (Dekoninck et al., 2023) and SelfDebias (Schick et al., 2021). (See tables 3 and 4)\\n3. For toxicity and sentiment experiments, we used LLAMA-3.1-8B in all settings using hyperparameters consistent with Dekoninck et al.\\n4. We tested all approaches on a new dataset, /pol/. (See table 3)\\n5. At the request of reviewer 6r2T, we added an appendix testing ActAdd's computational overhead.\\n6. We updated our [linked Zenodo](https://zenodo.org/records/14177088) code with the new hyperparameters.\\n\\nThe topic steering experiments (figure 4) demonstrate that ActAdd provides effective steering across a wide range of original prompts and target topics. Furthermore, the standardized LLAMA-3.1-8B experiments show reasonably strong performance for ActAdd on RealToxicityPrompts. \\n\\nThe new results are not purely positive for our method (although negative results can still be valuable knowledge). Most starkly, ActAdd performs quite poorly on LLaMA-3.1-8B on /pol/ (table 3), significantly boosting perplexity while not steering nearly as well as the baselines. We are grateful to the reviewers for suggesting experiments which better clarify the limitations of this particular activation engineering method.\"}", "{\"comment\": \"> \\u201cbenchmarks created by the authors\\u201d.\\n\\nI'm referring to the experiment in sec 4.2. It's unclear what dataset this is run on. This experiment's conclusion is also unclear and the authors have not addressed my question about the results being different for different topics. \\n\\n> Discrepancies in baselines\\nMy point was that the discrepancies could indicate a different experimental setup leading to the results of Pei et al. being unusable for comparison. \\n\\nI am glad the authors plan to work on making the experimentation more thorough and concise, till they do I will maintain my scores.\"}", "{\"comment\": \"Thanks for your reply!\\n\\n> Obtaining implementations is essential to provide an accurate comparisons between the new method and baselines. How did you ensure that all prompts (e.g., no extra spaces or newlines), parameters (temperature, ...), or models (e.g., to measure cosine similarity) etc. are the same across all your baselines if you did not have implementations? \\n\\nUsing results reported in other work is common, but we agree that it opens up the possibility of uncontrolled experimental variation. As a result we\\u2019re currently re-running all the baselines using identical settings.\\n\\n&nbsp;\\n\\n> Could you add it by the end of the rebuttal period instead?\\n\\nYes! Results table forthcoming.\\n\\n&nbsp;\\n\\n> I fiddled around a lot with various algorithms that do internal steering, and found that they sometimes produce nonsense words (i.e., 2% of the words are nonsense while the sentence around it makes sense). This problem is not fully captured by perplexity. I just checked, and the baseline I provided does perform an experiment where an LLM decides which of two completions is the best as an extra experiment.\\n\\nInteresting! With ActAdd we only see this word-level corruption of completions with very high coefficient values, c~=20 (see Appendix G) or when intervening at the last layer. You can also verify this in the best-of-3 demonstration notebook. We aim to do a quick test of your LLM scorer idea in the remaining time.\\n\\n&nbsp;\\n\\n> \\u201chow did you decide that one set of hyperparameters was better than the other? Did you have a separate validation set over which you optimized them based on the numbers you got on that validation set? Or did you directly optimize them on the experiment and numbers your report in the paper? In the latter case, this would be quite problematic, especially if no similar search was done for baselines.\\u201d\", \"there_are_two_searches_involved\": [\"Finding a prompt pair $(p_+, p_-)$. We did not iterate over candidate prompt pairs during experiments; instead we manually discovered them and fixed this prompt pair during the gridsearch for the experiments (e.g. for sentiment steering the experiments were all done on (\\u201clove\\u201d - \\u201chate\\u201d)).\", \"For each experiment, we indeed used a validation set. You can see this in the [new Zenodo](https://zenodo.org/records/14177088): `act_add_iclr2025.tar/act_add_iclr2025/activation_additions_hf-main/notebooks/sentiment.ipynb` and `\\u2026/toxicity.ipynb`.\"]}", "{\"summary\": \"The paper proposes ActAdd, a method to _steer_ a Language Model's generation in a particular direction. ActAdd is lightweight and merely involves using contrasting prompts (related to the direction you want to steer the LM in). These contrasting prompts are used to compute a steering vector that can be applied at inference time to change the model's behavior.\\nThe authors experimented with various tasks such as steering the topic of the LM's generation, steering to reduce toxicity, and steering to change sentiment. \\nThe authors also show that ActAdd preserves the model's knowledge by showing that when the model's accuracy remains unchanged on ConceptNet when asked to steer towards a certain topic.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed approach is straightforward, lightweight, and demonstrates effectiveness on certain benchmarks. However, the experiments conducted only partially support the claims made in the paper (see more details under weaknesses).\\n\\nThe algorithm is well-presented, though some aspects of the experiments could benefit from further clarification.\", \"weaknesses\": \"The paper\\u2019s experiments are interesting but could benefit from further depth and clarity. In some cases, it\\u2019s challenging to fully understand the conclusions drawn from certain experiments. Additionally, some benchmarks created by the authors are quite small, which makes the results appear more anecdotal than empirical. There are also a few discrepancies with the baselines, as well as cases where only portions of larger benchmarks are used (eg. why use only a subset of RealToxicityPrompts and Sentiment? The current experimentation is performed on ~10% of the test split)\\n\\nThe paper would greatly benefit from demonstrating how ActAdd performs on larger benchmarks specifically designed for steering and alignment, such as HelpSteer (1 and 2)[1,2]. Also comparisons to methods that involve alignment training might give some indication on if ActAdd can be used instead of or in tandem with some these approaches in practice [3]. \\n\\nI've summarized my concerns as questions for certain parts of the experiments section\\n\\nQuestions\\n1. ACTADD CAN CONTROL WHAT THE MODEL TALKS ABOUT\\n- Which dataset serves as the starting point for the prompts? Is the experiment based on a single prompt with 100 generations? If so, **using a single prompt might make it difficult to fully verify the claim that \\\"ActAdd can steer the model to talk about a topic.\\\"**\\n- Why does ActAdd perform well for certain topics but not others (e.g., Art)? Is it effective only for steering toward specific topics? Additionally, it is unclear what accounts for the drop at c=0.5 for weddings? This might indicate some experiments on how reliable ActAdd is. \\n\\n2. ACTADD CAN REDUCE TOXICITY\\n- The results in this section could be clearer. The only baseline models are the unsteered model, prompting, and PREADD, while other comparisons, such as FUDGE and AirDecoding, are tested on GPT-2, making direct comparison difficult given the model-dependent nature of the task.\\n- Regarding the other results there seem to be a lot of discrepancies -- The authors pick most of their baselines from (https://aclanthology.org/2023.findings-acl.636.pdf). However, the unsteered OPT result is very different. (0.152 vs 0.134 toxicity and 49.9 vs 8.9 for fluency). With such a large change in fluency, it seems there might be a difference in the experimental setup of the two papers. This throws some doubt if the ActAdds better fluency comes from a different experimental setup. \\n\\n3. ACTADD PRESERVES THE MODEL\\u2019S GENERAL KNOWLEDGE\\n\\nThere are some concerns regarding the setup here. ConceptNet, as a knowledge base, typically requires single-word answer predictions. Showing that the model performs similarly with and without ActAdd doesn\\u2019t entirely demonstrate that ActAdd avoids side effects on the model\\u2019s factual accuracy. Perhaps this could be bolstered with verifying if the factuality of longer form generations remain unaffected. The FactScore benchmark [4] might be a good place to start. \\n\\nFinally, while I attempted to review the provided code for further insights, it was challenging to navigate, and the links listed in tab 5 of the appendix did not seem to work.\\n\\n\\nOverall I believe the approach has potential and the paper could heavily benefit from more thorough and comprehensive experimentation.\\n\\n\\nRefs\\n\\n[1]https://arxiv.org/abs/2311.09528\\n\\n[2] https://arxiv.org/pdf/2406.08673\\n\\n[3] https://arxiv.org/abs/2310.05344\\n\\n[4] https://arxiv.org/abs/2305.14251\", \"questions\": \"Questions added in Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their reply. Here some comments on my side:\\n\\nFirst of all, I find it a bit concerning that the authors did not have an implementation (either theirs, or from the original authors) of an important baseline. Obtaining implementations is essential to provide an accurate comparisons between the new method and baselines. Unfortunately, the authors now state they \\\"had to\\\" do things a certain way to ensure compatibility with the reported baselines, which is not a valid and a very worrisome argument. How did you ensure that all prompts (e.g., no extra spaces or newlines), parameters (temperature, ...), or models (e.g., to measure cosine similarity) etc. are the same across all your baselines if you did not have implementations? Of course, I trust the authors ensured this as far as they could, and I do not think this is a problem if done properly, but it warrants extra caution.\\n\\n **You are correct, thank you for pointing this out. We did not intend to misrepresent the results. FUDGE is indeed reasonably competitive by these metrics, performing better on some and worse on others. We'll clarify in the camera-ready.** \\nThank you. Could you add it by the end of the rebuttal period instead?\\n\\n**Perplexity and cosine similarity are the standard metrics in NLP for measuring fluency and relevance. We needed to include them to enable backwards compatibility with our baselines.**\\nI fiddled around a lot with various algorithms that do internal steering, and found that they sometimes produce nonsense words (i.e., 2% of the words are nonsense while the sentence around it makes sense). This problem is not fully captured by perplexity. I just checked, and the baseline I provided does perform an experiment where an LLM decides which of two completions is the best as an extra experiment.\\n\\n**See line 533: for the given parameter ranges, simple grid search mixed with qualitative sampling of hyperparameters was performed.** \\nThis does not really answer my question. To clarify: how did you decide that one set of hyperparameters was better than the other? Did you have a separate validation set over which you optimized them based on the numbers you got on that validation set? Or did you directly optimize them on the experiment and numbers your report in the paper? In the latter case, this would be quite problematic, especially if no similar search was done for baselines.\\n\\nThe remaining points are clear, thanks for the clarifications.\"}", "{\"comment\": \"RE computational cost: Thank you for your response. I suggest including those additional experiments in an appendix section as it makes the work stronger. If you could upload a revised PDF with a revised appendix within the rebuttal period that would be great.\", \"re_layer_specificity\": \"Thanks for pointing out Figures 3/7, I did notice that in my original read. However, what I was suggesting with my review is that it may be valuable to further discuss AddAct in the context of concept localization with LMs. It may benefit the work to add an extended discussion about concept localization within LMs w.r.t. how AddAct in an appendix section as some of the prior works I referenced above showed that some concepts are localizable to a high-degree while other seemed to span broad ranges of layers within the model.\"}", "{\"title\": \"Response to 6r2T\", \"comment\": \"Thanks for your comments and noting the method\\u2019s advantages!\\n\\nYou can see the relative computational cost (the increase in inference time from steering) in the following experiment which we conducted but did not include in this round: \\n- https://i.imgur.com/fBI32B1.png\\n- https://i.imgur.com/BhT8KPb.png \\n\\nIf you wish, we will include this in the camera-ready. As for the absolute computational overhead: ActAdd is just $n$ extra forward passes (where for instance our gridsearch over GPT-2 layers was $n=306$).\", \"as_to_layer_specificity\": \"Figures 3 and 7 show that ActAdd performs well for a relatively wide range of layers (~10 layers), though we agree it\\u2019s not layer-agnostic.\"}", "{\"comment\": \"I thank the authors for their reply. However, I still disagree with the points they make.\\n\\nBefore going into specific points made by the authors, let me reiterate the main argument: The paper includes references to work published over a year ago stating they are contemporary. While these papers use a very similar method as the authors present, a comparison (e.g., in the experiments) is not presented in the paper. This is specifically problematic since one of those papers outperforms the given method for at least one application.\\n\\n**We agree that revisions are important. However, your speculation about no \\u201cmajor modifications\\u201d is incorrect. For example, we rewrote the entire paper for ICLR.** \\nApologies for my mistaken assumption. My assumption was mainly based on the lack of recent citations, the use of outdated models, and the use of outdated baselines. This is in my opinion the most important part to address in an updated paper, and I believe this has not happened to a sufficient degree in this paper.\\n\\n**Furthermore, works from 2023 would have been submitted to ICLR 2024, which makes comparison in ICLR 2025 more reasonable.** \\nNo, it does not. The fact remains that these works are well-known, contain many of the same ideas, and are not compared against. One year should have given the authors plenty of time to compare against these baselines and improve upon their ideas. If a paper contained similar ideas to a paper published in ICLR 2024, but did not compare against it, this paper should be rejected.\\n\\n**We did not mean to claim that we directly inspired that work (although our ActAdd paper has, in fact, inspired a range of follow-on work). What we said was that those papers \\\"followed\\\" ours, meaning \\\"followed\\\" in a temporal sense. We see how this was unclear and will instead state that those papers \\\"came after\\\" this work.** \\n\\\"Followed\\\" is a very strange word to use in the temporal sense if you do not compare against the work. We are reviewing your paper against the current state-of-the-art, which now includes these papers.\\n\\n**We also observe that their approach outperformed ActAdd on a different task - TruthfulQA - which is not part of our paper. Those two facts do not invalidate our findings or the scientific contribution of this paper.** \\nBut it does! Unless you show that for the tasks on which you evaluate ActAdd, it outperforms their method, I have to assume your method is strictly worse.\\n\\n**We made a scientific discovery (in line with some past evidence from e.g. GANs): LLMs are steerable via linear manipulations of their activation space.** \\nUnfortunately, this scientific discovery has now been made in other papers as well. Please, do not understand me wrong: I do believe this idea is very interesting and worth noting. However, I cannot imagine this be presented at ICLR 2025 as \\\"novel\\\" since it has been already known for a year. In order to be accepted, your paper should now improve upon prior works that use this idea.\\n\\n**Consulting ICLR\\u2019s reviewer guidelines:** \\nThe guidelines clearly state that papers should contain new knowledge. This is not the case anymore. To cite another part of those guidelines:\", \"q\": \"Are authors expected to cite and compare with very recent work? What about non peer-reviewed (e.g., ArXiv) papers? (updated on 7 November 2022)\", \"a\": \"We consider papers contemporaneous if they are published within the last four months. That means, since our full paper deadline is October 1, if a paper was published (i.e., at a peer-reviewed venue) on or after July 1, 2024, authors are not required to compare their own work to that paper.\\n\\nThis part of the guidelines clearly states that you should compare against all works before July 1, 2024.\\n\\n**For another angle on the issue, suppose paper X makes discovery Y. ...** \\nHowever, in this case Paper X' makes the discovery \\\"Y+\\\" since it is outperforming your method and I have not seen evidence to the contrary. \\n\\nBased on the provided information from the authors, I will almost certainly not change opinions on this topic anymore. If the authors want to consult with the AC to discuss the issue, I would be happy to contribute to that discussion. If, after that discussion, the AC instructs me to ignore this point in my review, I will of course do so. However, please note that the remaining weaknesses would still lead me to reject the paper, although those points are much more addressable in a rebuttal and I therefore expect that the situation might improve there.\"}", "{\"title\": \"Response to sGUL\", \"comment\": \"Thank you for your comments! In response, we are preparing the following experiments:\\n\\n1. We used OPT for compatibility with the reported results of our baselines. We will standardise all our experiments on Llama-3 and run all baselines against them (where possible) with the same hyperparameters.\\n2. We used a random n=1000 subset of the benchmarks \\u2013 as is standard in the area, see Pei et al and Dekoninck et al. Increasing the subset size is thus a discrepancy which would weaken the validity of the baseline. We will however rerun our experiments on 10,000 examples from the RealToxicityPrompts and Sentiment test sets and see if there is any difference from our n=1000 run.\\n3. We will also run the topic steering experiments represented by Figures 4 and 7 on more prompts (drawn from the IMDb sentiment benchmark) to demonstrate that the steering works across a variety of prompts.\\n4. We will, if time permits, run the same experiment using Factscore and compare steered and unsteered metrics.\\n\\n\\nIn the meantime, some discussion points:\\n\\n> \\u201cbenchmarks created by the authors\\u201d. \\n\\nWe didn\\u2019t create any of the benchmarks used (RealToxicityPrompts, Perspective, Stanford IMDb, ConceptNet, ). Did you mean the topic steering experiment of Figure 4?\\n\\n\\n> Some discrepancies in results are also notable\\u2014for instance, the paper draws baselines from this paper ((https://aclanthology.org/2023.findings-acl.636.pdf)), but there are differences in the results for the unsteered OPT (0.152 vs. 0.134 toxicity, 49.9 vs. 8.9 fluency). Such large changes in fluency might suggest a difference in experimental setups, which could potentially affect the interpretation of ActAdd's fluency improvements.\\n\\nWe reported our runs for all the baselines we could reproduce. We agree that the discrepancy is regrettable, but did not find the Pei et al hyperparameters. However, the unsteered OPT is less toxic and more fluent in our run, which makes it a stronger baseline and a better comparator.\\n\\nWe thank the reviewer for suggesting HelpSteer. These two papers introduce training datasets for Attribute-Conditioned finetuned models. The HelpSteer papers do not use the dataset as a benchmark (for helpfulness, correctness, etc) but rather other methods such as MT Bench for helpfulness or TruthfulQA for correctness. Do you mean we should use their validation set as a test set?\"}", "{\"comment\": \"The code for the topic steering experiment can be found [here](https://pastebin.com/BLQMXyAu).\"}", "{\"comment\": \"> \\u201cI'm referring to the experiment in sec 4.2. Which dataset serves as the starting point for the prompts? Is the experiment based on a single prompt with 100 generations? If so, using a single prompt might make it difficult to fully verify the claim that \\\"ActAdd can steer the model to talk about a topic.\\\"\\n\\nYes, that\\u2019s right; as noted in Appendix C, \\u201cThe prompt used for all relevance completions is the neutral one: \\u2018Did you know that \\u2019\\u201d. We think this provides good evidence for ActAdd's topic steering capability, given that the base rate of completing \\u2018Did you know that \\u2019 with any particular topic is low.\\n\\n[Our new experiment](https://openreview.net/forum?id=2XBPdPIcFK&noteId=dQXH6dbblN) verifies the claim on a range of prompts: \\u201cWe will also run the topic steering experiments represented by Figures 4 and 7 on more prompts (drawn from the IMDb sentiment benchmark) to demonstrate that the steering works across a variety of prompts\\u201d. Sorry for being unclear!\\n\\n&nbsp;\\n\\n> \\u201cWhy does ActAdd perform well for certain topics but not others (e.g., Art)? Is it effective only for steering toward specific topics? Additionally, it is unclear what accounts for the drop at c=0.5 for weddings? This might indicate some experiments on how reliable ActAdd is.\\u201d\\n\\nWe aren\\u2019t sure. These artifacts disappear in [our new experiment](https://openreview.net/forum?id=2XBPdPIcFK&noteId=dQXH6dbblN) which uses 1000 random prompts, which is encouraging.\"}", "{\"metareview\": \"The paper is motivated by elicitation overhang - prompt engineering may not be able to elicit all the information from a language model. They introduce ActAdd, a method that modifies the inner activations of an LLM during the forward passes to elicit text with a specific property by taking the difference between the activations of a positive and negative prompt at a specific layer. Experiments are presented on two tasks: toxicity reduction and sentiment control.\\n\\n**Strengths:** The paper is well motivated -prompting might not be the only way in which we can get the desired behavior from a model. Activation engineering is an efficient way to alter model behavior without retraining the model.\\n\\n**Weaknesses:** There seem to be several weaknesses in this work. First of all, there was a lack of adequate comparison to baselines, as this is a crowded area of research, which was addressed in the authors\\u2019 response. Second, all reviewers noted a lack of clarity in the details of the presentation of the method, making it challenging to accept the claims by the authors. Overall, the method seems to have somewhat limited capabilities of their method - can do well in some topics but not others. New baselines show that the method performs much worse than other baselines, especially in the fluency of the generated language. \\n\\n**Reason for rejection**: See weaknesses. Most crucially, the lack of clarity as well as empirical weaknesses of the approach has made it very hard for the reviewers to be convinced about the merits of this paper, in its current shape.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers pointed out several weaknesses which were not adequately addressed by the authors. See weaknesses above which summarize these points. The authors\\u2019 response addressed the lack of reliable comparisons by introducing newer baselines. However, these reveal the limitations of the approach, several of which were hypothesized by discerning readers. Reviewers did engage in the discussions with the authors. However, the authors response seems to have not been able to address some concerns brought up by the reviewers.\"}", "{\"comment\": \"Thanks for this! You can see the appendix on overheads as C.1 in the newly revised supplementary information PDF at the top of this page.\", \"we_also_looked_into_the_works_you_mentioned\": \"* [a] Sakarvadia et al 2023 point out that their method relies on the unembedding matrix, which can misrepresent intermediate layers: \\n\\u201c_may portray attention head behavior inaccurately due to representational drift between model layers..._\\\" \\nThey also note that their future work will be layer-specific\\n\\\"_we aim to address this shortcoming in future work by instead using layer-specific learned projections to transform between hidden states and vocabulary._\\u201d\\n\\n* We think [b] Heimersheim et al 2024 isn't quite the same as activation engineering. They use activation patching to interpret model outputs by sweeping over model components to find locations that, if patched, change performance in the task of interest. (i.e. they replace the component with one from another run). This is different from our case: we _add_ to the activations rather than replacing them. Still, it is clearly somewhat related and we've added a short note.\\n\\n* We've added [c] Vig et al 2020 as related work.\"}", "{\"summary\": \"This paper introduces ActAdd, a controlled text generation technique that modifies the inner activations of an LLMduring forward passes to guide text generation towards a specific property. These modifications are applied using steering vectors, computed by taking the difference between the activations of a positive and negative prompt at a specific layer. The results demonstrate that ActAdd outperforms the baselines on tasks such as toxicity reduction and sentiment control.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"Activation addition is an intuitive and powerful technique that enables fine-grained control over model outputs.\", \"The results convincingly show that activation addition outperforms included baselines in both sentiment control and toxicity reduction tasks.\"], \"weaknesses\": [\"The primary issue with the paper is that it is outdated. The paper refers to several works published in 2023 as \\\"contemporary,\\\" implying that they are based on the presented work. This suggests that the paper may have been rejected in previous conferences and is now being resubmitted to ICLR without any major modifications. However, works from 2023 cannot be referred to as contemporary in a submission to ICLR 2025.\", \"Moreover, the claim that both Liu et al. (2023) and Zou et al. (2023) are based on this work is questionable. A quick review of these papers reveals that Liu et al. (2023) merely cites ActAdd as related work, and Zou et al. (2023) actually outperforms ActAdd on one of the tasks. Therefore, I do not believe ActAdd presents any novel idea or result. This undermines the relevance of the method, and I believe this alone is sufficient for rejection. However, if I have misunderstood this point, the authors could clarify their claims.\", \"Additional (and significant) weaknesses include:\", \"Outdated Models: Most of the experiments were conducted on outdated models (OPT, GPT2-xl, and Llama-2). While a few experiments were rerun on Llama-3, there were no baseline comparisons for these models.\", \"Inconsistent Baselines: The models used in the baselines do not match. For example, in Table 3, various models are used without a clear pattern. Ideally, all models should be run for every baseline to ensure fair comparison.\", \"Outdated Baselines: Baselines such as Fudge and PreAdd have been surpassed by newer techniques (e.g., [1]). Additionally, the paper does not include any baselines that use white-box transformations to control model behavior, despite several relevant works from 2023 (Liu et al. (2023) and Zou et al. (2023)).\", \"Inconsistent Perplexity Measurements: Perplexity for the included models was measured using Davinci 002, an old and less effective model. Furthermore, Lines 503-505 state that PreAdd's perplexity was measured using Davinci 001, making direct comparisons between the two methods problematic.\", \"Omission of Fudge: In Lines 378-380, Fudge is omitted, despite performing better on certain aspects and only slightly worse on others. This is a strange misrepresentation of the results.\", \"Redundant Experiments: The experiments in Sections 4.1 and 4.2 add little to the discussion, as they merely confirm that activation addition works. Furthermore, Tables 3 and 4 essentially present the same findings, but in a more interesting and applicable setting.\", \"Basic Metrics: Perplexity and cosine similarity are insufficient metrics to fully capture fluency and relevance. Since controlled text generation methods edit the model's internals, they can yield unintuitive results that these metrics may not fully capture. The authors should include human or LLM-based evaluations to assess the outputs in Tables 3 and 4 and compare them with baselines.\", \"Insufficient Code: The provided code lacks essential instructions and does not include scripts to reproduce the experiments. It only includes some notebooks for experimenting with activation addition, which overlooks the most important reason for providing the code. Additionally, the link to the GitHub repository that is present in the included code (playground.ipynb, top) violates the double-blind review process, as it is not anonymized.\", \"Unconvincing Experiment in Section 4.5: Evaluating a model with activation addition on one or more recent, open-form reasoning benchmarks (such as GSM8k, MixEval, or MMLU-Pro) would be much more convincing than the benchmark with perplexity measurements.\", \"Different Hyperparameters Across Experiments: If I am correct, the results for activation addition were generated using different values for top_p and temperature compared to some baselines (e.g., PreAdd), which undermines the validity of the comparisons. All non-critical hyperparameters should be kept consistent across baselines.\", \"[1] Dekoninck, Jasper, et al. \\\"Controlled text generation via language model arithmetic.\\\" arXiv preprint arXiv:2311.14479 (2023).\"], \"questions\": [\"What is meant by \\\"this section does not have complete statistics\\\" in Line 533?\", \"How was grid search performed for ActAdd's hyperparameters? Were the results reported for the best set of parameters? If so, was a similar hyperparameter search conducted for the baselines to ensure accurate comparisons?\", \"Could you clarify the hyperparameter \\u201ca\\u201d discussed in Appendix C and explain its function?\", \"For which experiments are the prompts mentioned in Lines 1218-1223 used? Appendix C presents a collection of unrelated details, making it difficult to follow and understand how it fits into the overall context of the paper. Could the authors clarify the connection to the experiments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
2VmB01D9Ef
AutoHijacker: Automatic Indirect Prompt Injection Against Black-box LLM Agents
[ "Xiaogeng Liu", "Somesh Jha", "Patrick McDaniel", "Bo Li", "Chaowei Xiao" ]
Although large Language Models (LLMs) and LLM agents have been widely adopted, they are vulnerable to indirect prompt injection attacks, where malicious external data is injected to manipulate model behaviors. Existing evaluations of LLM robustness against such attacks are limited by handcrafted methods and reliance on white-box or gray-box access—conditions unrealistic in practical deployments. To bridge this gap, we propose AutoHijacker, an automatic indirect black-box prompt injection attack. Built on the concept of LLM-as-optimizers, AutoHijacker introduces a batch-based optimization framework to handle sparse feedback and also leverages a trainable memory to enable effective generation of indirect prompt injections without continuous querying. Evaluations on two public benchmarks, AgentDojo and Open-Prompt-Injection, show that AutoHijacker outperforms 11 baseline attacks and achieves state-of-the-art performance without requiring external knowledge like user instructions or model configurations, and also demonstrates higher average attack success rates against 8 various defenses. Additionally, AutoHijacker successfully attacks a commercial LLM agent platform, achieving a 71.9% attack success rate in both document interaction and website browsing tasks.
[ "Large Language Model", "Prompt Injection Attack", "LLM Agent" ]
Reject
https://openreview.net/pdf?id=2VmB01D9Ef
https://openreview.net/forum?id=2VmB01D9Ef
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vB9QGd8NAA", "lD1rWv4uG7", "VuX1PQBLWG", "LKwUQqx6CP", "4qdFX05eXq", "3ORDtU9GFw" ], "note_type": [ "official_review", "official_review", "official_review", "decision", "meta_review", "official_review" ], "note_created": [ 1730505983640, 1729978587034, 1730546225132, 1737524136214, 1734616163138, 1730720354312 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11629/Reviewer_8fKc" ], [ "ICLR.cc/2025/Conference/Submission11629/Reviewer_8Tqa" ], [ "ICLR.cc/2025/Conference/Submission11629/Reviewer_6Gim" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11629/Area_Chair_4WRi" ], [ "ICLR.cc/2025/Conference/Submission11629/Reviewer_rgkp" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces AutoHijacker, an automatic black-box prompt injection attack. Built on the concept of LLM-as-optimizers, AutoHijacker constructs an attack memory through batch-based optimization and selects the most effective prompt injection case during the attack. Experimental results show that AutoHijacker outperforms previous attacks in effectiveness.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper analyzes the limitations of previous LLM-as-optimizers-based methods and proposes improvements to address them.\\n\\n2. The proposed attack is black-box, making it applicable to certain closed-source LLMs, and therefore more broadly applicable than white-box attacks.\\n\\n3. Experiments are conducted on two different benchmarks, comparing the effectiveness of various attacks.\", \"weaknesses\": \"1. The contributions of the paper appear to be incremental.\\n\\n2. The improvement in the results does not seem significant, especially in comparison to the combined attack.\\n\\n3. The paper lacks evaluation against effective defenses.\", \"questions\": \"1. The overall idea of the paper does not appear to be novel. The core concept still revolves around LLM-as-optimizers, which uses LLM responses to optimize attack prompts. This makes the paper's contribution seem somewhat incremental.\\n\\n2. The evaluation results need further refinement. The paper describes the \\u201ccombined attack\\u201d as a grey-box attack, but in practice, it\\u2019s often easy to know the purpose of an LLM application (especially for task-specific LLMs) and craft fake answers accordingly. Constructing a \\\"combined attack\\\" requires no optimization, which is much more efficient than AutoHijacker. Notably, the paper mentions a log length of 30, implying that a successful AutoHijacker attack requires at least 30 optimization iterations. Yet, the results show that AutoHijacker only achieves comparable performance to the combined attack. This suggests that the proposed attack is significantly less efficient.\\n\\n3. The authors consider various defenses in Table 3, yet these defenses have been shown to be relatively ineffective in [1]. Why not test your attack against more robust defenses, such as Known-Answer Detection [1] or StruQ [2]?\\n\\n[1] Formalizing and Benchmarking Prompt Injection Attacks and Defenses\\n\\n[2] StruQ: Defending Against Prompt Injection with Structured Queries\\n\\n4. I recommend including visual examples of AutoHijacker attacks to make the paper easier to understand. For instance, illustrations of specific attack strategies and guides used in the first step, \\\"Meta Prompt Generation,\\\" would be helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work introduces AutoHijacker, an automated black-box indirect prompt injection attack. It leverages the concept of LLM-as-optimizers. Specifically, it introduces a batch-based optimization framework to handle sparse feedback and also leverages a trainable memory to enable the effective generation of indirect prompt injections without continuous querying. Experiments are done on two benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The work presents AutjoHijacker as an automated black-box indirect prompt injection attack, which bridges the current research gap.\", \"The work did a good work in presenting the challenge of sparse feedback in indirect prompt injection tasks., and solve it in a simple and reasonable way.\", \"The results are promising with improvement over existing attacks on several LLMs.\"], \"weaknesses\": [\"I didn't see major flaws in the work and think it would be a good contribution to the community. I only have some questions for the authors regarding the evaluated defenses:\", \"The author did a great job in including defenses from the benchmarks. But I'm still curious how some state-of-the-art defenses could work for the attack: for example, in the work [Yi et al.], they show their white-box defense can reduce indirect prompt injection attack to nearly zero. Would the attack also work for such kinds of LLMs (optimized for defending against indirect prompt injection attacks)?\", \"I would recommend the author when introducing the concept of LLM-as-optimizer, can explain a little bit more before jumping into the challenge of sparse feedback.\"], \"minor\": [\"missing \\\".\\\" line 185\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a black-box prompt injection method that leverages LLMs as optimizers to inject prompts indirectly into LLM agents, utilizing minimal feedback and a trainable memory framework.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The batch-based optimization moves beyond single-injection attacks by utilizing multiple, diverse data to perform batch-based optimization, effectively addressing the sparse feedback issue that typically limits indirect prompt injection attacks.\\n2. The method shows state-of-the-art performance across multiple benchmarks, surpassing other attacks, and demonstrates high success on a real-world LLM agent.\", \"weaknesses\": \"1. Text and images need a better presentation. \\\"Epochs\\\" in figures need improvement for better readability. Terms like Mi,n, Di,n, Si,n are inconsistent which detracts from understanding.\\n2. The paper could further explore the use of diverse victim LLMs within the optimization process, examining how this might impact transferability across models or scales. Does the size or type of this victim LLM affect the overall results?\", \"questions\": \"1. When constructing N training data points, does the study explore the success probability of attacks in relation to different attack goals, variations in external data, and user instructions? Could the testing phase generate specific attack targets based on different query types and attack categories?\\n2. How does the scorer LLM contribute to optimization performance, and could its role be discussed in more detail?\\n3. What is the source and collection methodology for the meta prompts used in the training process?\\n4. How do the hyperparameters ktop and kbottom affect model performance, and could a more thorough analysis of these parameters improve the method's robustness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper received three negative review and one positive review. The main concerns of reviewers are limited novelty, more details of baselines, more evaluation of defense, etc. However, the authors did not rebuttal so there is no discussion and further comments. After reading the paper and all reviews, the AC thinks the current version is still not ready for publication.\", \"additional_comments_on_reviewer_discussion\": \"There is not rebuttal.\"}", "{\"summary\": \"In this paper, the authors propose autohijacker, an automatic indirect black-box prompt injection attack. The results on two benchmark datasets indicate that it can be effective to both open-source and closed source models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1 This paper is easy to follow.\\n\\n2 The experiments are quite solid.\\n\\n3 The soundness of the proposed method is good.\", \"weaknesses\": \"1 My biggest concern is the novelty of the proposed method. Although in Table 1 and Table 2, the results indicate that AutoJacker can achieve outstanding performances. However, the technical contribution only include a batch-based optimization framework and a trainable memory. It is a little marginal to me. However, I am open to this problem and delighted to further discuss with authors and other reviewers.\\n\\n2 Details of the baseline attacks are needed. As far as I know, baseline methods such as PAIR are sensitive to various settings. Therefore, more details are required to provide to demonstrate the comparison is fair.\", \"questions\": \"1 Autohijacker is composed of two stages, including a training stage and a test stage. Therefore, my questions is how the authors divide the training data and the test data in their experiments.\\n\\n2 Autohijacker needs three assistant LLMs, including a prompter, and attacker and a scorer. My question is how to choose those models in authors' experiments. Will stronger attacker bring higher ASR?\\n\\n3 The authors show that AutoJacker can attack GPT-4o. How about other models such as Claude and Gemini?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
2VhFZPYqjE
How to Get Your LLM to Generate Challenging Problems for Evaluation
[ "Arkil Patel", "Siva Reddy", "Dzmitry Bahdanau" ]
The pace of evolution of Large Language Models (LLMs) necessitates new approaches for rigorous and comprehensive evaluation. Traditional human annotation is increasingly impracticable due to the complexities and costs involved in generating high-quality, challenging problems, particularly for tasks such as long-context reasoning. Moreover, the rapid saturation of existing human-curated benchmarks by LLMs further necessitates the need to develop scalable and automatically renewable evaluation methodologies. In this work, we introduce **CHASE**, a unified framework to synthetically generate challenging problems using LLMs without human involvement. For a given task, our approach builds a hard problem in a bottom-up manner from simpler components. Moreover since we want to generate synthetic data for evaluation, our framework decomposes the generation process into independently verifiable sub-tasks, thereby ensuring a high level of quality and correctness. We implement CHASE to create evaluation benchmarks across three diverse domains: document-based question answering, repository-level code completion, and math reasoning. The performance of state-of-the-art LLMs on these synthetic benchmarks lies in the range of 40-60\% accuracy, thereby demonstrating the effectiveness of our framework at generating hard problems. Our experiments further reveal that the Gemini models significantly outperform other LLMs at long-context reasoning, and that the performance of all LLMs drastically drops by as much as 70\% when we scale up the context size to 50k tokens.
[ "Evaluation", "Synthetic data", "Benchmarking", "Question Answering", "Code Generation", "Math Reasoning" ]
Reject
https://openreview.net/pdf?id=2VhFZPYqjE
https://openreview.net/forum?id=2VhFZPYqjE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zWzjW8g1oL", "w24bNkHkyS", "vLGPJfVDVD", "sbk7kEckGC", "qpsVYkpycw", "nPNgI02VhB", "fCtnKl3zBp", "dMN0Nzqakr", "ai8gKXq6ql", "ZUPn9DwfqW", "XIC5J7hVQb", "VlfZvymeAJ", "Vb4h3hsKNd", "Urc6pWZK3n", "UGskUGYmaJ", "RaOJuPo10I", "L7wkJlh8UU", "KExNV9G9QF", "IFDjYnNS0m", "EJ6w9dQoPn", "DkyifWteye", "7WHqkXu6UT", "51fqKUQxXq", "2nQyU2HoMN", "2PXf9IZBni", "1ZcqxxwDbc" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731708206295, 1732495940682, 1732483729902, 1734984158193, 1732562883435, 1730609370640, 1731963748893, 1731708587491, 1732558696561, 1732413504962, 1732176260869, 1732213154471, 1732157087581, 1730487614517, 1732496145710, 1737524167138, 1733287989656, 1731709269269, 1731709176275, 1729127857844, 1732212786237, 1732156932983, 1731708781553, 1732486331194, 1732845916414, 1732845861543 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Reviewer_yxKN" ], [ "ICLR.cc/2025/Conference/Submission12112/Area_Chair_WPXM" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Reviewer_CAvC" ], [ "ICLR.cc/2025/Conference/Submission12112/Reviewer_gKcy" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Reviewer_yxKN" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Reviewer_yxKN" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Reviewer_gKcy" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Reviewer_yxKN" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ], [ "ICLR.cc/2025/Conference/Submission12112/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Review\", \"comment\": \"Thank you for reviewing our paper. We are glad that you found our paper impressive and the results compelling. Please find our response to specific comments below.\\n\\n**[W1]** *Experimental results could have been deeper\\u2026but I am not too sure about its sensitivity and robustness*\\n\\nWe have now increased the size of the code and math datasets by more than twice as much. Further, we have shown that our framework is robust enough to be applied to three very different types of tasks, and succeed in generating challenging problems for all of them. We have also shown that we can generate data using powerful models (such as GPT-4o) and comparatively weaker models (such as Llama-3.1-8B). We hope our experiments with increased data size address your concern about robustness. If you would like more analysis, could you please specify the kind of analysis you would like?\\n\\nWe were encouraged to see that you had given a score of 8, and indeed your review (\\u201cIt is for this reason that I am not inclined to give the paper a stellar rating\\u201d) reflects that. We kindly ask you to consider raising the score if your concerns are addressed.\"}", "{\"title\": \"Request to engage in discussion\", \"comment\": \"We kindly ask you to increase your score if your concerns are addressed or to engage in a discussion with us. There are only 2 days left in the discussion.\"}", "{\"title\": \"Thanks a lot for the reply\", \"comment\": \"First of all, I am grateful to the opportunity to engage in these meaningful discussions with the authors.\\n\\nThe CHASE-Code focuses on generating new functionality based on its precise description. In the paper, the description is \\\"Given a repository of Python functions, the task is to implement a new function based on a set of objectives provided in natural language\\\". If I understand it correctly, this seems to be the only type of code generation problem considered in the paper. Additionally, the data only covers two domains (which may not be regarded as \\\"broad\\\"). I think the synthetic benchmark will be more valuable if it is large, diverse and comprehensive as the main claim here is that it requires much less human efforts, so should be very easy to scale up. The current version with only 220 problems in the dataset seems to be also practical for human annotators to finish in a reasonable period.\\n\\nBased on the example presented in the middle of Figure 2, I recognize that the natural language description is very detailed. I think there is a difference between precise/clear and detailed instruction. In realistic scenarios, it is not common to see a code generation problem specified in such a detailed manner (It is so detailed that even which specific parameter (price_col in Figure 2) to focus is described). In LeetCode, the objective describes the target goals instead of giving implementation details, while in realistic scenarios to solve an issue in Github repo, users or LLMs even need to figure out which file or while function to modify. I think these will reflect more of the genuine needs for code generation.\\n\\nRegarding controllability, I recognize the good motivation of authors. However, I believe that the authors need to find better scenarios to demonstrate it. I think a good scenario people would want to use the synthesized benchmark is: (1). there is very rare existing data to leverage; (2). The scenario is realistic and important. Specific to code generation, I think it is not very difficult to find problems in the domain data pre-processing and algorithms from Kaggle, LeetCode, codeforces, etc. If the author can find good scenarios and control the data generation on them, I think it will make more sense to me.\\n\\nHope these address the author's questions.\"}", "{\"metareview\": \"This paper presents a systematic approach to synthesize challenging compositional problems for LLM evaluation in math, coding and general question answering. The core idea is to take a \\u201cbottom-up\\u201d approach to gradually compose simpler sub-tasks that are easier to verify to form more challenging benchmark problems. The authors showed that state-of-the-art LLMs only attain 40%-60% accuracy on the generated benchmarks, and claimed that such evaluation results demonstrated the effectiveness of the proposed approach in generating hard evaluation problems.\\n\\n**Strengths:**\\n\\n* The paper is generally well written (CAvC, yxKN) and well-structured with comprehensive appendices (gKcy). The method also has a clear motivation (CAvC, yxKN). \\u201cThe problem addressed by this paper is critical to the evaluation of LLMs\\u201d (gKcy).\\n\\n* The paper presents a \\u201cnovel paradigm for data construction\\u201d (gKcy). The authors demonstrated the applicability of the benchmark synthesis approach on three distinct domains (CAvC).\\n\\n* Comprehensive evaluation results \\u201ccovering representative proprietary and open-source models\\u201d (yxKN). Reviewer CAvC also found the results \\u201cfairly compelling\\u201d as the benchmark \\u201cindeed succeeds in yielding performance drops even from advanced models\\u201d.\\n\\n**Weaknesses:**\\n\\nAfter the rebuttal period, there are several issues that are yet to be addressed.\\n\\nFirst, while the results suggest that LLMs do not perform well on this dataset, the paper lacks intrinsic evaluation on the complexity and difficulty of the synthesized problems (gKcy). The authors could consider using well-established domain-specific metrics to measure problem complexity, such as using the number of lines or the size of ASTs to approximate program complexity for CHASE-CODE. As a general suggestion, while the reviewer did not specify the exact metrics to use in their review, the authors would have been more proactive in the response period and report any potentially reasonable metrics in order to better address the reviewer\\u2019s concern.\\n\\nAnother potential issue is the quality of the synthesized problems (gKcy). While we appreciate the authors\\u2019 effort in reviewing O(30 - 100) problems in the three datasets, given the total size of each dataset (around 500), it would be more convincing to carefully review at least 20% examples (i.e., 100 problems) in order to reach any statistically significant conclusions. The authors already demonstrated that the CHASE-MATH dataset is in relatively decent quality by reviewing 100 examples, and I strongly suggest the authors review a similar number of tasks in the other two datasets.\\n\\nNext, as flagged by Reviewer yxKN, since there already exist high-quality, challenging datasets for repository-level code editing (SWE-bench), it is less clear that from an empirical perspective, what additional values would the new synthetic CHASE-Code dataset bring to the code LLM community. While we acknowledge that CHASE-Code focuses more on *generation* Instead of code editing, there are existing high-quality repo-level code generation benchmarks derived from real repository-level context, such as DevEval (https://arxiv.org/abs/2405.19856). I totally agree with the authors that the value of this paper lies more in the proposed data synthesis approach, instead of the datasets produced. However, a significant portion of this paper also comes from the value of the benchmarks the proposed approach synthesized, it is hard for me to assess the practical value and implications of CHASE-Code. In the future, maybe the authors could take the reviewer\\u2019s suggestion, and explore additional use cases or domains in coding where the proposed method could synthesize benchmarks with more practical value to the practitioners. On the other hand, the authors could also consider exploring whether your synthetic datasets like CHASE-Code could complement or correlate with existing benchmarks like SWE-Bench or LMSYS coding, or other datasets that require more laborious human efforts. In this way, the authors could more clearly demonstrate the value of CHASE as it provides a significantly more cost-effective approach to create novel code benchmarks.\\n\\n\\u2014\\n\\nFinally, I wanted to note that the authors\\u2019 attitude during the rebuttal period is unprofessional and might potentially violate ICLR code of ethics (\\u201cResearchers must show respect for colleagues, research participants \\u2026\\u201d). It is totally understandable that you may find reviewers might take a different perspective when judging your work, and it is critical to professionally resolve any concerns or misunderstandings via peaceful communication in a respectful manner. While I did not take this into consideration when rating your work this time, I wish the authors could bear this in mind in the future.\", \"additional_comments_on_reviewer_discussion\": \"There are other issues, such as questions around the size of the datasets, which are addressed during the rebuttal period.\"}", "{\"title\": \"Response\", \"comment\": \"We have provided concrete arguments against your concerns. We feel you are just repeating your concerns again without pointing out any flaws in our response. We shall again provide counter-arguments for your points.\\n\\n**Data Size.** We don\\u2019t generate more data out of choice, and not just because of the cost. Indeed the cost and speed of generation would be much faster compared to humans (see Table 5). We made this choice because we want to keep our datasets accessible for other researchers (even currently, it costs $50 to run a single model inference once). The **value in our synthetic benchmark does not come from the quantity of examples, rather the value that we want to emphasize is that we are 'automatically' able to create 'challenging' problems, irrespective of the scale**. And considering precedence of other benchmarks as cited in our previous response, this scale is sufficient to serve for evaluation. Lastly, note that the **CHASE approach by design is quite easily scalable** (as we even showed by quickly scaling from 220 to 500 examples in CHASE-Code and generating 10k examples for the fine-tuning experiments). Its just that we didn't feel the need to scale it too much for these evaluations.\\n\\n**Regarding point 2.** We have provided ample arguments in defence of our code generation scenario. Our scenario is \\u201crepository-level code completion\\u201d with \\u201ccomplete specification of desired functionality\\u201d. Yes, we agree complete specification is hard to find in real scenarios, but it is still objectively an easier task to accomplish compared to its more real counterpart (less-specific problem statement) and we have shown LLMs still fail on it. This makes our benchmark an essential step towards evaluating the completely realistic scenario.\\n\\n**Regarding point 3.** You are still talking about data collection with GitHub \\u201cissues\\u201d. We have already explained we focus on a different code generation task: that of generating new code functionality based on user intent or problem statement. Moreover, we ask how you would get example-specific test code? We have already explained, obtaining tests is a huge bottleneck in \\\"scraping-existing-code\\\" creation approaches. This is not the case for our approach.\\n\\nPerhaps to better understand our motivation, we ask you to think about this realistic task: \\u201cA user is working on a codebase. They now want to implement a new functionality. They specify a description of the functionality to the LLM and the LLM generates the corresponding code\\u201d. How would you create a benchmark to evaluate this task? Note that this is significantly different from the SWE-Bench task which focuses on fixing issues and bugs. Human annotation for such tasks is expensive and requires high expertise. With CHASE-Code, we simulate this scenario, only with better-specification (which is an easier task).\\n\\nWe again remind you that your concerns (that we have provided arguments for) are limited to CHASE-Code, making your score quite disproportionate. Our main contribution is the general framework to create challenging synthetic data for evaluation across multiple domains. This is the first time this problem is being studied. We have also shown its applicability for two other scenarios such as document-based question answering and math reasoning.\\n\\nWe are not troubled by different opinions, indeed we welcome constructive criticism. However, we request you to kindly take stock of our arguments and attempt to refute them if you are still dissatisfied.\"}", "{\"summary\": \"The authors introduce CHASE, a unified framework to synthetically generate challenging problems\\nusing LLMs without human involvement. For a task that is given, the approach builds a hard problem\\n in a bottom-up manner from simpler components. It decomposes the generation process into\\nindependently verifiable sub-tasks to ensure a high level of quality and correctness.\", \"chase_is_designed_to_address_two_challenges_that_the_authors_state_succinctly_on_pages_1_and_2\": \"first, how can it be used to create hard and realistic problems, and secondly, how can it be used\\nto automatically verify the correctness of the generated data? This second challenge is especially\\nprevalent in other work of this nature that is attempting to construct synthetic evaluation benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"--The paper is written at an impressive quality, especially the figures and the elucidation of the problem's motivation and challenges.\\n--The authors consider three sufficiently diverse tasks and benchmarks to showcase the utility of their approach.\\n--The results are fairly compelling, and the benchmark indeed succeeds in yielding performance drops even from advanced models.\", \"weaknesses\": \"--Experimental results could have been deeper in the main text. It is for this reason that I am not inclined to give the paper a stellar rating.\\n--The approach is simple and has some nice properties, but I am not too sure about its sensitivity and robustness. I felt inadequate attention was paid to this in the paper.\", \"questions\": \"None.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Authors\", \"comment\": \"I appreciate your detailed and patient response. I have carefully reviewed your revised paper (which I noticed mainly added content to the appendix) as well as your discussions with other reviewers. Below are my comments:\\n\\n**Regarding your response to [W2]:** First, I understand that the primary contribution of this work is the bottom-up automated framework for constructing challenging problems. However, I am not the only reviewer who raised concerns about the dataset size. Current mainstream benchmarks typically consist of at least a few thousand examples. As for the cost of generating more data, I believe that this is an important factor that an effective and broadly applicable framework should take into account. From the users\\u2019 perspective, both the time and financial cost of constructing data are clearly essential considerations (I would also like to see experiments related to this aspect). This is especially relevant since the primary area of this paper is **\\\"datasets and benchmarks\\\"** (which also relates to your statement, \\\"The byproducts are the resulting datasets.\\\" In my opinion, the dataset itself should still be one of the major contributions in this track).\\n\\n**Regarding your response to [W3]:** You mentioned that \\\"there exists no comprehensive framework...\\\" I believe that since the title of the paper highlights \\u201cChallenging Problems,\\u201d it would be helpful if you compared the difficulty of the data generated by CHASE with that of other similar datasets. This would allow us to better understand the quality of CHASE dataset. This is unrelated to whether the process is fully automated or whether it addresses the \\u201csimultaneous solution to the 2 core problems\\u201d. Furthermore, I think that your framework does not \\\"offer a simultaneous solution to the two core problems,\\\" since the generation process for QA, code, and math datasets differs in terms of prompts and procedures. The only commonalities among them seem to be the use of a LLM generator, a verifier, and the \\\"bottom-up\\\" concept. I may have misunderstood, so I welcome your clarification.\\n\\n**Regarding your response to [Q2]:** For the manually verified results, you mentioned that 6%-7% of the data is incorrect. In extreme cases, this means that the accuracy range for LLMs evaluated with CHASE could be off by as much as \\u00b17%, which may be unacceptable in the current LLM evaluation landscape. This is especially concerning, as newly released models often outperform current SOTA LLMs by only 1%-2% on some evaluation sets.\\n\\n**Regarding your response to [Q3]:** It seems you may have misunderstood my question. My concern is that the experiments presented in Table 2 aim to demonstrate that the CHASE dataset produces more challenging problems than direct prompting approaches, as you mentioned in your response to [W3]. Therefore, I believe it is unfair to filter out easy examples from the CHASE dataset and then compare it with direct prompting approaches. At the very least, this comparison should either include or exclude filtering for both approaches to ensure a fair comparison.\\n\\n**Regarding your response to [Q4]:** I found the example in Figure 8 inappropriate. The sentence \\\"He decides to continue running but at double the distance he covered during his recovery week for each day the next week, aiming to improve his overall performance.\\u201d is highly ambiguous. I tested it, and if the sentence is changed to \\\"The distance he ran each day was twice the total distance he ran during the recovery week,\\\" Gemini-1.5-Pro can answer it correctly. This makes me question whether there are many similar issues in the CHASE-Math dataset.\\n\\n**Regarding your response to [Q5]:** First, if simply increasing the context size can make the problem much more challenging, then how does this demonstrate the significance of CHASE? For example, you should conduct experiments comparing the performance of LLMs on CHASE problems with irrelevant context added, versus regular (seed) problems with irrelevant context added, to observe the trends in accuracy. Furthermore, the experiment with fine-tuning smaller LLMs seems insufficiently rigorous. You cannot conclusively state that stronger open-source models (e.g., llama3-70B) would not be able to \\\"hack\\\" the benchmark, as the success of hacking may depends non the strength of the LLM.\\n\\nIn conclusion, I appreciate your efforts to address some of my concerns. However, not all issues have been resolved, and thus **I am unable to raise my score at this time**.\"}", "{\"title\": \"Response to Weakness and Questions\", \"comment\": \"Thank you for reviewing our paper. We are glad that you found our research problem critical to study and our approach novel and innovative. Please find our response to specific comments below.\\n\\n**[W2]** *The current dataset is relatively small*\\n\\nWe have carried out new experiments that have significantly scaled the size of the data. We now have 500 problems for CHASE-Code, and 500 problems for CHASE-Math (more than twice compared to before). We believe these sizes are sufficient to draw conclusions from our experiments. Further note that CHASE-QA and CHASE-Code are long-context benchmarks, and it would be prohibitively expensive (making them less accessible for other researchers) to test models on them if they contain too many examples. \\n\\nIt is also important to note that the main contribution of this work is an end-to-end framework (i.e., the CHASE method) that can be used to automatically generate as much data as needed\\n\\n**[W3]** *Some experimental designs lack strong motivation*\\n\\nWe have addressed this in the answer to Q5 below.\\n\\n**[W3]** *there is a lack of experiments that demonstrate the advantage of CHASE over other synthetic data generation methods*\\n\\nWe would like to highlight that there exists no comprehensive framework to generate synthetic data for evaluation (for detailed discussion, please check our related works section). While there are many pipelines for generating synthetic data for training, they offer no simultaneous solution to the 2 core problems when generating data for evaluation (which our approach targets) - difficulty (for the generating LLM itself) and automatic verification. We did indeed compare CHASE with two popular synthetic data generation pipelines - self-instruct [1] and evol-instruct [2] (see L457-467 and Table 2), and concretely show the benefits of CHASE along the aforementioned dimensions.\\n\\n**[Q1]** *Figure for CHASE-Code.*\\n\\nWe felt it was redundant and that it would clutter the main figure. We have now provided the figure in the appendix (see Fig. 4 on page 20).\\n\\n**[Q2]** *how can we ensure that the data in the CHASE-QA, CHASE-Code, and CHASE-Math datasets are correct?*\\n\\nWe have manually verified each data point in CHASE-Math. It is impractical to manually verify each example in CHASE-QA and CHASE-Code because the context length for each example is 5-20k tokens. Further CHASE-Code requires a high level of technical expertise for verification. For these reasons, we randomly sampled 30 examples from CHASE-QA and CHASE-Code each, and manually verified them ourselves (discussed in L524-L533), which gives us a high level of confidence about the correctness of the data. To put in context the impracticality of manual verification, it took the author over 10 hrs to verify these 60 examples. \\n\\n**[Q3]** *you mentioned that approximately 33% of the data was filtered out\\u2026Would you still claim that CHASE-QA is a more challenging dataset?*\\n\\nWe are simply discarding a portion of the generated data that we know can be easily solved even by weaker models. Including such \\u201ceasy\\u201d examples does not serve much purpose in an evaluation benchmark if most models are going to be able to solve them. Indeed, our goal in this paper is to automatically find challenging problems that LLMs will struggle to solve, so we prioritize difficulty of problems over quantity. If we added those examples back, we expect the accuracy to be higher. However, ideally, we would like our method to create benchmarks where the performance of models reflects as much room for improvement as possible.\\n\\nIn reference to the experiments in Table 2, note that the same type of filtration was carried out for the baselines as was done for CHASE (we have now made this explicit in the paper: L462). Hence, yes we would still conclude CHASE-QA is more challenging (apart from having much higher quality).\\n\\n**[Q4]** *Intuitively, if the tested LLMs reason and calculate sentence by sentence, the accuracy may be significantly higher than under your current naive prompt.*\\n\\nIntuitively, we agree, and this is how humans might reason and perform well on this benchmark too. However, LLMs are still prone to various kinds of mistakes. We experimented with a new prompt following your suggested intuition (see section C.1, Fig. 29 and Table 5). While the performance of models does increase by ~3%, the task is still very challenging for the models. We have also provided an example of an error made by Gemini-1.5-Pro under this new prompt (see Fig. 8) - the model solves perfectly till sentence 6, but then forgets that it has to use the previously calculated value for the next step.\"}", "{\"title\": \"Response\", \"comment\": \"1. I think the dataset size with 220 or 500 examples is too small to demonstrate the value of synthesized benchmark, because it is not even larger than many human-curated benchmarks. I think authors need to provide more justifications on why it will be difficult or time-consuming for the synthesized dataset to be larger.\\n2. If the synthetic dataset does not reflect realistic scenarios, then I think the authors need to justify the motivation about why we need such a benchmark, or in what cases people will want to synthesize a benchmark, etc.\\n3. To find existing resources on data-preprocessing or algorithms, I think issues in repos like pandas or numpy will provide related problems. Overall, I think it would be more valuable to demonstrate that constructing benchmarks with existing resources is difficult, so we need to rely on synthetic data.\\n\\nAdditionally, although I may have different opinions from the authors, I hope that the author's response could be less aggressive (although this is very minor).\"}", "{\"title\": \"Request to read author responses and adjust scores\", \"comment\": \"Dear reviewers,\\n\\nWe have provided detailed responses to address your concerns and questions. Further, we have carried out many new experiments to support our claims and arguments. Since there are only 3 more days of discussion left, we request you to kindly look at our responses. If your concerns are addressed, we kindly ask you to increase your score. If not, we encourage you to respond to us and engage in a discussion.\"}", "{\"title\": \"Response\", \"comment\": \"I think the efforts to curate SWE-bench are not very significant, as they use existing Github issues, repos and verified commits with test cases. The reason I mention this dataset is not to ask for a better quality of generated benchmark but to seek stronger motivations about why we want to use a synthesized code generation benchmark if a very realistic one already exists. I feel like the way to create SWE-bench is scalable, e.g., they can efficiently collect a more difficult one using the same pipeline in other repos. The evaluation on such benchmarks will be the most accurate to reflect how capable LLMs are in assisting programmers in software engineering tasks.\"}", "{\"title\": \"Request to engage in discussion\", \"comment\": \"We have now significantly increased the size of the benchmarks. Further, we have experimented with another prompt type for CHASE-Math, provided a more fine-grained evaluation for CHASE-QA, and compared the difficulties with other challenging datasets in the corresponding domains (see Appendix C).\\n\\nWe kindly ask you to consider raising your score if your concerns are addressed or to kindly engage in a discussion with us.\"}", "{\"title\": \"Response to Reviewer (Cont'd)\", \"comment\": \"**Regarding Context Size.** Perhaps you have misunderstood our approach. For the long-context domains, i.e., QA and Code, we do not start with any seed example. The examples are generated completely from scratch. Without the CHASE framework, you will not be able to craft a valid example to add irrelevant context to. Note that the context size is just one dimension of difficulty, there are many others (considering the low accuracies we see even without adding irrelevant context). Further note that there is no element of difficulty arising from context size in CHASE-Math.\\n\\n**Regarding Finetuning.** Note that our claim is only meant for much weaker models (we have made this more clear in the text now). We only wished to focus on fine-tuning with smaller models of ~7B scale since that can be done more accessibly. Finetuning larger models introduces a lot more complexity and is currently outside the scope of this work, which primarily focuses on evaluation.\"}", "{\"summary\": \"To address the high cost of manually labeled data and the low quality of synthetic data, this paper proposes the CHASE framework. CHASE is a framework for automatically generating QA pairs, code problems, and math problems. It adopts an innovative bottom-up structure and divides the overall task into individually verifiable sub-tasks. This allows a seed problem to progressively increase in difficulty through multiple rounds of generation and verification. Experimental results show that the data generated by the CHASE have a certain degree of difficulty.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The problem addressed by this paper is critical to the evaluation of current LLMs -- the lack of comprehensive and challenging datasets.\", \"The paper is well-structured, with comprehensive appendices, such as a detailed list of prompts used in CHASE.\", \"This paper presents a novel paradigm for data construction, which may have significant potential in the field of synthetic data.\"], \"weaknesses\": [\"Some issues with the details of the paper. For example, in the main figure (Figure 1), the bottom-right corner should say \\\"12 pens\\\" instead of \\\"18 pens.\\\"\", \"The current dataset is relatively small, which may result in a high degree of randomness in evaluation results when using this dataset.\", \"The experiments are not sufficiently thorough. Some experimental designs lack strong motivation, and there is a lack of experiments that demonstrate the advantage of CHASE over other synthetic data generation methods.\"], \"questions\": \"1. Why does Figure 1 only provide an overview of constructing CHASE-QA and CHASE-Math, but not CHASE-Code? I believe all three should be at the same hierarchical level.\\n2. Without human verification, how can we ensure that the data in the CHASE-QA, CHASE-Code, and CHASE-Math datasets are correct? Is there a possibility that the ground truth/golden answer in the dataset themselves are incorrect?\\n3. In lines 333-340, you mentioned that approximately 33% of the data was filtered out through sampling and self-consistency, and subsequent experiments (e.g., Table 2) suggest that CHASE-QA generates more challenging data. I think it is unconvincing. If the 33% of the data were added back, how would the experimental results change? Would you still claim that CHASE-QA is a more challenging dataset?\\n4. From the examples given in the paper, CHASE-Math seems to concatenate a series of atomic problems. Intuitively, if the tested LLMs reason and calculate sentence by sentence, the accuracy may be significantly higher than under your current naive prompt. Could you elaborate further on how CHASE-Math is more challenging, given the point I raised?\\n5. What is the motivation behind the experiments in lines 469-477 and lines 486-493? In my understanding, the \\\"Impact of context size\\\" is not the focus of this paper. Also, the experiment in lines 486-493 only fine-tuned weaker models. Would the same conclusion apply to fine-tuning stronger models?\\n6. Could you provide some comparative experiments between the CHASE dataset and other synthetic datasets, such as a comparison between CHASE-QA and existing long-context benchmarks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request to engage in discussion\", \"comment\": \"We kindly ask you to increase your score if your concerns are addressed by our responses and additional experiments. There are only 2 days left in the discussion period.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Summary of Discussion and Final Comments\", \"comment\": \"**Reviewer CAvC**\\n\\nThe reviewer did not participate in the discussion.\\nIn their review, they made a vague comment about \\u201cnot being sure about sensitivity and robustness\\u201d. This statement is not backed by any concrete point or weakness and we have responded by explaining how our method and experiments are actually quite robust (we showed it works for multiple domains, it is scalable, works with weaker LLMs, etc).\\nFurther, we highlight that **the reviewer had initially given a score of 8, and then lowered it to 6 on the day the reviews were released without changing the review text and without waiting for our response**. This behaviour is quite unethical and we request the AC to consider their previous score in their deliberation.\\n\\n**Reviewer gKcy**\\n\\nWe believe the reviewer has some unfounded concerns which we have sufficiently addressed in our responses.\\n\\n**Size of data.** As already explained in detail in our responses, for long-context benchmarks, ~500 data points is the standard size. Asking for more data than that is against the spirit of open and accessible science since it will be prohibitively expensive to run models on the benchmark. Moreover, we have provided evidence of popular benchmarks and past papers published at ICLR whose sole focus is proposing a benchmark of just ~200-700 examples. Lastly, the benchmarks are not the main contribution of this paper, rather it is the underlying approach that was able to automatically generate the data. Hence, we strongly believe this concern to be unreasonable.\\n\\n**Correctness.** We have already clarified that the generated data for code and math domains is completely correct. There is a possibility of a small percentage (~6%) of errors in the QA benchmark. We address this by providing softer metrics of evaluation (see detailed response) and by noting the large gaps between performances of models. Further, we highlight that the level of correctness of generated data achieved by our method is far superior to other baselines.\\n\\n**Comparison of difficulty.** We have provided comparison of performance of LLMs on our benchmarks against other difficult benchmarks in the respective domains. The reviewer has failed to specify what other kind of \\u201cdifficulty analysis\\u201d they were hoping to see. We believe it is clear from our results that our generated benchmarks are indeed quite challenging.\\n\\nThe reviewer had misunderstood some aspects of our paper such as the context size and fine-tuning experiments, which we have clarified in detail.\\n\\n**Reviewer yxKN**\\n\\nThe reviewer has raised two concerns which we believe are not valid. We have already summarized our arguments for the size of data. The other point raised by this reviewer is that our experiments pertaining to the code domain do not reflect realistic scenarios. We believe this point is completely baseless and we have provided a detailed explanation in our response. Further note that the reviewer has no other standing concerns against our method or our experiments with the other 2 domains.\\n\\nOverall, we feel that **reviewer gKcy and yxKN\\u2019s scores are quite disproportionate corresponding to their standing concerns**. They have provided no concrete counter-arguments to our responses. This work provides the first comprehensive approach to generate challenging, high-quality synthetic data for evaluation across multiple domains. The **reviewers agree that the problem that we study is critical** (CAvC, gKcy), the **paper is well-written** (CAvC, gKcy, yxKN), the **approach is novel** (gKcy), the **experiments are comprehensive** (yxKN), and the **results are compelling** (CAvC). None of the reviewers have pointed out any technical weaknesses in our proposed method or the main results (Table 1), which are the main contributions of this work.\"}", "{\"title\": \"Response to Questions\", \"comment\": \"**[Q1]** *How to organize functions into files to build repositories from scratch in CHASE-CODE?*\\n\\nWe randomly sample irrelevant helper functions and combine them with the relevant helper functions for a particular problem. The ordering of these functions is then randomly permuted and distributed among 10 python files.\\n\\n**[Q2]** *Could you specify more details on rejection sampling?*\\n\\nWe run GPT-4o-mini twice (with temperatures 0.3 and 0.7) on the generated problems. Depending on the difficulty of the task, we remove a percentage of problems that GPT-4o-mini answered correctly on both runs. The reasoning is that we believe such problems will be easy to solve for most of the SOTA models as well and therefore we decrease their population in the final dataset to yield a challenging benchmark with lots of room for improvement.\"}", "{\"title\": \"Response to Weakness\", \"comment\": \"Thank you for reviewing our paper. We are pleased to see that you found our results comprehensive and the paper to be well-written. Please find our response to specific comments below.\\n\\n**[W1]** *SWE-bench [1] also focuses on repo-level code generation*\\n\\nWe emphasize that our main contribution is the end-to-end framework for generating challenging synthetic data for evaluation, and we show how it can be applied in the code generation domain. \\n\\nOur goal is not to present a code generation benchmark that competes with benchmarks like SWE-Bench. Our generation process is a complementary method to SWE-Bench's way of creation. Moreover, it is reasonable to assume that models will eventually become capable at SWE-Bench in the near-future (either due to contamination or due to genuine progress). Since such high-quality data curation will become increasingly difficult to do manually, it is very important to explore alternatives, including synthetic data evaluation strategies. Our method (and possibly future improvements) can automatically generate hard data that is challenging for even the most capable models themselves. Moreover CHASE facilitates a much higher level of controllability, i.e., we can generate the specific types/domains of code that we want to evaluate or analyze (like we did for algorithms vs data-preprocessing) and it is not bottlenecked by the availability of high-quality repositories with exhaustive tests.\\n\\n**[W5]** *data contamination not a big concern for challenging benchmarks\\u2026 even if codellama [2] has been intensively trained on Github data, its performance is still low on SWE-bench.*\\n\\nWe would like to note that the Codellama paper [2] clearly mentions it has not been trained on meta-level or temporal information such as issues or commits. As for the other SOTA models, they do not disclose their data so it is hard to comment on whether or not SWE-bench data is already a part of their training set.\\n\\nWe respectfully disagree that data contamination is not a big concern. There is significant evidence to suggest that models are showing improved accuracies at benchmarks like GSM8k [3] because of contamination [4][5].\\n\\nIn contrast, the contexts and problems of datasets created using CHASE are completely novel for all models, which makes for a better test of generalization. Moreover, if and when these datasets get saturated/contaminated, new test suites can be sampled from the (more powerful) LLMs of the future (possibly with improved iterations of our pipeline).\\n\\nLastly, note that the SWE-Bench idea is relatively new (<1 year old), so it is possible that leading AI companies have not yet incorporated this type of meta-data in their training set. However, it is reasonable to expect they will do so, in a format similar to the one used in SWE-Bench, to train future iterations of LLMs, which will then lead to a much higher performance.\\n\\n**[W2]** *To demonstrate that this pipeline is scalable, I think it is important to generate data of large size and apply it to training*\\n\\nWe respectfully disagree and it is a mischaracterization of our work. Our focus in this work is to obtain challenging data for evaluation. We have already generated a significant amount of data (our new experiments more than doubled the amount of data we generated earlier). We can generate more if needed, but generation is not cheap, and we believe that what we have generated so far is sufficient for challenging the currently available LLMs.\\n\\nFurther note that we indeed generated ~10k math problems using Llama-3.1-8B for our finetuning experiments (L486-493). Hence, we have demonstrated that our approach is clearly scalable.\\n\\n**[W4]** *why better performance of models different from the generator and verifier can indicate better data quality.*\\n\\nOur point with this statement was to suggest that the generated data is not too biased towards the generator or the verifier since other models are also performing better. However, this is a minor point, and we are willing to remove it if it is still confusing.\\n\\nWe hope we made the focus and scope of our work clear and clarified potential misunderstandings. We kindly ask you to consider raising the score.\", \"references\": \"[1] Jimenez et al. (2024). SWE-bench: Can Language Models Resolve Real-World GitHub Issues? In ICLR.\\n\\n[2] Rozi\\u00e8re et al. (2023). Code Llama: Open Foundation Models for Code. In Arxiv: 2308.12950.\\n\\n[3] Cobbe et al. (2021). Training Verifiers to Solve Math Word Problems. In Arxiv:2110.14168.\\n\\n[4] Zhang et al. (2024). A Careful Examination of Large Language Model Performance on Grade School Arithmetic. In Arxiv:2405.00332.\\n\\n[5] Mirzadeh et al. (2024). GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models. In Arxiv:2410.05229.\"}", "{\"summary\": \"This paper introduces CHASE (CHallenging AI with Synthetic Evaluations), a framework for generating challenging evaluation benchmarks using large language models (LLMs). The authors implement CHASE to create benchmarks in three domains: document-based question answering, repository-level code completion, and math reasoning. Experiments with 15 LLMs show that the generated benchmarks are challenging, with even top models achieving only 40-60% accuracy across domains. The authors demonstrate CHASE's utility in differentiating between state-of-the-art models and revealing performance drops with increasing context length. They argue that this approach offers advantages in scalability, renewability, and ability to evaluate tasks difficult for humans to assess, while providing high-quality, challenging problems for LLM evaluation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The experiments are comprehensive, with a good set of LLMs covering representative proprietary and open-source models.\\n2. The paper is well-written, which clearly describes the methods, experiments and results.\", \"weaknesses\": \"1. Although overall I believe it is valuable to explore data synthesis for benchmark construction, I think the authors should be more careful in selecting appropriate settings. I think the most important motivation for this paper is that it is expensive and sometimes impracticable to create benchmarks with challenging problems. However, in some settings present in the paper, I feel that this may not be the case. For example, SWE-bench [1] also focuses on repo-level code generation, and they take existing Github issues as queries, and the modifications made by real users as the ground truth. The current state-of-the-art performance is only 43% in the leaderboard, which indicates its difficulty. Compared to CHASE-CODE, I think the pipeline used in SWE-bench is a better way to collect repo-level code generation data.\\n2. To demonstrate that this pipeline is scalable, I think it is important to generate data of large size and apply it to training. If the API cost is a concern, I think the authors can use open-source models, e.g., Llama-70B. \\n3. Typo in Figure 1: Jill has 12 pens in the bottom right corner. \\n4. In line 443-444, I don\\u2019t quite understand why better performance of models different from the generator and verifier can indicate better data quality.\\n5. One advantage of the CHASE claimed by authors is to mitigate data contamination, but I think this may not be a big concern for challenging benchmarks that involve intensive reasoning. For example, even if codellama [2] has been intensively trained on Github data, its performance is still low on SWE-bench (which uses the Github data).\\n\\n[1]. Jimenez, Carlos E., et al. \\\"Swe-bench: Can language models resolve real-world github issues?.\\\" arXiv preprint arXiv:2310.06770 (2023). \\\\\\n[2]. Roziere, Baptiste, et al. \\\"Code llama: Open foundation models for code.\\\" arXiv preprint arXiv:2308.12950 (2023).\", \"questions\": \"1. How to organize functions into files to build repositories from scratch in CHASE-CODE?\\n2. Could you specify more details on rejection sampling?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Respectfully, your argument is not valid.\\n\\nFirst, if we assume that SWE-Bench is scalable (it is not as we will explain later), CHASE-Code is still a different paradigm as explained below:\\n\\n**Task.** SWE-Bench focuses on a different task compared to CHASE-Code. The task in SWE-Bench is about code editing/fixing or generating patches for \\u201cissues\\u201d or \\u201cbugs\\u201d in a repository. The task in CHASE-Code is about generating new functionality based on its precise description. Such problems may be there in SWE-Bench, but they are a clear minority (see Table 13 in [1]). You can also qualitatively compare the types of problem statements in SWE-Bench and CHASE-Code.\\n\\n**Controllability.** A very important feature of CHASE-Code is that you can control the parameters of the problem you want to design. You can choose the domain (such as algorithms, data pre-processing, etc.), how complex you want the function to be, what length the repository should be, etc. This is not possible for SWE-Bench.\\n\\nHence, **the scalability or existence of a high-quality SWE-Bench does not decrease the impact of CHASE-Code.**\\n\\nNow, we motivate why synthetic data for code generation is a good avenue to pursue. We have already explained above that synthetic data generation provides a **high level of controllability**. Further, benchmarks like SWE-Bench are not highly scalable. They are bottlenecked by the availability of well-maintained, high-quality repositories with extensive tests for each issue (which is why they focused on 12 popular repositories). In contrast, synthetic data generation allows for automatic creation of tests (as shown in CHASE-Code), which makes it **comparatively more scalable**. Lastly, note that SWE-Bench is a good way to collect challenging data for one particular type of task. There are many other tasks in the domain of code (such as the task in CHASE-Code, the task of code understanding [2], competitive programming [3], etc.) where there may not exist good ways of curating challenging data without human intervention.\", \"references\": \"[1] Jimenez et al. (2024). SWE-bench: Can Language Models Resolve Real-World GitHub Issues? In ICLR.\\n\\n[2] Gu et al. (2024). CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution. In Arxiv:2401.03065.\\n\\n[3] Li et al. (2022). Competition-Level Code Generation with AlphaCode. In Arxiv:2203.07814.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We would like to thank you for engaging in the discussion.\\n\\n**Size of data.** We may be mistaken but it seems you are the only reviewer with concerns about the data size (reviewer yxKN\\u2019s point is about scaling for training, which we have separately addressed in our response to them). While it is true that many contemporary benchmarks have a few thousand examples, there are also many benchmarks that have much less examples. For instance, HumanEval [1] and SVAMP [2] are widely used benchmarks for code and math reasoning that have ~160 and ~1000 examples respectively. We would also like to point out example papers (whose main contribution is the dataset) such as EvoCodeBench [3], a repo-level code generation benchmark, and MuSR [4], a story-based QA benchmark published at ICLR and NeurIPS this year that have ~275 and ~750 examples respectively. Further note that many contemporary long-context benchmarks [5,6,7] have only ~500 examples per task. Hence, we believe that the amount of data we generated is sufficient for benchmarking current LLMs and for supporting our claims and conclusions. We would also like to note that we are not too troubled by the cost of generation (we added Table 5 and L1075-1079 providing the costs of generation). However, we also wish to keep our long-context benchmarks accessible to play around with for researchers with limited resources (currently it costs ~$50 to run inference for just one SOTA model on our benchmark).\\n\\n**Comparing difficulty of data.** We have provided model performance accuracies for our benchmarks, which sufficiently show its difficulty. We have now added discussions comparing the performance of models on our datasets against other widely-used challenging benchmarks in those domains in Section C.1 and Tables 6 and 7. If you would like more specific analysis, could you let us know how?\\n\\n**Correctness of problems.** We would like to highlight that this issue persists mostly in CHASE-QA, because we couldn\\u2019t find errors in CHASE-Code on inspection, and we have filtered out incorrect examples from CHASE-Math. We agree that this is a limitation, but it comes with the benefit of providing a very real-world test scenario. To give more context, the errors in CHASE-QA pertain to the presence of extra relevant information in the documents which is not mentioned in the ground-truth (see Fig. 9 for an example). We have now also included evaluation with softer metrics, such as \\u2018K-Precision\\u2019, which measures the how faithful the model prediction is to the given documents, and \\u2018Recall\\u2019, which measures whether the model prediction includes all the information in the ground-truth (while allowing the model to provide more information). The results are discussed in Appendix C.2 and Table 8. Note that the gaps in performance between many models on CHASE-QA (for both, accuracy and recall) is quite large, which does make the conclusions we draw to be valid. We have now also manually reviewed 30 examples from the QA dataset generated by the direct generation baseline and found that it had 9 errors (~30%). We had already reported 34% error for the math problems generated by the direct generation baseline. This shows the advantages of CHASE in generating more correct problems.\\n\\n**Regarding Q3.** We believe we did answer this point in our response above. We have carried out filtration for both, CHASE and the direct generation baselines, which makes the comparison fair.\\n\\n**Regarding Q4.** It is difficult for us to understand why the statement is ambiguous. There is no notion of distance covered per day in the recovery week, so the most probable meaning of \\u201cdistance he covered during his recovery week\\u201d has to be the total distance in the recovery week. Indeed, if we just add \\u201ctotal\\u201d in front of the \\u201crecovery week\\u201d, the model still makes the same mistake. In any case, we present another failure case for you to look at in Figure 9.\", \"references\": \"[1] Chen et al. (2021). Evaluating Large Language Models Trained on Code. In Arxiv:2107.03374\\n\\n[2] Patel et al. (2021). Are NLP models really able to solve simple math word problems? In NAACL.\\n\\n[3] Li et al. (2024). EvoCodeBench: An Evolving Code Generation Benchmark with Domain-Specific Evaluations. In NeurIPS Datasets and Benchmarks Track.\\n\\n[4] Sprague et al. (2024). MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning. In ICLR.\\n\\n[5] Zhang et al. (2024). \\u221eBench: Extending Long Context Evaluation Beyond 100K Tokens. In ACL.\\n\\n[6] Li et al. (2024). LooGLE: Can Long-Context Language Models Understand Long Contexts? In ACL.\\n\\n[7] Wang et al. (2024). Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA. In EMNLP.\"}", "{\"title\": \"Response to Questions (Cont'd)\", \"comment\": \"**[Q5]** *What is the motivation behind the experiments in lines 469-477 and lines 486-493?*\\n\\nThe motivation for the context size experiments is to show how we can synthetically add irrelevant context information to evaluation examples to make them even more challenging for LLMs to handle. These results highlight one particular dimension, i.e., \\u201ccontext-size\\u201d, which can be very easily controlled in synthetic data generation pipelines to craft challenging problems.\\n\\nThe motivation for the fine-tuning experiments is to show that while smaller models such as Llama-3.1-8B can use CHASE to generate useful training data, they still perform poorly on CHASE data generated by a much more powerful model. Therefore, smaller models cannot \\u201chack\\u201d the benchmark just by knowing the recipe of generation.\\n\\n**[Q6]** *Could you provide some comparative experiments between the CHASE dataset and other synthetic datasets, such as a comparison between CHASE-QA and existing long-context benchmarks?*\\n\\nOur main contribution is the CHASE methodology for generating difficult evaluation data. The byproducts are the resulting datasets. Our emphasis is more on the methodology contribution. So we compared with two other popular synthetic data generation methods -- Self-instruct [1] and Evol-instruct [2] -- and found CHASE to generate superior data.\\n\\nRegarding comparison of CHASE-QA with other long-context benchmarks, we have added a detailed discussion in Appendix D.3 (L1142-L1163). But note that these are manually-annotated datasets. We are not aware of any synthetic long-context QA benchmarks targeting realistic scenarios and kindly request the reviewer to point to appropriate references and elaborate more on what kind of comparative experiments they hope to see.\\n\\nWe have carried out new experiments and clarified the writing in multiple places in response to your review. We kindly ask you to consider raising your score if your concerns are addressed.\", \"references\": \"[1] Wang et al. (2023). Self-Instruct: Aligning Language Models with Self-Generated Instructions. In ACL.\\n\\n[2] Xu et al. (2024). WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions. In ICLR.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Perhaps you missed our new experiments as mentioned in our responses above: CHASE-Code now has 500 problems. This data size is adequate for a long-context code generation benchmark. Contemporary long-context benchmarks [1,2,3] have only ~500 examples per task and even widely used code benchmarks like HumanEval [4] have ~160 examples. We have shown that our approach is highly scalable. If you disagree, then we ask you to point out concrete bottlenecks with our approach to explain why it is not scalable.\\n\\n*The current version with only 220 problems in the dataset seems to be also practical for human annotators to finish in a reasonable period.*\\n\\nIt would be helpful if the reviewer provides evidence for this statement. As far as we are aware, there is no repository-level code generation benchmark of such high difficulty that can be created at similar expense (time and cost) as CHASE-Code (see Table 5 in Appendix). We have already explained how SWE-Bench is a different paradigm, so it is illogical to compare against it.\\n\\n*In realistic scenarios, it is not common to see a code generation problem specified in such a detailed manner*\\n\\nWe would like to understand your exact concern here. Yes, CHASE-Code perhaps simulates a scenario which is a bit easier than completely realistic scenarios. But still it is difficult for LLMs to solve, and hence it is a valuable benchmark. If you believe there is no value in such slightly-less-realistic benchmarks, then you would have to consider almost all benchmarks in NLP so far to be valueless.\\n\\n*I think it is not very difficult to find problems in the domain data pre-processing and algorithms from Kaggle, LeetCode, codeforces, etc.*\\n\\nThis is an unfair characterization of our work. We target repository-level code completion problems. None of these sources have any repository-level code.\\n\\nWe request you to precisely state your standing concerns with the paper. It is our opinion that **a disagreement over the \\u201crealness\\u201d of the scenario of just one example benchmark generated using our framework does not merit a score as low as 3**. We remind you that **our main contribution is a general framework to create challenging synthetic data for evaluation across multiple domains**. This is the **first time this problem is being studied**. We have also shown its applicability for two other scenarios such as document-based question answering and math reasoning.\\n\\nOur claim in this paper is that generating challenging data for evaluation using humans could be difficult or impractical for many reasons (cost, expertise, long-context generation, etc). Hence, we need to study the problem of generating synthetic data for evaluation. We have presented a general framework to do this, and showed its applicability across multiple domains. Even if you disagree with the exact way this was done for one particular domain (and note that we have provided concrete arguments in our responses for this), do you still think that this paper is not a valuable contribution?\", \"references\": \"[1] Zhang et al. (2024). \\u221eBench: Extending Long Context Evaluation Beyond 100K Tokens. In ACL.\\n\\n[2] Li et al. (2024). LooGLE: Can Long-Context Language Models Understand Long Contexts? In ACL.\\n\\n[3] Wang et al. (2024). Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA. In EMNLP.\\n\\n[4] Chen et al. (2021). Evaluating Large Language Models Trained on Code. In Arxiv:2107.03374\"}", "{\"title\": \"Request to engage in discussion\", \"comment\": \"Dear Reviewer,\\n\\nWe request you to kindly read our response and adjust your score if your concerns are addressed.\"}", "{\"title\": \"Request to engage in discussion\", \"comment\": \"Dear Reviewer,\\n\\nWe request you to kindly read our response and adjust your score if your concerns are addressed.\"}" ] }
2UozyR49ZB
Learning a Bi-directional Driving Data Generator via Large Multi-modal Model Tuning
[ "Xinzhi Zhong", "Andrew Silva", "Pradyumna Tambwekar", "Jonathan DeCastro", "Soyoung Ahn", "Guy Rosman" ]
Understanding human driving behaviors is crucial for developing a reliable vehicle and transportation system. Yet, data for learning these behaviors is scarce and must be carefully labeled with events, causes, and consequences. Such data may be more difficult to obtain in rare driving domains, such as in high-performance multi-car racing. While large language models (LLMs) show promise in interpreting driving behaviors, the integration of multi-modal inputs (e.g., language, trajectory, and more) and generation of multi-modal output in low-data regimes remains under-explored. In this paper, we introduce Bi-Gen: a Bi-directional Driving Data Generator, Bi-Gen is a bi-directional multi-modal model that connects a trained encoder-decoder architecture with a pre-trained LLM, enabling both auto-annotation and generation of human driving behaviors. Our experiments show that Bi-Gen, despite its smaller size, matches the performance of much larger models like GPT-4o in annotating driving data. Additionally, Bi-Gen generates diverse, human-like driving behaviors, offering a valuable tool for synthetic data generation in resource-constrained settings. Taken together, our experiments are a significant step towards applying LLMs to complex, multi-agent driving data.
[ "multi-modality", "synthetic data generation", "auto-annotation", "driving", "LLM applications" ]
https://openreview.net/pdf?id=2UozyR49ZB
https://openreview.net/forum?id=2UozyR49ZB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qsSDvnKNhj", "XiOXhBxsl8", "NYGsvyoOOR", "KddgZVWLQJ", "K6heAArN39", "1YUpY90TVU" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730791991096, 1730597525684, 1730382334833, 1732733273372, 1731300269155, 1730341919908 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8192/Reviewer_cJ9k" ], [ "ICLR.cc/2025/Conference/Submission8192/Reviewer_PZJ4" ], [ "ICLR.cc/2025/Conference/Submission8192/Reviewer_6U3U" ], [ "ICLR.cc/2025/Conference/Submission8192/Authors" ], [ "ICLR.cc/2025/Conference/Submission8192/Reviewer_LN46" ], [ "ICLR.cc/2025/Conference/Submission8192/Reviewer_n3mP" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces Bi-Gen, a bi-directional large multi-modal model that enables both trajectory description (auto-annotation of driving data in language) and trajectory generation. The model leverages a pre-trained LLM and learns to embed multi-modal inputs (map, ego trajectory, opponent trajectory) into a shared latent space. The authors demonstrate Bi-Gen's capabilities on a racing car dataset, showing it can annotate trajectories comparably to GPT-4o and generate synthetic data to augment real datasets for downstream tasks.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The bi-directional approach allowing both trajectory description and generation within a single end-to-end framework is novel and interesting. Prior work has typically focused on only one direction.\\n2. The motivation of learning a model that can comprehend and generate multi-modal human driving data, especially in low-data regimes like racing, is sound and the proposed methodology of embedding multi-modal inputs into an LLM's latent space makes intuitive sense.\\n3. The paper is generally well-written, with a clear explanation of the model architecture, training process, and experimental setup. The figures help illustrate the approach.\", \"weaknesses\": \"1. While the motivation and methodology are sound, the experimental setup seems too simplistic to fully validate the capabilities of Bi-Gen. The authors mention that there are only 19 possible answers in their question-answering task, which is more akin to a classification problem. This limited setup may not adequately demonstrate the LLM's ability to freely annotate trajectories in an auto-regressive manner. More open-ended annotation would be valuable.\\n2. The dataset used for training and evaluation is relatively small, with only 877 trajectories collected. Moreover, the participants were racing against fixed trajectories rather than human players or other agents, which limits the diversity and complexity of the driving behaviors captured. A larger and more varied dataset would provide a more robust evaluation of Bi-Gen.\\n3. Given the large capacity of LLMs, it is possible that Bi-Gen is overfitting to the training data. The authors do not provide sufficient qualitative results to assess the model's generalization capabilities. It would be beneficial to compare Bi-Gen with a baseline method that uses classification-based annotation and recurrent trajectory generation, rather than solely comparing it with GPT-4o.\\n4. The binary classifier used to validate the quality of the generated trajectories may not be a strong indicator of performance if the data distribution is too simple. It is unclear whether the higher accuracy achieved by the classifier is due to the quality of the generated trajectories or the simplicity of the data distribution.\\n5. The paper heavily relies on the supplementary material to provide important details about the methodology and results. Some of this information should be included in the main paper to improve clarity and completeness. Additionally, there is redundant information in the main paper, such as the repeated mention of the model pipeline components (system prompt, map, opponent trajectory, and ego-trajectory).\", \"questions\": \"1. How would the authors justify the use of a simplistic experimental setup in the question-answering task? Does this truly showcase the LLM's auto-regressive generation capabilities?\\n2. Have the authors considered collecting a larger and more diverse dataset, possibly including human-human or human-agent interactions, to better capture the complexity of driving behaviors?\\n3. Can the authors provide more evidance to demonstrate Bi-Gen's generalization capabilities and address concerns about overfitting?\\n4. Would the authors consider moving some of the important details from the supplementary material to the main paper and removing redundant information to improve clarity and completeness?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper identifies that learning driving behaviors requires a lot of data with carefully labeled events, causes, and consequences. However, such data may be more difficult to obtain in rare driving domains, such as in high-performance multi-car racing. Therefore, this paper proposes Bi-Gen, which is a bi-directional multi-modal model that connects a trained encoder-decoder architecture with a pre-trained LLM, enabling both auto-annotation and generation of human driving behaviors. The experimental results show that Bi-Gen matches the performance of much larger models like GPT-4o in annotating driving data. Additionally, Bi-Gen generates diverse, human-like driving behaviors, offering a valuable tool for synthetic data generation in resource-constrained settings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe idea of using LLM to generate scenarios is an interesting and promising topic. Since LLMs have limitations on processing other modalities, finetuning LLMs is also a promising way to direct them to the generation task.\\n2.\\tThe paper is generally well-written and well-organized. Figure 1 clearly shows the training and generation processes of the proposed method. Figure 2 also describes the two generation tasks with model details.\", \"weaknesses\": \"1.\\tThe examples shown in Figure 5 are quite confusing. First, the generated trajectories violate the vehicle dynamics and are out of road in most times. The first point of the generated trajectory in the middle figure is behind the last point of the history trajectory. The first point of the generated trajectory in the right figure is too far away from the last point of the history trajectory. Second, it is hard to identify Spinout, Stay-behind, and Overtake in these figures. In summary, I think the generated trajectories have low quality and low realism.\\n2.\\tI feel the evaluation of this paper is quite limited. It seems that this paper only focuses on high-performance multi-car racing scenarios, as mentioned in the abstract. Even though, I think it is still important to show quantitative results of the average performance of the proposed method. However, the only numerical evaluation now is the overtake classification task shown in Figure 4. I think it is necessary to show the evaluation of realism, diversity, and instruction following. In addition, scenario generation has been a widely investigated area, which means it is easy to find comparable baseline methods, for example, LCTGen [1] and ProSim [2].\\n3.\\tThere is no evidence to show the benefit of using the generated scenarios for downstream tasks. The only example is the overtake classification task. But I am not sure how large the value is of identifying if a scenario is overtaking or not. I think it is more important to show that the generated scenarios help with the training and testing of autonomous agents in terms of performance and safety.\\n\\n---\\n[1] Tan, Shuhan, Boris Ivanovic, Xinshuo Weng, Marco Pavone, and Philipp Kraehenbuehl. \\\"Language conditioned traffic generation.\\\" arXiv preprint arXiv:2307.07947 (2023).\\n[2] Tan, Shuhan, Boris Ivanovic, Yuxiao Chen, Boyi Li, Xinshuo Weng, Yulong Cao, Philipp Kr\\u00e4henb\\u00fchl, and Marco Pavone. \\\"Promptable Closed-loop Traffic Simulation.\\\" arXiv preprint arXiv:2409.05863 (2024).\", \"questions\": \"1. Did the authors consider different LLM backbones?\\n2. What are the statistical details of the used dataset? The size, the distribution, and the collection platform?\\n3. Similar to the second point in the weakness part, how to evaluate the quality of generated scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents Bi-Gen, a large multi-modal model designed to generate and annotate human driving data, particularly in complex racing environments with limited training data. It effectively handles both trajectory description and generation, demonstrating strong performance in comprehending driving behaviors. The study highlights the model's ability to produce realistic and varied driving scenarios, positioning it as a competitive alternative to larger models like GPT-4o.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper tackles the issue about interpreting and annotating unlabeled driving trajectories in the low-data domain of high-performance\\nmulti-car racing.\", \"weaknesses\": \"1. The model's performance in more diverse and complex driving scenarios beyond the tested environments may require further exploration and validation.\\n2. The evaluations appear to be inadequately conducted. The authors assert the existence of 19 potential answer classes; however, they report only the quantitative results from the overtaking prediction task. Furthermore, the zero-shot testing with GPT-4o is the sole baseline selected for comparison. There are also no numbers supporting the claim that training trajectory description and trajectory generation at the same time would be a more favorable approach.\", \"questions\": \"1. Both the training and test data are collected within a single racing track by driving in simulators (as indicated in Appendix.A). The number of trajectories collected are limited as well. How do you ensure that your model does not overfit to this narrowly defined domain?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"1. The paper introduces Bi-Gen, a bi-directional multi-modal model for human driving data generation and annotation, particularly aimed at low-data domains like multi-car racing.\\n2. Bi-Gen combines language-conditioned trajectory generation and trajectory-conditioned language generation, allowing it to serve both as an automated annotator and as a synthetic data generator. \\n3. The model integrates a language model with lightweight encoders and decoders to map trajectories and static map data into a shared feature space, enabling it to interpret and generate diverse driving behaviors based on limited real data. \\n4. Experimental results demonstrate Bi-Gen\\u2019s ability to match the annotation accuracy of larger models, like GPT-4o, while significantly reducing the data requirements for downstream tasks by generating high-quality synthetic data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The model\\u2019s ability to handle both trajectory-to-language and language-to-trajectory generation tasks offers a novel approach to understanding and generating human driving behaviors. I like the idea of treating map and trajectory tokens as the same latent space as language.\\n2. By incorporating lightweight encoders and a small language model (TinyLlama), Bi-Gen achieves annotation performance comparable to larger models like GPT-4o while remaining computationally efficient and suitable for real-time applications.\\n3. Flexible Multi-turn Interaction: The model\\u2019s multi-turn question-answering framework supports dynamic, interactive annotations and diverse trajectory generation, demonstrating versatility in handling complex driving scenarios.\", \"weaknesses\": \"1. The experiments focus on a racing domain with specific trajectory types, which may not generalize well to broader driving scenarios or other real-world applications without additional testing. I want to know if it's possible to extend to multi-agent scenarios, for example Waymo or Nuplan scenarios.\\n2. While the use of lightweight encoders and TinyLlama enhances efficiency, it might limit the model's capacity to capture finer details in complex, multi-modal interactions compared to larger models.\\n3. Bi-Gen\\u2019s performance relies on well-defined question-answer and generation prompts, which may limit its adaptability to novel or unexpected queries in deployment.\\n4. The paper does not explore the impact of different architectural choices (e.g., encoder and decoder sizes, tokenization approaches), which would strengthen understanding of the model's design trade-offs.\\n5. Following point 4, the tokenization approaches to the map and trajectory are unclear. See questions.\", \"questions\": \"1. How well does Bi-Gen generalize to other driving domains beyond multi-car racing? Could the model effectively handle scenarios with more varied driving behaviors, such as urban or highway driving in Waymo dataset?\\n2. How does Bi-Gen handle instances where trajectory descriptions or generation prompts are ambiguous or open-ended?\\n3. How the map and trajectory being tokenized? Are you using global coordinates? Do you do any transformation on the input data? In multi-agent complex scenarios such as Waymo dataset, there are huge number of elements in the scenes: multiple agents with different types and rich features (shape, velocity, type etc), map features with different types (stop sign, different types of lanes etc), and even traffic light. So I am very concerning about whether the proposed method can be extended to other data format.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript presents an exploration of finetuning a TinyLLaMa for generating multi-modal human driving data to benefit the community. While the proposed solutions sound acceptable, the motivation, experiment results, readability, and method description should all be improved. So far the weakness outweighs the strengths.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"An acceptable exploration in using LLMs to benefit understanding driving data.\", \"weaknesses\": \"1. Unmatched motivation and proposed solutions. To my understanding, the stated motivation in the abstract is the lack of multi-modal data and the high cost of obtaining labeled data for training, but the proposed solution is a model that can match the performance of LLMs but can be adopted in a resource-constrained setting. There is a gap between them. In the context of generating multi-modal data, why do we need a resource-constrained model? Do the authors try to generate data during driving, or deploy the generation model in the vehicle? If not, why is there a need to design such a model?\\n\\n2. Insufficient contribution. As the authors said between lines 88 and 97, the existing LLMs cannot fully comprehend the complicated multi-modal connections between trajectories and languages due to the lack of readily accessible world-knowledge. Hence, we would expect that with the proposed solutions, the generation performance should at least outperform the existing LLMs, despite the model size. But so far, the annotation performance is only compatible, and hence the proposed solution is not as effective as the authors claimed. \\n\\n3. Poor readability in terms of images and texts. The images are not aligned with the text around it. For example, Fig. 3 is too far away from the text describing it. Fig. 4 is 2-pages away from the corresponding text. Readers may get confused by the images and find it difficult to find the text and hence fail to follow.\\n\\n4. Unclear method description. Maybe I miss something. In the trajectory generation part, the loss is the auto-regressive loss between the generated trajectories and the actual ones. If the authors aim to fine-tune the model based on this task, then we are assuming that the model is the core component controlling how the trajectories are generated. But as we include human languages or prompts here, is it possible that the inputs are affecting the generating performance? Do we consider any loss in terms of the discrepancy between the generated trajectories and what the prompts ask for?\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2U8owdruSQ
Has the Deep Neural Network learned the Stochastic Process? An Evaluation Viewpoint
[ "Harshit Kumar", "Beomseok Kang", "Biswadeep Chakraborty", "Saibal Mukhopadhyay" ]
This paper presents the first systematic study of evaluating Deep Neural Networks (DNNs) designed to forecast the evolution of stochastic complex systems. We show that traditional evaluation methods like threshold-based classification metrics and error-based scoring rules assess a DNN's ability to replicate the observed ground truth but fail to measure the DNN's learning of the underlying stochastic process. To address this gap, we propose a new evaluation criteria called _Fidelity to Stochastic Process (F2SP)_, representing the DNN's ability to predict the system property _Statistic-GT_—the ground truth of the stochastic process—and introduce an evaluation metric that exclusively assesses F2SP. We formalize F2SP within a stochastic framework and establish criteria for validly measuring it. We formally show that Expected Calibration Error (ECE) satisfies the necessary condition for testing F2SP, unlike traditional evaluation methods. Empirical experiments on synthetic datasets, including wildfire, host-pathogen, and stock market models, demonstrate that ECE uniquely captures F2SP. We further extend our study to real-world wildfire data, highlighting the limitations of conventional evaluation and discuss the practical utility of incorporating F2SP into model assessment. This work offers a new perspective on evaluating DNNs modeling complex systems by emphasizing the importance of capturing underlying the stochastic process.
[ "evaluation", "deep neural network", "stochasticity", "complex systems", "forecasting" ]
Accept (Poster)
https://openreview.net/pdf?id=2U8owdruSQ
https://openreview.net/forum?id=2U8owdruSQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yhiRRrSwJL", "wFirUKJfZc", "uMV0ZnAQ0K", "paWoyqcMIZ", "oaezEWSMoe", "mOj6cxSOKE", "kTlrD3eUFf", "jXXyuuyeyD", "fhZ5EFmyXF", "bcHt0kFj1p", "PbpiZFZHKv", "LJFxtjxdPS", "H9PaE0SeEj", "Eyz3NpsTvi", "Eh2vKIiyMk", "Byjj1k6e9f", "BJb7E9Xw9m", "AbhA17hSlL", "7e90OwuFJh", "6yxZW3OWZt", "6O1JNzpqF9", "56A5nrsYV7", "3fdCy124Pd", "3Itw6iCNkG", "2FFAFx0AgZ", "2E0gEN5Ub5" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment" ], "note_created": [ 1732161793825, 1730338026030, 1732162189567, 1733041002218, 1732755230845, 1732161686249, 1732162524102, 1732386884554, 1732162446712, 1732163218272, 1732163564344, 1730497423636, 1731034750282, 1730254258775, 1733040915527, 1732690032887, 1737524122948, 1732162896946, 1732162969525, 1732387338461, 1732606792567, 1732618187056, 1732163857101, 1734679815855, 1729648510142, 1732162046813 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Reviewer_sNm1" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Reviewer_HXsj" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Reviewer_7Rfe" ], [ "ICLR.cc/2025/Conference/Submission11418/Reviewer_cBqL" ], [ "ICLR.cc/2025/Conference/Submission11418/Reviewer_HXsj" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Area_Chair_VRRs" ], [ "ICLR.cc/2025/Conference/Submission11418/Area_Chair_VRRs" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ], [ "ICLR.cc/2025/Conference/Submission11418/Area_Chair_VRRs" ], [ "ICLR.cc/2025/Conference/Submission11418/Reviewer_iuWX" ], [ "ICLR.cc/2025/Conference/Submission11418/Authors" ] ], "structured_content_str": [ "{\"title\": \"Summary of Changes made in the Appendix\", \"comment\": [\"**Section C.2**: Expanded the discussion on the general use of ECE as an evaluation metric compared to its application in our work. Clarified how our approach broadens the utility of ECE in evaluating stochastic systems.\", \"**Section C.4**: Added two new references on computer vision tasks related to predicting the evolution of segmentation maps. Reorganized the discussion to emphasize how the current evaluation strategies in the computer vision community are predominantly focused on F2R.\", \"**Section F.1.2**: Updated Figure 16 to include ECE alongside other metrics, enabling a comparative analysis of their standard deviation versus \\\\( Var[Z_t] \\\\). Added a revised discussion explaining that both MSE and ECE show reduced sensitivity to macro-variance, but their convergence behaviors differ. Added Figure 17 to illustrate this distinction: MSE penalizes stochasticity effects, whereas ECE does not.\", \"**Section H**: Added a practical user guide for evaluating DNN models in complex systems, providing actionable steps for applying the proposed framework.\"]}", "{\"summary\": \"This paper presents a study on evaluating deep neural networks designed to forecast the evolution of stochastic complex systems. The authors identify a gap in traditional evaluation methods\\u2014such as threshold-based classification metrics and error-based scoring rules\\u2014which focus on a model's ability to replicate observed ground truth but fail to assess how well the model has learned the underlying stochastic process. To address this issue, they introduce a new property called Fidelity to Stochastic Process, representing the DNN's ability to predict the statistical ground truth of the stochastic process.\\n\\nThe paper proposes using the Expected Calibration Error (ECE) as an evaluation metric that satisfies the necessary conditions for assessing fidelity to statistical ground truth. This work underscores the importance of capturing the underlying stochastic processes in deep neural networks evaluations for complex systems.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper makes a significant contribution by introducing the concept of Fidelity to Stochastic Process (F2SP), a novel evaluation criterion specifically designed to assess a DNN's ability to learn the underlying stochastic interactions in complex systems.\\n\\nThe authors provide a rigorous formalization of F2SP within a stochastic framework, establishing clear criteria for its valid measurement. The use of Expected Calibration Error (ECE) as an evaluation metric is well-justified.\", \"weaknesses\": \"I found it hard to read the paper because there was a lack of consistency in the acronyms, the authors would redefine them in several parts of the text again and again. I addressed my comments on text in the questions section.\\n\\nIn the tables, the best neural networks based on each criterion are not highlighted, which makes it difficult to the reader to infer and correlate the arguments in the text. I addressed my comments on text in the questions section. \\n\\nThe focus of the paper is primarily on binary or discrete prediction tasks, leaving out regression tasks where the definition of calibration is more complex. While the authors acknowledge this and suggest it as an area for future work, the current scope limits the immediate applicability of the findings to a broader range of problems involving continuous outcomes.\\n\\nAdditionally, the use of the NDWS dataset, which is restricted to next-day predictions, prevents the assessment of ECE over longer time horizons, which are common in many complex systems. Could you elaborate on how future work might address this limitation? \\n\\nThe paper highlights the lack of open-source complex system datasets as a barrier to broader validation. Are there any ongoing initiatives or plans to develop, collect, or standardize such datasets?\", \"questions\": \"L50: Is --> is (lowercase)\", \"fig1\": \"no need to write the whole name, you can use acronyms because they're already defined in the text, however MSE is not defined at this point.\", \"l88\": \"fidelity to realization --> F2R (it was already defined previously, so you can use the acronym)\", \"l99\": \"the notation of the dimension of the real vector O_t is confusing, what is (R^n)^(H x W), is n = H x W? If so, make that explicit.\", \"table_1\": \"some rows end with full stop, other don't. Please make it consistent. Either all with or all without.\\nI find it odd to place Figures in columns as Figure 1 (which has a large top white margin) and Figure 3. I would suggest column figures into one row figure with multiple subfigures as you did with Figure 2.\", \"l201\": \"Isn't the indicator variable already defined as B_t in L99? Why defining again with different notation?\", \"l298\": \"MSE already defined in text previously, no need to write the whole name again.\", \"l516\": \"ECE already defined in text previously, no need to write the whole name again.\", \"table_2_and_table_7\": \"highlight the best performing DNNs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author's Response (2/2)\", \"comment\": \"> While it's pretty clear to me how to use this immediately in my work, I think anyone who wasn't already aware they wanted exactly this might struggle. Could you provide something like a \\\"practical users guide\\\" for non-domain experts?\\n> \\n\\nThank you for the suggestion. We have added a practical user guide in Appendix H, which is now cross-referenced in the caption of Figure 1. This addition provides actionable steps for non-domain experts to apply the evaluation framework effectively and enhances the usability of our work.\\n\\nTo further integrate this guide into the paper, we revised the Introduction (L104) to reflect the discussion on its practical implications. Additionally, we updated the text (L487 onward) to introduce and motivate the cohesive evaluation framework, explicitly linking it to the practical guide. This framework clarifies metric prioritization by visually organizing AUC-PR, ECE, and MSE to reflect their complementary roles in evaluating model fidelity to stochastic dynamics (F2SP) and specific outcomes (F2R).\\n\\nWe believe these changes address the reviewer\\u2019s concern by providing clear guidance for broader usability and ensuring the framework\\u2019s practical relevance is well-articulated.\\n\\n> if the clarity of the plots can be improved, the naming of the stat/metric you're introducing, and improve it's \\\"usability\\\" to the community, I would be happy to upgrade my score. You've done great work and this would bring the paper to the level it deserves.\\n> \\n\\nThank you for the encouraging feedback and for recognizing the contributions of our work. In response to your suggestions, we have made significant updates to improve the clarity of the plots, refine the naming conventions of the statistics and metrics, and enhance the usability of the framework for the broader community. We hope these revisions meet your expectations and elevate the paper to the level you envision. We would greatly appreciate your reconsideration of the score in light of these improvements.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your feedback and for increasing the score. We appreciate your insights and the time you\\u2019ve taken to help improve the paper. \\n\\nBest regards, \\nThe Authors\"}", "{\"comment\": \"I have read the authors' response and agree with them, therefore I updated my score. Thank you!\"}", "{\"title\": \"Summary of Revisions\", \"comment\": \"We thank the reviewers for their valuable feedback, which has significantly improved our paper. Peer review has been instrumental in clarifying and enhancing our findings, and we deeply appreciate the time and effort reviewers invested.\\n\\nThe reviewers acknowledged the paper's contribution to evaluating DNNs in stochastic complex systems, specifically the introduction of Fidelity to Stochastic Process (F2SP) and its rigorous formalization. They also noted the thorough experiments, the clear differentiation between classical and stochastic evaluation approaches, and the practical applicability demonstrated through real-world datasets.\\n\\nWe have revised the text in the paper, and the specific segments addressing reviewers\\u2019 comments have been highlighted in red. Key concerns and changes are summarized below. Revisions have been made in both the main paper and the Appendix.\\n\\n---\\n\\n## **Changes Made in the Main Paper**\\n\\n### 1. **Practical Applicability and Usability**\\n\\n**Pain Points**: \\nReviewers raised concerns about the clarity of the study\\u2019s practical applicability. There was also some confusion about whether Statistic-GT is required to measure ECE, raising questions about ECE\\u2019s applicability for testing F2SP in practical scenarios.\\n\\n**Our Response**: \\n\\n- **Framework Visualization**: Redesigned Figure 1 into a cohesive evaluation framework that visually organizes metrics (AUC-PR, ECE, MSE), consolidating our insights. \\n- **Practical User Guide**: Added a user guide in **Appendix H** with actionable steps for applying and interpreting the evaluation framework. This guide is cross-referenced in the main text and figures for easy navigation. \\n- **Clarified Applicability**: Highlighted in the introduction and discussion sections that ECE can test F2SP using only the Observed-GT, demonstrating its practical applicability. To clarify, the concept of Statistic-GT was formalized in the paper to substantiate this claim. \\n\\n---\\n\\n### 2. **Figures and Tables**\\n\\n**Pain Points**: \\nFigures and tables lacked clarity, with insufficient captions.\\n\\n**Our Response**: \\n\\n- Added missing axis labels and expanded captions to ensure all figures are self-contained, understandable, and clearly convey their main messages, making it easier to correlate them with the text. \\n- Merged the original Figures 1 and 3 into a **new Figure 1**, which provides a clearer visual overview of the evaluation framework, including: \\n - The two system properties, Observed-GT and Statistic-GT. \\n - The two evaluation criteria, F2R (Fidelity to Realization) and F2SP (Fidelity to the Stochastic Process). \\n - The evaluation metrics that measure these criteria. \\n- Updated Figure 2.b to clarify its message by highlighting the different sources of randomness in the synthetic datasets used in our study. \\n- Added color gradients to Table 2 to illustrate score transitions from good to bad and improve alignment with the text. \\n\\n---\\n\\n### 3. **Theoretical Justification**\\n\\n**Pain Points**: \\nNeed for theoretical support differentiating ECE and MSE with respect to sensitivity to system variance, explaining ECE\\u2019s behavior and its preference over MSE in certain contexts.\\n\\n**Our Response**: \\n\\n- **Sensitivity Analysis**: Included ECE in the metric sensitivity analysis to macro-variance in **Appendix F.1**, demonstrating that while ECE and MSE exhibit similar sensitivity, their convergence asymptotes differ. Added Figure 17 to show that ECE's convergence behavior makes it better suited for testing fidelity to Statistic-GT compared to MSE. Revised discussions in Section 3.3 to cohesively connect these findings. \\n- **Referenced Supporting Sections**: Updated figure captions and main text to directly reference sections providing theoretical insights, ensuring readers can easily locate and understand the explanations. \\n\\n---\\n\\n### 4. **Scope and Limitations**\\n\\n**Pain Points**: \\nLimited discussion on extending the framework to image-based tasks.\\n\\n**Our Response**: \\nExpanded and revised the related works section (Section 6), **Appendix C.4**, and the future works section (Section 7) to discuss potential applications to vision tasks, such as segmentation map forecasting and stochastic video prediction. \\n\\n---\\n\\n### 5. **Terminology and Notation Clarification**\\n\\n**Pain Points**: \\nConfusion around the terms ECE (Expected Calibration Error), F2SP, Statistic-GT, and their relationships.\\n\\n**Our Response**: \\n\\n- Provided clear and explicit definitions of all key terms in the introduction. \\n- Included a clarifying table in the updated Figure 1 to illustrate the distinctions between terms. \\n\\n---\\n\\n### 6. **Clarity, Readability, and Minor Corrections**\\n\\n**Pain Points**: \\nMinor issues such as undefined abbreviations, inconsistent punctuation, notation errors, and formatting inconsistencies were also noted.\\n\\n**Our Response**: \\nWe conducted a thorough sweep of the paper to address all these issues. Details of the revisions are provided in the individual responses.\"}", "{\"title\": \"Authors' Response (2/2)\", \"comment\": \"> A question arises regarding Figure 1: Can ECE be an effective metric for measuring F2R compared to other available metrics? Figure 1 suggests that the answer may be\\u00a0*no*.\\n> \\n\\nThank you for raising this question. We acknowledge that ECE, while effective for testing F2SP, is less suited for F2R evaluation due to its lack of a refinement term, which is crucial for capturing prediction sharpness and discriminative capabilities. We have updated the limitations section (L518\\u2013522) to incorporate this distinction, clarifying the different roles of ECE and classification-based metrics as you have pointed out.\\n\\n> An important indicator that ECE is a reliable measure is its diagonal pattern, showing low scores only when training and test S-Levels align, as illustrated in Figure 4. Could the authors provide theoretical insights to support this indicator?\\n> \\n\\nThank you for the observation. Section 3.4.1 and 3.4.2 provides theoretical insights explaining the diagonal behavior of ECE, demonstrating why low scores occur when training and test S-Levels align. To enhance clarity, we have updated the caption of Figure 4 to explicitly reference the sections providing the theoretical insights, ensuring the connection is clear.\"}", "{\"title\": \"Clarification on Multivariate Focus and Potential Extensions for Univariate Tasks\", \"comment\": \"We have updated the text to address your comment regarding this work's focus on multivariate tasks rather than univariate ones. Specifically, the discussion in the Limitations of ECE section (L518-526) has been refined to explicitly state that our work focuses on multivariate prediction tasks (L523). Additionally, we explain why the convergence of ECE poses challenges for univariate forecasting tasks due to the sample size requirements. To address this limitation, we propose a potential avenue for extending the framework by leveraging improved calibration error estimators that offer stronger convergence guarantees for smaller test sets (L525). The added text is highlighted in blue.\\n\\nThe reviewer's comments helped us directly identify two key avenues for future work: extending the framework to univariate tasks and applying it to vision-based problems. While we had included related discussions in the paper, the feedback allowed us to identify these opportunities and propose these directions more explicitly. Thank you for highlighting these important points.\"}", "{\"title\": \"Authors' Response (1/2)\", \"comment\": \"> The paper is somewhat difficult to follow. For example, providing a brief introduction to the structure of each section would enhance clarity, particularly in Sections 2 and 3.\\n> \\n\\nThank you for the helpful feedback. To improve the paper\\u2019s clarity, we have added brief summary text at the beginning of Section 2 (L110) and Section 3 (L185) to provide readers with a clear overview of each section\\u2019s structure and objectives. Additionally, we made several updates throughout the paper aimed at enhancing overall clarity. These changes include revising figures, captions, and corresponding text to better align with the content and improve readability. All changes are highlighted in red for ease of review.\\n\\n> Additionally, it is difficult to grasp the main messages conveyed by the table in Figure 2(b).\\n> \\n\\nThank you for the feedback. We have revised the table in Figure 2(b) to enhance clarity and convey its main message more effectively. Updates include a red dashed line separating deterministic and stochastic simulation styles, a legend clarifying symbols (\\\"\\u2713: Random \\u2716: Fixed\\\"), and explicit \\\\(=0\\\\) and \\\\(>0\\\\) markers under S-Level to differentiate setups. The \\\"Stochastic Process\\\" label highlights its connection to ESP and stochastic evolution, while grouping \\\"Forest Configuration,\\\" and \\\"Fire Seed Location,\\\" under \\\"Initial Conditions\\\" improves organization. The caption has also been updated to clearly articulate the table\\u2019s message.\\n\\n> Furthermore, in lines 229\\u2013240 (However, these systems explore \\u2026.), the macro-level concept is introduced abruptly, which may disrupt the clarity and readability of the main text.\\n> \\n\\nThank you for the feedback. In the revised text, we have refined the discussion to explicitly connect the aggregation of micro-level comparisons with the derivation of a macro-level evaluation score, framing it as a standard approach for summarizing DNN performance. This establishes a logical progression leading to the formal definition of the Macro Random Variable (\\\\( Z_t \\\\)) as a grid-level summary of system behavior, aligned with the calculation of the evaluation metric (see L241,249). These changes ensure the text flows more naturally and improves clarity.\\n\\n> The main findings' practical applicability appears limited. In real-world scenarios, data generally provides only a single observed outcome centered on observable ground truth (line 117). Since the primary evaluation is simulation-based, the controlled stochasticity falls short of capturing real-world complexity. The Statistic-GT is basically derived by normalizing the frequency of target state occurrences across multiple Monte Carlo simulations.\\n> \\n\\nThank you for the comment. We acknowledge that in real-world scenarios, only a single Observed-GT is available, making direct computation of the Statistic-GT infeasible. To address this, we clarify that Statistic-GT represents a conceptual system property capturing the expected behavior across all possible outcomes. While simulation-based experiments allow us to validate the framework by explicitly calculating the Statistic-GT, our key contribution is showing that F2SP can be tested using only the Observed-GT. This ensures our framework aligns with real-world constraints, with ECE reliably assessing DNN fidelity to the stochastic process without requiring multiple realizations.\\n\\nWe have added explicit acknowledgment of this challenge in the introduction (L78, L265) and revised the text to emphasize that ECE tests F2SP using only the Observed-GT (L99, L268). These updates clarify the practical applicability of our approach.\\n\\n> Minors:\\n> \\n> \\n> M1. The original text for the abbreviation RV is not given.\\n> \\n> M2. In Table 1, what about the possibility of recovery in the Host-Pathogen problem?\\n> \\n> M3. In line 152, maybe consider using an alternative symbol for Moore neighborhood, instead of (normally representing Gaussian distribution).\\n> \\n\\nThank you for highlighting these minor issues. We have addressed them as follows: the abbreviation *RV* has been explicitly defined, a note on the possibility of recovery in the Host-Pathogen problem has been added to Table 1, and the symbol for the Moore neighborhood in line 152 has been replaced to avoid confusion with Gaussian distribution notation.\"}", "{\"title\": \"Authors' Response\", \"comment\": \"> The explaination of figures is not sufficient, e.g., in Figure 2 (1), the label for x-axis is not specified (I guess it is time?), either add a label or explain it in the captions. Same problems also exist in Figure 4.\\n> \\n\\nThank you for the feedback. We have updated the figures and captions to address these issues. For Figure 2, we added \\\"time\\\" as the x-axis label. For Figure 3 (previously Figure 4), we clarified the axes in the caption (L373): *\\\"The x-axis (test S-Level) and y-axis (train S-Level) are consistent across all matrices.\\\"* \\n\\n> This work examines ECE on three synthetic environments (forest fire, host-pathogen and stock market models) and a real world wildfire spread dataset. I can tell that these datasets are all multivariate either for classification or regression. Maybe due to the limit of pages, the authors didn't include the experiments on images. I suggest the authors add some discussions or comments in the paper.\\n> \\n\\nThank you for your insightful comment. You are correct that this work focuses on multivariate predictions in complex systems and does not include experiments on univariate tasks or image-based scenarios. For univariate cases, such as image classification or object detection, the traditional context is not forecasting, which differs from our focus. Additionally, ECE requires a sufficient number of samples for reliable calibration error estimates (see L523). In our framework, grid-level predictions contribute significantly to ECE's binning process, which would not occur in univariate settings, introducing additional challenges.\\n\\nFor multivariate cases in vision, such as segmentation map forecasting or stochastic video prediction, there are conceptual parallels to our work. For example, predicting \\\\(s*\\\\) states across a grid is analogous to forecasting segmentation maps. Although such tasks are not traditionally classified as complex systems, their problem formulation and local pixel interactions in videos provide a compelling analogy. A more detailed exploration of these tasks would require adjustments to the current framework, which we suggest as an exciting direction for future work.\\n\\nTo ensure clarity, we have expanded the related works section to highlight that vision tasks, such as segmentation map forecasting and stochastic video prediction, predominantly rely on F2R evaluations, opening up the possibility of applying F2SP evaluation strategies in this context (L500). Additionally, Appendix C.4 has been updated to reflect this discussion. In our future work section (Section 7), we propose exploring how F2SP could be extended to vision tasks (L531).\"}", "{\"title\": \"Authors' Response (1/2)\", \"comment\": \"> Although the author attempts to explain the difference between their work and ECE in deep learning in Lines 282-288, it appears to me this work is still a direct application of using ECE to evaluate the model performance on a stochastic system. The author is encouraged to discuss more in-depth about the distinction between ECE in the proposed method (stochasticity comes from evolving in the environment, aka, Statistic-GT) and ECE in previous works (stochasticity comes from the output distribution).\\n> \\n\\nThank you for raising this point. To clarify the distinction between the use of ECE in our work and in prior studies, we have revised the discussion in the manuscript (L293\\u2013301). Unlike prior works, where ECE is primarily used to measure DNN output calibration in static tasks (e.g., image or text classification), our study highlights its unique suitability for evaluating fidelity to a system property (Statistic-GT) in multivariate stochastic systems. We demonstrate that ECE serves not just as a tool for measuring output calibration but as a critical metric for testing F2SP, effectively addressing system randomness. Furthermore, we establish perfect calibration as a necessary condition for F2SP\\u2014a fundamental insight absent in prior studies. This reframing positions calibration as central to the evaluation of stochastic systems, rather than a secondary consideration. To provide additional context, the discussion in Appendix C.2 has been expanded to further support the main paper. We hope this enhanced discussion addresses your concern.\\n\\n> In Lines 243-244, the author claims that Statistic-GT is more stable than classification-based metrics, but I could not find any evidence related to calculating ECE on Statistic-GT is less sensitive to the system variance than MSE. \\n> \\n\\nThank you for raising this important point. To clarify, we do not claim that Statistic-GT is more stable than classification-based metrics. Rather, we claim that Statistic-GT is a more stable property than Observed-GT, as a given stochastic process has only one Statistic-GT but can have many Observed-GTs. To avoid confusion, we have rephrased the text in the manuscript to make this distinction clear (L254).\\n\\nTo address your question, we have updated the discussion in Appendix F.1 (Figure 16) to include ECE in the plot analyzing the impact of macro-variance on metric sensitivity. The revised discussion (L1389\\u20131394) highlights that both MSE and ECE exhibit lower sensitivity to macro-variance due to their inherent asymptotic convergence guarantees. However, their steady-state behavior differs: MSE's steady-state value is directly influenced by \\\\(Var[Z_t]\\\\), making it dependent on this variance, whereas ECE's steady-state value remains unaffected by \\\\(Var[Z_t]\\\\) upon convergence. This distinction, illustrated in the newly added Figure 17, underscores ECE's suitability for testing fidelity to Statistic-GT, as it remains unaffected by \\\\(Var[Z_t]\\\\), the system property that quantifies randomness.\\n\\n> Is there any theoretical support for using ECE over MSE on stochastic systems with different noise levels and could the author clarify it a bit more?\\n>\", \"regarding_theoretical_support_for_using_ece_over_mse_in_stochastic_systems_with_different_noise_levels\": \"we formally and empirically demonstrate across various noise levels that ECE uniquely tests F2SP, while MSE does not. This distinction positions ECE as a critical metric for these systems. Figures 3 and 4 in the main paper provide empirical evidence, while Sections 3.4.1 and 3.4.2 offer theoretical insights into ECE's unique behavior compared to MSE in stochastic systems.\"}", "{\"summary\": \"This paper presents a study evaluating deep neural networks (DNNs) within stochastic complex systems, emphasizing the importance of Expected Calibration Error (ECE) in measuring fidelity to stochastic processes. The findings are validated through multiple experiments and comparisons.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The topic of evaluating DNNs within stochastic complex systems is both intriguing and important.\\n\\nIn the primary evaluations, the author conducted experiments across various settings, including different DNN architectures, comparisons with multiple evaluation metrics, and diverse simulation tasks.\\n\\nThe main text clearly explains the difference between ECE in classical assessment and stochastic process settings.\", \"weaknesses\": \"The paper is somewhat difficult to follow. For example, providing a brief introduction to the structure of each section would enhance clarity, particularly in Sections 2 and 3. Additionally, it is difficult to grasp the main messages conveyed by the table in Figure 2(b). Furthermore, in lines 229\\u2013240, the macro-level concept is introduced abruptly, which may disrupt the clarity and readability of the main text.\\n\\nThe main findings' practical applicability appears limited. In real-world scenarios, data generally provides only a single observed outcome centered on observable ground truth (line 117). Since the primary evaluation is simulation-based, the controlled stochasticity falls short of capturing real-world complexity. The Statistic-GT is basically derived by normalizing the frequency of target state occurrences across multiple Monte Carlo simulations.\", \"minors\": \"M1. The original text for the abbreviation RV is not given.\\n\\nM2. In Table 1, what about the possibility of recovery in the Host-Pathogen problem?\\n\\nM3. In line 152, maybe consider using an alternative symbol for Moore neighborhood, instead of $\\\\mathcal{N}$ (normally representing Gaussian distribution).\", \"questions\": \"A question arises regarding Figure 1: Can ECE be an effective metric for measuring F2R compared to other available metrics? Figure 1 suggests that the answer may be *no*.\\n\\nAn important indicator that ECE is a reliable measure is its diagonal pattern, showing low scores only when training and test S-Levels align, as illustrated in Figure 4. Could the authors provide theoretical insights to support this indicator?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"the metric of expected calibration error is introduced and studied as a way to capture fidelity of a learned representation to an underlying stochastic process (rather than a single realization of that process, as with typical metrics like AUC or MSE).\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Great paper, wonderfully practical and insightful; I've been looking for something like this for 5+ years! Nice eval on real-world data.\\nI started writing a thing I would like to you add and then discovered it was already in the paper (long horizon behaviour)\", \"weaknesses\": \"While overall the paper is very clear, some of the captions and explanations of the experiments/insights from them and how they tie to the figures could be improved.\", \"some_specifics\": [\"first\\u00a0fig should say what you\\u00a0mean by realization, and F2R and F2SP should be bolded (not ital) to make them easy to find in the text. Observed GT should be explained a bit more, or maybe it would be enough to move the sentence currently after F2R to be the second sentence of the paragraph.\", \"Fig 5 is unclear to me. What is the data, what is S-level, why is it \\\"good\\\" that the 20 vs 10 lines are far apart? All of this should be clear from the caption\", \"the clarity wanes a bit as the paper goes on, and it's a bit confusing that you call it ECE vs. F2SP vs Statistic-GP. Do these different namings really serve something? It could be a lot more clear if you just have one naming.\"], \"questions\": [\"I don't understand the second part of the critical question, \\\"is it encountering different stochastic\\u00a0behaviours\\\" (different from what)? how is the \\\"differentness\\\" relevant?\", \"While it's pretty clear to me how to use this immediately in my work, I think anyone who wasn't already aware they wanted exactly this might struggle. Could you provide something like a \\\"practical users guide\\\" for non-domain experts?\", \"if the clarity of the plots can be improved, the naming of the stat/metric you're introducing, and improve it's \\\"usability\\\" to the community, I would be happy to upgrade my score. You've done great work and this would bring the paper to the level it deserves.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work offers a new perspective on evaluating DNNs in stochastic complex systems by emphasizing the importance of capturing underlying the stochastic process. Traditional evaluation methods assess the DNN\\u2019s ability to replicate the observed ground truth but fail to measure the DNN\\u2019s learning of the underlying stochastic process. This paper proposes a new property called Fidelity to Stochastic Process, representing the DNN\\u2019s ability to predict the ground truth of the stochastic process, and introduces an evaluation metric that exclusively assesses fidelity to the ground truth of the stochastic process. The Expected Calibration Error is used to evaluate the fidelity to ground truth of statistic process. Empirical experiments on synthetic datasets (including wildfire, host-pathogen, and stock market models) and real-world wildfire data are used to show the measurement of fidelity to stochastic process by Expected Calibration Error.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper offers a new perspective on evaluating DNNs by considering DNNs as stochastic processes and uses a widely used criteria in Bayesian Deep Learning application to assess the fidelity to stochastic process. This work clearly explains the Expected Calibration Error is used to assess DNN modes in three synthetic cases and one real world case.\", \"weaknesses\": \"This paper is well organized and well written, several minor issues should be addressed: (1) The explaination of figures is not sufficient, e.g., in Figure 2 (1), the label for x-axis is not specified (I guess it is time?), either add a label or explain it in the captions. Same problems also exist in Figure 4. (2) This work examines ECE on three synthetic environments (forest fire, host-pathogen and stock market models) and a real world wildfire spread dataset. I can tell that these datasets are all multivariate either for classification or regression. Maybe due to the limit of pages, the authors didn't include the experiments on images. I suggest the authors add some discussions or comments in the paper.\", \"questions\": \"As mentioned in the \\\"Weaknesses\\\" part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your feedback and for taking the time to review our response. We also appreciate you updating the score for the contribution. However, we noticed that the overall score did not change and wanted to kindly check if this was intentional or perhaps an oversight. \\n\\nWe are grateful for your engagement and the opportunity to address your concerns. Please let us know if there\\u2019s anything further we can clarify. \\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Final updates to the PDF before the submission deadline\", \"comment\": \"We have updated the PDF with additional minor refinements based on reviewer feedback. Specifically, in response to Reviewer 7Rfe's feedback, we highlighted the broad applicability of our findings in the introduction, supported by evidence from the analysis of real-world data. This is reflected in the updated last point in the contributions list in the introduction (L99\\u2013102):\\n\\n> Beyond synthetic systems, we analyze real-world wildfire data, identifying instances where stochasticity disrupts traditional metrics and observing trends that align with our synthetic findings, reinforcing the practical applicability of our study. \\n\\nAdditionally, we made minor cosmetic improvements to Figure 3 for better quality. The captions of Figures 3 and 4 were slightly refined to make their key messages clearer, incorporating feedback from all reviewers.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Authors' Response (1/2)\", \"comment\": \"> I found it hard to read the paper because there was a lack of consistency in the acronyms, the authors would redefine them in several parts of the text again and again. I addressed my comments on text in the questions section.\", \"l50\": \"Is --> is (lowercase) Fig1: no need to write the whole name, you can use acronyms because they're already defined in the text, however MSE is not defined at this point. L88: fidelity to realization --> F2R (it was already defined previously, so you can use the acronym) L99: the notation of the dimension of the real vector O_t is confusing, what is (R^n)^(H x W), is n = H x W? If so, make that explicit. Table 1: some rows end with full stop, other don't. Please make it consistent. Either all with or all without.\\n> \\n\\nThank you for your detailed feedback. We have addressed all the mentioned issues: acronyms are now used consistently throughout the text without unnecessary redefinitions. The notation for \\\\(O_t\\\\) has been clarified (L120). Additionally, we standardized punctuation in Table 1. These updates ensure consistency and improve readability.\\n\\n> In the tables, the best neural networks based on each criterion are not highlighted, which makes it difficult to the reader to infer and correlate the arguments in the text. I addressed my comments on text in the questions section.\", \"table_2_and_table_7\": \"highlight the best performing DNNs.\\n> \\n\\nThank you for the suggestion. In Table 2, we added a color gradient to the evaluation metric columns to indicate relative performance, with red representing worse scores and lighter shades representing better scores. The caption was updated to reflect this change, improving readability. For Table 7, we highlighted the best-performing DNN, making the rank conflict more prominent and reinforcing the key theme of how to select the best model. These updates enhance clarity and help readers better correlate the text with the tables.\\n\\n> The focus of the paper is primarily on binary or discrete prediction tasks, leaving out regression tasks where the definition of calibration is more complex. While the authors acknowledge this and suggest it as an area for future work, the current scope limits the immediate applicability of the findings to a broader range of problems involving continuous outcomes.\\n> \\n\\nThank you for your thoughtful comment. While we acknowledge that the current scope focuses on binary and discrete prediction tasks, these tasks span a wide range of applications in complex systems, as highlighted in the references. This focus enables us to rigorously develop a foundational framework for assessing fidelity to system properties like the Statistic-GT without diluting the work\\u2019s impact across disparate problem classes.\\n\\nWe agree that extending these findings to regression tasks with continuous outcomes is an important area for future work. However, expanding the scope within this paper, which already spans more than 30 pages including the appendix, would risk overwhelming readers and obscuring the primary contributions. We believe this work provides a robust foundation for future extensions and appreciate your perspective.\\n\\n> Additionally, the use of the NDWS dataset, which is restricted to next-day predictions, prevents the assessment of ECE over longer time horizons, which are common in many complex systems. Could you elaborate on how future work might address this limitation?\\n> \\n\\nThank you for highlighting this limitation. We acknowledge that the NDWS dataset, restricted to next-day predictions, limits the ability to robustly validate ECE over longer time horizons. Addressing this limitation would require datasets that span multiple time horizons (L539), many of which already exist but are not open-sourced. Future work could leverage such datasets to further validate our findings.\\n\\nThat said, our evaluation framework remains broadly applicable, even for next-timestep predictions. Importantly, if the time horizon between consecutive steps is large (e.g., monthly versus daily predictions), the relative importance of F2SP versus F2R shifts. For shorter horizons, F2R may be more critical, whereas for longer horizons, F2SP becomes more relevant. This distinction, along with guidance on interpreting results in these contexts, is discussed in the practical user guide added to the Appendix H.\\n\\n> The paper highlights the lack of open-source complex system datasets as a barrier to broader validation. Are there any ongoing initiatives or plans to develop, collect, or standardize such datasets?\\n> \\n\\nWe are exploring collaborations with domain experts, such as those in forest fire modeling, to assess the feasibility of assembling standardized datasets. We acknowledge the inherent challenges in unifying diverse data sources due to varying system constraints. We hope this work underscores the importance of open-source datasets and inspires broader efforts within the research community to develop and standardize such resources.\"}", "{\"title\": \"Authors' Response (2/2)\", \"comment\": \"> I find it odd to place Figures in columns as Figure 1 (which has a large top white margin) and Figure 3. I would suggest column figures into one row figure with multiple subfigures as you did with Figure 2.\\n> \\n\\nThank you for pointing this out. We agree that placing Figures 1 and 3 in columns with large top white margins was not optimal. In response, we have merged these two figures into Figure 1. We did the same for Figure 5 and Table 2. This adjustment improves the visual consistency of the paper and makes better use of space. We appreciate your suggestion, as it enhances the overall presentation of our work.\\n\\n> L201: Isn't the indicator variable already defined as B_t in L99? Why defining again with different notation? L298: MSE already defined in text previously, no need to write the whole name again. L516: ECE already defined in text previously, no need to write the whole name again.\\n> \\n\\nThank you for the feedback. We have addressed these issues by ensuring consistent use of previously defined notations and abbreviations. The redundant redefinitions of \\\\(B_t\\\\), MSE, and ECE have been removed. Additionally, the definition of the micro random variable has been rephrased to reference \\\\(B_t\\\\) directly for clarity (see L226).\"}", "{\"title\": \"Scope Clarification and Future Directions\", \"comment\": \"In response to the **HXsj**\\u2019s insightful feedback, we made a small change to the paper (L523,526) to further clarify its scope and propose future extensions. Specifically, we updated the Limitations of ECE section to explicitly state that the work focuses on multivariate tasks and to explain why univariate forecasting poses challenges due to ECE's sample size requirements. Additionally, we proposed using improved calibration error estimators as a potential extension to address this limitation.\"}", "{\"comment\": \"Dear all,\\n\\nThe deadline for the authors-reviewers phase is approaching (December 2).\\n\\n@For reviewers, please read, acknowledge and possibly further discuss the authors' responses to your comments. While decisions do not need to be made at this stage, please make sure to reevaluate your score in light of the authors' responses and of the discussion.\\n\\n- You can increase your score if you feel that the authors have addressed your concerns and the paper is now stronger.\\n- You can decrease your score if you have new concerns that have not been addressed by the authors.\\n- You can keep your score if you feel that the authors have not addressed your concerns or that remaining concerns are critical.\\n\\nImportantly, you are not expected to update your score. Nevertheless, to reach fair and informed decisions, you should make sure that your score reflects the quality of the paper as you see it now. Your review (either positive or negative) should be based on factual arguments rather than opinions. In particular, if the authors have successfully answered most of your initial concerns, your score should reflect this, as it otherwise means that your initial score was not entirely grounded by the arguments you provided in your review. Ponder whether the paper makes valuable scientific contributions from which the ICLR community could benefit, over subjective preferences or unreasonable expectations.\\n\\n@For authors, please respond to remaining concerns and questions raised by the reviewers. Make sure to provide short and clear answers. If needed, you can also update the PDF of the paper to reflect changes in the text. Please note however that reviewers are not expected to re-review the paper, so your response should ideally be self-contained.\\n\\nThe AC.\"}", "{\"comment\": \"Dear reviewer,\\n\\nPlease make to sure to read, at least acknowledge, and possibly further discuss the authors' responses to your comments. Update or maintain your score as you see fit.\\n\\nThe AC.\"}", "{\"title\": \"Authors' Response (2/2)\", \"comment\": \"> I'm curious about how the author evaluates ECE at time $t$ based on Statistic-GT $P_t$. Do we have to simulate it again from t = 0 for N times or can we sample states from t - 1 and go forward N times (the system is Markov)? Can we still apply ECE on Statistic-GT when the system is not Markov?\\n> \\n\\nThank you for your insightful question regarding the evaluation of ECE at time $t$ based on the Statistic-GT $P_t$. We appreciate the opportunity to clarify both the computation of Statistic-GT and its applicability across Markovian and non-Markovian systems.\\n\\nTo compute Statistic-GT $P_t$, we require a distribution of possible system states at time $t$, generated through multiple realizations of the system's evolution. Two primary methods can achieve this: $1.$ simulating from $t = 0$ for $N$ realizations up to time $t$, or $2.$ starting from the state at $t - 1$ and simulating $N$ trajectories forward to $t$, leveraging the Markov property if applicable. In our work, we employ the first method\\u2014simulating from $t = 0$\\u2014to capture the full stochastic evolution of the system. This approach accounts for the accumulation of stochastic effects over time and the potential divergence of trajectories due to system sensitivity to initial conditions. While the second method can be used for Markovian systems, the first method ensures consistency and applicability across both Markovian and non-Markovian systems.\\n\\nRegarding non-Markovian systems, the computation of Statistic-GT does not rely on the Markov assumption. In such systems, where the future state depends on a sequence of past states rather than just the current state, multiple realizations from the same initial conditions still allow us to approximate $P_t$. This makes ECE applicable regardless of whether the system exhibits Markovian properties. Additionally, while Statistic-GT is used conceptually in our study to demonstrate ECE's fidelity to a system property, it is important to emphasize that ECE can be directly computed using the Observed-GT and DNN predictions. The core challenge addressed in our paper is showing that ECE effectively evaluates fidelity to Statistic-GT without requiring its explicit computation in real-world scenarios. The introduction of the paper has been revised to make this clear.\"}", "{\"metareview\": \"The reviewers unanimously recommend acceptance (8-6-6-8-6). The paper presents a significant contribution for the evaluation of neural networks designed to forecast the evolution of stochastic complex systems. Reviewers recognize the importance of the work and the quality of the results. The author-reviewer discussion has been constructive and has led to a number of improvements to the paper, in particular regarding its presentation. The reviewers have raised some concerns about the practical applicability of the findings (e.g., the focus on binary prediction tasks, the limited horizon in the experiments), but the authors have provided convincing arguments arguing for the relevance of their work nonetheless. No major concerns have been raised by the reviewers. For these reasons, I recommend acceptance. I encourage the authors to implement the changes discussed with the reviewers in the final version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The author-reviewer discussion has been constructive and has led to a number of improvements to the paper, in particular regarding its presentation.\"}", "{\"summary\": \"This paper introduces a novel stochasticity-compatible evaluation strategy for assessing existing models in the context of complex systems. The author justifies the Expected Calibration Error (ECE) as suitable for assessing the model fidelity of stochastic systems through both simulation environments and real-world data.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Evaluating model fidelity on the stochastic system is significant and has wide applications.\\n2. The paper is well-motivated and both the dataset and experiments are thorough.\", \"weaknesses\": \"1. Although the author attempts to explain the difference between their work and ECE in deep learning in Lines 282-288, it appears to me this work is still a direct application of using ECE to evaluate the model performance on a stochastic system. The author is encouraged to discuss more in-depth about the distinction between ECE in the proposed method (stochasticity comes from evolving in the environment, aka, Statistic-GT) and ECE in previous works (stochasticity comes from the output distribution).\\n2. In Lines 243-244, the author claims that Statistic-GT is more stable than classification-based metrics, but I could not find any evidence related to calculating ECE on Statistic-GT is less sensitive to the system variance than MSE. Is there any theoretical support for using ECE over MSE on stochastic systems with different noise levels and could the author clarify it a bit more?\", \"questions\": \"I'm curious about how the author evaluates ECE at time $t$ based on Statistic-GT $P_{t}$. Do we have to simulate it again from $t=0$ for $N$ times or we can sample states from $t-1$ and go forward $N$ times (the system is Markov)? Can we still apply ECE on Statistic-GT when the system is not Markov?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author's Response (1/2)\", \"comment\": \"> first\\u00a0fig should say what you\\u00a0mean by realization, and F2R and F2SP should be bolded (not ital) to make them easy to find in the text. Observed GT should be explained a bit more, or maybe it would be enough to move the sentence currently after F2R to be the second sentence of the paragraph.\\n> \\n\\nThank you for the valuable feedback. We have restructured and merged the original Figure 1 with the previous Figure 3 to contextualize F2R and F2SP in terms of Observed GT and Statistic GT. The updated Figure 1 provides a comprehensive overview of the paper, accompanied by a self-explanatory caption that clearly outlines the system property of interest, the evaluation criteria, and the corresponding metrics used to assess these criteria.\\n\\nIn response to the comment about the text, we have revised the second, third, and fourth paragraphs of the introduction to address the suggestions provided by all reviewers. These changes aim to enhance the clarity of the arguments and resolve any confusion highlighted in the reviews. Please see the text highlighted in red in the revised pdf.\\n\\n> the clarity wanes a bit as the paper goes on, and it's a bit confusing that you call it ECE vs. F2SP vs Statistic-GP. Do these different namings really serve something? It could be a lot more clear if you just have one naming.\\n> \\n\\nThank you for highlighting this point. To address the concern about the clarity of naming conventions, we have explicitly defined the rationale behind the terms *ECE*, *F2SP*, and *Statistic-GT* in the revised text. Additionally, we have included a table in the updated Figure 1.b to further clarify the distinctions between these terms.\", \"specifically\": \"- *Statistic-GT* represents the *system property* we aim to evaluate fidelity to. It highlights the difference from *Observed-GT*, which refers to the observable outcomes of the system\\u2019s evolution.\\n- *F2SP* is the *evaluation criterion* that tests fidelity to *Statistic-GT*, contrasting with *F2R*, which evaluates fidelity to *Observed-GT*.\\n- *ECE* is an existing *evaluation metric* that serves as a tool to assess *F2SP*.\\n\\nWe have revised the introduction (Lines 43, 44, 50, and 77) and Figure 1.b to make these distinctions explicit. Moreover, we have ensured consistent use of *F2SP* and *F2R* throughout the paper as evaluation criteria, mentioning system properties only when relevant to the discussion. These changes aim to improve clarity and address the reviewer\\u2019s concerns directly.\\n\\n> - Fig 5 is unclear to me. What is the data, what is S-level, why is it \\\"good\\\" that the 20 vs 10 lines are far apart? All of this should be clear from the caption\\n> \\n\\nThank you for the feedback. We have updated the caption of Figure 5 (L403) to address your comments and ensure greater clarity. The revised caption now explicitly describes the data, the meaning of S-Level, and why the separation of the 20 vs. 10 lines is significant. For your convenience, the updated caption is provided below:\\n\\n*\\\"Two DNNs were trained on 700 forest fire simulations with different S-Levels\\u201410 (orange, low stochasticity) and 20 (blue, high stochasticity)\\u2014and evaluated on 300 test simulations with S-Level 20. Evaluation metrics (a) AUC-PR, (b) MSE, and (c) ECE were measured over an extended prediction horizon. AUC-PR shows similar trends for both models, failing to distinguish the stochastic mismatch, while MSE declines more steeply for the mismatch case but also shows a declining trend for both DNNs due to misalignment with the Observed-GT. ECE remains low and stable for the DNN trained on S-Level 20. This highlights ECE\\u2019s unique ability to track alignment with the Statistic-GT, unlike AUC-PR and MSE, which focus on the Observed-GT.\\\"*\\n\\n> I don't understand the second part of the critical question, \\\"is it encountering different stochastic\\u00a0behaviours\\\" (different from what)? how is the \\\"differentness\\\" relevant?\\n> \\n\\nThank you for pointing out the need for clarification. We have revised the text to make the critical question clearer and to emphasize the relevance of the \\\"differentness.\\\" The updated phrasing (L45 onwards) is as follows:\\n\\n*\\\"This focus on F2R raises a critical question when a DNN fails to match the Observed-GT: is the mismatch due to inherent stochastic variability, or does it result from exposure to a fundamentally different stochastic process that the DNN has not modeled? Understanding this distinction is crucial: a DNN that accurately captures the stochastic process, even if it mismatches the Observed-GT, may still offer valuable insights, whereas a failure to model the process entirely undermines its utility.\\\"*\\n\\nThis revision explicitly clarifies the meaning of \\\"different stochastic behaviors\\\" and highlights the practical importance of the difference between failing to match the Observed-GT vs. failing to match the underlying stochastic process.\"}" ] }
2TuUXtLGhT
Long-Context Linear System Identification
[ "Oğuz Kaan Yüksel", "Mathieu Even", "Nicolas Flammarion" ]
This paper addresses the problem of long-context linear system identification, where the state $x_t$ of the system at time $t$ depends linearly on previous states $x_s$ over a fixed context window of length $p$. We establish a sample complexity bound that matches the _i.i.d._ parametric rate, up to logarithmic factors for a broad class of systems, extending previous work that considered only first-order dependencies. Our findings reveal a ``learning-without-mixing'' phenomenon, indicating that learning long-context linear autoregressive models is not hindered by slow mixing properties potentially associated with extended context windows. Additionally, we extend these results to _(i)_ shared low-rank feature representations, where rank-regularized estimators improve rates with respect to dimensionality, and _(ii)_ misspecified context lengths in strictly stable systems, where shorter contexts offer statistical advantages.
[ "autoregressive", "linear", "statistics", "low rank", "mispecification" ]
Accept (Poster)
https://openreview.net/pdf?id=2TuUXtLGhT
https://openreview.net/forum?id=2TuUXtLGhT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "Y0nafV1j1K", "VOMLpWnnCy", "QQENCoV1MU", "9rKXP9hWal", "3RiAjHBCxE", "1hAOEJIYBs" ], "note_type": [ "meta_review", "official_review", "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1734458796717, 1730672070997, 1737523471975, 1730264109463, 1730597918836, 1730303007232 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1861/Area_Chair_39A8" ], [ "ICLR.cc/2025/Conference/Submission1861/Reviewer_RGko" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1861/Reviewer_o3yE" ], [ "ICLR.cc/2025/Conference/Submission1861/Reviewer_Mair" ], [ "ICLR.cc/2025/Conference/Submission1861/Reviewer_tdsH" ] ], "structured_content_str": [ "{\"metareview\": \"This paper addresses the problem of linear system identification within a long-context framework. Specifically, it makes the following contributions:\\n\\n1. Theoretical Guarantee for the Constrained Least Squares Estimator: Under mild assumptions on the design matrix and sub-Gaussian noise, the paper establishes a theoretical guarantee for the constrained least squares estimator. This result parallels existing results in the i.i.d. setting, with an additional logarithmic factor.\\n2. Extension to the Low-Rank Setting: The main result is extended to a low-rank setting, demonstrating an improved statistical rate that depends on the rank constraint.\\n3. Misspecified Context Windows: The analysis is further extended to the case of misspecified context windows, revealing that partial learning still occurs under model misspecification.\\n\\nMost of the reviewers believe that the paper makes substantial contributions, and the AC also agrees with this assessment.\", \"additional_comments_on_reviewer_discussion\": \"Four reviewers have evaluated the paper, and their overall assessment is positive. I agree with their evaluation and believe the paper offers a strong contribution with compelling results.\\n\\nTwo reviewers raised a few technical questions, which the authors have addressed satisfactorily. Another reviewer suggested including remarks on the applicability of the proposed method in the revised version of the paper. I strongly recommend that the authors incorporate these remarks in the final version.\\n\\nOne reviewer recommended a \\u201cReject\\u201d; however, their review is very brief and lacks substantive feedback or justification. Given the absence of meaningful critique, I have chosen to disregard this rating in my final assessment of the paper.\"}", "{\"summary\": \"The authors study the problem of identifying long-context linear systems where the state at any given time depends on a sequence of previous states over an extended context window. In contrast to traditional linear system identification that typically assumes first-order dependencies, this paper focuses on autoregressive processes of order\\n$p>1$. The authors establish sample complexity bounds, demonstrating a \\\"learning-without-mixing\\\"-type of result. In particular, they show that a slow mixing does not inflate their learning rates. In addition, the authors further extend their results to the setting where the long-context linear model admits a low-rank representations. They also explore the implications of context length misspecification.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Clarity of exposition:** The paper is well-written and well-organized and systematically introduces the problem setting, contributions, and theoretical derivations. Definitions and assumptions are clearly stated, and the logical progression through each theoretical component makes the paper easy to follow.\\n\\n**Intuitive and well-discussed results:** The concept of \\\"learning-without-mixing\\\" is well-motivated by the authors. This result aligns with the literature on \\\"learning-without-mixing\\\" for linear system. In particular, the authors show that for long-context linear system identification, where long contexts naturally entails a strong sample dependency, it does not necessarily inflate the bounds. Moreover, the low-rank representation learning setting and misspecification scenarios are well-explained, with clear justifications for how each condition affects the error bounds.\\n\\n**Theoretical contribution:** The theoretical contributions are significant, providing error bounds that extend classical linear system identification results to long-context models, and the learning rates aligns with the literature of learning-without-mixing for linear systems.\", \"weaknesses\": \"**Misspecification Results and Assumptions:** Section 3.4, particularly Assumption 3.9, imposes a constraint on the misspecified model that may be too restrictive for practical applications. The requirement that $|| (MA^\\\\star - MA^\\\\star_{1:p'})L^\\\\star ||_{\\\\text{op}} \\\\leq D'$ implies that misspecification must remain controlled to a certain degree. The authors could discuss the limitations of Assumption 3.9 if this assumption does not hold in practical settings or offer heuristics for relaxing this constraint.\", \"questions\": \"1) In Section 5, the authors emphasize that the sample complexity bounds derived remain unaffected by mixing times, highlighting the \\\"learning-without-mixing\\\" result. Could the authors discuss more the slow mixing setting (i.e., when the system is marginally stable), would the learning rates deteriorate?\\n\\n2) The misspecification results in Section 4 suggest that shorter context lengths can still capture useful structure in long-context systems. Could the authors provide insights into specific applications where such misspecified models are particularly advantageous?\\n\\n3) I am curious about how coordinate descent minimization could be used to learn $P^\\\\star$ in polynomial time for this setting of long-context linear system identification and the implications of non-isotropic data when updating $P$. \\n\\n**Minor**: The abbreviation for Ordinary Least Squares (OLS) is used early but is only formally defined in Section 3.2.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper provides a low rank approach for long-context linear system identification.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper provides a low rank approach for long-context linear system identification.\", \"weaknesses\": \"It's unclear how to tune the rank when we don't know it's true value (e.g. real dataset).\", \"questions\": \"How do you select the proper rank when it's not known?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper considers the linear system identification problem under a long context framework. More specifically, the paper presents:\\n\\n1. A main result on a theoretical guarantee of the constrained least squares estimator under mild assumptions on the design matrix and sub-gaussianity of noise. This result is shown to parallel previously existing results in the i.i.d. setting under some additional logarithmic factors.\\n2. An extension of the main result to a low rank setting, showing an improved statistical rate depending on the rank constraint.\\n3. A further extension of the main result to the case of misspecified context windows, suggesting partial learning occurs for misspecified models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"While the topic of linear identification is certainly not new, the theoretical results developed in this paper are novel. The authors clearly stated problem formulations, main results and motivations. Overall, the paper was well written with an enjoyable read.\", \"weaknesses\": \"There are several minor questions and suggestions regarding confusions in the main text (these are deferred to the questions section below). Experiments were minimal and only provided in the appendix.\", \"questions\": \"Some minor questions and suggestions include:\\n\\n1. The first question is about the claim in line 379 that \\\"Importantly, the constant $C$ and the logarithmic terms are independent of the mixing related quantity $\\\\text{max}(1/1-\\\\rho,p).$ Here, $\\\\rho$ is the operator norm of $M_{\\\\textbf{A}^\\\\star}$\\\". However, as stated in line 263, the explicit constant $C(\\\\delta)$ depends on the diameter $D$ which is one of the constraints listed in Assumption 3.4 where it is assumed that the operator norm of $M_{\\\\textbf{A}^\\\\star}$ is less than or equal to $D$. These two statements seem to be contradictory. Can you elaborate on these dependencies?\\n\\n2. In Equation (14), the variable $E$ is not clearly defined in the main text although it is available in the appendix. It may be helpful to add it in the main text to prevent confusion.\\n\\n3. The last inequality in Equation (18) appears to be a typo.\\n\\n4. There are multiple typos in the appendix, and it would be helpful to do a careful revision of the text. For example, line 1184 formatting, line 1287 \\\"martices\\\", line 1378 \\\"rearrainging\\\", to name a few. \\n\\n5. Considering the main results depend strongly on the condition number of $L_\\\\star$, it would be helpful to include discussions about how this condition number typically behaves. For example, how does this condition number relate to the condition number or singular values of a matrix $A$ if $A_1^\\\\star = A_2^\\\\star = \\\\ldots = A$? How does it behave if $A_i^\\\\star$ have elements sampled i.i.d. from normal distributions? Does the condition number of $L_\\\\star$ also depend on $T$? \\n\\n6. While in Equation (8) it is stated that the result depends on polylog$(\\\\kappa)$, can you elaborate on whether this result depends on $\\\\log (\\\\kappa)$, or is it actually dependent on O($\\\\kappa$) or other polynomials of $\\\\kappa$? It is not immediately obvious from the proof, however, in many contexts theoretical guarantees of estimators are linearly related to logs of condition numbers. Since $\\\\kappa$ is already the log of the condition number, I think it makes sense to clarify this dependency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns were found.\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The article focusses on the problem of identifying the $A_k$ matrices for $k=1,\\\\ldots,p$ where $p$ is the context length when the data is being generated via\\n\\n$$ x_t=\\\\sum_{k=1}^p A_k^* x_{t-k}+\\\\xi_t $$\\n\\nwhere $A_1^*,\\\\ldots,A^*_p$ are $d\\\\times d$ matrices and $\\\\xi_t$ is i.id. noise. The article discusses three problems \\n\\n$(1)$ Minimizing the empirical loss function under an induced-norm bound on the $A^*$ matrices where the loss function is given by\\n\\n$$\\\\ell({\\\\bf A})=\\\\frac{1}{NT}\\\\sum_{n=1}^N\\\\sum_{t=p}^T\\\\left\\\\| x_t^{(n)}-\\\\sum_{k=1}^p A_k x_{t-k}^{(n)}\\\\right\\\\|.$$\\n\\nHere, $T$ is the length of the trajectory and $N$ trajectories are collected.\\n\\n$(2)$ Minimizing the empiricial loss function with induced-norm constraints on the $A_k$ with an added rank constraint on $A_k$\\n\\n$(3)$ The same loss function is minimized with an induced norm contraints on $A_k$ matrices and a bound that captures a context length $p'$ which is smaller than the actual context length $p.$\\n\\nFor all the three problems above the article provides non-asymptotic bounds on the Frobenius norm of the error of the estimates with respect to $A_k^*$ that will result from an optimal solution to the problems (1), (2) and (3). The authors provide a discussion that why standard approaches of lifting the state will face challenges, and provide reasons on why the bounds they have obtained are independent of the time taken by the Markov Chain to reach any steady state distribution. The authors further comment on stability conditions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The results provide non-asymptotic bounds on three problems that are well motivated; earlier works have not covered the case where the process has dependency on the past with a context length.\", \"weaknesses\": \"(1) The non-asymptotic bounds are not with respect to any specific algorithm that takes data and solves the related optimization problems. The authors indicate possible approaches for solving these problems but do not analyze any specific algorithm; however, it stands to reason that the sample complexity will depend on the approach being taken. The rank constrained problem is a particularly challenging one as its not a convex problem. The authors assume a an optimal solution to the problems. The authors need to comment on whether advances with respect to other works are also in the same spirit or if they analyze known solutions to the optimization problems. Proper justification of the utility of the results need to be provided if the article provides analysis assuming the existence of the optimal solution.\\n\\n(2) The mathematics is presented in a dense manner; here the approach and the problem description can be better presented and explained. In the Sketch of the Proof, some of the matrices such as $E$ are not defined in the main body (its defined in the Appendix; however, the main body should be self-contained; the definitions are buried deep in the Appendix). Some suggestions are to show the matrix operations in more detail to help a reader along.\", \"questions\": \"(1) The authors discuss how their results do not depend on the mixing time of the Markov Chains involved. The authors can provide better intuition on how they need not consider mixing time in the non-asymptotic bounds obtained.\\n\\n(2) Can the authors provide more details on the simulations reported. The problems being considered do not admit closed-form solutions and include non-convex problems and thus should be difficult to solve. How are these challenges reflected in the simulations section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2TiU1JTdSQ
Selective LoRA for Domain-Aligned Dataset Generation in Urban-Scene Segmentation
[ "Minho Park", "Sunghyun Park", "Jungsoo Lee", "Hyojin Park", "Kyuwoong Hwang", "Fatih Porikli", "Jaegul Choo", "Sungha Choi" ]
This paper addresses the challenge of data scarcity in semantic segmentation by generating datasets through fine-tuned text-to-image generation models, reducing the costs of image acquisition and labeling. Segmentation dataset generation faces two key challenges: 1) aligning generated samples with the target domain and 2) producing informative samples beyond the training data. Existing methods often overfit and memorize training data, limiting their ability to generate diverse and well-aligned samples. To overcome these issues, we propose Selective LoRA, a novel fine-tuning approach that selectively identifies and updates only the weights associated with necessary concepts (e.g., style or viewpoint) for domain alignment while leveraging the pretrained knowledge of the image generation model to produce more informative samples. Our approach ensures effective domain alignment and enhances sample diversity. We demonstrate its effectiveness in generating datasets for urban-scene segmentation, outperforming baseline and state-of-the-art methods in in-domain (few-shot and fully-supervised) settings, as well as domain generalization tasks, especially under challenging conditions such as adverse weather and varying illumination, further highlighting its superiority.
[ "Dataset Generation", "Urban-scene Segmentation" ]
Reject
https://openreview.net/pdf?id=2TiU1JTdSQ
https://openreview.net/forum?id=2TiU1JTdSQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yQ2lyaoSAo", "rIDIuOxsSX", "r6XXBk60Se", "qgGQbzOs8p", "pxwUF2r7Pr", "pKuL6VfsSX", "oxzp7cfwVq", "ocSFAvXr5K", "oKmQHSmITI", "o4xOM3euFg", "o1tkAtIYas", "hKnFFaEgzz", "gcYz0Ua3Ph", "fYCOiScPm9", "cIKfrJqr9T", "c9FkMcq82V", "a9BjQgDNOV", "YoB8oBArQ8", "YH4LSATgdT", "TXJgSfQu8N", "SmvahbvGdL", "Rpdsf6hknh", "RKh4AXYrxe", "NMHBEAEsEO", "N69rqKXUgb", "KgVEWpqTq2", "KZWRnunkCu", "GtfLqj6cPA", "CwONAs7gs5", "BxxT133sNe", "6yvY0m6oYy", "4rywX0upOP", "2p9VnJiGnk", "2mEWzMbsfu", "0Fb4YmDr47" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732301561211, 1732301731776, 1730640098055, 1732756898715, 1732756795594, 1732302189539, 1732568369035, 1733056430842, 1732302061379, 1737523510488, 1734845680635, 1732302068227, 1732301917813, 1732439107562, 1732568134094, 1730558719252, 1732302172841, 1732757534235, 1733167793692, 1732302005927, 1733156308791, 1732625619133, 1732568153177, 1732713275123, 1730646223284, 1732757070382, 1730607322581, 1732756862030, 1732302138055, 1732712594338, 1732484417406, 1732441467228, 1732568358177, 1732301793568, 1732301856825 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_hMNt" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2521/Area_Chair_UdKn" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_B65c" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_kFPj" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_hMNt" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_KMEN" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_kFPj" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_B65c" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_KMEN" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_kFPj" ], [ "ICLR.cc/2025/Conference/Submission2521/Reviewer_kFPj" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ], [ "ICLR.cc/2025/Conference/Submission2521/Authors" ] ], "structured_content_str": [ "{\"comment\": [\"*We sincerely thank the reviewers for their thoughtful and constructive feedback, which has greatly contributed to improving the quality and clarity of our manuscript.*\", \"We deeply appreciate the recognition of the strengths of our work and have refined our manuscript further by thoughtfully incorporating the suggested improvements. The strengths highlighted by the reviewers include:\", \"The core methodology, Selective LoRA, was acknowledged as both reasonable and novel by all reviewers (**B65c**, **hMNt**, **KMEN**, **kFPj**).\", \"The extensive experiments and ablation studies effectively demonstrated that the generated dataset enhances segmentation performance across various settings, including both in-domain and domain-generalization tasks (**B65c**, **hMNt**, **KMEN**, **kFPj**).\", \"Our work addresses the critical challenge of data scarcity, a key issue in semantic segmentation and other vision tasks (**hMNt**, **kFPj**).\", \"The paper is well-organized and well-written (**B65c**, **KMEN**), with a clear and concise description of the motivation (**B65c**). Additionally, the pipeline figures were noted as easy to understand (**B65c**).\", \"We have carefully addressed all concerns raised by the reviewers and incorporated their valuable suggestions to further enhance the manuscript. Detailed responses to each reviewer\\u2019s comments are provided below.\", \"> **Improving the overall presentation of the manuscript**\", \"We have added detailed explanations of the segmentation dataset framework in **Section 3.1 Overall Framework** and **Section 3.4. Training Label Generator and Generating Diverse Segmentation Datasets**.\", \"Further details are provided in **Appendix A.1. Implementation Details**, along with a newly added **Figure 7 Detailed Label Generator Architecture**.\", \"To enhance clarity, we have reorganized **Section 4. Experiments** and updated **Table 1. (previously Table 3)**.\", \"> **Additional Experiments**: We conducted all additional experiments requested by the reviewers, as follows:\", \"Additional image-label alignment experiments are provided in **Figure 6** and **Appendix A.6 Comparison of Image-Label Alignment**, addressing concerns from reviewers **B65c** and **kFPj**.\", \"We tested our approach on a general domain dataset (Pascal-VOC), and the results are included in **Appendix A.8. In-domain Experiments for the General Domain Dataset**, as requested by reviewers **B65c** and **kFPj**.\", \"We conducted experiments with hand-crafted selection baselines (Selective LoRA fine-tuning on all cross-attention layers) as suggested by **hMNt**.\", \"Additionally, we included **InstructPix2Pix** as an additional baseline in the domain generalization setting (**Table 2**), addressing the suggestion by **KMEN**.\", \"To address concerns from **kFPj**, we added **Appendix A.7. Concept Sensitivity According to Prompt Augmentation**, demonstrating the robustness of our selection approach.\", \"> **Secondary Additional Experiments**: We conducted all the experiments requested by the reviewers, which were added as part of the second round review.\", \"We conducted an additional image generation baseline (DATUM) in **Table 2**, as requested by **KMEN**.\", \"At the request of **KMEN**, we performed analyses on Image Domain Alignment and Image-Label Alignment in a domain generalization setting, detailed in **Appendix A.12. Additional Analysis of Our Generated Dataset on the Domain Generalization Setting**.\", \"In response to **kFPj**'s suggestion, we further analyzed class-specific performance and proposed an additional method to improve specific classes, as outlined in **Appendix A.10. Class-wise Segmentation Performance Analysis**.\", \"As requested by **kFPj**, we conducted new experiments on text guidance considerations, presented in **Appendix A.11. Generating Datasets with Diverse Class Names**.\", \"We appreciate the reviewers' detailed comments and hope that our revisions satisfactorily address their concerns. Detailed responses to each reviewer's comments are provided below.\"]}", "{\"comment\": \"We sincerely appreciate your insightful feedback. Detailed responses to your concerns are provided below, and the revisions have been incorporated into the updated version. Please let us know if there are any further suggestions or comments.\\n\\n> **[W1.1] Clarification of Label Generator Integration and Method Distinction**\\n\\nThe explanation of the label generator was provided in **Appendix A.1** of the original paper. However, as the reviewer pointed out, the main paper did not sufficiently explain Stage 3 (label generator). Additionally, the description of Stage 4 (generating image-label pairs) was inadequate. Therefore, we added \\\"**Section 3.4: Training the Label Generator and Generating Diverse Segmentation Datasets** to explain both Stage 3 and Stage 4. Furthermore, we extended **Appendix A.1** with additional technical details and added **Figure 7** to illustrate the label decoder architecture.\\nBelow is a brief summary of the explanations for stages 3 and 4.\\n\\n\\n**[Stage 3]** We train an additional lightweight label generator to produce a segmentation label corresponding to the image, following DatasetDM.\\nTo train the label generator, we add noise to the given labeled image and denoise the image with the fine-tuned T2I model, which can provide semantically rich intermediate multi-level feature maps and cross-attention maps.\\nDistinct from DatasetDM, we train the label generator based on the fine-tuned T2I model using Selective LoRA.\\nThe added fine-tuning process causes a significant difference in image-label alignment, which we discussed in **Appendix A.6 Comparison of Image-Label Alignment**.\\nFurthermore, due to the difference between the base T2I model, architecture details slightly changed.\\n\\nSpecifically, given the increased number of blocks and channels in SDXL, we selected specific blocks to extract multi-scale feature maps and multi-scale cross-attention maps.\\nFeature maps were extracted from the last feature block at each resolution of the upsampling blocks, while cross-attention maps were sampled at regular intervals (every 7 blocks) from the total 36 upsampling blocks (i.e., 1st, 8th, ... 29th, 36th). \\nMoreover, as shown in the updated Figure 7, Stable Diffusion XL has only three resolution levels, compared to the four resolution levels in the Stable Diffusion v1.5 architecture. This difference required minor adjustments to the pixel decoder and transformer decoder, as described in **Appendix A.1**. \\n*Importantly, to ensure a fair comparison, the reported scores for DatasetDM were obtained using a re-implemented version based on SDXL with the same modifications.*\\n\\n\\n**[Stage 4]** Diverse image-label pairs are generated to address both domain generalization and in-domain scenarios. For domain generalization, text prompts are modified to include adverse weather conditions (e.g., foggy, snowy, rainy, night-time) by extending the default prompt, such as \\\"photorealistic first-person urban street view,\\\" to, for example, \\\"in foggy weather,\\\" enhancing the model\\u2019s ability to generalize across varying environmental conditions. For in-domain scenarios, diversity is introduced by varying the class names within the prompt template (e.g., \\\"\\u2026 with car\\\", \\\"\\u2026 with car\\\", etc.), allowing for the generation of images that reflect different object class combinations while maintaining consistency with the in-domain characteristics.\\n\\n> **[W1.2] Comparison in Training Efficiency with DatasetDM**\\n\\nAs mentioned in lines L402-404 of the revised manuscript, training Selective LoRA only requires *one hour* on a single Tesla V100 GPU.\\nWhile the reviewer is concerned about the additional stage in our method, we want to clarify that the additional stage requires a marginal amount of time compared to the entire training time (20 hours for training label generator).\\nConsidering the performance gains of using Selective LoRA compared to DatasetDM, we believe that the additional one hour of training is not as significant as the reviewer may be concerned about.\"}", "{\"summary\": \"The paper presents a method for fine-tuning pre-trained T2I models to generate datasets specifically for urban-scene segmentation, addressing the challenge of data scarcity. Traditional methods often utilize pre-trained T2I models directly or apply LoRA for fine-tuning, which can lead to generated samples that fail to align with the target domain or lack diversity. To overcome these issues, the paper introduces Selective LoRA, a novel fine-tuning approach that selectively identifies and updates the weights that are most closely associated with specific concepts for domain alignment. This approach reduces the number of parameters that need training, improving training efficiency while ensuring that the original T2I model's generalizability is preserved. Extensive experiments demonstrate that the generated datasets improve the performance of previous segmentation models in urban-scene segmentation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1: The paper \\\"Selective LoRA\\\" introduces a new training strategy for fine-tuning pretrained T2I models to generate diverse datasets for segmentation tasks, addressing the challenge of data scarcity.\", \"s2\": \"Extensive experiments show that the generated datasets enhance the performance of prior segmentation models in urban-scene segmentation.\", \"weaknesses\": \"W1: The writing in this paper is somewhat challenging to understand, and the technical descriptions lack clarity, which can lead to confusion during reading. Below are a few examples, though not exhaustive. For instance, in Figures 3 and 4, what do the layer indices represent? Are they the projection layers for all attention in the network? However, according to the description in line 251, it seems that all linear layers in the network are being trained with LoRA. Additionally, Section 3 only covers the first two stages, with the third and fourth stages not being described in detail, making this part less clear. The structure of the experimental section is also somewhat disorganized.\", \"w2\": \"The design of the tables lacks standardization, leading to confusion for the reader. Here are a few examples, though not exhaustive. For instance, many tables do not clearly explain what the numerical values represent, making interpretation difficult. In Table 3, the baseline names should be listed directly. Additionally, the entries under the \\\"Data Ratio\\\" column correspond to various methods, which creates some confusion. Furthermore, for the methods used to generate datasets that enhance baseline performance in Table 3, it would be clearer to label them as \\\"Baseline + Real FT\\\" rather than just \\\"Real FT.\\\"\", \"w3\": \"Additionally, I noticed that the baseline appears to be from a 2022 paper. Are there any more recent baselines available for comparison?\", \"w4\": \"Some modules may not appear particularly novel from a technical perspective. LoRA are also commonly used in various papers.\", \"questions\": \"Q1: From the design motivation, it seems that training all LoRA parameters may lead to overfitting, resulting in reduced diversity in the generated images. In contrast, Selective LoRA selects a subset of parameters that are most associated with the concepts, effectively training fewer parameters and better preserving the original T2I model's capabilities. The original LoRA setting applies training to all linear layers in the UNet with LoRA. I wonder if training LoRA only on certain layers' cross-attention (few parameters) could achieve a similar effect as Selective LoRA.\", \"q2\": \"I hope the authors can address the concerns raised in the \\\"Weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Image-Label Alignment in Domain Generalization Setting**: (2) the quality of the corresponding pseudo labels.\\n\\nIn this analysis, we aim to evaluate how reliably our method provides pseudo labels compared to other methods in the domain generalization setting, both qualitatively and quantitatively.\\nThe detailed analysis is provided in **Appendix A.12**, featuring **Figure 19**, **Table 12**, and **Table 13**.\\n\\nFirst, we added qualitative results in **Figure 19**, which show that our method provides better labels than DatasetDM or InstructPix2Pix.\\nFor quantitative results, we utilized pretrained segmentors, as done in the analysis presented in **Table 8**.\\nHowever, the previously used pretrained Mask2Former, trained only on Cityscapes, does not deliver reliable performance under adverse weather conditions.\\n\\nTo address this, we fine-tuned the pretrained Mask2Former (M2F) on the ACDC training set for each weather condition, creating specialized segmentors for \\\"foggy\\\", \\\"night-time\\\", \\\"rainy\\\", and \\\"snowy\\\".\\nThe performance of these segmentors on the ACDC validation set before and after fine-tuning is presented below.\\n\\n| Segmentor | foggy | night-time | rainy | snowy | average |\\n| :------------: | :---: | :--------: | :---: | :---: | :-----: |\\n| Pretrained M2F | 67.66 | 23.17 | 51.94 | 47.55 | 47.58 |\\n| Fine-tuned M2F | 78.54 | 52.16 | 66.23 | 74.79 | 67.93 |\\n\\nEach fine-tuned model is now expected to provide more meaningful segmentation maps for its respective weather condition.\\nThe results of measuring Image-Label Alignment using these models are as follows.\\n\\n| Method | foggy | night-time | rainy | snowy | average |\\n| :-------------: | :---: | :--------: | :---: | :---: | :-----: |\\n| InstructPix2Pix | 25.98 | 48.60 | 63.04 | 40.66 | 44.57 |\\n| DatasetDM | 40.84 | 35.90 | 47.43 | 44.02 | 42.05 |\\n| Ours | 41.55 | 43.07 | 48.69 | 39.47 | 43.20 |\\n\\nAs shown in the table, InstructPix2Pix, which directly uses Cityscapes labels and only slightly edits the weather conditions of the images, demonstrates an advantage in image-label alignment.\\nDespite its high image-label alignment, we highlighted the limited performance improvements of InstructPix2Pix in **Table 2** and **Section 4.2 Main Results**, attributing this to the lack of scene diversity caused by its reliance on fixed segmentation label maps.\\n\\nWhen comparing methods that generate labels, our approach achieves better image-label alignment than DatasetDM.\\nThis improvement stems from our text-to-image generation model learning viewpoints from Cityscapes.\\nAs a result, even with the same label generator architecture, our finetuned text-to-image generation model provides representations with a smaller domain gap when generating images based on the Cityscapes dataset.\\n\\nWe sincerely appreciate the opportunity to perform more diverse comparisons and analyses that highlight the strengths of our approach, as well as the chance to further clarify our contributions.\\nIf you have any additional questions or concerns, please do not hesitate to let us know.\"}", "{\"comment\": \"> Thank you for your efforts and the detailed response. The majority of my concerns have been addressed, but a few questions remain.\\n\\nThank you for acknowledging our efforts and detailed responses. We are pleased to hear that the majority of your concerns have been addressed. Below, we provide responses to the remaining questions, including additional experiments and analyses.\\n\\n> **Clarification and Highlighting the Novelty of Our Contribution**\\n\\nOur contribution does not lie in the framework itself. As outlined in **Section 2.2. Segmentation Dataset Generation**, the use of generative models to create image-label pairs for enhancing perception models, has already been explored in prior works such as DatasetDM and DatasetGAN.\\n\\nWhat we aim to highlight, as summarized in **L099\\u2013L107**, is our proposal of the Selective LoRA fine-tuning method, which sets us apart from DatasetDM, as it directly uses pre-trained text-to-image generation models without any fine-tuning.\\nOur method enables selective learning of desired concepts (e.g., style, viewpoint), as demonstrated in the updated **Figure 2, Stage 1 and 2**.\\nWe would like to emphasize that these two aspects represent reasonable and novel contributions unique to our work, as agreed upon by all reviewers (**B65c**, **hMNt**, **KMEN**, **kFPj**).\\n\\n> **Comparison with DATUM [6]**\\n\\nAs per your suggestion, we conducted a comparison with DATUM [5] in the domain generalization setting.\\nThe following outlines the performance comparison between DATUM and our proposed method, which has been updated in **Table 2**.\\n\\n| DG Method | Generated Dataset | ACDC | DZ | BDD | MV | Average |\\n| :-------: | :---------------: | :-------: | :-------: | :-------: | :-------: | :-------: |\\n| DAFormer | DATUM | 54.06 | 27.10 | **54.74** | 62.40 | 49.58 |\\n| DAFormer | Ours | **55.83** | **31.68** | 54.68 | **63.09** | **51.32** |\\n| HRDA | DATUM | 58.11 | 30.18 | **56.94** | 64.29 | 52.38 |\\n| HRDA | Ours | **58.93** | **34.41** | 56.56 | **64.54** | **53.61** |\\n\\nDATUM [5] proposes a One-shot UDA approach. One-shot UDA refers to a setting where a real image from each target domain (\\\"foggy\\\", \\\"night-time\\\", \\\"rainy\\\", and \\\"snowy\\\") is available for domain adaptation.\\nIn contrast, domain generalization assumes no access to any real images from the target domain.\\nDATUM leverages a single real target domain image and uses a text-to-image generation model to create a diverse set of *unlabeled* images for the target domain.\\nThese *unlabeled* image sets are then used in combination with the given source domain by leveraging existing domain adaptation techniques (e.g., DAFormer, HRDA).\\n\\nOn the other hand, as illustrated in **Figure 2. Stage 3**, our approach trains a label generator to create a *labeled* dataset, which is then directly mixed into the training set for further training.\\nThis fundamental difference allows our method to achieve significantly higher performance than DATUM, even though DATUM utilizes a real target domain image.\\n\\nThank you for suggesting this comparison, as it provides an excellent opportunity to highlight the additional advantages of our approach.\\n\\n> **Comparison with PTDiffSeg [5]**\\n\\nWe appreciate your comments and would like to address the clarification regarding the extra experiment involving PTDiffSeg.\\n\\nFirst, PTDiffSeg and our method have different research purposes. PTDiffSeg is a segmentation model designed to resolve domain generalization (DG) tasks, *not an image generation method*.\\nIn contrast, our goal is to develop a data generation method for the segmentation field that can produce domain-aligned and diverse image-label pairs. \\nSpecifically, PTDiffSeg introduces a novel architecture by utilizing a pretrained diffusion model as its backbone and compares it with other DG methods such as ColorAug, DAFormer, and HRDA.\\nHowever, our method aims to resolve the lack of datasets for any existing segmentation model, whether in-domain or DG, to improve the segmentation performance.\\nFor example, our method can be applied to PTDiffSeg to achieve further improvements, similar to its application in DAFormer or HRDA.\\nHowever, it can not offer a direct comparison between PTDiffSeg and our proposed approach.\\n\\nSecond, unfortunately, PTDiffSeg's code is not currently publicly available, which causes challenges in conducting a quick comparison experiment. We are currently struggling with re-implementation. It is an intriguing experiment to investigate whether our generated dataset can effectively complement an advanced DG method like PTDiffSeg. However, we have concerns due to the re-implementation issue and the rebuttal schedule. Therefore, there is a possibility that we will include these results in our paper after the rebuttal period ends.\"}", "{\"comment\": [\"> **[Q4.1] Enhancing Clarity and Presentation for Better Understanding**\", \"We deeply appreciate the reviewer for recognizing the technical novelty and strengths of our experiments.\", \"Acknowledging the importance of improving the presentation of our paper, we have made the following revisions in **Sections 3 and 4**:\", \"We revised **Figure 1** to highlight the issues found in previous studies. DatasetDM does not involve fine-tuning the pretrained T2I model on the source dataset, and we identified the overfitting problem when training all layers in the Original LoRA.\", \"In the method section, we updated **Figure 3** to help understanding of **Section 3.2**.\", \"The explanation of the segmentation dataset generation framework was insufficient. We added detailed explanations in **Section 3.1 Overall Framework** and **Section 3.4 Training Label Generator and Generating Diverse Segmentation Datasets**.\", \"We reorganized the entire **Section 4. Experiments** section.\"]}", "{\"comment\": \"We sincerely appreciate your thoughtful feedback and have revised the manuscript accordingly, addressing all the raised concerns. Additionally, we conducted the suggested experiments to further strengthen our contributions. We would greatly appreciate it if you could let us know whether these revisions and experiments adequately address your concerns and, if so, consider reflecting this in your evaluation.\"}", "{\"comment\": \"As the discussion period approaches its conclusion, we would like to respectfully remind you of our revised manuscript. We kindly request your feedback on the second set of comments and would greatly appreciate your reconsideration of the evaluation in light of our updates.\\n\\nThank you sincerely for your time and efforts.\"}", "{\"comment\": \"> **[W3.4] Clarification of Training Process and Baselines in Table 3**\\n\\nFirst, we found the details of stages 3 and 4 are neglected in our initial manuscript, including the segmentation label generation process. \\nWe have newly included **Section 3.4: Training Label Generator and Generating Diverse Segmentation Datasets** to ensure our paper is self-contained, and revise minor elements of **Figure 2** for clearer presentation.\\nAdditionally, we provide further implementation details for the label generator in **Appendix A.1**, accompanied by the newly added **Figure 7**, which illustrates the label decoder architecture. Below is a brief summary of the explanations for stages 3 and 4.\\n\\n**[Stage 3]** We train an additional lightweight label generator to produce a segmentation label corresponding to the image, following DatasetDM.\\nTo train the label generator, we add noise to the given labeled image and denoise the image with the fine-tuned T2I model, which can provide semantically rich intermediate multi-level feature maps and cross-attention maps.\\nDistinct from DatasetDM, we train the label generator based on the fine-tuned T2I model using Selective LoRA.\\nThe added fine-tuning process causes a significant difference in image-label alignment, which we discussed in **Appendix A.6 Comparison of Image-Label Alignment**.\\nFurthermore, due to the difference between the base T2I model, architecture details slightly changed, as described in **Appendix A.1**.\\n\\n**[Stage 4]** Diverse image-label pairs are generated to address both domain generalization and in-domain scenarios. For domain generalization, text prompts are modified to include adverse weather conditions (e.g., foggy, snowy, rainy, night-time) by extending the default prompt, such as \\\"photorealistic first-person urban street view,\\\" to, for example, \\\"in foggy weather,\\\" enhancing the model\\u2019s ability to generalize across varying environmental conditions. For in-domain scenarios, diversity is introduced by varying the class names within the prompt template (e.g., \\\"\\u2026 with car\\\", \\\"\\u2026 with car\\\", etc.), allowing for the generation of images that reflect different object class combinations while maintaining consistency with the in-domain characteristics.\\n\\nWe modified **the Table 3 of the initial submission (now Table 1 in the revised version)**, and its caption to better describe the details of the in-domain segmentation performance:\\n\\n- Added caption: In the first row, we trained Mask2Former on various fractions of the Cityscapes dataset (Baseline). Then, we fine-tuned the baseline on DatasetDM and our generated datasets with 30K iterations and evaluated the performance of the fine-tuned segmentation models. Additionally, we include an additional fine-tuned baseline (Baseline (FT)) that is solely fine-tuned on the same real dataset for a fair comparison in terms of the total iterations.\\n- We modified the term \\\"Real FT\\\" to \\\"Baseline (FT)\\\" with specified types of training datasets to avoid confusion.\\n- We added a middle row stating, \\\"For a fair comparison, we fine-tune the baseline for 30K iterations using the following datasets\\\".\\n- We denoted the total iterations (e.g., 120K) to show that the baseline model was further fine-tuned for 30K iterations for each method.\\n\\n> **[Q3.1] Discussion of the Other Advanced LoRA Approaches**\\n\\nWe sincerely appreciate the reviewer\\u2019s insightful suggestions regarding studies utilizing LoRA. We have clarified the relationship between our work and these studies **in lines 247 and 250 of the revised manuscript**. The details are as follows.\\n\\nWhile LoRA enables parameter-efficient fine-tuning of large-scale models, it does not provide a mechanism for specifying target learning concepts (\\\\eg, style or viewpoint) from source datasets. Subsequent studies on LoRA have predominantly focused on enhancing the LoRA adapter itself (\\\\eg, architectures), as seen in approaches like LoRA-SP, GS-LoRA, and Tied-LoRA. However, there has been limited exploration of identifying which layers are most effective for LoRA fine-tuning to learn specific target concepts, particularly in the context of urban-scene segmentation.\\nThus, the advanced LoRA approaches recommended by the reviewer (LoRA-SP, GS-LoRA, and Tied-LoRA) take a complementary but distinct direction from our Selective LoRA. Nonetheless, we acknowledge that integrating these advanced LoRA methods with our Selective LoRA could be a beneficial direction for future research.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"Dear Authors,\\n\\nThis draft received 6, 5, 6, 5. Some reviewers updated their scores based on author feedback however, the average rating is still lower than \\\"marginally above the acceptance threshold\\\". After going over the comments, draft, and feedback from authors, we find direction to be interesting but require further work.\\u00a0\\n\\nWe will encourage authors to update the draft on the basis of comments from the reviewers. \\n\\n\\nregards\\n\\nAC\", \"additional_comments_on_reviewer_discussion\": \"Authors provided additional experimental results and comments. Authors provided effective enough rebuttal that reviewers increased the rating. However, reviewers did not assign the rating above than 6, and two of them assigned 5 and 3. Later one indicated that rating could be increased to 5, therefore it was taken inconsideration.\\nkFPj provided further guidance too.\", \"note\": \"hMNt has updated the portal to indicate rating 5. So now rating of paper is 6, 5, 6, 5. That's what was used during meta review.\"}", "{\"comment\": \"> **[Q3.2] Comparative Analysis with Text- or Image-Driven Diffusion Models**\\n\\nAs we responded in [W3.3], we compared our method with InstructPix2Pix, which we reported the results **in Table 2 of our revised manuscript.**\\nAlso, we included suggested studies [5] and [6] **in lines 158-160 of our revised paper.**\\nHowever, we found that the two suggested studies focus on tasks distinct from the primary objective of our work.\\n\\nIn addition to generating images, our main goal is to generate segmentation maps corresponding to a given image by fine-tuning a label generation module using a baseline model.\\nIn contrast, [5] is a model that directly performs segmentation, while [6] appears to focus on generating an unlabeled dataset and applying UDA without producing corresponding segmentation maps.\\nThis distinction makes them less suitable as direct baselines for our proposed method. \\nNevertheless, we acknowledge that both studies aim to enhance segmentation performance by leveraging the prior knowledge of pretrained text-to-image generation models, which aligns with the broader goals of our research.\"}", "{\"comment\": \"> **[W2.2] Standardization and Clarity in Table Design**\\n\\n\\nWe modified **the Table 3 of the initial submission (now Table 1 in the revised version)**, and its caption to better describe the details of the in-domain segmentation performance:\\n\\n- Added caption: In the first row, we trained Mask2Former on various fractions of the Cityscapes dataset (Baseline). Then, we fine-tuned the baseline on DatasetDM and our generated datasets with 30K iterations and evaluated the performance of the fine-tuned segmentation models. Additionally, we include an additional fine-tuned baseline (Baseline (FT)) that is solely fine-tuned on the same real dataset for a fair comparison in terms of the total iterations.\\n- We labeled the first cell as \\\"Method\\\"\\n- We used Mask2Former as the baseline segmentation model.\\n- We modified the term \\\"Real FT\\\" to \\\"Baseline (FT)\\\" with specified types of training datasets to avoid confusion.\\n- We added a middle row stating, \\\"For a fair comparison, we fine-tune the baseline for 30K iterations using the following datasets\\\".\\n- We denoted the total iterations (e.g., 120K) to show that the baseline model was further fine-tuned for 30K iterations for each method.\\n\\n> **[W2.3] Clarification of Recent Baselines**\\n\\nWe want to clarify that the baseline methods that we compare with are DatasetDM (NeurIPS 2023) and DGInStyle (ECCV 2024), not Mask2Former, ColorAug, DAFormer, or HRDA.\\nAdditionally, at the request of reviewer KEMN during the rebuttal period, we compared our method with InstructPix2Pix (CVPR 2023) in **Table 2**, which generates images of diverse styles (e.g., adverse weather) with a fixed label map for a given image, under the domain generalization setting.\\nWe believe that papers accepted to CVPR 2023, NeurIPS 2023, ECCV 2024 represent recent baseline methods for comparison.\\nTo the best of our knowledge, we are not aware of other studies generating datasets for segmentation tasks beyond the baselines methods we have compared with.\\nIf we are missing any recent seminal work, please inform us, and we will compare and discuss it in our revised paper.\\n\\n> **[W2.4] Clarification of Our Technical Novelty Distinct from LoRA**\\n\\n**In lines 254 to 256 of our revised paper**, we clarified the uniqueness and technical novelty of our proposed method compared to LoRA and its variants.\\nAs the reviewer mentioned, LoRA is indeed widely used in various papers recently.\\nWe want to clarify that the technical novelty of our paper does not come from using LoRA itself. \\nInstead, our novelty lies in identifying the specific parameters responsible for learning a desired concept and updating only those parameters for generating pairs of images and segmentation maps.\\nIn other words, while LoRA has been widely adopted in various studies, the question of *how* to utilize LoRA for dataset generation in urban-scene segmentation has been underexplored.\\nWe addressed this question by introducing the concept of Selective LoRA, providing valuable insights to the community. \\n\\n> **[Q2.1] Comparison of Selective LoRA and Hand-Crafted Layer Selection Approaches**\\n\\nWe concur with the reviewer's idea that training LoRA only on certain cross-attention layers in a hand-crafted manner is a viable approach.\\nTo demonstrate the superiority of our proposed method over such an approach, we conducted a comparison detailed in **Appendix A.9: Comparison with Hand-Crafted Layer Selection Approaches** in our revised paper.\\nThe quantitative results are presented in **Figure 14**, and the qualitative results are shown in **Figure 15**. Below are the key findings from our quantitative results presented in **Figure 14**.\\n\\nUnder the few-shot setting of Cityscapes (0.3\\\\%), we compared our proposed method with the reviewer's suggestion applied to self-attention layers only (SA-only) and cross-attention layers only (CA-only). \\n\\n| Fine-tuning target | Proportion of the fine-tuned layers | Segmentation Performance (mIoU) |\\n| :--: | :--: | :--: |\\n| Pretrained | 0\\\\% | 42.82 |\\n| SA-only | 50\\\\% | 43.40 |\\n| CA-only | 50\\\\% | 43.46 |\\n| Original LoRA | 100\\\\% | 42.97 |\\n| **Ours** | 2\\\\% | **44.13** |\\n\\nAs shown in the table, our method achieves the best segmentation performance (mIoU) while updating a much smaller number of layers compared to other manually selected approaches or the original LoRA.\\nThe main reason is that our method can identify the certain layers responsible for learning the desired concept, which is challenging to achieve by manually selecting the layers.\"}", "{\"comment\": \"Thanks for authors details explanations. My concerns are resolved, thus I raise my rate.\"}", "{\"comment\": \"Thank you for your valuable suggestions. We present the following additional experiments, analyses, and discussions to address each of your considerations.\\n\\n> **1. Class-specific Performance Analysis**\\n\\nAs the reviewer pointed out, measuring and analyzing class-wise IoU is a crucial aspect of urban-scene semantic segmentation. We have included the related experiments in **Appendix A.10. Class-wise Segmentation Performance Analysis**, with the key results summarized as follows.\\n\\n> Class-wise IoU for In-domain few-shot experiments (Cityscapes, 0.3\\\\%)\\n\\n| Class | IoU (Improvements) | Pixel Proportions |\\n| :-----------: | :----------------: | :---------------: |\\n| bus | +18.08 | 0.23\\\\% |\\n| fence | +10.99 | 0.87\\\\% |\\n| bicycle | +7.33 | 0.41\\\\% |\\n| wall | +5.21 | 0.66\\\\% |\\n| traffic light | +4.40 | 0.21\\\\% |\\n\\nWe demonstrated particularly significant performance improvements for the aforementioned classes in **Figure 16**. As evident from the experimental results, as you mentioned, rare classes that appear in less than 1\\\\% of the dataset showed substantial performance gains. This serves as an additional strength of our approach.\\n\\nIn addition, we observed that certain classes were generated at proportions lower than their actual occurrence in the Cityscapes dataset.\\nTo address this, we proposed a methodology to generate additional samples for these underrepresented classes. \\nAs shown in **Figure 17**, unlike the Original LoRA, the proposed Selective LoRA methodology effectively generates additional samples for specific classes, as it exclusively learns the style of the source dataset.\\n\\nWe conducted an experiment aimed at enhancing the performance of a specific target class (e.g., \\\"person\\\") by generating additional samples for that class.\\nThe result of the experiment is presented in **Figure 16**, which demonstrates that the additional dataset achieves more balanced performance across classes by adjusting the label proportions.\\nThe following highlights the key results of the performance improvements.\\n\\n| Method | Proportion of Person (\\\\%) | IoU (Person) | mIoU |\\n| ------ | :-----------------------: | :----------: | :--: |\\n| Baseline | 1.22 | 64.77 | 41.83 |\\n| Ours | 0.12 | 51.19 | 44.13 |\\n| Ours + (Additional Person Dataset) | 0.82 | 61.43 | 44.59 |\\n\\nPlease refer to **Segmentation Dataset Generation Focused on a Specific Class** in line 1283 for further details.\\n\\n> 2. **Text Guidance Considerations**: Given that this is a text-guided approach, we might expect varying degrees of performance improvement based on different text variations. Have you considered using alternative class names beyond the provided prompts? How might this affect performance?\\n\\nApplying text augmentations to the given class names to create more diverse text prompts for dataset generation is a highly promising research direction for our method. \\nTo explore this, we conducted experiments detailed in **Appendix A.11 Generating Datasets with Diverse Class Names**, where we replaced the simple use of \\\"Car\\\" for generating cars with more detailed class names. The key findings are summarized as follows.\\n\\nFirst, as shown in **Figure 18**, we augmented the text prompts by diversifying \\\"Car\\\" into more specific categories such as \\\"SUV car\\\", \\\"Sedan car\\\", \\\"Convertible car\\\", and \\\"Hatchback car\\\" for data generation. Our Selective LoRA demonstrated its ability to generate these diverse classes effectively, as it avoids overfitting to the Cityscapes content, maintaining its generalization capability.\\n\\nWe then incorporated these generated samples into the dataset and conducted training with the augmented dataset, as shown in **Table 10**.\\n\\n| Method | IoU (Car) | IoU (Bus) | IoU (Motorcycle) | mIoU |\\n| ------ | :-------: | :-------: | :--------------: | :--: |\\n| Baseline | 84.02 | 12.51 | 16.11 | 41.83 |\\n| Ours | 85.03 | 30.59 | 15.26 | 44.12 |\\n| Ours + (Additional Diverse Car Dataset) | **85.34** | **32.48** | **17.77** | **44.95** |\\n\\nThe generated dataset enhanced overall segmentation accuracy and demonstrated consistent performance improvements in vehicle classes such as car, bus, and motorcycle. Further exploration of additional test prompts presents a promising direction for future research.\"}", "{\"summary\": \"This paper proposes a novel approach to address the data scarcity in semantic segmentation by generating datasets (image-mask pairs) using text-to-image models. To solve the issue, it is necessary that the generative images align with the target doman and provide useful information beyond the training dataset. Therefore, this paper intorudces Selective LoRA, a finetuning approach for the pretrained text-to-image model that preserves the distributional diversity of the original pretrained model while aligning with the target domain. The proposed method selectively update weights for key concepts, such as style and viewpoint, which need to be aligned or to maintain the diversity. The authors shows that the proposed method generates datasets with desired distribution via ablation studies and improves in both in-domain and domain generalization settings.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"The proposed method differentiate the weights to align with target domain while preserving valuable information of the pretarined model. From this, the proposed method selectively fine-tunes the model. This approach effectively addresses challenges when to adapt large pretrained models to different domains, enabling better domain specific performance without losing the benefits of the pretrained features.\\n\\nThe data scarcity problem addressed in this paper is a critical issue not only for semantic segmentation but also for a variety of vision tasks. Therefore, the proposed technique for generating image and ground truth pair datasets can be considered a core technology in the advancement of deep learning.\", \"weaknesses\": \"I believe that the proposed method tackles an important problem and offers a reasonable approach to addressing the challenges, which I highly commend. However, there are some weaknesses to consider.\\n\\nFirst, the method relies on Stable Diffusion, trained on a large dataset. Although leveraging the distributional diversity learned by this pretrained model is the motivation behind the approach, it inherently sets an upper bound on the applicability of the proposed method based on the knowledge of the pretrained model. This is a fundamental limitation.\\n\\nDefining the desired concepts, identifying the critical parts of the architecture where these concepts are expressed, and retraining the model are all highly manual processes that depend heavily on individual characteristics of target data . To define the desired concepts, the user of this method must analyze the distributions of the pretrained model and the target domain, identify the differing concepts, and guide the process with appropriate text prompts. Additionally, finding the associated weights and determining their importance requires experimental work, which lacks standardized criteria.\\n\\nGenerating images aligned with the desired distribution is crucial, but creating high-quality masks to accompany these images is equally important for semantic segmentation models. This aspect has not been sufficiently addressed. While the current method leverages intermediate features, there could be consideration of various other ways to generate masks from the images. Given that the proposed method utilizes a model trained on a large dataset, it might also be worth exploring the use of models like SAM (Segment Anything Model) for mask generation (of course, there are lots of candidates). I am not requring additional experiments using SAM. It woud be beneficial to analyze the quality of the masks for the generated images. \\n\\nFrom a presentation, the paper is challenging to read. Figures 2 and 3 are difficult to understand, and it is not easy to infer the intended meaning from the related sections in the text. Additionally, the paper mentions L_Concept, but the figures use terms like L_style and L_viewpoint, which are not defined in the main text, causing confusion. The authors should clarify this and revise the figures accordingly. More detailed explanations about the process of creating text prompts are also necessary.\\n\\nThe experimental setup, including the ablation study, is not sufficiently explained. For instance, in experiments like those in Table 3, it is unclear how extensive the generated dataset is and how it is used.\\n\\nOverall, the paper needs to be written in a way that is easier to understand.\", \"questions\": \"I believe that the proposed method demonstrates sufficient technical novelty and shows effectiveness through its quantitative experimental results. However, I think it would be beneficial for the authors to revise certain aspects to make the paper easier to understand. Improving the clarity of the text would enhance the overall presentation of the work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **[W4.4] Importance of Mask Quality and Alternative Mask Generation Methods**\\n\\nAs the reviewer pointed out, creating high-quality masks corresponding to each generated image is indeed crucial. \\nImproving the label generator or exploring entirely different methodologies, such as SAM, is a promising future work direction. \\nWhile we utilize the architecture of the label generator in DatasetDM, we made a significant advancement in image-label alignment even with the same architecture.\\nWe have added a new qualitative comparison in **Figure 6** of the revised manuscript, demonstrating that our Style-Selective LoRA significantly outperforms both DatasetDM and the original LoRA (which applies LoRA parameters to all layers).\\n\\nFurthermore, although we could not elaborate on this in the main paper due to the page limit, we initially included a discussion on image-label alignment in the Supplementary Material **Appendix A.6: Comparison of Image-Label Alignment**.\\n**Table 8** reports the significant improvements our method achieved over DatasetDM in terms of image-label alignment.\\nTo provide quantitative results, we use the predictions from the pretrained Mask2Former model, which was fully supervised on the 100\\\\% Cityscapes dataset and achieves a 79.40 mIoU, as a proxy for the ground truth mask.\\nBelow are the key results from Table 8, which compares the image-label alignment (mIoU) of DatasetDM and ours under in-domain experiments. We selected 2\\\\% of layers for Style-Selective LoRA.\\n\\n| Method | Image-Label Alignment |\\n| :-------: | :-------------------: |\\n| DatasetDM | 25.18 |\\n| Ours | 39.37 |\\n\\nSimilar to the qualitative results, the Style-Selective LoRA significantly outperforms DatasetDM.\\nWe attribute this to the domain gap between the pretrained text-to-image model (SDXL) and the source dataset (Cityscapes), as detailed in **Appendix A.6: Comparison of Image-Label Alignment** under **Analysis of the Qualitative Comparison**.\\n\\n\\n> **[W4.5] Improving Readability, Consistency in Figures, and Text Prompt Explanation**\\n\\nAcknowledging the importance of improving the presentation quality, we have made a non-trivial number of changes, including Figures 2 and 3 and other notations. Specifically,\\n\\n- In **Figure 2**, we focused on explaining our segmentation dataset generation framework using \\\"viewpoint\\\" as the target concept instead of explaining with both viewpoint and style.\\n- In **Figure 2**, the caption mentions each stage and its related section, which we reiterate in Section 3.1 when discussing the overall framework. \\n- In **Figure 3**, we 1) modified $L_{style}$ and $L_{viewpoint}$ to $L_{Concept}$ and 2) referenced the equation to help readers connect each symbol with the equation.\\n- We simplified the illustration in **Figure 3 (a)** by adding a note that \\\"style-sensitive\\\" layers react when the primary concept (\\\"style\\\") is generated differently.\\n- In **Figure 3 (b)**, we also added symbols for better understanding.\\n\\nThank you for specifically pointing out the shortcomings in the presentation of our initial submission. If you have any further suggestions for the current revised version, we would be eager to incorporate them.\\n\\n> **[W4.6] Clarification of Experimental Setup and Ablation Study Details**\\n\\nWe revised the experimental setup, including the ablation study, and organized it into **Section 4 Experiments**. Also, we added details of the datasets including their sizes and usages in **4.1. Experimental Setup** under **In-domain semantic segmentation**.\", \"the_main_updates_are_as_follows\": [\"For the dataset sizes, we use 500 images for few-shot experiments (0.3\\\\% -- 10\\\\%) and 3,000 images for fully-supervised experiments.\", \"We train Mask2Former with few-shot samples, which we use as the baseline model.\", \"Then, we fine-tune this baseline model on various synthetically generated datasets using methods such as DatasetDM and ours.\", \"Furthermore, we modified **the Table 3 of the initial submission (now Table 1 in the revised version)**, and its caption to better describe the details of the in-domain segmentation performance:\", \"Added caption: In the first row, we trained Mask2Former on various fractions of the Cityscapes dataset (Baseline). Then, we fine-tuned the baseline on DatasetDM and our generated datasets with 30K iterations and evaluated the performance of the fine-tuned segmentation models. Additionally, we include an additional fine-tuned baseline (Baseline (FT)) that is solely fine-tuned on the same real dataset for a fair comparison in terms of the total iterations.\", \"We modified the term \\\"Real FT\\\" to \\\"Baseline (FT)\\\" with specified types of training datasets to avoid confusion.\", \"We added a middle row stating, \\\"For a fair comparison, we fine-tune the baseline for 30K iterations using the following datasets\\\".\", \"We denoted the total iterations (e.g., 120K) to show that the baseline model was further fine-tuned for 30K iterations for each method.\"]}", "{\"comment\": \"As the discussion period is nearing its end, we wanted to kindly remind you about our revised manuscript.\\n\\nWe have carefully revised the paper and incorporated the additional experiments you requested. Furthermore, we have addressed the major concerns raised by the other reviewer.\\n\\nCould you please let us know if our revisions resolve your concerns? Your feedback is crucial, and we sincerely hope our updates meet your expectations.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"Thank you for sharing your valuable feedback. Below, we provide additional responses to address the concerns raised by the reviewer.\\n\\n> however, as shown in the new table provided by the authors, the improvement from selective LoRA is not as significant compared to fine-tuning only some of the LoRA layers.\\n\\nWe have further highlighted the degree of performance improvement achieved through LoRA fine-tuning based on the performance of DatasetDM, the major baseline that utilizes the pretrained text-to-image generation model.\\nThe table allows a direct comparison of the performance gains obtained purely through LoRA fine-tuning.\\n\\n| Fine-tuning target | Proportion of the fine-tuned layers | Segmentation Performance (mIoU) | Performance Improvement via LoRA fine-tuning (mIoU) |\\n| :--: | :--: | :--: | :--: |\\n| Pretrained (DatasetDM) | 0\\\\% | 42.82 | - |\\n| Original LoRA | 100\\\\% | 42.97 | + 0.15 |\\n| SA-only | 50\\\\% | 43.40 | + 0.58 |\\n| CA-only | 50\\\\% | 43.46 | + 0.64 |\\n| **Ours** | 2\\\\% | **44.13** | **+ 1.31** |\\n\\n> **Table R2.1.** Comparison of Performance Improvements Across Various LoRA Fine-tuning Approaches\\n\\nAs shown in the table, our Selective LoRA approach achieves a performance improvement of 1.31, which is *more than twice* the improvement observed with hand-crafted layer selection approaches (SA-only, CA-only).\\nFurthermore, we believe that a 1.31 mIoU improvement is significant in the context of segmentation performance.\\nDetailed experimental results can be found in **Appendix A.9: Comparison with Hand-Crafted Layer Selection Approaches** in our revised paper.\\n\\n> Additionally, training a LoRA for a specific domain is a common practice and not particularly novel.\\n\\nWhile LoRA fine-tuning for specific domains is widely recognized, as noted by the reviewer, no prior studies have explored its application in segmentation dataset generation approaches [1, 2, 3].\\nFurthermore, as demonstrated in the comparison of LoRA fine-tuning methods provided in the response above (**Table R2.1**), we have shown that the Original LoRA only marginally improves performance due to issues such as overfitting and memorization, which results in the generation of non-informative samples. \\nIn contrast, our Selective LoRA achieves a significant performance improvement of 1.31. This clearly highlights the substantial difference between Selective LoRA and Original LoRA.\\n\\nThe additional detailed technical differences with the original LoRA, recognized as a novel approach by all reviewers (**B65c**, **hMNt**, **KMEN**, **kFPj**), can be found in **L254-L256** of our revised paper, as outlined below.\\n\\n> **L254-L256**: The key distinction of Selective LoRA lies in *selectively fine-tuning only the crucial layers* based on an automatically computed score, termed concept sensitivity, for the desired concept in the source dataset, while previous LoRA studies update all projection layers.\\n\\nThank you once again for providing valuable feedback. Although there isn\\u2019t much time remaining, if you have any additional concerns or questions, please let us know, and we will do our best to address them within the discussion period.\\n\\nReference\\n\\n> [1] Wu, Weijia, et al. \\\"Datasetdm: Synthesizing data with perception annotations using diffusion models.\\\" Advances in Neural Information Processing Systems 36 (2023): 54683-54695. \\n> [2] Wu, Weijia, et al. \\\"Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. \\n> [3] Nguyen, Quang, et al. \\\"Dataset diffusion: Diffusion-based synthetic data generation for pixel-level semantic segmentation.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}", "{\"comment\": \"Thank you for your valuable feedback. We have addressed your concerns in detail below, and these revisions have been reflected in the updated manuscript. If there are any additional points you'd like us to consider, please let us know.\\n\\n> **[W3.1. W3.2] Clarification of \\\"Informative Samples\\\" Definition and Figure 1**\\n\\nWe acknowledge the lack of clarity regarding \\\"informative samples\\\". In response to the reviewer's recommendation, we revised Figure 1 and its caption, adding our informative examples to clearly illustrate the concept. Additionally, we updated **L78-L80** in the revised version to explicitly define 1) domain-aligned and 2) informative samples using the example provided in **Figure 1**.\\nBelow is a brief clarification of the informative sample.\\n\\nInformative samples refer to generated image-label pairs that provide additional information beyond what is available in the existing source dataset, as described in **L43\\u201344**. In this study, the additional information is provided by pretrained text-to-image generation models (e.g., Stable Diffusion).\\nThe definition of \\\"informative samples\\\" varies depending on the problem setting in urban-scene segmentation. For example, when the target domain is ACDC, which includes driving-scene viewpoints and adverse weather, but the training dataset is Cityscapes, which consists of driving-scene viewpoints with only clear-day conditions, a diverse weather conditional dataset (informative) combined with a driving-viewpoint (domain-aligned) could serve as the optimal dataset for the problem.\\n\\nAs illustrated in the revised **Figure 1**, learning and fixing the style of the source dataset (e.g., the clear-day style from Cityscapes) does not add additional information, making it uninformative for this scenario. However, it may still be considered informative in the context of in-domain segmentation. Therefore, we propose a flexible method that selectively learns only the viewpoint or style from the Cityscapes training data while avoiding overfitting to the other concepts.\\n\\n\\n> **[W3.3] Comparative Analysis with Image-Driven Diffusion Models (InstructPix2Pix)**\\n\\nComparing our method to the suggested baseline, InstructPix2Pix, is a critical experiment.\\nWe have conducted additional comparisons with InstructPix2Pix and included the results in **Table 2, Section 4.2, of our revised manuscript.**\\n\\nInstructPix2Pix is an image-to-image translation model that generates diverse styles with a given image, similar to DGInstyle, which was included as a baseline method in our initial manuscript. \\nConsequently, the segmentation maps remain consistent across various generated images derived from a single image. Since InstructPix2Pix is designed to generate diverse styles of a given image, we concluded that it is not suitable for dataset augmentation in in-domain scenarios. To this end, we compared our proposed method with InstructPix2Pix using the domain generalization task using the domains of foggy, rainy, night-time, and snowy images in Cityscapes. The results are as shown. \\n\\n| DG Method | Generated Dataset | ACDC | DZ | BDD | MV | Average |\\n| :-------: | :---------------: | :-------: | :-------: | :-------: | :-------: | :-------: |\\n| ColorAug | InstructPix2Pix | 56.02 | 26.92 | 54.03 | 60.44 | 49.35 |\\n| ColorAug | Ours | **56.07** | **29.75** | **54.35** | **61.40** | **50.39** |\\n| DAFormer | InstructPix2Pix | 55.13 | 26.93 | 54.61 | 62.36 | 49.76 |\\n| DAFormer | Ours | **55.83** | **31.68** | **54.68** | **63.09** | **51.32** |\\n| HRDA | InstructPix2Pix | 58.50 | 29.56 | 56.10 | 64.10 | 52.07 |\\n| HRDA | Ours | **58.93** | **34.41** | **56.56** | **64.54** | **53.61** |\\n\\nThe table demonstrates that our proposed method consistently outperforms InstructPix2Pix across all DG methods. Specifically, we found that InstructPix2Pix is less effective for recent state-of-the-art baselines such as DAFormer or HRDA. This is because our method generates diverse scenes and their corresponding segmentation maps, whereas InstructPix2Pix merely changes the styles of a given image without creating diverse scenes. Such augmentation effects are already provided by severe data augmentation methods. This experiment clearly highlights the necessity of our proposed method, given the limitations of using current image-to-image translation models as data generators.\"}", "{\"comment\": \"Thank you for the effort in significantly updating the paper. I am considering increasing my score; however, as shown in the new table provided by the authors, the improvement from selective LoRA is not as significant compared to fine-tuning only some of the LoRA layers.\\nAdditionally, training a LoRA for a specific domain is a common practice and not particularly novel. Therefore, I can only increase my score to 5.\"}", "{\"comment\": \"Thank you for your efforts and the detailed response. The majority of my concerns have been addressed, but a few questions remain. Firstly, the generation of image-label pairs to enhance the perception model has already been tackled in DatasetDM. Therefore, I believe this contribution lacks sufficient novelty for ICLR publication. Regarding informative data generation, it plays a critical role in domain-alignment segmentation, and I recommend comparing your approach with those in [5] and [6], as both papers propose methods for diverse image generation.\\n\\nAdditionally, based on my experience, improving segmentation quality in the target domain using generated images relies on two key factors: (1) domain-invariance of the generated images relative to the target domain, and (2) the quality of the corresponding pseudo labels. However, the paper does not provide a detailed analysis or discussion on these aspects.\"}", "{\"comment\": \"> 3. **Domain Generalization Question**: I notice that the performance gains in domain generalization scenarios are lower compared to the improvements seen in Cityscapes. Could you explain the potential reasons for this discrepancy?\\n\\nThank you for recognizing the performance improvements in the in-domain setting. However, we believe the gains in the domain generalization (DG) setting are equally noteworthy.\\n\\nThe effectiveness of our approach becomes even more evident when examining individual datasets rather than the average DG performance across four datasets. \\nWe observed significant improvements on datasets with adverse weather conditions.\\nFor example, our method enhanced the performance of ColorAug by 2.95 on ACDC and 4.06 on Dark Zurich, which represent substantial gains compared to existing baselines.\\n\\nFurthermore, on HRDA, one of the most advanced DG methods, the second-best baseline achieves a performance increase of 0.38 (with DGInStyle reporting only a slight improvement of 0.71), whereas our approach achieves a significantly higher improvement of 1.53. \\n\\nFor the potential reasons why the performance improvements may appear modest, we would like to highlight the extensive use of color augmentations already employed by existing DG methods. \\nAs noted in **lines 454-460**, methods such as DAFormer and HRDA incorporate aggressive color augmentations. This extreme augmentation likely diminishes the impact of dataset generation approaches, including ours, by addressing color-related variations upfront.\\n\\nNevertheless, while image-to-image translation-based methods such as DGInStyle and InstructPix2Pix show increasingly marginal gains on DAFormer and HRDA, our method, though slightly reduced, continues to deliver substantial improvements. We believe this underscores the robustness and effectiveness of our approach in addressing domain generalization challenges.\"}", "{\"title\": \"Re: Re: \\\"person\\\" class and text guidance\", \"comment\": \"It is necessary to re-define class words in order to effectively guide the text-related models. Just using given class names is not the only answer. Although some reviewers might think there is no technical novelty, this approach will be the quickest approach to adapt for real-world or more practical scenario. (You can modify the word rider into \\\"person riding a bicycle/motorbike\\\"?).\"}", "{\"summary\": \"The paper introduces a new approach to generate training samples with sepecific concepts variants. The proposed method learn specific concepts, such as style or viewpoint, by selectively manipulate the gradients. The method claim that it improve domain alignment and sample diversity. In experiments, the method are compared with baseline and the DatasetDM, and results show improvements in in-domain, few-shot segmentation. The wide-scope experiments prove its practicability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-organized, the motivation description is clear and concise. The pipeline figures are easy to understand.\\n2. The core idea is interesting, use language distinction to learn concept difference, and eliminate the requirement of paired visual data to learn specific concepts. I think the learning procedure is practicable. \\n3. The experimental settings are extensive, including both in-domain, few-shot and damain generalization.\", \"weaknesses\": \"1. The method mainly focus on introduce their Selective LoRA. However, the whole pipline includes training a label generator (stage 3 in Figure 2). The technical details in this part are not well delivered. How the label generator receive the intermediate features from T2I models and generate semantic maps? In addition, in line 190-197, the authors say their use Mask2Former as label generator as same as DatasetDM. As I know, the DatasetDM use only \\\"perceptual decoder\\\" which only includes a decoder architecture instead of whole Mask2Former segmentaiton. Clarifying this distinction could provide a clearer understanding of the contributions of the current approach.\\n2. While the method aims for simultaneous sample and segmentation map generation, it requires a two-stage training process for the T2I model and label generator separately, contrasting with DatasetDM\\u2019s one-stage training. This additional stage could indeed limit practicality for real-time or large-scale augmentation, and a comparison in training efficiency or practical adaptability would be beneficial.\\n3. The dataset used for evaluation is Cityscapes and BDD100K, which includes only city streets. Since singlar scene makes learning specific concepts changes easiler, it would be improved if the authors prove their method on more general dataset, e.g. coco, ade20k. Since the main comparison method DatasetDM use more general dataset, I wonder performances of Selective LoRA on other datasets.\\n4. If the selective learning process affects the reliability of generated segmentation maps? The authors seem not provide relavant discussion.\\n5. Minor erros: the box of viewpoints and styles in stage 4) of Figure 2 are reversed.\", \"reference\": \"[1] DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models, NIPS 2023\", \"questions\": \"Please see Weaknesses. If the authors address my concerns, I will raise my rate.\\nAdditionally, It would be better if:\\n1. Provide quantitative metrics on the quality of the generated segmentation maps, comparing them to ground truth or to maps generated by other methods. Discuss any observed differences in segmentation map quality between their method and baseline approaches, particularly in relation to the selective learning process.\\n2. If possible, include a qualitative analysis (e.g., visual examples) of any artifacts or inconsistencies in the generated segmentation maps that might be attributed to the selective learning process.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your thorough review and valuable feedback on our paper. We hope our responses address your concerns effectively.\\n\\nThe main contribution of our paper is a novel fine-tuning method using Selective LoRA, which selectively learns only the concepts needed for data generation while preventing overfitting. This approach enables the generation of domain-aligned and diverse images without requiring extensive text prompt engineering for data augmentation.\\n\\nWe acknowledge your highly valuable opinion regarding prompt augmentation and recognize the importance of additional prompt augmentation for the person category. We believe that we have already provided a potential solution with the car example in **A.11. Generating Datasets with Diverse Class Names**. Additionally, we have chosen to leave this idea as future work because it is currently slightly outside the focus of this paper (selective fine-tuning method).\\n\\nWe have addressed all the issues raised in your initial review, including extensive revisions and updates to figures and tables for better presentation. Additionally, we have responded to all concerns from the second review. We kindly ask the reviewer to take these efforts into consideration and hope that our revisions meet your expectations.\\n\\nThank you for your time and consideration.\"}", "{\"summary\": \"This paper addresses the challenge of data scarcity in semantic segmentation by generating datasets through fine-tuned text-to-image generation models. Existing methods often overfit and memorize training data, limiting their ability to generate diverse and well-aligned samples. This paper proposes Selective LoRA that selectively identifies and updates only the weights associated with necessary concepts for domain alignment while leveraging the pretrained knowledge of the image generation model to produce more informative samples.\\nThe authors demonstrate its effectiveness in generating datasets for urban-scene segmentation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The idea of using concept loss to find important weights is reasonable.\\n3. The ablation studies and analytical experiments are interesting and inspiring.\", \"weaknesses\": \"1. The proposed method outlines how to fine-tune LoRA for generating informative target-style images. However, the definition of \\\"informative samples\\\" is not clear. This lack of clarity may hinder the reader's understanding of the intended contributions. For example, Figure 1 would benefit from including examples of informative data to provide a clearer context for what constitutes an informative sample.\\n\\n2. In Figure 1(b), the results of the LoRA-finetuned images for both foggy and night-time conditions appear remarkably similar, suggesting that the fine-tuning process may not have effectively generated between these two target styles. It raises concerns about the method's capability compared to the pretrained approach. \\n\\n3. The proposed Selective LoRA generates images in a specific style and containing particular content, but it lacks a comparative analysis with existing text-driven diffusion models, such as Instruct-Pix2Pix. A comparison in terms of both the quality of generated images and adaptation performance would significantly enhance the paper's contributions and provide the reader with a clearer understanding of how the proposed method stands in relation to established techniques.\\n\\n4. I find the results presented in Table 3 somewhat confusing. If I understand correctly, the baseline results are derived from fine-tuning Mask2Former using generated images, while RealFT represents the results from fine-tuning on real data. However, it is unclear how the authors obtained the labels for the generated data. Were these results obtained through an unsupervised training approach, or was an additional decoder trained similarly to the DatasetDM?\", \"questions\": \"Additional minor questions are listed as follows.\\n 1. The method uses selective LoRA to solve the problem of data scarcity in cross-domain segmentation. However, there are some similar methods to select LoRA weights in other fields, like LoRA-SP [3], GS-LoRA [2], Tied-LoRA [1] etc.. The authors should discuss these papers. \\n\\n[1] Tied-LoRA: Enhancing parameter efficiency of LoRA with Weight Tying \\n\\n[2] Continual Forgetting for Pre-trained Vision Models \\n\\n[3] LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models\\n\\n2. The authors should compare more text-driven or image-driven generated dataset baselines, such as Instruct-Pix2Pix [4], PTDiffSeg [5], DATUM [6], etc.\\n\\n[4] InstructPix2Pix: Learning to Follow Image Editing Instructions \\n\\n[5] Prompting Diffusion Representations for Cross-Domain Semantic Segmentation \\n\\n[6] One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Two Key Analysis for the Segmentation Dataset Generation**: (1) domain-invariance of the generated images relative to the target domain **(Image Domain Alignment)** and (2) the quality of the corresponding pseudo labels **(Image-Label Alignment)**\\n\\nWe have already conducted both analyses in the revised manuscript for the in-domain setting.\\nFirst, image domain alignment is discussed in the revised paper under **Section 4.3. Analysis**, specifically in the subsection titled **Image Domain Alignment**. As shown in **Table 3**, we provided quantitative results by measuring image domain alignment between the source dataset (Cityscapes) and the generated dataset using CMMD.\\nAdditionally, based on feedback from various reviewers, we analyzed image-label alignment qualitatively in **Figure 6** and quantitatively in **Table 8**. More detailed analyses can be found in **Appendix A.6 Comparison of Image-Label Alignment**. \\n\\nThus, we interpreted the current reviewer\\u2019s question as requesting additional analyses in the domain generalization setting.\\nTo address this, we have included a new analysis in **Appendix A.12. Additional Analysis of Our Generated Dataset on the Domain Generalization Setting**, with the key results summarized as follows.\\n\\n> **Image Domain Alignment in Domain Generalization Setting**: (1) domain-invariance of the generated images relative to the target domain\\n\\nFor domain generalization, we generated additional datasets for \\\"foggy\\\", \\\"night-time\\\", \\\"rainy\\\", and \\\"snowy\\\" conditions and incorporated them into the training process.\\nTo evaluate text adherence for each condition, we analyzed the datasets using CLIP scores, as shown in **Table 4**.\\n\\nIn this additional analysis, we further investigate how well each generated image set aligns with the actual images from ACDC corresponding to \\\"foggy\\\", \\\"night-time\\\", \\\"rainy\\\", and \\\"snowy\\\" conditions.\\nThis evaluation is performed using CMMD to measure image domain alignment (newly added in **Table 11**).\\n\\n| Method | foggy | night-time | rainy | snowy | average |\\n| :-------------: | :---: | :--------: | :---: | :---: | :-----: |\\n| DATUM\\u2020 | 2.41 | 2.46 | 2.91 | 2.10 | 2.47 |\\n| InstructPix2Pix | 3.43 | 3.13 | 2.99 | 3.32 | 3.22 |\\n| DatasetDM | 4.90 | 5.52 | 5.34 | 4.96 | 5.18 |\\n| Ours | 2.43 | 2.55 | 2.62 | 2.63 | 2.56 |\\n\\n\\u2020 *For DATUM, we provide an additional image for each weather condition to satisfy the requirements of the One-shot UDA setting, whereas the other methods do not rely on target domain images.*\\n\\nAs demonstrated in the results, our generated dataset achieves significantly better image domain alignment for each adverse weather condition compared to DatasetDM and InstructPix2Pix.\\nThis improvement likely arises from our method, which exclusively learns viewpoint information from Cityscapes, effectively utilizing driving scene knowledge.\\nBy contrast, DATUM requires a single real ACDC image for each weather condition and trains *four separate models*, one for each condition, *using the additional target domain images*.\\nEven without access to any real ACDC images and using a single model to generate datasets for multiple weather conditions, our approach achieves a comparable level of image domain alignment to that of DATUM.\\n\\nAdditionally, while our method currently addresses in-domain and domain generalization settings, exploring its application in a One-shot UDA setting, where additional training is performed using a single provided target domain image, would be an intriguing direction for future work.\"}", "{\"comment\": \"Thank you for your constructive feedback. We have carefully addressed your concerns below, and the proposed changes have been fully integrated into the revised manuscript. We welcome any additional input you may have.\\n\\n> [W4.1] I believe that the proposed method tackles an important problem and offers a reasonable approach to addressing the challenges, which I highly commend. However, there are some weaknesses to consider.\\n\\nThank you for acknowledging the significance of our proposed method and its approach to addressing the challenges. We sincerely appreciate your commendation. Regarding the weaknesses mentioned, we have carefully addressed them in the revised manuscript and provided additional clarifications and experiments where necessary. We believe these revisions strengthen the paper and hope they adequately address your concerns. Please let us know if there are any further aspects that require clarification or improvement.\\n\\n> **[W4.2] Limitation of Reliance on Pretrained Stable Diffusion Models**\\n\\nWe agree with the reviewer\\u2019s observation that a key limitation of our work is its reliance on the knowledge embedded in the pretrained Stable Diffusion model. \\nWhile the additional information is indeed constrained by the prior knowledge of the pretrained text-to-image generation model, we have demonstrated significant improvements in urban-scene segmentation across both in-domain and domain generalization tasks. \\nFurthermore, we believe that the proposed segmentation dataset generation framework has the potential to harness the extensive prior knowledge of large-scale text-to-image generation models for semantic segmentation by employing a selective adaptation methodology.\", \"this_aspect_has_been_addressed_in_the_revised_manuscript_in_section_5\": \"Conclusion and Future Work**, **L530\\u2013534**.\\n\\n\\n> **[W4.3] Challenges in Manual Concept Definition and Model Adaptation**\\n\\nAs the reviewer pointed out, the process of defining desired concepts can indeed be quite manual.\\nHowever, our main argument is that in cases where the desired concept is clear, such as urban-scene segmentation, our methodology provides an effective approach to exclusively learn the desired concept.\\nExtending this approach to identify and adapt desired concepts for other target datasets, as we have demonstrated for urban-scene segmentation, could be an exciting topic for future research.\\n\\nFurthermore, the following newly conducted experiments demonstrate the robustness of our method:\\n\\n**[Pascal VOC]** For in-domain tasks, even beyond urban-scene datasets, training on Pascal-VOC using the identified style-sensitive layers effectively captured the style of Pascal-VOC, resulting in additional performance improvements. These results are detailed in **Figure 13** and **Table 9** in **Appendix A.8: In-Domain Experiments for the General Domain Dataset (Pascal-VOC)**.\\n\\n**[Robustness to Prompt Augmentation]** As shown in **Figure 12** in **Appendix A.7: Concept Sensitivity According to Prompt Augmentation**, we found that when the desired concept (style or viewpoint) is well-defined, the sensitivity score remains consistent across various prompt augmentations, highlighting the robustness of our approach.\\n\\n> Additionally, finding the associated weights and determining their importance requires experimental work, which lacks standardized criteria.\\n\\nAs the reviewer noted, determining the proportion of selected layers may involve experimental work. However, we have introduced metrics for evaluating the quality of the generated dataset, including CMMD, CLIP-Score, and Image-Label Alignment (mIoU), as presented in **Table 3**, **Table 4**, and **Table 8**, which may serve as reasonable criteria.\\nAdditionally, we believe that the proportion of selected layers can be treated as a hyperparameter to adjust the exclusiveness of the desired concept relative to other concepts. We have also provided qualitative results and a detailed analysis in **Figure 16**, **Appendix A.10: Additional Qualitative Results**, and **L1241-L1273**, respectively.\"}", "{\"comment\": \"I have an opinion about the text guidance and the performance degradation or minor improvement of the class \\\"person\\\".\\nAs the authors used, it would be helpful to consider specific and diverse words like \\\"SUV\\\" and \\\"Sedan\\\". However, I think it will be better for authors to consider more text guidacne for the class \\\"person\\\". \\n\\nThe word \\\"person\\\" is fairly appropriate word to represent the class, because it represents human without any criteria. But it is too neutral for the distribution of the diffusion models' training data. Also, it is too ambiguous because of the class \\\"rider\\\" (in case of Cityscapes). Since \\\"person\\\" contains \\\"rider\\\", the class \\\"person\\\" needs to be changed into more specific words or clauses. It would be helpful to consider the word \\\"pedestrian\\\" which does not overlap with \\\"rider\\\". \\n\\nHowever, this is also another research direction. The authors don't have to consider this opinion. This will not affect my rating.\", \"title\": \"Re: \\\"person\\\" class and text guidance\"}", "{\"comment\": \"As you know, datasets like Cityscapes have distinct characteristics compared to general object detection datasets, presenting unique challenges such as class imbalance and co-occurrence issues[A]. These challenges make it one of the datasets with significant performance variations across different classes. I encourage analyzing these aspects in detail.\", \"a_few_key_points_to_consider\": \"1. Class-specific Performance Analysis:\\n- Which classes show the most significant improvements?\\n- If the improved classes are rare or typically difficult to detect, this could strengthen the novelty of the method\\n- Please provide a table showing class-wise improvements and include a detailed analysis\\n\\n2. Text Guidance Considerations:\\nGiven that this is a text-guided approach, we might expect varying degrees of performance improvement based on different text variations. Have you considered using alternative class names beyond the provided prompts? How might this affect performance?\\n\\n3. Domain Generalization Question:\\nI notice that the performance gains in domain generalization scenarios are lower compared to the improvements seen in Cityscapes. Could you explain the potential reasons for this discrepancy?\\n\\nThe focus on these aspects, particularly the class-specific analysis and domain adaptation challenges, could provide valuable insights into the method's strengths and limitations.\\n\\n[A] Kim, D., Lee, S., Choe, J. and Shim, H., 2024, March. Weakly Supervised Semantic Segmentation for Driving Scenes. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 3, pp. 2741-2749).\"}", "{\"comment\": \"Thank you for your positive support and reassessment of our manuscript. Your constructive feedback has been invaluable in improving our work.\"}", "{\"comment\": \"Thank you once again for your valuable insights, which significantly helped us improve our work. We have carefully revised the paper and included the additional experiments that were requested. Could you kindly let us know if the updated manuscript resolves your concerns and, if possible, whether this might warrant a reconsideration of the score?\"}", "{\"comment\": \"> **[W1.3] Semantic Segmentation Experiments on More General Datasets**\\n\\nIn response to the reviewer's request, we conducted a new experiment for generating a semantic segmentation dataset using Pascal-VOC, a more general dataset in **A.8 In-domain Experiments for the General Domain Dataset (Pascal-VOC)**.\\nFor this, we utilized only 100 images for segmentation, followed by a few-shot Pascal-VOC experiment in DatasetDM.\\n\\n> Pascal-VOC (100 images)\\n\\n| Generated Dataset | mIoU | Improvement |\\n| ---------------------------------------- | :---: | :---------: |\\n| Baseline (Mask2Former) | 44.59 | - |\\n| **Fine-tuning with additional datasets** | | |\\n| DatasetDM | 36.16 | - 8.43 |\\n| Ours | 45.52 | + 0.93 |\\n\\nAs shown in the table, although DatasetDM reduces performance compared to the baseline, our approach successfully enhances segmentation performance, even on a more general dataset. This experiment demonstrates the effectiveness of the selective learning approach, even with datasets exhibiting minimal distribution shifts.\\nWe include further experimental details and a qualitative comparison in **Appendix A.8** and **Figure 13** of the revised manuscript.\\nFurthermore, we have added this limitation in **5. Conclusion and Future Work** of our revised manuscript, and we believe that extending our method to general datasets would be interesting future work.\\n\\n> **[W1.4, Q1.1, Q1.2] Impact of Selective Learning on Segmentation Map Quality and Quantitative/Qualitative Comparison** \\n\\nAs the reviewer noted, the selective learning process does impact the quality of the generated segmentation maps.\\nFollowing the reviewer's suggestion, we have included a new qualitative comparison in **Figure 6** of the revised manuscript.\\nThis comparison demonstrated that our Style-Selective LoRA significantly outperforms both DatasetDM and the original LoRA (which applies LoRA parameters to all layers).\\n\\nAs illustrated in the figure, our Style-Selective LoRA significantly enhances the quality of segmentation maps compared to DatasetDM.\\n\\nAdditionally, while we could not discuss this in the main paper due to the page limit, we initially included such discussion in our Supplementary **Appendix A.6 Comparison of Image-Label Alignment**.\\n**Table 8** presents a quantitative comparison of image-label alignment between our Selective LoRA, DatasetDM, and other Selective LoRA variants.\\nTo provide quantitative results, we use the predictions from the pretrained Mask2Former model, which was fully supervised on the 100\\\\% Cityscapes dataset and achieves a 79.40 mIoU, as a proxy for the ground truth mask.\\nBelow are the key results from Table 8, which compares the image-label alignment (mIoU) of DatasetDM and ours under in-domain experiments. We selected 2\\\\% of layers for Style-Selective LoRA.\\n\\n| Method | Image-Label Alignment |\\n| :-------: | :-------------------: |\\n| DatasetDM | 25.18 |\\n| Ours | 39.37 |\\n\\nSimilar to the qualitative results, the Style-Selective LoRA outperforms both DatasetDM and the Viewpoint-Selective LoRA. We attribute this to the domain gap between the pretrained text-to-image (T2I) model (SDXL) and the source dataset (Cityscapes), as discussed in **Appendix A.6: Comparison of Image-Label Alignment** under **Analysis of the Qualitative Comparison**.\\n\\n> **[W1.5] Correction of Stage 4 of Figure 2 Annotation Error**\\n\\nWe want to clarify that stage 4) of Figure 2 (in the original manuscript) is not reversed.\\nLet us use the Cityscapes example as the running example.\\nWith Cityscapes, the style-sensitive layers learn the style of Cityscapes, while the viewpoint-sensitive layers learn the viewpoint of Cityscapes.\\nThis means that the style-sensitive layers can only output the style of Cityscapes, making them less effective at generating other styles of urban scenes.\\nConversely, the viewpoint-sensitive layers, which learn the viewpoint of Cityscapes, can generate diverse styles because the SD model retains knowledge of other styles.\\nTherefore, to generate diverse styles, we need viewpoint-sensitive layers, and to generate diverse viewpoints, we need style-sensitive layers.\\n\\nWe acknowledge that the terms \\\"style sensitive\\\" and \\\"viewpoint sensitive\\\" layers might be confusing due to their seemingly opposite outputs.\\nTo address this, we have revised **Figure 2** by removing the Style-Selective LoRA and focus on the segmentation dataset generation framework.\\nFurthermore, we added qualitative results for the Viewpoint-Selective LoRA with a brief explanation in the motivation figure (**Figure 1**) to better illustrate the sensitivity.\"}", "{\"comment\": \"We greatly appreciate your thoughtful comments. A detailed response to your concerns is provided below, and the corresponding updates have been included in the revised manuscript. Please feel free to share any further suggestions.\\n\\n> **[W2.1] Enhancing Writing, Technical Descriptions, and Experimental Structure**\\n\\nWe acknowledge the importance of improving the presentation of our paper and appreciate you pointing this issue out. \\nWe reorganized the paper and made overall adjustments to parts including **Sections 3 and 4, and Figures 1, 2, 3, 4 and 5.**\\nFollowing are the clarifications regarding the issues that the reviewer is confused with. \\n\\n**[LoRA Fine-Tuning]** We conducted LoRA fine-tuning on Stable Diffusion XL by adding LoRA layers to *all attention linear projection layers (query, key, value, and output)*.\\nSince \\\"Original LoRA\\\" in this work refers to updating all the added LoRA layers, we have revised the description in L249 in the revised manuscript to reflect this accurately.\\nAdditionally, we labeled all figures involving the T2I model with \\\"Attention Layers\\\" to clearly indicate that these are the attention layers of the T2I model, including Figures 2, 3, 4, 5, and others.\\n\\n**[Layer Indices]** The layer indices in Figures 3 and 4 represent the \\\"attention layer indices\\\" identified in Stable Diffusion XL. The left side corresponds to shallower layers, while the right side corresponds to deeper layers in a continuous progression. We labeled all relevant figures with \\\"shallower\\\" to \\\"deeper\\\" to provide clearer context.\\n\\n**[Stage 3, 4]** For stages 3 and 4, we have newly included **Section 3.4: Training Label Generator and Generating Diverse Segmentation Datasets** to ensure our paper is self-contained, and revise minor elements of **Figure 2** for clearer presentation.\\nAdditionally, we provide further implementation details for the label generator in **Appendix A.1**, accompanied by the newly added **Figure 7**, which illustrates the label decoder architecture. Below is a brief summary of the explanations for stages 3 and 4.\\n\\n**[Brief Summary of Stage 3]** We train an additional lightweight label generator to produce a segmentation label corresponding to the image, following DatasetDM.\\nTo train the label generator, we add noise to the given labeled image and denoise the image with the fine-tuned T2I model, which can provide semantically rich intermediate multi-level feature maps and cross-attention maps.\\nDistinct from DatasetDM, we train the label generator based on the fine-tuned T2I model using Selective LoRA.\\nThe added fine-tuning process causes a significant difference in image-label alignment, which we discussed in **Appendix A.6 Comparison of Image-Label Alignment**.\\nFurthermore, due to the difference between the base T2I model, architecture details slightly changed, as described in **Appendix A.1**.\\n\\n**[Brief Summary of Stage 4]** Diverse image-label pairs are generated to address both domain generalization and in-domain scenarios. For domain generalization, text prompts are modified to include adverse weather conditions (e.g., foggy, snowy, rainy, night-time) by extending the default prompt, such as \\\"photorealistic first-person urban street view,\\\" to, for example, \\\"in foggy weather,\\\" enhancing the model\\u2019s ability to generalize across varying environmental conditions. For in-domain scenarios, diversity is introduced by varying the class names within the prompt template (e.g., \\\"\\u2026 with car\\\", \\\"\\u2026 with car\\\", etc.), allowing for the generation of images that reflect different object class combinations while maintaining consistency with the in-domain characteristics.\\n\\n**[Organizing Experimental Section]** \\nWe have reorganized the experimental section (Section 4) as the reviewer suggested. \\nTo be more specific, we explained the experimental setup and implementation details at the beginning of the experimental section. \\nThen, we reported the main performance improvements in a section titled 'Main Results on the Semantic Segmentation Benchmarks'.\\nAfter the main result, we demonstrated various analyses and ablation studies. \\nWe deeply appreciate the reviewer's valuable feedback.\"}" ] }
2TasVD7FXp
InvestESG: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma
[ "Xiaoxuan Hou", "Jiayi Yuan", "Joel Z Leibo", "Natasha Jaques" ]
**InvestESG** is a novel multi-agent reinforcement learning (MARL) benchmark designed to study the impact of Environmental, Social, and Governance (ESG) disclosure mandates on corporate climate investments. The benchmark models an intertemporal social dilemma where companies balance short-term profit losses from climate mitigation efforts and long-term benefits from reducing climate risk, while ESG-conscious investors attempt to influence corporate behavior through their investment decisions. Companies allocate capital across mitigation, greenwashing, and resilience, with varying strategies influencing climate outcomes and investor preferences. We are releasing open-source versions of InvestESG in both PyTorch and JAX, which enable scalable and hardware-accelerated simulations for investigating competing incentives in mitigate climate change. Our experiments show that without ESG-conscious investors with sufficient capital, corporate mitigation efforts remain limited under the disclosure mandate. However, when a critical mass of investors prioritizes ESG, corporate cooperation increases, which in turn reduces climate risks and enhances long-term financial stability. Additionally, providing more information about global climate risks encourages companies to invest more in mitigation, even without investor involvement. Our findings align with empirical research using real-world data, highlighting MARL's potential to inform policy by providing insights into large-scale socio-economic challenges through efficient testing of alternative policy and market designs.
[ "multi-agent reinforcement learning", "climate change", "ai for climate" ]
Accept (Poster)
https://openreview.net/pdf?id=2TasVD7FXp
https://openreview.net/forum?id=2TasVD7FXp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z22aJDXMbQ", "ynBEy6XL8i", "wdznMyIBqB", "v9aekHRMmE", "muHrkAxFK8", "m5uiWcEhwM", "jNXsIDMMSY", "ieyQTik0z3", "e0AqQfrp3P", "bwYsBjHBSo", "aIZ79SETlO", "ZxapSEAmQJ", "ZPoeupSbLu", "WMMjtUapa6", "LDzqt1O31E", "J3yRe85n0V", "IzZNWXqhZ0", "I7WfRA0UQj", "BWfFlsbnGj", "AKLbgFXuKZ", "2cvURSOkqD", "1ZTmEdPziT" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730683988455, 1731736157349, 1732568032227, 1732588179852, 1731736490607, 1732811616299, 1730778291542, 1737524170140, 1732588489339, 1731738773928, 1731738213405, 1732811285920, 1734739922560, 1733190780920, 1732052718655, 1733094828644, 1731092043515, 1732481374809, 1731740056157, 1732664754607, 1731739613301, 1733094855720 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12159/Reviewer_fPY9" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Reviewer_mQaB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Area_Chair_dGwn" ], [ "ICLR.cc/2025/Conference/Submission12159/Reviewer_fPY9" ], [ "ICLR.cc/2025/Conference/Submission12159/Reviewer_fPY9" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Reviewer_9btN" ], [ "ICLR.cc/2025/Conference/Submission12159/Reviewer_9btN" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Reviewer_mQaB" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ], [ "ICLR.cc/2025/Conference/Submission12159/Authors" ] ], "structured_content_str": [ "{\"summary\": [\"The paper presents InvestESG, a MARL environment that studies the impact of ESG disclosures on company and investor agents. The benchmark is meant to simulate companies' investment decisions into climate mitigation, green washing and resilience spending as a social dilemma.\", \"Specifically, the contributions are:\", \"InvestESG, a climate-economic environment in which investors fund companies, which make decisions about how much to invest in climate-related spending over 100 years starting in 2020.\", \"Climate risks grow linearly in the absence of any mitigation\", \"Companies decide how much to spend on mitigation, greenwashing and resilience\", \"Investors decide which companies to invest in based on their preferences, which trade off between profits and climate efforts documented by ESG disclosures.\", \"As the simulation proceeds, companies make profits which they return to investors, while climate risks grow, resulting in a higher probability of extreme events.\", \"Agents are modelled using IPPO\", \"A set of experiments shedding light on agent behaviour in InvestESG\", \"With no ESG disclosures, purely profit-driven decisions result in suboptimal collective outcomes\", \"The impact of ESG disclosures depending on how many and how much investors care about ESG reports when choosing which companies to invest in\", \"Whether companies leverage greenwashing when it is allowed in InvestESG\", \"Whether visibility of the climate-related risk probabilities impacts agent behaviour\", \"Conclusions for policymakers and researchers\", \"Mandatory EST disclosure paired with ESG-conscious investors can drive corporate mitigation efforts.\", \"Knowledge of climate risks motivates investors and companies\", \"Agent behaviour is consistent with empirical evidence\", \"InvestESG is an example of using MARL to tackle complex social dilemmas in real-world, high-impact domains\"], \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"# Originality\", \"**Novel MARL application to ESG disclosures**: Even though Zhang et al. explore MARL in the policy space, as far as I know, this is the only MARL simulator that looks at ESG disclosure impact in this scenario.\", \"**Novel problem formulation**: the authors cleanly describe the relationship between companies and investors with a two agent type system, as well as an ESG disclosure component.\", \"# Quality\", \"**Relevant problem setup**: key decisions are captured by the problem setup. The ESG disclosure abstraction is simple and elegant. The reward structure effectively represents a social dilemma.\", \"**Extensive experimental results**: the authors go through many scenarios with InvestESG to analyze different outcomes.\", \"# Clarity\", \"The paper is **well structured**, and makes for a smooth read with little to no cognitive breaks.\", \"The work is **well situated** within the literature on MARL simulators, and they contrast well with similar work.\", \"The design and implementation of InvestESG is **clearly laid out**.\", \"The work makes judicious use of **relevant visualizations**, such as Schelling diagrams.\", \"# Significance\", \"The analysis is **timely and relevant** given the current discussions around ESG disclosures.\", \"The conclusions around the preferences of investors for climate-active companies is impactful.\", \"The use of MARL to study social dilemmas is an important subject of study.\"], \"weaknesses\": [\"# Soundness\", \"**The economic agents are not grounded in the economics literature**. This leads to issues such as capital being perfectly flexible across time steps. In traditional economics models, investments in capital last and they are not flexible. Here, there seems to be an implied assumption of perfectly flexible capital, which is unrealistic. Starting with an existing model of economic agents (with a citation), highlighting its limitations for InvestESG and then explaining how you extend to agent to accommodate for these limitations would be a much more compelling presentation.\", \"**Investor decisions are binary**, as opposed to continuous across all companies. Making investor decisions floats, i.e. a vector whose sum is capped at one, would allow for proportional investments across different companies. This is essential for investor diversification, which would also enable interesting extensions like regional damages to companies (i.e. climate events could affect subsets of agents either chosen at random or chosen somehow).\", \"Figure 7 b) is highly confusing. It looks like **with climate information, risk is *maximized* and market wealth is *minimized***. I'm not sure what exactly is going on in this plot, but it doesn't fit with the storyline of the paper. That is, it certainly does not look like more information improves decision making in this plot, if anything the effects of more information are catastrophic for both climate risk and market wealth.\", \"Figure 2b could be improved by showing the average number of events at each year across many episodes, as opposed to a single episode.\", \"The **number of agents is limited**. Granted, it is more than 2. However, it would be interesting to scale it up to more and see what types of behaviour emerge. There are group size effects that can emerge at scale in economics, e.g. see https://www.aeaweb.org/articles?id=10.1257/mic.20200290. This shows in section 9.2 of the paper in the appendix, but given the implications of such a result, it would be very important to expand upon these results.\", \"# Presentation\", \"The paper is well structured, but the plots are a pain to read. The labels and ticks are too small, and the axes are not annotated.\", \"If you use a pdf format for your images instead of png, you can avoid the graininess when zooming in, which is necessary because of the label sizes.\", \"The results section could benefit from additional structure. It would be less dense and easier to read if you highlighted which of your results you consider the main results, and which you consider additional.\", \"I found the description of schelling diagrams fairly unclear, it took me a minute to get it.\", \"It should be ICLR 2025, not 2024. Please make sure the template you used is up to date.\", \"Inconsistent use of \\\"MARL\\\" and \\\"multi-agent RL\\\"\", \"Might benefit for a problem setting section, where you introduce important concepts like bifurcated equilibria\", \"Typo: 3.2 \\\"self-interest\\\" -> \\\"self-interested\\\"\", \"# Contribution\", \"The importance of the contributions are weakened by what Figure 9 d) is suggesting, since there are many companies in the world. It seems to me that, without addressing the concerns raised by this result, your conclusions for policymakers do not hold.\"], \"questions\": [\"What does a sensitivity analysis of the Beta parameter do to the results?\", \"How would more longer capital investment timelines (e.g. min 5 year lock-in) impact the trained agents?\", \"Do observations include past climate events?\", \"How did you calibrate the 0.5% investment of capital into mitigation?\", \"Why do you think that agents are so insensitive to the value of Beta as shown in figure 6 b)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9btN\", \"comment\": [\"Thank you for your valuable feedback in helping us improve our work. In response to your comments, we have provided additional explanations below to address your concerns. We would greatly appreciate any further feedback on whether these revisions effectively resolve your concerns, as we are committed to strengthening the paper through the rebuttal process.\", \"**System with a limited number of agents**: Thank you for raising the question of whether a limited number of agents can represent a market with thousands of real agents. Post submission, we have developed a GPU-efficient Jax version of our environment, which runs at 10x the previous speed, enabling us to scale the simulation significantly. We will run larger-scale scenarios (on the order of 50 agents) to test whether we obtain similar findings when scaling to more agents. We will share updates during the discussion period as soon as the new results are available. We will also include a link to the Jax-based implementation in the final version of the paper.\", \"**Additional scenarios**: We plan to add a few additional scenarios to capture more of the real-world complexity, including seeding the first few steps of company actions with real-world company behaviors, and implementing a lock-in period for investments.\", \"**Simplicity of the framework**: We acknowledge, as you noted, that our work uses a simplified framework. Considering the magnitude and complexity of the global market, there might never be a model that fully captures the climate-market dynamics. But we believe that our choice of model captures the most important trade-offs. Briefly, our reasons for this approach are as follows.\", \"**It is grounded in the economics and finance literature** that studies similar questions with theoretical models. For example, in a recent highly impactful study published in a top finance journal, Pastor et al. (2021) built an analytical model for a single-period equilibrium that examines firm-investor tradeoffs much similar to ours. In their model, firms choose the \\u201cgreenness\\u201d level, which affects their cost of capital, while investors select portfolios to maximize utility derived from both financial returns and their \\u201ctaste\\u201d for green assets. Other influential studies in financial economics use comparable or even simpler frameworks. For example, Pedersen et al. (2021) model a single-period equilibrium with investors differentiated by their types of ESG preference, assuming fixed ESG characteristics for firms. Going beyond these works, we provide a modeling tool that can study a system which evolves over a much longer time period (100 years), rather than a single timestep.\", \"**Our simulations agree with the key findings in these papers** suggesting that ESG-conscious investors prioritize green companies and promote positive social impacts by shifting investment towards green firms.\", \"Our framework, albeit still simple, **allows for investigating more realistic and complex settings than the existing financial economics literature**. These include (1) climate evolution, where companies mitigate emissions to attract investment and reduce long-term climate risk exposure\\u2014often omitted in financial studies (2) the potential for greenwashing, linked to information asymmetry (e.g., Lyon and Maxwell, 2011), and (3) a dynamic multi-agent game where firms and investors interact over many periods, which is more realistic. The equilibrium of such a setting is challenging to solve analytically or numerically (Pakes and McGuire, 2001), and our simulations provide insights here.\", \"Despite the simplicity of the framework, **our simulation results match real-world data collected from countries that have implemented an ESG disclosure mandate** of some sort. For example, the mandate encourages more truthful mitigation than greenwashing (Fiechter et al., 2022), as shown in Figure 6. Raising public awareness motivates positive actions (Delmas and Toffel, 2008; Bowen, 2000), which we show as Figure 7, where including more information about climate risks helps both corporations and investors resolve the dilemma (we note that the legend lines for with vs. without climate information are accidentally mislabeled (flipped) on 7b and will fix it in the revised PDF). While we discuss these connections in Section 4, we will make this more clear in the revised version. We hope that by showing our benchmark matches existing empirical evidence, but enables studying novel policy algorithms, it can provide a useful tool for attracting ML researchers to develop algorithms that can help resolve the social dilemma posed by climate change investment.\", \"For these reasons, we deliberately aimed to make a benchmark that is **simple enough for the ML community to iterate on with reasonable computational resources** (even a single GPU), without compromising the fundamental incentive structures. We believe removing the computation barrier will help us encourage greater participation from the ML community.\"]}", "{\"comment\": \"Thank you for continuing to engage with us and sharing detailed feedback and exciting new ideas!\\n\\n>Limited number of agents, flexible investments, grounded in economics literature\\n\\nWe have run the suggested changes and across all these experiments, we observed the same directional results as in our original experiments. Since the system does not permit us to share figures here, we will include these findings in the updated version of our paper, which will be uploaded by 11/27. We will also revise our paper according to your other suggestions, as well as incorporate the above discussion of how our work is grounded in economics literature.\\n\\n>The ground-truth limitation\\n\\nThank you for raising the critical point regarding the \\u201cunobservable ground truth climate-economic damage function\\u201d and the \\u201csignificant uncertainty in the level of economic damages.\\u201d We agree that there is substantial uncertainty in companies\\u2019 economic losses resulting from climate events. To address this limitation in our current deterministic damage function, we propose conducting an experiment where economic losses from climate events are modeled as random variables within the range [0,1]. These losses will vary both across events and between companies. We believe this approach addresses two concerns:\\n- __Uncertainty and Realism__: Our experiment more closely aligns with the real-world ambiguity surrounding the economic impacts of climate events. While agents may still learn the distribution of these damages over time, increasing the variance of the random variable can ensure that the uncertainty remains significant enough for agents to consider disregarding the learned impacts, as real-world companies may do, because it\\u2019s too noisy.\\n\\n- __Bankruptcy Dynamics__: To clarify, our current framework already forces companies to go bankrupt if their capital becomes negative. However, in the experiments conducted so far, bankruptcy has not occurred because companies maintain an underlying economic growth rate sufficient to offset climate-induced losses and have learned not to overspend on climate-related efforts. By introducing a highly stochastic damage function, climate events could become severe enough to trigger bankruptcies. This adjustment could incentivize behaviors such as greenwashing (\\u201cif bankruptcy is inevitable in certain scenarios, it may be rational to prioritize immediate economic survival over long-term climate commitments\\u201d). We will report results on the number and frequency of companies going bankrupt in our existing experiments and the new experiments in the updated paper as well.\\n\\nWe understand that you may be concerned that RL agents \\u201csee into the future\\u201d by being trained over many episodes, whereas human beings cannot. It is true that fundamentally, RL works by learning to estimate future rewards over many experiences with the environment. Our rationale behind using RL agents as an approximation of rational actors is, although humans and companies cannot directly observe the future, they make decisions based on experience, reasoning, and expectations about how the world evolves, often with greater sophistication than RL agents trained from scratch over repeated episodes. RL agents mimic this process by iteratively learning from simulated episodes, which can be seen as analogous to the iterative learning process humans undergo through trial, error, and observation of historical patterns. For example, stock prices in financial markets reflect the aggregate expectations of rational agents regarding future economic conditions. Similarly, RL agents do not \\\"know\\\" the future; rather, they estimate expected outcomes by averaging over many plausible scenarios and optimizing their policies based on this probabilistic understanding.\\n\\nBy introducing uncertainty into key elements such as the climate-economic damage function (as described earlier), we create an environment that requires RL agents to operate under significant ambiguity, mirroring real-world decision-making processes. \\nMoreover, we believe that using RL agents currently represents the most effective method for approximating intelligent, rational agents at scale\\u2014such as individuals and corporations\\u2014who respond dynamically to incentives in this environment. This approach provides a practical and scalable way to explore emergent behaviors and policy outcomes in environments where traditional analytical methods are insufficient. \\n\\n>Investors learn ESG consciousness\\n\\nThank you for proposing this incredibly interesting idea! We could test this by making investors receive negative utility for extreme weather events. Then they could learn to invest in climate-conscious companies to reduce their own probability of undergoing extreme weather events. Although we may not be able to obtain these results before 11/27, we love this idea and will definitely explore it in ongoing work.\"}", "{\"comment\": \"Thank you for following up. We wanted to share the results of additional experiments we conducted with the following modifications based on our engaging discussions with Reviewer fPY9: (1) an increased number of agents (25 companies and 25 investors), (2) company agents seeded with real-world company actions, (3) investment commitments locked in for five years, and (4) adding random noise to determine the financial cost of extreme weather events which varies across companies and events. For company agents seeded with real-world company actions, we referred to the publicly available data from authority sources such as European Investment Bank and London Stock Exchange to seed the initial climate investment for 5 years, to resemble investment happening in the real-world. Across all these experiments, we observed the same directional results as in our original experiments. That is, in the default case where investors are only profit motivated, companies do not learn to mitigate effective, climate risks remain high, and market wealth is lower. However, by including investors which place a high weight on ESG scores, companies can learn to mitigate, decrease climate risk, and increase total market wealth. Since the system does not permit us to share figures here, we will include these findings in the updated version of our paper, which will be uploaded by 11/27.\\n\\nWe would also like to address your question on why RL agents can approximate rational actors in the real-world. Our rationale behind this is: fundamentally, RL works by learning to estimate future rewards over many experiences with the environment. Although humans and companies cannot directly observe the future, they make decisions based on experience, reasoning, and expectations about how the world evolves, often with greater sophistication than RL agents trained from scratch over repeated episodes. RL agents mimic this process by iteratively learning from simulated episodes, which can be seen as analogous to the iterative learning process humans undergo through trial, error, and observation of historical patterns. For example, stock prices in financial markets reflect the aggregate expectations of rational agents regarding future economic conditions. Similarly, RL agents do not \\\"know\\\" the future; rather, they estimate expected outcomes by averaging over many plausible scenarios and optimizing their policies based on this probabilistic understanding.\\n\\nAs we approach the rebuttal deadline, we would greatly appreciate any additional feedback you can provide on how to further strengthen our paper, or any additional concerns we could potentially address.\"}", "{\"title\": \"Response to Reviewer 9btN\", \"comment\": \"(cont.)\\n- We appreciate your point that the framework may seem oversimplified to the broader audience. **We will dedicate a new section in the updated paper** to discuss why we think our simulations are solid to help address these concerns.\\n\\nThank you again for your valuable feedback. We hope our responses address your concerns, and we welcome any further questions or comments you may have. Your insights are instrumental in helping us strengthen this work, and we look forward to your continued guidance during the rebuttal phase.\", \"reference\": \"[1] P\\u00e1stor, \\u013d., Stambaugh, R. F., & Taylor, L. A. (2021). Sustainable investing in equilibrium. Journal of financial economics, 142(2), 550-571.\\n\\n[2] Pedersen, L. H., Fitzgibbons, S., & Pomorski, L. (2021). Responsible investing: The ESG-efficient frontier. Journal of financial economics, 142(2), 572-597.\\n\\n[3] Lyon, T. P., & Maxwell, J. W. (2011). Greenwash: Corporate environmental disclosure under threat of audit. Journal of economics & management strategy, 20(1), 3-41.\\n\\n[4] Pakes, A., & McGuire, P. (2001). Stochastic algorithms, symmetric Markov perfect equilibrium, and the \\u2018curse\\u2019of dimensionality. Econometrica, 69(5), 1261-1281.\\n\\n[5] Fiechter, P., Hitz, J. M., & Lehmann, N. (2022). Real effects of a widespread CSR reporting mandate: Evidence from the European Union's CSR Directive. Journal of Accounting Research, 60(4), 1499-1549.\\n\\n[6] Delmas, M. A., & Toffel, M. W. (2008). Organizational responses to environmental demands: Opening the black box. Strategic management journal, 29(10), 1027-1055.\\n\\n[7] Bowen, F. E. (2000). Environmental visibility: a trigger of green organizational response?. Business strategy and the environment, 9(2), 92-107.\"}", "{\"comment\": \"Thank you for engaging with us and sharing your specific concerns when reviewing the new version of the paper. As we promised, we have included a JAX-based implementation of the environment in the supplementary material.\\n\\n >Advances towards more realistic agent behaviour and scenarios.\\n\\nIn response to your valuable feedback, we have made a series of changes to enhance our model and capture more caveats and complexity in the real world in our rebuttal revision, which you can find more details in in Appendix 11.\\n- Scaling up the number of agents: We scaled up the number of agents in the environment, conducting experiments with 10-company, 10-investor setups and larger 25-company, 25-investor setups. These scaled scenarios yield directionally consistent conclusions with those from smaller setups.\\n- Real-World Initializations: We initialized company agents with real-world corporate actions to better align with actual decision-making contexts.\\n- Capital Flexibility Challenges: We introduced a setting that locks in agents' decisions for five years, reflecting the inertia in capital allocation decisions faced by companies and investors in the real world.\\n- Unpredictability of Climate Event Damage: We introduced a setting in which companies have a randomized climate resilience parameter that varies across events and agents, simulating the unpredictability of climate event damages.\\n- Bankruptcy Mechanism: We implemented a stricter bankruptcy mechanism, creating a more severe risk for company agents, which in turn affects systemic outcomes.\\n\\n> model validation:\\n\\nThe series of new experiments above serve as evidence for the robustness of the model under different scenarios. The results are consistent with theoretical expectations from the literature, reinforcing the model's validity as a benchmark for studying climate investment dynamics. \\n\\n> quantitative results;\\n\\nWe have included additional social outcome metrics in the main text, including the average number of severe climate events in Figure 2 and Figure 4. We also include the number of companies that undergo bankruptcies in our additional experiments concerning bankruptcy mechanisms in Appendix 11.6.\\n\\n> discussions and critical evaluation of the assumptions, results and InvestESG.\\n\\nTo clearly capture the assumptions we have made, we added a Problem Setting section, which includes the key trade-offs faced by the corporation concerning climate action, and how our model is grounded from well-established finance and economics literature that explores similar questions. We carefully presented and interpreted our results in the Main Result section as well as in Appendix 11, and compared them to theoretical and empirical literature, demonstrating compatibility with existing studies on ESG investment dynamics. \\n\\nWe would like to clarify that, as added in the Introduction and Discussion section, we position InvestESG as a \\u201cfirst-principles\\u201d model that highlights the core incentive structures of the problem, rather than to fully capture real-world complexities and details. We plan to add more features. But our goal is to develop an environment that is flexible for researchers to select the level of granularity that best suits their needs.\\n\\n> Ultimately, does the paper provide sufficient evidence for the statement in the abstract: \\\"However, when a critical mass of investors prioritizes ESG, corporate cooperation increases, which in turn reduces climate risks and enhances long-term financial stability. \\\"?\\n\\nIn our revision, we included the results from experiments with 10-company, 10-investor setups and larger 25-company, 25-investor setups in Figure 8. These new results all agree on the above statement. We also provide additional evidence in Appendix Figure 11c showing that, when a critical mass of investors prioritizes ESG, the cooperating companies attract the majority of ESG-conscious investment, and form a positive feedback loop where their better margins draw in more investments.\"}", "{\"summary\": \"This paper presents InvestESG, a novel multi-agent reinforcement learning (MARL) benchmark designed to simulate and analyse the impact of varying Environmental, Social, and Governance (ESG) disclosure policies through the social dilemma paradigm. InvestESG uses two types of agents: companies and investors. Companies allocate capital across mitigation, greenwashing, and resilience, with varying strategies influencing climate outcomes and investor preferences. The findings are consistent with empirical research using real-world data. They capture the positive impact of companies using information about global climate risks to determine their level of investment in mitigation, even without investor involvement. The paper is beautifully written and rigorous.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper represents a novel contribution in a highly relevant, high-impact domain, at the intersection between climate change and MARL. It is beautifully written and self-contained, with rigorous specifications of the InvesESG environment. The implementation details and code are provided, and overall the paper makes a good case for a MARL benchmark for studying climate investment through the social dilemma paradigm, via two agent types: companies and investors. InvestESG is designed to simulate and analyse the impact of varying Environmental, Social, and Governance (ESG) disclosure policies on corporate climate investments. In InvestESG, companies allocate capital across mitigation, greenwashing, and resilience, with varying strategies influencing climate outcomes and investor preferences. The findings are consistent with empirical research using real-world data. The results capture the positive impact of companies using information about global climate risks to determine their level of investment in mitigation, even without investor involvement.\", \"weaknesses\": \"The main weakness of the paper consists in its simplifying assumptions, in terms of the types of agents, and the considered scenarios, analysis and discussions. These limitations are acknowledged in the paper. Due to these reasons, I believe that, in the current format, the paper makes an insufficient contribution for a top conference like ICLR.\\n\\nFor a more significant contribution, this work could be extended in one or more possible directions: extend the agent types (possibly consider insurance companies/market?), add more complex agent behavior, learn parameters and behaviors from real data, include more social outcome metrics (in addition to the final climate risk level and the final total market wealth, at the end of the simulation period) and/or include additional features, such as agent bankruptcy, and a dynamic number of agents.\\n\\nAssuming the agent-types remain just companies and investors, increasing the number of companies and investors, and learning their behavior from real world data, may be a sufficient extension for a more significant contribution.\\n\\nIn the longer term, the initial vision of InvestESG would benefit from a more diverse agent space, for a more realistic climate-change problem specification (however, this is not essential for a significant contribution).\", \"questions\": \"I think the paper could be extended in several possible directions, as indicated in the Weaknesses section, for a more significant contribution. Another possible direction would be to implement and assess the impact of other PPO policies than IPPO on the overall behaviour and insights.\\n\\nPotential relevant papers are suggested below.\\n\\nBisaro, Alexander, and Jochen Hinkel. \\\"Governance of social dilemmas in climate change adaptation.\\\" Nature Climate Change 6, no. 4 (2016): 354-359.\\n\\nBettini, Matteo, Amanda Prorok, and Vincent Moens. \\\"Benchmarl: Benchmarking multi-agent reinforcement learning.\\\" Journal of Machine Learning Research 25, no. 217 (2024): 1-10.\\n\\nBettini, Matteo, Ryan Kortvelesy, and Amanda Prorok. \\\"Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning.\\\" arXiv preprint arXiv:2405.15054 (2024).\\n\\nBrogi, Marina, Antonella Cappiello, Valentina Lagasio, and Fabrizio Santoboni. \\\"Determinants of insurance companies' environmental, social, and governance awareness.\\\" Corporate Social Responsibility and Environmental Management 29, no. 5 (2022): 1357-1369.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you again for your valuable feedback. As promised, we wanted to share the results of experiments we conducted with the following modifications: (1) an increased number of agents (25 companies and 25 investors), (2) company agents seeded with real-world company actions, and (3) investment commitments locked in for five years, and (4) adding random noise to determine the financial cost of extreme weather events which varies across companies and events. For company agents seeded with real-world company actions, we referred to the publicly available data from authority sources such as European Investment Bank and London Stock Exchange to seed the initial climate investment for 5 years, to resemble investment happening in the real-world. Across all these experiments, we observed the same directional results as in our original experiments. That is, in the default case where investors are only profit motivated, companies do not learn to mitigate effectively, climate risks remain high, and market wealth is lower. However, by including investors which place a high weight on ESG scores, companies can learn to mitigate, decrease climate risk, and increase total market wealth. Since the system does not permit us to share figures here, we will include these findings in the updated version of our paper, which will be uploaded by 11/27. Additionally, we really appreciate the relevant papers you have shared with us in your initial response, and we found some of them really informative and included them in our citation.\\n\\nThe \\u201cadding random noise to determine the financial cost of extreme weather events\\u201d experiment was based on our actively engaging discussion with reviewer fPY9, which *may help address your concern around bankruptcy* as well. Our current framework already forces companies to go bankrupt if their capital becomes negative. However, in the experiments conducted so far, bankruptcy has not occurred because companies maintain an underlying economic growth rate sufficient to offset climate-induced losses and have learned not to overspend on climate-related efforts. In a new experiment, we will make companies\\u2019 financial losses during climate events highly stochastic, so that climate events could become severe enough to trigger bankruptcies. In addition, we made the standard for determining whether a company is bankrupt more realistic by deeming companies that have a margin worse than 10% for 3 consecutive years as bankrupt. This adjustment could reveal more insights on what happens if companies go bankrupt. \\n\\nAs we approach the rebuttal deadline, we would greatly appreciate any additional feedback you can provide on how to further strengthen our paper.\"}", "{\"title\": \"Response to Reviewer mQaB\", \"comment\": \"(cont.)\\n> Extend the agent types (possibly consider insurance companies/market?), add more complex agent behavior\\n\\n - Thank you for suggesting insurance companies as an additional agent type. As noted in Section 5 (Future Work), we plan to continue enriching the environment, and we will include insurance companies in our future developments, as they play a critical role in climate-related risk management. However, we believe our current setup effectively captures the primary trade-offs and dynamics needed to evaluate the policy. And we would like to start inviting the ML community to collaborate with us in further building and refining the simulator.\\n\\nThank you again for your valuable feedback. Please advise whether our planned additional experiments address your concerns, and we welcome any further questions or comments you may have. Your insights are instrumental in helping us strengthen this work, and we look forward to your continued guidance during the rebuttal phase.\", \"reference\": \"[1] Miller, Zach. (2023, October 17) *Nearly Half of Fortune 500 Companies Engaged in Major Climate Initiatives. David Gardiner and Associates.* https://www.dgardiner.com/fortune-500-climate-initiatives-2023/\\n\\n[2] European Investment Bank, Kalantzis, F., & Cimini, F. (2023). What drives firms\\u2019 investment in climate action? : evidence from the 2022-2023 EIB investment survey, European Investment Bank.\\n\\n[3] Fortune global 500 climate commitments. Climate Impact Partners. (2024). https://www.climateimpact.com/news-insights/fortune-global-500-climate-commitments/ \\n\\n[4] LSEG Data & Analytics. (2023) *Environmental, social and governance scores from LSEG.* https://www.lseg.com/content/dam/data-analytics/en_us/documents/methodology/lseg-esg-scores-methodology.pdf\\n\\n[5] Shek, Katia. (2023, January 30). *Patagonia: ESG's Golden Child. Global Research and Consulting Group Insights.* https://insights.grcglobalgroup.com/patagonia-esgs-golden-child/\\n\\n[6] InfluenceMap. (2022, September). *Big oil\\u2019s real agenda on Climate change 2022.* https://influencemap.org/report/Big-Oil-s-Agenda-on-Climate-Change-2022-19585 \\n\\n[7] Buchner, Barbara. (2023, November 2). *Annual finance for climate action surpasses USD 1 trillion, but far from levels needed to avoid devastating future losses. Climate Policy Initiative.* https://www.climatepolicyinitiative.org/press-release/annual-finance-for-climate-action-surpasses-usd-1-trillion-but-far-from-levels-needed-to-avoid-devastating-future-losses/\\n\\n[8] Damodaran, Aswath. (2024, January). *Capital Expenditures by Sector.* https://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/capex.html\"}", "{\"title\": \"Response to Reviewer mQaB\", \"comment\": [\"Thank you for your insightful comments on our paper. We will work on incorporating your comments in our paper to increase the significance of our contributions. We would like to address some of your comments below.\", \"**Suggested literature**: We appreciate the list of literature you provided, which covers both MARL benchmarking papers from the ML community and empirical analysis of the climate social dilemma and climate ESG awareness of insurance companies from the climate community. We will cite these references in our paper update and use them to guide our future work.\", \"**Proposed new experiments to address the detailed concerns**: We appreciate the point of creating a richer environment. We propose to address your concerns by running some new scenarios as listed below. Please kindly let us know if these scenarios sufficiently address your feedback, or if there are additional analyses you would recommend.\", \"> increasing the number of companies and investors\", \"Thank you for the suggestion on running more agents. We agree with this. Post-submission we have developed a Jax version of the environment which allows for scalable and performant simulations. We will run larger-scale scenarios with more agents (eg. 25 companies + 25 investors) to test the robustness of our findings. We will share updates as soon as the new results are available. And we will also provide the Jax code in our updated version of the paper.\", \"> learn parameters and behaviors from real data\", \"To our knowledge, there isn\\u2019t a comprehensive database that currently tracks company behaviors in a way that can be readily analyzed, especially since the EU's ESG disclosure mandate only launched this year and the U.S. mandate is still pending. Therefore, we propose to use public information to estimate (1) the percentage of companies that invest in climate mitigation; and (2) for those companies, the amount of capital they allocate to mitigation.\", \"According to public sources (Miller, 2023; European Investment Bank, 2023; Climate Impact Partners, 2024), about 50% of large companies are investing in mitigation.\", \"We propose a range of 0.1\\\\~1% of capital allocated to climate mitigation based on the following reasoning.\", \"As noted earlier, there is no readily available database that tracks companies' actual climate spending. For instance, while the UK is one of the few large economies to implement a disclosure mandate, it does not require companies to report specific climate expenditures. Moreover, fewer than 5% of companies voluntarily disclose such information (LSEG, 2023).\", \"Patagonia spends \\\\~0.3% of its capital annually on climate initiatives (GRC Insights, 2023).\", \"The top five oil companies allocated 12% of CAPEX to low carbon activities annually, which translates to about 1.2% of their capital (InfluenceMap, 2022).\", \"About 1% of global GDP goes to climate finance, which includes public spending (Climate Policy Initiative, 2023). We can view GDP as roughly the counterpart of a company's sales. According to an NYU database, the average sales/capital ratio across sectors is 0.8~1.28 (Damodaran, 2024), i.e. on average sales is on par with capital in terms of order of magnitude. So we can also seed with 1%, although this is likely an upper bound as public spending tends to be higher than private spending in this domain.\", \"We will run a simulation where for the first 5 years 50% of companies spend 0.1\\\\~1% of capital on mitigation. We randomly select which 50% but keep it consistent. Could you please advise if this suggested experiment would meet your criteria?\", \"> include more social outcome metrics (in addition to the final climate risk level and the final total market wealth, at the end of the simulation period)\", \"We can add more social outcome metrics as you mentioned, such as the total number of severe climate events, or financial losses due to climate events during a certain period. Please let us know if there are other specific metrics you would recommend.\", \"> include additional features, such as agent bankruptcy\", \"To clarify, we do allow agents to go bankrupt if their capital turns negative. When a company goes bankrupt, it is blocked from further actions, and investors holding equity in the company lose their investment. This is briefly mentioned in Section 3 *Company Action Space*, and we will make it clearer in the updated version.\"]}", "{\"title\": \"Notes on paper update\", \"comment\": [\"We would like to inform the area chairs and reviewers that we have uploaded an updated version of our paper and supplementary materials with the following changes. We sincerely thank the reviewers for your valuable feedback, comments, and suggestions, as well as the time and effort dedicated to helping us improve our work.\", \"Implementation code in both PyTorch and JAX in supplementary materials.\", \"Additional experiments (presented in Appendix 11) which yield directionally consistent conclusions as our main text.\", \"Scaled up the number of agents to 10-company, 10-investor, and 25-company, 25-investor\", \"Initialized company agents with real-world company actions\", \"Setting agents' decisions as locked-in for five years to simulate capital flexibility challenges\", \"Setting companies' climate resilience parameter as random and vary across events and companies to simulate unpredictability of climate event damage\", \"Implementing a more strict bankruptcy mechanism\", \"We have added a problem setting section to introduce the key trade-offs, and further grounded our work within existing economic literature. In the introduction and conclusion, we clarified that our goal is to develop a first-principle model that highlights the core incentive structures of the problem, rather than to fully capture real-world complexities. For future work, we plan to incorporate additional variations, including reviewer suggestions that could not be fully developed during the rebuttal phase. This will enable fellow researchers to select the level of granularity that best suits their needs.\", \"We reorganized the results section to present a clearer story. Additionally, we incorporated writing and presentation improvements and included references suggested by the reviewers. Some parts of the mathematical details of the environment are moved to the appendix.\"]}", "{\"metareview\": \"The paper introduces InvestESG, a multi-agent reinforcement learning (MARL) benchmark designed to study climate investment decisions as a social dilemma. The benchmark models incorporate decision-making in the presence of ESG-conscious investors under climate risk uncertainty. Agents balance profit-driven objectives with long-term climate resilience through investments in mitigation, greenwashing, and adaptive strategies. The paper presents extensive simulation results using scaled agent populations, dynamic financial shocks, and diverse policy scenarios.\\n\\nAll reviewers acknowledged the importance and potential high impact of the research question. The simulation results are consistent with real-world data and could provide valuable insights for policymakers. On the other hand, given the significance of the domain, the main concerns raised by the reviewers are whether the simulated platforms generalize and scale. Regarding generalizability, the authors point out that their assumptions are grounded in economic and finance models and that the results align with real-world observations. However, traditional economic models often provide additional insights into their results (e.g., conditions under which the outcomes hold). This aligns with 9btn\\u2019s concern on the current work: \\u201cDepending on a few assumptions and a few modeling changes, we could get the model to do completely different things.\\u201d It might be helpful to include a discussion and results on the robustness of the findings\\u2014for example, how sensitive the results are to modeling choices. Regarding scalability, the authors have promised to provide additional results, e.g., with more agents.\\n\\nOverall, this is a borderline paper that could go either way. Due to its potential high impact, I lean towards recommending acceptance. If accepted, I strongly suggest that the authors address the reviewers\\u2019 comments in their revision.\", \"additional_comments_on_reviewer_discussion\": \"The main points are summarized above. One reviewer is clearly in support of the paper, while the other two are more reserved. They mentioned that they won't object the paper being accepted if there is space but are not enthusiastic.\\n\\nI am tentatively recommending acceptance, though the paper could reasonably go either way.\"}", "{\"comment\": \"Thank you for following up on the proposed experiments and ideas. I apologize for the brevity of my response: as all concerns have been addressed, I have opted to raise my score to 8.\"}", "{\"comment\": \"Thank you for your detailed response to my concerns.\\n\\nBelow, I present the responses in the same order that the authors opted have opted for.\\n\\n---\\n\\n> Limited number of agents\\n\\nThank you for your response. Given that the closest related work mentioned ([1]) runs their version with 27 agents, 25 agents seems like more than enough. In particular, it will be of interest to see if the conclusions for policymakers are robust to this increase in the number of agents, considering the implications of figure 9 d). Also, section 9.1's reference to figure 9 d) discusses resilience spending, but the figure itself plots climate risk as a function of the number of agents.\\n\\n> grounded in economics literature\\n\\nThank you for your response. I believe the inclusion of such a discussion in the paper would be more than sufficient.\\n\\n> flexible investments\\n\\nA 5-year lock-in period could certainly help simulate the lack of flexibility of capital.\\n\\n> binary investment decisions\\n\\nThank you for your response. Should the number and diversity of agents increase (as suggested above), it might not be necessary to modify the action space, which could complicate the learning dynamics. I do not believe this is a critical change.\\n\\n> results section structure \\n\\nThank you, this would more than satisfy my concerns.\\n\\n> Beta parameter sensitivity analysis\\n\\nThank you for your response. This highlights for me a limitation of the proposed framework. The agents, by training over many episodes, essentially learn a representation of the unobservable ground truth damage climate-economic damage function that relates climate events to economic damage as a function of their decisions. However, in reality, there is significant uncertainty in the level of economic damages (e.g. see [this article, section 3.4](https://link.springer.com/article/10.1007/s10640-015-9965-2#:~:text=A%20third%20approach%20to%20understanding,Page%202007%3B%20Macal%20and%20North)). This clashes for me with the conclusion that companies are insensitive to greenwashing efforts, as the supposed uncertainty around climate change damages might lead certain companies to greenwashing to please investors, while simultaneously disregarding (knowingly or unknowingly) future impacts.\\n\\n---\\n\\nThank you again for your detailed response. \\n\\nAn interesting direction of future work for this would be enabling investors to *learn* their own ESG consciousness levels over many episodes. If you consider a setup where the increases in probabilities of catastrophic events are somewhat stochastic, then investors could learn to use the history of catastrophic events to update their preference for ESG consciousness during a given episode.\", \"this_brings_up_another_important_direction_of_future_work\": \"allowing for companies that go bankrupt. A different perspective on emissions is to look at them as a mostly-unmeasured negative externality vs the measured economic performance of different companies. If a company goes bankrupt in 2050 after emitting heavily for 50 years and employing no mitigation, then the mitigation effort must be undertaken by others to reach the optimal collective trajectory. Including the possibility of bankruptcy would improve the realism of the simulation, while also providing a new interesting motivation for greenwashing. I'd be interested to see what impact this would have on ESG-conscious investors. This could also include the cost of climate audits to overcome greenwashing, sharing of such information among investors, etc.\\n\\nAnother interesting work would be to consider investors that have a preference for companies that have a history of ESG-conscious investors.\\n\\nDespite the unsound approach of learning a representation of the ground truth damage function over many episodes, I am conscious of the overwhelming diversity of design choices in such a situation. I believe this work is for the most part sound and heads in an **impactful, novel research direction**, paving the way for very interesting future research. Normally, I would raise my score to a 7, but this option is not available this year. Therefore, **I maintain my score** for now. However, I am happy to continue the discussion around now-raised ground truth limitation, should you believe this is not a caveat for the conclusions you draw, or should you have alternate solutions to propose. I am also amenable to additional results based on the Jax rewrite.\\n\\nI wish to reiterate the **novelty and impact** of this proposed submission. I look forward to seeing where future work on this project goes.\\n\\n[1] Zhang, Tianyu, et al. \\\"AI for global climate cooperation: modeling global climate negotiations, agreements, and long-term cooperation in RICE-N.\\\" arXiv preprint arXiv:2208.07004 (2022).\"}", "{\"title\": \"Global follow-up comment\", \"comment\": \"As the discussion period is coming to a close, we would like to provide further details and explanation of the 5 new experiments we ran during the rebuttal period to address reviewers\\u2019 suggestions and comments. We ask that reviewers please let us know if they have further questions or comments before discussion closes tomorrow.\\n\\n**Scaled up the number of agents:**\\n\\n*Requested by reviewers 9btN, mQaB, fPY9*\\n\\nFigure 8 in Section 11 shows the results of scaling up our simulation from the original 5 companies and 3 investors (5+3) to both 10+10 and 25+25 agents. As is evident in Figures 8(a-b), we obtained results consistent with our original findings: when investors have the ESG-consciousness parameter set to 10, overall climate risk is significantly reduced compared to the default case where both investors and companies are purely profit motivated (no ESG-consciousness).\\n\\n**Initialized company agents with real-world company actions:**\\n\\n*Requested by reviewer mQaB*\\n\\nFigure 9 (d-f) in Section 11 shows the effect of seeding the initial 5 years of the simulation with parameters based on statistics about real-world companies. Specifically, we seeded 50% of companies to invest between 0.5% and 1% of their total capital into mitigation, based on publicly available data on companies\\u2019 current mitigation efforts, as described in Section 11.3. Figure 9 (d-f) compares the results of this experiment with our original results (Status quo w/ mandate), and we see no significant differences as a result of this experiment. We hypothesize this is because in our original results companies explore randomly in the first few timesteps, and thus successfully explore mitigation as a strategy. However, they eventually learn not to mitigate because since there are no ESG-conscious investors, \\u2018defecting\\u2019 by not mitigating is the optimal strategy, as supported by the Schelling diagram in Figure 3(a). \\n\\n\\n**Set agents' decisions as locked-in for five years to simulate capital flexibility challenges:**\\n\\n*Requested by reviewer fPY9*\\n\\nFigure 9(g-i) in Section 11 shows the results of an experiment in which the allocation of capital is less flexible, and agents\\u2019 investment decisions are locked in for a period of 5 years. The figure compares our original results (Status quo w/ mandate) to the results of the lock-in experiment. Interestingly, we see that initial mitigation amounts are higher in the locked-in case, leading to lower climate risks for earlier training episodes. We hypothesize this is because companies initially explore mitigation, but it takes some time to learn that it is better to defect. We see that by the end of the training period, both experiments converge to similar values for the final mitigation amount, climate risk, and market wealth. \\n\\n**Increase uncertainty in the amount of economic damage incurred by extreme weather events:**\\n\\n*Requested by reviewer fPY9*\\n\\nThe reviewer made a strong case for the high level of uncertainty surrounding the economic damages of climate change (Farmer et al., 2015). Therefore we conducted an additional experiment in which economic losses from extreme climate events were modeled as Gaussian random variables $L^{C_i}_t \\\\sim \\\\mathcal{N}(\\\\mu, \\\\sigma), \\\\quad \\\\mu = 0.07, \\\\ \\\\sigma = 0.1$ clipped within the range [0,1], varying across both events and companies (described in Section 11.5). The results are shown in Figure 10, which explores several facets of this idea. Figures 10(a-c) show that when uncertainty is higher, climate risk remains higher, and overall market wealth is lower than it is in the default status quo case. This suggests it might be harder for companies to learn to mitigate in the face of such uncertain risks. Figures 10(d-g) investigate how uncertainty affects greenwashing in the presence of ESG-conscious investors. We see that when economic damage is more uncertain, companies invest more in greenwashing to attract immediate investment from ESG-conscious investors, while retaining a similar level of mitigation efforts. Figures 10(h-k) pertain to the stricter bankruptcy mechanism, as described below.\"}", "{\"summary\": \"The paper concentrates on developing a multi-agent reinforcement learning framework (which they name InvestESG), to be used to student individual and collective outcomes from company investment and climate risk. It is an overall well written paper on a timely and relevant problem, for which we clearly need to better understand how to drive investors decisions to align individual and collective objectives. The paper is completely application-driven, in the sense that the authors do not develop new methodology, or a new solution approach. They mainly focus on describing how the framework should look like for the purpose of the application. They then generate a lot of simulation results to study various aspects of the problem (e.g., greenwashing, different levels of ESG consciousness, etc.)\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The main strength of the paper is the topic is focuses on, and the idea to bring some momentum towards the development of a general platform to simulate a multi-agent system with focus on climate risk and company behaviour. Another strength (but which may also be seen as a weakness - see below) is that the framework is simple, in the sense that it is easily interpretable and flexible enough to interact with e.g. policy makers. The authors also aim to produce some relevant results, which may be seen as of value by policy-makers.\", \"weaknesses\": \"As mentioned by authors in a last part of the paper, maybe the main weakness is the simplicity of the framework, which may prevent a broad audience from accepting that it realistically model a real-world situation, and that it may be ring some relevant insights to be used as input to policy-making. In my opinion, it feels like an oversimplified and stylised approach where, depending on a few assumptions and a few modelling changes, we could get the model to do completely different things. Therefore, I believe that quite more work is necessary for such a paper, starting with the importance of the underlying assumptions, assessing the impact of modelling choices, sensitivity analyses, etc. I am not critical of the fact the authors are engaging in such developments - I am saying instead that I feel more work is necessary before sharing this work/paper with the world.\", \"questions\": \"I think some of the key points to consider are:\\n- questioning assumptions, e.g. rationality of the agents, why they would behave as if employing RL, etc.\\n- convincing us that simulating a system with a limited number of agents provides us with insights that are relevant for systems for very large number of agents\\nI clearly recognise that such issues may be more generally valid for the case of MARL environments and broader than for the case of this paper only. However, here, in view of the importance of the application, I find these issues particularly relevant.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your reply, and your willingness to make changes to your paper.\"}", "{\"title\": \"Response to Reviewer fPY9\", \"comment\": \"(cont.)\\n> Figure 7 b) is highly confusing. It looks like with climate information, risk is maximized and market wealth is minimized.\\n- Thank you so much for this catch. It turns out that it was an unintentional mistake from our side that we flipped the labels around. The actual result is that with climate information, risk is minimized and market wealth is maximized. We will correct this mistake in our updated version of the manuscript.\\n\\n> Figure 2b could be improved by showing the average number of events each year across many episodes, as opposed to a single episode.\\n- Thank you for the suggestion. We aimed to give an example of environmental dynamics and the progression of the environment in a single episode, and therefore we didn\\u2019t take the average of many episodes. However, based on your suggestion and reviewer mQaB\\u2019s suggestion to include more climate outcome metrics, we will provide additional plots of the average number of climate events across many episodes in the results for different scenarios. \\n\\n**Presentation and Questions**\\n> Figures and plots\\n- Thanks for the very constructive feedback regarding the presentation of the paper. We apologize for the inconvenience. We will increase the readability of our plots and fix the typo in the updated version accordingly. \\n\\n> results section could benefit from additional structure\\u201d\\n- Thank you for your suggestion. Our intention was to tell the story in a progressive way, starting from the baseline, then adding the ESG disclosure policy, followed by raising questions about investors with various level of environmental consciousness, and studying the impacts of greenwashing and sharing additional climate information. Following your feedback, we will restructure the result section by presenting the main story of baseline and behaviors of investors with different levels of consciousness, and delineate these central findings from the additional results from greenwashing and climate information. Would this satisfy your concerns? We will incorporate your feedback in the problem setting section in regard to introducing the important concepts.\\n\\n> What does a sensitivity analysis of the Beta parameter do to the results? Why do you think that agents are so insensitive to the value of Beta as shown in figure 6 b)?\\n- We would like to use a sensitivity analysis of greenwashing coefficient \\u03b2 to elaborate our point that companies\\u2019 decision to abandon greenwashing isn\\u2019t due to the relative cost of greenwashing compared to real mitigation. Although companies that achieve a high ESG score through cheap greenwashing, which is controlled by \\u03b2, attract more investments from investors, the increase in investment from investors by greenwashing isn\\u2019t comparable to their capital loss when exposed to severe climate events. The company agents learned greenwashing isn\\u2019t the most cost-effective way to maximize their capital no matter how much it costs.\\n\\n> Do observations include past climate events?\\n- The observation doesn\\u2019t include the past climate events by default (in the baseline case), but it is configurable in the environmental setting. In figure 7, we studied the effects of having additional climate information on mitigation, which include the number of severe climate events that occurred in the past year.\\n\\n> How did you calibrate the 0.5% investment of capital into mitigation?\\n- Since company actions are continuous, we needed to select a specific cooperation level to illustrate the social dilemma structure in the Schelling diagram. We chose 0.5% as a representative benchmark, aligning with Patagonia's aggressive cooperative approach, where approximately $100 million (around 0.3% of its $3 billion valuation) is allocated annually to climate initiatives. The 0.5% choice is thus in a similar order of magnitude, and the shape of the diagram remains consistent across other values we tested.\\n\\nThank you again for your valuable feedback. We hope our responses address your concerns, and we welcome any further questions or comments you may have. Your insights are instrumental in helping us strengthen this work, and we look forward to your continued guidance during the rebuttal phase.\", \"reference\": \"[1] P\\u00e1stor, \\u013d., Stambaugh, R. F., & Taylor, L. A. (2021). Sustainable investing in equilibrium. Journal of financial economics, 142(2), 550-571.\\n\\n[2] Pedersen, L. H., Fitzgibbons, S., & Pomorski, L. (2021). Responsible investing: The ESG-efficient frontier. Journal of financial economics, 142(2), 572-597.\\n\\n[3] Lyon, T. P., & Maxwell, J. W. (2011). Greenwash: Corporate environmental disclosure under threat of audit. Journal of economics & management strategy, 20(1), 3-41.\\n\\n[4] Shek, Katia. (2023, January 30). *Patagonia: ESG's Golden Child.* https://insights.grcglobalgroup.com/patagonia-esgs-golden-child/\"}", "{\"comment\": \"Thank you for the detailed responses to my and other reviewers' comments and suggestions, including descriptions of additional proposed experiments, of the JAX implementation, and the plan to include a link to the JAX-based implementation in the final version of the paper.\\n\\nBased on these, the paper seems to have greatly improved, and I look forward to reading the new version, once submitted. \\n\\nAs all the reviewers commented, the main strength of the paper is its potential contribution on a timely and crucial topic. The main aspects I will focus on when reviewing the new version of the paper include: (i) advances towards more realistic agent behaviour and scenarios; (ii) model validation; (iii) quantitative results; (iv) discussions and critical evaluation of the assumptions, results and InvestESG. Ultimately, does the paper provide sufficient evidence for the statement in the abstract: \\\"However, when a critical mass of investors prioritizes ESG, corporate cooperation increases, which in turn reduces climate risks and enhances long-term financial stability. \\\"?\"}", "{\"comment\": [\"Thank you for the very valuable and detailed feedback. We are grateful for your insights and the time you took to review our manuscript. Below, we address your comments.\", \"**Soundness**\", \"> The number of agents is limited.\", \"We acknowledge that the number of agents in our experiment is limited compared to the real world. Post-submission we have developed a Jax version of the environment which allows us to run the simulations much faster and at a scaled level. We will run larger-scale scenarios (e.g. 25 companies and 25 investors) to test the robustness of our findings. We believe that this number would match the expectation for a reasonably large group size as mentioned in the paper you suggested, unless you have alternative suggestions on the number of agents. We will share updates as soon as the new results are available. And we will also provide the Jax code in our updated version of the paper.\", \"> grounded in the economics literature.\\u201d\", \"We really appreciate your suggestion on \\u201cStarting with an existing model of economic agents (with a citation), highlighting its limitations for InvestESG and then explaining how you extend to agents to accommodate for these limitations would be a much more compelling presentation.\\u201d As a quick response, we want to note the following reasoning. We would love your feedback on if this meets the need, and we can revise the paper to make the presentation stronger as you suggested.\", \"Our work is grounded in the economics and finance literature that studies similar questions with theoretical models. For example, in a recent highly impactful study published in a top finance journal, Pastor et al. (2021) built an analytical model for a single-period equilibrium that examines firm-investor tradeoffs much similar to ours. In their model, firms choose the \\u201cgreenness\\u201d level, which affects their cost of capital, while investors select portfolios to maximize utility derived from both financial returns and their \\u201ctaste\\u201d for green assets. Other influential studies in financial economics use comparable or even simpler frameworks. For example, Pedersen et al. (2021) model a single-period equilibrium with investors differentiated by their types of ESG preference, assuming fixed ESG characteristics for firms.\", \"Our simulations agree with the key findings in these papers suggesting that ESG-conscious investors prioritize green companies and promote positive social impacts by shifting investment towards green firms.\", \"Our framework, albeit still simple, allows for investigating more realistic and complex settings than the existing financial economics literature. These include (1) climate evolution, where companies mitigate emissions to attract investment and reduce long-term climate risk exposure\\u2014often omitted in financial studies (2) the potential for greenwashing, linked to information asymmetry (e.g., Lyon and Maxwell, 2011), and (3) a dynamic multi-agent game where firms and investors interact over many periods, which is more realistic. The equilibrium of such a setting is challenging to solve analytically or numerically (Pakes and McGuire, 2001), and our simulations provide insights here.\", \"Despite the simplicity of the framework, our simulation results match real-world data collected from countries that have implemented an ESG disclosure mandate of some sort as we discussed in section 4.\", \"> investments in capital last and they are not flexible; longer capital investment timelines\", \"Thank you for your insightful point regarding the investment timeline. To address this, we propose running an additional scenario in which agents can make investment decisions only every five periods, reflecting a five-year lock-in on their choices. Would this approach adequately address your concern?\", \"> Investor decisions are binary, as opposed to continuous across all companies.\", \"We acknowledge that binary investor decisions simplify reality. However, to clarify, in our current setup, investors do not select only one company; they choose a set of companies to invest in each timestep, and distribute their current capital evenly to each of the selected companies, which allows for diversification within the binary framework. Further, because they can re-allocate investments at each timestep, they can achieve fine-grained diversification over the course of one episode. However, we agree that using continuous decision variables that sum to 1 would more accurately reflect real-world behavior, in which an investor does not always invest evenly. Could you please advise if you consider this a critical change that we should make during the rebuttal period? We are currently planning to prioritize experiments incorporating more agents, real-world data (based on reviewer mQaB\\u2019s suggestion), and the longer capital investment timelines experiment above.\"], \"title\": \"Response to Reviewer fPY9\"}", "{\"comment\": \"(cont.)\\n\\n**Bankruptcy graphs and a stricter bankruptcy mechanism:**\\n\\n*Requested by reviewers mQaB, fPY9*\\n\\nAlthough our original results did include the ability for companies to go bankrupt, since both reviewers mQaB, fPY9 were interested in how more companies going bankrupt might affect the results, we implemented a stricter bankruptcy mechanism, where if a company agent has a margin worse than -10% for 3 consecutive years, it is deemed as bankrupt (see Section 11.6). Figures 10(h-k) show the results. We see that even with this stricter bankruptcy mechanism, the results are almost identical to the original status quo case, where essentially no companies go bankrupt, there is little mitigation spending and high climate risk. However, interestingly when we combine the strict bankruptcy mechanism with more uncertain economic damage, we see that the number of bankrupt companies is significantly higher. In turn, companies invest significantly more in mitigation, leading to lower climate risk. We hypothesize that in the scenario where economic damage is more uncertain, companies are more likely to go bankrupt, so we hypothesize this incentivizes them to spend more on mitigation to avoid bankruptcy (which has much lower utility). This in turn leads to lower climate risks. However, the market wealth is also lower in the case where companies more frequently go bankrupt due to climate damage.\", \"title\": \"Global follow-up comment (cont.)\"}" ] }
2TIYkqieKw
DICE: Data Influence Cascade in Decentralized Learning
[ "Tongtian Zhu", "Wenhao Li", "Can Wang", "Fengxiang He" ]
Decentralized learning offers a promising approach to crowdsource data consumptions and computational workloads across geographically distributed compute interconnected through peer-to-peer networks, accommodating the exponentially increasing demands. However, proper incentives are still in absence, considerably discouraging participation. Our vision is that a fair incentive mechanism relies on fair attribution of contributions to participating nodes, which faces non-trivial challenges arising from the localized connections making influence ``cascade'' in a decentralized network. To overcome this, we design the first method to estimate Data Influence CascadE (DICE) in a decentralized environment. Theoretically, the framework derives tractable approximations of influence cascade over arbitrary neighbor hops, suggesting the influence cascade is determined by an interplay of data, communication topology, and the curvature of loss landscape.DICE also lays the foundations for applications including selecting suitable collaborators and identifying malicious behaviors. Project page is available at https://raiden-zhu.github.io/blog/2025/DICE.
[ "Decentralized Learning", "Data Influence", "Data Valuation", "Contribution Attribution", "Incentive Mechanism" ]
Accept (Poster)
https://openreview.net/pdf?id=2TIYkqieKw
https://openreview.net/forum?id=2TIYkqieKw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v0nhy2fW6j", "o4ZOA64y3l", "nwzA63cw3l", "kllwssUdtQ", "ibghGBJnak", "iDRI6QZa8Z", "gx2R9JsF54", "dAiAMuP4zf", "a4NcdJls0A", "TZnTMdcIb6", "RGEMLpGspN", "LgX6P36CeB", "JkO5ouPFfE", "IA8Hb84Aeg", "GVMtKNgAj5", "GCDlWx1olT", "G6FJOrDd1V", "F7lBtiglit", "34ipQ3RW1f", "0u0b86aRYL", "0I7XueosY4" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733620148817, 1732586190200, 1732296817262, 1732297030919, 1732517164306, 1732297010437, 1732571280687, 1732621385435, 1732295958215, 1732590748145, 1730692651763, 1730812252661, 1732297731011, 1737523851224, 1732297663172, 1732588175661, 1732297245674, 1732296869087, 1730029602063, 1732620349154, 1732523789845 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7615/Area_Chair_eVLp" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Reviewer_kSES" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Reviewer_f3R7" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Reviewer_f3R7" ], [ "ICLR.cc/2025/Conference/Submission7615/Reviewer_kSES" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Reviewer_4yzV" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ], [ "ICLR.cc/2025/Conference/Submission7615/Reviewer_4yzV" ], [ "ICLR.cc/2025/Conference/Submission7615/Reviewer_4yzV" ], [ "ICLR.cc/2025/Conference/Submission7615/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces the DICE framework, a systematic approach to measure cascading data influence in decentralized learning networks, addressing a critical gap in data contribution evaluation. With rigorous theoretical foundations and diverse experimental validations, it lays the groundwork for equitable incentive mechanisms and effective collaboration in decentralized systems. There were concerns in the paper on practical use cases, experiments, and related work in the original reviews which seem to have been addressed in the rebuttal. Further, given that all the reviews are positive after rebuttal, I recommend acceptance of this work.\", \"additional_comments_on_reviewer_discussion\": \"NA\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for your kind feedback and support! We are delighted that all your concerns have been cleared!\"}", "{\"title\": \"Author Response (Part 1/4)\", \"comment\": \"We thank the reviewer for the helpful comments, especially for pointing out the connections between clustered FL. We have carefully revised our manuscript in accordance with your suggestions. Hope all your concerns are cleared.\\n\\n**Q1**: Please motivate the approach with practical use-cases. \\n\\n**A1**: Thanks. DICE offers broad applicability in decentralized learning scenarios, which we summarized in Section 4.3 and Appendix B.1. For full details and results, please kindly consult the following anonymous link: \\n\\n**https://anonymous.4open.science/r/Anonymous-Repo-for-Rebuttal-793D/Practical%20applications/README.md**\\n\\n**Practical Use Case 1: Efficient Collaboration via Contribution-Based Reweighting**. It remains an open problem to set up the communication topology in decentralized learning. This challenge is largely attributed (1) local data remains private, while only parameter communication is allowed \\u2013 limited insights of neighbor information are accessible, (2) the absence of a central authority to manage everything, etc. Based on DICE, we may design an adaptive topology reweighting method, as an efficient mechanism for participants adjust their collaboration strategy based on their neighbors\\u2019 contributions. to estimate their neighbors' contributions using proximal influence. Establishing an optimal communication topology in decentralized learning remains a significant open challenge. This challenge arises primarily due to: \\n- The privacy-preserving nature of decentralized learning, where only model parameters are shared while local data remains private, limiting the information participants have about their neighbors. \\n- The absence of a central authority, which prevents global coordination and decision-making. \\n\\nDICE facilitates efficient collaboration by providing a framework for participants to estimate their neighbors' contributions toward reducing their own validation loss. By leveraging this estimation, participants can adaptively reweight their gossip weights to prioritize communication with neighbors who can positively impact their learning process. DICE supports the formation of adaptive communication topologies without requiring global coordination, addressing key challenges in decentralized learning. \\n\\nEach participant $k$ can reduce test loss on their local dataset by minimizing the sum of proximal influences from its neighbors:\\n\\n$$\\nI^{k,j}(z_j^t, z_k') = -\\\\sum_{j \\\\in N_{\\\\text{in}}^{(1)}(k)}\\\\eta^t W_{k,j}^t q_k \\\\nabla L(\\\\theta_j^t; z_j^t)^\\\\top \\\\nabla L(\\\\theta_k^{t+1}; z_k').\\n$$\\n\\nSpecifically, participant $k$ can reweight $W_{k,j}^t$ to align better with the gradient term $q_k \\\\nabla L(\\\\theta_j^t; z_j^t)^\\\\top \\\\nabla L(\\\\theta_k^{t+1}; z_k')$, thereby reducing $I^{k,j}(z_j^t, z_k')$. A reweighting strategy can be implemented as follows:\\n\\n$$\\nW_{k,j}^t = \\\\frac{\\\\nabla L(\\\\theta_j^t; z_j^t)^\\\\top \\\\nabla L(\\\\theta_k^{t+1}; z_k')}{\\\\sum_{l \\\\in N_{\\\\text{in}}^{(1)}(k)} \\\\nabla L(\\\\theta_l^t; z_l^t)^\\\\top \\\\nabla L(\\\\theta_k^{t+1}; z_k')},\\n$$\\n\\nwhich ensures row-stochasticity ($\\\\sum_{j \\\\in N_{\\\\text{in}}^{(1)}(k)} W_{k,j}^t = 1$).\\n\\nWe conducted experiments to validate this algorithm, utilizing Decentralized SGD to train ResNet-18 on CIFAR-10 and CIFAR-100. The evaluation compares DICE-reweighted topologies against pre-defined topologies, such as ring and exponential configurations. To emphasize the importance of effective collaboration strategies in heterogeneous environments, we simulated data heterogeneity by partitioning the datasets using Dirichlet sampling with $\\\\alpha = 0.3$. Each participant\\u2019s performance was measured on its local validation set, and the average validation accuracy across all participants was used as the comparison metric. The experiments were conducted with 16 participants, each utilizing a local batch size of 128 and a learning rate of 0.1. The results are summarized in the following Table and Figures. \\n\\n| Topology | Merging Strategy | CIFAR-10 | CIFAR-100 | \\n| --------------- | -------------------- | ------------ | ------------- | \\n| Exponential | Fixed | 83.83 | 53.01 | \\n| | DICE-reweighted | **85.53** | **55.25** | \\n| Ring | Fixed | 86.92 | 56.32 | \\n| | DICE-reweighted | **87.21** | **61.26** | \\n\\nThe experimental results demonstrate that the DICE-reweighted adaptive gossip strategy significantly outperforms the ring and exponential topologies in terms of stability, convergence speed, and validation accuracy - it has more stable training and higher validation accuracy on CIFAR-10, while exhibiting faster convergence and improved validation accuracy on CIFAR-100.\"}", "{\"title\": \"Author Response (Part 4/4)\", \"comment\": \"**Q3**: Please provide all necessary details to replicate the results. For instance, no indication on what the anomaly is vs normal client. Please evaluate the impact of batch size (smaller and larger values), to show the scalability of the technique and its robustness in showing the compatibility among clients.\\n\\n**A3**: Thanks and addressed. We have carefully provided all necessary details in Section 5 and the Appendix, including learning rate, batch size, training epochs, and different types of anomalies. To secure the reproducibility, we will release the source code package. \\n\\nAnomalies are generated by randomly flipping labels of training data or adding Gaussian noise to data features, please kindly refer to [5]. Furthermore, we note that varying the learning rate, batch size, training epochs, or types of anomalies, our conclusions are consistently robust, which highlights the reliability of the experimental results. \\n\\nFor further details and results with different hyperparameter setups, please kindly consult the following anonymous link:\\n\\n**https://anonymous.4open.science/r/Anonymous-Repo-for-Rebuttal-793D/Sensitivity%20analysis/README.md**\\n\\n**Reference**\\n\\n[1] Three Approaches for Personalization with Applications to Federated Learning, 2020. \\n\\n[2] An Efficient Framework for Clustered Federated Learning, NeurIPS 2020. \\n\\n[3] Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints. IEEE Transactions on Neural Networks and Learning Systems. \\n\\n[4] Clustered Federated Learning via Gradient-based Partitioning, ICML 2024. \\n\\n[5] Anomaly Detection and Defense Techniques in Federated Learning: A Comprehensive Review. Artificial Intelligence Review.\"}", "{\"title\": \"Acknowledge the rebuttal\", \"comment\": \"Thank you for your feedback. After carefully considering your rebuttal, I believe the current score accurately reflects the work's strengths and areas for improvement. I would like to maintain the current score based on this evaluation.\"}", "{\"title\": \"Author Response (Part 3/4)\", \"comment\": \"We also note that there are key differences between DICE and gradient-based CFL.\\n\\n**Motivation**. \\n- Clustered Federated Learning addresses the challenge of data heterogeneity in federated learning by grouping clients with similar data distributions, without creating personalized models for each client. Gradient-based similarity is commonly used to **cluster clients**, based on the intuition that clients with similar data-generating distributions would share similar gradients. \\n- In contrast, DICE is motivated by a fundamentally different challenge: **quantifying the contributions of participants** in a decentralized learning system from a data influence perspective. Specifically, DICE provides a mechanism to measure how data at one node influences learning outcomes across the network, enabling the identification of pivotal contributors or potentially malicious actors in decentralized settings. \\n\\n**Theoretical Formulation**. While gradient-based CFL focuses on **peer-level gradient similarity** in one-hop, i.e., computing similarity metrics between pairs of nodes, DICE extends this concept to evaluate **multi-hop influence propagation** in the network. DICE can systematically quantify how influence from one node diffuses across multiple intermediate nodes in the graph, incorporating factors such as network topology and optimization curvature. Mathematically, DICE generalizes gradient similarity into a **non-trivial extension for decentralized networks** by introducing the notion of $r$-hop influence, which accounts for: \\n- The topological structure of the communication network. \\n- The curvature information (Hessian matrices) of intermediate nodes. \\n- The cascading interaction of gradients over arbitrary neighbor hops. \\n\\nSpecifically, the $r$-hop DICE-E influence $I_{DICE-E}^{(r)}\\\\(z_j^t, z^{\\\\prime}\\\\)$ is given by: \\n$$ \\nI_{DICE-E}^{(r)}\\\\(z_j^t, z^{\\\\prime}\\\\) = \\n-\\\\sum_{\\\\rho=0}^{r} \\\\sum_{ \\\\(k_1, \\\\dots, k_{\\\\rho}\\\\) \\\\in P_j^{\\\\(\\\\rho\\\\)} } \\n\\\\eta^{t} q_{k_\\\\rho} \\\\prod_{s=1}^{\\\\rho} W_{k_s, k_{s-1}}^{t+s-1} \\n\\\\nabla L\\\\(\\\\theta_{k_{\\\\rho}}^{t+\\\\rho}; z^{\\\\prime}\\\\)^\\\\top \\n\\\\prod_{s=2}^{\\\\rho} \\n\\\\(I - \\\\eta^{t+s-1} H\\\\(\\\\theta_{k_s}^{t+s-1}; z_{k_s}^{t+s-1}\\\\)\\\\) \\n\\\\nabla L\\\\(\\\\theta_{j}^{t}; z_j^t\\\\), \\n$$ \\nwhere $k_0 = j$, $P_j^{(\\\\rho)}$ denotes the set of all sequences $k_1, \\\\dots, k_{\\\\rho}$ such that $k_s \\\\in N_{out}^{(1)}(k_{s-1})$ for $s = 1, \\\\dots, \\\\rho$, and $H(\\\\theta_{k_s}^{t+s}; z_{k_s}^{t+s})$ is the Hessian matrix of $L$ with respect to $\\\\theta$, evaluated at $\\\\theta_{k_s}^{t+s}$ and data $z_{k_s}^{t+s}$. For further details, please refer to Proposition 3.\", \"this_formulation_highlights_a_key_distinction\": \"**DICE evaluates influence across multiple hops and characterizes the interplay between data, curvature, and communication topology**\\u2014factors beyond the scope of CFL frameworks. While CFL\\u2019s gradient similarity metrics, supported by strong theoretical foundations [3, 4], effectively cluster clients, they are inherently confined to the local, peer-to-peer level similarity, making it insufficient for modeling long-range or cascading influences. DICE extends these concepts by systematically quantifying \\\"influence cascades\\\" through decentralized networks, providing novel insights into how data, topology, and the optimization landscape interact to shape learning outcomes.\\n\\nWe have uploaded the revised version of our paper, which now includes the discussion on multi-hop influence on **page 8**. For your convenience, we provide the official link to the updated version of the paper below:\\n\\n**https://openreview.net/pdf?id=2TIYkqieKw**\"}", "{\"title\": \"Concerns addressed\", \"comment\": \"Thank you for carefully responding to my questions. I believe this makes the paper better and the experiments reproducible.\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you so much for your kind support!\"}", "{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for helpful suggestions. We have carefully revised the manuscript to include new empirical results. Hope all your concerns are addressed.\\n\\n**Q1**: The experiments are weak. \\n\\n**A1**: In rebuttal, we strengthen our empirical results as follows: \\n\\n**Sensitivity Analysis**.\\n\\nWe conduct sensitivity analysis experiments to evaluate the robustness of DICE under varying hyperparameter settings, including learning rate, batch size, and training epochs. Results demonstrate that variations in these parameters (e.g., learning rates of 0.1 and 0.01; batch sizes of 16, 64, and 128 per participant; training epochs of 5,10, 20 and 30) have minimal impact on our conclusions: (1) one-hop DICE-E provides a strong approximation of DICE-GT; (2) anomalies introduced through random label flipping and feature perturbations are readily detectable by analyzing their proximal influence. These findings highlight the robustness of DICE across diverse configurations. \\n\\nFor your convenience, we provide an anonymous link summarizing the main experimental results: \\n\\n**https://anonymous.4open.science/r/Anonymous-Repo-for-Rebuttal-793D/Sensitivity%20analysis/README.md.**\\n\\n**Practical Applications**. \\n\\nDICE offers broad applicability in decentralized learning scenarios. Below, we outline two key use cases. \\n\\n- **Efficient Collaboration between Decentralized Participants:** DICE enables adaptive collaboration in decentralized systems by estimating the contributions of neighboring participants toward reducing its own validation loss. By leveraging this estimation, DICE facilitates dynamic reweighting strategies that adaptively prioritize interactions with more influential peers. This mechanism significantly improves both convergence speed and validation accuracy, as validated by experiments on CIFAR-10 and CIFAR-100. These results demonstrate the effectiveness of DICE in heterogeneous and decentralized learning environments. \\n- **Anomaly Detection:** DICE identifies malicious neighbors, referred to as anomalies, by evaluating their proximal influence, which estimates the reduction in test loss caused by a single neighbor. A high proximal influence score indicates that a neighbor increases the test loss, negatively impacting the learning process. By detecting malicious behaviors such as label-flipping attacks or feature perturbations, DICE can enhance the reliability of decentralized learning systems. \\n\\nFor your convenience, we provide further details and results at the following anonymous link: \\n\\n**https://anonymous.4open.science/r/Anonymous-Repo-for-Rebuttal-793D/Practical%20applications/README.md.** \\n\\n**Q2**: Section 5.3 is unfinished. \\n\\n**A2**: Thanks and addressed. We have carefully revised this section. The updated section highlights the purpose and significance of analyzing influence cascades, supported by detailed visual and experimental evidence. \\n\\nSpecifically, we emphasize: \\n\\n- **Purpose**: Influence cascades validate our theoretical insights by illustrating how data influence propagates through network topology and reveal \\\"power asymmetries\\\" in decentralized learning. \\n- **Findings**: As shown in Figure 1 and Appendix D.4, dominant nodes (nodes with higher outgoing communication weights in $W$) exert significantly larger influence, validating the topological dependency derived in our theory. \\n\\n**Q3**: The notation \\u03b7^t in Theorem 1 is previously appears as \\u03b7_t in Algorithm 1. \\n\\n**A3**: Thanks and addressed. We have carefully revised our manuscript.\\n\\nWe have uploaded a revised version of our paper, which can be accessed via the official link provided below:\\n\\n**https://openreview.net/pdf?id=2TIYkqieKw**\"}", "{\"title\": \"Appreciating your feedback and ready to address further concerns\", \"comment\": \"Thank you for updating the score! We truly appreciate your time and effort. We noticed that the updated score remains marginally below the acceptance threshold, and we would be more than happy to address any further questions or concerns you might have.\"}", "{\"summary\": \"The paper proposes a method for quantifying the impact of data points in decentralized machine learning settings. The influence is measured not only at immediate neighbors but the entire network. This method can be useful for machine unlearning or to develop new incentive mechanisms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-organized, with clear definitions, figures, and explanations that make the methods and results easy to follow.\", \"The paper provides a solid theoretical framework, supported by rigorous proofs and analyses.\"], \"weaknesses\": [\"Need for more details about the practical use of this technique: While the authors use LLMs as one of the examples in the introduction, it might not be the best example to use in this case. It hard to see how this research addresses a practical problem or application that has real-world significance, or how this framework would be relevant for practitioners.\", \"Link with other papers that use gradient to cluster clients should be added, particularly interesting and relevant in the collaborator choice part.\", \"Experiments seem non-exhaustive and many details are missing to replicate the experiments. For instance, no indication on what the anomaly is vs normal client. This is particularly important when using gradients. I expect that the framework would perform differently if the anomaly is label flipping vs if it was noisy features. Additionally, evaluation of the impact of batch size would be particularly important for both scalability and compatibility among clients.\"], \"questions\": \"1) Please motivate the approach with practical use-cases.\\n2) Please discuss link with clustered federated learning, in particular techniques that use gradients to cluster clients. \\n3) Please provide all necessary details to replicate the results.\\n4) Please evaluate the impact of batch size (smaller and larger values), to show the scalability of the technique and its robustness in showing the compatibility among clients.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes the DICE framework for measuring the cascading propagation of data influence\\nin decentralized learning networks. Decentralized learning enables large-scale model training\\nthrough distributed computation, yet the lack of effective incentive mechanisms can lead to unfair\\ncontributions and malicious behavior among nodes. The DICE framework introduces data influence\\ncascades (DICE-GT and DICE-E), which respectively measure the direct and indirect influence of data\\nwithin the network, addressing the limitations of existing data influence measurement methods in\\ndecentralized environments. Experiments validate the consistency and accuracy of DICE across\\nvarious network topologies and demonstrate its potential in practical applications like anomaly\\ndetection and collaborator selection\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The DICE framework is the first to systematically measure the cascading propagation of data\\ninfluence in decentralized learning environments, providing an effective method to assess data\\ncontributions among nodes and filling a gap in data influence evaluation within decentralized\\nnetworks.\\n2. The experiments cover different network topologies (such as ring and exponential graphs) and\\ndatasets (such as MNIST, CIFAR-10, and CIFAR-100), validating the applicability and consistency\\nof the DICE framework across various scenarios.\\n3. The DICE framework provides accurate contribution measurement, laying the foundation for\\ndesigning fair and effective incentive mechanisms in decentralized learning systems, with the\\npotential to foster equitable collaboration within decentralized networks.\", \"weaknesses\": \"1. Figure 1 lacks legend information, making it difficult to understand.\\n2. The performance differences of the DICE framework under different parameters (such as learning\\nrate, batch size, etc.) have not been thoroughly discussed. It is recommended to add parameter\\nsensitivity experiments to demonstrate the impact of different parameter selections on the\\nperformance of the DICE framework, thereby enhancing its practicality.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response (Part 1/2)\", \"comment\": \"Dear AC and Reviewers:\\n\\nWe sincerely thank the reviewers for their helpful comments. We appreciate that reviewers recognize the solidity of our theoretical framework (Reviewers kSES and f3R7), especially for proposing the \\u202f\\u201cgold standard\\u201d data influence measures in decentralized learning and its first-order approximation (Reviewer 4yzV). \\n\\nWe have carefully revised our manuscript accordingly. The main revisions and contributions are highlighted below: \\n\\n**Highlight for Revision** \\n1. We conducted **additional experiments** following reviewers\\u2019 feedback: \\n - Comprehensive sensitivity analysis on hyperparameters (including batch size, learning rate, training epoch, communication topology and the number of participants) to evaluate the robustness of DICE (to address Reviewers kSES and f3R7). \\n - Additional experiments on Tiny ImageNet (to address Reviewer 4yzV). To secure the reproducibility, we have also provided detailed replication instructions in our revised manuscript, including the definition of anomalies methodologies(to address Reviewer f3R7). \\n2. We discussed **practical use cases of DICE** (to address Reviewer f3R7). \\n - **Efficient collaboration between decentralized participants:** DICE enables estimation of the contributions between neighboring participants. Leveraging this, DICE make dynamic reweighting strategies possible, which adaptively prioritizes interactions with more influential peers. This mechanism can significantly improve both convergence speed and validation accuracy, as validated by experiments on CIFAR-10 and CIFAR-100. These results demonstrate the effectiveness of DICE in heterogeneous and decentralized learning environments. \\n - **Anomaly Detection:** DICE strengthens the robustness of decentralized networks by identifying anomalies, such as label-flipping attacks or feature perturbations, by observing the deviations in proximal influence values. This capability is critical for detecting free riders, mitigating malicious behaviors, and maintaining system reliability, even under constraints of limited communication. \\n3. We added a comprehensive **discussion of related works on clustered federated learning** (to address Reviewer f3R7). \\n - There are shared aspects between gradient-based Clustered Federated Learning (CFL) and the one-hop DICE approximation, with DICE-E potentially serving as a more advanced high-order gradient similarity metric for clustering participants in decentralized federated learning\\u2014a promising direction for future work. We have carefully discussed and compared them. We also note that there are **key differences** between DICE and gradient-based CFL. While CFL is confined to peer-level clustering, which is in **one-hop**, DICE non-trivially extend the influence to measuring the **multi-hop influence propagation** to whole decentralized networks. Specifically, the r-hop DICE-E influence $I_{DICE-E}^{(r)}(z_j^t, z^{\\\\prime})$ is given by:\\n$\\nI_{DICE-E}^{(r)}\\\\(z_j^t, z^{\\\\prime}\\\\) = \\n-\\\\sum_{\\\\rho=0}^{r} \\\\sum_{ \\\\(k_1, \\\\dots, k_{\\\\rho}\\\\) \\\\in P_j^{\\\\(\\\\rho\\\\)} } \\n\\\\eta^{t} q_{k_\\\\rho} \\\\prod_{s=1}^{\\\\rho} W_{k_s, k_{s-1}}^{t+s-1} \\n\\\\nabla L\\\\(\\\\theta_{k_{\\\\rho}}^{t+\\\\rho}; z^{\\\\prime}\\\\)^\\\\top \\n\\\\prod_{s=2}^{\\\\rho} \\n\\\\(I - \\\\eta^{t+s-1} H\\\\(\\\\theta_{k_s}^{t+s-1}; z_{k_s}^{t+s-1}\\\\)\\\\) \\n\\\\nabla L\\\\(\\\\theta_{j}^{t}; z_j^t\\\\), \\n$\\nwhere $k_0 = j$, $P_j^{(\\\\rho)}$ denotes the set of all sequences $k_1, \\\\dots, k_{\\\\rho}$ such that $k_s \\\\in N_{out}^{(1)}(k_{s-1})$ for $s = 1, \\\\dots, \\\\rho$, and $H(\\\\theta_{k_s}^{t+s}; z_{k_s}^{t+s})$ is the Hessian matrix of $L$ with respect to $\\\\theta$, evaluated at $\\\\theta_{k_s}^{t+s}$ and data $z_{k_s}^{t+s}$. \\nFor further details, please refer to Proposition 3. This formulation highlights a key distinction: **DICE evaluates influence across multiple hops and characterizes the interplay between data, curvature, and communication topology**\\u2014factors beyond the scope of CFL frameworks., which primarily aims to cluster clients based on their similarity. \\n\\nFor your convenience, we provide an anonymous link summarizing our additional experimental results: \\n\\n**https://anonymous.4open.science/r/Anonymous-Repo-for-Rebuttal-793D/README.md**\\n\\nWe have uploaded a revised version of our paper, which can be accessed via the official link provided below:\\n\\n**https://openreview.net/pdf?id=2TIYkqieKw**\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"General Response (Part 2/2)\", \"comment\": [\"**Clarifications of Challenges and Contributions**\", \"**Major challenges**. Decentralized learning emerges as a promising paradigm to accommodate the growing demand for distributed computation, while also introduces distinct and unparallel challenges in quantifying and understanding data influence. In a centralized learning system, data influence is typically confined to a single model and can be statically analyzed after training. In decentralized learning, the influence of a single data instance propagates from directly connected neighbors to neighbors faraway. This put major challenge: **`measuring multi-hop influence beyond one-hop influence, which is determined not only by the stem node, but also the nodes on the way.`** The challenge invalidate the existing influence measure techniques. This phenomenon is termed as the **cascading effect** in our paper.\", \"**Major contributions**. Our **DICE (Data Influence CascadE)** method is the first work in the literature that can address this challenge. DICE introduces the concept of ground-truth data influence for decentralized learning, seamlessly integrating direct and indirect contributions to capture influence propagation across multiple hops during training. By transforming data-level influence into model-level influence and tracing the model-level influence of multi-hop neighbors, we developed a theoretical framework to derive tractable approximations for influence cascades. **`We uncover, for the first time, that data influence in decentralized learning is shaped by a synergistic interplay of original data, the topological importance of the data owner, and the curvature information of intermediate nodes mediating propagation.`** These theoretical insights enable a systematic and interpretable understanding of decentralized data influence, laying the groundwork for incentivized collaboration, anomaly detection, and scalable decentralized learning ecosystems.\"]}", "{\"title\": \"Update score\", \"comment\": \"I have updated the score.\"}", "{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for the helpful suggestions and kind support. We have carefully revised our manuscript according to your suggestions. Hope all your concerns are addressed.\\n\\n**Q1**: Figure 1 lacks legend information. \\n\\n**A1**: Thanks and addressed. Figure 1 provides visualization of influence cascades during decentralized training with ResNet-18 on CIFAR-10 under a designed communication matrix (see details in Appendix D.4). The thickness of edges represents the strength of communication links (i.e., weights of the communication matrix), while **node sizes correspond to the one-hop DICE-E influence scores** (see Proposition 1) computed for the same data batch across different participants. \\n\\n**Q2**: The performance differences of the DICE framework under different parameters (such as learning rate, batch size, etc.) have not been thoroughly discussed. \\n\\n**A2**: Thanks and addressed. Following your suggestion, we have conducted additional sensitivity analysis experiments to evaluate the robustness of DICE under varying hyperparameters, including learning rate, batch size, and training epochs. The results are summarized in the Experiments section and Appendix D. The learning rate is set as 0.1 or 0.01, batch size is set as 16, 64, or 128 per participant, training epoch is set as 5, 10, 20 and 30. Our conclusions robustly stand in all the settings: (1) One-hop DICE-E provides a strong approximation of DICE-GT; (2) anomalies introduced through random label flipping and feature perturbations are readily detectable by analyzing their proximal influence.This demonstrates the stability and robustness of the DICE approximation across different setups. \\n\\nFor further details and results, please kindly consult the following anonymous link: \\n\\n**https://anonymous.4open.science/r/Anonymous-Repo-for-Rebuttal-793D/Sensitivity%20analysis/README.md**\"}", "{\"title\": \"Author Response (Part 2/4)\", \"comment\": \"**Practical Use Case 2: Detection of Anomalies**.\\n\\nDICE identifies malicious neighbors, referred to as anomalies, by evaluating their proximal influence, which estimates the reduction in test loss caused by a single neighbor. A high proximal influence score indicates that a neighbor increases the test loss, negatively impacting the learning process. By detecting malicious behaviors such as label-flipping attacks or feature perturbations, DICE can enhance the reliability of decentralized learning systems. The experimental results shows that label flipping and feature perturbed anomalies (in red) are detectable with proximal influence values across various backbones and datasets. This application plays a critical role in addressing challenges like detecting free-riders and malicious behaviors in decentralized networks without central authorities. \\n\\nThese practical applications align with a broader vision of DICE as a foundation for incentivized decentralized learning, facilitating the development of self-regulating data and parameter markets. \\n\\n**Q2**: Please discuss link with clustered federated learning, in particular techniques that use gradients to cluster clients. \\n\\n**A2**: Thanks for pointing this out! There are shared aspects between gradient-based Clustered Federated Learning (CFL) and the one-hop DICE approximation, with **DICE-E potentially serving as a more advanced high-order gradient similarity metric for clustering participants in decentralized federated learning**\\u2014a promising direction for future work. We have carefully discussed and compared them. \\nClustered Federated Learning (CFL) groups clients with similar data distributions and training collaboratively but separately within each cluster [1, 2, 3, 4]. Gradient-based CFL specifically forms these clusters using client gradient similarities [3, 4]. \\nFor instance, [3] employs a post-convergence bi-partition to clients based on the cosine similarity of their gradients after convergence; and [4] dynamically performs spectral clustering in federated learning, leveraging gradient features as the similarity metric. \\n\\nThe gradient-based CFL has some similarity with the \\u201cone-hop\\u201d version of DICE estimator, which both use gradient similarity information. Gradient-based CFL typically employs the cosine similarity of gradients as a clustering criterion. Similarly, one-hop DICE-E estimates influence by considering the inner product between the training gradient of an \\\"influence sender\\\" and the test gradient of the evaluation node.\"}", "{\"summary\": \"The paper proposes DICE as a framework for measuring data influence cascades in decentralized environments. The framework explains how data influence propagates through the communication network, emphasizing the interaction between the original data and the network structure in shaping data influence within decentralized learning. The experimental results show that the first-order approximation of the \\u201cgold standard\\u201d for evaluating data influence in decentralized environment can approaching the truth, and this framework can used for detecting mislabeled anomalies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper summarizes previous work on measuring data influence and highlights the gaps in applying these methods to distributed scenarios.\\n2. This paper proposes a sound \\u201cgold standard\\u201d and its first-order approximation to quantify individual contributions in decentralized learning.\", \"weaknesses\": \"1. The experiments are weak, and Section 5.3 is unfinished.\\n2. The notation \\u03b7^t in Theorem 1 is previously appears as \\u03b7_t in Algorithm 1.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Done\", \"comment\": \"I have updated the score.\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for your kind support!\"}" ] }
2Sn0ty7zoI
Learning through Conditioning on Natural Language Feedback
[ "Dylan Hillier", "Cheston Tan", "Jing Jiang" ]
In this paper we explore the simple idea of teaching models by allowing them to condition their answers on natural language feedback. Motivated by the idea that natural language interactions provide a targeted, flexible, and level-appropriate reward signal, we study the ability of small instruction-tuned models to leverage feedback from a larger frontier model. We find while the frontier model provides generally high quality feedback, especially smaller models can struggle to use this due to noise in their generative output. After incorporating techniques like negative sampling, we find that models trained on these feedback-conditioned responses can perform similarly to those trained directly on teacher responses. We explore training using supervised finetuning and preference learning algorithms over a broad set of tasks including Big-Bench Hard. These findings are broadly applicable and our methods rely only on the ability of models to give and receive linguistic feedback. As such, they contribute to a growing body of work exploring how to best utilise the linguistic capabilities of language models for human-like instructive learning.
[ "Social Learning", "Natural Language Feedback", "Instructive Learning" ]
https://openreview.net/pdf?id=2Sn0ty7zoI
https://openreview.net/forum?id=2Sn0ty7zoI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vLeLAKJ8er" ], "note_type": [ "comment" ], "note_created": [ 1728006356192 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14230/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"In retrospect rushed and not ready for review, don't want to waste reviewers time\"}" ] }
2RfWRKwxYh
Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation
[ "Sheng-Feng Yu", "Jia-Jiun Yao", "Wei-Chen Chiu" ]
Although larger datasets are crucial for training large deep models, the rapid growth of dataset size has brought a significant challenge in terms of considerable training costs, which even results in prohibitive computational expenses. Dataset Distillation becomes a popular technique recently to reduce the dataset size via learning a highly compact set of representative exemplars, where the model trained with these exemplars ideally should have comparable performance with respect to the one trained with the full dataset. While most of existing works upon dataset distillation focus on supervised datasets, \todo{we instead aim to distill images and their self-supervisedly trained representations into a distilled set. This procedure, named as Self-Supervised Dataset Distillation, effectively extracts rich information from real datasets, yielding the distilled sets with enhanced cross-architecture generalizability.} Particularly, in order to preserve the key characteristics of original dataset more faithfully and compactly, several novel techniques are proposed: 1) we introduce an innovative parameterization upon images and representations via distinct low-dimensional bases, where the base selection for parameterization is experimentally shown to play a crucial role; 2) we tackle the instability induced by the randomness of data augmentation -- a key component in self-supervised learning but being underestimated in the prior work of self-supervised dataset distillation -- by utilizing predetermined augmentations; 3) we further leverage a lightweight network to model the connections among the representations of augmented views from the same image, leading to more compact pairs of distillation. Extensive experiments conducted on various datasets validate the superiority of our approach in terms of distillation efficiency, cross-architecture generalization, and transfer learning performance.
[ "dataset distillation", "self-supervised learning" ]
Accept (Poster)
https://openreview.net/pdf?id=2RfWRKwxYh
https://openreview.net/forum?id=2RfWRKwxYh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wc9atAlL5y", "rvME15jXMW", "rcDkG6lVhE", "rGYgVObf1e", "qnyhVLKdqX", "oWYK9EK4k6", "k4W6tsyVZk", "ipUr4COUk6", "iHqXnhl1Ph", "hece42RPxy", "gdrdioGEM3", "cNQ8UoMWqb", "ZwtBOF0X6W", "XAPKpxlQCt", "Wdc9c9BCgT", "VddV1rDpjF", "QH87LgvoOp", "NXA20YAxfv", "Eo2cKlQHDz", "DfcqKFCOdl", "AYsWNb2IrW", "ATkERpSa5G", "6jFAHbut75", "2ec75tKJuO" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732265656493, 1732265732032, 1737523427707, 1732680819948, 1732386590552, 1730705714798, 1732206312119, 1732265832498, 1732891202959, 1732844321177, 1734646772516, 1732265844743, 1733100844149, 1732207373476, 1730180254623, 1732206632887, 1730098102698, 1732680770424, 1732237935233, 1730670118335, 1732855327456, 1732889117498, 1732206836047, 1732531542050 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_u46N" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_TeEU" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_Q6us" ], [ "ICLR.cc/2025/Conference/Submission969/Area_Chair_7Qv3" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_Q6us" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_u46N" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_Q6us" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_MMPZ" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_MMPZ" ], [ "ICLR.cc/2025/Conference/Submission969/Reviewer_u46N" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ], [ "ICLR.cc/2025/Conference/Submission969/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely appreciate your thoughtful feedback and recommendation of our work. We will explore more practical datasets in future work to enhance the applicability of our approach.\"}", "{\"comment\": \"Thank you for your valuable feedback. We have provided detailed responses to the issues raised and hope our explanations address your concerns. We welcome any further suggestions to improve our work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We sincerely appreciate your thoughtful feedback and the time you have dedicated to reviewing our paper. Please let us know if you have any further suggestions or comments. If there are any additional questions or issues you would like to discuss, we are fully committed to engaging further to enhance our paper.\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Thanks for your response. As I understood previously, the challenge of augmentation is similar to that of supervised learning, which tries to align the distilled image with the original image. Now I understand that this is actually the problem in self-supervised learning. However, I still have some problems with the augmentation networks. To be specific, designing neural networks to predict the shift of the representations can be heuristic, lacking a guiding principle to design them. With very small approximation networks (as shown in the implementation, the hidden layer size is pretty small), I have the question of whether the model capacity can handle this challenge.\"}", "{\"summary\": \"This work targets the cross architecture generalizability challenge in dataset distillation. When performing distillation, the data is often biased to the model used in the distillation process -- in this work the proposed self-supervised approach parameterizes the representations of images while studying/leveraging the effects of augmentations. This approach features a 5 stage method involving pertaining a network on the source dataset, followed by image parameterization (encoding the images and augmentations via low-dimensional bases vectors), bi-level optimization on the images, approximation to handle the distribution/representation shift, and reconstruction of the images using the bases and learned features. The method reports strong performance improvement on a variety of datasets against most of the current SOTA methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The key strengths of this paper include:\\n\\n1. More diverse datasets: Not many dataset distillation papers venture beyond the CIFAR/ImageNet datasets, however these authors included results on CUB2011 and StanfordDogs. Additionally, the ViT performance has been reported, and overall it appears that the authors performance improvement is maintained on Transformer architectures, albeit smaller.\\n\\n2. The basis and coefficient initialization ablation provides interesting insight into the sensitivity of the proposed framework.\\n\\n3. Personally, I found the use of the approximation networks to be a clever solution to reducing memory usage while preserving the essence of image augmentation. By learning a mapping between and subsequently the shift in distribution of the unaugmented distilled representation into it's augmented views, one can efficiently store simply the network rather than all the augmented views.\\n\\n4. Strong baselines: This work accurately surveyed some of the most seminal and current SOTA in the field of dataset distillation (with the exception of a few missing citations that should be added). I find the included competitive methods to be comprehensive enough to support the statements however, further comments on the benchmarking are included in the Weaknesses section.\", \"weaknesses\": \"Despite the interesting approach taken in this work, I find a few crucial weaknesses:\\n\\n1. I find that the experimental support is a bit lacking. As is common in Dataset Distillation works, it is generally good practice to show the scaling over different memory budges (N) on various datasets, rather than just a single dataset, in order to show generalizability.\\n2. I noticed that the resolutions on ImageNet scale to 64 x 64 -- however recently, the field has shifted to higher resolutions such as 128x128 or even 512 x 512 -- I think it would be important to see if the method can scale well to larger resolutions.\\n3. I think another important criteria that should be included is Applications -- as alluded to in the paper tasks like continual learning or neural architecture search (line 43) are important in the field, however none of these results were included in the main paper -- I think it is important to test the applicability of the method in order to determine significance and impact.\\n4. Given that this approach involves multi-level optimization, I think efficiency metrics should be compared as well (time per step, GPU memory etc). -- This will demonstrate wether the gain in performance is justified over other methods when comparing the relative compute demands.\\n\\n[Minor] Some missing citations including DataDAM (ICCV'23), CAFE (CVPR'22)\", \"questions\": \"I've highlighted a few of the issues/suggestions for the Authors to consider in the rebuttal phase above in the Weaknesses Section. These are crucial in determining the significance of the work and wide scale adoption.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q1**: I find that the experimental support is a bit lacking. As is common in Dataset Distillation works, it is generally good practice to show the scaling over different memory budges (N) on various datasets, rather than just a single dataset, in order to show generalizability.\\n\\n**A1**: We appreciate the suggestion to extend the experiments to demonstrate generalizability over varying memory budgets. In addition to the results on CIFAR100 which we have provided in the main paper (cf. Table 2), as shown in Table 7 (in Appendix), we have conducted experiments on TinyImageNet with memory budgets $N=50, 100, 200$. Moreover, we further extended the experiment to $N=500$. The results below confirm that our method consistently outperforms baselines across a range of memory budgets, showcasing its scalability.\\n| Memory Budget $N$ | 50 | 100 | 200 | 500 |\\n|:-------------------------------:|:--------:|:-----------:|:-----------:|:-----------:|\\n| Random | 22.43 | 22.73 | 23.95 | 25.92 | \\n| KRR-ST | 25.29 | 25.64 | 27.23 | 30.46 |\\n| Ours | **28.03** | **29.35** | **31.25** | **33.63** | \\n\\n**Q2**: I noticed that the resolutions on ImageNet scale to 64 x 64 -- however recently, the field has shifted to higher resolutions such as 128x128 or even 512 x 512 -- I think it would be important to see if the method can scale well to larger resolutions.\\n\\n**A2**: We recognize the importance of validating the method's performance on higher resolution datasets. To address this, we conducted an experiment on ImageNette (a 10-class subset of ImageNet) with 128x128 resolution. The results demonstrate that our method scales effectively to larger resolutions.\\n| ImageNette | $N$=10 | \\n|:------------------:|:---------:|\\n| Random | 49.63 | \\n| KRR-ST | 50.14 |\\n| Ours | **59.31** |\\n\\n**Q3**: I think another important criteria that should be included is Applications -- as alluded to in the paper tasks like continual learning or neural architecture search (line 43) are important in the field, however none of these results were included in the main paper -- I think it is important to test the applicability of the method in order to determine significance and impact.\\n\\n**A3**: We appreciate the reviewer\\u2019s insightful suggestion regarding the inclusion of applications like continual learning and neural architecture search. While our current work primarily focuses on demonstrating the generalizability of the distilled data through various downstream tasks, as shown in the linear evaluation results, we agree that exploring these applications would further strengthen the impact of our method.\\nIn future work, we plan to conduct experiments on continual learning and neural architecture search to validate the broader applicability of our approach. The proposed method ability to train a decent feature extractor, as demonstrated in the paper, provides a solid foundation for such tasks. We anticipate that the compactness of distilled dataset and generalization properties will prove beneficial in these scenarios as well.\\n\\n**Q4**: Given that this approach involves multi-level optimization, I think efficiency metrics should be compared as well (time per step, GPU memory etc). -- This will demonstrate whether the gain in performance is justified over other methods when comparing the relative compute demands.\\n\\n**A4**: Based on CIFAR-100 with $N=100$, we evaluated GPU memory usage and execution time. The results demonstrate that while our method requires approximately 1.5x more GPU memory than KRR-ST, it remains feasible within modern hardware constraints (e.g., NVIDIA RTX 4090 with 24GB memory). Furthermore, our method's execution time is comparable to other baselines, as shown below.\\n| Target | DSA | DM | IDM | DATM | KRR-ST | Ours |\\n|:--------------------------:|:--------:|:---------:|:-------:|:----------:|:---------:|:-----------:|\\n| GPU memory (MB) | 3561 | 2571 | 2407 | 4351 | 4483 | 6917 |\\n| Time (mins) | 81 | 78 | 313 | 121 | 189 | 205 |\"}", "{\"comment\": \"Thank you for your valuable feedback. We have provided detailed responses to the issues raised and hope our explanations address your concerns. We welcome any further suggestions to improve our work.\"}", "{\"comment\": \"Q: The authors are hesitant to apply their method to ImageNet-1K with common settings, 224 * 224 and 1000 classes.\", \"a\": \"Our algorithm is applicable to ImageNet-1K with the requested settings, i.e. 224\\u00d7224 resolution and 1000 classes (in which we have shown our results of 128*128 on Imagenette dataset in the rebuttal for reviewer TeEU). However, the 224\\u00d7224 ImageNet-1K experiment was a late request during the review process, and due to the tight rebuttal timeline, we were unable to complete it in time.\\nMoreover, we want to emphasize that this additional experiment is not critical to the validation of our primary contributions. Our work is centered on advancing dataset distillation techniques and demonstrating their effectiveness across a range of datasets and architectures, which we have validated extensively.\\nNevertheless, we recognize the importance of addressing the reviewer\\u2019s concerns, and we commit to including the 224\\u00d7224 ImageNet-1K results in the final version of the paper.\"}", "{\"comment\": \"I apologize for my delayed response.\\n\\nAfter reviewing other reviewers' comments, I have concerns about the proposed method's resource requirements. In contrast to the research aim of reducing training costs, the current approach necessitates much more resources.\\n\\nIf so, why do users not simply train on the original data directly, which would seem like a more straightforward and cost-effective approach? Given this limitation, I understand why the authors are hesitant to apply their method to ImageNet-1K with common settings, 224 * 224 and 1000 classes.\"}", "{\"metareview\": \"a) This paper proposes a novel approach to self-supervised dataset distillation aimed at reducing training costs by creating compact dataset that, when used for training, maintains model performance. It uses PCA-based dimensionality reduction, which transforms images and their representations into lower-dimensional bases and coefficients; and data augmentation based on the approximation network. An extensive experimental evaluation demonstrates the significant improvements over previous baselines.\\n\\nb) The topic is of interest, especially in the era of large datasets. While most research on data distillation focuses on classification, this work is for self-supervised tasks, where the amount of unlabelled data can be quite high. The paper is well-written and easy to follow, with a clear explanation of each proposed component.\\n\\nc) The techniques employed are derived from previous work on data distillation for classification tasks. It is not clear which are the challenges for self-supervised data distillation and how their method specifically addresses those challenges. While performing better, the method still requires approximately 1.5x more GPU than previous approaches.\\n\\nd) After rebuttal, the remaining drawbacks of the method are minor, while the proposed contribution is original and compelling and deserves publication.\", \"additional_comments_on_reviewer_discussion\": \"Rev. TeEU raised some possible drawbacks and missing experiments on the paper. Authors did a good job to answer all rev. comments and rev. increase their score to 6.\\n\\nRev. MMPZ provided a positive review to the paper, but also pointed out some possible issues, mostly about computational cost and complexity. After authors' answers, rev. kept their positive score.\\n\\nRev. u46N had a positive feedback, but had doubts about the capability of the approximation network to predict the correct representation shifts. Authors provided additional experiments to prove that point. Rev. was satisfied and maintained their positive outlook. \\n\\nRev. Q6us has two main critical points: i) novelty: as the method is a composition of well-known techniques and ii) computational cost: how the method can scale to larger datasets such as ImageNet-1k. For novelty, authors provided compelling answers. For computational cost rev. noticed that in some cases it does not make sense to perform that heavy training just to reduce the size of the final training. Authors replied that this is applied only once and then the small dataset could be used multiple times. However, rev. was not convinced about the actual utility of the approach and maintained their score of 5. \\n\\nI see the point of rev. Q6us. However I think that this research, although it might be not so usable today, it can help to foster interest and improve methods, leading to much more useful results in the near future. In this sense, considering the interesting contributions listed by all revs. and the extensive evaluation, I recommend the paper for publication.\"}", "{\"comment\": \"Thank you for your valuable feedback. We have provided detailed responses to the issues raised and hope our explanations address your concerns. We welcome any further suggestions to improve our work.\"}", "{\"comment\": \"As the authors themselves noted in the paper, many datasets currently pose a significant challenge. For instance, CLIP relies on an enormous dataset of several hundred TB, which is essentially out of reach for most academics due to its size. However, the proposed method does not alleviate but rather exacerbates the challenge of securing additional resources. Although I acknowledge that the proposed method achieves good performance, I keep my score due to this critical limitation.\"}", "{\"comment\": \"**Q1**: The proposed techniques in the paper are not new, such as PCA and augmentation approximation networks.\\n\\n**A1**: To the best of our knowledge, there has been no prior work integrating PCA or augmentation approximation networks into a dataset distillation framework. While these techniques might be individually familiar, their integration within a comprehensive framework for self-supervised dataset distillation is novel, as well as highlighting the advantage of our proposed method to be simple and effective. In brief, our method enhances its efficacy in distilling compact and transferable datasets, as detailed in Section 3.2 and 3.3 of the paper.\\n\\n**Q2**: The proposed technique leverages data augmentation while minimizing bias, and similar ideas have been explored in self-supervised learning. It is important to compare it with other analogous methods [1][2][3].\\n\\n**A2**: The goals between self-supervised learning (SSL) and dataset distillation (DD) differ fundamentally. SSL aims to learn feature extractors that generalize to downstream tasks, often benefiting from augmentation to improve representation quality, as seen in [1][2][3]. In contrast, dataset distillation aims to compress datasets into a smaller storage size while preserving their training performance. Specifically, KRR-ST [4], a self-supervised dataset distillation framework, leverage a self-supervisedly trained backbone to perform DD. It demonstrated that random data augmentation introduces gradient bias in bilevel optimization, deeming it incompatible with dataset distillation. To address this, we propose predetermined augmentations to avoid randomness while improves condensed performance, as elaborated in Section 3.3. This is a distinct approach from typical augmentation strategies in SSL.\\n\\n**Q3**: Could the authors provide more details about the approximation networks, such as the number of networks used, structure, and layers?\\n\\n**A3**: Our experiments use rotation augmentation as the default. The augmentation includes rotations of $0^\\\\circ, 90^\\\\circ, 180^\\\\circ, 270^\\\\circ$, requiring three distinct approximation networks to predict representation shifts for $90^\\\\circ, 180^\\\\circ, 270^\\\\circ$ from $0^\\\\circ$. These networks are lightweight, designed as 2-layer perceptrons with hidden layer sizes of 4 (for CIFAR100) and 16 (for TinyImageNet/ImageNet). As detailed in Section A.1, each network contains 4,612 parameters for CIFAR100 and 16,912 parameters for larger datasets. Additional implementation details can be found in Section 3.3 and Appendix A.1\\u200b.\\n\\n**Q4**: Could the authors show a comparison of the distilled data sizes?\\n\\n**A4**: We provide comparisons of distilled data sizes in Table 2 and Table 7, conducted on CIFAR100 and TinyImageNet, respectively. These tables demonstrate the superior performance of the proposed method across various memory budget sizes.\\n\\n[1] Improving Transferability of Representations via Augmentation-Aware Self-Supervision. NeurIPS 2021 [2] Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration. NeurIPS 2021 [3] RSA: Reducing Semantic Shift from Aggressive Augmentations for Self-supervised Learning. NeurIPS 2022 [4] SELF-SUPERVISED DATASET DISTILLATION FOR TRANSFER LEARNING. ICLR 2024\"}", "{\"summary\": \"This paper proposes a self-supervised data distillation method based on image decomposition. By initializing with principal components and learning the impact of data augmentation, the performance of the distilled dataset is enhanced. The experiments provide a comprehensive analysis of the method\\u2019s effectiveness.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The topic is both valuable and practical, especially in the era of large datasets. While most current research on data distillation focuses primarily on classification tasks, which may be too narrow, this work seeks to improve self-supervised tasks. This approach is more general and can better support feature learning for downstream applications.\\n\\n2. The paper is well-written and easy to follow, with a straightforward method that is simple to understand. For each component, the authors clearly explain the rationale behind its inclusion.\\n\\n3. The experiments demonstrate the method\\u2019s effectiveness, as it consistently outperforms baseline methods in both transfer learning and linear probing tasks.\", \"weaknesses\": \"I did not find any major weaknesses in this paper. However, there are some concerns regarding its novelty. The techniques employed are largely derived from previous work on data distillation for classification tasks. It would be helpful if the authors could clarify what unique challenges exist for self-supervised data distillation and how their method specifically addresses those challenges.\", \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q1**: The datasets in the experiments are CIFAR 100 and datasets with similar image attributes. I can understand it is possible to get a distilled dataset in a lab environment and the datasets are very feature-controllable. Do you have space to show that your experiment can be successful in other different scenarios? For example, some randomly taken images.\\n\\n**A1**: The reviewer raises a valid concern about the generalizability of our method beyond controlled datasets like CIFAR-100. To address this, we had conducted additional experiments on ImageNet and TinyImageNet datasets, whose details and results are presented in Appendices A.3, A.4, and A.5. These datasets offer increased diversity and scale, and the results consistently demonstrate the robustness of our method across different scenarios. Importantly, our distilled datasets exhibit superior cross-architecture generalizability, as evidenced by linear evaluation performance across multiple feature extractors, including VGG11, ResNet18, AlexNet, and MobileNet..etc. The success on datasets like ImageNet, which closely resemble real-world data, highlights the potential for applying our approach to scenarios involving randomly collected images.\\n\\n**Q2**: Though this is a memory saving method, a very large portion of the whole method is still computing intensive. Do you have any benchmark to show that the whole method could be executed in an efficient way?\\n\\n**A2**: We appreciate the concern regarding computational intensity. Our methodology primarily focuses on achieving storage efficiency after the distillation process by employing parameterization. This approach decomposes images and representations into linear bases and coefficients, significantly reducing storage requirements. The combination of bases and coefficients involves matrix multiplication, which incurs minimal computational overhead. Additionally, the use of lightweight 2-layer perceptron approximation networks ensures computational simplicity. \\nWhile our primary objective is storage efficiency, we recognize the importance of evaluating the computational cost during the distillation process. To address this, we benchmarked GPU memory usage and execution time using the CIFAR-100 dataset with $N=100$. The results, summarized below, indicate that our method requires approximately 1.5\\u00d7 more GPU memory than KRR-ST, yet remains manageable within the capacity of modern GPUs like the NVIDIA RTX 4090 (24GB). Furthermore, the computation time of our method is comparable to other baselines and does not impose a significant burden.\\n| Target | DSA | DM | IDM | DATM | KRR-ST | Ours |\\n|:---------------------:|:--------:|:---------:|:-------:|:----------:|:---------:|:-----------:|\\n| GPU memory (MB) | 3561 | 2571 | 2407 | 4351 | 4483 | 6917 |\\n| Time (mins) | 81 | 78 | 313 | 121 | 189 | 205 |\\n\\n**Q3**: Difficult for practitioners to implement and tune the method without extensive expertise in self-supervised learning and dataset distillation\\n\\n**A3**: We understand the concern regarding the accessibility of our method for practitioners. However, the core components of our approach\\u2014parameterization and approximation networks\\u2014can be implemented with just a few lines of code using popular deep learning frameworks like PyTorch or TensorFlow. Basically (cf. Figure 1), our pretraining step adopts the well-known and widely-adopted self-supervised learning scheme, Barlow Twins; our parameterization is based on the fundamental machine learning tool, PCA; our bilevel-optimization follows the practice of KRR-ST; and our approximate networks are simply multilayer perceptron while their training follows the typical supervised learning procedure, in which they are definitely not sophisticated. Detailed guidelines and implementation examples are provided in the supplementary materials to further assist practitioners in replicating the method.\"}", "{\"summary\": \"In this paper, the authors propose a method for dataset distillation based on KRR-ST. Two techniques are introduced: (1) PCA-based dimensionality reduction, which transforms images and their representations into lower-dimensional bases and coefficients; and (2) Data Augmentation, which employs predefined data augmentations and approximation networks to address the limitation of KRR-ST in utilizing data augmentation during dataset distillation. The authors conduct an extensive experimental evaluation and demonstrate significant improvements over previous baselines.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Reducing data size is a critical direction in self-supervised learning research.\\n2. Fixing the issue of incorporating data augmentation into data distillation is important, as it significantly improves performance.\\n3. The authors conduct a wide range of experiments, evaluating model performance with various network architectures and different numbers of training examples.\", \"weaknesses\": \"1. The proposed techniques in the paper are not new, such as PCA and augmentation approximation networks.\\n2. The proposed technique leverages data augmentation while minimizing bias, and similar ideas have been explored in self-supervised learning. It is important to cmopare it with other analogous methods [1][2][3].\\n\\n[1] Improving Transferability of Representations via Augmentation-Aware Self-Supervision. NeurIPS 2021\\n[2] Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration. NeurIPS 2021\\n[3] RSA: Reducing Semantic Shift from Aggressive Augmentations for Self-supervised Learning. NeurIPS 2022\", \"questions\": \"1. Could the authors provide more details about the approximation networks, such as the number of networks used, structure, and layers?\\n2. Could the authors show a comparison of the distilled data sizes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate your thoughtful feedback and the time you have dedicated to reviewing our paper. Please let us know if you have any further suggestions or comments. If there are any additional questions or issues you would like to discuss, we are fully committed to engaging further to enhance our paper.\"}", "{\"title\": \"Accept Q2 and Q3, but just a little doubtful for Q1\", \"comment\": \"I accept the arguments for Q2 and Q3. As for Q1, I hope there are more practical data, but for experiment level, this is not very realistic, however, hope you can make further improvement. I still recommend this paper.\"}", "{\"summary\": \"This paper proposes a novel approach to self-supervised dataset distillation aimed at reducing training costs by creating compact datasets that maintain model performance. This method, intended to address challenges in self-supervised learning (SSL) for dataset distillation, introduces three key contributions: 1. Parameterization 2. Predefined Augmentation and feature approximation 3. Optimizations with approximation Networks. Generally they have shown a very contributing method.\\n\\nThe paper introduces a solid contribution to self-supervised dataset distillation, with innovative approaches to parameterization, augmentation handling, and memory efficiency with upgraded existing method named as KRR-ST. While the approach is complex, it provides a promising direction for reducing training costs in SSL, particularly in resource-limited settings. With further optimization and extension to diverse tasks, this method has the potential to make dataset distillation more accessible and applicable in real-world scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper demonstrated a very strategic parameterization.\\nThe use of bases for image and representation parameterization is a sophisticated approach to compress dataset information without sacrificing accuracy. This addresses both storage efficiency and computational cost.\\n\\n 2.Effective Augmentation Handling:\\nBy predefining augmentations, the method successfully mitigates the bias introduced by random augmentations, a notable challenge in SSL distillation methods.\\n\\n3. Improved Memory Efficiency:\\nThe inclusion of approximation networks to predict representation shifts from unaugmented to augmented views significantly reduces memory usage by eliminating the need to store augmented representations. This makes the approach more scalable.\\n\\n4. Transfer Learning Potential:\\n\\nThe method shows strong transferability to downstream tasks, making it particularly appealing for real-world applications where labeled data is scarce, and transfer learning is critical.\\n\\n5. Ablation Studies and Hyperparameter Analysis:\\n\\nThe paper includes ablation studies that isolate the contributions of parameterization, augmentation, and approximation networks, offering clear insights into each component's impact on performance.\", \"weaknesses\": \"1. Complexity and accessibility\", \"critique\": \"While the method claims to be memory-efficient due to approximation networks, the additional computational overhead introduced by these networks might reduce the method\\u2019s overall efficiency, especially in resource-constrained environments.\\n\\n3. Dependence on Synthetic Data for Evaluation:\\nThe experiments rely heavily on benchmark datasets like CIFAR100. However, these datasets have well-structured labels and relatively consistent image quality, which may not fully represent real-world data variability.\", \"questions\": \"1. The datasets in the experiments are CIFAR 100 and datasets with similar image attributes. I can understand it is possible to get a distilled dataset in a lab environment and the datasets are very feature-controllable. Do you have space to show that your experiment can be successful in other different scenarios? For example, some randomly taken images.\\n\\n2. Though this is a memory saving method, a very large portion of the whole method is still computing intensive. Do you have any benchmark to show that the whole method could be executed in an efficient way?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for conducting additional experiments; it appears that the lightweight networks perform well. However, I still find the method of predicting shifts using networks somewhat heuristic. Despite this, I will maintain a positive rating.\"}", "{\"comment\": \"Q: Concerns about resource requirements and the cost-effectiveness of dataset distillation. Why not train directly on the original data?\", \"a\": \"This is an important question about the goals and procedure of dataset distillation.\", \"as_stated_in_the_abstract\": \"*\\\"Dataset Distillation becomes a popular technique recently to reduce the dataset size via learning a highly compact set of representative exemplars, where the model trained with these exemplars ideally should have comparable performance with respect to the one trained with the full dataset.\\\"* This goal has been widely recognized and pursued in many recent works, including [1][2][3][4]. As highlighted in [3], *\\\"it is prohibitively costly to use all the examples from the huge dataset, which motivates the need to compress the full dataset into a small representative set of examples.\\\"* This underscores the necessity of compressing large-scale datasets into smaller, more manageable ones.\", \"dataset_distillation_involves_two_major_steps\": \"1. Synthesizing the distilled dataset \\u2013 This step involves computationally intensive optimization to create a smaller, representative dataset.\\n2. Training a new feature extractor \\u2013 The distilled dataset is then used to train a new model, with significantly lower computing requirements compared to training on the full dataset.\\n\\nThe concerns raised by Reviewer TeEU and Reviewer MMPZ regarding computational cost focus on the first step, which indeed requires more resources. However, this cost is a **one-time investment**. Once the distilled dataset is generated, it can be reused multiple times across different tasks or models, significantly reducing the cost for downstream training since the distilled dataset is much smaller than the original dataset. \\n\\nIn summary, by providing a compact yet highly effective dataset, dataset distillation facilitates efficient and flexible usage, aligning with these practical needs.\\n\\n[1] Dataset Condensation with Gradient Matching (Bo Zhao et al., ICLR 2021)\\n[2] Dataset Distillation by Matching Training Trajectories (George Cazenavette et al., CVPR 2022)\\n[3] Self-Supervised Dataset Distillation for Transfer Learning (Dong Bok Lee & Seanie Lee et al., ICLR 2024)\\n[4] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching (Ziyao Guo & Kai Wang et al., ICLR 2024)\"}", "{\"comment\": \"**Q1**: unique challenges exist for self-supervised data distillation and how their method specifically addresses those challenges.\\n\\n**A1**: Self-supervised dataset distillation presents unique challenges, particularly the instability caused by the randomness of data augmentation, which is a crucial element in self-supervised learning. Prior work [1] has highlighted this issue, noting its adverse effects on bilevel optimization. We address this challenge by introducing predetermined augmentations, as detailed in Section 3.3, which eliminate randomness and maintain consistent gradients during optimization. Additionally, to enhance storage efficiency, we parameterize the distilled images and their representations into low-dimensional bases, significantly reducing redundancy. Furthermore, we introduce lightweight approximation networks to model the representation shifts caused by augmentations, enabling compact and efficient storage of augmented data. The effectiveness of these components is validated through an ablation study in Table 3, where each proposed component is shown to contribute significantly to the overall performance of the distilled dataset.\\n\\n[1] SELF-SUPERVISED DATASET DISTILLATION FOR TRANSFER LEARNING. ICLR 2024\"}", "{\"comment\": \"A: Thank you for raising this concern. The design of the approximation networks involves balancing model capacity and storage constraints. As shown in Appendix A.6, our results on CIFAR100 demonstrate that the proposed lightweight approximation networks can achieve better accuracy than \\\"Same\\\" and \\\"Bias\\\" baselines.\\nTo address your specific concern, we conducted additional ablation studies varying the hidden layer size of the networks. The table below summarizes the results, including linear evaluation accuracy of the feature extractor trained on distilled data and prediction MSE. The hidden size 2, 4, and 8 indicate the width of the hidden layer in the approximation networks, while \\\"Ideal\\\" represents storing all of the representation without considering the storage budget. \\nNotably, while larger networks achieve lower MSE, they do not always improve accuracy due to the storage budget constraint. These findings indicate that the proposed design can effectively predict 512-dimensional representation shifts caused by rotation augmentations, achieving a reasonable trade-off between MSE and accuracy.\\n\\n\\n| Method | Same | Bias | Ours (hidden size 2) | Ours (hidden size 4) | Ours (hidden size 8) | Ideal |\\n|:--------------------------:|:------------:|:-----------:|:----------------------------:|:-----------------------------:|:----------------------------:|:----------:|\\n| Accuracy | 50.10 | 50.32 | 52.30 | 52.41 | 51.19 | 53.51 | \\n| MSE | 0.31 | 0.30 | 0.07 | 0.06 | 0.04 | 0 |\"}" ] }
2RcTuBc4mA
Generalized Attention Flow: Feature Attribution for Transformer Models via Maximum Flow
[ "Behrooz Azarkhalili Aghmiyouni", "Maxwell Libbrecht" ]
This paper introduces Generalized Attention Flow, a novel feature attribution method for Transformer models that addresses the limitations of existing approaches. By generalizing Attention Flow and substituting attention weights with an arbitrary Information Tensor, the method leverages attention weights, their gradients, maximum flow, and the barrier method to generate more accurate feature attributions. The proposed approach demonstrates superior theoretical properties and resolves issues associated with previous methods that rely solely on simple aggregation of attention weights. Comprehensive benchmarking in NLP sequence classification tasks reveals that a specific variant of Generalized Attention Flow consistently outperforms state-of-the-art feature attribution methods across most evaluation scenarios, offering a more accurate explanation of Transformer model outputs.
[ "Attention Flow", "Feature Attributions", "Transformers", "Barrier Regularization", "Maximum Flow" ]
https://openreview.net/pdf?id=2RcTuBc4mA
https://openreview.net/forum?id=2RcTuBc4mA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "KXAva9gM5q" ], "note_type": [ "comment" ], "note_created": [ 1730493414748 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"desk_reject_comments\": \"Margin violation\", \"title\": \"Submission Desk Rejected by Program Chairs\"}" ] }
2RQokbn4B5
Dataset Size Recovery from Fine-Tuned Weights
[ "Mohammad Salama", "Jonathan Kahana", "Eliahu Horwitz", "Yedid Hoshen" ]
Model inversion and membership inference attacks aim to reconstruct and verify the data on which a model was trained. However, these methods cannot guarantee to find all training samples, as they do not know the training set size. In this paper, we introduce a new task: dataset size recovery, which seeks to identify the number of samples a given model was fine-tuned on. Our core finding is that both the norm and the spectrum of the fine-tuning weight matrices are closely linked to the fine-tuning dataset size. Leveraging this insight, we propose DSiRe, an algorithm that accepts fine-tuned model weights, extracts their spectral features, and then employs a nearest neighbor classifier on top, to predict the dataset size. Although it is training-free, simple, and very easy to implement, DSiRe is broadly applicable across various fine-tuning paradigms and modalities (e.g., DSiRe can predict the number of fine-tuning images with a mean absolute error of $0.36$ images). To this end, we develop and release LoRA-WiSE, a new benchmark consisting of over $25k$ weight snapshots from more than $2k$ diverse LoRA fine-tuned models.
[ "Model Forensics" ]
https://openreview.net/pdf?id=2RQokbn4B5
https://openreview.net/forum?id=2RQokbn4B5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jjZ4XYnDcG", "eUbOQdXKaO", "FrraVWVJAy", "CE65nWQjmC", "9TV3415Xek" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730697919176, 1730892143971, 1730741170221, 1731828803540, 1730706222340 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3696/Reviewer_4DiE" ], [ "ICLR.cc/2025/Conference/Submission3696/Reviewer_hb6b" ], [ "ICLR.cc/2025/Conference/Submission3696/Reviewer_DCUx" ], [ "ICLR.cc/2025/Conference/Submission3696/Authors" ], [ "ICLR.cc/2025/Conference/Submission3696/Reviewer_8NQV" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes the problem of dataset size recovery for fine-tuned foundation models and consequently a strategy to infer dataset size using spectral analysis of the weight matrix. A benchmark is designed to evaluate various approaches for this problem and their proposed method is evaluated on it.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The problem of dataset size recovery for foundation models is interesting.\\n2. The correlation of dataset size to the Frobenius norm and singular values of the weight matrices is relevant.\\n3. A benchmark with pre-trained weight matrices of foundation models for dataset recovery is released.\", \"weaknesses\": \"1. The analysis of the correlation between dataset size and the Frobenius norm and the singular values is underwhelming. It is not clear if this trend holds across different model architectures, and if so, no theoretical evidence is advanced for this correlation.\\n2. The proposed method for dataset size recovery is way too simple to offer any insights.\\n3. The authors only study dataset size recovery for foundation models fine-tuned with a few samples. However, this problem is very general and should be explored in a broader framework.\", \"questions\": \"Please refer to weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new task of dataset size recovery, which aims to infer the size of the training dataset used to fine-tune a pre-trained model.\\nThrough experiments, the authors uncover a clear negative correlation between dataset size and the norm and spectrum of the fine-tuning weight matrices.\\nLeveraging this insight, they propose the DSiRe algorithm to predict dataset size based on these spectral features.\\nAdditionally, the authors propose the LoRA-WiSE benchmark for evaluating on this task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper introduce a novel task correlating to model inversion and membership inference attacks. The size of the training dataset will produce extra knowledge for these tasks. Besides, the authors propose a benchmark for evaluation.\", \"The paper is well-written and easy to follow.\", \"The authors provide code for reproducibility check.\"], \"weaknesses\": \"1. **Lack of theoretical support:** Although the authors reveal a quantitative relationship between dataset size and the characteristics of fine-tuning weight matrices, their evaluation is limited to diffusion tasks, lacking broader empirical evidence. Furthermore, the authors do not provide theoretical insights or justification to explain why this relationship exists.\\n2. **Experiments:** The authors should validate the effectiveness of the proposed method across a wider range of tasks, such as image classification.\\n3. **Experiments:** The authors claim that knowing the size of dataset could aid in model inversion and membership inference attacks. Could the authors provide additional experiments to support this claim?\", \"questions\": \"My questions are listed in Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new task, called \\\"dataset size recovery,\\\" which aims to identify the size of the fine-tuned dataset based on changes in model weights before and after fine-tuning. The authors define a data-driven pipeline to achieve this: several fine-tuned weights and their corresponding dataset sizes are provided as training samples, and during testing, a newly fine-tuned model is given. The goal is to predict the dataset size of this test model. Specifically, they propose extracting spectral features from the model weights and using these features to predict dataset size with a nearest neighbor algorithm. For experiments, the authors introduce a new benchmark named LoRA-WiSE, where various stable diffusion models are fine-tuned with LoRA parameterizations across different dataset sizes. They demonstrate the efficacy of the proposed algorithm by presenting mean absolute error (MAE) scores across three data regimes: low (up to 6 samples), medium (up to 50 samples), and high (up to 1,000 samples).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes an interesting and, to the best of my knowledge, novel problem: recovering the dataset size based on fine-tuned model weights. This approach seems potentially useful for tasks such as model inversion and membership inference attacks.\\n\\n2. The paper constructs a large-scale dataset, including 2,000 diverse LoRA fine-tuned models along with corresponding fine-tuning dataset information, which could be valuable for future research.\\n\\n3. The observed correlation between fine-tuning dataset size and both the weight norm and spectrum provides meaningful insights. The results presented with the proposed method appear reasonable across the benchmark.\", \"weaknesses\": \"1. The method appears to predict the fine-tuning dataset size for a given model only when \\u201ctraining samples\\u201d\\u2014pairs of model weights and corresponding fine-tuning dataset sizes\\u2014are available. However, it remains unclear how one would construct these training samples in practice, particularly without prior information about the actual fine-tuning dataset used by the model.\\n\\n2. Beyond dataset size, other factors likely influence the norms and spectra of the learned weights, such as the diversity of the fine-tuning dataset or its divergence from the pretraining dataset. Without direct knowledge of the fine-tuning data, these factors remain uncontrolled. For instance, a model fine-tuned on a large but homogeneous dataset may exhibit more overfitting than one fine-tuned on a small yet diverse dataset, resulting in higher norms or spectral values. This raises concerns regarding the method\\u2019s practical applicability.\\n\\n3. As shown in Figure 2, the distinctions between different fine-tuning dataset sizes diminish as dataset size increases, making it unclear how effective the method remains for larger datasets.\\n\\n4. The experiments focus solely on a stable diffusion model, leaving questions about the method\\u2019s generalizability to other model types. Additionally, why is the method restricted to fine-tuned weights? Could it be extended to estimate the dataset size for a model trained from scratch, and would the trends observed in Figure 2 apply in that context?\", \"questions\": \"Please see the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We sincerely thank the reviewers for their detailed and insightful feedback on our paper. After reflecting on the overall scores, we have decided to withdraw the submission. We truly value the time and effort the reviewers invested in evaluating our work.\"}", "{\"summary\": \"This paper investigates the challenge of estimating the training data size of a fine-tuned pre-trained model. The authors find that the norms and spectral properties of model weight are correlated with the dataset size used during fine-tuning. Based on this insight, they propose an algorithm called DSiRe. DSiRe utilizes a nearest-neighbours approach to classify each layer independently, with the final dataset size prediction determined by a majority vote across layers.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This work studies an interesting topic, which aims to find out the training data size from a given fine-tuned model.\\n\\nThe proposed DSiRe method shows promising results in predicting dataset sizes, suggesting that the spectral and norm-based characteristics of fine-tuned weights are indeed useful signals for this task. \\n\\nThis work offers a practical resource for future research by proposing a benchmark kit.\", \"weaknesses\": \"The authors formulate dataset size recovery as a classification problem, whereas it may be more appropriate to approach this as a regression problem. Since dataset size is inherently a continuous variable, a regression framework might offer a more precise and interpretable estimation than classification.\\n\\nThe number of samples (1~1000) used in the experiment is very limited, which may not get reliable conclusions in real-world scenarios.\\n\\nThe study does not discuss the potential effects of data augmentation on dataset size recovery. Given that data augmentation is a common practice in model training, understanding its impact on the proposed method's accuracy is crucial. It would be valuable to include experiments or discussions on how data augmentation could alter spectral and norm properties in fine-tuned weights.\\n\\nWhile the paper explores estimating dataset size, it would be insightful to discuss how this information could impact model inversion techniques or the general machine learning community. For example, does knowing the dataset size improve an adversary's ability to reconstruct original training samples?\", \"questions\": \"Please kindly see the weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2RNGX3iTr6
Tabby: Tabular Adaptation for Language Models
[ "Sonia Cromp", "Satya Sai Srinath Namburi GNVV", "Catherine Cao", "Mohammed Alkhudhayri", "Samuel Guo", "Nicholas Roberts", "Frederic Sala" ]
While advances in large language models (LLMs) have greatly improved the quality of synthetic text data in recent years, synthesizing tabular data has received far less attention. Many of the top-performing approaches to this problem rely on techniques that adapt models originally developed for other modalities, potentially leaving generative performance on the table. We address these disparities in attention and performance for tabular data by introducing Tabby, a simple but powerful post-training modification to the standard Transformer-based language model architecture that enables its use for tabular dataset synthesis. Tabby relies on Gated Mixture-of-Experts layers, allowing each data column to be modeled by a dedicated set of parameters within the transformer multi-layer perceptrons or language modeling heads. Applying Tabby to Distilled-GPT2 improves synthetic data quality up to 7% compared to previous tabular dataset synthesis methods, achieving performance near or equal to that of real data.
[ "tabular", "generative", "llm", "mixture-of-experts", "synthesis", "transformer" ]
Reject
https://openreview.net/pdf?id=2RNGX3iTr6
https://openreview.net/forum?id=2RNGX3iTr6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pYj1yuqhku", "gYEjYp0Rot", "cUHjw14n4t", "ZqNXp4SaLc", "Yty2fmGZgc", "U0rgTrVjJJ", "SxJeC3pV3e", "NqbbMvkVkE", "NS5m2n4pMe", "Kza4IYDrDo", "IzlD5MXJqg", "I6UBQiEByK", "DTYGW1tH8y" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1730740535302, 1732664916697, 1730565777523, 1732128541114, 1732070445997, 1737523504204, 1730357960326, 1732071770574, 1732128788783, 1732128511378, 1732664990684, 1734619892940, 1732665028460 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2450/Reviewer_mxT7" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ], [ "ICLR.cc/2025/Conference/Submission2450/Reviewer_MewM" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2450/Reviewer_RYWz" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ], [ "ICLR.cc/2025/Conference/Submission2450/Area_Chair_rP92" ], [ "ICLR.cc/2025/Conference/Submission2450/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes to use a MOE LLM fine tuned on table data for synthetic table data synthesis. The authors find that their method outperforms previous methods on table synthesis benchmarks.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The proposed method outperforms other methods on most datasets and metrics presented.\", \"weaknesses\": [\"There is no significant difference between tabby and the non-tabby (I assume no MOE?) baseline. Given that MOE has a lot more parameters, this is a negative finding.\", \"The papers contributions are very minor - applying MOE to a narrow problem (table generation). And the results are not all that strong.\", \"It's not easy from the presentation what exactly do the tasks require, what exactly are the baselines and model variations.\"], \"questions\": \"Can you please detail the various architectures MMLP, MH and MMLP+MH?\\nWhy does MMLP+MH underperform, even though it is more complex?\\nDo you replace every layer with MOE?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe thank you again for your feedback, questions, and suggestions! We believe we have answered all of your questions in our response. If you have additional questions, we would love to answer them!\\n\\nSincerely,\\nThe Authors\"}", "{\"summary\": \"This work introduces a new model called Tabby for tabular data. Tabby is an architecture modification that enables transformer-based language models to synthesize more realistic tabular data. It introduces Gated Mixture-of-Experts layers to better model the complex interdependencies and diverse data types found in tabular datasets. Tabby outperforms previous tabular data synthesis methods, achieving outstanding performance on multiple benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Tabby achieves strong performance in benchmark evaluation. It generates high-quality synthethic tabular data in comparison with the baseline methods.\\n2. The introduction of MoE shows effectiveness in helping the model understand tabular data structure and generate higher-quality tabular data.\", \"weaknesses\": \"1. The design of MoE layer is complex. For a table with V columns, this article should design an MoE model with V experts to adapt to the table. This is not generalizable to data of diverse formats. It is suggested to modify the model design to be more compatible and more generalizable.\\n2. Scalable experiments are advised to be conducted. This study needs to provide experimental results on datasets of larger scales and also more commonly used datasets.\\n3. The experiments are advised to be conducted on contemporary large language models, including Llama, Qwen, Mistral, instead of Distilled-GPT2.\", \"questions\": \"1. Have you conducted experiments on the recently released large language models? If yes, which model sizes did you choose?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"For your questions, please refer to point #3 for each of the model definitions. In our paper, each transformer block MLP is replaced with an MoE layer for the Tabby MMLP and MMLP-MH models, but it is also possible to experiment with only replacing the MLPs in a subset of the transformer blocks. Doing so may offer similar performance to Tabby MMLP or MMLP-MH, but with reduced parameter count.\\n\\nAs for why Tabby MMLP-MH does not perform as well as Tabby MH models, we hypothesize that it may be due to overfitting and memorization of training samples. We have conducted additional experiments since the deadline to determine the extent to which each of the tabular synthesis methods output memorized training samples during synthesis, and **find that Tabby MH models do not memorize at a rate significantly different from the prior works**.\\n\\nWe thank you again for your time and opinions!\\n\\n1. B. van Breugel and M. van der Schaar. Why Tabular Foundation Models Should Be A Research Priority, May 2024. URL [https://arxiv.org/abs/2405.01147v2](https://arxiv.org/abs/2405.01147v2).\\n2. M. F. Davila, S. Groen, F. Panse, and W. Wingerath. Navigating Tabular Data Synthesis Research: Understanding User Needs and Tool Capabilities, May 2024. URL [http://arxiv.org/ abs/2405.20959](http://arxiv.org/abs/2405.20959).\\n3. X. Fang, W. Xu, F. A. Tan, J. Zhang, Z. Hu, Y. Qi, S. Nickleach, D. Socolinsky, S. Sengamedu, and C. Faloutsos. Large Language Models (LLMs) on Tabular Data: Prediction, Generation, and Understanding \\u2013 A Survey, Feb. 2024. URL [http://arxiv.org/abs/2402.17944](http://arxiv.org/abs/2402.17944).\"}", "{\"comment\": \"We appreciate your feedback and careful attention to our paper! We are very happy with the promising results demonstrated by Tabby and grateful that you have noted this effect in your review. To address your comments:\\n1. **On performance gain**. We note that **_Tabby achieves similar results to Tab-DDPM, but with far fewer assumptions on the structure of the dataset_**. For example, as demonstrated in Figure 2 and discussed in Section 4.1.1, Tab-DDPM is not capable of generating real-valued numbers in the label column. In other words, Tab-DDPM is only capable of synthesizing regression datasets with integer-valued labels, which is a significant limitation that Tabby fully circumvents through its LLM architecture design. Additionally, Tab-DDPM is limited in its support for diverse column modalities. For instance, we have applied Tabby to a private dataset that contains a column of IP addresses (i.e. strings of the format 01.234.567.89 and similar). Tabby and other LLM-based tabular synthesis architectures are fundamentally capable of learning to synthesize these formatted strings and create novel IP addresses not occurring in the training dataset, whereas Tab-DDPM must model the IP column as categorical and can only synthesize datapoints with IP addresses that occur in the training data. This limitation poses severe privacy implications for datasets that contain columns such as home addresses, hospital patient names, telephone numbers and more. Tabby achieves similar performance to Tab-DDPM, while removing these limitations on column modalities.\\n2. **On simplicity**. While simple, Tabby is **_the first LLM architecture modification for tabular data synthesis_**. While the MoE approach is intuitive and achieves good results, as demonstrated in our evaluations and considering the modality limitations of the similarly-performing prior work (Tab-DDPM), the implementation of architecture modifications in large pretrained models is often far from simple. As we share with Reviewer MewM, the Tabby architecture modification will serve as the inspiration and starting point for additional architectural modifications to support modalities other than tabular data, including relational and geospatial data. We believe we have just scratched the surface here, and that experimenting with further modifications to support additional structured data modalities will produce additional value.\", \"as_for_your_questions\": [\"**On computation**. Tabby **_does not_** require any more tokens than the prior LLM works in our main results. In fact, our way of organizing tabular data is drawn from GReaT, and is also used in TapTap and Tabula.\", \"**On scaling**. Our Section 4.2, \\u201cTabby Performance as a Function of Base Model Size\\u201d, addresses how Tabby impacts different models (Distilled-GPT2 versus 8 billion parameter Llama 3). It finds that Tabby Distilled-GPT2's MLE performance is over halfway between the MLE performances of non-Tabby Distilled-GPT2 and Llama ($\\\\frac{\\\\texttt{MLE(Tabby DGPT2)}-\\\\texttt{MLE(Non-Tabby DGPT2)}}{\\\\texttt{MLE(Non-Tabby Llama)}-\\\\texttt{MLE(Non-Tabby DGPT2)}} \\\\approx 59.30\\\\\\\\%$), while Tabby Distilled-GPT2's increase in parameter count compared to non-Tabby Distilled-GPT2 is only about 2% of 8B Llama's increase in parameter count compared to non-Tabby Distilled-GPT2 ($\\\\frac{\\\\texttt{params(Tabby DGPT2)}-\\\\texttt{params(Non-Tabby DGPT2)}}{\\\\texttt{params(Non-Tabby Llama)}-\\\\texttt{params(Non-Tabby DGPT2)}} \\\\approx 2.39\\\\\\\\%$). We have updated our paper to include a plot of this scaling curve (Figure 3 in main text). Meanwhile, the Machine Learning Efficacy (MLE) metric in our main results (defined in Section 4.0.3 with results in Section 4.1 and Table 2) compares the performance of a downstream model trained fully on real versus synthetic data for each of the evaluated synthesis approaches. According to this metric, Tabby synthetic data achieves equivalent performance to real data in 3/6 evaluated datasets. While MLE is the standard metric by which table synthesis methods are evaluated, the exploration of a metric including a downstream model trained both on some real and some synthetic data could be an interesting area of future work.\", \"**On ablations**. Our main results table (Table 2) and analysis in Section 4.1 compares the performance of a plain-trained non-Tabby LLM and a plain-trained Tabby LLM. We are happy to provide any additional ablations.\", \"We thank you again for your time and thoughts!\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this paper, the authors present a tabular data synthesis approach, Tabby. The novelty of Tabby lies in two main aspects: (1) modifying the original transformer model by applying MoE-like techniques to better model tabular data, and (2) designing a specialized data format for tabular data. Experimental results show that Tabby achieves comparable performance to the previous state-of-the-art, Tab-DDPM, and outperforms GTT NT.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The model modifications and data organization are well-motivated and intuitive.\", \"The distribution of the synthesized data is very close to the natural data.\", \"The experimental results looks good.\"], \"weaknesses\": [\"Tabby seems achieve comparable results to Tab-DDPM with marginal performance gain in Table 2.\", \"The method is quite simple and not much effective in final performance.\"], \"questions\": \"Q1: Have you computed the FLOPs for training on different datasets? It seems that Tabby uses a fixed pattern to organize tabular data, which may require more tokens for computation.\", \"q2\": \"Regarding Claim 2, could you provide a scaling curve showing performance relative to model size or data quantity? It would be interesting to see how Tabby impacts different models and how the amount of Tabby data influences the learning process. Additionally, a comparison of the scaling curve between Tabby data and natural data would serve as evidence of Tabby data being a scalable alternative to natural data.\", \"q3\": \"I'm not sure if the modification to original network is necessary. Is there an ablation study?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for taking the time to read our paper and sharing your thoughts! We are very excited to share our findings on the usefulness of MoE for the tabular synthesis community. To address your comments:\\n1. **On MoE design**. Indeed, when we have $V$-column data, we make a model with $V$ experts. This design is generalizable to any tabular dataset, where these are defined as any dataset consisting of $V$ columns and $N$ rows. While all prior related works to Tabby focus uniquely on tabular data as well, we note that Tabby **_can easily be generalized_** to be compatible with even more modalities that are similar to tabular data. For example, hierarchical or tree-structured data can be addressed with the same underlying design\\u2014 the main challenge is not specifically the structure, but rather the potential complexity from adding very large numbers of heads. However, this can be circumvented by using parameter sharing. For example, in the hierarchical case such as JSON objects, we can create nested MoE layers: the outer-level objects are modeled by the outer-level experts, which each are blocks comprised of inner-level experts that model the inner-level objects. \\n2. **On scalability**. Our choice of datasets for our experiments is **_simply based on the standard for tabular synthesis works, enabling for easy comparison_**. In particular, most works in our main results table hinge their analyses on the Adult, House and Diabetes datasets, which we center in our paper. Here are the datasets used in each of the prior publications that we compare against in our main results:\\n - [GReaT](https://arxiv.org/abs/2210.06280): *Adult, House, Diabetes*, Travel, Sick, HELOC\\n - [TapTap](https://arxiv.org/abs/2305.09696): *Adult, House, Diabetes*, HELOC, Credit score, Loan, Dubai housing, Crab age, Medical cost, Gem, Beam, Sick\\n - [Tabula](https://arxiv.org/abs/2310.12746): *Adult*, Loan, Covtype, Intrusion, King, Insurance\\n - [Tab-DDPM](https://arxiv.org/abs/2209.15421): *Adult, House, Diabetes*, Abalone, Buddy, Cardiovascular, Travel, Facebook, Gesture, Higgs, House 16H, Insurance, King, MiniBooNE, Wilt\\n - [CTGAN/TVAE](https://arxiv.org/abs/1907.00503): *Adult, House*, Covertype, Intrusion, News\\n- In our draft\\u2019s Section 4.4 (around line 515), we shared similar thoughts as you mention in your review: that the standard tabular synthesis benchmark datasets should include more challenging data. In our paper, **_we worked to include higher-diversity evaluation datasets than the prior works by including additional regression datasets (Abalone and Rain)_** while still allowing our evaluation setup to be comparable to prior works. Regression datasets have also been largely overlooked in prior tabular synthesis works, which often use only the House dataset as a representative for all regression tasks. Our main results table demonstrates that the synthesis methods (prior works included) struggle even more on regression datasets than on classification ones. This issue is further demonstrated by our Figure 2 and Section 4.1.1, which demonstrate that **_prior works require notable assumptions to generate regression data, which Tabby models do not_**. \\n3. **On LLM usage**. We agree! **_We conduct experiments using larger models (LLaMA 3 8B)_** and include them in the main section of our paper (Section 4.2). In the main results (Table 2 and Section 4.1), we work with Distilled-GPT2 because the prior works on tabular synthesis with LLMs (GReat, TapTap and Tabula) all implement Distilled-GPT2. As such, this choice allows for maximum comparability with the prior works. \\n\\nIn reference to your Question, please also refer to our paper\\u2019s Section 4.2, \\u201cInvestigating the Choice of Base Model\\u201d.\\n\\nThank you again for sharing your thoughts on our paper and we would appreciate any additional feedback!\"}", "{\"comment\": [\"We thank the attentive reviewers for their feedback and comments on our paper. We have received much valuable feedback that allows us to better explain the advantages of Tabby as compared to prior works, such as its freedom from prior assumptions on data column modalities. We reiterate the contributions and strengths of Tabby here:\", \"**Tabby achieves state of the art performance on the standard tabular evaluation metrics, with fewer limitations than the prior state of the art method** (Tab-DDPM). Tab-DDPM is unable to generate realistic datasets with real-valued numerical labels and is unable to synthesize strings to the same degree as LLMs, resulting in limitations to the privacy of Tab-DDPM\\u2019s synthetic data and the ability of Tab-DDPM to model areas of the data distribution that do not occur in its training data (for further details on these limitations, refer to our comment to Reviewer RYWz). Tabby wholly circumvents these limitations, while achieving comparable state of the art performance and even outperforming Tab-DDPM on some datasets.\", \"**Tabby is the first architecture modification that allows LLMs to directly generate tabular data**. Because the prior LLM tabular synthesis approaches (e.g. GReaT, TapTap and Tabula) are training techniques, they can be used in concert with Tabby. Further, Tabby is applicable to any transformer-based LLM, as we demonstrate in Section 5.2.\", \"**Tabby\\u2019s Mixture of Experts (MoE) layer design is flexible and may serve as a basis for future work in other structured modalities**, such as JSON objects and geospatial data. While our paper demonstrates the benefits of replacing language modeling heads and transformer block MLPs with MoE layers, there are many other possible combinations of MoE layers. For instance, future work can replace attention heads with MoEs or nest MoEs within each other to generate nested data structures such as JSONs. Tabby\\u2019s high performance for tabular synthesis demonstrates that these architecture variations bring large improvements in our ability to generate many varieties of structured data using pre-existing, pretrained architectures such as LLMs.\", \"We again thank our reviewers for the careful attention that they have devoted to our paper, as well as the PC, AC and others who assist in the conducting of ICLR\\u2019s rigorous reviewing process.\"]}", "{\"comment\": \"Thank you for your feedback on Tabby. We respond to each of your points below.\\n1. **On performance compared to Non-Tabby models**. Indeed, Non-Tabby means that we do not use MoE (refer to definition in Section 3.1 for details). **_Our main results in Table 2 and Section 4.1 demonstrate that Plain MH Tabby models outperform the prior best LLM-method (GTT Non-Tabby) on each of the six datasets that we evaluate_** in the primary metric of Machine Learning Efficacy (MLE). The Plain MH Tabby model reaches upper-bound performance in 3/6 datasets, whereas the GTT Non-Tabby model does not reach upper-bound in any of the datasets. Furthermore, **_our Plain MH Tabby model outperforms the Plain Non-Tabby model on five out of six datasets_** (both Plain MH Tabby and Plain Non-Tabby reach upper-bound performance on the sixth dataset, Adult) on MLE. According to these results, **_the Plain MH Tabby model significantly outperforms all preexisting LLM-based tabular synthesis methods_**.\\n2. **On significance of contribution**. As stated in Section 1 (Introduction), **_there have been \\u201cmany calls for improved tabular approaches\\u201d [1,2,3]_**. Tabular data is critical for many domains, and the development of high-fidelity tabular synthesis methods will have important implications such as improved data augmentation and missing value imputation for tabular tasks, as well as the ability to generate synthetic datasets that will preserve the privacy of original datasets in domains such as medicine. Furthermore, **_the tabular modality is part of a broader class of structured modalities_**, including data in nested or tree structures. Positive findings in the area of tabular data, such as Tabby, will spur future progress in the broader modalities of structured data as well.\\n3. **On definitions of tasks, baselines and model definitions**. We address each of these in-turn:\\n - **Tasks**: **_We use the standard evaluation tasks defined by prior tabular synthesis works_**, such as [GReaT](https://arxiv.org/abs/2210.06280) and [Tab-DDPM](https://arxiv.org/abs/2209.15421). While these tasks are described in detail in Section 4.0.3, we provide a brief summary here. The primary evaluation of a tabular synthesis method on a given dataset is **_Machine Learning Efficacy (MLE)_**. Denote the dataset\\u2019s train and test splits by $R$ and $D$, respectively. We first train our synthesis method on $R$, then use the synthesis method to create a synthetic dataset $S$. To calculate MLE, a downstream random forest classifier or regressor (determined by the dataset\\u2019s *label column*, which is specified in Table 1), denoted $K_R$, is trained on $R$ to predict some predetermined label column. Another classifier or regressor, denoted $K_S$ is trained on $S$ to predict the same label column. Then, both $K_R$ and $K_S$ are evaluated on the test dataset $D$. The difference in test-time performance between $K_R$ and $K_S$ is referred to as MLE. Further details are shared in Section 4.0.3, around lines 274-287, and our secondary metric of Discrimination Score is then detailed in lines 288-295. Our datasets are introduced in Section 4.0.2 and Table 1, with further information such as download links and descriptions of their columns in Appendix A.\\n - **Baselines**: Section 4.0.1, entitled \\u201cBaselines and Comparisons\\u201d, specifies each of our baselines. In particular, we include the prior LLM training techniques for tabular data (GReaT, TapTap and Tabula, and referred to as GTT when used in-concert with each other), the longstanding popular GAN- and VAE-based approaches (CTGAN and TVAE), and the prior state of the art Tab-DDPM, which uses a diffusion model architecture.\\n - **Model Definitions**: We define each of our architectures in Section 3.1, \\u201cArchitecture of Tabby Models\\u201d, around lines 154-157, and Figure 1 offers a visual comparison of Non-Tabby, Tabby Multi-MLP (MMLP) and Multi-Head (MH) models. For dataset with $V$ columns, compared to a standard, Non-Tabby transformer-based LLM, the Tabby MMLP model replaces the MLP within each of its transformer blocks with an MoE layer of $V$ experts and the Tabby MH model replaces its language modeling head with an MoE layer of $V$ experts. The Tabby MMLP-MH model replaces each of the transformer MLPs and the language modeling head with $V$-expert MoE layers.\\n\\n(continued in next comment)\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe thank you again for your feedback, questions, and suggestions! We believe we have answered all of your questions in our response. If you have additional questions, we would love to answer them!\\n\\nSincerely,\\nThe Authors\"}", "{\"metareview\": \"Tabby is a method for transformer-based Large Language Models (LLMs) to synthesize high-fidelity tabular data. Tabby produces higher-quality synthetic data for 4 out of 6 datasets compared to previous methods. An architecture modification is proposed for the task. The reviewers argue that the additional complexity of MoE design is undesirable, and the paper needs more scalable experiments to show generalizability. The paper needs further improvements to make it more solid.\", \"additional_comments_on_reviewer_discussion\": \"The decisions are consistent before and after the rebuttal period. The paper needs a major revision to meet the reviewers' expectation.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe thank you again for your feedback, questions, and suggestions! We believe we have answered all of your questions in our response. If you have additional questions, we would love to answer them!\\n\\nSincerely,\\nThe Authors\"}" ] }
2R7498e2Tx
PersonalLLM: Tailoring LLMs to Individual Preferences
[ "Thomas P Zollo", "Andrew Wei Tung Siah", "Naimeng Ye", "Ang Li", "Hongseok Namkoong" ]
As LLMs become capable of complex tasks, there is growing potential for personalized interactions tailored to the subtle and idiosyncratic preferences of the user. We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user. Departing from existing alignment benchmarks that implicitly assume uniform preferences, we curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences. Instead of persona prompting LLMs based on high-level attributes (e.g., user race or response length), which yields homogeneous preferences relative to humans, we develop a method that can simulate a large user base with diverse preferences from a set of pre-trained reward models. Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms that grapple with continual data sparsity---few relevant feedback from the particular user---by leveraging historical data from other (similar) users. We explore basic in-context learning and meta-learning baselines to illustrate the utility of PersonalLLM and highlight the need for future methodological development.
[ "Personalization", "LLM", "Alignment", "benchmark", "dataset", "reinforcement learning from human feedback", "language models", "RLHF", "preferences" ]
Accept (Poster)
https://openreview.net/pdf?id=2R7498e2Tx
https://openreview.net/forum?id=2R7498e2Tx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rsR7KkI68Y", "pQfpxoNtCv", "opAEZXHYvn", "h2UqVhSGeg", "dGZx3YVODr", "bS4QqGiB3e", "UyIFi4dswJ", "U2nFSRAQLW", "RYdTEN0G8d", "QaDPdqdYtE", "GZPWRpvOfS", "GPHBdOHNPJ", "FY6hulnaTO", "Dbd7jVldS6", "DBYTVOIMxO", "D6snoecEma", "CW25sKK9Kh", "6gowSv9Ke0", "5JbiRIQsdH", "3CyRvdPiEQ", "2zDH6DV2OF", "2UeF5DcB6o" ], "note_type": [ "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733079383439, 1730512941299, 1733189997164, 1734767269238, 1732009765597, 1733079324669, 1732010097712, 1733079479204, 1737523642517, 1730601507350, 1732302059727, 1732009969362, 1732302088663, 1732302133938, 1730469360895, 1733190601479, 1732010221034, 1732010570709, 1732638447584, 1730712793777, 1732302113939, 1732010597554 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Reviewer_ndCs" ], [ "ICLR.cc/2025/Conference/Submission4473/Reviewer_oPDc" ], [ "ICLR.cc/2025/Conference/Submission4473/Area_Chair_8G5P" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4473/Reviewer_YmDG" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Reviewer_oPDc" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Reviewer_ndCs" ], [ "ICLR.cc/2025/Conference/Submission4473/Reviewer_kUTu" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ], [ "ICLR.cc/2025/Conference/Submission4473/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer again for taking the time to offer feedback on our paper. As the extended discussion period ends tomorrow, we hope that the reviewer might consider our rebuttal and please let us know if there are any remaining questions or concerns that we might be able to address. Thank you!\"}", "{\"summary\": \"This paper builds a dataset of open-ended prompts and high-quality responses where users might be expected to have different preferences, a method of sampling direct different user preferences based on reward models, and proposes different algorithms for personalization using data across multiple users. In addition, they empirically validate that their proposed method of sampling user preferences beats a baseline persona-based method for generating diverse user preferences.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"Originality: The paper proposes (as far as I know) an original method for generating diverse user preferences.\", \"quality\": \"The paper both creates a high-quality dataset, as well as empirically validates that its methodology creates diverse preferences at least as diverse as a persona-based method.\", \"clarity\": \"The paper is clearly written.\", \"significance\": \"The paper establishes a dataset and methodology for generating diverse user preferences, which is very important for studying LLM personalization.\", \"weaknesses\": \"1) The paper uses reward models from a leaderboard (as opposed to fine-tuning to persona data or something), which means that the reward models are all high-quality, but may result in reward models which are less distinct from each other than they might otherwise be. The paper clearly justifies this as not preventing their resampling method from reaching higher diversity than persona-based prompting, but are there other sources of high quality reward functions that might be more different from each other?\\n2) Similarly, were the leading LLMs used to sample the 8 preferences prompted with personas? The different LLMs might be somewhat more similar to each other than they need to be, but of course resampling the dataset could be quite expensive, and the dataset is quite valuable as is.\", \"questions\": \"1) Are there other sources of high-quality reward functions that can be used?\\n2) Were the leading LLMs used to sample the 8 preferences prompted with personas?\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"It's worth it to double-check that including the LLM responses in a dataset is within the relevant terms of use -- my impression is that generally they are, but it should be double-checked.\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the clarifications.\"}", "{\"metareview\": \"This paper introduces PERSONALLLM, a novel dataset for advancing research in the personalization AI domain. The dataset captures user preferences through prompts accompanied by eight responses generated by various large language models (LLMs), including GPT-4 and Claude 3. To benchmark performance on PERSONALLLM, the authors propose in-context learning and meta-learning methods as baseline approaches for two personalization scenarios. Experimental results reveal significant room for improvement in addressing personalization challenges within the dataset, highlighting the potential for further advancements in this field.\", \"positive_points\": [\"This paper introduced PersonalLLM, which is an impactful direction to enhance the user experience.\", \"The dataset proposed in the paper is well-analyzed.\", \"The paper is well written.\"], \"negative_point\": [\"The simulated dataset can be not reliable (e.g., linear combination of the reward models)\", \"The comparison of the paper with other relevant personalized are insufficient\"], \"additional_comments_on_reviewer_discussion\": \"In the rebuttal period, the authors have addressed most concerns raised by the reviewers. The two negative ratings are both 5. I have read the comments from the reviewers, I believe most of the concerns can be easily addressed in the final version. As a result, I recommend acceptance of the paper.\"}", "{\"title\": \"Author Response\", \"comment\": [\"We thank the reviewers for their time and careful feedback. We appreciate that all reviewers highlighted the significance of the personalization problem and the potential for PersonalLLM to make a novel contribution to the field of LLM personalization along several dimensions.\", \"Motivated by the key methodological gap in personalizing LLMs, we provide an empirical testbed that can spur algorithmic innovations. We release a new open-source dataset with over 10K open-ended prompts paired with 8 high-quality responses from top LLMs scored by 10 different SoTA reward models.\", \"We propose a novel method for sampling diverse \\\"personas\\\" via randomly weighted ensembles of reward models, to avoid the need for opaque and expensive GPT4o evaluations or unreliable (and possibly discriminatory) persona prompting. Unlike standard approaches, our novel method creates a diverse set of preferences.\", \"At its core, our work is guided by the belief that the value of a benchmark lies in its capacity to drive methodological progress. We do not claim our personas replicate human behavior\\u2014this is a lofty goal and outside the scope of this work. Instead, we aim to create a rigorous and reasonable simulation environment that serves as an empirical foundation for innovation in LLM personalization.\", \"Our benchmark creates new possibilities for algorithmic development, by providing a challenging enough setting that methodological progress therein can imply progress on real applications. As an analogy, while ImageNet is noisy and synthetic--differentiating between 120 dog breeds is not a realistic vision task--it provides a challenging enough setting that methodological progress on ImageNet implies progress on real applications. We thus believe PersonalLLM represents a meaningful step forward in advancing the personalization of language-based agents.\", \"Also, we note that we attempted to follow best practices by including a dataset card, to inform users about potential concerns and how to responsibly use the data. We also discuss the risks and ethical implications of our dataset release in Section 6. If there are any remaining concerns that we can allay here, please let us know.\", \"Below, we respond to each reviewer's individual concerns. We have also submitted an updated manuscript reflecting reviewers\\u2019 concerns.\"]}", "{\"comment\": \"We thank the reviewer again for taking the time to offer feedback on our paper. As the extended discussion period ends tomorrow, we hope that the reviewer might consider our rebuttal and please let us know if there are any remaining questions we can answer. Thank you!\"}", "{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for their consideration of and feedback on our submission. Please see below for responses to specific questions and comments.\\n\\n**The paper is unclear about what the preference is in the data; is it user preference of items in recommender systems or any replacement of NLP tasks or others?**\\n\\nOur testbed is meant to explore user preferences with respect to different possible LLM responses to a user query. We have attempted to convey this in the first 3 paragraphs of the introduction, especially lines 40-46, as well as Figures 1-3. If the reviewer has any further suggestions to clarify this point we would be happy to update the submission.\\n\\n**The paper is uclear about how the PERSONALLLM is formulated, the author presented the reward model, but how it is trained/built up.**\\n\\nWe detail our approach to producing simulated personal reward models in Section 2.2. To summarize, we address the challenge of simulating diverse user preferences using a set of strong, open-source RLHF reward models (sourced through RewardBench https://huggingface.co/spaces/allenai/reward-bench). We generate simulated users by sampling weighted combinations of these models, defining user-specific preferences as weighted sums of reward scores from a selected set of 10 high-performing models, such as Llama3 and Mistral. This approach enables scoring of any (prompt, response) pair through simple weightings, providing a scalable framework for studying personalization.\\n\\n**The author illustrates the heter preference PERSONALLLM involves in which differs from the home ones, but how these two preferences demonstrate is not clear.**\\n\\nOne of our main goals in creating PersonalLLM was to create a set of preference models and data such that the preference models would offer heterogeneous preferences over the responses in the data. In order to verify our approach, in Section 3 we examine whether populations of personal preference models sampled via the method outlined in Section 2.2 do in fact display heterogeneous preferences over the prompt/response pairs in our dataset, and compare to the popular persona prompting baseline. In Figure 4 and the resulting analysis, we find that our method produces heterogeneous preferences over our dataset of prompts and responses, considerably more so than persona prompting an LLM. For example, under our method the most popular response to a query receives a majority user vote for only about half of the prompts, while that figure is closer to 90% for the persona prompting baseline. Also, for roughly 60% of prompts, at least 5 different answers are chosen as the best by at least 1 user under our set of simulated preference models; for LLM persona prompting, it is roughly 30%, meaning that for most data examples, at least half of potential responses are not preferred by a single user. Finally, our ensembled preference models have a much more diffuse set of preferences over the response-generating LLMs than persona prompting.\\n\\n**What is the relationship between PERSONALLLM and recommender system? Is it a replacement of existing ones or a more general preferenc-based system incuding RS? Why?**\\n\\nOur work on PersonalLLM is inspired by classic recommender systems in several ways. First, we aim for PersonalLLM to allow for the simulation of a large number of users, enabling the study of the full personalization paradigm for applications such as search engines and recommender systems wherein a historical database of user data is leveraged to personalize new interactions. We also build on the existing paradigm of using simulated rewards for developing recommender systems. Further, the setting in Section 4.2 resembles typical\\nrecommendation systems, but \\u201cactions\\u201d are now defined over the space of natural language outputs instead of a fixed set of items. We attempt to highlight this throughout the submission, but we will make sure to emphasize it further in the camera-ready version.\"}", "{\"comment\": \"We thank the reviewer again for taking the time to offer feedback on our paper. As the extended discussion period ends tomorrow, we hope that the reviewer might consider our answers to their questions, as well as the changes that we have made to our submission in response to their concerns. We would also be happy to respond to any remaining concerns or questions. Thank you!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper aims to propose a dataset called PERSONALLLM for the personalization AI area, which contains users\\u2019 preference illustrated by a prompt with eight responses. Specifically, the user responses are built up by various LLMs, e.g., GPT4, Claude 3.\\n\\nThe authors then propose in-context learning and meta-learning methods as baselines for two scenarios from PERSONAL. The results show that there is much room for improvement in solving the personalization problem in the proposed PERSONAL.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper uses multiple LLMs to generate various responses to improve the confidence of dataset.\\n2. The paper provides a specific analysis of the dataset.\", \"weaknesses\": \"1. The paper is unclear about what the preference is in the data; is it user preference of items in recommender systems or any replacement of NLP tasks or others?\\n2. The paper is uclear about how the PERSONALLLM is formulated, the author presented the reward model, but how it is trained/built up.\\n3. The author illustrates the heter preference PERSONALLLM involves in which differs from the home ones, but how these two preferences demonstrate is not clear.\", \"questions\": \"Answering and solving the weakness questions clearly can greatly help the reviewer target the focus of the paper. For the reviewer, these issues require a lot of time to carefully polish the paper before they can be completed. In addition, the review would ask:\\nWhat is the relationship between PERSONALLLM and recommender system? Is it a replacement of existing ones or a more general preferenc-based system incuding RS? Why?\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Follow-up\", \"comment\": \"Hello, as the discussion period will be ending in a few days, we wanted to follow up and see if there are any remaining questions we can answer or any other changes we can make to address the reviewer\\u2019s concerns. Otherwise, we hope that the reviewer may consider raising their score. Thank you again for the time and consideration.\"}", "{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for the time and care taken in reviewing our submission. We are encouraged that they felt we had taken an impactful direction, and that they recognized the importance of using meta-learning to address the data sparsity issue in these settings. Below, we respond to particular points of feedback.\\n\\n**The personal preference models used to simulate diverse user preferences are not convincing enough to represent real users. First, it is difficult to verify whether the linear combination of scores from reward models aligns with the distribution of user rewards in the real world. Second, the candidate responses generated by LLMs may not cover real-world user-specific responses, making it challenging for LLMs to learn user-specific preferences or align with user-specific backgrounds. For instance, users may have particular preferences or habits that general reward models inherently struggle to account for when providing accurate rewards.**\\n\\nOur goal is not to produce a fully realistic simulation of human behavior, but instead to create a challenging simulation environment that can serve as an empirical foundation for innovation in LLM personalization. In this case, this primarily means creating a diverse enough set of preference models such that different users prefer different responses. In Section 3 we show that creating diverse preferences is challenging with existing approaches and that we resolve this technical bottleneck with our simulation method. Given that our testbed is the first to enable the study of settings where a large historical database of user data can be leveraged to personalize new chat outputs for new users, we believe that PersonalLLM represents a meaningful contribution towards advancing the personalization of language-based agents.\\n\\n**The paper lacks an overarching figure that illustrates the construction logic of the dataset and what the samples within the dataset look like.**\\n\\nWe appreciate the reviewer pointing this out, and have added a new Figure 7 that illustrates the construction logic of the dataset. Further, we have added an example of what a data sample looks like to Appendix A.\\n\\n**The comparison of the paper with other relevant personalized LLM benchmarks, such as the LaMP dataset.**\\n\\nThank you for pointing this out. We have added LaMP, as well as other relevant LLM personalization benchmarks, to our related works section.\\n\\n**Some related concepts are not clearly explained, such as 'interaction history', 'preference data', and 'user data,' which are not well defined.**\\n\\nThank you for the suggestion, we have attempted to clarify these terms in the paper.\"}", "{\"title\": \"Author-follow\", \"comment\": \"Hello, as the discussion period will be ending in a few days, we wanted to follow up and see if there are any remaining questions we can answer, or any other changes we can make to address the reviewer\\u2019s concerns. Otherwise, we hope that the reviewer may consider raising their score. Thank you again for the time and consideration.\"}", "{\"title\": \"Author Follow-up\", \"comment\": \"Hello, as the discussion period will be ending in a few days, we wanted to follow up and see if there are any remaining questions we can answer or any other changes we can make to address the reviewer\\u2019s concerns. Thank you again for the time and consideration.\"}", "{\"summary\": \"This paper presents a new dataset of simulated preferences. The data consists of 10K prompts X 8 responses from different LLMs for each prompt X 10 rewards from different reward models. 1000 simulated users are sampled, where each user\\u2019s preferences are defined by a weighted sum of rewards (the weights are sampled from a Dirichlet distribution). The data is then used in in-context learning (ICL) for improving the LLM responses w.r.t. the user\\u2019s preferences.\\n\\nPersonalization is achieved by ICL, adding examples of good/bad responses according to the weighted reward. The results (Figure 6 left) show that using ICL with historical preferences can improve performance compared to zero-shot.\\n\\nLearning across users is proposed, retrieving other users with similar preferences from a set of simulated users, and using their preferences for ICL. The results (Figure 6 right) show a small improvement when using both positive and negative preferences compared to ICL using only the user\\u2019s history.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The large-scale dataset could be useful for LLM development.\", \"The alternative to persona-based simulated users seems novel.\"], \"weaknesses\": [\"It is stated in the paper that the goal is not to match preferences of a distribution of real users, but rather to generate diverse preferences that are more heterogeneous/diverse. I think that this requires more justification since random preferences would give even higher diversity but may not be useful.\", \"Clarity/readability could be improved (see detailed questions).\"], \"questions\": [\"In line 76 it says \\u201cin doing so we are able to simulate an entire user base\\u201d. On the other hand it says in line 102 that \\u201cWe do not claim our simulated personal preference models provide a high-fidelity depiction of human behavior\\u201d, so this may be a bit confusing and you may want to rephrase these statements. After reading the first one I was hoping for some evaluation of how realistic the simulated users are. This is actually done in \\u201cComparison to Human Preferences\\u201d in Section 3, so I guess you are doing some of that? If the goal is to obtain high *coverage* rather than matching the distribution of users, perhaps this can be made explicit and possibly evaluated against real user behavior? Perhaps some measure of support instead of Wasserstein? It would be also interesting to compare the results in Figure 5 to those from standard personas baselines.\", \"Actually, if the goal is coverage then random preferences should give better coverage, but are probably not very useful, so just optimizing coverage doesn\\u2019t seem to be a good objective.\", \"Can you please clarify the objective here?\", \"Another potentially interesting baseline is to have each user choose one of the rewards, a hard choice instead of a weighted sum. There will only be 10 user \\u201ctypes\\u201d, so it may be interesting to see how the results change in that case.\", \"Sometimes there are long multi-line sentences that could be simplified to improve readability and flow. It is easier to read a paper that has no sentences that span more than 2 lines. Some examples:\", \"\\u201cGiven the expected data sparsity in this setting, beyond a particular user\\u2019s data, such personalized language systems will likely also rely on historical data from other (similar) users to learn how to learn from a small set of new user feedback (see Figure 2).\\u201d Could be simplified/broken (by an LLM): \\u201cThese personalized language systems will likely use more than just one user's data due to the expected data sparsity in this setting. They will also depend on historical data from other similar users. This helps them learn effectively from a small amount of new user feedback (see Figure 2 for more details).\\u201d\", \"\\u201cWe do not claim our simulated personal preference models provide a high-fidelity depiction of human behavior, but rather offer a challenging simulation environment that provides the empirical foundation for methodological innovation in capturing the complex array of human preferences that arise in practice.\\u201d Could be made easier to read (by an LLM): \\u201cWe don't claim that our simulated personal preference models perfectly mimic human behavior. Instead, they offer a challenging simulation that provides a basis for developing new methods. This helps in better capturing the complex range of human preferences encountered in real life.\\u201d\", \"\\u201cWhile human evaluation like that of Kirk et al. (2024) is a gold standard, wherein fine-grained preference feedback is gathered from a representative sample of diverse and multicultural participants, it is impractical or even impossible to get this feedback throughout the methodology development cycle, meaning that synthetic personal preference models will ultimately be needed.\\u201d I had to read this one slowly a couple of times\\u2026\", \"Line 354: \\u201cTwo first-order problems\\u2026\\u201d can be losslessly simplified to \\u201cTwo problems\\u2026\\u201d.\", \"Line 254: choosing only 500 personas may be too little if the goal is to achieve heterogeneity, especially since 1000 users are sampled for PersonalLLM. Can you please include results with 1000 personas? It may actually be interesting to see how the results change when increasing the sample size for both persona and PersonalLLM.\", \"Line 257: \\u201cwe can see that the top response receives a majority user vote for only about half of the prompts, while that figure is closer to 90% for the persona prompting baseline.\\u201d Sorry, I could not read that from the figure, can you please explain how the results show this?\"], \"also_in_line_258\": [\"\\u201cAlso, for roughly 60% of prompts, at least 5 different answers are chosen as the best by at least 1 under our set of personas; for LLM persona prompting, it is roughly 30%.\\u201d Please explain.\", \"Line 274: \\u201cWith respect to changes across the left 3 columns, we can observe that as \\u03b1 increases, preferences become more uniform. However, if \\u03b1 is set too low, user preferences cluster very tightly around the base reward models; we observe this behavior for \\u03b1 = 0.01.\\u201d \\u2014 looking at the figure, it actually seems like there is not much difference between the first 3 columns. Is there a better way to show this difference?\", \"Line 294: \\u201cIn Figure 5 (right), we compare the entropy in the population preferences over the responses to a given prompt based on keywords, comparing words we would expect to inspire heterogeneity (e.g., imagine, opinion, poem) to prompts beginning with \\u201cwho\\u201d, \\u201cwhen\\u201d, and \\u201cwhere\\u201d, which evoke more objective answers.\\u201d This was not clear to me, maybe add a formal definition and/or an equation for the entropy? Also, how do standard personas compare to the proposed approach in this task?\", \"In Section 4.2, is it mentioned how response (and prompt) embeddings are computed?\", \"Minor/typos:\", \"Line 32: Christiano et al., 2017, not 2023\", \"In Figure 6 (left), the dashed line is missing from the legend. I am guessing this is the zero-shot performance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for considering our rebuttals. If our responses and updated submission have sufficiently addressed your concerns, we ask that you might consider raising your score.\"}", "{\"title\": \"Author Response\", \"comment\": \"We thank the reviewer for the time and consideration offered in reviewing our paper. We are encouraged to see that they see our work as original and clearly presented, and that they feel our testbed may make a significant contribution to the field of LLM personalization. Below we respond to particular points of feedback.\\n\\n**The paper uses reward models from a leaderboard (as opposed to fine-tuning to persona data or something), which means that the reward models are all high-quality, but may result in reward models which are less distinct from each other than they might otherwise be. The paper clearly justifies this as not preventing their resampling method from reaching higher diversity than persona-based prompting, but are there other sources of high quality reward functions that might be more different from each other?**\\n\\nWe agree with the reviewer that it is worth considering other methods for producing a large and diverse set of high-quality reward functions, given the acknowledged shortcomings of our approach. We are not aware of any such methods at this time but hope that researchers take inspiration from this work and are able to develop more faithful and diverse simulators in the future. We have further acknowledged this concern and the need for further work in the \\u201cFuture Directions\\u201d section of our paper.\\n\\n**Similarly, were the leading LLMs used to sample the 8 preferences prompted with personas? The different LLMs might be somewhat more similar to each other than they need to be, but of course resampling the dataset could be quite expensive, and the dataset is quite valuable as is.**\\n\\nWe hit our budget constraint as an academic lab producing the LLM responses for our dataset, and were not able to probe the effects of persona prompting on these responses from these strong models. We agree that this would be a very interesting direction for future research, and hope this is enabled by the release of our dataset and pipeline code.\\n\\n**It's worth it to double-check that including the LLM responses in a dataset is within the relevant terms of use -- my impression is that generally they are, but it should be double-checked.**\\n\\nThank you for this suggestion. We checked this before submission but did not explicitly state this in our dataset card in the appendix. We have added this explicitly in Section A.2.1.\"}", "{\"title\": \"Author Response\", \"comment\": \"We sincerely appreciate the reviewer for the time and care taken in reviewing our submission and offering feedback on how we might improve our paper. We are encouraged that they agree with the value and novelty of our method of simulating diverse personas for methodological development. Below, we respond to your particular comments. Noted changes can be viewed in our updated submission PDF.\\n\\n**It is stated in the paper that the goal is not to match preferences of a distribution of real users, but rather to generate diverse preferences that are more heterogeneous/diverse. I think that this requires more justification since random preferences would give even higher diversity but may not be useful.**\\n\\nWe agree with the reviewer that diversity in and of itself is not the only important criteria for a simulated benchmark. By building our simulated personas on top of reward models trained with human preference feedback, our simulated reward models inherit some reasonable biases about human preferences, while still exhibiting the desired diversity. We believe that our analysis in Section 3 shows that our simulated users achieve a good tradeoff between offering reasonable representations of human preferences while overcoming the technical bottleneck in creating diverse preference targets.\\n\\n**In line 76 it says \\u201cin doing so we are able to simulate an entire user base\\u201d. On the other hand it says in line 102 that \\u201cWe do not claim our simulated personal preference models provide a high-fidelity depiction of human behavior\\u201d, so this may be a bit confusing and you may want to rephrase these statements. After reading the first one I was hoping for some evaluation of how realistic the simulated users are. This is actually done in \\u201cComparison to Human Preferences\\u201d in Section 3, so I guess you are doing some of that?**\\n\\nWe appreciate this concern, and have updated Line 76 to more clearly reflect the nature of our preference models.\\n\\n**If the goal is to obtain high coverage rather than matching the distribution of users, perhaps this can be made explicit and possibly evaluated against real user behavior? Perhaps some measure of support instead of Wasserstein? It would be also interesting to compare the results in Figure 5 to those from standard personas baselines. Actually, if the goal is coverage then random preferences should give better coverage, but are probably not very useful, so just optimizing coverage doesn\\u2019t seem to be a good objective. Can you please clarify the objective here?**\\n\\nWe adopted the methodology of (Santurkar et al., 2023) for evaluating our simulated user base on OpinionQA, in order to measure how well human preferences are represented by our simulated users. We felt that this made for the strongest basis of comparison, and also allowed including the baseline results from other LLMs without reproducing outputs. With respect to coverage, this was roughly our goal in evaluating across the 60 demographic groups. We aimed to ensure that the preferences exhibited by our simulated users were reasonable with respect to many different segments of the population, and found positive results. In consideration of the reviewer\\u2019s concern, we will attempt to expand these comparisons before the camera-ready version.\\n\\nRegarding Figure 5, based on the reviewer\\u2019s suggestion we have extended these experiments to the persona prompting baseline. These new results can be seen in Figure 8.\\n\\n**Another potentially interesting baseline is to have each user choose one of the rewards, a hard choice instead of a weighted sum. There will only be 10 user \\u201ctypes\\u201d, so it may be interesting to see how the results change in that case.**\\n\\nWe agree that this is an interesting setting, and may reflect many applications where users exist in tight \\u201cclusters\\u201d. We have clarified in line 277 that this can be achieved by lowering the alpha parameter for the Dirichlet distribution for the sampling of weightings.\\n\\n**Sometimes there are long multi-line sentences that could be simplified to improve readability and flow. It is easier to read a paper that has no sentences that span more than 2 lines.**\\n\\nWe have shortened these sentences, and others that we felt were too long. Thank you for this suggestion.\\n\\n**Line 254: choosing only 500 personas may be too little if the goal is to achieve heterogeneity, especially since 1000 users are sampled for PersonalLLM. Can you please include results with 1000 personas? It may actually be interesting to see how the results change when increasing the sample size for both persona and PersonalLLM.**\\n\\nBased on this concern, we have updated Figure 4 (and Figure 8) with results from 1,000 randomly sampled personas, to make for a better comparison with the PersonalLLM simulated user population.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thanks for double-checking the terms of use! I stand by my positive assessment, and appreciate the authors explaining their constraints.\"}", "{\"summary\": \"The paper introduces PersonalLLM, a public benchmark designed to personalize Large Language Models (LLMs) to better align with individual user preferences. The benchmark focuses on simulating diverse personal preferences using a set of pre-trained reward models. The dataset consists of open-ended prompts paired with multiple high-quality LLM responses, and the goal is to optimize personalization by leveraging historical user data. Basic baselines, including in-context learning and meta-learning, are explored to showcase the utility of this benchmark, setting the stage for future research into personalization algorithms for LLMs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. PersonalLLM provides a way to enhance the personalization of LLMs, which is an impactful direction to enhance the user experience.\\n\\n2. The benchmark includes extensive open-ended prompts with responses from state-of-the-art LLMs. \\n\\n3. The paper highlights the use of meta-learning to address data sparsity issues by leveraging historical interactions, which is crucial for real-world applications where personalized models lack sufficient user-specific data.\", \"weaknesses\": \"1. The personal preference models used to simulate diverse user preferences are not convincing enough to represent real users. First, it is difficult to verify whether the linear combination of scores from reward models aligns with the distribution of user rewards in the real world. Second, the candidate responses generated by LLMs may not cover real-world user-specific responses, making it challenging for LLMs to learn user-specific preferences or align with user-specific backgrounds. For instance, users may have particular preferences or habits that general reward models inherently struggle to account for when providing accurate rewards.\\n\\n2. The paper lacks an overarching figure that illustrates the construction logic of the dataset and what the samples within the dataset look like.\\n\\n3. The comparison of the paper with other relevant personalized LLM benchmarks, such as the LaMP dataset.\\n\\n4. Some related concepts are not clearly explained, such as 'interaction history', 'preference data', and 'user data,' which are not well defined.\", \"questions\": \"see the weakness\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Follow-up\", \"comment\": \"Hello, as the discussion period will be ending in a few days, we wanted to follow up and see if there are any remaining questions we can answer or any other changes we can make to address the reviewer\\u2019s concerns. Thank you again for the time and consideration.\"}", "{\"title\": \"Author Response (continued)\", \"comment\": \"**Lack of clarity in results stated in lines 257, 258, and 274.**\\n\\nWe have rewritten this paragraph to make it easier to observe the findings that we state. Also, we have added a new figure to clarify the point about how changes in alpha affect a simulated user base (line 257, 988).\\n\\n**Line 294: \\u201cIn Figure 5 (right), we compare the entropy in the population preferences over the responses to a given prompt based on keywords, comparing words we would expect to inspire heterogeneity (e.g., imagine, opinion, poem) to prompts beginning with \\u201cwho\\u201d, \\u201cwhen\\u201d, and \\u201cwhere\\u201d, which evoke more objective answers.\\u201d This was not clear to me, maybe add a formal definition and/or an equation for the entropy? Also, how do standard personas compare to the proposed approach in this task?**\\n\\nWe have updated this to clarify that we use the standard Shannon entropy (line 297) to measure the entropy in the distribution of preferences over the responses. Also, in response to the reviewer\\u2019s request, we have performed the experiments from Figure 5 on the persona prompting baseline. These results are shown in Figure 8 (line 935).\\n\\n**In Section 4.2, is it mentioned how response (and prompt) embeddings are computed?**\\n\\nOur method for extracting text embeddings is noted in Section 4 lines 359-360, and user embeddings are explained in Section 4.2 lines 443-446. We have also added a word to line 359 to clarify the point with respect to all text embeddings.\"}" ] }
2QkWSUMQh5
Robustness of Truss Decomposition and Implications for GNN-based Edge Classification
[ "Jakir Hossain", "Sucheta Soundarajan", "A. Erdem Sariyuce" ]
Truss decomposition is an effective and practical algorithm for dense subgraph discovery. However, it is sensitive to the changes in the graph: dropping a few edges or a bit of noise can drastically impact the truss numbers of the edges. It is of practical importance to understand and characterize the robustness of truss decomposition. In this work, we study and utilize the robustness of truss decomposition in an edge-driven way. We propose to construct a dependency graph among edges to denote the impact of an edge's removal on the neighboring edges. By using the dependency graph, we introduce three measures to capture the diverse and unique properties of the edges. We provide theoretical findings and design an efficient algorithm to compute the dependency graph faster than the naive baseline. We also show that our new edge-based truss robustness measures capture intrinsic graph structures and have the potential to unearth peculiar differences that can help with various downstream tasks, such as edge classification. We integrate our measures into the state-of-the-art GNN for edge classification and demonstrate improved performance on multi-class datasets. The overhead of computing our edge-based measures is insignificant when compared to the training time. We believe that utilizing edge-based truss and robustness measures can further be helpful in edge-driven downstream tasks.
[ "Graph mining", "dense subgraph discovery", "truss decomposition", "robustness", "edge classification" ]
Reject
https://openreview.net/pdf?id=2QkWSUMQh5
https://openreview.net/forum?id=2QkWSUMQh5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z024LGi11u", "yXSZbXIPpX", "tE915cWvx1", "rhRcVooPLN", "rGGfEaDPlT", "oWNMsaQqWl", "kWj8YnU6oj", "hM0QC6I1Fw", "gVKmX1iJyQ", "d8hFWvxlKS", "aJbmoOXTyj", "TrNbS323Ml", "NGFNu2he7c", "MHOFeDPav4", "LVpCz0n18K", "LA4Ufx2V3W", "L8xwBDQ7LI", "Km2irAH1O4", "FKF7obQ7Bd", "EVuMAsLfya", "7Tn59s7Kqw", "7PKVvpZD1V", "5OKfotsy6F", "2Cb5MZR4d3" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1733149630194, 1733011069804, 1733159862822, 1732762671899, 1733011164564, 1732558562881, 1732558260470, 1733156733992, 1734732005861, 1730699919340, 1733283225786, 1732726397521, 1730584085186, 1740107757670, 1733159936183, 1730578674371, 1732777611851, 1732654267061, 1737524206845, 1732558191096, 1732750980907, 1732558454497, 1732853883363, 1730255636515 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_kpuv" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_Hyhz" ], [ "ICLR.cc/2025/Conference/Submission12661/Area_Chair_kLic" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_Hyhz" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_Ujfr" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_oeMQ" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_kpuv" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_oeMQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_Ujfr" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Authors" ], [ "ICLR.cc/2025/Conference/Submission12661/Reviewer_kpuv" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the authors' prompt and detailed response. However, the new experimental results open more questions. Regarding W6, why does Cohen's d give a negative value on MIND?\\nRegarding W5, as shown in the new results, ERC is lower than Coreness + Degree and has worse accuracy. It seems the motivation to use this new model and algorithm is not very strong. Hence, I'd like to retain my score.\"}", "{\"comment\": \"Thanks again for your valuable feedback. Could you please acknowledge that you read our response to your comments? If your concerns are addressed, we\\u2019d be grateful if you can adjust the score. If not, we\\u2019d love to engage further to address your comments.\"}", "{\"comment\": \"Thank you for your response. Among the six graphs, only in the MIND graph does Coreness + Degree achieve higher accuracy than our proposed metrics (see Table 2). As a result, Cohen's d yields a negative value for MIND. For the runtime comparison with Coreness + Degree, we would like to mention that our approach demonstrates comparable performance, with better runtime observed in half of the datasets.\"}", "{\"comment\": \"Thank you so much for your acknowledgement. We would be grateful if you could champion our work to further support its acceptance.\"}", "{\"comment\": \"Thanks again for your valuable feedback. Could you please acknowledge that you read our response to your comments? If your concerns are addressed, we\\u2019d be grateful if you can adjust the score. If not, we\\u2019d love to engage further to address your comments.\"}", "{\"comment\": \"Thank you for your comments. Here, we address the Weaknesses you mentioned.\\n\\n**W1**: We did not introduce truss-related concepts such as truss number and trussness support; these terms were defined in earlier works (Cohen, 2008; Zhang & Yu, 2019). We have used these terms as described in those papers and provided their definitions in Lines 128-130 and Lines 135-136. Additionally, Figure 1 illustrates information related to the truss number and triangle count (which is related to trussness support). For further clarification, we recommend referring to the original papers. Additionally, thank you for pointing out the missing cardinality notation; we have updated it in the revised version.\\n\\n**W2**: Thank you for highlighting the inconsistency in the dependency graph. The descriptive text is accurate; however, the edge numbering (in 2 pairs) was assigned incorrectly. We have updated the figure in the revised PDF for clarity. \\n\\n**W3**: The detailed description of EdgeRank is provided in Lines 231-237. We have added the new formula and its description in Lines 238-241 of the revised PDF. \\n\\n**W4**: We acknowledge that ER shows a low standard deviation in Figure 2b. However, additional results provided in the Appendix (Figure 7) show that ER exhibits higher standard deviation on other graphs. While Figure 2b might suggest that ER contributes less, the ablation study results and Figures 7 and 8 demonstrate that ER also plays a significant role in improving edge classification performance.\\n\\n**W5**: Please note that Chen et al.'s approach is not directly comparable to ours. While their work focuses on addressing the community breaking problem to make a graph k-truss free, our study centers on edge-based truss robustness, making the two approaches fundamentally different. Chen et al. examine the robustness and stability of communities, whereas we focus on the robustness of individual edges. We provide runtime data to show that the ERC algorithm is more efficient than naive baselines. Other metrics (like RS_OD, RS_ID, degree, and core number) are included to show how effective our measures are compared to existing ones. Hence, we didn\\u2019t include the runtime of other metrics.\\n\\n**W6**: Although the improvement over coreness+degree in Table 5 may appear modest, our proposed metrics show an improvement of up to 3.03% (on the UNSW-NB15 graph). Additionally, we provide the p-values from the t-test comparing the last two columns (coreness+degree vs. our metrics) for each graph. These results confirm that the performance improvements in our metrics are statistically significant.\\n\\n| Graph | p-value |\\n|--------------|------------------|\\n| AMiner | 0.04416100726 |\\n| MAG | 0.001182955794 |\\n| MIND | 1.18E-72 |\\n| BoT-IoT | 0.00139144425 |\\n| ToN-IoT | 5.34E-22 |\\n| UNSW-NB15 | 4.13E-61 |\\n\\n\\n**W7**: In this paper, we focused on evaluating the individual contributions of truss robustness metrics. We also conducted experiments combining all the features, which resulted in better overall performance. The results are provided in Table 6 at Appendix of the revised pdf. While merging all the features yields the best results, the most significant improvement comes from our truss robustness metrics, as supported by the t-test p-values provided above in W6.\"}", "{\"comment\": \"Thank you for your comments. First, we address the Weaknesses you mentioned.\\n\\n**W1**: To demonstrate the usefulness of the proposed truss robustness metric, our study primarily focuses on edge classification. We agree that exploring other applications would better show its usefulness, and we have mentioned in our future work (see Lines 534-535) plans to apply this metric to tasks like link prediction.\\n\\n\\n**W2**: We have already provided the performance of core decomposition metrics, including the core number sum. Its comparable performance is reported in Table 2 (5th column). Please refer to Lines 471-472 for more details.\\n\\n\\nNow, we would like to address the questions you asked,\\n\\n**Q1**: While our current work focuses on edge classification as an initial application of truss robustness, we acknowledge its potential for other edge-centric tasks, such as link prediction. We have already noted in our future work (see Lines 534-535) plans to extend the application of truss robustness to these tasks and explore its broader implications in graph representation learning.\\n\\n**Q2**: Thank you for pointing this out. Although the current study does not explicitly measure the sensitivity of our truss robustness metrics to noise, we intend to address this in future research. By conducting controlled perturbation experiments, we aim to gain a deeper understanding of how noise influences truss robustness scores.\"}", "{\"comment\": \"Thanks to the author for the reply. I don't think those major concerns have been fully addressed and I will maintain the score.\"}", "{\"metareview\": \"The authors investigate edge-level robustness in truss decomposition. Specifically, they construct a dependency graph that captures the impact of each edge's deletion on its neighboring edges. Based on this graph, they define three edge-level robustness measures. Algorithmically, they propose an efficient method for constructing the dependency graph. On the application side, they demonstrate the utility of these measures for edge classification tasks.\\n\\nThe reviewers found the concept of truss robustness interesting and appreciated the theoretical soundness of the proposed algorithm.\\n\\nHowever, they raised the following concerns:\\n- W1: The connection between the proposed concepts and their application to edge classification needs to be more explicitly established.\\n- W2: The experiments could be more extensive with additional tasks, backbone models, and competitors.\\n- W3: Runtime comparisons are missing.\\n\\nWhile some concerns, including W3, were addressed during the discussion period, the paper still has substantial room for improvement. The meta-reviewer recommends that the authors revise their work based on the provided feedback and submit it to a future conference.\", \"additional_comments_on_reviewer_discussion\": \"Despite the discussion between the authors and reviewers, several concerns remained unresolved.\"}", "{\"summary\": \"This paper quantifies the effect of removing an edge from a graph on the truss decomposition result. The authors construct a dependency graph to compute truss robustness of each edge and propose a faster heuristic based on their theoretical findings. The authors also show the effectiveness and efficiency of the proposed truss robustness to the edge classification task.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea of truss robustness and dependency graph is intuitive and interesting.\\n2. Theoretical findings in section 4 make the process of computing truss robustness efficient.\", \"weaknesses\": \"1. I like the first half of the paper, including the whole idea and conceptualisation of truss robustness and subsequent optimisation. However, it is unclear how truss robustness or truss decomposition effect on edge classification tasks. The authors present an interesting and computable quantitative metric for each edge, but where the metric can be effectively applied should be elaborated. It seems intuitive to me that there exists a significant portion of graph edge classifications that are not sensitive to truss robustness at all.\\n2. The applicability of truss robustness seems slightly narrower due to the fact that it can be used only as a feature for edge classification. Truss robustness is expected in the study of other edge-based tasks in graph representation learning such as link prediction.\\n3. Moreover, only one edge classification model TER-AER was reported in experimental result. And more experiments to verify the effectiveness of truss robustness on edge classification tasks are expected. Given the results so far, it seems that the entire work's only proposes a new feature for one model on edge classification task, making it appear that the potential impact of the entire work is limited.\", \"assorted_minor_comments\": \"1. I recommend that all mentioned notations should appear in Table 3.\\n2. In time complexity analysis: $|E^{1.5}| \\\\rightarrow |E|^{1.5}$\\n3. I suggest that the authors use a different notation to indicate that the set of edges sharing a triangle with a particular edge (i.e., $E(e, G)$) to distinguish from the notation of set consisting of all the edges of the graph.\\n4. In Line 137, $ts(e,G)=\\\\Gamma_{\\\\geq}(e,\\\\phi(e)\\\\text{-truss})/2 \\\\rightarrow ts(e,G)=|\\\\Gamma_{\\\\geq}(e,\\\\phi(e)\\\\text{-truss})|/2$ ?\", \"questions\": \"1. Can the authors go into more far-reaching detail about how truss robustness can help with the edge classification task?\\n2. Are there any other edge classification models apart from TER+AER for which truss robustness can be applied?\\n3. Are there any hyperparameters such as damping factor in edgerank? If so, how are these parameters chosen?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank all the reviewers for their valuable feedback and constructive suggestions. We have addressed all the concerns raised by the reviewers and updated our pdf accordingly (if necessary).\\n\\n\\nIn summary, to address the reviewers' comments, we have provided new results that demonstrate the applicability of our approaches. The key additions are:\\n- We have provided the p-values and Cohen's d from the t-test comparing the performance of the last two columns in Table 2 (coreness+degree vs. our metrics) for each graph. These results demonstrate that the performance improvements in our metrics are statistically significant and practically impactful.\\n\\n- The primary contribution of our paper is improving the performance of state-of-the-art edge classifications. However, to address reviewer kpuv's comment for runtime comparisons with other baselines, we conducted additional experiments and provided the results. These results demonstrate that our metrics achieve comparable runtime performance to simpler graph properties, while our ERC algorithm is 37 times faster than the computation of RS_{OD} and RS_{ID}.\\n\\nWe are hopeful that our clarifications and the newly provided results will be carefully considered and reflected in the final scores.\"}", "{\"comment\": \"We sincerely thank the reviewer for reviewing our responses and increasing the score.\\n\\nWe have carefully considered and responded to all the points and concerns raised by the reviewer. If there is anything further we can clarify (or do) to improve the score, please let us know. We would be happy to provide additional details.\"}", "{\"summary\": \"This paper introduces novel measures for edge-based robustness in truss decomposition, a method for dense subgraph discovery. The authors propose constructing a dependency graph among edges to model truss robustness and introduce three measures: Edge Robustness, Edge Strength, and EdgeRank. They provide theoretical findings and an efficient algorithm for computing the dependency graph. The paper demonstrates the effectiveness of these measures in improving edge classification tasks using Graph Neural Networks (GNNs).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The study presented in the paper fills a gap in the literature.\\n2. The toy exmple in Figure 1 is very helpful in understanding the concept.\\n3. The proposed measures show potential in improving downstream tasks like edge classification, particularly for rare classes in imbalanced datasets.\", \"weaknesses\": \"1. The paper primarily focuses on edge classification to demonstrate the effectiveness of the proposed measures. Exploring other applications could strengthen the work's impact.\\n2. Comparison with core decomposition SOTA measures could provide more context for the proposed measures' effectiveness.\", \"questions\": \"1. Have you considered applying these measures to other edge-centric tasks beyond classification, such as link prediction or graph matching?\\n2. How sensitive are the proposed measures to noise or small perturbations in the graph structure? Is there a way to quantify this sensitivity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thank you for your response. We believe we have addressed all the comments and questions you have. **Please let us know if there are any specific points we may not have addressed thoroughly, and we will be happy to provide further clarifications.**\"}", "{\"summary\": \"This paper aims to study the edge level truss robustness and improve the performance of edge classification.\\nThe authors propose three metrics to measure the truss robustness based on the dependency graph.\\nTo speed up the computation, they propose an algorithm based on the theorems of truss number computation.\\nThe experiments of edge classification have been conducted on six real-world graphs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Theoretical analysis is provided.\\n2. The algorithm for fast computation is proposed based on theorems.\", \"weaknesses\": \"1. The writing of the paper can be improved.\\n2. Crucial evaluations of the proposed metrics are missing.\", \"questions\": \"1. In line 96, use the math symbol \\u201c$\\\\times$\\u201d instead of an English character \\u201cx\\u201d.\\n2. In line 104, the word \\u201ccutting-edge\\u201d is overly strong.\\n3. In Figure 2b, why use standard deviation (STD) to measure the importance of edge features? First, are all the features normalized to ensure their STDs are comparable? Second, if the goal is effective classification, why not use the idea of linear probing and report the classification performance of a linear classifier?\\n4. In Section 3, last paragraph, continuing from the previous question, the role of this paragraph is unclear to me. These metrics are proposed to measure the truss robustness of an edge. Instead of showing how precise these metrics measure the truss robustness, this paragraph shows they are useful for edge classification. A paragraph showing how well these metrics measure robustness should be provided.\\n5. In Section 4, what is the time complexity of the naive computation of the dependency graph? How much faster is the proposed algorithm?\\n6. In Figure 3, the same problem as Question 4, showing the proposed metrics have different distributions from the existing ones does not justify their correctness. The important thing is to measure how accurate these metrics are in estimating truss robustness.\\n7. In Table 2, the last two columns seem to be statistically tied. Could you provide the p-values from the t-test?\\n8. In summary, in my opinion, it is important to study the truss robustness of the edges and have a fast algorithm. However, the writing of this paper and the title seem to emphasize its usefulness on edge classification. While the first part lacks crucial evaluations and data analysis, the second part lacks novelty. It would be helpful if the authors can clarify this.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response. However, I'm not fully convinced, especially for W5 and W6. Why not report runtime for other metrics? W6 What's the effect side (e.g., Cohen\\u2019s d) of your improvement apart from p-values?\"}", "{\"comment\": \"I thank the authors for the responses. I have reviewed the responses and increased the score accordingly.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your comments. First, we address the Weaknesses you mentioned.\\n\\n**W1**: Truss robustness measures the structural importance and stability of edges within their local graph topology, capturing relationships often overlooked by traditional features. This makes it a valuable addition to edge classification tasks. In Section 3, we detail the implications of truss robustness for edge classification, highlighting its effectiveness (see Lines 241-262 and Figure 2b for more information). While we acknowledge that truss robustness may not be beneficial for every edge, it can effectively distinguish edges that other metrics fail to differentiate (as shown in Figure 1). Furthermore, the per-class recall scores provided in Figure 4 demonstrate that truss robustness improves performance, particularly in rare classes and when dealing with imbalanced datasets (see Lines 501-508).\\n\\n**W2**: We appreciate your concern regarding the applicability of truss robustness. As the first to introduce edge-based robustness, we used edge classification to illustrate its practical relevance and effectiveness. While our work centers on edge classification, we recognize the potential applications of truss robustness in other edge-based tasks, such as link prediction. We have highlighted this in our future work section (see Lines 534-535).\\n\\n\\n**W3**: We chose TER+AER as baselines because they are state-of-the-art methods for edge classification, capturing both structural features and higher-order proximities. Instead of comparing with other models, we focused on evaluating truss robustness against other edge-based features to better evaluate. We also included results for the geometric variant of TER+AER, along with AUC scores and ablation study results, in the appendix due to space constraints.\", \"we_would_also_like_to_address_your_assorted_minor_comments\": \"**AMC1**: Thank you for your recommendation. Most of the notations are already listed in Table 3. However, we will ensure that any remaining ones are also included in the final version.\\n\\n**AMC2**: Thanks for noticing. This has been updated in the revised PDF.\\n\\n**AMC3**: We use $E$ to represent the edges of $G$ and $E(e, G)$ to denote the set of edges incident to $e$. These notations were not introduced by us but are adopted from previous studies to ensure consistency with existing work.\\n\\n**AMC4**: Thanks for pointing this out. This has been updated in the revised PDF.\\n\\n\\n\\nNow, we would like to address the question you asked,\\n\\n**Q1**: Please refer to our answer on W1 above.\\n\\n**Q2**: We selected TER+AER as baselines because they are the state-of-the-art methods for edge classification. Our truss robustness metrics can be integrated into any edge classification model, including those reported in Wang et al., 2023.\\n\\n**Q3**: We did not focus on hyperparameter tuning for PageRank and used the default damping factor of 0.85.\"}", "{\"comment\": \"Understood. I would like to thank the authors for their detailed response.\"}", "{\"comment\": \"Thank you for your comments. First, we address the Weaknesses you mentioned.\\n\\n**W1**: We have addressed all of the questions you raised. We would be happy to address any specific comments you have on improving the writing of the paper.\\n\\n**W2**: Please see our answers to your questions below.\\n\\n\\nNow, we would like to address the question you asked,\\n\\n**Q1**: Thank you for pointing this out. We have updated the symbol in the revised pdf.\\n\\n**Q2**: Wang et al. (2023) proposed the TER+AER approaches, which are state-of-the-art models for edge classification. Thus, we referred to their GNN-based approach as \\\"cutting-edge,\\\" but it can be replaced with \\\"recent\\\" if preferred.\\n\\n**Q3**: The standard deviation is used as a preliminary analysis to examine the variability of edge feature values across different classes, helping identify features that may contribute to distinguishing edges. \\n\\nWe have normalized all features, as mentioned in Line 256. \\n\\nWe appreciate the suggestion of using linear probing for assessing classification effectiveness. However, our goal was to present the importance of edge robustness in a simpler and more statistical manner, rather than employing machine learning approaches. Besides, our truss robustness measures capture complex relationships in graphs that linear probing cannot easily evaluate. The connections between edges in truss structures are non-linear, making them harder for linear probing to handle. Additionally, our datasets are imbalanced, making linear probing less effective at identifying rare classes.\\n\\n\\n**Q4**: The proposed metrics inherently measure truss robustness. For instance, Edge Robustness quantifies an edge's ability to retain its truss number when a neighboring edge is removed, while Edge Strength measures an edge's influence on changing the truss number of other edges. Note that the dependency graph is constructed by removing each edge in the graph (which is optimized in our algorithm), and the robustness metrics are derived from this process. Thus, these metrics inherently capture truss robustness without requiring additional validation. Additionally, the last paragraph highlights the practical value of these metrics by demonstrating their usefulness in tasks like edge classification. \\n\\n**Q5**: The runtime of our ERC algorithm is $O(|E|^{1.5} + |\\\\mathcal{S}_G| \\\\cdot |\\\\triangle(TCE_S)| + |E| \\\\cdot |TCE_S|)$, as stated in Line 397. In a naive approach, all edges in the graph would need to be removed instead of just the k-exposed edges. This would result in a runtime of $O(|E|^{1.5} + |E| \\\\cdot |\\\\triangle(TCE_S)| + |E| \\\\cdot |TCE_S|)$. \\n\\nOn average, our algorithm is 3.74 times faster than the naive baseline. This is mentioned in Line 462, and detailed results can be found in columns 10, 11, and 13 of Table 1\\n\\n\\n**Q6**: The purpose of Figure 3 is to highlight how the proposed metrics differ from existing ones, supporting their potential use in edge classification. Please refer to Lines 430-432 and Lines 455-457 for more details. For the relevance of our new metrics in measuring truss robustness, please see our response to Question 4 above.\\n\\n**Q7**: The last two columns are NOT statistically tied. Below, we provide the p-values of t-test for each graph, which demonstrate that the performance improvements in our metrics are statistically significant.\\n\\n\\n| Graph | p-value |\\n|--------------|------------------|\\n| AMiner | 0.04416100726 |\\n| MAG | 0.001182955794 |\\n| MIND | 1.18E-72 |\\n| BoT-IoT | 0.00139144425 |\\n| ToN-IoT | 5.34E-22 |\\n| UNSW-NB15 | 4.13E-61 |\\n\\n\\n\\n**Q8**: While the primary focus of this work is to study truss robustness and propose an efficient algorithm, we have also included edge classification as an application to demonstrate the practical utility of the metrics. \\n\\nTo address the comment that \\u201cthe first part lacks crucial evaluations and data analysis, and the second part lacks novelty\\u201d, we would like to clarify:\\n\\nIn the first part of the paper, we introduced the truss robustness metrics and provided a thorough analysis of their usefulness. Specifically, we conducted a standard deviation analysis of two datasets in Figure 2b, demonstrating the value of truss robustness for edge classification. The results on three other datasets are provided in Figure 7 at Appendix. **Please let us know what other evaluation you\\u2019d like to see.**\\n\\nIn the second part, we focused on applying these metrics to the edge classification task. The novelty of the paper is primarily in the introduction of the new feature, which has potential to contribute to the edge classification task. The per-class recall scores from Figure 4, show that truss robustness significantly enhances performance in rare classes and when dealing with imbalanced datasets (see Lines 501-508). \\n\\nAs the proposed metrics offer new insights and improved performance in edge classification, we believe our work has novelty.\"}", "{\"comment\": \"Thank you for your response and concerns. Please find our responses to your concerns on W5 and W6 below.\\n\\n**Runtime for other metrics (W5):**\\nWe have conducted new experiments to obtain the baseline runtime results. Before discussing the results, let us first mention that the degree, core number, trussness, and triangle count are simpler graph properties. Computation of degree and core number has a time complexity of O(|E|), while trussness and triangle count have complexities of O(|E|^1.5) and O(|\\u25b3(E)|), respectively (where \\u25b3(E) is the list of all triangles). RS_{OD}, RS_{ID}, and our truss robustness metrics are built on these simpler properties, with some additional computational overhead. Computing RS_{OD} and RS_{ID} (given the core numbers) takes O(|V| $\\\\cdot$ |E|) time. Our truss robustness metrics rely on trussness and triangle count, with the detailed complexity analysis provided in Lines 374-377 and 394-397. \\nRuntime results are as follows (in seconds):\\n\\n| Graph | Coreness + Degree | RS_OD + RS_ID | Trussness + Triangle Count | ERC Algorithm | ERC Speedup vs RS_OD + RS_ID |\\n|----------------|-------------------|--------------|----------------------------|---------------|--------------------------------|\\n| AMiner | 0.62 | 35.69 | 0.66 | 0.52 | 68.80 |\\n| MAG | 0.61 | 21.95 | 0.69 | 0.65 | 33.67 |\\n| MIND | 6.70 | 579.87 | 9.34 | 32.42 | 17.88 |\\n| NF-BoT-IoT | 2.07 | 159.32 | 2.23 | 1.93 | 82.45 |\\n| NF-ToN-IoT | 1.55 | 23.10 | 1.70 | 1.46 | 15.77 |\\n| NF-UNSW-NB15 | 17.92 | 177.36 | 22.84 | 45.89 | 3.86 |\\n\\nAs expected the coreness, degree, trussness, and triangle count metrics are computationally efficient due to their simpler calculations. In comparison, the computation of RS_{OD} and RS_{ID} takes significantly longer, with coreness + degree being approximately 70 times faster. On the other hand, our algorithm has a comparable performance with trussness + triangle count (as well as coreness, degree). Additionally, our ERC algorithm is faster than the computation of RS_{OD} and RS_{ID}, primarily due to the reduced number of edge removals in our approach. This results in a significant runtime efficiency, with our approach being about 37 times faster. These results emphasize the efficiency of our approach compared to the RS_{OD} and RS_{ID} (while its effectiveness is already confirmed by the results in Table 2). We will include these additional results in the Appendix.\\n\\n**Cohen\\u2019s d of p-values (W6):**\\nHere, we present an additional column for Cohen's d as requested by the reviewer.\\n\\n| Graph | p-value | Cohen's d |\\n|---------------|------------------|------------|\\n| AMiner | 0.04416100726 | 0.540 |\\n| MAG | 0.001182955794 | 0.884 |\\n| MIND | 1.18E-72 | -34.667 |\\n| BoT-IoT | 0.00139144425 | 0.873 |\\n| ToN-IoT | 5.34E-22 | 4.286 |\\n| UNSW-NB15 | 4.13E-61 | 20.884 |\\n\\nThe results suggest that our metrics have meaningful improvements across different datasets. For instance, AMiner exhibits a medium effect size of 0.54, indicating a meaningful performance difference. MAG and BoT-IoT show large effect sizes of 0.88 and 0.87, respectively, highlighting substantial improvements. ToN-IoT and UNSW-NB15 demonstrate extremely large effect sizes, with values of 4.29 and 20.88, indicating overwhelming improvements. These results suggest that our approach is not just statistically significant but also practically impactful across several datasets.\\n\\nWe believe these responses have addressed all of the reviewers' concerns. If there is anything else we can clarify or provide to improve the score, please let us know. We would be glad to provide further details.\"}", "{\"summary\": \"This paper, titled \\\"Robustness of Truss Decomposition and Implications for GNN-Based Edge Classification,\\\" addresses the sensitivity of truss decomposition in dense subgraph discovery. Truss decomposition is noted to be highly effective but sensitive to small changes, like edge removals, which significantly impact edge truss values. The authors propose a new framework for characterizing truss robustness on an edge level by constructing a dependency graph that captures the impact of each edge's removal on its neighbors. They further use the captured robustness and dependencies in downstream edge classification problem via GNN.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The abstract is well-written.\\n2. The observation of this paper is insightful.\\n3. The proposed method is interesting and mathematically grounded.\", \"weaknesses\": \"1. In section 2, the paper introduces several truss-related concepts (e.g., truss number and trussness support), which can initially be confusing, especially the distinction between trussness support and truss number. An example would help clarify these concepts and highlight their differences. Additionally, the definition of trussness support in the formula (line 137) is missing the cardinality notation \\\"\\u2223\\u2223\\\" and should be corrected for clarity.\\n2. In Figure 2(a), the dependency graph does not fully align with the truss number definition. For example, there should be a single directed edge between e2 and e1, and an edge should also exist between e2 and e5, right? Furthermore, the statement \\u201cas is the case (e3, e5) for which are incident on the left but not connected on the right\\u201d contradicts figure 2(a), as e3 and e5 are indeed unconnected in the dependency graph. 3. \\n3. In Section 3, the paper lacks a formula for computing EdgeRank, which reduces the transparency and reproducibility of the method.\\n4. In Figure 2(b), Edge Robustness (ER) shows a relatively low standard deviation, yet no explanation is provided. It would be helpful to discuss why ER might show limited variability across classes.\\n5. The Experiments section lacks a direct comparison with the baseline from Chen et al. (2021) and omits runtime data for other robustness indicators like RS_{OD}, RS_{ID}, degree, and core number, which makes the efficiency claims not fully supported by experimental results. Adding a comparison with Chen et al. (2021) and reporting the runtime of other measures would provide a more comprehensive evaluation of computational efficiency and better support the paper\\u2019s claims.\\n6. According to Table 5, the improvement of the proposed method over coreness+degree is quite marginal, and perhaps coreness is easier to compute than the metrics proposed in this paper. Any justifications or explanations?\\n7. Are there other combinations (among metrics proposed in this paper and previous degree, coreness, etc) that could achieve better results? Seems they can be combined?\\n\\nHuiping Chen, Alessio Conte, Roberto Grossi, Grigorios Loukides, Solon P Pissis, and Michelle Sweering. On breaking truss-based communities. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 117\\u2013126, 2021.\", \"questions\": \"Please refer to the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2QdsjiNXgj
On a Connection Between Imitation Learning and RLHF
[ "Teng Xiao", "Yige Yuan", "Mingxiao Li", "Zhengyu Chen", "Vasant G Honavar" ]
This work studies the alignment of large language models with preference data from an imitation learning perspective. We establish a close theoretical connection between reinforcement learning from human feedback RLHF and imitation learning (IL), revealing that RLHF implicitly performs imitation learning on the preference data distribution. Building on this connection, we propose DIL, a principled framework that directly optimizes the imitation learning objective. DIL provides a unified imitation learning perspective on alignment, encompassing existing alignment algorithms as special cases while naturally introducing new variants. By bridging IL and RLHF, DIL offers new insights into alignment with RLHF. Extensive experiments demonstrate that DIL outperforms existing methods on various challenging benchmarks.
[ "Alignment" ]
Accept (Poster)
https://openreview.net/pdf?id=2QdsjiNXgj
https://openreview.net/forum?id=2QdsjiNXgj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yY2AyWEirV", "yDQEHxJHm4", "uucY6xFpvl", "u4ME4QlTNR", "p9odDtYnCl", "oC3ThyCuPc", "mHWcfJnOgp", "kbnYgP2E6n", "kPaVuI8RIY", "fARNMzUWzA", "cBXT4sgXcV", "bLDWwPtsHK", "aWexqx4vWt", "aUP5t46woa", "ZhedrL6WiR", "ZXsBNGhn7M", "X1VS22Auoz", "UaqMZ33TpA", "SRkDpmXxIS", "N5vUwfi1d6", "MFJFTDgRf3", "KaBzvN6AfV", "JxQ8sylOGr", "JjkapYWinf", "8Sf9JZb4sB", "6z3qBybFeo", "6p2P1wr1tm", "3cCSvntUfr" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732426075730, 1732381007976, 1732380870015, 1732471055105, 1732418788951, 1732478455663, 1730476141943, 1730086529572, 1732643101937, 1737524051871, 1732210835988, 1732380086575, 1732512371213, 1732380401025, 1732397751115, 1732933130596, 1734661407527, 1732472082384, 1732571449387, 1732797814520, 1732379653494, 1732424499126, 1733096156428, 1731077772673, 1732379579655, 1730694666834, 1732463243577, 1733093571586 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_MhY4" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_hdJf" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_sbFs" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_MhY4" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_hdJf" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_MhY4" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Area_Chair_Ss4z" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Area_Chair_Ss4z" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_JHUL" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_sbFs" ], [ "ICLR.cc/2025/Conference/Submission10416/Authors" ], [ "ICLR.cc/2025/Conference/Submission10416/Reviewer_JHUL" ] ], "structured_content_str": [ "{\"title\": \"Further Response to Reviewer hdJf\", \"comment\": \"Dear Reviewer hdJf,\\n\\nThank you very much for carefully reading our response and increasing your score! We are very happy that our response has addressed your comments. We genuinely appreciate your support and will include the new results in the main paper.\\n\\nRegarding your observation, this was indeed a typo. We mistakenly wrote SLiC as KTO. Nevertheless, we are actively working on comparing our method with KTO on benchmarks.\\n\\n**We have updated the KTO results in the main Table 2 of the updated submission PDF. The results show that our DIL still achieves better performance than KTO on most widely used benchmarks, particularly on AlpacaEval 2.0 and MATH, demonstrating that DIL can more effectively preserve reasoning abilities compared to KTO while aligning with human preferences.**\\n\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer JHUL- part 2\", \"comment\": \"**Q5. \\\"Here, we use the set of rejected responses $y_l \\u223c \\\\pi_{ref}(y | x)$ to approximate the expectations under $\\\\pi_{ref}(y | x)$\\\". This is simply a wrong assumption. I do not know why the authors have chosen to make the assumption, but it feels like a contrived way to come to Equation 24 and form a connection to a DPO like objective.**\\n\\n**A5.** Thank you for your insightful comments! We believe there may be some misunderstandings. The assumption is indeed reasonable. It is acceptable to use the set of rejected responses to approximate the expectations since both chosen and rejected responses are indeed sampled from $\\\\pi_{ref}(y | x)$, as also demonstrated in [1,2]. \\n\\nFurthermore, we can use both chosen and rejected responses to approximate this expectation.\\nWe acknowledge that this approximation may have bias, and there are indeed many ways to approximate the expectations [1]. However, we have shown that with this approximation, DIL achieves better performance, as demonstrated in the following table. Additionally, we demonstrate that DPO with CPC as a density estimation method can be viewed as an imitation learning objective using this approximation, which we believe provides valuable insights. We have updated the paper to clarify this.\\n\\n| Methods | HumanEval | LeetCode | GSM8K | MATH | TheoremQA | AlpacaEval2.0 |\\n| --- | --- | --- | --- | --- |--- | --- | \\n| Chosen+Rejected | 29.5 | 2.9 |23.6 | 2.1 | 9.3 | 15.8 |\\n| Rejected | **33.5** | **3.4** | **32.2** | **3.0** | **12.5** | **21.7** |\\n\\nFrom the table, we can observe that utilizing rejected responses to approximate the expectation achieves better performance, which is reasonable as the goal is to decrease the likelihood of the rejected response rather than the chosen one.\\n\\n\\n[1] Self-Play with Adversarial Critic: Provable and Scalable Offline Alignment for Language Models. In Arxiv\\n\\n[2] Direct Preference Optimization: Your Language Model is Secretly a Reward Model. In NeurIPS 2023\\n\\n---\\n\\n**Q6. The results are on relevant benchmarks, but the improvement over DPO seems minor in most cases.**\\n\\n**A6.** Thank you for your excellent suggestions regarding potential additional experiments to justify the improvements over DPO. We want to emphasize that the improvements over DPO are indeed significant, especially on reasoning-heavy tasks. Recently, many researchers have observed that DPO generally decreases downstream task performance, particularly on reasoning-heavy tasks like Math and Coding. To verify this, we have included additional comparisons with DPO on more reasoning-heavy tasks, such as Coding (HumanEval, LeetCode, MBPP) and Math (GSM8K, MATH, TheoremQA).\\n\\n| Methods | HumanEval | LeetCode | GSM8K | MATH | TheoremQA | AlpacaEval2.0 |\\n| --- | --- | --- | --- | --- |--- | --- |\\n| Mixtral-7B-Base (SFT) | 28.1 | 3.3 | 28.1 | 2.3 | 7.0 | 6.2 |\\n| DPO | 31.7 | 2.2 |21.7 | 1.4 | 9.8 | 12.5 |\\n| SimPO | 26.5 | 1.9 |22.2 | 2.5 | 8.5 | 20.8 |\\n| DIL | **33.5** | **3.4** | **32.2** | **3.0** | **12.5** | **21.7** |\\n\\n\\nFrom the table, we observe that DIL achieves significant improvements over DPO. Moreover, it demonstrates that DIL more effectively preserves reasoning abilities, such as the mathematical and abstract reasoning skills of the base SFT model, and even significantly outperforms SFT in many cases. Consequently, DIL imposes a lower alignment tax, given its strong performance on the preference alignment benchmark, AlpacaEval 2.0.\\n\\n---\\n\\n**We gratefully appreciate your time in reviewing our paper and your comments. We have made extensive efforts to address your comments and believe that they adequately address all your comments. The reviewer's comments are mainly about some clarifications and are not fatal to the contributions of our manuscript; we believe that the reviewer's insightful comments can be easily and effectively addressed in the final version. We would be grateful if the reviewer could increase the score.**\"}", "{\"title\": \"Response to Reviewer JHUL- part 1\", \"comment\": \"Dear reviewer JHUL, we appreciate your efforts and detailed comments very much! However, we believe that there are some misunderstandings. Therefore, we would like to provide a point-by-point response to your comments.\\n\\n---\\n**Q1. My first main gripe with the paper is that the idea that RLHF in its entirety is performing imitation learning seems to stand on shaky foundations. A lot of leaps from one objective to another are required to arrive at this conclusion and a lot of the nuances in the differences between different objectives get lost along the way \\u2026**\\n\\n**A1.** Thank you very much for your insightful comments! \\n\\nWe agree that the title \\\"RLHF secretly performs imitation learning\\\" may overstate the case, particularly given the nuances of the transition between imitation learning and reinforcement learning within the RLHF pipeline. **We have revised the title in the updated submission to more accurately reflect this perspective: \\u201cDIL: Direct Imitation Learning for Preference Alignment and Connections to RLHF.\\u201d**\\n\\n**Although the choice of the loss function and optimization procedure differs, the central aim of our work is to emphasize that the optimal policies of RLHF and DPO are theoretically the same as the imitation learning process, they aim to discover identical optimal policies, i.e., the chosen response distribution. We would like to kindly remind the reviewer of our contributions compared to works based on RL as inference [1].**\\n\\n- **Theoretical Insight:** We are the first to show that the objective of current alignment methods, such as DPO and RLHF, can theoretically be viewed as fitting the chosen response distribution by minimizing the reverse KL divergence.\\n- **General Framework:** We provide a general framework based on Bregman divergence to directly optimize the reverse KL divergence between the policy and the chosen response distribution.\\n- **Empirical Results**: We demonstrate that our framework effectively alleviates the decrease in the likelihood of chosen responses and achieves better performance on reasoning-intensive tasks, addressing important limitations in DPO.\\n\\n[1] Reinforcement learning and control as probabilistic inference: Tutorial and review In ArXiv.\\n\\n---\\n \\n**Q2. The paper seems to be derived backwards as some of the connections made feel slightly contrived upon reading. E.g. the jump from the original objective to knowledge distillation mentioned above. The steps taken to arrive at a DPO like objective from density ratio estimation etc.**\\n\\n**A2.** Thank you for your comments. The knowledge distillation steps we provided are intended to demonstrate that the standard KL-regularized RLHF problem and DPO both aim to discover the same optimal policies as those obtained by conducting imitation learning on the chosen response using reverse KL divergence. \\n\\nBased on these insights, we propose a more general framework based on Bregman divergence to directly optimize the reverse KL divergence between the policy and the chosen response distribution. We have updated the paper to clarify the derivation step further.\\n\\n---\\n**Q3. The knowledge distillation connection seems tenuous (and already known), it seems more straightforward to think of the entire process as imitating a better policy as in MARWIL [2] or as chaining a specific variant of an improvement operator and distillation as already derived in detail for many different variants in [3].**\\n\\n**A3.** Thank you for your comments and mentioning these related works. We totally agree with the reviewer that knowledge distillation is well-known. However, The knowledge distillation steps we provided are intended to show that the standard KL-regularized RLHF problem and DPO both aim to discover the same optimal policies as conducting imitation learning on the chosen response with reverse KL divergence, which are inherently different from these works as also shown in our response to Q1. We will discuss more past works on knowledge distillation in the updated submission.\\n\\n\\n**Q4. E.g. \\\" In this section, we connect RLHF to the imitation learning framework. We show that RLHF is a special case of imitation learning problem by defining the following specialized energy-based mode\\\" in front of Eq 9, which very clearly is already derived in the DPO paper and literature on RL as probabilistic inference. It is fine to re-state such derivations but then please say: we built on a well known connection between RL and energy based models/probabilistic inference.**\\n \\n\\n**A4.** Thank you for your suggestions. We totally agree with the reviewer that the energy-based model in Eq. 9 is well-known; however, our contribution lies in establishing the connection between RLHF/DPO and imitation learning over the chosen response in the preference data (see our detailed response to Q1). We have updated the paper and rephrased this sentence as: \\u201cWe build our analysis on a well-known connection between RL and energy-based models.\\u201d We apologize for your confusion.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for the detailed response! Most of my concerns have been addressed, and hence I've increased the score.\\n\\nIt would be great if the authors could include both of the above tables (standard deviations & head-to-head with DPO/SimPO) in a final version of their paper.\"}", "{\"comment\": \"I have adjusted my score given the reviews and the replies. One last detail is the KTO results are missing despite being mentioned as a baseline in the paper, so it would be better if added.\"}", "{\"title\": \"Thank you to the authors\", \"comment\": \"Thank you for answering all my questions!\"}", "{\"summary\": \"This paper introduces a new method called Direct Imitation Learning (DIL), which is derived based on an imitation learning perspective on the alignment problem. Specifically, instead of minimizing the forward KL divergence as in SFT, DIL aims to minimize the reverse KL instead. This turns out to require estimating the density ratio $\\\\frac{\\\\pi_{\\\\mathrm{chosen}}}{\\\\pi_{\\\\mathrm{ref}}}$, which the authors show can be done through a Bregman divergence objective. Then, through a similar change-of-variables trick as used in DPO, the authors show that this reward objective can be instead minimized directly in terms of the relevant policies. Hence, the final objective directly optimizes $\\\\pi_{\\\\theta}$ through the Bregman divergence objective.\\n\\nThe authors also show that PPO and DPO can be seen as special cases of the proposed imitation learning formulation. Specifically, reward learning in RLHF can be formulated as a forward KL between $\\\\pi_{\\\\mathrm{chosen}}$ and $\\\\pi_{\\\\phi}$, and the RL step can be seen as a knowledge distillation process (through minimizing a reverse KL) into a final policy $\\\\pi_{\\\\theta}$. \\n\\nFrom the experiments side, the authors use the UltraFeedback Binarized dataset for evaluation on the Open LLM Leaderboard and show DIL is generally the best method across the board. For dialogue generation and summarization they use the Anthropic HH dataset and the Reddit TL;DR dataset and show through win rates (as judged by GPT-4) that DIL generally performs best against the SFT, Chosen, and Average responses. Finally, the authors also investigate the likelihood patterns of DIL and SimPO, which generally seem to show that the likelihood of chosen responses stay roughly the same while the likelihood of rejected responses goes down. This is unlike SimPO for which the likelihood of chosen responses also decreases.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The paper has several strengths:\\n1. The paper provides new mathematical connections between imitation learning formulations (various forms of forward and reverse KL optimizations) and previously established RLHF methods like PPO and DPO. As far as I'm aware, these connections are novel and have not been highlighted in past work, making them valuable insights for the community to further build on. \\n2. To optimize the proposed imitation learning objective, the paper integrates ideas from density ratio estimation [1] and a change-of-variables approach [2] (rewards -> policies) to directly learn the target policy $\\\\pi_{\\\\theta}$, avoiding complexities such as adversarial training.\\n3. Strong empirical results: the new method DIL seems to generally outperform all baselines in both the Open LLM Leaderboard as well in the summarization and dialogue generation settings.\\n\\n[1] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. Density-ratio matching under the bregman divergence: a unified framework of density-ratio estimation. Annals of the Institute of Statistical Mathematics, 64:1009\\u20131044, 2012.\\n\\n[2] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024.\", \"weaknesses\": \"The paper has several weaknesses:\\n1. While the empirical results seem to consistently outperform prior methods, I\\u2019m a bit worried about the statistical significance since the margins seem rather small sometimes (e.g. for Table 2, the improvements are almost always smaller than 1 percentage point). Could the authors include some significance tests or at least standard errors / CIs to provide a better sense of the significance of these improvements?\\n2. The exposition of the math/theory in the paper could have been a bit clearer (section 4). It took me some time to understand what actually is the final objective that DIL optimizes, and how it came to be. This is because, for example, at the end of section 4.3 the authors state \\u201cWith the estimated density ratio reward, the surrogate imitation learning objective in Equation (17) can then be solved with any RL algorithms.\\u201d, which initially made it seem like DIL would have to resort to RL optimization anyways. But then reading section 4.4 it turns out that\\u2019s not what happens and there is actually a different objective that\\u2019s maximized (eq. 24). Maybe one thing that could help here is to add a summary box either at the beginning or end of section 4 that summarizes the key steps to go from the general DIL objective (eq. 16) to the formulation in eq. 24. \\n3. Some parts of the paper require further clarification - please see the Questions section for this.\", \"questions\": \"1. In Section 5 under the models paragraph, the authors state \\u201cFor fine-tuning on UltraFeedback Binarized dataset, we use Zephyr-7b-SFT (Tunstall et al., 2023) and Llama3-8b-SFT used in (Meng et al., 2024) as our base models.\\u201d, but then in Table 2 the top results are labeled as Mistral-7B-Base. Should that be Zephyr-7B-SFT instead?\\n2. In Section 5, the authors mention KTO as part of the baselines, but it doesn\\u2019t seem the result tables include it? Also, SLiC is included in the result tables, but is not discussed in the baselines paragraph?\\n3. Could the authors include the base model (SFT) performances in Table 2?\\n4. In Table 3, what is the difference between Chosen and Average?\\n5. In Table 3, it might be interesting to compare win rates of DIL directly with DPO or other baselines. Is there a reason the authors didn't include this?\\n6. At the end of section 6.1, the authors state that \\u201cWe hypothesize that these improvements can be attributed to avoiding the\\u00a0BT assumption and preventing the decrease in the likelihood of chosen responses.\\u201d Could the authors elaborate on why avoiding the BT assumption could lead to these improvements? Do they have examples in mind where BT might not be the right model?\\n7. I\\u2019m a bit confused as to how $\\\\pi_{\\\\mathrm{chosen}}$ is defined. Is it essentially defined to be the policy that, given a preference dataset of $ (x, y_w, y_l) $ triplets, was responsible for generating all the $y_w$ pairs?\\n8. In the beginning of section 4.3, the authors state that \\u201cIn the tabular setting, we can directly compute $\\\\pi_{\\\\mathrm{ref}}(y | x)$ and $\\\\pi_{\\\\mathrm{chosen}}(y | x)$.\\u201d Could the authors please elaborate on this a bit? It\\u2019s not clear to me what the tabular setting here means.\\n9. Is the Y-axis in figures 1 & 3 the *negative* log likelihood? And for the margins figure on the right, is it a difference of negative log likelihoods? This could use some better labeling. Putting the model name on the y-axis is a bit confusing, and might be better put in the caption.\\n10. At the end of section 4.1: \\u201cachieving this in practice requires full data coverage and infinite models that are rarely met\\u201d. What is meant by \\u201cinfinite models\\u201d here?\\n11. In the paragraph right after equation 22, what\\u2019s $\\\\pi_{\\\\mathrm{data}}$?\\n12. In the paragraph right after equation 22, why is there no log before the reward $r$ in $Z(x)$? Shouldn\\u2019t there be since there is one in equation 22 as well?\\n13. In the paragraph after equation 22, the authors state \\u201cThis characteristic, determined by the reward definition in Equation (17), is super beneficial as it allows our imitation learning to theoretically generalize to a broader class of loss functions beyond the pairwise BT preference model used in DPO.\\u201d. Could the authors please elaborate on this? What does \\\"this characteristic\\\" refer to? And how does it allow the imitation learning to generalize to a broader class of loss functions beyond BT?\\n14. At the end of section 4.5 the authors state \\u201cSpecifically, we demonstrate that DPO also falls under the imitation learning objective in Equation (16) and essentially employs the CPC method for density ratio reward estimation.\\u201d. While I agree CPC indeed estimates the correct density ratio, it\\u2019s unclear to me that this is used in equation 27. Specifically, the learned $f^*$ from equation 26 doesn\\u2019t seem to show up in equation 27?\\n15. Towards the end of 6.1, the authors state \\u201cNotably, we observe DPO and SimPO hurt the overall performance in most reasoning-heavy tasks such as GSM8K\\u201d. Is this compared to some base model performance? And if so, where is this reported?\\n16. This statement in 6.1 could use some clarification: \\u201cFor instance, on LLama3, the improvements are notable on the Math and AlpacaEval 2 benchmarks, with relative gains exceeding 7.5% and 18.2%, respectively.\\u201d Is this for DPO or SimPO?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper generalizes preference learning or RLHF frameworks to an imitation learning framework, (DIL). Using this framework they propose multiple offline preference learning methods with different preference modeling such as Bradley-Terry for DPO and LSIF for the best DIL model. Moreover, its performance on benchmarks like Alpaca Eval 2 and Open LLM leaderboard is considerably better than other offline preference learning objectives.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"The first paper to connect RLHF with imitation learning if not mistaken\", \"Very strong results against popular DPO-variants\"], \"weaknesses\": \"## Is RLHF a form of imitation learning?\\nPaper frames reward learning as imitation learning and RL as knowledge distillation (KD) and I dont think either of them is correct.\", \"rl\": \"Equation 13 is the reverse KL between the behavior and optimal policy however knowledge distillation is forward KL between teacher (optimal) and student(behavior). KD (forward KL) is distribution fitting or mean seeking whereas reverse KL is mode seeking which makes the policy focus on high-reward regions rather than fitting the entire distribution with forward KL as in SFT. Overall, the KD claim by the paper is incorrect. Lastly, equation 13 is a known result from DPO paper which is the penultimate step of the optimum solution of equation 14th.\", \"reward_learning\": \"In standard RLHF, the reward model is a separate LLM with an additional MLP to predict the scalar reward. So by training a reward model, one does not imitate the expert or optimal policy. What we are doing is fitting a reward model to a predetermined preference model however the caveat is that the optima policy trained by the RL can be parametrized by the reward model trained with which was already proven by the DPO. Lastly, DPO parametrizes the reward model in terms of the policy so when the reward learning objective is trained, we obtain the actual policy.\\n\\nOn the other hand, this paper defines a Boltzmann distribution $\\\\pi_\\\\phi$ (equation 9) in an EBM framework which is the optimal policy induced by the $r_{\\\\phi}(x,y)$. This distribution is maximized on the chosen preferences generated by the $\\\\pi_{expert}$ or imitates $\\\\pi_{expert}$. Following derivations leads to reward likelihood training objectives whereas I am unsure whether $\\\\pi_{ref}$ approximation is free because it introduces rejected responses while IL objective only minimizes on chosen preferences. Nonetheless, this derivation is possible because the $pi_\\\\phi$ has a reward equivalence whereas it does not tell anything for other forms of policy. Overall, I would interpret it as imitating reward rather than policy, not vice versa.\\n\\n## Direct Imitation Learning\\nI dont think DIL is novel because it is the backtracking of the derivation of the DPO objective. After all, the 16th equation is the same as the 14th equation of the DPO without the partition and assuming $\\\\pi_{expert} = \\\\pi^*$. DIL is redefining the reward function of DPO, excluding the density ratio estimation part. All in all, I believe this part(excluding density ratio) is already present in DPO.\", \"questions\": \"Q1) You mention that DIL does not depend on Bradley-Terry but you introduce new reward training with different objectives such as LSIF, UKL, and BCE which are essentially replacements for BT, so doesn't the DIL still rely on some preference modeling assumption?\\n\\nQ2) In 6.3 you discuss learning dynamics DPO, SimPO, and DIL however Figure 3 does not have DPO, is the discussion from some other paper?\\n\\nQ3) Do you have additional results on MT-Bench or Arena Hard?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kind reminder to Reviewer JHUL\", \"comment\": \"Dear ICLR Reviewer JHUL,\\n\\nWe greatly appreciate your time and the insightful comments. We have made extensive efforts to address all your questions, suggestions, and misunderstandings in our response and believe that we have addressed your concerns.\\n\\n**In your original review, you mentioned, \\u201cIf the authors can provide some more rationale and clean up the derivations, the score could be improved.\\u201d**\\n\\nThus, we sincerely want to confirm if there are any additional clarifications you would really like us to address. We would be grateful if you could consider increasing the score.\\n\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We would like to thank all the reviewers for their thoughtful feedback and time.\\n\\nWe deeply appreciate the numerous positive comments on our work, such as describing it as \\\"valuable insights,\\\" \\\"solid motivations,\\\" and \\\"solid theoretical and empirical analysis\\\".\\n\\nThe main comments from the reviewers relate to some misunderstandings, clarifications, and the need for additional minor experiments.\\n\\nWe have made our greatest efforts to prepare a point-by-point response to each reviewer.\\n\\nThank you again for your time.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer MhY4\", \"comment\": \"Dear reviewer MhY4, we appreciate your efforts and detailed comments very much! However, we believe that there are some misunderstandings. Therefore, we would like to provide a point-by-point response to your comments.\\n\\n**Q1. While the empirical results seem to consistently outperform prior methods, I\\u2019m a bit worried about the statistical significance since the margins seem rather small sometimes (e.g. for Table 2, the improvements are almost always smaller than 1 percentage point).**\\n\\n\\n**A1.** Thank you for your comments. Please refer to our responses Q6 to Reviewer JHUL, where we provide additional results demonstrating that our improvements are significant compared to SFT, DPO, and SimPO.\\n\\n---\\n\\n**Q2. The exposition of the math/theory in the paper could have been a bit clearer (section 4). It took me some time to understand what actually is the final objective that DIL optimizes, and how it came to be. This is because, for example, at the end of section 4.3 the authors state \\u201cWith the estimated density ratio reward, the surrogate imitation learning objective in Equation (17) can then be solved with any RL algorithms.\\u201d, which initially made it seem like DIL would have to resort to RL optimization anyways\\u2026**\\n\\n**A2.** We apologize for any confusion. What we intend to convey is: \\\"With the estimated density ratio reward, the surrogate imitation learning objective in Equation (17) can be solved using any RL algorithm. However, this two-step process is complex and often unstable.\\\" To address this, in section 4.4, we introduce a simpler approach that directly optimizes the imitation learning objective. This method bypasses the need for RL training and density ratio estimation by leveraging a change-of-variables approach. We have updated the main paper to clearly articulate this. \\n\\n---\\n\\n\\n**Response and Clarification to Questions:**\\n\\n**Thank you for your detailed questions! Below are our responses and clarifications. We will update the submission to include these clarifications.**\\n\\n1. Mistral-7B-Base is the same as Zephyr-7B-SFT. We have corrected the typo. \\n2. This was a typo. We mistakenly wrote SLiC as KTO. We have corrected it.\\n3. Yes, we have included the base model (SFT) performance in Table 2 of the updated submission.\\n4. The \\\"average\\\" refers to the average win rates computed by GPT-4 when comparing the SFT-generated responses with the chosen responses in the dataset.\\n5. The reason we chose \\\"vs. SFT\\\" and \\\"vs. Chosen\\\" is to strictly align all settings with those in the original DPO paper.\\n6. As shown in the original DPO objective under the BT assumption, it maximizes the expected relative difference between the implicit 7. rewards of the chosen and rejected responses. Thus, while these methods preserve the relative ordering between the likelihoods of the chosen and rejected responses, they may reduce the absolute likelihood of the chosen response.\\n7. The reviewer is correct. $\\\\pi_{\\\\text{chosen}}$ refers to the probability of generating the chosen response in the preference dataset.\\n8. The tabular setting refers to cases where the state and action spaces are small, and both state functions and action-state functions are represented as tables.\\n9. Yes, it is negative log-likelihood. We apologize for the confusion and have updated the figure accordingly.\\n10. The term \\u201cinfinite models\\u201d was a mistake; it should be \\u201cinfinite computation.\\u201d We apologize for the typo.\\n11. It should indeed be $\\\\pi_{\\\\text{chosen}}$. We apologize for the confusion.\\n12. There should be a log operation here. We apologize for the typo.\\n13. The characteristic refers to the self-normalized property. DPO relies on the BT assumption (pairwise loss) to cancel out the normalization term. Due to this self-normalized property, our imitation learning approach generalizes to a broad class of objectives that do not rely on pairwise comparisons.\\n14. Sorry for your confusion. We also utilize a change-of-variables approach (critic function $f$ to policy $\\\\pi$ using Eqs. (23) and (26)).\\n15. Yes, the results are compared to some base model performance. Please see the updated Table 2, where we provide the performance of the base models (SFT).\\n16. The improvements are over SimPO. We apologize for the confusion and have clarified this in the updated submission.\"}", "{\"title\": \"Kind reminder to Reviewer JHUL\", \"comment\": \"Dear ICLR Reviewer JHUL,\\n\\nWe greatly appreciate your time and the insightful comments provided during the review of our paper.\\n\\nWe have made extensive efforts to address all your questions, suggestions, and misunderstandings in our response and believe that they adequately address your concerns. The reviewer's comments primarily focused on clarifying certain claims and experimental details. We have addressed these points in our response by providing detailed explanations, including clarifications on specific claims and additional results. We believe that the reviewer's insightful comments can be effectively and easily addressed in the final version.\\n\\n**As you and other reviewers have noted, our work offers valuable insights, builds a very interesting connection between imitation learning and RLHF, and presents a practical effective framework. Since your comments do not impact the major contributions of our manuscript, we would be grateful if you could consider increasing the score.**\\n\\nWe are extremely grateful for your time.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer sbFs\", \"comment\": \"Dear reviewer sbFs, we appreciate the reviewer's perception of our contributions to both empirical and theoretical analysis, and we thank the reviewer for their insightful questions. Please find our detailed responses below:\\n\\n\\n**Q1. The amount of data needed for satisfactory alignment with DIL compared to other methods is not clear.**\\n\\n**A1.** Thank you for your comments. We apologize for any confusion. The term \\\"efficient\\\" does not refer to data efficiency; instead, it highlights that, compared to PPO in RLHF, our DIL approach is more efficient in terms of computation, speed, and engineering effort. DIL eliminates the need for an additional stage of training a reward model and, during policy training, does not require decoding online responses (which is typically slow) or training an additional value model.\\n\\n---\\n\\n**Q2. All the models in the experiments are smaller (<10B parameters) so it\\u2019s not clear how effective DIL would be for larger models.**\\n\\n**A2.** Thank you for your suggestions. Following your suggestion, we conducted additional experiments on Mixtral-8x22B-Instruct-v0.1. From the following results, we find that the DIL still achieves better performance compared to baselines. These results have also been included in Table 8 in the updated submission: \\n\\n| Mixtral-8x22B | HumanEval | LeetCode | MATH | TheoremQA | \\n| --- | --- | --- | --- | --- |\\n| DPO | 75.1 | 24.5 | 48.5 | 34.7 |\\n| SimPO | 76.2 | 22.5 | 50.3 | 35.5 | \\n| DIL | **77.3** | **28.7** | **52.8** | **36.9** | \\n\\n---\\n**Q3. Since DIL doesn\\u2019t suppress the likelihood of dispreferred responses as much as SimPO, how does this affect alignment from a safety perspective? Is the model more prone to generate harmful responses?**\\n\\n**A3.** Thank you for your questions. Based on the results on Anthropic Helpful and Harmless, we observe that our DIL achieves significantly better performance than SimPO, demonstrating that DIL is less prone to generating harmful responses. We hypothesize that there may be two potential reasons for this:\\n- (i) Although DIL does not suppress the likelihood of dispreferred responses, the margin likelihood between preferred and dispreferred responses in DIL is at the same level as SimPO, demonstrating that DIL still has the capability to distinguish between preferred and dispreferred responses.\\n- (ii) Additionally, not all rejected responses in preference datasets are low-quality. It may be the case that both chosen and rejected responses are high-quality, but the chosen response is slightly better. In this case, it may not be ideal for the model to excessively reduce the likelihood of rejected responses.\"}", "{\"title\": \"Further review\", \"comment\": \"Thank you for the rebuttal!\\n\\n> Thank you for your comments. Please refer to our responses Q6 to Reviewer JHUL, where we provide additional results demonstrating that our improvements are significant compared to SFT, DPO, and SimPO.\\n\\nThank you for providing these! These do increase my confidence in the signifcance of the results. Nevertheless, could the authors also provide standard errors or CIs?\\n\\nAlso, looking further at Table 2, is it possible SFT should be bolded instead of DIL for Mistral MMLU-PRO? (27.58 > 27.44)\\n\\n> We apologize for any confusion. What we intend to convey is: \\\"With the estimated density ratio reward, the surrogate imitation learning objective in Equation (17) can be solved using any RL algorithm. However, this two-step process is complex and often unstable.\\\" ... We have updated the main paper to clearly articulate this.\\n\\nThank you, after having another look, this clarification does help with the reading of the paper!\\n\\n> Yes, we have included the base model (SFT) performance in Table 2 of the updated submission.\\n\\nGreat, thanks for adding that into the table - it gives a good starting point to compare to for all methods.\\n\\n> The reason we chose \\\"vs. SFT\\\" and \\\"vs. Chosen\\\" is to strictly align all settings with those in the original DPO paper.\\n\\nI agree it's generally a good idea to be consistent with prior work. However, that shouldn't restrict the authors from adding additional results here. It makes sense that in the DPO paper, the authors didn't include \\\"vs. DPO\\\", because they'd be comparing against themselves which is maybe not very meaningful. However, in this paper, it makes a lot of sense to do a head-to-head comparison with DPO and maybe SimPO in Table 3, instead of having to infer the strength of one method over the other *indirectly* through the performance against some other baseline (SFT and Chosen in this case).\\n\\n> Yes, it is negative log-likelihood. We apologize for the confusion and have updated the figure accordingly.\\n\\nActually, since it's negative, is it just log-likelihood? The version I'm seeing still seems to just say \\\"likelihood\\\". Though thank you for making the other changes in the figure! \\n\\n> Sorry for your confusion. We also utilize a change-of-variables approach (critic function to policy using Eqs. (23) and (26)).\\n\\nI'm still a bit confused here. Are the authors plugging the result from Eq. 26 ($f^*(x, y) / \\\\beta = \\\\log \\\\frac{\\\\pi_{\\\\mathrm{chosen}}(y | x)}{\\\\pi_{\\\\mathrm{ref}}(y | x) c(x)}$) into Eq. 25? If so, then you don't even need Eq. 23, since Eq. 26 gives essentially the same result? If the authors could clarify their exact steps in a bit more detail, that would be helpful. \\n\\n### Further questions\\n1. Why is DPO not included in Fig. 1 & 3? From the reply to some of the other reviewers I understand it's because SimPO is the motivating example, but I still think adding DPO could help put the results in those figures in better perspective, especially since DPO is still widely used.\\n\\n2. In 4.1, why is it justified to take $\\\\pi_{\\\\mathrm{ref}}(y | x) = 0.5 \\\\mathbb{I}(Y = y_l) + 0.5 \\\\mathbb{I}(Y = y_w)$ for sampling from the reference distribution?\\n\\n\\nFinally, thanks for taking the time to address all the other minor clarifications/typos!\"}", "{\"title\": \"Kind reminder to Reviewer JHUL (discussion deadline is approaching)\", \"comment\": \"Dear ICLR Reviewer JHUL,\\n\\nThank you again for your time and efforts in reviewing our paper. We have carefully responded to each of your questions. \\n\\nGiven that the author-reviewer discussion deadline is approaching, we would greatly appreciate it if you could kindly review our responses and share your valuable feedback. We would be grateful if you could consider increasing the score given our clarification.\\n\\nThank you very much!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"metareview\": \"This paper presents Direct Imitation Learning (DIL), a novel framework for aligning large language models with human preferences. The authors reinterpret existing alignment methods like RLHF and DPO as special cases of imitation learning and introduce a new objective function based on minimizing the reverse KL divergence between the model's policy and the distribution of chosen responses.\\n\\nThe paper's strengths are its solid theoretical foundation and strong empirical results. The authors demonstrate that DIL outperforms existing methods on several benchmarks, including the Open LLM Leaderboard and AlpacaEval 2.0. However, the paper has some weaknesses. The reviewers pointed out that the connection between RLHF and imitation learning is not entirely novel, and some derivations could be more concise. Additionally, the reviewers raised concerns about the clarity of the presentation and the significance of the empirical improvements. Most of these weaknesses were addressed during the rebuttal period, making this work a good contribution to ICLR.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors addressed reviewers' concerns by clarifications, providing additional experimental results, and improving the presentation. They also emphasized the novelty of their work by highlighting the connection between RLHF and imitation learning, which provides a new perspective on alignment research. The reviewers agreed that the paper should be accepted after the authors addressed their concerns during the rebuttal period\"}", "{\"title\": \"Thank you very much for your reply\", \"comment\": \"Dear Reviewer MhY4,\\n\\nWe are very glad to hear that our rebuttal and the discussion have adequately addressed your concerns. \\n\\nThank you as well for your additional comments, which have facilitated further discussion and will help improve our paper.\\n\\nWe will ensure that our discussion and the additional results are included in the final version of our paper.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Dear Reviewer sbFs,\\n\\nThank you very much for reviewing our paper and reading our rebuttal. We sincerely appreciate your recognition of our contribution!\\n\\nWe are truly grateful for your time and your reply.\\n\\nBest regards, Authors\"}", "{\"title\": \"Concerns addressed?\", \"comment\": \"It'd be great if you can respond to the author's rebuttal and convey whether your opinion about this paper has changed.\"}", "{\"title\": \"Response to Reviewer hdJf- part 2\", \"comment\": \"**Q5. In 6.3 you discuss learning dynamics DPO, SimPO, and DIL however Figure 3 does not have DPO, is the discussion from some other paper?**\\n\\n**A5.** Thank you for your questions. Yes, the issue of decreasing likelihood of chosen responses in DPO has been widely noticed by many recent works [2,3,4]. Given the superior performance of SimPO over DPO, we only included SimPO in Figure 3 as the motivating example. \\n\\n---\\n\\n**Q6. Do you have additional results on MT-Bench or Arena Hard?**\\n\\n**A6.** Thank you for your questions. To answer your question, we have included additional comparisons on Arena-Hard given it is an updated version of MT-Bench in Table 2 (Comparison results on Mistral-7B-Base are also shown in the table below.) We report the win rate (WR) against the baseline model following SimPO. We find our DIL still achieves superior performance on Arena-Hard compared to baselines \\n\\n\\n| Models | SFT | DPO | SLiC | f-DPO | IPO | CPO| SimPO | DIL |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Mistral-7B-Base| 1.3 | 10.4 | 7.3 | 8.1 | 7.5 |5.8 |16.6 | **18.3** |\\n| LLama-8B-Base| 3.3 | 15.9 | 10.3 | 14.2 | 17.8 |11.6 |23.4 | **25.6** |\\n\\n\\n**We wholeheartedly appreciate your suggestions and comments. Nevertheless, we believe there are indeed misunderstandings and not fatal to the major contributions of our manuscript. We have made extensive efforts to address your comments and believe that they adequately address all your concerns. We believe that the reviewer's insightful comments can be easily and effectively addressed in the final version. Could you please consider increasing your score to reflect the efforts of your review and our rebuttal.**\\n\\n\\n---\\n\\n\\n[1] MiniLLM: Knowledge Distillation of Large Language Models. ICLR 2024. \\n\\n[2] Iterative Reasoning Preference Optimization. NeurIPS 2024\\n\\n[3] Cal-DPO: Calibrated Direct Preference Optimization for Language Model Alignment. NeurIPS 2024.\\n\\n[4] Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive. In Arxiv.\"}", "{\"title\": \"Further Response to Reviewer MhY4\", \"comment\": \"**Thank you for your additional comments to facilitate further discussion, which will further improve our paper. Please find our responses below.**\\n\\n**Q1. Thank you for providing these! These do increase my confidence in the significance of the results. Nevertheless, could the authors also provide standard errors or CIs?**\\n\\n**A1.** Yes. Thank you for your suggestion! In the following table, we provided the performance with standard deviation of our DIL and state-of-the-art baselines on Mistral-7B-Base. We can observe that our DIL does significantly outperform the baselines across multiple benchmarks. Specifically, DIL achieves the highest performance on HumanEval, BBH, MUSR, MATH, GSM8K, and AlpacaEval 2, with lower standard deviations in most cases, indicating both superior and more stable performance compared to DPO and SimPO. These results demonstrate the robustness and effectiveness of our approach, further validating our contributions to preference alignment. \\n\\n| Methods | HumanEval | &nbsp; BBH | MUSR | MATH | GSM8K | AlpacaEval 2 |\\n| --- | --- | --- | --- | --- |--- |--- |\\n| DPO | 31.7 $\\\\pm$ 0.3 | 43.27 $\\\\pm$ 0.4 | 43.65 $\\\\pm$ 0.3 | 1.36 $\\\\pm$ 0.4 | 21.76 $\\\\pm$ 1.2 | 12.5 $\\\\pm$ 0.2 |\\n| SimPO | 26.5 $\\\\pm$ 0.6 | 42.94 $\\\\pm$ 0.3 | 39.68 $\\\\pm$ 0.5 | 2.49 $\\\\pm$ 0.5 | 22.21 $\\\\pm$ 1.5 | 20.8 $\\\\pm$ 0.1 |\\n| DIL | **33.5 $\\\\pm$ 0.5** | **43.59 $\\\\pm$ 0.5** | **44.05 $\\\\pm$ 0.3** | **2.95 $\\\\pm$ 0.3** | **32.19 $\\\\pm$ 1.1** | **21.7 $\\\\pm$ 0.2** |\\n---\\n\\n**Q2. Is it possible SFT should be bolded instead of DIL for Mistral MMLU-PRO? (27.58 > 27.44)**\\n\\n**A2.** Thank you for pointing out the typo! We have corrected it.\\n\\n---\\n\\n**Q3. However, in this paper, it makes a lot of sense to do a head-to-head comparison with DPO and maybe SimPO in Table 3, instead of having to infer the strength of one method over the other indirectly through the performance against some other baseline (SFT and Chosen in this case).**\\n\\n**A3.** Thank you for your insightful comments. To address it, we provide the head-to-head win rates comparison with DPO and SimPO on both TL;DR summarization and Anthropic-HH. We can observe that our proposed DIL achieves higher head-to-head win rates compared to both DPO and SimPO on these tasks, as shown in the following table. Thank you again for your valuable feedback, and we hope these additional results address your comments thoroughly.\\n\\n| Datasets | TL;DR summarization | Anthropic-HH |\\n| --- | --- | --- | \\n| DIL v.s. DPO | 59.4% | 62.8% |\\n| DIL v.s. SimPO | 58.7% | 63.3% |\\n\\n---\\n\\n**Q4. Actually, since it's negative, is it just log-likelihood?**\\n\\n**A4.** Sorry for the confusion. Yes. It is log-likelihood, not negative log-likelihood. We apologize for any misunderstanding.\\n\\n---\\n\\n**Q5. If the authors could clarify their exact steps in a bit more detail, that would be helpful.**\\n\\n**A5.** Sorry for the confusion. We did indeed make a typo in Equation (26). The $r$ should be $r^*$. Using $r^*$ in both Equation (26) and Equation (23) establishes a connection between the policy $\\\\pi^*$ and the critic function $f^*$. Thus, we can use the policy $\\\\pi$ to represent the critic function $f$ in Equation (25) by leveraging a change-of-variables approach, resulting in the final loss function in Equation (27).\\n\\n---\\n\\n\\n**Response to Further Questions**\\n\\n1. We completely agree with the reviewer that including DPO would be beneficial. Following your suggestions, we have updated the DPO training dynamics results on Mistral-7B-Base in Figure 3 of the updated submission. (Given the short rebuttal period and limited computational resources, we are committed to including the results on LLaMA-8B in the final version.) We observe that the likelihood of both chosen and rejected responses continues to decrease in DPO, which aligns with observations from many recent works [1, 2, 3].\\n2. There are a number of choices for sampling from the reference policy in EBMs, but this particular choice simplifies the EBMs and has been found to produce stable results in practice, as demonstrated in recent works [4, 5].\\n\\n**Dear Reviewer MhY4, We gratefully appreciate your time in reviewing our paper and your comments. We have made extensive efforts to address your comments. If the reviewer's concerns are clarified, we would be grateful if the reviewer could increase the score. Many thanks for your time!**\\n\\n[1] Iterative Reasoning Preference Optimization. NeurIPS 2024\\n\\n[2] Cal-DPO: Calibrated Direct Preference Optimization for Language Model Alignment. NeurIPS 2024.\\n\\n[3] Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive. In Arxiv.\\n\\n[4] Concept Learning with Energy-Based Models. In Arxiv.\\n\\n[5] Self-Play with Adversarial Critic: Provable and Scalable Offline Alignment for Language Models. In Arxiv.\"}", "{\"title\": \"Thank you for your support!\", \"comment\": \"Dear ICLR Reviewer JHUL,\\n\\nWe appreciate your recognition of our contribution and agree that our current version addresses your concerns and could be considered for publication as is.\\n\\nOur work provides an intriguing explanation and insight for DPO/RLHF from the perspective of imitation learning and presents a new alignment objective inspired by this insight. We will shorten the entire derivation following your suggestion.\\n\\nThank you once again for your thoughtful review and valuable feedback! \\n\\nSincerely,\\n\\nThe Authors\"}", "{\"summary\": \"The paper makes a connection between various approaches for RLHF of large language models and imitation learning. In particular the authors re-derive a well known connection between probabilistic inference and reinforcement learning which associates the reward function with the energy of a Boltzmann distribution (see e.g. [1] for a good review of all the related methods and derivations) for the special case of RLHF.\\nFrom this perspective classical reward model learning can be derived as matching the energy to the generated responses with highest reward. Based on this the authors then derive a surrogate objective (DIL) that is closely related to DPO and other RLHF algorithms that exist, but which makes less assumptions on the form of the reward model. They show empirical evaluations on language modeling which match/gives slight improvement over DPO.\\n\\n[1] Levine, Sergey. \\\"Reinforcement learning and control as probabilistic inference: Tutorial and review.\\\" arXiv preprint arXiv:1805.00909 (2018).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The connection between RLHF and imitation learning approaches is highly relevant to the community and the first part of the paper (background and initial derivation up to Eq.12-14) is well presented and leaves the reader with a condensed and improved understanding of how different existing algorithms relate (although perhaps more references to the literature could help, see weaknesses below).\", \"Any improvement over DPO (which is perhaps the predominant algorithm at least for offline RLHF from fixed datasets) is relevant to the community.\", \"The benchmarks used are open and relevant and at reasonable scale (i.e. 7B models)\"], \"weaknesses\": [\"My first main gripe with the paper is that the idea that RLHF in it's entirety is performing imitation learning seems to stand on shaky foundations. A lot of leaps from one objective to another are required to arrive at this conclusion and a lot of the nuances in the differences between different objectives get lost along the way (that are already well discussed in the literature see e.g. Sergey Levine's review and also existing literature on offline RL as probabilistic inference). For example, the title says \\\"RLHF secretly performs imitation learning\\\" then up to Eq. 12 this thread is followed closely, and I find the connection that is made between reward model learning and the imitation learning perspective insightful, however directly after the authors make a leap to knowledge distillation / minimizing the reverse KL, which then attains the actual RL objective. This objective then is no longer directly related to learning from a dataset of reference or \\\"chosen\\\" examples (as would be the case in imitation learning) but instead can be understood as imitating an optimal policy (and not any policy that generated the dataset) on the state distribution induced by the currently learned policy (see also [3]). It thus really is RL (and not just imitation learning) and has to \\\"deal\\\" with all the problems RL comes with, i.e. exploration of the energy landscape of the optimal policy is required, premature convergence could be an issue etc.. The fact that the energy itself is given by a reward model that comes from matching chosen examples on a pre-collected dataset has no bearing on this. This is easy to see as depending on the temperature (which also pops out wihtout explanation) chosen in Eq 13. the policy may collapse to matching a single mode of the energy model but may also result in much higher energy / better reward than the chosen policy. The authors do discuss some of these nuances below in a short section on why SFT (which uses a forward KL) might underperform the reverse KL approach. But all this does is it leaves the reader with the impression that the authors painted too broad a picture to derive a connection that then, in practice is not relevant. This could be rectified by perhaps framing the paper as \\\"RLHF can be seen as imitating an optimal policy based on human preferences\\\" and toning down some of the quite strong language, e.g. \\\"learning without an RL loop\\\" etc.\", \"The paper seems to be derived backwards as some of connections made feel slightly contrived upon reading. E.g. the jump from the original objective to knowledge distillation mentioned above. The steps taken to arrive at a DPO like objective from density ratio estimation etc. The paper requires a lot of steps to arrive at a simple algorithm that the authors probably had in mind from the get-go and started from.\", \"The knowledge distillation connection seems tenuous (and already known), it seems more straightforward to think of the entire process as imitating a better policy as in MARWIL [2] or as chaining a specific variant of an improvement operator and distillation as already derived in detail for many different variants in [3].\", \"A lot of the derived formulas and connections are already known in the literature but this is often not explicitly stated, e.g. \\\"\", \"In this section, we connect RLHF to the imitation learning framework. We show that RLHF is a special\", \"case of imitation learning problem by defining the following specialized energy-based mode\\\" in front of Eq 9, which very clearly is already derived in the DPO paper and literature on RL as probabilistic inference. It is fine to re-state such derivations but then please say: we built on a well known connection between RL and energy based models/probabilistic inference.\", \"The key innovation that the paper hinges on seems to be the approximation of the log ratio between chosen and current policy but the derivation seems very ad-hoc and on shaky foundations. To be explicit: in order to arrive at their Eq. 21 (and thus Eq 24 which is their DIL objective) they make the assumption that the reference policy is the same as the policy that generates the rejected samples only and disregard any terms on the positive examples; i.e. \\\"Here, we use the set of rejected responses y_l \\u223c \\u03c0_ref(y | x) to approximate the expectations under \\u03c0_ref(y | x)\\\". This is simply a wrong assumption. I do not know why the authors have chosen to make the assumption, but it feels like a contrived way to come to Equation 24 and form a connection to a DPO like objective.\", \"The results are on relevant benchmarks, but the improvement over DPO seems minor in most cases. In this scenario what would be nice would be to analyze qualitative differences, e.g. examples which DIL seems to have stronger performances on compared to DPO. Or an analysis on how closeness (in KL) wrt. to the reference policy evolves during the course of optimization for different algorithms and how this affects performance. Or a plot that have the DIL objective on the x axis and win-rate (over different models, e.g. reference policy and DPO) on the Y-axis.\", \"[2] Wang, Qing, et al. \\\"Exponentially weighted imitation learning for batched historical data.\\\" Advances in Neural Information Processing Systems 31 (2018).\", \"[3] Ghosh, Dibya, Marlos C Machado, and Nicolas Le Roux. \\\"An operator view of policy gradient methods.\\\" Advances in Neural Information Processing Systems 33 (2020): 3397-3406.\"], \"questions\": \"A discussion and answers regarding the weaknesses listed above would be appreciated. And if the authors can provide some more rationale and clean-up the derivations the score could be improved.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer hdJf- Part 1\", \"comment\": \"Dear reviewer hdJf, we appreciate your perception of our contributions on connecting RLHF with imitation learning. We believe that there are some important misunderstandings. Please see our clarifications below:\\n\\n---\\n\\n**Q1. Equation 13 is the reverse KL between the behavior and optimal policy however knowledge distillation is forward KL between teacher (optimal) and student(behavior). KD (forward KL) is distribution fitting or mean seeking whereas reverse KL is mode seeking which makes the policy focus on high-reward regions rather than fitting the entire distribution with forward KL as in SFT. Overall, the KD claim by the paper is incorrect....**\\n\\n\\n**A1.** We apologize for any confusion and believe there may have been some misunderstandings. **The knowledge distillation in Equation (13) that we specifically mentioned here is indeed a \\\"reverse\\\" version, focusing on minimizing the reverse KL divergence, as also mentioned in recent work MiniLLM [1].**\\n\\n**Yes, Equation (13) is a known result from DPO, but our key contribution is not Equation (13). Please see our response to Q3 for detailed justification why our DIL differs from DPO.**\\n\\n---\\n\\n**Q2. In standard RLHF, the reward model is a separate LLM with an additional MLP to predict the scalar reward. So by training a reward model, one does not imitate the expert or optimal policy.**\\n\\n**A2.** Thank you for your comments. Although the choice of the loss function and optimization procedure differs, the central aim of our work is to emphasize that the optimal policies of RLHF and DPO are theoretically the same as the imitation learning process. In essence, all these methods aim to discover identical optimal policies, i.e., the chosen response distribution. Specifically, \\n\\n- Equation (12) shows that imitation learning loss between the distribution chosen response and energy-based policy is exactly the same as the reward loss based on BT assumption.\\n\\n- Equation (13) (also shown in DPO) shows that imitation learning between optimizing policy response and energy-based policy is exactly the same as the RL loss.\\n\\n- Thus, RLHF with two steps (reward learning and policy learning) can be viewed as conducting imitation learning between optimizing policy and the distribution of chosen response.\\n\\n---\\n\\n**Q3. I dont think DIL is novel because it is the backtracking of the derivation of the DPO objective. After all, the 16th equation is the same as the 14th equation of the DPO without the partition. DIL is redefining the reward function of DPO, excluding the density ratio estimation part. All in all, I believe this part(excluding density ratio) is already present in DPO.**\\n\\n**A3.** Thank you for your comments. We believe there are indeed some misunderstandings. Our work indeed significantly differs from DPO in several important ways:\\n\\n- **Theoretical Insight:** We are the first to show that the objective of current alignment methods, such as DPO and RLHF, can theoretically be viewed as fitting the chosen response distribution by minimizing the reverse KL divergence.\\n- **General Framework:** We provide a general framework based on Bregman divergence to directly optimize the reverse KL divergence between the policy and the chosen response distribution.\\n- **Empirical Results:** We demonstrate that our framework effectively alleviates the decrease in the likelihood of chosen responses and achieves better performance on reasoning-intensive tasks, addressing important limitations in DPO. \\n\\n---\\n\\n**Q4. You mention that DIL does not depend on Bradley-Terry but you introduce new reward training with different objectives such as LSIF, UKL, and BCE which are essentially replacements for BT, so doesn't the DIL still rely on some preference modeling assumption?**\\n\\n**A4.** Thank you for your comments. As shown in our paper, DIL and DPO indeed share the same assumption, as they both solve the same imitation learning objective (i.e., minimizing the reverse KL divergence between the policy and the chosen response distribution). \\n\\nHowever, as demonstrated in our paper, DPO relies on the BT assumption (pairwise loss) to cancel out the normalization term. Due to the self-normalized property in Equation (22), our DIL generalizes to a broad class of objectives that do not rely on pairwise comparisons, unlike DPO/SimPO. Since DPO/SimPO only learn to preserve the relative ordering between the likelihoods of the chosen and rejected responses, they reduce the likelihood of the chosen response, resulting in poor performance in reasoning tasks.\"}", "{\"summary\": [\"This paper reinterprets preference alignments methods like RLHF and DPO as special cases of a more general imitation learning objective.\", \"They mathematically show how the RLHF and DPO objective functions fit within a general imitation learning framework.\", \"They develop a new alignment method DIL based on imitation learning with the objective as minimizing the reverse KL loss between the optimal policy and current policy and derive a preference data based learning objective which suppresses the likelihood of generating dispreferred responses while increasing the likelihood of generating preferred responses.\", \"They empirically show that DIL results in a better policy compared to other offline alignment methods across reasoning and alignment benchmarks.\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"This paper is well written and presents intriguing connections between imitation learning and human preference alignment.\", \"They derive a new alignment framework based on imitation learning and show empirical improvements on existing baseline.\", \"DIL shows significantly better training dynamics compared to SimPO by ensuring that the likelihood of generating chosen responses is maintained.\"], \"weaknesses\": [\"The amount of data needed for satisfactory alignment with DIL compared to other methods is not clear. The authors claim that DIL is more efficient, so it would be nice to see some metrics that measure this.\", \"All the models in the experiments are smaller (<10B parameters) so it\\u2019s not clear how effective DIL would be for larger models.\"], \"questions\": [\"Since DIL doesn\\u2019t suppress the likelihood of dispreferred responses as much as SimPO, how does this affect alignment from a safety perspective? Is the model more prone to generate harmful responses?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A sincere and kind reminder to the ICLR Reviewer JHUL\", \"comment\": \"Dear ICLR Reviewer JHUL,\\n\\nWe greatly appreciate your time and the insightful comments provided during the review of our paper.\\n\\nWe have made extensive efforts to address all your questions, suggestions, and misunderstandings in the response and believe that they adequately address all your concerns. We believe that the reviewer's insightful comments can be easily and effectively addressed in the final version.\\n\\nWith the discussion phase ending soon, we would like to confirm whether there are any other clarifications they would like. We would be grateful if the reviewer could increase the score.\\n\\nThank you again for your time and valuable input; we are deeply appreciative.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thanks for the responses!\", \"comment\": \"Thanks to the authors for adjusting the paper and replying to my concerns. Most of the minor concerns have been addressed and the additional results and ablations (e.g. re. the choice of sampling dataset for estimating Eq 24) are appreciated.\\n\\nI think the paper is improved and a bit clearer, especially putting emphasis more on connections between RLHF and IL seems a good choice in the title.\\n\\nI still maintain that some of the derivations are lengthy and the fact that the ablations reveal that the choice of the sampling dataset for Eq. 24 makes a huge difference (and is a choice that is clearly made to make the algorithm resemble DPO as much as possible) makes me feel that the entire derivation could have been much shortened and the paper just been presented as a better alternative to DPO. \\n\\nNonetheless the paper does provide potentially useful insights to the community and in its revised version could be considered for publication as is (albeit a larger rewrite would probably improve it further by quite a bit). I have thus adjusted my score upwards.\"}" ] }
2QXC4NX8oC
PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models
[ "Aleksandar Cvejić", "Abdelrahman Eldesokey", "Peter Wonka" ]
We present the first text-based image editing approach for object parts based on pre-trained diffusion models. Diffusion-based image editing approaches capitalized on the deep understanding of diffusion models of image semantics to perform a variety of edits. However, existing diffusion models lack sufficient understanding of many object parts, hindering fine-grained edits requested by users. To address this, we propose to expand the knowledge of pre-trained diffusion models to allow them to understand various object parts, enabling them to perform fine-grained edits. We achieve this by learning special textual tokens that correspond to different object parts through an efficient token optimization process. These tokens are optimized to produce reliable localization masks at each inference step to localize the editing region. Leveraging these masks, we design feature-blending and adaptive thresholding strategies to execute the edits seamlessly. To evaluate our approach, we establish a benchmark and an evaluation protocol for part editing. Experiments show that our approach outperforms existing editing methods on all metrics and is preferred by users 77-90% of the time in conducted user studies.
[ "Diffusion models", "Text-to-Image", "Image Editing" ]
Reject
https://openreview.net/pdf?id=2QXC4NX8oC
https://openreview.net/forum?id=2QXC4NX8oC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ykwAvSEcS5", "vnKEaOAhh0", "to7bXfpw80", "oCWT0vzs5L", "nElVR4bf8z", "iUH0DXQgGq", "cxWuHoZAh7", "cEZvM6E3h6", "cBuwzpqurD", "TZcflcLLkj", "N8cL9P8sF3", "MG9xvSqAAJ", "HMizNMFTPH", "Efb4QpvwBF", "Bwiy90LsEt", "B1m7MWtIwQ", "7EvD0wZwyv", "5U7qvUf7oH" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732684300738, 1730646565321, 1732458396055, 1729673523653, 1730174581782, 1732458617980, 1732691892460, 1732829322617, 1732458003017, 1737523576921, 1733294819454, 1732458452425, 1733294840490, 1732542484664, 1734409719988, 1730707941245, 1732458220156, 1732657943875 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Reviewer_8GnP" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Reviewer_kfJM" ], [ "ICLR.cc/2025/Conference/Submission3454/Reviewer_rJwB" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Reviewer_kfJM" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Reviewer_kfJM" ], [ "ICLR.cc/2025/Conference/Submission3454/Area_Chair_ub6d" ], [ "ICLR.cc/2025/Conference/Submission3454/Reviewer_SEmy" ], [ "ICLR.cc/2025/Conference/Submission3454/Authors" ], [ "ICLR.cc/2025/Conference/Submission3454/Reviewer_8GnP" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their thoughtful feedback and engagement in the discussion.\\n\\nFirst, we would like to clarify that *token optimization* is a general technique, much like LoRA, which has been utilized across multiple *published* works to address different problems. Notable examples include concept learning in Avrahami et al., 2023a (SGA 2023), Safaee et al., 2024 (CVPR 2024), and unsupervised keypoint detection in Hedlin et al., 2023 (CVPR 2024). These works incorporated application-specific components to effectively integrate token optimization into their respective solutions. The effectiveness of token optimization for part-based image editing might appear self-evident in retrospect, but that is largely because we have successfully demonstrated its utility in this new domain.\\n\\nWhile determining whether our contributions are sufficient for acceptance at ICLR is inherently subjective, we believe our work offers significant value through *three* key types of contributions:\\n\\n**a) Conceptual Contribution:**\\nWe present the first approach for text-based fine-grained editing, which is more intuitive, user-friendly, and powerful compared to the traditional, less convenient mask-based editing methods.\\n\\n**b) Algorithmic Contributions:**\\nWe proposed a novel editing algorithm that integrates three diffusion paths, enabling fine-grained editing in a single inference pass using optimized part tokens. \\nWe also conducted a comprehensive analysis of several core aspects, including training and inference timesteps, layer selection, data scalability, and token padding strategies. \\nOur approach is designed to be dynamic, and we successfully adapted it for different Stable Diffusion models (SDXL and SD 1.5/2.1), thereby providing the community with deeper insights into the potential of token optimization for image editing across different architectures.\\n\\nFurthermore, we proposed a novel mask computation algorithm that generates non-binary editing masks, utilizing an adaptive thresholding strategy to produce seamless, natural edits. Results generated by this algorithm even outperformed *mask-based* editing approaches, where manually annotated binary masks were provided, as shown in Table 1. This novel strategy sets a precedent for future advancements in image editing.\\n\\n**c) Strong and Exhaustive Results:**\\nWe conducted extensive experiments and comparisons that demonstrated the effectiveness of our approach in various settings. Additionally, we highlighted the limitations of current state-of-the-art editing approaches, particularly in the context of fine-grained editing and racial bias (Figure 5). These insights pave the way for future research to address these challenges.\\n\\nWhile we understand that some might view contribution (a) alone as insufficient, we believe the combined package of contributions (a), (b), and (c) provides substantial value to merit acceptance.\"}", "{\"summary\": \"The paper introduces a method to enhance pre-trained diffusion models for fine-grained image editing by training part-specific tokens for localizing edits at each denoising step. This approach uses feature blending and adaptive thresholding for seamless edits while preserving unaltered areas. A token optimization process expands the model\\u2019s semantic understanding without retraining, using existing or user-provided datasets. Qualitative as well as quantitative experimental comparison have been conducted to demonstrate the effect of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper addresses a critical problem in image editing: the inability to accurately edit specific parts of an object while keeping the rest of the image unchanged.\\n\\n2. The use of token optimization to learn adaptable tokens for subsequent editing tasks is intuitive and intriguing.\\n\\n3. The experiments are thorough, with comprehensive ablation studies that validate the effectiveness of the proposed approach.\\n\\n4. The paper is well-written, easy to follow, and logically structured.\", \"weaknesses\": \"1. The images used for training part tokens are very limited, with only 10\\u201320 images. In such cases, the representativeness of the images is crucial for generalization. It would strengthen the paper if the authors would conduct experiments to show the impact of varying the types of training images on the model's performance.\\n\\n2. The method involves many hyperparameters that require tuning, including the number of diffusion timesteps for training part tokens and inference, the selection of layers for optimization, and the adjustable tolerance for transitions between edited parts and the original object. This adds complexity to the overall framework and could make it challenging to implement effectively.\\n\\n3. In practical scenarios, one might want to adjust two parts simultaneously. Therefore, how will the method apply when handling instruction text that requires simultaneous editing of two parts? I suggest the authors include experiments or examples to show the model's performance on multi-part edits.\\n\\n4. Will the evaluation dataset for PartEdit be made publicly available? Also, will the code be available?\\n\\n5. Typo: The text inside Figure 1, \\\"prompt-to-prompt (ICRL2023),\\\" should be corrected to \\\"ICLR.\\\"\", \"questions\": \"1. Given the limited number (10\\u201320) of images used for training part tokens, how were these images selected to ensure representativeness, and what impact does this selection have on the model's generalization capabilities?\\n\\n2. Are there guidelines or best practices provided for hyperparameter tuning?\\n\\n3. How effectively does the method handle instructions that require simultaneous modifications to two or more parts of an image?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"W1 - We completely understand this concern, but in practice, the 7 tokens that we experimented with in the paper cover most of the common objects such as humanoids, animals, vehicles, and chairs. If the user is interested in learning specialized parts that are not provided, annotating the parts and optimizing the token is a straightforward process.\\nFor annotating the part, online annotation tools such as \\u201cMakeSense\\u201d take around 20-30 seconds per image, which is around 3-5 minutes for 10 images.\\nThen, we will provide the source code for optimizing new tokens and using them for editing upon the acceptance of the paper to facilitate learning new part tokens for the community.\\n\\nW2 - We provided these experiments in Table 1 under \\u201cMask-Based Editing.\\u201d In these experiments, we replaced the attention with manually annotated segmentation masks to act as an upper bound. Despite that, providing the masks gives a clear advantage to these methods over our approach, which is only text-based. Our approach was favored by users in the user study. \\n\\nQ1 - If an image has two objects of the same scale with the same part that is being edited, the edit will most likely apply to both objects. A potential mechanism for choosing which object to edit is optimizing tokens for <left>, <center>, and <right> that are combined with the part tokens. However, this approach does not scale to more than 3 objects. Another approach is using a Vision Language Model to parse the editing prompt into a localization mask to mask only the object of interest. We leave these investigations for future work.\\n\\nQ2 - Our approach can serve as many parts as possible since each optimized token is saved to disk (17 KB) and can be loaded upon request (when the user includes <part-token> in the editing prompt).\\n\\nQ3 - We use random normal initialization.\\n\\nQ4 - Our approach does not rely on specific datasets or pre-trained models. As we showed in Appendix C, 5-10 images are sufficient to achieve good localization. These images can be manually annotated or from parts datasets such as PascalPart or PartImageNet. \\nWe ensure that the training and the test set do not overlap.\\nTo demonstrate that our approach does not depend on the choice of training images and that the training and test set do not overlap, we conduct a 5-fold cross-validation experiment on the \\u201c<head>\\u201d token from PartImageNet.\\nWe obtain an average mIoU of 71.704 and a standard deviation of 4.372. These results demonstrate that our approach performs consistently well, irrespective of the choice of the training images. \\nThis is a consequence of the semantically rich features of pre-trained diffusion models highlighted in \\u201cEmergent Correspondence from Image Diffusion, Neurip 2024\\u201d.\"}", "{\"summary\": \"The paper proposes a method for part editing in images. The paper shows that current state-of-the-art fails when asked to change only particular parts of images (e.g. 'hood' of a car). The paper proposes to perform part token learning, and then uses the attention maps of the learned part tokens for accurate part-based editing of images. Results show improved results over several baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"the objective of accurate part editing with text prompts is relevant for many users.\", \"the method is simple and results show that the method successfully addresses the problem.\", \"overall writing and presentation is good.\"], \"weaknesses\": [\"I find the main scientific/technologic contribution of the paper insufficient for ICLR conference, nor does the paper provided new insights in the functioning of DM for editing. I agree part-based editing is relevant. Given existing methods, part-based editing can be addressed by solving part detection or just user provided masks (for example based on segment anything). The proposed method of learning part tokens makes sense. The computed attention maps reduce the problem to a text-based inpainting problem.\", \"the method is only applied to a very limited set of parts (7). Are these stored in 7 different models or jointly held within a single network ? Could this scale to many more parts ? Some analysis of the quality as a function of number of parts (if contained in a single model) would be interesting.\", \"the method needs to learn new prompts for every new part users might want to change. The method dependents on existing part datasets for these parts, else they need to be created. Do the authors see any other solutions, using other existing models to preven t annotation?\", \"minor\", \"are all weights tuned, or do you use LoRA for layer optimization of the L layers\", \"figure 3 could be improved, it is hard to read in print\"], \"questions\": \"See weaknesses.\\n\\nI think is not of ICLR quality (the scientific contribution is too small) and could be published in a more applied venue (e.g. WACV) or dedicated workshop.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a text-based image editing method for object parts using pre-trained diffusion models. It enhances model understanding of object parts for fine-grained edits through optimized textual tokens and masks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a flexible method for text-based image editing focused on object parts, which is a novel contribution to the field of image processing and editing.\\n2. The paper is well-written and easy to follow.\\n3. The authors have conducted extensive experiments and provide a solid basis for its practical application.\", \"weaknesses\": \"1. The approach relies on a finite and manually defined set of part tokens, which could restrict the flexibility and applicability of the method in real-world scenarios where users might need to edit object parts that are not covered by the predefined tokens. This limitation could affect the generalizability of the technique to a broader range of editing tasks and objects.\\n2. There are many methods nowadays that utilize semantic segmentation to create masks, which are quite similar to this paper. You should supplement your study with some relevant ablation experiments, like replace the attention mask with semantic segmentation part and compare it with similar methods [1][2][3].\\n\\n[1] SmartMask: Context Aware High-Fidelity Mask Generation for Fine-grained Object Insertion and Layout Control.\\n[2] Accelerating Text-to-Image Editing via Cache-Enabled Sparse Diffusion Inference.\\n[3] Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing.\", \"questions\": \"1. The paper does not explicitly mention how it deals with fine-grained edits for multiple objects within an image, such as distinguishing between two heads in an image for editing purposes. could you provide some form of a mechanism to differentiate between objects? How your method deal with this situation?\\n2. How many part tokens can you serve?\\n3. What's your random methods for initializing textual embeddings?\\n4. How do you generate reliable localization masks? Does this process rely on specific datasets or pre-trained models? Will the distribution of the training data for the mask overlap with the distribution of the images used for testing?\\n5. see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for all the comments, positive feedback, and questions. We address the concerns of reviewers individually for each question/weakness but encourage others to read the responses of other reviewers. Most reviewers see our method in a positive light, including that we address a critical problem in image editing (8GnP), an interesting question with great significance for downstream tasks and research (SEmy). The paper is easy to follow (SEmy, 8GnP, rJwB) and provides extensive experiments and practical applications (rJwB, 8GnP). The most common question was about the number of images (10-20), which we addressed by adding extra cross-validation experiments (\\u201cImpact of Images for training part tokens\\u201d in supplementary). Additionally, we want to note that we are aware that training with more images improves localization, but rather, we see it as a strength that our method works with limited images that can be manually annotated or annotated using some other binary mask methods. We added extra experiments and user study results in Table 1 (SEmy), extra works in related sections (SEmy, rJwB), and extra sections in supplementary sections for questions from the reviewers. We have addressed kfJM's concerns, which may have stemmed from the print version of Figure 3. To clarify that only tokens are trained, we have updated the figure by increasing the text size and adding a legend. Specifically, our approach focuses solely on training tokens and does not involve handling multiple models or LoRAs, as mentioned in W2/W4.\\nWe sincerely welcome the reviewers to reconsider their evaluation and update their score if they feel our clarifications and revisions have strengthened the submission. We hope our responses address your concerns, but if there are any remaining questions or additional points to clarify, please feel free to post them.\"}", "{\"comment\": \"We're glad that we were able to address your primary concerns.\\n\\nRegarding the complexity of the method, we analyze the impact of different hyperparameters in the paper to optimize them for fine-grained image editing.\\nHowever, all suggested hyperparameters are *stable*, and the user does not need to tune any of them to obtain good results.\\nWe only allow the user to *optionally* control the parameter $t_e$ to control the locality of the edit.\\nWe have included a screenshot of our Gradio user interface in the revised version of the paper (Appendix Q and Figure 22), which demonstrates how easy it is to use our method in practice with no parameter tuning at all.\\n\\nRegarding handling multiple parts, our pipeline is quite flexible in that term. To edit multiple parts, the user would either edit different parts sequentially or set the editing prompt in the form \\u201cwith a <edit> <part-1> <part-2>\\u201d \\nThese are largely implementation details, and we welcome suggestions on how to make this process more efficient and user-friendly.\"}", "{\"comment\": \"Thank you for your answer. I remain unconvinced that the technological or scientific contribution merits an ICLR paper. You write that the 'he effectiveness of token optimization for part-based image editing might appear self-evident in retrospect, but that is largely because we have successfully demonstrated its utility in this new domain.', but I am not surprised that it works for parts or actually any other semantic localizable description of images (like objects, parts, but also adjectives on color or texture etc). But this part, boils down to applying an existing methods to new data (part-data). I appreciate the inference timestep and layer selection but also consider these minor contributions. Given these considerations, I remain with my rating.\"}", "{\"comment\": \"W1 - As per the reviewer\\u2019s request, we expand Table 1 in the main paper with quantitative comparisons against PnPInversion [1] and InfEdit [2] and qualitative comparisons in Figure 18 in the supplementary material.\\nOur approach outperforms both methods by a huge margin and is favored by users 79.8% and 77.8% of the time, respectively.\\nFor DragonDiffusion, we find that it is a drag-based editing method and, therefore, not directly comparable to our approach. However, we include it in the related work section.\\n\\nW2 - **InstructPix2Pix Comparison** Table 1 of the main paper already has InstructPix2PIx [4] under \\u201ciP2P\\u201d as described in Evaluation setup section 4.1. We use the same shortened abbreviations in Figure 5. The results show that our approach is preferred by 77% of the users over iP2P. Figure 5 also shows that iP2P consistently fails at identifying parts and edits the whole object instead.\\n\\n**Visualization of the editing regions** We provide an additional Figure in the appendix of the revised version of the paper (Figure 20) that shows visualizations of the editing mask for different editing regions (Please see the revised PDF). The figure shows the continuous nature of our blending masks compared to the conventional binary masks used for inpainting.\\n\\n**Applicability to Different Diffusion Models** Our approach can be applied to any UNet-based diffusion model, i.e., any version of the Stable Diffusion family of models. In the paper, we use SDXL for the synthetic image setup, while we use SD 2.1 for the real image editing as we employ Leedits++ as a baseline for this setting.\\nOur approach can also be applied to SD 1.5 as it has the same architecture as SD 2.1.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We believe that we have adequately addressed your concerns, as no follow-up questions or further discussions were raised. We respectfully request the reviewer to reconsider their rating if the additional information and rebuttal sufficiently resolve the issues highlighted.\"}", "{\"comment\": \"W1 - Text-based editing is the most convenient and user-friendly form of editing as it does not require any skill or low-level interaction from the user.\", \"our_paper_introduces_the_first_fine_grained_text_based_editing_approach_that_is_highly_beneficial_for_image_editing_research_as_noted_by_the_reviewers\": \"SEmy: \\u201cThis paper focus on an interesting question, which as great significance to downstreaming research and tasks.\\u201d , 8GnP: \\u201cThis paper addresses a critical problem in image editing\\u2026\\u201d, and rJwB: \\u201cThe paper presents a flexible method for text-based image editing focused on object parts, which is a novel contribution to the field of image processing and editing.\\u201d\\nFor mask-based editing, Segment-Anything (SAM) model usually struggles with segmenting parts as the model does not know if the user is interested in the object or the part (We provide some examples in Figure 21 in the revised version of the paper),\\nEven when users provide manually annotated masks, our approach is still preferred over inpainting (see 'Mask-Based Editing' in Table 1), despite the inherent advantage that manually annotated masks offer to mask-based methods.\\nThis demonstrates the strength and the impact of our approach.\\n\\nW2 - Our approach optimizes a token per part, where each token is a vector with the dimensionality 2048x2 as explained in section 3.1. We only store those tokens on disk, where each part token is only 17 KB. In practice, we can store as many tokens as possible and load them when requested by the user through the special token identified <part-name>.\\n\\nW3 - Our approach does not rely on specific datasets or pre-trained models. As we showed in Appendix C, 5-10 images are sufficient to achieve good localization. These images can be manually annotated or from parts datasets such as PascalPart or PartImageNet. \\nThe 7 tokens that we experimented with in the paper cover most of the common objects such as humanoids, animals, vehicles, and chairs. If the user is interested in learning specialized parts that are not provided, annotating the parts and optimizing the token is a straightforward process. \\nFor annotating the part, online annotation tools such as \\u201cMakeSense\\u201d take around 20-30 seconds per image, which is 3-5 minutes for 10 images.\\nThen we will provide the source code for optimizing new tokens and using them for editing upon the acceptance of the paper to facilitate learning new part tokens for the community.\\n\\n\\nW4 - As we explained in W2, we do not finetune the model weights and keep the model frozen during token optimization. This means no LoRA weights or changes to the underlying model. \\n\\nW5 - We did our best to enhance the figure by adding a legend and increasing the font size in the revised version for better readability on paper.\"}", "{\"comment\": \"We believe that we have adequately addressed your concerns, as no follow-up questions or further discussions were raised. We respectfully request the reviewer to reconsider their rating if the additional information and rebuttal sufficiently resolve the issues highlighted.\"}", "{\"title\": \"rebuttal\", \"comment\": \"I thank the reviewers for their response. Thanks for pointing out my misunderstanding in W2; that is clear now. (I was confused because line 245-249 referred to the layers L for optimization, but now understand that these layers are frozen and used to optimize the token).\\n\\nAll reviewers (including me) are convinced that part-based image editing is desirable. I am also convinced that the proposed method works. However, to make it a ICLR paper, there needs to be a significant technological or scientific contribution (W1). At the moment for me the contribution is too small; we know how to learn part-based tokens and their attention maps can subsequently be used for text-based image editing. Could you shortly summarize the main technological/scientific contributions, directly citing the most relevant methods and emphasizing the main differences with these. Did you use any new insight not used by other methods.\"}", "{\"metareview\": \"This paper presents an image editing method that can perform fine-grained object parts editing. This paper was reviewed by 4 experts in the field, and received 3, 5, 5, 6 scores. The reviewers find that this paper is well-written and easy to follow. However, some critical issues are pointed out in the reviews: insufficient technical contributions, complexity of the method, insufficient experiments. The rebuttal does not fully address these concerns. After rebuttal, this paper receives 3 negative ratings in the final rating, while Reviewer 8GnP though gave positive rating still have concern on complexity of the method. The AC recommends rejection mainly due to its limited technical contribution. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.\", \"additional_comments_on_reviewer_discussion\": \"This paper was reviewed by 4 experts in the field, and received 3, 5, 5, 6 scores. Reviewer kfJM keeps the initial rating 3 after reading the authors' rebuttal, because of the concerns on insufficient technological or scientific contribution. Reviewer 8GnP gave the only positive rating, but still had concerns about the complexity of the method and the ability to handle multiple part edit simultaneously on discussion period. After reading the paper and the authors' rebuttal, the AC agrees with the Reviewers concerns, especially, the limited technical contribution.\"}", "{\"summary\": \"This paper proposes an inference-based image editing method that can perform fine-grained object parts editing. Specifically, this paper trains part-specific tokens that specialize in localizing the editing region at each denoising step, then develop feature blending and adaptive thresholding strategies that ensure editing while preserving the unedited areas.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"(1) This paper focus on an interesting question, which as great significance to downstreaming research and tasks.\\n\\n(2) The overall design of the model design is generally make sense.\\n\\n(3) This paper is easy to follow.\", \"weaknesses\": [\"(1) Some related works are missing. Discussing and compare these related works will be good for improving the paper's quality.\", \"[1] Pnp inversion: Boosting diffusion-based editing with 3 lines of code\", \"[2] Inversion-free image editing with natural language\", \"[3] Dragondiffusion: Enabling drag-style manipulation on diffusion models\", \"(2) Can you provide more experimental results to prove the effectiveness of the proposed method? For example, more comparison results with training-based editing methods such as InstructPix2Pix[4]. More visualization for editing regions of various image-editing prompt pairs. Results of combining the proposed method to different pretrain checkpoints/different diffusion model backbones to show its generalization ability.\", \"[4] Instructpix2pix: Learning to follow image editing instructions\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Q1/W1 - To demonstrate that our approach does not depend on the choice of training images, we conduct a 5-fold cross-validation experiment on the \\u201c<head>\\u201d token from PartImageNet.\\nWe obtain an average mIoU of 71.704 and a standard deviation of 4.372. These results demonstrate that our approach performs consistently well, irrespective of the choice of the training images.\\nThis is a consequence of the semantically rich features of pre-trained diffusion models highlighted in \\u201cEmergent Correspondence from Image Diffusion, Neurip 2024\\u201d. We expanded the supplementary material under \\\"impact of choice of images for training part tokens\\\".\\n\\nQ2/W2 - We understand the concern, and for this reason, we added the Hyperparameters section to Appendix L (in addition to the discussion of some parameters in Figure 10 - 12).\\nWe are also releasing the code, the evaluation benchmarks, and a demo upon acceptance of the paper to facilitate future development.\\n\\nQ3/W3 - Our approach is versatile and can incorporate multiple part edits simultaneously. We provide some examples in Figure 19 in the appendix of the revised version of the paper (please see the revised PDF). In this setting, when the user specifies multiple tokens in the editing prompt, the tokens are loaded and fed through the network to compute cross-attention maps per part token.\\nThen, we accumulate these maps across layers and normalize them jointly across different parts. We provide visualizations for the combined blending masks in Figure 19.\\n\\nW4 - Our source code for optimizing the tokens, editing, and evaluation datasets will be made publicly available upon the acceptance of the paper.\\n\\nW5 - We fixed the typo in the updated version of the paper.\"}", "{\"comment\": \"The authors have addressed primary concerns raised in the initial review. They conducted a 5-fold cross-validation experiment to demonstrate the robustness of their approach to the choice of training images, which is a significant improvement. The commitment to release the code and evaluation benchmarks upon acceptance are also positive steps. However, I still have concerns about the complexity of the method and the ability to handle multiple part edit simultaneously. Hence, I maintain my original recommendation.\"}" ] }
2Q8gTck8Uq
Gradient correlation is a key ingredient to accelerate SGD with momentum
[ "Julien Hermant", "Marien Renaud", "Jean-François Aujol", "Charles Dossal", "Aude Rondepierre" ]
Empirically, it has been observed that adding momentum to Stochastic Gradient Descent (SGD) accelerates the convergence of the algorithm. However, the literature has been rather pessimistic, even in the case of convex functions, about the possibility of theoretically proving this observation. We investigate the possibility of obtaining accelerated convergence of the Stochastic Nesterov Accelerated Gradient (SNAG), a momentum-based version of SGD, when minimizing a sum of functions in a convex setting. We demonstrate that the average correlation between gradients allows to verify the strong growth condition, which is the key ingredient to obtain acceleration with SNAG. Numerical experiments, both in linear regression and deep neural network optimization, confirm in practice our theoretical results.
[ "optimization", "convex", "nesterov momentum", "sgd", "neural network" ]
Accept (Poster)
https://openreview.net/pdf?id=2Q8gTck8Uq
https://openreview.net/forum?id=2Q8gTck8Uq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y51pQR6Apu", "v57KvhMJnO", "tDMEwaBa9m", "qUdxRC9eiL", "ouMrCaLzSC", "kz5Ujm8vJY", "e43uhYUx1C", "ZHINn2IF1d", "VExoQuPlZo", "PUCtkBPOZo", "OrvsLC9Qyi", "O46nwMmR8D", "M1Di5xluhG", "KiNRMDck8f", "IDt6OyOJ6y", "FILvCZv1hW", "FB4WI8o9s3", "CE78AnvH7E", "BPcKizMydN", "7jDazUedgO", "4M4Nxtrx7f", "2hOuHduR0J", "2D7vwmIjV3", "1lNUbQzgmj", "1JxaQ2BzBo" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732493004382, 1732491416975, 1732315331682, 1732627356523, 1731112986620, 1732050202425, 1737523586873, 1733628770830, 1732050393403, 1732047188169, 1732049909275, 1732049957361, 1730672768700, 1732048627012, 1732048601147, 1732049723807, 1732208606362, 1732049828982, 1732050507663, 1732598435577, 1732491504124, 1730646627857, 1730578233900, 1732210730484, 1732523679167 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_b9ZL" ], [ "ICLR.cc/2025/Conference/Submission3639/Area_Chair_V8cJ" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_Dovy" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_b9ZL" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_b9ZL" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3639/Area_Chair_V8cJ" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_zYBC" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_wrcq" ], [ "ICLR.cc/2025/Conference/Submission3639/Area_Chair_V8cJ" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_Dovy" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_wrcq" ], [ "ICLR.cc/2025/Conference/Submission3639/Reviewer_zYBC" ], [ "ICLR.cc/2025/Conference/Submission3639/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the clear response. I will keep my score.\"}", "{\"comment\": \"Dear Reviewer b9ZL,\\n\\nThe author discussion phase will be ending soon. The authors have provided detailed responses. Could you please reply to the authors whether they have addressed your concern and whether you will keep or modify your assessment of this submission?\\n\\nThanks.\\n\\nArea Chair\"}", "{\"title\": \"reply to authors\", \"comment\": \"Thank you for the clarifications. I am keeping my positive score and recommend accepting this paper.\"}", "{\"comment\": \"Thank you for your reply. After consideration, I will increase my score and recommend to accept this paper.\"}", "{\"summary\": \"This paper studies the possibility of obtaining accelerated convergence of the Stochastic Nesterov Accelerated Gradient (SNAG) method. The authors provide a clear proof that the average correlation between gradients allows to verify the strong growth condition, which is essential for achieving accelerated convergence in convex optimization settings. Furthermore, the paper includes comprehensive numerical experiments in both linear regression and deep neural network optimization, empirically validating the theoretical findings. The experimental results are clear and concise. These contributions advance the understanding of momentum-based stochastic optimization techniques and demonstrate the practical effectiveness of SNAG in enhancing convergence rates.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"1. Originality:\", \"Proposes the hypothesis that Stochastic Nesterov Accelerated Gradient (SNAG) can accelerate over Stochastic Gradient Descent (SGD) and proves that this hypothesis is valid when SNAG is under a Strong Growth Condition.\", \"Provides new asymptotic almost sure convergence results for SNAG.\", \"Gives the new characterization of the SGC constant by using the correlation between gradients.\", \"Introduces a new condition named Relaxed Averaged COrrelated Gradient Assumption (RACOGA).\", \"2. Quality and clarity:\", \"Clearly shows when $f$ is convex and $\\\\mu$-strongly convex, it shows the possibility of acceleration of SNAG over SGD is highly dependent on the SGC constant $\\\\rho_K$, where $\\\\rho_K < \\\\sqrt{\\\\frac{L^2_{(K)}}{\\\\mu L}}$.\", \"Provides clear and explicit steps for proofs.\", \"The numerical results are readable and show a clear difference in convergence speed among different algorithms.\", \"3. Significance:\", \"People can get faster and better results by applying the condition proposed in this paper.\"], \"weaknesses\": [\"The text and formulas are a bit dense; the author can add a table to compare the convergence speed of SGD and SNAG under different conditions.\", \"The graphs look good. However, that would be better if the author gave more detail about the explanation for the graph, for example, what the \\\"small values\\\" of RACOGA mean on the graph.\", \"The colors in the right graph for Figure 1(a) are similar, author can use more contrasting colors.\"], \"questions\": [\"While RACOGA has been demonstrated to facilitate the acceleration of SNAG over SGD in convex and strongly convex functions, how does RACOGA perform in non-convex optimization scenarios, such as those commonly found in deep neural network training? Can RACOGA be effectively applied to these more complex models, or are there additional considerations needed to achieve similar acceleration benefits?\", \"The paper highlights that large RACOGA values enable the acceleration of SGD with momentum. However, what practical methods or criteria can be used to identify or achieve large RACOGA values in real-world applications?\", \"How robust is SNAG's performance to variations in RACOGA across different types of datasets and optimization problems?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive feedback and careful reading of our paper. Please see below our detailed answers\\n\\n+ Proof for theorem 4 heavily relies on an existing result (Sebbouh et al. 2021, theorem 9), which one could\\nargue it weakens the theoretical contributions of this work.\\n\\nWe are not sure what the reviewer mean with this remark, as there is no Theorem 9 in [1]. If referring to the\\ntheorem 9 present in our paper, it is a result from [2], that is a classical tool to study almost sure convergence\\nin stochastic optimization.\\n\\nWhat we meant is that we use this result to derive the almost sure rates. And to be transparent we mentioned\\nthat before us, Sebbouh et al. 2021 almost sure results also heavily rely on these results (from [2]), although\\nthey are applied to different algorithms. Finally, note that the strongly convex result of our Theorem 4 does\\nnot relie on the result of Robbins and Siegmund (1971), although it needs less developed tools.\\n\\n+ I appreciate that the authors made an effort to compare RACOGA with gradient diversity and gradient\\nconfusion and agree with the authors that they are not identical, but they do look quite similar.\\n\\nWe are glad the reviewer appreciate our attempt to relate our work with the literature. It is absolutely true\\nthat RACOGA and gradient diversity [3] are strongly related (Equation 39), although they correspond to\\ndifferent viewpoints. RACOGA is a direct measure of the gradient correlation. The gradient confusion [4]\\nassumption is a less closed assumption, as it only measures anti correlation.\\n\\n+ It would be nice to have a table that summarizes results from theorem 1-4 (perhaps including results from\\nthe literature) so that readers don\\u2019t have to go back and forth to compare them.\\n\\nWe thank the reviewer for this suggestion. In order to make the comparison between SGD and SNAG under\\ndifferent conditions more clear we add a table in Section 3 (Table 1).\\n\\n+ Authors have remark 5 to explain the results from theorem 4 which does help me to understand it. I wonder\\nif there is any intuition about why \\u03c1K plays a different role in convex vs strongly convex cases. Also, for the\\nstrongly convex case, it seems we need less noisy data for SNAG to beat SGD , because we want to be small.\\nAm I understanding this correctly ? For continuous strongly convex function, there is a unique minimizer,\\nmeaning it won\\u2019t stuck in some local minimizers. How does this fit into this theory ?\\n\\nThis is a very interesting question, that involves deep optimization concepts. The different role of $\\\\rho_K$ regarding the convexity or strongly convex functions stems from the different nature of momentum in these two cases.\\n \\nWe believe a good intuition can be gained from the continuous variants of SNAG [5,6]. In the strongly convex case, best convergences are achieved choosing momentum to be constant. In our case, the momentum depends on the SGC constant $\\\\rho_K$ making the role of $\\\\rho_K$ crucial. However, in the convex case, as this class of functions include very flat functions (e.g $x \\\\to x^{12}$), best convergences are achieved with increasing momentum. In this case, the SGC constant factor is asymptotically negligible.\\n As you noticed, we want $\\\\rho_K$ in the SGC to be small in both cases. \\n In the convex case, the finite-time speed up depends on SGC constant $\\\\rho_K$.\\n However, the asymptotic speed-up does not depend on $\\\\rho_K$. In the case of strongly convex $f$, both finite-time and asymptotic speed-up depend on $\\\\rho_K$.\\n\\nConvex functions do not have local minimizers (as mentioned just after Definition 2).\\nIn our study, note we that do not assume that the $f_i$ are convex, we only assume that $\\\\frac{1}{N}\\\\sum_{i=1}^N f_i$ is convex/strongly-convex. Therefore some $f_i$ could have many minimizers and not be convex.\\n\\n[1] Sebbouh and Gower and Defazio, Almost sure convergence rates for Stochastic Gradient Descent and Stochastic Heavy Ball, 2021.\\n\\n[2] Robbins and Siegmund A convergence theorem for non negative almost supermartingales and some applications, Optimizing methods in statistics, pages 233-257, 1971.\\n\\n[3] Yin et al. Gradient diversity: a key ingredient for scalable distributed learning, 2018.\\n\\n[4] Sankararaman et al. The impact of neural network overparameterization on gradient confusion and stochastic gradient descent. 2020\\n\\n[5] Su, Boyd, Cand\\u00e8s A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights 2016\\n\\n[6] Siegel Accelerated First-Order Methods: Differential Equations and Lyapunov Functions 2019\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"metareview\": \"The paper shows that positive gradient correlation ensures strong growth condition (SGC) for finite-sum functions. It introduces the RACOGA condition, linking SGC to accelerated convergence of Stochastic Nesterov Accelerated Gradient(SNAG). The results clarify why momentum-based stochastic methods perform well, supported by insightful numerical experiments, and thus make good contributions to optimization community.\", \"additional_comments_on_reviewer_discussion\": \"I gave a lower weight on Reviewer b9ZL's opinion because I know this reviewer personally and he/she is junior PhD student and has not published any work on this area.\\n\\n\\nReviewer zYBC had several concerns, most of which were about the presentation and have been addressed by the authors. \\n\\nReviewer wrcq expressed a concern on the (computational) limitation of the RACOGA condition, which is a minor issue in my opinion. \\n\\nReviewer Dovy had a concern that the proof in this paper uses a previous result (from [2]) which may reduce its novelty. However, I agree with the reviewer that the result used is mainly for proving the almost sure convergence. The main contribution, namely, RACOGA leading to accelerated, is still novel. \\n\\nThree reviewers raised their scores after the rebuttal.\"}", "{\"comment\": \"Thank you for your feedback and careful reading of our paper. Please see below our detailed answers\\n+ Although the almost sure convergence result provided in Theorem 4 deepens our understanding of the SNAG\\nmethod, I believe that the major focus of this paper is still on how the gradient correlation can lead to a\\nbetter SGC coefficient, which gives acceleration for SNAG. From this perspective, the result in Theorem 4\\nseems a bit disjoint from the other sections of the paper.\\n\\nWe thank the reviewer to share his interrogation about the relevance of Theorem 4. As the reviewer noticed,\\nour paper main interest is the link between gradient correlation and the possibility of accelerating SGD\\nwith SNAG. In order to have a complete comparison between these two algorithms, we include almost sure\\nconvergence rates (Theorem 2-4) additionally to expectation rates (Theorem 1-3). Importantly, in the strongly\\nconvex case notice that, as explained in Remark 5, the SGC, linked with gradient correlation, appears in the\\nrate of Theorem 4 so in this case the question of gradient correlation remains crucial. \\n\\n\\n+ Although the RACOGA condition holds in general with a coefficient of $c \\\\geq -\\\\frac{1}{2}$, it does not seem to be easy to find a tight $c$ for the objectives, as evaluating this lower bound involves analyzing the pairwise inner product between gradients for all choices of the parameters in the parameter space. Furthermore, when approaches to $-\\\\frac{1}{2}$, the SGC coefficient $\\\\rho = \\\\frac{N}{1+2c}$ approaches infinity, leading to a trivial condition.\\n\\nWe thank the reviewer for this relevant remark. As discussed in our Appendix H, it is not easy to find a tight RACOGA constant even in the relative simple case of linear regression, and it could be an interesting venue of research. Also, as the reviewer noticed, the case of RACOGA constant approaching $-\\\\frac{1}{2}$ is indeed a critical case, that is exactly the case where SNAG will perform poorly. We propose a modification of Remark 6 in our revised version in order to emphasize this property.\\n\\n+ The experimental verification of the paper seems quite weird. It is noticed that, in the linear regression case, the gradient correlation involves both the inner product term and the $a_i^T a_j$, sign of the residual terms $x^T a_i - b_i$'s. In particular, different signs of the residual terms could lead to completely different lower bound on the gradient correlation. However, it seems that in the experimental design the paper considered only the correlation between the data. Moreover, it may contradict the claim of the paper that RACOGA helps acceleration since in Figure 1.(a) the green curve, with a smaller RACOGA coefficient, led to a faster convergence than the blue curve.\\n\\n\\n We thank the reviewer for this interesting remark. As the reviewer noticed, the sign of $a_i^T a_j$ is not necessarily the same than the sign of RACOGA. We aim to stress that in the particular case of linear regression, Equation 15 gives a strong link between the correlation and the correlation between gradients (RACOGA). For instance, if the data are uncorrelated, then the gradients are also uncorrelated. This is exactly the example detailed in Example 3.\\n \\n Moreover, experimentally, on Figure 1-2, we show RACOGA values and not the data correlation. Then, on Figure 1, we see that the values of RACOGA are very different if the data are weakly correlated (Figure 1.a) or highly correlated (Figure 1.b). We proposed a revised version of Figure 1 with the same scale for RACOGA on both plots to stress this behaviour.\\n \\nFinally, in Appendix H, we develop more deeply the link between RACOGA and the data correlation in some simple examples to get more intuition about this link.\\n\\n+ How is Theorem 2 different from the results in Vaswani et al, 2019 ? It would be nice if the paper could include\\na detailed comparison of the two results.\\n\\nWe thank the reviewer for his suggestion. We assumed that by \\\"Theorem 2\\\", the reviewer meant \\\"Theorem 3\\\", which is the SNAG result in expectation. The convergence result from [1] and Theorem 3 differ as we consider a version of the Algorithm that involves a fewer amount of parameters. For completeness, we added a Section C.3 in our revised version where we compare SNAG with the algorithm of [1]. \\n \\nAlso, we want to emphasize that our main theoretical contributions are rather our almost sure convergence results (Theorem 4) and our gradient correlation analysis (Proposition 1-2, Theorem 5).\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We thank all the reviewers for their careful reading and all their comments. A new version of the paper has been loaded with modifications. These modifications appear in blue in the revised version.\", \"the_contributions_of_our_work_can_be_summarized_as_follows\": \"1. We give a new characterization of the Strong Growth Condition (SGC) constant by using the correlation between gradients, quantified by RACOGA (Propositions 1-2), and we exploit this link to study the efficiency of SNAG. \\n 2. Using our framework, we study the theoretical impact of batch size on the algorithm performance, depending on the correlation between gradients (Theorem 5). \\n 3. We complete convergence results of [1,2] with new almost sure convergence rates (Theorem 4). \\n 4. We provide numerical experiments that show that RACOGA is a key ingredient to have good performances of SNAG compared to SGD.\\n\\n[1] Vaswani et al. Fast and faster convergence of sgd for over-parameterized models and an accelerated perceptron, 2019\\n\\n[2] Kanan et al. Achieving acceleration despite very noisy gradients, 2023.\"}", "{\"comment\": \"+ $L_{(K)}$ acts as an \\\"effective\\\" Lipschitz-continuity parameter of the gradient, depending on the batch size $K$. The results for SGD (Theorems 1 and 2) are provided in terms of $L_{(K)}$ without assuming SGC but the results for SNAG (Theorems 3 and 4) are provided in terms of the SGC parameter $\\\\rho_K$. Then these two results, derived under different conditions, are compared to conclude that SNAG does not accelerate over SGD unless $\\\\rho_k < \\\\frac{L_{(K)}}{\\\\sqrt{L}}$. $C$ (where $C$ is a constant that differs in the convex and strongly convex cases). This seems like an unfair and misleading comparison to me.\\n Both, $L_{(K)}$ and $\\\\rho_K$, measure the stochasticity of the gradient estimates but in different ways. The authors demonstrate in Appendix E.2 an example where $L_{(K)}$ is a tighter estimate of the effective Lipschitz constant than $L\\\\rho_K$. That does show that if the smoothness parameter $L_i$ of each summand $f_i$ is known, then using $L_{(K)}$ would allow us to choose a larger step-size for SGD than the one provided by $\\\\frac{1}{L\\\\rho_K}$. However, a fair comparison between SGD and SNAG can only be made if the same assumptions and information are used to calculate the step size, but there are no convergence results available for SNAG that directly make use of $L_{(K)}$. This feels like comparing apples and oranges. If the authors want to argue that you can use a larger step size for SGD than for SNAG, they should justify why Nesterov would blow up with that step size.\\n\\nWe thank the reviewer for this very relevant Remark. \\nIn fact, between the submission and the review period, we have realized that there was a gap in our work on this question. We have added Appendix F to fill this gap.\\n \\nThe question is to fairly compare the convergence rate of SGC and SNAG.\\n \\nFirst, in Appendix E.2, we justify that a convergence result for SGD under the SGC is not relevant. The reason is, although it allows to compare the two algorithms under the same assumptions, the convergence bound for SGD (Theorem 8) is always worse than the one for SNAG, which is misleading.\\n \\nReversely, as the reviewer notice in his Remark, in the previous version of the paper, we did not justify why we did not consider a convergence result for SNAG under the same assumptions as we did for SGD in Theorem 1, $\\\\textit{i.e.}$ using $L_{(K)}$. We answer this question with Theorem 9 in Appendix F. Note that the bounds of Theorem 9 are achieved by tuning the parameters such that we obtain the fastest decrease without SGC, which is of order $O(n^{-1})$. However, when assuming SGC, we can achieve with SNAG a convergence of $O(n^{-2})$ in the convex case: this indicates that SGC is the characterization of the noise that allows to achieve such a result, see our discussion (Remark 9 in particular) after Theorem 9 in Appendix F. \\n \\nIn conclusion, in order to compare the convergence speed of SGD and SNAG, it is more relevant to use Theorem 1 and Theorem 3, although Theorem 3 makes use of SGC and Theorem 1 does not. Otherwise, comparing Theorem 1 and Theorem 9 indicates that SGD is (almost) always better than SNAG, and comparing Theorem 3 with Theorem 8 indicates that SNAG is always better than SGD, which in both cases is misleading (see Figure 1 for an experimental counterexample).\\n \\nTo make these considerations clearer in the revised version of the paper, we modify Remark 4 to explain this briefly in the main text and we provide a detailed discussion in Appendix (Appendix F in particular).\\n\\n\\n+ Was the Algorithm 2, in its given form, first introduced by Nesterov (2012) \\\"Efficiency of coordinate descent methods on huge-scale optimization problems.\\\"? If yes, the authors should cite that paper. I appreciate Proposition 4 in the appendix showing that the more common two parameter NAG algorithm (Algorithm 8, with $\\\\tau = 0$) can be obtained as a special case of this algorithm with a reparametrization.\\n\\nWe thank the reviewer to point out to us the paper [7] that we indeed did not cite. In our bibliography research, we did not\\nfind older papers that present SNAG. Being able to identify and cite the seminal works is important so we\\nhave added this citation in the revised version.\"}", "{\"comment\": \"+ Proposition 2 suggests that RACOGA holding with $c>-0.5$ is sufficient to verify the SGC. But in Figure 1(a), SNAG does not accelerate over SGD despite the RACOGA values being greater than -0.12. Is there an explanation for this apparent discrepancy\\n\\nAs the reviewer noticed, if having RACOGA holding with $c>-0.5$ is sufficient to verify the SGC, verifying SGC is not sufficient to have acceleration of SNAG over SGD. Remark 3 tells us that $\\\\rho_K$ needs to be small enough to have that SNAG convergence bounds (Theorem 3) are better than SGD convergence bounds (Theorem 1). Importantly, according to our Section 4.2, in order to get $\\\\rho_K$ small, we need to have RACOGA verified with $c$ large enough. On Figure 1.a, the fact that $c > -0.12$ is not sufficient to observe acceleration of SNAG over SGD. Therefore, the observed behaviour is consistent with our theoretical results.\\n\\n+ Just to confirm, in the experiments, were GD and NAG used with the full batch gradient at each step (e.g.\\nwere all of 50k images used for the CIFAR-10 experiment at each training step) ? If yes, this might be worth\\nspecifying explicitly since most of the times in machine learning experiments, NAG refers to Algorithm 8,\\neven when it is used with mini-batch gradients.\\n\\nWe thank the reviewer to highlight that our notations are not totally clear. GD and NAG were indeed used with\\nfull batch gradient at each step. For the linear regression experiments, we coded NAG ourselves (Algorithm\\n7). For the neural network experiments we used the PyTorch implementation. As mentioned in Section 5.2,\\nNAG indeed refers to Algorithm 8. To make it clearer, when using Algorithm 8, we refer to Algorithm 3 full\\nbatch in the revised version of the caption of Figure 2.\\n\\n[1] Gupta et al., Nesterov acceleration despite very noisy gradients, 2024.\\n\\n[2] Sebbouh et al., Almost sure convergence rates for Stochastic Gradient Descent and Stochastic Heavy Ball, 2021.\\n\\n[3] Mertikopoulos et al., On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems, 2020.\\n\\n[4] Gupta et al., Nesterov acceleration despite very noisy gradients, 2024.\\n\\n[5] Liu and Yuan, On Almost Sure Convergence Rates of Stochastic Gradient Methods, 2022.\\n\\n[6] Bottou, Stochastic learning, Summer School on Machine Learning, pages 146-168, 2003.\\n\\n[7] Nesterov, Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems, SIAM Journal on Optimization, volume 22, pages 341-362, 2012.\"}", "{\"summary\": \"The paper studies stochastic versions of Nesterov's accelerated gradient descent (NAG). These algorithms have been previously shown to converge at the same accelerated rates as NAG when the stochastic gradient estimates satisfy the so-called strong growth condition (SGC). Specifically for functions satisfying a finite sum structure, this paper finds a sufficient condition (RACOGA) in terms of gradient correlation that implies the strong growth condition, consequently implying that SNAG converges at an accelerated rate in those settings. Numerical experiments are provided to verify the implications of the RACOGA condition on accelerated convergence of stochastic algorithms.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Previous works have shown that stochastic versions of NAG converge at the same accelerated rates when the gradient estimates satisfy the strong growth condition (SGC). While they provide heuristics that suggest that SGC is a reasonable assumption in the context of overparametrized deep learning, it is not always clear when the condition is actually satisfied. This work addresses that gap in the literature. The authors show that for functions of the form $f=\\\\sum_{i=1}^N f_i$, positive gradient correlation (i.e. $\\\\langle f_i, f_j\\\\rangle \\\\geq 0$ for all $i,j$) is sufficient to guarantee the strong growth condition for the gradients. This result also gives a bound for the strong growth parameter ($\\\\rho$) in terms of the batch size, which is important for choosing optimal parameters for the SNAG algorithms. The main contribution of the paper is a gradient correlation condition (RACOGA) which implies SGC for functions with a finite sum structure. This further implies that SNAG converges at an accelerated rate in those settings. I think this is a useful contribution and a step in the direction of better understanding why momentum-based stochastic gradient algorithms perform well in practice. The authors provide numerical experiments to back their claims, which I found interesting and insightful as well.\", \"weaknesses\": \"1. Line 70: \\\"However, even the question of the possibility to accelerate with SNAG in the convex setting is not solved yet.\\\"\\nThis is either unclear or inaccurate or both. There are several works which address the convergence of accelerated methods in the stochastic setting, both under SGC and with classical Robbins-Monro bounds, at least for smooth objectives. For a rigorous statement, the authors should specify the geometric assumptions, smoothness assumptions, and assumptions on the gradient oracle. Since the authors are aware of previous works on acceleration in convex optimization under SGC noise, it is unclear what meaning is intended.\\n\\n2. More concerningly, the next sentence says \\\"Finally, note that our core results (Propositions 1-2) do not assume convexity, and thus they could be used in nonconvex settings.\\\" Juxtaposed with the previous sentence, it gives the reader the impression that the authors have addressed the question of acceleration for non-convex functions, which is not true. It is true that the conditions PosCorr and RACOGA studied in Propositions 1 and 2 imply SGC even for non-convex functions. But SGC/PosCorr/RACOGA alone is not sufficient for any of the accelerated convergence results provided here or in previous works, some form of convexity is still required. The current phrasing is misleading since, again, it conflates conditions on the noise in the gradient estimates and on the geometry of the objective function, which are in general independent. If there is a relation in the setting the authors consider, they need to explain and emphasize this. I do not see the implication.\\n\\n3. The authors claim one of their main contributions is \\\"new almost sure convergence results (Theorem 4)\\\". However, almost sure convergence is already covered by corollary 5 in Gupta et al. \\\"Achieving acceleration despite very noisy gradients\\\" arXiv:2302.05515. That paper studies a stochastic version of NAG under a condition similar to SGC. The authors should highlight the differences in their results.\\n\\n4. The theorem 4 statement suggests that the authors recover a rate almost surely, but in the current presentation, it is unclear what precisely is meant. Even for $O(n^{-2})$: Is there a random variable C such that $f(x_n) - f(x^*) \\\\leq C/n^2$ simultaneously for all $n$ (and almost surely in probability), or does the random constant $C$ depend on $n$? And, what is meant by $o(n^{-2})$? For a machine learning venue, they should state a non-asymptotic quantitative bound. Almost sure convergence is a notion of convergence which is *not* induced by a metric on a space of random variables. As such, there is no immediate way of making sense of the notion that $f(x_n)$ and $f(x^*)$ are $o(n^{-2})$-close in a specific sense. More explanation is needed. The same concern applies to Theorem 2.\\n\\n5. The title of the paper is \\\"Gradient correlation is **needed** to accelerate SGD with momentum\\\", which makes it sound like gradient correlation is a necessary condition (i.e. if it is not satisfied then SGD with momentum does not converge at an accelerated rate). But I did not see a result proving that in the paper. The results actually claim that it is a sufficient condition. The title does not accurately reflect the main results.\\n\\n6. $L_{(K)}$ acts as an \\\"effective\\\" Lipschitz-continuity parameter of the gradient, depending on the batch size $K$. The results for SGD (Theorems 1 and 2) are provided in terms of $L_{(K)}$ without assuming SGC but the results for SNAG (Theorems 3 and 4) are provided in terms of the SGC parameter $\\\\rho_K$. Then these two results, derived under different conditions, are compared to conclude that SNAG does not accelerate over SGD unless $\\\\rho_k<\\\\frac{L_{(K)}}{\\\\sqrt{L}}\\\\cdot C$ (where $C$ is a constant that differs in the convex and strongly convex cases). This seems like an unfair and misleading comparison to me. Both, $L_{(K)}$ and $\\\\rho_K$, measure the stochasticity of the gradient estimates but in different ways. The authors demonstrate in Appendix E.2 an example where $L_{(K)}$ is a tighter estimate of the effective Lipschitz constant than $L\\\\rho_k$. That does show that if the smoothness parameter $L_i$ of each summand $f_i$ is known, then using $L_{(K)}$ would allow us to choose a larger step-size for SGD than the one provided by $1/L\\\\rho_k$. However, a fair comparison between SGD and SNAG can only be made if the same assumptions and information are used to calculate the step size, but there are no convergence results available for SNAG that directly make use of $L_{(K)}$. This feels like comparing apples and oranges. If the authors want to argue that you can use a larger step size for SGD than for SNAG, they should justify why Nesterov would blow up with that step size.\", \"questions\": \"1. Was the Algorithm 2, in its given form, first introduced by Nesterov (2012) \\\"Efficiency of coordinate descent methods on huge-scale optimization problems.\\\"? If yes, the authors should cite that paper. I appreciate Proposition 4 in the appendix showing that the more common two parameter NAG algorithm (Algorithm 8, with $\\\\tau=0$) can be obtained as a special case of this algorithm with a reparametrization.\\n\\n2. Proposition 2 suggests that RACOGA holding with $c>-0.5$ is sufficient to verify the SGC. But in Figure 1(a), SNAG does not accelerate over SGD despite the RACOGA values being greater than -0.12. Is there an explanation for this apparent discrepancy?\\n\\n3. Just to confirm, in the experiments, were GD and NAG used with the full batch gradient at each step (e.g. were all of 50k images used for the CIFAR-10 experiment at each training step)? If yes, this might be worth specifying explicitly since most of the times in machine learning experiments, NAG refers to Algorithm 8, even when it is used with mini-batch gradients.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"+ How robust is SNAG's performance to variations in RACOGA across different types of datasets and optimization problems?\\n \\nAgain, it is a very interesting and open question. Identifying variations of RACOGA's behaviour across different datasets and optimization problems could help create link between model properties and appropriate optimizers. In all our experiments, both in linear regression (two datasets) and in deep learning classification (two datasets), we observe that large values of RACOGA allow to accelerate SGD with SNAG. Moreover, in the convex setting (linear regression), we show an example where low values of RACOGA lead to SGD being not accelerated with momentum. In our revised version, we added Figure 8 in Appendix A, where we plot the RACOGA values and convergence curves of SGD and SNAG for other classification datasets (MNIST, FashionMNIST, KMNIST, EMNIST), trained with a CNN. RACOGA values remain high for all datasets.\\n\\n[1] Carmon et al., Lower Bounds for Finding Stationary Points I, 2019.\\n\\n[2] Hinder et al., Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond, 2023.\\n\\n[3] Arjevani et al., Lower Bounds for Non-Convex Stochastic Optimization, 2022.\\n\\n[4] Sankararaman et al. The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent, 2020.\"}", "{\"comment\": \"Thank you for your positive feedback and careful reading of our paper. Please see below our detailed answers.\\n\\n+ The text and formulas are a bit dense; the author can add a table to compare the convergence speed of SGD and SNAG under different conditions.\\n\\nWe thank the reviewer for this suggestion. In order to make the comparison between SGD and SNAG under different conditions clearer we have added a table at the beginning of Section 3 (Table 1).\\n\\n+ The graphs look good. However, that would be better if the author gave more detail about the explanation for the graph, for example, what the \\\"small values\\\" of RACOGA mean on the graph.\\n \\nWe are glad that you appreciate our graphs. We also thank the reviewer for his suggestion to bring more clarity to our paper. We propose a new revised version where we add some details. First, we introduce the parameter $\\\\lambda$ (Figure 1) in the main text. Then, we display a revised version of the histogram of RACOGA values (same scale in both sub-figures) in order to show that we mean \\\"small values of RACOGA\\\" in Figure 1a compared to the correlated case (Figure 1b).\\n\\n+ The colors in the right graph for Figure 1(a) are similar, author can use more contrasting colors.\\n \\nWe thank the reviewer for his feedback about Figure 1. In the revised version, we propose a new set of colors in order to make this figure clearer.\\n\\n+ While RACOGA has been demonstrated to facilitate the acceleration of SNAG over SGD in convex and strongly convex functions, how does RACOGA perform in non-convex optimization scenarios, such as those commonly found in deep neural network training? Can RACOGA be effectively applied to these more complex models, or are there additional considerations needed to achieve similar acceleration benefits?\\n \\nWe thank the reviewer for this question. Indeed, it is crucial to derive from this work future theoretical results that could be applied to realistic machine learning optimization tasks. \\n\\nFirst, the possibility to achieve acceleration relies heavily on geometrical assumptions of the function. Even in a deterministic setting, gradient descent is optimal in some cases, as for instance in the case of $L$-smooth functions [1]. \\n\\nIt is therefore critical to find relevant geometrical properties that may relax convexity, and that allow to demonstrate an acceleration of SNAG over SGD. It is known that for some classes of non convex functions such as quasar convex functions [2] or functions with Lipschitz gradient and Lipschitz Hessian [3], momentum type algorithms allow to theoretically achieve faster convergence in a deterministic setting. \\n\\nFinally, we study in this work deep learning classification trainings (Figure 2), which is a non-convex optimization problem, in order to get intuition of the possibility to extend RACOGA tools to this non-convex setting. We observe in Figure 2, that large RACOGA leads to faster convergence of SNAG over SGD. This suggests empirically that RACOGA could be useful even in this non-convex setting. We leave for future works the extension of our theoretical results in this non-convex setting with the appropriate geometrical properties.\\n \\n+ The paper highlights that large RACOGA values enable the acceleration of SGD with momentum. However, what practical methods or criteria can be used to identify or achieve large RACOGA values in real-world applications ?\\n \\nIt is a very interesting and open question. As mentioned in our Remark 7, the authors of [4] show that when considering neural networks, some mechanisms such as increasing the width of the layers allows to increase the gradient confusion, a quantity which is related to RACOGA.\\n\\nHowever, the question of designing mechanisms that may exhibit positive correlation, quantified with RACOGA, is difficult (we discuss the linear case in our Appendix H), and it is still an open question as far as we know.\\n\\nIn this work, we introduce RACOGA as a theoretical tool to understand the possibility of accelerating SGD with SNAG, but more works need to be done to make it a truly practical tool.\\n\\n Finally, note that it appears with our experiments in Figure 2 show that classical neural network architectures such as MLP or CNN lead to high RACOGA values. This suggests that these architectures naturally produce high RACOGA values.\"}", "{\"comment\": \"Thank you for your positive feedback and careful reading of our paper. Please see below our detailed answers.\\n+ Line 70: \\\"However, even the question of the possibility to accelerate with SNAG in the convex setting is not solved yet.\\\" This is either unclear or inaccurate or both. There are several works which address the convergence of accelerated methods in the stochastic setting, both under SGC and with classical Robbins-Monro bounds, at least for smooth objectives. For a rigorous statement, the authors should specify the geometric assumptions, smoothness assumptions, and assumptions on the gradient oracle. Since the authors are aware of previous works on acceleration in convex optimization under SGC noise, it is unclear what meaning is intended.\\n\\nWe thank the reviewer to stress that this sentence might be confusing. In this paragraph of the introduction, we aim to stress the interest of studying the convex setting. The detailed explanation about existing works has been stated in the previous paragraphs of the introduction \\\"Stochastic Nesterov Accelerated Gradient(SNAG)\\\" and \\\"What keeps us hopeful\\\".\", \"we_propose_to_reformulate_this_sentence_as\": \"\\\"However, even in the convex setting,here is still work to do concerning the possibility of accelerating SGD with SNAG. For example, up to our knowledge, characterizing convex smooth functions that satisfy SGC has not been addressed yet.\\\".\\n\\n+ More concerningly, the next sentence says \\\"Finally, note that our core results (Propositions 1-2) do not assume\\nconvexity, and thus they could be used in nonconvex settings.\\\" Juxtaposed with the previous sentence, it\\ngives the reader the impression that the authors have addressed the question of acceleration for non-convex\\nfunctions, which is not true. It is true that the conditions PosCorr and RACOGA studied in Propositions\\n1 and 2 imply SGC even for non-convex functions. But SGC/PosCorr/RACOGA alone is not sufficient for\\nany of the accelerated convergence results provided here or in previous works, some form of convexity is still\\nrequired. The current phrasing is misleading since, again, it conflates conditions on the noise in the gradient\\nestimates and on the geometry of the objective function, which are in general independent. If there is a relation\\nin the setting the authors consider, they need to explain and emphasize this. I do not see the implication\\n\\nWe thank the reviewer to point out that this formulation might be misinterpreted. Our work does not solve the\\nquestion of acceleration for non convex functions and we do not claim in our work to do so. We meant to say\\nthat further works, e.g. considering SGC in non convex setting, could also use RACOGA. Indeed, the essence\\nof this definition and the properties we derive in Propositions 1 and 2 do not stem from convexity. We propose\", \"to_reformulate_this_sentence_as\": \"\\\"Finally, note that our core results about gradient correlation (Propositions\\n1-2) do not assume convexity, and thus could be used in future works beyond the convex setting.\\\".\\n\\n+ The authors claim one of their main contributions is \\\"new almost sure convergence results (Theorem 4)\\\".\\nHowever, almost sure convergence is already covered by corollary 5 in Gupta et al. \\\"Achieving acceleration\\ndespite very noisy gradients\\\" arXiv :2302.05515. That paper studies a stochastic version of NAG under a\\ncondition similar to SGC. The authors should highlight the differences in their results.\\n\\nWe indeed missed the recent result of [1], where the authors indeed give an almost sure convergence result. However note that if they show that $f(x_n) \\\\to f(x^\\\\ast)$ almost surely, they do not provide any convergence rates, as we do in our Theorem 4. We cite this work in our revision and add the sentence \\\"Almost sure convergence has already been addressed in [1] without convergence rates.\\\" in the beginning of section 3.3.\"}", "{\"comment\": \"In order to investigate more deeply the following question of the reviewer :\\n+ How robust is SNAG's performance to variations in RACOGA across different types of datasets and optimization problems?\\n\\nWe added new numerical experiments to a new revised version. On Figure 4, we run GD, SGD, SNAG and NAG to solve a linear regression problem on a real world dataset, which allows to test the behaviour of the algorithms and the associated RACOGA values in an under-parameterized regime (outside our theory) where SNAG appears to be the fastest algorithm. On Figure 10, we study the behaviour of SNAG and SGD, together with RACOGA values, when performing logistic regression (non-convex) on the CIFAR-10 dataset, where both algorithms have the same convergence rate. This might be due to the non-convexity or low RACOGA values.\"}", "{\"comment\": \"+ The theorem 4 statement suggests that the authors recover a rate almost surely, but in the current presentation, it is unclear what precisely is meant. Even for $O(n^{-2})$: Is there a random variable C such that $f(x_n)-f^\\\\ast \\\\leq C/n^2$ simultaneously for all $n$ (and almost surely in probability), or does the random constant depend on $n$ ? And, what is meant by $o(n^{-2})$? For a machine learning venue, they should state a non-asymptotic quantitative bound. Almost sure convergence is a notion of convergence which is not induced by a metric on a space of random variables. As such, there is no immediate way of making sense of the notion that $f(x_n)$ and f$(x^\\\\ast)$ are $o(n^{-2})$-close in a specific sense. More explanation is needed. The same concern applies to Theorem 2.\\n\\nWe thank the reviewer to point out to us that the notion of almost sure convergence has not been formally defined in our work. Almost sure convergence rate are standard in stochastic gradient algorithm analysis (see Theorem 8 in [2], Theorem 2-3 in [3], Corollary 5 in [4], Theorem 1-2-3 in [5] or Convex Convergence Theorem, page 156 in [6]), so we do not recall the definition of this notion. More formally, if $\\\\Omega$ is the set of realization of the noise, we say that $f(x_n) - f^\\\\ast \\\\overset{a.s.}{=} o\\\\left( \\\\frac{1}{n^2} \\\\right)$ if and only if $\\\\exists A \\\\subset \\\\Omega$, such that $\\\\mathbb{P}(A) = 1$ and $\\\\forall \\\\omega \\\\in A$, $\\\\forall \\\\epsilon > 0$, $\\\\exists n_0 \\\\in \\\\mathbb{N}$, such that $\\\\forall n \\\\ge n_0$, $|f(x_n(\\\\omega)) - f^\\\\ast| \\\\le \\\\frac{\\\\epsilon}{n^2}$. In order to make it clearer in the revised version of paper, we recall this definition in the beginning of section 3.\\n \\nFor practical usage, non-asymptotic quantitative bounds are in fact more useful. This is the reason why Theorem 2 and 4 are together with Theorem 1 and 3 that provide finite time quantitative bounds in expectation. However, we are not aware of finite time convergence rates, asymptotic being the standard results in almost sure convergence studies, as it can be seen in above references. As the reviewer mentioned, even the task to define such a convergence is tough.\\n\\nFinally, we are not sure to understand this part of the reviewer remark : \\\"Almost sure convergence is a notion of convergence which is not induced by a metric on a space of random variables.\\\". Do our clarifications about the almost surely rate definition solve this concern ?\\n\\n+ The title of the paper is \\\"Gradient correlation is needed to accelerate SGD with momentum\\\", which makes\\nit sound like gradient correlation is a necessary condition (i.e. if it is not satisfied then SGD with momentum\\ndoes not converge at an accelerated rate). But I did not see a result proving that in the paper. The results\\nactually claim that it is a sufficient condition. The title does not accurately reflect the main results.\\n\\nWe thank the reviewer to point out to us that our title might be confusing.\\nIn fact, our theoretical results show that positive gradient correlation, i.e. RACOGA verified for c > 0, ensures\\naccelerated convergence rate for SNAG. Note that we look empirically at the reverse implication. On Figure\\n1)a), we highlighted that the uncorrelated data, i.e. RACOGA not verified for c > 0, leads to SGD being\\nfaster than SNAG.\\n\\nIn order to suppress any logical ambiguities, we have decided to reformulate our title into : \\\"Gradient correlation is a key ingredient to accelerate SGD with momentum\\\"\"}", "{\"comment\": \"+ In Example 3, the paper demonstrates the benefits of PosCorr condition over the traditional way of verifying the SGC condition by showing that $\\\\rho_K \\\\leq \\\\frac{N}{K}$. Why is $\\\\frac{N}{K}\\\\leq \\\\frac{L_{(K)}}{\\\\mu}$, so that it could be considered an improvement ?\\n\\n\\n We thank the reviewer for this crucial question. In fact, as the reviewer noticed, the core goal of our paper is to see when the SGC is verified with smaller values of $\\\\rho_K$. The smaller the SGC constant $\\\\rho_K$ is, the more informative this condition is.\\n \\nThe bound $\\\\rho_k \\\\le \\\\frac{L_{(K)}}{\\\\mu}$ relies on geometrical properties of $f$ that are difficult to control in practice. The bound $\\\\rho_k \\\\le \\\\frac{N}{K}$ only relies on the number of gradients $N$ and the batch size $K$. These quantities are known in practice. The inequality $\\\\frac{N}{K}\\\\leq \\\\frac{L_{(K)}}{\\\\mu}$ is not true in general. However, in Example 2, this inequality has been shown ($N$ = 2 in this example).\\n \\nFinally, the bound $\\\\rho_k \\\\le \\\\frac{N}{K}$ is more practical, since it does not rely on geometrical properties of $f$ and it seems to be finer in practice. This is the reason why this bound can be considered as an improvement.\\n\\n+ What is $\\\\lambda$ in Figure 1?\\n\\nWe thank the reviewer to point out that the presence of $\\\\lambda$ on Figure 1 was not detailed in the previous main text. As mentioned in our Appendix A.1, $\\\\lambda$ is a parameter that replaces the unknown strong growth condition. Grossly, small $\\\\lambda$ values lead to more aggressive stepsizes, as $s = \\\\frac{1}{L \\\\lambda}$. High RACOGA values mean high correlation, and in this case we can take smaller $\\\\lambda$ and converge faster (figure 1.b). Note that it is not the case with low RACOGA values (figure 1.a). To make it clearer in our paper, we added a paragraph about the role of $\\\\lambda$ in Section 5.1, in the main text.\\n\\n[1] Vaswani et al. Fast and faster convergence of sgd for over-parameterized models and an accelerated perceptron, 2019\"}", "{\"comment\": \"Thank you so much for your detailed reply. I think the response together with the changes made to the paper has resolved most of my concerns. Therefore, I have raised my score to 6.\\n\\nI believe that what is eventually preventing me from giving a higher score to this paper is that the RACOGA condition seems to have limited applicability (as pointed in my Weakness #2). For instance, for neural network training it would be nearly impossible to verify the condition throughout the training process. Although the condition seems close to a necessary condition, I believe that a regulatory condition like this would have more value if it could be applied to more realistics scenarios. Nevertheless, the work itself presents a good contribution to the field, so I would still recommend acceptance.\"}", "{\"comment\": \"Dear Reviewer wrcq,\\n\\nThe author discussion phase will be ending soon. The authors have provided detailed responses. Could you please reply to the authors whether they have addressed your concern and whether you will keep or modify your assessment of this submission?\\n\\nThanks.\\n\\nArea Chair\"}", "{\"summary\": \"The authors study the convergence of SGD with momentum--Stochastic Nesterov Accelerated Gradient (SNAG) and obtained an improved rate compared to vanilla SGD. More precisely, they consider the strong growth condition that, intuitively, quantifies the amount of noise. Using this definition, they are able to achieve e.g. $o(\\\\frac{1}{n^2})$ convergence rate (SGD has $o(\\\\frac{1}{n})$) for convex functions. For certain objective functions, the authors propose a way to compute the strong growth condition and a new condition RACOGA. In addition, the authors have numerical experiments to verify their results.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-organized and easy to read. The material in the supplement serves as a good complement to the main paper. Experimental results are presented in a clear way with nice plots and great details.\\n\\n2. The main result is indeed very interesting to the community and gives some insight into a long-standing question. The theoretical contribution mainly comes from Theorem 4 which provides an almost surely convergence for SNAG showing a speed-up compared to SGD. \\n\\n3. By proposing a new characterization of SGC, the authors improved the assumption so that it only depends on the size of the dataset and batch size. Using this, the authors proposed a new condition--RACOGA. \\n\\n4. The authors also discuss the relation between batch size and gradient correlations which brings interesting insights into when to use stochastic and non-stochastic versions of these algorithms.\", \"weaknesses\": \"1. Proof for theorem 4 heavily relies on an existing result (Sebbouh et al. 2021, theorem 9), which one could argue it weakens the theoretical contributions of this work.\\n\\n2. I appreciate that the authors made an effort to compare RACOGA with gradient diversity and gradient confusion and agree with the authors that they are not identical, but they do look quite similar.\", \"questions\": \"1. It would be nice to have a table that summarizes results from theorem 1-4 (perhaps including results from the literature) so that readers don't have to go back and forth to compare them.\\n\\n2. Authors have remark 5 to explain the results from theorem 4 which does help me to understand it. I wonder if there is any intuition about why $\\\\rho_k$ plays a different role in convex vs strongly convex cases. Also, for the strongly convex case, it seems we need less noisy data for SNAG to beat SGD, because we want $\\\\rho_k$ to be small. Am I understanding this correctly? For continuous strongly convex function, there is a unique minimizer, meaning it won't stuck in some local minimizers. How does this fit into this theory?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the acceleration of Stochastic Nesterov's Accelerated Gradient (SNAG) method over the SGD algorithm. In particular, the study is based on Vaswani et al, 2019, in which the acceleration is first proved based on the Strong Growth Condition (SGC) of the stocastic gradient. This paper extends the previous paper by showing an accelerated almost sure convergence result for SNAG, and develops condition that lead to a better SGC coefficient. Based on this condition, they show how the SGC coefficient changes as the batch size increases. The paper also verifies the condition using experiments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides an extension of the original convergence theorem in Vaswani et al, 2019 in the almost sure convergence form, leading to a more comprehensive understanding of the SNAG algorithm.\\n\\n2. The paper's result covers both convex and strongly convex case. In particular, both the PosCorr and the RACOGA condition can lead to the SGC without assuming the strong convexity.\\n\\n3. Centered around the SGC, the paper develops conditions that implies the SGC, which allows the paper to investigate the relationship between the batch size and the SGC coefficient.\", \"weaknesses\": \"1. Although the almost sure convergence result provided in Theorem 4 deepens our understanding of the SNAG method, I believe that the major focus of this paper is still on how the gradient correlation can lead to a better SGC coefficient, which gives acceleration for SNAG. From this perspective, the result in Theorem 4 seems a bit disjoint from the other sections of the paper.\\n\\n2. Although the RACOGA condition holds in general with a coefficient of $c \\\\geq -\\\\frac{1}{2}$, it does not seem to be easy to find a tight $c$ for the objectives, as evaluating this lower bound involves analyzing the pairwise inner product between gradients for all choices of the parameters in the parameter space. Furthermore, when $c$ approaches to $-\\\\frac{1}{2}$, the SGC coefficient $\\\\rho = \\\\frac{N}{1 + 2c}$ approaches infinity, leading to a trivial condition.\\n\\n3. The experimental verification of the paper seems quite weird. It is noticed that, in the linear regression case, the gradient correlation involves both the inner product term and the $\\\\mathbf{a}_i^\\\\top \\\\mathbf{a}_j$, sign of the residual terms $\\\\mathbf{x} ^\\\\top\\\\mathbf{a}_i - b_i$'s. In particular, different signs of the residual terms could lead to completely different lower bound on the gradient correlation. However, it seems that in the experimental design the paper considered only the correlation between the data. Moreover, it may contradict the claim of the paper that RACOGA helps acceleration since in Figure 1.(a) the green curve, with a smaller RACOGA coefficient, led to a faster convergence than the blue curve.\", \"questions\": \"1. How is Theorem 2 different from the results in Vaswani et al, 2019? It would be nice if the paper could include a detailed comparison of the two results.\\n\\n2. In Example 3, the paper demonstrates the benefits of PosCorr condition over the traditional way of verifying the SGC condition by showing that $\\\\rho_K\\\\leq \\\\frac{N}{K}$. Why is $\\\\frac{N}{K} \\\\leq \\\\frac{L_{(K)}}{\\\\mu}$, so that it could be considered an improvement?\\n\\n3. What is $\\\\lambda$ in Figure 1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response. I appreciate that the authors are receptive to feedback and have made a good effort to incorporate the feedback into their revision. I still have some doubts about the comparison between SGD and SNAG under different conditions. But now with the addition of remark 4 and appendix F, the authors are at least being more transparent about the nature of the comparison. It is interesting to see that without SGC, only a non-accelerated rate can be proved for SNAG. If the authors have the space for it, they can consider moving the discussion in Remark 9 to the main body of the article (this is only a suggestion and does not affect my recommendation for the paper).\\n\\nNevertheless, I think that the overall contributions of the paper are interesting and useful enough. Most of my concerns were about the presentation, which have been addressed by the authors. I am happy to increase my score.\"}", "{\"comment\": \"Thank you for your answer. We thought we took into account your remarks and questions, and we modified the paper accordingly (Table 1, new design of Figure 1). However your recommendation is still not positive about our paper. Could you please detail what are the remaining weaknesses of the paper ? We will be glad to answer your concerns.\"}" ] }
2PzozgigiA
CollabEdit: Towards Non-destructive Collaborative Knowledge Editing
[ "Jiamu Zheng", "Jinghuai Zhang", "Tianyu Du", "Xuhong Zhang", "Jianwei Yin", "Tao Lin" ]
Collaborative learning of large language models (LLMs) has emerged as a new paradigm for utilizing private data from different parties to guarantee efficiency and privacy. Meanwhile, Knowledge Editing (KE) for LLMs has also garnered increased attention due to its ability to manipulate the behaviors of LLMs explicitly, yet leaves the collaborative KE case—in which knowledge edits of multiple parties are aggregated in a privacy-preserving and continual manner—unexamined. To this end, this manuscript dives into the first investigation of collaborative KE, in which we start by carefully identifying the unique three challenges therein, including knowledge overlap, knowledge conflict, and knowledge forgetting. We then propose a non-destructive collaborative KE framework, COLLABEDIT, which employs a novel model merging mechanism to mimic the global KE behavior while preventing the severe performance drop. Extensive experiments on two canonical datasets demonstrate the superiority of COLLABEDIT compared to other destructive baselines, and results shed light on addressing three collaborative KE challenges and future applications. Our code is available at [https://github.com/LINs-lab/CollabEdit](https://github.com/LINs-lab/CollabEdit).
[ "Collaborative Learning", "Knowledge Editing" ]
Accept (Poster)
https://openreview.net/pdf?id=2PzozgigiA
https://openreview.net/forum?id=2PzozgigiA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "td7hNTFTjU", "rizItV1w6z", "qg89ZCavXb", "oxHUFuwwmW", "oRoWJPaJcT", "kp4r3n9het", "dbUAexON2C", "b7UxmHPBiE", "XeVOnnKfju", "TZTl6eGOCP", "Rvsi6OTqvO", "G8dYHLHLKA", "G1YQq90pJ3", "C0I0kwvH61", "7K6A1XUQ6e", "2138AS8DFT" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730499484275, 1732378169933, 1737523759616, 1732388264942, 1732697253963, 1734750350117, 1732377806212, 1732378069349, 1732697176575, 1732377908984, 1730128981351, 1730406644778, 1732377383721, 1732377588345, 1732378206068, 1733155791054 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6288/Reviewer_jVrX" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6288/Reviewer_hEBc" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Area_Chair_9X4B" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Reviewer_h2zh" ], [ "ICLR.cc/2025/Conference/Submission6288/Reviewer_hEBc" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ], [ "ICLR.cc/2025/Conference/Submission6288/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper investigates the collaborative knowledge editing (KE) for large language models (LLMs). It identifies three primary challenges in this domain: knowledge overlap, knowledge conflict, and knowledge forgetting. The authors propose a framework called COLLABEDIT, which utilizes a non-destructive model merging mechanism to aggregate knowledge edits from multiple parties while maintaining performance and privacy.\\n\\nThe framework aims to mimic the optimal global editing behavior without the significant performance drops associated with existing destructive methods. Through extensive experiments on canonical datasets, the authors demonstrate that COLLABEDIT outperforms traditional approaches, addressing the identified challenges effectively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"COLLABEDIT allows for non-destructive knowledge editing, which prevents significant performance drops that are common in traditional methods\\n\\nThe framework is versatile and can integrate existing knowledge editing methods, providing a comprehensive solution to collaborative KE challenges\\n\\nEmpirical results show that COLLABEDIT outperforms existing destructive baselines, demonstrating superior editing performance even with a large number of edits\", \"weaknesses\": \"The non-destructive merging mechanism may introduce additional complexity in implementation compared to simpler, traditional methods.\\n\\nIts scalability in large collaborative environments or with numerous clients may need further exploration.\\n\\nMore experiments on different LLMs could benefit the demonstration of the effectiveness of the proposed method.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal for reviewer h2zh (Part 2)\", \"comment\": \"> **Q1:** Why was the setup of editing 10 models with 500 requests (Table 1 and 2) per model not applied consistently in Table3 ?\\n**Q2:** Could you clarify why the MCF dataset was not included in experiments in Table 3?\\n> \\n\\n**A:** Our experiments on MALMEN are primarily based on modifying their source code to support our collaborative knowledge editing. Therefore, to ensure the effectiveness of MALMEN, we use a consistent experimental setup as MALMEN.\", \"table_3_differs_from_table_1_and_table_2_in_our_paper_mainly_from_the_following_two_perspectives\": \"- **We evaluate MALMEN with 8 models and 125 edit requests per model due to the inherent issue of MALMEN\\u2019s codes.**\\n - **Simple experiments with default parameters:** Table 3 was proposed to verify whether our CollabEdit's collaborative editing performance is universally applicable across different KE methods. Therefore, we temporarily conducted a simple `1000=8x125` experiment based on the MALMEN\\u2019s codes, which is a relatively default experimental setup for MALMEN\\u2019s codes.\\n - **Defects in the source code of the MALMEN:** Due to underlying implementation issues, we are temporarily unable to modify the source code of MALMEN to support collaborative editing with 5000 edit requests. Directly changing the hyperparameters to 5000 edits would result in insufficient memory in the A800 (80G) GPU (it needs retraining the hypernetwork). The optimization of MALMEN's code is not within the scope of our work; we simply aim to validate the effectiveness of our framework on MALMEN. However, we will continue to optimize the MALMEN code in the future to complete the `5000=10x500` experiment and integrate it into our code framework.\\n- **We evaluate MALMEN just on the zsRE dataset because its code lacks support for MCF.**\\n - **Additional experiment of MALMEN on the MCF dataset:** The codebase of MALMEN does not support the MCF dataset. According to our re-implementation (i.e., **R-Table 3 and R-Table 4**), MALMEN is not effective on the MCF dataset. Even for global editing, the editing score is lower than 10%. However, we note that `CollabEdit` still achieves non-destructive editing performance as the global editing in this scenario.\\n - **Discussion on the MALMEN\\u2019s bad KE performance:** We explain that the poor editing performance of MALMEN on the MCF dataset is due to the need for a large training dataset for the training of hypernetwork. MALMEN achieves relatively good performance on the zsRE dataset because it uses `163,196` records of zsRE as the training set. However, the MCF dataset only contains `20,877` records in total. Even if we split the dataset with a ratio of 9:1 (R-Table2 and R-Table3), the results are still not desirable.\\n\\nTo better illustrate the generalizability of our `CollabEdit`, we further test our framework using the latest KE method\\u2014`AlphaEdit` [1]. `AlphaEdit` mitigates the issue of disruption and achieves state-of-the-art editing performance. As shown in **R-Table 5**, our method still exhibits nearly non-destructive collaborative KE performance when using `AlphaEdit` as the backend (The LLM is GPT2-XL). \\n\\n[1] AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models\\n\\n**R-Table 3:** Overall editing performance on GPT-J (6B) , based on MALMEN. We edit 8 models and each model will be edited by 125 requests of mcf. The \\u201cScore\\u201d serves as the overall metric. \\n\\n| **Method** | **ES**\\u2b06 | **GS**\\u2b06 | **LS**\\u2b06 | **Score**\\u2b06 |\\n| --- | --- | --- | --- | --- |\\n| **Global-Edit** | 2.63 | 41.11 | 20.21 | 6.60 |\\n| **Ties-Merging** | 0.09 | 8.39 | 14.16 | 0.26 |\\n| **Task-Arithmetic** | 1.26 | 17.18 | 19.53 | 3.32 |\\n| **Simple-Average** | 0.78 | 20.31 | 19.43 | 2.16 |\\n| **CollabEdit** | **2.05** | **42.18** | **18.94** | **5.31** |\\n\\n**R-Table 4:** Overall editing performance on GPT2-XL , based on MALMEN. We edit 8 models and each model will be edited by 125 requests of mcf. The \\u201cScore\\u201d serves as the overall metric.\\n\\n| **Method** | **ES**\\u2b06 | **GS**\\u2b06 | **LS**\\u2b06 | **Score**\\u2b06 |\\n| --- | --- | --- | --- | --- |\\n| **Global-Edit** | 4.49 | 38.47 | 17.18 | 9.77 |\\n| **Ties-Merging** | 0.09 | 6.73 | 9.86 | 0.26 |\\n| **Task-Arithmetic** | 1.46 | 18.75 | 16.89 | 3.76 |\\n| **Simple-Average** | 1.75 | 24.41 | 15.43 | 4.42 |\\n| **CollabEdit** | **4.49** | **41.11** | **17.18** | **9.82** |\\n\\n**R-Table 5:** Overall editing performance on GPT2-XL, based on AlphaEdit. We edit 10 models and each model will be edited by 500 requests of mcf. The \\u201cScore\\u201d serves as the overall metric. \\n\\n| | MCF | | | |\\n| --- | --- | --- | --- | --- |\\n| **Method** | **NS**\\u2b06 | **PS**\\u2b06 | **ES**\\u2b06 | **Score**\\u2b06 |\\n| **Global-Edit** | 65.51 | 85.4 | 97.76 | 80.63 |\\n| **Ties-Merging** | 75.42 | 31.97 | 31.72 | 39.44 |\\n| **Task-Arithmetic** | 51.74 | 58.04 | 65.64 | 57.92 |\\n| **Simple-Average** | 77.62 | 34.71 | 44.32 | 46.68 |\\n| **CollabEdit** | **63.61** | **84.08** | **96.04** | **78.89** |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thanks for the response.\", \"comment\": \"I would like to thank the authors for the detailed response to address my concerns. I have adjusted my scores accordingly. All the best.\"}", "{\"title\": \"A follow-up message about the rebuttal for the CollabEdit paper\", \"comment\": \"Dear reviewer `h2zh`\\uff1a\\n\\nWe hope this message finds you well.\\n\\nWe are writing to kindly inquire about **the status of your feedback on our recent rebuttal**. We understand that your time is valuable, and we greatly appreciate the effort you have already put into reviewing our manuscript. Your insights are crucial to the improvement of our work, and **we are eager to address any remaining concerns you may have**.\\n\\nIf there are any additional questions or clarifications needed from our side, please **do not hesitate to let us know**. Since the discussion phase has been extended, we hope to take advantage of this additional valuable time to **engage in more in-depth exchanges with you**.\\n\\nThank you once again for your time and consideration. We look forward to hearing from you soon.\\n\\nBest regards,\\n\\nAuthors of\\u00a0`CollabEdit`\"}", "{\"metareview\": \"In this paper, the authors explore collaborative knowledge editing in LLMs and propose a framework that leverages non-destructive model merging to enable knowledge editing from multiple parties while preserving both performance and privacy. Reviewers agreed that the proposed framework is effective, the theoretical analysis is robust, and the experiments are thorough. There were discussions regarding the framework's complexity, scalability, and the need for additional experiments with diverse LLMs. Overall, I believe this paper exceeds the acceptance threshold.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed reviewers' concerns by providing a complexity analysis and conducting simulated experiments to evaluate scalability with a large number of clients and diverse LLMs. They also added further descriptions to highlight the significance of the work. Overall, the responses are satisfactory.\"}", "{\"title\": \"Rebuttal for reviewer hEBc (Part 1)\", \"comment\": \"Dear reviewer\\u00a0`hEBc`:\\n\\nThank you for your review. We would like to address your concerns in detail below.\\n\\n---\\n\\n> **W1**: The destructive performance of fed-average for knowledge editing is not surprising\\n> \\n\\n**A:** Thank you for your appreciation of our mathematical contribution! Below are some responses to the mentioned weakness: \\n\\n- **Trying to solve rather than just reveal the problems:** In this work, we **pioneer the exploration** of collaborative KE and compare the performance of existing collaborative learning methods (e.g., fed-average) for KE. We emphasize that revealing the destructive performance of these methods is not the main focus of this work. Instead, we aim to tackle this performance gap with our designed theoretical framework to promote the practical applications of collaborative KE.\\n- **Our contribution:**\\n - **Tackling an unsolved problem:** Traditional collaborative learning methods lead to significant degradation in editing performance, highlighting the difficulty of collaborative KE.\\n - **Laying the Groundwork:** Our work systematically investigates this important yet challenging problem based on our theoretical framework. In particular, we propose `CollabEdit`, the first non-destructive collaborative KE method that allows multiple parties to jointly edit the knowledge of LLMs with guaranteed performance.\\n - **Comprehensive Analysis:** We identify three key problems tailored to this paradigm and propose approaches to (effectively) mitigate the performance drop (e.g., applying a dynamic covariance matrix to address knowledge forgetting). We believe this work can inspire a broad range of further exploration in this direction.\\n\\n---\\n\\n> **W2:** Knowledge Conflict is addressed in a rather ad hoc manner.\\n> \\n\\n**A:** Knowledge conflict poses a significant challenge in the context of collaborative KE. \\n\\n- **The primary challenge lies in the subjective and personalized nature of conflict resolution**. When conflicts arise between edits made by different clients, determining which edits to retain is inherently **ambiguous**. The decision should be tailored to specific application scenarios. For example, one might prioritize edits from clients with higher urgency or those who have made more substantial contributions.\\n - In this work, we aim for a **general and objective** approach. Specifically, we propose to utilize strategies such as `FCFS` (First Come, First Served) to resolve conflict and apply data augmentation (e.g., prompt rephrasing) to preserve the desired edit requests. We believe this approach, while straightforward, is effective and adaptable in tackling the issue of knowledge conflict in various scenarios.\\n- **Another significant challenge lies in identifying knowledge conflicts in a privacy-preserving manner**. Our framework prioritizes privacy by allowing conflicts to occur initially and then addressing them in subsequent rounds based on specific criteria (e.g. `FCFS`). Moreover, we employ data augmentation techniques to mitigate the effects of the knowledge conflict.\\n\\nThough there might be other methods to resolve the challenges, they involve mechanism design and are beyond the scope of this manuscript. We hope that our approach can serve as a catalyst for further research and inspire more insightful discussions in this direction.\"}", "{\"title\": \"Rebuttal for reviewer h2zh (Part 1)\", \"comment\": \"Dear reviewer\\u00a0`h2zh`:\\n\\nThank you for your review. We would like to address your concerns in detail below.\\n\\n> **W1:** Large-scale federated LLM scenarios are currently uncommon\\n> \\n\\n**A:** The reviewer might underestimate the research value and accumulated achievements of federated learning [1,2] and collaborative learning [9]. Below are some real-world application examples:\\n\\n- **Industrial federated** **scenarios: *Tencent's Tianyan Lab*** and ***WeBank*** jointly developed a medical federated learning framework [3]. ***NVIDIA*** introduced ***NVIDIA Clara*** [5] Federated Learning and NVIDIA FLARE SDK [10] for healthcare data privacy. On September 6, 2019, ***WeBank*** and ***Extreme Vision*** launched the first visual federated learning system [6] for industry upgrades.\\n- **The application of Federated learning for LLM:**\\n - On 16 Oct. 2023, ***FATE*** [11] facilitates federated learning for large language models.\\n - On 5 Apr. 2024, ***Google*** [12] applied federated LLM in `GBoard` with notable real-word results.\\n - On 11, Oct. 2024, ***Prime Intellect*** [13] announced the first decentralized training of a 10B Parameter Model.\\n - On 5 Nov. 2024, ***Photon*** [7] proposed worldwide federated pre-training of LLMs for data privacy.\\n - Collaborative/Federated learning [7,8,11,12,13] is a highly **promising approach** to address the issues of information silos and the contradiction between local limited computing resources and the unbounded scale of models. Collaborative KE, as compared to local training in FL, offers a more compute- & memory- efficient form of local knowledge editing and holds significant promise.\\n\\nFinally, we want to emphasize that regardless of whether the real-world applications or scenarios are \\u201ccommon\\u201d, the ultimate goal of scientific research is to **solve existing problems** and **cultivate fertile ground for theoretical applications**. Scientific research should precede industrial applications and lay the theoretical foundation for the industry, which may contribute to the development of the industry and lead to the emergence of more applications.\\n\\n[1] Openfedllm: Training large language models on decentralized private data via federated learning\\n\\n[2] Federated unlearning: Guarantee the right of clients to forget\\n\\n[3] Privacy-Preserving Technology to Help Millions of People: Federated Prediction Model for Stroke Prevention \\n\\n[4] [EGX Platform for Accelerated Computing | NVIDIA](https://www.nvidia.com/en-us/data-center/products/egx/)\\n\\n[5] [NVIDIA Clara](http://nvidia.com/clara)\\n\\n[6] FedVision: An Online Visual Object Detection Platform Powered by Federated Learning\\n\\n[7] Photon: Federated LLM Pre-Training\\n\\n[8] The Future of Large Language Model Pre-training is Federated\\n\\n[9] Collaborative learning via prediction consensus\\n\\n[10] [NVIDIA FLARE](https://nvidia.github.io/NVFlare/)\\n\\n[11] FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models\\n\\n[12] Prompt Public Large Language Models to Synthesize Data for Private On-device Applications\\n\\n[13] [INTELLECT\\u20131: Launching the First Decentralized Training of a 10B Parameter Model](https://www.primeintellect.ai/blog/intellect-1)\\n\\n---\\n\\n> **W2:** Additional experiments on recent models such as LLaMA-2, LLaMA-3, or Gemma.\\n> \\n\\n**A: R-Table 2** presents additional experiments of collaborative KE on LLama-3-8B. We use MEMIT as the backend KE algorithm and adopt the default setting in our paper (i.e., 10 clients and 5000 edit requests in total). The experiments show that our `CollabEdit` still achieves non-destructive editing performance on the LLama-3.\\n\\n**R-Table 2:** Overall editing performance on LLama-3, based on MEMIT. The \\u201cScore\\u201d serves as the overall metric. (All metrics are better when higher)\\n\\n| **Method** | NS\\u2b06 | PS\\u2b06 | ES\\u2b06 | **Score**\\u2b06 |\\n| --- | --- | --- | --- | --- |\\n| **Global-Edit** | 86.62 | 76.07 | 95.66 | 85.36 |\\n| **Ties-Merging** | 89.65 | 16.44 | 16.36 | 22.53 |\\n| **Task-Arithmetic** | 49.33 | 51.12 | 50.48 | 50.29 |\\n| **Simple-Average** | 89.92 | 10.94 | 10.04 | 14.84 |\\n| **CollabEdit** | **85.8** | **77.2** | **95.3** | **85.46** |\\n\\n---\\n\\n> **W3:** some figures and tables (e.g., Figure 3 and Table 4) are misaligned\\n> \\n\\n**A:** As mentioned in the **Global Response**, we have adjusted the structure of the paper and highlighted these changes **in orange.**\"}", "{\"title\": \"A follow-up message about the rebuttal for the CollabEdit paper\", \"comment\": \"Dear reviewer `jVrX`\\uff1a\\n\\nWe hope this message finds you well.\\n\\nWe are writing to kindly inquire about **the status of your feedback on our recent rebuttal**. We understand that your time is valuable, and we greatly appreciate the effort you have already put into reviewing our manuscript. Your insights are crucial to the improvement of our work, and **we are eager to address any remaining concerns you may have**.\\n\\nIf there are any additional questions or clarifications needed from our side, please **do not hesitate to let us know**. Since the discussion phase has been extended, we hope to take advantage of this additional valuable time to **engage in more in-depth exchanges with you**.\\n\\nThank you once again for your time and consideration. We look forward to hearing from you soon.\\n\\nBest regards,\\n\\nAuthors of\\u00a0`CollabEdit`\"}", "{\"title\": \"Rebuttal for reviewer hEBc (Part 2)\", \"comment\": [\"> **W3:** The new knowledge can be easily prompted out from the LLMs by asking questions.\", \">\", \"**A:** Thanks for this intriguing question. However, we have noticed that the concerns raised by the reviewer contain some misunderstandings regarding the contributions of our CollabEdit. We will clarify them as follows:\", \"**1. The privacy risk of the reviewer\\u2019s concern is different from that of** `CollabEdit`**.**\", \"**The risk in our** `CollabEdit`**:** The issue mentioned by the reviewer is indeed one that LLMs should address. This issue is independent of the privacy risks that our `CollabEdit` focuses on. In our threat model, we focus on whether we can ensure the privacy of client-edited data in collaborative scenarios and, based on this, also ensure the privacy of the client's identity (we cannot determine who edited the knowledge).\", \"**The risk in reviewer\\u2019s concern:** The reviewer's concern, on the other hand, is about whether model weights might leak private data. To be honest, all KE methods may have this risk, and to address this, it might be more appropriate to firstly investigate strategies such as access control or safety alignment in just a single model context. These topics are out of the scope of our work, and we leave for future researchers to explore the intersection of these two areas (e.g., whether performing KE on a safety-aligned model would reduce its security).\", \"**2. The reviewer assumes stronger adversary capabilities: attackers may have to guess the edit requests.**\", \"The reviewer assumes that the attacker knows client-edited data and can craft targeted queries (questions) to test whether some specific data has been updated.\", \"However, it is non-trivial to extract the data due to the lack of access to edit requests (Please refer to the proof regarding the privacy guarantees of $KK^{\\\\top}$ in Section 6 of paper).\", \"Additionally, `CollabEdit` ensures that the attacker cannot identify which user edits a specific piece of knowledge due to the anonymization, which protects the identity of a client.\", \"In conclusion, the attacker can hardly gain any direct or related knowledge about the edits.\", \"Since the attacker cannot extract any information from the edit requests ($KK^{\\\\top}$), he/she can only use random queries to the old and new models. **With a large search space of** **edit requests**, it is challenging for a third party to **guess** what has been edited during the KE (In other words, the cost of the attack for a specific edit is extremely high).\", \"This is particularly important when KE is utilized for machine unlearning [1]. Specifically, machine unlearning often involves edit requests with private information (e.g., ID numbers), which, unless previously leaked, would be nearly impossible for an average person to accurately guess.\", \"**Summary:** the attack proposed by the reviewer follows a different threat model and assumes different adversary capabilities.\", \"[1] On Knowledge Editing in Federated Learning: Perspectives, Challenges, and Future Directions\"]}", "{\"summary\": \"This paper introduces COLLABEDIT, a framework designed for collaborative knowledge editing (KE) in LLMs within federated learning scenarios. COLLABEDIT allows multiple parties to collaboratively edit the knowledge in LLMs while preserving data privacy, a novel scenario within knowledge editing and federated learning. It addresses three main challenges\\u2014knowledge overlap, knowledge conflict, and knowledge forgetting\\u2014by implementing a non-destructive model merging technique that aims to achieve performance close to direct global model editing without degrading results. Extensive experiments on GPT-J and GPT2-XL demonstrate the effectiveness of COLLABEDIT, showing improvements over existing approaches in federated scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper identifies and addresses a novel problem of knowledge editing in federated learning for LLMs, a new setting within model editing research.\", \"The authors propose a straightforward yet effective method\\u2014COLLABEDIT\\u2014that enables privacy-preserving collaborative editing, which is an essential consideration in multi-party learning scenarios.\", \"Experiments on GPT-J and GPT2-XL show that COLLABEDIT can substantially improve performance over methods like MEMIT in federated settings, highlighting its practical effectiveness in this new problem space.\"], \"weaknesses\": [\"The need for collaborative knowledge editing within federated LLM may be limited, as large-scale federated LLM scenarios are currently uncommon. This reduces the perceived applicability and impact of the problem being solved.\", \"The experiments are conducted on older models like GPT-J and GPT2-XL. More recent models such as LLaMA-2, LLaMA-3, or Gemma would provide stronger validation of the proposed method\\u2019s efficacy.\", \"The paper\\u2019s structure could benefit from refinement, as some figures and tables (e.g., Figure 3 and Table 4) are misaligned, affecting readability and presentation quality.\"], \"questions\": [\"Why was the setup of editing 10 models with 500 requests (Table 1 and 2) per model not applied consistently in Table3?\", \"Could you clarify why the MCF dataset was not included in experiments in Table 3? This dataset would likely provide a valuable benchmark for evaluating the framework\\u2019s robustness in handling knowledge conflicts.\", \"In the knowledge overlap experiments, the focus was on the R value\\u2019s $\\\\ell_2$-norm rather than directly showing the editing method\\u2019s performance. How does COLLABEDIT perform when subjected to repeated editing requests for the same knowledge items?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the generalization of knowledge editing within the collaborative learning setting, with a focus on ensuring privacy while modifying the knowledge of large language models (LLMs). The authors propose a novel approach by sharing $KK^{T}$, an intermediate weight associated with the keys of edited knowledge, instead of naively sharing and averaging weights, which is theoretically proven to be resistant to attacks. The experiments conducted demonstrate the effectiveness of the proposed approach.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper tackles an important problem of generalizing knowledge editing to collaborative learning settings where privacy is a critical concern.\", \"The authors provide a compelling theoretical analysis of the limitations of naive weight sharing and introduce the concept of sharing $KK^{T}$, which is proved to be difficult to attack in the traditional privacy-aware setting.\", \"The experiments conducted seem to effectively demonstrate the effectiveness of the proposed method.\"], \"weaknesses\": [\"It is not surprising to see the destructive performance of direct fed-average for knowledge editing, as edits individual client are naturally diluted when models are averaged, although I appreciate the formal mathematical treatment of the issue.\", \"While knowledge conflict is identified as a key challenge, the paper addresses it in a rather ad hoc manner compared to other challenges, which are supported by theoretical analysis.\", \"My biggest concern is on the privacy part of the model. Although the authors propose to share $K^{T}K$ and providing theoretical proof of its resistance to attacks, the paper does not fully address the new privacy challenges faced by LLMs. If the edit is successful, the new knowledge can be easily prompted out from the LLMs by simply asking questions. This is especially convenient given that most knowledge editing tasks involve only the editing of factual knowledge. Therefore, the traditional privacy methods may not suffice in the LLM case, and further exploration in preserving privacy for knowledge editing is needed.\"], \"questions\": \"Please refer to my summary of my weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Response\", \"comment\": \"Dear Reviewer\\u00a0`jVrX` , `hEBc` and `h2zh` :\\n\\nWe sincerely thank the reviewers for their insightful feedback. We are delighted that the reviewers acknowledged that ***the tackled problems*** are important and novel (Reviewers `jVrX`,`hEBc`,`h2zh`), the collaborative editing ***performance*** of our `CollabEdit` is superior (Reviewers `jVrX`,`hEBc`,`h2zh`), our `CollabEdit` is ***privacy-ensured*** (Reviewers `hEBc`,`h2zh`), the ***experiment*** is effective and substantial (Reviewers `hEBc`,`h2zh`).\\n\\nFurthermore, our contributions to proposing ***a versatile framework*** (Reviewer `jVrX`), introducing a ***new problem space with practical effectiveness*** (Reviewer `h2zh`), providing ***compelling theoretical analysis*** (Reviewer `hEBc`), the ***comprehensive solutions*** to collaborative KE challenges (Reviewer `jVrX`) are acknowledged.\\n\\n### **Summary of Contribution and Novelty**\", \"our_work_stands_out_through_the_following_key_contributions_and_innovations\": [\"1. **A Novel Paradigm with Insightful Findings:**\", \"To our knowledge, we ***are the first*** to propose the collaborative KE paradigm (***a new problem*** space including naive collaborative KE baselines, Global-Edit, and our `CollabEdit`), which obtains ***practical effectiveness*** and are recognized by Reviewers `h2zh`.\", \"Our study introduces ***important and novel problems*** in this novel paradigm, which are also recognized by Reviewers `jVrX`,`hEBc`,`h2zh`.\", \"2. **A Novel Framework with Superior Collaborative KE performance:**\", \"we propose the first non-destructive collaborative KE framework with superior ***Collaborative KE performance*,** which are recognized by Reviewers `jVrX`,`hEBc`,`h2zh`.\", \"Our `CollabEdit` is ***versatile***, allowing nondestructive integration of existing KE methods and providing insights into ***the solution of three challenge***, as are recognized by Reviewers `jVrX`.\", \"3. **Compelling Theoretical Analysis for Novel problems**\", \"We identify ***the performance gap*** between the naive collaborative KE method and the upper bound performance (i.e., GLOBAL-EDIT) through ***formal mathematical analysis*,** which are recognized by Reviewers `hEBc`.\", \"Provide ***comprehensive solutions*** to novel collaborative KE challenges, as recognized by Reviewers `jVrX`.\", \"Prove the ***privacy-ensured*** nature of our `CollabEdit` with ***compelling theoretical analysis*,** which are also recognized by Reviewers `hEBc`,`h2zh`.\", \"4. **Thorough and effective Experimental Validation**:\", \"Our empirical results demonstrate the ***effectiveness*** of our proposed framework compared with baselines and that of ***the novel solutions*** to three challenges based on our `CollabEdit`, which are recognized by Reviewers `jVrX`,`hEBc`,`h2zh`.\", \"Our discussions shed light on ***future research*** for collaborative KE.\", \"### **Other Modifications**\", \"Following some suggestions from the reviewers, we have adjusted the structure of the paper and added some new experimental figures. We have highlighted these changes **in orange**, so the reviewers can easily locate the updated parts:\", \"1. **Structure Adjustment:** **Figure 3**, **4**, and **5**.\", \"2. **New Figure:** **Figure 6** in the appendix.\", \"We have also provided responses to each reviewer's comments regarding the weaknesses and questions raised. (W indicates weakness and Q indicates question)\", \"We hope these responses can address the reviewers' concerns. If you find them helpful, we would be most grateful if you would **consider raising your scores** in a manner. We would also appreciate it if you could inform us **whether our responses adequately address your concerns**. We are open to further discussion or providing additional explanations as needed. Thank you very much again for your thoughtful review and help in improving the paper. We appreciate your time and consideration.\", \"Best regards,\", \"Authors of `CollabEdit`\"]}", "{\"title\": \"Rebuttal for reviewer jVrX\", \"comment\": \"Dear reviewer\\u00a0`jVrX`:\\n\\nThank you for your review. We would like to address your concerns in detail below.\\n\\n---\\n\\n> **W1:** The non-destructive merging mechanism may introduce additional complexity in implementation\\n> \\n\\n**A:** In our experiment, we use three model merging algorithms as baselines: *Task Arithmetic (TA)*, *Simple Average (SA)*, and *TIES-merging*. Below, we provide a detailed analysis of each method\\u2019s computational complexity, including our `CollabEdit` (refer to **R-Table 0**).\\n\\nSuppose that there are $N$ clients, the updates $\\\\Delta$ have a dimension of $[v, k]$ (v and k in a similar scale). \\n\\n- SA and TA calculate averages across all the models, so their time complexity is relatively small: $O(N \\\\times v \\\\times k)$, and the space complexity is $O(N \\\\times v \\\\times k)$.\\n- The TIES-Merging includes \\u201cTrim\\u201d, \\u201cElect\\u201d, and \\u201cMerge\\u201d phases. The \\u201cTrim\\u201d phase has a time complexity of $O(N \\\\times v \\\\times k \\\\times \\\\log(N \\\\times v \\\\times k))$. The \\u201cElect\\u201d and \\u201cMerge\\u201d phase has a time complexity of $O(N \\\\times v \\\\times k)$. Therefore, the overall time complexity is $O(N \\\\times v \\\\times k \\\\times \\\\log(N \\\\times v \\\\times k))$. The space complexity is also $O(N \\\\times v \\\\times k)$.\\n- For CollabEdit, due to the matrix multiplication and matrix inversion operations, it has a time complexity of $O(N \\\\times k^3)$. The space complexity is $O(N \\\\times k \\\\times k)$ because of $KK^{\\\\top}$.\\n\\n**R-Table 0:** Overall computational complexity for different merging methods. \\n\\n| **Method** | Time complexity | Space complexity |\\n| --- | --- | --- |\\n| **Ties-Merging** | $O(N \\\\times v\\\\times k \\\\times log(N \\\\times v \\\\times k))$ | $O(N \\\\times v \\\\times k)$ |\\n| **Task-Arithmetic** | $O(N \\\\times v \\\\times k)$ | $O(N \\\\times v \\\\times k)$ |\\n| **Simple-Average** | $O(N \\\\times v \\\\times k)$ | $O(N \\\\times v \\\\times k)$ |\\n| **CollabEdit** | $O(N \\\\times k^3)$ | $O(N \\\\times k \\\\times k)$ |\\n\\nIn summary, \\n\\n- Our method incurs only a minimal increase in time complexity to achieve non-destructive collaborative KE. The time complexity can be significantly reduced by leveraging **GPU acceleration for matrix operations**, such as matrix inversion in $KK^{\\\\top}$. As a result, the actual time overhead will be quite small (e.g. **a few seconds in the scenario with 10 clients**).\\n- In addition, as the GPU memory is extremely restricted, the space complexity of `CollabEdit` can be further reduced by merging the updates sequentially, which results in a complexity of $O(2 \\\\times v \\\\times k)$.\\n\\n---\\n\\n> **W2:** `CollabEdit`\\u2019s scalability in large collaborative environments or with numerous clients\\n> \\n\\n**A:** We lack access to large-scale collaborative environments, such as industrial-level collaborations, to comprehensively test our framework. However, we evaluate the sensitivity of our framework to the number of clients within a simulated collaborative system. \\n\\nSpecifically, **R-Table 1** compares the editing performance of Global-Editing with that of `CollabEdit` under various numbers of clients. We assume each client edits 100 edit requests in total. \\n\\n- The results show that regardless of the number of clients, `CollabEdit` consistently achieves similar editing performance as that of Global Editing. Reviewer can also refer to the new **Figure 6** in the appendix of our paper.\\n- This highlights the non-destructive nature of our framework and shows its generalizability across diverse scenarios.\\n\\n**R-Table 1: Overall KE scores of CollabEdit and Global-Edit in scenarios with different clients** \\n\\n| | CollabEdit | Global-Edit |\\n| --- | --- | --- |\\n| **10 clients** | 84.04 | 83.99 |\\n| **30 clients** | 79.97 | 80.19 |\\n| **50 clients** | 77.32 | 77.08 |\\n| **70 clients** | 74.69 | 74.58 |\\n\\n---\\n\\n> **W3:** More experiments on different LLMs\\n> \\n\\n**A:** R-Table 2 presents additional experiments of collaborative KE on LLama-3-8B. We use MEMIT as the backend KE algorithm and adopt the default setting in our paper (i.e., 10 clients and 5000 edit requests in total). The experiments show that our `CollabEdit` still **achieves non-destructive editing performance on the LLama-3.**\\n\\n**R-Table 2:** Overall editing performance on LLama-3, based on MEMIT. The \\u201cScore\\u201d serves as the overall metric. (All metrics are better when higher)\\n\\n| **Method** | NS\\u2b06 | PS\\u2b06 | ES\\u2b06 | **Score**\\u2b06 |\\n| --- | --- | --- | --- | --- |\\n| **Global-Edit** | 86.62 | 76.07 | 95.66 | 85.36 |\\n| **Ties-Merging** | 89.65 | 16.44 | 16.36 | 22.53 |\\n| **Task-Arithmetic** | 49.33 | 51.12 | 50.48 | 50.29 |\\n| **Simple-Average** | 89.92 | 10.94 | 10.04 | 14.84 |\\n| **CollabEdit** | **85.8** | **77.2** | **95.3** | **85.46** |\"}", "{\"title\": \"Rebuttal for reviewer h2zh (Part 3)\", \"comment\": \"> **Q3:** How does COLLABEDIT perform when subjected to repeated editing requests for the same knowledge items ?\\n>\", \"a\": \"We have theoretically addressed the issue of knowledge overlap between different rounds. However, in our paper, we have yet to evaluate its impact on the overall performance in a single round. Below is our supplementary experiment:\\n\\n- **Experimental setup and results:** We conduct additional experiments to investigate the impact of knowledge overlap in **R-Table 6.** Specifically, we randomly sample 2000 edit requests from the MCF dataset as the non-overlapped dataset. Then, we randomly sample another k edit requests (e.g., k=10) and repeat them to construct an overlapped dataset with 2000 edit requests.\\n - Motivation: We aim to explore the impact of the overlapped dataset on the editing performance of the non-overlapped dataset in a single round.\\n - Results: **R-Table 6** reports the editing performance of non-overlapped dataset with/without additional repeated records. We can observe that even if we repeat k edit requests multiple times to incorporate a large overlapped dataset (i.e., 2000), the editing performance of the non-overlapped dataset is surprisingly not affected.\\n- **Discussion:** However, it is important to note that we did not consider more complex scenarios across multiple rounds, where unexpected effects might occur. Our theoretical derivation in Section 4.2.1 suggests that by evaluating the L2 norm of the residuals $\\\\mathbf{R}$ before editing, we can identify whether there are overlapping parts between the previous editing and the current one. Such a mechanism can avoid the waste of computational resources and the emergence of uncontrollable issues.\\n\\n**R-Table 6:** Supplementary experiment for Knowledge Overlap in a single round\\n\\n| **Method** | NS\\u2b06 | PS\\u2b06 | ES\\u2b06 | **Score**\\u2b06 |\\n| --- | --- | --- | --- | --- |\\n| w/ repeated records | 68.35 | 98.25 | 99.9 | 86.16 |\\n| w/o repeated records | 68.21 | 98.25 | 100 | 86.11 |\"}", "{\"title\": \"Thank you for timely response and increasing scores!\", \"comment\": \"Dear reviewer `hEBc`:\\n\\nAs the discussion phase is coming to an end, we would like to send this additional message to express our gratitude to you.\\n\\nYou provided constructive and thoughtful review with us and acknowledged the efforts we made during the rebuttal phase \\uff08we are very pleased that our rebuttal addressed your concerns\\uff09.\\n\\nWe are also extremely grateful for your timely feedback and, most importantly, your increasing scores to `(6)`!!! The concerns reviewer `hEBc` raised have also helped us think about deeper issues within and beyond this work. We hope that our discussion and this work can jointly inspire some new insights and research regarding collaborative editing or trustworthy AI.\\n\\nOnce again, we deeply appreciate your thoughtful and timely feedback as well as the increased scores!!!\\n\\nBest regards,\\n\\nAuthors of `CollabEdit`\"}" ] }
2PRpcmJecX
Global Convergence of Policy Gradient in Average Reward MDPs
[ "Navdeep Kumar", "Yashaswini Murthy", "Itai Shufaro", "Kfir Yehuda Levy", "R. Srikant", "Shie Mannor" ]
We present the first comprehensive finite-time global convergence analysis of policy gradient for infinite horizon average reward Markov decision processes (MDPs). Specifically, we focus on ergodic tabular MDPs with finite state and action spaces. Our analysis shows that the policy gradient iterates converge to the optimal policy at a sublinear rate of $O(\frac{1}{T})$, where $T$ represents the number of iterations. Performance bounds for discounted reward MDPs cannot be easily extended to average reward MDPs as the bounds grow proportional to the fifth power of the effective horizon. Recent work on such extensions makes a smoothness assumption that has not been verified. Thus, our primary contribution is in providing the first complete proof that the policy gradient algorithm converges globally for average-reward MDPs, without such an assumption. We also obtain the corresponding finite-time performance guarantees. In contrast to the existing discounted reward performance bounds, our performance bounds have an explicit dependence on constants that capture the complexity of the underlying MDP. Motivated by this observation, we reexamine and improve the existing performance bounds for discounted reward MDPs. We also present simulations that empirically validate the result.
[ "Policy Gradient", "Reinforcement Learning", "Average Reward MDPs" ]
Accept (Poster)
https://openreview.net/pdf?id=2PRpcmJecX
https://openreview.net/forum?id=2PRpcmJecX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wOh5Ig33mg", "w6kRrpdNuM", "pDfxtfPQYn", "jDUuw5LXQZ", "YgUiwEofOe", "XBzYPhFT6n", "TZJy8p2PVz", "NsXXNf7BFi", "NFIuEnqAyU", "MpYpDieVeQ", "9Reir6QNl5", "21dKyCRJNm" ], "note_type": [ "official_comment", "meta_review", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732153756899, 1733555188476, 1730621375934, 1737524206778, 1731269988409, 1732152643585, 1732153384838, 1730943993067, 1732558376294, 1730388315456, 1732563050092, 1732152038026 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12657/Authors" ], [ "ICLR.cc/2025/Conference/Submission12657/Area_Chair_9Bz1" ], [ "ICLR.cc/2025/Conference/Submission12657/Reviewer_D3ZP" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12657/Reviewer_cy7G" ], [ "ICLR.cc/2025/Conference/Submission12657/Authors" ], [ "ICLR.cc/2025/Conference/Submission12657/Authors" ], [ "ICLR.cc/2025/Conference/Submission12657/Reviewer_Z6Q6" ], [ "ICLR.cc/2025/Conference/Submission12657/Reviewer_cy7G" ], [ "ICLR.cc/2025/Conference/Submission12657/Reviewer_WdYp" ], [ "ICLR.cc/2025/Conference/Submission12657/Reviewer_D3ZP" ], [ "ICLR.cc/2025/Conference/Submission12657/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to the review\", \"comment\": \"Thank you for your helpful comments.\\n\\n**Response to Weaknesses:**\\n\\n1. We will rewrite the related work section with these inputs in our revised version.\\n\\n2. Since the focus of the paper was to provide the first comprehensive convergence analysis of average reward policy gradient, we focused on why the current state of the art in discounted reward policy gradient could not be leveraged to obtain non trivial bounds in the average reward counterpart. Hence, for sake of clarity we excluded all $\\\\epsilon$ dependences and focused on the role of the discount factor alone in our bounds. However, we will now include the $\\\\epsilon$ dependence in our revised version.\\n\\n3. The convergence rate with constant step sizes, is $O(\\\\epsilon^{-1})$ in (Xiao, 2022b), and linear convergence rate of $O(\\\\log\\\\epsilon^{-1})$ with increasing stp sizes. In our work, we used constant step size, hence we compared our result with its counterpart result in Xiao 2022b. Moreover, we believe that the our result can be extended to attain linear convergence rate with aggressive step sizes as well. But we leave that for future work. However, we note that, while we did not explicitly study the learning problem, increasing step sizes will not work when the $Q$ function has to be estimated and hence, we did not consider the increasing step-size case. However, we have linear convergence in Theorem 1, with constant step sizes for simple MDPs.\\n\\n4. Thank you for bringing this paper to our notice. This paper considers the natural policy gradient algorithm whereas we are dealing with projected policy gradient. Besides, they also work under the assumption that the average reward is smooth. Nonetheless, we shall include this paper in our related work section.\\n\\n5. We only included regret because such a notion would be useful if one were to extend our result to the case where learning is needed to estimate the policy gradient. For this, we have not discussed regret significantly in the paper. We only mention it in the abstract; we will remove it from the abstract and make a small comment in the paper clarifying when the notion of regret would be useful.\\n\\n**Response to Questions:**\\n\\n1. (a) The linear rate can be achieved for Projected Mirror Descent (also Policy Gradient) but it requires aggressively increasing step sizes. This aggressive step sizes makes the algorithm very similar to policy iteration (take step sizes to $\\\\infty$). However, this aggressive step sizes are not suitable when noise is present (or model is not known), and this is where policy gradient shines. Hence, we have limited our study to constant non-aggressive step sizes. \\n\\n (b) Moreover, the linear convergence in (Lin Xiao, 2022) requires an additional assumption on the mismatch coefficient. Under this assumption and aggressive step size, it is likely that their analysis applies to our case as well since our sub-optimality recursions are similar.\\n\\n (c) We have linear convergence in our Theorem 1 under non-aggressive step sizes for simple MDPs.\\n\\n2. We will mention in the table that the definition of $C_e$ and $\\\\lambda$ can be found in Assumption 1.\\n\\n3. The proof techniques in this paper depend on the ergodicity of the policy class. Whether non-ergodic MDPs admit a smooth average reward remains an open question. We leave this for future work, anticipating that if true, it will require a fundamentally different proof approach. However, most MDPs can be converted to satisfy our assumption at an $O(\\\\epsilon)$ loss of optimality using the following trick: suppose the original MDP has a probability transition kernel $P(s'|s,a),$ define a new MDP with transition kernel $\\\\frac{\\\\epsilon}{|A|} \\\\sum_{a'}P(s|s,a')+(1-\\\\epsilon)P(s'|s,a),$ where $A$ is the action space. If choosing an action uniformly at random in each state makes the resulting Markov chain irreducible (which is usually the case), then our assumption is automatically satisfied with an $O(\\\\epsilon)$ loss of optimality.\"}", "{\"metareview\": \"The paper considers the global convergence behavior of the vanilla policy gradient method for the average reward Markov decision processes (AMDPs), under the tabular setting with finite state and action spaces.\\nUnlike previous works that directly make smoothness assumptions without verification, this paper provides a complete proof of smoothness properties based on reasonable uniform ergodicity assumptions and then provides an O(1/T) finite-time convergence rate result. Based on the novelty and importance of the result, and the authors have well-addressed all the reviewers' comments, we decide to accept this paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers mostly raised some minor comments. The main concerns are about the size of constants and the assumptions on MDP (unichain). The authors have proper cleared these concerns.\"}", "{\"summary\": \"The author show that Project Policy Gradient ascent for average reward MDPs can achieve an $O(\\\\frac{1}{\\\\eps})$ rate to the optimal policy. To attain this rate, the authors prove the smoothness property of the objective. Additional experiments are conducted to validate the proposed rates.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"First proof of global convergence of Project Policy Gradient for average reward MDPs.\"], \"weaknesses\": [\"Missing comparison to [1]. This work improves the convergence rate of [2] and show the rate of Policy Mirror Descentt is linear. Projected Policy Gradient is an instance of Policy Mirror Descent when the squared Eucliden distance is used as the mirror map.\", \"The clarity of the writing could be improved,\", \"The precise definition of $d^\\\\pi(s)$ should be given\", \"It's not clear what the step-size used in Thereom 1 Is\", \"A reference / proof for Eq. 8 should be given.\", \"Formatting errors: 155: Bellman equation equation 3, 181: discount factorBertsekas (2007), 202: \\\\textit{equation 8}\", \"[1] Johnson, E., Pike-Burke, C., & Rebeschini, P. (2023). Optimal convergence rate for exact policy mirror descent in discounted markov decision processes.\", \"[2] Xiao, L. (2022). On the convergence rates of policy gradient methods.\"], \"questions\": [\"When presenting the convergence rates of the related works, why was the dependence of $\\\\epsilon$ omitted?\", \"Could the remark of Theorem 1 be clarified. Why is the bound $$\\\\frac{\\\\sigma}{k^p}$$ less meaningful for the inital $k$? Isn't $k$ the number of iterations? Also note that for softmax policies, there exists faster convergence rates shown in [1] compared to [2].\", \"Is it possible to show that the $O(\\\\frac{1}{\\\\epsilon})$ bound is tight?\", \"[1] Liu, J., Li, W., & Wei, K. (2024). Elementary analysis of policy gradient methods.\", \"[2] Mei, J., Xiao, C., Szepesvari, C., & Schuurmans, D. (2020, November). On the global convergence rates of softmax policy gradient methods.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper presents a comprehensive global convergence analysis for policy gradient in infinite-horizon average-reward MDPs. It proposes a novel proof framework for the smoothness of the average reward objective, which settles the intrinsic challenge of divergence face by the standard analysis technique that regards the average-reward setting as a limiting case of the discounted-reward setting (as $\\\\gamma \\\\to 1$). Based on the smoothness results, it further analyzes the convergence properties of policy gradient in the average-reward setting, and concludes with an instance-specific bound convergence bound. Simulation results are presented to justify the analysis and reveal the influence of instance-related constants.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is overall well-written, and the flow is friendly to first-time readers.\\n2. The research problem is of theoretical interest and importance, which is sufficiently motivated and justified by a thorough review of literature.\\n3. The technical contributions are solid, rigorous, and clearly articulated (as summarized in Section 1.2). The proofs are checked to be correct and are largely self-contained.\\n4. Table 1 is especially appreciated since it gives a high-level yet clear idea of the instance-related constants involved in the bound.\\n5. I like the discussion presented in Section 3.2 that relates the new results to existing results in the classical discounted-reward setting, as well as a brief hint on the reason why instant-specific bounds may be tighter and thus more useful in applications.\", \"weaknesses\": \"1. The simulation results do help to promote the understanding of the instance-related constants, but it can be improved to include more direct and more convincing evidence under the principle of controlled variables. E.g., exemplary MDP families might be explicitly constructed with certain constant(s) varying and all the others fixed, so that the curves clearly reflect how the performance depends on the varying constant(s).\\n2. There are a few typesetting issues: (a) Use $\\\\verb|\\\\citep|$ and $\\\\verb|\\\\citet|$ correctly for the author-year format, and avoid using $\\\\verb|\\\\cite|$ \\u2014 specifically, only use $\\\\verb|\\\\citet|$ when it's a part of the sentence. (b) On line 223 and below, use $\\\\verb|\\\\ll|$ ($\\\\ll$) instead of $<<$. (3) There are a few typos and grammatical issues (e.g., the inconsistency of tenses used in the literature review, where I would recommend the use of present tenses only).\", \"questions\": \"1. It is briefly touched upon in *Notes on Limitations and Future Work* that the approach can be generalized to \\\"parametric classes of policies\\\". I wonder if the authors have any rough ideas on how this could be done, and further, if it is also doable to extend the tabular MDP setting to generic MDPs with infinite state-action spaces (probably with function approximation, like linear/low-rank MDPs).\\n2. The relationship with discounted-reward MDPs is discussed in Section 3.2, where it's written that \\\"the constants can be derived through an *analogous* process\\\". Is it possible to (at least) sketch how the final results should look like in the appendix?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thank you for your helpful comments.\\n\\n**Response to Key Comments and Questions:**\\n\\n1. Theorem 1 states that the suboptimality at iteration $k$ is of $O(\\\\frac{1}{k})$. Hence the total regret accumulated after $T$ iterations of the algorithm is $\\\\sum_{k=1}^T O(\\\\frac{1}{k}) = O(\\\\log T)$. We will include a line on this in the main result. \\n\\n2. It is true that Assumption 1 is restrictive when compared to MDP classes such as weakly communicative ones. However, most MDPs can be converted to satisfy our assumption at an $O(\\\\epsilon)$ loss of optimality using the following trick: suppose the original MDP has a probability transition kernel $P(s'|s,a),$ define a new MDP with transition kernel $\\\\frac{\\\\epsilon}{|A|} \\\\sum_{a'}P(s|s,a')+(1-\\\\epsilon)P(s'|s,a),$ where $A$ is the action space. If choosing an action uniformly at random in each state makes the resulting Markov chain irreducible (which is usually the case), then our assumption is automatically satisfied with an $O(\\\\epsilon)$ loss of optimality. Additionally, even if we are able to relax this assumption for the planning problem, often model-free learning requires something like our assumption for algorithms such as TD learning to provide good estimates. This is due to mixing time conditions needed in the analysis of stochastic approximation algorithms (TD learning is an example of one).\\n\\n3. We agree with the reviewer that $C_{PL}$ can be quite large. However, it is important to note that $C_{PL}$ is non-trivial in our analysis. In contrast, for discounted reward MDPs, the equivalent constant is $\\\\frac{1}{(1-\\\\gamma)\\\\min_{s}\\\\mu(s)}$, where $\\\\mu(s)$ represents the initial state distribution. As $\\\\gamma$ approaches 1, this constant diverges to infinity, thus providing a vacuous upper bound on the PL constant. In our case, we manage to keep $C_{PL}$ non-trivial, even though it is admittedly large. Moreover, to the best of our knowledge, ours is the first complete proof of the global convergence of policy gradient methods for average-cost problems. Thus, we believe that the fact that $C_{PL}$ is large does not detract from the significance of our contribution; however, we agree with the reviewer that tightening this bound is an important future direction.\\n\\n4. In prior average reward literature, most of the work in gradient methods have considered the learning problem rather than planning problem where they make unproven assumptions on the nature of the average reward (such as smoothness). The planning problem without these assumptions remained unsolved but the learning problem has been studied under the assumption that policy gradient converges for the planning problem and hence, we cited those papers. In fact, one of the contributions of our work is in proving smoothness (which was an assumption in previous papers) previously assumed in many learning-based papers.\\n\\n**Response to Minor Comments:**\\n\\n1. $\\\\pi_k$ is the policy obtained at $k$-th iteration of the projected policy gradient algorithm in discounted MDPs. We will include this description in the revised manuscript.\\n \\n2. We will make these changes to the citation style in the revised manuscript.\\n\\n3. It's the same version. We will remove the redundancy in the revised version. \\n\\n4. We will change the notation to $\\\\nabla(\\\\mathcal{A})$ in the revised version.\\n\\n5. We will change Eq. 8 to include $d_{\\\\mu,\\\\gamma}^{\\\\pi^*}$ to maintain consistency .\\n\\n6. We will remove the citation to Boyd and Vandenberghe.\\n\\n7. We have updated the draft, it should be reflected in the final version. $C_p, C_m$ are defined using operator norm w.r.t. $L_\\\\infty$ norm. Precisely, $C_m = \\\\max_{\\\\pi} \\\\max_{||v||\\\\_\\\\infty\\\\leq 1} ||(I- \\\\Phi P^\\\\pi)\\\\^{-1}v||\\\\_\\\\infty $, and $C_p = \\\\max_{\\\\pi,\\\\pi'\\\\in\\\\Pi}\\\\max_{||v||\\\\_\\\\infty\\\\leq 1}\\\\frac{||(P^{\\\\pi'}- P^\\\\pi)v||\\\\_\\\\infty}{||\\\\pi'-\\\\pi||\\\\_2}$.\\n\\n**Regarding Typos:**\\n\\nThank you for pointing out these typos. We have fixed it in the revised version. Yes, by $L$ we do mean $L_2^\\\\Pi$. We denote $L_2^\\\\Pi$ to specify second derivative continuity on the restricted space $\\\\Pi$.\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thank you for your helpful comments.\\n\\n**Response to Weaknesses:**\\n\\n1. We thank the reviewer for pointing it out, we will add it this discussion in the final version.\\n\\n (a) The work [1, 2] considers discounted reward setting, and our core contribution is in the average reward setting.\\n\\n (b) Furthermore, [1,2] achieves a linear rate for PMD with increasing step sizes. PMD with aggressive step sizes effectively reduces to policy iteration (under suitable condition), which is less suitable for scenarios with noisy gradients (though it is an important yet distinct algorithm).\\n\\n (c) We have a linear convergence for simple MDPs with constant step sizes (in average reward case, which can be trivially extended to discounted case.)\\n\\n (d) We think, PMD in average case, can have linear convergence too, using the similar techniques in [1,2] and our analysis, with aggressive step sizes. However, it is a different algorithm (with aggressive step sizes PMD becomes closer to PI rather than PG) and hence deserves its own study. \\n\\n2. We have fixed typos and improved upon notations in the revised version.\\n\\n3. $d^\\\\pi(s)$ is defined and elaborated upon in the paragraph below Equation 2 in the revised version (to be uploaded soon)\\n\\n4. The step-size is chosen as $\\\\eta<\\\\frac{1}{L^\\\\Pi_2}$, where $L^\\\\Pi_2$ is the restricted smoothness constant. We will include this in the main theorem statement.\\n\\n5. We have added a reference for this result in the revised version.\\n\\n6. We have fixed the formatting errors in the revised version.\\n\\n[1] Johnson, E., Pike-Burke, C., & Rebeschini, P. (2023). Optimal convergence rate for exact policy mirror descent in discounted markov decision processes. [2] Xiao, L. (2022). On the convergence rates of policy gradient methods.\\n\\n**Response to Questions:**\\n\\n1. Since the focus of the paper was to provide the first comprehensive convergence analysis of average reward policy gradient, we focussed on why the current state of the art in discounted reward policy gradient could not be leveraged to obtain non trivial bounds in the average reward counterpart. Hence, for sake of clarity we excluded all $\\\\epsilon$ dependences and focussed on the role of the discount factor in our bounds. However, we will now include the $\\\\epsilon$ dependence in our revised version.\\n\\n2. The existing bounds are of the form $\\\\frac{C}{k(1-\\\\gamma)}$, where $C \\\\gg 1$ is very large constant compared to the worst sub-optimality of $\\\\frac{2}{1-\\\\gamma}$. Now, let's say for $k =10$, we have $\\\\frac{C}{10(1-\\\\gamma)} \\\\gg \\\\frac{2}{1-\\\\gamma}$, which is the largest possible difference in the discounted rewards between two policies. And this is true for all $k \\\\leq C$, which is a very big number. Hence, the bound $\\\\frac{C}{(1-\\\\gamma)k}$, yields a meaningful bound only after a large $k$ which may not be preferable. On the other hand, our bound is of the form $\\\\frac{1}{\\\\frac{1-\\\\gamma}{2} + \\\\frac{k}{C}}$, and this bound is meaningful for all $k\\\\geq 0$. Observe that our bound aysmptotically becomes $\\\\approx \\\\frac{C}{k}$ which improves upon the existing result [2] by a factor $\\\\frac{1}{1-\\\\gamma}$.\\n**Regarding softmax policies**: The work [1] improves upon [2], but also uses the recursion $a_k-a_{k-1}\\\\geq a_k^2$ in their work, yielding $\\\\frac{CA}{(1-\\\\gamma)k}$. If our methodology is applied for solving the recursion, it yields the rate of $\\\\frac{1}{\\\\frac{1-\\\\gamma}{2}+\\\\frac{k}{CA}}$. This is a more meaningful bound for small values of $k$ and, asymptotically for large $k,$ provides an improvement by a factor $\\\\frac{1}{1-\\\\gamma}$ asymptotically. We would like to remind, that the work [1,2] is for discounted case, and our core contribution of the work is the analysis on average reward case. The resulting improvement on discounted case is a welcome bonus of our analysis.\\n\\n3. This is a good question. A lower bound of $O(\\\\frac{1}{\\\\epsilon})$ is known for policy gradient in the discounted reward case with softmax parametrization. It may be possible to use a similar approach in the average reward case, we will certainly look into this. \\n\\n[1] Liu, J., Li, W., & Wei, K. (2024). Elementary analysis of policy gradient methods. [2] Mei, J., Xiao, C., Szepesvari, C., & Schuurmans, D. (2020, November). On the global convergence rates of softmax policy gradient methods.\"}", "{\"summary\": \"This paper studies convergence of Policy Gradient (PG) in average-reward MDPs and present non-asymptotic bounds on the global convergence of an error function defined in terms of gains of the optimal policy and the output policy by PG. For the class of unichain MDPs (cf. Assumption 1), the authors present convergence rate to the globally optimal solution (of the reward maximization problem in the long run), but without any assumption on the smoothness of the value functions involved. Such smoothness assumptions were key in the analysis in discounted MDPs. The presented convergence rates decay as $O(1/k)$ where the involved constants depend on MDP-dependent quantities. These results also lead to improved convergence analysis of discounted MDPs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Policy Gradient (PG) and its variants are among interesting and important algorithms in RL. Their convergence properties for the class of discounted MDPs are very well-studied and by now well-understood. However, their counterparts for average-reward MDPs are less explored, especially when the interest lies in globally optimal solution. This is mostly due to the challenges involved in the average-reward setting, rather than the interest in the problem.\\n\\nOne strength of the approach taken in the paper is to depart from the classical approach of using a discounted MDP as a proxy, which further leads to sub-optimal bounds. This way the authors eliminate the smoothness assumption that is typically made in the convergence analysis of PG in the context of discounted MDPs. \\n\\nThe paper admits a good organization. Its technical part is written mostly clearly and precisely, apart from some inconsistent or undefined notations (see comments below). However, there are some inconsistencies in the presentation and advertisement of the results between the introductory part and the main technical part; further on this below. The writing quality is overall fine, but some parts could still benefit from a more careful polishing. \\n\\nAs a positive aspect, the paper delivers a good and accurate review of related literature, to my best knowledge. Yet another positive aspect is reporting numerical results, albeit on toy problems.\", \"weaknesses\": [\"Key Comments and Questions:\", \"-\", \"The opening of the paper (Abstract and Introduction) talk about regret bounds for PG (scaling as $O(\\\\log(T))$). Figuratively speaking, these are cumulative measures of error incurred by the algorithm. But they are not defined anywhere \\u2013 or do I miss something? \\u2013 and the core part of the paper only deals with per-step error measures. Please clarify.\", \"Despite some interesting results, one key limitation of the paper is the restriction to the class of unichain MDPs (cf. Assumption 1). They are far easier to deal with and are much less relevant in modeling practical RL tasks when compared to the more interesting class of communicating MDPs. Without this assumption, one will not get a closed-form value function in Lemma 1, which is key to establish the results. In other words, it renders unlikely, in my opinion, that the technical tools developed or promoted here could be used beyond the restricted class of MDPs satisfying Assumption 1.\", \"A key question is how bad the MDP-dependent constant $C_{PL}$ could be. Even though a convergence rate of $O(1/k)$ is superior to those decaying as $O(1/k^p)$ for some $p<1$, the involved MDP-constants (e.g., in Theorem 1) could be prohibitively large in some MDPs (that are not necessarily pathological). More precisely, I expect it could be exponentially large in the size of state-space $|\\\\mathcal S|$.\", \"In the first paragraph of Section 1, you discuss approaches for determining the optimal policy (i.e., planning algorithms) for average-reward MDPs. Yet you mostly cite papers dealing with the learning problem. Could you clarify, or correct if relevant?\"], \"minor_comments\": [\"-\", \"In line 50, you use $\\\\pi_k$ but it is not defined yet.\", \"Regarding refs: Please check formatting guidelines. In many places you must use \\\\citep or \\\\citet instead of \\\\cite so that you get (A & B, year) instead of A & B (year); for instance, in the first paragraph of Section 1. But they are correctly used in Section 1.1. This issue renders rather distracting when reading the paper.\", \"The work (Lin Xiao, 2022) is cited twice. Is there any difference between them?\", \"Line 133 (and elsewhere): Using $\\\\Delta(\\\\mathcal A)$ instead of $\\\\Delta \\\\mathcal A$ could make things more readable.\", \"Inconsistent notations: In Eq. (8) you used $d_\\\\mu(\\\\pi^*)$ whereas later you used $d_{\\\\mu,\\\\gamma}^{\\\\pi^*}$ to denote essentially the same thing.\", \"Unless I am missing something, the textbook (Boyd and Vandenberghe, 2004) does not include definition of $L$-smoothness, etc.\", \"Table 1: Make precise the norms used for $C_p$ and $C_m$.\"], \"typos\": [\"-\", \"Line 82: is , Bai et al. ==> remove \\u201c,\\u201d\", \"Line 198: \\u2026 relationBertsekas \\u2026. ==> \\u2026 relation (Bertsekas, \\u2026)\", \"Line 251: Further is the function is ==> Further if \\u2026\", \"Line 269: euclidean norm ==> Euclidean norm ---- to be consistent with an earlier use of this term.\", \"Line 346: in the Lemma below ==> \\u2026 lemma \\u2026\", \"Line 384 and elsewhere in Section 3.2: To be consistent with notations used elsewhere, use $|\\\\mathcal S|$ instead of $S$ since the latter is not defined.\", \"Line 398: By $L$, did you mean $L_2^{\\\\Pi}$?\", \"Line 388: a verb might be missing.\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your response!\", \"comment\": \"The rebuttal is clear and convincing. I'll keep my positive attitude towards the acceptance of this paper.\"}", "{\"summary\": \"The paper presents the convergence rate analysis of the projected policy gradient algorithm for tabular average reward Markov decision processes (MDPs). Assuming access to the exact gradient, the authors proved a convergence rate of $\\\\mathcal{O}(1/T)$ where $T$ is the number of iterations. To prove the result, they established the smoothness property of the value function for ergodic MDPs, which is of separate interest.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. New state-of-the-art convergence rate of $\\\\mathcal{O}(1/T)$ for projected gradient descent algorithm for average reward MDPs.\\n2. New smoothness result of the value function for the same setting.\\n3. Despite some weaknesses stated below, the paper is overall nicely written.\", \"weaknesses\": \"1. The authors should rewrite the related works and put their work in context. First, they should separate the related works into two groups: ones that use exact gradients (and hence, are more of a planning problem), and others that use gradient estimates (and therefore, are more of a learning problem). Authors should note that some papers explicitly fall into the second group while many others discuss problems of both kinds. The work of the authors falls into the first group. This should be highlighted both in the abstract as well as in the introduction.\\n\\n2. While mentioning the convergence rate established by earlier works, the authors only focused on the $1-\\\\gamma$ factors while completely ignoring the $\\\\epsilon$ related factor. For example, equation (1) does not show any dependence on $\\\\epsilon$. Is there any specific reason for that? I think it makes the comparison quite confusing.\\n\\n3. Although one of the results of (Xiao 2022b) proves a convergence rate of $\\\\mathcal{O}\\\\left((1-\\\\gamma)^{-5}\\\\epsilon^{-1}\\\\right)$, in the same paper, they also provide a better result. Specifically, using policy mirror descent, which can be thought of as a generalization of the policy gradient, they establish a linear convergence rate of $\\\\mathcal{O}\\\\left((1-\\\\gamma)^{-1}\\\\log\\\\left((1-\\\\gamma)^{-1}\\\\epsilon^{-1}\\\\right)\\\\right)$. I am surprised that the authors failed to mention the linear convergence rate.\\n\\n4. Some of the state-of-the-art results mentioned are outdated. For example, (Bai et. al. 2023) is no longer the only work that establishes a regret bound for average reward MDP. A recent paper [1] supersedes their result.\\n\\n5. To my understanding, the concept of regret makes sense only for a learning problem, not for a planning problem. In my opinion, the author should solely stick to the convergence rate result.\\n\\n[1] Ganesh, S. and Aggarwal, V., 2024. An accelerated multi-level Monte Carlo approach for average reward reinforcement learning with general policy parametrization. arXiv preprint arXiv:2407.18878.\", \"questions\": \"1. Since a linear convergence rate is already available in the discounted setup (Xiao 2022b), is it possible to achieve the same in the average reward setup? What are the fundamental challenges to obtain it?\\n\\n2. Please mention in Table 1 that the constants $C_e$ and $\\\\lambda$ are taken from Assumption 1. It will help the reader.\\n\\n3. Is the smoothness result only valid for ergodic MDPs or is it possible to extend it to a larger class?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response and for the clarifications. I've raised my score to a 6.\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thank you for your helpful comments.\\n\\n**Response to Weaknesses:**\\n\\n1. Creating MDPs with a fixed $C_p$ is infeasible because modifying the transition kernel inevitably changes $C_m$, preventing the isolation of a single variable's effect. However, for $C_r$, this is precisely what is examined in Figure 1b. In this experiment, all MDPs share the same transition kernel and identical minimal and maximal reward values, ensuring $C_p$, $C_m$, and $\\\\kappa_r$ remain constant. We start by freezing the transition kernel, ensuring that $C_p$ and $C_m$ remain identical across all MDPs. To keep $\\\\kappa_r $ consistent, we fix all rewards to $\\\\pm1$. Then, we vary $C_r$ by adjusting the proportion of actions yielding a reward of $-1$. For the maximal $C_r$, half of the actions return $+1$ and the other half return $-1$. To achieve faster convergence, we increase the proportion of actions that return $ +1$. More details on this can be found in the appendix.\\n\\n2. Thank you for the formatting suggestions. We will upload a revised manuscript with these changes soon.\\n\\n**Response to the Questions:**\\n\\n1. This is very interesting direction to explore, thanks for the question. \\n \\n A) Parametric classes of policies: The core idea would be to use the chain rule in our result.\\n \\n B) Infinite state-action space: This is very interesting direction, we thank the reviewer for this insightful question. The traditional bounds are $O(\\\\frac{|S| |A| }{\\\\epsilon})$ which yields vacuous bounds for infinite state-action spaces.\\n\\n i) Whereas (for starters), the current version of our result, can deal with infinite action space (possibly with some structure) as our bound is $O(\\\\frac{SL^\\\\Pi_2}{\\\\epsilon})$ and $L^\\\\Pi_2$ depends on the hardness coefficients such as $C_m,C_r,C_p$ which can be finite for infinite action space, hence yielding some meaningful bounds.\\n\\n ii) For infinite state space, getting rid of $|S|$ in the bound is challenging. It is obtained from the bound on diameter of the policy class $diam(\\\\Pi)^2 \\\\leq |S|$ (see eq. 160 of the draft). However, it is possible to bound $diam(\\\\Pi)^2$, for policy class with some structure such as low rank policy class for infinite-state space. We thank the reviewer for this intriguing question, we will definitely add this discussion in the main text of the final version. Besides, in order to characterize suboptimality associated with a policy, we need a finite concentrability coefficient $C_{PL}$ (See Lemma 7). This constant will be infinite when the state space is also infinite. Current approach requires $C_{PL}$ to be finite. Overcoming this state space constraint might require alternate ways to bound the suboptimality.\\n\\n2. The extension of this result to the discounted case is as follows. In the smoothness analysis of the discounted MDP case, $\\\\Phi P^\\\\pi, \\\\Phi R^\\\\pi, $ need to be replaced with the $\\\\gamma P^\\\\pi$ and $R^\\\\pi$ respectively. Only significant change is $C_m = \\\\max_{\\\\pi} \\\\max_{||v||_{\\\\infty} \\\\leq 1} ||(I - \\\\gamma P^{\\\\pi})^{-1} v || \\\\_{\\\\infty} \\\\leq \\\\frac{1}{1-\\\\gamma}$. We will add a subsection in the final version in the appendix, outlining this proof.\"}" ] }
2PKLRmU7ne
In-context learning and Occam's razor
[ "Eric Elmoznino", "Tom Marty", "Tejas Kasetty", "Leo Gagnon", "Sarthak Mittal", "Dhanya Sridhar", "Guillaume Lajoie" ]
A central goal of machine learning is generalization. While the No Free Lunch Theorem states that we cannot obtain theoretical guarantees for generalization without further assumptions, in practice we observe that simple models which explain the training data generalize best—a principle called Occam's razor. Despite the need for simple models, most current approaches in machine learning only minimize the training error, and at best indirectly promote simplicity through regularization or architecture design. Here, we draw a connection between Occam's razor and in-context learning—an emergent ability of certain sequence models like Transformers to learn at inference time from past observations in a sequence. In particular, we show that the next-token prediction loss used to train in-context learners is directly equivalent to a data compression technique called prequential coding, and that minimizing this loss amounts to jointly minimizing both the training error and the complexity of the model that was implicitly learned from context. Our theory and the empirical experiments we use to support it not only provide a normative account of in-context learning, but also elucidate the shortcomings of current in-context learning methods, suggesting ways in which they can be improved.
[ "generalization", "complexity", "compression", "in-context learning", "meta-learning" ]
Reject
https://openreview.net/pdf?id=2PKLRmU7ne
https://openreview.net/forum?id=2PKLRmU7ne
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yHKP9cAfbH", "w6u0MhyynV", "ujMIveBZpZ", "teEkozky2H", "rzopS5VPGF", "piPAmlS6V9", "pPPNVMcUkF", "njF8iP65G3", "jnwWONMyh9", "gXN643847K", "d0PAQejAoM", "ccJKohsD9H", "bR87axEW35", "ZYd15Yb43a", "UotUgLUGfY", "Pc7cXlTpCF", "OJ3H5n5GSv", "NwYfThYnTF", "Mky6gbUfiW", "M4zM7qQyig", "Kq5CHj5nbm", "JwOnSzQdaJ", "JOTGn4NoT9", "Ieg2AaVqkF", "HDN6STBQb0", "DpU3JqXarl", "1mk8JPyeoc", "0VFhdxmpA2" ], "note_type": [ "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review" ], "note_created": [ 1737523401405, 1733153372372, 1732510767083, 1730703813296, 1732823647905, 1732160285381, 1732639364082, 1732161277216, 1733153557773, 1732510808048, 1732510744129, 1732161029255, 1732529548129, 1732160128517, 1733153235353, 1732161556814, 1732510781880, 1733165704262, 1730877149776, 1730622590228, 1732488281690, 1732161059984, 1732161211912, 1732510461556, 1730708144968, 1734822940059, 1733160313444, 1731017235828 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_3Zfk" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_M5Ep" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_YT7Y" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_1nBD" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_nR2b" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_3Zfk" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Authors" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_YT7Y" ], [ "ICLR.cc/2025/Conference/Submission531/Area_Chair_u14X" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_1nBD" ], [ "ICLR.cc/2025/Conference/Submission531/Reviewer_M5Ep" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"As the discussion period is coming to a close, we kindly ask the reviewer for a response and feedback from our last post which we believe adresses a number of points the reviewer raised.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very appreciative of your time and feedback. As we are nearing the end of the rebuttal period, we would like to request the reviewer for an opportunity to answer any additional questions or doubts that may remain.\"}", "{\"summary\": \"This paper examines in-context learning (ICL) through the lens of Occam\\u2019s razor, which suggests that the simplest model that explains the data is most likely to be the true one. This paper proposes that ICL's next-token prediction loss functions similarly to prequential coding. The authors argue that by training models on this principle, ICL can produce models that generalize well across tasks without overfitting, especially in low-data scenarios.\\n\\nThis paper shows that ICL aligns with Occam\\u2019s razor more closely than traditional methods that only focus on training error. They also reveal limitations of current ICL methods, which may underfit in large-data regimes and have varying performance based on model architecture. This paper suggests refining architectures and data distribution controls to improve ICL\\u2019s generalization and task adaptability.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Novel Perspective. Instead of studying the algorithmic aspect or mechanistic aspect of how LLMs perform in-context learning, this paper proposes a different yet novel perspective --- contrasting ICL with most current approaches in machine learning, and conclude with occam razor's principle that ICL generalize well across tasks without overfitting.\\n\\n2. Innovative Use of Prequential Coding for Complexity and Generalization. By framing prequential coding, the paper introduces a novel approach to balance model complexity and training accuracy. This insight offers a practical metric for understanding model simplicity.\\n\\n3. Comprehensive Empirical Validation. The paper validates its theoretical claims through a variety of experiments across different tasks, architectures, and training conditions.\", \"weaknesses\": \"1. Limited Generalization to Non-IID and Complex Real-World Tasks. While the paper effectively applies its theory to IID data, the assumptions might limit its relevance to more complex, non-IID real-world tasks, such as language data or continuously evolving data streams.\\n\\n2. Underexplored Architectural Dependencies. Although the paper observes that model architecture significantly influences the effectiveness of ICL, especially in low-data and high-complexity settings, it does not thoroughly explore or analyze which architectural features are most beneficial. A deeper investigation could be interesting.\\n\\nNonetheless, I don't think the two weaknesses here are significant. They are more of good-to-haves or future research.\", \"questions\": \"N/A. See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewer, we appreciate your insightful feedback to improve the rigor of our experimental protocol.\\n\\nWe acknowledge the reviewer\\u2019s observation regarding inconsistent performance on the linear regression task for the Transformer with a bottleneck when trained using a split tokenization scheme ([x] ; [y] ; [x] ; [y]). We have taken the necessary steps to ensure consistency in the additional results introduced below.\\n\\nFirstly, we would like to clarify that the result in Figure 2.a compares the performance between different training objectives (prequential ICL and train-risk ICL) keeping the architecture (Transformer with bottleneck) and tokenization scheme ([x, y] concatenated tokens) fixed. As such, the comparison between training objectives (Figure 2.a) demonstrating that prequential ICL outperforms train-risk ICL is done in a fair setting.\\n\\nSecondly, regarding the suggestion to present the comparison between architectures (Figure 2.b) using the same tokenization scheme in the main paper instead of in the appendix, we would like to provide some clarifications on our initial motivations. Our intention was never to run a formal ablation study to quantitatively compare the ability of different architectures and tokenization-scheme at minimizing PCL, but simply to show that different design choices could qualitatively impact the ability of a learner to minimize PCL. In practice, different tokenization schemes were used to facilitate the implementation of each architecture. We agree that introducing these schemes was leading to unnecessary confusion and we thank the reviewer for highlighting this point. For this reason, we made rapid and considerable efforts to standardize the tokenization scheme across all models in the main paper. Specifically, we ran new experiments with the Transformer without bottleneck using concatenated tokens ([x,y] ; [x,y]...) and updated Figure 2.b accordingly. The new results remain consistent with our theory, while being significantly better especially on harder tasks. Consequently, we have also removed the appendix section that discussed the impact of various tokenization schemes, since this is not the focus of our contribution.\\n\\nWe also acknowledge the reviewer\\u2019s last suggestion to use curriculum learning to improve the prequential ICL performance. We believe this is an interesting line of research to tackle a more complex distribution of tasks (and not simply linear regression), while it\\u2019s unclear yet how to define a measure of complexity within the distribution of tasks we consider at training time. We are eager to pursue this line of inquiry and share in the reviewer's intuition, but we respectfully argue that this is beyond the scope of the present paper which introduces the foundational theory and verifies its direct predictions with experiments.\\n\\nWe hope that these revisions enhance the clarity of our findings, align better with the reviewer\\u2019s expectation, and that the reviewer will share our enthusiasm for sharing these findings with the community.\"}", "{\"comment\": \"We thank the reviewer for their time, feedback and for the positive assessment of our work. We address the key questions raised by the reviewer below, and hope that it addresses all the concerns raised.\\n\\n**Why does the bottleneck model perform better than the model w/o bottleneck?**\\n\\nThe reviewer hypothesized that the result seems at odds because the bottleneck model might lose information about the task. We\\u2019d like to clarify that a bottleneck model can actually capture all the task information since our problems involve task-latents of fixed size (e.g., regression weights, code in Mastermind), where the bottleneck is of sufficient dimension to encode it.\", \"the_reviewer_also_made_an_astute_observation\": \"unlike the bottleneck model, the vanilla Transformer receives separate tokens for $x$ and $y$ as input, to which positional token embeddings are added. This (and causal attention masking) is required to allow predictions at all $x$ tokens in parallel without letting them access their corresponding $y$, and is the common modeling choice for Transformers in prior work [1,2]. However, this choice makes learning more difficult. Based on the reviewer\\u2019s feedback, we decided to investigate if this detail explains the difference between models with and without bottleneck and show in Appendix E that it does: forcing the bottleneck model to use separate $x$ and $y$ tokens eliminates its performance advantage, leading to a fair comparison between the models.\\n\\n**For train-risk ICL, is the attention causal, and how is the query selected without inducing shortcuts?**\\n\\nFor in-context learners that minimize train risk, we do not input all context subsequences to the bottleneck model. Using causal attention, we can process all context subsequences with a single forward pass. \\n\\nThe query for each subsequence is randomly chosen from the context points in that subsequence. This does not lead to shortcuts as the model does not know which point is chosen for the query, and hence must learn to model $\\\\theta = T_\\\\phi(D_{1:t-1})$ that would generalize well to any choice of context point.\\n\\n**Clarifying task difficulty and its impact on performance gap**\\n\\nThe reviewer astutely observes that the performance gap is smaller for linear regression but larger for sinusoidal regression and Mastermind. This is because a complex task requires the algorithm to learn more complex functions to successfully minimize train risk. However, learning more complex functions with very limited data leads to overfitting, which is the basis for our hypothesis that as task complexity increases, simple predictors learned by minimizing prequential code length enjoy a bigger advantage over predictors learned by minimizing train risk. \\n\\nThe reviewer also makes a great suggestion to systematically analyze this by fixing the function class and varying the problem dimension. We conduct the suggested experiment for sinusoid regression and show that with increasing task complexity, the difference between prequential code solution and train-risk solution increases, as shown in Appendix C, thereby validating our initial hypothesis.\\n\\n**Comparing a gradient-based learner and in-context learner**\\n\\nAs the reviewer notes, we sought to empirically compare Transformers \\u2013 learners with parameters that are trained to minimize PCL \\u2013 to SGD \\u2013 arguably the most popular learner in ML. Despite using early stopping when training MLPs with SGD, we found that the predictors produced by SGD continue to overfit in low-data regimes.\\n\\nHowever, we think that the reviewer makes an excellent suggestion to compare the prequential code lengths achieved by different out-of-the-box learning algorithms that regularize vanilla SGD (e.g., weight decay), which we now show in Appendix F.2. As expected from standard learning theory, these regularization techniques have substantial impact on prequential code length: regularization reduces overfitting in low-data regimes as it favors simple models, but at the expense of underfitting in high-data regimes.\\n\\n**References**\\n\\n[1] Von Oswald, J., Niklasson, E., Randazzo, E., Sacramento, J., Mordvintsev, A., Zhmoginov, A., & Vladymyrov, M. (2023, July). Transformers learn in-context by gradient descent. In International Conference on Machine Learning (pp. 35151-35174). PMLR.\\n\\n[2] Garg, Shivam, et al. \\\"What can transformers learn in-context? a case study of simple function classes.\\\" Advances in Neural Information Processing Systems 35 (2022): 30583-30598.\"}", "{\"comment\": \"Thank you for addressing my concern and adding multiple experiments in the appendix section.\\n\\nUpon reading the added experiments, I notice that the Figure E.1 indicating that Transformer with a bottleneck architecture cannot learn the ICL at all (almost random results). This seems weird.\\n\\nThe comparison between prequential ICL and train-risk ICL (in Figure 2) should be done in a fair setting. With that, I mean in the main paper, I think it make more sense to put Figure E.1's result as the prequential ICL, instead of the Figure 2's truncated version of [x,y]. I understand this will downgrade the performance of prequential ICL, which is not what you want in the paper. Therefore I suggest the authors to look for ways to improve the prequential ICL performance in Figure E.1. Since the code is not released, I cannot give detailed suggestions. But maybe try with the curriculum learning applied in Garg et al.? This should attributes to the optimization issue of Transformer rather than the problem itself.\\n\\nI am willing to raise the score if this could be addressed. Thanks for your time.\"}", "{\"comment\": \"We thank the reviewer for their time, feedback and positive evaluation of our work. We are encouraged that the reviewer found our work as innovative and providing a novel perspective, and address the questions and concerns raised by the reviewer below.\\n\\n**Generalization to non-iid data and complex real-world tasks**\\n\\nICL is used as a general term for systems that learn conditioned on context without weight updates. As the reviewer astutely points out, this includes non-iid contexts such as natural language. In the majority of our experiments, we consider contexts as iid observations of supervised learning tasks, in line with a long history of studying such problems to better understand general ICL properties [1]. Our experiments on non-iid data in a task capturing temporal correlations akin to ones found in language (see Figure 3) suggest that the theory could be extended to non-iid data. \\n\\nWhile we agree that extending the theory to non-iid data would be ideal, it is non trivial to do so as it would entail evaluating every possible ordering of the data to assess the best compression possible. That is, $K(D, p) < L_{preq}^{ordering}(D) + permutation\\\\_matrix(ordering)$. Thus, a key challenge to extending our theory is an efficient way of computing these orderings.\\n\\n**Under-explored architectural dependencies**\\n\\nWe agree with the reviewer that investigating the inductive biases and architectural choices that lead to better/worse minimization of prequential code length is a very interesting and important question. However, we believe that it is out-of-scope for our current work which is primarily intended as a theoretical result with confirmatory evidence that could subsequently drive future research on important questions like the role of architecture.\\n\\n**References**\\n\\n[1] Von Oswald, J., Niklasson, E., Randazzo, E., Sacramento, J., Mordvintsev, A., Zhmoginov, A., & Vladymyrov, M. (2023, July). Transformers learn in-context by gradient descent. In International Conference on Machine Learning (pp. 35151-35174). PMLR.\"}", "{\"comment\": \"Dear reviewer,\\n\\nAs the discussion period is drawing to a close, we kindly ask if our latest response addresses your point which prevented you from strengthening your support. We believe that the paper is greatly improved thanks to your interventions, and we remain available to provide more details.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very appreciative of your time and feedback. As we are nearing the end of the rebuttal period, we would like to request the reviewer for an opportunity to answer any additional questions or doubts that may remain.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very appreciative of your time and feedback. As we are nearing the end of the rebuttal period, we would like to request the reviewer for an opportunity to answer any additional questions or doubts that may remain.\"}", "{\"comment\": \"We thank the reviewer for their time, feedback and for the positive assessment of our work. We address the key questions raised by the reviewer below.\\n\\n**1) Preventing $T_\\\\phi$ from memorizing tasks**\\n\\nTo put this question into context, memorization came into question because we need $K(T_\\\\phi)$ \\u2013 the complexity of the learner itself \\u2013 to be small so that we minimize a tight bound of PCL. If the parameters $\\\\phi$ memorize datasets, then $K(T_\\\\phi)$ could be large. Thus, we avoid this pitfall by meta-learning the parameters $\\\\phi$ on a sufficiently large distribution of datasets \\u2013 with a fixed capacity, memorizing each dataset becomes impossible.\\n\\n**2) and 8) Validity of the approach for non-iid data**\\n\\nICL is a general term for systems that learn conditioned on context without weight updates. In most of our experiments, we consider supervised iid tasks in line with a long history of studying such problems to better understand general ICL properties [1]. Our experiments on non-iid data in a task capturing temporal correlations akin to ones found in language (see Figure 3) suggest that the theory could be extended to non-iid data. \\n\\nWhile we agree that extending the theory to non-iid data would be ideal, it is non trivial to do so as it would entail evaluating every possible ordering of the data to assess the best compression possible. Thus, a key challenge to extending our theory is an efficient way of computing these orderings, which is out-of-scope of the current work.\\n\\n**3) The author\\u2019s setting assumes the model is updated with every token seen, but isn\\u2019t the gradient only back propagated on the summed autoregressive loss?**\\n\\nWe believe that there may be a small confusion. It is important to distinguish between (1) the updates of the sequence model that parametrizes the in-context learner and (2) the updates of the prediction model inferred from the context. During the pre-training phase, the sequence model $T_\\\\phi$ is trained \\u201cwith gradient steps over the cumulative loss\\u201d as the reviewer noted, which updates the in-context learning algorithm it implements. However, the \\u201cmodel\\u201d whose complexity our theory pertains to is the prediction function specified by the sequence model conditioned on context, i.e $T_\\\\phi(D_{1:t-1})$. As explained in Section 2.4, this prediction function is parameterized by the latent activations after a forward pass of the sequence model given past tokens, and these latent activations do \\u201cupdate\\u201d as additional tokens are added to the context. The updates to this in-context model are not gradient updates, but are rather governed by whatever learning algorithm the sequence model implements after pre-training. Throughout the paper, we attempt to clearly indicate this distinction by calling the sequence model the \\u201clearner\\u201d and calling the implicit in-context model parameterized by latent activations the \\u201cmodel\\u201d. \\n\\n**4) Comparing a gradient-based learner and in-context learner**\\n\\nWe would first like to clarify that we are comparing properties of learners which can be either (a) in-context learning based, or (b) standard optimization based. ICL learners have trainable parameters $\\\\phi$ that are pre-trained once, after which they takes as input a dataset $D_{1:t-1}$ in-context to provide the model parameters $\\\\theta = T_\\\\phi(D_{1:t-1})$. Optimization based learners do not have trainable parameters and rely on optimization routines to provide model parameters (e.g. training a model with SGD using the training data $D_{1:t-1}$). Note that in this case, the dataset is not provided as context to a model.\\n\\nTo compare learners that minimize PCL to learners that do not, we train an MLP using SGD to minimize training risk. Our theory predicts that especially in low-data regimes where minimizing train risk leads to overfitting, ICL should yield lower test error. This prediction was borne out in our empirical studies.\\n\\n**5) Clarify why the expressivity of a sequence model scales as $N^2$, but the expressivity of the model it fits from context scales as $N$**\\n\\nThese scaling trends were meant to be approximate based on linear layers with $N$ units and $N^2$ weights. As the reviewer points out, this is not always the case for all DNN layers, such as self-attention. Regardless, the \\u201cparameters\\u201d of the model implicitly fit from context do not include the attention weights, but only the latent layer activations which the model can use to modulate predictions based on context.\"}", "{\"comment\": \"Thanks for your detailed response and for the updated draft, which I have enjoyed reading.\\n\\nI think the fundamental problem with the gap between the prequential code length and Kolmogorov complexity is not really resolved (nor is it clear to me how to resolve it). I see the point about minimising an upper bound, but am not convinced this really addresses the conceptual gap. However it is at least now clear what the statement is, and combined with the improvements in exposition around the experiments I have decided to raise my score.\"}", "{\"comment\": \"We are sincerely grateful to the reviewers for their comments and suggestions, and believe that the feedback has enabled meaningful improvements to our work. Moreover, we were encouraged to see that the reviewers highlighted that our theoretical insights were \\u201cinteresting\\u201d (M5Ep07, 1nBD06), \\u201cnovel\\u201d (YT7Y04, 3Zfk04), and supported by experiments that provided \\u201ccomprehensive empirical validation\\u201d (3Zfk04, M5Ep07).\\n\\nNevertheless, we noted comments suggesting that our contribution is hindered because our theory doesn't immediately apply to large language models (LLMs). We'd like to push back against this view: this work's key contribution -- shedding light on generalization properties of ICL-based predictors -- has implications that extend beyond LLMs. Indeed, our theory and empirical findings add to a growing body of work [e.g. Kirsch et al. 2022] that finds that ICL models may have distinct generalization properties over other models trained with conventional optimization methods, laying the possibility to use large sequence models and ICL to replace conventional optimization. Despite the fact that our contributions stand on their own, we did test our theory on sequence data that mirrors natural language (Fig. 3), and found that our theory continues to predict empirical trends for non-iid data.\\n\\nWe do note that the reviewers had excellent suggestions for extra experiments and writing changes, which we implemented. We address each reviewer's questions by replying separately to their comments, but here, we summarize the key changes to our draft for ease. With these changes, we believe that we have addressed the reviewers\\u2019 concerns.\\n\\nRigor in the theory section. We discuss various approximations in the theory section, and reviewers YT7Y and nR2b wished to see these approximations made more precise. We have updated our draft to improve clarity on this front by showcasing our theory in a more rigorous fashion through the use of upper-bounds, which exactly capture the mathematical relations in question as opposed to approximate equalities. Our modifications make it clear that sequence models adhere to Occam\\u2019s razor by minimizing an upper-bound to training error and model complexity (in the same way we minimize the negative evidence lower bound for latent variable models).\\n\\nGradient-based versus in-context learners. The goal in our empirical studies is to compare learners that minimize prequential code length (e.g., Transformers trained to predict the next token) to learners that do not. As such, one of the learners we study is stochastic gradient descent (SGD), arguably the most popular learning algorithm for outputting a predictive function given data. In our experiments, to study the effects of SGD on a given prediction task, we train an MLP using SGD using training data from that task to minimize training risk. In contrast, sequence models like Transformers are also learners, but don\\u2019t perform gradient descent and have learnable parameters -- these parameters are chosen to minimize prequential code length. The goal of our paper is to study the properties of these learners in relation to Occam\\u2019s razor, and we find that they output predictors that generalize better, especially in low-data regimes (as predicted by our theory). Nevertheless, reviewers asked about comparing SGD with regularization to prequential coding-based learners, and we have added these experiments in Appendix F.2. Briefly, we find that regularized learners achieved better compression (i.e. lower prequential code length), which implies a stronger incentive toward simple models according to our theory. This experiment confirms the claim that regularization techniques serve as indirect Occam-aligned methods to learn simple models.\\n\\n\\n**References**\\n\\nVon Oswald, J., Niklasson, E., Randazzo, E., Sacramento, J., Mordvintsev, A., Zhmoginov, A., & Vladymyrov, M. (2023, July). Transformers learn in-context by gradient descent. In International Conference on Machine Learning (pp. 35151-35174). PMLR.\\n\\nKirsch, L., Harrison, J., Sohl-Dickstein, J., & Metz, L. (2022). General-purpose in-context learning by meta-learning transformers. arXiv preprint arXiv:2212.04458.e\"}", "{\"title\": \"A plea to engage in discussion\", \"comment\": \"Dear Reviewer,\\n\\nAs the discussion period is coming to and end, we kindly ask for your response and feedback. As you can gather from our direct response to your points as well as those from other reviewers, we significantly improved the manuscript. Notably, we now present an improved formal bound that is minimized during prequential ICL.\"}", "{\"comment\": \"We thank the reviewer for their time and valuable feedback. We are heartened to hear that the reviewer liked the perspective of our paper. We address the questions and concerns raised by the reviewer below.\\n\\n**Validity of theoretical claims relative to the tightness of the bound**\\n\\nWe acknowledge the reviewer\\u2019s point regarding the tightness of the bound and believe that our initial discussions about it in the draft might have been distracting as this \\u201capproximate equality\\u201d is not necessary for our theoretical results. We have revised our draft to show that our quantity of interest (training loss + model complexity) is upper-bounded by the prequential code length, and minimizing this upper-bound is a valid proxy to minimizing the objective itself. Such an approach of minimizing a bound is in line with well-established methods like variational inference, where the negative ELBO is minimized which upper bounds negative log likelihood [1]. We kindly refer the reviewer to the updated Sections 2.2 and 2.3, which are now solely based on upper-bounds.\\n\\n**Prequential coding assumes that as the learner $T$ sees more data, it will generalize better on new data. Is this assumption valid? How is this quantitatively measured? Does it require assumptions on the distribution of the data? What is the sample complexity?**\\n\\nThe reviewer brings up an interesting point. The assumption in question is valid as the learner is trained to minimize the prediction error on the next datapoint given prior context. This is similar in spirit to the maximum likelihood estimator, which generalizes better on new data as it seems more data. However, this might not hold true for us in certain OoD scenarios, for e.g. when the number of context points seen during inference are larger than those considered during training. This is a known problem for transformer models, called length generalization.\\n\\nOur findings in Figure 2 precisely indicate that this assumption is valid when minimizing next-token prediction error. The monotonically decreasing curves imply that as $T$ sees more data through increasing context length (x-axis), its generalization error on new data decreases (y-axis).\\n\\nOur setup requires very limited assumptions on the data distribution. Akin to maximum likelihood estimation, observing more data points $(x,y) \\\\sim p$ leads to a better model in our setup as long as the underlying true data generating model $p$ is kept fixed, which is the standard setup in most machine learning approaches. In the extreme case where $y$ is independent of $x$, observing additional context would not lead to better generalization for either $T_\\\\phi$ or any other learner like SGD.\\n\\nFinally, Figure 2 also highlights that in low sample regimes prequential ICL outperforms train-risk ICL and SGD, demonstrating better sample complexity.\\n\\n**1) What is the decoding function used in the prequential coding algorithm?**\\n\\nThe decoding function used in the prequential coding algorithm corresponds to the decoding algorithm used in arithmetic coding (a lossless entropy coding scheme), which reconstructs the original data given an optimally compressed code and a probability distribution. For our purposes, the point is that the algorithm is short to write (does not contribute to total program complexity) and allows an object $x$ to be compressed using only $-\\\\log_2 p(x)$ bits and then recovered with 0 error using the arithmetic decoding function. Thank you for spotting this missing detail; we have updated the caption of Fig. 1a accordingly.\\n\\n**2) Is equation 4 on prequential code length missing the complexity of the learning algorithm $K(T)$?**\\n\\nThe reviewer is correct that the prequential code length + the complexity of the learner together upper-bounds the joint complexity of the data and model:\\n\\n$K(D|p_\\\\theta) + K(p_\\\\theta) = K(D, p_\\\\theta) \\\\le L_{preq}(D;T) + K(T)$\\n\\nWe argue that training the learner $T_\\\\phi$ on meta-datasets that do not include the target dataset $D$ ensures that $K(T_\\\\phi)$ is low (i.e. it cannot overfit to $D$). We point the reviewer to the last paragraph of Section 2.2 and the start of Section 2.3 for reference, with further details present in Appendix B.\\n\\n**3) What is there an approximation in equation (6)?**\\n\\nEquation (6) is an upper-bound that only becomes a tight approximation under certain circumstances. However, our theoretical results only require (6) to be an upper-bound, and therefore we have replaced all unnecessarily distracting uses of approximations with upper bounds in our revised manuscript.\\n\\n**Minor Suggestion**\\n\\nWe agree with the reviewer\\u2019s observation that the first sentence in our abstract presents generalization as the sole objective of machine learning, rather than as one of its important objectives. We have amended this to say that generalization is a \\u201ccentral\\u201d goal in ML.\\n\\n**References**\\n\\n[1] Bishop, Christopher M. \\\"Pattern recognition and machine learning.\\\" Springer google schola 2 (2006): 1122-1128.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very appreciative of your time and feedback. As we are nearing the end of the rebuttal period, we would like to request the reviewer for an opportunity to answer any additional questions or doubts that may remain.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We provide clarifications below.\\n\\n**Model and Learner**: To better understand our bounds and implications, we distinguish between two objects of interest \\u2013 the model and the learner. The model is parameterized as $p_\\\\theta$ with parameters $\\\\theta$, while the learner $T_\\\\phi$ is parameterized with parameters $\\\\phi$. A working example is an MLP with parameters $\\\\theta$ as the model and SGD with parameters $\\\\phi$ (e.g. learning rate, weight decay, etc.) as the learner. \\n\\n**Notion of Training**: Our bounds talk about the complexity of the model $p_\\\\theta$ that is learned by the learner $T_\\\\phi$. Prequential coding algorithm requires one to *re-train* a new $p_\\\\theta$ every time new observations are provided, which with SGD implies running the SGD algorithm again every time. In contrast to gradient-based learners, the same setting under our in-context transformer based learner implies learning a new $p_\\\\theta$ for additional data *through only a forward pass*. \\n\\nThus, we emphasize that the learner $T_\\\\phi$ is pre-trained only once and the model $p_\\\\theta$ can be *re-trained (or inferred or updated)* **solely through a forward pass**. Note that given additional context, $\\\\phi$ is not updated but only $\\\\theta$ is. Another analogous way of thinking about this is by considering $T_\\\\phi$ as the meta-learner that provides the learned model $p_\\\\theta$ given any new observations in a fast, efficient and scalable manner. \\n\\nWe hope that this resolves the reviewer\\u2019s concerns regarding our framing and we will update our manuscript to reflect this more clearly.\"}", "{\"summary\": \"This paper draws a connection between the objective used to minimize the next-token prediction loss, when training with iid data, and a compression algorithm called prequential coding. They show that, if the model is updated after predicting each token, then the minimization of the cumulative loss corresponds to the minimization of the objective of prequential decoding, which serves as an upper bound for jointly minimizing the compression of the data plus the complexity of the model used. The authors also provide a set of experiments to corroborate their theoretical observations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed connection between the minimization of the next-token prediction loss and the prequential coding algorithm is interesting. Intuitively as an observation, it makes sense that as the model is trained, it should learn to represent new data better, if there is any overlapping information between the different data points. In the specific setting, the data are iid and so the model should get better with each training point. It is also interesting that this loss can be connected with minimizing the complexity of the model.\", \"weaknesses\": \"1. In general, the ICL properties of models arise when training the next-token prediction loss, without iid data. The current results do not cover the next-token prediction in general.\\n2. It seems that the current setting assumes that the model is updated each time a token is predicted, but isn\\u2019t it the case that when training a model auto regressively, the model is updated with a gradient step over the cumulative loss over all the tokens of the sequence.\", \"questions\": \"1. How is it ensured that $T_\\\\phi$ in section 2.3 does not memorize? \\u201cTo forbid $T_\\\\phi$ from memorizing a single dataset, ..\\u201d\\n2. Could the authors clarify what would change if the data were not iid ? Does any of the results hold? In general ICL properties arise by simply training the next-token prediction loss without iid data. Could any of the results be generalized? \\n3. It seems that the current setting assumes that the model is updated each time a token is predicted, but isn\\u2019t it the case that when training a model auto regressively, the model is updated with a gradient step over the cumulative loss over all the tokens of the sequence. And so, the loss is not the objective of the prequential code (eq. 4). Could the authors elaborate on why these two are equivalent? \\n4. What is standard SGD vs ICL? Do the authors mean that they simply use an MLP in which the examples are concatenated and given as Input to the MLP, rather than having them in the sequence length? I am not sure I understand this distinction since the minimization of the cumulative loss over the next token prediction is also requires to train a model with SGD. Could the authors clarify more this setting? \\n5. In section 3.2 the authors state: \\u201cFor instance, when $T_\\\\phi$ is a Transformer, the expressivity of the model it implicitly fits to the context scales with the number of activations in the network ($N$), whereas the expressivity of a DNN trained through SGD scales with the number of weights ($N^2$). A Transformer in the attention layer has the multiplication of two $d\\\\times n$ matrices, while it also has $d^2$ parameters for each weight matrix. Could the authors elaborate on how they deduce that the expressivity of a Transformer scales with the number of activations ($N$) and why for a DNN with the number of weights ($N^2$)? \\n6. Could the authors think of some other setting, which would not require to alter the architecture for training with the target of only minimizing the training loss?\\n7. In figure 3b, is the x-axis the data points seen during training or the context length? In the y-axis is it the prequential code lengths or the error measured over some batch on the next token only?If the x-axis is the context length, how is it exactly the generalization error measured? \\n8. I think the paper would be improved, by focusing more on the last setting of experiments, in which the theory does not provide any guarantees, to understand whether similar results would hold in the case of non-iid data.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper discusses an interesting topic to connect the Kolmogorov complexity, prequential coding with In-context learning. The authors first show the prequential code is a \\u201cgood\\u201d algorithm\\u201d to compress both data and model. And through meta learning, the prequential code length could be minimized. In the setting of ICL, the meta learner and inner model for each task are unified as the sequence model. And the next token prediction is equivalent to the prequential coding algorithm. Thus through the next token prediction, the training error and model complexity are jointly minimized.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Overall I like the perspective of this paper. Kolmogorov complexity nicely poses the learning and generalization problem as a compression of data and model. It is surprising to see nowadays the modern LLMs, trained simply on next token prediction, generalizes so well in downstream tasks with or without some fine-tuning. Any effort connecting the two is always welcome.\", \"weaknesses\": \"Unfortunately the draft, to me, lacks a sense of rigor. The connection stated in the draft looks like a good story, but there is not much guarantee. How well prequential code length approximates the Kolmogorov complexity is always a question mark. I feel it is a very loose bound. In the prequential coding algorithm, it is assumed that as the learner T sees more data, it will generalize better on the new data. However there is no quantitative analysis on how this is measured. Any assumptions on the distribution of data? What is the sample complexity here?\", \"questions\": \"1. Figure 1.a: in the description of the prequential coding algorithm, there is a line D+= decode(d_next_encoded,p). I do not see where that decode function is defined. Can you add more details?\\n2. Equation (4): the first equality says the code length is the sum of bits for all the data based on the learner. Do we also need extra bits to represent the learner itself? Maybe I missed something here. Please feel free to comment.\\n3. Can you also provide some details on the approximation in (6). Why it is an approximation and what have we missed here. Thanks.\", \"finally_something_minor\": \"The first sentence in the abstract is a bold claim. Even though I agree generalization is a key to machine learning, I would be cautious claiming that the (only) goal of machine learning is generalization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revised accessment\", \"comment\": [\"After carefully considering the feedback from fellow reviewers and re-evaluating the paper, I have decided to lower my score from 8 to 5. While I initially found the paper's novel perspective on in-context learning and Occam's razor intriguing, the concerns raised about the theoretical rigor and applicability of the work are significant. The lack of clarity in the theoretical justifications and the limited extension to non-IID and real-world tasks diminish the paper's contribution to the field. Additionally, the underexplored architectural dependencies and issues with experimental validity suggest that the work is not yet ready for acceptance. I encourage the authors to address these criticisms in future revisions to strengthen the paper's impact.\", \"Here are the concerns I share with my fellow reviewers\", \"Theoretical Rigor and Clarity. The approximations made, particularly regarding the tightness of bounds and the relation to Kolmogorov complexity, are not sufficiently justified. There is concern that key claims rely on loose bounds or approximate equalities without adequate explanation or empirical validation (Reviewers YT7Y, nR2b)\", \"Novelty of Contributions. The connection between next-token prediction loss and prequential coding is not novel and has been previously discussed in related work, such as Del\\u00e9tang et al. The paper does not sufficiently acknowledge or build upon these existing contributions (Reviewer YT7Y).\", \"Experimental Validity. I share the same concerns about experimental setups introducing ambiguities or unfair comparisons, such as differing input formats between models and lack of appropriate regularization techniques in baselines (Reviewers M5Ep, 1nBD)\", \"Applicability to Non-IID Data. Reviewer 1nBD shares the same concern as I initially did. The theory primarily addresses IID data and does not extend to non-IID or complex real-world tasks, such as language modeling with large language models (LLMs). This limitation reduces the practical relevance of the work.\"]}", "{\"comment\": \"*Continued*\\n\\n**6) Exploring alternative training objective that only minimizes training loss without architectural modifications**\\n\\nUnless we have misunderstood the reviewer, we believe that this suggestion aligns exactly with our experimental setup in Section 3.1 where we compare \\u201cprequential ICL\\u201d with \\u201ctrain-risk ICL\\u201d while keeping an identical architecture. The only difference in the two setups is whether the learned ICL algorithm minimizes error on a new data point (which minimizes PCL) or on a randomly chosen context point (which only minimizes training loss).\\n\\n**7) Clarification on plot legend and axis**\\n\\nIn Figure 3b, the x-axis represents the context length. The y-axis, labeled as 'generalization error,' reflects the KL-divergence between the ground-truth and predicted next-token distributions (Figure 3b caption).\\n\\n**References**\\n\\n[1] Von Oswald, J., Niklasson, E., Randazzo, E., Sacramento, J., Mordvintsev, A., Zhmoginov, A., & Vladymyrov, M. (2023, July). Transformers learn in-context by gradient descent. In International Conference on Machine Learning (pp. 35151-35174). PMLR.\"}", "{\"comment\": \"We thank the reviewer for their time, feedback and for the positive assessment of our work. We address the key questions raised by the reviewer below, and hope that it addresses all the concerns raised.\\n\\n**1) Credit attribution to Del\\u00e9tang et al**\\n\\nIn Del\\u00e9tang et al., the authors discuss the connection between (pre-trained) sequence models and compression algorithms through prequential coding, which is indeed an important result used in our theory, but not our main contribution. In fact, this is exactly why we explicitly discuss such related work in the section \\u201csequence modeling and compression\\u201d to clarify that others have drawn connections between sequence models and compression. Our contribution, as outlined in that section, is that we frame the next-token prediction loss used to train sequence models in the meta-learning framework through in-context learning and draw its connections to minimizing prequential code length across a distribution of tasks. This yields learning algorithms that align with Occam\\u2019s razor. The meta-learning perspective is precisely what is absent from Del\\u00e9tang et al. (which does not aim to explain ICL, as we do), but it is one of the cornerstones of our theory.\\n\\n**2) Validity of theoretical claims relative to the tightness of the bound**\\n\\nWe acknowledge the reviewer\\u2019s point regarding the approximate equality and the tightness of the bound and believe that our initial discussions about it in the draft might have been distracting as this \\u201capproximate equality\\u201d is not necessary for our theoretical results. We have revised our draft to show that our quantity of interest (training loss + model complexity) is upper-bounded by the prequential code length, and minimizing this upper-bound is a valid proxy to minimizing the objective itself. Such an approach of minimizing a bound is in line with well-established methods like variational inference, where the negative ELBO is minimized which upper bounds negative log likelihood. We kindly refer the reviewer to the updated Sections 2.2 and 2.3, which are now solely based on upper-bounds.\\n\\n**3) Connection between experimental results and theoretical claims**\\n\\nOur theory predicts that ICL trained to minimize next-token prediction error produces an in-context learning algorithm that jointly minimizes training error and model complexity. Crucially, by Occam\\u2019s razor, model complexity is most important to take into account when data is limited and generalization is difficult. This is in line with evidence from standard statistics stating that maximum likelihood estimator is a consistent estimator, implying that as the amount of data goes to infinity one uncovers the true model. It is only in finite and limited data settings that regularization strategies and choice of prior play a big role. This ascertains that the gap in performance below 10 data-points in-context is still significant.\\n\\nThe purpose of Section 3.1 was to test exactly this hypothesis that minimizing prequential code length results in better generalization *for low-data regimes* compared to minimizing training error alone and similar performance for high-data regimes, as our theory predicts given the relationship between prequential ICL and Occam\\u2019s razor. Our empirical observations (see \\u201cFindings\\u201d paragraph of Section 3.1) precisely corroborate our theoretical claims and are thus crucial in validating the hypothesis that we laid out. The fact that this gap in generalization performance sometimes exists only in *very* low data regimes for certain tasks just implies that for certain tasks like linear regression, less data is required for generalization.\\n\\nOur other experiments are designed to test the abilities of current methods for ICL, i.e. to what degree do current methods (e.g., sequence model architecture) successfully minimize prequential code length in practice. These experiments were crucial in demonstrating that despite attractive theoretical properties, methods for ICL have significant room for improvement in terms of limitations we identified experimentally and interpreted using our theory.\\n\\nFor all experiments, we attempted to clearly identify in the \\u201cFindings\\u201d paragraphs what our theory predicts (in terms of Occam\\u2019s razor and model complexity) and then either confirmed or rejected these predictions based on experimental results, while highlighting the implications for current methods for ICL.\\n\\n**Minor suggestion**\\n\\n- We agree with the reviewer that the direct equivalence between next-token prediction loss and prequential code length could be clarified in Section 2.4. We added an equation in Section 2.4 to demonstrate this in line 229. \\n- Thank you for flagging the incorrect figure reference in section 3.4, we have updated the paper accordingly.\\n- The claim in line 1245 is indeed backward, thank you for pointing out this mistake. We have updated the paper accordingly.\"}", "{\"title\": \"Please see rebuttals to other reviewers\", \"comment\": \"We thank the reviewer for their continued participation during this rebuttal period. We are disheartened that while the reviewer considered the feedback from other reviewers in their reassessment, they did not take our responses to each of them in consideration, despite posting this revision well after our replies were posted. In particular, we already addressed all the concerns that the reviewer raises in their re-evaluation, but we provide a thorough response below nonetheless.\\n\\n**Theoretical Rigor and Clarity**\\n\\nWe currently do not make claims about the tightness of the bound and while a tight bound is preferable, it is not necessary for theoretical rigor [1-5]. As we pointed out in our response to Reviewers YT7Y and nR2b, prequential code length is still a valid upper bound and minimizing an upper bound as proxy to the quantity of interest is a well established methodology; for example optimizing ELBO which has applications in numerous probabilistic models (VAEs, diffusion models, etc.). We have amended equations in the main text to showcase rigorous inequalities that compose this bound, as well as added further formal details.\", \"novelty_of_contributions\": \"We refer the reviewer to lines 254-255 where we clearly state the contribution of Del\\u00e9tang et al and discuss it further in the related work; and thus would like more clarification from the reviewer as to how we should better acknowledge or build upon such contributions? Crucially, we believe that our contribution builds on the excellent work from Deletang et al. by considering the training of ICL models: whereas Deletang shows that certain sequence models have good inference-time compression abilities, our work gives a theoretical account of how those compression abilities emerge at training time and why they enable strong generalization through Occam\\u2019s razor.\\n\\n**Experimental Validity**\\n\\nWhile our baselines did use different input formats, we clearly stated it in our original draft. Further, our main contribution is the fact that generalization in in-context learning can be explained through prequential code length and Kolmogorov complexity; in particular the comparison between gradient based methods, train-risk ICL, and prequential ICL still holds.\\n\\nIn response to this concern, we refer the reviewer to the revised Appendix F.2 which provides a comparative analysis of prequential code length with different out of the box learning algorithms (e.g. with weight decay and other regularization).\\n\\n**Applicability to Non-IID Data**\\n\\nWe refer the reviewer to our experiments on hidden markov model (HMM) which is a non-iid task. Additionally, while we acknowledge that extending the theory to non-iid data is a very relevant future work, we strongly argue that the iid data case for supervised learning is not a limitation, but a burgeoning use case of sequence models for general ML practices. In particular, a lot of current machine learning research on both supervised and unsupervised learning does rely on iid assumption on the data. Even for language and other modalities and even with current LLMs, different batches of sequences are still treated independently. Despite the theory providing strict bounds for iid data, we note in the revised text that further work has the potential to extend our result for non-iid, and that our experiments confirm that the principle empirically holds for non-iid data. Like many theoretical contributions to ML in the past, idealized settings where theory is more tractable have great value in driving application progress. As such, we argue that this paper has the potential for notable relevance in the field. We refer the reviewer to our original main rebuttal where we outline this point in great detail.\\n\\n*References*\\n\\n[1] Gastpar, Michael, et al. \\\"Which Algorithms Have Tight Generalization Bounds?.\\\" arXiv preprint arXiv:2410.01969 (2024).\\n\\n[2] Shalev-Shwartz, Shai, and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014.\\n\\n[3] Bartlett, Peter L., Dylan J. Foster, and Matus J. Telgarsky. \\\"Spectrally-normalized margin bounds for neural networks.\\\" Advances in neural information processing systems 30 (2017).\\n\\n[4] Golowich, Noah, Alexander Rakhlin, and Ohad Shamir. \\\"Size-independent sample complexity of neural networks.\\\" Conference On Learning Theory. PMLR, 2018.\\n\\n[5] Arora, Sanjeev, et al. \\\"Stronger generalization bounds for deep nets via a compression approach.\\\" International conference on machine learning. PMLR, 2018.\"}", "{\"summary\": \"[edited: in response to the revision, I have raised my score]\\n\\nIn this paper the authors propose to re-examine the next-token prediction loss used to train sequence models such as transformers from the perspective of compression, and in particular prequential coding. This is an attractive idea that has been the subject of several recent works, including Del\\u00e9tang et al \\u201cLanguage modeling is compression\\u201d published in ICLR 2024, and it holds significant promise as a theoretical and empirical means to understand in-context learning (ICL) and more generally the generalisation behaviour of large language models.\", \"the_paper_has_three_main_components\": \"(1) The observation that the next-token prediction loss is related to prequential code length\\n\\n(2) The relation between this code length and Kolmogorov complexity, which begets the claim that training transformers \\u201cexplicitly\\u201d optimises an objective jointly minimising training error and model complexity (line 248), and\\n\\n(3) Experiments that aim to validate these theoretical claims, and suggest potential improvements to the design of transformers which will incentivise better ICL.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I find the prequential code-length perspective on the pre-training objective of transformers useful, it is relatively novel, and I think it is a promising route to understanding ICL. I did not think any of these things before reading this paper, which introduced me to the idea.\"], \"weaknesses\": [\"I find the perspective adopted by the paper intriguing, however in its current form I do not think it has achieved its stated aims in any of three main components identified above:\", \"The relation between next-token prediction loss and prequential code length appears not to be novel, as it is explained clearly in Del\\u00e9tang et al Section 2, and I think this is not sufficiently emphasised in the paper (their work is cited on line 250 for other reasons).\", \"While Kolmogorov complexity is presented as playing a significant role in the framing of the theoretical contributions, I am not convinced of this in its current form. The inequality in (4) is of course true, but the major claims (about e.g. transformer training \\u201cexplicitly\\u201d optimising some objective involving model complexity) seem to rely on this inequality being an approximate equality. This is justified in passing, very briefly, around line 162 and a reference is made to Blier-Ollivier (presumably to the discussion in Section 2.4) but I do not understand how this amounts to a strong justification of the approximate equality.\", \"The experimental results seem a bit scattered, and I am unsure of how strongly they corroborate the theoretical claims. Taking Section 3.1 as an example, I think too much is left implicit about how this connects to the theoretical claims. I do not understand how these findings \\u201cfollow directly from our theory\\u201d (line 312). I do not know how to judge whether or not a gap in performance between prequential and train-risk ICL below 10 datapoints in-context is actually significant.\"], \"questions\": [\"Some small suggestions/questions:\", \"Section 2.4 stops short of actually writing down the next-token prediction loss and doing the simple calculation that connects it to the prequential code length. Since this is claimed in the summary as one of the key contributions, it seems worthwhile to make this explicit.\", \"Section 3.4 has a reference to Figure 2a (line 392) that should be 3a\", \"Perhaps I\\u2019m confused but is line 1245 backwards? Isn\\u2019t your proposal that models trained with maximal length contexts should lead to worse generalisation? Perhaps I am misunderstanding what \\u201cneed less tokens to arrive at simple models\\u201d means.\", \"In conclusion, while I believe prequential coding is a promising direction to understand ICL, I cannot agree with the authors that their theoretical arguments succeed in linking the next-token prediction objective to Occam\\u2019s razor (line 502), in their current form.\"], \"things_that_might_change_my_views\": [\"A more detailed explanation of why I should believe (4) is an approximate equality (either theoretical or empirical)\", \"A stronger link between the empirical work in Section 3 and the theory, explaining exactly how the experiments are predicted (as it stands, it reads to me as a few somewhat confirmatory pieces of evidence, but not strongly so).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper shows an analogy between the next-token loss optimized in in-context learners and the compression technique of prequential coding. This minimizes not only the model's training error but also model complexity. The reviewers appreciated this intriguiging connection and the theory behind it. Criticisms were varied, ranging from concerns about fairness in experimental comparisons, lack of rigor concerning bounds, insufficient exploration of architectural dependencies and the limited scope to iid data. The breadth of these criticisms, combined with no reviewer strongly speaking up for the paper, leads me to recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer 3Zfk initially gave a score of 8 and then reduced to 5 upon further consideration. Across reviewers, the main improvements from rebuttals addressed fairness in experiments, clarified theoretical claims, and better aligned findings with predictions, leading to minor score increases to 6 for 3 reviewers. I largely ignored Reviewer nR2b who did not engage after their initial review.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for your response.\", \"i_would_like_to_clarify_some_points\": \"In section 2.2 the authors mention that with each new data point the model is trained, and in this the section in which point 3 (in your response) was referring to. Do the authors mean training of the underlying parameters of the model or rather just a forward pass? My point is that if we need the model to be trained in such a matter for the provided bounds to hold, this is not how models are actually trained.\\n\\nAlso in your point 4, you state that: \\\"ICL learners have trainable parameters $\\\\phi$ that are pre-trained once, after which they takes as input a dataset $D_{1:t-1}$ in-context to provide the model parameters $\\\\theta = T_\\\\phi(D_{1:t-1})$\\\". I am not sure again I understand what trainable means here. Maybe updatable would be a more appropriate term?\"}", "{\"summary\": \"This paper first provide theoretical understanding that the success of ICL lies in its implicit optimization for both data fit and model simplicity through the lens of data compression. It then examine the case where the training objective is changed to minimize training error alone instead of the prequential code length, and found that it exhibits worse generalization performance compare to the standard next-token prediction error.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written, the theory is interesting and the experiments are well serving the points.\", \"weaknesses\": \"Some of the experiments are not rigorous enough. Please see questions below.\", \"questions\": \"1. Figure 2b: Why does the Transformer without a bottleneck perform worse than the one with a bottleneck? Intuitively, one would expect that a Transformer with a bottleneck would lose essential information necessary for predicting the query\\u2019s label, making this result seem suspicious.\\n2. Regarding experimental details: I found that the Transformer without a bottleneck and the Transformer with a bottleneck were presented with different input formats\\u2014one with (x,y) concatenated and the other without. Why is this the case? This setup does not provide a fair comparison between the two models.\\n3. In the setting where the Transformer is trained with train-risk ICL: Given a total sequence length k (following the notation in line 265), do you break the sequence into k subsequences of length j, where $j\\\\in [k]$, or pass it as a whole sequence, relying on causal attention to prevent future information leakage? If it\\u2019s the latter, how do you select the query x? If x is not x_i in the sequence, then it\\u2019s not guaranteed that the query x is included in the context x_{1:j}. If it is x_1, would this allow the model to learn a shortcut solution, potentially biasing its predictions?\\n4. Following the previous question: If a sequence of length k is always broken into k subsequences, why use a decoder-only Transformer? If my understanding is correct, there should be no causality requirement in the context.\\n5. Regarding the gap observed in performance: Why is the performance gap smaller for linear regression but larger for sinusoid regression and Mastermind? The authors attribute this to task difficulty, but the explanation feels vague. Fixing the function class and varying the problem dimension (as a more concrete indicator of task difficulty) might clarify this point, rather than relying on a vague explanation.\\n6. Why does the MLP baseline generalize worse than the Transformer? Was model complexity minimized through regularization techniques, such as weight decay, in the MLP? This baseline offers limited insight into the results and seems to introduce some ambiguity. Additionally, what would be the Bayesian optimal model\\u2019s generalization error?\\n\\n---\", \"updated_after_the_discussion\": \"Thank you for the authors in taking time to explain the results. My concern has been addressed, therefore I raise the score from 5 to 6.\", \"reason_for_not_a_higher_score\": \"This paper primarily focuses on small-scale experiments. While it provides an explanation from the perspective of Occam\\u2019s razor, it does not offer additional insights on improving small-scale training, let alone addressing real LLM cases (which may involve out-of-distribution scenarios and deviate from the meta-ICL format). Therefore, I don\\u2019t believe it merits an 8.\", \"reason_for_not_a_lower_score\": \"The paper presents an interesting perspective on how in-context learning works in controlled experimental settings. The experiments are well-conducted, and the findings contribute to the literature on understanding in-context learning in small-scale experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2P4p4RxUxT
Conformal confidence sets for biomedical image segmentation
[ "Samuel Davenport" ]
We develop confidence sets which provide spatial uncertainty guarantees for the output of a black-box machine learning model designed for image segmentation. To do so we adapt conformal inference to the imaging setting, obtaining thresholds on a calibration dataset based on the distribution of the maximum of the transformed logit scores within and outside of the ground truth masks. We prove that these confidence sets, when applied to new predictions of the model, are guaranteed to contain the true unknown segmented mask with desired probability. We show that learning appropriate score transformations on an independent learning dataset before performing calibration is crucial for optimizing performance. We illustrate and validate our approach on polyps colonscopy, brain imaging and teeth datasets. To do so we obtain the logit scores from deep neural networks trained for polyps, brain mask and tooth segmentation segmentation. We show that using distance and other transformations of the logit scores allows us to provide tight inner and outer confidence sets for the true masks whilst controlling the false coverage rate.
[ "Deep learning", "neural networks", "uncertainty quantification", "confidence sets" ]
Reject
https://openreview.net/pdf?id=2P4p4RxUxT
https://openreview.net/forum?id=2P4p4RxUxT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zsgZpXRtlu", "xmVM9THEwO", "uQmXlgx9op", "uNe9flptDo", "sfrYkxJ0uj", "qphPrnLJLS", "qj8cbBdYeh", "oNKuLuXuMI", "j652j9mf2P", "hcEr3iVDb1", "cEKXF6Og7T", "Xn2P1bKwuv", "TpO3kJFy7b", "T95UyPEhrW", "Nl0aqYQMMy", "KDJiM3Z4az", "Iige0eHNzh", "IAIbcSJfP2", "D9HXxmtvJQ", "CvYiZuZRbH", "CqZrZgHfRW", "8dQYFsl62q", "6P0ZfhEv7y", "3zM49mpACt", "3GYrbq4e2m", "2i4TK9k5oR" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732102312043, 1732103317197, 1730205653354, 1732875344290, 1732620996803, 1732541706876, 1730716243932, 1732538156979, 1732377768294, 1733269266677, 1732291980301, 1733269205092, 1732101581321, 1737523847367, 1734658061374, 1732103020353, 1732538236357, 1732102683725, 1730113965190, 1732102115591, 1732387873596, 1732103370649, 1732264302815, 1730554643137, 1733269706986, 1732101739501 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_8vPV" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_8vPV" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_Bd8E" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_XxS9" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_Jpsi" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_8vPV" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7562/Area_Chair_FeTS" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_XxS9" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_XxS9" ], [ "ICLR.cc/2025/Conference/Submission7562/Reviewer_Bd8E" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ], [ "ICLR.cc/2025/Conference/Submission7562/Authors" ] ], "structured_content_str": [ "{\"comment\": \"6- We agree that using the learning dataset the training data may not provide the optimal score transformations. This is a not a problem for validity as the training data is assumed to be independent of the calibration and test datasets and so the results of Sections 2.2 and 2.3 still ensure that we can make inclusion statements with confidence. However doing so may impact the choice of score functions. As we now take greater care to emphasise in Section 2.4, we do not recommend using the training data as part of the learning dataset if there is a large amount of data available. However in cases where there is limited data (such as the teeth segmentation problem which we now consider), learning the score function on the training data may still be helpful. In particular the training data may still contain information which allow us to distinguish between different score functions. We saw in the application to teeth segmentation in which the score functions on the learning dataset (which was made up of the training data) had a similar performance on the test dataset. Doing so means that we are not required to give up any of the calibration data used, or make the decision to train the model using fewer images. This is a trade off that must be decided upon carefully by the researcher and where possible we recommend that researchers use a learning dataset which is independent of the training data (as we do with the polyps and brain imaging data settings which we consider). Where possible we thus recommend splitting the data into independent training, learning, calibration and test datasets.\\n\\n7- Our method is indeed a lightweight addition to any existing black box image segmentation model and is relatively easy to apply to additional datasets. The new datasets and applications which we have added to the paper help to illustrate this, showing that the model is generally applicable, informative and valid in these settings. In particular, for each of the models considered, we perform validations in which we resample with replacement from the data in order to check the coverage rate of the method, see Sections 3.3, A.7.4 and A.8.4 of the updated manuscript. We would like to clarify that the guarantees are in fact that 100% of true mask is included 90% of the time rather than that 90% of the mask is included 90% of the time. This guarantee allows full coverage and means that the resulting confidence sets are more meaningful. We shall include validations across additional settings for the camera ready version of the paper. \\n\\nWe thank the reviewer once more for their helpful comments and look forward to hearing their thoughts on our response and discussing any follow up questions that they may have.\"}", "{\"comment\": \"We are very pleased that the reviewer enjoyed reading the paper and are grateful for their comments and questions which we address below.\\n\\nWe have taken the reviewer\\u2019s advice on board and, in order to improve the quality of the manuscript, have included applications to two new datasets involving segmentation in the context of brain imaging and dentistry. Our results on these datasets show the robustness and wide applicability of our approach. See the relevant Section 4, 5, A.7 and A.8 of the updated paper for the results and application examples. \\n\\nRegarding the need for quantitative metrics we have now included dice, precision and recall metrics, in Section A.9, for the 3 different segmentation models used in the paper. These metrics correlate with the performance of the distance transformed scores but not necessarily with other score transformations. Moreover we would like to clarify that evaluation of the inclusion specified in equations 1 and 2 is done in the validations in Section 3.3 (and for the new datasets in Sections A.7.4 and A.8.4). These validations subsample the data with replacement (each time dividing into a calibration and a test set) and check whether the inclusions 1 and 2 hold in order to establish that the methods have the right coverage rate. They show that for each of the datasets considered the confidence sets provide coverage at the nominal rate for interesting coverage levels. \\n\\nIn the first version of the paper we compared to the bounding box approach of [1] as this is the main other approach we are aware of which controls the same error rate. Other methods used in conformal image segmentation typically consider weaker error rates as these are easier to satisfy whilst being less meaningful. However score transformations such as the distance transformation can be very helpful when using these other methods for the same reasons they are helpful in our context. We shall prepare and include an illustration of the resulting benefits of doing so, for other methods such as conformal risk control [2], for the final version of the paper. We shall also show quantitatively the degree to which sets derived using risk control (with the expected proportion of false non-discoveries) provide (severe) undercoverage when considering inclusion coverage rates.\\n\\n[1] And\\u00e9ol, L\\u00e9o, et al. \\\"Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling.\\\" Conformal and Probabilistic Prediction with Applications. PMLR, 2023.\\n\\n[2] Angelopoulos, Anastasios N., et al. \\\"Conformal risk control.\\\" ICLR, 2024.\\n\\nWe agree that there are other score transformations which can be considered. In particular as the reviewer remarks smoothing the score contributions via a smoothing kernel is a good idea. We illustrate this in the new applications to brain imaging and dental records, see Sections 4, 5, A.7 and A.8. Here we compare the results of smoothing the scores using a Gaussian kernel with varying levels of applied smoothness. In the brain imaging application we see that this leads to a big improvement over the use of the original scores (which perform quite poorly). However in this setting the improvement is not as great as using the distance transformed scores. Instead for the dental application, smoothing is very helpful and in fact provides the largest inner confidence sets, which we then use in practice. For this application it also helps to provide tight outer sets. These can in fact be tighter than those provided by the distance transformation however tend to have extra blobs which do not correspond to teeth which is why we settled on the distance transformation for the final calibration.\\n\\nInstead for the polyps application we found that smoothing did not significantly improve the quality of the inner and outer sets on the learning dataset, likely because the score contributions from the model are already smooth (see e.g. the surface plot of the scores in Figure 2). We will add the results of applying smoothing in the polyps application to the final version of the manuscript. \\n\\nWe have added labels to the rows/columns of the figures displaying the confidence sets throughout the main text and the appendix and thank the reviewer for this suggestion as it greatly helps to improve the clarity. Moreover we would like to apologize for the spelling error of polyps which we have now corrected in the updated draft and appreciate that this was spotted. We have also replaced \\\"... the set a side [num] images ...\\\", with the \\u201c\\u2026 [num] images which we set aside\\u201d or another appropriate variant.\"}", "{\"summary\": \"The paper proposes a conformal prediction based method to quantify the uncertainty for medical image segmentation. The proposed method is particularly designed for pre-trained segmentation models which notoriously make overconfident and wrong predictions. The proposed method learns thresholds using the maximum logit scores from a calibration set for the inside and outside of the ground truth masks and apply them on the logit scores of the test image to return conformalized segmentation prediction which guarantees to include the ground truth segmentation. The paper shows that naively learning the outside thresholds on max logits is not optimal and propose to transform the scores using a distance to make sure that far away pixels have lower scores. The method is validated on a single dataset for polyp segmentation and the results show that the proposed method produces conformal sets with narrower boundaries compared to using scores which are not transformed.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The idea of using transformed max logit scores is simple but quite effective strategy to produces conformal segmentation sets.\", \"The presented experiments show the effectiveness of the method compared to using non-transformed logits.\"], \"weaknesses\": \"1- Although I found the proposed idea of transforming max logit scores interesting, I don't think that the paper presents enough contribution to be presented in ICLR. The idea of applying conformal prediction to max logits for inside and outside of the boundaries is a direct extension of initial conformal prediction methods developed for segmentation, and applying transformations based on distance is an intuitive choice to refine predicted boundaries.\\n\\n2- The paper does not present any comparisons with the existing conformal prediction works for image segmentation.\\n\\n[1] Mossina et al. Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty, CVPR Workshops, 2024,\\n\\n3- The method is evaluated on only a single dataset. Multiple datasets should be included to make sure that the performance generalizes across datasets.\\n\\n4- In many segmentation tasks, we are interested in segmenting multiple structures. The paper only focuses on binary segmentation. I think the method should be validated on multi-class setting to make sure that it is also applicable in that setting.\\n\\n5- The explanation of how the method is applied at test time could also be clearer. As I understand it, during testing, the method applies the inner threshold on max logits to find inner boundaries, then applies a distance transformation based on each pixel\\u2019s distance from these inner boundaries, and finally applies an outer boundary threshold. However, the exact steps of the algorithm during test time need more clarification.\\n\\n6- In conventional uncertainty quantification algorithms for segmentation such as [2, 3] the uncertainty is quantified by the variance of the segmentation samples generated from the posterior distribution. How can the quantification be done in this case? Is it the margin between the inner and outer boundaries? Is the uncertainty quantified by the algorithm correlates with the uncertainty in the input image? For example, does the method output larger margins when there is greater disagreement between the segmentations of different experts? \\n\\n[2] Kohl et al. A Probabilistic U-Net for Segmentation of Ambiguous Images\\n[3] Erdil et al. MCMC Shape Sampling for Image Segmentation with Nonparametric Shape Priors\\n\\n7- The margin between the inner and outer boundaries appears quite large and there can be many unplausible segmentations within this area. For practical applications, an uncertainty quantification method should ideally produce a set of plausible segmentation samples within this margin, rather than simply indicating a large margin that may or may not include the ground truth segmentation. How could one obtain a plausible segmentation sample from this margin?\", \"questions\": [\"How does the results generalize to other datasets and segmentation of multiple structures?\", \"How does the uncertainty quantified by the proposed method relates with the real uncertainty (assuming it can be measured by the disagreement between multiple experts)?\", \"How one can use the proposed method in a practical application? Can we get samples of plausible segmentations within the margin outputted by the algorithm?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the explanation. I understand the practical value of uncertainty quantification; however, I cannot see the practical value of uncertainty quantification in the way that the paper proposed. Perhaps, this is because the quantification is different than the uncertainy qualtification papers that I am more familiar with.\\n\\nThe existing segmentation algorithms for uncertainty quantification, e.g. Probabilistic Unet, PhiSeg, and so on, return multiple possible segmentations. And, they demonstrate that the quality of uncertainty quantification by show that the distribution of the samples generated by the network and the experts is similar. They mostly use a metric called as generalized energy distance for quantification. In this setting, if the network is uncertain about an image, then the image can be delegated to human experts for detailed analysis or to get a consensus. This makes sense since they already show that the uncertainty measure of the network is similar to the experts.\\n\\nThe paper does not present such an analysis and the authors responded my previous question about this as \\\"In the case that experts disagree on the true segmented mask we would for now recommend using a consensus mask which is a function of the masks produced by each expert.\\\". In this case, how can we make sure that the uncertainty quantified by the proposed method through the margin of the confidence bound reflects the true uncertainty. I would appreciate if the authors point me the specific table/figure where they present such quantification. This is extremely important because otherwise I cannot see a practical value of such uncertainty quantification. The use case previously mentioned by the authors, restricting the area of that the experts need to search for a certain structure, is not convincing to me because of the reasons I explained in my previous comment.\\n\\nOne response of the authors that I might have overlooked before is the following \\\"... as the predicted segmented mask approaches the ground truth mask in Hausdorff distance both inner and outer sets will converge to the ground truth mask.\\\" This could be actually an extremely important use case of the method if the negative correlation between the quantified uncertainty and the actual accuracy of the network (Hausdorff distance or Dice Score) is high. And, I believe this should be the main power that needs to be emphasized in the paper. The authors point a theorem and some visual results in their response but I couldn't really find analysis quantifying this. If there is a high correlation better than the other uncertainty quantification methods, such analysis makes the paper much stronger.\\n\\nSince I think these additional analysis require another round of major revision, I retain my current score.\"}", "{\"title\": \"Still rather unhappy with the global score functions\", \"comment\": \"Thank you for your reply. And let me reiterate that I can see that your contribution is a methodological one. However, the concept of relying on rather simple and global transformations - distance or threshold adjustment - seems to me to be too simplistic in terms of practical relevance. Let me rephrase why I find your approach of choosing score functions limiting:\\n\\nIn most biomedical segmentation tasks, you would have sharp decisions in some parts of the foreground boundary - just because there is a clearly visible difference between foreground and background - and in other areas you would have a rather smooth transition between \\\"very likely foreground\\\" and \\\"very unlikely foreground\\\". An example is your brain extraction map: the difference between brain and skull at the top of the head is clearly visible. The segmentation performance of most algorithms is likely to be accurate at the pixel level. Other parts of the outer boundary of the brain are not so clearly separated in terms of image intensities (such as the base of the brain), or are simply invisible and the result of anatomical reasoning extrapolating a general shape (such as the boundary between the brain and the brainstem). If my algorithm is producing segmentation errors for whatever reason, and I am still unhappy with the segmentation performance after adjusting the threshold on the logit, my only option is to add a few millimetres of uncertainty to the boundary everywhere. In many applications where the foreground is surrounded by a rather inhomogeneous background, or where the shape of the foreground is defined in a somewhat inconsistent way (by experts, image intensity differences, etc.), this is unlikely to be a valid assumption for how errors and uncertainties are distributed around the boundary of your foreground. [And all this reasoning also applies to the uncertainty of the inward direction of an inhomogeneous foreground]. \\n\\nSo, while I appreciate your methodological contribution, I feel that some of the basic assumptions underlying your solution are not very convincing to me. You would convince me with an empirical study, on a variety of data sets, demonstrating that inter-related \\\"logit-distance\\\" scores are ok to be ignored, in favor of using pure \\\"logit\\\" or \\\"distance\\\" scores only. But I see that a rebuttal is not the right place for this extra effort.\"}", "{\"comment\": \"Thank you for clarifying about Figure 4 and Figure A21 and A25, I had misunderstood what was in those. Yes, that was what I was looking for, so all good.\\n\\nAs you mentioned, about confidence band width, it would probably be interesting to measure the Hausdorff distance (for instance) between the inner and outer sets, but that would not be required from my side.\"}", "{\"summary\": \"Authors develop confidence sets providing spatial uncertainty guarantees for outputs of a black-box machine learning model designed for image segmentation. Specifically, this paper adapts conformal inference to the imaging setting, obtaining thresholds on a calibration dataset based on the distribution of the maximum of the transformed logit scores within and outside of the ground truth masks. Qualitative evaluations are implemented on a polyp tumor dataset to demonstrate the effectiveness of this approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The topic of this work is quite interesting. By proposing the concept of conformal confidence sets, this work could provide spatial uncertainty guarantees for the outputs of image segmentation models.\\n2. Theoretical proofs are well formulated to serve as a strong proof for this paper.\", \"weaknesses\": \"1. A very obvious typos \\u201cpolpys\\u201d exist many times, even in the abstract. That should be \\u201cpolyps\\u201d.\\n2. It will be more convincing if authors could provide quantitative results for the segmentation performance of polyp segmentation. The evaluation metrics include Dice, Precision, Recall, etc. For comparable baseline models, authors could choose PraNet, SANet, etc.\\n3. Since the concept of conformal confidence sets can be generalized to other medical image segmentation tasks, maybe more public datasets are applicable to this work, such as vertebrae or tooth segmentation.\\n4. Some technical terms need to be further explained for a better understanding, such as FWER/FDR/FDP in the introduction part.\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer Jpsi,\\n\\nWe were wondering whether you had had a chance to have a look at the new changes which we have made to the paper in our response. In our view the new version of the paper has been greatly improved thanks to your review. Do you feel that we have addressed your comments? \\n\\nThanks in advance!\"}", "{\"comment\": \"We would like to thank the reviewer once again for their feedback. We admit that we had been hoping that the reviewer would consider increasing their score slightly in light of the changes which we have made. The addition of the new applications and theory \\u2013 prompted by the helpful comments of the reviewer (and other reviewers) \\u2013 have in our view greatly strengthened the paper.\\n\\nWith regards to the reviewer continued concerns. We shall address these in reverse order. \\n\\nWith regards to the reviewer\\u2019s second point. It is our strongly held view that deep learning models are widely used without proper uncertainty quantification. Indeed as discussed in [4], \\u201cneural networks do not deliver certainty estimates or suffer from over- or under-confidence\\u201d. These models are powerful but are essentially black boxes the outputs of which are difficult to properly understand. We would strongly argue that whenever it is useful to use a deep learning based segmentation model it is important to have proper confidence bands on the output in order to better understand the limitations of the model. As discussed in [5], \\u201c\\u2026medical AI, especially in its modern data-rich deep learning guise, needs to develop a principled and formal uncertainty quantification (UQ)\\u201d. Deep learning models are widely used without much thought or consideration of uncertainty and in our view the field is in great need of a change. We regard our work as a step along this path in the right direction. \\n\\n[4] Gawlikowski, Jakob, et al. \\\"A survey of uncertainty in deep neural networks.\\\" Artificial Intelligence Review 56.Suppl 1 (2023): 1513-1589.\\n\\n[5] Begoli, Edmon, Tanmoy Bhattacharya, and Dimitri Kusnezov. \\\"The need for uncertainty quantification in machine-assisted medical decision making.\\\" Nature Machine Intelligence 1.1 (2019): 20-23.\\n\\nThe reason for which uncertainty quantification is necessary in deep learning is the same as the reason for which statisticians advise against providing point estimates on effect sizes without confidence bands. In our setting the predicted output of the neural network is the point estimate of the segmented outcome and our confidence bands provide the necessary uncertainty. As discussed in [6], \\u201cPoint estimates alone can be misleading, as they do not quantify the variability or reliability of the estimate.\\u201d In particular , \\u201cconfidence intervals (and by extension, bands) offer a way to convey the precision of estimates, reminding us that data are noisy and estimates are not exact\\u201d, and they \\u201chelp us see how useful a model might be by explicitly recognising its limitations\\u201d as discussed in [7] and [8] respectively.\\n\\n[6] Casella, George, and Roger Berger. Statistical inference. CRC Press, 2024.\\n\\n[7] Freedman, David A. Statistical models: theory and practice. Cambridge University Press, 2009.\\n\\n[8] Box, George EP, and Norman R. Draper. Empirical model-building and response surfaces. John Wiley & Sons, 1987.\\n\\nWe would be happy to add the details discussed in the above paragraphs (and more) to the paper if the reviewer feels that this would better help to explain and motivate the importance of our work.\\n\\n\\nTo the reviewer\\u2019s first point. In our view the distance transformation is not the only contribution of the paper, the others being the theory developed (including the new results i.e. Theorems 2.8 and A.4) and the emphasis on the choosing the optimal score transformation based on a learning dataset, (and justifying this theoretically) which as far as we are aware has not been suggested in the literature on conformal inference for images. This is important no matter what error rate is being controlled.\\n\\nHowever the distance transformation is an important part of the paper. It has the distinct advantage of allowing conformal sets to work very well where other existing approaches such as [2] fail, sometimes extremely badly (c.f. Figure A20). Fixing other existing methods in this manner is, in our view, an important contribution. \\n\\n[2] Mossina et al. Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty, CVPR Workshops, 2024.\\n\\nWe look forward to hearing the reviewer's thoughts on our response and thank them once more for the time taken to read our work.\"}", "{\"comment\": \"We thank the reviewer for their thoughts. We shall aim to include a measure of the Hausdorff distance between the inner and outer sets for the final version of the paper.\"}", "{\"comment\": \"Thank you to the authors for their detailed response. I truly appreciate the effort they put into providing explanations and conducting additional experiments. However, I still have some concerns that prevent me from improving my initial score.\\n\\t1.\\tMy concern regarding the contribution of the paper remains. While I understand that distance transformations have not been explored before in the literature, I still view this as a relatively minor contribution. I would be more convinced if the practical value of the method was demonstrated more clearly. Currently, I struggle to see how such a method can be effectively applied in practice, which leads me to my second point.\\n\\t2.\\tThe authors propose the following practical use case for their method:\\n\\n\\t\\u201cFor polyp segmentation, the method could be used to rule over regions of the image where the polyps could lie. We can be sure, up to the guarantee provided by the model, that there are no polyps outside of the blue set, meaning that practitioners could deprioritize looking for polyps within those regions.\\u201d\\n\\nWhile I understand this scenario, I am not convinced that it significantly reduces the time and effort required by practitioners or addresses a pressing problem for them. In all the images shown in the experiments, there is a single polyp, and the images are relatively small. Practitioners can already identify the polyp region without needing to extensively search the entire image. I would find the method more compelling if there were examples involving larger images where experts or practitioners genuinely face challenges in locating polyps across a wide area. In such cases, narrowing the search region could provide substantial benefits. The same critique applies to the teeth dataset. Experts (and even non-experts) already know where to look to find the teeth. If the conformal prediction approach merely provides smaller regions to search within\\u2014leaving practitioners to \\u201cmanually\\u201d segment those areas\\u2014it is unclear to me what significant advantage this offers in a real-world scenario.\\n\\nFor these reasons, I am inclined to maintain my initial rating.\"}", "{\"comment\": \"Thanks for your further thoughts. The uncertainty quantification we are describing is indeed quite different from the papers that the reviewer mentions. Those papers rely on probabilistic model assumptions which are not guaranteed to hold in practice. Instead conformal confidence sets provide robust guarantees which hold without making additional assumptions.\\n\\nWith regards to the uncertainty quantification figures, we would like to point the reviewer to Figures 4, A21 and A25 in which we quantified the uncertainty and showed that the method provided the right guarantees. The width of the confidence sets are instead compared in Figures 5 and A17 and show big improvement over existing approaches whilst maintaining the same level of coverage. \\n\\nWe are glad that the reviewer agrees that the new result which we have provided is an important contribution. This result shows that the distance transformed scores provide guarantees which cannot be provided by the untransformed scores. As shown in the brain imaging application the confidence sets provided by the untransformed scores can be very poor in practice while the distance transformed scores are very informative. There is indeed a negative correlation that the reviewer describes, compare e.g. the table on the segmentation performance in Section A.9 with the performance of the distance transformed scores. We would be happy to include a range of further simulations which illustrate this in further detail for the final version of the paper.\"}", "{\"comment\": \"We would like to thank the reviewers for their feedback and constructive comments and for taking the time to read our work. All reviewers stressed the need to apply our methods to more than one dataset. We have taken this feedback to heart and in order to address this we now include the results of applying the methods to 2 new datasets and problems: brain mask segmentation and teeth segmentation. We find in these settings that the method works well, providing informative inner and outer confidence sets. On these datasets we also explored the impact of alternative score transformations based on smoothing the original untransformed scores with a kernel of varying bandwidth. The results of these applications have been included as Sections 4 and 5 of the manuscript, with comparisons between different transformations including smoothing included in Sections A.7 and A.8.\\n\\nOn these datasets, as previously with the polyps data, the best combination of score transformations was learnt from a independent learning dataset. For brain mask segmentation the distance transformed scores provided the tightest regions for both inner and outer confidence sets whilst the original (untransformed scores) were uninformative. Smoothing improved the original scores but not as much as applying the distance transformation. Instead for teeth segmentation distance transformed scores provided informative outer sets whilst smoothing the untransformed scores provided the best inner sets. Since the best transformation depends on the application these new data applications help to illustrate the importance of learning the score function in this manner. \\n\\nWe have also included new results (Theorems 2.8 and A.4) which characterise the relationship between the confidence sets based on the distance transformed scores and the Hausdorff distance between predicted and ground truth masks on the calibration dataset. These results shows that if the Hausdorff distance between predicted and ground truth masks on the calibration sets is bounded then confidence sets for new observations are guaranteed to be at most twice as wide as the bound. Importantly a corresponding result does not hold for the untransformed scores as we illustrate Figure A20. A comparison of the metrics, of the segmentation models used, is now included in Section A.9 and is related to the performance of the distance transformed scores (i.e. the model (HDBET) with the highest performance on these metrics has the most precise confidence bands). We have also added a description of the algorithm at test time in Section A.5. \\n\\nWe have uploaded a new version of the paper with the results of applying our method to these datasets and other changes in response to the reviewers comments. Changes in this new version are shown in red and sections referred to in the responses below refer to sections of the newly uploaded paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"1. The method is only evaluated on a single dataset and focuses exclusively on binary segmentation.\\n2. The manuscript lacks a thorough comparison with existing conformal prediction methods for image segmentation. \\n3. Some aspects of the methodology, particularly how the algorithm operates during testing and how uncertainty is quantified, require clearer explanation to ensure readers can fully understand and reproduce the results. \\n4. Reviewers highlighted the need for more quantitative results, including performance metrics like Dice, Precision, and Recall for segmentation, as well as aggregated coverage scores.\", \"additional_comments_on_reviewer_discussion\": \"The paper received mixed reviews and the authors were able to address some of the concerns raised by the reviewers. While there is no final consensus, the AC acknowledges both the merits outlined by the positive reviewers & shortcomings of the paper. The discussion clarified many of the concerns, however, after reading all discussions and responses, it appears that the paper requires a major revision to meet the standards required by ICLR and requested by the reviewers, e.g., the practical application, and more quantitative results.\\nThis means that the paper cannot be accepted in its current form and needs to be reviewed again potentially in a future venue!\"}", "{\"comment\": \"6- Uncertainty quantification for our method is indeed quantified by the margin between the inner an outer confidence sets. We do not rely on a posterior distribution, instead using the calibration set to calculate the inner and outer thresholds. As such our method does not make assumptions on the distribution of the data in order to provide valid uncertainty.\\n\\nIn particular the width of the confidence bands directly depends on the quality of the neural network. I.e. as the predicted segmented mask approaches the ground truth mask in Hausdorff distance both inner and outer sets will converge to the ground truth mask. In order to formalize this we have added Theorems 2.8 and A.4 which show that if the Hausdorff distance between predicted and ground truth masks on the calibration dataset is bounded then confidence sets for new observations are precise. Importantly this result does not hold for the original untransformed scores which can give very wide and uninformative confidence sets even when the neural network provides very good predictions. This very well illustrated in the brain imaging application, see Figure A20 in Appendix A.7.\\n\\nIn the case that experts disagree on the true segmented mask we would for now recommend using a consensus mask which is a function of the masks produced by each expert. In that case the method would provide confidence bands relative to this consensus mask. The method is only as good as the quality of the expert calculated masks and relies strongly on a good quality ground truth. We do not directly incorporate the uncertainty in the ground truth masks in our approach but it would be very interesting to do so, as we now observe in the discussion.\\n\\n7- The size margin between the inner and outer boundaries (for the confidence sets obtained from using the distance transformed scores) depends on the application setting and quality of the image segmentation algorithm, as shown in Theorems 2.8 and A.4 and discussed in the response to (6) above. The width of the uncertainty bands helps to visually capture the uncertainty of the model and in our view allows practitioners to better understand the limitations of these models. \\n\\nIt is indeed the case that not all segmentations within the margin will be equally plausible. Because we are not working with a posterior distribution it is not possible to obtain samples from the model. Instead obtaining a set of plausible segmentations within these bounds would in our view require additional biological information to be taken advantage of. We have added a comment to the discussion on this point as an interesting direction for future research.\\n\\n------------------------------------------------------------------------------------------------------------\\n\\nWe would direct the reviewer to our responses to 3 and 4 (and the response to all reviewers) with regards to their first question, to 6 for the second question and thirdly to 5 and 7 and below for the response to their third question.\\n\\nRegarding how the method can/should be used in practice. This depends on the application setting. For polyps segmentation the method could be used to rule over regions of the image where the polyps could lie. We can be sure, up to the guarantee provided by the model that there are no polyps outside of the blue set meaning that practitioners could deprioritize looking for polyps within those regions. \\n\\nInstead for instance in the brain imaging application it is important to detect locations which lie within/outside the brain for follow up analyses. Within the inner set we can be sure to find areas inside the brain which could help with alignment further down the pipeline and the detection of activation (e.g. when using fMRI). Instead the outer set can be used to mask out areas where we can be sure that there is no brain, and thus no activation. Having precise confidence bounds on this is important because otherwise we risk missing areas of the brain. \\n\\nWe thank the reviewer once more for their helpful comments and look forward to hearing their thoughts on our response and discussing any follow up questions that they may have.\"}", "{\"comment\": \"Dear Reviewer Bd8E,\\n\\nWe were wondering whether you had had a chance to have a look at the new changes which we have made to the paper in our response. In our view the new version of the paper has been greatly improved thanks to your review. Do you feel that we have addressed your comments? \\n\\nThanks in advance!\"}", "{\"comment\": \"We are grateful for the comments and thoughts of the reviewer and for the opportunity to clarify our contributions.\\n\\n1- The distance transformation is indeed a sensible choice of score transformation. However as far as we are aware other papers have not considered it in the context of conformal inference for image segmentation. Given how necessary this transformation turns out to be in some applications (see e.g. the new brain imaging example in which the untransformed scores provide extremely uninformative bounds), to us this is an important gap to fill in the literature. We also regard the theory which we derive surrounding inner and outer sets, including the newly added results, Theorems 2.8 and A.4, as a key contribution. \\n\\n2 - We would like to clarify further that we in fact do compare to the results of other existing methods. In particular the bounding box approach of [1] is compared to on the learning dataset and the testing datasets, for the polyps application, and shown to perform less well than the use of the distance transformation. This is shown visually in Figure 2 and Figures A8-12. We also compared to the precision of this approach in Figure 5 and included it in our validations in Figure 4. We explain the relationship with [1] in Section 2.5 of the manuscript.\\n\\n[1] And\\u00e9ol, L\\u00e9o, et al. \\\"Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling.\\\" Conformal and Probabilistic Prediction with Applications. PMLR, 2023.\\n\\nOur existing results in fact also compare to the result of applying [2], the paper mentioned by the reviewer. This is because for our problem setting the approach of [2] is equivalent to empirical risk control [3] with the binary loss function which we showed can be used to derive valid inner and outer sets in Section A.2. We have clarified this in Remark 2.4. Applying the method of [2], without modification, in our context would result in the blue outer set obtained from the identity score transformation which is typically very wide and not useful. This is exemplified in the brain imaging application, see Figure A20, in which the blue outer set (which would be the result of applying the algorithm in [2]) obtained from using the untransformed scores is extremely uninformative. Indeed [2] observed very poor performance with the binary loss function, noting that the resulting \\u201cprediction set will be theoretically valid but not very informative\\u201d. The use of the score transformations and the distance transformation in particular is thus crucial in improving the width of the confidence sets. As far as we are aware our paper is the first (other than the bounding box approach of [1] which we compare to) to provide informative conformal confidence sets which are guaranteed to fully contain the segmented outcome (rather than controlling another weaker error rate),\\n\\n[2] Mossina et al. Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty, CVPR Workshops, 2024,\\n\\n[3] Angelopoulos, Anastasios N., et al. \\\"Conformal risk control.\\\" ICLR, 2024.\\n\\n3 - We have now included two additional applications involving brain imaging and dentistry. These show that the performance of the model indeed generalizes across datasets. It also helps to emphasize the need for score transformations. The distance transformation does particularly well on the brain imaging dataset. We have performed validations on these datasets, see Sections A.7.4 and A.8.4, which show that the model correctly controls the coverage rate in these settings. \\n\\n4- Regarding the reviewer\\u2019s question about segmentation of multiple structures. This is indeed an interesting question. The segmentation problem for each one of these multiple structures is itself a binary segmentation problem. As such corresponding results for multiple structures follow as a corollary to our results. Joint coverage over the structures can then be obtained by jointly sampling the maximum of the scores over the different classes. We shall formalize this and add an application for the final version of the paper. \\n\\n5- In order to clarify what the algorithm does during test time we have included a formal algorithm describing the steps taken by the model. See Algorithm 1 in Appendix A.5, now referenced in Section 3.2. Inner and outer thresholds are in fact computed separately based on the inner and outer scores respectively during calibration. When applying the distance transformation the distance is computed relative to the predicted mask obtained by thresholding the logit scores at 0 not to the inner set. Then at test time transformed inner and outer scores are calculated and compared to the calculated threshold. We hope that the provided algorithm helps to make the steps taken clearer.\"}", "{\"summary\": \"The authors propose a conformal prediction method that computes confidence sets with spatial uncertainty guarantees in image segmentation from any machine learning model. They illustrate the usefulness of the proposed method on medical images.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written and clear, although it took a second read-through to fully understand. The proposed method seems to work very well, and the presented experiments are convincing.\", \"weaknesses\": \"I am missing more quantitative results. For instance, aggregated coverage scores (e.g., mean; or other metrics, e.g., evaluate Equations 1 and 2) for the different versions on more than one dataset. This comparison should then also include some existing methods, to illustrate the relative strengths of different methods.\\n\\nAs just mentioned, for the results to be more convincing, I would also like to see examples on more than just one dataset.\\n\\nAlso, there must be other score transformation functions that could also be evaluated. Testing a couple more could strengthen the results and make it more convincing.\", \"questions\": [\"Couldn't a related/similar smooth distance be defined using kernels?\", \"What is called \\\"original scores\\\", is this when you use the identity score transformation?\", \"What are the dashed lines in Figures 4 and 5?\"], \"major_comments\": [\"Add labels and/or legends to the rows and columns of the figures.\"], \"minor_comments\": [\"The word \\\"polyp\\\" is misspelled in different ways in almost every instance. Do check this.\", \"It says \\\"... the set a side [num] images ...\\\", or something similar, a few times. Check the grammar there.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are very grateful for the reviewer\\u2019s comments and remarks. Our response is below.\\n\\nRegarding the reviewers first concern. We in fact view the adaptability of the choice of score function to the dataset/model as a strength not a weakness of the method because different datasets/models have different features that mean the optimal score transformation may vary. This approach (of learning transformations on an independent dataset) has been previously used and theoretically justified in [1] in the context of conformal inference for time series data, in which the optimal copula was chosen based on a learning dataset (as we mention in Section 2.4). In the new datasets provided the optimal score transformations are different than for the polyps dataset and indeed certain choices (such as the original scores) can perform very badly (e.g. in the brain imaging application, see Figures A18 and A20 in the updated manuscript). This helps to illustrate the need to optimize the score functions. \\n\\n[1] Sun, Sophia, and Rose Yu. \\\"Copula conformal prediction for multi-step time series forecasting.\\\" ICLR, 2024.\\n\\nWe indeed regard one of the main contributions to be the theory developed. We would like to clarify that learning the score functions on an independent learning is theoretically valid. Crucially, as for the similar approach taken in [1] in the time series setting, the independence of the learning data set from the calibration and testing datasets guarantees the vaildity of the optimally chosen score function. \\n\\nRegarding the reviewers second concern that we only consider a single dataset we thank the reviewer for this comment. In order to address this we have added two additional datasets involving brain imaging and teeth segmentation. These new datasets help to illustrate the robustness and usefulness of our method. \\n\\nRegarding the reviewers questions. \\n\\n1- The dataset is indeed public but is independent of the data used to train the original polyp segmentation model.\\n\\n2- In the original dataset there were a few images from the same video frames however we removed these for the purposes of our analysis. This is important our model assumes exchangeability which would be violated if there was dependence between the images. We now clarify in Section 3 that the images used come from different patients.\\n\\n3- Regarding the use of the word tumor, we apologize for this oversight and have mostly removed the word throughout the paper, except in one setting in which we are not referring to polyps in particular. We thank the reviewer for pointing this out.\\n\\n4- The reviewer is right to note that the fact that the data is from different centres may influence the annotations. We rely strongly on a good quality ground truth and the model is only as good as the ground truth available. Where possible we would recommend taking a consensus rating by combining the annotations of multiple annotators and then using this consensus in combination with our model for best results. \\n\\n5 \\u2013 We have now included a table in Appendix A.9 illustrating the performance of the different segmentation models used, measured in terms of dice, precision and recall scores. This table helps to show how improvements in these metrics correspond to improvements in the performance of our method. In particular for the best performing model based on these metrics (the HDBET model designed for brain extraction), has relatively tight confidence sets. This is a relationship which we have now formalized in Theorems 2.8 and A.4 results which give guarantees on the size of the resulting confidence sets related to the performance of the model. Other choices of score function do not correlate with these metrics, indeed the original untransformed scores perform notably badly for the brain imaging application despite the high performance on the metrics (see e.g. Figure A20). As such, even for well performing segmentation models, appropriate score transformations are required in order to obtain tight confidence bounds. Theorems 2.8 and A.4 show that the model handles perfect segmentations very well as they have a Hausdorff distance of 0 from the true mask. Instead complete misses will typically increase the size of the confidence bands, appropriately so as they indicate a failure of the model in that instance.\"}", "{\"comment\": \"We would like to thank the reviewer for their further comments and for the effort they have put into reading and helping to improve the paper.\\n\\nRegarding the quantitative metrics on uncertainty quantification. If the reviewer doesn\\u2019t mind would they be able to clarify more specifically which metrics in particular they would like to see. We interpreted the mean aggregated coverage score, discussed in the reviewer\\u2019s first comments to be the same as the evaluation of equations (1) and (2) mentioned by the reviewer. As we clarified above these evaluations are included in the text in Figure 4 (for the polyps applications) and in Figures A21 and A25 for the brain imaging and teeth segmentation examples. Is the reviewer referring to alternative uncertainty metrics or are these sufficient? We would be very happy to include other measures which quantify the uncertainty. We included in Figures 5 and A17 measures of the performance among the different score transformations however we could also consider other measures to compare such as the width of the confidence bands or alternative measures.\\n\\nRegarding the comparison to other methods. This is indeed what we meant by the inclusion of risk control, i.e. we shall calculate equations (1) and (2) for sets designed for risk control [1] and other existing conformal inference methods \\u2013 these will under cover because they are not designed to control the coverage but instead an alternative error rate. This will help to explain the strengths of using confidence sets over these alternative approaches.\\n\\n[1] Angelopoulos, Anastasios N., et al. \\\"Conformal risk control.\\\" ICLR, 2024.\\n\\nThank-you for clarifying the type of transformation which you meant. It would also be interesting to explore this. We will add an exploration of the impact of using different kernel functions, used to measure similarity between v and a set A, and test how these perform relative to our existing transformations for the final version of the paper. \\n\\nWe will also label each of the rows of Figure 2 with the labels \\u201cScores\\u201d and \\u201cConfidence Sets\\u201d on the right hand side of the figure (we haven\\u2019t yet managed to do so due to a small latex issue but will do so when we figure that out). \\n\\nWith regards to the other points of the reviewer we have now resolved these and we thank the reviewer for pointing these out.\"}", "{\"comment\": \"Regarding the reviewers remaining questions. What we referred to as the original scores are indeed the scores which result from using the identity transformation. In order to improve the clarity of this in the paper we now refer to these scores as the logit scores or the untransformed logit scores throughout the paper instead of as the original scores. Furthermore the dashed lines in Figure 4 provide 95% uncertainty bands for the coverage, we have now clarified this in the caption of Figure 4. Instead the grey dashed line in Figure 5 indicates the value 1 at all levels, this is included for comparison because the best possible value of the inner and outer ratio in the respective plots is 1.\\n\\nWe thank the reviewer once more for their helpful comments and look forward to hearing their thoughts on our response and discussing any follow up questions that they may have.\"}", "{\"title\": \"Follow-up comments and clarifications\", \"comment\": [\"Thank you for carefully addressing my comments and concerns. I still have the following comments:\", \"I didn't mean metrics on the segmentation performance, but metrics on the uncertainty quantification. It would be great with aggregated metrics saying something about the overall performance for each method, to summarise their relative strengths and weaknesses. Or is this what you mean by the inclusion of risk control of the coverage?\", \"About score transformation: It is relevant to smooth the score contributions, but what I was thinking of was a variant of the score transformation that used a kernel instead of the sign function to measure similarity between v and a set A. Like a soft version of the current score transformation.\", \"Theorem 2.8: The H should have a subscript \\\\rho, with the distance metric.\", \"Line 269: Says \\\"poplys\\\".\", \"Figure 2: Label the rows of the image grid.\", \"Line 364: Missing start of the sentence, or just that the first letter should be capital?\", \"Figure 5: It still says \\\"Original scores\\\".\"]}", "{\"summary\": \"The authors formally present an approach that aims at inferring uncertainty margins to segmentations. They propose either take the logit score of a CNN and to threshold it to obtain this margin, or to threshold at a certain distance to the predicted segmentation. Threshold and type of margin (logit score / distance) is to be identified experimentally for a given dataset. Experiments on one public dataset are shown (containing still images from minmally invasive surgery).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors present the problem in a formal manner, relating it to existing work.\", \"The overall problem addressed is relevant.\"], \"weaknesses\": [\"The motivation for the scores functions (logit, distance, ...) is weak. The necessity to choose the type and to even mix them gives the overall approach a bit of a heuristic touch. (While I do understand that you would consider your contribution here to be in the formal derivation of underlying theory, i.e., very much the opposite of a heuristic.)\", \"The experiments only provide insights into one very narrow application. they are merely fulfilling the purpose of an illustation of the problem, but not a validation.\"], \"questions\": [\"You are testing on public data. Has your pretrained polyp segmentation algorithm been trained on the same public data?\", \"Are there any susequent video frames in the dataset, or images of the same polyp / patient? If there are, did you stratify your training / testing set accordingly?\", \"Please remove the reference to tumors throughout the paper. Polyps may be precursors to tumors, but they aren't any.\", \"You are using a dataset from different centers, there may be systematic differences in how the polyp areas are annotated - some annotators being more inclusive with respect to surrounding tissue, others being less. How does this variability impact on your measure?\", \"I might have missed it but what is the accuracy of your underlying segmentation algorithm? I would be under the impression that it is a well performing algorithm on a rather easy segmentation task? How does your approach relate to extrema in algorithmic performance, i.e., perfect segmentations or complete misses?\", \"You are stating \\\"In order to make efficient use of the data available, the learning dataset can in fact contain some or all of the data used to train the image segmentor.\\\" Your training data may be fairly overfittet impacting on your logit score and, hence, your choice of margin (logit/distance, thresholds). Wouldn't it be a safer approach to generate cross-validated logit functions and use them in the comparison?\", \"I understand that the primary contribution of this study is the theory offered. Still, you are stressing that your algorithm is a very lightweight addition to any pretrained segmentation algorithm. And there are a lot of standard computer vision / biomedical image data sets for segmentation available, as well as pretrained algorithms. Would you be able to generate segmentations maps for predefined certainty levels, and compare these levels with the testing performances across a larger set of applications? It would be quite convincing, if e.g., your 90% certainty map of the outer margin would indeed include 90% pixels of a test set or lead to a sufficiently large overlap (that has previously been defined) in 90% of all test cases.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"none\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank the reviewer for their further thoughts. We agree that one of the main contributions is methodological however they are practical as well. I.e. the distance transformations which we propose are also the difference between the methods working and not working (see e.g. Figure A20).\\n\\nWe're grateful for the thoughts of the reviewer regarding the brain imaging application. We would like to clarify that given the time available for the additional analyses we were not able to explore all possible choices of score transformations. Moreover it would in fact be possible to explore score transformations which take the shape/point of the brain into account and adjust accordingly - allowing for a more variable width across the brain. We would be happy to explore this for the final version of the paper. While it is indeed natural to add millimetres of uncertainty, doing so conformally is key in order to provide valid inference.\"}", "{\"comment\": \"We are very grateful for the reviewer\\u2019s comments and remarks. We agree that providing uncertainty quantification for black box neural network models is an interesting problem.\\n\\nRegarding additional datasets we have taken the reviewer\\u2019s advice on board and have now included extensive analysis of two further datasets. The first is a brain imaging dataset. And, the second, following the reviewer\\u2019s suggestion, involves teeth segmentation. As now shown in the main text (Sections 4 and 5) our method works very well in these scenarios providing meaningful confidence sets which have robust confidence guarantees. This demonstrates that our method extends robustly to other settings and models. Further analysis is shown in Sections A.7 and A.8. \\n\\nWe also now included Dice, Precision and Recall metrics (evaluated on the corresponding validation dataset) for each of the 3 segmentation algorithms considered. (I.e. PraNet, HDBET and the UNET based GAN model we used for teeth segmentation). See the relevant table Section A.9 in the updated draft for full details. The results are very helpful in understanding how to performance of the models affects the performance of the confidence sets. In particular improvements in these metrics correspond to improvements in the quality of the confidence sets based on the distance transformed scores. The HDBET model has the highest dice score and has the tightest confidence sets as a result. Note that other score transformations such as the identity (which yields the original logit scores) do not have this monotonicity property. Indeed Figure A20 shows that the untransformed logit scores can be very uninformative, but the degree to which this is true depends on the application. In order to formalize the relationship between the distance transformed scores and the quality of the model we have provided new results (Theorems 2.8 and A.4) which further motivate the use of the distance transformation. Comparison of between the metrics now shown in Section A.9 and the performance of the confidence sets helps to illustrate this result.\\n\\nRegarding the comparison to other baseline models, we shall include measures of the performance (e.g. relative to SANet and UACAnet and others) in the final version of the manuscript.\\n\\nRegarding the technical terms (FWER/FDR) we have fully written out their acronyms for clarity where they are introduced and have included a new section of the Appendix (Section A.10) in which these are formally defined and where we discuss the relationship between them and different measures of coverage in the segmentation setting.\\n\\nWe would like to apologize for the spelling error of polyps which we have now corrected in the updated draft and we thank the reviewer for pointing this out. \\n\\nWe thank the reviewer once more for their helpful comments and look forward to hearing their thoughts on our response and discussing any follow up questions that they may have.\"}" ] }
2Oh2EOcFSO
Can a Bayesian oracle prevent harm from an agent?
[ "Yoshua Bengio", "Michael K. Cohen", "Nikolay Malkin", "Matt MacDermott", "Damiano Fornasiere", "Pietro Greiner", "Younesse Kaddar" ]
Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees? With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we consider estimating a context-dependent bound on the probability of violating a given safety specification. Such a risk evaluation would need to be performed at run-time to provide a guardrail against dangerous actions of an AI. Noting that different plausible hypotheses about the world could produce very different outcomes, and because we do not know which one is right, we derive bounds on the safety violation probability predicted under the true but unknown hypothesis. Such bounds could be used to reject potentially dangerous actions. Our main results involve searching for cautious but plausible hypotheses, obtained by a maximization that involves Bayesian posteriors over hypotheses. We consider two forms of this result, in the i.i.d. case and in the non-i.i.d. case, and conclude with open problems towards turning such theoretical results into practical AI guardrails.
[ "AI safety", "probabilistic guarantees", "guardrails", "safe-by-design AI", "Bayesian inference", "posterior convergence" ]
Reject
https://openreview.net/pdf?id=2Oh2EOcFSO
https://openreview.net/forum?id=2Oh2EOcFSO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "u7VlL1hLw5", "rrbbhxPZp8", "ixuMLb1LV5", "ivJJB5uIzs", "gnvSTQYFLn", "eyF1jQfEWi", "eT4kuo9BKY", "JvzstXgFIj", "DgWvP835G1", "D8Ro69AvD8", "9zYQuTBbl7", "4dO3mLwEGq", "47YnKT9MpX" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732376704466, 1732377108761, 1732377125994, 1730132722435, 1730662190505, 1732376870575, 1737523395460, 1730774214463, 1730669454268, 1732376935906, 1732376857805, 1732377074139, 1734771480117 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission424/Authors" ], [ "ICLR.cc/2025/Conference/Submission424/Authors" ], [ "ICLR.cc/2025/Conference/Submission424/Authors" ], [ "ICLR.cc/2025/Conference/Submission424/Reviewer_bi5T" ], [ "ICLR.cc/2025/Conference/Submission424/Reviewer_ADYh" ], [ "ICLR.cc/2025/Conference/Submission424/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission424/Reviewer_n2RR" ], [ "ICLR.cc/2025/Conference/Submission424/Reviewer_pbrg" ], [ "ICLR.cc/2025/Conference/Submission424/Authors" ], [ "ICLR.cc/2025/Conference/Submission424/Authors" ], [ "ICLR.cc/2025/Conference/Submission424/Authors" ], [ "ICLR.cc/2025/Conference/Submission424/Area_Chair_M29S" ] ], "structured_content_str": [ "{\"comment\": \"> This is a very well written paper, and it is easy to follow.\\n\\nThank you for saying so.\\n\\n> The authors present an upper bound on the harm probability, though it appears to be highly conservative. It would be valuable if they could offer a convergence rate or practical guarantees to make the framework more usable.\\n\\nIn the i.i.d. case, results about the rate require further assumptions about the model class, so we simply provide an outline of how to achieve them. In our general setting, Equation 3 is the tightest claim about rates we can make. Note that convergence rates are not relevant for the non-iid section because in that case we cannot expect to (safely) achieve convergence even in the limit. See for example [this paper](https://ieeexplore.ieee.org/document/9431093) on why in the RL case, wrongly assuming ergodicity (and thus convergence) can be catastrophic.\\n\\n> Since the theoretical results lack practical assurances, I would have appreciated more experimental validation, especially in complex and realistic settings.\\n> Obtaining a Bayesian oracle could be very challenging (posterior distribution).\\n\\nTo validate our theoretical results empirically in the context of our bandit experiment, we did conduct additional experiments comparing our bound against alternatives. We tested a minimal set of indices that would satisfy our theoretical requirements (one theory maximizing the posterior plus those with posterior $\\u2265 \\u03b1$) along with various aggregation methods (different quantiles, weighted means, etc.). This revealed that our formulation achieves good empirical trade-offs between safety and performance. Another important aspect is that our bound provides theoretical guarantees to overapproximate the harm probability, while alternatives that sometimes achieved higher rewards did so at the cost of more deaths. Due to space constraints, we didn't include these additional comparison results in the paper, but would be happy to add them to the appendix.\\nOn another note, it should be noted that taking this approach a step further and turning the proposed bounds into scalable algorithms is not trivial and will require many more years of research, which is why the paper indicates at the end several needed research directions in order to achieve this. Hence, only small-scale experiments can be done at this point, and have been done and described in the paper, to at least validate the ideas and mathematical results in the paper, showing how the bounds can be used to deliver safer decision-making in a bandit scenario. Indeed, we hope the results in this paper will help recruit more attention to the problem of modelling a Bayesian posterior in complex and realistic settings.\"}", "{\"comment\": \"> For the applicability, my core concern is that the main problem in AI is not providing safety guarantees under certain assumptions, but rather designing a Bayesian agent that actually works well for a given problem while satisfying these assumptions. To go into detail, section 3 only provides \\\"law of large numbers\\\"-style guarantees which are not useful in practice. A small paragraph on the rate of convergence (which would be very helpful to know) is included but essentially is very problem-dependent and thus not discussed in detail in this more general framework. In the experimental evaluation, where Prop. 3.4 is utilized, it is not even clear whether t is large enough for the guarantee statement of Prop. 3.4 to hold (on top of Prop 3.4 not being applicable due to non-i.i.d. as the authors mention themselves). Section 4 then relaxes to probabilistic guarantees, which is a more practical approach. However, to apply the results of section 4 in practice it ultimately relies on defining a hyperparameter alpha. On the theoretical side, the guarantees in section 4 only hold if alpha is chosen small enough (which is impossible to know without knowing the system in the first place) and on the practical side, the evaluation in section 5 shows that choosing alpha too large can have catastrophic consequences, even for the simple bandit system considered in section 5. In summary, I do not see any immediate way to take advantage of the theoretical results the paper provides.\\n\\nThe paper explicitly states that the theoretical results only open the door to a possible direction for AI safety and that at least five challenges remain in order to turn this kind of approach into efficient and reliable guardrails. The paper is not claiming otherwise and in our opinion should be considered for its value in highlighting the theoretical basis for future work in AI safety and conservative guardrails. \\n\\nIt is true that the bounds of Section 3 are asymptotic in data, whereas ultimately we aspire to have non-asymptotic safety bounds. Section 4 explores one way to achieve that, and we are currently investigating others. This paper is a first step.\\n\\n> This is also amplified by the fact that the main body essentially does not discuss related work, and how existing approaches can be embedded into the framework.\\n\\nWe will add a number of related works to the paper, as suggested by the reviewers.\\n\\n> As a minor comment, from a reader's POV, the paper can be hard to follow at times, especially in the formal sections. Many paragraphs are written in a very technical way, assuming a deep mathematical background. While this surely can be expected from an audience like ICLR, I feel like many sections disrupt the flow of the paper, e.g. the two paragraphs \\\"Setting\\\" (l.155ff and l. 268ff). While these are defnitely important to make the paper rigorous, they are not strictly required to convey the main ideas of the paper. In the interest of readability, it might be advantageous to instead outsource the technical definitions to a separate section.\\n\\nThanks for the suggestion regarding technical material. We agree that making as much of the paper as possible accessible to a broader audience of ICLR researchers is important.\\n\\nRegarding larger models, it is difficult to anticipate how the proposed approaches would perform or larger models because in our opinion there are several technical and algorithmic questions that need to be addressed (the five listed in the last section), which would result in methods that have the same spirit as what we proposed but probably a quite different algorithmic form.\\n\\n> Q1. Can you detail how you can utilize the CLT to obtain convergence rates (line 238ff)? If you applied this to the example in seciton 5, would it yield practical bounds?\\n\\nConvergence rates are held up by the model closest to $\\\\tau^*$ (smallest $D_{KL}(\\\\tau^* || \\\\tau)$). We are interested in the sample mean of the quantity $D_{\\\\tau}^{t+1} - D^t_{\\\\tau}$. The expectation is positive, since it is a KL divergence, so the sample mean will converge to a positive number at a rate governed by the CLT.\\n\\n> Q2. Are the results in Propositions 4.4 to 4.6 tight (in a similar vein as Remark 4.3 shows for Proposition 4.2)?\\n\\nUnfortunately not. In general, the harm probability predictions are like off policy predictions in RL--we are not assuming that it is possible for the data $Z_{1:\\\\infty}$ to contain direct observations of the harm probability. In this regime, tight bounds do not appear possible without making assumptions that are unlikely to be founded in practice.\"}", "{\"comment\": \"> Q4: Can you provide some heuristics on choosing a safe, yet effective \\u03b1 a priori? Which information might be helpful for this from e.g. which model paramteres have the biggest impact on \\u03b1 and what information from a domain expert could be incorporated?\\n\\nIntuitively, one might consider the number of qualitatively different models of harm that one could make a case for, which wouldn\\u2019t be easily ruled out by the data. Calling this number $n$, if after we get lots of data, those $n$ models will plausibly be in the top set along with the truth, and $\\\\alpha^{-1}$ should be $O(n)$. A domain expert with an understanding of the setting and the data generating process could help make such an assessment.\\n\\n> Q5. Can you make any predictions on how you proposed guardrails perform on larger, more complex models? In particular, how do you expect the overestimation of harm (see Fig. 2) to be affected?\\n\\nIt is difficult to anticipate how the proposed approaches would perform or larger models because in our opinion there are several technical and algorithmic questions that need to be addressed (the five listed in the last section), which will result in methods that have the same spirit as what we proposed but probably a quite different algorithmic form.\\n\\n\\n> Q6. Are there any existing works in which your framework fits, i.e. for which you can give (probabilistic) guarantees where they were previously unavailable? If not, are there certain settings in which you can make reasonable a priori assumptions such that your framework is applicable and concrete guarantees can be derived for a given data set?\\n\\nFor this framework to work robustly in realistic settings, we unfortunately await further work on computing reliable approximations of the posterior. But suppose we pretended that haphazard posterior approximations were robust, such as ensembles (see discussion in Wilson and Izmailov (2022)). Then we could apply our framework, although we wouldn\\u2019t confidently call these a priori assumptions \\u201creasonable\\u201d. In that setting, our work lends credence to the practice of considering the worst-case within an ensemble.\\n\\nHowever, because of the challenges we listed in the end, and in particular the challenge of approximating the posterior over large theories, we do not feel that it is feasible to go beyond small-size settings like those studied in the experimental section.\\n\\n> minor comments: line 96: explain what q is; line 193: introduce delta as dirac notation beforehand; the axis and legends in the figures in section 5 are barely readable\\n\\nYou\\u2019re right. Thanks for asking about $q$, we will clarify that it is an estimator of the true posterior. We will also introduce $\\\\delta$ as a notation for Dirac before using it, and we will redo the figures in section 5 to make them more readable.\"}", "{\"summary\": \"The paper is tackling the problem of safety in AI. The authors take the view of defining safety as avoiding certain undesirable states in specific contexts.\\nThey introduce a framework based on Bayesian inference from which an agent can derive safe policies that come with (probabilistic) guarantees of preventing harm.\\nThe approach is safe-by-design, i.e. able to prevent undesired outcomes even if no concrete example of harmful states was ever observed in the system.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The main strength of the paper is the introduction of a (as far as I am aware) novel view at safe-by-design in AI at runtime and opening the possibilities for future Bayesian methods to utilize the safety guarantees shown in the paper. Allowing for safety guarantees and steering future work in a direction that empashizes those is a significant problem in AI.\\nI especially appreciate the discussion on open problems of the approach in the conclusion.\\n\\nThe theory being developed is also quite general and spans over a wide range of possible systems/problems.\\n\\nThe paper is well motivated and generally well structured, introducing formal concepts as needed in the respective sections. A small experimental evaluation is performed and well discussed. Proofs are provided in the appendix and I could not find any mistakes.\", \"weaknesses\": \"My main critique points of the paper are the lack of technical novelty (or at least it is not clarified enough if there are new results) and questions on applicability.\\n\\nFor the former, essentially all Propositions and Lemmata are either adaptations of well known results (Prop 3.1), taken from previous literature (Lemma 4.1, Prop 4.2), or rather simple Corollary derived from them (Lemma 3.3, Props. 3.4, 4.4, 4.5, 4.6). For Prop. 4.2 is is shown that the result is tight (Remark 4.3). To me it did not became clear whether this is a known result or a new contribution. It is also not clear whether the derived results (Props. 4.4, 4.5, 4.6) are also tight as a consequence or whether there is room for improvement.\\nFor Prop. 4.5 and 4.6 in particular, restricting the possible world models indeces to $\\\\mathcal{I}^{\\\\alpha}_{1\\\\colon t}$ is essential, however, the choice of definition of $\\\\mathcal{I}^{\\\\alpha}$ is not really motivated. At the same time, Fig. 2(a) shows a substantial gap between applying Prop 4.6 in practice, and the theoretical optimum. This begs the question whether a different definition of $\\\\mathcal{I}^{\\\\alpha}$ (e.g. a simple cutoff, or requiring $\\\\mathcal{I}^{\\\\alpha}$ to have a certain probability mass) has potential to yield tighter bounds. However, as the definition of $\\\\mathcal{I}^{\\\\alpha}$ is not motivated, these questions remain unadressed.\\n\\nFor the applicability, my core concern is that the main problem in AI is not providing safety guarantees under certain assumptions, but rather designing a Bayesian agent that actually works well for a given problem while satisfying these assumptions. To go into detail, section 3 only provides \\\"law of large numbers\\\"-style guarantees which are not useful in practice. A small paragraph on the rate of convergence (which would be very helpful to know) is included but essentially is very problem-dependent and thus not discussed in detail in this more general framework. In the experimental evaluation, where Prop. 3.4 is utilized, it is not even clear whether $t$ is large enough for the guarantee statement of Prop. 3.4 to hold (on top of Prop 3.4 not being applicable due to non-i.i.d. as the authors mention themselves). Section 4 then relaxes to probabilistic guarantees, which is a more practical approach. However, to apply the results of section 4 in practice it ultimately relies on defining a hyperparameter alpha. On the theoretical side, the guarantees in section 4 only hold if alpha is chosen small enough (which is impossible to know without knowing the system in the first place) and on the practical side, the evaluation in section 5 shows that choosing alpha too large can have catastrophic consequences, even for the simple bandit system considered in section 5. In summary, I do not see any immediate way to take advantage of the theoretical results the paper provides. This is also amplified by the fact that the main body essentially does not discuss related work, and how existing approaches can be embedded into the framework.\\n\\n*These weaknesses make the paper feel like more of a statement paper with some additional mathematical background, rather than a fully fletched research paper.*\\n\\nAs a minor comment, from a reader's POV, the paper can be hard to follow at times, especially in the formal sections. Many paragraphs are written in a very technical way, assuming a deep mathematical background. While this surely can be expected from an audience like ICLR, I feel like many sections disrupt the flow of the paper, e.g. the two paragraphs \\\"Setting\\\" (l.155ff and l. 268ff). While these are defnitely important to make the paper rigorous, they are not strictly required to convey the main ideas of the paper. In the interest of readability, it might be advantageous to instead outsource the technical definitions to a separate section.\", \"questions\": \"1. Can you detail how you can utilize the CLT to obtain convergence rates (line 238ff)? If you applied this to the example in seciton 5, would it yield practical bounds?\\n\\n2. Are the results in Propositions 4.4 to 4.6 tight (in a similar vein as Remark 4.3 shows for Proposition 4.2)?\\n\\n3. How do you motivate the definition of $\\\\mathcal{I}^{\\\\alpha}_{1\\\\colon t}$ and have you considered different approaches?\\n\\n4. Can you provide some heuristics on choosing a safe, yet effective $\\\\alpha$ a priori? Which information might be helpful for this from e.g. which model paramteres have the biggest impact on $\\\\alpha$ and what information from a domain expert could be incorporated?\\n\\n5. Can you make any predictions on how you proposed guardrails perform on larger, more complex models? In particular, how do you expect the overestimation of harm (see Fig. 2) to be affected?\\n\\n6. Are there any existing works in which your framework fits, i.e. for which you can give (probabilistic) guarantees where they were previously unavailable?\\nIf not, are there certain settings in which you can make reasonable a priori assumptions such that your framework is applicable and concrete guarantees can be derived for a given data set?\", \"minor_comments\": [\"line 96: explain what $q$ is\", \"line 193: introduce delta as dirac notation beforehand\", \"the axis and legends in the figures in section 5 are barely readable\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the problem of evaluating an unknown world model from observed data to determine whether it satisfies a certain safety metric. The safety metric, or guardrail, is a binary variable $H$, taking other variables in the world model as input. The authors utilize a Bayesian approach. It assumes access to the actual prior distribution over the groundtruth world model. The authors first prove that under certain parametric assumptions, the posterior distribution over candidate models will uniquely converge to the ground-truth model at the limit of large samples. Building on this concentration results, the authors derive an upper bound over the posterior probability of the harmful event $H = 1$ conditioning on the observed data. This concentration bound is then extended to non-i.i.d settings where observed samples are correlated. Finally, simulations were performed, and results supported the proposed theory.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-organized and clearly written. All the theoretical assumptions have been stated.\", \"The proposed concentration results seem reasonable. The derivations seem technically sound.\", \"Training large AI systems to satisfy certain safety criteria (i.e., with guardrails) is an exciting problem. This paper formulates this problem as a hypothesis-testing problem and presents non-trivial algorithms to perform the test. This problem formulation could be inspiring for other AI researchers across domains.\"], \"weaknesses\": [\"The concentration result in Prop. 3.1 assumes \\\"all theories in $M$ are distinct as probability measures.\\\" This assumption does not seem to hold many common probabilistic models. For instance, in the linear component analysis, the number of independent components is generally not uniquely discernible (i.e., not identifiable) with non-linear mixing functions. Also, the number of latent components in Gaussian mixtures is generally not identifiable from the observed data. This seems to suggest that the application of the proposed concentration results might be limited.\", \"The proposed concentration results also assume access to the actual prior distribution generating the ground-truth world model. It is unclear whether the upper bound could still hold when the prior distribution is unknown and misspecified.\", \"Other concentration bounds exist over the target estimates using Baysian methods. Generally, one should be able to translate empirical concentration bounds to the Bayesian settings. For instance, (Osband & Van Roy, ICML'17) translates the concentration bounds for online reinforcement learning to Bayesian regret. How does the proposed method compare to other related work? This paper should include a section discussing related work in large deviation theory and how this paper is situated in the existing literature.\", \"Reference: _\\\"Osband, Ian, and Benjamin Van Roy. \\\"Why is posterior sampling better than optimism for reinforcement learning?.\\\" International conference on machine learning. PMLR, 2017.\\\"_\"], \"questions\": \"1. How does the upper bound in Prop. 3.4 apply if the prior distribution $P$ is misspecified?\\n2. How does this work compare to the existing literature on concentration bounds in the Bayesian setting? For instance, these methods could include analysis of Bayesian regret in RL, and PAC Bayes.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> How sensitive are the results to the choice of priors in the Bayesian framework? Can the authors discuss the robustness of the proposed approach under different prior choices?\\n\\nLike with Bayesian approaches in general, you are right to point out the importance of priors, especially in a small data regime. However, our long-term goal is to develop safety guardrails for AGI-level models, i.e., trained with very large quantities of data. In that regime, it makes sense to use very agnostic and thus very flat priors which are uniform for a given description length and then decay exponentially when multiple description lengths are possible for a variable (such as strings representing formulae or programs). The place where prior knowledge then comes in is in the language used to compute description length, and a sensible approach to this may be to use existing human-crafted languages (like the languages of mathematics and programming) for this purpose.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper explores the problem of designing AI systems that satisfy probabilistic safety guarantees. Within a Bayesian framework and given the safety specifications (as a probability), the authors provide risk bounds for potentially harmful decisions, showing that the probability of harm can be upper-bounded by a probability that can be estimated by approximating Bayseian posterior over theories given the observed data. They study two settings: i.i.d case and non i.i.d case and provide a simple experiment to evaluate the performance of safety guardrails.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This is a very well written paper, and it is easy to follow.\\n\\nThe proposed approach represents a promising initial step toward designing AI systems that ensure safety through built-in probabilistic guarantees, rather than relying solely on external safety mechanisms.\\n\\nThe authors also outline several open problems for future work.\", \"weaknesses\": \"The authors present an upper bound on the harm probability, though it appears to be highly conservative. It would be valuable if they could offer a convergence rate or practical guarantees to make the framework more usable. Additionally, it is unclear how this approach compares to other conservative methods for preventing harm.\\n\\nSince the theoretical results lack practical assurances, I would have appreciated more experimental validation, especially in complex and realistic settings. \\n\\nObtaining a Bayesian oracle could be very challenging (posterior distribution). \\n\\nOverall, while the paper introduces a promising method for designing safer AI systems, it would greatly benefit from additional components (both theoretical and experimental) before publication.\", \"questions\": \"None.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the problem of bounding the probability of some event in the context of an unknown but consistent distribution and a Bayesian setting.\\n\\nThe paper is motivated by the prevention of harm by AI agents. In short, harm is inherently unavoidable since in real applications we have no direct access to the distribution governing the environment. However, if we assume a fixed distribution, and a prior assumption of that distribution, we can get better and better approximations when data is presented to us, by using the data to update our prior knowledge of the distribution. With this, we can theoretically bound the probability of doing harm. In deployment, actions whose probability of harm is larger than some threshold can be blocked.\", \"the_paper_explores_two_cases\": \"incoming data as iid and non iid, and obtains bounds on the probability of harm in both cases.\\n\\nThe paper presents an experimental evaluation on a multi-armed bandits example, blocking actions that are considered unsafe according to the different bounds obtained as well as a baseline (with an unrealistic assumption of the underlying model). The paper ends with a discussion of the open problems still to be solved to be able to use this method as a reliable guardrails for AI agents.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"S1. The topic of AI safety is timely and relevant for ICLR.\\n\\nS2. The theoretical results (as far as I could check) are sound.\\n\\nS3. The experimental evaluation serves to showcase how these bounds could be used in a realistic scenario.\", \"weaknesses\": \"W1. I understand the appeal to frame this work in the context of harm by an AI agent, and I think it is an interesting point. However, there is nothing inherent to \\\"harm\\\" in the concept presented. The concept of \\\"harm\\\" could be substituted by \\\"reward at a state\\\" and we could be discussing the same results in a different light. I think the paper may benefit from a more general motivation.\\n\\nW2. While the experimental evaluation is welcome, it is a very simple example, and one wonders if these theoretical bounds would find applicability in problems that are more complex and close to the real applications of guardrails.\\n\\nW3. The concept of guardrails presented here, as an algorithm that blocks an action if it shows an expected harm larger than some threshold, is very similar to the concept of probabilistic shielding in MDPs [1] (which is essentially the \\\"cheating guardrail in Sec. 5), and this can be extended to partially observable MDPs to eliminate the (unrealistic) assumption of having full knowledge of the ground truth [2]. The paper would benefit from comparing to these methods, especially with [2].\\n\\nW4. The paper does not engage in some recent work on defining harm in similar scenarios, see for example [3] or [4]. It could be useful to understand, in light of different definitions of harm, whether the results are specific to harm prevention, or can be framed in a more general understanding of bounds over rewards.\\n\\n\\n\\nOTHER (MINOR) REMARKS\\n\\nR1. The paper is mathematically dense and difficult to follow in parts. I'm not sure whether this is a weakness on its own, but I have the feeling that the ideas conveyed are simpler than the dense mathematical presentation seems to suggest. \\n\\n\\nREFERENCES\\n\\n[1] N. Jansen et al. Safe Reinforcement Learning Using Probabilistic Shields. CONCUR 2020.\\n\\n[2] S. Carr et al. Safe Reinforcement Learning via Shielding under Partial Observability. AAAI 2024.\\n\\n[3] S. Beckers et al. Quantifying Harm. IJCAI 2023.\\n\\n[4] J. G. Richens. Counterfactual harm. NeurIPS 2022.\", \"questions\": \"Q1. How do you envision these guardrails to be applied in realistic scenarios? For example, consider the situation of a language model trying to obtain your passwords, or an autonomous car trying to crash with another vehicle. Could this notion of harm be applied efficiently to these realistic scenarios?\\n\\nQ2. How sensitive are the results to the choice of priors in the Bayesian framework? Can the authors discuss the robustness of the proposed approach under different prior choices?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Prop. 3.1 assumes \\\"all theories in M are distinct as probability measures.\\\" This assumption does not seem to hold many common probabilistic models.\\n\\nOur formalism allows for countably many discrete models. While it would be unwieldy in practice, the model classes you discuss could be replaced with a model class that is dense in them, and any duplicates could simply be removed. Since we are just giving formal results, not proposing a construction, the impracticality is not particularly problematic.\\n\\nAlso, it is important to note that the \\u201ctheories must be distinct\\u201d assumption is a limitation of the i.i.d. setting only (Section 3). However, our non-i.i.d. results (Section 4) do not require theories to be distinct as probability measures. This more general setting can handle cases like the ones you may be thinking of, where multiple parameterizations might lead to the same distribution. In fact, our experimental results use this more general setting, since in the bandit environment, multiple reward weight vectors can represent the same collection of reward distributions.\\n\\n> This seems to suggest that the application of the proposed concentration results might be limited.\\n\\nWhile our theoretical results make specific assumptions, we've tested their practical utility in our bandit setting through experiments comparing our approach against alternatives. We've experimented with a minimal set of indices satisfying Prop 4.5 (one theory maximizing the posterior plus those with posterior $\\u2265 \\u03b1$) along with various aggregation methods to estimate the harm (different quantiles, weighted means (arithmetic/geometric/harmonic) with different power exponents, various weightings based on posterior probabilities). These experiments showed that our formulation achieves good safety-performance trade-offs in practice (while some alternatives occasionally achieved higher rewards, they lacked theoretical guarantees and led to more deaths), and suggests practical applicability. Due to space constraints, these additional comparison results weren't included, but could be added to an appendix.\\n\\n> The proposed concentration results also assume access to the actual prior distribution generating the ground-truth world model. It is unclear whether the upper bound could still hold when the prior distribution is unknown and misspecified.\\n\\nIn our view, there is no notion of \\u201ctruth\\u201d to the prior: it represents the beliefs before seeing the data. The only thing that matters asymptotically is if the correct world model has a positive probability under the prior (and of course the larger the better) and that can be obtained if the priors are non-parametric (as we think they should). In practice and with large-scale large-data applications in mind, we anticipate that a good choice of prior is fairly agnostic and flat, with exponential decay of probabilities as a function of some description length of the theories but otherwise a uniform assignment of probability mass (in a discrete space). If the prior is parametric and does not cover the correct theory then of course Bayesian posteriors can be extremely wrong, which is not acceptable in the context of safety guardrails, and we will point that out in the paper.\\n\\n> Q1. How does the upper bound in Prop. 3.4 apply if the prior distribution P is misspecified?\\n\\nIf by misspecified you mean that the correct theory is not covered by the prior, then the theorems do not apply, by definition. If, instead, the concern is about whether the prior needs to match some \\\"ground truth\\\" distribution, the key insight is that the prior simply represents initial beliefs, and our bounds hold as long as these beliefs assign positive probability to the correct theory (cf. our previous response, just above).\\n\\n> Other concentration bounds exist ... for instance, (Osband & Van Roy, ICML'17) translates the concentration bounds for online reinforcement learning to Bayesian regret.\\n> \\n> Q2. How does this work compare to the existing literature on concentration bounds in the Bayesian setting? For instance, these methods could include analysis of Bayesian regret in RL, and PAC Bayes.\\n\\nWe would characterize Osband and Van Roy as studying how to translate concentration bounds in a pure predictive setting into an MDP setting where exploration is necessary. We don\\u2019t propose an exploration strategy to get more information about harm probability. In the iid setting, there is no such thing as exploration, and in the non-iid setting, it\\u2019s not clear how to ensure in a general setting that the exploration isn\\u2019t itself harmful. So the work of Osband and Van Roy and others on the question of translating concentration bounds into the RL setting seems to be addressing a question that is orthogonal to ours. We will make this comparison in the paper to better situate our work.\"}", "{\"comment\": \"> W1. I understand the appeal to frame this work in the context of harm by an AI agent, and I think it is an interesting point. However, there is nothing inherent to \\\"harm\\\" in the concept presented. The concept of \\\"harm\\\" could be substituted by \\\"reward at a state\\\" and we could be discussing the same results in a different light. I think the paper may benefit from a more general motivation.\\n\\nThat is true, the bounds could in principle be used for other purposes, but bounds on harm probability are the motivation for both the theory and the experiments. Bounds being bounds, they will overestimate the true probabilities, which may not be generally useful, but it is still very useful in the context of safety, when tail risks can be unacceptable and we are willing to construct conservative decision rules. On top of that, there is a definitional aspect: since these bounds are used as guardrails to prevent actions that trigger $H=1$ events in the paper, $H=1$ inherently represents outcomes we want to avoid, making \\\"harm\\\" an appropriate framing. \\nOn another note, it might be good to note that we do consider the \\\"reward at a state\\\" case you mention in our bandit example, where, in this setting, a reward that would be too high is considered harmful.\\nGranted, the generality of the mathematical framework allows it to be applied to other settings, and thanks for pointing this out: we will clarify that in the paper. However, the motivation and intended application (in the scope of this paper) is about preventing harmful outcomes in AI systems.\\n\\n\\n> While the experimental evaluation is welcome, it is a very simple example, and one wonders if these theoretical bounds would find applicability in problems that are more complex and close to the real applications of guardrails.\\n\\nMuch more work is needed (as pointed out at the end) to turn the proposed bounds into methods that can scale to complex problems. It is both about the nature of the bounds themselves (which involve full world models as objects to maximize over) and about the challenge of designing tractable approximations of the Bayesian posterior. However, given the high stakes in AI safety as we move towards AGI, it is really important to make such theoretical steps in order to guide the research agenda towards conservative decision-making in which safety guardrails are put in place around powerful AI systems.\\n\\n\\n> The concept of guardrails presented here, as an algorithm that blocks an action if it shows an expected harm larger than some threshold, is very similar to the concept of probabilistic shielding in MDPs \\\\[1\\\\] (which is essentially the \\\"cheating guardrail in Sec. 5), and this can be extended to partially observable MDPs to eliminate the (unrealistic) assumption of having full knowledge of the ground truth \\\\[2\\\\]. The paper would benefit from comparing to these methods, especially with \\\\[2\\\\].\\n\\nThanks for pointing this out. We will add those references and how they relate to our work.\\n\\n> The paper does not engage in some recent work on defining harm in similar scenarios, see for example \\\\[3\\\\] or \\\\[4\\\\]. It could be useful to understand, in light of different definitions of harm, whether the results are specific to harm prevention, or can be framed in a more general understanding of bounds over rewards.\\n\\nThanks for sharing these papers, which address different aspects of harm quantification but should certainly be mentioned in our paper, and we will do that. Note that the non-iid bounds in our paper are relevant to the scenario of distributional shift.\\n\\n\\n> How do you envision these guardrails to be applied in realistic scenarios? For example, consider the situation of a language model trying to obtain your passwords, or an autonomous car trying to crash with another vehicle. Could this notion of harm be applied efficiently to these realistic scenarios?\\n\\nIn order to apply the ideas in our paper in realistic scenarios where the world model is not trivially small, we provided five research directions in the last section of our paper. An important step is to go from manipulating full world models in the bound calculation to being able to focus on only a subset of the random variables, and we are currently working on this question, where it is enough to sample \\u201cdangerous scenarios\\u201d of how harm could occur in order to get a bound. Other challenges include the fact that \\u201charm\\u201d is usually not directly and explicitly observed in natural data (like text or videos) but instead is a latent variable that explains some of the words or images in the data, and that we wish to be able to informally define harm in natural language. Another challenge is of course the computationally efficient estimation of the Bayesian probabilities themselves, and we have ideas on how to use recent advances in amortized probabilistic inference to get such efficient approximations.\"}", "{\"comment\": \"> Essentially all Propositions and Lemmata are either adaptations of well known results (Prop 3.1), taken from previous literature (Lemma 4.1, Prop 4.2), or rather simple Corollary derived from them (Lemma 3.3, Props. 3.4, 4.4, 4.5, 4.6). For Prop. 4.2 is is shown that the result is tight (Remark 4.3).\\n\\nThis is substantially correct. So we think the question at stake is: can it be groundbreaking to construct a theoretical solution to a problem (the problem of harm avoidance in this case) by finding existing theoretical machinery that is fit-for-purpose and putting \\u201cwrappers\\u201d on it? We don\\u2019t mean this to be rhetorical; \\u201cno\\u201d is a valid opinion here. But a problem with \\u201cno\\u201d is that in some contexts, it means that only overcomplicated solutions to a problem will make it to ICLR, or no solutions at all. In defense of \\u201cyes\\u201d, simplicity is a virtue. If we provide a valid solution to an important problem, but it is not complicated enough, that hardly seems bad.\\n\\nThe argument we just provided would fail if the average ML theorist or statistician would quickly reproduce our results when posed the question, \\u201cHow can an agent lower bound harm probability in theory?\\u201d. Then it wouldn\\u2019t be a problem if no solutions make it to ICLR. But we don\\u2019t think this is the case.\\n\\nRegarding Remark 4.3, we are not aware of this result having been proven before.\\n\\n> It is also not clear whether the derived results (Props. 4.4, 4.5, 4.6) are also tight as a consequence or whether there is room for improvement. For Prop. 4.5 and 4.6 in particular, restricting the possible world models indeces to I1:t\\u03b1 is essential, however, the choice of definition of I\\u03b1 is not really motivated. At the same time, Fig. 2(a) shows a substantial gap between applying Prop 4.6 in practice, and the theoretical optimum. This begs the question whether a different definition of I\\u03b1 (e.g. a simple cutoff, or requiring I\\u03b1 to have a certain probability mass) has potential to yield tighter bounds. However, as the definition of I\\u03b1 is not motivated, these questions remain unadressed.\\n> \\n> Q3. How do you motivate the definition of I1:t\\u03b1 and have you considered different approaches?\\n\\nThese results are not tight, unfortunately, but we conjecture that we cannot achieve tight bounds in general without making unreasonable assumptions. \\nDespite that, we did conduct experiments to validate this $I^\\u03b1_{Z_{1:t}}$ and the corresponding Prop 4.6 bound empirically in the context of our bandits experiment, comparing them against the minimal non-empty set of indices that would satisfy Prop 4.5 (defined as the union of one theory maximizing the posterior and all theories with posterior probability $\\u2265 \\u03b1$), as well as various other bounds. With this modified index set of theories, we tested various approaches to aggregating harm estimates, including different quantiles, weighted means (arithmetic/geometric/harmonic) with different power exponents, and various weighting schemes based on posterior probabilities.\\nThese experiments showed that our formulation achieves a good empirical trade-off between safety and performance. Another advantage of the Prop 4.6 bound in the paper is that it provides a theoretical guarantee to overapproximate the probability of harm, contrary to the alternative aggregation methods we tested, which, while sometimes achieving higher rewards, lacked such guarantees (so it came at the cost of more deaths for the agent). Due to space constraints and to maintain focus on the main contribution, we didn't include these additional experimental results in the paper (but we would be happy to add them to the appendix if you think that they would be valuable to empirically motivate $I^\\u03b1_{Z_{1:t}}$ and the Prop 4.6 bound).\\nOn the theoretical side, the motivation for our specific definition is that if we form a Bayes mixture of just the top $n$ models, we can bound the lifetime prediction errors that this mixture makes on observed data $Z_{1:\\\\infty}$. Likewise for the Bayes mixture of just top $n-1$ models. We can translate this into a bound on the prediction errors that model $n$ makes on its own, to the extent that model $n$ has posterior weight within the Bayes mixture of the top $n$ models. So we enforce that this weight is above $\\\\alpha$. This reasoning was originally developed by [Cohen and Hutter (2022)](https://jmlr.org/papers/volume23/21-0618/21-0618.pdf), and it results in their Theorem 6 (i), which bounds the lifetime prediction error that any model in $I_{Z_{1:t}}^\\\\alpha$ makes on the data $Z_{1:\\\\infty}$. Note, however, that since harm probabilities do not necessarily appear themselves in the \\u201ctraining data\\u201d $Z_{1:\\\\infty}$, it does not follow that lifetime prediction error on harm probability is bounded.\"}", "{\"metareview\": \"This paper seeks to create practical safety mechanisms for AI by introducing a Bayesian framework.\\nThe reviewers agreed that the topic is important, given the potential implications for AI safety (e.g., Reviewer bi5T noted that the framework tackles \\u201ca significant problem in AI\\u201d) However, the reviewers were in agreement on certain limitations:\\n- Reviewer n2RR commented on the bounds being \\u201chighly conservative\\u201d, and questioned their practical applicability. Similarly, Reviewer bi5T wrote that they \\u201c[..] do not see any immediate way to take advantage of the theoretical results the paper provides.\\u201d\\n- On a similar note again, Reviewer pbrg seemed skeptical about the paper's ability to generalize to problems \\u201c[..] complex and close to [..] real applications.\\u201d\\n\\nBased on the agreement around these critiques, the decision was made to reject the paper. While the theoretical contributions were considered sound, expanding on what the theoretical model can offer in practice might be warranted.\", \"additional_comments_on_reviewer_discussion\": \"The main concerns raised by the reviewers centered around the applicability of the framework in practice, i.e., beyond the theoretical guarantees (see above for a short summary). The authors counterargued that the value of the paper lies in the theoretical groundwork. The reviewers did not change their position that more validation is in order.\\n\\nIn addition to the above, some reviewers suggested citing additional relevant literature, including probabilistic shielding in MDPs and Bayesian regret in reinforcement learning.\"}" ] }
2OegVbwvY2
ZIP: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models
[ "Seonghwan Park", "Jaehyeon Jeong", "Yongjun Kim", "Jaeho Lee", "Namhoon Lee" ]
Recent studies have introduced various approaches for prompt-tuning black-box vision-language models, referred to as black-box prompt-tuning (BBPT). While BBPT has demonstrated considerable potential, it is often found that many existing methods require an excessive number of queries (i.e., function evaluations), which poses a significant challenge in real-world scenarios where the number of allowed queries is limited. To tackle this issue, we propose Zeroth-order Intrinsic-dimensional Prompt-tuning (ZIP), a novel approach that enables efficient and robust prompt optimization in a purely black-box setting. The key idea of ZIP is to reduce the problem dimensionality and the variance of zeroth-order gradient estimates, such that the training is done fast with far less queries. We achieve this by re-parameterizing prompts in low-rank representations and designing intrinsic-dimensional clipping of estimated gradients. We evaluate ZIP on 13+ vision-language tasks in standard benchmarks and show that it achieves an average improvement of approximately 6% in few-shot accuracy and 48% in query efficiency compared to the best-performing alternative BBPT methods, establishing a new state of the art. Our ablation analysis further shows that the proposed clipping mechanism is robust and nearly optimal, without the need to manually select the clipping threshold, matching the result of expensive hyperparameter search.
[ "vision-language models", "prompt-tuning", "black-box optimization", "zeroth-order optimization" ]
Accept (Poster)
https://openreview.net/pdf?id=2OegVbwvY2
https://openreview.net/forum?id=2OegVbwvY2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yfVg3Vhyw2", "xGuahiKIv4", "rxWpfr6HaE", "rskuvgJwRD", "pXpJqo6Mik", "oOoug6Jylc", "glGgc8KMay", "caSPun20ij", "bZuw6QqRTY", "aXFFBMd9FY", "YgzhtrdLwf", "Y7jP8cA94U", "X44QHK1RSH", "T5cH8zHfJM", "RX95EoTqeY", "Qz3SJGS7yy", "Q8l5v5oOct", "LNLa3CHHcu", "L4djZ5gyoM", "KameDhv5DW", "HVfaFQBoLt", "G8twLYa9Bs", "FZWpIghaO7", "9jjuUJspSC", "7EMVqdjJ4X", "57fye09anm" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732870157994, 1733287444252, 1732513966443, 1732247826664, 1730577551701, 1732256729398, 1733201172643, 1732444983086, 1732445165715, 1732256905528, 1730470058615, 1737523424328, 1730367711368, 1730772048035, 1732281976097, 1732260325142, 1732594272193, 1732445226957, 1733145193255, 1733287274689, 1733145322476, 1732260398933, 1732797627602, 1732797914114, 1732444816701, 1734425771425 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Reviewer_Tcth" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Reviewer_Tcth" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Reviewer_Fqbz" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission939/Reviewer_NLRG" ], [ "ICLR.cc/2025/Conference/Submission939/Reviewer_BGp2" ], [ "ICLR.cc/2025/Conference/Submission939/Reviewer_NLRG" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Reviewer_Tcth" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Authors" ], [ "ICLR.cc/2025/Conference/Submission939/Area_Chair_o6PK" ] ], "structured_content_str": [ "{\"title\": \"Awaiting your feedback\", \"comment\": \"Dear reviewer Fqbz,\\n\\nWe sincerely appreciate your thoughtful review and valuable feedback, which have been invaluable in improving our research. It has been a while since we posted our last response, so we would like to follow up to see whether our response has sufficiently addressed your concerns. If there is anything else the reviewer wants us to address more, please let us know. We would be happy to engage in any further discussion.\\n\\nBest regards,\\\\\\nAuthors\"}", "{\"title\": \"closing\", \"comment\": \"Dear Reviewer Fqbz\\n\\nThank you for taking the time to provide thoughtful feedback on our manuscript. We have done our best to address your concern regarding ZIP, including clarifying the effects of feature sharing and explaining its generalization performance. We hope our responses have effectively clarified your concerns and provided the information you were looking for.\\n\\nBest regards, \\\\\\nThe Authors\"}", "{\"title\": \"Thank you for the response.\", \"comment\": \"Thank you for your valuable feedback. As suggested, we have incorporated the new table and discussions into Appendix A and citations into Figure 1 of the revised manuscript. These additions will be integrated into the relevant sections of the main manuscript after the rebuttal process. Your guidance has been instrumental in enhancing the clarity and completeness of the final version. Please let us know if there are any other aspects you would like us to address.\"}", "{\"title\": \"Response to Reviewer NLRG\", \"comment\": \"We sincerely thank the reviewer for finding our work practical and effective, and giving us constructive feedback to improve further. While we respond to the reviewer\\u2019s specific comments as below, we would be keen to engage in any further discussion.\\n\\n---\\n\\n**Ablation for different combinations of modules**\\n> Although the paper performs ablation studies on individual modules such as low-rank approximation with a diagonal matrix and feature sharing, it lacks ablation experiments on different combinations of these modules. Without evaluating different combinations, it is challenging to fully understand the synergistic effects and the relative contributions of each module to the overall performance.\\n\\nThank you for your suggestion. We have evaluated all possible combinations of {diagonal matrix, feature sharing (FS), intrinsic-dimensional clipping}. The results are provided below.\\n\\n| Number | Diagonal | FS | Clipping | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | IN | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| 1 | \\u2713 | \\u2717 | \\u2717 | 91.2 | 82.3 | 56.9 | 83.4 | 13.2 | 56.8 | 41.0 | 38.9 | 58.8 | 58.5 | 23.1 | 64.4 | 61.5 | 56.2 |\\n| 2 | \\u2717 | \\u2713 | \\u2717 | 90.1 | 89.3 | 65.3 | 84.6 | 22.7 | 60.6 | 42.4 | 38.4 | 59.3 | 59.8 | 18.9 | 66.7 | 63.4 | 58.6 |\\n| 3 | \\u2717 | \\u2717 | \\u2713 | 90.7 | 89.3 | 68.1 | 85.0 | 23.7 | 57.4 | 43.9 | 36.0 | 59.2 | 57.1 | 21.2 | 65.2 | 62.6 | 58.4 |\\n| 4 | \\u2713 | \\u2713 | \\u2717 | 91.3 | 86.0 | 59.7 | 83.4 | 16.6 | 58.9 | 46.1 | **44.9** | 61.2 | 59.2 | 23.0 | 64.8 | 59.0 | 58.0 |\\n| 5 | \\u2717 | \\u2713 | \\u2713 | 89.8 | 89.5 | 66.4 | 85.3 | 25.1 | 58.5 | 44.7 | 38.3 | 61.0 | 58.9 | 18.9 | 65.9 | 63.4 | 58.9 |\\n| 6 | \\u2713 | \\u2717 | \\u2713 | 93.1 | 90.8 | 67.1 | 86.0 | 25.2 | 59.0 | 44.4 | 40.9 | 60.6 | 63.3 | 20.2 | 67.4 | 64.8 | 60.2 |\\n| 7 | \\u2713 | \\u2713 | \\u2713 | **93.4** | **91.7** | **70.0** | **86.3** | **26.6** | **62.2** | **47.8** | 44.2 | **64.2** | **65.2** | **25.1** | **69.8** | **66.0** | **62.5** |\\n\\nFirst, we observe that using all the proposed modules together results in significantly better performance compared to using individual modules or pairs of modules. This demonstrates that each component works harmoniously to contribute to the generation of effective results.\\n\\nAdditionally, from the transitions 1&rarr;6, 4&rarr;7 and 5&rarr;7, we find that combining the low-rank approximation with diagonal matrix with intrinsic dimensional clipping yields more pronounced performance improvements (+4%, +4.5%, +3.6%) compared to other combinations.\\n\\nThese findings suggest that while each component is effective on its own, their combination creates a complementary synergy that maximizes overall performance. In future work, we plan to conduct an in-depth analysis to uncover the underlying mechanisms behind this synergy. This will provide deeper insights into its practical utility, paving the way for its application to a broader range of tasks.\\n\\n\\n---\\n\\n**Effectiveness of diagonal matrix?**\\n> The paper lacks an ablation study to isolate the effect of low-rank approximation alone, making it unclear if improvements are mainly due to the diagonal matrix. This analysis would clarify the diagonal matrix's contribution.\\n\\nWe kindly remark that the ablation analysis the reviewer requests is already provided in Table 6 in Appendix. This results clearly shows that adding a simple diagonal matrix (# of parameters < 10) enhances the performance of low-rank approximation, improving the average performance by +1.8%.\\n\\n---\\n\\n**Citation in Figure 1**\\n> The caption for Figure 1 should include citations for the baseline methods (BAR, BlackVIP, BPT-VLM) to provide appropriate references and context for these comparisons. This would enhance clarity for readers unfamiliar with these specific methods.\\n\\nThank you for your suggestion. We will reflect it to our final version. \\n\\n---\\n\\nWe sincerely appreciate the reviewer\\u2019s recognition of our work as \\u201cwell-suited\\u201d and \\u201cuser-friendly\\u201d for practical applications. We are also deeply grateful for the reviewer\\u2019s suggestion on additional ablations, which we believe has significantly improved our manuscript. We hope our response has adequately addressed your concerns, and yet, please let us know if there are any remaining issues. We would be eager to address them further.\"}", "{\"summary\": \"The paper proposes a method to optimize black-box models without the need for computing gradients (zeroth-order). The key observation is that increasing the number of learnable parameters in soft prompts hurts the performance and training speed of zeroth-order optimization, while this trend is reversed for SGD-based prompt tuning (first-order). To overcome this, authors propose to reparameterize soft prompts in order to reduce the effective number of learnable parameters while maintaining the extrinsic embedding dimensionality. The proposed reparameterization involves projecting parameters into a diagonal matrix, feature sharing and gradient clipping. In addition, reducing the number of learnable parameters results in increased query efficiency (reduced number of forward passes through the model). The proposed method is applied to black-box prompt-tuning of a CLIP model, and evaluated on a suite of standard vision-language benchmarks, achieving improvements of 6% in few-shot accuracy and 48% in query efficiency compared to the best performing existing methods.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Good motivation to reduce the number of learnable parameters in ZO optimization (section 3) and clever idea to reduce the intrinsic dimensionality while maintaining the number of tokens (and the extrinsic dimensionality, which is a requirement from the model being optimized).\", \"Several techniques (diagonal matrix, parameter sharing) are applied to preserve performance while reducing the number of learnable parameters.\", \"The proposed method not only improves few-shot performance wrt existing ZO methods but also reduces considerably the number of function evaluations required to reach a certain level of performance (section 5.3).\", \"All the design choices for the soft prompt reparameterization are thoroughly ablated in section 6.\", \"The paper is clearly written and easy to follow.\"], \"weaknesses\": [\"Authors motivate fine-tuning black-box models with the use case of improving proprietary LLMs (e.g. GPT-4, Gemini) which are only accessible through API. However, this interface only accepts text and images as input, not soft prompts or embeddings, so the proposed method would not be directly applicable to API-based models.\", \"To verify the method's robustness and generality, it should be evaluated on other model families such as multimodal LLMs.\", \"Figures 2, 4, 6 and 7a should report validation accuracy since there could be overfitting.\"], \"questions\": [\"It is not until the background section that I understood what zeroth-order intrinsic-dimensional prompt-tuning means. I suggest to improve the introduction to make it clearer from early on.\", \"In figure 2, it would be good to add a baseline of accuracy when no soft prompts are optimized (i.e. m=0).\", \"Where are the learned soft prompts injected? Are they concatenated to text embeddings and fed to CLIP's text encoder?\", \"In table 3, the average accuracies for CDT between ZIP and the second-best method seem very close. Did authors run a significance test?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Tcth (part 1)\", \"comment\": \"We sincerely thank the reviewer for acknowledging the contributions of our work and offering constructive feedback. We address the reviewer\\u2019s specific comments below and we would be keen to engage in any further discussion.\\n\\n---\\n\\n**Soft prompts not directly applicable?**\\n> Authors motivate fine-tuning black-box models with the use case of improving proprietary LLMs (e.g. GPT-4, Gemini) which are only accessible through API. However, this interface only accepts text and images as input, not soft prompts or embeddings, so the proposed method would not be directly applicable to API-based models.\\n\\nThank you for this insightful comment. We acknowledge that our soft prompt-based approach cannot be directly applied to the scenario described by the reviewer. Our study builds on the Language-model-as-a-Service (LMaaS) scenario [1], widely recognized in prior works as a plausible framework for black-box prompt tuning [2, 3]. While GPT-4 or Gemini do not currently support this specific scenario, we believe that exploring the LMaaS approach is essential for enabling more flexible and efficient fine-tuning of black-box models via APIs.\\n\\n---\\n\\n**Apply to other model families?**\\n> It should be evaluated on other model families such as multimodal LLMs for robustness and generality.\\n\\nThank you for the suggestion. It would be interesting to see how ZIP evaluates on multimodal foundation models such as LLaVA to demonstrate its robustness and generality. Unfortunately, due to resource and time limitations during the rebuttal period, we were unable to conduct such experiments.\\n\\nInstead, we provide additional results on SigLIP [4], another vision-language model distinct from CLIP, in the table below. While ZIP does not perform the best for all datasets, it achieves significantly higher accuracy on several datasets such as DTD, SVHN, EuroSAT, Resisc45, CLEVR, and UCF101, and records the highest on average, demonstrating its robustness and generality of our method compared to existing other BBPT methods.\\n\\n| Method | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | IN | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| BAR | **85.8** | **82.2** | **68.0** | **58.3** | 10.7 | 23.3 | 47.2 | 15.2 | 39.6 | 23.6 | 26.1 | 23.3 | **54.9** | 42.9 |\\n| BLackVIP | 81.6 | 78.9 | 56.6 | 47.8 | 9.6 | **46.3** | 41.1 | 30.7 | 22.5 | 37.0 | 26.3 | 37.0 | 49.4 | 43.4 |\\n| BPTVLM | 63.9 | 71.7 | 47.7 | 37.2 | 9.9 | 35.5 | 45.6 | 14.7 | 44.8 | 37.0 | 24.8 | 35.5 | 39.5 | 39.1 |\\n| ZIP | 72.1 | 82.0 | 59.3 | 40.9 | **11.8** | 45.3 | **49.4** | **34.2** | **48.7** | **44.3** | **31.4** | **40.6** | 49.7 | **46.9** |\\n\\n---\\n\\n**Need validation accuracy**\\n> Figures 2, 4, 6 and 7a should report validation accuracy since there could be overfitting.\\n\\nThank you for pointing this out. To address concerns about overfitting, we now include validation accuracy for all cases in Figure 9, 10, 11, and 12 in Appendix A. The results show that training accuracy and validation accuracy generally exhibit similar trends, with no significant signs of overfitting observed.\\n\\n---\\n\\n**Clarify in Introduction**\\n> It is not until the background section that I understood what zeroth-order intrinsic-dimensional prompt-tuning means. I suggest to improve the introduction to make it clearer from early on.\\n\\nWe appreciate this feedback. We will make sure to revise the paper and clarify what \\u201czeroth-order intrinsic-dimensional prompt-tuning\\u201d means earlier in Introduction.\\n\\n&nbsp;\\n\\n[1] Sun, Tianxiang, et al. \\u201cBlack-box tuning for language model as-a-service.\\u201d ICML, 2022.\\\\\\n[2] Yu, Lang, et al. \\u201cBlack-box prompt tuning for vision-language model as-a-service.\\u201d IJCAI, 2023.\\\\\\n[3] Song, Jiang-Long, et al. \\u201cCompetition solution for prompt tuning using pretrained language model.\\u201d arXiv preprint arXiv:2212.06369 (2022).\\\\\\n[4] Zhai, Xiaohua, et al. \\u201cSigmoid loss for language image pre-training.\\u201d ICCV, 2023.\"}", "{\"comment\": \"Thanks for your follow-up!\\n\\nI am not sure I understand the concept of \\\"no prompt\\\", as CLIP always computes similarity between an image and a text prompt. Does it mean that you used the same prompts as in the original paper, without further prompt engineering? In any case, now in figure 9 it seems that ZOO with m=1 consistently outperforms m=0, which is reassuring as I'd expect any optimization method to perform better than the baseline (m=0 in this case).\\n\\nFrom the additional significance results for table 3, I understand that on CDT, ZIP and BlackVIP have comparable performance and the advantage of ZIP lies in its query efficiency. I assume figure 10 shows query efficiency for the few-shot setting (table 1). Do these curves look similar for the setting corresponding to table 3?\"}", "{\"title\": \"Response to Reviewer Fqbz (part 2)\", \"comment\": \"**Feature sharing and robustness**\\n> In Section 4.2, the paper introduces feature sharing to enhance expressiveness. Could the authors clarify whether this feature sharing technique affects the generalization ability on unseen datasets, and if so, how?\\n\\nThank you for the question. We evaluate the generalization ability of feature sharing across unseen datasets. The results are summarized in the tables below.\\n\\n`Base-to-New Generalization`\\n| Method | Set | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | IN | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| Unshared | Base | 96.3 | 94.5 | 68.6 | **89.9** | 29.4 | 67.9 | 56.6 | 51.9 | 81.2 | 77.4 | 43.2 | 72.5 | 71.0 | 69.3 |\\n| Shared | Base | **96.6** | **94.9** | **72.1**| **89.9** | **29.8** | **70.3** | **61.7** | **52.9** | **84.0** | **81.6** | **50.1** | **75.1** | **72.1** | **71.6** |\\n| Unshared | New | **93.9** | 93.8 | 71.5 | 89.5 | 31.3 | 70.5 | 47.1 | **46.1** | **66.1** | 60.1 | 25.0 | 67.4 | 64.5 | 63.6 | \\n| Shared | New | 93.2 | **97.0** | **73.4** | **90.0** | **32.0** | **71.5** | **51.0** | 45.8 | 64.4 | **65.2** | **26.8** | **69.5** | **65.6** | **65.0** |\\n| Unshared | Harmonic | **95.1** | 94.1 | 70.0 | 89.7 | 30.3 | 69.2 | 51.4 | 48.8 | **72.9** | 67.7 | 31.7 | 69.9 | 67.6 | 66.0 |\\n| Shared | Harmonic | 94.9 | **95.9** | **72.8** | **89.9** | **30.9** | **70.9** | **55.8** | **49.1** | **72.9** | **72.5** | **34.9** | **72.2** | **68.7** | **68.2** |\\n\\n`Cross Dataset Transfer`\\n| Method | IN | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| Unshared | 65.2 | **91.3** | 85.4 | 64.2 | **84.2** | 19.6 | 58.3 | 38.3 | **27.9** | **46.3** | 53.3 | **17.6** | 61.3 | 54.0 |\\n| Shared | **66.0** | 90.4 | **85.6** | **65.6** | 83.6 | **20.5** | **60.6** | **40.9** | 27.0 | 42.3 | **55.6** | 14.5 | **63.6** | **54.2** |\\n\\n`Out-of-Distribution Generalization`\\n| Method | ImageNet-A | ImageNetV2 | ImageNet-R | ImageNet-Sketch | Average |\\n|----|----|----|----|----|----|\\n| Unshared | 47.2 | 59.0 | **75.2** | 45.0 | 56.6 |\\n| Shared | **47.8** | **59.5** | 74.7 | **45.4** | **56.9** |\\n\\nFrom these results, we observe that feature sharing positively impacts generalization across various tasks. In base-to-new generalization, it achieves significant performance improvements with average gains of +2.3%, +1.4%, and +2.2%. For cross-dataset transfer and out-of-distribution generalization, the improvements are moderate, with average gains of +0.2% and +0.3%, respectively.\\n\\nThese findings suggest that feature sharing is effective for generalization on unseen dataset. In future work, we aim to conduct an in-depth analysis to better understand the relationship between feature sharing and generalization. This will provide valuable insights into its empirical utility and open new avenues for its application in broader domain generalization research.\"}", "{\"title\": \"Response to Reviewer Fqbz (part 3)\", \"comment\": \"**Explanation of generalization**\\n> ZIP has demonstrated strong results across vision-language tasks, but could the authors provide more insights into its potential for domain generalization? Specifically, how well does ZIP adapt to unseen domains or datasets outside the evaluated benchmarks,\\n\\nThank you for the positive and constructive feedback. The strong generalization performance of ZIP can be attributed to two key factors:\\n\\n* **reduced model capacity**: Complex models are often prone to overfitting because they tend to memorize detailed properties of the training dataset rather than generalizing effectively. By reducing the capacity of the soft tokens through a series of low-rank approximations, we limit its ability to represent overly intricate patterns, thereby encouraging the model to focus on capturing broader, more generalizable features. This constraint on complexity directly mitigates the risk of overfitting, as the model is less likely to adapt excessively to task specific data during training.\\n* **noisy gradient estimates in zeroth-order optimization**: Zeroth-order optimization methods, which rely on function evaluations rather than exact gradient computations, inherently produce noisy estimates of the gradients. This noise introduces stochasticity into the optimization process, acting as a form of regularization [1, 2, 3]. The presence of noise helps prevent the model from overfitting to outliers in the training data. Specifically, the noisy gradients make it less likely for the optimization algorithm to become finely tuned to the peculiarities of outlier data points, which might otherwise skew the learning process of the model. Moreover, this inherent noise encourages the model to focus on capturing general patterns within the data rather than memorizing specific instances. As a result, the model develops a more robust understanding of the underlying data distribution, enhancing its ability to generalize to unseen domains. The stochastic nature of the optimization helps the model explore a wider range of parameter configurations, increasing the likelihood of finding solutions that perform well across different datasets and reducing sensitivity to anomalies present in the training set.\\n\\nWe speculate that these factors collectively hint at the potential of ZIP for achieving stable performance on unseen domains, as observed in our experiments, though more precise investigation is needed.\\n\\n---\\n\\n**Adjustment to improve robustness?**\\n> and would any adjustments be necessary to improve its robustness in such scenarios? Such as CoOp and CoCoOp.\\n\\nWe appreciate the reviewer's question. CoCoOp adopts an input-conditional prompting scheme, improving (domain) generalization. However, this _adaptive_ prompting in fact requires the access to the image encoder of the vision-language model, which may not suit the setting of purely black-box models we consider in this work. Therefore, it is challenging for us to precisely forecast whether or not such a scheme would improve ZIP. One potential detour to realize this idea is leveraging an additional and separate image encoder, which being considered as an interesting extension, we will investigate further in future work.\\n\\n&nbsp;\\n\\n[1] Blanc, Guy, et al. \\u201cImplicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process.\\u201d COLT, 2022.\\\\\\n[2] Damian, Alex, et al. \\u201cLabel noise SGD provably prefers flat global minimizers.\\u201d NeurIPS, 2021.\\\\\\n[3] Ge, Rong, et al. \\u201cEscaping from saddle points - Online stochastic gradient for tensor decomposition.\\u201d COLT, 2015.\"}", "{\"title\": \"Response to Reviewer Tcth (part 2)\", \"comment\": \"**Adding baseline for Figure 2**\\n> In figure 2, it would be good to add a baseline of accuracy when no soft prompts are optimized (i.e. m=0).\\n\\nThank you for the suggestion. In response, we have updated Figure 9 to include a baseline accuracy where no soft prompts are optimized (i.e., manual prompt) for better comparison.\\n\\n---\\n\\n**Injection of learned prompts**\\n> Where are the learned soft prompts injected? Are they concatenated to text embeddings and fed to CLIP's text encoder?\\n\\nThank you for the question. The learned soft prompts are prepended to the text embeddings, as in CoOp, and subsequently fed into the text encoder of CLIP.\\n\\n---\\n\\n**Significance test for Table 3**\\n> In table 3, the average accuracies for CDT between ZIP and the second-best method seem very close. Did authors run a significance test?\\n\\nThank you for raising this point. To address the concern, we have included standard deviations in Table 3 to better reflect statistical significance.\\n\\n`Cross-Dataset Transfer`\\n| Method | IN | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| BAR | 64.0 (0.1) | 92.3 (0.2) | 84.3 (0.1) | 64.3 (0.1) | 83.1 (0.1) | 20.8 (0.1) | 61.0 (0.0) | 42.2 (0.2) | 20.0 (0.9) | **49.6** (0.6) | 50.6 (0.5) | 14.5 (0.1) | 63.0 (0.2) | 53.8 |\\n| BlackVIP | 65.5 (0.5) | **92.5** (0.3) | **86.2** (1.3) | 64.9 (0.4) | **83.6** (0.4) | **22.3** (0.4) | **62.0** (0.3) | **43.3** (1.0) | 18.7 (2.0) | 40.5 (1.1) | **55.7** (1.1) | **15.2** (1.0) | **64.1** (0.2) | 54.1 |\\n| BPTVLM | 55.5 (1.2) | 80.7 (1.4) | 77.7 (2.2) | 50.3 (8.6) | 77.6 (0.4) | 16.3 (2.0) | 43.8 (5.4) | 30.8 (0.8) | 15.5 (4.1) | 34.6 (9.6) | 37.7 (8.5) | 12.4 (1.4) | 54.8 (1.3) | 44.4 |\\n| ZIP | **66.0** (1.2) | 90.4 (1.8) | 85.6 (1.5) | **65.6** (1.3) | **83.6** (1.3) | 20.5 (0.1) | 60.6 (1.6) | 40.9 (2.6) | **27.0** (1.0) | 42.3 (3.4) | 55.6 (2.0) | 14.5 (1.3) | 63.6 (1.1) | **54.2** |\\n\\n`Out-of-Distribution Generalization`\\n| Method | ImageNet-A | ImageNetV2 | ImageNet-R | ImageNet-Sketch | Average |\\n|----|----|----|----|----|----|\\n| BAR | 40.2 (0.1) | 57.5 (0.1) | 72.0 (0.0) | 43.8 (0.1) | 53.4 |\\n| BlackVIP | 42.5 (1.7) | 59.2 (0.7) | 73.1 (0.5) | 44.6 (0.4) | 54.9 |\\n| BPTVLM | 32.7 (1.2) | 46.7 (3.2) | 61.7 (4.5) | 33.5 (1.9) | 43.7 |\\n| ZIP | **47.8** (0.7) | **59.5** (1.5) | **74.7** (0.9) | **45.4** (1.5) | **56.9** |\"}", "{\"summary\": \"The paper introduces ZIP, a zeroth-order prompt tuning method designed for efficient prompt optimization in black-box vision-language models, particularly under limited query budgets. ZIP achieves high efficiency by using low-rank representations and intrinsic-dimensional gradient clipping, which reduces query usage while maintaining robust performance. Evaluations on multiple benchmarks show that ZIP not only outperforms state-of-the-art methods in accuracy but also greatly enhances query efficiency.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"(1) The paper is well-organized and accessible, with clear visuals and structured explanations that effectively communicate the method's strengths.\\n\\n(2) ZIP innovatively enhances zeroth-order prompt tuning through intrinsic-dimensional gradient clipping and low-rank parameterization, making it highly efficient.\\n\\n(3) Comprehensive evaluations demonstrate ZIP's superior accuracy and query efficiency across 13+ tasks, proving its practical value under query constraints.\", \"weaknesses\": \"(1) While ZIP outperforms existing BBPT methods, comparisons with additional baseline methods in zeroth-order optimization could strengthen claims of superiority.\\n\\n(2) While ZIP shows strong performance on various tasks, its results on ImageNet in Table 1 are comparatively modest, suggesting limitations in scalability to complex datasets. An in-depth analysis of ZIP's performance on larger, diverse datasets would clarify its robustness and potential for broader application.\", \"questions\": \"(1) In Section 4.2, the paper introduces feature sharing to enhance expressiveness. Could the authors clarify whether this feature sharing technique affects the generalization ability on unseen datasets, and if so, how?\\n\\n(2) ZIP has demonstrated strong results across vision-language tasks, but could the authors provide more insights into its potential for domain generalization? Specifically, how well does ZIP adapt to unseen domains or datasets outside the evaluated benchmarks, and would any adjustments be necessary to improve its robustness in such scenarios? Such as CoOp and CoCoOp. \\n\\n(3) Could the authors elaborate on the sensitivity of ZIP to the choice of intrinsic dimensionality and low-rank approximation parameters? How do these choices impact both performance and query efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper introduces ZIP, a zeroth-order intrinsic-dimensional prompt-tuning method designed to efficiently optimize black-box vision-language models. By leveraging low-rank approximation, feature sharing, and intrinsic-dimensional gradient clipping, ZIP achieves faster training speeds and superior generalization performance while significantly reducing query requirements. Extensive experiments on diverse tasks demonstrate ZIP's robustness and query efficiency, outperforming existing BBPT methods and establishing it as a practical approach for resource-constrained scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.The paper presents a novel black-box prompt-tuning method, effectively addressing the issue in zeroth-order methods where an increase in trainable parameters adversely impacts accuracy. By reducing the number of parameters and query requirements, the proposed approach is well-suited for practical applications with limited query budgets.\\n\\n2.The paper demonstrates strong performance across three extensive and diverse experimental settings, which effectively validate the method\\u2019s efficacy. The ablation studies further support the approach, particularly highlighting that the feature-sharing technique helps preserve the model\\u2019s expressive capacity. \\n\\n3.The intrinsic-dimensional clipping mechanism in ZIP requires no manual hyperparameter tuning, making it highly practical and user-friendly. \\n\\n4.The paper is well-written, with clear explanations and logical organization that make the proposed method and its contributions easy to understand.\", \"weaknesses\": \"1.Although the paper performs ablation studies on individual modules such as low-rank approximation with a diagonal matrix and feature sharing, it lacks ablation experiments on different combinations of these modules. Without evaluating different combinations, it is challenging to fully understand the synergistic effects and the relative contributions of each module to the overall performance.\\n\\n\\n\\n2.The paper lacks an ablation study to isolate the effect of low-rank approximation alone, making it unclear if improvements are mainly due to the diagonal matrix. This analysis would clarify the diagonal matrix's contribution.\", \"questions\": \"Suggestions:\\n\\nThe caption for Figure 1 should include citations for the baseline methods (BAR, BlackVIP, BPT-VLM) to provide appropriate references and context for these comparisons. This would enhance clarity for readers unfamiliar with these specific methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces ZIP for efficient zeroth-order prompt-tuning of black-box vision-language models. ZIP addresses the challenge of excessive query requirements in existing black-box prompt-tuning methods by reducing problem dimensionality and gradient estimate variance through feature sharing and intrinsic-dimensional gradient clipping. ZIP demonstrates significant improvements in few-shot accuracy and query efficiency over other existing methods. Various experiments on image classification show the effectiveness of ZIP.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"ZIP is well-motivated.\", \"The paper is well-organized.\", \"Empirical analyses of the proposed method are sufficient.\"], \"weaknesses\": \"I'm not familiar with this research field, i.e. black box prompt tuning. Therefore, it's hard for me to accurately judge the novelty of the proposed method compared with existing works.\\n\\nFrom my perspective, one major weakness is that I find the competitors in the experiments are slightly old, e.g. BLACKVIP is published at CVPR'23 and BPTVLM is published at IJCAI'23. There are some more recent works like [a][b] in this field. I think the authors should better discuss the differences between ZIP and more recent works like [a][b], and provide fair experimental comparisons as well. \\n\\n[a] Language Models as Black-Box Optimizers for Vision-Language Models, CVPR 2024, https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Language_Models_as_Black-Box_Optimizers_for_Vision-Language_Models_CVPR_2024_paper.html\\n[b] Connecting the Dots: Collaborative Fine-tuning for\\nBlack-Box Vision-Language Models, ICML 2024, https://arxiv.org/abs/2402.04050\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed responses. To ensure these valuable additions are fully reflected, could you please integrate the new table, discussions, and citations into the revised manuscript? This will enhance the clarity and completeness of the final version.\"}", "{\"title\": \"Response to Reviewer BGp2 (part 1)\", \"comment\": \"We sincerely thank the reviewer for acknowledging the clear motivation of our method and providing constructive feedback. We address the reviewer\\u2019s specific comments below and welcome any further discussion.\\n\\n---\\n\\n**Novelty?**\\n> I'm not familiar with this research field, i.e. black box prompt tuning. Therefore, it's hard for me to accurately judge the novelty of the proposed method compared with existing works.\\n\\nThank you for the comment. We will first outline the problem context and then explain our proposed solutions.\\n\\nBlack-box models that are accessible only through APIs can be trained using black-box prompt tuning (BBPT). However, users are typically constrained by limited API budgets, making **query efficiency** a critical factor for the practicality of BBPT. Despite this, prior works have not adequately addressed this issue. For instance, BlackVIP and BPTVLM often require tens of thousands of queries, which severely limits their practicality. To address this, we are the first to explicitly highlight this problem and propose several techniques to significantly improve query efficiency.\\n\\nSpecifically, we introduce two key technical contributions:\\n* To address the challenge of **dimensionality dependency** (i.e., the number of queries scales with the dimensionality of the problem), we propose a novel low-rank representation. This approach reduces the dimensionality while effectively mitigating the loss of expressive power through feature sharing.\\n* High variance in zeroth-order information can significantly degrade query efficiency. To tackle this, we propose a threshold-free gradient clipping method, termed \\u201cintrinsic dimensional clipping\\u201d. Inspired by prior studies on clipping thresholds [1, 2, 3], we set the clipping threshold to $\\\\sqrt{d}$, which corresponds to the standard deviation of the zeroth-order gradient, where $d$ is the dimensionality of the problem. This approach not only reduces the variance of zeroth-order information but also achieves near-optimal performance without requiring manual tuning (See Figure 7b and 20 in revisioned paper).\\n\\n&nbsp;\\n\\n[1] Zhang, Bohang, et al. \\u201cImproved analysis of clipping algorithms for non-convex optimization.\\u201d NeurIPS, 2020.\\\\\\n[2] Zhang, Jingzhao, et al. \\u201cWhy are adaptive methods good for attention models?\\u201d NeurIPS, 2020.\\\\\\n[3] Zhang, Jingzhao, et al. \\u201cWhy gradient clipping accelerates training: A theoretical justification for adaptivity.\\u201d arXiv preprint arXiv:1905.11881 (2019).\"}", "{\"comment\": \"Thank you for your thorough response.\\n\\nCould you please comment on the performance of ZOO compared to m=0 (figure 9)? It seems that, for several datasets (e.g., Flowers102, Food101, FGVCAircraft, UCF101), optimizing soft prompts with ZOO actually hurts performance.\\n\\nCould you also assess the statistical significance of the reported results based on the standard deviations reported for table 3? For several datasets where ZIP achieves the highest accuracy (e.g., IN, Flowers102, Food101, ImageNetV2, ImageNet-Sketch), it seems there's overlap with the std of the second-best result (usually BlackVIP).\"}", "{\"title\": \"Response to Reviewer Fqbz (part 4)\", \"comment\": \"**Sensitivity to intrinsic dimensionality and low-rank approximation**\\n> Could the authors elaborate on the sensitivity of ZIP to the choice of intrinsic dimensionality and low-rank approximation parameters? How do these choices impact both performance and query efficiency?\\n\\nThank you for the question. We have evaluated the sensitivity of ZIP to intrinsic dimensionality and low-rank approximation parameters, with the results summarized in the tables below.\\n\\n`Dimensionality`\\n| Dim | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | IN | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| 100 | 92.08 | 91.13 | 68.11 | 85.49 | 24.96 | 59.76 | 44.64 | 40.46 | 62.97 | 62.28 | 20.26 | 66.82 | 64.75 | 60.29 |\\n| 500 | **93.39** | 91.74 | 69.97 | 86.31 | 26.62 | 62.17 | 47.83 | 44.15 | **64.21** | 65.22 | 25.09 | **69.81** | 65.99 | **62.50** |\\n| 1000 | 93.23 | **92.14** | **70.50** | 85.36 | 26.83 | 62.92 | **47.85** | 44.15 | 62.47 | 64.78 | **26.31** | 69.04 | **66.47** | 62.47 |\\n| 2000 | 93.29 | 91.47 | 69.48 | **86.98** | **26.97** | **64.17** | 45.76 | **45.46** | 62.22 | **66.10** | 23.58 | 69.18 | 66.21 | 62.37 |\\n\\nLower dimensionality improves query efficiency by simplifying the model, but it may reduce performance due to limited expressive power. On the other hand, higher dimensionality enhances expressive power and performance, but requires more API queries as training becomes slower. For example, extremely low dimensionality (e.g., 100) achieves an average performance of 60.29, while performance slightly declines as dimensionality increases from 500 to 2,000. We observe that a dimensionality of 500 strikes the best balance between query efficiency and performance, achieving an average score of 62.50, making it a practical choice for most scenarios.\\n\\n`Rank`\\n| Rank | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | IN | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| 1 | 92.72 | 90.89 | 68.43 | 85.94 | 25.31 | 61.99 | 44.05 | **45.60** | 59.98 | 64.26 | 21.71 | 68.34 | **66.10** | 61.17 |\\n| 3 | 92.64 | **91.78** | 69.35 | **86.40** | 26.17 | 62.12 | 43.50 | 41.84 | 62.73 | 62.71 | **26.89** | 68.35 | 66.02 | 61.57 |\\n| 5 | **93.39** | 91.74 | **69.97** | 86.31 | **26.62** | **62.17** | **47.83** | 44.15 | **64.21** | **65.22** | 25.09 | **69.81** | 65.99 | **62.50** |\\n\\nLower rank simplifies the model, improving query efficiency but potentially reducing expressiveness. As the rank increases, performance generally improves. In this experiment, rank 5 achieves the highest average score of 62.50, indicating it provides the best balance between efficiency and performance.\"}", "{\"title\": \"Global Response\", \"comment\": \"Dear reviewers (`BGp2`, `Tcth`, `Fqbz`, `NLRG`) and area chair,\\n\\nWe sincerely appreciate the time and effort you dedicated to reviewing our paper. Your insightful feedback has been invaluable in refining our research.\\n\\nWe are pleased that most reviewers recognized the key strengths of our work. During the discussion period, we also provided clarifications requested by reviewers (**need validation accuracy** (`Tcth`), **adding baseline for Figure 2** (`Tcth`), **explanation of generalization** (`Fqbz`)). To further support our approach, we conducted additional ablation studies (**apply to other model families** (`Tcth`), **ablation for different combinations of modules** (`NLRG`), **feature sharing and robustness** (`Fqbz`)).\\n\\nBelow, we summarize the **Strengths highlighted by reviewers $\\\\color{blue}{[S]}$**, and **Key contributions of our work $\\\\color{cyan}{[C]}$**.\\n\\n---\\n\\n### **Strengths highlighted by reviewers $\\\\color{blue}{[S]}$**\\n#### **Novelty** \\n- \\u201cinnovatively enhances zeroth-order prompt tuning through intrinsic-dimensional gradient clipping and low-rank parameterization\\u201d (`Fqbz`)\\n- \\u201cclever idea to reduce the intrinsic dimensionality while maintaining the number of tokens\\u201d (`Tcth`)\\n- \\u201ca novel black-box prompt-tuning method\\u201d (`NLRG`)\\n\\n#### **Motivation**\\n- \\u201cgood motivation to reduce the number of learnable parameters in ZO optimization\\u201d (`Tcth`)\\n- \\u201cZIP is well-motivated\\u201d (`BGp2`)\\n\\n#### **Evaluations**\\n- \\u201ccomprehensive evaluations demonstrate ZIP's superior accuracy and query efficiency across 13+ tasks\\u201d (`Fqbz`)\\n- \\u201cimproves few-shot performance wrt existing ZO methods\\u201d (`Tcth`)\\n- \\u201cthe paper demonstrates strong performance across three extensive and diverse experimental settings\\u201d (`NLRG`)\\n- \\u201call the design choices for the soft prompt reparameterization are thoroughly ablated in section 6.\\u201d (`Tcth`)\\n- \\u201cthe ablation studies further support the approach, particularly highlighting that the feature-sharing technique helps preserve the model\\u2019s expressive capacity.\\u201d (`NLRG`)\\n\\n#### **Practicality**\\n- \\u201cthe proposed approach is well-suited for practical applications with limited query budgets\\u201d (`NLRG`)\\n- \\u201cproving its practical value under query constraints\\u201d (`Fqbz`)\\n- \\u201cthe intrinsic-dimensional clipping mechanism in ZIP requires no manual hyperparameter tuning, making it highly practical and user-friendly.\\u201d (`NLRG`)\\n\\n---\\n\\n### **Key contributions of our work $\\\\color{cyan}{[C]}$**\\n#### **Problem identification**\\nBlack-box prompt tuning (BBPT), a method for training black-box models, is inherently restricted by limited API budgets, making query efficiency a crucial factor for its practicality. Despite this, prior works have not addressed this issue. For instance, existing works often require tens of thousands of queries, which severely limits their practicality. We are the first to explicitly identify this problem in the language-model-as-a-service scenario and propose a novel idea to significantly improve query efficiency.\\n\\n&nbsp;\\n\\n#### **Technical contributions**\", \"we_introduce_two_key_technical_contributions_to_tackle_these_challenges\": \"1. **Low-rank representation for dimensionality dependency**: To address the challenge of dimensionality dependency of zeroth-order methods (i.e., the number of queries scales with the dimensionality of the problem), we propose a novel low-rank representation. This approach reduces the dimensionality while effectively mitigating the loss of expressive power through feature sharing.\\n\\n2. **Intrinsic dimensional clipping for variance reduction**: High variance in zeroth-order information can significantly degrade query efficiency. To tackle this, we propose a threshold-free gradient clipping method, termed \\u201cintrinsic dimensional clipping\\u201d. The clipping threshold is set to $\\\\sqrt{d}$, which corresponds to the standard deviation of the zeroth-order gradient, where $d$ is the dimensionality of the problem. This approach not only reduces the variance of zeroth-order information but also achieves near-optimal performance without requiring manual tuning. \\n\\n&nbsp;\\n\\n#### **Experimental results**\\nOur method achieves state-of-the-art performance in few-shot learning, base-to-new generalization, and out-of-distribution tasks under limited query budgets, while also delivering competitive results in cross-dataset transfer. Furthermore, it reduces the number of queries required to achieve a specific accuracy level by 48% compared to the best-performing alternative BBPT methods in few-shot accuracy. These results highlight the effectiveness of our approach in tackling key challenges in BBPT.\"}", "{\"title\": \"Response to the official comment by Reviewer Tcth\", \"comment\": \"> I am not sure I understand the concept of \\\"no prompt\\\", as CLIP always computes similarity between an image and a text prompt. Does it mean that you used the same prompts as in the original paper, without further prompt engineering? In any case, now in figure 9 it seems that ZOO with m=1 consistently outperforms m=0, which is reassuring as I'd expect any optimization method to perform better than the baseline (m=0 in this case).\\n\\nYes, the term \\\"no prompt\\\" means there are no prompt tokens that are further engineered. Specifically, we only used the [CLASS] token for $m=0$ case, without any additional soft prompts. Since we append soft prompt tokens in front of [CLASS] token (i.e. \\u201cX X X [CLASS]\\u201d, where X are soft prompt tokens), we decided to use only [CLASS] token for $m=0$, to provide a fair baseline with respect to m. Sorry for the confusion, we will revise the naming of $m=0$ from \\u201cno prompt\\u201d to \\u201cno engineered prompt\\u201d for the final version. \\n\\n \\n\\n---\\n\\n> From the additional significance results for table 3, I understand that on CDT, ZIP and BlackVIP have comparable performance and the advantage of ZIP lies in its query efficiency. I assume figure 10 shows query efficiency for the few-shot setting (table 1). Do these curves look similar for the setting corresponding to table 3?\\n\\nTo evaluate on CDT and OOD tasks, the prompt is first trained on ImageNet (same one in Table 1), and then the resulting trained model is tested on CDT and OOD tasks **without any parameter tuning**. As a result, there would be no dedicated training curves specific to the CDT and OOD tasks.\\n\\nTo analyze query efficiency for CDT and OOD, one potentially indirect way would be to increase the available budget during ImageNet training and then evaluate the performance on CDT and OOD tasks. This would provide valuable insights into how query efficiency impacts task performance. We greatly appreciate the reviewer\\u2019s thoughtful suggestion, which has inspired this idea, and we plan to include it in the final version of the paper.\\n\\n---\\n\\nWe sincerely thank the reviewer for the constructive feedback and insightful discussion. These exchanges have allowed us to conduct a more comprehensive analysis of our method and further solidify its validation.\"}", "{\"title\": \"Awaiting your response\", \"comment\": \"Dear reviewer BGp2\\n\\nAs the discussion period is closing soon, we would like to see if our response has sufficiently addressed your concerns. Should there be any remaining issues, please do not hesitate to let us know. If you find our responses satisfactory, we would be sincerely grateful if you could consider revisiting your initial rating.\\n\\nBest wishes,\\\\\\nThe authors.\"}", "{\"title\": \"Response to Reviewer BGp2 (part 2)\", \"comment\": \"**Compare to more recent works [1, 2]**\\n> From my perspective, one major weakness is that I find the competitors in the experiments are slightly old, e.g. BLACKVIP is published at CVPR'23 and BPTVLM is published at IJCAI'23. There are some more recent works like [1, 2] in this field. I think the authors should better discuss the differences between ZIP and more recent works like [1, 2], and provide fair experimental comparisons as well.\\n\\nWe understand the reviewer\\u2019s concern regarding the lack of comparisons with recent works [1, 2]. To address this, we first describe the differences between [1], [2], and our method in the table below. Additionally, we highlight that both methods are based on assumptions different from ours, making a strict comparison challenging.\\n| Method | Prompt | Optimizer | Access Permission |\\n|----|----|----|----|\\n| [1] | Hard | LLM (ChatGPT) | Loss |\\n| [2] | Soft | CMA-ES | Logits |\\n| ZIP | Soft | ZOO | Loss |\\n* First, [1] proposes a method to enable user-specific utilization of black-box VLMs by learning hard prompts through an LLM. This approach uses an LLM as an optimizer to fine-tune the VLM, and separate APIs are required for both the LLM and VLM. This makes direct comparisons with ZIP difficult as our approach operates under a single API setting.\\n* Similarly, [2] proposes a method to learn soft prompts for black-box VLMs using CMA-ES, an evolutionary algorithm. This approach assumes access to the logits of VLMs, which is a relatively relaxed black-box scenario. In contrast, our approach relies solely on the single loss value of the VLM, making direct comparisons infeasible.\\n\\nWe emphasize that our method achieves state-of-the-art results under the widely adopted setting following prior works, such as BlackVIP and BPTVLM (i.e., using soft prompts as input and receiving a single loss value as output).\\n\\n---\\n\\nThank you for the constructive comments and thoughtful suggestions. We appreciate the opportunity to address your concerns and clarify the key aspects of our work. We have provided detailed responses to each of your points, including the novelty of our method and comparisons with recent works. We hope these clarifications address your concerns.\\n\\nIf there are any additional points that you believe we should address, please let us know. Otherwise, we would be sincerely grateful if the reviewer could reconsider the overall rating in light of our responses.\\n\\n&nbsp;\\n\\n[1] Liu, Shihong, et al. \\u201cLanguage models as black-box optimizers for vision-language models.\\u201d CVPR, 2024.\\\\\\n[2] Wang, Zhengbo, et al. \\u201cConnecting the dots: Collaborative fine-tuning for black-box vision-language models.\\u201d ICML, 2024.\"}", "{\"title\": \"Dear Reviewer\", \"comment\": \"Dear reviewer BGp2\\n\\nWe sincerely appreciate your valuable feedback and constructive comments. Given that some time has passed since we shared our response, we kindly ask whether it has adequately addressed your concerns. Please let us know if there is anything else we need to address. We would be happy to discuss further. Otherwise, if you find our response reasonably satisfactory, we would greatly appreciate it if you could consider re-evaluating your initial rating. \\n\\nBest wishes, \\\\\\nThe authors.\"}", "{\"title\": \"Response to the official comment by Reviewer Tcth\", \"comment\": \"Thank you for carefully reviewing our response. We address the reviewer\\u2019s additional comments below.\\n\\n--- \\n\\n> Could you please comment on the performance of ZOO compared to m=0 (figure 9)? It seems that, for several datasets (e.g., Flowers102, Food101, FGVCAircraft, UCF101), optimizing soft prompts with ZOO actually hurts performance.\\n\\nFirst of all, we would like to clarify that the result we presented as m=0 in Figure 9 was actually \\u201cmanual prompting\\u201d rather than \\u201cno prompts\\u201d (as mentioned as \\u201cmanual prompt (m=0)\\u201d in L809). However, we realize now that what the reviewer has asked us to include as a baseline is literally \\u201cno prompts\\u201d (i.e., without any prompting at all). We have updated the paper to include the results for \\u201cno prompts\\u201d. Please let us know if we need to address the reviewer\\u2019s request further.\\n\\n\\n\\n\\n---\\n\\n> Could you also assess the statistical significance of the reported results based on the standard deviations reported for table 3? For several datasets where ZIP achieves the highest accuracy (e.g., IN, Flowers102, Food101, ImageNetV2, ImageNet-Sketch), it seems there's overlap with the std of the second-best result (usually BlackVIP).\\n\\nTo provide more statistically robust results, we conducted experiments using an extended set of 10 seeds (1\\u201310). Based on this, we performed a t-test to compare p-values for the significance test. The results showed that ZIP demonstrated statistically significant superior performance in OOD, whereas ZIP and BlackVIP exhibited comparable performance in CDT. Please refer to the table below for detailed results and analysis.\\n\\n`Cross Dataset Transfer`\\n| Method | IN | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| BlackVIP | 65.19 | 92.69 | 86.28 | 64.72 | 83.36 | 22.31 | 62.04 | 42.88 | 17.68 | 39.71 | 55.98 | 15.70 | 64.07 | 54.82 |\\n| ZIP | 65.91 | 91.69 | 85.74 | 65.64 | 84.54 | 20.67 | 60.10 | 39.39 | 21.42 | 44.43 | 54.52 | 14.39 | 61.99 | 54.65 |\\n| t-stats | 1.745 | -1.879 | -0.690 | 2.942 | 2.974 | -8.771 | -2.755 | -4.349 | 2.615 | 3.409 | -1.588 | -2.479 | -2.650 | |\\n| p-value | 0.098 | 0.076 | 0.498 | 0.009 | 0.008 | 6.45e-08 | 0.013 | 3.87e-04 | 0.018 | 0.003 | 0.130 | 0.023 | 0.016 | |\\n\\nIn CDT tasks, significance tests indicate that ZIP performs significantly better on Flowers (p=0.009), Food (p=0.008), SVHN (p=0.018), and EuroSAT (p=0.003). In contrast, BlackVIP achieves significantly higher performance on Aircraft (p=6.45e-08), DTD (p=3.87e-04), CLEVR (p=0.023), and UCF (p=0.016) (p < 0.05).\\n\\n`Out-of-Distribution Generalization`\\n| Method | ImageNet-A | ImageNetV2 | ImageNet-R | ImageNet-Sketch | Average |\\n|----|----|----|----|----|----|\\n| BlackVIP | 41.90 | 58.85 | 72.81 | 44.32 | 54.47 |\\n| ZIP | 47.94 | 59.48 | 74.64 | 45.25 | 56.82 |\\n| t-stats | 10.299 | 1.549 | 4.359 | 2.619 | |\\n| p-value | 5.66e-09 | 0.139 | 3.78e-4 | 0.017 | |\\n\\nIn OOD tasks, ZIP consistently outperforms BlackVIP in average performance (ZIP: 56.82, BlackVIP: 54.47). Notably, ZIP shows statistically significant improvements on ImageNet-A (p=5.66e-09), ImageNet-R (p=3.78e-4), ImageNet-Sketch (p=0.017) while the performance difference on ImageNetV2 (p=0.139) is smaller and not statistically significant (p > 0.05).\\n\\nBlackVIP\\u2019s strength in generalization performance stems from its design and objectives. It is specifically tailored to enhance generalization through an image-dependent prompting strategy, drawing inspiration from prior work [1]. In contrast, ZIP prioritizes query efficiency, which differentiates it from BlackVIP in terms of its primary goal. Despite these different focuses, ZIP outperforms BlackVIP in OOD and base-to-new generalization tasks while still delivering competitive performance in CDT. \\n\\nWe sincerely appreciate the reviewer for the opportunity to improve the reliability of our results. If there are any additional points to discuss, please let us know. \\n\\n&nbsp;\\n\\n[1] Zhou, Kaiyang, et al. \\\"Conditional prompt learning for vision-language models.\\\" CVPR, 2022.\"}", "{\"title\": \"Response to Reviewer Fqbz (part 1)\", \"comment\": \"We sincerely appreciate the reviewer\\u2019s recognition and the constructive feedback. We have addressed the reviewer\\u2019s specific comments below, while we remain open to any additional suggestions.\\n\\n---\\n\\n**Other ZO Optimization methods?**\\n> Comparisons with additional baseline methods in zeroth-order optimization could strengthen claims of superiority.\\n\\nWe kindly request the reviewer to clarify whether it is other BBPT methods (that leverage ZOO) or literally other zeroth-order methods (e.g., ZO-SGD, SPSA-GC [1]) that are being questioned. If it is the former, to the best of our knowledge, there are no additional BBPT methods leveraging ZOO beyond those already considered in our work. If it is the latter which involves comparisons between other ZOO algorithms and their clipped versions, such as SPSA-GC and SPSA-GC with clipping, please let us know. We will address this through additional experiments.\\n\\n---\\n\\n**Complex datasets?**\\n> Its results on ImageNet in Table 1 are comparatively modest, suggesting limitations in scalability to complex datasets. An in-depth analysis of ZIP's performance on larger, diverse datasets would clarify its robustness and potential for broader application.\\n\\nThank you for raising this concern. We would like to point out that a 5,000 API query setting may not be sufficient to fully train a complex dataset such as ImageNet. For comparison, prior work [1] used significantly more queries, with 625,000 API queries employed for ImageNet training. To provide a more thorough evaluation of the performance of ZIP, we conduct additional experiments with 20,000 API queries. The results are summarized in the table below.\\n\\n| Method | Caltech | Pets | Flowers | Food | Aircraft | SUN | DTD | SVHN | EuroSAT | Resisc | CLEVR | UCF | IN | Avg |\\n|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|\\n| BAR | 92.5 | 88.1 | 67.7 | 83.2 | 23.0 | 63.8 | 47.1 | 32.3 | 54.0 | 64.1 | 25.4 | 65.7 | 65.5 | 59.4 |\\n| BlackVIP | 93.0 | 88.0 | 64.8 | 85.0 | 22.5 | 62.3 | 44.4 | 42.3 | 57.2 | 57.1 | **28.8** | 66.1 | 66.3 | 59.8 |\\n| BPTVLM | 91.6 | 90.3 | 69.9 | 85.1 | 26.3 | 57.2 | 50.1 | 34.4 | 65.8 | 63.6 | 27.8 | 67.4 | 61.6 | 60.9 | \\n| ZIP | **94.1** | **92.4** | **71.8** | **86.9** | **27.3** | **64.4** | **52.9** | **49.9** | **66.6** | **68.4** | 28.6 | **70.0** | **67.2** | **64.7** |\\n\\nThese findings reveal several key points. With a 5,000 API query budget, ZIP achieves an ImageNet accuracy of 66.0%, comparable to strong baselines such as Manual Prompt (66.7%) and BlackVIP (65.5%). Increasing the query budget to 20,000 further improves the accuracy of ZIP to 67.2%, outperforming those baselines by margins of 0.5% and 0.9%, respectively.\\n\\nMoreover, compared to prior work that leveraged significantly more API queries (e.g., BlackVIP: 67.1% on ImageNet), ZIP delivers outstanding performance using only 20,000 API queries. This demonstrates strong scalability and effectiveness, even on challenging datasets like ImageNet.\\n\\nIf you still have any remaining concerns regarding this issue, we would sincerely appreciate it if you could recommend a complex dataset to evaluate ZIP. We will gladly consider it as part of our future work and look forward to sharing the results based on your suggestion.\\n\\n&nbsp;\\n\\n[1] Oh, Changdae, et al. \\u201cBlackVIP: Black-box visual prompting for robust transfer learning.\\u201d, CVPR, 2023.\"}", "{\"metareview\": \"The paper tackles an important problem in black-box prompt tuning, i.e., existing methods rely on excessive queries for model update. The solutions proposed in this work include a low-rank reparameterization method for reducing learnable parameters and a gradient-clipping method for reducing the variance of zeroth-order gradients. The paper received four reviews with 3x borderline accept and 1x borderline reject. However, the \\\"negative\\\" reviewer indicated that he/she is not familiar with the topic and therefore not confident in his/her decision (the AC discounts this review). The rest of the reviewers are generally positive about this work: they found the paper well-motivated, the idea novel, and the performance strong.\", \"additional_comments_on_reviewer_discussion\": \"The major concerns raised by the reviewers are about lack of comparisons with some baselines and lack of ablation studies on the combination of different modules proposed in this work. The authors have provided a comprehensive rebuttal to address these issues. The AC has read the rebuttal and conversations between the authors and the reviewers and found them satisfactory. The AC strongly suggests that the authors add the additional ablation results and the results of applying their method to different model families to the camera ready.\"}" ] }
2OMyAFjiJJ
Flow matching achieves almost minimax optimal convergence
[ "Kenji Fukumizu", "Taiji Suzuki", "Noboru Isobe", "Kazusato Oko", "Masanori Koyama" ]
Flow matching (FM) has gained significant attention as a simulation-free generative model. Unlike diffusion models, which are based on stochastic differential equations, FM employs a simpler approach by solving an ordinary differential equation with an initial condition from a normal distribution, thus streamlining the sample generation process. This paper discusses the convergence properties of FM in terms of the $p$-Wasserstein distance, a measure of distributional discrepancy. We establish that FM can achieve an almost minimax optimal convergence rate for $1 \leq p \leq 2$, presenting the first theoretical evidence that FM can reach convergence rates comparable to those of diffusion models. Our analysis extends existing frameworks by examining a broader class of mean and variance functions for the vector fields and identifies specific conditions necessary to attain these optimal rates.
[ "flow matching", "generative model", "convergence rate", "optimality" ]
Accept (Poster)
https://openreview.net/pdf?id=2OMyAFjiJJ
https://openreview.net/forum?id=2OMyAFjiJJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w9avvZGAGP", "vpAT3kvp29", "s1OnKjDEBc", "jgN3YzoVcq", "iGl9XuR3St", "djeLD44O1C", "crreJbwQuo", "ap7sgPllbN", "YewcvsaWYa", "VWLS9Bm1Oi", "UJea3s0DSW", "OQuBHAtcfz", "IU5Y3PRD9B", "EyrQ0Jk3qk", "8xc7ZL50J2", "4sToYQwNBC" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review" ], "note_created": [ 1730680176569, 1732020747601, 1733192527738, 1733183026389, 1734025271135, 1732485513321, 1730170233136, 1732020899332, 1732021218698, 1732871218487, 1732558799686, 1732021124518, 1737523964823, 1730647982768, 1732021434321, 1730580691772 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9156/Reviewer_nHx4" ], [ "ICLR.cc/2025/Conference/Submission9156/Authors" ], [ "ICLR.cc/2025/Conference/Submission9156/Reviewer_nHx4" ], [ "ICLR.cc/2025/Conference/Submission9156/Reviewer_SoUb" ], [ "ICLR.cc/2025/Conference/Submission9156/Area_Chair_haGL" ], [ "ICLR.cc/2025/Conference/Submission9156/Authors" ], [ "ICLR.cc/2025/Conference/Submission9156/Reviewer_SoUb" ], [ "ICLR.cc/2025/Conference/Submission9156/Authors" ], [ "ICLR.cc/2025/Conference/Submission9156/Authors" ], [ "ICLR.cc/2025/Conference/Submission9156/Authors" ], [ "ICLR.cc/2025/Conference/Submission9156/Reviewer_yJWQ" ], [ "ICLR.cc/2025/Conference/Submission9156/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9156/Reviewer_yJWQ" ], [ "ICLR.cc/2025/Conference/Submission9156/Authors" ], [ "ICLR.cc/2025/Conference/Submission9156/Reviewer_xqeo" ] ], "structured_content_str": [ "{\"summary\": \"The paper provides estimates for the 2-Wasserstein distance for the sample-based distribution obtained in the Flow-Matching framework relative to the exact distribution. These estimates depend on the number of samples used in training, the smoothness of the true distribution as an element of the Besov space, the asymptote of the growth of the conditional map at the initial time instant.\\nThe paper considers the early stopping mode of the ODE, when the solution stops at time $T_0<1$, and the estimates of $T_0$ are also given.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is the first to present estimates for the Flow Matching framework, which shows that almost optimal minimax converges rates are achieved under several assumptions.\", \"The paper is well written\"], \"weaknesses\": \"This paper contains many points in common with the paper cited therein [1]. In particular, using Besov space for target density, B-splines for its approximation, etc. Many estimates are based on thouse from [1], see, for example, Appendix A.4--A.5 of the presented paper, where the citation on [1] are explicit. In paper [1] diffusion models are considered, but as shown in paper [2], Flow Matching approach includes, under certain conditions, Diffusion models approach. Thus, generalization or obtaining similar results for Flow Matching is rather straightforward. Namely, in essence, the difference is to use Alekseev-Gr\\u00f6bner Theorem (Lemma 16 about error of a perturbed solution of ODE) instead of Girsanov\\u2019s Theorem (Proposition D.1 of [1] for error of a perturbed solution of SDE).\\nOne of the main differences is the presence in the estimates of the degree of growth of the parameter $\\\\sigma_t$ at 1, but the authors come to the well-known (empirical) conclusion that the optimal asymptotics is $\\\\sqrt t$. Does this provide the first theoretical justification for this empirically observed optimal scaling? How can one intuitively realize that the degree of $\\\\sigma_t$ growth near the time point $t=1$ is important if the ODE solution is considered on the interval $[0, T_0]$, where $T_0<1$?\\n\\n\\n[1] Kazusato Oko, Shunta Akiyama, and Taiji Suzuki. Diffusion models are minimax optimal distribution estimators. volume 202, pages 26517\\u201326582. PMLR, 4 2023\\n\\n[2] Aaron Lipman, Ricky T Q Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2023\", \"questions\": [\"How do estimates change if you take a distribution other than the Gaussian distribution as the initial distribution $P_{[0]}$?\", \"Can the obtained estimates be easily extended to the case of estimation error in the total variation (TV) distance?\", \"Would your estimates change if you use different heuristics for Flow Matching, such as OT-minibatch?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Responses\", \"comment\": \"We thank all the reviewers for their constructive feedback. Below, we address the specific questions and concerns raised. We appreciate your insights and look forward to further discussions to improve our manuscript. We have uploaded a revision, in which we have addressed your comment. Some major changes in the revised manuscript are highlighted in blue for clarity.\\n\\nSeveral reviewers have raised concerns regarding the novelty of our generalization analysis compared to that of Diffusion Models (DM). As a global response, we would like to clarify the relationship between the generalization analysis of Flow Matching (FM) and DM.\\n\\nWhile it is true that the probability dynamics of the reverse diffusion model can be represented via an ODE, specifically the probability flow ODE (Song et al., 2021), this does not imply equivalence in the generalization analysis of DM and FM.\", \"the_vector_field_for_the_probability_flow_ode_is_expressed_as\": \"$$\\nv_t(x_t)=a_t x_t + b_t \\\\nabla \\\\log p_t(x_t).\\n$$\\nHere, $a_t$ and $b_t$ are constants related to $\\\\sigma_t$ and $m_t$ (or $\\\\beta_t$ in the standard DM parameterization). In DM, only the score function $\\\\nabla \\\\log p_t(x_t)$ is estimated, and its generalization analysis primarily evaluates the accuracy of this score estimator. In contrast, for FM, the generalization analysis involves evaluating the accuracy of the entire vector field. As evident from the equation above, the accuracy of the score estimation is insufficient to ensure the accuracy of the vector field.\\n\\nFurthermore, in DM or SDE-based frameworks, bounds on KL divergence and Total Variation (TV) distance can be derived from score function estimates via Girsanov\\u2019s theorem. However, to the best of our knowledge, no analogous KL bound exists for ODEs in terms of the vector field. This highlights a fundamental distinction between the generalization analysis for ODEs and SDEs.\"}", "{\"comment\": \"I thank the authors for the answers. I have read the responses to the other reviewers and the updated version of the article. I keep my score 6.\"}", "{\"comment\": \"Apologies for the delay. Thank you very much for your response and the answers to my questions. I am happy to keep my initial positive rating.\"}", "{\"metareview\": \"The paper shows flow matching, or ODE-based generative models, achieves almost minimax optimal convergence, which complements the prior literature (e.g., Oko et al) on the minimax optimality of SDE-based generative models. Given the significance of flow matching, this result is a welcomed addition to the growing literature, and will be of interest to the ICLR audience.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, it was clarified how the results here are different from Oko et al, e.g., using the Alekseev-Grobner Theorem instead of the Girsanov\\u2019s Theorem.\"}", "{\"title\": \"Reviewers' responses would be appreciated\", \"comment\": \"Dear Reviewers,\\n\\nWe submitted our responses to your review comments several days ago and provided detailed comments on your questions and concerns. We would appreciate it if you could give us any further feedback.\\n\\nThank you,\\nAuthors\"}", "{\"summary\": \"This paper proves an almost minimax optimality result for a class of flow models. Previously, Oko had shown that diffusion models are minimax optimal under the 1-Wasserstein distance. This paper builds on Oko to show that a class of FMs with terminal Gaussian distribution and paths of the form x_t = \\\\sigma_t x_0 + m_t x_1 (which includes diffusion as a special case) are almost minimax optimal, with a parameter kappa determining the non-optimality. They show this under the p-Wasserstein distance for 1 <= p <= 2.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Provides new theoretical results for a family of flow models, showing almost minimax optimality (with 'almost' depending on a specific parameter). The paper is rigorous, clearly written, and clearly places the results in the context of prior work.\", \"weaknesses\": \"Please see Questions.\", \"questions\": \"L223: Kappa = 1/2 corresponds to diffusion-FM, correct? So the result actually says that that only diffusion-FM is optimal within the family of FMs you consider? If so, L230 'FM is as strong as diffusion models' seems somewhat misleading -- it seems more like diffusion is actually stronger than FM, except when FM is equivalent to diffusion. Please correct me if I'm wrong; otherwise might want to state this differently.\", \"l225\": \"Can you elaborate on the ways in which your proof technique differs significantly from Oko's? (Since Oko's result is a special case of yours for diffusion-FM and 1-Wasserstein.)\", \"l133\": \"Can anything be said in the more general non-Gaussian case of FMs?\", \"l177\": \"Notation not super clear here. What is P_[1] vs p_[1]?\", \"theorem_1\": \"It seems like you are further restricting sigma to have the form (1-\\\\tau)^\\\\kappa?\", \"l222\": \"Typo 'revserse'\", \"l224\": \"Diffusion can be expressed as an ODE so I am not sure what you mean here?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' replies\", \"comment\": \"Thank you for your helpful comments. We reply to your comments.\", \"w1\": \"Overlap with Oko et al 2023.\\n\\nWhile we acknowledge some overlap in mathematical techniques with Oko et al. 2023, there are significant difference also as listed below. \\n\\n(1) We have revealed the role of parameters $\\\\sigma_t$ and $m_t$ on the upper bound of the convergence rates. No such results have been derived in the literature for FM and DM. To obtain the results, the mathematical techniques for the proof of main theorems (Appendix C) require significant refinements from Oko et al 2023. \\n\\n(2) As detailed in the global comments, the generalization analysis of DM and FM have significant difference. Beyond this difference, you might notice that the bound $W_2(P_0, P_{T_0})$ requires completely novel arguments using the uniform Lipschitz constant in Lemma 10, which are based on the form of $\\\\sigma_t$ and $m_t$ considered in this paper and the assumption (A5). \\n\\n(3) Appendices A4 and A5 summarize known results (e.g., B-splines and Gaussian integral bounds) and do not intend novelty. We have explicitly stated this for clarity in the revision.\\n\\n(4) We do not think $\\\\sigma_t = \\\\sqrt{t}$ is an empirically popular choice for FM unlike DM. In fact, the most popular choice of path construction for FM is $x_t = (1-t)x_1 + t x_0$, which corresponds to $\\\\sigma_t = t$ and $m_t = 1-t$ ($\\\\kappa = 1$).\", \"w2\": \"degree of growth of $\\\\sigma_t$\\n\\nSection 3.1 discusses how $\\\\sigma_t$ acts as a smoothing parameter, analogous to the bandwidth in kernel density estimation. The early stopping time $T_0$ depends on $n$ ($T_0 = n^{-R_0}$ in the paper), and the growth rate $\\\\kappa$ in $\\\\sigma_t=t^\\\\kappa$ controls the smoothing parameter at the stopping time $t=T_0$. It is well known that the smoothing parameter of a nonparametric estimator essentially affects on the convergence rate. This is why $\\\\kappa$ is important for the convergence rate and the performance of the FM estimator.\", \"q1\": \"other than Gaussian distribution\\n\\nThis is an interesting question, and in fact we are working on this extension. In the current proof, special properties of Gaussian distribution help us making bounds at many places. Mathematically, it is challenging to extend it to general distributions.\", \"q2\": \"Extension to TV\\n\\nAs discussed in Global Responses, it is not straightforward to extend the (almost) minimax optimal convergence rate to the TV distance. This also illustrates the significant difference between the ODE and SDE for analyzing the generalization bounds.\", \"q3\": \"Heuristics for FM such as OT-minibatch.\\n\\nThe current analysis does not depend on the **joint** distribution of the source $N(0,I_d)$ and the target $P_{true}$. Thus, for any algorithms to match samples in the two distributions, including OT-minibatch, the theoretical results hold. We have mentioned it at line 158 in the revision .\"}", "{\"comment\": \"Weakness: Thank you for your suggestion to include a figure for explanation. We agree that a visual explanation could improve accessibility, and we have included a figure to illustrate the basic idea of the time division in Appendix E and mentioned it in line 458 in the revision.\\n\\nQ1 (probability estimation): FM methods do not estimate a density, but provide samples that approximate the true probability. We thus used the terminology *probability estimation*.\", \"q2\": \"(i.i.d. samples) We agree. We have reflected all of this in the revision.\\n\\nQ3, 4, 6 (typos): We have fixed as many typos as possible in addition to those you pointed out. \\n\\nQ5 (support) The cube size does not affect the result. We have mentioned it in line 166 in the revision.\"}", "{\"title\": \"Thank you for your clarification\", \"comment\": \"We sincerely apologize for misunderstanding your initial comments and greatly appreciate your detailed clarification. Your insights have allowed us to better address your concerns, particularly regarding the assumptions (A1)-(A5). Below, we provide a detailed discussion of these assumptions and their role in our theoretical framework. We are also willing to include these points in the final version of our paper if reviewers deem it beneficial.\\n\\n### **1. Assumptions (A3) and (A4):**\\n\\nThese assumptions concern the algorithm parameters $\\\\sigma_t$ and $m_t$, which are specified by the user. They are satisfied by many standard FM methods. \\n\\n### **2. Assumptions (A1), (A2), and (A5):**\\n\\nThese assumptions reflect the properties of the target probability, which is unknown in usual settings. While we acknowledge that verifying these assumptions for specific datasets is infeasible, The primary purpose of the current and many other theoretical work is to compare the potential ability of estimators or learning methods by the worst-case analysis over a function class, revealing the dependence on important parameters such as dimensionality and smoothness degree. . These rates provide a comparative understanding of the estimator's performance.\\n\\nFor instance, as discussed in Section 3.1, KDE with a Gaussian kernel achieves a minimax rate of $O(n^{-4/(4+d)})$, while using an optimal kernel can yield a better rate of $O(n^{-2s/(2s+d)})$ for the densities on $[0,1]^d$ with smoothness $s$. This comparison informs practical choices by highlighting the importance of kernel selection based on expected smoothness. Similarly, our work demonstrates that FM methods attain the almost minimax optimal convergence rate, which is comparable to DM methods, offering key theoretical insights into FM\\u2019s ability.\\n\\n### **3. Theoretical Role of (A1)**\\n\\nWe understand your concern regarding the practicality of (A1). While it may not always align with real-world data, it is critical for ensuring smoothness conditions that allow rigorous analysis. Relaxing this assumption is an important direction for future work. Nonetheless, (A1) does not necessarily impose overly unrealistic conditions; for example, functions with smoothness $s$ but not $s+1$ may still be differentiable almost everywhere (e.g., ReLU is non-differentiable only at the origin). This ensures that the function class considered under (A1) is both theoretically rich and practically relevant.\\n\\n### **4. Besov space**\\n\\nTo address your question regarding Besov spaces, Tong et al. (2023) did not use such spaces because their analysis did not involve convergence rates in large-sample asymptotics. In contrast, our work focuses on deriving minimax rates, which inherently depend on the smoothness of the target density. Besov spaces, despite their complexity, have been widely recognized as effective tools for formalizing smoothness. They have been extensively used in theoretical studies on approximation and estimation accuracy (DeVore et al., 1992; Donoho & Johnstone, 1995), and we adopt this tradition to characterize FM methods rigorously.\\n\\n### References: \\n\\nR. A. DeVore, B. Jawerth and B. J. Lucier. (1992) Image compression through wavelet transform coding. *IEEE Transactions on Information Theory,* 38 (2) 719-746. \\n\\nDonoho, D. L., & Johnstone, I. M. (1995). Adapting to Unknown Smoothness via Wavelet Shrinkage.\\u00a0*Journal of the American Statistical Association*,\\u00a0*90* (432), 1200\\u20131224.\"}", "{\"title\": \"mainly about above points (1) and (2) + change of mind on numerical rating\", \"comment\": \"Thank you for the reply, and for replying to my questions.\\n\\nAbout point (2)\\n\\n----------------\", \"apology\": \"About \\\"toy model\\\" I realize now that it was not clear what I meant, I am truly sorry for not having been specific before. I thought it would be clear that this statement has to do with the hypotheses of your main theorem, and I was expecting a discussion of these hypotheses/assumptions, but this was maybe clear only to me and not to an outside reader of my comment, sorry about that. Also, as I said, most likely no mathematical tools are available to improve things. This is also so for the diffusion version of your results (i.e. with previous work).\\n\\n*Ok so now to the concrete question.* To show that your assumptions are not over-indulgent (not toy-model-like), it would be great if you discuss to what extent (A1) - (A5) are verified by actual datasets. In particular, the regularity assumption (A1) seems hard to believe to be realistic, and I'd be happy to be proven wrong. The paper has no experiments, so the way you would prove that is by citing other papers that do have regularity verifications, but I don't know of such papers to be honest. That's why, for the time being, I stand by my previous sentence:\\n\\n\\\"Like in similar papers for other models, some of the setups look like toy models, this may be because the mathematical theory is unavailable in general.\\\"\\n\\nDid Tong et al. had your Besov assumption, or did they have something stronger? I have missed that, but if you can point to where they have an equal or stronger assumption I'd be interested to know. But even if Tong et al. did have stronger assumptions than you do, and even if that paper is \\\"popular\\\", this still looks like a toy model until it is compared to actual data.\\n\\n*Note that toy models are good for our intuition. Saying \\\"toy\\\" makes us think of children and simple stuff, and this makes it seem dismissive, which was not my intention. I nevertheless think this is a weakness of the paper.*\\n\\n\\nAbout point (1): I see that actually you have some differences to Oko et al., but the paper's techniques can't help but feel incremental compared to previous work.\\n\\nAnyway, even though your replies did not change much my mind about the textual content of my evaluation, I now realize that my initial \\\"numerical grading\\\" (grade 5) of the paper was perhaps too severe, I think rather than \\\"below acceptance threshold\\\" it should go \\\"above acceptance threshold\\\". This is mainly because you are right that people care about these results, it gives them something to cite regarding flow matching\"}", "{\"title\": \"Authors' replies\", \"comment\": \"Thank you for your comments. We write our replies to them.\\n\\n(1) Novelty: As detailed in our response to Reviewer nHx4 (W1), the dependence of convergence rates on the choice of $\\\\sigma_t$ and $m_t$, and the bound around the final time point of ODE are significantly different from the existing literature and novel to the best of our knowledge. \\n\\n(2) The setup covers various FM models, including the most popular OT-CFM (Tong et al 2023). Note that, as in the reply to Q3 of yJWQ, the way of constructing a coupling does not affect the analysis in the paper. We would greatly appreciate it if you could specify which aspects appear as toy models to address your concerns better.\", \"q1\": \"As discussed in line 446, the constant $R_0$ can be taken so that $R_0 \\\\geq (s+1)/min(\\\\kappa,\\\\bar{\\\\kappa})$. However, we cannot know $s$ in many practical cases, so it is not easy to set $R_0$ as this minimum value.\", \"q2\": \"$\\\\sigma_{[\\\\tau]}=1-\\\\tau$ ($\\\\kappa = 1$) is the most popular choice of FM, and $\\\\sigma_{[\\\\tau]}=(1-\\\\tau)^{1/2}$ ($\\\\kappa = 1/2$) is the diffusion path (Lipman 2023, e.g.). The choice in Theorem 1 covers practically popular ones. If my answer does not reply to your question, please ask further.\", \"q3\": \"For $\\\\kappa = 1/2$, the upper bound of Theorem 9 is of the same rate (up to an arbitrary positive $\\\\delta$) as the lower bound of Wasserstein distance in the minimax sense, as shown in Proposition 2. So, the bound is tight for $\\\\kappa = 1/2$. For other values of $\\\\kappa$, there are no theoretical arguments for the tightness of the obtained convergence rate. As discussed in (2) above, the result holds for a wide range of FM, such as OT-CFM.\", \"typo\": \"Thank you for pointing it out. We have fixed it in the revision.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper applies the same framework as in Oko et al. (a paper on convergence rates of diffusion models as timesteps and/or sample size goes to infinity), to Flow Matching. Due to the application to a different model, some of the proofs are different but the results are of the same strength.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"It's interesting to know that FM have the same standard guarantees as DMs.\", \"weaknesses\": \"Mathematically, the paper does not shine as to novelty, it mostly chains known estimates, and applies them to FM.\\nLike in similar papers for other models, some of the setups look like toy models, this may be because the mathematical theory is unavailable in general.\", \"questions\": \"1) In Theorem 1, what is a good bound for R_0?\\n2) Still in theorem 1, why do you chose $\\\\sigma_{[\\\\tau]}$ in that form? In what applications does it appear that way?\\n3) Can you test the sharpness of the bounds of theorem 9 for some FM famous use cases? \\n\\nat line 310 \\\"in general we generally consider\\\" may be rephrased\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' replies\", \"comment\": \"Thank you for your helpful comments. We provide our replies below.\\n\\nQ1 (comparison between FM and DM): More precisely, Theorem 9 tells that the derived upper bound (almost) attains the universal minimax lower bound at $\\\\kappa=1/2$ in terms of the convergence rate. For other $\\\\kappa > 1/2$, the derived upper bound does not attain the universal lower bound, and there is still a possibility that the convergence rate for $\\\\kappa > 1/2$ can be improved. To reflect this situation, we would improve the expression as \\u201cBoth FM and DM can attain the same almost optimal minimax convergence rate for generalization error\\u201d (line 230)\\n\\nQ2 (difference from Oko et al. 2023): As detailed in response to nHx4 (W1), there are two significant differences from Oko et al. 2023 in terms of proof techniques. \\n\\n(1) To relate the Wasserstein distance and the squared difference of vector fields (scores in the case of DM), the Alekseev-Grobner Theorem is used for FM, while Girsanov\\u2019s Theorem is for DM. This causes a completely different argument to bound between the final ODE solution and the true probability $W_2(P_0, P_{T_0})$; the analysis of FM requires the uniform Lipschitz constant in Lemma 10, which is based on the form of $\\\\sigma_t$ and $m_t$ considered in this paper and the assumption (A5). \\n\\n(2) The extension of the paths with $\\\\sigma_t$ and $m_t$ needs more elaborate arguments on the analysis. As a result, we can obtain the influence of the decreasing rate $\\\\kappa$ in $\\\\sigma_t \\\\sim t^\\\\kappa$ on the upper bound of the convergence rate.\", \"q3\": \"Extension to the non-Gaussian case is an interesting problem, and in fact, we are working on this extension. In the current proof, special properties of Gaussian distribution help us make bounds at many places. Mathematically, it is challenging to extend it to general distributions.\", \"q4\": \"We state at the beginning of Section 2 that we use $P_a$ for probability distribution and $p_a$ for its density function. We have rewritten the statement of Theorem 1 in the revision.\", \"q5\": \"This assumption is not so restrictive. Because $\\\\sigma_t$ is a decreasing function of $t$ and should approach 0 as $t\\\\to 0$, it is natural to set it in the form $t^\\\\kappa$ around 0. In practice, the OT flow uses $\\\\kappa = 1$ and diffusion flow $\\\\kappa = 1/2$.\\n\\nQ6 (typo): We have fixed all the typos in reviewers\\u2019 comments. \\n\\nQ7 (line 224; the relation between FM and DM) See our Global Responses.\"}", "{\"summary\": \"This paper provides near-minimax convergence guarantees for the flow matching (FM) algorithm for $p$-Wasserstein distances. Distinct from diffusion models, FM use ordinary differential equations (ODEs) at inference time instead of stochastic differential equations (SDEs). Their estimator is based on time-partitioned estimators, similar to the analysis of Oko et al (2023). They adopt the estimator for more general parameters (specifically the mean and covariance parameters) to provide an estimator for flow matching.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper provides (to my understanding) the first estimation rates for flow matching in the context of classical statistical estimation rates. This paper is similar in spirit to the FM paper of Lipman et al (2023), where they use different combinations of mean-variance parameters to define their path. While this work leverages many ideas from Oko et al. (2023), what is especially interesting is this idea that \\\"optimal parameter choices\\\" lead to minimax convergence rates, whereas other choices do not enjoy the same statistical rates.\", \"weaknesses\": \"The paper could be more clearly written, and the main text is very technical. This paper would benefit substantially from a small figure explaining the construction of the estimator at a high level, and also explaining the reason why the full minimax estimation is not possible. These are overall minor points, but I do believe the paper would benefit greatly from these modifications overall.\", \"questions\": \"My comments are minor (these are mostly typos I found, but this is far from exhaustive)\\n\\n1. Line 188 \\\"for probability *density* estimation\\\"?\\n2. L193 (this might happen in many places): I believe grammatically it makes sense to say \\\"i.id. sample*s*\\\" instead of a single sample\\n3. L222: *reverse* not revserve\\n4. L254: \\\"respectivly\\\" is misspelled\\n5. This is a question: is there a clean way to track the dependence on the diameter of the set of the support? The assumptions assume the support of the density is in the unit cube. What is the dependence if the radius was arbitrary? This might fall outside the scope of the paper, but I'm curious if the authors have the answer\\n6. L516: \\\"diffrence\\\" is misspelled\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2OANNtX3T5
EXPLORING RESPONSE UNCERTAINTY IN MLLMS: AN EMPIRICAL EVALUATION UNDER MISLEADING SCENARIOS
[ "Yunkai Dang", "Mengxi Gao", "Yibo Yan", "Xin Zou", "Yanggan Gu", "Aiwei Liu", "Xuming Hu" ]
Ensuring that Multimodal Large Language Models (MLLMs) maintain consistency in their responses is essential for developing trustworthy multimodal intelligence. However, existing benchmarks include many samples where all MLLMs exhibit high response uncertainty when encountering misleading information, requiring even 5-15 response attempts per sample to effectively assess uncertainty. Therefore, we propose a two-stage pipeline: first, we collect MLLMs’ responses without misleading information, and then gather misleading ones via specific misleading instructions. By calculating the misleading rate, and capturing both correct-to-incorrect and incorrect-to-correct shifts between the two sets of responses, we can effectively metric the model’s response uncertainty. Eventually, we establish a Multimodal Uncertainty Benchmark (MUB) that employs both explicit and implicit misleading instructions to comprehensively assess the vulnerability of MLLMs across diverse domains. Our experiments reveal that all open-source and close-source MLLMs are highly susceptible to misleading instructions, with an average misleading rate exceeding 86%. To enhance the robustness of MLLMs, we further fine-tune all open-source MLLMs by incorporating explicit and implicit misleading data, which demonstrates a significant reduction in misleading rates
[ "UNCERTAINTY", "MLLMs", "Misleading" ]
Reject
https://openreview.net/pdf?id=2OANNtX3T5
https://openreview.net/forum?id=2OANNtX3T5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zdzYDgFHBI", "zL2zBicCWv", "vjxmoVDmdu", "q7I8IvcZSR", "pTHzSfOrKO", "p8vNRSkdOD", "p6IcAoD51P", "njuTbCcyob", "mwKFl3VDyU", "kIyNpJl0xm", "haVnimRskE", "gvXhQecnnf", "ejduFX7ZmV", "aUXWDAdXPN", "ZR8ETzmld5", "YXlKdS5Uuy", "UgJH99F9dC", "P4ryPqZHn3", "MJlulJkMik", "JjiYhpudaX", "J6plsU9OsD", "IENIrBcTYy", "Hx3Omw4ugI", "HRszUFLabC", "Gw1mJLTW5H", "FdrVAU7QLn", "Dr4s7JT9XP", "BXNuBqUCPA", "92L5eCZGCV", "6bHINx86tz", "6XoFUlf2cr", "12PZs73M7e" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732588091037, 1732281830931, 1732281698546, 1732682698944, 1732118328769, 1732682651742, 1732281895074, 1732587901216, 1732117034930, 1732588030511, 1732116825058, 1732616649611, 1730768261270, 1732120117394, 1730498495763, 1732480525341, 1732117003259, 1729357375539, 1732116751735, 1734830845858, 1732281857218, 1732116581056, 1732117488401, 1732682418727, 1732117399094, 1732118390069, 1732116966899, 1730785257742, 1732118724031, 1737523575935, 1732119760388, 1732587966843 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Reviewer_LrHb" ], [ "ICLR.cc/2025/Conference/Submission3437/Reviewer_7azo" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Reviewer_zSd1" ], [ "ICLR.cc/2025/Conference/Submission3437/Reviewer_zSd1" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Reviewer_LrHb" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Area_Chair_kCxr" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Reviewer_t1Gv" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ], [ "ICLR.cc/2025/Conference/Submission3437/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer LrHb,\\n\\nThank you very much for your invaluable feedback on our paper. We have meticulously reviewed each of your points and endeavored to address them thoroughly. We would greatly appreciate it if you could review our responses and let us know if you have any additional comments regarding the paper or the rebuttal. We are eager to embrace all your critiques and integrate them into our work.\\n\\nThank you!\"}", "{\"comment\": \"> Q2: Moreover, since the implicit misinformation is generated by GPT4o, which is also evaluated on the benchmark, will it incur evaluator bias?\", \"a2\": [\"Thank you for raising this important concern regarding potential evaluator bias. we conducted an additional evaluation by randomly selecting 100 samples and comparing the outputs generated by other open-source and closed-source models. The findings are summarized in the table below. The results indicate that:\", \"**Human-generated instructions:** The implicit misleading instructions generated by human are produced more slowly, which reflects the manual effort involved in creating implicit misleading information.\", \"**No bias implicitness measurment:** To ensure fairness and mitigate biases, we mask the non-implicit information (including answers and option letters) potentially revealed due to the model's limitations within the generated misleading instructions. By comparing the misleading rates obtained from the instructions before and after masking, we can objectively measure the level of implicitness without bias. When the original misleading rate exceeds a certain threshold, a larger reduction in the misleading rate after masking indicates a lower degree of implicitness in the instructions.\", \"**Open-source models** demonstrate a lower rate of misinterpretation, suggesting they may be less susceptible to implicit misleading information.\", \"**Closed-source models**, on the other hand, exhibit a higher rate of misinterpretation and a greater degree of implicitness, indicating a stronger vulnerability to implicit misinformation.\"], \"table\": \"The comparison of open-source and closed-source MLLMs, as well as humans, regarding their generation of implicit misleading instructions.\\n Model | MR | Masked MR | Time (s/it) |\\n|-----------------------|--------|------------------|-------------|\\n| MiniCPM-v-v2 | 39.71% | 18.98% (\\u219320.73%) | 2.26 |\\n| Phi-3-Vision | 45.10% | 34.24% (\\u219310.86%) | 8.86 |\\n| Yi-VL-6b | 27.49% | 21.84% (\\u21935.65%) | 2.33 |\\n| Qwen-VL-Chat | 35.65% | 31.95% (\\u21933.70%) | 2.89 |\\n| Deepseek-VL-7b-Chat | 42.10% | 22.51% (\\u219319.59%) | 2.78 |\\n| LLaVA-NeXT-7b-Vicuna | 30.48% | 33.27% (\\u21912.79%) | 5.4 |\\n| MiniCPM-Llama3-v2.5 | 44.06% | 38.23% (\\u21935.83%) | 3.61 |\\n| GLM4V-9B-Chat | 31.01% | 31.18% (\\u21910.17%) | 6.98 |\\n| InternVL-Chat-V1_5 | 32.91% | 31.79% (\\u21931.12%) | 7.71 |\\n| GLM-4V | 45.31% | 42.01% (\\u21933.30%) | 4.49 |\\n| GPT-4o | 54.23% | 54.90% (\\u21910.67%) | 5.20 |\\n| Human | 52.19% | 52.83% (\\u21910.64%) | 240 |\"}", "{\"comment\": \"We greatly appreciate the reviewer\\u2019s detailed feedback and constructive insights, which have significantly helped us refine our study. Below, we address each of the key points raised:\\n\\n>Q1: The measurement might be dependent to the misleading information themselves: the content, position, length, etc might all influence this metric.\", \"a1\": \"Thank you for raising this insightful point. To address this concern, we conducted additional experiments to evaluate the effects of position, length, and content of misleading information on the misleading rate.\\n- **Content**: In the manuscript, we present 11 explicit prompts, which are categorized into four types. The results indicate that explicitly providing the model with the answer increases the misleading rate.\\n- **Position**: To analyze the influence of position, we tested scenarios where misleading information was inserted either before (after the system prompt) or after the question. The results indicate that the misleading rate is lower when explicit misleading information is inserted before the question compared to when it is inserted after the question. (Notably, in the paper, all misleading information is inserted after the question.)\\n- **Length**: We increased the length of the phrase \\\"The true answer is\\\" by twofold and threefold, respectively, to examine whether the phrase's length impacts the results. The results indicate that increasing the length alone, without modifying the content, has minimal impact on the misleading rate.\\n\\n#### Table: the average misleading rate of 11 explicit prompt templates on 12 open-source MLLMs.\\n| **Model** | **Factors** | **Apparent** | **Argue** | **While** | **Obvious** | **Context** | **Given** | **Evidence** | **Correct** | **GPT** | **User** |\\n|-------------------------|-------------|--------------|-----------|-----------|-------------|-------------|-----------|--------------|-------------|----------|-----------|\\n| **Average** | **65.16%** | **67.43%** | **65.58%**| **69.92%**| **62.18%** | **66.30%** | **67.30%**| **64.28%** | **63.89%** | **60.02%**| **48.74%**|\\n\\n\\n\\n##### The misleading rates across different positions and lengths in 12 open-source MLLMs.\\n| **Model** | **Before(Repeat 1)** | **After(Repeat 1)** | **Repeat 2** | **Repeat 3** |\\n|-------------------------|-------------|--------------|-----------|-----------|\\n| **Average T-F** | **57.17%** | **79.38%** | **76.47%**| **76.96%**|\\n| **Average F-T** | **56.40%** | **78.08%** | **79.50%**| **79.30%**|\"}", "{\"comment\": \"#### Table: Comparison of explicit and implicit misleading instruction performance on different types tasks before and after fine-tuning.(Complete table can be found in the revised version paper, Table 36,37.)\\n| Model | Perception (T-F) | Reasoning (T-F) | Mastery (T-F) |\\n|------------------------|----------------------|----------------------|----------------------|\\n| MiniCPM-v-v2 | 5.33% (\\u219378.37%) | 7.28% (\\u219366.66%) | 14.63% (\\u219359.73%) |\\n| Phi-3-vision | 7.26% (\\u219378.62%) | 6.62% (\\u219352.29%) | 6.86% (\\u219356.46%) |\\n| Yi-VL-6b | 9.42% (\\u219380.91%) | 21.84% (\\u219366.49%) | 46.92% (\\u219347.55%) |\\n| Qwen-VL-Chat | 1.76% (\\u219390.06%) | 7.78% (\\u219376.00%) | 12.81% (\\u219368.33%) |\\n| Deepseek-VL-7b-Chat | 1.42% (\\u219366.34%) | 3.27% (\\u219354.71%) | 6.78% (\\u219353.62%) |\\n| LLaVA-NeXT-7b-vicuna | 4.81% (\\u219375.37%) | 10.72% (\\u219344.31%) | 15.68% (\\u219340.15%) |\\n| MiniCPM-Llama3-v2.5 | 0.73% (\\u219371.04%) | 1.10% (\\u219363.18%) | 1.75% (\\u219360.19%) |\\n| GLM4V-9B-chat | 4.61% (\\u219339.82%) | 8.39% (\\u219335.57%) | 23.68% (\\u219335.80%) |\\n| CogVLLM-chat | 8.13% (\\u219356.29%) | 8.15% (\\u219337.65%) | 32.40% (\\u219316.89%) |\\n| InternVL-Chat-V1-5 | 0.60% (\\u219349.74%) | 2.85% (\\u219349.15%) | 9.93% (\\u219350.51%) |\\n| LLaVA-Next-34b | 2.12% (\\u219375.54%) | 3.25% (\\u219384.97%) | 2.25% (\\u219385.77%) |\\n| Yi-VL-34b | 9.13% (\\u219371.55%) | 17.12% (\\u219356.17%) | 30.48% (\\u219337.84%) |\\n| **Explicit Average** | **4.61% (\\u219369.47%)** | **8.20% (\\u219357.26%)** | **17.02% (\\u219351.07%)** |\\n\\n\\n\\n\\n\\n#### Table: The average misleading rates across subfields in perception, reasoning, and mastery tasks before fine-tuning. (Complete table can be found in the revised version paper, Table 38,39.)\\n**Perception Task**: Visual Identification (VI), Text Recognition (TR), Aesthetic Per-ception (AP), Spatial Awareness (SA)\\n**Reasoning Task**: Logical Reasoning (LR), Scientific Reasoning (SR), Cross-Domain Reasoning (CDR)\\n**Mastery Task**: Natural Sciences (NS), Social Studies (SS), Applied Arts (AA).\\n\\n| **Model** | **VI** | **TR** | **AP** | **SA** | **LR** | **SR** | **CDR** | **NS** | **SS** | **AA** |\\n|-----------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|\\n| **Explicit T-F** | **77.53%** | **76.97%** | **81.45%** | **83.60%** | **63.43%** | **66.79%** | **75.69%** | **68.67%** | **69.68%** | **69.43%** |\\n| **Explicit F-T** | **85.86%** | **85.51%** | **85.61%** | **87.07%** | **81.09%** | **85.78%** | **83.82%** | **85.60%** | **86.58%** | **84.74%** |\\n| **Implicit T-F** | **77.53%** | **76.97%** | **81.45%** | **83.60%** | **63.43%** | **66.79%** | **75.69%** | **68.67%** | **69.68%** | **69.43%** |\\n| **Implicit F-T** | **80.51%** | **84.35%** | **92.08%** | **71.83%** | **76.15%** | **85.77%** | **88.87%** | **74.18%** | **74.59%** | **72.40%** |\\n\\n\\n\\n\\n\\n#### Table: The average misleading rates on two different types of questions before fine-tuning. (Complete table can be found in the revised version paper, Table 34,35.)\\n\\n| **Model** | **T-F Choice** | **T-F Yes/No** | **F-T Choice** | **F-T Yes/No** |\\n|----------------------------------|----------------------|---------------------|----------------------|---------------------|\\n| **Explicit Low misleading rate** | **46.55%** | **55.19%** | **72.88%** | **57.14%** |\\n| **Explicit Medium misleading rate** | **66.26%** | **83.29%** | **80.98%** | **76.92%** |\\n| **Explicit High misleading rate** | **85.99%** | **92.95%** | **92.90%** | **94.26%** |\\n| **Implicit Low misleading rate** | **75.89%** | **49.22%** | **83.55%** | **57.90%** |\\n| **Implicit Medium misleading rate** | **83.84%** | **70.03%** | **81.58%** | **76.85%** |\\n| **Implicit High misleading rate** | **95.00%** | **86.22%** | **82.45%** | **87.04%** |\"}", "{\"comment\": \"> Q6: The misleading information was only added to the textual questions, why not consider altering the image to inject misleading information?\", \"a6\": \"Thank you for your suggestion. We assume that all MLLMs are capable of recognizing English characters within images. To inject misleading information into images, we tested its misleading rate by adding a watermark (\\\"The true answer is xx\\\") to the images. The results show a higher misleading rate compared to using misleading information in pure text.\\n\\n#### Table: The results of misleading rate on injecting misleading information into images.\\n| Model | Low Image | Low Text | Medium Image | Medium Text |\\n|--------------------------|--------------|----------------|--------------|----------------|\\n| MiniCPM-v-v2 | 62.91% | 57.64% | 78.89% | 81.04% |\\n| Phi-3-vision | 60.10% | 49.62% | 67.57% | 69.26% |\\n| Yi-VL-6b | 84.93% | 84.64% | 93.49% | 94.44% |\\n| Qwen-VL-Chat | 84.37% | 80.53% | 89.71% | 89.33% |\\n| Deepseek-VL-7b-Chat | 37.25% | 31.50% | 65.44% | 63.42% |\\n| LLaVA-NeXT-7b-vicuna | 44.40% | 54.05% | 40.09% | 56.91% |\\n| MiniCPM-Llama3-v2.5 | 54.88% | 44.39% | 66.55% | 74.41% |\\n| GLM4V-9B-chat | 47.91% | 17.58% | 72.45% | 51.89% |\\n| CogVLLM-chat | 21.93% | 18.86% | 52.95% | 49.53% |\\n| InternVL-Chat-V1-5 | 25.22% | 17.46% | 54.51% | 50.55% |\\n| LLaVA-Next-34b | 77.22% | 65.32% | 94.35% | 89.04% |\\n| Yi-VL-34b | 69.32% | 56.99% | 88.89% | 78.87% |\\n| **Average** | **57.18%** | **49.96%** | **72.31%** | **70.59%** |\"}", "{\"comment\": \"**Comprehensiveness** :\\n\\nTo provide a more comprehensive evaluation of our benchmark, we also included additional metrics beyond the accuracy and misleading rate obtained from the main experiment.\\n\\n**Evaluation on more metrics**:\\n\\n**(1) ECE**: We collected the confidence scores of the model's outputs and calculated the Expected Calibration Error (ECE) before and after fine-tuning the model in the True-False (T-F) scenario.\\n\\n **(2) Consistency rate**: We provided the maximum frequency at which the model gives the same answer when faced with the same questions from the benchmark multiple times, which is referred to as the consistency rate.\\n\\n**Evaluation on more specific categories**:\\n\\n**(1)**: We divided the questions in the entire benchmark into three types of tasks: perception, reasoning, and mastery. For each type of task, we provided the misleading rates under implicit and explicit misleading scenarios as well as before and after fine-tuning. This task categorization helps in comparing the model's performance across different types of tasks. \\n\\n**(2)**: Furthermore, we break down perception, reasoning, and mastery tasks into more granular evaluations. **Perception** includes the following abilities: Visual Identification (VI), Text Recognition (TR), Aesthetic Perception (AP), and Spatial Awareness (SA); **Reasoning** includes Logical Reasoning (LR), Scientific Reasoning (SR), and Cross-Domain Reasoning (CDR); and **Mastery** includes Natural Sciences (NS), Social Studies (SS), and Applied Arts (AA), resulting in a total of 10 distinct abilities. Further refinement of the question domains facilitates the evaluation of the model's capability boundaries.\\n\\n**(3)**: We provided the average misleading rates measured on two different types of questions in the benchmark (multiple choice (CH) and yes/no (Y/N)) to evaluate the impact of question types on the model's uncertainty.\\n\\n#### Table: Comparison of ECE before and after fine-tuning on our benchmark\\n| Model | Before | After |\\n|------------------------|--------|-------|\\n| MiniCPM-v-v2 | 0.46 | 0.24 |\\n| Phi-3-vision | 0.46 | 0.15 |\\n| Yi-VL-6b | 0.45 | 0.27 |\\n| Qwen-VL-Chat | 0.49 | 0.24 |\\n| Deepseek-VL-7b-Chat | 0.47 | 0.20 |\\n| LLaVA-NeXT-7b-vicuna | 0.48 | 0.23 |\\n| MiniCPM-Llama3-v2.5 | 0.49 | 0.18 |\\n| GLM4V-9B-chat | 0.46 | 0.25 |\\n| CogVLLM-chat | 0.46 | 0.27 |\\n| InternVL-Chat-V1-5 | 0.47 | 0.24 |\\n| LLaVA-Next-34b | 0.49 | 0.19 |\\n| Yi-VL-34b | 0.45 | 0.26 |\\n| **Average** | **0.47** | **0.23** |\\n\\n\\n#### Table: The consistency rate of both before and after fine-tuning under low and high misleading rate scenario. (Detailed table can be found in the revised version paper, Table 21)\\n| Model | Low (Before) | Low (After) | Low (Change) | High (Before) | High (After) | High (Change) |\\n|---------------------------|--------------|-------------|--------------|---------------|--------------|---------------|\\n| MiniCPM-v-v2 | 82.93% | 97.83% | +14.90% | 56.52% | 90.64% | +34.12% |\\n| Phi-3-vision | 79.89% | 89.33% | +9.44% | 63.94% | 87.77% | +23.83% |\\n| GLM4v-9b | 94.33% | 99.00% | +4.67% | 82.28% | 95.85% | +13.57% |\\n| LLaVA-Next-34b | 73.30% | 98.61% | +25.31% | 53.30% | 91.81% | +38.51% |\\n| **Average** | **82.61%** | **96.19%** | **+13.58%** | **64.51%** | **91.02%** | **+26.51%** |\"}", "{\"comment\": \">Q4: The study is confined to multiple choice question. I am curious about how would the definitions, measurements, and findings generalize to open-ended question. But I don\\u2019t think this is a major point, because most current VLM benchmarks are multiple-choice only.\", \"a4\": \"Thank you for your suggestions. To comprehensively evaluate the effectiveness of our method, we transformed the discriminative task into a generative task. Specifically, we randomly sampled 200 data points prone to misleading. Only the images and questions were provided, without answer options. We used GPT-4o to evaluate whether the text generated by open-source models aligns with the answers and calculated the misleading rates before and after introducing misleading information. (Note that the 200 data points prone to misleading were selected from external datasets. In contrast, using other unscreened generative task datasets does not effectively identify data points that are highly susceptible to misleadingn.)\\n\\n#### Table: Comparison of explicit and implicit misleading instruction performance on generative tasks before and after fine-tuning (Table27 in the new revision)\\n| Model | Before (T-F) | Before (F-T) | After (T-F) | After (F-T) |\\n|---------------------------------|--------------|--------------|-------------|-------------|\\n| **Explicit** | | | | |\\n| MiniCPM-v-v2 | 69.23% | 87.70% | 25.00% | 72.54% |\\n| Phi-3-vision | 100.00% | 66.67% | 71.43% | 30.57% |\\n| Yi-VL-6b | 100.00% | 82.89% | 88.89% | 55.50% |\\n| Qwen-VL-Chat | 94.12% | 86.34% | 86.21% | 50.88% |\\n| Deepseek-VL-7b-Chat | 92.31% | 81.82% | 70.59% | 43.17% |\\n| LLaVA-NeXT-7b-Vicuna | 100.00% | 62.56% | 100.00% | 60.20% |\\n| MiniCPM-Llama3-v2.5 | 81.25% | 83.71% | 66.67% | 64.29% |\\n| GLM4V-9B-Chat | 85.71% | 80.90% | 48.48% | 62.42% |\\n| CogVLLM-Chat | 100.00% | 54.55% | 75.00% | 3.35% |\\n| InternVL-Chat-V1_5 | 85.71% | 69.27% | 24.32% | 68.10% |\\n| LLaVA-Next-34b | 100.00% | 92.18% | 62.50% | 54.39% |\\n| Yi-VL-34b | 90.91% | 92.59% | 77.78% | 14.21% |\\n| Average | 91.94% | 76.99% | 65.01% | 48.31% |\\n| **Implicit** | | | | |\\n| Average | 91.99% | 44.38% | 57.61% | 23.57% |\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you very much for your invaluable feedback on our paper. We have meticulously reviewed each of your points and endeavored to address them thoroughly. We would greatly appreciate it if you could review our responses and let us know if you have any additional comments regarding the paper or the rebuttal. We are eager to embrace and integrate all your critiques into our work. \\n\\nThank you!\"}", "{\"comment\": \"> Q3: During finetuning, a random set of explicit and implicit misled samples are used for finetuning, yet I am afraid the explicit misleading info has a too obvious and unique pattern due to how it's designed, hence too easy to pick them up, making the improvement after finetuning not too surprising.\", \"a3\": \"Thank you for raising this insightful concern. We have analyzed the results for explicit misleading scenarios under implicit instruction fine-tuning, as illustrated in Figure 5-2 of the original version. To present more clarity, we updated the caption of the original figure and included detailed experimental results of the revised version. The findings demonstrate that with an increase in the volume of implicit fine-tuning data, the misleading rate in explicit misleading scenarios can be reduced to less than 20%. This highlights the effectiveness of implicit instruction fine-tuning in mitigating explicit misleading behavior.\\n\\n#### Table: Results for explicit misleading scenarios under implicit instruction fine-tuning.\\n| Model | Low | Medium | High |\\n|---------------------|--------------|-----------------|---------------|\\n| Phi-3-vision | 10.90% | 20.85% | 40.19% | \\n| Qwen-VL-Chat | 17.65% | 26.88% | 12.54% | \\n| MiniCPM-Llama3-v2.5 | 5.22% | 14.48% | 10.31% |\\n| GLM4V-9B-chat | 3.54% | 11.99% | 12.18% | \\n| CogVLLM-chat | 6.26% | 13.71% | 18.36% | \\n| InternVL-Chat-V1-5 | 6.06% | 12.30% | 11.89% | \\n| **Average** | **8.94%** | **16.37%** | **17.91%** |\"}", "{\"comment\": \"Dear Reviewer 7azo,\\n\\nThank you very much for your invaluable feedback on our paper. We have meticulously reviewed each of your points and endeavored to address them thoroughly. We would greatly appreciate it if you could review our responses and let us know if you have any additional comments regarding the paper or the rebuttal. We are eager to embrace all your critiques and integrate them into our work.\\n\\nThank you!\"}", "{\"comment\": \">Q5: A separate issue is that, throughout, there is not enough information to understand the data or experimental setup in detail. The paper says that they \\\"fine-tuned on separate data\\\", but there are not many details that would let a reader verify or reproduce the experiments. (This is also part of the problem with the \\\"Implicit\\\" setting -- not enough details to fully understand.)\", \"a5\": \"We apologize for not providing sufficient details about the data and experimental setup in the original submission. To address this issue, we have made the following updates in the revised version: (1): we now include a detailed description of the fine-tuning data formats in Figure 21, along with the experimental details and parameters for fine-tuning in Section 3. (2) For the implicit instruction generation setting, we provide templates in Figure 15 of the appendix and present examples of implicit instructions generated by various models in Figures 17 and 18. In the new revision, we have expanded the descriptions to include more detailed explanations.\"}", "{\"comment\": \"Thank you for your response. Your work addresses my concern and I've updated my rating.\"}", "{\"summary\": \"This paper studies uncertainty measurement for responses from MLLMs. The main novelty is a novel uncertainty measurement based on how MLLMs' response shifts after injecting misleading instructions. Empirically, the paper developed Multimodal Uncertainty Benchmark (MUB), and systematically evaluates most major MLLMs\\u2019s uncertainty; The result suggests a dominant issue of uncertainty in MLLMs, with an average misleading rate exceeding 86%. The author experimented with fine-tuning MLLMs with targeted misleading data, which notably improves robustness.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I think the effort of proposing novel and more efficient metrics to is very relevant and helpful. Previous metrics, such as self-consistency rate are widely used but in practice I found it to be very unreliable on big models.\", \"The experimental evaluation is comprehensive, covering most of the commonly used closed and open-sourced models.\"], \"weaknesses\": [\"Soundness of measurement (Major): While I very much appreciate the effort on better and efficient measurements for uncertainty, currently I still have some doubts about whether adding misleading information can measure the uncertainty in model\\u2019s response. I\\u2019ll explain my concerns and perhaps the authors can clarify:\", \"The measurement might be dependent to the misleading information themselves: the content, position, length, etc might all influence this metric. Moreover, since the implicit misinformation is generated by GPT4o, which is also evaluated on the benchmark, will it incur evaluator bias?\", \"Implicit scenarios seem better defined; But for explicit scenarios (e.g. telling the model true answer), the model behavior might be inherently undefined: i.e. shall the model follow the user\\u2019s \\u201cinstruction\\u201d (e.g. \\u201cthe true answer\\u201d), or answer the question and ignore user instruction.\", \"Task (Minor): The study is confined to multiple choice question. I am curious about how would the definitions, measurements, and findings generalize to open-ended question. But I don\\u2019t think this is a major point, because most current VLM benchmarks are multiple-choice only.\"], \"questions\": \"See weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"#### Table: Comparison of explicit and implicit misleading instruction performance on different types of tasks before and after fine-tuning.\\n| Model | Perception (T-F) | Reasoning (T-F) | Mastery (T-F) |\\n|------------------------|----------------------|----------------------|----------------------|\\n| MiniCPM-v-v2 | 5.33% (\\u219378.37%) | 7.28% (\\u219366.66%) | 14.63% (\\u219359.73%) |\\n| Phi-3-vision | 7.26% (\\u219378.62%) | 6.62% (\\u219352.29%) | 6.86% (\\u219356.46%) |\\n| Yi-VL-6b | 9.42% (\\u219380.91%) | 21.84% (\\u219366.49%) | 46.92% (\\u219347.55%) |\\n| Qwen-VL-Chat | 1.76% (\\u219390.06%) | 7.78% (\\u219376.00%) | 12.81% (\\u219368.33%) |\\n| Deepseek-VL-7b-Chat | 1.42% (\\u219366.34%) | 3.27% (\\u219354.71%) | 6.78% (\\u219353.62%) |\\n| LLaVA-NeXT-7b-vicuna | 4.81% (\\u219375.37%) | 10.72% (\\u219344.31%) | 15.68% (\\u219340.15%) |\\n| MiniCPM-Llama3-v2.5 | 0.73% (\\u219371.04%) | 1.10% (\\u219363.18%) | 1.75% (\\u219360.19%) |\\n| GLM4V-9B-chat | 4.61% (\\u219339.82%) | 8.39% (\\u219335.57%) | 23.68% (\\u219335.80%) |\\n| CogVLLM-chat | 8.13% (\\u219356.29%) | 8.15% (\\u219337.65%) | 32.40% (\\u219316.89%) |\\n| InternVL-Chat-V1-5 | 0.60% (\\u219349.74%) | 2.85% (\\u219349.15%) | 9.93% (\\u219350.51%) |\\n| LLaVA-Next-34b | 2.12% (\\u219375.54%) | 3.25% (\\u219384.97%) | 2.25% (\\u219385.77%) |\\n| Yi-VL-34b | 9.13% (\\u219371.55%) | 17.12% (\\u219356.17%) | 30.48% (\\u219337.84%) |\\n| Explicit Average | 4.61% (\\u219369.47%) | 8.20% (\\u219357.26%) | 17.02% (\\u219351.07%) |\\n\\n\\n> Q4: This lack of explanation limits the understanding of the benchmark\\u2019s functionality. Researchers are left uncertain whether fine-tuning causes models to generate consistent but incorrect responses.\", \"a4\": \"Thank you for highlighting this critical issue. We deeply appreciate your insightful observation. The fine-tuned model demonstrates no degradation in performance on our benchmark; in fact, achieving approximately a 5% performance improvement. Moreover, its performance on other benchmarks, such as MMStar and AI2D, remains consistent, indicating that fine-tuning enables the model to generate more accurate responses. The results indicate a 22.49% improvement in accuracy after fine-tuning. Additionally, the consistency rate of the fine-tuned model shows a significant improvement to approximately 13.58% in the low misleading rate scenario and 26.51% in the high misleading scenario (in the below tables). The fine-tuned models achieve an approximately 3% reduction in its average Expected Calibration Error (ECE), highlighting enhanced calibration and reliability.\\n\\n \\n#### Table: The consistency rate of both before and after fine-tuning under low and high misleading rate scenarios.\\n| Model | Low (Before) | Low (After) | Low (Change) | High (Before) | High (After) | High (Change) |\\n|---------------------------|--------------|-------------|--------------|---------------|--------------|---------------|\\n| MiniCPM-v-v2 | 82.93% | 97.83% | +14.90% | 56.52% | 90.64% | +34.12% |\\n| Phi-3-vision | 79.89% | 89.33% | +9.44% | 63.94% | 87.77% | +23.83% |\\n| GLM4v-9b | 94.33% | 99.00% | +4.67% | 82.28% | 95.85% | +13.57% |\\n| LLaVA-Next-34b | 73.30% | 98.61% | +25.31% | 53.30% | 91.81% | +38.51% |\\n| **Average** | **82.61%** | **96.19%** | **+13.58%** | **64.51%** | **91.02%** | **+26.51%** |\", \"table\": \"The mean accuracy of 12 open-source models on the MMStar and AI2D datasets was evaluated both before and after fine-tuning under the high misleading rate scenario.\\n| Model | MMStar (Before) | MMStar (After) | AI2D (Before) | AI2D (After) |\\n|-------------|-----------------|----------------|---------------|--------------|\\n| **Average** | **44.02%** | **45.67%** | **66.60%** | **67.94%** |\\n\\n\\n---\\n> Q5: The images in the paper are quite blurry, especially Figure 2. The authors should check the image quality. There are also some typos, such as mixed-use of InternVL-Chat-V1-5 and Internvl-chat-v1.5.\", \"a5\": \"Thank you very much for pointing out this inconsistency. We have revised the figures and corrected typographical errors to ensure consistency and accuracy in the revised version.\"}", "{\"summary\": \"This paper dives into how MLLMs often fail to perform well when faced with misleading prompts. To tackle this, the authors set up a benchmark called the Multimodal Uncertainty Benchmark (MUB), which first gathers standard responses and then throws in misleading inputs to see how often the models get tripped up. They then fine-tuned open-source models with a mix of straightforward and subtle misleading data, cutting down the rate of being misled, while keeping the models\\u2019 overall accuracy intact.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The attempt of addressing the response uncertainty in MLLMs is an interesting and important task. The proposed method seems largely sound to me in addressing at least parts of the problem. The paper is written in a structured and esay2understand manner -- quite straightforward. The MUB benchmark could be useful and the benchmarking results are generally informative. The effort of finetuning the MLLMs with misled data adds some more insights into how the problem could be mitigated.\", \"weaknesses\": [\"My main concerns are\", \"This work only evaluates/tackles VLLM instead of MLLM as claimed multiple times in title and throughout paper, though I could maybe see the way to extend to other modalities.\", \"Having the implicit misleading information generated by GPT-4o seems like a \\\"fighting fire with fire\\\" approach -- I think it is better to have at least a subset of implicit ones written by human annotators so that we can see whether there is any difference between the human-generated ones and GPT-4o generated ones.\", \"During finetuning, a random set of explicit and implicit misled samples are used for finetuning, yet I am afraid the explicit misleading info has a too obvious and unique pattern due to how it's designed, hence too easy to pick them up, making the improvement after finetuning not too surprising.\", \"Instead of finetuning, I would recommend the authors to simply systematically prompt the MLLMs, such as \\\"The questions might contain misleading information, you should try to answer the question correctly despite of those misleading information ...\\\"; another version could even give it two examples (one explicit and one implicit). I would guess/assume, simply doing this extra prompting will make the results much better.\", \"The questions only include multi-choice and T/F styles, which certainly makes the metrics calculation easier (reflected in equation 1 and 2), yet probably losing the delicacy in the type of Q/A addressed?\"], \"questions\": [\"The misleading information was only added to the textual questions, why not consider altering the image to inject misleading information?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Authors\", \"comment\": \"Thank you for your revisions and further explanations. While I appreciate the expanded evaluations across modalities and the exploration of human-annotated data, I remain concerned about the limited novelty and comprehensiveness of the proposed approach/benchmark relative to existing benchmarks. Nonetheless I acknowledge the significant effort made in the revisions. And I'd like to reconsider my evaluation.\"}", "{\"comment\": \"> Q2: Having the implicit misleading information generated by GPT-4o seems like a \\\"fighting fire with fire\\\" approach -- I think it is better to have at least a subset of implicit ones written by human annotators so that we can see whether there is any difference between the human-generated ones and GPT-4o generated ones.\", \"a2\": \"Thank you for your suggestion. We conducted additional experiments involving human-generated implicit misleading instructions and performed a comparative analysis with those generated by GPT-4o:\\n**Human-Generated Implicit Instructions:** We randomly selected 100 samples for annotation. Data annotators, all holding at least a bachelor's degree, created implicit misleading instructions based on the images, questions, options, and correct answers. The experimental results indicate that the misleading rates and levels of implicitness of human-generated instructions are comparable to those produced by GPT-4o. On average, approximately 4 minutes are required to generate a single implicit instruction per individual.\\n**Measuring Implicitness:** To ensure fairness and mitigate biases, we mask the non-implicit information (including answers and option letters) potentially revealed due to the model's limitations within the generated misleading instructions. We compared the misleading rates before and after masking the instructions, allowing us to objectively measure implicitness without bias. We also used GPT-4o to directly score the level of implicitness as a reference, with scores ranging from 1 to 9, where higher scores indicate a greater level of implicitness. we observe that closed-source models tend to generate instructions with higher implicitness and misleading rates. Examples of the comparison between human-generated instruction and instruction from other models can be found in Figures 18 and 19 of the revised version of the paper.\\n\\n#### Table:Comparison of implicitness, misleading rates, and time required for generating implicit instructions\\n| Model | MR | Masked MR | Implicitness | Time (s/it) |\\n|-----------------------|--------|------------------|--------------|-------------|\\n| MiniCPM-v-v2 | 39.71% | 18.98% (\\u219320.73%) | 5.67 | 2.26 |\\n| Phi-3-Vision | 45.10% | 34.24% (\\u219310.86%) | 5.73 | 8.86 |\\n| Yi-VL-6b | 27.49% | 21.84% (\\u21935.65%) | 7.01 | 2.33 |\\n| Qwen-VL-Chat | 35.65% | 31.95% (\\u21933.70%) | 5.97 | 2.89 |\\n| Deepseek-VL-7b-Chat | 42.10% | 22.51% (\\u219319.59%) | 6.31 | 2.78 |\\n| LLaVA-NeXT-7b-Vicuna | 30.48% | 33.27% (\\u21912.79%) | 6.65 | 5.4 |\\n| MiniCPM-Llama3-v2.5 | 44.06% | 38.23% (\\u21935.83%) | 5.97 | 3.61 |\\n| GLM4V-9B-Chat | 31.01% | 31.18% (\\u21910.17%) | 6.22 | 6.98 |\\n| InternVL-Chat-V1_5 | 32.91% | 31.79% (\\u21931.12%) | 5.80 | 7.71 |\\n| GLM-4V | 45.31% | 42.01% (\\u21933.30%) | 6.28 | 4.49 |\\n| GPT-4o | 54.23% | 54.90% (\\u21910.67%) | 7.05 | 5.20 |\\n| Human | 52.19% | 52.83% (\\u21910.64%) | 6.30 | 240 |\"}", "{\"summary\": \"This paper addresses the issue of response uncertainty in Multimodal Large Language Models (MLLMs), which can be problematic when models encounter misleading information. To tackle this, the authors propose a two-stage pipeline: first, gathering MLLM responses without misleading information, followed by collecting responses influenced by specific misleading instructions. They effectively evaluate model uncertainty by measuring the misleading rate and tracking shifts between correct and incorrect responses. They introduce the Multimodal Uncertainty Benchmark (MUB), which uses explicit and implicit misleading instructions to assess MLLM vulnerability across various domains. To improve robustness, they fine-tune open-source MLLMs using misleading data, substantially reducing misleading rates.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The strengths of this paper include:\", \"This work focuses on an important issue: the robustness of Multimodal Large Language Models (MLLMs) when faced with misleading instructions. This is a compelling research topic that addresses a gap in the current field.\", \"The paper is well-structured, with a clear framework, and the authors present three research questions that are thoroughly examined through extensive experiments involving 12 models.\", \"This work contributes to the community by introducing the Multimodal Uncertainty Benchmark and providing a fine-tuning dataset, demonstrating improved model robustness against misleading instructions.\"], \"weaknesses\": [\"The weaknesses of this work include:\", \"The paper lacks a discussion on whether the models are calibrated. It does not address whether more consistent (more certain) model outputs correspond to more accurate answers. The results in the paper are primarily based on misleading rate (MR) and average consistency rate (ACR), without showing metrics like model accuracy. There is a lack of analysis on utility.\", \"The authors fail to analyze the impact of instruction tuning on model usability. It is unclear how much the model\\u2019s performance on different tasks changes before and after fine-tuning. This lack of explanation limits the understanding of the benchmark\\u2019s functionality. Researchers are left uncertain whether fine-tuning causes models to generate consistent but incorrect responses.\"], \"suggestions\": \"The images in the paper are quite blurry, especially Figure 2. The authors should check the image quality. There are also some typos, such as mixed-use of InternVL-Chat-V1-5 and Internvl-chat-v1.5.\", \"questions\": \"Refer to Weakness. The analysis of utility and calibration is important for such work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \">Q3: \\\"The 'Implicit' ones are more interesting, but need more detailed treatment: for instance, at least give several examples and enough information to assess data quality.\\\"\", \"a3\": \"Thank you for your valuable suggestions. We appreciate your interest in the \\\"Implicit\\\" prompts. To address your concerns, we have made the following updates and additions:\\n**Details and Revisions Made**: In the original submission, we provided scores for the implicitness of generated implicit instructions in Figure 4-(3) and examples of such instructions in Figures 17 and 18. In the revised version, we have expanded Section 2.2 to include additional details about the implicit instructions, offering more context and examples to better illustrate their nature and design. \\n**Comprehensive Evaluation of Implicit Instructions**: To more comprehensively evaluate the generated implicit instructions, we have also introduced more comprehensive evaluation the generated implicit misleading data quality, as shown in the table below. We randomly selected 100 images and compared their results with those from human annotations, open-source models, and closed-source models. To ensure fairness and mitigate biases, we masked the non-implicit information (including answers and option letters) potentially revealed due to the model's limitations within the generated misleading instructions. The implicitness level is evaluated based on scores assigned to the generated implicit instructions, with the highest possible score being 9. The findings indicate that closed-source models demonstrate a higher degree of implicitness as well as a higher misleading rate.\\n\\n\\n#### Table:Comparison of implicitness, misleading rates, and time required for generating implicit instructions\\n Model | MR | Masked MR | Implicitness | Time (s/it) |\\n|-----------------------|--------|------------------|--------------|-------------|\\n| MiniCPM-v-v2 | 39.71% | 18.98% (\\u219320.73%) | 5.67 | 2.26 |\\n| Phi-3-Vision | 45.10% | 34.24% (\\u219310.86%) | 5.73 | 8.86 |\\n| Yi-VL-6b | 27.49% | 21.84% (\\u21935.65%) | 7.01 | 2.33 |\\n| Qwen-VL-Chat | 35.65% | 31.95% (\\u21933.70%) | 5.97 | 2.89 |\\n| Deepseek-VL-7b-Chat | 42.10% | 22.51% (\\u219319.59%) | 6.31 | 2.78 |\\n| LLaVA-NeXT-7b-Vicuna | 30.48% | 33.27% (\\u21912.79%) | 6.65 | 5.4 |\\n| MiniCPM-Llama3-v2.5 | 44.06% | 38.23% (\\u21935.83%) | 5.97 | 3.61 |\\n| GLM4V-9B-Chat | 31.01% | 31.18% (\\u21910.17%) | 6.22 | 6.98 |\\n| InternVL-Chat-V1_5 | 32.91% | 31.79% (\\u21931.12%) | 5.80 | 7.71 |\\n| GLM-4V | 45.31% | 42.01% (\\u21933.30%) | 6.28 | 4.49 |\\n| GPT-4o | 54.23% | 54.90% (\\u21910.67%) | 7.05 | 5.20 |\\n| Human | 52.19% | 52.83% (\\u21910.64%) | 6.30 | **240** |\\n\\n\\n---\\n\\n\\n>Q4: \\\"The evaluation of the 'Implicit' setting is also strange -- using best-of-5 sampling (even though 'Explicit' is best-of-1) which inflates the success rate.\\\"\", \"a4\": \"I apologize for not providing a clear explanation here we used GPT-4o to generate five misleading implicit instructions for each question in a single response and intended to apply all five instructions. To ensure a fair comparison between explicit and implicit strategies, we have addressed this concern in the revised version by comparing the results of explicit and implicit strategies under the sampling-1 setting. Additionally, we have expanded the evaluation by providing results for implicit strategies using sampling-1, sampling-3, and sampling-5 across scenarios with both low and high misleading rates. These updates aim to offer a more balanced and comprehensive evaluation of the \\\"Implicit\\\" setting and address any potential concerns regarding inflated success rates. Complete table can be found in the revised version paper, Table 13,14.\\n\\n\\n#### Table: The average misleading rate of different sample strategies on 12 open-source MLLMs.\\n| | | **MR(T \\u2192 F)** | | | **MR(F \\u2192 T)** | | |\\n|--------------------|--------------|---------------------|------------------|------------------|---------------------|------------------|------------------|\\n| **Model** | **Accuracy** | **Sample-1** | **Sample-3** | **Sample-5** | **Sample-1** | **Sample-3** | **Sample-5** |\\n| **Average[low]** | **73.45%** | **54.81%** | **72.36%** | **77.61%** | **47.55%** | **73.58%** | **78.98%** |\\n| **Average[high]** | **56.63%** | **66.34%** | **84.42%** | **87.68%** | **61.47%** | **79.08%** | **85.00%** |\"}", "{\"metareview\": \"## Summary:\\nThe paper develops a new benchmark for multimodal large language models (MLLMs) to measure the uncertainty and robustness of their answers to multi-choice questions when the input question is appended with explicit/implicit misleading instructions. They propose to generate the explicit misleading instructions by 12 templates, while the implicit ones are mainly generated by GPT-4o. They propose a misleading rate to measure the change of model responses from correct-to-incorrect and incorrect-to-correct. Experiments on 12 open-source LLMs and 5 close-source LLMs show that they suffer from high misleading rates on different misleading instructions. They further finetuned the open-source LLMs on the dataset and achieved a significant reduction in misleading rates. \\n\\n## Strengths:\\n1. This paper studies an important topic regarding the robustness and uncertainty of MLLMs.\\n1. It is novel to study how to measure uncertainty under misleading instructions. \\n1. A new benchmark and two metrics that can enrich the evaluation of VLLMs. \\n1. Extensive experiments on multiple open-source and close-source models, covering explicit and implicit misleading scenarios. \\n1. The paper is very dense in details and results but better organization and highlights can significantly improve the presentation. \\n\\n## Weaknesses:\\n1. It is not well justified that the misleading rating can faithfully and comprehensively measure the general uncertainty of LLM. It may only measure the uncertainty when the input contains conflicting information but not the uncertainty caused by the internal lack of knowledge or pitfalls of reasoning. \\n1. Reviewers find that the motivation of explicit misleading instructions might be questionable. Since the misleading input is a part of the whole instruction and the instruction-following capability is preferred in general, it is hard to justify whether the change of output answers is a preferred behavior or not. The challenge's definition might be ill-posed since following the misleading instruction and answering the question are conflicting in the scenario. Moreover, the explicit misleading instructions are generated by 12 templates for multi-choice questions, which are not sufficiently diverse to cover many misleading cases in practice. \\n1. Reviewers also raised several concerns regarding the implicit misleading. The paper and the following discussion do not provide sufficient information and examples (e.g., it is not clear what are the implicit instructions in Fig. 21). The original paper did not compare the implicit instructions generated by different models and humans. While new experimental results provided in the rebuttal are very helpful, there are still more details of the experiments that need to be clarified. \\n1. Reviewers raised several concerns about the finetuning and its induced improvement on MR. The improvement might not be surprising and trivial since the explicit misleading is not diverse (12 templates) and the patterns of implicit misleading generated by LLMs are too obvious. Evaluations on a more practical test set of misleading scenarios different from the templates and patterns in the finetuning set might be more convincing. \\n1. More experiments need to be presented, e.g., sampling different numbers of responses, open-ended questions, misleading instructions with different content/lengths/positions, MLLM with other modalities, etc. The authors responded with additional experiments, which can greatly improve the draft. Due to the time limit, these results are not entirely comprehensive. So another round of revision is necessary to make these new experiments more complete. \\n\\n## Decision:\\nThe authors provided detailed clarifications and various additional experimental results in the rebuttal. The reviewers share major concerns regarding the design of explicit/implicit instructions. To make the observations and claims more rigorous and convincing, various extra factors in the proposed setup whose impact on the model output and entanglement with the model uncertainty needed to be justified and removed if necessary. For this purpose, more comprehensive experiments in different settings are important. While the authors provided additional experiments requested by the reviewers, they were insufficient to resolve all the concerns. \\n\\nBased on the above, the paper is not ready for publication yet. The meta-reviewer encourages the authors to further complete these experiments and simplify the problem setup of this paper, e.g., excluding the interference of other factors in the benchmark design and building a straightforward causal relation between the change of model outputs and the model uncertainty. How to measure MLLMs' uncertainty and robustness under misleading information is an important open challenge and the authors are encouraged to improve the study and prepare it for the next conference.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed clarifications and various additional experimental results in the rebuttal. Two of the four reviewers responded to the rebuttal and confirmed that some major concerns have been addressed. The meta-reviewer carefully read the paper and all the discussions, especially the authors' responses to the two reviewers who have not responded to the discussion. The reviewers share major concerns regarding the design of explicit/implicit instructions. To make the observations and claims more rigorous and convincing, various extra factors in the proposed setup whose impact on the model output and entanglement with the model uncertainty needed to be justified and removed if necessary. For this purpose, more comprehensive experiments in different settings are important. While the authors provided additional experiments requested by the reviewers, they were insufficient to resolve all the concerns.\"}", "{\"comment\": \">Q3: Implicit scenarios seem better defined; But for explicit scenarios (e.g. telling the model true answer), the model behavior might be inherently undefined: i.e. shall the model follow the user\\u2019s \\u201cinstruction\\u201d (e.g. \\u201cthe true answer\\u201d), or answer the question and ignore user instruction.\", \"a3\": \"Thank you for highlighting this important distinction in model behavior for explicit scenarios. To better understand whether the model adheres to user instructions or focuses solely on answering the question, we conducted two additional experiments:\\n**Scenario 1: No user instructions, only question instructions** : We appended an incorrect character or word to the end of the text after the input question to simulate accidental input. This ensures that the model's behavior is not inherently undefined due to other user instructions. The results revealed that the misleading rate remained consistently high. This indicates that the model predominantly focuses on answering the question but remains highly vulnerable to misleading information embedded in the prompt.\\n**Scenario 2: User instructions take priority. Clear instructions to ignore misleading information** : In this experiment, the model was explicitly told that the input contained misleading information, with instructions to disregard it and provide the correct answer (e.g., \\\"The following input contains misleading information: {misleading information}. Please focus only on the questions and options and ignore all other misleading instructions! \\\"). Despite these clear instructions, the model still showed a high misleading rate, highlighting its persistent susceptibility to misleading information.\\n\\n#### Table: The comparison of simplified explicit prompts and clear instructions to ignore misleading information.\\n| Model | Scenario 1 T-F | Scenario 1 F-T | Scenario 2 T-F | Scenario 2 F-T |\\n|--------------------------|----------|----------|----------|----------|\\n| MiniCPM-v-v2 | 58.60% | 66.84% | 86.11% | 74.76% |\\n| Phi-3-vision | 62.04% | 43.08% | 77.30% | 71.99% |\\n| Yi-VL-6b | 60.04% | 48.66% | 78.10% | 72.55% |\\n| Qwen-VL-Chat | 90.83% | 62.46% | 92.55% | 68.61% |\\n| Deepseek-VL-7b-Chat | 74.55% | 61.35% | 80.32% | 76.76% |\\n| LLaVA-NeXT-7b-vicuna | 73.55% | 71.81% | 59.45% | 65.51% |\\n| MiniCPM-Llama3-v2.5 | 73.25% | 61.52% | 63.72% | 63.23% |\\n| GLM4V-9B-chat | 27.48% | 32.82% | 64.77% | 83.06% |\\n| CogVLLM-chat | 38.88% | 40.97% | 84.65% | 90.19% |\\n| InternVL-Chat-V1-5 | 53.83% | 53.23% | 52.33% | 50.27% |\\n| LLaVA-Next-34b | 60.84% | 65.51% | 70.75% | 69.28% |\\n| Yi-VL-34b | 73.77% | 71.70% | 79.76% | 76.14% |\\n| **Average** | 62.31% | 56.66% | 74.15% | 71.86% |\"}", "{\"comment\": \"We sincerely thank the reviewer for recognizing the significance of the problem we are addressing and acknowledging the potential of our work. We appreciate your constructive feedback regarding the experiments and writing, and we have made substantial efforts to tighten both aspects in the revised manuscript. Below, we detail the specific revisions and improvements made in response to your comments.\\n\\n>Q1: \\\"Is this going to affect real users in any way? Arguably this is even an intended feature to avoid getting into fights with users.\\\"\", \"a1\": \"Thank you for raising this important question. To address this, we offer the following insights and supporting evidence:\\n- Uncertain responses can significantly affect users\\u2019 trust and decision-making processes, as demonstrated by the following example: **What is the capital of the UK?** **A**: London (confidence: approximately 1.0). **B**: Paris (confidence: 1.29 \\u00d7 10\\u207b\\u00b9\\u2070). [1]. Even for questions where the correct answer is highly certain, there remains a non-zero probability of the model producing an incorrect response. For questions involving greater uncertainty, particularly in multimodal data, this raises concerns about how much users should rely on the model's answers.\\n- We conducted a simple experiment where, after the user inputted a question (including both multiple-choice and true/false questions), an incorrect character or word was appended to the end of the text to simulate accidental input. No additional guiding information was provided. However, **the results still demonstrated a relatively high misleading rate**, the model maintains a misleading rate of approximately 62.4%.\\n\\n#### Table: The misleading rate of adding a misleading letter to the end of a question.\\n| Model | ACC |Misleading rate|\\n|------------------------------|--------|--------|\\n| MiniCPM-v-v2 | 57.25% | 58.6% |\\n| Phi-3-vision | 44.48% | 62.04% | \\n| yi-vl-6b-chat | 55.52% | 60.04% | \\n| Yi-VL-6b | 61.36% | 90.83% | \\n| Deepseek-VL-7b-Chat | 59.96% | 74.55% | \\n| LLaVA-NeXT-7b-vicuna | 46.65% | 73.55% | \\n| MiniCPM-Llama3-v2.5 | 58.66% | 73.25% | \\n| GLM4V-9B-chat | 51.19% | 27.48% | \\n| CogVLLM-chat | 44.26% | 38.88% | \\n| InternVL-Chat-V1-5 | 56.49% | 53.83% | \\n| LLaVA-Next-34b | 56.39% | 60.84% | \\n| Yi-VL-34b | 54.87% | 73.77% | \\n| **Average** | 54.88% | 62.36% |\", \"ref\": \"[1] Yadkori et al., 2024. To Believe or Not to Believe Your LLM\\n\\n>Q2: \\\"As a result of this, I don't think the 'Explicit' misleading prompts are really a meaningful benchmark.\\\"\", \"a2\": \"Thank you for raising this concern. We would like to clarify the significance of the \\\"Explicit\\\" misleading prompts as a meaningful benchmark and provide additional context: we included the experimental results for explicit prompts in Table 8 of the appendix in our original submission(Table 6 in the revised version). These explicit prompts consist of 11 additional templates, including examples such as \\\"the users' answer is,\\\" \\\"the GPT-4's answer is,\\\" and \\\"given the context and the picture, the answer is xx.\\\" The experimental results reveal that explicit prompts consistently exhibit high rates of misinformation. **This pattern is not limited to a single template but extends across various forms of explicit prompts, making them a meaningful benchmark for assessing the vulnerability of MLLMs to adversarial manipulation.** Such insights are critical for developing more resilient models and better understanding their limitations. Complete table can be found in the revised version, Table 7.\\n\\n#### Table: the average misleading rate of 11 explicit prompt templates on 12 open-source MLLMs.\\n| **Model** | **Factors** | **Apparent** | **Argue** | **While** | **Obvious** | **Context** | **Given** | **Evidence** | **Correct** | **GPT** | **User** |\\n|-------------------------|-------------|--------------|-----------|-----------|-------------|-------------|-----------|--------------|-------------|----------|-----------|\\n| **Average** | **65.16%** | **67.43%** | **65.58%**| **69.92%**| **62.18%** | **66.30%** | **67.30%**| **64.28%** | **63.89%** | **60.02%**| **48.74%**|\\n\\n---\"}", "{\"comment\": \"> Q5: The questions only include multi-choice and T/F styles, which certainly makes the metrics calculation easier (reflected in equations 1 and 2), yet probably losing the delicacy in the type of Q/A addressed?\", \"a5\": \"Thank you for your suggestions. To comprehensively evaluate the effectiveness of our method, we transformed the discriminative task into a generative task. Specifically, we randomly sampled 200 data points prone to misleading. Only the images and questions were provided, without answer options. GPT-4o was used to evaluate the generated text and corresponding answers, and the misleading rates were calculated before and after introducing misleading information. (Note that the 200 data points prone to misleading were selected from external datasets. In contrast, using other unscreened generative task datasets does not effectively identify data points that are highly susceptible to misleading.) Complete table can be found in the revised version paper, Table 27.\\n\\n#### Table: Comparison of explicit and implicit misleading instruction performance on generative tasks before and after fine-tuning\\n| Model | Before (T-F) | Before (F-T) | After (T-F) | After (F-T) |\\n|---------------------------------|--------------|--------------|-------------|-------------|\\n| **Explicit** | | | | |\\n| MiniCPM-v-v2 | 69.23% | 87.70% | 25.00% | 72.54% |\\n| Phi-3-vision | 100.00% | 66.67% | 71.43% | 30.57% |\\n| Yi-VL-6b | 100.00% | 82.89% | 88.89% | 55.50% |\\n| Qwen-VL-Chat | 94.12% | 86.34% | 86.21% | 50.88% |\\n| Deepseek-VL-7b-Chat | 92.31% | 81.82% | 70.59% | 43.17% |\\n| LLaVA-NeXT-7b-Vicuna | 100.00% | 62.56% | 100.00% | 60.20% |\\n| MiniCPM-Llama3-v2.5 | 81.25% | 83.71% | 66.67% | 64.29% |\\n| GLM4V-9B-Chat | 85.71% | 80.90% | 48.48% | 62.42% |\\n| CogVLLM-Chat | 100.00% | 54.55% | 75.00% | 3.35% |\\n| InternVL-Chat-V1_5 | 85.71% | 69.27% | 24.32% | 68.10% |\\n| LLaVA-Next-34b | 100.00% | 92.18% | 62.50% | 54.39% |\\n| Yi-VL-34b | 90.91% | 92.59% | 77.78% | 14.21% |\\n| Average | 91.94% | 76.99% | 65.01% | 48.31% |\\n| **Implicit** | | | | |\\n| Average | 91.99% | 44.38% | 57.61% | 23.57% |\"}", "{\"comment\": \"We sincerely appreciate your thoughtful comments and constructive feedback. Regarding your concern about the novelty and comprehensiveness of our proposed approach/benchmark relative to existing benchmarks, we would like to further clarify and highlight the following aspects:\\n\\n**Novelty** :\\n\\nExisting methods for evaluating model robustness and uncertainty have notable limitations. For instance, [1] evaluated model robustness to generate leading questions, but it is limited to a narrow set of question types and defining the questions.\\n \\nSimilarly, [2] examined the effects of deceptive prompts on model behavior, but it lacks a systematic, quantitative metric to measure how such prompts induce uncertainty. [6] assessed the robustness of vision-language models to visual adversarial instructions, is limited by its focus on visual inputs alone and does not consider textual misleading prompts. [7] primarily investigated model robustness to manipulated visual inputs, overlooking the broader impact of textual misleading instructions on model uncertainty.\\n\\n[4],[5] evaluated the consistency or uncertainty only for language models. [3] focused on consistency on three types of tasks, but its evaluation is confined to fixed conditions and does not account for how misleading instructions trigger uncertainty in real-world scenarios. \\n\\nAs Reviewer t1Gv pointed out, our approach introduces misleading evidence in prompts to test model robustness, which is both novel and interesting. This method offers a systematic and scalable way to evaluate consistency and reasoning under adversarial conditions\\u2014an area that has not been fully explored in existing benchmarks. To the best of our knowledge, we are the first to employ both explicit and implicit misleading instructions in a multimodal setting to effectively identify uncertainty data and evaluate model response uncertainty.\\n\\nOur method does not require the design of specific misleading questions [1] [2] or identifications of particular visual inputs [6][7]. It is highly extensible, enabling applications to any multimodal dataset and providing greater flexibility.\\nPrevious studies typically relied on consistency and accuracy metrics to assess uncertainty [3] [4] [5], but they often required 5\\u201315 responses to effectively identify uncertain data. In contrast, our two-stage process using misleading instructions only requires two responses to identify uncertainty. As Reviewer 7azo highlighted, our approach introduces novel and more efficient metrics that address limitations in widely used metrics, such as the self-consistency rate. We also present the relationship between consistency rate and misleading rate, as well as between consistency rate and accuracy (see Figures 1 and 10 in the revised manuscript).\\n\\nUnlike existing methods [1-7], we also improve the performance of current MLLMs by fine-tuning the models based on the identification of uncertain data. This fine-tuning significantly reduces the misleading rate, improves the consistency rate, and importantly, achieves these improvements without any loss in accuracy.\\n\\n[1] Seeing Clearly, Answering Incorrectly: A Multimodal Robustness Benchmark for Evaluating MLLMs on Leading Questions \\n\\n[2] How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts \\n\\n[3] Unveiling the Tapestry of Consistency in Large Vision-Language Models\\n\\n[4] Benchmarking and improving generatorvalidator consistency of language models\\n\\n[5] Generating with confidence: Uncertainty quantification for black-box large language models\\n\\n[6] AVIBench: Towards Evaluating the Robustness of Large Vision-Language Model on Adversarial Vsual-Instructions\\n\\n[7] On Evaluating Adversarial Robustness of Large Vision-Language Models\"}", "{\"comment\": \"> Q4: Instead of finetuning, I would recommend the authors to simply systematically prompt the MLLMs, such as \\\"The questions might contain misleading information, you should try to answer the question correctly despite of those misleading information ...\\\"; another version could even give it two examples (one explicit and one implicit). I would guess/assume, simply doing this extra prompting will make the results much better.\", \"a4\": \"Thank you for your thoughtful suggestion. We conducted additional evaluations incorporating explicit and implicit misleading examples as part of systematic prompting strategies. The results indicate that even when the instructions explicitly warned the model about the presence of misleading information, the misleading rate remained above 70%. Similarly, providing explicit and implicit misleading examples in the instructions resulted in a misleading rate of approximately 70%. Although these strategies reduced the misleading rate by about 15%, the performance remains unsatisfactory for large models. The specific prompt and complete table can be found in the revised version paper, Table 22.\\n\\n#### Table: The average misleading rates of different explicit and implicit prompt-based defense strategies on 12 open-source MLLMs. (Warning means your first example. And different explicit and implicit instructions are shown in Appendix.A.2.4)\\n| Model | Warning | Example(1) | Example(2) | Example(3) | COT | Warning | Example(1) | Example(2) | Example(3) | COT |\\n|-------------|------------|------------|------------|------------|----------|------------|------------|------------|------------|----------|\\n| Average | 69.28% | 67.59% | 67.26% | 77.90% | 88.50% | 73.58% | 72.08% | 72.91% | 92.36% | 84.61% |\\n\\n---\\n\\n| Model | Warning | Example(1) | Example(2) | Example(3) | Warning | Example(1) | Example(2) | Example(3) |\\n|----------|------------|------------|------------|------------|------------|------------|------------|------------|\\n| Average | 66.62% | 70.13% | 72.05% | 72.72% | 58.12% | 61.70% | 61.54% | 61.19% |\\n---\"}", "{\"comment\": \"We greatly appreciate the thorough evaluation of our work and the valuable suggestions provided by the reviewers. Your feedback has been instrumental in helping us improve the clarity and comprehensiveness of our manuscript. In response to your concern, we conducted additional experiments and revised our analysis accordingly.\\n\\n> Q1: The paper lacks a discussion on whether the models are calibrated.\", \"a1\": \"Thank you for your valuable suggestion. We have added a discussion in the appendix to address the calibration of the models. This discussion provides an analysis from three key perspectives:\\n- **ECE.** We adopted the method described in [1] to enable the MLLMs to output their confidence scores. The results demonstrate a notable improvement in calibration, with the Expected Calibration Error (ECE) in the True-False (T-F) scenario decreasing from 0.47 to 0.23 after fine-tuning.\\n- **Consistency Rate.** The consistency rate of the fine-tuned MLLMs shows significant improvement. Specifically, we observed an approximate 13.58% increase in low misleading rate data and a 26.51% increase in high misleading rate data, as detailed in the revised Table 21.\\n- **Accuracy.** The accuracy of the fine-tuned MLLMs also improved moderately, with an approximately 5% increase observed after fine-tuning. These results are reported in the updated Tables 17 and 18 in the revised manuscript.\\n\\n\\n#### Table: Comparison of ECE before and after fine-tuning on our benchmark\\n| Model | Before | After |\\n|------------------------|--------|-------|\\n| MiniCPM-v-v2 | 0.46 | 0.24 |\\n| Phi-3-vision | 0.46 | 0.15 |\\n| Yi-VL-6b | 0.45 | 0.27 |\\n| Qwen-VL-Chat | 0.49 | 0.24 |\\n| Deepseek-VL-7b-Chat | 0.47 | 0.20 |\\n| LLaVA-NeXT-7b-vicuna | 0.48 | 0.23 |\\n| MiniCPM-Llama3-v2.5 | 0.49 | 0.18 |\\n| GLM4V-9B-chat | 0.46 | 0.25 |\\n| CogVLLM-chat | 0.46 | 0.27 |\\n| InternVL-Chat-V1-5 | 0.47 | 0.24 |\\n| LLaVA-Next-34b | 0.49 | 0.19 |\\n| Yi-VL-34b | 0.45 | 0.26 |\\n| **Average** | **0.47** | **0.23** |\\n\\n\\n#### Table: Comparison of consistency before and after fine-tuning on our benchmark\\n| **Model** | **Low (Before)** | **Low (After)** | **Low (Change)** | **High (Before)** | **High (After)** | **High (Change)** |\\n|--------------------|------------------|-----------------|------------------|-------------------|------------------|-------------------|\\n| MiniCPM-v-v2 | 82.93% | 97.83% | +14.90% | 56.52% | 90.64% | +34.12% |\\n| Phi-3-vision | 79.89% | 89.33% | +9.44% | 63.94% | 87.77% | +23.83% |\\n| GLM4v-9b | 94.33% | 99.00% | +4.67% | 82.28% | 95.85% | +13.57% |\\n| LLaVA-Next-34b | 73.30% | 98.61% | +25.31% | 53.30% | 91.81% | +38.51% |\\n| **Average** | **82.61%** | **96.19%** | **+13.58%** | **64.51%** | **91.02%** | **+26.51%** |\\n\\n[1] Xiong, M.,et al. 2023. Can LLMs express their uncertainty? An empirical evaluation of confidence elicitation in LLMs.\"}", "{\"comment\": \"We sincerely thank for your thoughtful and detailed feedback, which has provided invaluable insights for improving our work. We appreciate the recognition of the importance of addressing response uncertainty in MLLMs and the potential utility of our proposed MUB benchmark. Your insights regarding the limitations and areas for improvement in our work are invaluable, and we have carefully addressed these concerns in the revised manuscript. Below, we provide detailed responses to each of the points raised.\\n\\n>Q1: We appreciate your valuable suggestions. We also present the additional experimental results in video question answering (video-QA). This work only evaluates/tackles VLLM instead of MLLM as claimed multiple times in title and throughout paper, though I could maybe see the way to extend to other modalities.\", \"a1\": \"Thank you for your valuable suggestions. In response, we have conducted additional experiments in the video question answering (Video-QA) setting to evaluate the adaptability of our method to multimodal inputs, including video and audio modalities. Specifically, we tested VideoLLaMA-2 [1] on the Video-MME [2] dataset across various categories. The results, shown in Table 28 and 29 (revised version), demonstrate that our method is effective for video and audio modalities. For the **video-audio modality**, the average accuracy decreased from 48.3% to 40.4%. Similarly, for models employing only the **video modality**, the average accuracy decreased from 54.9% to 45.5%.\\n\\n#### Table: Comparison of results before and after adding misleading instructions with video-audio input for VideoLLaMA-2 on the Video-MME dataset. \\n| **Category** | **Short Before** | **Short After** | **Medium Before** | **Medium After** | **Long Before** | **Long After** | **Overall Before** | **Overall After** |\\n|----------------------------|------------------|-----------------|-------------------|------------------|-----------------|----------------|--------------------|-------------------|\\n| **Knowledge** | 59.6% | **51.1%** | 45.2% | **38.5%** | 39.3% | **31.1%** | 48.0% | **40.2%** |\\n| **Film & Television** | 68.3% | **56.7%** | 51.7% | **43.3%** | 35.8% | **27.5%** | 51.9% | **42.5%** |\\n| **Sports Competition** | 50.7% | **43.3%** | 44.7% | **36.0%** | 33.3% | 31.3% | 42.9% | **36.9%** |\\n| **Artistic Performance** | 61.7% | **55.0%** | 49.2% | **44.2%** | 44.2% | **35.8%** | 51.7% | **45.0%** |\\n| **Life Record** | 60.0% | **51.0%** | 43.3% | **34.8%** | 43.3% | **34.8%** | 48.9% | **40.2%** |\\n| **Multilingual** | 56.7% | **36.7%** | 36.7% | **30.0%** | 43.3% | **26.7%** | 45.6% | **33.3%** |\\n\\n[1] Cheng, Z., et al., 2024. VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs.\\n\\n[2] Fu, C., et al., 2024. Video-MME: The first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis. \\n\\n---\"}", "{\"summary\": \"The paper introduces a dataset of misleading instructions for multimodal language models. This is done in two ways: through a template (telling the model that the answer is \\\"X\\\", where X is wrong), and through a language model (for instance by adding evidence or reasoning that contradicts the true answer). It is shown that models have lower consistency on instructions that are successfully misleading, and that fine-tuning can improve this.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper tackles an interesting problem, and the idea of adding misleading evidence to a prompt is a nice way to test for robustness. I also thought it was interesting that consistency decreases.\", \"weaknesses\": \"I'm not sure I buy the overall motivation -- of course if you tell the model the answer is wrong, it will flip some fraction of the time. But is this going to affect real users in any way? Arguably this is even an intended feature to avoid getting into fights with users.\\n\\nAs a result of this, I don't think the \\\"Explicit\\\" misleading prompts are really a meaningful benchmark. The \\\"Implicit\\\" ones are more interesting, but need more detailed treatment: for instance, at least give several examples and enough information to assess data quality. The evaluation of the \\\"Implicit\\\" setting is also strange -- using best-of-5 sampling (even though \\\"Explicit\\\" is best-of-1) which inflates the success rate.\\n\\nA separate issue is that, throughout, there is not enough information to understand the data or experimental setup in detail. The paper says that they \\\"fine-tuned on separate data\\\", but there are not many details that would let a reader verify or reproduce the experiments. (This is also part of the problem with the \\\"Implicit\\\" setting -- not enough details to fully understand.)\\n\\nI think the authors are tackling an interesting problem, and have made a good start on it, but in my opinion the experiments and writing should be tightened up before it's ready to be accepted to ICLR.\", \"questions\": \"Why best-of-5 sampling, and why only for \\\"Implicit\\\"?\\n\\nCan you provide several random samples from the \\\"Implicit\\\" setting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Q2: It does not address whether more consistent (more certain) model outputs correspond to more accurate answers.The results in the paper are primarily based on misleading rate (MR) and average consistency rate (ACR), without showing metrics like model accuracy. There is a lack of analysis on utility.\", \"a2\": \"Thank you for your valuable suggestion and for pointing out this important aspect of the analysis. We appreciate the opportunity to clarify and expand on this point. In the appendix of the original manuscript, we presented the changes in accuracy after fine-tuning in Table 22 on page 26 (updated as Table 17 and Table 18 in the revised version). To further address your concern and provide a clearer illustration of the fine-tuned model\\u2019s effectiveness, we have included the accuracy metrics directly in Table 2 of the main paper. We also provide the relationship between the accuracy and the misleading rate in Figure 10(in the revised version). The results indicate an inverse relationship between the misleading rate and the accuracy, where a higher misleading rate corresponds to a lower consistency rate. Also, we\\n\\n\\n#### Table: the average accuracy before and after fine-tuning on our benchmark\\n| | Explicit | | | Implicit | | |\\n|--------------------|--------------|---------------------|------------------|------------------|---------------------|------------------|\\n| Model | Low | Medium | High | Low | Medium | High |\\n| **Average** | **79.63% (\\u21914.95%)** | **60.18% (\\u21915.10%)** | **63.22% (\\u21915.69%)** | **78.34% (\\u21914.07%)** | **58.67% (\\u21914.76%)** | **62.81% (\\u21915.55%)** |\\n\\n\\n\\n\\n\\n\\n---\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"> Q3: The authors fail to analyze the impact of instruction tuning on model usability. It is unclear how much the model\\u2019s performance on different tasks changes before and after fine-tuning.\", \"a3\": \"Thank you for your constructive feedback. As you recommended, we have provided additional experimental results on the SEED dataset (Table 20 in the new revision). The results demonstrate that fine-tuning leads to improved accuracy and a reduction in the misleading rate. Furthermore, we validated our approach on generative tasks, where the results indicate a significant decrease in the misleading rate for these tasks as well(Table 27 in the new revision). We also present the experimental results under various category divisions. The results indicate a substantial reduction in misleading rates across different tasks. (Table 24 in the new revision).\\n\\n#### Table: Comparison of accuracy and misleading rate before and after fine-tuning on SEED dataset \\n| | Before | | | After | | |\\n|--------------------------|------- |------- |--------|--------|--------|--------|\\n| Model | ACC | T-F | F-T | ACC | T-F | F-T |\\n| MiniCPM-v-v2 | 63.65% | 53.45% | 87.02% | 71.00% | 6.76% | 16.21% |\\n| Phi-3-vision | 77.78% | 71.43% | 84.32% | 73.10% | 7.66% | 27.88% |\\n| Yi-VL-6b | 60.26% | 83.73% | 96.59% | 69.80% | 15.62% | 27.15% |\\n| Qwen-VL-Chat | 54.97% | 88.39% | 80.82% | 67.80% | 8.11% | 17.08% |\\n| Deepseek-VL-7b-Chat | 63.71% | 20.03% | 54.14% | 72.90% | 2.88% | 4.80% |\\n| LLaVA-NeXT-7b-vicuna | 62.72% | 56.39% | 58.30% | 72.50% | 17.52% | 38.18% |\\n| MiniCPM-Llama3-v2.5 | 68.08% | 44.02% | 87.87% | 74.90% | 1.47% | 1.20% |\\n| GLM4V-9B-chat | 68.71% | 32.93% | 78.03% | 75.20% | 4.12% | 18.55% |\\n| CogVLLM-chat | 67.73% | 24.69% | 65.96% | 75.60% | 8.20% | 9.02% |\\n| InternVL-Chat-V1-5 | 69.52% | 30.88% | 84.94% | 78.10% | 2.82% | 4.11% |\\n| LLaVA-Next-34b | 67.40% | 41.07% | 95.06% | 76.50% | 2.09% | 6.81% |\\n| **Average** | 66.44% | 51.72% | 78.47% | 73.00% | 7.47% | 17.46% |\\n\\n#### Table: Comparison of explicit and implicit misleading instruction performance on generative tasks before and after fine-tuning \\n| Model | Before (T-F) | Before (F-T) | After (T-F) | After (F-T) |\\n|---------------------------------|--------------|--------------|-------------|-------------|\\n| **Explicit** | | | | |\\n| MiniCPM-v-v2 | 69.23% | 87.70% | 25.00% | 72.54% |\\n| Phi-3-vision | 100.00% | 66.67% | 71.43% | 30.57% |\\n| Yi-VL-6b | 100.00% | 82.89% | 88.89% | 55.50% |\\n| Qwen-VL-Chat | 94.12% | 86.34% | 86.21% | 50.88% |\\n| Deepseek-VL-7b-Chat | 92.31% | 81.82% | 70.59% | 43.17% |\\n| LLaVA-NeXT-7b-Vicuna | 100.00% | 62.56% | 100.00% | 60.20% |\\n| MiniCPM-Llama3-v2.5 | 81.25% | 83.71% | 66.67% | 64.29% |\\n| GLM4V-9B-Chat | 85.71% | 80.90% | 48.48% | 62.42% |\\n| CogVLLM-Chat | 100.00% | 54.55% | 75.00% | 3.35% |\\n| InternVL-Chat-V1_5 | 85.71% | 69.27% | 24.32% | 68.10% |\\n| LLaVA-Next-34b | 100.00% | 92.18% | 62.50% | 54.39% |\\n| Yi-VL-34b | 90.91% | 92.59% | 77.78% | 14.21% |\\n| Average | 91.94% | 76.99% | 65.01% | 48.31% |\\n| **Implicit** | | | | |\\n| Average | 91.99% | 44.38% | 57.61% | 23.57% |\"}", "{\"comment\": \"Dear Reviewer t1Gv,\\n\\nThank you very much for your invaluable feedback on our paper. We have meticulously reviewed each of your points and endeavored to address them thoroughly. We would greatly appreciate it if you could review our responses and let us know if you have any additional comments regarding the paper or the rebuttal. We are eager to embrace all your critiques and integrate them into our work.\\n\\nThank you!\"}" ] }
2NqssmiXLu
Automated Proof Generation for Rust Code via Self-Evolution
[ "Tianyu Chen", "Shuai Lu", "Shan Lu", "Yeyun Gong", "Chenyuan Yang", "Xuheng Li", "Md Rakib Hossain Misu", "Hao Yu", "Nan Duan", "Peng CHENG", "Fan Yang", "Shuvendu K Lahiri", "Tao Xie", "Lidong Zhou" ]
Ensuring correctness is crucial for code generation. Formal verification offers a definitive assurance of correctness, but demands substantial human effort in proof construction and hence raises a pressing need for automation. The primary obsta- cle lies in the severe lack of data—there is much fewer proofs than code snippets for Large Language Models (LLMs) to train upon. In this paper, we introduce SAFE, a framework that overcomes the lack of human-written proofs to enable automated proof generation of Rust code. SAFE establishes a self-evolving cycle where data synthesis and fine-tuning collaborate to enhance the model capability, leveraging the definitive power of a symbolic verifier in telling correct proofs from incorrect ones. SAFE also re-purposes the large number of synthesized incorrect proofs to train the self-debugging capability of the fine-tuned models, empowering them to fix incorrect proofs based on the verifier’s feedback. SAFE demonstrates superior efficiency and precision compared to GPT-4o. Through tens of thousands of synthesized proofs and the self-debugging mechanism, we improve the capa- bility of open-source models, initially unacquainted with formal verification, to automatically write proofs for Rust code. This advancement leads to a signifi- cant improvement in performance, achieving a 52.52% accuracy rate in a bench- mark crafted by human experts, a significant leap over GPT-4o’s performance of 14.39%.
[ "Large Language Models", "Program Verification" ]
Accept (Poster)
https://openreview.net/pdf?id=2NqssmiXLu
https://openreview.net/forum?id=2NqssmiXLu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uz9z3Wghvd", "tEI7grfbTy", "t709a3IAlB", "q4pqhegWJE", "p2sUWBCox8", "eXjNeT7qRW", "YrdrV5Eli2", "OjWTQQdjbT", "O4KxuulogZ", "LHzaqy6rrX", "Iyp60m3Ja0", "DOuYnZUXoD", "DMfrQaiVsy", "8xCL0eDJGF", "0CzxSsoRYN" ], "note_type": [ "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730673306940, 1734597749017, 1730458380825, 1732082575924, 1732111130745, 1732081778810, 1730668957576, 1732082368828, 1737523491779, 1732081593005, 1729908545679, 1732082437360, 1732595443983, 1732088703778, 1732309870795 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_73u5" ], [ "ICLR.cc/2025/Conference/Submission2209/Area_Chair_MrJn" ], [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_K5mr" ], [ "ICLR.cc/2025/Conference/Submission2209/Authors" ], [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_hvY6" ], [ "ICLR.cc/2025/Conference/Submission2209/Authors" ], [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_aKrU" ], [ "ICLR.cc/2025/Conference/Submission2209/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2209/Authors" ], [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_hvY6" ], [ "ICLR.cc/2025/Conference/Submission2209/Authors" ], [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_aKrU" ], [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_K5mr" ], [ "ICLR.cc/2025/Conference/Submission2209/Reviewer_73u5" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes SAFE, a data generation and fine-tuning procedure for improving LLMs in generating proofs for the correctness of Rust code. SAFE consists of three stages: (i) verus-compatible code generation, (ii) self-evolving specification synthesis, and (iii) self-evolving proof synthesis. During stage (ii), SAFE leverages a symbolic and quantitative measure based on the correctness and completeness of the specification. For stage (iii), SAFE fine-tunes both proof generation and repair models. The experiments demonstrate the advantages of SAFE: it significantly improves the performance, compared to both the base model and GPT-4o.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper studies automating proof generation in formal program verification with LLMs, an important direction with great potential for practical applications. The focus is on Rust, a relatively new language that is gaining widespread adoption. Although synthetic data generation for fine-tuning LLMs is not a completely novel idea, the paper introduces a few interesting techniques for the domain of proof generation for Rust. I particularly like the metric for filtering high-quality specifications. The evaluation is thorough, demonstrating the benefits of SAFE over baselines and the effectiveness of its individual components.\", \"weaknesses\": \"The paper only focuses on small programs in the style of MBPP and CodeNet. Although I understand this is partly due to the limitation of the Verus tool, I do believe that the paper should present some case studies or discussion on how to scale the approach to real-world software projects.\\n\\nApart from proof generation, a major part of formal verification is writing the specifications. The paper covers mechanisms to fine-tune a \\u201cgood\\u201d specification generation. It would strengthen the paper if more evaluation can be done on the specification generation task and how it can be combined with proof generation to automate end-to-end verification.\\n\\nThe paper lacks a study on the choice of the correctness and completeness thresholds for the specification metric.\\n\\nThe paper writing can be improved. Below are some issues I found or some recommendations:\\n- The text in Section 3 is sometimes ad-hoc and contains low-level details (e.g., choice of parameters). I would be helpful to revise the text to be more formal and move the details to later sections.\\n- Line 289: The paper says \\u201cmuch previous work relies on running many test cases\\u201d without providing any references.\\n- Line 519: Table 2 should be Table 3\\n- Table 3: The split of model names to multiple lines is confusing. I thought one line of text corresponds to one single baseline. The $\\\\Delta$ rows look redundant as well.\", \"questions\": \"Please address the points raised in the \\u201cWeakness\\u201d section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper considers learning how to prove the correctness of Rust programs, an important programming language for which there is very little training data, by bootstrapping both specifications and proofs. This problem is more broadly emblematic of the need to produce verified code for low resource languages, potentially even low resource areas of math (though not explored in the paper). The primary strength is that the method is creative, potentially high impact, and has good empirical results. The primary weakness is that it only considers proofs for MBPP-style problems, and required nontrivial manual effort to bootstrap the system (while also relying on automated general-purpose methods). I recommend acceptance because this weakness is understandable given that this would only be the first step in this research program, and because it is both conceptually interesting and practically relevant to machine learning and formal methods.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers engaged in the discussion and suggested drawing more connections to existing work. The text has been revised to take such connections into account.\"}", "{\"summary\": \"The presented paper proposes a method to bootstrap training data for generating proofs of Rust code using LLMs. The pipeline starts from a small set of curated programs and gradually evolves using the verifier as signal for dataset curation. Finally they evaluate the resulting fine-tuned LLM and show state of the art results on a difficult dataset of low-resource correctness proofs.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The paper is well-written and nicely structured. The figures and tables are well formatted and legible.\", \"The story is engaging and the tackled topic highly relevant.\", \"The results are clearly presented and provide interesting insights.\"], \"weaknesses\": [\"In Table 1 the Accuracy of GPT-4o on VerusBench @100 is unfortunately missing (likely due to high inference cost?). Similarly the result of DeepSeekCoder RAW @100 is missing. If the authors could provide these values, the tables would provide a much more complete picture.\", \"In Table 2, Round 3 appears to severely degrade performance of the resulting model on the Tutorial dataset. Does this constitute some first signs of overfitting or collapse or could the authors provide some more insight on what is happening here? It might be interesting to provide some basis on deciding where to stop the iterative process.\", \"There is no discussion of Limitations. While the provided method is clearly powerful some discussion on potential limitations would be highly appreciated.\"], \"questions\": \"Please provide a short statement or clarification to the points raised above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer hvY6\", \"comment\": \"We sincerely thank Reviewer **hvY6** for the insightful comments. \\u202fOur response is the following and our revision in the revised paper is highlighted in blue:\\n\\n \\n\\n### W1. \\u201cI find that the paper tries a little bit too hard to sell the novelty of this SAFE approach \\u2026 expert iteration \\u2026 I would encourage the authors to tone down the language a bit more\\u201d \\n\\nThanks for your suggestion about the writing of our paper. We have made the following revision in the revised paper following your suggestion: \\n\\n \\n\\na) Our original submission had a short discussion comparing \\u201cexpert iteration\\u201d and SAFE in Section 2. We have elaborated that discussion in the revised version. \\n\\n \\n\\nBasically, indeed, our work applies the self-evolving / expert-iteration approach to a new task: \\n\\nsynthesizing correctness proof for Rust code. This new target task raises some unique challenges and hence require different designs: \\n\\n- Data scarcity challenge in Verus. We do not have millions of manually-written mathematical theorems for bootstrapping, which is why we had to rely on GPT-4o for bootstrapping. We also have no access to a large quantity of proof problems and hence must synthesize problems (i.e., Rust-code transpilation + specification synthesis) by itself. \\n\\n- The underlying verification engine. Math-proof synthesis can be decomposed to many small steps or tactics thanks to the interactive theorem prover, LEAN. In contrast, Verus leverages SMT-solver to prove the correctness of a function as a whole. Consequently, step-wise search strategies used in prior work do not apply here. Instead, whole-proof debugging is crucial for SAFE. \\n\\n \\n\\nb) We have toned down the language a bit more in Introduction. Particularly, we moved the last paragraph originally in the Introduction, which summarizes the reason-of-success of SAFE and how it can help future research, to the end of Section 2 and rephrased it in the context of prior self-evolving/expert-iteration work. \\n\\n \\n\\nc) We replaced some appearance of \\\"SAFE\\\" with more precise phrases (e.g., \\u201cGPT-4\\u201d) at those places you pointed out. \\n\\n \\n\\n### W2. pass@k metrics for self-debugging is unfair \\n\\nThanks for pointing out this issue. In the revised paper, we made several changes to the evaluation section, especially Table 1, to address this fairness concern. \\n\\nAt the end of Section 4.1.2, we explain the number of samples produced under SAFE+\", \"we_re_organized_table_1\": \"in the revised version, SAFE+ has no Accuracy@1 result now. Instead, the generate once + debug once setting of SAFE+ is now presented in a newly added row of Accuracy@2, so that it is compared with taking two samples from other non-debugging models in a fair way. Similarly, we moved SAFE+ Accuracy@10 results in our original submission down to the Accuracy@100 row, as that is fairer as suggested by the reviewer.\\n\\nWe have revised the text in Section 4.2.1, so that we present a fairer statement of SAFE+\\u2019s capability. \\n\\nSince Table 2 and Table 3 do not compare SAFE+ with SAFE, instead they are designed to show the differences across rounds and different specification settings, we did not reorganize them in the revised paper. But, we added the annotation of \\u201c(K + K*K)\\u201d right after the \\u201cSelf-Debugging\\u201d column name to remind readers that SAFE+ produces many more than K samples. \\n\\n \\n\\n \\n\\n### Q1. Table 2\\u2019s \\u201ctotal\\u201d column \\n\\nThe \\u201ctotal\\u201d column is the mean Accuracy over all of the proof tasks in the entire VerusBench dataset. They are not the mean of the other columns in Table 2, because there are different numbers of proof tasks in SV, MBPP, and Tutorial. \\n\\n \\n\\n### Q2. \\u201cWhen constructing training data for the repair task, are you doing any filtering to make sure the \\u2018incorrect program\\u2019 is actually similar to the \\u2018correct program\\u2019?\\u201d \\n\\nWe do not filter based on the similarities between \\u201cincorrect program\\u201d and \\u201ccorrect program\\u201d. \\u202f For each proof task (i.e., a Rust program with a specification), we first divide its incorrect proofs into subgroups where each group's proofs have the same number of verification errors and then randomly sample at most 10 errors in each subgroup. Our target is to ensure the diversity of debugging data for training. \\n\\n \\n\\n### Q3. \\u201cthe difference between SAFE and expert iteration\\u201d \\n\\nPlease refer to the response to W1 and Section 2 in the revised paper.\"}", "{\"comment\": \"I thank the authors for their extensive reply to my comments and for successfully addressing all of my concerns.\\n\\nI have reviewed the changes made to the paper, as well as the replies to the other reviews. After doing so I am confident that this paper would make for a good contribution to the conference, and I have updated my score accordingly.\"}", "{\"title\": \"Response to Reviewer 73u5\", \"comment\": \"We sincerely thank Reviewer **73u5** for the insightful comments. \\u202fWe provide our response below and highlight related revisions in blue in the revised paper:\\n\\n \\n\\n### W1. \\u201cdiscussion on how to scale the approach to real-world software projects\\u201d \\n\\nWe have added a discussion in Section D.1 (in Appendix) on this topic. \\n\\n \\n\\nSince every function is the unit for Verus verification, we believe the LLM fine-tuned by SAFE on functions in small programs would continue to be useful for functions in large projects. Of course, if we apply SAFE to synthesize proof for large Rust projects, we expect a key challenge in how to resolve code dependencies across functions: a function may call other executable functions or specification functions, and the callee functions may exist in a different file and/or belong to a different class. How to resolve all the code dependency and provide LLM with all the needed information may require support that goes beyond machine learning. \\n\\n \\n\\n### W2.\\u201cIt would strengthen the paper if more evaluation can be done on the specification generation task and how it can be combined with proof generation to automate end-to-end verification\\u201d \\n\\nWe have added further evaluation results and related discussion in Section E.2 and Figure 4 (in Appendix of the revised paper) that show the distribution of the correctness-score and the completeness-score of all the specifications synthesized during the self-evolving process of SAFE. \\n\\n \\n\\nIn our original submission, we designed two baselines to show how the quality of specification generation would affect the end-to-end verification and hence the effectiveness of our proof generation training: this result was shown in Table 3 (and P-values in Table 7). As the reviewer pointed out, our original presentation in Table 3 was unclear; so, we have cleaned up Table 3 in the revised paper. In general, Table 3 shows that the quality-decrease in specification substantially decreases the effectiveness of end-to-end verification for the Rust programs in our training dataset and hence the accuracy of the final proof-generation model. \\n\\n \\n\\n### W3. \\u201cThe paper lacks a study on the choice of the correctness and completeness thresholds for the specification metric\\u201d \\n\\nAs explained in Section 3.2, SAFE needs specifications that have reasonably high scores, but not perfect scores (i.e., 1.0). Beyond the reasons that are already presented in Section 3.2, an extra reason for the relatively low Completeness threshold (0.6) is that a mutated test case might still be correct and hence should not be rejected. For example, Listing 7 illustrates a test case for \\u201csharedElements\\u201d while it does not require the order of output list to be the same as its inputs. If we change the target output from [13, 14] to [14, 13], it is still correct and hence will lower the Completeness score of some good specifications. \\n\\n \\n\\nWe apologize that we did not have time to re-run the whole training process using different specification-filtering thresholds. We hope that the newly added Figure 4 in the revised paper (it shows the score-distribution of synthesized specifications) and the newly cleaned-up Table 3 (how proof-synthesis accuracy drops when different sets of specification are used) will help readers see how our specification filtering has helped SAFE. \\n\\n \\n\\n### W4. Writing issues \\n\\n \\n\\n**4.1 \\u201cThe text in Section 3 is sometimes ad-hoc and contains low-level details\\u201d** \\n\\nIn the revised paper, we have deleted some ad-hoc discussion in Section 3, moved some ad-hoc discussion into Appendix (e.g., the now early part of Section C.1 Specification Filtering), and added formal definitions about our task target and specification metrics (Formula (1), (2), and (3) in Section 3). \\n\\n \\n\\n**4.2 \\u201cThe paper says `much previous work relies on running many test cases\\u2019 without providing any references\\u201d** \\n\\nWe have added reference to this sentence. It is Lines 271-272 now. \\n\\n \\n\\n**4.3 \\u201cTable 2 should be Table 3\\u201d** \\n\\nWe have changed the incorrect reference. It is Line 521 now. \\n\\n \\n\\n**4.4 \\u201cTable 3: The split of model names to multiple lines is confusing \\u2026 The delta rows look redundant as well\\u201d** \\n\\nWe have changed the model names in Table 3 and removed $\\\\Delta$ rows.\"}", "{\"summary\": \"This paper introduces SAFE, an innovative framework designed to address the challenges of automated proof generation for Rust code, SAFE overcomes the significant data scarcity issue ( i.e., there is far less proof data than code data for training language models) by using a self-evolving approach to synthesize specification/proof data and fine-tune models iteratively. SAFE operates through a feedback loop where synthesized proofs are continuously verified by a symbolic verifier, distinguishing correct from incorrect proofs. Incorrect proofs are used to train the model's self-debugging ability, while the correct proofs are used to improve the model for the next round. The design of the approach is smart and uses the insight that (1) using a quantitative metric to select high-quality specifications for fine-tuning; (2) we only need reasonably well, instead of perfect specifications to fine-tune in the next step; and (3) Verus can quickly tell correct proof from incorrect ones, which enables the collecting and filtering of large amount of data.\\n\\nSAFE achieves a substantial improvement, attaining a 70.50% accuracy on a benchmark set crafted by human experts, a notable advancement over GPT-4o's performance of 24.46%. SAFE also obtains self-debugging ability using the incorrect proofs collected during the data collection step. Experiments show that each round of self-evolving improves the accuracy of SAFE, and proves the importance of using high-quality training data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposes a novel approach that use self-evolving to iteratively improve LLM's ability of generating Rust specification and proofs. The fact that this approach does not rely on larger LLM such as GPT-4o in the following iterations (except the first round) makes it more generalizable and scalable.\\n2. The proposed approach shows great effectiveness, with three round of self-evolving, the fine-tuned LLM shows about 40% higher accuracy@1 compared to the prompting approach.\\n3. Comprehensive analysis and experiments, showing that each round of 1, 2 and 3 brings some improvement to the fine-tuned LLM (although the round 2 model is better than the round 3 model under some settings), and showing that high-quality specifications important to improve the model's accuracy during self-evolving.\", \"weaknesses\": \"1. The self-debugging ability is shown to be only effective for the first time, what could be potential approach for improving the self-debugging ability in the following rounds?\\n2. I am wondering if this self-evolving approach can improve smaller LLMs ability. For instance, if the backbone is DeepSeekCoder-1.3B, how effective is the self-evolving approach?\", \"questions\": \"1. A clarifying question about the self-evolving data: The data collected through GPT-4o (round 0) is used to fine-tuned the first specification/proof generation model. What's the data input used to let the generation model generate data for then next round?\\n* Are these data the same programs as those used in the generating round 0 data? If this is the case, would the training data in each round kind of repetitive and lack of diversity?\\n* Or does author use some strategies to leave some unique programs for each round, so that the fine-tuning data for each round contains different programs?\\n2. Self-debugging is quite effective and improves the accuracy, how does the model obtain the ability of self-debugging? does the fine-tuning procedure contains self-debugging training data?\\n3. Why are the baseline models prompted with 4 examples instead of more examples?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer aKrU\", \"comment\": \"We sincerely thank Reviewer **aKrU** for the insightful comments. \\u202fWe provide our response below and highlight our revisions in blue in the revised paper:\\n\\n \\n\\n### \\u202fW1.\\u201cWhat could be potential approach for improving the self-debugging ability in the following rounds\\u201d \\n\\nThis is a very good question, and definitely worth future research to look into. \\n\\nWe feel the potential approach likely needs to change the formation of our current self-debugging training data. Currently, the training data for self-debugging includes many pairs of incorrect proof Yx and a corresponding correct proof Y. Sometimes, the incorrect proof Yx \\n\\nmay contain many mistakes, causing many different verification errors. If future research can break-down the difference between Yx and Y, figuring out which edit in Y is used to fix which verification error in Yx, the resulting data can probably train a model that is better at fixing deeply flawed proof through multiple rounds of debugging. \\n\\nWe have added this discussion into the revised paper (Section D.2 in Appendix). \\n\\n \\n\\n### \\u202fW2.\\u201c SAFE\\u2019s ability on smaller LLMs\\u201d \\n\\nWe have conducted a new experiment with DeepSeekCoder-1.3B as backbone (Section E.4 and Table 8 in Appendix). \\n\\nWe use the same experimental setting as in the original submission on bigger LLMs, bootstrapping DSCoder-1.3B with GPT-4o's predictions. \\n\\nAfter the same rounds of self-evolving specification generation and proof generation. The results on the VerusBench are as below. \\n\\n| | Proof Generation | | Self Debugging (k+k*k) | |\\n|---------|------------------|-------------|------------------------|--------------|\\n| Metric | Accuracy@1 | Accuracy@10 | Accuracy@1 | Accuracy@10 |\\n| raw | 1.44 | 6.47 | - | - |\\n| Round 1 | 12.95 | 24.46 | 12.95 | 24.46 |\\n| Round 2 | 19.42 | 26.69 | 24.46 | 52.52 |\\n| Round 3 | 21.58 | 40.29 | 27.34 | 57.55 |\\n\\nThe results demonstrate that even when the model size is small, our self-evolution approach can still improve its capability of proof generation. \\n\\n \\n\\n### \\u202fQ1.\\u201c What's the data input used to let the generation model generate data for the next round? \\u2026 Are these data the same programs as those used in the generating round 0 data? Would the training data in each round kind of repetitive and lack of diversity\\u201d \\n\\n\\nWhen SAFE trains the proof-generation model, all the proof tasks (each proof task is a Rust function associated with a specification) that have *not yet* been proved by earlier rounds\\u2019 models are input used to let the generation model produce data. If any previously unproved task is now proved by the latest model, the proof is then added to the training set for the next round. \\n\\n\\nIn each round, we fine-tune our model based on the raw model, e.g., DeepSeekCoder-33b, using all the correct proofs generated so far in all previous rounds. Following the discussion above, if Round k manages to prove many proof tasks that were not proved in earlier rounds, the training data for Round K+1 will be much richer than the training data used for Round-k and all the earlier rounds. On the other hand, if Round-k only manages to prove few tasks not proved before, the training data for Round k+1 will indeed be kind of repetitive comparing with the training data used for Round k. In that case, the self-evolving process should stop. \\n\\nWhen SAFE trains the spec-generation model, the situation is a little bit different. At each round, all the 21K Rust programs are data input used to let the spec-generation model generate specifications. This part of the details is discussed in Section C.2 in the Appendix. \\n\\n### Q2. \\u201cSelf-debugging is quite effective and improves the accuracy, how does the model obtain the ability of self-debugging? does the fine-tuning procedure contains self-debugging training data?\\u201d \\n\\nIn our proof-generation step, we fine-tune our model on two tasks simultaneously, proof generation and self-debugging; the self-debugging training data for each round comes from the correct and incorrect proofs synthesized by our models in earlier rounds. We have revised Section 3.3 (self-evolving proof-synthesis) to make this clear, together with formal definitions of our two fine-tuning tasks. \\n\\n### Q3. \\u201cWhy are the baseline models prompted with 4 examples instead of more examples?\\u201d \\n\\nThese 4 examples have included the main language features of Verus proof annotations, and hence are sufficient for GPT-4o to conduct in-context learning. When we designed the GPT-4o prompt, we found that adding more examples does not clearly improve GPT-4o\\u2019s output quality. Furthermore, more examples, which means longer context, would lead to longer GPT-4o inference cost and time, which we cannot afford --- our current bootstrapping round already requires one month of non-stop GPT-4o invocation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"General Response on Paper Revisions\", \"comment\": [\"We sincerely thank all the reviewers for their insightful and valuable comments. We have revised our paper based on the suggestions. They are marked in blue in the newly submitted paper.\", \"Suggested by Reviewer **hvY6**, in Abstract and Introduction, we remove the last paragraph of Section 1 and tone down the language a bit more to avoid overclaim the novelty of SAFE.\", \"Suggested by Reviewer **hvY6**, in Section 2, we elaborate the difference from expert iteration.\", \"Suggested by Reviewer **73u5** and **hvY6**, in Section 3.2 and 3.3, we move the low-level details to Appendix and revise the text to be more formal.\", \"Suggested by Reviewer **K5mr** and **hvY6**, we re-organize Table 1 and revise the corresponding description in Section 4 for, 1) filling in the missing results of Accuracy@100 for baseline models. 2) Add Accuracy@2 results to make a fair comparison between debugging approach SAFE+ and other methods without debugging.\", \"Suggested by Reviewer **73u5**, **aKrU** and **K5mr**, we add a new Section in Appendix D to discuss the limitations of our approach. Specifically, we discuss how to scale SAFE to real-world software projects (weakness 1 of Reviewer 73u5) and how to improve the self-debugging training beyond one round (weakness 1 of Reviewer aKrU).\", \"Suggested by Reviewer **73u5**, we add more evaluation on specifications in Appendix E.2.\", \"Suggested by Reviewer **aKrU**, we add new experimental results on the effectiveness of SAFE with smaller models in Appendix E.4.\", \"We add some missing citations and revise some expressions as suggested by the reviewers.\"]}", "{\"summary\": \"This paper seeks to finetune a code-generating LLM to generate verification annotations for code.\\nSpecifically, the authors target Verus, which is an SMT-backed automated-theorem-proving-style verifier for Rust code.\\nThe key technical challenge that the authors thus need to overcome is that ths is a very low resource language,\\nso simple techniques such as finetuning aren't directly applicable; and even API-backed models do so poorly\\non this task that naively distilling wouldn't help, either.\\nInstead, the authors basically bootstrap the finetuning process as follows:\\n- First, they generate a set of proof *specifications* for some Rust programs using GPT-4o. They then filter out specs which are \\\"low quality\\\", e.g. those which are always true.\\n- Then, they use these specifications to generate (against using GPT-4, with some expert-crafted task-specific prompts) proof *annotations* for a small subset of these specifications.\\n- Finally, they bootstrap a finetuning process from these initial annotations, training in each round the open-source model on the correct proofs it generated in the last round.\\nThere's some additional bells and whistles, such as also training on incorrect proofs by framing it as auxiliary repair task, but I believe this summarizes the core idea.\\n\\nIn terms of the experiments, the authors share results both for a small, human-written benchmark and for GPT-transpiled versions of MBPP and SV.\\nAt first glance I was a bit worried about the scale of this data, but given the novelty of the task and the relative lack of Rust datasets in the literature I actually commend the authors on their effort to collect as much data to evaluate on as possible.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"I think the core ideas in this paper are very interesting, and that there are several good contributions here:\\n- Filtering the specifications based on a symbolic, deterministic score computed from the tests seems like the right thing to do, and I appreciate the brief ablation study of the impact of this step (section 4.2.4).\\n- The experiment in 466-474 provide further evidence for previous findings in the code generation literature about \\\"sampling+greedy\\\" self-debugging outperforming \\\"greedy+sampling\\\" (I recommend that the authors consider explicitly comparing these results to e.g. [1, 2]).\\n- Perhaps most importantly, verifying Rust code is not only potentially impactful but also (as far as I know) a completely novel task; kudos to the authors for going through the effort to collect all the data.\\n\\n\\n[1] \\nTeaching Large Language Models to Self-Debug\\nXinyun Chen, Maxwell Lin, Nathanael Sch\\u00e4rli, Denny Zhou.\\nInternational Conference on Learning Representations (ICLR), 2024.\\n\\n[2] \\nIs Self-Repair a Silver Bullet for Code Generation?\\nTheo X. Olausson, Jeevana P. Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama.\\nInternational Conference on Learning Representations (ICLR), 2024.\", \"weaknesses\": \"There are a few rather major flaws that make me hesitant to recommend the paper for acceptance in its current format.\\n\\nOne is that I find that the paper tries a little bit too hard to sell the novelty of this \\\"SAFE\\\" approach.\\nBootstrapping finetuning of LLMs by interleaving it with search has been done before; most people call it \\\"expert iteration\\\" [3] (authors: please correct me if you think there is a *significant* difference between your method and this).\\nEspecially what you call \\\"SAFE_0\\\" feels a bit rich: unless I am mistaken, you are literally just doing synthetic data generation with GPT-4o and filtering the results based on some measure of quality.\\nAlso, on line 359 you say \\\"21,398 [programs] have been successfully transformed [...] by SAFE\\\"; unless I'm mistaken you mean \\\"by GPT-4\\\" here, because at this point you haven't done anything other than asking GPT to first transpile the code to Rust and to then transpile the Rust code into the subset of the language that is supported by Verus.\\n\\nI would encourage the authors to tone down the language a bit more and focus on the actually novel parts of this paper, which I believe to be: the task target; finetuning on repair tasks to improve generation performance; and the metric used to filter the specification samples.\\n\\nA more important issue is that I do not think the comparison to the baselines is fair in its current form for the \\\"SAFE+\\\" method.\\nThe authors themselves point out that in this variation (i.e., when you do a round of self-debugging if the initial generation does not succeed), they generate `k * k` repair samples - how can you then compare against pass@1? You have actually drawn `k + k*k` samples from the model, so you should at least compare against a baseline of `pass@(k + k*k)`.\\nThis is an issue that has come up again and again the self-debugging/refinement literature, and I once again encourage the authors to engage with that literature.\\nYou still have good results here - for example, the SAFE+ pass@10 is substantially higher than the SAFE pass@100 on VerusBench - but the way you're currently presenting them overstates their significance.\\n\\nFinally, the writing could use some more proof reading, especially the abstract and the introduction (but this is a minor complaint).\\n\\n\\n[3] \\n@misc{polu2022formalmathematicsstatementcurriculum,\\n title={Formal Mathematics Statement Curriculum Learning}, \\n author={Stanislas Polu and Jesse Michael Han and Kunhao Zheng and Mantas Baksys and Igor Babuschkin and Ilya Sutskever},\\n year={2022},\\n eprint={2202.01344},\\n archivePrefix={arXiv},\\n primaryClass={cs.LG},\\n url={https://arxiv.org/abs/2202.01344}, \\n}\", \"questions\": [\"In table 2 you report a \\\"total\\\" column; I noticed the numbers don't add up if you just take the mean of the other columns, so I presume what you're actually doing is taking the mean over all of the samples in the entire dataset? (I think that's what you want to do, I just want to make sure I understood correctly).\", \"When constructing training data for the repair task, are you doing any filtering to make sure that the \\\"incorrect program\\\" is actually similar to the \\\"correct program\\\", or could they be completely different?\", \"What is the difference between SAFE and expert iteration, other than your synthetic data generation for the specifications?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer K5mr\", \"comment\": \"We sincerely thank the insightful comments from Reviewer **K5mr**. The following is our response, and we have highlighted our revisions in blue in the revised paper:\\n\\n \\n\\n### W1. \\u201cIn Table 1 the Accuracy of GPT-4o on VerusBench@100 is unfortunately missing. Similarly, the result of DeepSeekCoder RAW @100 is missing\\u201d \\n\\nThanks for pointing out this issue. We have added the evaluation results of the Accuracy@100 on two raw models and GPT-4o in Table 1. Naturally, these models are all able to prove more tasks with 100 tries, comparing with only 10 or fewer tries. However, even with 100 tries, they still perform much worse than SAFE models with 10 or fewer tries even without SAFE\\u2019s self-debugging feature. \\n\\n \\n\\n### W2. \\u201cIn Table 2, Round 3 appears to severely degrade performance of the resulting model on the Tutorial dataset\\u201d \\n\\nThe degradation only occurs for Accuracy@1 for the Tutorial dataset. The main reason is that proofs generated by LLMs might be vulnerable to minor issues --- e.g., using an integer type with bit-width (i32) instead of an integer type without bit-width (int) may cause the whole proof to break down. The occurrence of these issues is rather random and typically goes away when more samples are synthesized. For example, if we look at Accuracy@10, Round 3 model\\u2019s accuracy is consistently better than that of Round 2 for every part of VerusBench, as shown in Table 2. \\n\\n \\n\\nOur iterative process should stop when the accuracy improvement becomes marginal. As shown in Table 6, the improvement between Round 2 and Round 3 is not significant in most settings, so we stop our self-evolution at Round 3. \\n\\n \\n\\n### W3. \\u201cThere is no discussion of Limitations\\u201d \\n\\nWe have added a \\u201cDiscussion and Limitation\\u201d section in Section D (in Appendix). Specifically, we explain two limitations of our SAFE approach and potential future work, scaling to real-world software projects and designing a fine-grained self-debugging strategy.\"}", "{\"comment\": \"Thank you for the detailed response and the additional experiments using DeepSeek-Coder-1.3B. I plan to keep my score.\"}", "{\"title\": \"Thank you for the Response\", \"comment\": \"Thank you for providing the requested additional details. I will further read through points raised in other reviews but don't expect to adjust my score.\"}", "{\"comment\": \"Thank the authors for preparing a detailed rebuttal! I have read them but plan to keep my score.\"}" ] }
2NqrA1wYi6
Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation
[ "Egor Cherepanov", "Nikita Kachaev", "Artem Zholus", "Alexey Kovalev", "Aleksandr Panov" ]
The incorporation of memory into agents is essential for numerous tasks within the domain of Reinforcement Learning (RL). In particular, memory is paramount for tasks that require the utilization of past information, adaptation to novel environments, and improved sample efficiency. However, the term ``memory'' encompasses a wide range of concepts, which, coupled with the lack of a unified methodology for validating an agent's memory, leads to erroneous judgments about agents' memory capabilities and prevents objective comparison with other memory-enhanced agents. This paper aims to streamline the concept of memory by providing precise definitions of agent memory types, such as long-term versus short-term memory and declarative versus procedural memory, inspired by cognitive science. Using these definitions, we categorize different classes of agent memory, propose a robust experimental methodology for evaluating the memory capabilities of RL agents, and standardize evaluations. Furthermore, we empirically demonstrate the importance of adhering to the proposed methodology when evaluating different types of agent memory by conducting experiments with different RL agents and what its violation leads to.
[ "memory-based RL", "memory", "pomdp" ]
Reject
https://openreview.net/pdf?id=2NqrA1wYi6
https://openreview.net/forum?id=2NqrA1wYi6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zJqNGEKuv7", "rErAn2MHkX", "ft34PDon76", "aHEnkYhody", "ZsYQNWddy3", "ThPTHO1cnK", "SsU4GDA5dN", "ShFg9DAw0s", "LL4F9xFz8a", "IHO5QHnoMG", "FmcouEW17U", "DLP5Lfn7wP", "A5TQlzxops", "8zzhUrcfg4", "7PNvZ32K2S", "2SRh8MtBKJ", "0Y8PLcBYJB", "0XRBRJ7SZK" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_review", "meta_review" ], "note_created": [ 1732404325138, 1732905483390, 1731288678889, 1732404348357, 1733309846159, 1732559369204, 1732517600579, 1732490586966, 1732401008749, 1732806169521, 1732396905251, 1732791399773, 1730654154607, 1737524283423, 1732521386532, 1732882208762, 1730456349371, 1734725575224 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Reviewer_nevP" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Reviewer_QTL2" ], [ "ICLR.cc/2025/Conference/Submission13810/Reviewer_nevP" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Reviewer_weyF" ], [ "ICLR.cc/2025/Conference/Submission13810/Reviewer_weyF" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13810/Authors" ], [ "ICLR.cc/2025/Conference/Submission13810/Reviewer_QTL2" ], [ "ICLR.cc/2025/Conference/Submission13810/Reviewer_QTL2" ], [ "ICLR.cc/2025/Conference/Submission13810/Area_Chair_zsZy" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer nevP\", \"comment\": \"We appreciate that the reviewer feedback and. Here are our responses.\\n\\n**W1. Focusing on RL**\\n\\nOur work focuses exclusively on RL, using neuroscience concepts as reference points to define memory types. We use the neuroscience framework because the terms \\u201clong-term/short-term, working, episodic memory, etc.\\u201d are already used in RL, but without a unified meaning. Therefore, we redefine them with clear, quantitative meanings to specify the type of agent memory, since the performance of many algorithms depends on their type of memory. We do not claim to have enumerated all types of human memory, since our work is focused exclusively on RL.\\n\\n**W2. RL is capturing declarative memory**\\n\\nDeclarative memory of RL agents involves recalling and reusing arbitrary \\u201cfacts,\\u201d which can be any environmental representation (e.g., a door\\u2019s color) learned during training and do not necessarily have to be verbalized. Our work focuses on Memory DM tasks, where agents use historical data for decisions. While LLMs excel at language tasks, declarative memory in RL often involves signal processing, making its study a distinct challenge.\\n\\n**W3. Declarative and procedural memory in RL**\\n\\nIn RL, \\u201cmemory\\u201d is used similarly in Meta-RL and Memory DM tasks, despite requiring different memory types, leading to comparison and validation issues (of two algorithms with a stated long-term memory one may not solve the same simple problems as the other one). To address this, we define procedural and declarative memory to clarify their roles in specific tasks. \\n\\n**W4. Practical application**\\n\\nOur definitions of declarative and procedural memory in RL are practical, such that they use two numerical metrics: the number of environments n_{envs} and episodes n_{eps}, enabling clear identification of the memory type required for a task. An environment refers to where the agent interacts and receives feedback, while an episode is the sequence from start to terminal state. These concepts are rooted in RL and widely used in existing benchmarks and baselines.\\n\\n**Q1. Memory meant to be a problem or a solution?**\\n\\nMemory can be both a problem and a solution, \\u200b\\u200bwhich is why we titled our paper this way. In POMDPs, it helps agents overcome the challenge of incomplete environmental information by storing and retrieving past interactions. However, implementing an effective memory mechanism is a complex challenge.\\n\\n**Q2. Marr-Poggio levels**\\n\\nYes, we consider that the methodology we propose for testing long/short-term declarative memory can be described using Marr-Poggio levels of analysis. However, this requires disclosure of par. 4 \\u201cAnalyze the results,\\u201d in Algorithm 1, to formalize the system\\u2019s outputs. In the current version, we leave this point to the independent interpretation of researchers.\\n\\n- Computational\", \"goal\": \"Test long/short-term declarative memory in Memory DM tasks\", \"input\": \"Memory model, Oracle agent acting random at memory event recall, memory-intensive environment (Theorem 1).\", \"language\": \"Standard POMDP/MDP formalism, along with memory definitions from our paper.\", \"output\": \"1 if memory model has tested type of memory, and 0 otherwise\\n\\n- Algorithmic\\n\\nUse Algorithm 1 to test long/short-term memory.\\n\\n- Implementation\\n\\nFocus on implementing memory-intensive environments and memory mechanisms, though the methodology is not tied to specific implementations.\\n\\n**Q3. Replay buffer**\", \"in_appendix_we_give_examples_of_different_memory_mechanisms_as_they_are_understood_in_various_works_in_the_rl_domain\": \"\\u201c'In RL, memory has several meanings, each of which is related to a specific class of different tasks\\u201d.\\n\\nWe added replay buffers to the text due to the fact that well-known works [1, 2] treat it as a memory mechanism.\\n\\nQ4. \\u201cExperiment did not follow methodology\\u201d.\\n\\nThis experiment shows how naive validation of an agent with memory in a memory-intensive environment can lead to incorrect conclusions about its memory type. We derive three configurations of context length K and correlation horizons $\\\\xi$ from Theorem 1 to evaluate: 1) long-term memory, 2) both long- and short-term memory, and 3) long-term memory only.\\n\\nOur results demonstrate that the same agent trained in the same environment produces different results based on the K and $\\\\xi$ configuration. When using our proposed Algorithm 1, the agent learns 0.53 \\u00b1 0.04, indicating its inability to solve the long-term problem. However, with the default configuration, it achieves 0.95 \\u00b1 0.02, which might suggest long-term memory, but since both long- and short-term memory were tested, we cannot definitively claim it has long-term memory.\\n\\n**Q5. Time interval RL**\\n\\nIn accordance with the definitions we introduced, in article [3], a feedforward agent employs a long-term declarative memory based on autostigmergy [3] to solve the task of reproducing time intervals.\"}", "{\"comment\": \"We appreciate your valuable feedback and the contribution to improving the paper.\\n\\nThe terminology introduced in our work aims to allow us to explicitly distinguish between types of agent memory. Through this separation, we distinguish declarative memory, which we focus on in the following, and for which we propose a validation methodology for STM and LTM.\\n\\nWe do not consider the Meta-RL framework because we initially started working on memory formalization because the tasks we defined as Memory DM were mixed with Meta-RL, being fundamentally different in nature. By abstracting away from Meta-RL, we were able to provide a methodology for evaluating STM and LTM in the Memory DM framework clearly.\\n\\nMoreover, as noted in Table 1, Meta-RL with POMDP inner-loop tasks and Meta-RL with MDP inner-loop tasks are also fundamentally different tasks that should be evaluated in a different way, and this evaluation is out-of-scope of our work, as we are solely interested in declarative memory.\\n\\nWe hope that our answer was able to clarify your concerns and will have a positive impact on your evaluation of our work.\"}", "{\"summary\": \"This is a sort of conceptual paper, it's main concern is to taxonomize the concept of memory in reinforcement learning. Given the taxonomy, it aims to demonstrate why paying attention to the categories it suggests is important for interpreting the results of experiments on RL agents involving memory.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I'm generally favorably disposed toward conceptual papers like this. And I do agree with the authors that their target, memory in RL, is a worthy target for such an effort.\\n\\nThe main distinction is between declarative and procedural first, and then short-term versus long-term second. The latter is defined with respect to a context length parameter. I do think it's a good idea to highlight somehow the difference between associations within the context window and outside of it. This is a very relevant difference with many algorithmic implications.\", \"weaknesses\": [\"This is trying to be a conceptual paper aimed squarely in the intersection between AI and cognitive /neuro science. However, judged in that way, I don\\u2019t think it really makes the grade. The problem is that it doesn\\u2019t really connect clearly into the conversation on the cognitive / neuro science side. There are very few references to these disciplines for one, or less than I would expect any way. And critical references for multiple memory systems are missing (there are so many, but I like some of Squire's old papers on the topic). And there is basically no context in the paper connecting the work to the ways that researchers in these other fields have thought about memory. For a paper like this which purports to offer a formalization of what is meant by \\u2018memory\\u2019, it's clearly important to relate the new formalization to old ones and discuss how they are similar and different, and to try to sustain an argument for why the present one is an advance on the old.\", \"I\\u2019m not buying the claim that RL is capturing declarative memory. I would tend to say that a defining feature of declarative memory as opposed to procedural memory is the declarative memory\\u2019s arbitrariness. The classic prototype example of a declarative memory is a person\\u2019s name. And most definitions you find look something like \\\"declarative memory is defined as the type of memory that involves consciously recalling facts and events\\\". It\\u2019s all very language-like. But RL memories aren\\u2019t usually like that. In some cases they may be, but it\\u2019s not too common memory in RL outside of language data. I would probably have been more forgiving of this claim to capture declarative memory with RL had come a few years ago, but now that we have LLMs, what\\u2019s the point in trying to get all bent out of shape to capture declarative memory in RL? The prototype of declarative memory is arbitrary information conveyed by language e.g. \\u201cMy teacher\\u2019s name is Bob\\u201d (episodic flavor) or \\u201cParis is the capital of France\\u201d (semantic flavor). So it seems very reasonable to expect a model of declarative memory to use the kinds of AI systems that work for that kind of data, now that they exist and are so widespread. And of course there are plenty of ways to combine RL and LLMs. I understand though that that would take this too far afield for the present work. And this isn\\u2019t really a computational neuroscience modeling paper. So this isn\\u2019t really a weakness of the paper here. No need to reply to this bullet point since I don\\u2019t think it really matters for your paper. But I\\u2019m just leaving it here as a way to convey a bit more my mindset with regard to this paper. At very least, the arbitrariness of the associations seems like a critical part of declarative memory.\", \"Right after positing a difference between declarative and procedural memory in terms of the algorithms that implement them, in the very next paragraph it then acts like this distinction is established and ready to support further claims when it says \\u201cmany studies fail to differentiate between agents with declarative and procedural memory\\u201d. But that\\u2019s not a strong argument given that it just followed right after defining these terms. Why should other papers have tried to probe things according to the arbitrary categories you just defined? Especially since, I suspect many researchers would not necessarily agree with the classification. At any rate, the paper merely asserts that one set of tasks are declarative and another are procedural, but it offers no evidence that this distinction corresponds to what others mean by those terms.\", \"Definition 3 says one should call RL problems involving a \\u201csingle environment\\u201d problems of declarative memory and RL problems featuring multiple environments problems of procedural memory. This definition would be impossible to apply in practice. It would appear to suggest that all memory is declarative since one can always compose \\u201cmultiple environments\\u201d together into a single meta-environment. The difference between one environment or many environments is not in the task itself, it\\u2019s just a purely formal aspect of the modeling language. One generally is not supposed to predicate a general definition on such a purely formal property since it would make your classifications float around following specific and contingent task parameterization properties. Note: all the same comments also apply to the episode concept too.\"], \"questions\": \"1. In this paper, is memory meant to be a problem or a solution?\\n\\n2. Is there some kind of Marr-Poggio levels of analysis story that could be used to clarify the overall structure of the argument here?\\n\\n3. Doesn't this taxonomy seem to bundle together too many things that really are not similar? Replay buffers used just for training and dynamically accessed external memories, used at test time, are quite different algorithmically, and used in quite different ways. Why does this classification scheme seem to drop these in the same bucket? (There is text in the appendix suggesting this). Also, I don't see why it's even true according to the definition in the main text. I would think that definition would separate these approaches. And that would. be kind of the whole point of separating declarative and procedural memory. Why doesn't it separate them in the appendix anymore?\\n\\n4. The paper includes the following sentence in the results section \\u201cThis ambiguity arose because the first experiment did not follow our proposed methodology\\u201d, well this certainly doesn\\u2019t inspire any confidence. Why talk about an experiment that doesn\\u2019t fit the proposed methodology? It's likely I've misunderstood this paragraph. I find it very hard to follow this part.\\n\\n5. How would you think about an RL model that remembers and reproduces a time interval? E.g. [Deverett, B., et al. (2019). Interval timing in deep reinforcement learning agents. NeurIPS.] That paper showed that purely feedforward agents can sometimes solve what appear to be memory tasks. Does that matter? How would your definitions classify the feedforward agent in that paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"[1] Mnih, V. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.\\n\\n[2] Schaul, T. (2015). Prioritized Experience Replay. arXiv preprint arXiv:1511.05952. \\n\\n[3] Deverett B. (2019). Interval timing in deep reinforcement learning agents //NeurIPS 2019\"}", "{\"title\": \"General Response\", \"comment\": \"We are very grateful to the reviewers for their detailed and valuable feedback on our work. We are especially grateful that they highlighted key strengths, including our conceptual approach (Reviewer nevP), the emphasis on distinguishing agent memory types in RL (Reviewers nevP, weyF, and QTL2), and the rigorous formalism of our methodology (Reviewer QTL2).\\n\\nOur work is not aimed at the neurosciences and the precise treatment of memory in them. Instead, we use memory concepts from the neurosciences with respect to entities in the Memory RL domain that have a similar meaning. We use just such definitions from the neurosciences because they are already well-established in the Memory RL domain but are still interpreted differently in different works. The lack of common meanings for the same definitions leads to the possibility that agents with memory may be compared incorrectly, or memory mechanisms may be validated or used incorrectly.\\n\\nOur proposed terminology allows us to correct this ambiguity and, based on quantitative characteristics, accurately characterize the type of memory needed for a particular task. Additionally, our proposed algorithm for validating agent memory in the Memory DM framework allows us to explicitly separate and evaluate both short-term memory and long-term memory.\\n\\nWe believe that we have answered all of the reviewers' questions and comments and hope that this will have a positive impact on our final evaluation.\"}", "{\"title\": \"RL is not a particularistic framework, it's our target area\", \"comment\": \"Thank you for your reply!\\n\\nYou think that **we should not redefine terms from neuroscience to computer science**, and we agree with you, because **that is what our paper is about**. In RL, which is the focus of our work, the terms used in our paper are already well-established, so we are not the first to use them. We share your point of view regarding the divergence of meanings in neuroscience and RL, and in order to prevent these divergences from continuing to confuse the understanding of what memory is in RL, we propose to anchor certain meanings in a specific way that researchers in RL talk about in their papers.\\n\\nWe have **highlighted the main concepts** that are understood by memory in RL and r**elated them to similar concepts from neuroscience** that have already entered the field of RL. Thus, using our definitions, it is not necessary to introduce new entities.\\n\\nWe do not claim to have complete definitions for all sciences, as we **consider RL exclusively**. We are confident that our proposed definitions bring clarity to the field of Memory RL and will further contribute to an even more active development of this field.\\n\\nWe hope to continue the discussion on this topic, as we find your comments very valuable for our work.\"}", "{\"comment\": \"I appreciate your thoughtful response to the weaknesses I raised and your flexibility in updating the paper based on my review. However, about Q2, I am still confused regarding your reply. You mentioned that \\\"Declarative memory and procedural memory are indeed defined in the article as conceptually distinct concepts\\\" and indicated an update to $n_{\\\\text{envs}} \\\\times n_{\\\\text{eps}} > 1$. Yet, in the paper, I found Procedural Memory defined as $n_{\\\\text{envs}} \\\\times n_{\\\\text{eps}} \\\\geq 1$.\\n\\nRegarding the advantages, I agree with the points raised in your reply, the need to formalise distinctions between memory types to address inconsistencies in RL research and ensure standardised testing for memory mechanisms. These efforts provide a valuable conceptual framework and contribute significantly to advancing the field, however, while I recognise these strengths, the innovation and motivation as presented in the paper still fall short of convincing me to raise my scores.\"}", "{\"comment\": \"Thank you for your detailed response. However I am still not convinced. The response doubles down on some of the points I disagree with in the paper. In particular, I don't think it's helpful for the community to have computer science papers redefining neuroscience terms in ways that cause them to diverge so much from the way they are used in neuroscience. And especially to do so in a way that responds only to one particularistic framework (reinforcement learning in this case), and to do so with little reference to the literature that created the terminology, the reasons they did so, and ignoring the characteristic phenomena usually considered to fit in each category.\\n\\nI'll leave my score as it is.\"}", "{\"title\": \"Response to Reviewer QTL2\", \"comment\": \"Thank you for highlighting the strengths of our work. We respond to the comments below.\\n\\n**W1. Simple environments**\\n\\nOn one hand, the environments may seem relatively simple; however, they enable rapid and targeted exploration of various aspects of memory and hypothesis testing. In addition, the T-Maze environment is a common standard memory test in RL [1, 2, 3].\\n\\nThe goal of these environments is to facilitate testing specific hypotheses related to memory without requiring the RL agent to solve additional tasks that involve learning unrelated skills or abilities.These examples are enough to demonstrate the core ideas of the paper.\\n\\n**W2, Q1. Procedural memory**\\n\\nDefinitely, testing procedural memory is an important and valuable task; however, it lies beyond the scope of our work. We provide a definition of procedural memory along with examples to distinguish it from declarative memory. In this study, we focus primarily on declarative memory and propose a method for testing it.\\n\\n**W3. Visual aids or examples**\\n\\nWe have aimed to support all key concepts introduced in the article with examples and visualizations. For instance, the illustrations in Fig. 1, Fig. 2, and Fig. 3 visualize and help clarify the definitions of declarative and procedural memory, long-term and short-term memory, and the general classification of memory types.\\n\\nWe are confident that, in addition to formal definitions, these visualizations will assist readers in better understanding the concepts and terms introduced in the paper, making the work more accessible to a wider audience.\\n\\n**Q2: Declarative vs. procedural memory**\\n\\nDeclarative memory and procedural memory are indeed defined in the article as conceptually distinct concepts. Declarative memory is described as the use of knowledge within a single episode and a single environment, whereas procedural memory encompasses skill transfer across multiple episodes or environments. We agree that the current wording may lead to misinterpretation, and therefore, we have adjusted the definition to make it more precise:\\n\\n\\n- Declarative Memory $\\\\Leftrightarrow (n_{envs}\\\\times n_{eps}=1)$\\n\\n- Procedural Memory $\\\\Leftrightarrow (n_{envs}\\\\times n_{eps}>1)$\\n\\n**Q3, Q4**\\n\\n**Advantages of cognitive science definitions**\\n\\nDefinitions from cognitive science, such as short-term and long-term memory, as well as declarative and procedural memory, are already well-established in the RL community, but do not have common meanings and are interpreted in different ways. We strictly formalize these definitions to avoid possible confusion that may arise when introducing new concepts.\\n\\n**Memory classification**\\n\\nThe motivation for introducing a classification of different types of memory with respect to temporal dependencies and types of memorized information is motivated by practical goals. In the course of our research on memory in RL and a review of existing work in this area, we concluded that modern challenges and tasks in RL require such a classification to ensure proper memory testing in RL agents. \\n\\nFor instance, some interpret memory as employing transformers with extensive context windows, others as utilizing recurrent networks, and still others as a model\\u2019s ability to transfer skills across tasks. However, these approaches often differ fundamentally in design, making direct comparisons under identical conditions potentially invalid, or testing conditions suitable for one agent may not align with another\\u2019s memory mechanism.\\n\\n**How does this framework compare to existing memory evaluation approaches in RL?**\\n\\nCurrently, in RL memory mechanisms are tested by running agents in memory-intensive environments and evaluating metrics without considering environment or agent temporal configurations. Section 6.1 shows the issues with this approach.\\n\\nThis experiment shows how naive validation of an agent with memory in a memory-intensive environment can lead to incorrect conclusions about its memory type. We derive three configurations of context length K and correlation horizons $\\\\xi$ from Theorem 1 to evaluate: 1) long-term memory, 2) both long- and short-term memory, and 3) long-term memory only.\\n\\nOur results demonstrate that the same agent trained in the same environment produces different results based on the K and $\\\\xi$ configuration. When using our proposed Algorithm 1, the agent learns 0.53 \\u00b1 0.04, indicating its inability to solve the long-term problem. However, with the default configuration, it achieves 0.95 \\u00b1 0.02, which might suggest long-term memory, but since both long- and short-term memory were tested, we cannot definitively claim it has long-term memory.\\n\\n[1] Esslinger, K. et al.. Deep transformer q-networks for partially observable reinforcement learning. arXiv:2206.01078.\\n\\n[2] Grigsby, J. et al. Amago: Scalable in-context reinforcement learning for adaptive agents.ICLR 2024\\n\\n[3] Pramanik, S. et al. AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning.TMLR 2024\"}", "{\"comment\": \"Thank you for your response! We appreciate that you have clarified your position on our work.\\n\\nIn our reply we have tried to answer all your concerns and we hope that we have been able to clarify the positioning of our work. Please, tell us, do you have any other comments/suggestions about our work? We would be very happy to continue the discussion.\"}", "{\"title\": \"Response to Reviewer weyF\", \"comment\": \"We appreciate the reviewer's comments and are glad to hear that you share our vision of the need to formalize concepts related to memory in RL. We respond to the reviewer\\u2019s comments below.\\n\\n**Broad overview of use of the memory term**\\n\\nOur work is not a review paper. Our main goal is to formalize the basic concepts of memory in RL so that new memory-enhanced agents in RL can be compared and validated correctly. We take the definitions of memory from neuroscience as the basis of our formalism, since they have been used in RL for a long time, albeit in different senses, as we write about in Section 3 - Related Works. In turn, in Appendix B - Memory Mechanisms, we give an extensive description of what is basically meant by memory in RL. \\n\\n**Abuse of the term \\u201cmemory\\u201d**\\n\\nWe cannot talk about the abuse of the term \\u201cmemory\\u201d precisely because everyone understands memory differently. For example, someone understands memory as the use of a transformer with a very large context window, someone understands it as the use of recurrent networks, and someone understands it in general as the ability of a model to transfer skills from one task to another. At the same time, these algorithms may have fundamental differences in their design and comparing them under the same conditions may not be correct, or the conditions for testing memory mechanisms for one agent may not be appropriate for another.\\n\\nThat is why we offer our definitions of different types of memory that allow us to fully describe the basic concepts in RL that are commonly understood by memory.\\n\\n**Purely review article**\\n\\nWriting a purely review article would be interesting to us as well, and we'll get into that next. However, for the moment, our goal is to propose practical ways to separate agent memory types in RL, as well as their validation in the Memory DM framework.\\n\\n**POMDPs**\\n\\nThe definitions we propose are based on the POMDPs formalism in both the Memory DM and Meta-RL contexts, which is why we have placed this section at the beginning.\\n\\n**Related Works**\\n\\nIn the Related Works section, since this is not a review paper but a practical one, we show that RL actively uses memory types from neuroscience, but with different meanings. Rather than providing a comprehensive review of all works on memory mechanisms in RL, we focus on our proposed contributions. Consequently, we do not conduct a detailed classification of existing memory mechanisms; instead, we provide an overview of them in the Appendix.\\n\\n**Cognitive science and RL**\\n\\nWe base our definitions of memory on neuroscience, as concepts from neuroscience have long been used in Memory RL, as we discuss in the Related Works Section. A deeper look into cognitive science aspects of memory is out-of-scope for our work, as we focus specifically on the practical application of our taxonomy and memory validation algorithm.\\n\\nThank you for your interest in our work and your valuable feedback. We appreciate your proposal to serve as a reviewer, which encourages us to deepen our research. We look forward to discussing your ideas and suggestions to improve the paper and hope for further collaboration to enhance its quality.\"}", "{\"comment\": \"Thank you for your reply. I don't disagree with you about the fact that you didn't write a review paper but instead attempt to formalize the concept of memory within RL. However, to get the community behind you and agree with your formalization, I think a review might be required to place your suggestion in context. I might be wrong about this, but suggested this to enhance your chances of impact.\\n\\nI am sorry about using the expression \\\"use and abuse\\\". I didn't mean RL people are using it incorrectly, but I wanted to emphasize the difference in how people use the term and think about it, just like you stated in your answer.\"}", "{\"summary\": \"The paper attempts to create clarity in the use of the term \\\"memory\\\" in a reinforcement learning context. As well as suggesting definitions for different kinds of memory and different memory related tasks, the authors present a more rigorous way for testing memory capabilities of reinforcement learning techniques and show possible pitfalls of violating the proposed methodology.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I really like the paper and the topic it presents. I think it is good to have a more clear definition of what exactly is meant by memory and what its contribution can be in reinforcement learning. I like the approach the authors came up with and the clarity with which they presented it. I think there is a need for a paper like this, and I like how the authors looked at the current state of affairs in reinforcement learning research and its treatment of a new and important branch in the field that deals with memory in a bunch of different contexts.\", \"weaknesses\": \"However, the topic seems difficult to deal with in a conference paper. When reading the introduction and the goal of the paper as set out, I was expecting a more broad overview of current use of the memory term and the different ways it is used and abused in reinforcement learning literature and research. I think the topic is very interesting, but a paper doing a deep dive into a topic such as this has to build a clear foundation for its contributions by taking the body of existing work into account (To be clear, I am not suggesting that the authors don't do this.) and illustrating this by giving a broad overview of said existing work in the paper.\\n\\nIn my view, this contribution wants to be presented in a review paper, with an overview of recent existing work laying a strong foundation for the contributions made by the authors, namely, bringing clarity to the current mismatch in use of the term \\\"memory\\\" in the field. \\n\\nCurrently, the paper includes a very brief section on POMDPs, which are important, but don't represent all ways in which the term memory is used. However, since this is section 2, I think this is a bit misleading, as it seem to set the context in full. The related works section is very brief, and much related work is relegated to the appendix, where most of it is only referenced, but not placed in context of the suggested structure and definitions. Section 4 lays some foundation from cognitive science and RL, and talks about the credit assignment problem in relation to memory handling, but it feels rushed and the role or importance it plays isn't obvious. All of this should be given more room to be elaborated on. I understand that this is impossible in a conference paper however, but now it feels rushed, and I don't feel the work will get the attention it deserves or reach the audience that it should.\", \"questions\": \"This is where double blind reviewing sucks, as I would strongly recommend that the authors produce the review paper, with additional contribution this paper could be, and I would happily serve as a reviewer for said paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your reply!\\n\\nRegarding Q2, - the updated definition from our response here is actual, i.e. with a \\u201c>\\u201d sign. We thank you for helping us to correct this typo in the text of the paper.\\n\\nWe would be very grateful if you could tell us what exactly you are questioning about our work after the response we have provided. It is very important for us to get this feedback in order to clearly communicate our ideas to the community.\"}", "{\"comment\": \"Thank you for addressing the typo in the paper. Related to my remaining concerns about novelty primarily related to the scope of the work. While procedural memory is intentionally left outside the experimental scope, its absence from evaluation feels like a missed opportunity especially that the flow of the paper initially introduces Memory DM and Meta-RL, but the testing methodology focuses solely on Memory DM, leaving the discussion incomplete.\"}", "{\"summary\": \"The paper introduces an approach inspired by human cognitive abilities to formalise memory types in reinforcement learning (RL), providing precise definitions for short-term memory (STM) and long-term memory (LTM). STM is defined as the agent's reliance on recent interactions, while LTM involves recalling information over longer time intervals outside of the immediate context. The authors differentiate between Meta-Reinforcement Learning (Meta-RL), which focuses on cross-task skill transfer (procedural memory), and Memory Decision-Making (Memory DM), where agents use historical data within a single environment (declarative memory).\\n\\nIn the Memory DM setting, the authors develop a rigorous evaluation methodology to assess memory capabilities in RL agents. This approach is validated in memory-intensive environments, such as the Passive T-Maze and Minigrid-Memory, by varying critical parameters\\u2014context length (the memory span an agent can handle) and correlation horizon (the temporal dependency between events). By varying key parameters, the experiments demonstrate the Memory DM framework\\u2019s ability to reliably assess STM and LTM in RL agents.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper provides neuroscience-based definitions of memory types, clarifying RL memory research, which enables more accurate agent comparisons and tailored evaluation methods for each type\\u200b. The cognitive science-inspired approach has interdisciplinary appeal, likely to attract interest from both RL and cognitive science researchers, fostering potential collaboration and cross-disciplinary insights.\", \"The paper\\u2019s methodology is grounded in theoretical rigour, offering a scientifically robust framework that enhances the validity and reliability of memory evaluation in RL studies.\", \"It introduces a standardised methodology for assessing memory capabilities, promoting reproducibility and consistency across RL studies by providing clear criteria for experimental setups.\"], \"weaknesses\": [\"The framework has been validated in simple environments, which may not capture the challenges of more sophisticated settings or real-world scenarios, potentially limiting its practical applicability.\", \"The paper discusses procedural memory as part of its classification scheme but does not provide or suggest an evaluation methodology related to it, focusing solely on declarative memory. This results in an incomplete validation and leaves open questions about the classification\\u2019s practical application to skill-transfer scenarios.\", \"The methodology section is dense and complex, additional visual aids or examples could clarify the experimental design and enhance comprehension for a broader audience.\"], \"questions\": [\"Could the framework be extended to evaluate procedural memory in Meta-RL settings? Are there specific experiments that could be added to address skill transfer across tasks?\", \"In my interpretation, declarative and procedural memory are intended as distinct concepts; however, the definitions in Equation 2 of Definition 3 imply that declarative memory could be included within procedural memory due to the \\u201cor\\u201d condition and \\u201c\\u2265\\u201d allow for overlap. Could the authors clarify whether declarative memory is meant to be a subset of procedural memory or fully distinct? How does this impact the proposed distinction between Memory DM and Meta-RL in the framework?\", \"How does this framework compare to existing memory evaluation approaches in RL? What are the specific advantages of using cognitive science-inspired definitions over more traditional RL memory metrics?\", \"What motivates the specific classification of memory types (declarative vs. procedural, STM vs. LTM), and how does it improve memory assessment in RL over a general approach?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This was a conceptual paper which proposed an RL-based way of thinking about memory with some arguable connection to neuroscience, unfortunately, reviewers did not find the correspondence to neuroscience convincing or adequately referenced. There were also issues with experimental evaluations.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers engaged in the discussion but did not want to increase their ratings.\"}" ] }
2MqyCIxLSi
TopoTune: A Framework for Generalized Combinatorial Complex Neural Networks
[ "Mathilde Papillon", "Guillermo Bernardez", "Claudio Battiloro", "Nina Miolane" ]
Graph Neural Networks (GNNs) excel in learning from relational datasets, processing node and edge features in a way that preserves the symmetries of the graph domain. However, many complex systems---such as biological or social networks---involve multiway complex interactions that are more naturally represented by higher-order topological domains. The emerging field of Topological Deep Learning (TDL) aims to accommodate and leverage these higher-order structures. Combinatorial Complex Neural Networks (CCNNs), fairly general TDL models, have been shown to be more expressive and better performing than GNNs. However, differently from the graph deep learning ecosystem, TDL lacks a principled and standardized framework for easily defining new architectures, restricting its accessibility and applicability. To address this issue, we introduce Generalized CCNNs (GCCNs), a novel simple yet powerful family of TDL models that can be used to systematically transform any (graph) neural network into its TDL counterpart. We prove that GCCNs generalize and subsume CCNNs, while extensive experiments on a diverse class of GCCNs show that these architectures consistently match or outperform CCNNs, often with less model complexity. In an effort to accelerate and democratize TDL, we introduce TopoTune, a lightweight software for defining, building, and training GCCNs with unprecedented flexibility and ease.
[ "Topological Deep Learning", "Graph Neural Network", "Graph Expansion", "Combinatorial Complex", "Cellular Complex" ]
Reject
https://openreview.net/pdf?id=2MqyCIxLSi
https://openreview.net/forum?id=2MqyCIxLSi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcJqZ6DiEc", "yWqr71Q5Sh", "yEPYKSD0CL", "wtvlQgY9q9", "wdftqQyhxd", "tt4PD3niks", "snTrNK3z6c", "rWbrwyGG73", "rNp6jbP7JV", "qq3VwzYGve", "l9DHWyyYg6", "l1wAXTuuOM", "k3elkmCkZQ", "iemdYiOZQ5", "hZMdp2yJoH", "fd3b47BQmq", "fVJAnQXD19", "f50efVceeg", "bjkbNgJZjN", "YF1xHUaEZ6", "XadXgvauxT", "XY4my8C38A", "X5DatvB27e", "WnenF4qo4O", "W9nGzQIF9g", "Vraq7VpK8I", "Utm3ChX9Od", "TceQdFet6w", "SCLaHzA6Ge", "R2oSgGUhWx", "Oi0ldIUfAv", "LqJ6Sh6wo0", "LZV8TD9rnr", "LEvBx30jmy", "IyhGXNGVxY", "HdZSLLQ3Yg", "HYnPbOYN3a", "G36z2Lvzph", "FFGxyHXI3j", "EaewmPtjHe", "EYxw2Bj7MZ", "CK8CytQV9J", "AiEIrYIRWQ", "90EBQjZzjs", "8XUGalxB7i", "7EtcuSgdCb", "77T2WgIlEZ", "6BDumaNh4s", "60OmTY2StX", "5gQKUH2fVZ", "4spP9crdsm", "1jznqn6e2n", "1YgJL8YOLS" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732065069882, 1732639241774, 1732896358588, 1731005597215, 1733168351641, 1733168926535, 1734731843226, 1733047260077, 1730203396799, 1730583605376, 1732362709598, 1732300061398, 1737524140795, 1733168475032, 1732657683982, 1733047385150, 1732470603788, 1732563141292, 1732639101629, 1732064892125, 1732065370703, 1733169736373, 1733168081979, 1732470810233, 1732362823445, 1732601935986, 1732470504556, 1733246636916, 1733248677770, 1732065770680, 1732470298009, 1732064820024, 1732065532789, 1732301526440, 1733168688690, 1732896115364, 1732533146356, 1732638924447, 1732304055186, 1733048220165, 1733221062126, 1732619006639, 1732562780145, 1732383624941, 1732065667296, 1732897034132, 1733171165643, 1732066039136, 1733167734952, 1732638064554, 1732065161560, 1730663112460, 1732300639555 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_TxXe" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Area_Chair_tDzK" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_kr8N" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_kr8N" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_iHAm" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_kr8N" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_kr8N" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_iHAm" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_kr8N" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_iHAm" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_rJiq" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_kr8N" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_kr8N" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_rJiq" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_iHAm" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ], [ "ICLR.cc/2025/Conference/Submission11707/Reviewer_rJiq" ], [ "ICLR.cc/2025/Conference/Submission11707/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to reviewer TxXe\", \"comment\": \"Thank you for taking the time to provide such valuable feedback, we truly appreciate it.\", \"let_us_address_each_of_the_points_you_raised_in_weaknesses\": \"**W1 Test larger datasets on node-level tasks.** We now provide additional experiments on 4 larger node-level benchmark datasets (Amazon Ratings, Roman Empire, Minesweeper, Questions) that our machine can support memory-wise \\u2013please see results in Table 8 (Appendix I), as well as below. Specifically, all TDL models are constrained by the currently available topological liftings, as large graph-based datasets significantly increase in size in the process. (Note: developing better, more lightweight liftings is a current field of research, see https://arxiv.org/pdf/2305.16174). As an example, Tolokers dataset (11,758 nodes, 519,000 edges) raises OOM issues when storing either cliques (simplicial) or cycles (cell) due to its extremely dense connectivity, as was reported in Table 1 of (https://arxiv.org/pdf/2406.06642). We have added Questions dataset (48,921 nodes, 153,540 edges), not evaluated in TopoBenchmark, to show that our GCCNs are applicable as long as the lifting procedures are feasible. Regarding the results on the other three datasets, we observe that GCCNs achieve similar performance than regular CCNNs, outperforming them by a significant margin on Minesweeper.\\n\\n| | Amazon Ratings | Roman Empire | Minesweeper | Questions |\\n|-----------------------|----------------|--------------|--------------|--------------|\\n| Best GCCN Cell | 50.17 \\u00b1 0.71 | 84.48 \\u00b1 0.29 | 94.02 \\u00b1 0.28 | 78.04 \\u00b1 1.34 |\\n| Best CCNN Cell | 51.90 \\u00b1 0.15 | 82.14 \\u00b1 0.00 | 89.42 \\u00b1 0.00 | - |\\n| Best GCCN Simplicial | 50.53 \\u00b1 0.64 | 88.24 \\u00b1 0.51 | 94.06 \\u00b1 0.32 | 77.43 \\u00b1 1.33 |\\n| Best CCNN Simplicial | OOM | 89.15 \\u00b1 0.32 | 90.32 \\u00b1 0.11 | - |\\n| Best Hypergraph Model | 50.50 \\u00b1 0.27 | 81.01 \\u00b1 0.24 | 84.52 \\u00b1 0.05 | - |\\n\\n\\n\\n**W2 Give time complexity and training times.** We refer to response 1B in the main reply to all reviewers above.\\n\\n\\n**W3 Analyze performance vs size.** (2A in the main reply). We have made sure to better clarify this contribution in Section 6.2 by separating into two renamed subsections: \\u201cLottery Ticket GCCNs\\u201d and \\u201cImpactfulness of GNN choice is dataset specific.\\u201d There is indeed large variance in performance (and size) between choices of GNN. Such a variation in performance between GNNs is to be expected, as some message-passing functions are better suited to certain tasks/datasets, as has been studied in the GNN field through extensive benchmarking (see for example https://arxiv.org/abs/2003.00982). In the context of Topological Deep Learning, we are interested in understanding how these differences in performance couple with choice of neighborhood, and how we can optimize these hyperparameters for maximal performance at minimal cost. We also kindly refer to Figure 7 which includes performance versus size results on all tested datasets.\\n\\nFigure 5 specifically aims to show \\u201clottery-ticket\\u201d models with high performance and low parameter cost (ie, size). This figure only considers models performing within 10% of the best model. In the case of ZINC, both the best performing model and the within-10% models use GraphSAGE. On the other hand, PROTEINS and Citeseer achieve within-10% performance with models using GAT, GCN, and GraphSAGE. This indicates that for some choices of dataset/task, the choice of message-passing function is much more consequential than in other cases. Both sets of results in PROTEINS and Citeseer show that cutting down on parameter complexity can be as simple as choosing a lighter message-passing function (such as GAT or GCN) while keeping choice of neighborhood constant. For example, both in the PROTEINS and Citeseer cases, choosing GAT over GIN or GraphSAGE leads to equivalent performance for a given choice of neighborhood structure (purple). \\n\\nIf you have additional ideas on how to deepen this analysis, we would be happy to hear them during the rest of the discussion period.\\n\\n**Questions** \\nFor Q1, we refer to point 1 above. For Q2 we refer to point 2 above. For Q3 we refer to point 3 above.\"}", "{\"title\": \"Follow-up on Reviewer Feedback (#3)\", \"comment\": \"We are hoping to hear back from the Reviewer before the deadline for making edits to the manuscript on Wednesday. We would love to know if our previous response has addressed all concerns before this deadline. Otherwise, if the Reviewer is satisfied, we kindly ask they reconsider the rating of the paper. Thank you!\"}", "{\"title\": \"Follow up for Feedback (#3)\", \"comment\": \"Hello, happy Friday! We are following up once more to ask if our previous response addressed remaining questions about hyperparameters and ZINC. We would love to answer any additional questions by the deadline. As we said in our previous follow-up, we believe your feedback has significantly helped improve the manuscript. As such, in the event you do not have any questions left, we would really appreciate an updated rating that reflects these improvements.\"}", "{\"summary\": \"The paper focuses on the topological deep learning (TDL) models in particular CCNNs and proposes a new powerful graph-based methodology for new TDL architectures, named GCCNs. The paper proves that GCCNs generalize and subsume CCNNs. The paper conducts extensive experiments and shows that the GCCN architectures achieve comparable performance with CCNNs. An efficient toolkit, TopoTune, is also introduced to accelerate the development of TDL models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper proposes a new method to generalize any neural network to TDL architectures.\\n2. The proposed GCCNs formally generalize CCNNs and have the same expressiveness as CCNNs. \\n3. A new toolkit, TopoTune, has been developed to make it easy to design and implement GCCNs.\", \"weaknesses\": \"1. For node-level tasks, the paper only considers three very small datasets, which might limit the application of the method.\\n2. The complexity analysis of the method is missing and the paper does not report any training time in the experiment. \\n3. The experiment of \\\"performance versus size\\\" is not well analyzed especially for the graph-level datasets (i.e., PROTEINS, ZINC).\", \"questions\": \"1. Could the authors use larger node-level datasets for experiments?\\n2. What is the time complexity of the proposed GCCNs compared with CCNNs?\\n3. The GNN models perform very different results in Figure 5. More analysis is needed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Third response to Reviewer kr8N (3/4)\", \"comment\": \"**Hyperparameters**\\n\\nAccording to Appendix C.2 of the TopoBenchmark manuscript [1], TopoBenchmark developers performed a wide grid search across multiple training hyperparameters for each model and dataset. They published the best hyperparameters for each combination in their scripts. In the code, they offer a default configuration that automatically determines training hyperparameters if not customized. We used these \\u201cdefault hyperparameters\\u201d, thus avoiding a traditional tuning grid search. In the spirit of increasing clarity as much as possible, line 422 now reads: \\u201cWhile CCNN results reflect extensive hyperparameter tuning by [1], we fix GCCN training hyperparameters using the TopoBenchmark default configuration.\\u201d We argue that a practitioner could reasonably do the same thing and avoid a tuning grid search when introducing new choices of GNN and neighborhood structure.\\n\\n[1] Bernardez et al. \\u201cICML Topological Deep Learning Challenge 2024: Beyond the Graph Domain.\\u201d Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM) at ICML 2024 https://arxiv.org/abs/2409.05211 \\n\\n\\n**updated, stronger expressivity proof with accompanying figure for easy scanning.**\\n\\nWe completely understand this is a lot to review, and we really appreciate the feedback Reviewer kr8N has provided so far. These additional pages (we assume the 6 pages refer to Appendix B3) are in direct response to Reviewer kr8N\\u2019s initial feedback about how the expressivity proof statement could be more interesting. We took the initiative to add the accompanying Fig. 7 to convey the information concisely in one spot. Everything we have added has been in direct response to reviewer feedback.\\n\\n**Open problem 3**\\n\\nWe agree that our work is not about a \\u201cdataset benchmark suite\\u201d. However, we argue that by building our work directly into TopoBenchmark, the only available benchmarking platform (developed after this position paper came out), we make benchmarking an integral goal and capability of our novel TDL research. Up until now, TDL research has considered one architecture at a time and benchmark datasets have been widely heterogeneous in nature and processing/lifting methodology. By building a tool that can efficiently and objectively evaluate many, many novel models against multiple benchmark datasets that are homogeneously preprocessed/lifted, we argue that we do speak to this open problem. \\n\\n\\n**Open problem 6**\\n\\nExisting CCNNs only consider one topological domain at a time, as demonstrated in a literature review of the field [1] which organizes models by topological domain for which they were developed. The right column of Table 1 of this review shows that each CCNN is tested either by comparison to a GNN or by comparison to a CCNN of the same topological domain. To our knowledge, GCCNs are the first models to be tested across many domains at publication time. This was made easy due to the engineering of TopoBenchmark that allows for a practitioner to choose a lifting separately from a model. We hope this will pave the way for future TDL research to continue addressing problem 6: standardized implementations divorced from topological lifting, rather than ad-hoc, one-off implementations that include a singular lifting. \\n\\n[1] Papillon et al. \\u201cArchitectures of Topological Deep Learning: A Survey of Message-Passing Topological Neural Networks.\\u201d 2024. https://arxiv.org/abs/2304.10031 \\n\\n**Open problem 9**\\n\\nWe agree that this open problem focuses on the theoretical advantages of TDL at large. We do not claim to solve this problem, but rather address it through our theoretical contributions which provide general and comparable theoretically grounded inductive biases. By introducing and proving Propositions 1 (generality), 2 (permutation equivariance), and 3 (expressivity) on the combinatorial complex domain--the most general topological domain--, we believe we consolidate TDL advantages in one location and one language. Importantly, since this language is largely based on the GNN theory language (WL tests, permutation equivariance, and so on), we aim for this new, generalized theory of TDL to be as comparable as possible in considering the deep learning landscape. This is the first instance of WL tests (see Appendix B.3) on the combinatorial complex domain.\"}", "{\"title\": \"Follow-up for Feedback (#4)\", \"comment\": \"Hello Reviewer iHAm. We are following up once more to ask if our previous response was satisfactory in answering the two remaining questions about hyperparameters and ZINC. Your feedback has helped inform many improvements, including **runtimes, better contextualization of the contribution, and clarifications to the text.** We kindly ask that if the Reviewer has no remaining questions, they consider updating the rating of the paper. We would really appreciate the acknowledgement.\"}", "{\"metareview\": \"This paper proposes a new framework for topological deep learning that addresses several open problems in a recent position paper.\\nAll reviewers agree this is a very valuable contribution, and the paper is well-structured and well-written. \\nHowever, during the discussion period, one of the reviewers pointed out some technical issues with definitions and derivations. These technical issues were addressed by the authors but required significant changes to the manuscript. Any substantial revision should be resubmitted for a fresh assessment by the reviewers, as it\\u2019s unlikely that most of them had time to review the new material.\\nThis is why, after a conversation with the senior area chair, despite the good scores, I recommend rejecting it so a new set of reviewers can assess the paper in its current form.\", \"additional_comments_on_reviewer_discussion\": \"There is a long discussion between the authors and reviewer kr8N. I believe this discussion helped the authors improve the paper significantly.\\n\\nIn the end, the reviewer concluded by saying that some claims in the paper are somewhat overemphasized, and some arguments and responses during the rebuttal were slightly deceptive.\\n\\nAs I said above, due to the changes in the paper during the rebuttal period and the fact that this reviewer was very engaged in the discussion and wasn't happy with the final result, I would suggest to reject.\"}", "{\"title\": \"Second answer to rebuttal 1/2\", \"comment\": \"Thanking the authors for their efforts, I will reply point by point below, and further share additional thoughts.\\n\\n**rewordings of the expressions \\\"increased flexibility\\\" and \\\"topological symmetry\\\" to better explain the advantages of GCCNs**\\n\\nI note the authors' changes and attempt to clarify. I would just signal I have found other points in the current revision which refer to \\\"topological symmetries\\\".\\n\\nI will now reply to the clarifications provided by the reviewers. If I understand correctly, the authors claimed that specific rank and neighbourhood specific encoders is a novel feature of the proposed approach which allows adhering to symmetries in the topological domains such as permutation invariance, contrary to other approaches based, e.g., on marking.\\n\\nI agree that understanding the relative advantage of specific encoders is intriguing and deserves attention, but I still found their claims and arguments not convincing. Some counter-arguments could be: (i) \\\"marking strategies\\\" do not necessarily invalidate these symmetries; (ii) approaches like Transformers, whose application is more easily supported, something underscored by the authors, would blend together representations from cells irrespective of their connectivity structure and rank, seemingly in contrast with the argument that rank-specific modules and neighbourhood structures are a compelling feature; (iii) previous approaches, although less \\\"flexible\\\" can in any case support rank- and neighbourhood specific components, so this does not constitute an inherent architectural limitation that can only be overcome by the proposed architecture.\", \"let_me_remark\": \"to enquire into the relative gains offered by these architectural patterns is something that could be of interest. However, the authors' claims on the value and novelty of such a methodology are, in my opinion, overemphasised and justified with slightly deceptive arguments.\\n\\n**reasoning behind Transformer architecture**\\n\\nThe authors' points are generally reasonable, but these arguments do not seem particularly coherent with others given in the paper for other contributions, and this generates possible confusion.\\n\\nAlso, I note that to develop a new transformer architecture for TDL is an open research problem, but the authors do not discuss whether simply plugging transformer models as encoders into their architecture is something justified, or why exactly this would go in the direction of solving the open research problem. This is clearly research out of scope for the present work, and I believe should not significantly weigh in either negatively or positively. As my above comment on the open research problem does not obviously play a role in influencing my evaluation, at the same time I believe the authors' claims on the value of including transformers should not be overemphasised.\\n\\n**clarifications and better justifications in the manuscript about democratization claim**\\n\\nThe authors claim researchers \\\"can plug their GNN into TopoTune, which automatically makes it \\u201chigher-order\\u201d and can potentially improve its performance\\\". Again, I find this argument somewhat confusing \\u2013\\u00a0the architecture would use the GNN as an encoder rather than turning into a higher-order method ...? And the ultimate performance of the approach will (also) be determined by the combination of the lifting and the specific choices of the components in the GCCN, other than the specific features of the GNN. What would be the actual conclusion from this a practitioner would gain?\\n\\nSo, beyond these arguments, trying to sum up and clarify: eventually a portion of the proposed contribution is simply to provide a software framework that would facilitate experimenting with different relational encoders. Software which, surely could be useful overall, but that, it is also to be noted, has not been provided in the submission yet, or whose software-engineering features, components and design principles have not been illustrated in their detailed specifics (unless I missed that, in which case I apologise).\", \"regarding_hyperparameters\": \"\\\"Moreover, it can also be checked in their reproducible scripts [3] that the values of these base hyperparameters (learning rate, hidden state dimension, batch size, readout,...) largely vary across datasets and models.\\\" This makes then wonder how they were originally chosen. Do the authors have a pointer for that?\\n\\nIt would be reasonable to acknowledge that (better) results were obtained by additionally tuning new specific architectural components. To contextualise: my question and concern is, once again, on the solidity of the authors' arguments, which, in this case are writing \\\"results with GCCNs were obtained with very minimal \\u201ctraditional\\u201d hyperparameter tuning, keeping parameters like learning rate and embedded dimensions largely fixed across experiments, while the hyperparameters used with CCNNs were the result of an extensive benchmarking study (https://arxiv.org/abs/2406.06642)\\\".\"}", "{\"summary\": \"The authors tackle the challenge of systematically defining new Topological Deep Learning (TDL) architectures and to enlarge the accessibility of the latter to the broader community. The way they approach this endeavour is by (i) proposing a new class of TDL architectures that generalises previously proposed ones, and by (ii) implementing a software module that encapsulates architectural search over this class.\\n\\nAs for (i), the authors build upon the concepts of \\u201cstrictly augmented Hasse Graphs\\u201d and \\u201cPer-rank neighborhoods\\u201d. The former ones are employed to model the structure of a combinatorial complex via an ensemble of augmented Hasse graphs, one for each neighbourhood. The latter ones prescribe defining a specific set of neighbourhoods for each rank. The authors propose GCCN as architectures which process ensembles of strictly augmented Hasse graphs with per-rank neighbourhoods with specific neural models and \\u201csynchronisation\\u201d components.\\n\\nAs for (ii), the module is called TopoTune and is a configuration-oriented component integrated with other TDL frameworks.\\n\\nExperiments are conducted on graph datasets, lifted to either simplicial or cellular complexes. Results show that GCCNs can outperform standard architectures with a smaller number of parameters or lower computational cost.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The submission tackles an interesting research topic in a timely manner.\", \"The implemented TopoTune module can be helpful to practitioners and researchers outside of the specific field of TDL.\"], \"weaknesses\": [\"From the perspective of the framework generality, it is not clear how GCCNs would unlock new interesting operations or computational patterns.\", \"Eq. 3 and 8 look particularly alike, and it is not evident what kind of advantage the latter brings. In particular, in Eq. 3, the message function $\\\\psi$ can be specific to a particular neighbourhood (and rank), similarly to the neighbourhood message function $\\\\omega$ in Eq. 8 \\u2014 which, incidentally, is not rank specific.\", \"Specific information about ranks and neighbourhoods could be specified by features akin to \\u201dmarks\\u201d over nodes and edges of an augmented Hasse graph, and a general enough neural architecture could then make use of these for neighbourhood and rank specific updates.\", \"Proposition 3 appears to be quite trivial given Proposition 1. What is it telling us in addition to that?\", \"It is not clear how the proposed contributions would help \\u201cdemocratising\\u201d TDL, as the authors claim. The proposed approach appears to significantly enlarge the hyper-parameter space by considering a plethora of possible architectural designs arising from the combination of neighbourhood and rank specific neural modules. Although TopoTune lowers the practical effort of searching over these spaces, these large parameter searches may still require large computational capabilities to be satisfactorily performed in a reasonable time frame.\", \"The value and/or interest of some experimental questions and emerging results is not clear.\", \"\\u201cGCCNs outperform CCNNs\\u201d: It is not clear what the outperformance is due to when comparing to \\u201cstandard\\u201d CCNNs, which could have, potentially, neighbourhood and rank-specific message functions. What is the take-home message for readers?\", \"\\u201cGCCNs are smaller than CCNNs\\u201d: the authors do not explain why this is the case, and it is seemingly the first time this concept emerges in the manuscript\", \"\\u201cGCCNs improve over existing CCNNs\\u201d: the results seem to be merely a matter of additional hyper-parameter search?\", \"\\u201cPerformance-cost tradeoff\\u201d: The authors highlight the reduced number of parameters of GCCN models, but they do not expand into how this actually translates into lower computational cost (e.g. because run-time experiments are not discussed in this section).\", \"Generally speaking, the manuscript would benefit from a clearer and more punctual presentation in regards to the motivations behind the proposed contribution and how these precisely address the research questions put forwards by the authors.\"], \"questions\": [\"Can the authors expand on whether CCNNs can capture GCCNs? Are there functions expressed by GCCNs that cannot be expressed by CCNNs? If the two classes are equivalent, can the authors discuss more in detail what is the effective advantage of considering their proposed GCCN class?\", \"Can the authors better explain what were the research questions addressed in their experimental section and how their results contribute to answer them?\", \"Can the authors better discuss how TopoTune goes beyond merely a hyper-parameter search tool?\", \"Please also see weaknesses.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors propose a generalization of Combinatorial Complex Neural Networks (CCNNs) called GCCNs and an accompanying software library called TopoTune, to generalize works on CCNNs into one computational framework and streamline the training and tuning of TDL architectures. Both theoretical and empirical results indicate that the proposed framework is indeed a useful generalization of previous efforts in TDL.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The main strength of this work is that the authors are able to subsume TDL architectures under a single framework.\", \"The empirical results indicate to me that the framework matches existing works, thus validating the claim that the framework is indeed general.\", \"The framework allows the use of GNNs, which should bring the two fields closer together and have TDL research benefit from progress in GNNs.\"], \"weaknesses\": [\"L458: The authors state that \\u201cGCCNs outperform CCNNs\\u201d. Out of the 8 presented datasets, I can only find two instances (NCI1, ZINC) where GCCNs actually perform better than the best CCNN baseline (accounting for one standard deviation). I could be convinced that the benefit of TopoTune is that one must only sweep over the GNN sub-modules to obtain an (at least) on-par model. However, this would still require some effort to find the best sub-module; see question 3 for more on this.\", \"In L468 and Figure 5, the authors discuss performance vs. number of parameters. However, I don not find this comparison convincing as a smaller number of parameters may not necessarily be more cost-efficient. Instead, I would like to see a comparison in terms of runtime and memory usage of the different models.\", \"Since the authors argue their approach to be superior to works on higher-order GNNs, a comparison of GCCNs and higher-order GNNs would be very useful. For example, PPGN++ (https://arxiv.org/abs/1905.11136), a higher-order GNN, performs much more on par with the best GCCN on ZINC than most CCNN baselines presented in the paper.\"], \"questions\": [\"In the introduction you say \\u201cHowever, constrained by the pairwise nature of graphs, GNNs are limited in their ability to capture and model higher-order interactions [\\u2026]\\u201d. I would expect that higher-order GNNs (https://arxiv.org/abs/1905.11136, https://arxiv.org/abs/1810.02244, https://arxiv.org/abs/1905.11136) are able to capture higher-order interactions. Could you elaborate on how TDL differs from higher-order GNNs?\", \"Related to the first question, in L88-L93 you mention the work of Jogl et al. (https://openreview.net/forum?id=HKUxAE-J6lq) on Cell Encodings, which is equivalent to using the standard Weisfeiler-Leman test on a transformed graph, but your argument for the shortcomings of this approach is not clear to me. In particular, you state that \\u201cHowever, although these architectures over the resulting graph-expanded representations are as expressive as their TDL counterparts [\\u2026] the former are neither formally equivalent to nor a generalization of the latter\\u201d. What is \\u201cthe former\\u201d? What is \\u201cthe latter\\u201d? Assuming the former are Cell Encodings and the latter topological GNNs, why is it important that they are formally equivalent or one being a generalization of the other? Are they different in their runtime or memory requirements? Do we expect better learning behavior from TDL methods?\", \"As outlined in the weaknesses, in Table 1, only on two datasets GCCNs outperform the best CCNN from TopoBenchmarkX. Can you further elaborate on the benefits of TopoTune in this context?\", \"Related to the third question, can the authors provide an overview over the runtime and memory complexity of the compared CCNNs, as well as GCCNs, possibly in relation to the complexity the underlying GNN submodules?\", \"Am I correctly assuming that the ZINC dataset used in this work is the full ZINC dataset with 250K graphs, rather than the ZINC (12K) version frequently benchmarked in graph learning?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to the rebuttal 1/2\", \"comment\": \"I am grateful to the authors for their rebuttal. In the following I will point out related further comments and questions about it.\\n\\n**W1. How GCCNs would unlock new operations and patterns.**\\n\\n> This is an important point. [\\u2026] This increased flexibility of GCCNs over CCNNs is precisely what allows them to outperform previous architectures.\\n> \\n\\nHow can the authors be so firm in claiming this architectural modification is *precisely* the reason GCCN outperforms previous architectures? In a follow up response they indeed mention the outperformance can be due to \\u201cmyriad reasons\\u201d. I understand they believe explicitly calling out this additional flexibility to the readers is part of their contributions, but these strong claims requires being substantiated at least by ablation studies in controlled settings.\\n\\n> [\\u2026] Contrary to this, nothing forces the \\u03c9s of Eq. 8 to be message passing based (e.g., by using a Transformer or MLP architecture). This introduces a completely new landscape of possible TDL models, which have until now been very focused on message passing.\\n> \\n\\nI understand the appeal to seaminglessly introduce transformer layers, but it is also natural to wonder why this is a good idea in the specific TDL use-case. To the best of my understanding, applying fully connected computation via a transformer layer to a strictly augmented Hasse Graph would potentially indiscriminately intermingle representations from cells of different ranks (see e.g. the rightmost depiction in Fig. 3). Why would this be a good idea? Why would it even be relevant to apply Transformers to topological domains? Intuitively this would go in the opposite direction than incorporating \\u201cinductive biases\\u201d from topological domains as mentioned in the next rebuttal response:\\n\\n> However, the goal of our work is to design a model whose architecture (as opposed to features) naturally incorporates such inductive biases.\\n> \\n\\nRegarding \\u201cmarking\\u201d, the authors claim that their approach better embodies such inductive biases to respect the \\u201ctopological symmetry of the domain\\u201d. But how are these exactly defined? What do the authors exactly mean by this? Do they refer to \\u201csymmetries\\u201d as intended in the Geometric Deep Learning and Physics community?\\n\\n**W2. Stronger expressivity proof**\\n\\nFirst, whilst I appreciate the authors\\u2019 effort on this, it is due to signal that it constitutes a significant addition, on which I do not believe I can guarantee the same level of attention and depth I provided during the review period.\\n\\nIn any case, I am confused by some statements in the proof. A particularly puzzling point is the claim about the equivalence between the k-CCWL test and the k-dimensional WL test on the strictly augmented Hasse Graph. Why is that the case? It is not obvious to find an intuitive link between the two, especially, because the `k' in the former refers to the number of hops in the neighbourhood, whilst in the latter it refers to the size of tuples whose colours are refined. Generally speaking, deeper neighbourhoods do not obviously enhance discriminative power?\\n\\n**W3. Democratization of TDL**\\n\\n> **A plug-in approach that improves upon a given GNN performance** [\\u2026] This researcher can plug their GNN into TopoTune, which automatically makes it \\u201chigher-order\\u201d and can potentially improve its performance [\\u2026]\\n> \\n\\nThe claim that TopoTune is \\u201cA plug-in approach that improves upon a given GNN performance\\u201d can be misleading or incomplete, if anything. In my opinion, the issue with this argument is that it is missing an important ingredient: the \\u201clifting strategy\\u201d. I believe this could be a very impactful component in potential improvements obtained when turning a GNN into a higher-order architecture. I agree that a sophisticated search over neighbourhood functions can also contribute, but it is not necessarily the sole responsible for that.\\n\\n> [\\u2026] results with GCCNs were obtained with very minimal \\u201ctraditional\\u201d hyperparameter tuning, keeping parameters like learning rate and embedded dimensions largely fixed across experiments, while the hyperparameters used with CCNNs were the result of an extensive benchmarking study.\\n> \\n\\nCan the authors then specify how these \\u201cbase hyper-parameters\\u201d were chosen at the beginning (let me apologise in case I have personally missed this)? The risk is that, if the choice is driven by results obtained from tuning CCNNs, then one should also reason on the fact that prior computation was, somehow, already inherited.\\n\\nConsistently with the strength I noted, I understand and appreciate that TopoTune can unify interfaces and make TDL exploration more accessible to practitioners, but I believe some claims on its democratising effect are somewhat over-emphasised.\"}", "{\"title\": \"Follow-up on Reviewer Feedback\", \"comment\": [\"We would like to follow up to ask if our response addresses the reviewer\\u2019s concerns, weaknesses, and questions. To summarize our response, we have :\", \"added results on 4 large node-level tasks;\", \"provided time complexity;\", \"provided runtimes;\", \"re-contextualized performance analysis.\", \"We would greatly appreciate a prompt feedback, as it would allow us to clarify any remaining issues and further improve the quality of our manuscript. In case the Reviewer is satisfied with our response and the clarifications, we would kindly ask them to reconsider the rating of our submission.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Third response to Reviewer kr8N (4/4)\", \"comment\": \"**Open problem 11**\\n\\nWe refer to our response above to Open problem 6. Aditionnally, we specify that GCCNs are advantaged by the facts that \\n- i) their theory is built upon combinatorial complexes (most general domain), making them eligible for any special subcase domain (we note that this theoretically also applies to combinatorial CCNNs, although, in practice, no testing across domains has been performed)\\n- ii) their implementation is built into a platform where models and liftings are divorced, and there are many liftings to choose from.\\n\\n\\n> As an official reviewer, I felt it important to convey my opinion that some claims are somewhat overemphasised, and some arguments and responses have been given in a way I found, in some cases, slightly deceptive.\\n\\nWe thank Reviewer kr8N for the detailed response and continued engagement. We appreciate the care they have devoted to our work, and we hope that this response will help clear up the remaining misunderstandings that seem to shape their opinion of the novelty and value of the work. \\n\\nWe are more than happy to further clarify any claims through responses or in the paper (in camera ready version), as we have absolutely no intentions of deception. Our answers to the Reviewer\\u2019s specific questions should not be confused with the content of the paper which does not highlight nearly as strongly many of the topics discussed here.\"}", "{\"title\": \"Follow-up for Feedback (#2)\", \"comment\": \"Hello! We are following up to ask if our previous response has addressed your remaining questions about the selected training hyperparameters and the nature of the ZINC dataset. We would appreciate being able to incorporate any additional feedback into the manuscript before the deadline tomorrow.\\nIn the event you have found our responses to be satisfactory, we would really appreciate an updated rating. We believe your feedback has significantly helped improve the manuscript. Thank you!\"}", "{\"title\": \"Second answer to rebuttal 2/2\", \"comment\": \"**list of evidence of empirical results pointing to contribution of GCCNs over CCNNs** and **reworded paragraph headings in the Results subsection, per your suggestions**\\n\\nI note the authors' points and hope that this discussion helps clarifying the contributions of the presented paper.\\n\\n**updated, stronger expressivity proof with accompanying figure for easy scanning.**\\n\\nAfter looking into this, I would like to explicitly point out the following. As things stand now in the current revision, the authors are introducing six additional appendix pages with technical definitions and derivations. Importantly, no other reviewer has explicitly acknowledged reviewing this new part. I appreciate the authors' efforts, but I must also share that I do not believe I could (and, to some extent, should) additionally review this new content and alter my evaluations during the current discussion phase based on that. Comments on the soundness, scope, and impact of this additional part are therefore deferred.\\n\\n---\\n\\nI will conclude with further comments and questions in respect to addressing some open research questions as per the referenced position paper.\\n\\n**Open problem 3**\\n\\nThe authors of the position paper explain this point as *\\\"Benchmark suites are needed to enable efficient and objective evaluation of novel TDL research\\\"*, and it is clear from the paper they refer to a \\\"minimal collection of higher-order benchmark datasets\\\", \\\"implementations of graph lifting algorithms for generating synthetic datasets, \\\"a taxonomy of higher-order datasets\\\", a \\\"comprehensive set of performance metrics\\\". The authors' contribution can support the action of benchmarking itself, but the claim that it addresses such open problem, that is clearly about data and metrics, is rather far-fetched.\\n\\n**Open problem 6**\\n\\nWhy would be the ability to work across domain a specific feature of GCCNs? Why wouldn\\u2019t one be able to apply a standard CCNN to different domains by changing, for example, the lifting function? I do not fully understand why the present contribution would specifically go in the direction of addressing open problem 6.\\n\\n**Open problem 9**\", \"from_the_original_position_paper\": \"*\\u201dTheoretical foundations have not yet been adequately laid to consolidate the relative advantages of TDL. More theoretical research is needed to shed light on the relevance of topology in deep learning\\u201d.*\\n\\nThe open problem seems to be more about deriving theoretical results on the superiority of \\u201cworking higher-order / topological\\u201d and its general relevance in Deep Learning. Results on the theoretical properties of GCCNs and its comparison with other TDL methods are clearly a welcome contribution, but do not seem to strongly align to address the aforementioned open problem \\u2013 which is more about the role of TDL in the Deep Learning landscape.\\n\\n**Open problem 11**\", \"i_still_cannot_fully_understand\": \"are the authors claiming their model have advantages on building models that could work across topological domains? Why is that? Can they expand a little on this?\"}", "{\"title\": \"Second response to Reviewer kr8N (2/3)\", \"comment\": \"**W3. Democratization of TDL.**\\n\\n- *Topological liftings.* We completely agree with you that topological lifting is a very important part of the pipeline. It is in fact what is responsible for assigning higher-order structure to data originating from the graph domain. In fact, a main reason we chose to implement TopoTune inside TopoBenchmark [1] is that this platform hosts many choices of topological liftings and allows for the practitioner to choose the lifting of their choice, just as they would choose a dataset or a model. The authors of this platform even hosted a Topological Deep Learning Challenge at ICML [2] this year which aimed to crowd-source implementations of topological liftings for various purposes. We now specify easy access to topological lifting choice at line 402.\\n\\n- *Choosing hyperparameters.* Great question! We confirm that our \\u201cbase hyperparameters\\u201d choice was not inherited by the results obtained from tuning regular CCNNs. Instead, we decided to use TopoBenchmark defaults, and for those in which no default was provided (e.g. hidden state dimension) we just selected the lower value that was considered in TopoBenchmark\\u2019s grid search. Moreover, it can also be checked in their reproducible scripts [3] that the values of these base hyperparameters (learning rate, hidden state dimension, batch size, readout,...) largely vary across datasets and models. Therefore, we believe that our claim holds, and thanks to your feedback it is better supported in the paper (line 423).\\n\\nReferences\\n\\n[1] Telyatnikov et al. \\u201cTopoBenchmarkX: A Framework for Benchmarking Topological Deep Learning.\\u201d 2024 https://arxiv.org/pdf/2406.06642 \\n\\n[2] Bernardez et al. \\u201cICML Topological Deep Learning Challenge 2024: Beyond the Graph Domain.\\u201d Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM) at ICML 2024 https://arxiv.org/abs/2409.05211 \\n\\n[3] https://github.com/geometric-intelligence/TopoBenchmark/blob/main/scripts/reproduce.sh\"}", "{\"title\": \"Follow-up on Reviewer Feedback (#2)\", \"comment\": \"We are following up once again to ask if the Reviewer has any additional questions. We would greatly appreciate to hear back, such that we can address any remaining issues before the deadline.\\n\\nIf the Reviewer does not have any additional concerns since we have answered their questions, edited the manuscript, and added additional experiments, we would kindly ask that they consider updating the rating of the submission to reflect this.\"}", "{\"comment\": \"Thank you for your patience! We were struggling with MathJax. The updated manuscript with updated Fig. 5 and caption is now uploaded.\\n\\nPlease let us know if you have any more questions or feedback that could help improve the manuscript before the deadline on Wednesday!\"}", "{\"title\": \"Response to all reviewers (continued)\", \"comment\": [\"**2.B. Contextualizing the contribution (continued)**\", \"Below, we outline the 7 open problems of TDL we partially or fully address in this work. These problems were defined by many of the field\\u2019s leading authors in a recent position paper (https://arxiv.org/pdf/2402.08871).\", \"Open problem 1: Need for adaptation of TDL in practice and making TDL more accessible for relation learning in real-world applications. TopoTune removes much of the software-based barrier to entry and makes the choice of base model flexible. Hence, TopoTune makes TDL a more attractive choice for practitioners from the several fields that stand to benefit from relational learning, such as neuroscience, protein engineering, chip design, and so on (see https://arxiv.org/abs/2106.04051 at Appendix B).\", \"Open problem 3: Need for benchmarking. As TDL evolves quickly across an increasingly wide landscape of models, it becomes challenging to understand which model is best suited for a particular application. Through its integration in TopoBenchmark and its structure, TopoTune makes basic components of TDL such as choice of topological domain, neighborhood structure, and message-passing function easily comparable across many benchmarking tasks. This is a fundamental change of perspective compared to previous literature, which always considered one new model with one set of such choices at a time.\", \"Open problem 4: Limited availability of software. On one hand, TopoTune directly integrates with the fully open-source TopoBenchmark platform, which itself leverages other open source TDL tools from packages like TopoModelX and TopoNetX. On the other hand, TopoTune makes the much wider and more well-established platform of software tools for GNNs (such as PyTorch Geometric) directly applicable to TDL.\", \"Open problem 5: Need for a cost-benefit analysis framework. By exploring a large model space systematically, TopoTune makes it much easier to assess cost (model size, training time) in comparison to performance. We show an example of such an analysis in Figure 5.\", \"Open problem 6: Building scalable TDL models that work across domains. To our knowledge, GCCNs are the first models empirically tested across many higher-order domains. This feature is enabled by baking the implementation into TopoBenchmark, which already contains the data liftings for many topological domains.\", \"Open problem 9: Consolidating relative advantages of TDL through theoretical foundations. Beyond proving that GCCNs generalize and subsume CCNNs, we also provide expressivity and permutation equivariance proofs that are cross-domain applicable. To our knowledge, the WL-tests and related definitions of expressivity that we define on combinatorial complexes (in the updated manuscript) are the first definitions existing on combinatorial complexes. Further, we leverage these definitions to prove that GCCNs are strictly more expressive than CCNNs (in the updated manuscript), which is the first proof of expressivity for combinatorial complexes.\", \"Open problem 11. Developing a transformer architecture for TDL models that \\u201clays a unified foundation for TDL across different higher-order domains.\\u201d As presented in Table 1, TopoTune offers the possibility of using a vanilla transformer as a \\u201cfully-connected\\u201d message-passing function. However, TopoTune offers a more general answer to this problem, in that it provides a foundation, both theoretical and practical, for building models across topological domains.\"]}", "{\"title\": \"Response to reviewer iHAm\", \"comment\": \"Thank you very much for your time and your very helpful feedback. We address your points about weaknesses (W) and questions (Q) below.\\n\\n**W1. Justifying the summary statement \\u201cGCCNs outperform CCNNs\\u201d and contextualizing TopoTune\\u2019s benefits.**\\nIt is true that GCCNs only outperform outside 1\\\\sigma on 2 datasets across all domains. However, when considering inter-domain performance, as has been traditionally the case in TDL, GCCNs outperform (beyond 1\\\\sigma) the counterpart best CCNN on 11 out of 16 of the domain/dataset combinations tested. We now specify this at Section 6.2, paragraph GCCNs outperform CCNNs. \\n\\nMoreover, we stress that these results with GCCNs were obtained with very minimal \\u201ctraditional\\u201d hyperparameter tuning, keeping params like learning rate and embedded dimensions largely fixed across experiments, while the hyperparameters used with CCNNs were the result of an extensive benchmarking study (https://arxiv.org/abs/2406.06642). We have clarified this point in Section 6.1.\\nMore broadly, we would like to emphasize how powerful these results are in the context of how much easier and simpler TopoTune makes TDL. Each of the CCNNs we benchmark against is the product of a painstaking, one-at-a-time generalization of a specific choice of model to a specific choice of domain, always with a \\u201cfrom scratch\\u201d message-passing scheme. In contrast, each GCCN that achieves comparable performance is the result of a structured and systematic exploration of domain, neighborhood, GNN, and related parameters facilitated by TopoTune.\\nWhile it is true that selecting the best GNN sub-modules or parameter combinations requires some effort, TopoTune provides an integrated and standardized framework that significantly reduces the overhead associated with this process. Rather than manually engineering architectures from scratch for each task, users can leverage TopoTune to explore architectural choices within a unified, expressive framework that encapsulates and generalizes previous work. This approach not only accelerates the design process but also ensures consistency and comparability across models. As shown in Fig. 5, TopoTune also provides insight into how design choices\\u2014such as the number of parameters, neighborhood configurations, or GNN submodules\\u2014affect performance, offering a more principled way to navigate and refine architectural decisions.\", \"tldr\": \"By organizing the vast design space of TDL into a manageable and interpretable framework, TopoTune shifts the challenge from ad-hoc design to systematic exploration, which we believe is critical for advancing the field.\\n\\n**W2. Comparison in runtimes and memory usage.**\\nFor runtime, we point to bullet point \\u201cTraining times\\u201d in our global answer to all reviewers. \\nDue to the large amount of experiments and necessity for distributed training, reporting used memory beyond the number of parameters was not possible in this timeframe.\\n\\n**W3. Comparison with higher-order GNNs.** This is a very interesting point, thank you for bringing this work to our attention. Indeed, these models perform very well on graph benchmark tasks. \\n\\nFirst, let us directly answer your question about how works like PPGN++ are different from TDL, and specifically GCCNs. The main difference between higher-order GNNs and Topological Neural Networks (TNNs) lies at a very fundamental level.\\n\\nGNNs run on graphs, which are simple topological spaces. GNNs have often been studied through the lens of Geometric Deep Learning (GDL). GDL (in the sense of [1]) is built on group-theoretic arguments along with the frequent usage of Hilbert Spaces (strictly related to manifold learning and, in general, to metric spaces).\\n\\nTNNs run on combinatorial topological spaces that are inherently higher-order. TNNs have been studied through the lens of Topological Deep Learning (TDL). TDL is solely built on the modeling assumption of data living on the neighborhoods of a combinatorial topological space and having a relational structure induced by the neighborhoods\\u2019 overlap. Further insights can be gained from the thesis in [2]-[3].\\n\\nThis said, higher-order GNNs still run on graphs. As such, they usually use higher-order information to update node embeddings. In contrast, in TNNs (GCCNs included), all the cell embeddings are updated, as each cell is an element of the underlying space. This is not a detail. Indeed, while higher-order information is usually used in GNNs to achieve some desirable property (e.g., improved expressivity) but without any other additional theoretical argument, TNN usually results in improved expressivity while being supported by the sophisticated and powerful machinery of algebraic topology. This fact allows us to leverage additional structure (e.g., spectral theory, homology theory, homotopy theory,...) when combinatorial complexes are particularized to simplicial, cell, or path complexes [4][5][6][7].\\n\\nSee next reply.\"}", "{\"comment\": \"Regarding ZINC, I think this is largely addressed. In GDL it is more common to use edge features but what is important for this work is that the comparison between TDL models is fair, which seems to be the case.\\n\\nRegarding the hyperparameters, I definitely see the authors' point in that they did not need specialized hyperparameters but I want to emphasize that the authors still needed to rely on the hyperparameter defaults of TopoBenchmark and hence, especially for new tasks, a hyperparameter tuning may still be necessary.\\n\\nThat being said, I don't think that this is a central issue of this work. Further, I appreciate the many improvements the authors have made in the manuscript. Hence, I decided to raise my score.\"}", "{\"title\": \"Third response to Reviewer kr8N (2/4)\", \"comment\": \"**clarifications and better justifications in the manuscript about democratization claim**\\n\\nThere seems to be a big misunderstanding here. Our method **does indeed make any given GNN \\u201chigher-order\\u201d**, i.e., it produces a model that takes into account higher-order interactions (the neighborhoods) among groups of more than two nodes (the cells) and process them using the same methodological principles of the underlying GNN. This said, the resulting architecture is clearly a product of the lifting, neighborhoods, and GNN choice. However, implicitly using Proposition 1, the reviewer has to agree that most of the architectures in the TDL literature can be recovered using TopoTune with the corresponding graph model. For instance, with appropriate choices of liftings and neighborhoods:\\n- the Simplicial Complex Convolutional Neural Network [1] can be obtained from TopoTune by setting the GNN to a 1-layer Graph Convolutional Neural Network\\n- the Generalized Simplicial Attention Networks [2] can be obtained from TopoTune by setting the GNN to a 1-layer Graph Attention Network. \\n\\nThese constructive examples should make clear that TopoTune represents a significant resource for practitioners in TDL and across diverse related fields. Practitioners can now easily test and benchmark the performance of any GNN on the relational structure induced by an underlying topological domain. Practitioners can also test a new TDL model that can be implemented using a GNN on the set of strictly augmented Hasse graphs. For this reason, we chose TopoBenchmark as the host platform for TopoTune because it allows easy access to the widest available library of topological liftings, some of which are application specific (see white paper on topological lifting challenge organized by TopoBenchmark\\u2019s developers [3]).\\n\\n**Software**\\n\\nUnfortunately, due to anonymity concerns we could not include the link to the open-source repository. (This was stated in the original manuscript but we had to remove it to accommodate rebuttal-related modifications.)\\nIn terms of software features/components/design: \\n- At line 400, we summarize the tools offered by TopoBenchmark and their benefits for practitioners. \\n- At line 431, we reference pseudo-code in Appendix D for how a GCCN model forward runs, going from graph expansion to message-passing to aggregation. \\n- In terms of the broader software, we refer to the manuscript associated with TopoBenchmark [4], which details the design and operation of this extensive platform for lifting/processing data, implementing models, and training them.\\n- A central focus of our software tool is ease of use and customization. With no better way to prove this due to anonymity and the supplementary materials being frozen, we emphasize that we spent a significant amount of time organizing and thoroughly documenting TopoTune (every function documented with a docstring and parameters description in numpy documentation format) inside TopoBenchmark, as well as providing a descriptive set of instructions on getting started, customizing models, and reproducing experiments. We use ruff as a linting tool that ensures that our software coding style respects software engineering best practices (the associated linting Github Action passes). We use pytest as a unit-testing tool that automatically tests our code so that, together, 88% of TopoBenchmark and TopoTune\\u2019s code are unit-tested.\\n- Going beyond our own software development, an important contribution of the paper is making existing GNN software readily available for Topological Deep Learning (line 395, 537), making much more established software libraries like PyTorch Geometric available to use on topological domains.\\n\\nReferences\\n\\n[1] Yang, et al. \\\"Simplicial convolutional neural networks.\\\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022.\\n\\n[2] Battiloro, et al. \\\"Generalized simplicial attention neural networks.\\\" IEEE Transactions on Signal and Information Processing over Networks (2024).\\n\\n[3] Bernardez et al. \\u201cICML Topological Deep Learning Challenge 2024: Beyond the Graph Domain.\\u201d Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM) at ICML 2024 https://arxiv.org/abs/2409.05211 \\n\\n[4] Telyatnikov et al. \\u201cTopoBenchmarkX: A Framework for Benchmarking Topological Deep Learning.\\u201d 2024 https://arxiv.org/pdf/2406.06642\"}", "{\"title\": \"Second response to Reviewer kr8N (3/3)\", \"comment\": [\"**W4. Experimental questions on how GCNNs outperform CCNNs**\", \"**GCCNs outperform CCNNs: why?**\", \"To address this, we summarize in bullet points all of the emerging conclusions that we extract from our experimental results:\", \"Table 1 provides direct evidence that our proposed graph expansion, which leverages many strictly augmented Hasse graphs instead of just one, leads to better performance. (line 464)\", \"Evidence that being able to choose a message-function is helpful appears in:\", \"Table 2. We reproduce existing CCNNs as GCCNs and vary only the message function. All other parameters are kept fixed. We observe better performance (>1sigma) in 3 datasets for SCCN and in 3 datasets for CWN. (line 484)\", \"Fig. 5 (and its extended version in Fig. 8). We observe that some datasets like ZINC (line 508) and NCI1 strongly benefit from a specific choice of message-function, in these cases GIN and GraphSage, respectively.\", \"Disentangling the contribution of neighborhood structure towards performance is trickier. While there is a similar parallel to be made with Fig. 5 and 8 \\u2013 i.e., consider the different choices of neighborhood (color of marker) for a fixed GNN (shape of marker) \\u2013 it is clear there are many possible options that perform similarly well. In our work, we leverage this observation to remark that some much smaller/more lightweight neighborhood structures lead to similarly performing models, and hence provide a potential solution towards less costly TDL. (line 503)\", \"Therefore, even if the overall performance gains in Table 1 rests on a combination of multiple factors, we argue that our evaluation does provide supporting evidence that the various novelties proposed by GCCNs are useful. While we understand this is not as satisfying as a universal statement on how best to implement TDL models given a task, we would like to emphasize that it is the first work of its kind that even considers going beyond one particular choice of message-passing, neighborhood structure, and topological domain. It is only with such a set of results that one could make these emerging observations. We are very excited to continue using TopoTune for application-specific tasks (see future work, line 536) and better understand how to optimize GCCNs in a more focused setting going beyond standard benchmarking. Last, but not least, we truly appreciate your detailed feedback on this critical point; thanks to this ongoing discussion our key findings are much better distilled and contextualized in the revised manuscript.\", \"**Paragraph Headings.**\", \"Our goal with these paragraph headings was simply to organize the section into short take-home messages. Based on your feedback, we understand that they could be more accurate and descriptive. Here are the new paragraph headings:\", \"\\u201cGCCNs are smaller than CCNNs.\\u201d becomes \\u201cGCCNs perform competitively to CCNNs with less parameters.\\u201d Here we would like to stress that GCCNs are a novel, more general class of architectures that go beyond CCNNs, so we kindly avoid referring to them as \\u201cconfigurations that TopoTune has found\\u201d (as it could undermine this contribution).\", \"\\u201cGCCNs improve existing CCNNs.\\u201d becomes \\u201cGeneralizing existing CCNNs to GCCNs improves performance.\\u201d Again, we emphasize that GCCNs are a novel architecture that go beyond a new configuration of previous models.\", \"\\u201cLottery Ticket GCCNs\\u201d becomes \\u201cTopoTune finds parameter-efficient GCCNs.\\u201d We understand that the lottery ticket hypothesis could be a confusing reference here, so both the heading and the paragraph content has been edited to better frame this point.\"]}", "{\"title\": \"Answer to the rebuttal 2/2\", \"comment\": \"**W4. Experimental questions on how GCNNs outperform CCNNs**\\n\\n> \\u201cGCCNs outperform CCNNs: why?\\u201d: This performance is due to a myriad of reasons. [\\u2026]\\n> \\n\\nI understand there *could* be many reasons, in principle, but have the authors gathered concrete evidence on some in particular? Or do they believe there is evidence that all of them jointly contribute at the same time as they seem to explicitly claim in their answer? Is there an emerging conclusion they can leave to readers that TopoTune or the GCCN architectural components helped uncover? If the authors believe some components of GCCN are contributing to its performance, in particular, I believe that making these more specific claims could be more informative to readers rather than generally claiming that GCCNs outperform CCNNs.\\n\\nOverall, I am insisting on these points because I am concerned by the way the findings are presented and contextualised. Slightly zooming out a little, but still linking to the next answer the authors provided: some paragraph headings could be misleading. Examples follow. \\u201cGCCNs are smaller than CCNNs.\\u201d In which sense? Wouldn\\u2019t it be more accurate and informative to say that TopoTune helped finding configurations which worked competitively with a smaller number of parameters? \\u201cGCCNs improve existing CCNNs.\\u201d Would not it be more precise to say that TopoTune allowed searching for better configurations that improved on the original ones? \\u201cLottery Ticket GCCNs\\u201d \\u2013 this heading could be confusing because it refers to a line of research that is completely distinct [1]\\n\\n> dense, randomly-initialized, feed-forward networks contain subnetworks (\\\"winning tickets\\\") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations\\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (https://arxiv.org/abs/1803.03635)\"}", "{\"title\": \"Second response to Reviewer kr8N: Proof of expressivity\", \"comment\": \"As promised, we are reaching out with an update on the stronger expressivity proof. First, we would like to sincerely thank you for taking the time to review the first version of the proof. We are grateful that you are engaging with our work during this discussion.\\n\\nSecond, we thank you for catching the oversight on the definition of the neighborhoods Nj(sigma) in the coloring scheme of our k-CCWL test. We meant the j-neighborhood as defined in [Morris et al (2019)] and _not_ the j-hop neighborhood. As you noted, the initialization also needs to be updated to use colors on k-tuples of cells. In fact, we will use k-sets of cells (as opposed to k-tuples of cells) for this proof. Accordingly, we have updated the coloring scheme of our k-CCWL test correcting both neighborhoods and initialization.\\n\\nYou are right, deeper neighborhoods do not immediately increase expressivity. However, the higher-order k-GNNs from [Morris et al. 2019] are strictly more expressive than GNNs. As GCNNs can implement k-GNNs on strictly augmented Hasse graphs whereas CCNNs cannot (as the proof shows), GCNNs are strictly more expressive than CCNNs.\\n\\nWe have added significant details to each step of the proof in the form of auxiliary definitions and propositions as well as references to [Morris et al. 2019], in purple in the updated manuscript. To make the proof easier to follow, particularly given the limited time left in this discussion period and the significant addition it represents, we have added a graphical summary of the definitions and propositions in the new Figure 7.\\n\\nOnce again, we were truly grateful for your engagement with the first version of our proof. We hope that you will appreciate this refined version. We thank you for the time and effort you are devoting to evaluating our work.\\n\\n\\nMorris, Christopher, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. \\\"Weisfeiler and leman go neural: Higher-order graph neural networks.\\\" In Proceedings of the AAAI conference on artificial intelligence, vol. 33, no. 01, pp. 4602-4609. 2019.\\n\\nGrohe, Martin. Descriptive complexity, canonisation, and definable graph structure theory. Vol. 47. Cambridge University Press, 2017.\"}", "{\"title\": \"Second response to Reviewer kr8N (1/3)\", \"comment\": \"Thank you for considering our response and engaging with it. We really appreciate your continued feedback. We provide responses to points below.\\n\\n**W1. How GCCNs would unlock new operations and patterns.**\\n\\n- *\\u201cIncreased flexibility.\\u201d* In this response we used the expression of \\u201cincreased flexibility\\u201d to describe the contribution of per-rank neighborhoods that differentiates GCCNs (Eq. 8) from CCNNs (Eq. 3). You are correct that this is not necessarily the singular reason they perform better, but rather one of many \\u2013 this was poor phrasing on our part. In the paper, we specified this contribution as \\u201cincreased flexibility over CCNNs\\u201d (line 344). In terms of empirically studying the contribution, we refer to our response to \\u201cGCCNs outperform CCNNs: why?\\u201d.\\n\\n- *Transformer.* You are totally correct that using a Transformer message-function leads to a fully-connected computation setting within each strictly augmented Hasse graph, which inherently collapses relationship information at the neighborhood level. Our goal here is to show that GCCNs provide a bridge between traditional TDL architectures and models like Transformers. The simplicity of the bridge we propose does come at the cost of collapsing neighborhood-level relationship information, as would be the case with any non-message passing architecture that doesn\\u2019t enforce relationship structure through modified loss, for instance. Solving this issue is an active field of research in the graph domain, with works like [1] proposing a neighborhood contrastive loss to apply MLP on graphs. By seamlessly including Transformer in GCCNs, we hope to make a first step towards generalizing this research to the topological domain. In addition to this, let us also remark that a Transformer for TDL is outlined as an open problem in a recent position paper of the field [2]. While this is not strictly aligned with our research goals \\u2013i.e. incorporating meaningful inductive biases in the architectural design, as you pointed out\\u2013, TopoTune indirectly provides new possibilities to TDL practitioners interested in this topic. For instance, our solution can easily enable the application of Graphormers [3] to the per-rank neighborhood expansions, which could represent a sweet spot between message-passing TNNs and a pure TDL transformer that collapses all neighborhoods at once.\\n- *\\u201cTopological symmetry\\u201d.* Thank you for feedback on clarity \\u2013 we understand this can be a confusing choice of words. (We have reverted this edit in the main paper). To answer your question, by \\u201cpreserving the topological symmetry\\u201d we mean that the properties of the topological domain are preserved in the same way that the properties of the graph are preserved by GNNs. For example, the architecture is equivariant to permutation of cells within a rank, just as a GNN is equivariant to permutation of nodes. In our previous response to you, we detail exactly how a GCCN goes beyond \\u201cmarking\\u201d features in practice, and what the consequences are in terms of preserving information about the topological structure. \\n\\nReferences\\n\\n[1] Hu, Yang, et al. \\\"Graph-MLP: Node classification without message passing in graph.\\\" arXiv preprint arXiv:2106.04051 (2021).\\n\\n[2] Papamarkou, Theodore, et al. \\\"Position: Topological Deep Learning is the New Frontier for Relational Learning.\\\" Forty-first International Conference on Machine Learning. 2024.\\n\\n[3] Ying, C, et al. \\u201cDo transformers really perform badly for graph representation?.\\u201d NeurIPS 2021. Vol 34 p. 28877--28888. \\n\\n**W2. Stronger expressivity proof.**\\n\\nWe are working on improving the proof now, and will respond once more on Monday with an update on this.\"}", "{\"title\": \"Fourth response to Reviewer kr8N\", \"comment\": \"Firstly we would like to thank you very much for increasing your rating of the paper. We believe our conversations during this rebuttal have led to many important improvements and we really appreciate that you agree. In response to the remaining questions:\\n\\n**Claims on value and novelty**\\n\\n- (i) Thank you for clarifying. To our knowledge, no works using marking strategies have a proven statement on cell permutation invariance or equivariance. They only focus on preserving expressivity.\\n- (ii) That is correct. In the case of an incidence neighborhood, the input Hasse graph would be a fully-connected graph between both ranks involved. So yes, in this case there would be \\u201cblending\\u201d at the level of these two ranks. A way to avoid this would be to exclude such neighborhoods or implement a contrastive loss to partially \\u201cde-blend\\u201d on the backend. Thank you for the pointer \\u2013 we hope this has cleared up the confusion.\\n- (iii) Both the message-passing equations and software infrastructure of CCNNs standardly do not support per-rank neighborhoods or per-rank components at large, like update functions. To our knowledge, this is the first work that provides the theoretical and practical implementations that support these. While we agree that adding per-rank capability is a relatively straightforward generalization, we believe its simplicity should not diminish the value it brings as a true generalization.\\n\\n**Open problems 6,11**\\n\\nIt is true that one could take, for example, a simplicial complex neural network and, if it is implemented in TopoBenchmark, use it along with a cellular complex lifting. However, similarly to (iii) this is not done in CCNN literature, in part because authors run experiments that match their theoretical frameworks, which are often developed for a specific sub-domain (ex.: expressivity in the cellular domain with cellular-level WL tests). \\n\\nWe argue that our theoretical framework at the combinatorial level (including combinatorial-level WL tests), combined with our implementation that provides seamless access to many liftings across different domains, represents a fundamentally distinct approach compared to domain-specific CCNNs.\"}", "{\"title\": \"Follow-up on Reviewer Feedback (#6)\", \"comment\": \"We are once again following up to ask that you please consider replying to our response, or, if you have no further concerns or questions, consider updating your rating of the paper. It is really important to us and to the peer review system that we engage with you. The deadline is coming up fast, so please consider responding soon.\"}", "{\"title\": \"Response to reviewer kr8N (continued)\", \"comment\": \"**W3. Democratization of TDL.** Indeed, we believe that our \\u2018democratization\\u2019 claim can be better contextualized. Here are the reasons why TopoTune democratizes TDL.\\n\\n- **A plug-in approach that removes the need of expert TDL knowledge.** To date, the design of new TDL methods has required a great expertise of the field, as no standardized, base method has been made available to practitioners wishing to apply TDL on their own data. However, by turning these technical steps into a simple component of the architecture \\u2013as you noted under \\u201cStrengths\\u201d-- TopoTune enables both experienced TDL practitioners and newcomers the design of new TDL architectures and message passing pipelines. \\n\\n- **A plug-in approach that improves upon a given GNN performance.** TopoTune also democratizes TDL because it relies on GNNs, which are commonly used by deep learning practitioners. The typical use case is as follows. A deep learning researcher is using a GNN model for a given application. This researcher can plug their GNN into TopoTune, which automatically makes it \\u201chigher-order\\u201d and can potentially improve its performance. You are right that this increases the hyperparameter search space by considering previously fixed traits (topological domain, message function, etc.) as variables. However, we stress that these results with GCCNs were obtained with very minimal \\u201ctraditional\\u201d hyperparameter tuning, keeping parameters like learning rate and embedded dimensions largely fixed across experiments, while the hyperparameters used with CCNNs were the result of an extensive benchmarking study (https://arxiv.org/abs/2406.06642). We now specify this at Section 6.1.\\n\\n- **A framework that opens the door to a systematic exploration of the TDL landscape.**\\nWe emphasize that, going beyond a search tool, TopoTune provides an integrated and standardized framework that significantly reduces the implementation overhead associated with TDL. Rather than manually engineering architectures from scratch for each task, users can leverage TopoTune to explore architectural choices within a unified, expressive framework that encapsulates and generalizes previous work. By organizing the vast design space of TDL into a manageable and interpretable framework, TopoTune shifts the challenge from ad-hoc design to systematic exploration, which we believe is critical for advancing the field. We have further clarified this claim in lines 403-405.\\n\\n**W4. Experimental questions on how GCNNs outperform CCNNs.**\", \"addressing_the_different_subpoints_as_follows\": [\"\\u201cGCCNs outperform CCNNs: why?\\u201d: This performance is due to a myriad of reasons. First, GCCNs allow for more flexible neighborhood structures (\\u201cper-rank\\u201d neighborhoods) than CCNNs (see response to W1, Bullet 1). For example, on MUTAG, a cellular GCCN using a per-rank neighborhood significantly outperforms CCNNs. Secondly, GCCNs benefit from the use of $\\\\omega_\\\\mathcal{N}$ modules based on existing, well-studied architectures, rather than relying on the custom, ad hoc message-passing schemes typically implemented in CCNNs. By using existing GNNs in PyTorch Geometric as base $\\\\omega_\\\\mathcal{N}$, for example, GCCNs leverage a much larger body of literature and much more established software. Finally, GCCNs are now shown to be strictly more expressive than CCNNs (see response to W2).\", \"\\u201cGCCNs are smaller than CCNNs: why?\\u201d: After testing many combinations of GCCNs, we found the best performing models to have, in general, small parameter budgets. We found this empirically; it was not intended by design. However, we can hypothesize that this happens because i) GCCNs allow for smaller, more focused neighborhood structures, which we included in our search space and ii) two of the message functions (GCN, GAT) we consider are particularly lightweight, and could introduce less parameters than the ad-hoc message functions of CCNNs.\", \"\\u201cGCCNs improve over existing CCNNs: a matter of hyperparameter search?\\u201d: You are right that GCCNs introduce additional search space by making previously fixed parameters like topological domain and message function variable. However, as discussed in W3 Bullet 2, we stress that all the other potential critical parameters that were optimized while benchmarking CCNNs (learning rate, hidden channels, dropout values, batch size, readout function, etc) were totally fixed in our experiments. We now specify this at lines 420-423. In this way, despite increasing the overall combinatorics, we actually performed a more limited hyperparameter search than TopoBenchmark \\u2013yet we obtained better performance. This underlines the methodological contribution of GCCNs as an architecture.\", \"\\u201cPerformance-cost trade-off and running times.\\u201d Thank you for bringing up this excellent point. We now provide time complexity analysis in Appendix C as well as training runtimes in Appendix G. Please see our response to all reviewers, points 1A and 1B.\", \"See next reply.\"]}", "{\"title\": \"Response to Reviewer iHAm (#2)\", \"comment\": \"We are happy to hear we were able to address many of your questions! Thank you for engaging with our response. We respond to your additional points below:\\n\\n**W1. Benefits of GCCNs vs CCNNs.**\\nGreat question! We chose our hyperparameters using TopoBenchmark defaults, and for those in which no default was provided (e.g. hidden state dimension) we just selected the lower value that was considered in TopoBenchmark\\u2019s original grid search, which is recorded in [1]. We now specify this in the paper (line 423); thank you for your feedback.\\n\\n[1] https://github.com/geometric-intelligence/TopoBenchmark/blob/main/scripts/reproduce.sh\\n\\n**W2. Time complexity and runtimes.**\\nYes, that\\u2019s exactly right. We currently project combinatorial complexes onto strictly augmented Hasse graphs directly inside the forward function of the model, instead of pre-saving the expanded dataset. This introduces higher runtime grossly proportional to dataset size, which explains why we are comparable/faster on smaller datasets and slower on larger datasets.\\n\\n**W3. Comparison between higher-order GNNs and TNNs.**\\n- We begin by addressing your questions specific to ZINC.\\nIn terms of the literature, the best reported pure GNN performance on ZINC without edge features is 0.320 MAE using PNA [1]. With edge features (representing bond types), this improves to 0.188 MAE. The best TDL model on ZINC, CIN [2], achieves 0.079 MAE, leveraging edge features. From what we can see, PPGN++ does not include ZINC results in its evaluation.\\nIn terms of our results, we remark that the current TopoBenchmark implementation disregards edge features. As highlighted in [1], edge features significantly influence performance. This omission likely accounts for the observed performance gap in our results compared to state-of-the-art models.\\nYou are correct that TDL models generally outperform GNNs on ZINC. We appreciate your observation and will explore incorporating edge features into TopoBenchmark.\\n\\n- We now address your point about higher-order GNNs.\\nWe agree that a direct comparison between higher-order GNNs and TDL methods would be valuable, both for highlighting the benefits of TDL and evaluating its potential for real-world problems. Unfortunately, such a comparison belongs outside the scope of this paper, which aims to first and foremost generalize and make it easy for TDL methods to incorporate arbitrary layers. That being said, we absolutely plan to incorporate higher-order GNNs as baselines in future work and help expand the TopoBenchmark framework accordingly. \\n\\n- To answer your question about choice of dataset: The datasets used in this and other TDL works are largely inherited from the GNN literature. Molecular datasets, in particular, are appealing for TDL due to the significance of cycles and hyperedges in representing chemical rings and functional groups. TDL is not inherently limited to these datasets. However, the lifting procedures used to generate higher-order cells impose computational constraints, specifically on memory. Current implementations (e.g., finding cycles or cliques) do not scale well with large, densely connected graphs. There is ongoing research to design scalable lifting procedures that would enable TDL methods to generalize to broader datasets. For instance, Bernardez et al. [3] propose innovative approaches to extend TDL beyond the graph domain.\", \"all_of_these_aspects_have_been_clarified_in_the_revised_manuscript\": \"we provide further details on dataset selection (line 425), excluding edge features(line 430), higher-order GNNs as future work (line 539), and dataset selection + lifting limitations (Appendix E2).\\n\\nReferences\\n\\n[1] Corso, Gabriele, et al. \\\"Principal neighbourhood aggregation for graph nets.\\\" Advances in Neural Information Processing Systems 33 (2020): 13260-13271.\\n\\n[2] Bodnar, Cristian, et al. \\\"Weisfeiler and lehman go cellular: Cw networks.\\\" Advances in neural information processing systems 34 (2021): 2625-2640.\\n\\n[3] Bernardez et al. \\u201cICML Topological Deep Learning Challenge 2024: Beyond the Graph Domain.\\u201d Proceedings of the Geometry-grounded Representation Learning and Generative Modeling Workshop (GRaM) at ICML 2024 https://arxiv.org/abs/2409.05211\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their time and thoughtful comments. Reviewers found our work to be a meaningful and timely contribution to the field of Topological Deep Learning (TDL), which we find encouraging. Specifically, reviewers appreciated the generalization of prior methods into a unified framework, the practical utility of our TopoTune toolkit, and the extensive experiments showing competitive performance over previous TDL methods.\\n\\nAll reviewers identified similar areas for improvement, particularly regarding runtimes and complexity, as well as better clarifying and contextualizing our contribution. Here, we provide answers to these major comments shared by reviewers. We additionally upload the pdf of our revised paper and Appendix where we have directly addressed all comments (in blue in the text). A response to each individual reviewer\\u2019s comments is provided in the thread of the associated review. \\n\\n**1. Complexity and training times**\\nWe agree with reviewers TxXe and iHAm that complexity and training times are important features of any deep learning model. We thank both reviewers for catching this, as we believe that these additions strengthen our argument.\\n\\n- **A. Time complexity:** We now provide a time complexity analysis in Appendix C (referred to in main text at lines 475-478). This analysis shows that GCCNs achieve a compromise between time complexities of standard GNNs and of CCNNs. This is specifically due to GCCNs\\u2019 accommodation of lightweight per-rank neighborhood structures.\\n\\n- **B. Runtimes:** In our revised manuscript, we now report both model sizes and training times (Appendix G) on all experiments presented in Section 4, and briefly discuss them in Section 6.2 (paragraph GCCNs are smaller than CCNNs). We find that for smaller datasets (ex: MUTAG, Cora, Citeseer), GCCNs already train comparably or faster than CCNNs. For larger datasets however (ex: ZINC, PubMed) GCCNs train slower than CCNNs, but this is only due to an artifact of our implementation, which performs the on-the-fly graph expansion process before each forward pass of the model. We will move this expansion process into the preprocessing step, computing it only once and significantly speeding up the forward pass (see time complexity analysis).\\n\\n**2. Clarifying specific advantages of GCCNs**\\n\\n- **A. Analysis of performance versus size:** Reviewers TxXe and iHAm proposed to clarify the analysis of performance versus size provided in Figure 5. We edited the text to better contextualize this empirical observation. We also point to Fig. 8 in Appendix H which provides the same plots for all datasets. We respond directly to reviewers\\u2019 questions in the individual replies.\\n\\n- **B. Contextualizing the contribution:** Reviewers rJiq and kr8N asked for better contextualization of the research questions addressed by our work, and its usefulness in the context of TDL. We now clearly specify in our stated contributions the open problems of TDL that we address (see next reply), leveraging a recent position paper written by many of the field\\u2019s leaders. We also better describe current gaps in the field, and how our work builds upon previous GNN simulation works by respecting topological symmetry. We emphasize this contribution of respecting conserving topological symmetry in responding to Reviewer iHAm about previous works in higher-order GNNs. We also emphasize how TopoTune goes beyond a hyperparameter search tool by unlocking a new class of models (ex: per-rank neighborhoods, message functions from the GNN world) that we empirically validate to be helpful without an extensive training hyperparameter search. Specifically, hyperparameters like learning rate, batch size, embedded dimensions, readout function, and more are kept largely fixed. This is now specified at lines 420-423. (Continues in the next comment)\"}", "{\"title\": \"Response to reviewer iHAm (continued)\", \"comment\": \"**W3, continued**. Finally, on a more practical side, the fact that each cell is an entity in the underlying space allows us to select which higher-order interactions matter rather than relying on arbitrary structures induced by the k-hop neighborhoods of nodes. For instance, let\\u2019s consider a molecule to be our data. In the cellular domain, carbon rings of the molecule are directly incorporated as standalone features of the dataset. In a k-hop setting, this is not possible.\\n\\nWe believe this reply is also useful to partly clarify that two equally expressive architectures are not equivalent and that expressivity itself is not an exhaustive metric, as we explain more in detail in our reply to Q2.\\n\\nNevertheless, this very exciting progress in the GNN field (higher-order or not) speaks directly to TopoTune\\u2019s goal of making such advances more accessible to TDL. For example, it could be interesting to use PPGN++ as an $\\\\omega_\\\\mathcal{N}$ message-function (instead of the fairly vanilla GNNs used in the paper) and perform k-hop learning on the topological domain. Another important future research direction with TopoTune is using richer lighting mechanisms that better capture dataset topology (rather than adhering to strict theoretical domains like simplicial and cellular complexes), and thus potentially better highlight the contribution of TDL as a whole.\\n\\nReferences.\\n\\n[1] Bronstein, M., et al. \\\"Geometric deep learning: Grids, groups, graphs, geodesics, and gauges.\\\"\\n\\n[2] Bodnar, C, \\u201cTopological Deep Learning: Graphs, Complexes, Sheaves\\u201d\\n\\n[3] Battiloro, C, \\u201cSignal Processing and Learning over Topological Spaces\\u201d\\n\\n[4] Yang, et al. \\\"Simplicial convolutional neural networks.\\\" \\n\\n[5] Battiloro, C, et al. \\\"Generalized simplicial attention neural networks.\\\"\\n\\n[6] Barbarossa, S, et al. \\\"Topological signal processing over simplicial complexes.\\\"\\n\\n[7] Roddenberry, T. M, et al. \\\"Signal processing on cell complexes.\\\"\\n\\n\\n**Answers to questions**\\n\\n**Q1.** Answered in W3.\\n\\n**Q2.** Yes, in that sentence the former refers to Cell Encodings, and the latter to CCNNs \\u2013apologies for the complex formulation. We have rewritten that sentence in the paper to improve clarity. Additionally, we have expanded that paragraph to address the motivation behind our claim (Section 3, paragraph Retaining expressivity, but not generality). Indeed, as you pointed out, it lacked proper contextualization.\\n\\nWhen we say that a GNN running on the expanded graph is neither formally equivalent to nor a generalization of its TDL counterparts we straightforwardly mean that they are not the same model and one should not be used as a surrogate of the other. Indeed, a GNN running on the expanded graph does not take into account either the different ranks or the different neighborhoods, resulting in a rank- and neighborhood-independent message function. This collapses many relations induced by the topological domain and applies the same set of weights to the connections that survive the collapse. \\n\\nTake again a molecule as an example, represented as a combinatorial complex: bonds are modeled as edges (1-cells) and rings such as carbon rings are modeled as faces (2-cells). Two bonds can simultaneously share multiple neighborhoods. Two bonds can simultaneously share multiple neighborhoods. For instance, they could be lower adjacent because they have a common atom (0-cell) and, at the same time, also be upper adjacent because they are part of the same molecular ring (2-cell). Despite their different chemical meaning, the whole Hasse graph (i.e., the approach of [1]) would collapse these two relations (upper adjacent, lower adjacent) into one. Moreover, the resulting GNN would not be able to distinguish anymore which node of the Hasse graph was an atom or a bond or a ring in the original molecule, and would process all the connections with the same set of weights. \\n\\nTherefore, even if a GNN on the whole augmented Hasse graph of a combinatorial complex is as expressive in a WL sense as a CCNN on the CC, expressivity itself is not enough to employ a GNN rather than a TNN, as the resulting learning models are still inherently very different. In this sense, GCCNs are the first class of models to retain all the properties of [1] while being proper TDL models.\\n\\nReferences.\\n[1] Jogl et al. \\u201cExpressivity-preserving GNN simulation.\\u201d\\n\\n**Q3.** Answered in W1.\\n\\n**Q4.** We refer to reply 1A in our main reply to all reviewers. Memory wise, due to the large amount of experiments and necessity for distributed training, reporting memory complexity beyond the number of parameters was not possible in this timeframe, but would absolutely be important in future work.\\n\\n**Q5.** We\\u2019ve clarified this at Section 6.2\\u2014thank you for bringing it to our attention. We are using the 12K-graph subset of the ZINC dataset, as further detailed in Table 3, Appendix E2. We chose the subset version because it is most commonly used in graph (and TDL) benchmarking tasks.\"}", "{\"title\": \"Follow-up for feedback\", \"comment\": [\"We would like to ask you whether our responses have been helpful in addressing your raised points raised about weaknesses and questions. To summarize our response, have:\", \"better contextualized the performance contribution and TopoTune's utility as a whole;\", \"provided runtimes;\", \"explained how higher-order GNNs differ from Topological Neural Networks, including GCCNs\", \"answered questions about the inherent differences with GNNs beyond expressivity\", \"We would greatly appreciate hearing back from you on these points, as it would allow us to address any remaining issues and further improve the quality of our manuscript. In the case where this response has been helpful, we would kindly ask you reconsider the rating of the paper.\"]}", "{\"title\": \"Follow-up on Reviewer Feedback (#5)\", \"comment\": \"Hello again. We would really appreciate if Reviewer TxXe would reply to our rebuttal, and ask any remaining questions, if any. We repeat that your feedback has informed **adding new experiments, runtimes, time complexity, and clarifications to the main text**. We would really appreciate an updated review that reflects these improvements asked for by Reviewer TxXe.\"}", "{\"title\": \"Follow-up on Reviewer Feedback (#4)\", \"comment\": \"Hello Reviewer TxXe. We are following up a fourth time to ask if our rebuttal has addressed your concerns. Your input has informed many new improvements to the paper including **adding new experiments, runtimes, time complexity, and clarifications to the main text**. We would really, really appreciate hearing back from you about your thoughts on this. That way, if there are remaining issues, we\\u2019d be happy to clarify before the deadline.\\n\\nOtherwise, if you\\u2019re satisfied, we kindly ask that you reconsider the rating of our updated submission.\"}", "{\"comment\": \"Thank you for your response.\\n\\n> We chose our hyperparameters using TopoBenchmark defaults\\n>\\n\\nDoesn't this mean that hyperparameter search is still required, even if you ended up using the same hyperparameters for each GCCN model? Have you compared GCCN performance with the performance of CCNNs using the TopoBenchmark hyperparameter defaults?\\n\\n> From what we can see, PPGN++ does not include ZINC results in its evaluation.\\n>\", \"i_am_referring_to_table_3_in_https\": \"//arxiv.org/abs/2302.11556. If I am not mistaken, PPGN++ uses edge features. Can you confirm that the ZINC (12K) is the same as used in your evaluation?\\n\\nOverall, I am happy to see that you are actively adjusting the manuscript to incorporate the review feedback. I think that this will improve the quality of your work.\"}", "{\"comment\": \"Thank you! This fully answers my question. I agree that a brief explanation of the notation would help to avoid confusion.\"}", "{\"title\": \"Follow-up on reviewer feedback\", \"comment\": [\"We are following up on our previous responses to ask if they have been helpful in addressing the weaknesses and questions you laid out. To summarize our responses to these, have:\", \"explained how the mathematical definition of GCCNs is different from that of CCNNs and introduces novelty;\", \"explained how marking edge/node features leads to information loss that a GCCN, on the contrary, avoids;\", \"provided a stronger expressivity proof showing GCCNs are more powerful than CCNNs\", \"contextualized our democratization claim with concrete examples;\", \"explained point by point how GCCNs outperform CCNNs;\", \"contextualized the contributions within the field's open problems;\", \"answered questions about generality and how TopoTune is much more than a hyperparameter tool, as it unlocks a whole new landscape of models that are empirically shown, with little to no, training hyperparameter tuning, to perform better.\", \"We hope that these answers have addressed the Reviewer's concerns, and if there is anything else we can help clarify. Otherwise, in case the Reviewer is satisfied with our response and the clarifications, we'd greatly appreciate it if they could reconsider the rating of our submission.\"]}", "{\"title\": \"Second answer to rebuttal 3/2\", \"comment\": \"Zooming out a little: I hope my previous comments and questions do not sound gratuitously combative. As an official reviewer, I felt it important to convey my opinion that some claims are somewhat overemphasised, and some arguments and responses have been given in a way I found, in some cases, slightly deceptive.\\n\\nOther than this, in the above response I had additional comments to share and question to raise, which I invite the authors to concisely respond to.\"}", "{\"title\": \"Third answer to rebuttal\", \"comment\": \"Thank you. I will leave further comments and questions you could reply to in the time left.\\n\\n**Claims on value and novelty.**\\n\\n(i) My argument was referring to the symmetries you mentioned in your response, e.g. permutation invariance of cells. Marking strategies do not necessarily invalidate that. As for maintaining a strict hierarchical structure that is more an inductive bias than a symmetry.\\n\\n(ii) In the case one chooses an incidence neighbourhood, the strictly augmented Hasse graph would contain cells of different ranks, unless I missed something. See e.g. Fig. 3. Wouldn't a Transformer encoder applied to that intermingle their representations? I hope your answer to this will eventually clear up any outstanding confusion on this.\\n\\n(iii) What I meant with this is that nothing prevents one from using a per-rank update function in, say, a CWN msg-passing layer (https://arxiv.org/pdf/2106.12575), regardless of the fact that this is not common practice. At this point, at the same time, such an update function could learn to assign different weights to different messages, and, theoretically, potentially disregard some of them.\\n\\n**Open Problems 6, 11**\\n\\nAgain, why wouldn\\u2019t one be able to apply a standard CCNN to different domains by changing, for example, the lifting function? I understand that an easier infrastructure to do that would be helpful for the community, but, again, I am not sure I understand why this is something possible only with your proposed architecture and not with others.\"}", "{\"comment\": \"**Re Q1:** Thank you for clarifying the cause of the parameter size reductions in Fig 5. I am however still not quite sure I fully understand the reported differences. On the ZINC dataset you compared three GIN-based GCCN variants (orange: three per-rank neighborhoods, dark green: two neighborhoods, gray: three neighborhoods). With one $\\\\omega_{\\\\mathcal{N}}$ block per neighborhood $\\\\mathcal{N}$, I would expect the dark green variant to have the lowest parameter count. I assume that the difference is due the parameter count varying for per-rank and \\\"cross-rank\\\" neighborhoods. Is this assumption correct and, if yes, how do the GNN modules vary between different types of neighborhoods?\\n\\n**Re Q2:** Thank you! This fully answers my question. I agree that it would indeed be interesting to consider other inter-neighborhood aggregators as well.\\n\\n**Re Q3:** Thank you for providing additional references. The added context (re W1) and how your work relates to open problems in the field was also helpful.\\n\\nApart from the minor clarification request regarding the parameterization of the GNN modules, I have no further questions.\"}", "{\"title\": \"Response to Reviewer iHAm (#3)\", \"comment\": [\"Thank you for continuing to engage! Your feedback continues to help improve the manuscript and we are happy to hear you agree. To answer your questions:\", \"*Hyperparameters.* We did not do a traditional training hyperparameter search in the sense that we only considered one set of training hyperparameters for each combination of GCCN and task. This set of hyperparameters was selected from the defaults proposed by TopoBenchmark. If there was no default available, we picked the lowest value considered in Top Benchmarks reported grid search. As such, we compared :\", \"GCCN performance obtained with a combination of TopoBenchmark defaults/smallest-grid-search-value\", \"with\", \"CCNN performance obtained with TopoBenchmark tuned hyperparameters.\", \"Please let us know if that is more clear!\", \"*ZINC.* Thank you for specifying the Table, we were looking at the other PPGN++ paper (https://arxiv.org/pdf/1905.11136). Indeed, it does use edge features (p. 14, Appendix A1). Yes, we use the same ZINC-12k in our evaluation.\", \"Thank you again for your valuable feedback. We kindly ask that you consider updating your score to reflect the improvements your feedback has informed.\"]}", "{\"comment\": [\"First of all, I'd like to thank the authors for their extensive answers to my concerns and questions, the latter of which have been answered adequately. I specifically thank the authors for their time to explain in great detail the differences in modeling assumptions between higher-order GNNs and TNNs. Now, to the weaknesses:\", \"**Benefits of GCCNs vs. CCNNs**: I am largely convinced that your framework unifies many approaches in TDL and hence allows for an easier process for building models in practice. However, regarding the hyper-parameter tuning argument that you employ, **how did you select the hyper-parameters for GCCNs, if not by classical hyper-parameter tuning?**. Perhaps this is stated in your paper and I may have missed it. In any way, I would appreciate if the authors could elaborate on this.\", \"**Time complexity and runtimes**: I believe this concern is largely addressed. I have just a small follow-up question: You mention that the on-the-fly graph expansion could be instead implemented as a pre-processing step. Do I understand correctly that your current implementation runs the graph expansion for the same data point repeatedly at each training epoch, causing the higher runtime?\", \"**Comparison between higher-order GNNs and TNNs**: I just now realized that the best performance on ZINC reported in your paper is 0.19 MAE. Is the evaluation procedure here somehow different from that of standard GNNs? I am asking because 0.19 MAE on ZINC would be considered very poor performance in the graph learning setting; see e.g., the PPGN++ paper as an example, where even a simple GIN model surpasses 0.19 MAE. I would appreciate if authors could comment on this. So far I was under the impression that TDL methods perform very strongly on ZINC, maybe this was an oversight on my part.\", \"Anyway, more generally, while I understand that higher-order GNNs and TNNs use different modeling assumptions, I still think that a comparison between them would be interesting, both to draw more attention to the potential benefits of TDL, as well as to ensure that TDL methods can actually push the state-of-the-art on real-world problems. Considering the dataset choices made in this work and also other TDL works: **Can the authors explain why these datasets are typically used to benchmark TDL methods? Would it, in principle, be possible to apply TDL methods to arbitrary graph datasets to enable a fair comparison between GNNs and TNNs?**\"]}", "{\"title\": \"Response to reviewer kr8N\", \"comment\": \"Thank you for your detailed review and valuable feedback. We believe it has significantly helped to improve our paper. We address your main concerns (weaknesses (W) and questions (Q)) below:\\n\\n**W1. How GCCNs would unlock new operations and patterns.**\\n\\n**Bullet 1:** This is an important point. Comparing Equations 3 and 8:\\nThe collection of neighborhoods considered in CCNNs implicitly exclude per-rank neighborhoods. Specifically, in the definition of CCNNs by Hajij et al 2023 (Eq. 3), the notation \\\"N\\\" refers to the neighborhood N *across all ranks*. This means that, for example, if the incidence neighborhood is considered, all possible 1-hop rank incidences are considered (edges to nodes, faces to edges). It is not possible, for example, to only consider incidence between edges and nodes. GCCNs, on the contrary, are not constrained by this, hence unlocking new possible message-passing paths. This increased flexibility of GCCNs over CCNNs is precisely what allows them to outperform previous architectures. We realize the original text describing Eq. 3 was misleading, thank you for pointing this out.\\n\\nWhile $\\\\psi$s in Eq. 3 could potentially be neighborhood- and rank-dependent, by default the vast majority of CCNNs consider all of them to be of the same nature (Papillon et al. 2023). We have clarified the text to reflect this. Moreover, by definition, CCNNs are message-passing architectures, as the function $\\\\psi$ can at most define the message between cells. Contrary to this, nothing forces the $\\\\omega$s of Eq. 8 to be message passing based (e.g., by using a Transformer or MLP architecture). This introduces a completely new landscape of possible TDL models, which have until now been very focused on message passing (see response to Q3 for more).\\nWe have updated the descriptions of CCNNs (Eq. 3, Section 2) and GCCNs (Eq. 8, Section 4) to further clarify these points. \\n\\n**Bullet 2:** Regarding the possibility of \\u2018marking\\u2019 node and edge features to specify rank and neighborhood information, it is indeed a valid approach. In fact, it is precisely what simulation works like [1] do (see Section 3, paragraph Retaining expressivity, but not generality). However, the goal of our work is to design a model whose architecture (as opposed to features) naturally incorporates such inductive biases. Indeed, this fulfills TDL\\u2019s goal of preserving the topological symmetry of the domain, and requires an inherently different model. To better explain why this idea of incorporating inductive biases goes beyond marking features, we consider an example (now at lines 231-238).\\n\\nConsider a molecule, represented as a combinatorial complex: bonds are modeled as edges (1-cells) and rings such as carbon rings are modeled as faces (2-cells). Two bonds can simultaneously share multiple neighborhoods. For instance, they could be lower adjacent because they have a common atom (0-cell) and, at the same time, also be upper adjacent because they are part of the same molecular ring (2-cell). Despite their different chemical meaning, the whole Hasse graph (i.e., the approach of [1]) would collapse these two relations (upper adjacent, lower adjacent) into one. Moreover, the resulting GNN would not be able to distinguish anymore which node of the Hasse graph was an atom or a bond or a ring in the original molecule, and would process all the connections with the same set of weights. \\n\\nTherefore, even if a GNN on the whole augmented Hasse graph of a combinatorial complex is as expressive in a WL sense as a CCNN on the CC, expressivity itself is not enough to employ a GNN rather than a TNN, as the resulting learning models are still inherently very different. In this sense, GCCNs are the first class of models to retain all the properties of [1] while being proper TDL models. In other words, GCCNs still preserve the topological symmetry of the domain. \\n\\n[1] Jogl et al. \\u201cExpressivity-preserving GNN simulation.\\u201d\\n\\n**W2. Stronger expressivity proof.** We agree that Proposition 3 as it stood was not very interesting. We took your feedback to heart and now provide and prove a stronger proposition about expressivity. We show that GCCNs are strictly more expressive than CCNNs, using a higher-order analog to the k-WL test. The full proof is provided in Appendix B3. We provide a brief outline here:\\n\\n- We define WL-tests related to CCNNs and GCCNs, called CCWL and GCCWL.\\n\\n- The CCWL test is equivalent to a WL test on a strictly augmented Hasse graph.\\n\\n- The GCCWL test is at least as powerful as the k-WL test on a strictly augmented Hasse graph, for any k.\\n\\n- Using the fact that k-WL is more powerful than WL on graphs, we find a pair of two strictly augmented Hasse graphs that cannot be distinguished by WL but can be distinguished by k-WL. This yields two combinatorial complexes that cannot be distinguished by CCWL, but can be distinguished by GCCWL.\\n\\n- As such, GCCNs are strictly more expressive than CCNNs.\\n\\nSee next reply.\"}", "{\"title\": \"Follow-up on reviewer feedback (#2)\", \"comment\": [\"Hello reviewer r8N, we are following up about our recent response to your additional questions, which includes:\", \"rewordings of the expressions *\\\"increased flexibility\\\"* and *\\\"topological symmetry\\\"* to better explain the advantages of GCCNs;\", \"reasoning behind *Transformer* architecture;\", \"clarifications and better justifications in the manuscript about democratization claim;\", \"list of evidence of empirical results pointing to contribution of GCCNs over CCNNs;\", \"reworded paragraph headings in the Results subsection, per your suggestions;\", \"updated, stronger expressivity proof with accompanying figure for easy scanning.\", \"We are eager to hear back from you on these points, and address any remaining questions before the upcoming deadline. We believe your feedback has significantly helped improve the manuscript. In light of this, if there are no remaining questions or concerns, we would really appreciate if you considered updating your rating of the paper to reflect this.\"]}", "{\"title\": \"Thank you!\", \"comment\": \"We are happy to hear that we answered your remaining questions. Absolutely for ZINC, excluding edge features helps with comparisons. For hyperparameters, you are correct that tuning a new GNN could require extra tuning. That said, in our experiments, we used the same defaults across four separate GNNs (which TopoBenchmark did not include in their own grid search) and got good results.\\n\\nThank you very much for increasing your score. We believe our manuscript is now better because of your feedback and are very happy you agree.\"}", "{\"title\": \"Response to reviewer kr8N (continued, part 2)\", \"comment\": \"**W5. Clearer presentation of the motivations.** We refer to point 2B \\u201cContextualizing our contribution\\u201d in our main reply to all reviewers. We also refer to our answer to question Q2. At its heart, our work aims to address many open problems in this very young and emerging field. We hope that our previous answers to your raised points W1, W3, and W4 help further inform the value of the contribution, not only as a methodological advance but also as a tool for making the field more accessible and navigable.\\n\\n**Questions**\\n\\n**Q1.** By Proposition 1, proved in Appendix B1, GCCNs formally generalize CCNNs. As such, a CCNN is at most a special case of a GCCN. As a direct consequence, the two classes of architectures are not equivalent. Consider these two examples of how GCCNs go beyond CCNNs:\\nNon-message passing networks (ex.: MLP, Transformer, etc.) cannot be implemented in CCNNs. However, in a GCCN, such models can absolutely be implemented via $\\\\omega_\\\\mathcal{N}$. We provide an example in the paper of this using Transformer. This is a non-message passing architecture that is the building block of this GCCN.\\nCCNNs only explore 1-hop neighborhoods of the Hasse graph in a single layer. GCCNs, on the other hand, can leverage $k$-hop neighborhoods simply by choosing a $\\\\omega_\\\\mathcal{N}$ such as a GNN with $k$ layers. This is the basis of the proof of stronger expressivity provided in our response to W2.\\n\\nIf your question about expressed functions refers to universal approximations, this is unfortunately not within the scope of our work. Answering this would require extending universal approximation theory to TDL, which is a whole other line of work that the community has yet to tackle.\\n\\n**Q2.** Our experiment section asks the question: Can GCCNs provide an efficient graph-based alternative to CCNNs that performs just as well or better on many tasks? We believe that Tables 1 and 2 answer that question thoroughly. Our experiment section also provides observations that emerged from the empirical results, such as lottery ticket models and impact of choice of GNN. We have edited these sections for better clarity. \\n\\n**Q3.** We refer to our answer to W3, Bullet 2 as well as W4. We also further elaborate here. While it is true TopoTune provides an easy-to-use hyperparameter search tool, it also advances TDL by opening up a completely new landscape of models. TopoTune defines a rich space of novel architectures through modular components (that are engineered to be easily tunable). These components\\u2013 like per rank neighborhoods and graph-based, potentially non-message passing submodules (see answers to W1)\\u2013 enable previously unexplored model classes. These models, GCCNs, are in fact proven (proposition 1, Appendix B1) to generalize existing models (CCNNs). The experimental results obtained with TopoTune demonstrate that these architectural innovations drive performance gains, even though they are obtained without exhaustive training parameter searches (see answer to W3, bullet 2). TopoTune thus represents a fundamental reconceptualization of how topological neural networks can be constructed and composed. We have clarified this under the \\\"Accelerating TDL Research\\\" subsection of Section 5.\"}", "{\"title\": \"Third response to Reviewer kr8N (1/4)\", \"comment\": \"Thank you for your additional comments. We respond point by point below as concisely as possible.\\n\\n**\\u201cTopological symmetry\\u201d.** We apologize for the oversight \\u2013 we have removed the 2 instances we used this term (paragraph heading, introduction), which were both intended to introduce the topic at large rather than delve into it. We introduced this expression during rebuttal as an attempt to respond to Reviewer kr8N\\u2019s misunderstandings of GCCNs\\u2019 contribution. If the paper is accepted, there will be no mention of it in the camera-ready version.\\n\\n**Claims on value and novelty.**\\n\\n- (i) We think there continues to be a misunderstanding about the fundamental differences between GCCNs and marking-based GNNs. Marking strategies currently proposed by the literature do *necessarily* collapse topological information, as we detailed in our reply entitled \\u201cResponse to reviewer kr8N\\u201d sent Nov 19 and as we explicitly state in the manuscript (original and rebuttal versions) they are *not equivalent* to the CCNNs they seek to emulate. \\n- (ii) This is also not quite right. In GCCNs, Transformer modules only consider cells from one neighborhood at a time, thus avoiding this \\u201cblending\\u201d between the entire complex. It is true that in the current implementation, the Transformer module does collapse some connectivity, but only *within neighborhoods*, rather than within the whole complex. We mention that this does not perform nearly as well as other choices of $\\\\omega_\\\\mathcal{N}$. We also emphasize that the Transformer-GCCN is meant to be a first step towards non-message-passing models in higher order settings, and not meant to be the main focus of the paper. Indeed, it is only one GCCN parameterization among the many others we consider. We mention the word \\u201cTransformer\\u201d three times, all only in the context of hyperparameter lists (lines 71,119,418).\\n- (iii) This is not true. Previous architectures are not capable of accommodating per-rank neighborhoods (line 280) and are not constructed with \\u201cLego block\\u201d style modules that consider one neighborhood at a time. This novelty is inherently tied to our novel graph expansion mechanism, which is the first to represent a higher-order complex as a collection of strictly augmented Hasse graphs (line 258). As Proposition 1 (line 349) states, GCCNs generalize and subsume previous topological models (CCNNs). \\n\\n> \\u201cHowever, the authors' claims on the value and novelty of such a methodology are, in my opinion, overemphasised and justified with slightly deceptive arguments.\\u201d \\n\\nOur arguments of novelty are supported through a thorough motivation of the work in comparison to existing gaps (Section 3), clear enumeration of novel architectural contributions (lines 258, 280, 309), and proven theoretical statements (line 347) on generality, equivariance, and expressivity. Moreover, we refer to our reply to the Reviewer\\u2019s remarks (i,ii,iii) above.\\n\\n**reasoning behind Transformer architecture**\\n\\nWe agree that the Transformer-based GCCN architecture differs from the GNN-based GCCNs. We also agree that further developing non message-passing architectures is out of the scope of this work. We agree that this contribution should not be overemphasized, and stress that it is in fact not emphasized anywhere in the paper, beyond a short remark at line 371 that GCCNs imply a possibility for defining non message-passing architectures.\"}", "{\"title\": \"Reponse to Reviewer rJiq (#2)\", \"comment\": \"We are happy to hear that our answers were helpful! Addressing the additional question about GNN module parameterization:\\n\\n- *Small clarification 1.* On the ZINC dataset we in fact compared every single possible combination of GNN and neighborhood structure listed in lines 415\\u2013418. Figure 5 only shows three of these combinations because of the plot\\u2019s range, which only shows models that performed within 10% of the best model and within 40\\u2013100% of the best model\\u2019s parameter size. On ZINC, there are only 3 models, all GIN-based, who belong in this range.\\n- *Small clarification 2.* The neighborhood notation in Fig. 5 is not an accurate reflection of neighborhood size. The key here is that while the orange neighborhood is written in terms of per-rank neighborhoods (notation includes rank superscripts, see example at line 283), the green and gray neighborhoods are written in terms of regular, rank-independent neighborhoods (notation does not include rank superscript, see example at line 303). We were hoping to write out these neighborhoods in terms of their per-rank notation, but for some reason Open Review is having a very hard time displaying the math -- so sorry about this. We write them as clearly as possible in text:\\n\\n{N_{A, \\\\uparrow}, N_{A, \\\\downarrow}} = {N_{A, \\\\uparrow}^0, N_{A, \\\\uparrow}^1, N_{A, \\\\downarrow}^1, N_{A, \\\\downarrow}^2}\\n\\nN_{A, \\\\uparrow}, N_{A, \\\\downarrow}, N_{I, \\\\downarrow}\\\\} = N_{A, \\\\uparrow}^0, N_{A, \\\\uparrow}^1, N_{A, \\\\downarrow}^1, N_{A, \\\\downarrow}^2, N_{I, \\\\downarrow}^1, N_{I, \\\\downarrow}^2\\n\\nWe completely understand this could be a confusing choice of notation. To better clarify this distinction but still avoid having space issues, we have added to Fig. 5\\u2019s legend a symbol indicating which neighborhoods can only be expressed as per-rank, and mentioned it in the caption (line 526).\\n\\nThank you again for your continued feedback. We appreciate engaging with you and improving the paper through your input.\"}", "{\"title\": \"Response to reviewer rJiq\", \"comment\": \"We wish to express deep gratitude for your time and your thoughtful review. We are happy to read that the figures were helpful. We address the raised points about weaknesses and questions below.\\n\\n**W1 Contextualizing the contribution for TDL newcomers.** (Response 2B in main reply to all reviewers above). We completely understand that TDL might be an unfamiliar field, and appreciate your feedback about better contextualizing TopoTune\\u2019s contributions with respect to the field\\u2019s open problems. As such, we have baked into the paper\\u2019s stated contributions and conclusions a stronger connection with a position paper authored by many of the field\\u2019s leaders (https://arxiv.org/abs/2106.04051) which defines 11 open problems. This work fully or partially tackles 7 of these problems, which we elaborate upon in our second main reply to all reviewers above.\\n\\n**W2 Clarifying the simplicial vs. cellular domains.** We have incorporated in Appendix A a brief overview of the different topological domains of TDL. This addition is now referenced in Section 2 Background, and helps make this work more self-contained.\\n\\n**Math typos.** Thank you for pointing these out \\u2013we have fixed both.\\n\\n **Questions**\\n\\n1. Since a GCCN is made up of message-passing blocks, each individually assigned to a given neighborhood, modifying the amount of neighborhoods necessarily modifies the amount of message-passing blocks, which in turn modifies parameter size. You are correct that each GNN independently does not change in size (whether it be assigned to node-level adjacency, or face-to-edge incidence, for example), but the amount of GNNs used certainly does affect total size. We have also made sure to better clarify this in the paper in Section 6.2.\\n\\n\\n2. Each GCCN model is parametrized by choice of neighborhood structure, choice of topological domain, choice of message function (i.e. choice of GNN to use as building blocks), and choice of graph expansion method (we consider both ensembles of strictly augmented graphs and one augmented Hasse graph). For the purposes of this work, we only consider one set of intra- and inter- neighborhood aggregations, just to reduce the scope of comparisons already introduced by the previously mentioned parameters. Specifically, intra-neighborhood aggregation is left up to the choice of GNN, and inter-neighborhood aggregation is a sum. Considering and comparing various aggregators would be an interesting future research direction.\\n\\n\\n3. Architecture wise, this opens the door to newer advances in the graph learning field, going beyond \\u201cstandard\\u201d GNNs, such as graph-based MLP models (https://arxiv.org/abs/2106.04051), and diffusion models (https://arxiv.org/abs/2106.10934). Application wise, we envision TopoTune to be a tool for newcomers from the various fields which have already been shown to benefit from learning on higher order spaces, as outlined in Appendix B of TDL\\u2019s most recent position paper (https://arxiv.org/abs/2106.04051). These include: data compression, natural language processing, computer vision, chemistry, virus evolution, and more. Much of this interdisciplinary work is early and its future success is in part deeply tied to the accessibility of TDL, something we hope TopoTune will accelerate.\"}", "{\"summary\": \"The authors propose a general topological deep learning (TDL) architecture called Generalized Combinatorial Complex Network (GCCN). It aims to unify prior work on TDL under a common mathematical framework.\\nAdditionally, the authors provide the TopoTune library, a reusable software implementation of the proposed GCCN method.\\nThe experiments show that the flexibility of the GCCN framework allows it to match or outperform previously proposed TDL methods while, oftentimes, requiring fewer model parameters to do so.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"First, the proposed GCCN architecture (while fairly straight-forward) provides a useful framework for describing a large variety of TDL methods and it enlarges the design space for such methods.\\nThe experiments illustrate how this simplifies the optimization of TDL models and improving upon the state-of-the-art.\\nAdditionally, the authors show that GCCN can match or even outperform previously proposed approaches while requiring fewer parameters to do so.\\n\\nSecond, the provided TopoTune implementation of GCCN integrates with existing GNN and TDL libraries.\\nThis simplifies the exploration of novel TDL architectures and, as stated by the authors, could help accelerate research on TDL.\\nHowever, since I am not deeply familiar with the current literature on TDL and open problems, I can not confidently assess the relevance of this contribution.\\n\\nLast, I want to highlight the presentation. The paper is well structured and written. The figures are of high quality and helpful.\", \"weaknesses\": \"In Section 4 the authors show a number of theoretical properties of their proposed GCCN framework.\\nWhile certainly desirable, the value of those properties is limited. \\nAs stated by the authors themselves in the proofs in the supplement, those properties are, for the most part, fairly straight-forward.\\nAs far as I can tell, the GCCN framework is an intuitive generalization of prior work which only provides relatively small theoretical insights.\\nThe overall value of the contribution therefore seems to depend on the relevance of the previously described strengths of the paper, in particular, on the relevance of the provided TopoTune implementation.\\nHowever, as mentioned, I cannot fully assess this aspect.\\nThus, one potential general concern might be the overall relevance of the paper.\", \"apart_from_this_point_i_have_only_minor_suggestions_for_improvement\": \"1. I would have found a (brief) explanation of the evaluated types of combinatorial complexes (cellular vs simplicial) to be helpful.\\n2. There seem to be two small errors in the formal definitions in Section 2:\\n\\t- p. 3 (127): At $\\\\mathcal{P}(S) \\\\setminus \\\\{\\\\emptyset\\\\}$ it should probably read $\\\\mathcal{V}$ instead of $S$.\\n\\t- p. 3, eq. 2 (146): $\\\\mathrm{rk}(\\\\tau)$ after $\\\\exists\\\\ \\\\delta$ should be probably $\\\\mathrm{rk}(\\\\delta)$.\", \"questions\": \"1. In Figure 5, it did not become entirely clear to me why the parameter size is reduced by changing the neighborhoods. I would expect that the total number of parameters of the GNN modules are independent of the specific types of neighborhood used. However, as shown in Figure 5 this does not appear to be the case. Can you elaborate on what exactly you mean by parameter size and how it relates the the choice of neighborhoods?\\n2. It is not clear to me how exactly the GCCN models are parameterized in the different experiments. In particular, which intra- and inter-neighborhood aggregators were used for the different experiments?\\n3. In the conclusion, you state that you hope that TopoTune might help \\\"bridge the gap with other machine learning fields\\\". Apart from the connection GNNs (and possibly Transformer models), are there any specific fields you envision that might profit from such a connection?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up on Reviewer Feedback\", \"comment\": [\"We would like to ask you whether our response addressed your concerns, weaknesses, and questions so far. Also, we\\u2019d like to know whether you have any other questions? To summarize our first response, have:\", \"better contextualized the contribution in the context of the field;\", \"clarified domain definitions;\", \"fixed typos;\", \"provided clarity on model size, parametrization, and applications.\", \"We would greatly appreciate a prompt feedback, as it would allow us to clarify any remaining issues and further improve the quality of our manuscript.\"]}" ] }
2MLvV7fvAz
Spectro-Riemannian Graph Neural Networks
[ "Karish Grover", "Haiyang Yu", "Xiang song", "Qi Zhu", "Han Xie", "Vassilis N. Ioannidis", "Christos Faloutsos" ]
Can integrating spectral and curvature signals unlock new potential in graph representation learning? Non-Euclidean geometries, particularly Riemannian manifolds such as hyperbolic (negative curvature) and spherical (positive curvature), offer powerful inductive biases for embedding complex graph structures like scale-free, hierarchical, and cyclic patterns. Meanwhile, spectral filtering excels at processing signal variations across graphs, making it effective in homophilic and heterophilic settings. Leveraging both can significantly enhance the learned representations. To this end, we propose Spectro-Riemannian Graph Neural Networks (CUSP) - the first graph representation learning paradigm that unifies both CUrvature (geometric) and SPectral insights. CUSP is a mixed-curvature spectral GNN that learns spectral filters to optimize node embeddings in products of constant curvature manifolds (hyperbolic, spherical, and Euclidean). Specifically, CUSP introduces three novel components: (a) Cusp Laplacian, an extension of the traditional graph Laplacian based on Ollivier-Ricci curvature, designed to capture the curvature signals better; (b) Cusp Filtering, which employs multiple Riemannian graph filters to obtain cues from various bands in the eigenspectrum; and (c) Cusp Pooling, a hierarchical attention mechanism combined with a curvature-based positional encoding to assess the relative importance of differently curved substructures in our graph. Empirical evaluation across eight homophilic and heterophilic datasets demonstrates the superiority of CUSP in node classification and link prediction tasks, with a gain of up to 5.3\% over state-of-the-art models.
[ "Graph representation learning", "Spectral graph theory", "Riemannian geometry", "Non-Euclidean graph neural networks", "Geometric deep learning" ]
Accept (Poster)
https://openreview.net/pdf?id=2MLvV7fvAz
https://openreview.net/forum?id=2MLvV7fvAz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vhlYwuRImx", "tccksIrJpG", "lvOW0qr4MM", "lGBAmBvFap", "k3P7SdXmFC", "gsznCnAYqp", "cCXl9QHn3i", "RrdPixl0JD", "PB2nhrHOHf", "NsuG7LHbM7", "MiSWWT2HIc", "MB0ElRRmWO", "KP0m2z3CKg", "HJnNryGp75", "GzeLU9ezo7", "FfTm2yRExx", "DhwLGy0z8b", "B92yg0iMOZ", "8A1KPBdVJo", "7miNgLvDtk", "3xoRWeYmO1", "39LWcmLEyn", "0hY4ED9919" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review" ], "note_created": [ 1737523614679, 1732372294414, 1732385561015, 1732139943314, 1730141940801, 1730718965398, 1732119201822, 1733169519411, 1732120320133, 1731732980350, 1732120388620, 1732702604415, 1732119972028, 1732704038233, 1732645128624, 1732139803568, 1732140222349, 1732630725198, 1732120074563, 1732140397206, 1732120140057, 1734794707766, 1730470131421 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_UpZ1" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_JRrs" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_UpZ1" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_iwKN" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_nwYb" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_iwKN" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_iwKN" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Authors" ], [ "ICLR.cc/2025/Conference/Submission4024/Area_Chair_R3oc" ], [ "ICLR.cc/2025/Conference/Submission4024/Reviewer_iwKN" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thank you for your response and the work that went into preparing it! My questions and concerns are sufficiently addressed.\"}", "{\"title\": \"Any further concerns?\", \"comment\": \"We sincerely thank the reviewer for the thoughtful review and for taking the time to engage with our responses. We are glad to hear that our answers sufficiently addressed the raised questions and concerns. However, we noticed that the score has not been adjusted despite the positive feedback, and we were wondering if there are any remaining unanswered questions or additional concerns that might be preventing an increase in the score. If so, we would be more than happy to provide further clarifications or additional information to ensure that all the expectations are fully met. Thank you once again for your valuable time and constructive input!\"}", "{\"title\": \"Rebuttal Response to Reviewer iwKN (2/4)\", \"comment\": \"## **Novelties and Contributions**\\n> 4. The method is incremental. It\\u2019s hard to separate the new components in the paper and how they depend on the previous work. \\n\\nWe briefly reiterate the novelties in the paper here.\\n\\n1. **Cusp Laplacian**. We propose the CUSP Laplacian, a curvature-aware laplacian operator, incorporating Ollivier-Ricci curvature (ORC) to redefine edge weights, inspired from the heat-flow dynamics on a graph. This effectively combines local geometric properties with global graph structure, which is not addressed by traditional Laplacians. Importantly, this extends spectral graph theory to account for the geometry of the underlying data, a direction previously unexplored.\\n2. **CUSP Filtering**. We introduce a curvature-aware spectral filtering mechanism, generalizing a spectral GNN to mixed-curvature product manifolds. This filtering leverages the spectral properties of the CUSP Laplacian and adaptively learns frequency responses tailored to the geometry of the graph. This enables a seamless integration of geometric (curvature-based) and spectral (frequency-based) cues, making the model robust across both homophilic and heterophilic datasets. Traditional spectral methods lack this adaptability to changing geometric properties.\\n3. **Functional Curvature Encoding**. We design a novel functional mapping from curvature values to a mixed-curvature product manifold. This encoding acts as a positional encoding for the nodes, capturing geometric variations in substructures. It enables the model to dynamically adjust attention to different parts of the graph based on their curvature properties, which is particularly useful for tasks involving both homophilic and heterophilic graph datasets. To the best of our knowledge, such a geometric functional encoding mechanism has not been explored in graph learning.\\n4. **CUSP Pooling** Unlike prior pooling methods that rely solely on structural properties, CUSP pooling computes relative importance using both geometry (via curvature encodings) and spectral features. This dual consideration enables effective summarization of complex graph structures while preserving geometric and spectral information.\\n5. **Integrated Geometry-Spectral Framework**. To the best of our knowledge, this is the first attempt at seamlessly integrating geometry and spectral cues in a unified graph learning paradigm. By combining these two traditionally separate perspectives, our framework addresses limitations in existing graph neural networks, demonstrating superior performance across diverse tasks and datasets.\\n\\nCapturing the spectral information helps in universality i.e. in generalising across heterophilic and homophilic tasks since each require focus on different ends of the eigensoectrum, while capturing the geometry helps is learning optimal spatial representations based on differently curved substructures in the graph. An in-depth discussion of these motivations and limitations (L1 - L3) have been discussed in the paper. In summary, each component of our framework is novel and collectively contributes to advancing the state of the art in graph learning by unifying geometric and spectral insights.\\n\\n## **Discussion on Heat diffusion**\\n> 5. More details are needed for the heat diffusion equation and heat flow in Section 4.1. \\n\\nHere, we reiterate some discussion from Appendix 7.3. Suppose $\\\\psi$ describes a temperature distribution across a graph, where $\\\\psi(x)$ is the temperature at vertex $x$. According to Newton's law of cooling, the heat transferred from node $x$ to node $y$ is proportional to $\\\\psi(x) - \\\\psi(y)$ if nodes $x$ and $y$ are connected (if they are not connected, no heat is transferred). Consequently, the heat diffusion equation on the graph can be expressed as $\\\\frac{d \\\\psi}{dt} = -\\\\beta \\\\sum_y \\\\mathbf{A}_{xy}(\\\\psi(x) - \\\\psi(y))$, where $\\\\beta$ is a constant of proportionality and $\\\\mathbf{A}$ denotes the adjacency matrix of the graph. \\n\\nFurther insight can be gained by considering Fourier\\u2019s law of thermal conductance, which states that heat flow is inversely proportional to the resistance to heat transfer. ORC measures the transportation cost between the neighborhoods of two nodes, reflecting the effort required to transport mass between these neighborhoods. We interpret this transportation cost as the resistance between nodes. The vital observation here is that $-$ *Heat flow between two nodes in a graph is influenced by the underlying Ollivier-Ricci curvature (ORC) distribution}*. The diffusion rate is faster on an edge with positive curvature (low resistance), and slower on an edge with negative curvature (high resistance). We have further discussed why is $e^{-\\\\mathcal{R}_{xy}^{res}} = e^{\\\\frac{-1}{1-\\\\widetilde{\\\\kappa}(x, y)}}$ the right choice, in Appendix 7.3.\"}", "{\"summary\": \"This paper introduces CUSP, a graph representation learning model that integrates graph discrete curvature with a geometric extension of generalized PageRank. The method is comprehensively evaluated, demonstrating strong performance against several baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(+++) **Novelty and Relevance**: The proposed method is new and of interest to the geometric machine learning community.\\n\\n(+++) **Strong Empirical Evaluation and Performance**: The method is thoroughly evaluated, demonstrating improved performance on downstream tasks over the considered baselines.\", \"weaknesses\": \"There are several issues with the presentation that detract from the strengths. These concerns should be straightforward to address.\\n\\n(----) **Presentation**: The paper\\u2019s presentation is dense and, at times, unclear. Examples include:\\n* **Figure 3**: The figure is very busy. While each individual component is informative and of high quality, combining them without clear visual separators makes the overall figure difficult to interpret.\\n* **Section 4**: This section is similarly dense, as it combines mathematical background, theoretical motivations, and the presentation of the CUSP architecture. Perhaps this section could focus on the method and architecture, with theoretical discussions (e.g., of heat diffusion) moved to Preliminaries or a new dedicated section.\\n* **Notation**:\\n * The notation in Equations 2, 3, 5, 6, 7, and 8 is dense and may be difficult to parse for readers unfamiliar with geometric machine learning or the gyrovectorspace approach. (Also refer to \\\"Relation to Existing Literature\\\" below.) Adding equation annotations or using plain English function names (e.g., Enc for encoding) could improve readability.\\n * \\\"ORC for nodes\\\" is defined in line 176 without introducing the notation $\\\\tilde{\\\\kappa}(x)$ which is then used, e.g., in Equation 5. (There is a notation table in the appendix, but it does not cross reference the definition.)\\n* **Baseline Taxonomy**: The classification of baselines in Section 5 into \\\"spatial\\\" and \\\"Riemannian\\\" is inaccurate, as the Riemannian baselines are also spatial. \\\"Spatial-Euclidean\\\" and \\\"Spatial-Riemannian\\\" could be more accurate.\\n\\n(---) **Mathematical Motivation**: The justification for the Cusp Laplacian (Proposition 1) and Functional Curvature Encoding (Theorem 2) are more of rationales or motivations than rigorous proofs. For example, Proposition 1 motivates the Cusp Laplacian by introducing a modified resistance term in a heat flow equation. This would perhaps become clearer if presented as a definition, framed as, \\u201cIf one assumes a resistance of the form \\u2026,\\u201d which would help the reader recognize the principles from which the Cusp Laplacian is derived.\\n\\n(--) **Relation to Existing Literature**: Generalizing pipelines from Euclidean to Riemannian spaces by replacing Euclidean transformations with Moebius operations is a well-established pattern in geometric machine learning. Portions of this work follow this pattern, such as adapting PageRank GNN to product manifolds (Section 4.2) and using Moebius operations in the functional curvature encoding (Section 4.3) and cusp pooling (Section 4.4). Early works in hyperbolic graph neural networks such as [1] introduced these operations with clear motivation, via, e.g., illustrations of log and exp maps between manifolds and their tangent space. Since then, these operations have also been more broadly understood and interpreted within the framework of gyrovector spaces, aligning with Ungar's original work cited in the paper. See, e.g., [2-8]. In this work, however, the geometric components of the model are not similarly well-motivated. Perhaps a brief motivation for the operations would be helpful.\\n\\n\\n[1] Chami, I., Ying, Z., R\\u00e9, C., & Leskovec, J. (2019). Hyperbolic graph convolutional neural networks. NeurIPS.\\n\\n[2] Hatori, O. (2017). Examples and Applications of Generalized Gyrovector Spaces. Results in Mathematics.\\n\\n[3] Kim, S. (2016). Gyrovector Spaces on the Open Convex Cone of Positive Definite Matrices. Mathematics Interdisciplinary Research.\\n\\n[4] L\\u00f3pez, F., Pozzetti, B., Trettel, S., Strube, M., & Wienhard, A. (2021). Vector-valued distance and gyrocalculus on SPD matrices. NeurIPS.\\n\\n[5] Nguyen, X. S. (2022). The Gyro-structure of some matrix manifolds. NeurIPS.\\n\\n[6] Nguyen, X. S., & Yang, S. (2023). Building neural networks on matrix manifolds: A Gyrovector space approach. ICML.\\n\\n[7] Nguyen, X. S., Yang, S., & Histace, A. (2024). Matrix Manifold Neural Networks++. ICLR.\\n\\n[8] Zhao, W., Lopez, F., Riestenberg, J. M., Strube, M., Taha, D., & Trettel, S. (2023). Modeling graphs beyond hyperbolic: GNNs in SPD matrices. ECML PKDD.\", \"questions\": \"1. Do you use the same neural networks $f_\\\\theta$ in Line 263 for all component spaces?\\n2. How is the Riemannian projector $g_\\\\theta$ in Line 347 defined?\\n3. Could you clarify how Theorem 1 applies to CUSP? What insight does Theorem 1 provide?\\n4. How does the runtime of your pipeline compare to other baselines?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Spectro-Riemannian Graph Neural Networks (CUSP), a novel approach to graph representation learning that combines spectral and curvature information into a spectral graph neural network operating on a manifold arising as a product of Euclidean as well as hyperbolic and spherical spaces. Traditional graph neural networks (GNNs) often struggle with diverse graph topologies and varying graph geometries/curvatures. CUSP addresses these challenges by enabling to integrate (geometric) features from both negatively (hyperbolic) and positively (spherical) parts of a given graph. This allows for the creation of more natural node embeddings that better align with the underlying structure of real-world graphs.\\nKey components of CUSP include (1) The Cusp Laplacian, which integrates Ollivier-Ricci curvature into the definition of a Laplacian-type matrix on the graph. (2) Cusp Filtering which allows for (high- and low-pass) filtering operatoions on a product manifold where each factor has constant positive, zero, or negative curvature. (3) Cusp Pooling A hierarchical attention mechanism that evaluates the importance of substructures with different curvature characteristics.\\nIn the empirical evaluation CUSP's performance is investigated across eight datasets. Here CUSP achieves a good performance (in node classification and link prediction tasks), with sometimes substantial gains over baselines.\\nThe research seems to be novel and highlights the potential that combining geometric and spectral information harbours for the design of GNNs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well structured and mostly (please see next section) well written. A positive example is the explicit description of the limitations of previous work that are addressed in this paper (i.e. L1-L3 in the introduction).\\n\\n Including the notation table in Appendix 7.1 helps to keep track of the various mathematical concepts.\\n\\nThe idea underlying the introduced CUSP Laplacian of including curvature information into the Laplacian-matrix describing the graph is neat. \\n\\nFurthermore the idea of taking into account locally varying curvature structures by allowing for factors of varying curvature in the product manifold is nice. \\n\\nThe performance of the proposed architecture in both the node-classification and link-prediction tasks on the considered datasets is solid. \\n\\nThe ablation study on the impact of the signature of the product manifold structure (c.f. Section 5.1) as well as the surrounding discussion is illuminating.\", \"weaknesses\": \"I do have _some_ concerns regarding readability. An easy fix is the image quality in Figure 1. Here the axis labeling of the histograms is not readable if the paper is printed. Could the authors please fix this by including higher resolution images and/or using a larger font size for axis labeling.\\n\\nRegarding the paper itself, aside from some typos and grammatical errors that do not impede the flow of the paper too much, I had trouble understanding Section 4.3; especially from line 327 onward: The curvature kernel is defined twice: once using an inner product in an ambient space, and once as a translation invariant entity. I believe what the authors want to convey is that the former definition as utilized in the paper already leads to a translation invariant kernel. Is this correct?\\n\\nAlso the significance of the Bochner-Minlos theorem is not immediately apparent to me. I only gained some intuition about this after reading the proof of Theorem 2 and Theorem 3 in the Appendix. Could the authors comment more explicitly on the significance of Bochner's theorem here? \\n\\nIt might also be good to explain (to some extent) and motivate the k-stereographic model in the main text. \\nWhile I have some background in differential geometry, I had only ever come across standard stereographic projections.\\nEspecially for readers from a pure CS/EE background more details here might be useful, even if the model might be central to Riemannian GNNs. \\n\\nIn the same direction, it would also be good explain a bit more the respective operations in Table 8 in Appendix 7.2.4 and how they are natural.\\n\\n\\nFinally, in the experimental sections, the datasets that are being used are somewhat old and fairly small. I strongly encourage the authors to also consider newer and much larger datasets (e.g. using OGBN) and compare their approach with baselines there.\", \"questions\": \"1) What makes GPR special and why is this used as a backbone? I believe it is equivalent to any polynomial-filter spectral GNN. Why not e.g. use ChebNet, etc.?\\n\\n2) The (total) dimensions of the product manifolds of e.g. Table 5 and Table 2 seem to always be $d=48$. Yet in Table 13 of Appendix 6.6.4 it is indicated that $d = 64$ is the selected dimension of the product Manifold. Could the authors comment? Also, while I have seen Algorithm 1 in Appendix 7.6.5 (which seems to be used as an initial heuristic), it is still not clear to me how individual dimensions of product manifolds and respective factors are found/optimized. Is a grid search performed? If so what are the respective allowed values during this grid search? Could the authors clarify (again?)?\\n\\n3) In the ablation study on the impact of the CUSP Laplacian, performance using the CUSP Laplacian is compared to performance using the adjacency matrix instead. Could the authors repeat this ablation study comparing with the usual normalized and unnormalized graph Laplacians? This would shed light on whether the performance increase comes from using a Laplacian-type matrix vs. an adjacency type matrix, or indeed from the specific properties of the the Cusp Laplacian.\\n\\n4) Is it possible to include and discuss some basic spectral characteristics of the CUSP Laplacian (beyond self-adjointness and positivity)? As is, this new matrix is introduced without too much intuition beyond the heat-flow heuristic. Can something e.g. be said about the maximal eigenvalue or first-non-trivial eigenvalue (in a Cheeger-type fashion) for example? I realize the present paper is not mainly theoretical but introduces an architecture. However some additional theoretical foundation would indeed be nice.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Response to Reviewer nwYb\", \"comment\": \"We sincerely thank the reviewer for the valuable and constructive feedback and are glad you like the paper. We now address the raised concerns and conduct relevant experimentation as suggested.\\n\\n> How to prove that the proposed alternative method is useful in large real-world graphs? Are there any experiments? \\n\\n## **Why is the ORC method useful for large graphs?**\\n\\nAs discussed in **Appendix 7.2.2**, the $\\\\texttt{ORC}$ of an edge can be approximated as the arithmetic mean of the bounds in Theorem 4 (Appendix 7.2.2) as $\\\\widehat{\\\\kappa}(x, y) := \\\\frac{1}{2} \\\\left( \\\\kappa^{\\\\text{upper}}(x, y) + \\\\kappa^{\\\\text{lower}}(x, y) \\\\right)$. This approximation relies solely on local structural information, such as the degree of the nodes and the number of triangles, making the computation highly efficient, with linear-time complexity. Moreover, since the ORC computation is localized to the neighborhood of each edge, it can be parallelized across multiple GPUs, making the method well-suited for scaling to very large graphs.\\n\\nThat said, training and evaluating the entire CUSP architecture on billion-scale graphs is a challenging task due to several factors, including the need for enhanced numerical stability in Riemannian optimization, increased training time, and memory constraints associated with large-scale product manifolds. Addressing these challenges requires significant engineering efforts that extend beyond the scope of the rebuttal timeframe. However, in light of this thoughtful suggestion and keeping in mind the limited rebuttal time, we have conducted additional experimentation on homophilic and heterophilic, **million-scale** graph datasets that are significantly larger than the eight benchmark datasets initially included in the submission. For all these experiments we adopt the **linear-time ORC approximation** discussed above.\\n\\n## **Additional experimentation**\\n\\nWe evaluate the performance of CUSP for **five** (2 homophilic and 3 heterophilic), **million-scale datasets**. For all these experiments we adopt the ORC approx. discussed above. \\n\\n| **Type** | **Dataset** | **Nodes** | **Edges** | **Input Features** | **Classes** | **Homophily Score** |\\n|--------------------|------------------|---------------|----------------------|-------------------------|------------------|-------------------------|\\n| **Homophilic** | `ogbn-arxiv`[1] | 169,343 | 1,166,243 | 128 | 40 | 0.632 |\\n| **Homophilic** | `ogbn-mag`[1] | 736,389 | 5,416,271 | 128 | 349 | 0.315 |\\n| **Heterophilic** | `twitch-gamer`[2] | 168,114 | 6,797,557 | 7 | 2 | 0.973 |\\n| **Heterophilic** | `snap-patents`[2] | 2,923,922 | 13,972,555 | 269 | 5 | 0.266 |\\n| **Heterophilic** | `tolokers`[3] | 11,758 | 1,038,000 | 10 | 2 | 0.634 |\\n\\nWe hereby report the F1-Score results for node classification task on the above datasets for several (Riemannian and Spectral) baselines and `CUSP`.\\n\\n| **Model** | **`ogbn-arxiv`** | **`ogbn-mag`** | **`twitch-gamer`** | **`snap-patents`** | **`tolokers`** |\\n|------------------|---------------------|---------------------|---------------------|---------------------|---------------------|\\n| **GCN** | 63.46\\u00b11.34 | 17.22\\u00b11.75 | 68.11\\u00b11.11 | 42.87\\u00b10.98 | 74.24\\u00b10.86 |\\n| **ChebNet** | 61.34\\u00b12.01 | 18.23\\u00b13.08 | 68.56\\u00b10.11 | 43.64\\u00b10.87 | 79.55\\u00b10.63 |\\n| **BernNet** | 46.64\\u00b10.87 | 16.87\\u00b10.23 | 66.34\\u00b10.52 | 49.23\\u00b10.45 | 65.36\\u00b13.45 |\\n| **FiGURe** | 71.23\\u00b10.23 | 33.65\\u00b11.56 | 72.54\\u00b12.26 | 40.34\\u00b10.75 | 83.09\\u00b11.08 |\\n| **KGCN** | 65.78\\u00b11.75 | 29.34\\u00b10.56 | 73.35\\u00b10.55 | 48.24\\u00b10.22 | 78.43\\u00b11.53 |\\n| **QGCN** | 70.24\\u00b11.54 | 32.44\\u00b11.97 | 74.24\\u00b13.64 | 45.85\\u00b10.29 | 81.07\\u00b10.01 |\\n| **`CUSP`** | **75.34\\u00b10.88** | **37.23\\u00b10.09** | **79.94\\u00b10.74** | **51.54\\u00b10.13** | **89.53\\u00b11.02** |\\n\\nConsistent with the results in the paper, `CUSP` outperforms all baselines by a significant margin for this task. We will include these results (and the corresponding analysis) in the camera-ready version, as suggested by the reviewer. \\n\\n***References***\\n\\n[1] Hu et al. Open Graph Benchmark: Datasets for Machine Learning on Graphs (NeurIPS 2020)\\n\\n[2] Lim et al. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods (NeurIPS 2021)\\n\\n[3] Platonov et al. A critical look at evaluation of gnns under heterophily (ICLR 2023)\"}", "{\"comment\": \"Thank you for your responses. Most of my concerns have been addressed. However, I still find the hyperparameter details for the competing methods insufficient. I am assigning a borderline score but do not object to possible acceptance.\"}", "{\"title\": \"Rebuttal Response to Reviewer JRrs (1/2)\", \"comment\": \"We sincerely thank the reviewer for their valuable assessment, detailed analysis, and constructive feedback. We sincerely apologise for the oversights and hereby address the raised questions and concerns.\\n\\n## **Questions Raised**\\n> 1. Do you use the same neural networks in Line 263 for all component spaces?\\n\\nYes. The input feature matrix $\\\\mathbf{F}$ is passed *just once* through the neural network $f_{\\\\theta}(.)$. Post this, for every manifold component, a *separate* exponential map is taken over the generated initial features, since different component manifolds have different exponential maps. This is done as shown in Equation 2.\\n\\n> 2. How is the Riemannian projector in Line 347 defined?\\n\\nThe Riemannian projector $g_{\\\\theta}: \\\\mathbb{P}^{d_{\\\\mathcal{M}}} \\\\rightarrow \\\\mathbb{P}^{d_{\\\\mathcal{C}}}$ is implemented as a simple multi-layer perceptron (MLP) (2-layer in our implementation) with Riemannian linear layers. This choice of $g_{\\\\theta}$ enables us to reduce the dimensionality from the ambient product manifold $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ to the desired curvature encoding dimension $d_{\\\\mathcal{C}}$. For computational efficiency, we just construct one product manifold $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ (instead of separate product manifold classes for both curvature embedding and filtered representations). As a result, $\\\\mathbf{exp}^{\\\\kappa_{(q)}}$ is taken on the $q^{th}$ component manifold with curvature $\\\\kappa_{(q)}$, on the manifold $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ and has a resultant dimension $d_{\\\\mathcal{M}}$. Now, to project this to the dimension $d_{\\\\mathcal{C}}$ (so as to result in a $\\\\mathcal{C}-$ dimensional curvature encoding), we use the projector MLP $g_{\\\\theta}: \\\\mathbb{P}^{d_{\\\\mathcal{M}}}\\\\rightarrow \\\\mathbb{P}^{d_{\\\\mathcal{C}}}$. We will add this explanation to a separate section in the Appendix, in the camera-ready version.\\n\\n### **Runtime Comparision with baselines**\\n> 3. How does the runtime of your pipeline compare to other baselines?\\n\\n\\n***Average training runtime per epoch** (in **milliseconds**) comparision (preprocessing excluded) of CUSP against relevant baseline for Node Classification task.*\\n\\n| Model | L + 1 (Filters) | Q (Components) | `Cora` | `Squirrel` |\\n|-------------|-----------------|----------------|----------|-----------|\\n| `GPRGNN` | 1 | 1 | 18.725 ms | 26.78 ms |\\n| `FiGURe` | 4 | 1 | 110.644 ms | 241.24 ms |\\n| `SelfMGNN` | 1 | 3 | 166.324 ms | 212.223 ms |\\n| $\\\\texttt{CUSP}_{euc}$ | 4 | 1 | 20.814 ms | 24.916 ms |\\n| $\\\\texttt{CUSP}_{fil}$ | 1 | 3 | 107.801 ms | 124.454 ms |\\n| $\\\\texttt{CUSP}$ | 4 | 3 | 225.014 ms | 264.459 ms |\\n\\n**Analysis**\\n\\nThe reported statistics reflect the average training runtime per epoch (excluding preprocessing) for $\\\\texttt{CUSP}$ and relevant baselines on the Node Classification task. For $\\\\texttt{CUSP}$, the configuration includes 4 filters in the filter bank and a product manifold $\\\\mathbb{H}^{16} \\\\times \\\\mathbb{S}^{16} \\\\times \\\\mathbb{E}^{16}$ (3 components). Here are key observations based on the Cora dataset:\\n1. **Efficiency of $\\\\texttt{CUSP}_{euc}$**\\n- The Euclidean-only variant with 4 filters, achieves a runtime of 20 ms, comparable to GPRGNN (18 ms) which uses a single filter. This demonstrates that even with additional filters, $\\\\texttt{CUSP}_{euc}$ maintains efficient performance while benefiting from its richer spectral design.\\n2. **Balanced Runtime for Full $\\\\texttt{CUSP}$**\\n- The full $\\\\texttt{CUSP}$ model (4 filters 3 manifold components) runs in 225 ms, which, while higher than GPRGNN (18 ms), is reasonable given the added complexity of jointly modeling spectral and geometric features. Notably, $\\\\texttt{CUSP}$ is more efficient than the combined runtime of FiGURe (110 ms, spectral only) and SelfMGNN (166 ms, geometric only), highlighting its ability to unify spectral and geometric insights efficiently.\\n3. **Improved Efficiency for $\\\\texttt{CUSP}_{fil}$**\\n- $\\\\texttt{CUSP}_{fil}$, with only 1 filter and 3 manifold components, achieves a runtime of 107 ms, significantly faster than SelfMGNN (166 ms) with the same number of filters and components.\\n\\nThus, $\\\\texttt{CUSP}$ offers a robust balance of computational efficiency and expressivity. Despite being slightly slower than simpler spectral-only baselines like GPRGNN, it remains computationally reasonable while delivering superior performance through its unified spectral-geometric design. This demonstrates that the additional complexity of CUSP.\"}", "{\"summary\": \"This paper is the first attempt that unifies spectral and curvature signals in graph representation learning. CUSP, a mixed-curvature spectral GNN, introduces three novel components: a curvature-aware Cusp Laplacian operator, a mixed-curvature spectral graph filtering framework, Cusp Filtering, and a curvature embedding method using classical harmonic analysis and a hierarchical attention mechanism called Cusp Pooling. CUSP records state-of-the-art performance on eight real-world benchmarking datasets for Node Classification (NC) and Link Prediction (LP) tasks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Originality. I think the proposed CUSP is innovative in the paradign of graph representation learning. CUSP is well-motivated, and indeed improves the performance significantly.\", \"Quality. The paper is well structured. A large number of experiments have been conducted, which are supportive and convincing.\", \"Clarity. The method proposed is deeply explored and proofs are presented as well as a detailed description of the algorithm and the rational for the design consideration that were made. The experimental section is typically sufficient. And ablation experiments fully illustrate the effectiveness of each component.\", \"Significance. The paper shows state-of-the-art in standard benchmarks. This paper explores how to integrate spectral and curvature signals in graph representation learning, and gives some valuable insights.\"], \"weaknesses\": \"As mentioned in the questions, the proposed OCR alternative method should be described in detail, or the proposed method should be used in the experiment.\", \"questions\": \"In the appendix, the complexity of ORC is analyzed, and it is mentioned that an alternative method is provided to approximate edge ORC in linear time, which is applicable to very large (billion-scale) real-world graphs. How to prove that the proposed alternative method is useful in large real-world graphs? Are there any experiments? As far as I know, the dataset used in the experiment is on the order of 100,000.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Response to Reviewer JRrs (2/2)\", \"comment\": \"> 4. Could you clarify how Theorem 1 applies to CUSP? What insight does Theorem 1 provide?\\n\\nThank you for raising this question. Theorem 1 in our paper provides a crucial theoretical foundation for understanding how GPR weights can be leveraged to design both low-pass and high-pass graph filters. Specifically, the theorem states that depending on the initialization of the filter weights can exhibit either low-pass or high-pass behavior. CUSP leverages these adaptive properties of GPR filters to construct a filter bank that spans low-pass, high-pass, and band-pass behaviors. This flexibility is directly relevant to CUSP\\u2019s ability to handle graphs with varying homophilic and heterophilic structures. We encourage the reader to refer to the detailed discussion in [1], where the mathematical derivations and empirical validations of these properties of GPR are presented.\\n\\n[1] Chien et al. Adaptive universal generalized pagerank graph neural network. arXiv preprint arXiv:2006.07988, 2020.\\n\\n\\n## **Concerns raised**\\n\\n> Figure 3: The figure is very busy. \\n\\nBased on this valuable suggestion we have **improved the Figure 3** in the revised version of the draft, and tried to improve it's visual readability (separability, font sizes, etc.). Kindly check out the resubmission for the updated figure. \\n\\n> \\\"ORC for nodes\\\" is defined in line 176 without introducing the notation \\n which is then used, e.g., in Equation 5. (There is a notation table in the appendix, but it does not cross reference the definition.)\\n\\nThe Ollivier-Ricci curvature $\\\\widetilde{\\\\kappa}(x)$ for a node $x$ is defined as the average curvature of its adjacent edges i.e. $\\\\widetilde{\\\\kappa}(x) = \\\\frac{1}{|\\\\mathcal{N}(x)|}\\\\sum_{z\\\\in \\\\mathcal{N}(x)} \\\\widetilde{\\\\kappa}(x, z)$. We apologise for this oversight and the corresponding notation has been included in the updated draft of the paper.\\n\\n> The classification of baselines in Section 5 into \\\"spatial\\\" and \\\"Riemannian\\\" is inaccurate, as the Riemannian baselines are also spatial. \\\"Spatial-Euclidean\\\" and \\\"Spatial-Riemannian\\\" could be more accurate.\\n\\nKeeping in mind this valuable suggestion, we have **renamed the taxonomy** in the rebuttal revision of our manuscript, to -- \\\"Spatial-Riemannian\\\" and \\\"Spatial-Euclidean\\\", as this makes more intuitive sense. Thank you for helping us improve the clarity of our paper.\\n\\n> The justification for the Cusp Laplacian (Proposition 1) and Functional Curvature Encoding (Theorem 2) are more of rationales or motivations than rigorous proofs. \\n\\nWe sincerely thank the reviewer for highlighting the need for clearer framing of Proposition 1 and Theorem 2, particularly regarding their role as motivations rather than formal derivations. To address this concern, we have renamed Proposition 1 and Theorem 2 as *Definition 1* and *Definition 2*, respectively. They have been presented as: \\\"the functional curvature encoding is defined as .. \\\" and \\\"the Cusp Laplacian operator takes the form ..\\\". This renaming aligns with the intent of these sections to serve as foundational definitions (with derivations and motivations) rather than rigorous proofs.\\n\\n> In this work, however, the geometric components of the model are not similarly well-motivated. Perhaps a brief motivation for the operations would be helpful.\\n\\nSome brief motivation of these operations has been discussed in Section 3 and Appendices 7.2.3 - 7.2.4. To further address this concern, we will add a subsection in the Appendix (camera ready version) explicitly illustrating the geometric motivation behind these operations, including:\\n- Visualizations of the log and exp maps, highlighting their role in transitioning between the manifold and tangent spaces.\\n- A summary of gyrovector spaces and their connection to the Moebius transformations used in our model.\\n- Specific examples illustrating how the curvature-aware operations enhance modeling capabilities compared to Euclidean counterparts.\"}", "{\"comment\": \"Thank you for your detailed response. I found your clarification improves the presentation of the paper.\\n\\nI have another question. Could you clarify why the performance of OptBasisGNN, ChebNetII, JacobiConv, and Specformer on node classification tasks is lower in your experiments compared to the results reported in their original papers? The reported performance in those papers appears higher than what is shown here.\"}", "{\"title\": \"Rebuttal Response to Reviewer UpZ1 (1/3)\", \"comment\": \"We sincerely thank the reviewer for their valuable assessment, detailed analysis, and constructive feedback. We hereby address the raised questions and concerns.\\n\\n## **Additional experimentation**\\n\\n> 1. I strongly encourage the authors to also consider newer and much larger datasets (e.g. using OGBN).\\n\\nWe appreciate and thank the reviewer for raising this point of concern. We evaluate the performance of CUSP for the following **five** (two homophilic and three heterophilic), **million-scale datasets**. See our comment (https://openreview.net/forum?id=2MLvV7fvAz&noteId=cCXl9QHn3i) for the statistics of these datasets.\\n\\nWe hereby report the F1-Score results for node classification task on the above datasets for several (Riemannian and Spectral) baselines and `CUSP`.\\n\\n| **Model** | **`ogbn-arxiv`** | **`ogbn-mag`** | **`twitch-gamer`** | **`snap-patents`** | **`tolokers`** |\\n|------------------|---------------------|---------------------|---------------------|---------------------|---------------------|\\n| **GCN** | 63.46\\u00b11.34 | 17.22\\u00b11.75 | 68.11\\u00b11.11 | 42.87\\u00b10.98 | 74.24\\u00b10.86 |\\n| **ChebNet** | 61.34\\u00b12.01 | 18.23\\u00b13.08 | 68.56\\u00b10.11 | 43.64\\u00b10.87 | 79.55\\u00b10.63 |\\n| **BernNet** | 46.64\\u00b10.87 | 16.87\\u00b10.23 | 66.34\\u00b10.52 | 49.23\\u00b10.45 | 65.36\\u00b13.45 |\\n| **FiGURe** | 71.23\\u00b10.23 | 33.65\\u00b11.56 | 72.54\\u00b12.26 | 40.34\\u00b10.75 | 83.09\\u00b11.08 |\\n| **KGCN** | 65.78\\u00b11.75 | 29.34\\u00b10.56 | 73.35\\u00b10.55 | 48.24\\u00b10.22 | 78.43\\u00b11.53 |\\n| **QGCN** | 70.24\\u00b11.54 | 32.44\\u00b11.97 | 74.24\\u00b13.64 | 45.85\\u00b10.29 | 81.07\\u00b10.01 |\\n| **`CUSP`** | **75.34\\u00b10.88** | **37.23\\u00b10.09** | **79.94\\u00b10.74** | **51.54\\u00b10.13** | **89.53\\u00b11.02** |\\n\\nConsistent with the results in the paper, `CUSP` outperforms all baselines by a significant margin for this task. We will include these results (and the corresponding analysis) in the camera-ready version, as suggested by the reviewer. Once again, we would like to thank the reviewer for pointing this out and giving us the opportunity to demonstrate this set of results. \\n\\n## **Why GPR as the backbone? (over ChebNet, etc)**\\n> 2. What makes GPR special and why is this used as a backbone? I believe it is equivalent to any polynomial-filter spectral GNN. Why not e.g. use ChebNet, etc.?\", \"we_further_elaborate_on_why_gpr_is_an_ideal_backbone_when_compared_to_other_spectral_gnns_like_chebynet\": \"1. **Adaptive Filter Design**: GPR learns filter coefficients directly, allowing the spectral response to adapt to the task and dataset. This flexibility is critical for modeling both homophilic and heterophilic graphs.\\n2. **Universality**: Unlike fixed low-pass filters like ChebNet, which excel primarily in homophilic settings, GPR\\u2019s learnable filters enable it to balance low-pass and high-pass components, making it suitable for both homophilic and heterophilic graphs. This is one of the main goals of our paper - to achieve superior performance on homophilic and heterophilic tasks. Fixed polynomial filters in ChebNet and Bernstein-based methods approximate spectral responses up to a fixed order, limiting their ability to model complex spectral properties. \\n3. **GPRGNN prevents oversmoothing**: GPR weights are adaptively learnable, which allows GPR-GNN to avoid over-smoothing and trade node and topology feature informativeness. See Section 4 of GPRGNN [4] paper for more theoretical analysis on the same and proofs, which is beyond the scope of this work.\\n4. GPR not only mitigates feature over-smoothing but also works on highly diverse node label patterns (See Section 4 and 5 of [4]).\\n5. **Capturing node features and graph topology**: In many important graph data processing applications the acquired information includes both node features and observations of the graph topology. GPRGNN jointly optimizes node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic.\\n6. **Filter Bank Construction**: Using GPR based spectral filters, helps us to effectively construct a filter bank where each adaptive filter contributes to a specific spectral profile, enabling the model to aggregate information across different spectral bands. This approach captures diverse patterns in node features and topology, unlike ChebNet or Bernstein-based methods, which rely on fixed polynomial approximations and lack such flexibility.\\n\\nWe sincerely apologise for not explicitly including this information in the paper, although we have pointed several of these points at various locations. We have included this in the Appendix 7.4.2 in the revised draft.\\n\\n**References**\\n\\n[4] Chen et al. Adaptive universal generalized pagerank graph neural network. 2020.\"}", "{\"title\": \"Follow-up Clarification Response 2 for Reviewer iwKN\", \"comment\": \"We sincerely thank the reviewer for active discussion. We address the raised concerns.\\n\\n> I have another question. Could you clarify why the performance of OptBasisGNN, ChebNetII, JacobiConv, and Specformer on node classification tasks is lower in your experiments compared to the results reported in their original papers? The reported performance in those papers appears higher than what is shown here.\\n\\nYes, indeed it is lower than reported in these papers. The potential reasons are as follows:\\n\\n- **Different embedding dimensions**. For fair comparison we use d = 48 dimensional embeddings for all baselines (same as our model CUSP). Further, for instance, increasing the embedding size to 512 dimensions for the baselines can lead to better results as chosen in the **Specformer** paper originally (while still not outperforming CUSP).\\n- **Different data splits**: We use the split 60%/20%/20%. For example, for citation datasets (i.e., Cora, Citeseer, and Pubmed) **ChebNetII** uses 20 nodes per class for training, 500 nodes for validation, and 1,000 nodes for testing. Further for small datasets like Texas, they use the sparse split i.e. 95/2.5/2.5%.\\n- We use similar hyperparameters for all baselines for comparable performance.\\n\\nThere are similar variations across other models and datasets but we follow a uniform training setup for fair comparison. \\n\\nIt is crucial to note here that the results in a recent spectral GNN benchmark paper [1] are quite inline with the results we got for the mentioned baselines, and these are a little different from the reported in the original papers.\\n\\nAs mentioned in the previous comment, we will explicitly note these baseline-specific hyperparams for the camera-ready submission.\\n\\n[1] Liao, Ningyi, et al. \\\"Benchmarking Spectral Graph Neural Networks: A Comprehensive Study on Effectiveness and Efficiency.\\\" arXiv preprint arXiv:2406.09675 (2024).\"}", "{\"title\": \"Follow-up Clarification Response for Reviewer iwKN\", \"comment\": \"We sincerely thank the reviewer for their thoughtful engagement during the discussion phase. We greatly appreciate the opportunity to clarify the raised points and apoligise for the confusion.\\n\\n> 1. I still find the use of $\\\\widetilde{\\\\kappa}$ for $\\\\widetilde{\\\\kappa} \\\\in \\\\mathbb{K}$ in Eq. (4) and $\\\\widetilde{\\\\kappa}(x)$ for representing the Ollivier-Ricci curvature confusing. Since the paper is dense, encountering this ambiguity on page 7 makes it difficult to connect back to the preliminaries.\\n\\nWe sincerely apologize for the confusion caused by the notation in Eq. (4) and appreciate your feedback. To address this, we have clarified and improved the notation in our revised draft (Rebuttal Revision 2, latest update). Specifically, we have removed the use of $\\\\widetilde{\\\\kappa} \\\\in \\\\mathbb{K}$ and directly use $\\\\widetilde{\\\\kappa}(x) \\\\in \\\\mathbb{K}$ to represent the curvature of node $x$, ensuring consistency throughout the text. The updated text now states: *The functional curvature encoding $\\\\Phi_{\\\\mathbb{R}^{d_{\\\\mathcal{C}}}}: \\\\mathbb{K} \\\\to \\\\mathbb{R}^{d_{\\\\mathcal{C}}}$ for curvature $\\\\widetilde{\\\\kappa}(x)$ of node $x$, is defined as \\u2026* The revised Eq. (4) is as follows:\\n\\n$$\\n\\\\Phi_{\\\\mathbb{R}^{d_{\\\\mathcal{C}}}}(\\\\widetilde{\\\\kappa}(x)) = \\\\sqrt{\\\\frac{1}{d_{\\\\mathcal{C}}}} \\\\Big[\\\\cos(\\\\omega_1 \\\\widetilde{\\\\kappa}(x)), \\\\sin(\\\\omega_1 \\\\widetilde{\\\\kappa}(x)), \\\\dots, \\\\cos(\\\\omega_{d_{\\\\mathcal{C}}} \\\\widetilde{\\\\kappa}(x)), \\\\sin(\\\\omega_{d_{\\\\mathcal{C}}} \\\\widetilde{\\\\kappa}(x)) \\\\Big].\\n$$\\n\\n> 2. I\\u2019m still unclear about the notation L. In your notation table, L is also used in representing the GPR-based node representation for filter with layers. Does this same notation imply that the number of filters equals the number of layers? \\n\\nWe apoligise for this confusion and clarify the notation here. Yes, the same notation is used because the proposed filterbank is: $\\\\big[\\\\mathbf{Z}^{\\\\mathbf{I}}, \\\\mathbf{Z}^{(1)}, \\\\mathbf{Z}^{(2)}, \\\\dots, \\\\mathbf{Z}^{(L)}\\\\big]$ (removed the subscript product manifold $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ for explanation purpose).\\n\\nFor a particular filter $\\\\mathbf{Z}^{(l)}$ in the filterbank, we stop the GPR propagation at $L = l$. Consider the example:\\n- Say, any GPR propagation goes on till $L$ *layers* (or steps). By stopping it at $L = 1$, we get our filtered representation $\\\\mathbf{Z}^{(1)}$.\\n- Similarly by stopping it at $L = 4$ gives $\\\\mathbf{Z}^{(4)}$. \\n- Finally, we get the $L^{th}$ filter by letting GPR propagate till $L$ layers, and hence getting $\\\\mathbf{Z}^{(L)}$.\\n\\nThus, we get $L$ filters by stopping GPR at $l = 1, 2, \\\\dots, L$. Because of this, we use the same notation L for both. The total number of final filters in our bank is $L + 1$ (because we also pass the identity matrix as an unfiltered case).\\n\\n\\n> 3. In GPRGNN (Chien et al., 2020), they use a 2-layer MLP with 64 hidden units. Are you using the same configuration?\\n\\nYes, we use the same 2-layer MLP with 64 hidden units for $f_{\\\\theta}(.): \\\\mathbb{R}^{d_{f}} \\\\rightarrow \\\\mathbb{R}^{d_{\\\\mathcal{M}}}$, which represents a neural network with parameter set $\\\\{{\\\\theta\\\\}}$ that generates the hidden state features of dimension $d^{\\\\mathcal{M}}$ (line 269). This is the same as used by the GPR paper in their feature extractor neural network.\\n\\nAlso, to further clarify, this 2-layer MLP is not the same *layers* being talked in the above question. Previously we were talking about the GPR propagation steps, now we are talking about the feature extractor (line 269, equation 2).\\n\\n> 4. In addition, do the hyperparameter configurations in Table 13 also apply to the competing baselines? It wasn\\u2019t clear how the comparisons with other methods were conducted.\\n\\nYes, as noted in the caption of Table 13, the *hyperparameter space for grid search* listed in the table is the same for all baselines to ensure a fair comparison. However, the hyperparameters highlighted in red are the **optimal** ones specifically for CUSP, determined through the grid search. To further enhance clarity, we will include the optimal hyperparameters for all the baselines in the camera-ready version to provide a more comprehensive view of the experimental setup. We appreciate your suggestion and will ensure these details are clearly documented. Thank you again for your valuable feedback!\"}", "{\"title\": \"Rebuttal Response to Reviewer iwKN (1/4)\", \"comment\": \"We sincerely thank the reviewer for their detailed analysis, insightful feedback, and constructive suggestions. Your comments have helped us identify areas that required clarification, additional discussion, or refinement. We also apologize for the oversights and inconsistencies in the original submission and greatly appreciate your thoroughness in pointing them out. We try our best to provide justifications for all the raised concerns one-by-one.\\n\\n\\n## **Additional Experimentation.**\\n> 1. The paper is missing important spectral GNNs.\\n\\nWe hereby provide the results for the above mentioned spectral GNNs over our 6 benchmarking datasets (for Node Classification) and their performance comparison with CUSP. These results (along with results on 2 remaining datasets) will be incoorporated in the camera-ready version. The results show the dominance of CUSP over spectral GNNs (in line with the results shown in the paper). \\n| **Model** | **Cora** | **Citeseer** | **PubMed** | **Chameleon** | **Actor** | **Squirrel** |\\n|---------------------|-------------------------|-------------------------|-------------------------|-------------------------|------------------------|------------------------|\\n| **CUSP** | **83.45\\u00b10.15** | **74.21\\u00b10.02** | **87.99\\u00b10.45** | **70.23\\u00b10.61** | **43.91\\u00b10.11** | **52.98\\u00b10.25** |\\n| **OptBasisGNN** | 69.13\\u00b10.11 | 67.25\\u00b10.15 | 78.45\\u00b10.40 | 56.20\\u00b10.89 | 36.78\\u00b10.12 | 40.45\\u00b10.33 |\\n| **ChebNetII** | 73.35\\u00b11.33 | 70.12\\u00b11.98 | 84.56\\u00b10.58 | 61.34\\u00b10.92 | 38.76\\u00b10.24 | 46.78\\u00b10.41 |\\n| **CayleyNet** | 72.24\\u00b10.55 | 68.45\\u00b10.48 | 79.67\\u00b10.45 | 60.01\\u00b10.87 | 38.20\\u00b10.15 | 45.56\\u00b10.39 |\\n| **APPNP** | 77.45\\u00b10.64 | 70.90\\u00b10.41 | 83.34\\u00b10.51 | 64.56\\u00b10.58 | 39.89\\u00b10.19 | 49.45\\u00b10.28 |\\n| **JacobiConv** | 68.25\\u00b11.77 | 66.45\\u00b11.34 | 78.90\\u00b10.67 | 55.78\\u00b11.20 | 36.45\\u00b10.20 | 39.78\\u00b10.42 |\\n| **SpecFormer** | 79.25\\u00b10.98 | 72.01\\u00b10.35 | 82.45\\u00b10.48 | 66.78\\u00b10.65 | 40.56\\u00b10.22 | 50.78\\u00b10.36 \\n\\n## **Questions Raised**\\n\\n> 2. In \\u03ba-right-matrix-multiplication, why do the authors choose to work with the projection between the manifold and tangent space at the origin?\\n\\n**Why projection at Origin?** The rationale for this choice is reinforced by several theoretical insights from Riemannian geometry:\\n1. **Diffeomoprhism**: The Hopf\\u2013Rinow theorem [1] ensures that if a manifold is geodesically complete, the exponential map can be defined on the entire tangent space. While this map is not generally a global diffeomorphism, its differential at the origin of the tangent space is the identity map. Consequently, by the inverse function theorem, the exponential map acts as a local diffeomorphism in a neighborhood of the origin, enabling stable and well-defined projections.\\n2. **Unified Mathematical Framework**: The origin serves as a consistent reference point for the exponential and logarithmic maps across all component manifolds in the product space. This unification simplifies the mathematical framework and ensures compatibility between the various Riemannian operations used in the CUSP framework. Experimentally, we observed that anchoring the exponential map at the origin does not degrade performance and aligns well with our framework\\u2019s computational needs.\\n\\n[1] Ekeland, Ivar. \\\"The Hopf-Rinow theorem in infinite dimension.\\\"\\n\\n> 3. The authors keep using the term \\u201ccurvature signal\\u201d throughout the paper. What does this term mathematically mean?\\n\\nThe term \\u201ccurvature signal\\u201d is used throughout the paper to describe the distribution of curvature values or curvature-related information that the model seeks to capture. Mathematically, it can be understood as follows: For a graph G = (V, E) with nodes V and edges E , the curvature signal refers to the scalar values of curvature (e.g., Ollivier-Ricci curvature, $\\\\widetilde{\\\\kappa}$ associated with each edge or node. For nodes, the curvature signal can be represented as a vector $\\\\mathbf{c} \\\\in \\\\mathbb{R}^{|V|}$, where each entry $\\\\mathbf{c}_i$ corresponds to the ORC of node $i$ i.e. $\\\\widetilde{\\\\kappa}(i)$, derived from the graph\\u2019s structure and local neighborhood. While we use this term without a formal mathematical definition in the main text, it is meant to intuitively convey the idea of capturing the curvature \\u201csignal\\u201d or \\u201cinformation\\u201d\\u2014essentially, the geometric properties of the graph reflected by the curvature values.\"}", "{\"title\": \"Rebuttal Response to Reviewer iwKN (3/4)\", \"comment\": \"## **Other concerns raised**\\n\\n> 6. The font size in Figure 3 is too small. It makes it very difficult to follow complicated figures.\\n\\nBased on this valuable suggestion we have **improved the Figure 3** in the revised version of the draft, and tried to improve it's visual readability (separability, font sizes, etc.). Kindly check out the resubmission for the updated figure. \\n\\n> 7. More explanation is needed for how GPRGNN jointly optimizes node features and topological information extraction.\\n\\nWe further elaborate on why GPR is an ideal backbone when compared to other Spectral GNNs. For brevity, kindly see comment: https://openreview.net/forum?id=2MLvV7fvAz&noteId=KP0m2z3CKg.\\n\\n> 8. It\\u2019s unclear why in line 251, the product space is $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ but in line 322 the authors are interested in the $d_{\\\\mathcal{C}}$-dimensional product space.\\n\\nThe difference between $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ and $\\\\mathbb{P}^{d_{\\\\mathcal{C}}}$ lies in their respective roles within our framework:\\n- $\\\\mathbb{P}^{d_{\\\\mathcal{C}}}$: This is the $d_{\\\\mathcal{C}}$-dimensional product manifold used for functional curvature encoding (for every node) (Used as positional encoding in Cusp Pooling).\\n- $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$: This is the product manifold used for manifold embeddings from the CUSP filtering pipeline, where the $d_{\\\\mathcal{M}}$-dimensional embeddings represent node features after geometric and spectral filtering.\\n\\nIn the CUSP pooling step, these two embeddings (for every node) are combined to compute the relative importance of nodes and substructures hierarchically. The final embeddings, therefore, reside in a space of dimension $d_{\\\\mathcal{C}} + d_{\\\\mathcal{M}}$, capturing both curvature-based positional encoding and task-specific manifold embeddings.\\n\\n\\n> 9. Based on Eq. (4) $\\\\widetilde{\\\\kappa} \\\\in \\\\mathbb{K}$, However, it\\u2019s unclear what $\\\\widetilde{\\\\kappa}(x)$ represents in Eq.(5).\\n\\nAs discussed in the preliminaries, the Ollivier-Ricci curvature $\\\\widetilde{\\\\kappa}(x)$ for a node $x$ is defined as the average curvature of its adjacent edges i.e. $\\\\widetilde{\\\\kappa}(x) = \\\\frac{1}{|\\\\mathcal{N}(x)|}\\\\sum_{z\\\\in \\\\mathcal{N}(x)} \\\\widetilde{\\\\kappa}(x, z)$.\\n \\n> 10. It\\u2019s unclear what the Riemannian projector is in line 347.\\n\\nThe Riemannian projector $g_{\\\\theta}: \\\\mathbb{P}^{d_{\\\\mathcal{M}}} \\\\rightarrow \\\\mathbb{P}^{d_{\\\\mathcal{C}}}$ is implemented as a simple multi-layer perceptron (MLP) (2-layer in our implementation) with Riemannian linear layers. This choice of $g_{\\\\theta}$ enables us to reduce the dimensionality from the ambient product manifold $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ to the desired curvature encoding dimension $d_{\\\\mathcal{C}}$. For computational efficiency, we just construct one product manifold $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ (instead of separate product manifold classes for both curvature embedding and filtered representations). As a result, $\\\\mathbf{exp}^{\\\\kappa_{(q)}}$ is taken on the $q^{th}$ component manifold with curvature $\\\\kappa_{(q)}$, on the manifold $\\\\mathbb{P}^{d_{\\\\mathcal{M}}}$ and has a resultant dimension $d_{\\\\mathcal{M}}$. Now, to project this to the dimension $d_{\\\\mathcal{C}}$ (so as to result in a $\\\\mathcal{C}-$ dimensional curvature encoding), we use the projector MLP $g_{\\\\theta}: \\\\mathbb{P}^{d_{\\\\mathcal{M}}}\\\\rightarrow \\\\mathbb{P}^{d_{\\\\mathcal{C}}}$. We will add this explanation to a separate section in the Appendix, in the camera-ready version.\\n\\n> 11. There is no theory in Theorem 2. It\\u2019s confusing to call it a theorem when the claims are only definitions.\\n\\nWe sincerely thank the reviewer for highlighting the need for clearer framing of Theorem 2, particularly regarding their role as motivations rather than formal derivations. To address this concern, we have renamed Theorem 2 as *Definition 2*, respectively. They have been presented as: \\\"the functional curvature encoding is defined as .. \\\" and \\\"the Cusp Laplacian operator takes the form ..\\\". This renaming aligns with the intent of these sections to serve as foundational definitions (with derivations and motivations) rather than rigorous proofs.\"}", "{\"comment\": \"Thank you for the detailed response.\\n\\nI still find the use of $\\\\widetilde{\\\\kappa}$ for $\\\\widetilde{\\\\kappa} \\\\in \\\\mathbb{K}$ in Eq. (4) and for $\\\\widetilde{\\\\kappa}(x)$ representing the Ollivier-Ricci curvature confusing. Since the paper is dense, encountering this ambiguity on page 7 makes it difficult to connect back to the preliminaries. A clearer distinction or additional clarification in the text would be helpful.\\n\\nI\\u2019m still unclear about the notation $L$. In your notation table, $L$ is also used in representing the GPR-based node representation for filter with $L$ layers. Does this same notation imply that the number of filters equals the number of layers? If not, I couldn't find explicit information about the layers in the paper.\\nIn GPRGNN (Chien et al., 2020), they use a 2-layer MLP with 64 hidden units. Are you using the same configuration?\\n\\nIn addition, do the hyperparameter configurations in Table 13 also apply to the competing baselines? It wasn\\u2019t clear how the comparisons with other methods were conducted.\"}", "{\"title\": \"Rebuttal Response to Reviewer UpZ1 (2/3)\", \"comment\": \"## **Product manifold signature and hyperparameters**\\n\\n> 3. The (total) dimensions of the product manifolds of e.g. Table 5 and Table 2 seem to always be 48. Yet in Table 13 of Appendix 6.6.4 it is indicated that 64 is the selected dimension of the product Manifold. Could the authors comment? \\n\\nWe sincerely thank the reviewer for pointing out this inconsistency and apologize for any confusion caused by this oversight. The final (total) dimension of the product manifold used in our experiments is indeed 48. The mention of 64 in Table 13 of Appendix 7.6.4 was a typo. We have carefully reviewed the Appendix and updated Table 13 in the revised submission to ensure consistency with the main text and the actual experimental setup. \\n\\n> 4. It is still not clear to me how individual dimensions of product manifolds and respective factors are found/optimized. \\n\\nConsider the following excerpt from the Algorithm 1 in Appendix 7.6.5.\\n____\\n20. **If** Predefined dimensions $d_{(h)}^{\\\\text{pre}} , d_{(s)}^{\\\\text{pre}}, d_{(e)}^{\\\\text{pre}}$ are provided.\\n21. - Assign dimensions $d_{(q)}$ to each component $q$ as per predefined values *#Dimension assignment*\\n22. **Else**\\n23. - Set total number of components $\\\\mathcal{Q} = |\\\\mathcal{H}| + |\\\\mathcal{S}| + |\\\\mathcal{E}|$ *#Dimension assignment*\\n24. - Allocate dimensions $d_{(q)}$ to each component $q$: $d_{(q)} = \\\\left\\\\lfloor d_{\\\\mathcal{M}} \\\\times \\\\frac{w_q}{\\\\sum_{p=1}^{\\\\mathcal{Q}} w_p} \\\\right\\\\rfloor$ *#Proportional to weights*\\n25. - Adjust $d_{(q)}$ to ensure $\\\\sum_{q=1}^{\\\\mathcal{Q}} d_{(q)} = d_{\\\\mathcal{M}}$\\n____\\n\\nGiven the total product manifold dimension i.e. $d_{\\\\mathcal{M}}$ as a hyperparameter, compute the weighted dimension assignment using the algorithm above (lines 23, 24 and 25). However, for experimental results in the paper, the use of predefined dimensions allows for flexibility. Since optimal dimension allocations can vary and are complex to analyze, we manually set the dimensions of the component manifolds as a hyperparameter. This ensures fair and uniform comparison across multiple datasets, as different datasets may perform best with different configurations. Here are the hyperparameter configurations for the **Required** components in Algorithm 1 (Including the predefined dimensions).\\n\\n1. $d_{\\\\mathcal{M}} \\\\in \\\\\\\\{32, 48, 64, 128, 256\\\\\\\\}$ (Best $d_{\\\\mathcal{M}} = 64$)\\n2. ${\\\\mathcal{H}_{max}} \\\\in \\\\\\\\{1, 2, 3, 4\\\\\\\\}$ \\\\(Optimum varies according to datasets\\\\)\\n3. $\\\\mathcal{S}_{max} \\\\in \\\\\\\\{1, 2, 3, 4\\\\\\\\}$ (Optimum varies according to datasets)\\n4. $\\\\epsilon \\\\in \\\\\\\\{0.2, 0.1, 0.05, 0.001\\\\\\\\}$ (Best $\\\\epsilon = 0.1$)\\n5. $d_{(h)}^{\\\\text{pre}} , d_{(s)}^{\\\\text{pre}}, d_{(e)}^{\\\\text{pre}} \\\\in \\\\\\\\{4, 8, 16, 24, 32\\\\\\\\}$ (Optimum varies according to datasets)\\n\\n## **Spectral properties of the Cusp Laplacian**\\n\\n> 5. Is it possible to include and discuss some basic spectral characteristics of the CUSP Laplacian (beyond self-adjointness and positivity)? As is, this new matrix is introduced without too much intuition beyond the heat-flow heuristic. Can something e.g. be said about the maximal eigenvalue or first-non-trivial eigenvalue (in a Cheeger-type fashion) for example? I realize the present paper is not mainly theoretical but introduces an architecture. However some additional theoretical foundation would indeed be nice\\n\\nWe sincerely thank the reviewer for their interest in the spectral properties of the CUSP Laplacian. As part of our initial spectral analysis, we have already presented and proven two foundational theorems in Theorems 5 and 6, detailed in Appendix 7.3.1. These theorems establish key properties of the CUSP Laplacian:\\n\\n**Theorem 5**: *The normalized Laplacian operator $\\\\widetilde{\\\\mathbf{L}}$ is positive semidefinite, i.e., for any real vector $u \\\\in \\\\mathbb{R}^n$, we have $\\\\mathbf{u}^T\\\\widetilde{\\\\mathbf{L}}_{\\\\text{n}}\\\\mathbf{u} \\\\geq 0$.*\\n\\n**Theorem 6**: *The eigenvalues of the normalized CUSP Laplacian $\\\\widetilde{\\\\mathbf{L}}_{\\\\text{n}}$ lie in the interval $\\\\lambda_i \\\\in [0, 2]$*\\n\\nDetailed proofs for these theorems, including an analysis of the Rayleigh quotient of the CUSP Laplacian, are provided in Appendix 7.3.1. Looking forward, we aim to extend this analysis to demonstrate the convergence of the CUSP Laplacian (or its weighted version) to the Laplace-Beltrami operator on Riemannian manifolds. This would be analogous to how the traditional graph Laplacian converges to the discrete Laplace operator in Euclidean geometry. Furthermore, this line of investigation could involve exploring how the eigenvalues of the CUSP Laplacian align with those of a discretized Laplace-Beltrami operator. However, such an analysis requires substantial theoretical development and is beyond the scope of the current work. We plan to address this in detail in future research. We appreciate the reviewer\\u2019s insightful feedback and hope this response clarifies the scope of our current analysis and the direction of our future efforts.\"}", "{\"title\": \"Rebuttal Response to Reviewer iwKN (4/4)\", \"comment\": \"> 12. It\\u2019s unclear why translation invariant is a desirable property in the functional curvature encoding in the proposed method.\\n\\n**Why Translation Invariance?** The translation-invariance property of the functional curvature encoding is a critical requirement for the application of Bochner\\u2019s theorem, which underpins our theoretical framework. Bochner\\u2019s theorem states: A continuous, translation-invariant kernel $K(x, y) = \\\\psi(x - y)$ on $\\\\mathbb{R}^d$ is positive definite if and only if there exists a non-negative measure on $\\\\mathbb{R}$ such that $\\\\psi$ is the Fourier transform of this measure. In our method, the curvature kernel $\\\\mathcal{K}_{\\\\mathbb{P}}(\\\\widetilde{\\\\kappa}_a, \\\\widetilde{\\\\kappa}_b)$ is defined in the product manifold $\\\\mathbb{P}^{d{\\\\mathcal{C}}}$. To ensure that Bochner\\u2019s theorem applies, it is essential to establish that this kernel is both positive semidefinite (PSD) and translation-invariant. We prove its translation-invariance (for the kernel mapping in the product manifold) in Theorem 2. \\n\\n> 13. It\\u2019s unclear how functional curvature encoding gives more attention to differently curved substructures.\\n\\nBy **integrating the curvature-encoding into CUSP pooling**, the model leverages this encoding as a positional embedding to compute the relative importance of nodes and substructures. The hierarchical attention mechanism in CUSP pooling evaluates this importance dynamically, ensuring that substructures with curvature profiles most relevant to the specific dataset or task receive greater attention. Functional curvature encoding plays a crucial role in capturing the geometric diversity of graph substructures, with different curvature values being more relevant for different tasks or datasets. To conclude, functional curvature encoding **in unison** with the Cusp pooling mechanism yields these attention weights.\\n\\n> 14. It\\u2019s unclear which part of the implementation is adopted from Ni et al. (2019) in line 412.\\n\\nWe adopt the implementation for the Ollivier-Ricci curvature (ORC) computation from Ni et al.\\n\\n> 15. It\\u2019s unclear what do the authors mean by they \\u201cheuristically determine the signature of our manifold (i.e. component manifolds) using the discrete ORC curvature of the input graph\\u201d. It\\u2019s unclear how many hyperbolic spaces and spherical spaces are considered.\\n\\nWe use the discrete ORC as a heuristic to determine the signature (i.e. number of components and their dimensions) of the product manifold as described in the Algorithm 1 in Appendix 7.6.5. By systematically analyzing the curvature distribution, our heuristic-based algorithm identifies the manifold signature that best represents the dataset\\u2019s underlying geometric structure. Kindly see Appendix 7.6.5 for more details. \\n\\n> 16. It\\u2019s unclear what the hyperparameter L represents.\\nIt\\u2019s unclear how many layers are considered in the experiments.\\n\\nThe hyperparameter L represents the number of filters in the filterbank (Also specified in the notation table - Appendix 7.1). The final model uses L = 10 filters in the filterbank, and we grid-search over `L = {5, 10, 15, 20, 25}` (Specified in the Table 13 Appendix 7.6.4).\\n\\n> 17. It\\u2019s unclear how L3 is resolved using the proposed method.\\n\\nWe thank the reviewer for pointing out the need to clarify how our method addresses $\\\\texttt{L3}$ (*Lack of geometry-equivariance in spectral GNNs*). Our proposed approach resolves this limitation as follows:\\n- **Equivariance to Geometry**: By extending spectral filtering to operate on the mixed-curvature product manifold, our method ensures geometry-equivariance. Specifically, changes in the underlying graph geometry (captured via curvature values) directly influence the spectral filters, allowing them to align with the graph\\u2019s evolving structure. This alignment resolves the lack of geometry-awareness present in traditional spectral GNNs.\\n- **Curvature-Aware Filtering**: Unlike traditional spectral GNNs that assume a flat Euclidean manifold, our method integrates curvature-aware filters via the CUSP Laplacian. By incorporating geometric information such as Ollivier-Ricci curvature, the CUSP Laplacian adapts spectral properties based on the graph\\u2019s local and global geometry. This ensures that the filters dynamically adjust to reflect the geometric characteristics of the graph.\\n- **Application to Diverse Geometries**: Our method is inherently versatile and works across both homophilic and heterophilic graphs, where different geometric properties (e.g., curvature distributions) play a significant role. \\n\\n**Proofreading concerns**. We apologize for the oversight in maintaining consistent terminology, notation, and punctuation throughout the paper. We have carefully reviewed the manuscript and addressed all the points you raised, including inconsistencies in English style, capitalization, tangent space notation, punctuation, spacing, and mathematical notation, in the rebuttal revision of our paper.\"}", "{\"title\": \"Rebuttal Response to Reviewer UpZ1 (3/3)\", \"comment\": \"> 6. In the ablation study on the impact of the CUSP Laplacian, performance using the CUSP Laplacian is compared to performance using the adjacency matrix instead.\\n\\nWe sincerely apologize for the confusion caused here. To clarify, CUSP uses the adjacency matrix corresponding to the Cusp laplacian (aka the Cusp adjacency) for computing the GPR propagation. This is because the GPR score is computed as $\\\\sum_{k=0}^{\\\\infty} \\\\gamma_k \\\\widetilde{\\\\mathbf{A}}_{\\\\text{n}}^k \\\\mathbf{H}^{(0)}$ (euclidean variant), based on the adjacency matrix. \\n\\nFor consistency and to isolate the impact of the CUSP Laplacian, we conducted an ablation study by replacing the CUSP Laplacian-based adjacency matrix $\\\\widetilde{\\\\mathbf{A}}$ with the standard graph adjacency matrix in the filtering pipeline. This ablation is included in the variant $\\\\texttt{CUSP}_{lap}$ to provide a fair comparison.\\n\\n## **Other concerns and more explanations**\\n\\n> 7. I do have some concerns regarding readability. An easy fix is the image quality in Figure 1.\\n\\nThis is a very valuable suggestion -- we have increased the font size for the axis labels and ensured a higher resolution to improve clarity of the figure. These changes have been incorporated into the updated draft of the paper. **[Fixed in updated draft]** \\n\\n> 8. I believe what the authors want to convey is that the former definition as utilized in the paper already leads to a translation invariant kernel. Is this correct?\\n\\nThat is correct, the inner product form of the curvature kernel indeed leads to its translation invariance, which is essential for applying Bochner\\u2019s theorem. We sincerely apologise for the confusion in understanding the section, so we provide a brief summary here -- In the *proposed* Functional Curvature Encoding, our objective is to construct a translation-invariant curvature kernel that serves as a continuous encoding of node curvature values within our model\\u2019s attention mechanism (**Why translation invariance?** -- To apply Bochner's theorem). This is done in two stages. First, we define the kernel map within a Euclidean embedding space, allowing us to represent (node) curvature values as inner products of mapped features. In the second stage, Theorem 2 extends this encoding from the Euclidean space to the mixed-curvature product manifold. Here, we use the exponential map to transition from the Euclidean embedding to the product manifold space, creating a flexible and expressive positional encoding tailored to the unique curvature structure of each graph. The derived mixed-curvature kernel is indeed translation invariant. (Proved in Appendix 7.5.2). \\n\\n> 9. Could the authors comment more explicitly on the significance of Bochner's theorem here?\\n\\nThe application of Bochner\\u2019s theorem is fundamental to the theoretical grounding of our functional curvature encoding. The curvature kernel $\\\\mathcal{K}$ is both positive semidefinite (PSD) and continuous, as it is defined via a Gram matrix and the mapping $\\\\Phi$, which is continuous. Consequently, the kernel $\\\\mathcal{K}$ satisfies the assumptions of Bochner\\u2019s theorem, which states that: *A continuous, translation-invariant kernel $K(x, y) = \\\\psi(x - y)$ on $\\\\mathbb{R}^d$ is positive definite if and only if there exists a non-negative measure on $\\\\mathbb{R}$ such that $\\\\psi$ is the Fourier transform of the measure*. This result ensures that our curvature kernel $\\\\mathcal{K}$, which is translation-invariant by construction, can be represented as the Fourier transform of a non-negative measure. Leveraging this property, we construct the curvature encoding $\\\\Phi$, which maps curvature values to a functional space, and extend this encoding to the mixed-curvature product manifold $\\\\mathbb{P}^{d_{\\\\mathcal{C}}}$.\\n\\n> 10. It might also be good to explain (to some extent) and motivate the k-stereographic model in the main text.\\n\\nWe have discussed some brief motivation about the kappa-stereographic model in the Preliminaries section (Section 3). Based on this suggestion, we have added some more intuition about the model (and Table 8 operations) in the Appendix 7.2.4. We will move some of this motivation to the main-text in the camera-ready version.\"}", "{\"metareview\": \"The paper introduces CUSP, a graph representation learning model that integrates graph discrete curvature with a geometric extension of generalized PageRank. The method is shown to perform well when compared against several baselines.\\nThe reviewers valued the paper's originality. The reviewers were split regarding the clarity of the paper, and the comprehensiveness of the numerical evaluation.\\n\\nAll reviewers gave a positive score (6) after the discussion, except for reviewer iwKN who gave a 5. This reviewer gave a detailed review that pointed out many weaknesses of the paper. After a long set of responses, the reviewer was mostly satisfied with the author's clarifications. The only remaining weakness that this reviewer pointed out is that the comparison with the competing methods didn't give sufficient details. This reviewer does not object to possible acceptance.\\n\\nThis paper is borderline.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers gave a positive score (6) after the discussion, except for reviewer iwKN who gave a 5. This reviewer gave a comprehensive review that pointed out many weaknesses of the paper. After a detailed set of responses the reviewer was mostly satisfied with the author's clarifications.\"}", "{\"summary\": \"The paper introduces CUSP: integrating mixed-curvature representation learning with GPRGNN.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a new GNN model that considers spectral information and the curvature.\", \"weaknesses\": [\"The method is incremental. It\\u2019s hard to separate the new components in the paper and how they depend on the previous work. Also, there is no theoretical justification for integrating the spectral information in the frequency domain and the curvature in the spatial domain.\", \"More details are needed for the heat diffusion equation and heat flow in Section 4.1. For example, as the cooling depends on the direction (from x to y), what is the role of direction in the heat diffusion equation? What is the definition of heat flow? What is the ORC distribution? What is the diffusion rate? How the Wasserstein-1 distance be interpreted as the resistance? Does the resistance depend on the direction?\", \"It\\u2019s unclear what \\u201c$x \\\\sim y$ denotes adjacency between nodes x and y\\u201d means in Proposition 1 in the main text. Also, more details and motivation needed for how $\\\\\\\\bar{w}\\\\_{xy} = e^{\\\\\\\\frac{-1}{1-\\\\\\\\tilde{\\\\kappa}(x, y)}}$ is designed in main texts. Does this design have favorable properties? How does Cusp Laplacian operator act differently based on $\\\\\\\\bar{w}\\\\_{xy}$?\", \"The explanation is only in the Appendix, but an overview or high-level explanation in the main texts can help in understanding the reason behind the proposed component.\", \"The font size in Figure 3 is too small. It makes it very difficult to follow complicated figures.\", \"More explanation is needed for how GPRGNN jointly optimizes node features and topological information extraction.\", \"It\\u2019s unclear what does curvature domain $\\\\mathbb{K}_{\\\\mathbb{P}}$ represent in line 321-322. In addition, it\\u2019s unclear why in line 251, the product space is $\\\\\\\\mathbb{P}^{d\\\\_{\\\\\\\\mathcal{M}}}$ but in line 322 the authors are interested in the $d\\\\_{\\\\\\\\mathcal{C}}$-dimensional product space $\\\\mathbb{P}^{d\\\\_{\\\\\\\\mathcal{C}}}$.\", \"Based on Eq. (4), $\\\\widetilde{\\\\kappa}\\\\in\\\\mathbb{K}$. However, it\\u2019s unclear what $\\\\widetilde{\\\\kappa}(x)$ represents in Eq.(5).\", \"It\\u2019s unclear what the Riemannian projector is in line 347.\", \"It\\u2019s unclear what M2 is in line 319. There is no M2 in the paper.\", \"There is no theory in Theorem 2. It\\u2019s confusing to call it a theorem when the claims are only definitions.\", \"It\\u2019s unclear why translation invariant is a desirable property in the functional curvature encoding in the proposed method.\", \"It\\u2019s unclear how functional curvature encoding gives more attention to differently curved substructures.\", \"It\\u2019s unclear which part of the implementation is adopted from Ni et al. (2019) in line 412.\", \"It\\u2019s unclear what do the authors mean by they \\u201cheuristically determine the signature of our manifold $\\\\mathbb{P}$ (i.e. component manifolds) using the discrete ORC curvature of the input graph\\u201d. It\\u2019s unclear how many hyperbolic spaces and spherical spaces are considered.\", \"It\\u2019s unclear what the hyperparameter L represents.\", \"It\\u2019s unclear how many layers are considered in the experiments.\", \"It\\u2019s unclear what the experimental configurations and hyperparameters of the competing methods are.\", \"The spacing between the tables and the main texts in Section 5 is very tight and narrow.\", \"The paper is missing important spectral GNNs: OptBasisGNN, ChebNetII, CayleyNet, APPNP, JacobiConv, and Specformer.\", \"It\\u2019s unclear how L3 is resolved using the proposed method.\", \"The paper needs more work on proofreading. For example:\", \"the English style is not consistent. Sometimes the authors use \\u201cnormalised\\u201d, but sometimes they use \\u201cnormalized\\u201d.\", \"The word \\u201cLaplacian\\u201d sometimes has the uppercase letter L, but sometimes it is lowercase l.\", \"The tangent space notation is not consistent.\", \"There is a comma in Eq.(15), but there is no sentence afterward.\", \"The exponential map and logarithmic map are defined using boldface letters, but these notations are not consistent in the appendix\", \"In line 366, the sentence is not finished. The punctuation in Eq. (6), Eq. (7), and Eq. (8) is missing.\", \"The notation of Wasserstein-1 distance is not consistent in the main paper and appendix\", \"The reference pointer is missing in line 405\", \"## Minor\", \"Unclear what is $\\\\mathbf{W}$ in line 141\", \"In line 160, missing $\\\\times \\\\ldots \\\\times $ in $\\\\mathbb{P}$\", \"A formal definition of Wasserstein-1 distance is missing\", \"Unclear what is $\\\\psi_{xy}$ in Appendix 7.3\", \"In line 996, it\\u2019s unclear what is the element-wise product between a matrix $\\\\mathbf{L}$ and a scalar $e^{\\\\frac{-1}{1-\\\\tilde{\\\\\\\\kappa}(x, y)}}$\", \"In Eq.(25), $\\\\mathbf{X}_{i:}$ is not defined\", \"$\\\\delta$ in line 172 and $\\\\delta_{vs}$ in line 181 are not defined\", \"Unclear why $\\\\omega_{d_f}$ in line 341, there is no $d_f$ in Eq. (4)\"], \"questions\": [\"The authors keep using the term \\u201ccurvature signal\\u201d throughout the paper. What does this term mathematically mean?\", \"In \\u03ba-right-matrix-multiplication, why do the authors choose to work with the projection between the manifold and tangent space at the origin? How does this choice affect the empirical results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2LOtSPmopq
Unsupervised Whole Object Discovery by Contextual Grouping with Repulsion
[ "Fei Pan", "Sangryul Jeon", "Stella X. Yu" ]
It is challenging to discover and segment whole objects from unlabeled images, as features unsupervisedly learned on images tend to focus on distinctive appearances (e.g., the face rather than the torso), and grouping by feature similarity could reveal only these representative parts, not the whole objects (e.g., the entire human body). Our key insight is that, an object of distinctive parts pops out as a whole, due not only to how similar they are to each other, but also to it how different they are from their contexts within an image or across related images. The latter could be crucial for binding different parts into a coherent whole without preconception of objects. We formulate our idea for unsupervised object segmentation in a spectral graph partitioning framework, where nodes are patches and edges are grouping cues between patches, measured by feature similarity for attraction, and by feature dissimilarity for repulsion. We seek the graph cuts that maximize within-group attraction and figure-ground repulsion while minimizing figure/ground attraction and within-group repulsion. Our simple method consistently outperforms the state-of-the-art on unsupervised object discovery, figure/ground saliency detection, and unsupervised video object segmentation benchmarks. In particular, it excels at discovering whole objects instead of salient parts.
[ "Unsupervised Object Discovery", "Unsupervised Whole Object Segmentation", "Co-Segmentation", "Normalized Cut", "Attraction and Repulsion" ]
Reject
https://openreview.net/pdf?id=2LOtSPmopq
https://openreview.net/forum?id=2LOtSPmopq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nG1F4hy2V7", "aEwYGyiQ8A", "a3vLaAgAbI", "W6D086A5Dy", "VldvTohuSU", "V88xQtfXCk", "UbKtyT0Kw2", "60YTC2kchy" ], "note_type": [ "official_comment", "decision", "meta_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1733160360283, 1737523448495, 1733906941105, 1741386025891, 1730733591741, 1730690772114, 1730606539043, 1730564638487 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1347/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1347/Area_Chair_yHip" ], [ "ICLR.cc/2025/Conference/Submission1347/Authors" ], [ "ICLR.cc/2025/Conference/Submission1347/Reviewer_BDiv" ], [ "ICLR.cc/2025/Conference/Submission1347/Reviewer_3kb7" ], [ "ICLR.cc/2025/Conference/Submission1347/Reviewer_mTRy" ], [ "ICLR.cc/2025/Conference/Submission1347/Reviewer_aVoJ" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your thorough review and valuable feedback. We have revised the submission, with updated sections highlighted in purple. We address your concerns in detail below.\\n\\n**1. Time and memory consumption of the proposed method**\\n\\nIn our experiments, fine-tuning CGR with S/8 ViT architecture is conducted on 4 A40 GPUs for 3 days of training. CGR does not require or depend on extensive computational devices and large amounts of datasets. Our CGR is computationally efficient compared with DINO, which requires more than 16 GPUs over 3 days of training on ImageNet.\\n\\n**2. Clarification on Figure 5**\\n\\n- **Self-supervised transformer**: This refers to the ViT backbone pre-trained using self-supervised learning. In our experiments, we utilize DINO pre-trained with self-distillation as the ViT backbone. \\n- **Segmentation head**: The segmentation head contains a single 1x1 convolution layer.\\n- **Ground-truth masks**: Yes, the masks from co-segmentation serve as ground truth to compute the loss.\\n\\n**3. Compared with SAM2 for video object segmentation on the DAVIS dataset**\\nWe compare the performance of segmenting foreground video objects on DAVIS datasets using SAM2 and our proposed CGR (CGR-c, co-segmentation setting). We use the mean intersection over the union between the predicted foreground masks and ground truth as the metric. The results are shown here:\\n| Method | mIoU |\\n|--------|------|\\n| SAM2 | 69.8 |\\n| CGR | 71.4 |\\nAs a supervised pre-trained method, SAM2 is less effective than our CGR as SAM2 is confused between the foreground objects and background, while CGR using co-segmentation on reference frames pops out of the foreground off the background.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper aims to discover whole object by contextual grouping with repulsion. Specifically, the key insight of this paper is that an object of distinctive parts pops out as a whole, due not only to how similar they are to each other, but also to it how different they are from their contexts within an image or across related images. The latter could be crucial for binding different parts into a coherent whole without preconception of objects. Therefore, the authors formulate their idea for unsupervised object segmentation in a spectral graph partitioning framework, where nodes are patches and edges are grouping cues between patches, measured by feature similarity for attraction, and by feature dissimilarity for repulsion. Experimental results reveal that the proposed method consistently outperforms the state-of-the-art algorithms on unsupervised object discovery, figure/ground saliency detection, and unsupervised video object segmentation benchmarks.\\n\\nThe paper is generally written clearly, and the key insight of this paper is also reasonable. However, the review committee considers that the main idea of this paper is not sufficiently novel for ICLR.\", \"additional_comments_on_reviewer_discussion\": \"One of the reviewer raises his/her score after rebuttal. The reviewer considers that the idea of \\\"whole object\\\" is fuzzy. Self-similar objects should be recognized as individual instances, and not as a collective blob. Also, the spectral graph partitioning has also been investigated before, and thus not compelling enough to be an ICLR submission.\\n\\nDue to the above reasons, as well as that the score of this paper is not high in my AC pool, I feel sorry that I cannot recommend an acceptance due to the limited acceptance rate.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper proposes a solution of discovering and segmenting objects in unsupervised setting. Inspired by object feature similarity as well as feature disimilarity, the paper proposes to utilize graph cuts that maximize similarity between object features while also maximizing dissimilarity between object and background features. Moreover, the paper shows performance gain for unsupervised object\\ndiscovery, saliency detection, and unsupervised object detection\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The main motivation of the proposed method is to focus on distinctive parts of an object by increasing similarity between them and simultaneously focus on how dissimilar they are from their context in the image. The paper first identifies this problem in existing method and show that upon taking into account both similarity and dissimilarity, there is a possibility of performance improvement. Empirically for three different unsupervised tasks, the proposed method show improvements over existing methods in both single image setting and reference image setting.\", \"weaknesses\": \"The proposed idea of utilizing attraction and repulsion doesn\\u2019t seem to be novel, As authors say in L194 \\u201cGiven attraction A and repulsion R, we follow (Yu & Shi, 2001 ) and conduct. . . \\u201d. The referenced paper proposes the same idea of utilizing attraction and repulsion for both to measure degree of attraction and segregation of features. The difference seems to be application of this on features obtained from self-supervised transformers instead of image features. Moreover, the segmentation method remains the same as before. The rest of the method is clearly followed from (Wang et al, 2023).\\n\\nThere are also concerns regarding reported quantitative results in table 3. As mentioned in L257, the authors use bilateral solver (BL) to refine the masks. However, when comparing with TokenCut (Wang et al, 2023), the results are taken without bilateral solver, TokenCut+BL shows better performance by a significant margin when compare with CGR (proposed method). Similarly there is inconsistency in table 2. TokenCut+BL is not reported, which clearly outperforms CGR.\\n\\nAnother minor concern in the paper is repetitive writing. There are multiple instances in abstract and introduction where sentences are\\nrepeated again and again. For e.g. L19 and L50. Also, few argumentative sentences in the paper are too long and complex which hinders the information being conveyed. This should be improved for clear understanding of the paper.\", \"questions\": \"1. Could you please clearly emphasize the novelty part and distinguish this work from the existing literature?\\n2. Could you please clarify how the comparisons drawn in the results are fair?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the unsupervised object segmentation task. The authors proposed the Contextual Grouping with Repulsion method (CGR), which considers both the internal similarities (attraction) among different parts of an object and their common dissimilarities (repulsion) to the background. The authors formulate their pipeline using a weighted graph where nodes represent image patches and edges encode grouping cues, measured by both feature similarity (attraction) and dissimilarity (repulsion). The proposed approach extends TokenCut, which solely relies on internal similarities between different object parts for segmentation. The proposed method demonstrates superior performance across multiple unsupervised segmentation benchmarks, including unsupervised object discovery, saliency detection, and video object segmentation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe proposed CGR is simple and easy to understand.\\n2.\\tThe paper is well-written and organized, making the author's ideas easy to understand.\\n3.\\tThe authors validated CGR's performance on different segmentation benchmarks.\", \"weaknesses\": \"1.\\tThis paper lacks sufficient details about the training and evaluation process. Specifically, it does not explain how the train/validation/test sets were divided and which data subset was used in the training, hyperparameter selection, and final model evaluation.\\n2.\\tRegarding the repulsion weight, Figure 9 shows that when $\\\\omega$ fluctuates in the range of 0~0.25, the performance difference is not significant, which raises doubts about the effectiveness of the proposed method. Additionally, the author only conducted an ablation study of $\\\\omega$ on the ECSSD dataset for unsupervised saliency detection and then applied this parameter to all tasks and datasets. I suppose this pattern is not convincing enough. I'm not suggesting that the authors should conduct ablation studies for all tasks to determine the repulsion weight. Rather, I think it's tricky to set this parameter as a fixed value and apply it to different tasks and datasets. The authors should discuss whether this parameter could adapt automatically when facing different tasks and datasets.\\n3.\\tStill for the repulsion weight, comparing the experimental results, it appears that the authors used the same data subset for both hyperparameter selection (Figure 9) and results reporting (Table 3). In other words, the authors did not strictly distinguish between the validation set and test set in the experiments, which suggests that their proposed method might be overfitting to the target dataset.\", \"questions\": \"My main concerns are about the training/evaluation process and parameter selection. Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"It is challenging to discover and segment whole objects from unlabeled images, as features unsupervisedly learned on images tend to focus on distinctive appearances (e.g., the face rather than the torso), and grouping by feature similarity could reveal only these representative parts, not the whole objects (e.g., the entire human body). The key insight of this paper is that, an object of distinctive parts pops out as a whole, due not only to how similar they are to each other, but also to how different they are from their contexts within an image or across related images. The latter could be crucial for binding different parts into a coherent whole without preconception of\\nobjects. This paper formulate this idea for unsupervised object segmentation in a spectral graph partitioning framework, where nodes are patches and edges are grouping cues between patches, measured by feature similarity for attraction, and by feature dissimilarity for repulsion. This paper seek the graph cuts that maximize within-group attraction and figure-ground repulsion while minimizing figure/ground attraction and within-group repulsion. The simple method consistently outperforms the state-of-the-art on unsupervised object discovery, figure/ground saliency detection, and unsupervised video object segmentation benchmarks. In particular, it excels at discovering whole objects instead of salient parts.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The strengths are as follows:\\n1. This formulate the idea \\\"an object of distinctive parts pops out as a whole, due not only to how similar they are to each other\\\" for unsupervised object segmentation in a spectral graph partitioning framework, where nodes are patches and edges are grouping cues between patches, measured by feature similarity for attraction, and by feature dissimilarity for repulsion. \\n2. This paper seek the graph cuts that maximize within-group attraction and figure-ground repulsion while minimizing figure/ground attraction and within-group repulsion. \\n3. This paper investigate this idea not only within a single image, but also across related images in a co-segmentation setting, where contextual grouping with repulsion between images brings additional power for discovering whole objects together\", \"weaknesses\": \"This paper present a method for unsupervised segmentation/saliency detection/co-segmentation. The weakness are as follows:\\n1. The time cost and memory consumption for the proposed method is not presented. This is quite necessary as the method use a large model like ViT. \\n2. What does the Self-Supervised Transformer indicate in Figure5? How about the segmentation head? does it use a pretrained sementation model? Looks like it use a mask from a segmentation model as gt to compute loss, right?\\n3. In figure 8, the paper try to compare the result with SAM2, but only a few visual results are provided, it there more systematic comparison results?\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a method to perform unsupervised object discovery and segmentation in videos. It utilizes spectral graph partitioning with both feature similarity and dissimilarity cues to capture whole objects from unlabeled images. A graph segmentation model is trained using cross-entropy loss and contrastive loss. The quality of segmentation on image and video datasets appears to improve compared to previous approaches.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and easy to follow.\\nThe proposed approach is an extension of the prior TokenCut approach, which utilized spectral graph partitioning with an attraction cue. Here, the method is extended by incorporating both attraction and repulsion cues in the graph structure, as proposed in Yu and Shi, 2001. \\nMoreover, the paper adapts the framework to video data by introducing multi-frame objectives.\", \"weaknesses\": \"However, given the main contributions, the paper appears to be incremental, with limited innovation beyond extending existing methods.\\nThe method focuses on extracting a single dominant object in the scene, which wouldn't apply to complex scenes with many objects. \\nThe results are primarily demonstrated on older datasets like VOC, COCO and DAVIS, and the comparisons focus largely on previous approaches, lacking evaluation against more recent research in the field, such as VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation (CVPR, 2004).\", \"questions\": \"1. The method relies on the initial segmentation provided by the graph cut method. How does it recover from large-scale errors in this prior segmentation.\\n2. How would this method be extended to multi-object segmentation.\\n3. How does it work with self-similar objects. E.g., multiple instances of the same object in the image.\\n4. How well does this method work on more complex datasets like YoutubeVIS etc.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2LHzKdb8Ao
Reducing Symmetry Mismatch Caused by Freely Placed Cameras in Robotic Learning
[ "David Klee", "Dian Wang", "Robert Platt", "Robin Walters" ]
Equivariant policy learning has been shown to solve robotic manipulation tasks with minimal training or demonstration data. However, the effectiveness of equivariance depends on whether transformations of the scene align with simple transformations of the input data. This is true when the camera is in a top-down view, but in the common case where a camera views the robot workspace from the side, there is a symmetry mismatch, reducing model performance. We show that equivariant methods perform better when camera images are transformed to appear as top-down images. Our approach is simple to implement, works for RGB and RGBD images, and reliably improves performance across different view angles and learning algorithms.
[ "Equivariance", "Robotics" ]
Reject
https://openreview.net/pdf?id=2LHzKdb8Ao
https://openreview.net/forum?id=2LHzKdb8Ao
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wowr3QRwRc", "syNuSDoNRR", "stdhz3V3OD", "moZNzuQyoU", "htFHczEHCY", "edIX7iv1QT", "ayUBl24CJK", "VWtNfxblo6", "SyGPNm1WYc", "Rg13ePHNSq", "6c1mJykTK1", "29lIfTr4n0" ], "note_type": [ "official_review", "official_review", "decision", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1729433682907, 1730663643203, 1737524194247, 1734793736163, 1730610531548, 1732697048964, 1732604952072, 1732542920167, 1732542378845, 1732542312935, 1730601833891, 1732542357508 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12480/Reviewer_FKHM" ], [ "ICLR.cc/2025/Conference/Submission12480/Reviewer_sTng" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12480/Area_Chair_eAd8" ], [ "ICLR.cc/2025/Conference/Submission12480/Reviewer_vGKo" ], [ "ICLR.cc/2025/Conference/Submission12480/Reviewer_FKHM" ], [ "ICLR.cc/2025/Conference/Submission12480/Reviewer_vGKo" ], [ "ICLR.cc/2025/Conference/Submission12480/Authors" ], [ "ICLR.cc/2025/Conference/Submission12480/Authors" ], [ "ICLR.cc/2025/Conference/Submission12480/Authors" ], [ "ICLR.cc/2025/Conference/Submission12480/Reviewer_aBnp" ], [ "ICLR.cc/2025/Conference/Submission12480/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper addresses a challenge in robotic learning, where freely placed cameras cause a mismatch between input image transformations and the inherent task symmetry in robotic manipulation environments. The authors propose two preprocessing methods: reprojection of RGBD images and perspective transformation for RGB images. These techniques transform side-view images into top-down views, thus aligning the image transformations with the task symmetry. This approach is shown to consistently improve performance in robotic manipulation tasks, particularly in reinforcement learning and imitation learning setups.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The paper offers practical preprocessing methods (RGBD reprojection and RGB perspective transformation) that are simple. These methods can be applied across various robotic learning tasks without additional training or modification of the robot setup.\\n\\n2. The proposed methods require only knowledge of the camera\\u2019s intrinsics and extrinsics, making them straightforward to implement without the need for privileged information. This makes the approach broadly applicable across robotic tasks.\", \"weaknesses\": \"1. Limited Technical Contribution: The technical contribution of the paper is minimal. The methods of RGBD reprojection and RGB perspective transformation are well-established and mature techniques. The paper merely applies these existing methods to Equivariant Policy Learning without introducing any significant novel ideas. As a result, the work feels more like a technical report rather than a research paper offering new scientific insights.\\n\\n2. Lack of Real-World Experiments: The experiments are conducted only in six simple simulated environments, without any real-world validation. This limits the applicability and robustness of the proposed methods in practical scenarios, as real-world experiments are essential to demonstrate the effectiveness of the approach outside of controlled simulations.\\n\\n3. Performance Gap with Oracle: While the proposed methods reduce the performance gap with the oracle top-down view, they do not entirely close it. The occlusion of objects and grippers, especially in cluttered environments, remains an unsolved problem.\", \"questions\": \"1. Handling Extreme Occlusions: In the RGBD setting, how might more sophisticated inpainting or occlusion handling methods (e.g., learned inpainting) improve the performance gap with the oracle? Have the authors experimented with these techniques, and what were the results?\\n\\n2. Effectiveness in Real-World Scenarios: While the experiments are simulated, can the authors elaborate on the challenges and potential modifications required to apply these preprocessing steps in real-world robot learning tasks with physical cameras and hardware?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses a key issue in equivariant neural networks for agent learning to decrease the gap between sideview camera observations, which perform sub-optimally when cameras view the scene from the side rather than directly above. The authors propose two simple preprocessing techniques to reduce this gap:\\n\\n1. For RGBD cameras, they reproject the image to a virtual top-down view, and \\n2. For RGB cameras, they apply a perspective transformation to align the ground plane with the image plane. \\n\\nThrough experiments across multiple robotic manipulation tasks using both reinforcement learning and imitation learning, they demonstrate that these preprocessing steps significantly improve the performance of equivariant networks compared to using raw side-view images.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work addresses a very common problem prevailing in the robotic manipulation domain i.e lack of robustness of vision based policies to viewpoints.\\n2. The proposed solutions are very simple (under the known camera extrinsics assumption, which is typically common in table-top robotic manipulation settings).\\n3. The paper is generally well written and easy to understand in a single go.\", \"weaknesses\": \"4. Related works: I believe a small discussion on point cloud models (in the context of Image reprojection) should also be discussed. Several works in the the past few years have proposed using point clouds for RL / policy learning [1, 2] and shown robustness to viewpoints [3].\\n\\n5. Sample-efficiency of RGBD experiments: I don't particularly find a difference between *Point cloud equi* and *Reproj. equi* in Fig 5. and Table 1. What are the benefits of Reproj. Equi over point cloud equi?\\n\\n6. Sec 5.6 (Effects of camera angle) needs to also have the PointNet++ baseline (*point cloud equi*) for the RGB-D plots. Some works have suggested that point cloud RL policies are robust to viewpoint changes [3].\\n---\\n**References:**\\n\\n1. On the efficacy of 3d point cloud reinforcement learning, Zhan Ling et al., arXiv 2023\\n2. Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning, Haoyi Zhu et al., NeurIPS D&B 2024.\\n3. Point Cloud Models Improve Visual Robustness in Robotic Learners, Skand Peri et al., ICRA 2024\\n\\n---\\n**Rationale for current rating**: Overall I believe this is a well written paper with clear contributions. However, I have particular questions regarding the baselines (points 5, 6, 8) and generalization (point 9) and based on that, I'm voting for a weak reject. However, this is *not* my final decision and I am willing to update my score based on other reviewers' comments and authors' rebuttal.\", \"questions\": \"7. Gripper Image: Does this formulation of having a gripper image generalize to a dextrous manipulation with a non-trivial gripper? Also, it would be better if the Fig 11 (from appendix) can be moved/integrated into the main paper. This is because, the gripper representation is one of the crucial aspects of the proposed solution and having it in a visual form would make the methodology more clearer to the reader.\\n\\n8. I would like to see an experiment with DrQ-v2 where image augmentaiton has shown significant sample-efficiency gains and am curious how that performs as compared to an explicit Equivariant policy. I believe the data-augmentations can be implemented in a straightforward manner within the SACfD codebase.\\n\\n9. Are the models in Fig 7(a) and 7(b) test-only models are are they trained on individual camera angles? If it's trained and tested separately -- I'm curious to see how Reproj equi or Presp. equi perform on testing on OOD camera viewpoints (i.e train on one camera angle and test on rest.)\\n\\n10. Are the class of Equivariant policies biased to the action space? Would the same set of architectures work for a other action spaces that are common in robotic manipulation such as end-effector pose, joint velocities, joint angle poisitions etc?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper addresses the performance limitations of equivariant policy learning in robotic manipulation when using side-view camera perspectives by identifying a \\\"symmetry mismatch\\\" between side-view inputs and the symmetry assumptions of equivariant networks. To overcome this, the authors propose two simple preprocessing techniques: reprojecting RGBD images to a virtual top-down view using depth data and applying perspective transformations to RGB images to align the ground plane with the image plane. These methods transform side-view images into top-down representations, thereby enhancing the effectiveness of equivariant policy networks.\", \"strengths\": \"-- Tackles a common issue in robotic manipulation regarding viewpoint robustness, making equivariant learning applicable to more realistic camera setups.\\n\\n-- The preprocessing techniques are straightforward, requiring only known camera extrinsics, facilitating easy integration into existing systems.\\n\\n-- Demonstrates consistent performance enhancements across multiple tasks and image modalities, providing strong empirical support for the proposed methods.\", \"weaknesses\": \"-- Utilizes established computer vision techniques (3D reprojection and perspective transformation) without introducing new algorithms or theoretical advancements.\\n\\n-- Lacks thorough comparisons with state-of-the-art point cloud-based equivariant methods, weakening claims of superiority and generalizability.\\n\\n-- Relies solely on simulated environments, lacking validation in real-world settings which is crucial for practical applicability.\\n\\nAfter carefully reading the paper, the reviews and rebuttal discussions, the AC finds despite effectively addressing a practical problem and demonstrating consistent empirical improvements, the paper lacks significant technical innovation, fails to provide comprehensive benchmarking against advanced point cloud-based methods, and does not include real-world experiments. AC agrees with the reviewers on recommending to reject the paper.\", \"additional_comments_on_reviewer_discussion\": \"See the weakness and comments above, there are still remaining concerns from most reviewers.\"}", "{\"summary\": \"The paper proposes a method to improve equivariant policy learning in robotic manipulation tasks where camera views are not ideal (e.g., side views instead of top-down). The authors present two preprocessing techniques:\\n- Reprojection of RGBD images to approximate top-down views by generating point clouds and interpolating missing data.\\n- Perspective transformation of RGB images to map the ground plane onto a top-down view.\\n\\nThese methods enhance performance across different learning tasks and camera angles without additional data or privileged information, making them adaptable to real-world setups. The experiments show improved policy learning outcomes in several robotic tasks by aligning image transformations with physical symmetries in the robot workspace.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper defines a problem of \\\"symmetry mismatch\\\" from non-ideal camera placements in image based equivariant robotic learning. By applying reprojection and perspective transformations to side-view images, it extends the utility of equivariant learning in robotics, enabling its application in more realistic setups.\", \"The authors provide a thorough and well-validated empirical analysis across diverse robotic tasks and modalities (RGB and RGBD), with clear comparisons to multiple baselines. This experimental rigor strongly supports the paper's claims about the effectiveness of the preprocessing techniques.\"], \"weaknesses\": [\"The problem this paper attempts to address may not be a genuine issue. When handling tabletop robotic manipulation tasks and aiming to apply O(2)-equivariant policy learning algorithms, a fundamental assumption is the availability of top-view observations. If only side-view images are accessible, a more natural approach might be to consider non-equivariant policy learning algorithms instead.\", \"The methods proposed in this paper lack originality. Both 3D reprojection and perspective transformation are well-established algorithms in the field of computer vision. This paper merely applies them to a specific scenario\\u2014converting side-view images of tabletop robotic manipulation scenes into top-view images\\u2014to facilitate the use of O(2)-equivariant policy networks. I view these techniques as pre-processing tricks rather than substantive innovations.\", \"The formulation for 3D reprojection in this paper is not entirely realistic. To perform reprojection, RGBD information is required. However, if 3D data is available, it would be more straightforward to use equivariant policy networks based on 3D groups$^{[1,2]}$ (such as SO(3), SE(3), or SIM(3)). This would eliminate the need to address issues arising from mismatched camera viewpoints.\", \"[1] Yang, J., Cao, Z. A., Deng, C., Antonova, R., Song, S., & Bohg, J. (2024). Equibot: Sim (3)-equivariant diffusion policy for generalizable and data efficient learning. arXiv preprint arXiv:2407.01479.\", \"[2] Chen, Y., Tie, C., Wu, R., & Dong, H. (2024). EqvAfford: SE (3) Equivariance for Point-Level Affordance Learning. arXiv preprint arXiv:2408.01953.\"], \"questions\": [\"In the experimental part, the author compares many baselines (equivariant, non-equivariant, 2D, 3D), but does not clearly write out the specific structure of each baseline and the group on which their equivariance properties are defined.\", \"All baseline methods in this paper are based on the same framework (SACfD). To demonstrate the effectiveness of this preprocessing approach in broader scenarios, I believe it would be beneficial to include comparisons with other state-of-the-art methods.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response, but I believe your reply did not address my main concern: the technical contribution of the paper is minimal. Therefore, I will maintain the score I gave.\"}", "{\"comment\": \"I do not think the response of the authors effectively address my concerns.\\n\\nThis article uses two well-known techniques in computer vision as preprocessing for the input of equivariant networks. I think this work lacks innovation and is just a preprocessing technique. At the same time, the author has not verified the applicability range of this technique. Almost all experiments are based on the same algorithm framework (SACfD), and this technique has not been applied in more equivariant learning methods that require image input to verify its effect. In addition, the author has not verified in real-world input whether their proposed technique can improve existing methods. Therefore, combined with the opinions of other reviewers, I think the quality of this article is not sufficient to be accepted by ICLR.\"}", "{\"comment\": \"We thank the reviewer for their helpful comments.\\n\\n> The problem this paper attempts to address may not be a genuine issue\\u2026 a more natural approach might be to consider non-equivariant policy learning algorithms instead.\\n\\nWe directly compare our proposed approach against non-equivariant policy learning algorithms. The non-equivariant baselines perform much worse in terms of performance and sample efficiency (see \\u201cSideview NonEqui\\u201d Figure 5 and Table 1). The non-equivariant methods were trained with data augmentation and still underperformed the equivariant versions. These results were also observed in [1].\\n\\n> The methods proposed in this paper lack originality\\u2026 I view these techniques as pre-processing tricks rather than substantive innovations.\\n\\nWe agree that these techniques are simple and well-known, and we tried to make that clear in the writing. We see this simplicity as a benefit to our approach since it expands the problem settings where equivariant policy learning methods are useful without requiring different sensors or new network architectures. \\n\\n> The formulation for 3D reprojection in this paper is not entirely realistic. To perform reprojection, RGBD information is required. \\n\\nWe believe the reviewer is misunderstanding our work. We only perform 3D projection when RGBD information is available. When it is not available, we perform a perspective transform, which, as we note in the paper, deviates from a 3D reprojection for any visual features that are above the ground plane.\\n\\n>if 3D data is available, it would be more straightforward to use equivariant policy networks based on 3D groups$^{[1,2]}$ (such as SO(3), SE(3), or SIM(3))\\n\\nYou are correct. If we use an SO(3) equivariant network, then the original point cloud (in the sideview camera frame) could be used directly as input. In that setting, we would have to apply pooling at the end to reduce to an SO(2) equivariant representation for the output actions. One downside to this approach is the additional compute. Moving from SO(2) to SO(3)/SE(3) equivariance requires applying additional constraints that slow down training and increase memory. The other downside is that an SO(3)/SE(3) equivariant network cannot resolve the \\u201cgravity\\u201d direction (they are invariant to the input\\u2019s coordinate frame). In many robotic manipulation tasks, the \\u201cgravity\\u201d direction is important for determining the best action. There are ways to re-inject information about gravity, but it would take some experimentation to identify the best approach.\\n\\nWe believe running an experiment would be interesting. We are training a VectorNeuron [1] baseline and will try to add the results by the end of the discussion period.\\n\\n> In the experimental part, the author compares many baselines (equivariant, non-equivariant, 2D, 3D), but does not clearly write out the specific structure of each baseline and the group on which their equivariance properties are defined.\\n\\nWe include the symmetry group used and the specific structure of all networks in Appendix A.4. We refer to this part of the Appendix at the end of Baselines (Section 5.3).\\n\\n> All baseline methods in this paper are based on the same framework (SACfD). To demonstrate the effectiveness of this preprocessing approach in broader scenarios, I believe it would be beneficial to include comparisons with other state-of-the-art methods.\\n\\nIn this paper, we run experiments with SACfD (Figure 5 & 6) and behavior cloning (Table 1 & 2). Do you have a SOTA method in mind that we should run comparisons on?\\n\\n[1] Deng, Congyue, et al. \\\"Vector neurons: A general framework for so (3)-equivariant networks.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\"}", "{\"comment\": \"We thank the reviewer for their helpful comments.\\n\\n> The experiments are conducted only in six simple simulated environments, without any real-world validation.\\n\\nWe agree that real-world validation would strengthen this method. We are not able to provide results for real world tasks during the rebuttal period.\\n\\n> While the proposed methods reduce the performance gap with the oracle top-down view, they do not entirely close it. The occlusion of objects and grippers, especially in cluttered environments, remains an unsolved problem.\\n\\nThis is a good point. Keep in mind that the oracle view is an unrealistic ideal setting, since its very challenging to achieve the oracle view in a real-world setting. We are interested in closing the gap with the oracle further and are open to suggestions for experiments that would shed light on why this gap remains.\\n\\n> In the RGBD setting, how might more sophisticated inpainting or occlusion handling methods (e.g., learned inpainting) improve the performance gap with the oracle? Have the authors experimented with these techniques, and what were the results?\\n\\nThere are many effective inpainting methods available that could be used in place of interpolation. In our simulated results, interpolation worked well, due to the simplicity of the scene. We believe these inpainting methods would be more useful in real-world tasks with more complex backgrounds and objects. \\n\\n> While the experiments are simulated, can the authors elaborate on the challenges and potential modifications required to apply these preprocessing steps in real-world robot learning tasks with physical cameras and hardware?\\n\\nWe have begun real-world experiments in the RGBD setting. The main difference we observed was noisiness in the depth values on the gripper. After reprojection, the appearance of the gripper was very distorted (missing regions or missing entire finger), which we believe made it difficult for the model to learn. One option to help with this is to add a synthetically rendered gripper image to the input (similar to how we do it for the RGB setting).\"}", "{\"comment\": \"We thank the reviewer for their helpful comments.\\n\\n> Related works: I believe a small discussion on point cloud models (in the context of Image reprojection) should also be discussed.\\n\\nWe will add a discussion on point cloud models in the updated version of the paper.\\n\\n> I don't particularly find a difference between Point cloud equi and Reproj. equi in Fig 5. and Table 1. What are the benefits of Reproj. Equi over point cloud equi?\\n\\nThe performance of Reproj Equi and Point Cloud Equi are comparable. In general, point cloud networks are more compute and memory-intensive (more so when enforcing equivariance constraints), so we downsample the input to 1024 or 2048 points. So in settings where the task requires high-resolution observations (like peg-insertion or grasping mug handle), the point cloud model may struggle. In contrast, Reproj Equi uses an image encoder that can handle high-resolution inputs.\\n\\n> Sec 5.6 (Effects of camera angle) needs to also have the PointNet++ baseline (point cloud equi) for the RGB-D plots. Some works have suggested that point cloud RL policies are robust to viewpoint changes\\n\\nWe agree that this would be a good comparison. We will try to add the result by the end of the discussion period.\\n\\n> I would like to see an experiment with DrQ-v2 where image augmentaiton has shown significant sample-efficiency gains and am curious how that performs as compared to an explicit Equivariant policy.\\n\\nImage augmentation is helpful for equivariant and non-equivariant policy learning, as shown by [1]. In our paper, all methods apply random image crops to observations during training as is the case in DrQ-v2. In the paper, we cite this technique as RAD [2], which was a simpler, concurrent work to DrQ (v1).\\n\\n>Does this formulation of having a gripper image generalize to a dextrous manipulation with a non-trivial gripper?\\n\\nThis is a great question. The gripper image is generated with the assumption that the gripper model is known. So generating the gripper image is possible as long as the gripper is fully actuated and composed of rigid parts, regardless of how dexterous the task is. \\n\\n> it would be better if the Fig 11 (from appendix) can be moved/integrated into the main paper.\\n\\nWe agree. We have added the gripper image to Figure 3, which is next to where we introduce the gripper image.\\n\\n> Are the models in Fig 7(a) and 7(b) test-only models are are they trained on individual camera angles?\\n\\nThe models in Figure 7 are tested on the same view angle as they were trained on. It would be interesting to see if the models could generalize to novel view angles (since the projection and perspective transform approximately canonicalize the perceived view to be topdown). \\n\\n> Are the class of Equivariant policies biased to the action space? Would the same set of architectures work for a other action spaces that are common in robotic manipulation such as end-effector pose, joint velocities, joint angle poisitions etc?\\n\\nEquivariant policy networks are specific to an action space. Consider the equivariance equation (Eqn 1), we need to know the action of the group on the output space when we create the equivariant network. If we change the output space (action space), then we need to modify the equivariant constraints at the end of the network accordingly. This is easy for action spaces based on end effector pose or velocity, but difficult for joint space control. For instance, if we apply a 2D transformation to the scene, it is not clear what transformation should be applied to the joint angles to produce a similar action. This would be an interesting direction to pursue in the future since there is growing interest in policy learning in joint space (like in the ALOHA system).\\n\\n[1] Wang, Dian, et al. \\\"The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry.\\\" The Eleventh International Conference on Learning Representations.\\n\\n[2] Laskin, Misha, et al. \\\"Reinforcement learning with augmented data.\\\" Advances in neural information processing systems 33 (2020): 19884-19895.\"}", "{\"summary\": \"This paper addresses the limitations of a certain type of equivariant policy learning in robotic manipulation tasks when using side-view camera perspectives, which cause symmetry mismatches that reduce performance. The authors propose a simple method to transform side-view images into top-down representations, enhancing the performance of equivariant methods. Its effectiveness is demonstrated on RGB and RGBD images.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The problem formulation is quite straightforward. The proposed method is simple, intuitive, and effective.\", \"The discussion on Occluded Regions for RGBD images and Out-of-plane Distortion for RGB images makes the proposed method more practical, giving it the potential to be deployed in real-world settings.\"], \"weaknesses\": [\"Though very simple and effective under the tested scenario, this paper seems more like a small pre-processing module specifically designed for a certain type of SO(2) RL and IL methods. How many equivariant methods could benefit from the proposed method? I would like the authors to discuss this question, and list as many papers as possible.\", \"Lacking real-world experiments. I am concerned whether the proposed method would be effective as well in real-world settings. And since the proposed method is mainly designed to tackle the challenge when deploying cameras in the real world, I think real-world experiments are indispensable.\"], \"questions\": [\"How many equivariant methods could the proposed method benefit?\", \"Would the proposed method also benefit general-purpose robot learning methods such as Diffusion Policy?\", \"It seems that the compared point cloud baseline is using a single-view RGBD image. What if we have access to multi-view RGBD images? Consider the scenario in [1].\", \"Would real-world experiments be conducted?\", \"[1] RiEMann: Near Real-Time SE (3)-Equivariant Robot Manipulation without Point Cloud Segmentation. Gao et al. CoRL'24.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their helpful comments.\\n\\n> How many equivariant methods could benefit from the proposed method? I would like the authors to discuss this question, and list as many papers as possible.\\n\\nThe point of this work is not to improve existing equivariant methods. In fact, we believe existing equivariant methods work very well within structured settings (e.g. those where the symmetric transformation of the observation is well-defined). Instead, we show that image-based equivariant methods can also excel in less structured settings (e.g. those with non top-down camera views) when they incorporate the simple pre-processing steps. Here is a list of robotic manipulation papers that fit into this less structured category, where our work could be beneficial\\n\\n> since the proposed method is mainly designed to tackle the challenge when deploying cameras in the real world, I think real-world experiments are indispensable\\n\\nWe agree that real-world results would strengthen the paper. We are not able to provide these results within the discussion period. \\n\\n> Would the proposed method also benefit general-purpose robot learning methods such as Diffusion Policy?\\n\\nWe believe the proposed method would help any SO(2) equivariant policy learning methods with non-top down images. The processed pre-processing steps align transformations of the input with transformations of the action, which leads to faster learning. As shown by a recent paper [1], SO(2) equivariant diffusion policy is useful for robotic manipulation, and we expect our preprocessing steps could enhance performance with sideview image inputs.\\n\\n> It seems that the compared point cloud baseline is using a single-view RGBD image. What if we have access to multi-view RGBD images? Consider the scenario in [1].\\n\\nThat is a good question. In theory, multiple RGBD images could be fused and reprojected to produce a top-down image. The resulting top-down image would have less occluded regions than the single-view RGBD case, which would boost performance. However, in the multi-view RGBD setting, the equivariant point cloud method is probably a better option since it can process a complete point cloud effective (reprojection collapses the information in the z-direction). We are interested in looking more into the multi-view RGB setting, such as the ALOHA system.\\n\\n[1] Wang, Dian, et al. \\\"Equivariant diffusion policy.\\\" arXiv preprint arXiv:2407.01812 (2024).\"}" ] }
2L7KQ4qbHi
Concept forgetting via label annealing
[ "Subhodip Panda", "Ananda Theertha Suresh", "Atri Guha", "Prathosh AP" ]
The effectiveness of current machine learning models relies on their ability to grasp diverse concepts present in datasets. However, biased and noisy data can inadvertently cause these models to be biased toward certain concepts, undermining their ability to generalize and provide utility. Consequently, modifying a trained model to forget these concepts becomes imperative for their responsible deployment. We refer to this problem as *concept forgetting*. Our goal is to develop techniques for forgetting specific undesired concepts from a pre-trained classification model's prediction. To achieve this goal, we present an algorithm called **L**abel **AN**nealing (**LAN**). This iterative algorithm employs a two-stage method for each iteration. In the first stage, pseudo-labels are assigned to the samples by annealing or redistributing the original labels based on the current iteration's model predictions of all samples in the dataset. During the second stage, the model is fine-tuned on the dataset with pseudo-labels. We illustrate the effectiveness of the proposed algorithms across various models and datasets. Our method reduces *concept violation*, a metric that measures how much the model forgets specific concepts, by about 85.35\% on the MNIST dataset, 73.25\% on the CIFAR-10 dataset, and 69.46\% on the CelebA dataset while maintaining high model accuracy. Our implementation can be found at this following link: \url{https://anonymous.4open.science/r/LAN-141B/}
[ "Concept forgetting", "Privacy", "Bias", "Computer Vision (CV)" ]
https://openreview.net/pdf?id=2L7KQ4qbHi
https://openreview.net/forum?id=2L7KQ4qbHi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xJhe2AiI8W", "sxXO89UmHq", "pG7p8nPKYf", "ozoN2RYpvq", "nZnlvJ7zTa", "h3Q47bL5Gh", "auQ25fb7Iu", "ZD77JlCOWK", "XEBrq9c3uv", "QCCoi1iI6l", "PZ3iot88Hk", "NVp0sF7fnO", "LfvMHctPl7", "Lf0jqVVVzz", "D6nziErZ7B", "C6KkICRkJX", "7tvzS9fwt2", "1GAmMRJTVJ" ], "note_type": [ "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732514254949, 1737597219108, 1729047571557, 1732345077589, 1732493407732, 1730283010506, 1730701288984, 1732799298650, 1730456382265, 1732794968552, 1732355811499, 1732794488619, 1732790973085, 1732607944828, 1732590040755, 1732838745037, 1732340711528, 1732361805040 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_W77M" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_MEHY" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_MEHY" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_W77M" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_4rao" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_W77M" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_aSbg" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_W77M" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_4rao" ], [ "ICLR.cc/2025/Conference/Submission9880/Reviewer_MEHY" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ], [ "ICLR.cc/2025/Conference/Submission9880/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I sincerely appreciate the authors' comprehensive and well-articulated rebuttal aimed at addressing the concerns I raised.\\nThe authors have proposed an intriguing objective, and the manuscript provides supporting arguments for it.\\nHowever, my concern is centered on whether the proposed objective holds significant practical or theoretical value.\\nIf the authors can convincingly address this concern, I would regard this paper as a valuable contribution to the community, presenting a novel problem in the context of concept forgetting.\\nI have outlined below the remaining concerns.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents a new approach to studying concept forgetting, which aims to remove some concepts from pre-trained models while preserving their performance. To achieve this goal, the authors propose an algorithm called Label ANnealing (LAN), which employs a two-stage process to align the distribution of pseudo-labels with the class distribution, as generated by the trained model's predictions. Experimental evaluations on four benchmark datasets \\u2013 MNIST, CIFAR-10, miniImageNet, and CelebA \\u2013 demonstrate that concept violation can be effectively mitigated.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The idea of the paper is good and important for research.\\n2. The example in the introduction is also interesting that \\\"envision a CelebA (Liu et al., 2015) image classifier that heavily relies on background color as a distinguishing feature to classify different celebrities, limiting its ability to generalize effectively\\\".\", \"weaknesses\": \"1. Following the example provided in the introduction, I anticipated an improvement in performance after removing harmful features. Nevertheless, my findings contradict this expectation: despite claims of 'maintaining the model\\u2019s overall performance and generalization ability', I observed a significant drop in performance on all datasets, with a particularly notable 15% decrease on CelebA for the task 'Heavy makeup or not'. This discrepancy suggests that the authors should revisit their method to ensure it meets its stated objectives.\\n2. The concept of 'concept violation' is not rigorous, as it only evaluates model outputs without considering the nuanced effects of concepts within decision-making processes. Even when results appear identical, it is uncertain whether a particular concept has been entirely eliminated or merely masked in some way.\\n3. The alogrithm Label ANnealing is simple.\", \"questions\": \"1. I am not sure I fully understand the experiments. Are examples in forgetting classes removed, and examples in the rest of the classes are used to train and test?\\n2. I suppose the introduction example 'background' is good; I think in experiments, the authors should give the results of the example. Does the method only work with concepts that have labels? If so, this is a strong limitation to the proposed method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We are thankful for your positive comments, feedback, & suggestions. Specific doubts & questions are answered below.\\n1. **Clarity on text & figures**\\n - Apologies. We will improve the figures in the final version. For the text part, we have modified explanation of LAN algorithm in section 4.1. Please take a look and let us know if some other parts need modification.\\n2. **Limited experiments & State-of-the-art Baselines**\\n - We are open to incorporating further experimental suggestions. In current experiments, we cover both binary concept forgetting ($m=2$) & multi-level or non-binary concept forgetting ($m>2$). We have experimented with different classification models such as 2-layer-MLP, Mobinetv2, Densenet-121, Resnet-50 on both lower-dimensional datasets such as MNIST, CIFAR-10 & high-dimensional datasets such as miniImageNet (image dim: $224\\\\times224$), CelebA (image dim: $178\\\\times218$).\\n - We are open to incorporating further baseline suggestions. According to our knowledge, this is the first work that introduces *concept forgetting* property that induces independence from the forgetting feature during its prediction task. Thus we are unaware of any such state-of-the-art baselines for concept forgetting. However, we adapted three state-of-the-art baselines from fairness because these baseline methods also advocate for the independence of prediction and unfair concept features.\\n3. **Difference between concept forgetting & machine unlearning:**\\n - Please note that we have already explained the difference between concept forgetting and machine unlearning in great detail in the Introduction section 1 (Ref. lines 68-95).\\n4. **Meaning of c**\\n - Here $c$ is a particular concept/feature value to be forgotten. The concept targeted for forgetting defined as $\\\\mathcal{C}$ takes multiple values $c \\\\in (0,1,...,m-1)$. Now, for binary concepts ($m=2$) such as *beard*, $c=0$ denotes the absence and $c=1$ denotes the presence of the beard. For non-binary concepts such as *facial hair*, $c \\\\in (0,1,2,3)$ ($m=4$) signifies no facial hair, mustache, beard, and goatee respectively.\\n5. **Need for Pseudo-Labels and Error in Pseudo Labeling:**\\n - Pseudo labels are useful to create a dataset $\\\\widetilde{D}$ where the empirical concept violation is zero. In order to reduce the concept violation, the pre-trained model is fine-tuned on this pseudo-labeled dataset $\\\\widetilde{D}$.\\n - There is no error in pseudo label assignment because the label annealing subroutine in Algorithm 1 is deterministic in the sense that it assigns pseudo labels based upon the current model's prediction on a particular iteration. \\n6. **Removal of classifier head:**\\n - Removing a classifier head doesn't signify the model has forgotten a concept. Further, it is not always possible to remove the classifier head corresponding to the forgetting concept. Suppose in cat vs. dog classification, one wants to forget the concept of background (binary concept with c=0 signifies indoor background & c=1 signifies outdoor background). Now, classifying images as cat vs. dog, there is no classifier head corresponding to the background. Thus making this solution ineffective.\\n7. **Evaluation in weight space**\\n - Due to the high dimensionality of weight space and unexplainable correspondence between input features and the weight, it is hard to evaluate if the forgotten feature is effectively been removed from the weights or not. Thus a modest goal of concept forgetting is to remove the dependence on the forgetting feature from the model's prediction in output space. This dependence is quantified by empirical concept violation and achieving zero concept violation is indicative of achieving concept forgetting.\\n8. **Retraining**\\n - To forget a concept with the method of retraining is limited and not always feasible. In the case of tabular datasets removing the undesired features from the data and retraining the model seems trivial. However, this trivial method is hard and sometimes infeasible in the case of image and text datasets. Due to the high entanglement of the different concepts/features, it is not always possible to extract and remove the undesired features from the dataset. Thus retraining seems infeasible. Therefore this work tries to propose an algorithm that tries to forget concepts with a modest goal of removing the dependence of forgetting features from model's prediction.\\n9. **Unavailability of original data**\\n - In our case, we assume availability of original data.\\n10. **Multi-level concept:**\\n - In multi-level concept forgetting the forgetting concept is non-binary i.e. $\\\\mathcal{C}(z)=c \\\\in (0,1,...,m-1)$ with $m>2$ (Ref. lines 418-419). For example, if the forgetting concept is *facial hair* then it can take multiple values $(0,1,2,3)$ ($m=4$) which signifies no facial hair, mustache, beard, and goatee respectively. \\n\\n\\n**If you are satisfied with our answers please support our work by increasing the rating**\"}", "{\"comment\": \"Thank you for the careful responses.\\n\\n1. I acknowledge that incorporating useful features as concept forgetting can potentially impact test accuracy. Nevertheless, I believe it would be beneficial for the authors to provide explicit examples demonstrating how this proposed method can effectively eliminate or mitigate harmful features. For instance, in Review W77M, the removal of background features from \\\"dogs are often photographed outdoors\\\" should result in improved or comparable performance.\\n\\n2. Since there is currently no evidence supporting the claim that the proposed technique maintains performance, I suggest a more rigorous examination of the concept of forgetting using visualization techniques. This could help us better understand why there is no improvement and identify potential issues with the method, such as negative effects during decision-making in the middle layers, even when the output is zero.\"}", "{\"summary\": \"The author proposes a new issue termed concept forgetting.\\nThe author argues that, to forget a concept, the label proportions should be constant regardless of the concept.\\nThe author proposes an approach in which, when the label distribution varies according to a specific attribute in a pre-trained model, this is directly adjusted before further training.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The author has proposed an intriguing problem.\\nIf concept forgetting is feasible, it may also be possible to remove unwanted information from a pre-trained model.\", \"weaknesses\": \"First, the proposed problem appears to be an ill-posed problem.\\nAccording to the author\\u2019s assertion, the entire dataset must be pristine.\\nIf there is a concept not included in the dataset or if certain concepts are overrepresented, the optimal model for concept forgetting will be defined differently.\\nIn fact, consider the example commonly addressed in debiased classification: in the dog and cat problem, dogs are often photographed outdoors, while cats are typically photographed indoors.\\nIf additional outdoor photos are included, the label for indoor cats would need to be even more frequently replaced with that of dogs in the author\\u2019s algorithm.\\n\\nSecondly, despite the author\\u2019s algorithm being highly intuitive and straightforward, its characteristics are not well explained.\\nThe author replaces explanations of the proposed method with figures and algorithms, which does not aid intuitive understanding.\\nEven concept forgetting is not well explained beyond the measure defined as concept violation.\\nAt the very least, it would be essential to verify whether the author\\u2019s method is beneficial when solving zero-shot classification tasks that align concepts in the trained model.\\n\\nLastly, in the theoretical analysis, the gap between the two terms in the inequality is substantial.\\nFor the theoretical analysis to be meaningful, this gap needs to be minimized; the current gap arises from using the maximum value of the loss.\\nIn the case of cross-entropy loss, the bound is exceedingly large, and when multiplied by the concept violation values observed by the author in Table 1, the upper bound of the curated loss inevitably becomes significantly large.\\nIn fact, it is challenging to identify a clear correlation between the concept violation values and the reduced accuracy in the experimental results.\", \"questions\": \"(Clear problem definition)\\nCan the author explain the purpose of the algorithm with a real-world example? I did not intuitively grasp the goal of concept forgetting. For instance, I am curious about a plausible purpose, such as removing privacy-sensitive information.\\nFurthermore, the issue I mentioned in the weaknesses section, where the optimal solution for concept forgetting changes if the entire dataset changes, indicates that concepts may not be fully removed when a larger, pristine global dataset exists beyond the given dataset. I am curious about the author's assumptions regarding the entire dataset in this context.\\n\\n(Justification of the measure)\\nAdditionally, while concept violation appears to be a reasonable measure, it does not necessarily reflect whether concept forgetting has truly been achieved. Cross-entropy loss is a good measure for classification tasks, but for models trained with techniques like label smoothing, the loss can increase independently of accuracy. Similarly, I believe that concept violation cannot be considered a perfect measure. Since concept violation is a measure introduced by the author, it requires thorough analysis from multiple perspectives; however, in the submitted paper, it is only used as a measure without further analysis. It seems necessary to include a qualitative analysis in the experiments demonstrating that low concept violation indeed addresses the intended purpose of concept forgetting. In addition to the analysis I suggested, any results that can further demonstrate the utility and significance of your concept violation measure would be welcome.\\n\\n(Representation)\\nThe methods for the author\\u2019s algorithm can all be represented by figures and pseudo code. This implies that Section 4.1 is somewhat redundant. Adding insights into each step of the algorithm in the main text would be beneficial. For example, is the sorting in line 4 truly meaningful? What is the reason for selecting the next label deterministically in line 9? What is an adequate range for E? Addressing questions like these would enable a deeper understanding of the author\\u2019s algorithm.\\nLastly, the author\\u2019s theoretical analysis does not provide much help in interpreting the experimental results. Is it possible to define a tighter boundary under specific conditions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"To enhance the safety and responsibility of machine learning, this paper introduces a new task, concept forgetting. To achieve the goal of forgetting specific concepts while retaining the general ability of the original model, authors develop an iterative two-stage algorithm. The core idea of the algorithm is to ensure zero concept-violation on the newly created dataset by redistribution and relabeling.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper proposes a novel and interesting problem referred to as concept forgetting. The task is set to forget a specific undesired concept without degrading the general ability. It is similar to the opposite counterpart of catastrophic forgetting but has not been well studied.\", \"The coherent text and the smooth transitions strengthened the readability of this paper.\"], \"weaknesses\": [\"It\\u2019s difficult to understand the explanation of Algorithm 1, eg. in line 311 to line 315.\", \"As shown in Table 1, there is still an obvious reduction in test accuracy. I recommend more analysis of the reasons.\"], \"questions\": [\"I have doubts about whether the algorithm has achieved a good experience effect. Firstly, it is because of the lack of enough competitors. Secondly, it is about the trade-off between concept violation and accuracy: if a concept is forgotten, the network should theoretically achieve better performance on other concepts.\", \"Have you considered the trade-offs between increasing the number of iterations (E) and maintaining model accuracy?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Unfortunately, despite the authors' kind responses, most of my concerns remain unresolved. Therefore, I will maintain my initial rating.\"}", "{\"summary\": \"The paper proposes a novel approach for concept forgetting in deep neural networks. For this purpose, they introduce a two-stage iterative algorithm called Label Annealing (LAN). In the first stage, pseudo-labels are assigned to the samples by annealing or redistributing the original labels based on the current iteration\\u2019s model predictions. In the second stage, the model is fine-tuned on the dataset with pseudo-labels. They also introduce a novel metric called 'concept violation' that measures how much the model forgets a specific concept. The proposed algorithm has been validated across various models and datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses a very relevant topic nowadays related with data privacy, which is represented by machine unlearning\", \"The paper presents a novel approach for concept forgetting in deep neural networks\", \"The related work covers most of the relevant paper in the field\"], \"weaknesses\": [\"the paper is difficult to read, the clarity of both text and figures should be significantly improved\", \"the experimental validation is limited and not convincing. The authors compare their approach against 3 baselines, and none of them is related with concept forgetting\"], \"questions\": [\"Here are my concerns:\", \"The differences between concept forgetting and machine unlearning are mentioned at the end of section 2. The authors should clarify this differences much earlier, in the introduction.\", \"Regarding definition 1: 'c' represents a class label or a feature?\", \"Regarding LAN algorithm: Why do you need to assign pseudo-labels? How do you deal with errors in pseudo-label assignment? Why don't just remove the classifier head corresponding to the removed concept?\", \"The problem of concept forgetting relies not only in retraining the classifier. The knowledge associated with it is implicitly embedded into the network's weights. How do you remove the information related with the concept being forgotten from the network's weights? I have not seen any discussion about this. If you retrain the network with the remaining data (after extracting the concept to forget), then this solution is trivial. What if the original data (used to train initially the network) is no longer available?\", \"Section 5.5: What means multi-level concept forgetting? Do you assume data is multi-labeled?\", \"In the experimental results, compare your approach against some methods from the current state of the art.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Clarification by Authors\", \"comment\": \"**Maintain performance:** The goal is to reduce concept violation (ideally to zero) while maintaining the model's performance. Concept forgetting is a constrained learning problem where the goal of not only to maintain good performance but also to satisfy the constraint of low concept violation. Effectively one can image a restricted hypothesis space where the hypothesis must satisfy low concept violation. Now if you reduce the constraint i.e. concept violation increases the accuracy should increase because hypothesis space becomes large resulting us a better hypothesis with high accuracy. As we reduce the concept violation to zero this performance drop is expected.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We are thankful for your positive comments, feedback, & suggestions. Specific doubts & questions are answered below.\\n1. **Accuracy improvement after forgetting:**\\n - Achieving better performance after concept forgetting depends on the usefulness of the feature targeted for forgetting. Inclusion of any biased/harmful concept/feature might increase the classification accuracy. However, including such features are undesired. In our CelebA experiments, classifying images as young vs not-young facial hair is a useful feature because young people generally don't have facial hair. Thus forgetting the facial hair concept will reduce the classification accuracy. However, our goal is to show that LAN is effective in reducing concept violation of facial hair while maintaining a reasonable amount of accuracy intact. \\n - This reduction of test accuracy is indeed expected as we are learning with further constraints of low concept violation. This type of phenomenon is also observed in fairness literature[1]. Further due to the effect of *catastrophic forgetting* [1,2,3] where adapting a model for new tasks can significantly degrade performance. In our case, the older task of retaining the pre-trained model's performance is traded off with the newer task of reducing concept violation.\\n2. **Concept violation is unaware of overall decision-making:**\\n - Due to the hardness of mathematically defining a concept/feature's effects on model's overall decision-making process, the mathematical definition of concept violation is only restricted to the model's end predictions. Our motivation for this restriction is due to the fact that in many practical cases, the most important criterion is model's output predictions. \\n - Concepts are not masked at the output. With the goal of making concept violation zero, we fine-tune the initial model's parameters to generate the forgotten model effectively eliminating the concepts from the model. \\n3. **LAN is simple**\\n - We thank the reviewer for pointing this out! In fact, we believe this is more of an advantage than weakness. Since the method is simple, it is easy to use.\\n4. **Explanation of Experiments:**\\n - No we don't remove forget class during training and testing. The experiments are mainly divided into two categories: binary concept forgetting ($m=2$) & multi-level or non-binary concept forgetting ($m>2$). In case of binary concept forgetting (Ref. lines 381-385 and Table 1) e.g. MNIST digit classification problem, the objective is to forget a particular class digit concept e.g. class-3 data. Thus here $c=0$ represents concepts of non-digit-3 data and $c=1$ represents concepts of digit-3 data. Similarly, for multi-level concept forgetting (Ref. lines 433-436 and Table-2) while classifying CelebA images as young vs. not-young, we aim to forget subtle feature concepts such as hair color ($c=0$ represents black hair, $c=1$ represents red hair, $c=2$ represents blue hair, $c=3$ represents grey hair) from the pre-trained models. It can be seen from Table 1 and Table 2 that the LAN method is effective in reducing concept violation while maintaining a reasonable amount of model accuracy. Further, it achieves better concept violation vs. accuracy trade-off (Reference: Figure 3) than other baseline methods making it a better choice for concept forgetting.\\n5. **Experiments on the 'background' examples**\\n - Thanks for your suggestion. Due to lack of dataset with labeled background features we could not include such examples in the experiments. However, we will further try to incorporate experimental results for forgetting 'background' concepts from a pre-trained model in the final version.\\n6. **Concepts with labels**\\n - Even though current experiments show forgetting concepts that are labeled, our methodology is generic and applicable to concepts without labels. However in the case of concepts that are not labeled, one needs to manually identify the examples in the dataset where the concepts are present. For example, in the case of cat vs. dog classification if one wants to forget the 'background' concept ($c=0$ for indoor background and $c=1$ for outdoor background) where there is no indication of the background type for each example it becomes a harder task to find these two types of background concept. After identifying such concepts, LAN is effective in forgetting the concept.\\n\\n\\n[1] Lowy et al. A stochastic optimization framework for fair risk minimization. arXiv, 2021\\n\\n[2] Goodfellow et al. An empirical investigation of catastrophic forgetting in gradient-based neural network. In Proc. of ICML, 2014\\n\\n[3] Kirkpatrick et al. Overcoming catastrophic forgetting in neural networks. Pre-print arXiv, 2017\\n\\n[4] Ginart et al. Making ai forget you: Data deletion in machine learning. In Proc. of NeurIPS, 2019.\\n\\n**If you are satisfied with our answers, please support our work by increasing the rating**\"}", "{\"title\": \"Further Clarification by Authors\", \"comment\": [\"Thanks for your suggestions.\", \"1. **Meaning of the objective:**\", \"This work has a clear goal: *forget concepts that are undesired without hurting the model's performance*. Concept violation which is the metric for concept forgetting is mathematically well defined. Thus the goal is to **reduce concept violation (ideally to zero) while maintaining the model's performance.**\", \"Yes we are confident that reducing concept violation while maintaining model performance is the true goal for concept forgetting. Please note that as we previously stated, we assume that we have a pre-trained model which has high accuracy. Now to answer your question, if the model's output remains constant regardless of the input the pre-trained model has very low accuracy. However in this case model gives output irrespective of input meaning it is already independent of any concept/feature present in the input. Thus no need to forget the concept in the first place. Please also note that these kinds of scenarios do not generally happen. Assuming a pre-trained model that has some dependence on input features is common and that is the primary motivation for the definition of concept neutral and concept violation.\", \"*Correlation between concept violation and maintaining model performance:* concept forgetting is a constrained learning problem where the goal of not only to maintain good performance but also to satisfy the constraint of low concept violation. Effectively one can image a restricted hypothesis space where the hypothesis must satisfy low concept violation. Now if you reduce the constraint i.e. concept violation increases the accuracy should increase because hypothesis space becomes large resulting us a better hypothesis with high accuracy. Thus there's a clear positive correlation which is also visible in the experiments.\", \"2. **Assumption of Dataset:**\", \"The definition of concept violation depends on the prediction of the model on the global dataset. Nothing further is assumed about the global dataset.\", \"3. **Representation:**\", \"As stated in the rebuttal we have already incorporated these changes in Section 4.1. The insights of line-4 are given in lines 312-313. The insights of line line-9 are given in lines 313-314. The insights for the values of E are given in lines 321-323.\", \"4. **Motivation and Practical applicability:**\", \"Our motivation for the proposed method lies in making model predictions independent of the concepts. In this case, we assume when a model forgets a concept its prediction is independent of the concept.\", \"LAN can be applied to a wide range of concept-forgetting cases where we have access to the original dataset.\"]}", "{\"title\": \"Further Clarification by Authors\", \"comment\": \"1. The motivation of this work is clear: forget concepts that are undesired without hurting the model's performance. Please note that the concept forgetting is now a constrained learning problem where the constraint is to reduce the concept violation. The addition of this low-concept violation constraint reduces model accuracy (effectively reducing the hypothesis space where concept violation is low). The starting point of concept forgetting was to reduce concept forgetting without much hurting the model performance. Experimental evidence suggests that LAN is effective in achieving this goal.\\n\\n2. Selecting samples unrelated to the forgetting concept is not feasible. In case of gender concept forgetting, if the dataset contains male and female images all samples have gender concepts. LAN is computationally inexpensive because of its effectivity in one-iteration (E=1) only. \\n\\n3. Further experimental suggestions are welcome. The goal of this work is to reduce concept violation without much hurting model's performance. Current experiments are realistic e.g. forget gender concept while determining attractiveness or forgetting hair color in determining young vs not. young. The current comparison is done on three state-of-the-art baselines. The goal of current experiments is to establish the effectiveness of LAN in various concept-forgetting scenarios.\"}", "{\"comment\": \"1) The Meaning of the Objective\\n\\nThe authors claim that the problem they define is mathematically well-defined through the proposed objective. However, I find this assertion difficult to accept.\\n\\nThe authors' motivation is clearly articulated in line 62: to maintain performance while forgetting a concept. Are the authors confident that minimizing the proposed objective is equivalent to achieving this goal?\\nIn fact, if the model's output remains constant regardless of the input, the concept violation is minimized. Thus, the objective appears to be merely a metric for quantifying the extent of concept forgetting, rather than a rigorous mathematical model of the authors' motivation.\\n\\nIf there is a misunderstanding on my part regarding the proposed objective, I would appreciate further clarification. Additionally, I request a detailed explanation of the correlation between minimizing the objective and maintaining model performance.\\n\\n2) Limitations of Concept Neutrality\\n\\nAs I outlined in my first review, the proposed objective is defined as a loss function that enforces the prediction distribution of the global dataset to be similar to that of the local (concept-specific) datasets, making it inherently dependent on the overall dataset.\\nTo strengthen the robustness of the proposed problem, the authors need to establish mathematical (or at least semantic) assumptions about the overall dataset.\\n\\nFor example, in the dog-and-cat problem I mentioned, the dog and outdoor concepts are strongly correlated. In such cases, if the overall dataset contains significantly more dog samples than cat samples, the proposed method may relabel most of the cat samples in the indoor concept as dogs. Ultimately, the model might be trained to output only \\\"dog.\\\" Contrary to the authors' claim that Theorem 1 is meaningful, in this scenario, even if concept violation converges to 0, the accuracy will inevitably drop significantly. The authors should recognize that cross-entropy has no upper bound.\\nThis scenario is likely not what the authors intended.\\n\\nI recommend that the authors introduce minimal assumptions about the overall dataset to prevent such situations. For instance, in the proof of the theorem, the authors leverage the bounds of the loss function, and these bounds could be tightened through mathematical assumptions about the overall dataset.\\nAs I mentioned in my first review, the current gap is substantial. To reiterate, in a scenario where the loss has no upper bound, the convergence of concept violation to 0 is not meaningful.\\nClearly defining the conditions the authors consider would likely address this concern.\\n\\n3) Representation\\n\\nI find the authors' rebuttal on this issue unconvincing. At a minimum, the authors should provide insights into line 4, line 9, and E (more informative than line 339) as mentioned in my first review. Without this, I cannot trust that the authors will adequately address the representation issue.\\n\\n4) Other Limitations\\n\\nI remain unconvinced by the rebuttals addressing concerns about the motivation and practical applicability of the proposed method, which other reviewers have also flagged. I will refer to the authors' responses to the other reviewers' concerns when determining my final evaluation score.\"}", "{\"comment\": \"Thank you for your detailed response. The authors have revised the presentation regarding Algorithm 1 and considered the trade-offs at higher values. However, three key concerns remain unaddressed:\\n1. Limited Motivation. Concept forgetting is proposed as an interesting technique to improve performance by removing undesired concepts. However, the experimental results show consistent degradation in performance across all settings. Since the authors consider this outcome to be expected, it is difficult to support the strarting point of this method.\\n2. Practicality of the Algorithm. LAN attempts to recreate a dataset through multiple iterations of assigning pseudo-labels, which is computationally expensive. Why not simply select samples that are unrelated to the forgotten concept, thus reducing the need for such a costly procedure?\\n3. Unconvincing Experimental Results. The paper's goal is ambiguous. If the aim is not just to reduce empirical concept violations, but to propose a new solution in an area with limited competitiors, then it is crucial to demonstrate the method\\u2019s effectiveness in a practical or realistic scenario.\\n\\nGiven these points, I am lowering my initial score.\"}", "{\"comment\": \"Again, I wish the authors provided examples whose concept violation negatively affects the performance, and removing them will improve the performance. The authors should not focus on special causes where concept violation increases the accuracy. This may be a dataset problem.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We are thankful for your positive comments, feedback, and suggestions. Specific doubts and questions are answered below.\\n\\n1. **Explanation of Algorithm-1:**\\n - Apologies. We have further clarified the explanation of the algorithm in lines 311-315 (section 4.1). Please take a look.\\n\\n2. **Reduction in Test accuracy:**\\n - This reduction of test accuracy is indeed expected as we are learning with further constraints of low concept violation. This type of phenomenon is also observed in fairness literature[1] where the incorporation of additional constraints reduces the model's performance. \\n - Further, this phenomenon of lower accuracy can be explained by the effect of *catastrophic forgetting* [2,3,4] where adapting a model for new tasks can significantly degrade performance. In our case, the older task of retaining the pre-trained model's performance is traded off with the newer task of reducing concept violation. (Reference: lines 54-59). \\n\\n3. **Lack of enough competitors:**\\n - We are open to incorporating further baselines. Please let us know. According to our knowledge, this is the first work that introduces *concept forgetting* as a property of the forgotten model to induce independence from the forgetting feature during its prediction task. Thus we are unaware of such baseline methods for concept forgetting. However, for comparative evaluation, we adopt three state-of-the-art baselines from fairness because these baseline methods also advocate for the independence of prediction and unfair concept features. Thus, these baselines are included (Reference: lines 370-377). \\n\\n 4. **Better performance after concept forgetting** \\n - Achieving better performance after concept forgetting depends on the usefulness of the feature targeted for forgetting. Any biased/harmful feature can be useful for certain prediction tasks i.e. inclusion of such a feature might increase the classification accuracy. However, including such features are undesired. For example, suppose we are learning a model to predict whether a person should get a bank loan or not. Such a model should not depend on the gender or race of the person. However, it is possible that the machine learning model might inadvertently use these features to make better predictions (Reference: lines 39-42). Similarly, in our experimental settings (Reference: Table 2), classifying images as young vs not-young facial hair is a useful feature because young people generally don't have facial hair. Thus forgetting the facial hair concept will reduce the classification accuracy. However, our goal is to reduce the dependence (concept violation) on the facial hair concept while maintaining a reasonable amount of accuracy intact. \\n\\n5. **Trade-off at higher values of E:**\\n - Thanks for this suggestion. In Figure 4 (Reference: Section 5.6 Ablation studies lines 485-504), we demonstrate the effectiveness of the *LAN* algorithm over multiple iterations (E=2, E=4). As E increases at higher accuracy regions, the concept violation further decreases for the same accuracy value making the trade-off plot flatter. This indicates as we increase the values of E, a better trade-off between concept violation and model accuracy is achieved. \\n\\n[1] Lowy et al. A stochastic optimization framework for fair risk minimization. arXiv, 2021\\n\\n[2] Goodfellow et al. An empirical investigation of catastrophic forgetting in gradient-based neural network. In Proc. of International Conference on Learning Representations, 2014\\n\\n[3] Kirkpatrick et al. Overcoming catastrophic forgetting in neural networks. Pre-print arXiv, 2017\\n\\n[4] Ginart et al. Making ai forget you: Data deletion in machine learning. In Proc. of NeurIPS, 2019.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": [\"1. **Ill-posed Problem**\", \"We strongly refute this claim. The problem is mathematically well-defined. We propose a clear definition of concept forgetting by defining *concept neutral* & propose *concept violation* metric that measures the extent of concept forgetting. Now, the goal is to reduce the concept violation while retaining the model's accuracy.\", \"2. **Pristine Dataset & Assumption on dataset**\", \"This point is unclear - please further clarify. We don't assume anything about the dataset. The concept of forgetting is defined by the model's prediction on the dataset. All we assume is that we have a pre-trained model with good accuracy.\", \"3. **Cat vs. Dog classification**\", \"This point is also unclear - please further clarify. Here the task will be to classify images as dog vs. cat while forgetting the 'background' concept. Thus $c=0$ denotes indoor background & $c=1$ denotes outdoor background. Now LAN algorithm will try to reduce empirical concept violation to zero in $\\\\mathcal{D}_c$ for each $c \\\\in (0,1)$ and then fine-tune the pre-trained model to produce the forgotten model.\", \"4. **Characteristics of LAN**\", \"Apart from the LAN algorithm's intuitive explanation using the figure & detailed algorithmic insights for the algorithm (Ref. Sec 4.1), detailed theoretical characteristics of the label annealing subroutine in Algorithm 1 are well analyzed in Lemma-1 (please look into Appendix section A). From this lemma, it can be seen that the total no of labels changed by the LAN algorithm is in the order of $O(n \\\\cdot \\\\hat{V}(\\\\theta^*,\\\\mathcal{C},{\\\\mathcal{D}}))$. Now for the second stage (Algorithm 2), we provide a theoretical upper bound on the accuracy of the forgotten model.\", \"5. **Concept forgetting is not well explained and Analysis of concept violation**\", \"Concept forgetting is a broader term and one of the goals of our work is to mathematically define concept forgetting. Our definition of forgetting is motivated by the fact that in the case of humans if one forgets a concept, the forgotten concept doesn't affect one's decision-making. Thus concept forgetting within the context of machine learning entails ensuring that a model's predictions become entirely independent of the targeted forgetting concept. With this motivation, we propose the definitions of *concept neutral* & *concept violation* to characterize concept forgetting mathematically.\", \"Please note that the goal of *concept forgetting* is to achieve zero concept violation while retaining a reasonable amount of accuracy. Experimental evidence suggests that LAN achieves these objectives. Please further clarify what sort of qualitative analysis is required.\", \"6. **Zero-shot classification tasks**\", \"LAN assumes access to the original dataset to produce the forgotten model and thus needs samples from the original dataset (not a zero-shot case).\", \"7. **Theoretical Gap:**\", \"The upper bound on Theorem-1 can be large because of using the maximum value of the loss. This bound is represented in generic terms i.e. without any further assumption on the functional form of loss. However, we refute the claim that this upper bound is very loose. In particular cases, this bound can be well attained e.g. if the original concept violation is close to zero, then labels changed by the label annealing sub-routine in Algorithm-1 is zero, making the loss of the forgotten model (in expectation over SGD steps of algorithm 2) is the same as the loss of the original model.\", \"8. **Correlation between concept violation and accuracy:**\", \"There is a clear positive correlation between concept violation and accuracy. As accuracy increases so does the concept violation marking a clear trade-off between these two metrics (Ref. lines 410-412 and Figure 3).\", \"9. **Real-world Usecases**\", \"To ensure fairness and accountability, it is crucial to forget these biased/undesired concepts from trained models. Further, concept forgetting can enhance domain generalization. For instance, a CelebA classifier might over-rely on background color, hindering its ability to generalize. Thus forgetting background concepts is useful.\", \"10. **Goal of concept forgetting:**\", \"The goal of concept forgetting is to reduce concept violation (ideally zero) while retaining model's initial performance. Due to the hardness of removing features from a model we try to mathematically define concept forgetting and propose concept violation as a metric for concept forgetting. Also, concept forgetting should be computationally inexpensive (Ref. lines 219-232).\", \"11. **Intuitive Understanding & Insights on the algorithm:**\", \"Thanks for acknowledging that LAN is highly intuitive and straightforward. The figures and algorithms aid this intuitive understanding.\", \"We have modified the explanation of LAN algorithm in section 4.1 with insights for each steps. Please take a look.\", \"**If you are satisfied with our answers, please support our work by increasing the rating.**\"]}" ] }
2L4PTJO8VQ
Descent with Misaligned Gradients and Applications to Hidden Convexity
[ "Aditya Bhaskara", "Ashok Cutkosky", "Ravi Kumar", "Manish Purohit" ]
We consider the problem of minimizing a convex objective given access to an oracle that outputs "misaligned" stochastic gradients, where the expected value of the output is guaranteed to be correlated with, but not necessarily equal to the true gradient of the objective. In the case where the misalignment (or bias) of the oracle changes slowly, we obtain an optimization algorithm that achieves the optimum iteration complexity of $\tilde O(\epsilon^{-2})$; for the more general case where the changes need not be slow, we obtain an algorithm with $\tilde O(\epsilon^{-3})$ iteration complexity. As an application of our framework, we consider optimization problems with a "hidden convexity" property, and obtain an algorithm with $O(\epsilon^{-3})$ iteration complexity.
[ "optimization", "gradient descent", "hidden convexity" ]
Accept (Poster)
https://openreview.net/pdf?id=2L4PTJO8VQ
https://openreview.net/forum?id=2L4PTJO8VQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vl84ASDZnm", "qcKH5MAkV4", "qLUhTDttf2", "nRGBTVY2t5", "m6T6uGfIuu", "lw783FjikM", "WqyvrRZUHL", "W2wEHK3obj", "UFmUNo3Hhx", "OoLyGV0Evo", "GpzV2ZIKun", "EyQE01ytM4", "9sI0YQzy9N", "38cKAGgEaj", "0hS0Fcq5IO" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_review", "official_comment", "meta_review" ], "note_created": [ 1731990790851, 1731990479454, 1731106023800, 1732310367217, 1732206926714, 1733177625590, 1733158471147, 1731990600501, 1730747219760, 1730067103445, 1731990703255, 1737523727551, 1730607825529, 1732503716164, 1733761651365 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5828/Authors" ], [ "ICLR.cc/2025/Conference/Submission5828/Authors" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_SpNX" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_HMLj" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_hxRG" ], [ "ICLR.cc/2025/Conference/Submission5828/Authors" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_SpNX" ], [ "ICLR.cc/2025/Conference/Submission5828/Authors" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_hxRG" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_HMLj" ], [ "ICLR.cc/2025/Conference/Submission5828/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_7H59" ], [ "ICLR.cc/2025/Conference/Submission5828/Reviewer_7H59" ], [ "ICLR.cc/2025/Conference/Submission5828/Area_Chair_MGer" ] ], "structured_content_str": [ "{\"title\": \"reply\", \"comment\": \"Thank you for your positive review! Below we address your questions:\\n\\n1. The update in line 6 is essentially performing a projection. It\\u2019s possible that replacing this line with an ordinary projection to the sphere of radius $D$ would yield a similar final result, but it is not clear how to show such a thing. Instead, line 6 adds a correction to the update that still guarantees that the iterates do not grow too large (similar to projections), but can be integrated more easily into the calculations.\\n2. Yes, strong convexity should help in these cases. However, notice that unlike in standard convex optimization settings, we cannot simply regularize a non-strongly convex objective to become strongly convex because this would eliminate the correlation property and it is not clear what effect this would have on the analysis.\\n3. This is a nice idea and thank you for the suggestion, but unfortunately, it does not seem to help. Notice that the *total number of gradient evaluations* is $N=\\\\sum_{t=1}^T B_t$, so with $B_t=\\\\Theta(t)$, we\\u2019d have $N=\\\\Theta(T^2)$. Thus, a convergence bound of $O(1/\\\\sqrt{T})$ is actually $O(1/N^{1/4})$, which is worse than our $O(1/N^{1/3})$ result. Please let us know if we are missing something here!\"}", "{\"title\": \"reply\", \"comment\": \"Thank you for your work reviewing our paper! Below we answer your questions:\\n\\n1. Thank you for pointing this out. Fortunately, the boundedness of the biased gradients holds in practice on many neural network tasks. See, e.g., the recent work of [Defazio et al.,](https://arxiv.org/pdf/2310.07831) where in Figure 3 they show that the assumption holds for commonly-studied models such as wide ResNet, IWSLT14, GPT, RoBERTa, DLRM, MRI, ViT, RCNN. (See also Figure 3 in [Zhao et al., ICML 2022](https://proceedings.mlr.press/v162/zhao22i/zhao22i.pdf) and Figure 3 in [Xiong et al., ICML 2020](https://arxiv.org/pdf/2002.04745).) Note that even with this assumption, our analysis turns out to be rather nontrivial. Removing the boundedness assumption from our algorithm/analysis is an interesting open question and we will mention it in the revision. \\n\\n2. The setting of Ajalloeian & Stich 2020 still requires that the bias is \\u201csmaller\\u201d than the true gradient (their $m\\\\le 1$ condition in Assumption 4). While our setting 3 also requires the bias to vanish as the gradient approaches zero, it does *not* require the bias to be smaller than the gradient. This is a significant difference that makes analysis much more complicated because intuitively we cannot rely on the \\u201csign\\u201d of the biased gradient to tell us the sign of the true gradient. Moreover, their bounds are worse: the relevant comparison is their Theorem 4, which implies convergence in gradient norm squared at a rate of $O(1/\\\\epsilon^2)$, which implies function value convergence at a rate $O(1/\\\\epsilon^4)$.\\n\\nIn contrast, in Section 4, our setting explicitly allows the bias to *not* vanish when we approach a solution, so it is unlikely that the condition in Ajalloeian & Stich 2020 would imply our assumptions. We consider this to be an interesting feature of the misalignment assumption: it provides a natural way to model non-vanishing bias that nevertheless allows for asymptotic convergence guarantees.\\n\\nThank you for suggesting these detailed comparisons - we will add them to the discussion in the paper!\"}", "{\"summary\": \"This paper focuses on the case where the stochastic oracle produces feedback which is not necessarily unbiased. More precisely, it introduces the notion of misaligned stochastic gradients in order to capture the lack unbiasedness in several practical scenarios. To that end, the authors test their theoretical machinery for the optimization problems with hidden convexity (also studied in Sakos 2024 and references therein) and provide an algorithmic method which exhibit $\\\\mathcal{O}(\\\\varepsilon^{-3})$ iteration complexity.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is very well written and easy to follow. Moreover, the mathematical analysis is sound and clear as far I have checked. The idea of misaligned stochastic vectors is quite intuitive and as far as my knowledge goes it paves the way for dealing with a useful practical methodology for structured biased stochastic gradients.\", \"weaknesses\": \"Concerning this paper I have two main concerns/questions:\\n\\n1. The almost sure boundedness of the biased gradients seems to be a quite restrictive statistical assumption. As far as my knowledge this type of assumption is usually used in methods which are run with adagrad-type step-sizes (see for example Levy 2017). Thus, my question is two-fold: Does this statistical assumption hold in practice and secondly do the authors believe that it is an artefact of the analysis or the method in order to overcome it ?\\n\\n2. The paper lacks a numerical comparison with other methods which consider biased gradients like the Stich 2020 paper. My question concerns the fact that the compression scheme presented in the said paper seems to cover the case of an \\\"relative bias\\\" (an analogy to Polyak's relative noise) in the sense that the bias vanishes when we approach a solution. To that end, some simple calculations may show that under this condition the second assumption in oracle & assumptions may be recovered. So, I think that a more thorough discussion is needed.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors\", \"comment\": \"1. I understand it is a projection but I don't understand an intuitive reason for that particular choice of update. It does not seem close enough to just a projection on the sphere.\\n\\n2. I agree, we cannot add a strongly convex regularization or something similar. However, if it were known that the underlying function is strongly convex to begin with, then what is the benefit in terms of convergence guarantees?\\n\\n3. I see what happens. I think I mistook $T$ for $N$. Another thing that might work is to use an epoch based approach where the batch size is kept constant throughout the epoch and doubled every epoch. The advantage in doubling is that the number of changes becomes $\\\\log T$ as opposed to $T$, which might prove to be helpful. In my experience with similar analyses where I have run into the same issue, it seems like exponential doubling can often be quite helpful.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thanks to the authors for their clarifications. I'll keep my score.\"}", "{\"title\": \"reply\", \"comment\": \"1. Taking a look at Lemma 4.1, the idea is that we first eliminate the component of $h_t$ that is parallel to $x_t$ if that component is pushing the update in the \\u201cwrong direction\\u201d (i.e., away from the origin). This will keep $x_t$ from getting too much bigger, but still even a purely perpendicular update results in a small increase in norm. So, we add a small $-\\\\eta^2 x_t/\\\\|x_t\\\\|^2$ update to counter this effect. This gives us a concrete formula for the update that keeps the norm from growing, but also because the update has a somewhat simple formula, we can argue is still essentially correlated with the true gradient and so allows for easier later analysis. Thanks for asking - we will add this explanation in the revision.\\n\\n2. It\\u2019s actually easier to go from analysis for smooth losses (such as ours) to analysis for strongly-convex losses in general. The classical approach uses the fact that the distance-to-optimality-squared is less than the function gap. This shows that if $\\\\|x_1-x_\\\\star\\\\|\\\\le 2^{-k}$, then after $N=O(2^{3k}/\\\\mu^{3})$ gradient evaluations, we can ensure $\\\\mathbb{E}[\\\\|x_T-x_\\\\star\\\\|]\\\\le 2^{-k+1}$. Repeating this for $K$ epochs costs $M=2^{3K}/\\\\mu^{3}$ iterations and yields a distance of $O(2^{-K})$, which implies a suboptimality gap of $O(2^{-2K})=O(1/(\\\\mu M)^{2/3})$ (since the loss is smooth). This is not as good as the $O(1/\\\\mu M)$ possible with standard unbiased gradient estimates, but satisfies the intuition that strong-convexity is helpful for optimization. It is incomparable with the $O(1/\\\\mu^2 M)$ achieved by Demidovich et al 2023: the dependency on $M$ is worse, but the dependency on $\\\\mu$ is better. Thanks for suggesting this discussion!\\n\\n3. Epochs and doubling: thanks for the suggestion; we did try similar approaches. For example, our current bounds can also be obtained by considering \\\\log T epochs, where in epoch $j$, the learning rate is ~ 1/2^j and the batch size is 2^{2j}. This is intuitively also why we have a \\\\log T in our bound. While it is, of course, possible that we missed some parameter setting, we believe improving over T^{1/3} will require a new idea, either a new potential or something algorithmically novel.\"}", "{\"title\": \"Thank you for the reply\", \"comment\": \"I thank the authors for their responses. My concerns are clarified therefore I am willing to keep my score.\"}", "{\"title\": \"reply\", \"comment\": \"Thank you very much for your comments; they will be very helpful in improving our work!\\n\\nRegarding the use of misaligned gradients, we believe that there is a misunderstanding. In all three sections of the paper, we consider misaligned gradients. Specifically, in\\n* Section 3, we assume that the expected gradient is obtained by multiplying the true gradient with a PSD matrix (unobserved); further, these matrices do not change much over time. \\n* Section 4, we consider a more general setup, where the expected gradient is simply assumed to be correlated with the true gradient. We also require a lower bound on the norm of the expected gradient.\\n* Section 5, we consider the setting of hidden convexity \\u2013 here we wish to minimize a (non-convex) function f that can be expressed as a convex function after a non-linear transformation, i.e., $ f(x) = C(P(x))$ where $C$ is convex and $P$ is a non-linear coordinate transform. Here, we obtain an unbiased estimate of the gradient of $f$; however, due to the transformation, this can be viewed as a misaligned gradient for $C$.\\n\\nIn the revision we will clarify our setting to avoid this confusion. \\n\\nRegarding the questions, \\n1. It\\u2019s possible that there is a connection here, but it is certainly not obvious as there are several key differences in our usage. For example, we do not use the matrices as explicit preconditioners to form updates because we never actually observe these matrices at all!\\n2. Note that Beznosikov et al. assume strongly-convex losses, which we do not assume. As discussed in our paper, the standard conversion from non-strongly convex to strongly-convex via regularization does not obviously apply here. Moreover, it appears that the assumptions of Beznosikov et al. are actually stronger than ours - their Definition 2 in fact forces gradient estimates that decay to zero when the true gradient goes to zero. Our setting explicitly does *not* require this and so is able to obtain asymptotic convergence even in the presence of persistent bias. Thank you for bringing this to our attention - we will incorporate this discussion into the revision.\\n3. The use of hidden convexity by Shu, Ramachandran, and Wang is very different from ours. They consider a convex objective with a constraint that is non-convex, but for which it is possible to replace the constraint with a convex constraint without changing the objective value. Thus, the problem can be reformulated as a convex optimization problem. Our problem *cannot* be reformulated as a convex optimization problem.\"}", "{\"summary\": \"The paper studies oracle-based optimization in three settings. First, when the oracle returns gradients that are misaligned with the true gradients in a specific manner: the expectation of the returned gradient is positively correlated with the true gradient (in terms of the inner product). Second, for more specific applications, they strengthen this assumption, and require that the lower bound not just be nonzero, but that it is at least the square of the true gradient. Third, for their setting of hidden convexity, they use the standard unbiased estimator assumption.\\n\\nThe paper provides improved rates of convergence under all three settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I think it's a well-motivated paper with a coherent set of results. I like that the analysis in Section 3, though, simple, is complete and step-by-step. The authors are also quite honest about differences with prior work, though they could do a better job explaining why they do better than existing work in similar assumptions (see Questions).\", \"weaknesses\": \"My only complaint is that the paper's introduction suggests that the only assumption made is that the inner product of the true gradient with the expected gradient provided by the oracle is positive: however, this seems to hold only in Section 3. In Section 4, this is strengthened to say that the lower bound is the squared norm of the true gradient (an assumption same as Beznosikov et al), and in Section 5, it's further strengthened to be simply an unbiased estimator. Is my understanding accurate? If so, the \\\"misaligned\\\" description used throughout the introduction applies only to Section 3, and for the other results, there already exists standard terminology for those assumptions, so assigning them a new name wouldn't be the right thing to do.\\n\\nMy recommendation to the authors is to please clarify all the assumptions (for each of the different settings studied) in the introduction, so as to avoid any confusion. \\n\\nFurther, it would be useful to have a better understanding of what specific difference in the analyses in Section 4 lead to the improved rates as compared to existing work under this assumption (see Questions).\", \"questions\": \"1. Does the analysis in Section 3 have any connection with the analysis seen when using self-concordant barriers? The assumption that two matrices $A_t$ and $A_{t+1}$ do not change much is quite similar to saying that two successive Hessians do not (which is essentially what self-concordance captures). If the authors believe there could be a connection to this, it would be useful to add that to the paper and add pointers to the literature on interior-point methods, where this notion is used; if not, then it would still help to clarify why it differs.\\n\\n2. The assumption in Section 4 that the inner product of the true gradient and expected gradient (from the oracle) is lower bounded by the square of the true gradient norm is identical to that in Beznosikov et al (as the authors themselves note). Can the authors explain what exactly they do differently to improve the $\\\\epsilon^{-4}$ rate to $\\\\epsilon^{-3}$? Could they point to a specific step in their proof where they use this inner product assumption in a better manner?\\n\\n3. There was a recent paper https://arxiv.org/abs/2304.08596 by Shu, Ramachandran, and Wang, which also talks about hidden convexity. I think it would be useful to cite the paper if the way the phrase \\\"hidden convexity\\\" is used is the same. If not, it would be helpful to clarify the differences.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper considers the problem of first-order stochastic optimization where the learner can only access a stochastic gradient whose expectation is only guaranteed to be correlated with (but not equal to) the true gradient at the queried point. The authors consider three different settings commonly encountered in machine learning problems where the learner can only access biased gradients. For each of the three settings, they propose a new algorithm and provide its analysis.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I like the paper overall. It uses simple but interesting additions to existing strategies for optimization to design the algorithm with biased gradient estimates which often yield optimal performance. I think the results are sufficiently novel and interesting and improve upon the best known results so far.\", \"weaknesses\": \"I don't see any glaring weakness in the paper but I have some questions listed in the next section.\", \"questions\": \"1. Can the authors explain the intuition behind the update in line 6 in algorithm 2? It seems like you just want to consider the update in the orthogonal direction but I can't quite understand why? Is it simply to reduce the norm of the update (and not considering the update along $x\\\\_t$ helps) or is there something fundamental that is going on there?\\n\\n2. Does strong-convexity help for algorithm 2 and 3? In other words, if we are given the additional information of strong convexity, how much does that help improve the error? Particularly for alg 2, if it does not help then what exactly from the analysis in Demidovich et al, 2023 does not work out in this case?\\n\\n3. I am not sure if this will work, but might be worth a try: If I understand correctly, the term $f(x_t) - f(x^{\\\\star})$ in line 413 cancels out with the term $-\\\\frac{\\\\eta\\\\_t \\\\alpha \\\\| g\\\\_t \\\\| }{3}$ in line 416. I think with an appropriate change of constants, we can retain a fraction of the negative term in line 416 and carry forward it to the equation in line 421. Now, if $\\\\|g\\\\_t\\\\| \\\\leq \\\\frac{C}{\\\\sqrt{t}}$ for some $C > 0$, then we are at a point with small gradient. Otherwise, it will cancel out the $\\\\frac{1}{\\\\sqrt{B_t}}$ term in the equation in line 421 with $B_t = t + 1 + k$. This might help you achieve optimal error rate. Of course this needs to be checked but it might be helpful to address the sub-optimality gap.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"reply\", \"comment\": \"Thanks very much for your review. Below we address some of the points you have raised:\", \"regarding_comparison_to_known_bounds\": \"In the setting of Section 3, we are not aware of any works that consider exactly our setting, but we note that the dependence on $N$ is certainly tight due to standard bounds with unbiased gradients. As discussed in the response to reviewer SpNX, our results in Section 3 and 4 improve upon those of Ajalloeian & Stich 2020 by either weakening assumptions of improving the convergence bound from $O(1/N^{1/4})$ to $O(1/N^{1/3})$. In Section 5, we improve upon the $O(1/N^{1/4})$ rate obtained by Chen et al. 2022 to $O(1/N^{1/3})$.\", \"regarding_the_assumption_statements\": \"Thank you for your suggestion. In the revision we will present them formally as numbered Assumptions and refer to them in the Theorem restatements.\", \"regarding_tightness\": \"our first result is tight in the dependence on $N$ ($O(1/N^{1/2})$) as this is the lower bound even for exact gradient oracles. For the other results (or the dependencies on the $\\\\mu$ and $\\\\lambda$ parameters), our results are the best in their respective classes, but we are not aware of any lower bounds that apply.\", \"regarding_the_analysis_novelty\": \"Our analysis is unique in several respects. For example, our analysis for Section 3.2 makes critical use of a \\u201cstability\\u201d property enjoyed by anytime averaging (but not by the \\u201cstandard\\u201d Polyak averaging). This stability is very rarely present in the literature. We also have never seen any result similar to our technique for viewing projection onto a sphere with the $\\\\ell_2$-norm as projection onto a different unknown convex set using an unknown norm of interest (Lemma 3.1). In Sections 4 and 5, we introduce novel techniques to control the norm of the updates that were more analytically tractable than standard projections.\", \"regarding_the_questions\": \"1. Obtaining high-probability statements should be a fairly straightforward exercise, but it would unduly complicate the analysis. Note that all of our results involve summing up expected values of certain quantities. The difference between the realized value and the expected value is thus controllable with standard martingale concentration (e.g., Azuma\\u2013Hoeffding) bounds. Thank you pointing this out - we will remark on this in the revision. \\n2. Yes, you are correct. In our analysis $\\\\nabla f$ can indicate an arbitrary subgradient whenever the gradient does not exist. Thank you for noticing this. We will clarify this point in the revision.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper studies stochastic convex optimization where the stochastic gradient oracle is biased but correlated with the true gradient. The proposed algorithms achieve the following performances: for Lipschitz, convex objectives and slowly varying bias, the rate is O(N^{-1/2}); for Lipschitz, smooth convex objectives and general correlated stochastic gradient oracle, the rate is O(N^{-1/3}). The results are applied to problems with hidden convexity, which achieves a rate O(N^{-1/3}).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is overall well written, with clearly presented setups, algorithms, and performance. In addition, the correlated stochastic oracle studied, as pointed out on page 2, might have broad applications.\", \"weaknesses\": \"-- The paper is closely related to the stochastic optimization literature. Although the authors have cited many relevant works, the exact known results are missing in this paper. It might help the readers better appreciate the significance of the results by providing more details and/or comparisons with existing setups and known upper/lower bounds on the convergence rate.\\n\\n-- The assumptions of each theorem are stated at the beginning of each corresponding section (informally). It might be better to present them more formally, either as Asssumption 1/2/3, or stated directly in the theorems. \\n\\n-- In terms of significance, it is unclear how tight the bounds are. Would it be possible to derive some lower bounds from known results for other related problems? This would greatly help the readers appreciate the significance of the results. In addition, it seems that the analysis is relatively standard. Could the authors provide more comparisons with existing proofs for stochastic convex optimization, or related problems/setups?\", \"questions\": \"-- All the convergence results are presented in expectation. I\\u2019m wondering how hard it is to obtain ``with high probability\\u2019\\u2019 performance guarantee?\\n\\n-- In line 113, ``note that this is equivalent to the condition that \\u2026\\u201d, this seems to require that f is differentiable. Otherwise, the gradient of f may not exist, and instead the bound holds for all subgradients.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank the authors for the comments! I'll keep my score.\"}", "{\"metareview\": \"This paper introduces a novel optimization approach for convex objectives using misaligned stochastic gradients, where the gradients are correlated but not equal to the true gradient. The authors propose algorithms with iteration complexities of $\\\\widetilde{O}(\\\\epsilon^{\\u22122})$ and $\\\\widetilde{O}(\\\\epsilon^{\\u22123})$, depending on the rate of misalignment, and apply the framework to hidden convexity problems.\\n\\nOverall, the paper provides valuable theoretical contributions, particularly the introduction of the misaligned gradient framework, which has potential practical applications. The clarity of the mathematical analysis and the novel handling of biased gradients make this paper a valuable addition to optimization literature.\", \"additional_comments_on_reviewer_discussion\": \"There are some concerns regarding the clarity of the assumptions, especially in relation to the term \\\"misaligned\\\". The lack of empirical validation and direct comparison with existing methods (e.g., Stich 2020) also limits the paper\\u2019s practical applicability. The authors have responded to these concerns with clarifications and further comparisons.\"}" ] }
2L1OxhQCwS
Transformers versus LSTMs for electronic trading
[ "Paul Alexander Bilokon", "Yitao Qiu" ]
The rapid advancement of artificial intelligence has seen widespread application of long short-term memory (LSTM), a type of recurrent neural network (RNN), in time series forecasting. Despite the success of Transformers in natural language processing (NLP), which prompted interest in their efficacy for time series prediction, their application in financial time series forecasting is less explored compared to the dominant LSTM models. This study investigates whether Transformer-based models can outperform LSTMs in financial time series forecasting. It involves a comparative analysis of various LSTM-based and Transformer-based models on multiple financial prediction tasks using high-frequency limit order book data. A novel LSTM-based model named DLSTM is introduced alongside a newly designed Transformer-based model tailored for financial predictions. The findings indicate that Transformer-based models exhibit only a marginal advantage in predicting absolute price sequences, whereas LSTM-based models demonstrate superior and more consistent performance in predicting differential sequences such as price differences and movements.
[ "transformer", "LSTM", "electronic trading" ]
Reject
https://openreview.net/pdf?id=2L1OxhQCwS
https://openreview.net/forum?id=2L1OxhQCwS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w6Mh1OhwMR", "ubPvMvNBRF", "qHlWGQojif", "mpPE3rsTdV", "kYCjigTamA", "jwYN8r2Ym7", "flDuAZNkXt", "ZZn9fgzPuo", "ZG5smv23To", "Xrf6Fykthl", "Sc57fKUlrO", "RrZdW1zedu", "HRWTuL26v1", "FXzwon6IgJ", "Dwq2edSbos", "ANTEMJHOmC", "68kqjpokNR", "2SogTofH19" ], "note_type": [ "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732438791719, 1730701992965, 1737523700898, 1732341036466, 1730196782734, 1732341215475, 1730706699209, 1730420715328, 1732341119246, 1730667380510, 1733944052812, 1732341173482, 1730699762533, 1732340876954, 1732340939959, 1732395366734, 1733179015356, 1732513090464 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_Drb3" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_t3XX" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5348/Authors" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_Drb3" ], [ "ICLR.cc/2025/Conference/Submission5348/Authors" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_riPs" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_2dpN" ], [ "ICLR.cc/2025/Conference/Submission5348/Authors" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_bpRu" ], [ "ICLR.cc/2025/Conference/Submission5348/Area_Chair_YVP6" ], [ "ICLR.cc/2025/Conference/Submission5348/Authors" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_5BdM" ], [ "ICLR.cc/2025/Conference/Submission5348/Authors" ], [ "ICLR.cc/2025/Conference/Submission5348/Authors" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_2dpN" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_5BdM" ], [ "ICLR.cc/2025/Conference/Submission5348/Reviewer_riPs" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your rebuttal. Given that there is still some work to be done on this paper, I will keep my score.\"}", "{\"summary\": \"5348_Transformers_versus_LSTMs\\npdf\\nLT\\nHere is a new paper needs to be reviewed. Summary*\\nBriefly summarize the paper and its contributions. This is not the place to critique the paper; the authors should generally agree with a well-written summary.\\n*\\n\\nSummary\\n\\nThis paper conducts a comparative study between LSTM-based and Transformer-based models for financial time series forecasting, specifically in the context of electronic trading using high-frequency limit order book (LOB) data. The authors investigate the performance of these models across three prediction tasks: mid-price prediction, mid-price difference prediction, and mid-price movement prediction.\\n\\nFor the mid-price prediction task, the study finds that Transformer-based models like FEDformer and Autoformer achieve lower prediction errors than LSTM-based models. However, the authors note that the practical utility of these results for high-frequency trading is limited due to insufficient prediction quality.\\n\\nIn the mid-price difference prediction task, LSTM-based models demonstrate superior performance and robustness compared to Transformer-based models. The canonical LSTM achieves the highest R^2 of around 11.5% within about 10 prediction steps, while state-of-the-art Transformer models struggle to effectively process difference sequences.\\n\\nThe paper's main contribution lies in the mid-price movement prediction task, where the authors introduce a novel LSTM-based model called DLSTM. This model integrates LSTM with a time series decomposition approach inspired by the Autoformer architecture. DLSTM significantly outperforms all other models in classification metrics and proves its effectiveness in trading simulations, particularly when transaction costs are considered.\\n\\nAdditionally, the authors adapt the architecture of existing Transformer-based models to better suit the demands of the movement prediction task. They incorporate both past and projected mid-price data, followed by a linear layer and softmax activation, to determine price movements.\\n\\nOverall, the study highlights that while Transformer-based models may excel in certain aspects of mid-price prediction, LSTM-based models, particularly the proposed DLSTM, demonstrate consistent superiority and practicality in financial time series prediction for electronic trading.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality: The study offers a novel perspective on the application of LSTM-based and Transformer-based models in financial time series forecasting, specifically in the context of electronic trading using high-frequency LOB data. The authors introduce a new LSTM-based model, DLSTM, which creatively combines LSTM with a time series decomposition approach inspired by the Autoformer architecture. This innovative integration of existing ideas allows DLSTM to outperform other models in the mid-price movement prediction task.\", \"quality\": \"The paper demonstrates a high level of quality in its experimental design and analysis. The authors conduct a comprehensive comparative study across three prediction tasks (mid-price prediction, mid-price difference prediction, and mid-price movement prediction), using a diverse range of LSTM-based and Transformer-based models. The experiments are well-structured, and the results are thoroughly analyzed, providing valuable insights into the performance of different models in each task.\", \"clarity\": \"The paper is well-written and easy to follow. The authors provide clear explanations of the problem formulation, the proposed DLSTM model, and the experimental setup. The use of tables and figures enhances the clarity of the results, making it easy for readers to compare the performance of different models across various metrics and prediction horizons.\", \"significance\": \"The findings of this study have significant implications for the application of deep learning models in financial time series forecasting, particularly in the context of electronic trading. The authors demonstrate that while Transformer-based models may excel in certain aspects of mid-price prediction, LSTM-based models, especially the proposed DLSTM, exhibit superior and more consistent performance in tasks such as mid-price difference prediction and mid-price movement prediction. The incorporation of trading simulations with and without transaction costs further highlights the practical significance of the proposed DLSTM model for real-world trading scenarios.\\n\\nMoreover, the paper's adaptation of existing Transformer-based models' architecture to better suit the demands of the movement prediction task showcases the potential for further improvements in this domain. By incorporating both past and projected mid-price data, followed by a linear layer and softmax activation, the authors demonstrate a creative approach to enhancing the performance of Transformer-based models in financial time series forecasting.\\nIn summary, the paper's originality, quality, clarity, and significance make it a valuable contribution to the field of financial time series forecasting using deep learning models, offering new insights and directions for future research in this domain.\", \"weaknesses\": \"While the paper presents valuable insights and contributions, there are a few areas that could be improved or require further clarification:\", \"limited_dataset_diversity\": \"The experiments in this study are conducted using LOB data from a single cryptocurrency pair (BTC-USDT or ETH-USDT) on one exchange (Binance). To demonstrate the generalizability of the proposed DLSTM model and the comparative analysis between LSTM-based and Transformer-based models, it would be beneficial to include a wider range of financial instruments, such as stocks, forex, or other cryptocurrencies, as well as data from multiple exchanges. This would strengthen the paper's conclusions and provide a more comprehensive assessment of the models' performance across diverse financial time series.\", \"lack_of_ablation_studies\": \"While the paper introduces the novel DLSTM model, which integrates LSTM with a time series decomposition approach, there is a lack of ablation studies to investigate the individual contributions of each component. For example, the authors could compare the performance of DLSTM with and without the time series decomposition to assess the impact of this specific modification. Additionally, a more detailed analysis of the adapted Transformer-based models' architecture for the movement prediction task would provide valuable insights into the effectiveness of the proposed changes.\", \"limited_discussion_on_model_interpretability\": \"Interpretability is a crucial aspect of financial time series forecasting models, especially in the context of electronic trading, where understanding the factors driving the model's predictions is essential for risk management and decision-making. The paper could benefit from a more in-depth discussion on the interpretability of the proposed DLSTM model and the adapted Transformer-based models, as well as a comparison with the interpretability of other LSTM-based and Transformer-based models.\", \"hyperparameter_tuning_and_model_selection\": \"Can you provide more details on the hyperparameter tuning process and model selection criteria used for the various models in your experiments? Specifically, what approach was used for hyperparameter optimization (e.g., grid search, random search, Bayesian optimization), and which hyperparameters were tuned for each model? Additionally, how were the validation sets or cross-validation techniques employed in the model selection process?\", \"robustness_to_market_conditions\": \"Have you considered evaluating the performance of the proposed DLSTM model and the comparative analysis between LSTM-based and Transformer-based models under different market conditions, such as periods of high volatility, market crashes, or significant news events? Demonstrating the models' ability to generalize and adapt to various market scenarios could provide a more comprehensive assessment of their robustness and practical applicability in electronic trading.\", \"questions\": \"Dataset diversity and generalizability: Can you provide more insights into the choice of using only Binance LOB data for a single cryptocurrency pair in your experiments? How do you expect the proposed DLSTM model and the comparative analysis between LSTM-based and Transformer-based models to perform on a wider range of financial instruments, such as stocks, forex, or other cryptocurrencies, as well as data from multiple exchanges? Providing results on more diverse datasets could strengthen the claims of generalizability and robustness of the findings.\", \"ablation_studies_and_component_contributions\": \"Can you conduct ablation studies to investigate the individual contributions of the time series decomposition approach in the proposed DLSTM model? It would be helpful to compare the performance of DLSTM with and without this specific modification to assess its impact on the model's effectiveness. Additionally, can you provide a more detailed analysis of the adapted Transformer-based models' architecture for the movement prediction task, highlighting the importance of each proposed change?\", \"model_interpretability\": \"Can you elaborate on the interpretability of the proposed DLSTM model and the adapted Transformer-based models? How do these models compare with other LSTM-based and Transformer-based models in terms of interpretability? Providing insights into the factors driving the models' predictions and their relative importance could be valuable for understanding the models' decision-making process and enhancing trust in their applications for electronic trading.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"1. What specific modifications were made to the Transformer architecture to adapt it to financial prediction tasks?\", \"response\": \"The metrics for analyze mid-price prediction are MSE and MAE, this is a common metrics to compare among the regression task between different models through different papers with the same metrics.\\nUsing R^2 for mid-price diff prediction is more like industrial experience. When your prediction has high correlation of the price change, you can do better decision making during the Market Making/Order splitting scenario, judging what price should order, such as best ask/bid, mid-price.\\nFor classification task, using accuracy, precision, recall and F1 score is common metrics to compare among different models through different papers.\\nBut for all these metrics are not the most significant criteria in the practical utility of the models. The final criteria is the trading result. The importance depends on trader\\u2019s preference. If a trader is pursuing more profits, he cares more about the Return/PnL. But for portfolio manager, he will more concerns about Sharpe ratio/ Max drawdown.\"}", "{\"summary\": \"This research examines the performance differences between Transformer-based models and LSTMs across three cryptocurrency limit order book data prediction tasks. It also introduces DLSTM, a LSTM-based model, and a Transformer-based model redesigned for financial forecasting.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"(1) This research presents findings that compare the performance of two types of models.\\n(2) It successfully highlights the weaknesses in current measurement metrics.\\n(3) Interesting task definition.\", \"weaknesses\": \"1.Baseline Selection Rationale: The paper does not clearly explain why specific Transformer and LSTM variants, such as Autoformer and FEDformer, were chosen in the comparison. It remains unclear if these variants have unique advantages for financial time series forecasting. Providing additional theoretical support or rationale for model selection would enhance the scientific basis of this choice.\\n\\n2.Data Risk: The study only tests on a single asset (BTC-USDT), lacking a broader dataset. This limited scope may mean the model\\u2019s performance does not generalize well to other financial data. Testing on a single asset is insufficient to comprehensively assess the model\\u2019s generalizability.\\n\\n3.Lack of Experimental Details: The paper lacks adequate details on the experimental setup, especially regarding hyperparameter settings and baseline model architectures. This omission makes replication challenging and affects the reliability of the results. Sufficient information is not provided to ensure a fair comparison among baseline models.\\n\\n4.Unclear Result Interpretation: The paper does not adequately explain the significant differences in performance between experiments with and without transaction costs. Lacking theoretical support or data analysis, it's hard for me to understand the causes behind these variations under different settings.\\n\\n5.Limited Community Contribution: Time series decomposition, used in this study, appears to be a common approach, closely resembling classical time series decomposition methods. It is unclear how this study provides any specific advantage over the standard decomposition methods.\\n\\n6.Although the paper points out shortcomings in MSE and MAE metrics, it fails to propose a robust method to address these deficiencies.\\n\\n7.Some capitalization inconsistencies, eg. in line 034 Self-attention mechanism.\", \"questions\": \"1.Given the limited dataset used and the lack of detailed experimental information (settings of baselines), I am very concerned about the reliability of this paper's conclusions. How would you address or demonstrate the robustness of your findings under these limitations?\\n\\n2.How do you explain the significant differences in experimental results with and without transaction costs? What factors contribute to this discrepancy?\\n\\n3.What are the specific advantages of your time series decomposition method compared to other decomposition approaches, and why do these advantages arise?\\n\\n4.Other questions can refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1.Reliability of Findings Given the Limited Dataset and Lack of Experimental Details\", \"question\": \"What are the specific advantages of your time series decomposition method compared to other decomposition approaches, and why do these advantages arise?\", \"response\": \"Our time series decomposition method, which integrates LSTM with trend and residual components, offers several advantages over traditional methods, such as classical time series decomposition. By separating the trend from the residuals, our model allows the LSTM to focus on the underlying pattern (trend) while treating short-term fluctuations (residuals) separately. This separation helps the model better capture both long-term trends and short-term variations, which is crucial for financial time series data where both components often behave differently. Additionally, LSTM\\u2019s ability to learn temporal dependencies further enhances the model\\u2019s performance on residuals, making it more robust to noise. Unlike standard decomposition methods, which may not capture these intricate temporal patterns, our hybrid DLSTM approach enables the model to adapt more effectively to the noisy and volatile nature of financial data. In the revised paper, we will compare our approach with other decomposition techniques, to better highlight the benefits of our method.\"}", "{\"summary\": \"This research compares the effectiveness of Transformer and LSTM architectures in financial forecasting. The study examines both model types using high-frequency trading data and introduces DLSTM and a finance-specific Transformer. Results show that Transformers only slightly outperform in absolute price predictions, while LSTMs showing more reliable performance overall.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper addresses a relevant and significant question by comparing LSTM and Transformer models in financial time series forecasting.\\n2. The experimental setup is extensive and provides substantial data.\", \"weaknesses\": \"1. The paper lacks code and detailed implementation information for both the Transformer and LSTM models, which limits reproducibility.\\n2. The novelty of the proposed approach is limited. While the authors introduce a DLSTM model to improve performance, the idea of decomposition was previously explored in models like DLinear [1], diminishing the originality of the contribution. Beyond the comparative analysis, additional innovation is also limited.\\n3. The decomposition strategy appears to be applied only to the LSTM model. For a fair comparison, a decomposition approach for the Transformer model should also be included. In Table 3, DLSTM significantly outperforms LSTM, which suggests that a decomposed Transformer might also show improved results.\\n4. The paper does not include several state-of-the-art (SOTA) Transformer-based models, such as PatchTST [2], Crossformer [3], and iTransformer [4], in the comparison, which limits the comprehensiveness of the analysis.\\n5. The statement \\\"Transformer-based models exhibit only a marginal advantage in predicting absolute price sequences, whereas LSTM-based models demonstrate superior and more consistent performance in predicting differential sequences such as price differences and movements\\\" requires further investigation. A deeper analysis into the underlying causes of this observed difference is missing, which weakens the interpretability of the results.\\n\\n[1] Zeng, Ailing, et al. \\\"Are transformers effective for time series forecasting?.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 9. 2023.\\n\\n[2] Nie, Yuqi, et al. \\\"A Time Series is Worth 64 Words: Long-term Forecasting with Transformers.\\\" The Eleventh International Conference on Learning Representations.\\n\\n[3] Zhang, Yunhao, and Junchi Yan. \\\"Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting.\\\" The eleventh international conference on learning representations. 2023.\\n\\n[4] Liu, Yong, et al. \\\"iTransformer: Inverted Transformers Are Effective for Time Series Forecasting.\\\" The Twelfth International Conference on Learning Representations.\", \"questions\": \"1. What are the fundamental architectural characteristics that make LSTM models more effective for differential sequences compared to Transformers?\\n2. Can you provide deeper analysis to support the generalizability of your findings of LSTM vs Transformer?\\n3. How does financial time series forecasting different from other time series forecasting (like weather, traffic, etc.?)\\n4. To address the remaining limitations identified in *Weaknesses*: a) Could you provide detailed model implementations and hyperparameter configurations? b) How would decomposition techniques benefit Transformer architectures? c) Please include comparisons with state-of-the-art models\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors conduct a comparative analysis of various LSTM-based and Transformer-based models for multiple financial prediction tasks using high-frequency limit order book data. They introduce a novel LSTM-based model called DLSTM and a newly designed Transformer-based model specifically tailored for financial predictions. Their results reveal that Transformer-based models offer a slight advantage in predicting absolute price sequences. However, LSTM-based models show superior and more consistent performance in predicting differential sequences, such as price differences and movements.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"The structure and logic of the paper is well organized.\\n\\nThe experimental setup, description, and analysis are clearly stated with sufficient detail.\", \"weaknesses\": \"1. The authors compare Transformers and LSTMs, concluding that LSTMs have advantages in multiple electronic trading tasks. However, the selection of Transformer-based models is limited to earlier studies (prior to 2023) and does not include recent state-of-the-art (SOTA) works, such as those mentioned in references [1], [2], and [3]. Notably, Liu et al. [2] claim significant improvements on similar tasks. Excluding these recent studies makes it premature to conclude that Transformer-based models underperform compared to LSTMs. Additionally, there is insufficient evidence to assert that the authors' proposed DLSTM model is the optimal choice for this application. Could you please include comparisons with some of these SOTA results to more robustly justify the conclusion?\\n\\n[1] Garza, A., Challu, C., & Mergenthaler-Canseco, M. (2023). TimeGPT-1. arXiv preprint arXiv:2310.03589.\\n\\n[2] Liu, Y., Hu, T., Zhang, H., Wu, H., Wang, S., Ma, L., & Long, M. (2023). itransformer: Inverted transformers are effective for time series forecasting. arXiv preprint arXiv:2310.06625.\\n\\n[3] Das, A., Kong, W., Sen, R., & Zhou, Y. (2023). A decoder-only foundation model for time-series forecasting. arXiv preprint arXiv:2310.10688.\\n\\n2. The authors' conclusion lacks novelty and largely aligns with the findings and conclusions of Zeng et al. [4] It appears to apply established approaches and conclusions to domain-specific practices. While retaining empirical relevance, the study does not offer methodological breakthroughs.\\n\\n[4] Zeng, A., Chen, M., Zhang, L., & Xu, Q. (2023, June). Are transformers effective for time series forecasting?. In Proceedings of the AAAI conference on artificial intelligence (Vol. 37, No. 9, pp. 11121-11128).\\n\\n3. The experimental setup could be made more representative by incorporating additional metrics such as Mean Absolute Scaled Error and Relative Mean Absolute Error.\", \"questions\": \"Please refer to questions to be addressed, the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. Include LOB data from the LOBSTER dataset to increase generalizability\", \"response\": \"We appreciate the reviewer\\u2019s request for a more detailed discussion of DLSTM\\u2019s advantages over other temporal decomposition methods like DLinear. DLSTM\\u2019s unique contribution lies in its combination of LSTM\\u2019s ability to capture long-term dependencies with time series decomposition, specifically by separating trend and residual components. While DLinear also incorporates decomposition, DLSTM adapts this method within an LSTM architecture, allowing it to effectively handle both trend and noise in financial data. In future revisions, we will provide a more in-depth comparison with DLinear and other decomposition methods to clarify how DLSTM improves upon them, particularly in the context of price movement predictions.\"}", "{\"summary\": \"This paper compares the performance of LSTM and Transformer models in financial time series forecasting (limit order book data). They compared with FEDformer, Autoformer, Informer, Reformer, Transformer and LSTM. The main results show that Transformer has a slight advantage in predicting absolute price series, but the LSTM model performs more consistently and accurately in the prediction of price changes and price movements. In addition, the paper introduces DLSTM inspired by DLinear and Autoformer.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Even relatively simple LSTM models perform well in financial time series forecasting tasks, compared with transformer-based model.\", \"weaknesses\": \"1. The writing quality of the paper is low, especially the format of literature citation is not uniform and some of the citations are not standardized in formatting and arrangement.\\n2. The experimental setup lacks comparison with the frameworks and standards widely used in the current research field and fails to demonstrate the advantages of the selected model. For example, the authors failed to cite and use the latest limit order book (LOB) benchmark frameworks, such as LOBFrame (https://github.com/FinancialComputingUCL/LOBFrame) and LOBCAST (https://arxiv.org/abs/2308.01915), both of which are open source frameworks currently widely used for Limit Order Book Forecasting. In addition, the authors did not include some of the latest Transformer based models (e.g., iTransformer and PatchTST), which have demonstrated advantages in terms of performance and efficiency in time series forecasting. Comparing these latest models would make the experimental results more convincing and practical.\\n3. The experimental data used in this paper is limit order book data from three cryptocurrencies, which, although suitable for high-frequency forecasting tests, is not representative of the financial market, and the volatility and noise characteristics of the cryptocurrency market are quite different from those of the traditional financial market. Data from LOBSTER (https://lobsterdata.com/) are more common and widely used in the literature currently.\", \"questions\": \"If possible, include LOB data from the LOBSTER dataset, to increase the generalizability of the experiment. If possible, include latest transformer based model (e.g. iTransformer, PatchTST). Recommend to use benchmarking frameworks such as LOBFrame or LOBCAST in the experimental design to ensure that the results can be more comparable to existing studies. A more detailed discussion of the specific differences and advantages of DLSTM over other temporal decomposition methods (e.g., DLinear) could be added. could also include some ablation studies. include code for reproducibility.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"A. Scientific Claims and Findings:\\n\\nThe paper compares the effectiveness of Transformer and LSTM architectures for financial forecasting using high-frequency trading data. The authors introduce a new LSTM-based model called DLSTM and a finance-specific Transformer. Their results indicate that Transformers have a slight advantage in absolute price predictions, but LSTMs are more reliable overall.\\n\\nB. Strengths:\\n\\nThe paper addresses a relevant and significant question in financial time series forecasting.\\nThe experimental setup is extensive and provides substantial data.\\nThe paper is well-written and easy to follow.\\n\\nC. Weaknesses:\\n\\nLack of code and detailed implementation information.\\nLimited novelty of the proposed approach.\\nThe decomposition strategy is only applied to the LSTM model.\\nSeveral state-of-the-art Transformer-based models are not included in the comparison.\\nThe statement about Transformers and LSTMs requires further investigation.\\nLimited dataset diversity.\\nLack of ablation studies.\\nLimited discussion on model interpretability.\\n\\nD. Reasons for Rejection:\\n\\nThe paper has several weaknesses, including limited novelty, lack of detailed information, and a limited comparison with state-of-the-art models. These issues raise concerns about the reproducibility and comprehensiveness of the research. Additionally, the paper lacks a deeper analysis of the observed differences between Transformer and LSTM models, which limits the interpretability of the results.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the reviewers raised several concerns, including the lack of code, limited novelty, and the need for a more comprehensive comparison with state-of-the-art models. The authors responded to these concerns by stating that they will provide the code and include additional models in the revised manuscript. They also addressed the issue of novelty by emphasizing the unique contributions of their work, particularly in how DLSTM combines LSTM with time series decomposition.\\n\\nDespite the authors' responses, the reviewers were not fully satisfied and decided to maintain their scores. The reviewers felt that the paper still required further improvements and that the authors' responses did not fully address their concerns. \\u00a0 \\n\\nIn my final decision, I weighed each point raised by the reviewers and considered the authors' responses. I ultimately decided to reject the paper because I felt that the weaknesses outweighed the strengths. The limited novelty, lack of detailed information, and limited comparison with state-of-the-art models were major concerns that were not fully addressed during the rebuttal period.\"}", "{\"comment\": \"1.Include Comparisons with Recent State-of-the-Art (SOTA) Models\", \"question\": \"Could you incorporate additional metrics such as Mean Absolute Scaled Error (MASE) and Relative Mean Absolute Error (RMAE)?\", \"response\": \"We can consider adding these metrics in the Mid-price prediction section.\"}", "{\"summary\": \"The paper explores the use of Transformer and LSTM-based models for financial time series forecasting tasks using high-frequency limit order book (LOB) data. A new LSTM-based model, DLSTM, is proposed alongside a modified Transformer architecture tailored for financial predictions. The study compares these models across three tasks: mid-price prediction, mid-price difference prediction, and mid-price movement prediction. Results suggest that Transformer-based models offer only marginal improvements in specific tasks, while LSTM models, particularly DLSTM, are more reliable in predicting mid-price differences and movements.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Relevant Application: The use of LSTM and Transformer models for financial predictions on LOB data is timely and relevant given the growing interest in high-frequency trading and predictive models in finance.\", \"comparative_scope\": \"The study covers multiple models and tasks, providing a broad comparison between LSTM- and Transformer-based architectures on real-world financial data.\", \"weaknesses\": \"Unconvincing Novelty: The paper lacks substantial novelty. The DLSTM model is essentially a combination of existing methods, such as time series decomposition and LSTM layers, without a clear innovation. Similarly, the Transformer modifications are incremental and do not provide a compelling improvement. As a result, the contributions seem incremental and insufficiently distinct from existing work in financial time series forecasting.\", \"interpretability_issues\": \"The added complexity of Transformer-based models raises interpretability concerns, especially given the unclear benefit over simpler LSTM-based models. Without a more interpretable mechanism or explanation for its performance gains, the model\\u2019s added complexity appears unnecessary.\", \"insufficient_performance_gain_for_complexity\": \"The study demonstrates only marginal improvements from the proposed Transformer modifications over traditional LSTMs, particularly in mid-price prediction. Despite the significant computational complexity introduced by Transformer-based models, the improvements are minimal and do not convincingly justify their adoption for practical trading applications.\", \"questions\": \"What specific modifications were made to the Transformer architecture to adapt it to financial prediction tasks?\\n\\nCan the authors elaborate on the metrics used to evaluate the models' performance? What criteria were significant in determining the practical utility of the models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"1. LSTM models are particularly effective for differential sequences because of their ability to capture long-term dependencies through their gating mechanism, which allows them to maintain and update hidden states over time. This is crucial when predicting price movements and differences, as financial data often involves short-term fluctuations and long-term trends. LSTM's architecture, with it forget, input, and output gates, naturally allows the model to focus on relevant past information, making it adept at forecasting changes or differences between consecutive time steps. In contrast, Transformer models, while excellent at capturing long-range dependencies with self-attention mechanisms, do not inherently prioritize temporal sequencing, which might be why they struggle with differential sequences that require the preservation of such dependencies.\\n2. We understand the importance of demonstrating the generalizability of our findings. Our current experiments focus on a specific dataset from the Binance exchange for two cryptocurrency pairs (BTC-USDT and ETH-USDT). We plan to extend this analysis to a wider range of financial assets, such as stocks, forex, and other cryptocurrencies, as well as data from multiple exchanges. In revised manuscript, we will compare the performance of LSTM-based and Transformer-based models on these diverse datasets to assess the robustness of the models across different market conditions. Additionally, we will conduct experiments under various scenarios, such as high-volatility periods or major market events, to further explore how these models generalize under different financial conditions.\\n3. Financial forecasting differs from other domains due to:\", \"high_noise_and_volatility\": \"Financial data often include random spikes and unpredictability.\", \"stationarity_issues\": \"Unlike weather or traffic data, financial time series can exhibit non-stationary behaviors.\", \"latency_sensitivity\": \"Predictions often require near-real-time responses, emphasizing efficiency and robustness over accuracy.\", \"feature_complexity\": \"Incorporation of market microstructure, like bid-ask spreads, adds a layer of complexity not found in other domains.\\n4. A) We appreciate the reviewer\\u2019s feedback and will provide detailed implementations and hyperparameter configurations in the supplementary material. The models were implemented using PyTorch, and we will include the full code, hyperparameter configurations, and training details in the revised manuscript. Key hyperparameters, such as learning rate, batch size, sequence length, and the number of layers for both LSTM and Transformer models, will be clearly documented. Additionally, we will outline the hyperparameter tuning process, which involved grid search over a predefined range of values for key parameters. \\nB) Decomposition techniques can significantly enhance Transformer models by simplifying the task of learning long-term dependencies. In the case of time series data, decomposing the input into trend and residual components, as done in models like Autoformer, helps the model focus on different aspects of the data. The trend component captures the underlying long-term pattern, while the residual component focuses on short-term fluctuations. Incorporating a decomposition approach into Transformer architectures could potentially improve their ability to forecast differential sequences, such as price movements, by reducing the model\\u2019s reliance on the entire sequence and instead focusing on the most relevant components. In future work, we will explore the integration of decomposition techniques into Transformer models and evaluate their impact on forecasting accuracy. \\nC) We agree with the reviewer that comparing our models with additional state-of-the-art (SOTA) Transformer-based models such as PatchTST, Crossformer, and iTransformer would provide a more comprehensive evaluation. In the revised manuscript, we will include these models in the comparison and report their performance on the same tasks and datasets. We will also discuss how these models perform relative to our proposed methods, highlighting the strengths and weaknesses of each approach. By doing so, we hope to provide a clearer perspective on the relative effectiveness of our DLSTM model and Transformer-based approaches for financial time series forecasting.\"}", "{\"comment\": \"1. Dataset Diversity and generalizability:\\nWe only use Binance LOB from a single cryptocurrency pair because we had limited infrastructure when we were doing the experiment. Because there is charge for the historical HFT data in Binance. In this case, we can only record the order book data in real time. The disk space is limited in the recording machine, that is why we only record single cryptocurrency pair. We expect that DLSTM can perform well on other financial instruments or datasets from multiple exchanges. I actually did experiment for DLSTM in China A-Share market for many instruments and it did perform well over other models.Now we have better infrastructure, so we are able to provide results on more diverse datasets to support the generalizability claims.\\n\\n2. Ablation Studies and component contributions:\\nYes, I can conduct studies to investigate the individual contribution of time series decomposition in DLSTM\\nYes, I can provide a more detailed analysis of the adapted Transformer-based models' architecture for the movement prediction task, highlighting the importance of each proposed change.\\n\\n3. Model interpretability\\nInterpretability is indeed crucial for financial applications. The DLSTM model benefits from its decomposition approach, which separates trend and residual components, offering clearer insights into the contributions of each component to predictions. For adapted Transformer-based models, they are detailed explained in their corresponding reference papers. It may be necessary to do a systematic interpretability analysis in future work.\\n\\n4. Hyperparameter tuning and model selection\\nYes, I can provide more details on the hyperparameter tuning process and model selection criteria used for the various models in my experiments in the supplementary material. The validation details can be added as well.\\n\\n5. Robustness to market conditions\\nWe can add a section to do Monte Carlo simulations to evaluate models\\u2019 performance under different market conditions.\"}", "{\"comment\": \"I acknowledge that I have read the authors' rebuttal and thank them for providing detailed insights into their future work. However, in the absence of preliminary comparative results, which are critical to address the raised questions, I have decided to maintain my current score.\"}", "{\"comment\": \"Thanks for your rebuttal. I will keep the scores unchanged.\"}", "{\"comment\": \"I appreciate the authors' rebuttal. Given that the work remains to be improved as noted in the responses, I will maintain my score at the current stage and wait to see more updates in the paper.\"}" ] }
2KWZjdFwmh
StEVE: Adaptive Optimization in a Kronecker-Factored Eigenbasis
[ "Jose Nicolas Marin Gamboa" ]
Adaptive optimization algorithms such as Adam see widespread use in Deep Learning. However, these methods rely on diagonal approximations of the preconditioner, losing much information about the curvature of the loss surface and potentially leading to prolonged training times. We introduce StEVE (Stochastic Eigenbasis-adaptive Variance Estimation), a novel optimization algorithm that estimates lower order moments in the Kronecker-Factored Eigenbasis (KFE). By combining the advantages of Adam over other adaptive methods with the curvature-aware transformations of methods like KFAC and EKFAC, StEVE leverages second-order information while remaining computationally efficient. Our experiments demonstrate that EVE achieves faster convergence both in step-count and in wall-clock time compared to Adam, EKFAC, and KFAC for a variety of deep neural network architectures.
[ "KFAC", "EKFAC", "Natural Gradient Descent", "Adam", "Optimization", "Stochastic Optimization" ]
https://openreview.net/pdf?id=2KWZjdFwmh
https://openreview.net/forum?id=2KWZjdFwmh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mnX97Vpn7L", "RanxPlFIeR", "MyaCBFgmI4", "LLZuWA0jBf", "EgAhIhuriU" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730237376463, 1732818870854, 1730803991102, 1730713214934, 1730892434610 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9031/Reviewer_VXLg" ], [ "ICLR.cc/2025/Conference/Submission9031/Authors" ], [ "ICLR.cc/2025/Conference/Submission9031/Reviewer_Sser" ], [ "ICLR.cc/2025/Conference/Submission9031/Reviewer_5iqx" ], [ "ICLR.cc/2025/Conference/Submission9031/Reviewer_xFQj" ] ], "structured_content_str": [ "{\"summary\": \"Adamize diagonal corrections to KFAC in a similar way to EKFAC\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper adopts and extends the style of thinking seen in EKFAC to apply diagonal corrections to KFAC.\", \"weaknesses\": \"The paper opts to use SGD as the base opt for KFAC and EKFAC. The official and unofficial codebases for KFAC allow and some actual suggest to use Adam as the base opt for KFAC and EKFAC. This is because it's well known that when we operate using Adam as the base opt that drives E/KFAC it works better.\", \"the_authors_say\": \"\\\"However, instead of using only the second moments, STEVE maintains bias-corrected exponential moving averages of both the first and second moments of the gradients in the KFE, estimated in the same manner as in Adam. By combining the benefits of the Kronecker-factored approximation with the\\nadaptive moment estimation of Adam, STEVE aims to achieve faster convergence.\\\" \\n\\nWhile the opt in this pape is not exactly Adam as the base opt driving E/KFAC it is in similar vain, as such it would have been helpful to have ran experiments with SGD and Adam as the base opts for E/KFAC so we could see if there is a delta. \\n\\nAnother weakness is not comparing to Shampoo which is an alternative kronecker factorized optimizer that has become quite popular recently due to its strong performance at Google. Furthermore the same way this paper proposes Adamized diagonal 1st and 2nd moment corrections to KFAC, SOAP proposes this for Shampoo. As such this paper should really compare to those methods. \\n\\nFurthermore, PSGD Affine or Kronecker factorized has been shown to outperform E/KFAC as well as Shampoo/SOAP and should be compared as well for this paper to be complete. \\n\\nAnother weakness is the use of a ViT for cifar datasets. The images are too small for patches to make sense and so it generally doesn't do well. Something like Keller's modded-nanoGPT would be a good place to show the performance of the opt since it's been benchmarked against all the latest curvature informed optimizers.\", \"questions\": \"What is the memory and computational complexity of the proposed opt?\\n\\nHow frequently is the preconditioner updated? Shampoo updates every 100 iterations, PSGD updates every 10 iters. It would be good to see how often the precond must be updated and how it effects performance. \\n\\nVariance bars? A\\n\\nThe claim that the proposed opt significantly outperforms (40% reduction in all clock time) Adam in fig 1 seems not true based on wall clock time. It seems at the end of training Adam ends at a higher accuracy, and Adam actually matches StEVE only a few hundred seconds later. Since the authors do not show variance bars we have no way of knowing if this is a legit speedup. \\n\\nFurthermore, with the extra memory needed to train with StEVE one could easily boost batch size for Adam and see an improvement in performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents an optimization method for deep learning, which performs Adam-style adaptation in a Kronecker-factored eigenbasis. The proposed method is evaluated empirically against vanilla Adam as well as other Kronecker-factored optimizers.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper gives a good introduction to relevant prior work and the contributions are adequately positioned in the context of prior work.\", \"The idea for the method is well-motivated, lifting the adaptive Adam scheme to a Kronecker-factored eigenbasis. To my knowledge, this idea has not been explored before and is original.\", \"The method shows promising initial results in the experimental framework of the paper.\", \"The paper is generally well-written and easy to follow.\"], \"weaknesses\": [\"The proposed method is a straight-forward combination of existing ideas. No supporting theory is provided. In my opinion, such a paper needs a very detailed and fair experimental comparison to warrant publication at ICLR. Unfortunately, the quality of the experiments is subpar. To mention just a few issues I see\", \"Experiments are run with a single random seed.\", \"All methods use the same learning rate and it is not explained where that learning rate value comes from. This is not adequate for an empirical comparison of different optimizers.\", \"Experiments use a constant learning rate instead of established learning rate decay schedules.\"], \"questions\": [\"How was the learning rate for the experiments chosen?\", \"Why weren't the learning rates set individually for each competing method?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work proposes STEVE, a novel optimization method that combines the strengths of the Adam optimizer (cheap tracking and adaptation to diagonal second order properties) and EKFAC (amortized better approximation of full second order). This is achieved by applying Adam, not in original parameter space, but in the Kronecker-Factored Eigenbasis (KFE) i.e. the \\u201cpreconditioning\\u201d basis used by KFAC and EKFAC. Experiments on image classification tasks with ResNet-50 and ViT architectures show significantly faster optimization compared to Adam, both in number of epochs and wall-clock time.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": [\"Originality: The optimizer developed in the paper is novel: an original combination of the strengths of EKFAC and Adam.\", \"Significance: This is a significant and timely contribution in the context of a heightened interest in more efficient optimization methods for deep learning (e.g. [1,3]) . The development of better off-the-shelf optimizers suitable for training deep learning models is an essential component for driving progress in the field, as can be seen in the wide adoption of Adam. In spite of their theoretical superiority, non-diagonal second order methods have struggled to manifest practical superiority for training standard deep learning models over their simpler diagonal counterparts. That the proposed method manages to convincingly beat Adam on deep network training tasks, both in number of epochs and in wallclock time, is thus significant. It showcases the potential of the approach and warrants the attention of the community.\", \"Clarity: Motivation, background, and the proposed method are clearly explained (except for minor glitches, see below). This is in part thanks to a clear algorithm box. I also appreciate that readily usable pytorch code is given in the supplementary for reproducibility. Experimental setup and methodology are also briefly but clearly explained.\", \"Quality: The approach is well-motivated, appears sound and well implemented, and the presented experiments convincingly support the claim of superiority of the developed optimizer.\"], \"weaknesses\": [\"Missing a more thorough recent related works discussion.\", \"Related work pertaining to background is well covered, but the paper is missing a section discussing later advances in second order optimization methods for deep learning. See [1,2,3] and first question below for starting pointers.\", \"The experimental analysis could have been pushed further: to include also training loss curves, and an evaluation and discussion of the relative sensitivity to hyperparameters. (see questions section for details).\", \"Somewhat limited scope and scale of experimental evaluation.\", \"While I value the experimentation on 2 different deep architectures ResNet50 and ViT and 2 image datasets, a more extensive experimentation on a larger variety of tasks and datasets would help to more solidly establish the advantage of the approach. See e.g. deep net training benchmark [1].\", \"Paper would benefit from a little more polishing.\", \"Some (minor and easily fixable) clarity issues. See questions part for a list and suggested improvements.\"], \"questions\": \"Q1: I would like to draw your attention to concurrent work SOAP [3], which seems closely related as it also uses Adam inside a second order preconditioning approach, Shampoo [2]. This doesn\\u2019t lower the originality of your proposal, being concurrent work that you likely couldn\\u2019t know about at the time of submission. But given the relatedness of the approaches, I am interested to know how you would contrast them? What can you highlight as the differences / anticipated benefits & limitations of STEVE=EKFAC+Adam v.s. SOAP=Shampoo+Adam ?\\nAlso, how do they compare in memory and compute complexity? \\nThis discussion could become part of a fleshed out related works section.\", \"q2\": \"Algo lines 16 and 17 eigendecomposition(...): what are the expectations over B and T? Can you provide more details on how these expectations are computed/estimated/tracked? (I suggest to also update the algo box to provide this additional level of detail, as well as main text l 283 \\u201crunning averages\\u201d)\", \"q3\": \"Training loss curves associated with your test accuracy curves.\\nCan you include these (in supplementary if space is insufficient in main)\\nDo the higher test accuracies also correspond to lower training losses? Please discuss.\", \"q4\": \"What are the test accuracy and training loss reached by all algos at the max number of iterations you used?\", \"q5\": \"Sensitivity to hyperparameters?\\nDo you have evidence that your optimizer outperforming Adam does not require extensive fine-tuning of (additional?) hyper-parameters. E.g. how sensitive is it to recompute frequency?\\nSimilarly you write l291 \\u201cThe other methods did not converge at this learning rate\\u201d, but would thay at other rates?\", \"further_clarifying_suggestions\": [\"L 200 \\u201cas the critically important eigenvalues \\u2026 are not preserved by the approximation\\u201d -> needs more explanation.\", \"The explanation of EKFAC and in particular the KFE in paragraph line 202 is too dense. This is the algo that you build on, so please try to lighten expand and clarify.\", \"BUG towards end of update equation for Adam\\u2019s $v_{t+1}$ line 139, missing a square?\", \"Curves: please use more easily distinguishable colors than different shades of red! (given the chance, make them color-blind friendly, see e.g. https://davidmathlogic.com/colorblind, and/or use different line styles)\", \"Figure 3 is missing KFAC and EKFAC.\"], \"typos_and_english_fixes\": [\"Abstract L19: \\u201cEVE\\u201d -> \\u201cSTEVE\\u201d\", \"L 148: \\u201cvector-multiplication of $\\\\epsilon$ are done element-wise\\u201d. I see no vector multiplication of $\\\\epsilon$ ???\", \"L 161: \\u201cis taking\\u201d -> \\u201cis taken\\u201d\", \"L 175: \\u201creduces\\u201d -> \\u201cwhich reduces\\u201d\", \"L 198: \\u201cconverting\\u201d -> \\u201cchanging to\\u201d\", \"L 270: \\u201cagainst against\\u201d\", \"L 283: \\u201crunning averages\\u201d, computed how exactly?\", \"L 285: $\\\\alpha$ has never been defined. You should at least say what it is and does in KFAC/EKFAC.\", \"L 291: \\u201cThe other methods did not converge at this learning rate\\u201d -> do you mean they did to reach the target accuracy? What about at other learning rates? Did you hyper-optimize over it, and how sensitive are the methods to it?\", \"L 382: \\u201cof the Fisher\\u201d -> \\u201cof the empirical Fisher\\u201d\", \"L 391: \\u201cOther directions to take the work are to investigate the potential of the improvements that have been made over Adam in the KFE such as proper weight decay or Nesterov momentum.\\u201d -> \\u201cFuture work should also investigate the potential of using, in the KFE, other improvements that have been made over Adam, such as proper weight decay [ADD REFERENCE] or Nesterov momentum [ADD REFERENCES].\", \"[1] Benchmarking Neural Network Training Algorithms, Dahl et al. 2023 https://arxiv.org/abs/2306.07179\", \"[2] Shampoo: Preconditioned stochastic tensor optimization. V. Gupta, T. Koren, Y. Singer. ICML 2018\", \"[3] SOAP: Improving and Stabilizing Shampoo using Adam. Vyas et al. September 2024.\"], \"https\": \"//arxiv.org/abs/2409.11321\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces StEVE, a novel deep learning optimizer that combines aspects of Adam and KFAC. Specifically, they modify EKFAC, which corrects the eigenvalues of the KFAC approximation, by adding Adam's bias-corrected first and second moment estimators. The authors show that StEVE achieves faster training to a target performance in both step count and wall-clock time compared to Adam, KFAC, and EKFAC, on three different deep learning problems on CIFAR-10, CIFAR-100, and Tiny ImageNet.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a novel and interesting approach to incorporating an Adam-style update into the EKFAC optimizer. The resulting algorithm is clearly described in Algorithm 1 and is rather straightforward to implement. The paper also provides code for the new optimizer (as well as the experiments). It presents an extensive introduction and background section and thus an accessible explanation of the method.\", \"Faster neural network training is a crucial research topic and any progress in this area is of great interest to the entire deep learning community.\", \"The paper not only focuses on the number of steps but also considers the - practically much more relevant - wall-clock runtime.\"], \"weaknesses\": [\"The empirical evidence for StEVE is too weak to be convincing. As there are now hundreds of deep learning optimizers, the empirical burden of proof of superiority is quite high, especially for optimizers like StEVE who are mostly motivated by their empirical performance. I believe the currently provided experiments don't provide enough evidence to convince people to adopt it in practical applications, for the following reasons:\", \"Most importantly, the hyperparameter selection seems to be performed in an opaque and potentially unfair way. Apparently, no hyperparameter tuning was performed, e.g., with all optimizers sharing the same learning rate. Yet, the selected learning rate differs between experiments (e.g. 0.001 for CIFAR-10 and 0.00005 for CIFAR-100). How was this chosen? I suspect that these choices work well for StEVE, but not the compared baseline. A more meaningful comparison would be to either tune the hyperparameters for each method on each test problem independently (using the same budget) or use fixed hyperparameters for all methods that are shared across all test problems. The latter would be a \\\"hyperparameter-free\\\" optimization and would require different baselines, e.g. Schedule-Free [1].\", \"All experiments are done on small problems, with CIFAR-100 being the largest. Also, all are from the same data domain and task, namely image classification.\", \"No learning rate schedule was used. I don't think a constant schedule is a very practical choice.\", \"Overall, the baselines seem to be very weak, likely due to inefficient hyperparameter choices (see the first point).\", \"The target performances seem rather impractical, e.g. only 44% on Tiny ImageNet and 46% on CIFAR-100. This is far from the performance that one can achieve on these datasets (with the used models) and thus not a performance practitioners care about. This is relevant because optimizers that can quickly achieve a low performance can be quite different from optimizers that achieve a more competitive performance quickly.\", \"Without a more rigorous evaluation, I doubt that the method will have a significant impact. I suggest having a look at [2], which describes a protocol for comparing deep learning optimizers. Although running the full benchmark might be too computationally expensive, following some of the described practices could significantly strengthen the empirical evidence for StEVE and thus demonstrate its strength more convincingly.\", \"[1] Aaron Defazio, Xingyu Alice Yang, Harsh Mehta, Konstantin Mishchenko, Ahmed Khaled, Ashok Cutkosky; \\\"The Road Less Scheduled\\\"; arXiv 2024; <https://arxiv.org/abs/2405.15682>\", \"[2] George E. Dahl et al.; \\\"Benchmarking Neural Network Training Algorithms\\\"; arXiv 2023; <https://arxiv.org/abs/2306.07179>\"], \"questions\": [\"Why is KFAC so much slower per step compared to EKFAC? E.g. in Figure 1, both KFAC and EKFAC perform 100 epochs, yet KFAC requires roughly 3x the wall-clock time.\", \"Could you add a short paragraph providing a complexity analysis of the computational and memory requirements of StEVE compared to Adam and EKFAC? In my understanding, it should be very similar to EKFAC in both time per step and memory, with the additional memory of a second EMA (for the first moment). Is this correct?\", \"Line 19: Should it be \\\"StEVE\\\" instead of \\\"EVE\\\"?\", \"Suggestion: Both Section 1 and Section 2 extensively describe existing work. Only at the bottom of page 4, do you start describing your own method. If you compress Sections 1 and 2, you have more space to present your method, which I think would strengthen your paper.\", \"In the paragraph starting at line 79, I think it might be worth mentioning and discussing Shampoo [e.g. 3] and related methods. Shampoo recently won the AlgoPerf: Training Algorithms competition and seems to be a practically relevant non-diagonal method (with likely use in training Gemini models).\", \"Line 88: George et al. should probably be a parencite or citep.\", \"Line 91: It should probably be \\\"Due to the expensive nature of [] computing [the] KFE \\\".\", \"Line 127: There should probably be a space before the citation.\", \"In Adam's equation, I think there is something missing for the EMA of the second momentum. Either a second gradient after the element-wise multiplication or rather a square (since you mention squaring below).\", \"Also just below the equation (line 148) you mention \\\"vector-multiplication of $\\\\epsilon$. Do you mean \\\"addition\\\"? I don't see where $\\\\epsilon$ is multiplied.\", \"Is there a reason that Section 1 uses $\\\\mathbf{P}$ as the preconditioner (line 45) and in Section 2 you use $\\\\mathbf{A}$ (line 116) instead?\", \"Line 197: I think $USU$ should also be bolded, since you use bold-face for matrices, no?\", \"Line 198: Is this sentence missing a \\\"to\\\", i.e. \\\"which is to say converting the gradient [to] $\\\\mathbf{A}$'s Eigenbasis\\\"?\", \"In Algorithm 1, you could highlight the differences between StEVE and EKFAC, e.g. by coloring lines that changed.\", \"Line 270: There is a double \\\"against\\\".\", \"Line 271: \\\"Epoch Count\\\" and \\\"Wall-Clock Time\\\" should probably both be lowercase.\", \"The figures, and especially the legends are relatively small and thus hard to read.\", \"In the figures, try using a consistent coloring/legend. For example, Adam is yellow in Figure 1 but in Figure 2 KFAC is yellow. This makes it hard to quickly compare across figures. The colors are also relatively similar (yellow, orange, red, pink) and thus hard to distinguish.\", \"Is there a reason to not compare to KFAC and EKFAC for the ViT on CIFAR-100?\", \"[3] Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Yoram Singer; \\\"Towards Practical Second Order Optimization for Deep Learning\\\"; OpenReview 2021; <https://openreview.net/forum?id=Sc8cY4Jpi3s>\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2JihLwirxO
ParaSolver: A Hierarchical Parallel Integral Solver for Diffusion Models
[ "Jianrong Lu", "Zhiyu Zhu", "Junhui Hou" ]
This paper explores the challenge of accelerating the sequential inference process of Diffusion Probabilistic Models (DPMs). We tackle this critical issue from a dynamic systems perspective, in which the inherent sequential nature is transformed into a parallel sampling process. Specifically, we propose a unified framework that generalizes the sequential sampling process of DPMs as solving a system of banded nonlinear equations. Under this generic framework, we reveal that the Jacobian of the banded nonlinear equations system possesses a unit-diagonal structure, enabling further approximation for acceleration. Moreover, we theoretically propose an effective initialization approach for parallel sampling methods. Finally, we construct \textit{ParaSolver}, a hierarchical parallel sampling technique that enhances sampling speed without compromising quality. Extensive experiments show that ParaSolver achieves up to \textbf{12.1× speedup} in terms of wall-clock time. The source code is publicly available at https://github.com/Jianrong-Lu/ParaSolver.git.
[ "Diffusion Models;" ]
Accept (Poster)
https://openreview.net/pdf?id=2JihLwirxO
https://openreview.net/forum?id=2JihLwirxO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wZPZpftdAZ", "tfJcpjuRtZ", "qhlnv1fP6G", "goZ7uZO2ne", "ZXTL0JSvJv", "XOGP0YPD86", "SCCxEgbEzG", "R71BUX8Y81", "OfJOfbATmC", "LTnykECet1", "IQUwZpUvMK", "GGHP6jQ6RL", "BzCfKNMSdn", "BUnF0xu8WC", "BL0VzBliON", "7sI5myrrUl", "5xjfWyBznv", "0wFAXFqraO" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730669332554, 1732694921568, 1732580687728, 1732295058554, 1732608779354, 1732695035543, 1730450199344, 1732295413230, 1737523584530, 1732532554601, 1732694948069, 1734626639233, 1732532254508, 1732296771508, 1732294544104, 1732532410259, 1730403927199, 1732562000722 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3593/Reviewer_totX" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Area_Chair_bkQU" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Reviewer_7Ggk" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Reviewer_7Ggk" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Area_Chair_bkQU" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Authors" ], [ "ICLR.cc/2025/Conference/Submission3593/Reviewer_eRWg" ], [ "ICLR.cc/2025/Conference/Submission3593/Reviewer_totX" ] ], "structured_content_str": [ "{\"summary\": \"The authors present an approach to accelerating the inference of diffusion probabilistic models (DPMs). They transform the problem of sequential sampling of DPMs into one of solving banded nonlinear equations. The Jacobian of the nonlinear system, required by Newton's method for rootfinding (aka Newton-Raphson) is unit block-lower-banded (1 on the diagonal, bands below), allowing for efficient parallel solution through a simple recurrence relation. The authors also present an initialization procedure that accelerates convergence. Finally, they combine this framework with a sliding window technique to conduct parallel iterations only a subset of the points. The combined approach is then evaluated on StableDiffusion-v2 and the LSUN Church pixel-space diffusion model, and demonstrates large speedups on inference without a loss in visual quality.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"There has been a surge of recent interest in fast parallel sampling of diffusion models. The state of the art for parallel sampling, to the best of my knowledge, appears to use Picard iterations to solve the nonlinear system of equations. The authors of this work make a few important contributions, all of which serve to accelerate convergence: (1) they use Newton's rootfinding method, which converges quadratically to the root for smooth enough functions; (2) they leverage the banded structure of the Jacobian to accelerate their solver; (3) they come up with a good initialization for Newton so it in fact converges; (4) they batch their parallel sampling and denoising so that it only happens within a sliding window.\", \"weaknesses\": \"1. Newton's method for rootfinding converges rapidly only if the function one is rootfinding on is sufficiently smooth. The authors should discuss the smoothness properties of the nonlinear system and how it impacts the convergence of the Newton solver, and also comment on the theoretical guarantees and limitations of their approach in this context.\\n\\n2. If the nonlinear residual for the nonlinear system has a complicated landscape, Newton can easily get stuck. The state of the art in optimization is to either use trust-region Newton methods or use quasi-Newton. The authors skirt around this issue altogether and count on their results and experiments to drive their point home. It would be useful to see the loss landscape as a function of, say, two of the most \\\"important\\\" unknowns (determined for instance by PCA) or the eigenvalues of the kernel matrix of the neural tangent kernel to determine if Newton is the right choice for this problem. Alternatively, if the authors could justify why these failure modes don't occur in PDMs, that would also suffice.\\n\\n3. How are Equation 12 and 13 justified? If the Jacobian term in the paragraph below Equation 11 is expensive to compute, why not approximate it? Newton's convergence rate requires at least an estimate for the Jacobian. Using the identity matrix instead effectively reverts Newton to a first-order method. Did the authors experiment with alternatives? Please provide theoretical/empirical justification for using the identity matrix approximation and discuss any experiments you conducted with alternatives.\\n\\n4. Rootfinding can be inherently unstable. Did the authors investigate other alternatives, such as optimization-based methods? Why did the authors choose one over another?\\n\\n5. This is minor, but I would've picked a less generic name for the paper. \\\"ParaSolver\\\" could imply a large number of things, but this is mainly a Newton-based parallel solver for PDMs. Consider a name change.\", \"questions\": \"1. See the Weaknesses section above. These must be addressed.\\n\\n2. The language in the paper hinders the presentation occasionally. For instance, the second paragraph of the related work section (Section 2) was challenging to read, primarily due to strange use of passive voice. There are similar issues throughout the paper. I suggest reframing to active voice wherever possible to improve clarity.\\n\\n3. Section 4.2, below equation (9): What is the \\\"reverse of Jacobian matrix\\\"? Do the authors mean the inverse? \\n\\n4. The authors separately explore tolerance and speedup in the results. I'd like to know which tolerance leads to the best speedup without compromising visual results. The authors should add a new graph with this extra information.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are happy to hear that our response addressed your concerns. Also, thank you for raising the score! It was a pleasure to discuss with you! Wish you all the best and continued success in your scientific pursuits\\uff01\"}", "{\"comment\": \"Dear Reviewers 7Ggk, eRWg,\\nIf not already, could you please take a look at the authors' rebuttal? Thank you for this important service.\\n-AC\"}", "{\"comment\": \"Thanks for the responsible review and valuable suggestions.\\n\\n> **Q1. The authors should discuss the smoothness properties of the nonlinear system and how it impacts the convergence of the Newton solver, and also comment on the theoretical guarantees and limitations of their approach. If the nonlinear residual has a complicated landscape, Newton can easily get stuck. if the authors could justify why these failure modes don't occur in DPMs, that would also suffice.**\\n\\n**Response:** This is an excellent suggestion, and we sincerely appreciate your insight as we did overlook this important idea while preparing the initial manuscript. In the revised version, we have included additional theoretical discussion regarding the smoothness of the proposed method in Proposition 2. It suggests that the residual function is sufficiently smooth since the F-norm of the Jacobian is upper bounded by 1, thereby making it suitable for Newton's method in root-finding without getting stuck easily. This conclusion stems from the nature of diffusion models, which involve progressively adding standard Gaussian noise to the data. We warmly invite you to see the general response and Proposition 2 in the revised manuscript for more details.\\n\\n> **Q2.\\tHow are Equations 12 and 13 justified? Did the authors experiment with alternatives? Please provide theoretical/empirical justification for using the identity matrix approximation and discuss any experiments you conducted with alternatives.**\\n\\n**Response:** Nice suggestion! We haven't investigated the other alternatives, as we found that the identity matrix approximation produces very effective and stable results across all experiments, which is also widely observed by existing works using similar practices. Furthermore, we present exciting theoretical developments showing that using the identity matrix to approximate the gradient in the Jacobian of the residual function can offer a descent direction. We pleasantly invite you to see the general response and Proposition 4 in the revised manuscript for more details.\\n\\n> **Q3.\\tRootfinding can be inherently unstable. Did the authors investigate other alternatives, such as optimization-based methods? Why did the authors choose one over another?**\\n\\n**Response:** We have not examined optimization-based methods because they generally require gradient calculations on all points in the diffusion trajectories simultaneously, which can be computationally prohibitive and reduce sampling speed. Instead, we chose the Jacobian-free root-finding method since it\\u2019s well-established without the need for gradient calculations. It also performs consistently well across all our experiments. \\n\\nWe completely agree that other optimization-based techniques may be more suitable for solving our proposed nonlinear system. However, we want to emphasize that this manuscript focuses on developing a foundational solver for the proposed generic nonlinear system and achieves sizeable speed up (Reviewer 7Ggk), which to our best knowledge is a new record in this field. We thus personally consider it reasonable to leave the alternatives for future research.\\n\\n> **Q4. Consider a less generic name for the method.**\\n\\n**Response:** That's a nice observation. We've had long discussions about choosing a less generic name. Names like \\\"HiPa,\\\" \\\"HiParaDPM,\\\" \\\"Nowton-ParaSolver,\\\" and \\\"ParaDPMs\\\" have come up in our considerations, but we think them either too generic, overly long, or not closely aligned with the topic of hierarchical parallel sampling for diffusion models. For now, we've decided to use \\\"HiParaDPM\\\" as a temporary name. It would be honored if the reviewer could provide a name for the method proposed in this paper, and we truly appreciate any recommendations.\\n\\n\\n> **Q5.\\tThe language in the paper hinders the presentation occasionally. For instance, the second paragraph of the related work uses a strange passive voice. There are similar issues throughout the paper. I suggest reframing to active voice wherever possible to improve clarity.**\\n\\n**Response:** Thanks for the attention to detail! We have revised them using an active voice to enhance clarity.\\n\\n> **Q6.\\tSection 4.2, below equation (9): What is the \\\"reverse of Jacobian matrix\\\"? Do the authors mean the inverse?**\\n\\n**Response:** Yes, we have fixed it.\\n\\n> **Q7. I'd like to know which tolerance leads to the best speedup without compromising visual results.**\\n\\n**Response:** We gently remind the reviewer that the setting of tolerance for the best speedup without compromising visual results for our method is detailed in the \\\"Hyperparameter Settings\\\". We apply these settings across all experiments.\\n\\n\\nWe hope this resolves your concerns and are delighted to answer any additional questions regarding our manuscript. It is heartening to hear that you praise \\\"the authors make a few important contributions\\\". We sincerely hope you can generously reconsider the score if your concerns have been resolved.\"}", "{\"comment\": \"I believe the authors have addressed my and the other reviewers concerns appropriately and thank them for their time and effort.\\n\\nSince I gave this manuscript a good grade, I am not changing my rating further.\"}", "{\"title\": \"The authors are looking forward to your feedback. Let's discuss.\", \"comment\": \"Dear Reviewer eRWg,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our initial manuscript once again.\\n\\nBased on feedback from other reviewers, such as **\\\"The authors have addressed my concerns\\\"** (Reviewer totX) and **\\\"I believe the authors have addressed my and the other reviewers' concerns appropriately\\\"** (Reviewer 7Ggk), along with our direct responses regarding the identity matrix approximation and the comparison with ParaTAA, **we are more confident that we have addressed your main concerns thoroughly. Moreover, our current manuscript is now well-supported by both theoretical and experimental evidence.** \\n\\nFinally, it was a pleasure to discuss with you! Wish you all the best and continued success in your scientific pursuits\\uff01\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"The authors present an interesting extension of previous work for inference in DPMs. The general idea is to formulate the solution to the ODE or SDE not as a sequential integration, but instead look at it as solving a set of nonlinear equations, done either via fix-point iteration or utilizing root finding algorithms. While this class of approaches does not improve the computational effort per se, it can lead to reduced wall-clock time by using less evaluation points compared to what is necessary when sequentially integrating the differential equation.\\n\\nThe paper proposes a unified framework that encompasses previous approaches as extreme cases. This results in a set of banded nonlinear equations. One key insight of the authors is to realize and proof that the banded system posses a unique and unbiased solution. They then further utilize the Newton method of root finding to accelerate the fix-point iterations. For this, one needs to calculate the Jacobian matrix. This, in general, is computationally prohibitive. An approximation scheme is proposed, where only the diagonal of the Jacobian is used, and the off-diagonal terms are set to unity. This results in an only modest increase in function evaluations over a sequential solution, indicating in addition to the reduced wall-clock time only a small increase in computational cost.\\n\\nThe achieved scores are on par with previous methods. A sizeable speed up in terms of wall-clock time is achieved leading to a better user experience. This is done without an massive increase in computational cost.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I believe this paper is generally well written and makes an relevant contribution to the field.\\n\\nAll claims are well supported by experiments, and the analyses appear sound.\", \"weaknesses\": [\"I believe the differences to the established methods utilizing fixed-point iterations for DPMs and their advances such as utilizing the Anderson acceleration used in previous work could be made clearer. It is currently not clearly mentioned that ParaTAA utilizes an conceptually similar idea. Albeit of course the approaches are different, they share common ideas which do not become clear without reading the literature carefully. I would encourage the authors to authors to rework the related work section and mention the differences to the other works more clearly.\", \"It would be interesting to see by how much the number of necessary iterations to reach the threshold decreases by utilizing the Newton method. I recommend the authors include an ablation study showing how the number of iterations and convergence are affected by (1) using the Newton method vs. fixed-point iteration, and (2) approximating vs. fully computing the Jacobian, on a toy problem.\"], \"questions\": [\"I have only a few minor comments:\", \"Please improve Figure 1 by using higher resolution, adding axis labels, and using a consistent font and style with the other figures in the paper.\", \"Figure 5: There is, to me, no discernible difference between the images for different N. Can the authors comment on this? Why do we see such a clear difference for DDPM, but not for DDIM? It would be good if the authors either (1) provide a quantitative analysis of the differences between results for different N, if they exist, or (2) explain why DDIM results are less sensitive to N compared to DDPM.\", \"In the Table 1 & 2, the results are ordered as DDPM, DDIM, DPMSolver, whereas in the figures the order is DDIM, DPMSolver, DDPM. I would appreciate some reordering to make it consistent.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the responsible review and valuable suggestions.\\n\\n> **Q1.\\tI believe the differences to the established methods in previous work could be made clearer. I would encourage the authors to authors to rework the related work section and mention the differences to the other works more clearly.**\\n\\n**Response:** Thanks for this nice idea. We have reorganized the related works to make the differences from the other works more clear.\\n\\n> **Q2.\\tIt would be interesting to see by how much the number of necessary iterations to reach the threshold decreases by utilizing the Newton method. I recommend the authors include an ablation study showing how the number of iterations and convergence are affected by (1) using the Newton method vs. fixed-point iteration, and (2) approximating vs. fully computing the Jacobian, on a toy problem.**\\n\\n\\n**Response:** Interesting idea! We consider a simple example: $ F_n(x_0, x_1, \\\\ldots, x_{N-1}) = x_{n+1} - 0.7x_n $ for $ n \\\\in \\\\{0, 1, \\\\ldots, N-1\\\\} $. We set $ N = 50 $ and randomly initialize 50 starting points for each test. After conducting 5 trials, we report the average number of iterations needed for convergence. Our results indicate that the identity approximation is indeed more effective than the fixed-point method; however, it still lags behind Newton's method. This suggests that a more accurate approximation can be developed for our proposed ParaSolver, which we believe represents a promising avenue for future research.\\n\\n| Method | Iterations |\\n|-----------------------|---------------------|\\n| Newton's method | 1 |\\n| Fixed-point method | 50 |\\n| Identity approximation | 38.6 |\\n\\n> **Q3. Please improve Figure 1 by using higher resolution, adding axis labels, and using a consistent font and style with the other figures in the paper.**\\n\\n**Response:** Thank you! Enhancing Figure 1 is a great suggestion. However, given the significant time we've already spent on theoretical analysis, we won\\u2019t be able to refine it to a high standard at this moment due to the constraints of the response period.\\n\\n> **Q4. Figure 5: There is no discernible difference between the images for different N. Can the authors comment on this? Why do we see such a clear difference for DDPM, but not for DDIM?**\\n\\n**Response:** Thanks for the attention to detail! We want to respectfully clarify that the differences between images for various $N$ primarily arise from the features of the generative images rather than the sequential methods. We think images featuring many prominent objects exhibit less noticeable variation across $N$, as the obvious objects are clear quickly while the fewer distinctive ones take longer. This leads to minimal changes needed in later parallel iterations, causing the images to appear nearly identical across different $N$.\\n\\nIn Figure 5, we can actually observe a minor difference in the less prominent building at the center of the image, while the other more obvious objects show no discernible differences across different $N$, as they are quickly generated to be clear.\\n\\n> **Q5. In the Table 1 & 2, the results are ordered as DDPM, DDIM, DPMSolver, whereas in the figures the order is DDIM, DPMSolver, DDPM. I would appreciate some reordering to make it consistent.**\\n\\n**Response:** Thanks for the catch! We have reordered it in the revised version. \\n\\nWe hope this resolves your concerns. We are very encouraged that you praise us for an interesting idea, well-supported experiments, sound analysis, and making a relevant contribution to the field. We sincerely hope you can recommend this work further if your concerns have been resolved.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"The authors are looking forward to your feedback. Let's discuss.\", \"comment\": \"Dear Reviewer eRWg,\\n\\nWe sincerely appreciate the time and effort you have devoted to reviewing our manuscript. \\n\\nWe now present the experimental results comparing our method with ParaTAA. We are thrilled to report that by utilizing early stopping in ParaTAA, our ParaSolver achieves significant speed improvements in both iteration steps and overall processing time. Please note that the results for ParaTAA are taken from its original paper, as we were unable to reproduce them.\\n\\nSpecifically, we followed ParaTAA\\u2019s methodology to report the CLIP Score when using the text-to-image model Stable Diffusion-v1.5 across 1,000 random samples. We set the maximum parallel iterations to 10 and the parallel window size to 6 for our ParaSolver.\\n\\nWe are excited to note that our experimental results surpass ParaTAA in both steps and speedup when applied to accelerate DDIM with 25 sequential steps. \\n\\n\\n| Method | CLIP Score | Steps | Speedup|\\n|-----------------|----------- |--------| --------|\\n| DDIM | 23.9 | 25 | $1.0\\\\times$ |\\n| DDIM+ParaTAA | 23.8 | 7 | $1.2\\\\times$|\\n| DDIM+ParaSolver | **24.1** | **5** | $\\\\mathbf{3.3}\\\\times$ |\\n\\n\\n\\nFor now, we have made sure to address your remaining concerns directly and thoroughly.\\n\\nWe understand that you may be handling multiple papers and have a busy schedule. \\n\\n**However, as the author-reviewer discussion phase is drawing to a close, with less than two days left, we are very concerned that there may not be sufficient time to thoroughly address any additional questions you might have**.\\n\\n**We eagerly await your feedback on our responses.**\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"We are glad to hear that our response addressed your concerns well. Also, thank you for your high score. It was a pleasure to discuss with you! Wish you all the best and continued success in your scientific pursuits\\uff01\"}", "{\"metareview\": \"This paper considers accelerating the sequential inference process of Diffusion Probabilistic Models by a parallel sampling algorithm. Although it is standard to equivalently view a sequential iteration as solving banded nonlinear equations and parallelize this solve to improve the computational efficiency, reviewers and I agree that the specific approach proposed in this work is interesting. Reviewers had concerns about technical details and comparison with existing approaches, but some of the concerns seemed to have resolved during the rebuttal process. Overall, the strengths overweigh weaknesses in my opinion, and I'm pleased to recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers had concerns about technical details and comparison with existing approaches, but some of the concerns seemed to have resolved during the rebuttal process. Overall the reviewers' assessments were positive anyway, and I agree.\"}", "{\"title\": \"The authors are looking forward to your feedback. Let's discuss.\", \"comment\": \"Dear Reviewer totX,\\n\\nWe sincerely appreciate the time and effort you have devoted to reviewing our manuscript.\\n\\nWe now have more exciting experimental results on the smoothness.\\n \\nSpecifically, we further conduct experiments to calculate the values of the F-norm $||\\\\frac{\\\\partial}{\\\\partial \\\\hat{X}{t_n}} \\\\Phi(\\\\hat{X}_{t_n})||_F$ under various values of $N$. \\nWe report the mean and standard deviation of the F-norm over all iterations. It is exciting that the experimental results align closely with the theoretical predictions in Proposition 2, showing a very small F-norm around $1$. \\n\\n\\n| Method | F-norm |\\n|-----------------------|---------------------|\\n| $N$ = 10 | 1.2520 $\\\\pm $ 0.1294 |\\n| $N$ = 100 | 1.0152 $\\\\pm $ 0.0106 |\\n| $N$ = 500 | 1.0020 $\\\\pm $ 0.0030 |\\n| $N$ = 1000 | 1.0070 $\\\\pm$ 0.0021 |\\n\\n\\n**For now, we have both theoretical and experimental results that confirm the smoothness of our nonlinear system.** Consequently, we have made sure to address your remaining concerns directly and thoroughly.\\n\\n We understand that you may be handling multiple papers and have a busy schedule. \\n\\n**However, as the author-reviewer discussion phase is drawing to a close, with less than two days left, we are very concerned that there may not be sufficient time to thoroughly address any additional questions you might have.**\\n\\n**We eagerly await your feedback on our responses.**\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thanks for the responsible review and valuable suggestions.\\n\\n> **Q1.\\tThe reviewer thinks it would be essential for the authors to implement ParaTAA and use it as one extra baseline. Moreover, it might also be necessary for the authors to compare ParaSolver with approaches that accelerate diffusion models from other aspects, such as knowledge distillation [3-5], restart sampling [6], and self-consistency [7]. Furthermore, the authors should consider releasing the code used for implementing the ParaSolver algorithm.**\\n\\n**Response:** Good consideration! We attempted to implement ParaTAA using its source code, but it caused several issues on our machine. Despite spending considerable time trying to resolve these problems, we weren\\u2019t successful. We\\u2019ve decided to pause this for now and will revisit it later.\\n\\nWe believe our method is comparable to ParaTAA. First, according to its paper, ParaTAA can reduce the steps needed for sequential sampling by a maximum of 14 times, whereas our method can achieve a reduction of 31 times. Furthermore, in terms of wall-clock time speedup for optimizing 25 and 50 sequential steps, ParaTAA achieves improvements of 1.5 to 2.9 times, while we achieve improvements of 2.1 to 3.8 times.\\n\\nRegarding the other acceleration methods, they are not in conflict with ours. Our approach is compatible with both distillation and consistency-based methods. The distilled models utilize sequential denoising for sampling, which can also be parallelized using our ParaSolver. Hence, we believe a comparison with these methods is unnecessary.\\n\\nWe did not provide the code because our paper is not yet published, and we need to keep it confidential. We commit that all source code necessary for conducting and analyzing the experiments will be made publicly available upon publication, with a license permitting free use. We will include the publicly accessible code link, newly added experiments, and analysis in the final accepted manuscript. \\n\\n> **Q2.\\tThere are some minor issues regarding the presentation of the paper. For instance, it can be possibly rephrased as \\\"to construct a set of more precise initial values that conform to the Definition 1 quickly\\\". Moreover, the authors might also consider adding a few figures to illustrate the ParaSolver algorithm more vividly.**\\n\\n**Response:** Thanks for the catch! We have revised the problematic phrase. \\n\\nIllustrating the ParaSolver algorithm with figures is a valuable idea. However, given the considerable time we've already dedicated to theoretical analysis, we won't be able to add exquisite and vivid figures at this time due to the constraints of the response period.\\n\\n\\n> **Q3.\\tThe authors proposed to approximate the Jacobian term $\\\\frac{\\\\partial}{\\\\partial \\\\hat{X}^{(k)}{t_n}}\\\\Phi(t_{n+1}, t_n, \\\\hat{X}^{(k)}_{t_n})$ with the identity matrix. Could the authors discuss which specific parts in the cited papers on Jacobian-free backpropagation actually used similar techniques? Furthermore, would it be possible for the authors to provide some mathematical intuitions on why the identity matrix should work here? Is it possible to derive some error bounds via numerical analysis?**\\n\\n**Response:** Thanks for the reviews. The cited papers [1, 2, 3] employ similar techniques. In paper[1], Eqs. 14\\u201316 and Theorem 0.2 prove that substituting the Jacobian with the identity matrix still provides a descent direction. In paper[2], this is noted in Eq. 4.1, which claims that approximating the Jacobian with the identity is equivalent to considering the first term of the Neumann series. Meanwhile, Eq. 3 in paper[3] empirically found that omitting the U-Net Jacobian term results in an effective gradient for optimizing DIPs with diffusion models.\\n\\nFurthermore, we have achieved some exciting theoretical results of the identity matrix approximation. We warmly invite you to see the general response and Proposition 4 in the revised manuscript for more information. In particular, following [1], we have shown that the identity matrix approximation provides a descent direction not contradictory to the actual Jacobian.\\n\\nWe hope this addresses your concerns. We're greatly encouraged that you commend the manuscript as excellent soundness, presentation, and contribution. We sincerely hope that you can further recommend this work with a higher score if your concerns have been resolved.\\n\\n**Reference:**\\n\\n[1] Jfb: Jacobian-free backpropagation for implicit networks.\\n\\n[2] Training implicit networks for image deblurring using jacobian-free backpropagation\\n\\n[3] Dreamfusion: Text-to-3d using 2d diffusion\"}", "{\"comment\": \"# General Response\\n\\nWe sincerely thank the reviewers for carefully reviewing this initial manuscript and are encouraged by the exceptionally positive assessment on **excellent/good soundness, presentation, and contribution** (all three reviewers), **important/relevant contributions to the field** (Reviewers totX and 7Ggk), **well-supported/complete experiments** (Reviewers 7Ggk and eRWg), and **complete literature review** (Reviewer eRWg). \\n\\nThe reviewers have raised two common concerns regarding the applicability of Newton's method and the justification of the identity matrix approximation. In response, in the revised manuscript, we have some exciting theoretical analyses addressing these issues\\uff1a \\n\\n- We analyze the smoothness of the residual function of the proposed banded nonlinear system by integrating insights from diffusion models. Our findings demonstrate that **the residual function is sufficiently smooth, making it suitable for Newton's method in root-finding without getting stuck easily** (Proposition 2). \\n- Furthermore, we show that **using the identity matrix to approximate the gradient in the Jacobian of the residual function can ensure a descent direction that is not contradictory to the true Jacobian** (Proposition 4).\\n- To enhance the completeness of the manuscript, we additionally analyze the convergence speed. We discover that **the rate lies between linear and quadratic convergence** (Proposition 4).\\n\\n**We sincerely invite you to refer to Proposition 2 and Proposition 4 for more details and are genuinely excited about these findings**, as they highlight the potential for applying optimization-based methods with faster quadratic convergence. We completely agree with the reviewers on the importance of investigating alternative methods like optimization-based approaches and are eager to pursue future explorations, such as using more precise Jacobian approximation techniques or directly leveraging optimization-based methods. By developing more efficient ways to address the computational cost of the Jacobian, we believe these approaches can substantially improve the convergence speed of parallel methods.\\n\\n\\n\\n**Last but not least**, we thank the PCs, ACs, and reviewers again for their invaluable time and effort. **We commit that the source code necessary for conducting the experiments will be made publicly available upon publication, with a license permitting free use**. We will ensure that the final accepted manuscript includes a link to the publicly accessible code, along with newly added experiments and analyses. Furthermore, **we will keep the reviews and author discussion public at all times as well**.\"}", "{\"title\": \"The authors are looking forward to your feedback. Let's discuss.\", \"comment\": \"Dear Reviewer 7Ggk,\\n\\nWe sincerely appreciate the time and effort you have devoted to reviewing our manuscript once again. We understand that you may be handling multiple papers and have a busy schedule. \\n\\n**However, as the author-reviewer discussion phase is drawing to a close, with less than two days left, we are very concerned that there may not be sufficient time to thoroughly address any additional questions you might have.**\\n\\n**We eagerly await your feedback on our responses.**\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper proposed a framework that generalizes the sequential sampling process of diffusion models as solving a system of banded nonlinear equations. Applying the Newton-Raphson method to solve the nonlinear equations then yields a corresponding parallel sampling algorithm for diffusion models. By utilizing the unit-diagonal structure of the banded nonlinear equations' Jacobian matrices, the authors further simplified the updating rules of the parallel algorithm. Extensive numerical experiments were also conducted to show that the ParaSolver algorithm proposed in this paper can indeed accelerate the inference time of diffusion models compared to existing implementations based on parallel sampling.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. This paper has provided a complete literature review of related work on accelerating diffusion models via parallel sampling. Also, both the theoretical and algorithmic results in this paper are presented in a relatively clear way to follow.\\n\\n2. A complete set of large-scale numerical experiments on the Imagenet and LSUN-Church datasets are included to justify the acceleration achieved by the proposed ParaSolver algorithm.\", \"weaknesses\": \"1. The authors mentioned in lines 366-367 that the ParaTAA algorithm proposed in [1] needs to be implemented for comparison as it has yet to be integrated into the Diffusers library. However, given that there are only a few empirical works on combining parallel sampling with diffusion models, the reviewer thinks it would be essential for the authors to implement ParaTAA and use it as one extra baseline. Moreover, it might also be necessary for the authors to compare ParaSolver with approaches that accelerate diffusion models from other aspects, such as knowledge distillation [3-5], restart sampling [6], and self-consistency [7]. Furthermore, the authors should consider releasing the code used for implementing the ParaSolver algorithm.\\n\\n2. There are some minor issues regarding the presentation of the paper. For instance, the phrase \\\"to fast construct a set of more precise initial values that conform to the Definition 1\\\" in lines 296-297 doesn't seem quite right. It can be possibly rephrased as \\\"to construct a set of more precise initial values that conform to the Definition 1 quickly\\\". Moreover, the authors might also consider adding a few figures to illustrate the ParaSolver algorithm more vividly, just as what has been done in previous work [2].\", \"questions\": \"1. The reviewer's main question about the design of the ParaSolver algorithm is the claim in lines 244-247 of the paper. Specifically, the authors proposed to approximate the Jacobian term $\\\\frac{\\\\partial}{\\\\partial \\\\hat{X}^{(k)}_{t_n}}\\\\Phi(t_{n+1}, t_n, \\\\hat{X}^{(k)}_{t_n})$ with the identity matrix in the original update rule (11). Could the authors discuss which specific parts in the cited papers on Jacobian-free backpropagation (lines 245-246) actually used similar techniques? Furthermore, would it be possible for the authors to provide some mathematical intuitions on why the identity matrix should work here? Is it possible to derive some error bounds via numerical analysis?\", \"references\": \"[1] Tang, Z., Tang, J., Luo, H., Wang, F. and Chang, T.H., 2024, January. Accelerating parallel sampling of diffusion models. In Forty-first International Conference on Machine Learning.\\n\\n[2] Shih, A., Belkhale, S., Ermon, S., Sadigh, D. and Anari, N., 2024. Parallel sampling of diffusion models. Advances in Neural Information Processing Systems, 36.\\n\\n[3] Luhman, E. and Luhman, T., 2021. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388.\\n\\n[4] Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J. and Salimans, T., 2023. On distillation of guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14297-14306).\\n\\n[5] Salimans, T. and Ho, J., 2022. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512.\\n\\n\\n[6] Xu, Y., Deng, M., Cheng, X., Tian, Y., Liu, Z. and Jaakkola, T., 2023. Restart sampling for improving generative processes. Advances in Neural Information Processing Systems, 36, pp.76806-76838.\\n\\n[7] Song, Y., Dhariwal, P., Chen, M. and Sutskever, I., 2023. Consistency models. arXiv preprint arXiv:2303.01469.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The authors have addressed my concerns. I will increase my score to a 6.\"}" ] }
2JXe3RprGS
Turn-by-Turn Driving Navigation: Leveraging Sequence Model for Real-time Audio Instructions
[ "Yiming Yang", "Hao Fu", "Fanxiang zeng", "xikai yang", "Yue Liu" ]
Turn-by-turn (TBT) navigation systems are integral to modern driving experiences, providing real-time audio instructions to guide drivers safely to destinations. However, existing audio instruction policy often rely on rule-based approaches that struggle to balance informational content with cognitive load, potentially leading to driver confusion or missed turns in complex environments. To overcome these difficulties, we first model the generation of audio instructions as a multi-task learning problem by decomposing the audio content into combinations of modular elements. Then, we propose a novel deep learning framework that leverages the powerful spatiotemporal information processing capabilities of Transformers and the strong multi-task learning abilities of Mixture of Experts (MoE) to generate real-time, context-aware audio instructions for TBT driving navigation. A cloud-edge collaborative architecture is implemented to handle the computational demands of the model, ensuring scalability and real-time performance for practical applications. Experimental results in the real world demonstrate that the proposed method significantly reduces the yaw rate compared to traditional methods, delivering clearer and more effective audio instructions. This is the first large-scale application of deep learning in driving audio navigation, marking a substantial advancement in intelligent transportation and driving assistance technologies.
[ "Turn-by-Turn Navigation; Deep Learning; Sequence Models" ]
https://openreview.net/pdf?id=2JXe3RprGS
https://openreview.net/forum?id=2JXe3RprGS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "dn5BRPSow1", "HiHjyjilxn", "BOy3QC2XZS", "3ORv7dOSs7" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1733478331373, 1730495820319, 1730129888639, 1730536183011 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9441/Authors" ], [ "ICLR.cc/2025/Conference/Submission9441/Reviewer_C5PV" ], [ "ICLR.cc/2025/Conference/Submission9441/Reviewer_N9PL" ], [ "ICLR.cc/2025/Conference/Submission9441/Reviewer_JAUk" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"In this paper, the authors propose a method to optimize turn-by-turn navigation instructions for drivers using a deep learning approach. They tested their method on a custom-created dataset and analyzed results through real-world A/B testing. The authors claim to be the first to investigate this problem and report making significant progress in this area.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Spent lots of resources in developing this method, yet have to see the benefit of that.\", \"weaknesses\": \"The quality of the paper is quite poor. The text is lengthy yet fails to convey the main message effectively. Key terms, such as \\\"yaw rate\\\" and \\\"seesaw effect,\\\" are either undefined or introduced too late in the paper. The related work section is missing, and relevant literature is not cited. The final paragraph of the background offers no new information and merely summarizes the introduction.\\n\\nThe methodology is difficult to understand, lacking motivation for using such a complicated framework and failing to clarify what advantages it provides. Important details are relegated to the appendix rather than included in the main text. The paper is mostly written in the passive voice, with vague statements like, \\u201cTo address the challenges in generating real-time, context-aware instructions, we model the audio instruction in TBT driving navigation as a multi-task learning problem. Enables the model to optimize the necessary components for generating the audio.\\u201d They repeatedly use the term \\\"context-aware\\\" without explaining what it actually means.\", \"questions\": \"1. What new information does the background section provide? Most of it repeats content from the introduction, and the remaining parts would be more appropriate in the methodology section.\\n2. The user study protocol is not clearly described. It\\u2019s unclear if 100 drivers actually drove the car following the navigation instructions, or if they merely judged the instructions by listening to them. If the evaluation was only auditory, it is inconclusive to determine the effectiveness of the proposed method.\\n3. What roles do the GPT decoder and CrossNet network play in the proposed framework?\\n4. There are no details on data preprocessing or how the embedding features are extracted.\\n5. The paper lacks details on the layer-by-layer structure of the framework and how data is processed through each stage.\\n6. How is accuracy calculated in the ablation study?\\n7. what are the statistics of user study?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a deep-learning based method for generating audio navigation instructions for drivers. The proposed modelA neural is constructed based on the Transformer architecture and a mixture of experts (MOEs), along with a multi-objective loss function for training. Experimental results indicate that, compared to HMM-based methods, the proposed approach achieves higher subjective scores.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents a novel application of deep learning.\\n2. The paper is well-strutured and easy to read.\", \"weaknesses\": \"1. Lack of novelty. There are few innovative designs observed in the network architecture or training.\\n2. The experimental section lacks a broader comparison. Audio navigation instructions are a common feature in mapping applications, and there are likely many established methods available. The paper only compares with one HMM-based method, which was proposed in 2012 and is not novel.\\n3. The ablation study does not provide valuable insights. The experimental results indicate that removing most modules solely (e.g. the MOE module) from the network does not lead to significant performance degradation, which raises concerns about potential redundancy in the network design.\", \"questions\": \"1. Is there a specific advantage to using geometric averaging of different loss functions in this task? Was there any comparison made with arithmetic averaging?\\n2. The paper mentions that the HMM-based instruction policy was used during the data collection phase. Does this imply that the supervision signal for training the model comes from this algorithm?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a deep learning model for turn-by-turn navigation, enhancing traditional audio guidance systems by making instructions more adaptive and context-aware. Using a sequence-based approach and a cloud-edge setup, it delivers real-time, precise directions, reducing navigation errors and easing driver cognitive load.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper introduces a deep learning model for turn-by-turn navigation, offering two key benefits: more adaptive, context-aware audio guidance and reduced navigation errors. The model\\u2019s sequence-based design and cloud-edge setup ensure real-time, precise directions, greatly enhancing driver support\\u200b.\", \"weaknesses\": \"Please refer to Questions section.\", \"questions\": \"1. The navigation capabilities targeted by this research are effectively addressed by existing navigation apps, which can already provide lane-level guidance with real-time audio instructions. This research does not clearly establish a novel problem or significant gap in current technology.\\n\\n2. The authors assert that theirs is the first real-world application of deep learning for audio navigation. However, similar problems have been thoroughly researched and resolved in the NLP field, with deep learning applications already prevalent. Thus, the claimed contributions seem overstated.\\n\\n3. The summary of contributions lacks specificity, offering mostly general points without a clear overview of the work. This makes it difficult for readers to grasp the precise focus and innovations in the research.\\n\\n4. The Related Work section references too few sources and does not include recent advancements in the field. Although the authors highlight using deep learning to solve their problem, they fail to reference relevant studies on deep learning in audio navigation, a significant oversight.\\n\\n5. The \\\"Problem Formalization\\\" section is inadequately explained. Readers cannot clearly understand the input-output flow, and while Table 9 in the appendix offers some clarification, the choice to use intermediate features as inputs adds unnecessary complexity, making the initial inputs and outputs unclear.\\n\\n6. Authors state that large language models (LLMs) are unsuitable for TBT audio instruction, opting instead for a transformer-based approach. However, this claim lacks sufficient rationale, given that many LLMs perform well on similar tasks and are widely used in both academia and industry. The authors neither justify nor experimentally validate why LLMs would be unsuitable for this task.\\n\\n7. Proposed method appears overly generic and largely involves combining existing model components without introducing novel ideas. This approach lacks sufficient originality to merit publication at a conference like ICLR.\\n\\n8. It is unclear how the authors have knowledge of GPT\\u2019s exact architecture, given that it is a black-box model. Furthermore, considerations such as model size, inference speed, training time, and computational cost, which are crucial for real-time applications, are not discussed.\\n\\n9. Important components in the methods section, such as Deep CrossNet and GPT Decoder, are not adequately described. This lack of detail leaves readers uncertain about how these components function within the model.\\n\\n10. The experiments are disorganized and limited in scope. There is a lack of strong baselines and comparisons to recent, relevant work, making it difficult to ascertain whether the method achieves SOTA performance. The experiments also lack ablation studies, visualizations, and key information.\\n\\n11. During driving, overly detailed instructions may be distracting, as drivers may not want or need continuous audio prompts. This issue of instruction density is not addressed.\\n\\n12. Minor language errors persist, such as a missing space between \\\"Figures 3(b)\\\" and \\\"3(c)\\\" on line 422, which reflects a lack of careful proofreading.\\n\\n13. The paper lacks novelty, as indicated by the outdated citations and few references to recent research, which suggests that the work does not align with the current cutting-edge.\\n\\nOverall, this paper is structured more like a technical report than a research paper. Given its organization and limited scientific contribution, it does not yet meet the standard for acceptance at a conference.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2JN73Z8f9Q
MultiMedia-Agent: A Multimodal Agent for Multimedia Content Generation
[ "Daoan Zhang", "Wenlin Yao", "Xiaoyang Wang", "Yebowen Hu", "Jiebo Luo", "Dong Yu" ]
With the advancement of AIGC (AI-generated content) technologies, an increasing number of generative models are revolutionizing fields such as video editing, music generation, and even film production. However, due to the limitations of current AIGC models, most models can only serve as individual components within specific application scenarios and are not capable of completing tasks end-to-end in real-world applications. In real-world applications, editing experts often work with a wide variety of images and video inputs, producing multimodal outputs---a video typically includes audio, text, and other elements. This level of integration across multiple modalities is something current models are unable to achieve effectively. However, the rise of agent-based systems has made it possible to use AI tools to tackle complex content generation tasks. To deal with the complex scenarios, in this paper, we propose a multimedia content generation agent system designed to automate complex content creation. Our agent system includes a data generation pipeline, a tool library for content creation, and a set of metrics for evaluating preference alignment. Notably, we introduce the skill acquisition theory to model the training data curation and agent training. We designed a two-stage correlation strategy for plan optimization, including self-correlation and model preference correlation. Additionally, we utilized the generated plans to train the MultiMedia-Agent via a three stage approach including base/success plan finetune and preference optimization. The comparison results demonstrate that the our approaches are effective and the MultiMedia-Agent can generate better multimedia content compared to GPT4o.
[ "multimodal agent", "video generation" ]
https://openreview.net/pdf?id=2JN73Z8f9Q
https://openreview.net/forum?id=2JN73Z8f9Q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gTFR0tTri7", "cF3GpU59z1", "ZY64RRQY1e", "JUhfJNMbDA", "9lVqMGxIMv", "3P166ZHoGG" ], "note_type": [ "official_comment", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1732057539304, 1730199241137, 1729885077529, 1732057584573, 1730731891940, 1730270007908 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8328/Authors" ], [ "ICLR.cc/2025/Conference/Submission8328/Reviewer_pJVa" ], [ "ICLR.cc/2025/Conference/Submission8328/Reviewer_1igN" ], [ "ICLR.cc/2025/Conference/Submission8328/Authors" ], [ "ICLR.cc/2025/Conference/Submission8328/Reviewer_gKyB" ], [ "ICLR.cc/2025/Conference/Submission8328/Reviewer_Jrej" ] ], "structured_content_str": [ "{\"title\": \"Response to all reviewers\", \"comment\": \"Thank you to all the reviewers for your appreciation and valuable suggestions. We have carefully studied each of your comments. We will design a more standardized and fair evaluation scheme and revise the paper thoroughly based on your feedback. We hope that the revised version will better reflect the value of our research and meet your expectations. Once again, thank you for your support and assistance with our work!\"}", "{\"summary\": \"This paper introduces a multimedia content generation agent, referred to as the MultiMedia-Agent, which is designed to automate complex multimedia content creation tasks. The authors position their agent as a system capable of outperforming existing generative AI models in this space, including GPT-4o. Through comparative analysis, they argue that their proposed MultiMedia-Agent generates higher-quality multimedia content, offering a better media outputs compared to GPT-4o.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper is well-structured and easy to follow, making its technical concepts accessible to readers, which enhances understanding and supports the proposed research\\u2019s coherence.\\n\\n2.\\tThe topic of multimedia content automation is timely and has high relevance, especially given the expanding demand for digital content across various domains, from marketing to education. This research holds considerable potential for real-world application, promising efficiency and scalability in daily content creation tasks.\\n\\n3.\\tThe authors\\u2019 attempt to specialize in multimedia content generation represents an innovative approach that could fill an important gap in automated content creation, potentially providing richer, multi-modal outputs beyond current text-based LLM capabilities.\", \"weaknesses\": \"1.\\tThe framework appears to primarily leverage existing technologies without significant structural innovation. It\\u2019s unclear if the advancements lie in model architecture or simply application. Expanding on how the MultiMedia-Agent advances beyond the foundational technologies would strengthen the paper.\\n\\n2.\\tThe comparison with GPT-4o raises concerns, as GPT-4o is not explicitly designed for multimedia content generation. This choice limits the comparative relevance, as the study might benefit from benchmarking against more specialized or similar frameworks in multimedia generation. Adding such comparisons would enhance the credibility of the proposed system's advantages.\\n\\n3.\\tI am a bit concerned about the evaluation metrics the authors proposed. It seems to be that most of the metrics are based on GPT-4o. It will be more convincing if the authors can show the evaluation from GPT-4o truly aligns with human perceptions.\\n\\n4.\\tMinor typographical errors appear in the text, including the abstract. For instance, in the abstract, \\u201cthe our approaches\\u201d should be revised to \\u201cour approaches\\u201d to maintain professionalism and clarity.\", \"minor_suggestions\": \"\\u2022\\tIncluding citations for comparison methods in Table 1 would allow readers to trace back the origins and contexts of these models, lending credibility and clarity.\\n\\u2022\\tEnsure consistent use of terms, such as \\u201cGPT4o\\u201d or \\u201cGPT-4o,\\u201d for a more polished presentation.\", \"questions\": \"1.\\tLarge Language Models (LLMs) can exhibit unpredictable behavior, so showing examples of failure cases for the MultiMedia-Agent would add depth and transparency. Analyzing these cases could provide insight into potential improvements.\\n\\n2.\\tHas the success rate of the MultiMedia-Agent been quantified? Understanding the model\\u2019s reliability across different types of content generation would strengthen the case for its practical application and offer a valuable metric for future benchmarking. Did the authors notice any bias issues during content generation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a multimedia agent for content generation agent. It first proposes a data generation pipeline, a tool library and several evaluation metrics. A two-stage correlation plan curation method and a three-stage training pipeline are proposed according to the skill acquisition theory. The authors conduct experiments to compare its performance against GPT-4o.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The general idea of multimodal generation agent with tool using and planning is interesting.\\n2. The proposed method covers a wide range of tasks and tools.\", \"weaknesses\": \"This paper is poorly written. The experiments are also not convincing.\\n\\n1. The only compared baseline is GPT-4o, which is not specifically designed for most of the tasks. More baselines should be added, such as those in Table 1, even if they are not applicable for all of the tasks. It is also not clear how the GPT-4o baseline is implemented for other tasks like audio generation or video generation.\\n2. Samples of generation are not sufficient. The only provided demo is Figure 9, where the audio is indirectly presented as text descriptions. Demos for other tasks are not given.\\n3. The experiment improvements are trivial. Besides, there are only 10 queries for each of the tasks in the validation set. What are the confidence intervals? Are the results statistically significant?\\n4. The explanations of \\\"longer plans\\\" and \\\"fewer steps\\\" should not be concluded directly, but supported by additional experiments showing the average length of steps of each model.\\n5. What is the metric in Table 6? What are the meanings of the metrics in Table 7 and the two tables in the appendix? Key explanations are missing. Also, why are the tasks in Table 7 all text generation tasks? Shouldn't they be \\\"xx-V\\\"?\\n6. What are the details of the tools? Table 8 is not sufficient as entries like \\\"audio_to_text\\\", \\\"text_to_image\\\" are not detailed enough. For instance, what underlying models or algorithms are used? What are the input/output specifications and any key parameters?\\n7. What are the details of the metrics in Section 3.3.1? The current description is not enough for reproducibility.\\n8. Many typos and grammar mistakes throughout the paper.\", \"questions\": \"1. Why not use ImageBind for task formulation or evaluation?\\n2. What is the meaning of success rate? Does a failed plan mean it is not executable due to incorrect parameters, does not use the correct tools, or anything else?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"I am very motivated by this article because it indeed addresses a real issue. However, as with other studies in this research path, the validation of the experiments is very weak. I hope the authors can discuss this in the discussion period.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I believe the advantages of this type of article are self-evident and are enough to impact the industry. Therefore, compared to the advantages, I hope to discuss more about the missing parts.\", \"weaknesses\": [\"This article seems more like a prototype design rather than a complete paper, as it lacks many implementation and experimental details.\", \"I didn't see any examples, nor did I see any supplementary materials provided for demonstration (did I miss something?).\", \"How is the success rate validated? How is success defined?\", \"I understand A stands for audio, and V stands for video, but what does AV-V mean? What is the task? What is the goal? Does it require - - human involvement, as the paper mentions human alignment as a contribution?\", \"What are Plan1, Plan2, and Plan3? What are the differences?\", \"What do Agent1, Agent2, and Agent3 represent? What is their significance?\", \"What does Average steps mean? Is fewer better?\", \"What are the differences in the success rate between Tables 4 and 5?\", \"Each task seems to have different input/output formats. How are they validated separately?\", \"The images look very rudimentary, and some of the text is even unclear.\"], \"questions\": \"Refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduced a multi-agent large language model (LLM) framework based on the Skill Acquisition Theory that supports any-to-any styled generation, including text, image, audio, and image. Such framework was evaluated based on model-based preference evaluation metrics. As the result of evaluation, the framework's best version (i.e. with 3 stages included) was able to show comparative performance with GPT-4o while the overall success rate is lower. In summary, this study was able to propose a relatively good multi-agent LLM framework with multiple components, which showed comparative performance as GPT-4o in certain aspects based on the metrics that the paper claimed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The originality of this paper is worth noting. The idea of applying the Skill Acquisition Theory into the design of the framework is inspiring. Using information theory as a design guide when implementing multi-agent is a good thought other than just adding multiple iterations naively. I personally feel this is a really interesting idea and definitely would like to see more related work in future.\\n2. The paper's structure is very clear and easy to follow. The quality of the overall presentation is pretty good.\", \"weaknesses\": \"Unfortunately, I will have to vote for reject for this paper as it has some fundamental flaws in its evaluation.\\n\\n1. The evaluation metrics is not a solid one. Although the idea of this paper might looks theoretically beautiful, its experiments lack convincing support. For content generation, especially when LLM is involved, there has been tremendous excellent studies where multiple kinds of evaluation have been introduced. For example, when handling artistic or abstract content generation (e.g. music/audio/image), it would be hard to solely rely on LLM model evaluation as LLM model could have certain problems such as LLM hallucination and unfortunately these problems are still pending on being solved/studied. Therefore, subjective evaluation is currently still necessary to evaluate generated content especially for **content matching task**, such as AB test, ranking test, or rating test. This could easily and intuitively show the advantage of each model based on a large group of experts/users' feeling/rating. Several studies on recent top conferences have made remarkable examples on such kind of evaluation such as [1][2][3].\\n2. The comparative study is not solid enough. This paper only compare the framework with GPT-4o. To make this paper in a better shape for publishment, it will need to include more relevant model/framework for comparison. \\n\\n[1] Yue, Xiang, et al. \\\"Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n[2] Deng, Qixin, et al. \\\"ComposerX: Multi-Agent Symbolic Music Composition with LLMs.\\\" arXiv preprint arXiv:2404.18081 (2024).\\n[3] Guo, T., et al. \\\"Large Language Model based Multi-Agents: A Survey of Progress and Challenges.\\\" 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024). IJCAI; Cornell arxiv, 2024.\", \"questions\": \"As mentioned as above, although I like the idea of this paper and enjoy reading it, I will have to reject it. To make this paper in a better shape for publihsment, I would recommend as below.\\n\\n1. Have a more thorough study on relevant studies and try to include them in the comparison/evaluation section.\\n2. Improve evaluation metrics and include more convincing experimental results on the superiority of the framework.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2J18i8T0oI
Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures
[ "Junxuan Wang", "Xuyang Ge", "Wentao Shu", "Qiong Tang", "Yunhua Zhou", "Zhengfu He", "Xipeng Qiu" ]
The hypothesis of \textit{Universality} in interpretability suggests that different neural networks may converge to implement similar algorithms on similar tasks. In this work, we investigate two mainstream architectures for language modeling, namely Transformers and Mambas, to explore the extent of their mechanistic similarity. We propose to use Sparse Autoencoders (SAEs) to isolate interpretable features from these models and show that most features are similar in these two models. We also validate the correlation between feature similarity and~\univ. We then delve into the circuit-level analysis of Mamba models and find that the induction circuits in Mamba are structurally analogous to those in Transformers. We also identify a nuanced difference we call \emph{Off-by-One motif}: The information of one token is written into the SSM state in its next position. Whilst interaction between tokens in Transformers does not exhibit such trend.
[ "Mechanistic Interpretability", "Sparse Autoencoders", "Universality", "State Space Models" ]
Accept (Poster)
https://openreview.net/pdf?id=2J18i8T0oI
https://openreview.net/forum?id=2J18i8T0oI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zcxe8KGMMl", "zNAe5m9EKM", "xfsRvfe7f8", "wTndMKbs6x", "tXAgz26KGu", "tLUiqEbEIY", "t7jwCo6dTX", "qa0cwdoX6B", "pfaNev6sxV", "pVW5BQsE9b", "pVUKlWdvqq", "mzFcEOrDYN", "lzouXOlNuY", "lJnu0CioDF", "kfPyDzmE7q", "kJobmGiB3g", "jaZFz6e6fN", "jQ1GKS8jAi", "i8jGSizHIU", "dTrkP78VgY", "dOHT4yzZwM", "d3Sh4NQ0iB", "c9loTla2uW", "bdFa2J5ZLc", "WDVqjcl7JG", "MaoAl2YTEf", "MSmCDcwoEx", "MJeeEOWxK7", "KJ8eFupoqF", "IX3jgxrRi7", "GmXukLgXJc", "G5LCkBXzH3", "EYXn1Z2zER", "CX81Lf7K8v", "B1aWnEC2H9", "9vXdWuUzIh", "8jmS1MTOkp", "75W8rZ52id", "5FKDvWYbd6", "5D08DceP3N" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732528334352, 1732627121329, 1732386189270, 1732390876702, 1732391116339, 1730215507041, 1732551289870, 1732388072758, 1732387779869, 1732391232835, 1732388907310, 1732389228030, 1732391079149, 1734600723832, 1733051974319, 1732390637986, 1732384615296, 1733297676244, 1733051581808, 1732386308514, 1733175234021, 1730711714267, 1732385624549, 1732387231470, 1730991374772, 1732623431602, 1732388718327, 1732386873100, 1732387595542, 1732395707890, 1732395580992, 1730730403144, 1732388673063, 1733193576422, 1732385776355, 1732388967128, 1732385043743, 1732551059128, 1732380833789, 1737523646981 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4540/Reviewer_jsst" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Reviewer_jsst" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Area_Chair_74W2" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Reviewer_4GEc" ], [ "ICLR.cc/2025/Conference/Submission4540/Reviewer_eqTo" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Reviewer_4GEc" ], [ "ICLR.cc/2025/Conference/Submission4540/Reviewer_eqTo" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Reviewer_nuP5" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Submission4540/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Reply to Rebuttal\", \"comment\": \"Thank you for your reply and for running these further experiments, which indicate that the results you observed seem to hold for different model sizes and SAE architectures. Your comment regarding the novelty of your findings looks reasonable. However, I would argue that being able to transfer steering and vocabulary projection approaches across model architectures might already be a valid indicator of similar latent feature spaces. Still, I think your comments are generally valid and should be incorporated in the paper's discussion section (especially the part concerning the findings' applicability to model diffing).\\n\\nIn light of this, I am willing to slightly raise my score to reflect the improved generality of the findings. However, I remain skeptical about the statistical testing you performed. My main concerns are:\\n- Using a random SAE as a null hypothesis is a very low bar to claim feature similarity between the two models. Such tests may, at best, confirm that the similarity between latents is higher than random chance, which is unsurprising given that both models were trained on the same data. An evaluation comparing the similarity of features cross-layer vs. cross-architecture would have been more interesting in this regard (e.g., Is the similarity of features from upper layers of Mamba and upper layers of Pythia higher than between upper and lower layers in the same model?)\\n- I am confused about what the samples in your testing correspond to since I thought 0.681 was the averaged MPCC over a set of samples rather than the max SAE feature correlation for a single sample. Why wasn't the correlation measured over the full set of samples for which the experiment MPCC was computed? With n=1, you cannot estimate the variance of the experimental group, and since the t-test you are conducting is designed to account for uncertainty in both sample means, using the pooled standard deviation from the baseline group does not capture the variation around the mean for the experimental group.\"}", "{\"title\": \"Response to Reviewer eqTo Comments\", \"comment\": \"Thank you very much for your thoughtful comments and feedback. We truly appreciate your kind words about the usefulness and interest of our paper for the MI community. We are grateful for your time and effort in reviewing our work, and we are pleased to hear that you find it valuable.\\n\\nWe also appreciate your score of 8 and will continue to refine the paper during the discussion period based on the feedback we receive.\"}", "{\"title\": \"Dataset Choice (6 / 9)\", \"comment\": \">Why did you choose OpenWebText as your primary dataset for analysis? How might the choice of OpenWebText as the dataset influence your results? Have you tested if the feature similarities hold across different domains (e.g., code, mathematics, or structured data)? Would analyzing domain-specific text reveal different patterns of architectural universality?\\n\\nThanks for these insightful questions. The main reason we choose OWT as our primary dataset is that **OWT is a widely used comprehensive text corpus** in this field[1, 2, 3]. This is probably because of the popularity of GPT2-Small, whose training data WebText has its open-sourced version OWT.\\n\\nIt can be the case that older datasets of lower quality and duplication can result in overhigh correlation. To address this, **we additionally perform correlation analysis on two more datasets**, namely SlimPajama[4], a more recent cleaned and deduplicated text corpus for pretraining, and Github subset of RedPajama[5] for domain-specific analysis. The results are shown as follows:\\n\\n| Mean MPPC / Dataset | OpenWebText (Original) | SlimPajama | RedPajama-Github subset |\\n|---------------------------------|------------------------|------------|-------------------------|\\n| Main experiment | 0.74 | 0.681 | 0.745 |\\n| Skyline 1 (Model Seed Variant) | *0.76* | *0.725* | *0.782* |\\n| Skyline 2 (SAE Seed Variant) | **0.81** | **0.806** | **0.806** |\\n\\n\\nWe again appreciate this question as it makes us decide to demonstrate all of our experimental results with ones obtained on SlimPajama, rather than OWT, for its comprehensiveness, higher quality and containing OOD text data for both models we tested. This helps improve the robustness of our conclusion.\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.2 of the revised manuscript** for further clarity and elaboration.\\n\\n[1] [Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small](https://arxiv.org/abs/2211.00593)\\n\\n[2] [Transcoders Find Interpretable LLM Feature Circuits](https://arxiv.org/abs/2406.11944v1)\\n\\n[3] https://www.alignmentforum.org/posts/f9EgfLSurAiqRJySD/open-source-sparse-autoencoders-for-all-residual-stream\\n\\n[4] https://cerebras.ai/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama\\n\\n[5] https://www.together.ai/blog/redpajama\"}", "{\"title\": \"Novelty of the Findings (2 / 5)\", \"comment\": \">While to my knowledge, this is the first work evaluating the similarity of SAE features between Transformers and Mamba models, other works such as Paulo et al. 2024 and Sharma et al. 2024 already showed that many interpretability approaches such as logit lenses, steering vectors, and probes produce similar results across a variety of LLM architectures (Mamba, Transformer and RWKV). It is, hence, not particularly surprising that such findings extend to SAE decompositions of model activations.\\n\\nThanks for pointing this out. [Paulo et al. 2024](https://arxiv.org/pdf/2404.05971) is a highly relevant work which we did not cite in our submitted manuscript. We have included this in our new revision. Apologies for this.\\n\\nWe would like to point out that **although our work mainly falls on the line of transferrability of interpretability methods across LM architectures**, which has already been explored by existing literature, **our findings provide new insights into the following aspects**:\\n- The analysis of depth specialization of SAE features across models suggests **the existence of an architecture-agnostic feature hierarchy for language modeling**.\\n- The induction circuit similarity analysis, though being rather simple, serves as a supporting example of **architecture-agnostic circuit motif for language modeling**.\\n- **Our complexity viewpoint of MPPC raises a broad concern for cross-architecture model diffing**. For example, a recent promising model diffing method [crosscoder](https://transformer-circuits.pub/2024/crosscoders/index.html) is also prone to false positive of divergent features, where features activating on the same concept might be identified as different ones due to their respective preference for specific instances. This suggests the need for sanity checks of subsequent model diffing methods.\\n\\nWe would like to appreciate it again for pointing out a missing related work. And we are glad to have further discussion on the novelty of our findings.\"}", "{\"title\": \"Statistical Significance Tests (4 / 5)\", \"comment\": \">(Reviewer 4GEc) Have you performed any statistical significance tests to support your claims of feature similarity and universality?\\n\\n>(Reviewer jsst) In Section 4.4, no hypothesis is formulated regarding the expected similarity of features found across the four tested variants, and consequently, no significance measure for the correlation coefficients is reported.\\n\\nThis is an important part we missed in our submitted manuscript and we are very thankful for pointing this out. \\n\\nWe additionally establish a **random baseline** by calculating MPPC for each Pythia feature against a random SAE **with matching feature count and sparsity level** (by masking all but the TopK activating features) to the Mamba SAE to observe the impact of feature quantity and sparsity on MPPC distribution. \\n\\nWe denote mean MPPC as the random variable \\\\\\\\( x \\\\\\\\). To evaluate whether the result of our main experiment (\\\\\\\\( x = 0.681 \\\\\\\\)) belongs to a distribution with the same mean as the random baseline, we conducted a **Hypothesis Testing**. Specifically, we tested the null hypothesis (\\\\\\\\( H_0 \\\\\\\\)) that **the mean of the experimental group is equal to the mean of the random baseline group** (\\\\\\\\( \\\\mu_{\\\\text{experiment}} = \\\\mu_{\\\\text{baseline}} \\\\\\\\)) against the alternative hypothesis (\\\\\\\\( H_1 \\\\\\\\)) that the two means are different (\\\\\\\\( \\\\mu_{\\\\text{experiment}} \\\\neq \\\\mu_{\\\\text{baseline}} \\\\\\\\)).\\n\\nThe random baseline group consisted of 16 samples(repeated the baseline 16 times and calculate mean MPPC for each) with a mean of \\\\\\\\( 0.1944 \\\\\\\\) and a standard deviation of \\\\\\\\( 0.00046 \\\\\\\\). The experimental group consisted of a single sample with \\\\\\\\( x = 0.681 \\\\\\\\). Since the sample size of the experimental group is one, we applied a two-sample \\\\\\\\( t \\\\\\\\)-test using the pooled standard deviation from the baseline group. Yielding a \\\\\\\\( t \\\\\\\\)-value of \\\\\\\\( -1026.24 \\\\\\\\). The degrees of freedom are \\\\\\\\( n_{\\\\text{baseline}} + n_{\\\\text{experiment}} - 2 = 15 \\\\\\\\). Using a **one-tailed \\\\\\\\( t \\\\\\\\)-test**, the resulting \\\\\\\\( p \\\\\\\\)-value is:\\n\\n\\n$p = 4.54 \\\\times 10^{-38}$\\n\\nGiven that the \\\\\\\\( p \\\\\\\\)-value is significantly smaller than any conventional significance level (\\\\\\\\( \\\\alpha = 0.05 \\\\\\\\)), we **reject the null hypothesis**. This indicates that the experimental result (\\\\\\\\( x = 0.681 \\\\\\\\)) is highly unlikely to belong to a distribution with the same mean as the random baseline.\\n\\n**We have incorporated these updates in Sections 4.1 and 4.4 of the revised manuscript** for further clarity and elaboration.\"}", "{\"summary\": \"This work investigates the similarity of sparse autoencoders (SAE) features and induction circuits between a Transformer-based and a comparable Mamba-based language model trained on the same dataset. Results show that many features from Transformers SAEs show a high max pairwise Pearson correlation with Mamba SAE features, with their depth along model layers roughly matching across the two models. The correlations found for cross-architecture comparison are compared to a neuron baseline and two skylines obtained from different models and SAE training seeds, showing that the cross-architecture comparison falls only slightly short of skylines. Authors further examine the correlation between cross-architectural matching features and their complexity, finding that features with the most overlap are generally simpler and more monosemantic. Finally, the authors briefly investigate the similarity between induction circuits in the same architectures using path patching, finding a similar mechanism mediated by the convolutional operation of the Mamba architecture. Notably, the information mixing necessary for the induction operation is performed earlier in Mamba (\\\"off-by-one\\\" mixing).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The universality evaluation pursued in this paper is timely and relevant, given recent advances in non-Transformer architectures for language modeling. The baseline and skylines employed in this work are well-motivated and provide helpful reference points for the analysis. The analysis of feature correlation based on complexity is also interesting, showing convincing proof that most commonalities across architectures are found for simpler SAE features. Overall, the figures are designed clearly and compellingly to support the findings detailed in the main body of the paper.\", \"weaknesses\": \"**Novelty of the findings** While to my knowledge, this is the first work evaluating the similarity of SAE features between Transformers and Mamba models, other works such as [Paulo et al. 2024](https://arxiv.org/pdf/2404.05971) and [Sharma et al. 2024](https://arxiv.org/abs/2404.03646) already showed that many interpretability approaches such as logit lenses, steering vectors, and probes produce similar results across a variety of LLM architectures (Mamba, Transformer and RWKV). It is, hence, not particularly surprising that such findings extend to SAE decompositions of model activations.\\n\\n**Generality of findings for larger models** Authors experiment only with 2 tiny models with ~100M parameters each. This can be reasonable in light of the requirements (same training data, effort of training SAEs on each layer), but these systems are significantly smaller than those used by [Paulo et al. 2024](https://arxiv.org/pdf/2404.05971) for comparable cross-architecture interpretability experiments. Notably, larger checkpoints for both models used by the authors are publicly available, including the same training data for control, and could have been used to further prove the generality of the reported results. Importantly, without further experiments, it cannot be excluded that the limited capacity of tiny models might be the main motivation behind the high similarity features and circuits across the two architectures, and this could not be the case for more capable models with e.g. 1B or 7B parameters.\\n\\n**Multiple Comparison and Correlation Analysis without Hypothesis Testing** The maximal correlation of feature activation patterns with other (24576 x # of layers) features is bound to be quite high due to the enormous amounts of comparisons. In Section 4.4, no hypothesis is formulated regarding the expected similarity of features found across the four tested variants, and consequently, no significance measure for the correlation coefficients is reported. As a result, conclusions regarding the similarity of Mamba and Pythia SAE features are ambiguous (e.g. the statement \\\"[...] our neuron baseline almost exhibits zero sign of similarity between Mamba and Pythia\\\" at line 268 does not agree with Figure 3a, where at least 15% of neurons exhibit a correlation > 0.4). To make the analysis more convincing, a clear hypothesis regarding the degree of similarity in resulting SAE features should have been formulated and tested for baseline, experiment, and skylines, each including a correction procedure such as the Bonferroni method to account for multiple comparisons.\\n\\n**Minor formatting/clarification points:**\", \"line_135\": \"The mention \\\"$F_a$ and $F_b$ are some kinds of operation function.\\\" is too generic in this context. The purpose of these functions should be specified, and at least one example of functions used for this purpose should be provided.\", \"line_179\": \"Broken reference.\\n\\nFigure 1 is too tight with the inline text, making the distinction between caption and main body text unclear.\\n\\nSection 4.1: title is ungrammatical. I imagine you meant something like \\\"Searching for / In search of Interpretable Primitives\\\".\", \"line_202\": \"Clarify that you mean all features for all SAEs across all model layers (it becomes clear only from Figure 3c later in the paper)\", \"line_263\": \"The acronym MPPC is never introduced alongside its meaning.\", \"figure_5\": \"The mention \\\"both Model Seed Variant and Cross-Arch SAE MPPC exhibit correlation, while one in SAE Seed Variant is weaker\\\" in the caption is not very meaningful, since the trends for all three variants are pretty similar. For part (b), the mention \\\"scores ranging from 1 (No) to 2 (Yes)\\\" is confusing: it would be better to say \\\"Distribution of MPCC for polysemantic (1) and monosemantic (2) auto-generated feature labels.\\\"\", \"questions\": \"What justified the choice of plain SAEs over more performant variants such as [Gated](https://arxiv.org/abs/2404.16014) or [Top-K SAEs](https://openai.com/index/extracting-concepts-from-gpt-4/)? It is currently hard to gauge the impact of this choice on the obtained results, and whether findings could have been different if improved SAE variants were tested.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer jsst Comments (2 / 2)\", \"comment\": \">I am confused about what the samples in your testing correspond to since I thought 0.681 was the averaged MPCC over a set of samples rather than the max SAE feature correlation for a single sample. Why wasn't the correlation measured over the full set of samples for which the experiment MPCC was computed? With n=1, you cannot estimate the variance of the experimental group, and since the t-test you are conducting is designed to account for uncertainty in both sample means, using the pooled standard deviation from the baseline group does not capture the variation around the mean for the experimental group.\\n\\nThank you very much for raising these important concerns.\\n\\nWith regard to your question on \\\"Why wasn't the correlation measured over the full set of samples\\\", we realize that **our explanation may not have been sufficiently clear**. Specifically, when we stated that the \\\"random baseline group consisted of 16 samples\\\", we intended to convey that **we repeated the baseline 16 times and calculated the mean MPPC for each repetition**. To address this, we have revised our response above to ensure a more detailed and precise explanation.\\n\\nConcerning your observation regarding the variance of the experimental group, **we acknowledge that we made the simplifying assumption that the variance of the experimental group is the same as that of the baseline group**. While we recognize that this assumption may not hold perfectly, we believe **it has minimal impact on the overall conclusion**. This is because the mean of the experimental group is significantly larger than that of the baseline group, which allows us to conclude that the means of the two distributions are distinct, even under the possibility of slightly larger variances.\\n\\nWe sincerely regret any confusion caused by the lack of clarity in our initial explanation and appreciate your thorough review and constructive feedback. Your meticulous and rigorous approach has been invaluable, and we hold the utmost respect for your careful evaluation of our work.\"}", "{\"title\": \"Empirically Supporting the \\\"Universal Induction Algorithm\\\" Claim. (3 / 6)\", \"comment\": \">The claims made in Section 6.2 need to be empirically supported.\\n\\nThanks for this suggestion. We are not sure we currently understand your suggestion correctly. We take it as \\\"designing extra experiments to show that induction circuits are similar\\\".\\n\\nIt is indeed necessary to quantify or validate cross-architectural circuit similarity with counterfactual experiments to further strengthen our claim that Mamba and Transformer implement the same induction algorithm. For instance, it may be a statistical method to reveal that model weights connecting similar features also tend to be similar. Or one can ablate matched pairs of features from both models and see whether downstream performance drops in the same trend. However, due to time and compute constraint, we are not currently able to conduct such experiments. Apologies for this. Nonetheless, since induction circuits have been widely studied for Transformers[1, 2], **we think that the \\\"previous token heads-local convolution\\\" and \\\"Induction heads-Layer 17 SSM State\\\" analogies are strongly backed up this claim**.\\n\\n[1][A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html)\\n\\n[2][In-context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)\"}", "{\"title\": \"Clarifying the Role of Circuit Analysis (2 / 6)\", \"comment\": \">The role of circuit analysis experiments is unclear.\\n\\n>They are indeed very interesting but I\\u2019m not sure how they contribute to building a mechanistic analogy between SSMs and Transformers.\\n\\nThanks for this question. Our thoughts on this question are mostly influenced by a lot of existing discussion of features and circuits in the field of Mechanistic Interpretability[1, 2]. **We think that circuit analysis is investigating how features are connected from a more macroscopic viewpoint**. If we have found that interpretable features in different models are universal, a natural research question to ask next is whether the weights connect them in the same way.\\n\\n**We conjecture the main reason we left you confused is that our circuit analysis seems not correlated to the SAE features** investigated in Section 4 and 5. We are actually studying individual mamba blocks, SSM states or attention heads rather than what we claim to be \\\"how the features are connected\\\".\\n\\nIf this is the case, we think that SAE feature circuits, despite some existing literature investigating this problem[1, 2, 3], is still not what we think a mature method to help support our claims due to its time and computation complexity. In addition, **we expect findings of SAE feature circuits to be a finer-grained extension of head-level or block-level circuit analysis**, which may not contradict with our findings.\\n\\n[1] [Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT](https://arxiv.org/abs/2402.12201)\\n\\n[2] [Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models](https://arxiv.org/abs/2403.19647v1)\\n\\n[3] [Transcoders Find Interpretable LLM Feature Circuits](https://arxiv.org/abs/2406.11944v1)\"}", "{\"title\": \"Choice of SAE Variants (5 / 5)\", \"comment\": \">What justified the choice of plain SAEs over more performant variants such as Gated or Top-K SAEs? It is currently hard to gauge the impact of this choice on the obtained results, and whether findings could have been different if improved SAE variants were tested.\\n\\nThanks for this valuable question. Vanilla SAEs are indeed less performant and somewhat outdated in a retrospective view. We conduct the main experiment and both skylines with TopK SAEs and get the following results:\\n\\n| Mean MPPC / SAE Variant | Vanilla (Original) | TopK |\\n|---------------------------------|--------------------|--------|\\n| Main experiment | 0.681 | 0.643 |\\n| Skyline 1 (Model Seed Variant) | *0.725* | *0.680* |\\n| Skyline 2 (SAE Seed Variant) | **0.806** | **0.726** |\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.3 of the revised manuscript** for further clarity and elaboration.\"}", "{\"title\": \"Generalization to Larger Models (6 / 6)\", \"comment\": \">(Reviewer 4GEc) How do you expect your findings to scale to larger models? Did you observe whether model size impacts universality between architectures? Could smaller or larger versions of Transformers and Mambas exhibit different degrees of feature similarity?\\n\\n>(Reviewer nuP5) (Minor) The size of LMs is limited. Only ~100m models are used in the experiments.\\n\\n>(Reviewer jsst) Without further experiments, it cannot be excluded that the limited capacity of tiny models might be the main motivation behind the high similarity features and circuits across the two architectures, and this could not be the case for more capable models with e.g. 1B or 7B parameters.\\n\\nThanks for pointing this out, which greatly helps improve our work.\\n\\nWe additionally conduct the experiment for **2.8B variants of both models**, giving us the following results:\\n| Mean MPPC / Model Size | 130M (original) | 2.8B |\\n|------------------------------|-----------------|-------|\\n| Main experiment | 0.681 | 0.792 |\\n| Skyline 1 (Model Seed Variant) | *0.725* | *0.847* |\\n| Skyline 2 (SAE Seed Variant) | **0.806** | **0.878** |\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.4** of the revised manuscript for further clarity and elaboration.\"}", "{\"title\": \"Exploring More Circuits with SAEs (2 / 4)\", \"comment\": \">It would be interesting to explore more circuits through SAE, as suggested in [1] (see Section 4.3).\\n\\nThanks for this suggestion. We think that SAE feature circuits, despite some existing literature investigating this problem[1, 2, 3], is still not what we think a mature method to help support our claims due to its time and computation complexity. In addition, **we expect findings of SAE feature circuits to be a finer-grained extension of head-level or block-level circuit analysis**, which may not contradict with our findings.\\n\\n[1] [Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT](https://arxiv.org/abs/2402.12201)\\n\\n[2] [Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models](https://arxiv.org/abs/2403.19647v1)\\n\\n[3] [Transcoders Find Interpretable LLM Feature Circuits](https://arxiv.org/abs/2406.11944v1)\"}", "{\"title\": \"Generalization to Larger Models (3 / 5)\", \"comment\": \">(Reviewer 4GEc) How do you expect your findings to scale to larger models? Did you observe whether model size impacts universality between architectures? Could smaller or larger versions of Transformers and Mambas exhibit different degrees of feature similarity?\\n\\n>(Reviewer nuP5) (Minor) The size of LMs is limited. Only ~100m models are used in the experiments.\\n\\n>(Reviewer jsst) Without further experiments, it cannot be excluded that the limited capacity of tiny models might be the main motivation behind the high similarity features and circuits across the two architectures, and this could not be the case for more capable models with e.g. 1B or 7B parameters.\\n\\nThanks for pointing this out, which greatly helps improve our work.\\n\\nWe additionally conduct the experiment for **2.8B variants of both models**, giving us the following results:\\n| Mean MPPC / Model Size | 130M (original) | 2.8B |\\n|------------------------------|-----------------|-------|\\n| Main experiment | 0.681 | 0.792 |\\n| Skyline 1 (Model Training Variant) | *0.725* | *0.847* |\\n| Skyline 2 (SAE Seed Variant) | **0.806** | **0.878** |\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.4** of the revised manuscript for further clarity and elaboration.\"}", "{\"metareview\": \"The paper investigates the universality hypothesis, showing that Transformers and Mambas, despite architectural differences, learn similar features for language modeling tasks. Using Sparse Autoencoders (SAEs), the authors demonstrate cross-architecture feature similarity and identify a novel \\u201cOff-by-One motif\\u201d in Mamba models, providing new insights into their induction circuits. The study combines feature-level analysis and circuit-level comparisons to support claims of mechanistic universality.\\n\\nThe strengths of the paper lie in its novel application of SAEs to compare architectures, robust empirical results validated by statistical significance tests, and the depth of analysis that identifies meaningful similarities and differences. The primary weaknesses include limited experiments on larger models and questions about the robustness of the results. Additionally, reviewers raised concerns about potential biases introduced by SAE pre-processing and the generalization of findings beyond language modeling. Most of these issues were addressed during the rebuttal period: the authors provided results for larger models (2.8B parameters), performed statistical tests, and demonstrated that the SAE method did not artificially impose alignment. While broader generalization and deeper circuit validation remain future directions, the authors' responses sufficiently strengthen their claims.\\n\\nOverall the paper makes a significant and timely contribution to neural network interpretability. Its findings are well-supported, and the concerns raised have been addressed to a reasonable extent. I recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, several important points were raised by the reviewers, and the authors addressed these through additional experiments and clarifications.\\n\\nReviewer 4GEc raised concerns about the scalability of the study, noting that the experiments were conducted on small models (~100M parameters). In response, the authors conducted additional experiments on larger models (2.8B parameters), which confirmed the robustness of their findings. This addressed the concern effectively, and the reviewer updated their score accordingly.\\n\\nReviewer nuP5 questioned the role of the circuit-level analysis and its contribution to building a mechanistic analogy between Transformers and Mambas, particularly regarding layer-specific behavior and inductive ability. They also asked whether the claims in Section 6.2 could be empirically supported. The authors acknowledged that while the circuit analysis was an evolving area, further experiments to validate the claims were beyond the scope of the work. They suggested these aspects could be explored in future research. This issue was noted, but it did not detract significantly from the paper's merit, given the other valuable contributions.\\n\\nReviewer 4GEc also raised the concern about the lack of statistical significance testing to support claims of feature similarity and universality. The authors responded by performing a two-sample t-test, showing a very low p-value and confirming that the observed similarities were statistically significant. This effectively addressed the concern and bolstered the credibility of their findings.\\n\\nReviewer nuP5 raised the possibility that the SAE pre-processing might have induced alignment between features in different models, potentially creating an \\\"interpretability illusion.\\\" The authors responded by demonstrating that the similarity remained even when the sparsity constraint was adjusted. They argued that the findings were not driven by SAE-specific effects, which helped mitigate concerns about methodological biases.\\n\\nFinally, Reviewer 4GEc asked about the generalization of the findings beyond language modeling to other tasks. While the authors did not extend their analysis to other domains, they referenced other work showing that their method could generalize well to tasks like vision and protein modeling.\\n\\nIn weighing these points for the final decision, I found the authors\\u2019 responses to be thorough and satisfactory. The additional experiments, particularly regarding model scalability and statistical significance, strengthened the paper significantly. While some concerns about the circuit analysis and SAE effects remain, these were acknowledged and set for future exploration, without undermining the overall contribution of the paper.\"}", "{\"title\": \"Follow-up on Feedback and Rating for Submission 4540\", \"comment\": \"Dear Reviewer nuP5,\\n\\nWe hope this email finds you well. We are reaching out regarding our ICLR 2025 submission (Submission 4540). With the discussion period coming to a close, We wanted to follow up and see if there are any additional questions or concerns about our paper or the rebuttal we provided earlier that we could help clarify.\\n\\nYour detailed feedback has been Insightful, and we have put significant effort into addressing the points you raised. If there are any remaining aspects where further clarification might strengthen your understanding of our work, please let us know\\u2014we would be happy to provide more information.\\n\\nFurthermore, if you feel our responses have addressed your concerns effectively, we would greatly appreciate it if you might consider revisiting your initial rating of our submission. Your expert evaluation plays a crucial role in shaping the final outcome, and we sincerely appreciate your time and efforts throughout this review process.\\n\\nThank you again for your dedication to improving the quality of submissions. Please feel free to let us know if there is anything else we can assist with.\\n\\nBest regards,\\n\\nAuthors of Submission 4540\"}", "{\"title\": \"Response to Reviewer jsst (1 / 5)\", \"comment\": [\"We sincerely appreciate your recognition of our work, with highlights in motivation, interest, soundness and clarity. We would also like to acknowledge the thorough and constructive feedback and questions provided which help strengthen our work, which we summarize as follows:\", \"Novelty of the findings;\", \"Generality of findings for larger models;\", \"Statistical significance tests;\", \"Choice of SAE variants.\", \"All of these aspects are helpful and insightful. In addition, thanks for pointing out the typos and clarification points in our submitted manuscript, which we have all fixed in our updated version.\"]}", "{\"title\": \"Statistical Significance Tests (2 / 9)\", \"comment\": \">(Reviewer 4GEc) Have you performed any statistical significance tests to support your claims of feature similarity and universality?\\n\\n>(Reviewer jsst) In Section 4.4, no hypothesis is formulated regarding the expected similarity of features found across the four tested variants, and consequently, no significance measure for the correlation coefficients is reported.\\n\\nThis is an important part we missed in our submitted manuscript and we are very thankful for pointing this out. \\n\\nWe additionally establish a **random baseline** by calculating MPPC for each Pythia feature against a random SAE **with matching feature count and sparsity level** (by masking all but the TopK activating features) to the Mamba SAE to observe the impact of feature quantity and sparsity on MPPC distribution. \\n\\nWe denote mean MPPC as the random variable \\\\\\\\( x \\\\\\\\). To evaluate whether the result of our main experiment (\\\\\\\\( x = 0.681 \\\\\\\\)) belongs to a distribution with the same mean as the random baseline, we conducted a **Hypothesis Testing**. Specifically, we tested the null hypothesis (\\\\\\\\( H_0 \\\\\\\\)) that **the mean of the experimental group is equal to the mean of the random baseline group** (\\\\\\\\( \\\\mu_{\\\\text{experiment}} = \\\\mu_{\\\\text{baseline}} \\\\\\\\)) against the alternative hypothesis (\\\\\\\\( H_1 \\\\\\\\)) that the two means are different (\\\\\\\\( \\\\mu_{\\\\text{experiment}} \\\\neq \\\\mu_{\\\\text{baseline}} \\\\\\\\)).\\n\\nThe random baseline group consisted of 16 samples (repeated the baseline 16 times and calculate mean MPPC for each) with a mean of \\\\\\\\( 0.1944 \\\\\\\\) and a standard deviation of \\\\\\\\( 0.00046 \\\\\\\\). We made the simplifying assumption that the variance of the experimental group is the same as that of the baseline group. The experimental group consisted of a single sample with \\\\\\\\( x = 0.681 \\\\\\\\). Since the sample size of the experimental group is one, we applied a two-sample \\\\\\\\( t \\\\\\\\)-test using the pooled standard deviation from the baseline group. Yielding a \\\\\\\\( t \\\\\\\\)-value of \\\\\\\\( -1026.24 \\\\\\\\). The degrees of freedom are \\\\\\\\( n_{\\\\text{baseline}} + n_{\\\\text{experiment}} - 2 = 15 \\\\\\\\). Using a **one-tailed \\\\\\\\( t \\\\\\\\)-test**, the resulting \\\\\\\\( p \\\\\\\\)-value is:\\n\\n$p = 4.54 \\\\times 10^{-38}$\\n\\nGiven that the \\\\\\\\( p \\\\\\\\)-value is significantly smaller than any conventional significance level (\\\\\\\\( \\\\alpha = 0.05 \\\\\\\\)), we **reject the null hypothesis**. This indicates that the experimental result (\\\\\\\\( x = 0.681 \\\\\\\\)) is highly unlikely to belong to a distribution with the same mean as the random baseline.\\n\\n**We have incorporated these updates in Sections 4.1 and 4.4 of the revised manuscript** for further clarity and elaboration.\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely thank all reviewers for their thoughtful feedback and constructive suggestions, which have greatly helped us refine this work. Below, we summarize and address the key points raised:\\n\\n- We are grateful for the recognition from all four reviewers for the **significance** (reviewers 4GEc, nuP5, eqTo, and jsst), **soundness** (reviewers 4GEc, nuP5, eqTo, and jsst), and **novelty** (reviewers 4GEc, nuP5, and eqTo) of our work.\\n- Specifically, we appreciate the recognition of our **complexity-based perspective on feature similarity** as both innovative (reviewer 4GEc) and interesting (reviewer jsst).\\n- The **skylines introduced for comparison** were also highlighted as well-motivated and supportive (reviewers jsst and eqTo).\\n- Additionally, our **induction circuit universality analysis** was acknowledged as adding depth to the study (reviewer 4GEc).\\n\\nWe are particularly thankful for the reviewers\\u2019 insightful suggestions, which have significantly enhanced this work. Below, we address four common concerns:\\n\\n1. **Generalization to larger models** (reviewers 4GEc, nuP5, and jsst): \\n Our initial experiments were conducted on ~100M parameter models, and we acknowledge concerns about scaling to modern, larger models. To address this, we conducted our main experiments along with corresponding baselines and skylines on 2.8B versions of both models. These results align with the trends reported in the original manuscript, demonstrating the robustness of our findings at larger scales.\\n\\n2. **Statistical significance for robust comparison** (reviewers 4GEc and jsst): \\n To ensure the reliability of our results, we conducted statistical significance tests, confirming a probability of less than $5 \\\\times 10^{-38}$ for random noise influence. Additionally, cross-architecture comparisons on complex features revealed greater similarities than cross-layer comparisons, highlighting the robustness and meaningfulness of our analysis.\\n\\n3. **Ablation studies on SAE training settings** (reviewers 4GEc and jsst): \\n Reviewers expressed concerns about the robustness of our findings to variations in SAE training hyperparameters and architectures. We expanded our ablation studies to include sparsity coefficients, SAE size, and TopK SAEs. These additional experiments confirm the robustness of our conclusions across a broader range of training settings.\\n\\n4. **Further discussions** (reviewers 4GEc, eqTo, and jsst): \\n During the discussion period, several inspiring ideas emerged that we plan to incorporate into future versions of this work. These include: \\n - **Implications of complexity-based MPPC analysis** for broader model comparison methods (from discussions with reviewer jsst). \\n - **Further details on the Mamba Off-by-One motif** (suggested by reviewer 4GEc). \\n - **Highlighting an interesting exception** in the final-layer representation observed in our depth-specialization analysis (suggested by reviewer eqTo). \\n\\nThese insights add significant depth to our work, and we thank the reviewers for their valuable contributions. We also deeply value additional feedback not explicitly mentioned here, which we address in the individual discussion sections. We are committed to incorporating these suggestions to improve both the depth and clarity of our work.\"}", "{\"title\": \"Follow-up on Feedback and Rating for Submission 4540\", \"comment\": \"Dear Reviewer 4GEc,\\n\\nWe hope this email finds you well. We are writing to follow up regarding our submission (Submission Number: 4540). We noticed that the discussion period is nearing its end, and We wanted to kindly check if you had any further questions or concerns regarding our paper or the rebuttal we provided earlier.\\n\\nWe deeply value your feedback and have aimed to address your comments comprehensively in our rebuttal. If there are any remaining points that we could clarify or elaborate on, please do not hesitate to let us know.\\n\\nAdditionally, if you believe our responses sufficiently addressed your concerns, we kindly ask if you would consider revisiting your rating for our submission. Your assessment is incredibly important to us, and we appreciate the time and effort you have dedicated to reviewing our work.\\n\\nThank you once again for your thoughtful review and contributions to improving our paper. Please feel free to reach out with any further questions or comments.\\n\\nBest regards,\\n\\nAuthors of Submission 4540\"}", "{\"title\": \"Further Exploring Off-by-one motif (7 / 9)\", \"comment\": \">Can you provide more details on why the \\\"Off-by-One\\\" motif exists in Mamba models?\\n\\nThis is an important question to understand circuits in Mamba models. However, our conjecture about this phenomenon is not rigorously tested and we leave this for future work focusing on Mamba circuits. Our current hypothesis on this problem is that Mamba performs \\\"Off-by-One\\\" to **implicitly revert the write-read order of its SSM States**.\\n\\n**Write-and-Read Nature of SSMs**:\", \"write\": \"$h_{i}^{(l)} = F_a(c_{i}^{(l)}) \\\\circ h_{i-1}^{(l)} + F_b(c_{i}^{(l)})$.\", \"read\": \"$s_i^{(l)} = h_i^{(l)} (W_c^{(l)} * c_i^{(l)}) + W_d^{(l)} \\\\circ c_i^{(l)}$.\\nConcretely, an SSM block has three data-dependent transformations A, B, C and a shortcut D. **It first write its input at token $i$** $c_i^{(l)}$ to the state space multiplied by B ($F_b(c_{i}^{(l)})$) and add this to its past state with a transformation $F_a(c_{i}^{(l)}) \\\\circ h_{i-1}^{(l)}$. **It then reads** from this state space with matrix C plus a shortcut, where the input $c_i^{(l)}$ is directly transformed by D without interacting with the SSM state.\\n\\n**The gate branch $g$ might prevent the need to write into SSM states in-time**.\\nThere are two ways for a Mamba block input, at step $i$ at layer $l$, $x_i^l$ to contribute to the block output: via the gate branch $g_i^l$ and via the conv-SSM branch. If the local convolution only merges information from the past timesteps rather than the current time step $i$, this means **the conv-SSM branch are blocked for input $x_i^l$**. However, $x_i^l$ can still affect subsequent computation via the gate. And it is not written into the SSM state until the next timestep $i+1$ due to Off-by-One. \\n\\n**This implicitly reverts the write-read order of its SSM States for easier token-mixing**.\\nIn this case where the conv-SSM branch are blocked for the current timestep $i$, the SSM actually **first reads from information in the past, and write information in $x_i^l$ in the next step**. We conjecture this is beneficial in reducing one term in the state space so that it can more effectively retrieve past information.\\n\\nSince there is relatively less work on Mamba interpretability compared to Transformers, we are open to the possibility that we missed something or made improper assumptions here. We do not dive deeper into this problem due to time and compute limitation and its being slightly beyond the scope of this paper.\"}", "{\"comment\": \"Thanks for the new experiments and replies to my concerns. I have updated my score. Good luck!\"}", "{\"summary\": \"This paper presents an exploration of the Transformer and Mamba models through mechanistic interpretability methodology. Despite the architectures being very different, the features and circuits of these models turn out to be very similar.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is clearly written.\", \"I appreciate the idea of using skylines, as it helps support the authors' claims.\", \"The results are interesting and useful for further research.\"], \"weaknesses\": [\"I couldn't identify any specific weaknesses. However, below are some suggestions that could enhance this work from my perspective:\", \"It would be interesting to explore more circuits through SAE, as suggested in [1] (see Section 4.3). However, it is unclear where SAE should be placed within the Mamba architecture to achieve similar features.\", \"While the Pearson correlation appears to be a natural choice for measuring feature similarity, it assumes that the feature space has linear properties. It might be worthwhile to explore other correlation measures, such as distance correlation, which could potentially yield better results.\", \"A clear statement clarifying that MPPC refers to the maximum Pearson correlation between models' features is needed to improve understanding.\", \"[1] Interpreting Attention Layer Outputs with Sparse Autoencoders (Kissane et al.)\"], \"questions\": [\"While the heatmap matrix in Figure 3c is mainly diagonal, I can see that there is a cluster of features located in the last layer of the Pythia and distributed fairly uniformly in the middle layers of Mamba. Can the authors clarify the meanings of these features?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Ablating SAE Training Hyperparameters / Settings (4 / 9)\", \"comment\": \">The paper does not include ablation studies on SAE hyperparameters (dictionary/code size, training duration, etc.). There is no discussion of how SAE reconstruction quality relates to feature similarity.\\n\\nWe appreciate your constructive feedback. We have included more ablations studies on dictionary size, L1 coefficient and SAE architecture (TopK variant). The results are as follows:\\n\\n**L1 coefficient**:\\n| Mean MPPC / L1 | 1e-4 | 2e-4 (Original) | 4e-4 |\\n|---------------------------------|------------------|-------------------|------------------|\\n| Main experiment | 0.673 | 0.681 | 0.720 |\\n| Skyline 1 (Model Seed Variant) | *0.717* | *0.725* | *0.766* |\\n| Skyline 2 (SAE Seed Variant) | **0.800** | **0.806** | **0.833** |\\n\\n**Dictionary size (Expansion factor * model hidden size D)**:\\n| Mean MPPC / Dictionary Size F | 32 * 768 (Original) | 64 * 768 |\\n|---------------------------------|---------------------|----------|\\n| Main experiment | 0.681 | 0.701 |\\n| Skyline 1 (Model Seed Variant) | *0.725* | *0.733* |\\n| Skyline 2 (SAE Seed Variant) | **0.806** | **0.795** |\\n\\nWe do not include results with respect to training duration because our SAEs quickly converge after 30 minutes of training on an H100 GPU. Due to time and computation constraint we do not further ablate on this setting.\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.1 of the revised manuscript** for further clarity and elaboration.\"}", "{\"title\": \"Could the Sparsity Constraint Inadvertently Enhance Apparent Similarity? (9 / 9)\", \"comment\": \">Is there a risk that the Sparse Autoencoder pre-processing itself may impose a degree of alignment between features in Transformers and Mambas? Could the sparsity constraint inadvertently enhance apparent similarity?\\n\\nWe appreciate your insightful question. This is indeed a reasonable and inspiring concern.\", \"there_are_two_possibilities_we_can_think_of_that_this_interpretability_illusion_holds\": \"(1) **Illusion in MPPC analysis**: sparsity constraint causes the activation pattern of all features to be sparse, i.e., activating only on 1% of all tokens, leading to an increase in Max Pairwise Pearson Correlation. This turns out to be the case in the random baseline we established in Response 2/9, compared to one without being set to the same the sparsity level as a normal SAE. Nonetheless, it accounts for only a small portion of Pythia-Mamba feature similarity, as indicated by the statistical significance test. (2) **Illusion in SAE Training**, which we discuss as follows:\\n\\nWe think there is possibility that different model architectures actually learn different sets of features but SAEs inadvertently aligns them. **If SAEs learn commonly-composed features, it may be the case that we are being over-confident about feature universality**. It has already been suggested by [1] that SAEs may learn compositions of the \\\"true\\\" underlying features in a toy model due to the sparsity constraint. For instance, if two models encode the same concept in different ways (e.g. one identifies dogs with color and another identifies with breeds) but our SAEs are too small to capture such difference and only learn a \\\"universal dog feature\\\", we are then fooled and draw the wrong conclusion.\\n\\nThere are two reasons why we are not quite concerned with this possibility. (1) **Holding all else equal, larger SAEs exhibit higher MPPC values (Response 4/9)**. This serves as negative evidence against the hypothesis above since if features of larger SAEs 'split' into different sub-features, there should be a drop in MPPC. We are also open to the possibility that we have not scale our SAEs enough yet. (2) Even if there exists interpretability illusion for some reason we currently are not aware of, our findings reliably suggest **there at least exists a universal, sparse and interpretable, though lossy, decomposition of the models' hidden activations**. It is an exciting topic to discover hard-to-notice divergence hiding in part of the activation space uncaptured by SAEs, probably with better SAE training techniques[2] or some well-designed probes.\\n\\n**We have incorporated the baseline updates in Sections 4.1 and 4.4 of the revised manuscript** for further clarity and elaboration.\\n\\n[1] https://www.lesswrong.com/posts/a5wwqza2cY3W7L9cj/sparse-autoencoders-find-composed-features-in-small-toy\\n\\n[2] [Sparse Crosscoders for Cross-Layer Features and Model Diffing](https://transformer-circuits.pub/2024/crosscoders/index.html)\"}", "{\"summary\": [\"The paper investigates the \\\"universality hypothesis\\\" in mechanistic interpretability, which suggests that different neural network architectures may converge to implement similar algorithms when tasked with analogous objectives. The authors focus on two mainstream architectures for language modeling: Transformers and Mambas. They propose using Sparse Autoencoders (SAEs) to extract interpretable features from these models and demonstrate that a significant portion of features are shared between the two architectures. The paper validates the correlation between feature similarity and universality and delves into the circuit-level analysis of Mamba models, finding structural analogies with Transformers, particularly in induction circuits.\", \"The paper's contributions include:\", \"Introduction of a novel metric to isolate and quantify feature universality in the context of architectural variations.\", \"Empirical evidence shows that Transformer and Mamba models learn similar features through the application of SAEs.\", \"Circuit analysis of Mamba models reveals structural analogies and nuanced differences compared to Transformer circuits.\", \"Support for the universality hypothesis by demonstrating cross-architecture feature similarity and identifying the \\\"Off-by-One motif\\\" in Mamba models.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The identification of the \\\"Off-by-One motif\\\" in Mamba models is a unique contribution that highlights nuanced differences between architectures.\", \"The introduction of a complexity-based interpretation for understanding feature similarity differences is innovative.\", \"The circuit-level analysis of Mamba models, revealing structural analogies with Transformers, is good and adds depth to the study. The validation of feature similarity and its correlation with universality further strengthens the study.\", \"The findings of this paper have implications for the field of neural network interpretability. By demonstrating that different architectures can converge to similar algorithms and features, the study provides valuable insights into the generalizability of mechanistic findings across models.\"], \"weaknesses\": [\"While the paper focuses on Transformers and Mambas, it would benefit from a broader examination of additional architectures. Including a more diverse set of models, such as recurrent neural networks (RNNs) or convolutional neural networks (CNNs), could strengthen the universality hypothesis by offering a more comprehensive understanding of feature similarity across a wider range of neural networks. This would enhance the generalizability of the findings.\", \"The paper utilizes OpenWebText for correlation analysis but does not discuss how the choice of dataset might affect the results. A more detailed examination of the potential biases and limitations introduced by the dataset choice would provide a clearer context for the findings and ensure that the results are not overly dependent on a specific dataset.\", \"The claims of feature similarity and universality would be more robust if supported by statistical significance tests. Including such tests would provide stronger evidence for the observed correlations and enhance the credibility of the conclusions.\", \"SAE-related technical gaps:\", \"The paper does not include ablation studies on SAE hyperparameters (dictionary/code size, training duration, etc.). Conducting these studies would help to understand the sensitivity of the results to different hyperparameter settings and ensure the robustness of the findings.\", \"There is no discussion of how SAE reconstruction quality relates to feature similarity. Addressing this relationship would provide insights into the effectiveness of SAEs in isolating interpretable features and validate the methodology used.\", \"The use of GPT-4 for complexity scoring lacks rigorous validation. The paper does not provide inter-rater reliability metrics or comparisons with human annotations, nor does it discuss potential biases in the automated scoring.\", \"The paper provides a limited exploration of why the \\\"Off-by-One\\\" motif exists in Mamba models. A deeper investigation into the underlying reasons for this motif would enhance the understanding of the structural differences between the Mamba and Transformer models and provide more insights into the universality hypothesis.\"], \"questions\": \"1. Why did you choose OpenWebText as your primary dataset for analysis? How might the choice of OpenWebText as the dataset influence your results? Have you tested if the feature similarities hold across different domains (e.g., code, mathematics, or structured data)? Would analyzing domain-specific text reveal different patterns of architectural universality?\\n\\n2. Have you performed any statistical significance tests to support your claims of feature similarity and universality?\\n\\n3. How generalizable are your findings to other tasks beyond language modeling?\\n\\n4. Can you provide more details on why the \\\"Off-by-One\\\" motif exists in Mamba models?\\n\\n5. Is there a risk that the Sparse Autoencoder pre-processing itself may impose a degree of alignment between features in Transformers and Mambas? Could the sparsity constraint inadvertently enhance apparent similarity?\\n\\n6. How do you expect your findings to scale to larger models? Did you observe whether model size impacts universality between architectures? Could smaller or larger versions of Transformers and Mambas exhibit different degrees of feature similarity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your responses. My score is already 8, and I believe this paper is useful and interesting for the MI community. Good luck in the discussion period.\"}", "{\"title\": \"RWKV-Mamba Feature Similarity (5 / 6)\", \"comment\": \">How does the feature mapping between RWKV and Mamba look like?\\n\\nThanks for this question. we provide **Pythia-160m&Mamba-130m&RWKV-169m similarity results** as shown below(mean MPPC of A->B):\\n\\n| Model A / Model B | Pythia | Mamba | RWKV |\\n|------------------|--------|-------|------|\\n| **Pythia** | 1 | 0.68 | 0.61 |\\n| **Mamba** | 0.74 | 1 | 0.71 |\\n| **RWKV** | 0.49 | 0.55 | 1 |\\n\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.5 of the revised manuscript** for further clarity and elaboration.\"}", "{\"title\": \"A Broader Examination of Additional Architectures (8 / 9)\", \"comment\": \">Including a more diverse set of models, such as recurrent neural networks (RNNs) or convolutional neural networks (CNNs), could strengthen the universality hypothesis by offering a more comprehensive understanding of feature similarity across a wider range of neural networks. This would enhance the generalizability of the findings.\\n\\nThanks for the reasonable and consturctive suggestion. We would like to point out that **we included results for Pythia-RWKV** in Appendix D of our submitted manuscript, which is known as an RNN-like language model architecture. We have made this clearer in our revised version. In addition, we provide **Pythia-160m&Mamba-130m&RWKV-169m similarity results** as shown below(mean MPPC of A->B):\\n\\n| Model A / Model B | Pythia | Mamba | RWKV |\\n|------------------|--------|-------|------|\\n| **Pythia** | 1 | 0.68 | 0.61 |\\n| **Mamba** | 0.74 | 1 | 0.71 |\\n| **RWKV** | 0.49 | 0.55 | 1 |\\n\\nIt is also reasonable to include more novel architectures like xLSTM, RetNet or even convolution-based language model architectures and will indeed enhance the generalizability of our findings. Currently, however, we do not have resources at hand to conduct these experiments and we will leave this for future work. Apologies for this.\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.5 of the revised manuscript** for further clarity and elaboration.\"}", "{\"title\": \"Response to Reviewer nuP5 (1 / 6)\", \"comment\": [\"We sincerely appreciate your recognition of our work, with highlights in novelty, soundness and significance. We would also like to acknowledge the thorough and constructive feedback and questions provided that help strengthen our work, which we summarize as follows:\", \"Clarifying the role of circuit analysis.\", \"Empirically supporting the \\\"Universal Induction Algorithm\\\" claim.\", \"Generalization to Larger Models\", \"Universal layer-specificity phenomenon.\", \"RWKV-Mamba feature similarity.\", \"All of these aspects are helpful and insightful. In addition, thanks for pointing out the typos in our submitted manuscript, which we have all fixed in our updated version.\"]}", "{\"title\": \"Implication of the Last Pythia Layer Exception in Depth-Specificity Analysis (4 / 4)\", \"comment\": [\">While the heatmap matrix in Figure 3c is mainly diagonal, I can see that there is a cluster of features located in the last layer of the Pythia and distributed fairly uniformly in the middle layers of Mamba. Can the authors clarify the meanings of these features?\", \"Thanks for pointing this out. Apologies for not explaining this exception which is indeed confusing without further details. We expect this to be reasonable for the following reasons:\", \"The residual stream after the last layer is directly connected with the unembedding (interleaved by a LayerNorm), so it should **most contain information about next token prediction**, rather than that about past tokens, making it substantially different from lower-layer residual stream activations.\", \"**One piece of evidence of this is the ultra-high norm of the last layer residual stream**, which was reported in the third figure in [this post](https://www.alignmentforum.org/posts/8mizBCm3dyc432nK8/residual-stream-norms-grow-exponentially-over-the-forward). One can think of this as \\\"Overwriting the whole residual stream to focus on predicting\\\".\", \"For example, in [this last-layer residual stream SAE](https://www.neuronpedia.org/gpt2-small/11-res_post_32k-oai), **the interpretation for most features are only clear if one looks into the top logits they contribute to**, which is much less often the case for lower-layer ones.\"]}", "{\"title\": \"Exploring More Correlation Metrics (3 / 4)\", \"comment\": \">It might be worthwhile to explore other correlation measures, such as distance correlation, which could potentially yield better results.\\n\\nThank you so much for your insightful suggestion! We genuinely appreciate the thoughtfulness behind it. In our paper, we used Pearson Correlation as a metric to measure feature similarity. However, we agree that, influenced by factors such as feature complexity, Pearson Correlation may not fully capture the semantic similarity of features. **Exploring more sophisticated and semantically aligned similarity metrics is indeed a promising direction for future research**.\\n\\nThat being said, Distance Correlation, while conceptually compelling, **presents significant computational challenges**. **Its time and space complexity scale as $O(n^2)$ with the number of sampled tokens \\\\\\\\(n\\\\\\\\), compared to the much more efficient \\\\\\\\(O(n)\\\\\\\\) complexity of Pearson Correlation**. Given the resource constraints we faced, we were unable to employ Distance Correlation in this work. Nonetheless, we see great potential in this idea and are excited to explore it in our future research endeavors.\\n\\nOnce again, thank you for your valuable feedback\\u2014it has given us fresh perspectives and inspired directions for further improvement!\"}", "{\"summary\": \"This paper studies the mechanistic similarity between language model structures (Mamba, RWKV, Transformer). The authors focus on their Universality, a hypothesized property that suggests different neural architectures implementing similar algorithms on similar tasks.\\n\\nIn the first part of the experiment section, they use Sparse Autoencoder (SAE) as the major tool for their analysis. The representations from two LM architectures are taken to train SAEs. The latent vectors in the SAEs, in which each dimension corresponds to various syntactic and semantic phenomena, exhibit mechanical similarities, and it is possible to find a matching between the vectors from different LM architectures.\\n\\nIn the second part, the authors study the induction behavior of two LM architectures. They found that the 17th layer of Mamba is the most important for the LM\\u2019s inductive ability.\\n\\nI think this paper studies an important problem and is well executed. I found the experiments in this paper to be well implemented. My only concern is with the role of circuit analysis experiments. They are indeed very interesting but I\\u2019m not sure how they contribute to building a mechanistic analogy between SSMs and Transformers. Do Transformers have such layer-specific behavior when it comes to inductive ability? Is there a way to empirically verify the claims in Sec 6.2?\", \"minor\": \"\", \"appendix_d\": \"How does the feature mapping between RWKV and Mamba look like?\\n* Sec 6.1: Is the layer-17 phenomenon robust to random initializations? I.e., if one retrains the SSM with another seed, would layer 17 still be the key in induction?\", \"line_179\": \"missing section reference.\", \"line_861\": \"missing space between \\u2018Universality\\u2019 and \\u2018is\\u2019\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper studies an important problem.\", \"It makes good use of sparse autoencoders for analysis.\", \"The experiments in this paper are well implemented.\"], \"weaknesses\": [\"The role of circuit analysis experiments is unclear.\", \"The claims made in Section 6.2 need to be empirically supported.\", \"(Minor) The size of LMs is limited. Only ~100m models are used in the experiments.\"], \"questions\": [\"Do Transformers have such layer-specific behavior when it comes to inductive ability?\", \"Is there a way to empirically verify the claims in Sec 6.2?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Universal Layer-Specificity Phenomenon (4 / 6)\", \"comment\": \">Do Transformers have such layer-specific behavior when it comes to inductive ability?\\n\\n>Is the layer-17 phenomenon robust to random initializations? I.e., if one retrains the SSM with another seed, would layer 17 still be the key in induction?\\n\\nThanks for this interesting question. It is an important hypothesis that the inter-layer structure of the final model should be very robust to initialization and even more other hyperparameters. **We conjecture layer 17 will still be a pivotal induction layer with a retrained Mamba**. One of the main reasons is that two Transformers trained with mostly the same configuration by two groups, [Pythia-160M](https://huggingface.co/EleutherAI/pythia-160m/blob/main/config.json) and [GPT2-Small](https://huggingface.co/openai-community/gpt2/blob/main/config.json), **have both been reported to mainly use their layer 5 (out of 12 layers) to perform induction**.\\n- Both models have a hidden dimension D=768 and # layers = 12. The main differences are summarized as follows:\\n| Model | Position Embedding | Embedding & Unembedding | Trained with Dropout |\\n|-----------------|-----------------------|--------------------------|-----------------------|\\n| GPT2-Small | Absolute, sinusoidal | Tied | Yes |\\n| Pythia-160M | Rotary | Independent | No |\\n- GPT2-Small: We quote Section 4.2 in [1], \\\"As a case study, we focus on GPT-2 Small [55], which has two induction heads in layer 5 (heads 5.1 and 5.5) \\\"\\n- Pythia-160M: We perform path patching on this model, finding that head 5.0 (the first attention head in layer 5) is the most notable induction heads:\\n| layer\\\\head | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |\\n|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |\\n| 1 | 0.07 | -0.15 | -0.10 | 0.03 | 0.09 | -0.08 | -0.07 | 0.06 | -0.01 | 0.11 | 0.34 | -0.05 |\\n| 2 | -0.14 | 0.07 | 0.10 | 0.14 | 0.14 | -0.13 | 0.60 | -0.03 | -0.14 | 0.10 | 0.04 | 0.03 |\\n| 3 | -0.24 | -0.14 | -0.96 | -1.20 | -0.49 | -0.14 | 0.20 | -0.38 | -0.10 | 0.06 | -0.11 | -0.07 |\\n| 4 | 0.13 | -0.26 | 0.09 | -0.16 | -0.10 | -0.02 | 0.89 | 0.13 | 0.09 | -0.28 | -0.14 | 0.30 |\\n| 5 | **4.00** | -0.20 | 0.05 | 0.06 | -0.53 | -0.04 | 0.48 | 0.62 | 0.06 | 0.08 | 0.05 | -0.23 |\\n| 6 | -0.04 | -0.23 | -0.04 | -0.22 | 0.02 | 0.09 | 0.04 | -0.33 | 0.02 | -0.04 | -0.38 | 0.04 |\\n| 7 | -0.28 | 0.17 | 0.03 | 0.06 | -0.28 | -0.07 | 0.01 | -0.18 | -0.23 | -0.03 | -0.02 | 0.18 |\\n| 8 | -0.07 | 0.03 | 0.50 | 0.00 | 0.15 | -0.02 | 0.01 | -0.22 | 0.02 | -0.02 | -0.08 | 0.38 |\\n| 9 | 0.54 | -0.03 | 0.07 | -0.09 | -1.10 | -0.04 | 0.04 | 0.00 | 0.04 | 0.10 | -0.01 | 0.02 |\\n| 10 | -0.01 | 0.03 | 0.00 | 0.00 | -0.03 | -0.10 | 0.01 | -0.01 | 0.00 | -0.04 | 0.03 | 0.01 |\\n| 11 | -0.14 | -0.13 | -0.05 | -0.04 | 0.00 | -0.02 | -0.11 | -0.02 | 0.01 | -0.07 | -0.02 | 0.06 |\\n[1] [Interpreting Attention Layer Outputs with Sparse Autoencoders](https://arxiv.org/pdf/2406.17759v1)\"}", "{\"title\": \"Response to Reviewer 4EGc: Thank You for Your Feedback\", \"comment\": \"Thank you for taking the time to review our responses and the additional experiments. We greatly appreciate your feedback and support throughout the review process. Best wishes!\"}", "{\"title\": \"Manually Validating Autointerp Scores (5 / 9)\", \"comment\": \">The use of GPT-4 for complexity scoring lacks rigorous validation. The paper does not provide inter-rater reliability metrics or comparisons with human annotations, nor does it discuss potential biases in the automated scoring.\\n\\nThanks for pointing this out. Without rigorously validating the reliability of autointerp scores, it is questionable to draw the conclusion in Section 5. We ask **two human annotators to score** 64 (out of 358 automatically evaluated) pairs for complexity scores and 32 (out of 72) for monosemantic scores. We **take the average of human labeled scores** and fit human-GPT-4 consistency. The fitted results in terms of (scope, R-square score) is (scope=0.45, R2=0.29) for complexity and (scope=0.38, R2=0.17) for monosemanticity, suggesting the existence of human-GPT4 scoring consistency. We notice that compared to GPT-4 labeled scores, human annotators tend to be more polarized, which may cause a lower R-square score.\\n\\n**We have incorporated these updates in Sections 5.2 and Appendix I of the revised manuscript** for further clarity and elaboration.\"}", "{\"title\": \"Response to Reviewer eqTo (1 / 4)\", \"comment\": [\"We sincerely appreciate your recognition of our work, with highlights in novelty, clarity, and impact. We would also like to acknowledge the thorough and constructive feedback and questions provided which help strengthen our work, which we summarize as follows:\", \"Exploring more circuits with SAEs;\", \"Exploring more correlation metrics;\", \"Clarify the implication of the last Pythia layer exception in depth-specificity analysis.\", \"All of these aspects are helpful and insightful. And many thanks for suggestions to clarify the meaning of the acronym MPPC, which we have included in our updated version.\"]}", "{\"title\": \"Generalization to Larger Models and Other Tasks (3 / 9)\", \"comment\": \">(Reviewer 4GEc) How do you expect your findings to scale to larger models? Did you observe whether model size impacts universality between architectures? Could smaller or larger versions of Transformers and Mambas exhibit different degrees of feature similarity?\\n\\n>(Reviewer nuP5) (Minor) The size of LMs is limited. Only ~100m models are used in the experiments.\\n\\n>(Reviewer jsst) Without further experiments, it cannot be excluded that the limited capacity of tiny models might be the main motivation behind the high similarity features and circuits across the two architectures, and this could not be the case for more capable models with e.g. 1B or 7B parameters.\\n\\nThanks for pointing this out, which greatly helps improve our work. \\n\\nWe additionally conduct the experiment for **2.8B variants of both models**, giving us the following results:\\n| Mean MPPC / Model Size | 130M (original) | 2.8B |\\n|------------------------------|-----------------|-------|\\n| Main experiment | 0.681 | 0.792 |\\n| Skyline 1 (Model Training Variant) | *0.725* | *0.847* |\\n| Skyline 2 (SAE Seed Variant) | **0.806** | **0.878** |\\n\\n**We have incorporated these updates in Sections 4.4 and Appendix D.4** of the revised manuscript for further clarity and elaboration.\\n\\n>How generalizable are your findings to other tasks beyond language modeling?\\n\\nWe do not further investigate this problem since it is slightly beyond the scope of this work. Nonetheless, we are optimistic about the generalizability for the following reasons.\\n- Our method (i.e., Sparse Autoencoders) has been shown to generalize to vision models[1, 2], Othello & chess models[3, 4] and protein language models[5] etc.\\n- There has been lines of evidences that interpretable vision neurons[6] and circuits[7] can be observed across a variety of vision model architectures. However, there is possibility that simpler features are neuron-aligned and more complex ones are stored in superposition and turn out do not match across architectures. We are also excited to see this line of work continue to ViT and conv model universality and we currently expect them to be similar as well.\\n\\n[1] [https://livgorton.com/inceptionv1-mixed5b-sparse-autoencoders/](https://livgorton.com/inceptionv1-mixed5b-sparse-autoencoders/)\\n\\n[2] [Towards Multimodal Interpretability: Learning Sparse Interpretable Features in Vision Transformers](https://www.lesswrong.com/posts/bCtbuWraqYTDtuARg/towards-multimodal-interpretability-learning-sparse-2)\\n\\n[3] [Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT](https://arxiv.org/abs/2402.12201)\\n\\n[4] [Evaluating Sparse Autoencoders with Board Game Models](https://www.lesswrong.com/posts/EWhA4pyfrbdSkCd4G/evaluating-sparse-autoencoders-with-board-game-models)\\n\\n[5] [InterPLM: Discovering Interpretable Features in Protein Language Models via Sparse Autoencoders](https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1.full.pdf)\\n\\n[6] [Zoom in: An Introduction to Circuits](https://distill.pub/2020/circuits/zoom-in/#claim-3)\\n\\n[7] [High-Low Frequency Detectors](https://distill.pub/2020/circuits/frequency-edges/#universality)\"}", "{\"title\": \"Response to Reviewer jsst Comments (1 / 2)\", \"comment\": \">Using a random SAE as a null hypothesis is a very low bar to claim feature similarity between the two models. Such tests may, at best, confirm that the similarity between latents is higher than random chance, which is unsurprising given that both models were trained on the same data. An evaluation comparing the similarity of features cross-layer vs. cross-architecture would have been more interesting in this regard (e.g., Is the similarity of features from upper layers of Mamba and upper layers of Pythia higher than between upper and lower layers in the same model?)\\n\\nThank you very much for your thoughtful feedback and for adjusting the rating. We greatly appreciate the time and effort you have invested in reviewing our paper.\\n\\nWe understand your concern, which, if we have interpreted correctly, points to the baseline being relatively weak. You note that **the substantial performance of the main experiment over the baseline might not sufficiently reflect the extent of cross-model feature similarity**.\\n\\nTo address this, we would like to clarify that **our paper used the relatively small difference between the main experiment and Skyline1 to reflect this similarity**. Specifically, we highlighted that more than 60% of features exhibit near-zero MPPC differences (absolute value < 0.05). This implies that, for the majority of features (and their corresponding semantics), the similarity between Mamba and Pythia is almost indistinguishable from the similarity between Pythia and models of the same architecture.\\n\\nThat being said, your suggestion is particularly insightful. **Comparing cross-layer similarities to cross-architecture similarities could indeed provide a new and meaningful baseline** for evaluating cross-model similarity. Inspired by your comment, we have conducted additional experiments to explore this idea. Specifically, **we calculated the similarity between high layer(12\\\\~23) features and lower layer(0\\\\~11) features within Mamba, as well as the similarity between high layer features of Mamba(12\\\\~23) and Pythia(6\\\\~11)**. In the table below, the column headers represent the range of MPPC values calculated by mamba(all layers) to pythia(all layers), where **lower values indicate relatively higher feature complexity** (which sometimes may simply result from the absence of matching semantically similar features). The results are as follows:\\n\\n| Comparison Object \\\\ MPPC Interval | 0.2~0.3 | 0.3~0.4 | 0.4~0.5 | 0.5~0.6 | 0.6~0.7 | 0.7~0.8 | 0.8~0.9 | 0.9~1.0 |\\n|---------------------------|----------|----------|----------|----------|----------|----------|----------|----------|\\n| mamba high -> pythia high | 0.239 | 0.338 | **0.436** | **0.531** | **0.622** | 0.707 | 0.785 | 0.873 |\\n| mamba high -> mamba low | **0.264** | **0.361** | **0.436** | 0.521 | 0.615 | **0.732** | **0.863** | **0.961** |\\n\\nIt can be observed that for **relatively simple features** (the rightmost three columns), Mamba's higher layers and lower layers exhibit stronger similarity. For **relatively complex features** (the third, fourth, and fifth columns), Mamba's higher layers and Pythia's higher layers show stronger similarity(same in the third column). As for the leftmost two columns, we speculate that this might be due to some Mamba features **failing to find matches** in Pythia.\\n\\nWe hope this additional analysis addresses your concern and provides further clarity on the cross-model feature similarity presented in our paper.\\n\\nThank you once again for your valuable insights, which have helped us improve the rigor and depth of our study. Please let us know if you have any further suggestions or questions.\"}", "{\"title\": \"Response to Reviewer 4GEc (1 / 9)\", \"comment\": [\"We sincerely appreciate your recognition of our work, with highlights in novelty, quality, depth, and significance. We would also like to acknowledge the thorough and constructive feedback and questions provided which help strengthen our work, which we summarize as follows:\", \"Statistical significance tests;\", \"Generalization to larger models and other tasks;\", \"Ablation studies on SAE hyperparameters;\", \"Manually validating Autointerp complexity and monosemanticity scores;\", \"Dataset Choice;\", \"Further Exploring Off-by-one motif;\", \"A broader examination of additional architectures;\", \"Could the sparsity constraint inadvertently enhance apparent similarity?\", \"All of these aspects are helpful and insightful. We sorted them in the order we think are of decreasing importance and respond in the following comments:\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
2IoFFexvuw
Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization
[ "Jiajun Fan", "Shuaike Shen", "Chaoran Cheng", "Yuxin Chen", "Chumeng Liang", "Ge Liu" ]
Recent advancements in reinforcement learning (RL) have achieved great success in fine-tuning diffusion-based generative models. However, fine-tuning continuous flow-based generative models to align with arbitrary user-defined reward functions remains challenging, particularly due to issues such as policy collapse from overoptimization and the prohibitively high computational cost of likelihoods in continuous-time flows. In this paper, we propose an easy-to-use and theoretically sound RL fine-tuning method, which we term Online Reward-Weighted Conditional Flow Matching with Wasserstein-2 Regularization (ORW-CFM-W2). Our method integrates RL into the flow matching framework to fine-tune generative models with arbitrary reward functions, without relying on gradients of rewards or filtered datasets. By introducing an online reward-weighting mechanism, our approach guides the model to prioritize high-reward regions in the data manifold. To prevent policy collapse and maintain diversity, we incorporate Wasserstein-2 (W2) distance regularization into our method and derive a tractable upper bound for it in flow matching, effectively balancing exploration and exploitation of policy optimization. We provide theoretical analyses to demonstrate the convergence properties and induced data distributions of our method, establishing connections with traditional RL algorithms featuring Kullback-Leibler (KL) regularization and offering a more comprehensive understanding of the underlying mechanisms and learning behavior of our approach. Extensive experiments on tasks including target image generation, image compression, and text-image alignment demonstrate the effectiveness of our method, where our method achieves optimal policy convergence while allowing controllable trade-offs between reward maximization and diversity preservation.
[ "Flow Matching", "Reinforcement Learning", "Wasserstein Regularization", "Exploration-Exploitation Trade-off", "Fine-Tuning", "Generative Model" ]
Accept (Poster)
https://openreview.net/pdf?id=2IoFFexvuw
https://openreview.net/forum?id=2IoFFexvuw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x7hO5zdA4M", "vFRhWMezZv", "q2y5iRWgFE", "kl87mpfL8r", "jM4VSgbho9", "gxKpJ50HsL", "df0TvA6O7H", "cOIYwMxbnt", "bWVNa346I5", "aNAseFJLgQ", "Yswqxhn5by", "YjaW2GNODo", "VcdPmsL25w", "UtXUWh8NOq", "UQrEvv4ndo", "NbNad9qUTr", "MNAlw33Sfi", "M06YqVPh8F", "L6jAT98cGr", "E91Jyx7KEZ", "DEpzXnCNLu", "Bvl0Wz0hJZ", "9HAaG3vl0D", "80jzdDz22h", "6cLZSMCAQ7", "6SKEaLNKAB", "4YJDiDgO1q", "3XnLNamhjT", "2jxvpVFxxF", "1WtF7qlMVL" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732480621983, 1732420784393, 1730712377075, 1732295729038, 1732501131645, 1732295866338, 1732085815219, 1731124966973, 1730681913930, 1732084797297, 1732082249164, 1732560329884, 1732501887422, 1732081622613, 1735512255391, 1732084933336, 1737523463844, 1732564686760, 1732082419689, 1732295918530, 1732379300664, 1732084078807, 1732084200948, 1732509217708, 1732084592105, 1732508856903, 1732420277873, 1732476898453, 1732518069789, 1732083690875 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Reviewer_tfNW" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Reviewer_tfNW" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Reviewer_dXDz" ], [ "ICLR.cc/2025/Conference/Submission1675/Reviewer_S526" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Reviewer_dXDz" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Area_Chair_qcem" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Reviewer_tfNW" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ], [ "ICLR.cc/2025/Conference/Submission1675/Area_Chair_qcem" ], [ "ICLR.cc/2025/Conference/Submission1675/Reviewer_S526" ], [ "ICLR.cc/2025/Conference/Submission1675/Authors" ] ], "structured_content_str": [ "{\"title\": \"Quantitative Evaluation on Large-Scale Models\", \"comment\": \"Thanks for the suggestion of Reviewer tfNW, we have now added comprehensive quantitative evaluation in Table 1 (we also put it below for ease of reading) that systematically compares our method's performance on SD3. Our evaluation focuses on two key metrics: CLIP scores to measure reward optimization (higher indicates better text-image alignment) and diversity scores computed as the mean pairwise distance between CLIP embeddings of generated samples (higher indicates more diverse outputs). Using these metrics, we demonstrate significant advantages over baselines. Our ORW-CFM-W2 approach achieves the highest CLIP score while maintaining strong diversity comparable to the base SD3 model. Notably, even without W2 regularization, our ORW-CFM method outperforms both RAFT and ReFT on these metrics. The addition of W2 regularization helps preserve generation diversity across all methods, with our combined approach striking the best balance between alignment quality and output diversity.\\n\\n\\n\\n| Method | CLIP Score | Diversity Score |\\n| :------------------------ | :--------- | :-------------- |\\n| SD3 (Base Model) | 28.69 | **4.78** |\\n| **ORW-CFM (Ours w/o W2)** | **33.63** | **3.47** |\\n| RAFT | 29.30 | 2.05 |\\n| ReFT | 29.32 | 3.26 |\\n| **ORW-CFM-W2 (Ours)** | **35.46** | **4.21** |\\n| RAFT+W2 | 30.88 | 2.81 |\\n| ReFT+W2 | 32.03 | 3.63 |\\n\\n\\nWe believe our quantitative results effectively address the comments regarding larger scale experiments and further validate our method's effectiveness at larger scales, and we thank the reviewers for their constructive suggestions.\"}", "{\"comment\": \"> Q: **The \\\"ad hoc\\\" issue**. \\\"Use of W2 distance is ad-hoc, and indeed the authors show that W2 distance works well even if we use ReFL\\u2026That's said, the fact that the W2 regularization is a bit ad-hoc does not greatly compromise the contribution of this paper. But I would like to suggest that the authors lower their tone, instead of claiming that W2 is something very tied to the proposed online reward matching method.\\\"\", \"a\": \"Thank you for suggesting the possibility of implementing MDP-based RL method as future directions for camera-ready. As discussed in our first response, we believe the conversion of deterministic neural ODE to stochastic MDP while preserving the original probability path is a `non-trivial` topic, and categorizing it as a \\\"naive baseline\\\" might be a bit over-reaching, as the scope of which could lead to `a new research study` in our opinion. We're also not sure if such modification will still lead to a valid continuous normalizing flow and whether we could still consider it an FM fine-tuning method. We'll try our best to include discussions regarding the feasibility of such a method in our final version.\\n\\n## References \\n\\n[1] Dong, Hanze, et al. \\\"Raft: Reward ranked finetuning for generative foundation model alignment.\\\" arXiv preprint arXiv:2304.06767 (2023).\\n\\n[2] Huguet, Guillaume, et al. \\\"Sequence-Augmented SE (3)-Flow Matching For Conditional Protein Backbone Generation.\\\" arXiv preprint arXiv:2405.20313 (2024).\\n\\n[3] Rafailov, Rafael, et al. \\\"Direct preference optimization: Your language model is secretly a reward model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Black, Kevin, et al. \\\"Training diffusion models with reinforcement learning.\\\" arXiv preprint arXiv:2305.13301 (2023).\\n\\n[5] Domingo-Enrich, Carles, et al. \\\"Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.\\\" arXiv preprint arXiv:2409.08861 (2024). \\n\\n[6] Esser, Patrick, et al. \\\"Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, March 2024.\\\" URL http://arxiv. org/abs/2403.03206.\\n\\n[7] Fan, Ying, et al. \\\"DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}", "{\"summary\": \"The paper presents a method to perform reward finetuning of flow-based models. The idea starts with the reward-weighted version of the standard flow matching loss (i.e., doing simple importance sampling) and, to remove the dependency on pretraining datasets and to perform online training, changes the sampling distribution from the original data distribution to the finetuned sampling policy. Such a strategy proves very prone to overfitting, as the finetuned distribution collapses into a single mode if it is trained for too many epochs. Therefore, the authors further proposes to regularize the sampling policy to be not too far away from the pretrained one (using a Wasserstein distance). The paper discusses some theoretical results like the asymptotic behaviors of the proposed methods and empirically show that the proposed method can be applied to finetuning of flow matching models pretrained on MNIST and CIFAR-10.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper theoretically analyzes probably one of the most intuitive methods of reward reweighting, and by introducing a regularization loss on the finetuned distribution, shows that this naive method can be extended to the online setting. To support the claims, the paper does sufficient amount of experiments on different small-scale image datasets and different reward functions. Especially, the paper shows that their online variant is better than the offline one.\\n\\nCompared to baselines like DDPO that requires one specify the number of sampling steps for finetuning, the proposed method finetunes a model in the way very similar to flow matching -- to sample images from a \\\"data\\\" distribution and some random t in [0,1], and to compute the matching loss.\", \"weaknesses\": \"The paper does not compare the proposed method against any other methods, for instance DDPO (running PPO for diffusion finetuning). While one may argue that DDPO is not designed for continuous flow models, one eventually samples from CNFs with some discretization and can therefore construct an MDP for DDPO finetuning, and not to mention some more recent methods. On flow matching, there is a very recent paper [1] that does reward finetuning for flow matching (though this paper should be considered as a concurrent one). There also exist some more recent papers for reward finetuning to compare with, and I feel that showing at least one of them will be great.\\n\\nThe proposed method seems to be a bit sensitive (in theory) to hyperparameter tuning due to its online nature. It is a bit unsatisfactory that the resulted distribution (Eqn 12 in the paper) is dependent on the number of epochs. While in practice it is not a super big concern, an objective that guarantees convergence to a specific distribution (e.g. P_pretrained(x) * exp(lambda * r(x)) / Z) is generally considered better.\\n\\nMany of the baselines are tested on large-scale models like StableDiffusion, and many of them can converge in a reasonably fast speed on simple reward functions like Aesthetic Score used in DDPO. The paper fails to show results in these more realistic settings (though it probably requires some compute, but one might be able to find a smaller model to do experiments).\\n\\n[1] Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control\\n. Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, Ricky T. Q. Chen. https://arxiv.org/abs/2409.08861\", \"questions\": \"Besides the points raised in the weakness section:\\n\\n1. It is probably better to also show quantitative metrics like diversity scores (e.g., feature pairwise distances) and FID scores.\\n2. In Eqn 10, it is probably more aesthetic to write \\\\theta_\\\\text{ft} and \\\\theta_\\\\text{ref} (for the subscripts), instead of \\\\theta_{ft} and \\\\theta_{ref}.\\n3. W2 distance is great, but I wonder if it makes a big difference if one instead uses KL divergence (both theoretically and empirically).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you again for your thoughtful and constructive suggestions. We believe our response and additional experiments have thoroughly addressed your concerns regarding the theoretical connections between reward-weighting and W2 regularization, ablation studies, baseline comparisons, and scaling capabilities. We always welcome further discussion to help improve the clarity and contribution of our work.\"}", "{\"comment\": \"I appreciate the authors' response. I believe that the work is definitely valuable to the community and deserves a better score, and therefore I raise mine to 6.\\n\\nI would again encourage the authors to attempt to show in the final version theoretical results on the general cases of hyperparameters, though I imagine it is something hard to prove, plus it is not necessary to justify the value of the paper.\"}", "{\"comment\": \"Thank you for your highly constructive feedback and thorough assessment. We particularly appreciate your recognition of our theoretical analysis of reward reweighting and the observation that our method preserves the simplicity of flow matching training. We believe we have carefully addressed all concerns through: new large-scale SD3 experiments with RAFT and ReFT baselines, detailed theoretical analysis of convergence behavior and limiting cases, and comprehensive justification for using W2 distance over KL divergence in flow matching. The revision has significantly strengthened both our theoretical foundations and practical validation on realistic tasks. We always welcome any further suggestions for improvement.\"}", "{\"title\": \"Overall Response to Major Concerns and Feedbacks\", \"comment\": \"We sincerely thank all reviewers for their thoughtful and constructive feedback. After careful consideration of the reviewers' comments, we have comprehensively addressed three main shared concerns in our revision:\\n\\n1. **Extended Empirical Evaluation on Large-Scale Models** (Reviewers tfNW, dXDz, S526): Multiple reviewers requested demonstrations on larger models and stronger baseline comparisons. We address this through `extensive experiments on online fine-tuning of Stable Diffusion 3 (SD3)` in Section 5.3 and Appendix A. We first note that many existing RL fine-tuning methods cannot be directly applied to flow matching (FM) models due to intractable ELBO and computationally expensive likelihood/KL calculations in continuous-time ODE-based FM models. Therefore, we compare against applicable baselines RAFT and ReFT that don't require likelihood calculations. Our comprehensive evaluation demonstrates: (i) `superior spatial relationship alignment` while maintaining image quality (Figure 5), (ii) `effective prevention of policy collapse` through W2 regularization with detailed ablation studies (Figure 6), (iii) strong `adaptability across multiple reward architectures` (HPS-V2, Pick Score, Alpha CLIP, Figure 9), and (iv) successful handling of complex compositional prompts (Figure 10). Most importantly, our framework achieves these results while preserving the simplicity of the original CFM loss and continuous-time ODE properties, making it highly scalable and easily applicable to any flow matching architecture from TorchCFM to SD3. All of this is achieved through training on self-generated data without requiring manually collected datasets (i.e., `theoretically-guaranteed collapse-free online fine-tuning of FM while training on self-generated data`).\\n2. **Theoretical Motivation and Necessity of W2 Regularization**: Regarding the theoretical connection between reward-weighted matching and W2 regularization (Reviewer dXDz) and the choice of W2 distance (Reviewer S526, dXDz, tfNW), we emphasize that combining reward-weighted CFM with W2 regularization is theoretically motivated and necessary. Our analysis proves that online reward-weighted fine-tuning `without regularization inevitably collapses to a delta distribution (Lemma 1)`, necessitating effective regularization to maintain diversity. While KL divergence is common in previous works, it requires expensive ODE simulation with Hutchinson traces estimator for flow matching models, making `KL computationally intractable`. Unlike diffusion models which can leverage variational bounds, flow matching lacks tractable ELBO formulations to connect with its vector field loss. To address this fundamental challenge, for the first time, `we derive a tractable upper bound for W2 distance (Theorem 3)` in flow matching fine-tuning that enables practical regularization via vector field loss directly, avoiding expensive likelihood calculations while effectively `preventing policy collapse`.\\n3. **Improved Paper Structure** (Reviewer S526): Following suggestions about presentation balance, we have restructured the paper to improve accessibility while maintaining technical rigor. We moved theoretical analyses of RL perspectives to the appendix while `expanding Section 5.3 and Appendix A with comprehensive case studies on online fine-tuning of SD3`. The enhanced experimental section now clearly demonstrates our method's advantages through diverse examples: superior spatial relationship control (Figure 5), detailed ablations validating W2 regularization's effectiveness (Figure 6), strong adaptability across reward architectures (Figure 9), and complex semantic control capabilities (Figure 10). We also improved `notation aesthetics` following Reviewer tfNW's suggestions.\\n\\nThese improvements have strengthened both the theoretical foundations and practical impact of our work. We are grateful for the reviewers' detailed feedback that helped us make the paper more accessible to a broader audience while preserving its theoretical rigor. We hope our revisions have thoroughly addressed all concerns, and we welcome any further discussion.\"}", "{\"summary\": \"This work introduces a way to finetune conditional flow matching models to maximize some user-defined reward function. Specifically, the paper combines two techniques: (1) reward-weighted conditional flow matching; and (2) a constraint that bounds the pretrained model and the finetuned model. The work gives some theoretical analyses to justify the proposed method is grounded and some experiments also show its effectiveness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem of finetuning conditional flow matching models is of general interest to the community. How to preserve the generation diversity and avoid model collapse is a challenging problem.\", \"Combining the reward-weighted matching loss and the Wasserstein distance regulatization seems to be empirically effective. The experimental results look good.\", \"There are quite a few theoretical justifications for the proposed method. Although I didn't check carefully, I find them to be quite reasonable claims.\"], \"weaknesses\": [\"The contribution of the paper seems ad-hoc to me. There is quite little connection between the reward-weighted matching loss and the Wasserstein regularization. I find both techniques independent of each other, so I find the motivation of the work quite weak. Could the author elaborate more on why these two techniques should be used together (other than empirically well-performing)?\", \"Given that the reward-weighted matching loss and the Wasserstein regularization are unrelated contributions, I will be interested to see how much gain each individual component contribute to the performance gain? Could the authors conduct some ablation study?\", \"I find it less convincing for the performance gain, since there is no compelling baselines for comparison. For example, the paper claims that the Wasserstein regularization performs well. How about other discrepancy measure? How is the Wasserstein distance a good choice here? I think more discussion on the motivatiojn will help the reader gain more insights.\", \"Whlle I am no expert in this domain, I am wondering whether there are other stronger baselines to compare to. The problem this paper studies doesn't seem to be new, so I think there will be some other finetuning methods for comparison, say [Imagereward: Learning and evaluating human preferences for text-to-image generation, NeurIPS 2023].\", \"The experiments are relatively small-scaled. I don't know how the proposed method scales with the size of the model/dataset. Could the authors conduct some experiments to study the scaling performance of this finetuning technique?\"], \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a new online reinforcement learning (RL) fine-tuning method for continuous flow-based generative models, named Online Reward-Weighted Conditional Flow Matching with Wasserstein-2 Regularization (ORW-CFM-W2). It addresses the challenges of policy collapse and high computational costs associated with traditional fine-tuning methods. The authors propose integrating RL within the flow matching framework, utilizing an online reward-weighting mechanism to focus on high-reward regions and a Wasserstein-2 distance regularization to balance exploration and exploitation. The paper provides theoretical analyses and empirical results across various tasks, demonstrating the effectiveness of the proposed method in achieving optimal policy convergence with controlled trade-offs between reward maximization and the generation capacity.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.The paper effectively identifies and addresses some key issues in fine-tuning continuous flow-based generative models, such as policy collapse and computational inefficiency.\\n2. The introduction of the online reward-weighting mechanism and Wasserstein-2 distance regularization is well-suited for flow matching models that balances exploration and exploitation, mitigating the policy collapse problem.\\n3. The theoretical analyses are rigorous and provide a solid foundation for the proposed method. The empirical results across various tasks are convincing and demonstrate the method's effectiveness.\", \"weaknesses\": \"Potential Overemphasis on Theoretical Analysis:\\nWhile the theoretical underpinnings are robust, the paper might overly focus on the theoretical aspects at the expense of practical considerations. Balancing the presentation (e.g.section 4.6 to the appendix) to include more case studies could make the findings more relatable to a broader audience.\", \"lack_of_comparative_analysis_with_other_regularization_techniques\": \"The paper introduces W_2 distance regularization but does not compare its effectiveness with other potential regularization methods. Including such comparisons could strengthen the paper's contribution by positioning it within the broader landscape of regularization strategies.\", \"narrow_empirical_validation\": \"The empirical validation is commendable, but the paper could benefit from testing the method across a wider range of datasets (e.g. CeleA face dataset) and tasks to further establish the generalizability and robustness of the approach.\", \"questions\": \"1. What kind of reward functions can be fine-tuned without collapsing by the proposed W_2 regularization method?\\n2. Is this method capable of performing fine-grained fine-tuning tasks, such as controlling specific semantic parts of images?\\n3. Why not use W_1 distance for regularizing?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer S526 [Part 2]\", \"comment\": \"> Q: Why choose W2 instead of other distance for regularizing (e.g., KL, W1)?\", \"a\": \"Our choice of W2 distance is both theoretically motivated and practically necessary to address policy collapse in online fine-tuning of flow matching models. Finding a computationally tractable divergence regularization in ODE-based flow matching models is non-trivial. While KL divergence is widely used for fine-tuning diffusion models and LLMs [3, 4], calculating KL divergence for flow matching models is infeasible, as it requires solving intricate transport dynamics and tracking probability flows across continuous time, which is computationally costly (i.e., `exact likelihood computation requires expensive ODE simulation with Hutchinson traces estimator`, detailed in Appendix B.2.4). Unlike diffusion models that get around this with variational bounds, there is no established relationship between vector field loss and ELBO in continuous-time ODE-based flow matching. Similarly, W1 distance faces intractability issues as the true marginal vector field is intractable (i.e., `Theorem 2 of [7] does not hold for W1`).\\n\\nTo address these challenges, we are the first to derive a computationally tractable upper bound for W2 distance in flow matching (Theorem 3). This bound only requires calculating the difference between vector fields, avoiding expensive likelihood calculations needed for KL divergence, thus enabling effective constraint of the discrepancy between fine-tuned and reference models while maintaining computational efficiency. Our theoretical analysis shows that this bound effectively prevents policy collapse while preserving the generation capacity of the pre-trained model.\\n\\nOur experimental results strongly validate this choice, particularly in our comprehensive experiments with SD3 in Appendix A. Figure 8 provides detailed ablation studies demonstrating that our W2-regularized approach successfully prevents policy collapse while maintaining semantic accuracy for challenging prompts like \\\"a cat in the sky\\\", where baseline methods without regularization generate nearly identical images. The effectiveness of W2 regularization is further evidenced in Figure 7 for spatial relationship control and Figure 10 for complex compositional prompts, where our method achieves precise semantic control while maintaining natural variations in generation. These results demonstrate that our theoretically-derived W2 bound provides a practical and effective solution for flow matching fine-tuning that enables both high performance and stable learning.\"}", "{\"title\": \"Reply to Reviewer dXDz [Part 2]\", \"comment\": \"> Q: How about comparison with more stronger baselines and how does the method scale to larger models/datasets? Whether there are other stronger baselines to compare to?\", \"a\": \"We now do provide comprehensive ablation studies through a series of controlled experiments on both small-scale and large-scale models that demonstrate the necessity and contribution of each component.\\n\\n**For online reward-weighting (ORW)**, Figure 2 quantitatively demonstrates how the entropy coefficient $\\\\tau$ affects convergence behavior - as $\\\\tau$ increases, the policy optimizes reward more aggressively but at the cost of diversity. We also show in Figure 6 that even `without W2 regularization, our ORW-CFM method achieves the best semantic alignment compared to other baselines` like RAFT [1] and ReFT [2], validating the effectiveness of our online reward-weighting mechanism. However, consistent with our theoretical prediction in Lemma 1, online fine-tuning methods without regularization exhibit clear policy collapse. This necessitates tractable regularization to maintain diversity.\\n\\n**For W2 regularization**, Figure 3 demonstrates how the coefficient $\\\\alpha$ controls the diversity-reward trade-off - from $\\\\alpha=0$ to $\\\\alpha=0.8$, the diversity of generated samples increases without significantly compromising performance. Figure 4 provides quantitative trade-off curves showing how $\\\\alpha$ enables explicit control over the balance between reward maximization and divergence from the pre-trained model. Our ablation studies in Figure 6 further validate this - `methods incorporating W2 regularization (RAFT+W2, ReFT+W2, and our ORW-CFM-W2) successfully prevent collapse` while maintaining high-quality generation, validating the effectiveness of our W2 regularization , wherein `our combined approach achieving the best balance` between semantic alignment and diversity preservation. The results empirically validate both the individual contributions of ORW and W2 regularization as well as their synergistic effects in enabling efficient online fine-tuning of flow matching models.\"}", "{\"comment\": \"Dear Reviewer S526,\\n\\nThank you for your positive recognition of our paper as \\\"definitely valuable.\\\" We are truly grateful for your thoughtful review process and constructive feedback throughout our discussion. Your suggestions have been instrumental in helping us achieve a better balance between theoretical depth and practical applications, particularly through the restructured content and expanded empirical case studies on SD3. We're pleased that our detailed explanations and clarifications have effectively addressed your concerns while earning your continued positive assessment of our work.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the detailed response. My concerns are mostly addressed. I am incresing my score to 6.\"}", "{\"title\": \"Reply to Reviewer dXDz [Part 1]\", \"comment\": \"We sincerely thank you for your insightful review and recognition of our empirical and theoretical contribution. We are happy to address each of your concern with the following:\\n\\n> Q: Why should reward-weighted matching loss and Wasserstein regularization be used together (other than empirically well-performing)? What's the theoretical connection? The contribution of the paper seems ad-hoc.\", \"a\": \"Thank you for your recognition of our empirical result. We'd like to clarify that our contributions include introducing and theoretically grounding both online reward-weighted CFM as an effective RL loss and W2-regularizer as an effective divergence/diversity control mechanism for flow matching (none of them is ad-hoc, but theoretically motivated and necessary). To the best of our knowledge, both components are novel and non-trivial (detailed below) in flow-matching online fine-tuning tasks. We do not claim that they should be always used together, but their combination is theoretically motivated, and we derived a clean and interpretable convergent distribution for ORW-CFM-W2 (Theorem 5) which demonstrates the tradeoffs between optimization and diversity controlled by key coefficients $\\\\tau$ and $\\\\alpha$ (i.e., controllable and interpretable convergent/learning behavior as Figures 2-4). We has provided extensive experiment results to illustrate this tradeoff (Figures 2-4), and now provide further ablation studies to show the effectiveness of both components in achieving superior online fine-tuning performance on the text-to-image alignment tasks of SD3 (Figure 5-6).\\n\\n**Motivation and Significance of Online-Reward-Weighting (ORW-CFM).** Obtaining theoretically guaranteed online RL fine-tuning methods for continuous-time ODE-based flow matching models is `non-trivial` due to the intractable ELBO and costly exact likelihood calculation in FM. Our online reward-weighted CFM loss is not a simple application of RL algorithms that leverage likelihood/ELBO, since the equivalence of CFM loss and ELBO cannot be established easily as in diffusion, and thus theoretical guarantees are non-trivial and not granted for free. `We do provide novel theoretical analysis of ORW-CFM's convergent behavior` (Theorems 2, 5), optimal convergence (Lemma 1), and its equivalence to the optimal policy as KL-regularized RL (Appendix C.9), all without relying on explicit likelihood/ELBO. We prove that although $\\\\tau$ can control the convergent speed, ORW alone may eventually collapse to delta distribution at optimal reward (Lemma 1), `motivating the use of W2-regularizer to further prevent policy collapse`.\\n\\n**Motivation and Necessity for W2 Regularization.** Finding a tractable divergence regularization in flow matching is non-trivial since the intractable ELBO and costly exact likelihood computation make `KL-divergence impractical` (See Appendix B.2.4). To address this fundamental challenge, `we derive a tractable upper bound for W2 distance (Theorem 3)` that enables tractable and efficient divergence regularization of flow matching via vector field loss directly. The W2 regularization is not an independent add-on but a theoretically motivated solution to the policy collapse problem inherent in our online reward-weighted approach for ODE-based flow matching.\\n\\n**Motivation and Necessity for Combining ORW-CFM with W2.** The W2 regularization is a theoretically motivated solution to the policy collapse phenomenon in our ORW-CFM, which is evident by Theorem 5-our key theoretical result on the convergent distribution after combining ORW and W2. We show that their combo nicely results in a Boltzmann distribution whose energy is controlled by both coefficients from ORW and W2, `balancing reward optimization and divergence regularization in an explicit and interpretable manner (Theorem 5)`. We provided extensive experimental results on how controlling $\\\\tau$ and $\\\\alpha$ leads to different tradeoffs (figure 2-4), and now include more benchmarks and ablations on Stable Diffusion3 (Figures 5-10) to show the effectiveness of ORW and W2-regularizer independently, as well as combinatorially.\\n\\nIn short, our online reward weighting enables us to achieve theoretically interpretable and controllable learning/convergent behavior (Theorem 5, Figures 2-6), optimal convergence (Lemma 1, Figure 2-6), ELBO/likelihood-free online RL fine-tuning (Theorems 5), and equivalent optimal policy with KL-regularized RL (App. C.9) - while `preserving the simplicity of original CFM loss and continuous-time ODE property`, making it easily integratible with any existing FM works like TorchCFM and SD3 (Sec. 5.3 and App. A). Additionally, our tractable W2 regularization (Theorem 3) effectively handles policy collapse (Lemma 1), allowing convergence to optimal policies that preserve diversity without collapsing (Theorem 5, Figure 6).\"}", "{\"metareview\": \"This paper introduces a novel method for fine-tuning flow matching models trained with a Wasserstein objective using reinforcement learning that addresses two key challenges: policy collapse and computational tractability. It does so by first showing two complementary observations: (a) online reward-weighted training of flow matching models inevitably leads to policy collapse, (b) a tractable upper bound exists for Wasserstein distance in flow matching that enables practical regularization without expensive likelihood calculations. By combining these insights, the authors finally yield a method that achieves optimal policy convergence while maintaining generation diversity. The paper's key strengths lie in its comprehensive theoretical analysis with clear proofs and guarantees and strong empirical results demonstrating better text-image alignment and diversity preservation compared to baselines like RAFT and ReFT. The reviewers highlighted some weaknesses with regards to the choice of Wasserstein distance being orthogonal to the finetuning procedure, as well as limited comparisons with related approaches.\", \"additional_comments_on_reviewer_discussion\": \"See above for strengths and weaknesses. The authors addressed concerns regarding practical validation during the rebuttal phase with experiments on SD3. Overall, the reviewers all voted for marginal acceptance, and I encourage authors to address the remaining concerns for the final version.\"}", "{\"title\": \"Reply to Reviewer S526 [Part 3]\", \"comment\": \"> Q: Balancing the presentation (e.g. section 4.6 to the appendix) to include more case studies could make the findings more relatable to a broader audience.\", \"a\": \"Following your kind suggestion, we have restructured the paper to improve accessibility while maintaining technical rigor. We have `moved the \\\"RL Perspective of Online Fine-Tuning Method\\\" (previously Section 4.6) to the appendix`, allowing us to dedicate more space in the main text to empirical evaluations that demonstrate the practical implications of our work. Specifically, we have significantly `expanded Section 5.3 and Appendix A with comprehensive case studies on online fine-tuning of Stable Diffusion 3 (SD3)`, where Figures 5, 7 demonstrate superior performance in spatial relationship control compared to baselines like RAFT [1] and ReFT [2], Figures 6, 8 provide detailed ablation studies of policy collapse prevention, Figure 9 validates our method's adaptability across different reward architectures (HPS-V2, Pick Score, and Alpha CLIP), and Figure 10 showcases complex semantic control capabilities.\\n\\nOur expanded experimental section demonstrates that our method not only has strong theoretical foundations (as shown by Theorem 3, 5) but also provides practical solutions to real challenges in fine-tuning large generative models. The addition of diverse case studies, from spatial understanding to multi-attribute control, makes our findings more accessible to readers while complementing our theoretical contributions. These comprehensive results, particularly with SD3, help readers better understand both the theoretical innovations and practical benefits of our approach compared to existing methods [1, 2].\\n\\n## References\\n\\n[1] Dong, Hanze, et al. \\\"Raft: Reward ranked finetuning for generative foundation model alignment.\\\" arXiv preprint arXiv:2304.06767 (2023).\\n\\n[2] Huguet, Guillaume, et al. \\\"Sequence-Augmented SE (3)-Flow Matching For Conditional Protein Backbone Generation.\\\" arXiv preprint arXiv:2405.20313 (2024).\\n\\n[3] Rafailov, Rafael, et al. \\\"Direct preference optimization: Your language model is secretly a reward model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Fan, Ying, et al. \\\"DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[5] Domingo-Enrich, Carles, et al. \\\"Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.\\\" arXiv preprint arXiv:2409.08861 (2024). \\n\\n[6] Esser, Patrick, et al. \\\"Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, March 2024.\\\" URL http://arxiv. org/abs/2403.03206.\\n\\n[7] Lipman, Yaron, et al. \\\"Flow matching for generative modeling.\\\" *arXiv preprint arXiv:2210.02747* (2022).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thanks For The Fruitful Discussions\", \"comment\": \"We extend our deepest gratitude to all reviewers for their invaluable time and effort throughout the review and discussion process. We are encouraged that our responses, revisions, and additional empirical validation effectively addressed all concerns, leading to consistently positive assessments of our work. The recognition of both our theoretical foundations and practical contributions has been especially encouraging, with Reviewer dXDz noting our \\\"reasonable theoretical justifications\\\" and \\\"good experimental results,\\\" Reviewer tfNW emphasizing that our work is \\\"definitely valuable to the community\\\" and recognizing our \\\"sufficient small-scale experiments,\\\" and Reviewer S526 affirming that \\\"this paper is definitely valuable\\\" while highlighting our successful handling of \\\"policy collapse and high computational costs.\\\"\\n\\nFollowing the suggestions of all reviewers (Reviewers dXDz, tfNW, S526), we expanded our empirical validation by extending our method to online fine-tuning of large-scale flow matching models like SD3. We further appreciate the constructive and fruitful discussions with Reviewer tfNW, leading to enhanced clarity through additional qualitative and quantitative experiments with recent baselines. Once again, we sincerely thank the reviewers for their dedicated engagement, which has significantly improved the quality and clarity of our work.\"}", "{\"title\": \"Reply to Reviewer dXDz [Part 3]\", \"comment\": \"> Q: How about other discrepancy measures? Why is Wasserstein distance a good choice?\", \"a\": \"Our choice of W2 distance is theoretically motivated and necessary. While KL divergence is widely used in other methods [3] [7], `KL is computationally intractable` for continuous-time ODE-based flow matching models (Appendix B.2.4) and there are no tractable ELBO alternatives either. To overcome this, for the first time, `we derive a tractable upper bound for W2 distance (Theorem 3)` in online fine-tuning of flow matching that enables practical regularization via vector field loss directly.\\n\\nOur experiments strongly validate this choice - as shown in Figure 4, varying the W2 regularization coefficient $\\\\alpha$ provides explicit control over the trade-off between reward maximization and diversity. The reward-distance curves demonstrate that as $\\\\alpha$ increases, our method explores optimal solutions within a constrained neighborhood of the pre-trained model, preserving diversity while still optimizing the reward. Furthermore, our ablation studies in Figure 6 show that `methods without W2 regularization (RAFT [1], ReFT [2]) exhibit clear policy collapse, while our approach with W2 regularization successfully maintains generation diversity without sacrificing semantic alignment`. The results across various reward models in Figure 9 further demonstrate how our tractable W2 bound enables stable fine-tuning regardless of the underlying reward mechanism.\\n\\n## References\\n\\n[1] Dong, Hanze, et al. \\\"Raft: Reward ranked finetuning for generative foundation model alignment.\\\" arXiv preprint arXiv:2304.06767 (2023).\\n\\n[2] Huguet, Guillaume, et al. \\\"Sequence-Augmented SE (3)-Flow Matching For Conditional Protein Backbone Generation.\\\" arXiv preprint arXiv:2405.20313 (2024).\\n\\n[3] Rafailov, Rafael, et al. \\\"Direct preference optimization: Your language model is secretly a reward model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Black, Kevin, et al. \\\"Training diffusion models with reinforcement learning.\\\" arXiv preprint arXiv:2305.13301 (2023).\\n\\n[5] Domingo-Enrich, Carles, et al. \\\"Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.\\\" arXiv preprint arXiv:2409.08861 (2024). \\n\\n[6] Esser, Patrick, et al. \\\"Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, March 2024.\\\" URL http://arxiv. org/abs/2403.03206.\\n\\n[7] Fan, Ying, et al. \\\"DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[8] Xu, Jiazheng, et al. \\\"Imagereward: Learning and evaluating human preferences for text-to-image generation.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}", "{\"comment\": \"Thank you for your thoughtful and constructive feedback. We hope that our responses have thoroughly addressed your concerns about balancing theoretical analysis with practical considerations, particularly through demonstrating our method's reward-agnostic nature and fine-grained control capabilities in our comprehensive SD3 experiments. We truly appreciate your detailed review, which has helped us achieve a better balance between theoretical foundations and practical applications. We always welcome any further discussions to enhance the accessibility and clarity of our work.\"}", "{\"comment\": \"I appreciate the authors' response. Though I now feel slightly more positive, I do have some lingering concerns:\\n\\n1. **Quantitative results on large-scale experiments.** It would be more convincing to show the average reward, diversity scores (as done in the adjoint matching paper) plus FID scores, beyond just showing qualitative examples.\\n\\n2. **Dependency on epochs**. I fully understand that the authors prove the limiting cases. Yet, in practice nobody sets the parameters to some extreme values. The dependency on the number of training epochs means that one is not guaranteed to obtain an unbiased distribution that matches the target one, and for harder cases one may need to carefully tune the hyperparameters. Such concerns may be alleviated if some bound on the approximation error given different hyperparameters in a normal range can be proved, or if the authors may find some other way to show that it is not a big issue.\\n\\n3. **The \\\"ad hoc\\\" issue**. As pointed out by Reviewer dXDz, the use of W2 distance is ad-hoc, and indeed the authors show that W2 distance works well even if we use ReFL, a less theoretically grounded method. Theoretically, if we do constrained optimization with bounded W2 distance, we may even show a bound for ReFL. That's said, the fact that the W2 regularization is a bit ad-hoc does not greatly compromise the contribution of this paper. But I would like to suggest that the authors lower their tone, instead of claiming that W2 is something very tied to the proposed online reward matching method.\\n\\nOn the baseline issue, I agree with the authors that existing methods like DDPO are not very suitable for continuous flow models, but I am just curious how a naive baseline with an approximate MDP (by manually setting some noise schedule and do DDPM-like sampling) behaves. The authors may consider include something about it for their camera-ready version if their paper gets accepted.\"}", "{\"title\": \"Reply to Reviewer tfNW [Part 2]\", \"comment\": \"> Q: Why use W2 distance instead of KL divergence?\", \"a\": \"Equation 12 establishes a general mathematical framework for analyzing the convergent behavior of our proposed ORW-CFM-W2 method for different cases, and its limiting case depends on the configuration of key parameters ($\\\\tau, \\\\alpha$). To the best of our knowledge, we are the first to demonstrate the theoretical analysis of convergent behavior of online reward-weighted fine-tuning methods for CFM, applicable to both cases with W2 (Theorem 5) and without W2 regularization (Theorem 2 and Case 2 of Theorem 5) . Through both theoretical analysis (Theorem 5) and comprehensive experiments, `we have demonstrated two important limiting cases, where the learned policy will converge to a specific distribution independent of the number of epochs`:\\n\\n1. When $\\\\tau=0$ or $\\\\alpha \\\\to \\\\infty$ (Case 1 in Theorem 5), the model maintains behavior similar to the reference model, as validated by Figure 2 ($\\\\tau=0$); \\n2. Conversely, when $\\\\alpha=0$ or $\\\\tau \\\\to \\\\infty$ (Case 2 in Theorem 5), the distribution collapses to a delta distribution maximizing rewards, as shown in Figure 3 ($\\\\alpha=0$) and Figure 5, 6 (without W2).\\n\\nWe further provide another interesting case in appendix (Case 6 in App. C.7) and prove the existence of the best trade-off parameter for balancing reward optimization term and divergence. Although not all limiting cases have closed-form solution, the practical significance of our theoretical framework is validated through experiments where the tradeoffs can be visually confirmed (Figures 2-10).\"}", "{\"title\": \"Reply to Reviewer tfNW [Part 3]\", \"comment\": \"> Q: Results on larger-scale models like StableDiffusion.\", \"a\": \"Thank you for this suggestion. We agree that using $\\\\theta_{\\\\text{ft}}$ and $\\\\theta_{\\\\text{ref}}$ would improve clarity and aesthetics. We have updated this in our paper.\\n\\n## References \\n\\n[1] Dong, Hanze, et al. \\\"Raft: Reward ranked finetuning for generative foundation model alignment.\\\" arXiv preprint arXiv:2304.06767 (2023).\\n\\n[2] Huguet, Guillaume, et al. \\\"Sequence-Augmented SE (3)-Flow Matching For Conditional Protein Backbone Generation.\\\" arXiv preprint arXiv:2405.20313 (2024).\\n\\n[3] Rafailov, Rafael, et al. \\\"Direct preference optimization: Your language model is secretly a reward model.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Black, Kevin, et al. \\\"Training diffusion models with reinforcement learning.\\\" arXiv preprint arXiv:2305.13301 (2023).\\n\\n[5] Domingo-Enrich, Carles, et al. \\\"Adjoint matching: Fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.\\\" arXiv preprint arXiv:2409.08861 (2024). \\n\\n[6] Esser, Patrick, et al. \\\"Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, March 2024.\\\" URL http://arxiv. org/abs/2403.03206.\\n\\n[7] Fan, Ying, et al. \\\"DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}", "{\"comment\": \"Dear Reviewer tfNW,\\n\\nThank you very much for recognizing the value of our work to the research community. Your thoughtful suggestions throughout the review process have been invaluable, especially regarding the addition of quantitative results, along with extensive experiments comparing our method against recent baselines on large-scale flow matching models like SD3. We are glad that our response and revisions have addressed your concerns.\\n\\nWe will make our best effort to include more comprehensive theoretical discussions in the final version regarding the general cases of our proposed methods. Once again, we deeply appreciate your constructive engagement throughout the review process, which has significantly improved the clarity and impact of our work.\"}", "{\"title\": \"Reply to Reviewer S526 [Part 1]\", \"comment\": \"Thank you for your detailed and constructive review. We are grateful for your recognition of our work's theoretical rigor and its contributions to addressing key challenges in flow matching fine-tuning, particularly regarding policy collapse and computational efficiency. We\\u2019re happy to address your concerns with the following:\\n\\n> Q: Is this method capable of performing fine-grained fine-tuning tasks, such as controlling specific semantic parts of images? Include more case studies could make the findings more relatable to a broader audience.\", \"a\": \"Our method is designed to `fine-tune flow matching models with arbitrary reward functions` without requiring gradients of rewards [5] or filtered datasets [3], as comprehensively demonstrated through our experiments across different scales. In the experiments section, we showcase this versatility through controlled convergence using varying $\\\\tau$ and $\\\\alpha$ for classifier-guided digit generation (Figures 2-3), exhibiting precise control over digit categories while preserving diversity. Figure 4 validates our method with compression-based rewards for image optimization, where we demonstrate clear reward-distance trade-offs while maintaining generation quality. Figure 5 shows successful optimization with CLIP-based similarity rewards for text-image alignment, demonstrating superior performance compared to previous methods like RAFT [1] and ReFT [2]. Figure 6 further demonstrates through ablation studies that methods incorporating W2 regularization (including RAFT+W2, ReFT+W2, and ours) successfully prevent policy collapse while maintaining performance.\\n\\nIn our SD3 experiments (See Appendix A), we further demonstrate this reward-agnostic property across diverse reward architectures. Figure 9 validates our method's adaptability across different reward architectures including `HPS-V2, Pick Score, and Alpha CLIP`. Figure 10 exhibits success with complex compositional tasks using CLIP rewards, where we effectively optimize multiple attributes (colors, positions, objects) while maintaining semantic coherence. The W2 regularization prevents collapse across all these reward functions through the tractable bound derived in Theorem 3, as particularly evident in our results on challenging tasks like spatial understanding (Figure 7), preventing policy collapse (Figure 8), multi-scale rewards (Figure 9) and multi-attribute control (Figure 10), where the method maintains both high rewards and generation diversity, when KL-divergence and W1-divergence are prohibited due to constraints mentioned above.\"}", "{\"comment\": \"Dear Reviewer dXDz,\\n\\nWe are delighted that our responses and revisions have adequately addressed your concerns. Thank you sincerely for your valuable and constructive feedback, which has helped us improve the quality and practical impact of our paper. We truly appreciate your thoughtful review and insightful suggestions.\"}", "{\"comment\": \"Thank you very much for your kind response and your positive feedback. We\\u2019re happy to address your lingering concerns with the following:\\n\\n> **Q: Quantitative results on large-scale experiments.**\", \"a\": \"Thank you for this insightful observation. Besides the theoretically analyzed limiting cases, equation (12) in Theorem 5 provides intuitive guidance for practical parameter selection, and `we provide extensive experimental demonstrations of the effective controlling behavior for non-extreme value cases`. Our analysis clearly shows how $\\\\tau$ and $\\\\alpha$ intuitively control the convergent behavior: $\\\\tau$ determines the algorithm's preference for reward maximization (as shown in Figure 2's impact on policy collapse and reward optimization), while $\\\\alpha$ maintains diversity and prevents collapse (demonstrated in Figures 3-4's reward-diversity trade-off). This understanding allows practitioners to adjust parameters purposefully rather than through random guesses.\\n\\nThough deriving convergent distributions for all general cases is challenging, our empirical results demonstrate that our method achieves stable convergence and controllable fine-tuning across moderate hyper-parameter values. As shown in Figures 2-4, our method trains reliably to convergence without requiring early stopping, and practitioners can customize/adjust the convergent policy by adjusting $\\\\tau$ and $\\\\alpha$ according to their needs - increasing $\\\\tau$ for stronger reward maximization or $\\\\alpha$ for greater diversity. The convergence behavior is validated by extensive experiments across different tasks, from target image generation to text-image alignment, showing consistent and predictable behavior that follows our theoretical intuition.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible (if you have not done so already) and verify if your questions and comments have been adequately addressed.\\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards, \\nAC\"}", "{\"comment\": \"Thank you for the detailed explanation and clarification, as well as the effort in restructuring the paper. I think this paper is definitely valuable and I'll keep my positive score.\"}", "{\"title\": \"Reply to Reviewer tfNW [Part 1]\", \"comment\": \"We deeply appreciate your thorough evaluation of our paper and the recognition of our theoretical contributions and empirical effectiveness. Thanks for noting that our method preserves the simplicity of the original flow-matching framework, making it easy to use while ensuring theoretical soundness. We are pleased to address each of your concern in detail:\\n\\n> Q: Why not compare with methods like DDPO and other recent baselines?\", \"a\": \"Thank you for the suggestion and we now provide additional results on text-to-image alignment tasks of Stable Diffusion3 (SD3) with comparison to more recent baselines. Meanwhile, we'd like to clarify several important perspectives regarding choice of baselines.\\n\\n**Technical Limitations of Existing RL Fine-tuning Methods for Flow Matching:**\\n\\nMost existing RL fine-tuning methods cannot be directly applied to fine-tune continuous-time ODE-based flow matching models due to intractable calculation of ELBO, KL divergence and expensive calculation of likelihood with Hutchinson trace estimator. Methods like DDPO [4] and DPOK [7] rely heavily on computationally tractable MDP trajectory likelihood calculations and ELBO approximations, which are computationally intractable or ill-defined for ODE-based flow matching models (detailed in Appendix B.2.4). Besides, the continuous-time nature of flow matching invalidates discrete timestep-based policy updates used in DDPO [4]. While discretizing ODE is possible, derivation of corresponding MDP trajectory likelihood (i.e., $\\\\log p_\\\\theta\\\\left(\\\\mathbf{x}_{t-1} \\\\mid \\\\mathbf{x}_t, \\\\mathbf{c}\\\\right)$[4]) in continuous-normalizing-flow is non-trivial and itself could lead to a new research work that deviates significantly from what's proposed in DDPO [4]. Additionally, KL divergence [7] is also intractable in flow matching, necessitating our novel and tractable W2 regularization approach.\\n\\nRegarding the concurrent work of Adjoint Matching [5], while theoretically interesting, it has certain practical limitations - `it requires differentiable rewards`, lacks theoretical convergence analysis, hasn't been open-sourced, and hasn't demonstrated effectiveness on SOTA FM architectures like SD3 [6]. It also employs a complex optimization procedure with stochastic optimal control. In contrast, `our method works with arbitrary rewards, and preserves the simplicity of original CFM training loss, while providing both theoretical guarantees and strong empirical results on SD3.`\\n\\n**Comprehensive Evaluation with Applicable Baselines:**\\n\\nGiven these technical constraints, we focused our comparisons on reward-based methods that can be adapted to flow matching - specifically RAFT [1] and ReFT [2], which don't rely on likelihood calculations. Though extending these methods [1] [2] to flow matching models was far beyond the empirical scope/contributions of their original work, we implemented them for SD3 fine-tuning to provide fair and meaningful comparisons. `To the best of our knowledge, we are the first to demonstrate successful online fine-tuning of SD3 models`. Our comprehensive experiments show our method's advantages across multiple dimensions: superior handling of spatial relationships (Figure 5), strong adaptability across different reward models including HPS-V2, Pick Score, and Alpha CLIP (Figure 9), and successful management of complex semantic relationships (Figure 10). `We also provide ablation results in Figure 6 to demonstrate unique contribution of ORW and W2 regularization.`\"}" ] }
2IhkyiF3to
Mutual Information Preserving Neural Network Pruning
[ "Charles Westphal", "Stephen Hailes", "Mirco Musolesi" ]
Model pruning is attracting increasing interest because of its positive implications in terms of resource consumption and costs. A variety of methods have been developed in the past years. In particular, structured pruning techniques discern the importance of nodes in neural networks (NNs) and filters in convolutional neural networks (CNNs). Global versions of these rank all nodes in a network and select the top-$k$, offering an advantage over local methods that rank nodes only within individual layers. By evaluating all nodes simultaneously, global techniques provide greater control over the network architecture, which improves performance. However, the ranking and selecting process carried out during global pruning can have several major drawbacks. First, the ranking is not updated in real time based on the pruning already performed, making it unable to account for inter-node interactions. Second, it is not uncommon for whole layers to be removed from a model, which leads to untrainable networks. Lastly, global pruning methods do not offer any guarantees regarding re-training. In order to address these issues, we introduce Mutual Information Preserving Pruning (MIPP). The fundamental principle of our method is to select nodes such that the mutual information (MI) between the activations of adjacent layers is maintained. We evaluate MIPP on an array of vision models and datasets, including a pre-trained ResNet50 on ImageNet, where we demonstrate MIPP’s ability to outperform state-of-the-art methods. The implementation of MIPP will be made available upon publication.
[ "structured pruning", "model compression", "mutual information" ]
Reject
https://openreview.net/pdf?id=2IhkyiF3to
https://openreview.net/forum?id=2IhkyiF3to
ICLR.cc/2025/Conference
2025
{ "note_id": [ "whjGqjXr7G", "pt2OrJ4Ext", "m4hRYmeG61", "jxwQpK9U6o", "fd7Wcj8DjS", "cQ2fCtnO7L", "XQZnfB4Syj", "VThLgOBO9a", "S4MaqSeS3U", "Ricp0H0IDl", "HGbUWeDUVL", "BFndfAx9eT", "45vg28LJ2y" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730792290238, 1732283107232, 1732454610447, 1730649985921, 1733126471995, 1734916253810, 1737524060460, 1730458555550, 1732282896549, 1730644888357, 1732549819860, 1732282994640, 1732283056482 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10540/Reviewer_3cVS" ], [ "ICLR.cc/2025/Conference/Submission10540/Authors" ], [ "ICLR.cc/2025/Conference/Submission10540/Reviewer_ZnsM" ], [ "ICLR.cc/2025/Conference/Submission10540/Reviewer_7FoL" ], [ "ICLR.cc/2025/Conference/Submission10540/Reviewer_3cVS" ], [ "ICLR.cc/2025/Conference/Submission10540/Area_Chair_Fv9c" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10540/Reviewer_ZnsM" ], [ "ICLR.cc/2025/Conference/Submission10540/Authors" ], [ "ICLR.cc/2025/Conference/Submission10540/Reviewer_NS2s" ], [ "ICLR.cc/2025/Conference/Submission10540/Reviewer_NS2s" ], [ "ICLR.cc/2025/Conference/Submission10540/Authors" ], [ "ICLR.cc/2025/Conference/Submission10540/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose MIPP to enable real-time pruning, whole-layer pruning and global re-training guarantees for improving the performance of network pruning. Through comprehensive experimental evaluation, they demonstrate that MIPP can effectively prune networks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors provide a detailed analysis on the motivation and how the method works, especially on how the mutual information is preserved. And they conduct a lot of experiments to demonstrate the effectiveness of their methods.\", \"weaknesses\": [\"1) Experimental settings and result presentation is not clear.\", \"What is untrained network, trained network, pretrained network meaning in Figure 2 and Figure 4?\", \"What is LC in Figure 3?\", \"In Figure 2, for column 3 and 4, seems the proposed method does not perform well significantly than the baselines.\", \"Besides, the latest baseline is year 2022, is there any recent works in 2023 or 2024 to compare?\", \"2) in Figure 4, to my best knowledge, state-of-the-art ResNet50 for ImageNet achieves about 76% accuracy however the proposed approaches can achieve nearly 88% accuracy. Can you explain the settings in detail and what is the percentage of parameters reduced and what is the MACs reduced?\", \"In summary, it is quite unclear how is the comparisons between SOTA and proposed approaches. Besides, some common metrics in comparisons are missing, e.g., FLOPS, #params, MACs. Besides, the baselines seems out-dated. If the authors could address my concern, I can improve my rating.\"], \"questions\": \"As mentioned in Weakness, I have posed concrete questions for authors. Thanks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"## Responses to Reviewer Comments\\n### Reviewer Comment 1:\\n\\u201cSome important references [a] are missing, which makes the novelty of the paper questionable. For example, [a] is also about using the mutual information to do filter pruning, what is the difference? Whether the proposed paper achieve higher performance? Why and How?\\u201d\\n\\n### Response\\nThe suggested paper [a] aims to select nodes based on their conditional mutual information (MI) with the target and a previously selected node. This places the paper in the category described in Section 2: \\u201cActivation-based pruning methods commonly view the activations as features and the outputs as targets, before ranking and selecting the top-k nodes in a global or local manner.\\u201d Our method, however, estimates the MI between activations in adjacent layers, as this function requires fewer parameters to approximate. Furthermore, the method in [a] conditions the information on a single selected node. Finally, [b] was published in June 2024 (at CVPR). According to the ICLR guidelines, this qualifies as \\u201ccontemporaneous\\u201d research (published within 4 months).\\n\\n\\n### Reviewer Comment 2:\\n\\u201cPerformance is not good. In [a], the performance on ResNet-50 on ImageNet is much higher than Thinet. Why just compare Thinet in Figure 6?\\u201d\\n\\n### Response\\nWe only compared to one baseline on ImageNet due to the large amount of GPU time required to train a model on ImageNet (even after pruning) given our computational budget at our institution.\"}", "{\"comment\": \"Thanks for your response.\\n\\nAfter reading the response and other reviewers' reviews, I'd like to keep my score since the response didn't provide the required comparison and experiments.\"}", "{\"summary\": \"This paper proposes a pruning approach based on mutual information between the activations of adjacent layers. The proposed approach has been evaluated on a number of models and datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation of the paper is clear - global pruning approaches do have their limitations and the proposed approach can effectively avoid those.\", \"The idea of looking at the MI between activations of adjacent layers is interesting. It also makes sense to consider nodes that can maintain such MI.\", \"The theoretical analysis seems to make some good points of the observations.\"], \"weaknesses\": [\"The proposed approach has only been tested on some early architectures. It is not clear how this can be generalized to other models and datasets?\", \"It is also not clear if the proposed approach is sensitive to different activation functions.\", \"The comparison between baselines seems to be quite limited to only a few approaches.\"], \"questions\": [\"Can this approach work on other types of architectures like ViT?\", \"How it may perform with different activation functions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"extra concerns\", \"comment\": \"As this work is mainly to prune the network, some common metrics in comparisons are missing, e.g., FLOPS, #params, MACs. Can you provide more data what are the results of these metrics? It would be nice if a method maintains accuracy while prunes a lot of neurons or saving a lot of computation costs.\\nSay, if your method's accuracy is higher but didn't prune a lot may not indicate your method is good.\"}", "{\"metareview\": \"The submission proposes Mutual Information Preserving Pruning (MIPP), a method to prune filters/nodes in neural networks that aims to preserve the mutual information between adjacent layers in a neural network.\\nAfter the inital round of reviews, this submission received scores of 5, 5, 3, 3. The issues raised by the reviewers are summarized in the section below, and were found to be valid by the ACs.\\nAfter the rebuttal and discussion, the reviewers remained unconvinced. \\nThe ACs did not find sufficient reason to overturn the negative consensus.\", \"additional_comments_on_reviewer_discussion\": \"Key among the weaknesses highlighted by the reviewers include:\\n- Lack of results on the current SoTA architectures, including Vision Transformers and large datasets such as ImageNet.\\n- Lack of comparison of FLOPs and timing metrics after pruning.\\n\\nDuring the rebuttal, the authors did not provide the requested metrics.\\nThe authors stated that they could only compare one baseline on ImageNet due to the lack of computational resources. This is unfortunate since a lot of the numbers reported in the submission are on smaller and simpler datasets such as MNIST, and CIFAR-10/100. While comparisons on these datasets made sense in the past, they are not very representative of real workloads anymore.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces MIPP (Mutual Information Preserving Pruning), a structured pruning method for neural networks. The key idea is to preserve mutual information between adjacent layers' activations during pruning by selecting nodes that transfer entropy to the subsequent layer.\\nThe method operates by iteratively pruning from outputs to inputs, using transfer entropy redundancy criterion (TERC) with MI ordering to select nodes. Comprehensive experiments validate MIPP's effectiveness on both trained and untrained networks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The overall writing is clear for effective visualizations and well-structured presentation.\\n2. The paper conducts a wide range of experiments to validate the algorithm.\", \"weaknesses\": \"1. Some important references [a] are missing, which makes the novelty of the paper questionable. For example, [a] is also about using the mutual information to do filter pruning, what is the difference? Whether the proposed paper achieve higher performance? Why and How?\\n\\n2. The compared method is rather old. The authors claim \\\"For models trained on datasets smaller than ImageNet, we compare the performance of our method to SynFlow (Tanaka et al., 2020), GraSP (Wang et al., 2022), ThiNet (Luo et al., 2017) and SOSP-H (Non- nenmacher et al., 2022),\\\". Why not include paper published in 2024 [b].\\n\\n3. Performance is not good. In [a], the performance on ResNet-50 on ImageNet is much higher than Thinet. Why just compare Thinet in Figure 6?\\n\\n[a] Enhancing CNN efficiency through mutual information-based filter pruning, Digital Signal Processing 2024\\n[b] Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"## Responses to Reviewer Comments\\n\\n### Reviewer Comment 1:\\n\\u201cIt is also not clear if the proposed approach is sensitive to different activation functions.\\u201d\\n\\n### Response\\nThis can easily be dealt with (which we will make clearer in an updated version of the manuscript) by applying the activation functions to extracted activations before applying MIPP.\\n\\n\\n### Reviewer Comment 2:\\n\\u201cThe comparison between baselines seems to be quite limited to only a few approaches.\\u201d\\n\\n### Response\\nWe thank the reviewer for this comment and ask them to suggest further baselines that would complement the current experiments. Our selection criterion was to choose comparators that are representative of a class of systems, which have been thoroughly evaluated and cross-tested by the research community and in industry.\\n\\nWe would like to point out that the key papers in the areas (like those we cite in our paper have essentially the same number of reviews as ours, or less).\\n\\n\\n\\n### Reviewer Comment 3:\\n\\u201cCan this approach work on other types of architectures like ViT?\\u201d\\n\\n### Response\\nThe method is definitely applicable to ViT. In fact, there are no specific 'building blocks' to which our work cannot be applied.\"}", "{\"summary\": \"The paper introduces Mutual Information Preserving Pruning (MIPP), an activation-based pruning method that maintains mutual information between adjacent layers in neural networks, ensuring retrainability. Unlike traditional methods, MIPP dynamically selects nodes based on their contribution to information transfer, addressing limitations such as layer collapse and lack of adaptability. Experimental results demonstrate that MIPP outperforms state-of-the-art pruning techniques on various vision models, including ResNet50 on ImageNet, with implementation details to be released upon publication.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper provides an interesting perspective on neural network pruning. It considers the activations of the downstream layers, which allows pruning on trained and untrained networks; the idea is interesting.\", \"weaknesses\": \"1. Authors claim the method is compared with state-of-the-art techniques, yet most literature is from before 2022; many recent works, such as PHEW or NPB in Pruning at Initialization, and indeed, there are more works on pruning on trained networks in recent years. I strongly suggest that authors provide more valid reviews of recent works.\\n2. Although the method is interesting because it works for both trained and untrained networks, the motivation for this is not clear. \\nFor PaI, tasks aim to find networks before training to reduce training costs, while post-trained pruning aims to preserve the best performance on trained networks. Is MIPP better than SoTA methods on each side?\", \"questions\": \"1. Can the author compare MIPP in PaI and post-trained pruning in separate settings with SoTA methods in recent years, such as 2023 or 2024?\\n2. Most experiments are performed on small datasets and networks; showing the pruning task on more computationally intensive structures or datasets, such as the Efficentnet B7 and Imagenet1K tasks, would be better. As the author claimed, the methods work for trained networks; pruning on a network that is pre-trained in a large-scale dataset such as Imagenet21K could be interesting. \\n3. Will this method work in Vision transformers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. We appreciate your acknowledgement of the novelty and interest in the MIPP concept. We agree that the paper would benefit from additional experiments to validate its performance. Specifically, comparing MIPP to previous methods\\u2014such as pruning at initialization and post-training pruning\\u2014would be valuable in highlighting its strengths in terms of efficiency and performance compared to existing approaches. I will keep my score and encourage the author to continue improving the current paper for future venues.\"}", "{\"title\": \"Response\", \"comment\": \"## Responses to Reviewer Comments\\n### Reviewer Comment 1:\\n\\u201cWhat is untrained network, trained network, pretrained network meaning in Figure 2 and Figure 4?\\u201d\\n\\n### Response\\nIn Figure 4, the graphs titled \\u201cpre-trained\\u201d show the results when applying each method to a trained vision system. On the other hand, \\u201cun-trained\\u201d implies the application of MIPP and the baselines to a vision system yet to be trained.\\n\\n\\n### Reviewer Comment 2:\\n\\u201cWhat is LC in Figure 3?\\u201d\\n\\n### Response\\nLC stands for \\u201clayer collapse\\u201d; we will make this clearer in an updated version of the manuscript.\\n\\n\\n### Reviewer Comment 3:\\n\\u201cIn Figure 2, for column 3 and 4, seems the proposed method does not perform well significantly than the baselines?\\u201d\\n\\n### Response\\nOur method is often outperformed by a re-initialization baseline at high sparsity levels (as are all current pruning methods), and occasionally by GraSP. However, we still demonstrate consistently effective pruning, despite MIPP not being designed for application to an untrained model.\\n\\n\\n### Reviewer Comment 4:\\n\\u201cBesides, the latest baseline is year 2022, is there any recent works in 2023 or 2024 to compare?\\u201d\\n\\n### Response\\nWe thank the reviewer for their comment. In our opinion, there have not been significant works in this space in 2023/2024. More generally, our selection criterion was to choose comparators that are representative of a class of systems, which have been thoroughly evaluated and cross-tested by the research community, without considering solutions based on their variations. \\n\\nWe would appreciate it if the reviewer could suggest baselines published in 2023 and 2024 they believe would best complement the existing performance evaluation.\\n\\n\\n### Reviewer Comment 5:\\n\\u201cIn Figure 4, to my best knowledge, state-of-the-art ResNet50 for ImageNet achieves about 76% accuracy however the proposed approaches can achieve nearly 88% accuracy. Can you explain the settings in detail and what is the percentage of parameters reduced and what is the MACs reduced?\\n\\n### Response\\nWe believe there may have been a misunderstanding regarding the graph. Our training accuracies reached nearly 88%, while our test accuracies were below 70%. The pruning ratios are detailed in the figure caption.\"}", "{\"title\": \"Response\", \"comment\": \"## Responses to Reviewer Comments\\n\\n### Reviewer Comment 1:\\n\\u201cAuthors claim the method is compared with state-of-the-art techniques, yet most literature is from before 2022; many recent works, such as PHEW or NPB in Pruning at Initialization, and indeed, there are more works on pruning on trained networks in recent years. I strongly suggest that authors provide more valid reviews of recent works.\\u201d\\n\\n### Response\\nWe thank the reviewer for the comment. We did not select PHEW because, although it is an activation-based structured pruning method (aligned with MIPP), it is designed to be applied without training data, which is not the case for MIPP. Therefore, we considered ThiNet the most applicable baseline, as it is not only structured, activation-based, and widely adopted, but also designed to be data-dependent. The PaI paper appears relevant and will help inform our paper revisions.\\n\\nThe reviewer says that our baseline are old but in fact they were published in 2017, 2020 an 2022 (two of them).\\n\\n\\n### Reviewer Comment 2:\\n\\u201cAlthough the method is interesting because it works for both trained and untrained networks, the motivation for this is not clear. For PaI, tasks aim to find networks before training to reduce training costs, while post-trained pruning aims to preserve the best performance on trained networks. Is MIPP better than SoTA methods on each side?\\u201d\\n\\n### Response\\nWe tried to make this clear in the results section for MNIST; however, we will emphasize the following points further in a revised version of the paper. MIPP is always applicable and performant in the context of trained networks, while this is not always the case with untrained networks. \\n\\nMIPP maintains the information flow between the activations of adjacent layers in a network. It prunes uninformative nodes, whose activations may be irrelevant or redundant. In a trained network, the activations are optimized to complete the task at hand. Therefore, MIPP effectively preserves useful information, leading to competitive performance.\\n\\nOn the other hand, this is not always the case when applied to untrained networks. If untrained, the information in the network reflects the information in the input images. Unlike in the case of the trained network, the information in these input images may not necessarily be useful for the classification task. In this case, MIPP may preserve information that will not contribute to the training process, impeding its performance.\\n\\nHence, MIPP can be applied to both trained and untrained networks; however, as dataset complexity increases, it becomes less applicable to untrained networks. The reason we still see good results in some cases for untrained networks is due to MIPP\\u2019s ability to achieve competitive layer-wise pruning ratios. We tried to emphasize this with the experiments presented in Figure 1.\"}" ] }
2IUO0Iq5Bq
Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix
[ "Wei Feng", "Dongyuan Wei", "Qianqian Wang" ]
Multi-view clustering effectively integrates information from multiple data representations, yet current methods face key challenges. They often lack interpretability, obscuring how clusters are formed, and fail to fully leverage the complementary information across views, limiting clustering quality. Additionally, large-scale data introduces high computational demands, with traditional methods requiring extensive post-processing and manual tuning.To address these issues, we propose a novel multi-view clustering approach based on probability transition matrices. By selecting anchor points and constructing bipartite similarity graphs, we can capture the relationships between data points and anchors in different views and reduce computational complexity. Through probability matrices, we efficiently transfer cluster labels from anchors to samples, generating membership matrices without the need for post-processing. We further assemble these membership matrices into a tensor and apply a Schatten \(p\)-norm constraint to exploit complementary information across views, ensuring consistency and robustness. To prevent trivial solutions and ensure well-defined clusters, we incorporate nuclear norm-based regularization. Extensive experiments on various datasets confirm the effectiveness and efficiency of our method.
[ "Multi-view clustering", "Fast clustering" ]
Reject
https://openreview.net/pdf?id=2IUO0Iq5Bq
https://openreview.net/forum?id=2IUO0Iq5Bq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sED2hGUdBL", "erdljSRT72", "cwQPSlrc6J", "TUA76o1Dgx", "JVCsT0n8Tt", "5YZAKKw7k1" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1737523432404, 1729500367504, 1730537391749, 1730019108229, 1730443675926, 1734327491318 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1030/Reviewer_JsSs" ], [ "ICLR.cc/2025/Conference/Submission1030/Reviewer_K6zb" ], [ "ICLR.cc/2025/Conference/Submission1030/Reviewer_gJfc" ], [ "ICLR.cc/2025/Conference/Submission1030/Reviewer_d6Mp" ], [ "ICLR.cc/2025/Conference/Submission1030/Area_Chair_VXZj" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposed a Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix ((FTMVC-APTM) to address some key challenges in multi-view clustering like lacking interpretability, and high computational complexity from large-scale data. Extensive experiments on various datasets are conducted to demonstrate the effectiveness and efficiency.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A good framework for this paper.\", \"weaknesses\": \"1.\\tInnovative may not be enough for such a conference, this work simply combines a lot of work, for example, nuclear norm and Schatten p-norm regularization are both very common regular terms, and the authors don't discuss in depth why they use these two items\\uff0c for example why Schatten p-norm regularization, there are many new low-rank tensor norm [1][2].\\n2.\\tThe article is poorly expressed, for example, whether the author employs Schatten p-norm or weighted tensor Schatten p-norm. the introduction states the Schatten p-norm, but Eq.6 uses the weighted tensor Schatten p-norm in [3]. These are two completely different concepts. If you use the weighted tensor Schatten p-norm, how did you determine the weight values for the different views?\\n3.\\tThis work states \\u201cfast tensor-based multi-view clustering\\u201d, but the dataset is only 4k in size and there is no runtime comparison, which is hard to believe!\\n4.\\tThe author states \\u201cEach experiment was replicated 5 times\\u201d, so why do the results in Table 3 not include variance?\\n5.\\tIn Figure 2, the performance always reaches best when anchor rate=1, which means the anchor is useless, and the complexity is also O(n^2logn), This result proves that the work proposed by the authors is not valid, At least it contradicts the author's \\u201cfast\\u201d statement.\\n\\n\\n[1] Guo J, Sun Y, Gao J, et al. Logarithmic Schatten-$ p $ p Norm Minimization for Tensorial Multi-View Subspace Clustering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(3): 3396-3410.\\n\\n[2] Ji, Jintian, and Songhe Feng. \\\"Anchor structure regularization induced multi-view subspace clustering via enhanced tensor rank minimization.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[3] Gao, Quanxue, et al. \\\"Enhanced tensor RPCA and its application.\\\" IEEE transactions on pattern analysis and machine intelligence 43.6 (2020): 2133-2140.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors propose a Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix ((FTMVC-APTM) method. Within this model, to reduce the computational complexity, the relationships between data points and the selected anchors in different views are captured, and recorded by the bipartite similarity graphs. Based on these probability graphs, the cluster labels from anchors to samples are transferred, and the membership matrices can be obtained without the need for post-processing. To further exploit complementary information across views, the membership matrices are stacked into a tensor and contrained by a Schatten p-norm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. This paper is easy to follow, and the idea is straightforward.\\n\\nS2. The introduction about the framework is clear, and the equations are well presented.\", \"weaknesses\": \"W1. Overall, the idea of this paper is straightforward and clear. However, the novelty of FTMVC-APTM is limited, and the presented motivations/problems have been referred and solved.\\n\\nW2. The used datasets are too small, and the experiments provided in this paper are not convincing to show the superiority of the proposed method.\\n\\nW3. The running time comparison experiment is missing.\\n\\nW4. More recent fast multi-view clustering methods should be introduced and compared in the experiments.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new multi-view clustering method called Fast Tensor Multi-view Clustering Based on Anchor Probability Transformation Matrix (FTMVC-APTM). The method of directly calculating the membership matrix using the probability matrix avoids complex post-processing and enhances clustering interpretability. The nuclear norm and Schatten p-norm regularization are introduced to ensure the balance and robustness of the clustering results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method is clearly explained and easy to understand.\\n\\n2. This paper carefully analyzes the computational complexity of the method and illustrates the potential advantages of FTMVC-APTM in data scale expansion.\\n\\n3. Experimental results on eight multi-view datasets demonstrate its effectiveness.\", \"weaknesses\": \"1. The main contributions of this paper is to combine the anchor probability transformation matrix and the Schatten p-norm regularization of the multi-view tensor structure. However, these ideas are not new in the field of multi-view clustering, and the combination of anchor selection, tensor structure and probability matrix has been applied in some methods[1][2].\\n[1] Nie, Feiping, et al. \\\"Fast clustering with anchor guidance.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).\\n[2] Yu, Weizhong, et al. \\\"Multi-View Fuzzy Clustering Based on Anchor Graph.\\\" IEEE Transactions on Fuzzy Systems (2023).\\n\\n2. This paper lacks ablation experiments on the key design of using probability matrix to calculate membership matrix. Given that this method is a core contribution of FTMVC-APTM, conducting relevant ablation experiments will help evaluate the actual impact of this strategy on the model performance.\\n\\n3. Although this paper demonstrates the superior performance of FTMVC-APTM on multi-view datasets, the scale of these datasets is relatively limited (the number of samples ranges from a few hundred to a few thousand), which fails to fully verify the performance of the method on large-scale data. It is recommended to supplement the experiments on larger datasets, such as the YTF dataset and the Caltech dataset.\\n\\n4. It is recommended that the authors appropriately increase the visualization results of clustering to help readers more intuitively understand the performance and clustering structure of the proposed FTMVC-APTM method.\", \"questions\": \"1. This paper claims that this method is more interpretable than other complex multi-view clustering methods. Specifically, how does the membership matrix generated by the probability matrix help explain the final clustering structure?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, a tensor-based method is proposed to solve the MVC problem. The authors propose a simple and efficient method and verify the rationality and superiority of the method through experimental results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper is well organized.\\n2.\\tThis paper implements an exploration of fast tensor clustering.\\n3.\\tThe proposed methodology is somewhat enlightening.\", \"weaknesses\": \"I have some concerns about the paper, as follows:\\n\\n1. This paper is also limited by its innovative. Affiliation matrix is not a new method, it has been widely used[1,2]. Tensor Schatten-p norm[3,4] are also a common way to deal with low rank. So, the innovation made by the authors is more in the sense of incremental.\\n\\n[1] Zhao, J. B., & Lu, G. F. (2022). Clean and robust affinity matrix learning for multi-view clustering. Applied Intelligence, 52(14), 15899-15915.\\n\\n[2] Li, X., Zhang, H., Wang, R., & Nie, F. (2020). Multiview clustering: A scalable and parameter-free bipartite graph fusion method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1), 330-344.\\n\\n[3] Xie, Y., Gu, S., Liu, Y., Zuo, W., Zhang, W., & Zhang, L. (2016). Weighted Schatten \\n-norm minimization for image denoising and background subtraction. IEEE transactions on image processing, 25(10), 4842-4857.\\n\\n[4] Li, X., Ren, Z., Sun, Q., & Xu, Z. (2023). Auto-weighted tensor schatten p-norm for robust multi-view graph clustering. Pattern Recognition, 134, 109083.\\n\\n2. The experimental results in this paper are inadequate. For example, the authors emphasize that their method enhances the interpretability of clustering. However, this needs to be verified experimentally. The superior performance of clustering alone may not provide effective support.\\n\\n3. In addition, the authors emphasize that their method requires only linear complexity and has a fast computational speed. However, the sample size of the dataset used is small, and I suggest the authors to increase their experiments on large-scale datasets such as AwA[5] or Youtube[6].\\n\\n[5] https://cvml.ista.ac.at/AwA/\\n\\n[6] https://www.cs.tau.ac.il/~wolf/ytfaces/\", \"questions\": \"1.\\tThe sample size of the existing dataset is small and the authors should increase the experiments on large-scale datasets.\\n2.\\tThe available experimental results are not sufficient to support the authors' opinion. I suggest the authors to add some visualization or other experiments.\\n3.\\tCompared to existing methods, the authors' innovation is unclear. I suggest that the authors should carefully consider the motivation and contributions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"In this paper, the authors directly compute the membership matrix using the probability matrix to avoid complex post-processing and enhance the clustering interpretability. The nuclear norm and Schatten p-norm regularization are introduced to ensure the consistency and robustness of the clustering results. The membership matrices are stacked into a tensor to further exploit complementary information across views. Extensive experiments on various datasets confirm the effectiveness and efficiency.\\n\\n\\n\\n\\n\\nAll reviewers give negative scores. Its novelty is limited. Affiliation matrix and Tensor Schatten-p norm are fairly common. The experimental results are inadequate, and the running time comparison experiment is missing. Moreover, some recent works should be introduced and compared in experiments. The scale of utilized datasets is relatively limited, failing to verify the performance of proposed method on large-scale data. The expression needs further refinement. Also, there is no any rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers give negative scores. Its novelty is limited. Affiliation matrix and Tensor Schatten-p norm are fairly common. The experimental results are inadequate, and the running time comparison experiment is missing. Moreover, some recent works should be introduced and compared in experiments. The scale of utilized datasets is relatively limited, failing to verify the performance of proposed method on large-scale data. The expression needs further refinement.\"}" ] }
2IBdk8cUdC
Topo-Field: Topometric mapping with Brain-inspired Hierarchical Layout-Object-Position Fields
[ "Jiawei Hou", "Wenhao Guan", "Longfei Liang", "Xiangyang Xue", "Taiping Zeng" ]
Mobile robots require comprehensive scene understanding to operate effectively in diverse environments, enriched with contextual information such as layouts, objects, and their relationships. While advancements like Neural Radiance Fields (NeRF) offer high-fidelity 3D reconstructions, they are computationally intensive and often lack efficient representations of traversable spaces essential for planning and navigation. In contrast, topological maps generated by LiDAR or visual SLAM methods are computationally efficient but lack the semantic richness necessary for a more complete understanding of the environment. Inspired by neuroscientific studies on spatial cognition, particularly the role of postrhinal cortex (POR) neurons that are strongly tuned to spatial layouts over scene content, this work introduces Topo-Field, a framework that integrates Layout-Object-Position (LOP) associations into a neural field and constructs a topometric map from this learned representation. LOP associations are modeled by explicitly encoding object and layout information, while a Large Foundation Model (LFM) technique allows for efficient training without extensive annotations. The topometric map is then constructed by querying the learned NeRF, offering both semantic richness and computational efficiency. Empirical evaluations in multi-room apartment environments demonstrate the effectiveness of Topo-Field in tasks such as position attribute inference, query localization, and topometric planning, successfully bridging the gap between high-fidelity scene understanding and efficient robotic navigation.
[ "Robotic scene understanding", "Neural scene representation", "Hierarchical representation", "Topometric map" ]
https://openreview.net/pdf?id=2IBdk8cUdC
https://openreview.net/forum?id=2IBdk8cUdC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yARyaignKZ", "pOoFm0P53N", "oM8FzNa9AS", "n9QgfNubNq", "lGwdaiEHvv", "iquENvbC7D", "ifrSxM0hf4", "diYYH5Eus4", "cApwWkfAMS", "c9VOEyPbJJ", "U50GgpkvGU", "RftUxPwlrR", "QgfoIACqV0", "NlfjpEsbpy", "MgbaGmiSZV", "Lf8HVLgmFD", "I3aGlD8cQf", "BNNfc66qso", "4iWys78Gv4", "39FGqpKf9G", "35ROPZSQv6" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732726079079, 1732630224889, 1732727508542, 1730935871747, 1730662448296, 1732026122591, 1732763595242, 1730669626009, 1732630319892, 1732557344393, 1732026372454, 1733901428811, 1732026211667, 1732637410760, 1732635047704, 1732715745502, 1732564775010, 1730812592108, 1732635616103, 1732764762089, 1732026432329 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_GNDJ" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_dJX7" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_ms6U" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_GNDJ" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_wzd7" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_ms6U" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_GNDJ" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_wzd7" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_dJX7" ], [ "ICLR.cc/2025/Conference/Submission2902/Reviewer_GNDJ" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ], [ "ICLR.cc/2025/Conference/Submission2902/Authors" ] ], "structured_content_str": [ "{\"comment\": [\"I am glad that the authors have presented concrete arguments highlighting their differences against the relevant work. Although both studies appeared on ArXiv before May, I will take HOV-SG as an exemplary case as CLIO was formally accepted after the ICLR submission deadline.\", \"The authors' response validates my earlier argument that **CONCEPTUALLY**, these scene graph based methods encompass complete forms of topometric maps, place cells, and POR as the proposed method. The primary differences lie in the implementation, and neither the proposed method nor the two relevant works directly replicate the representations found in neuroscience. For further revising the manuscript, the authors are encouraged to provide solid experimental evidence that the proposed method brings advantages given its distinct design choices. The remaining issues are summarized below:\", \"Theoretical basis. Please specify the paragraph that provides 'the theoretical basis' of the method, as this is regarded as the key contribution of the paper. In the current form, I can only see abstract evidence that similar behaviors exist in the human brain. Is each module necessary? Do different modules communicate in a manner similar to the proposed method? Is such a design superior to other existing designs in the vision community?\", \"Differences with relevant works. I want to stress the fact that HOV-SG (and CLIO) has the object nodes and region nodes besides the voronoi graph and the point cloud/meshes. The graph structure of the proposed method does not show any difference in terms of the object-region hierarchy but with only missing pieces. The major difference between the proposed methods and the two relevant works is the feature query manner through a hash-encoded network, where the relevant methods maintain the semantic features explicitly on the nodes. The authors are encouraged to provide evidence that query through a network instead of the maintenance on the sparse nodes leads to better accuracy or efficiency.\", \"Interpolate embeddings on unseen areas: Please provide concrete evidence that such interpolation/prediction behavior leads to practical advantages instead of providing noisy and inaccurate semantics.\", \"Map construction. Please clarify your map construction process. I note the following terms and definitions: 'the learned neural field' F (eq.1, F is formally denoted as the topo-field later at L375), a 'topometric map' G = (V, E) (eq. 3), a 'Topo-Field' with $g$ and $h$ (L269). As the authors claim that 'we construct the topometric map in a mapping and updating strategy based on the learned Topo-Field F', I wish the authors could clarify how the topometric map is constructed in the MAPPING phase based on the Topo-Field F as I don't find the involvement of F in the paragraph between L377 and L414.\", \"Navigable path. Please explain how the maintained map representation facilitates 'navigable path planning' (claimed in the first contribution). Note that the topo-field only contains the mapping between the coordinate and the semantic features, and the topometric map only contains sparse nodes and edges without a metric map (as defined in Eq. 4 & 5). How is A* algorithm (L568) applied to this map representation to generate the 'navigable path'?\", \"There are also numerous issues regarding the experiments.\", \"The authors first present the region inference results in Sec. 5.1. I don't understand why this evaluation is conducted as the region is manually annotated.\", \"Regarding the localization with text queries in Sec. 5.2, the failure cases of other methods (as shown in Fig. 4) consistently demonstrate that the text queries identify correct object semantics but in incorrect rooms. The results do not convincingly demonstrate the superiority of the proposed method, and the comparison is unfair since the room type information requires manual annotation in the paper.\", \"The paper mentions 10 test scenes (Table 2) but provides comparative results for only four of them (Tables 3 and 4). Please clarify this inconsistency.\", \"Please specify the scene IDs of Scene 1-10 so that future work can make fair comparisons.\"]}", "{\"comment\": \"Thank you for the active discussion, for remaining problems:\\n\\n**Feature encoding:** I now understand your concern. CLIP provides per-pixel feature and pixels in the bounding-box share the same feature of the object. As for background, all pixels outside the bounding-box share the same feature to represent the unified concept of \\\"region\\\". On the contrary, LSeg tends to provide per-pixel feature and each pixel feature varies. However, encoding the region information of an image is more likely an image classification task rather than a segmentation task, which aims to supervise all pixels in the background with a unified region label. It's hard to convergent scattered features from different pixels to a unified concept.\\n\\n**Feature contrastive learning:** For \\u201cHow do you make sure your features on the 3D surface point are exactly the feature you render?\\u201d, we further add a more detailed discussion to train the neural representation in Section 4.2. In the way discussed in 4.2 about neural encoding and 4.1 about target feature processing, given a posed RGB-D image, the target feature of each pixel is processed as mentioned in 4.1 denoted as $\\\\mathcal{E}\\\\{(e_v, e_s)\\\\}$ (features on the 3D surface point). At the same time, the related pixel in depth image is back-projected into 3D space according to depth and pose value, denoted as $p$, and processed by the process mentioned in 4.2 to form $F(p)=\\\\{f_v, f_s\\\\}$ (rendered feature). A contrastive loss is conducted between $\\\\{(e_v, e_s)\\\\}$ and $\\\\{f_v, f_s\\\\}$ to train the neural representation. Training details are declared in Section 4.4. Figure 2 clearly shows the feature contrastive learning pipeline.\\n\\n**Threshold:** As mentioned, \\\"we filter the points with similarity over threshold (0.6 in our practice), and simply draw a bounding box to cover these points for visualization\\\". Generally, there are more than 30\\\\~50 points on a single object, so there's a tolerance on the threshold choice. As for the exact value 0.6, it is an empirical choice based on our experiments on Matterport3D dataset among tens of scenes we've tested. In our practice values among 0.4\\\\~0.6 does not bring too much disturbance on results.\\n\\n**Image query localization revise in paper:** The metric and sample strategy declaration has been added in Section 5.2. \\n\\n**Ablation:** Our main contribution is proposing the allocentric layout-based scene encoding with the neural representation approach and constructing a topological map based on this. As can be seen in Fig. 7 the object encoding branch keeps nearly the same with previous method, so we do not ablate the object related metrics like semantic segmentation which would be nearly the same with previous works. As for graph, we propose a topometric map construction pipeline based on the learned neural representation based on the queried object and region features. Consequently, the topo-map construction result relies on the learned object and region embedding metrics. Evaluating and improving the topo-map are our undergoing future work on graph-based path planning and locomotion which would not be included in this paper.\"}", "{\"comment\": \"Thanks for your response to my earlier questions.\\n\\nI don't think the answer really addresses my question concerning additional information in the authors' method: The other baselines do not have access to the layout information and therefore are at an explicit disadvantage. This seems to be confirmed by the fact that all of the model variations in the ablation study seem to outperform the other baselines with a significant margin (Table 4). It would be useful to devise an additional evaluation protocol that would allow to disentangle the performance increase due to the simplification of the problem by the added layout information and the one due to the proposed architecture. \\n\\nThe motivation of the model from the evidence of places cells in POR is just too vague in my opinion. The paper would need some evaluation demonstrating that the proposed model and encoding are similar to the population coding observed in POR. \\nSimilarly, the statement that NeRF is more biologically plausible as an implicit coding would require a more rigorous argumentation based on the known properties from neuroscience evidence.\"}", "{\"summary\": \"The paper introduces Topo-Field, a framework designed to enhance mobile robot navigation by integrating detailed semantic information about layouts, objects, and their positions (LOP) into a neural field representation. Interestly, such structure is inspired by the role of postrhinal cortex neurons on spatial layout. By querying a learned NeRF, Topo-Field constructs a semantically rich yet computationally efficient topometric map for hierarchical robotic scene understanding. Experimental results demonstrate its effectiveness in tasks like position inference, localization, and planning, bridging the gap between detailed scene understanding and efficient robotic navigation.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The authors tackle the problem of hierarchical robotic scene understanding, which is an interesting and important topic\\n2. The proposed LOP is bio-inspired, to me this concept seems interesting.\", \"weaknesses\": \"**Unclear descriptions of target feature processing in Sec 4.1**\\n1. How do you know if a 3D point belongs to the object, or the background? Do you use the GT annotations from the dataset? (the Matterport3D you show has that information I believe?)\\n2. For the background features, you will get only a single feature for each image. How do you fuse those features from different views? \\n3. Also, isn\\u2019t it making more sense to take per-pixel CLIP features from models like LSeg/OpenSeg, and fuse that information?\\n\\n**Unclear descriptions of neural scene encoding in Sec 4.2**\\n1. Related to the questions above. In this section you mention that there are object-level local features and layout-level region features and MHE seems to be a good representation for learning such hierarchy. However, how exactly do you learn these two set of features respectively under MHE? No details there\\n2. To learn MHE or NeRF in general, you need to actually shoot a ray for each pixel and sample along the ray. The final features are the weighted sum of all values along the ray, with volume rendering. How do you make sure your features on the 3D surface point are exactly the feature you render? \\n\\n**Unclear Topometric Mapping in Sec 4.3**\\n1. Line 309, what is ${C_t, S_t}$? What are the differences to ${C_R, S_R}$ (I know this is the embeddings for region) in Line 304, and ${C_I, S_I}$ in Line 314? You did not specify them before. It is confusing and making it hard to understand\\n2. Figure 3 (b) does not really match with what you write in \\u201clocalization with text/image query\\u201d between Line 306-318. In the figure, all you get are the per-point features, and try to match with query features, omitting many important details in your description. \\n3. \\u201cMatching\\u201d in Figure 3 is never really discussed. What kind of matching? Do you mean calculating the cosine similarity among the features, and take the one with the highest score?\\n\\n**Text query localization in experiments**\\n1. How do you decide the similarity threshold for the bounding box? Do you need to choose a different threshold for each text query? My own experience is that, it is not really possible to get a single threshold for every query.\\n2. One more thing: once you have the right threshold, how exactly do you get the bounding boxes out from thresholding? \\n3. What are the \\u201csamples\\u201d in Table 1?\\n4. How many queries are you considering for each scene, and how do you obtain the GT? Same question applies to Table 3 as well.\\n\\n**Image query localization in experiments** \\nif I understand correctly, you show the heatmap of the query. You claim that \\u201cTopo-Field constrains the localization results to a smaller range in the exact region\\u201d. However, that is not really true to me. If you look at the washbasin in the bathroom, you also have many points highlighted in other regions, like kitchen, and even some points in the bedroom. In such a case, how can you get such good numbers in Table 3? \\n\\n**Ablation study**\\nHow come your ablation in Table 4 is only evaluating the region prediction accuracy, which does not even require most parts of your methods (objects, the graph you build, etc). Why not evaluate on other things as well? And even that, your default strategy seems not outperform much over any of baselines, even the very simple baseline 1 in some scenes. \\n\\n**Writing** \\n- Overall I think the writing is not good since many things are not justified well. \\n- There are so many cases when the author writs (), a space is not added before, e.g. L214 \\u2026mapping(SLAM), L259 Multi-layer Perceptron(MLP), etc.\", \"questions\": \"It would be very important if you can justify those points in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper targets an interesting problem of topometric mapping but is not ready for publishing. The quality is poor regarding writing, organization, annotations, and experimental setups.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The idea of constructing a topometric map using the implicit neural field is interesting\"], \"weaknesses\": [\"The writing is far from satisfactory. The corresponding authors should revise the manuscripts besides the abstract and introduction.\", \"Though the paper proposes a Topo-field to integrate layout-object-position, this representation is not clearly presented in Sec. 3. The definition of the topometric map (or the graph structure in Eq. 3) is vague and hard to follow, and the generation of the graph from dense field F (L199) is unclear. Note that the implicit neural field F is similar to previous methods with a distilled feature field, the novelty and the contribution of the proposed method are unclear.\", \"The hierarchical structure of point-object-room is common in scene graph generation. However, no relevant work (e.g., CLIO, HOV-SG) is referred to in the related work section or the experiments section.\", \"Multiple annotations are not formally defined in the paper (e.g., the functions $C_t, S_t$). The training stage in Sec. 4.4 should be carefully revised to make it clear.\", \"The experimental setups lack clear demonstration, and comparisons against recent methods are missing.\"], \"questions\": \"With the issues addressed above, the authors should revise the paper accordingly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for showing interest in our novel idea and carefully reading to make this paper better. Sorry for the Ambiguity, we carefully apply proof-reading and formulation clarification on the methodology and the revised sections is highlighted red in rebuttal pdf.\\n\\n**For ambiguity problems**, the pixel-wise encoding strategy is clarified in Section 4.1; neural encoding including MHE and separate heads are declared in Section 4.2; Topo-mapping pipeline and matching method is explained in detail in Section 4.3; formulation in the figure 2, 3 has been clarified; space is added before all ().\", \"for_other_questions\": \"**Bounding-box in text query localization:** Actually, it\\u2019s not the general bounding-box from object detection, we filter the points with similarity over threshold (0.6 in our practice), and simply draw a bounding box to cover these points for visualization. \\u2018Samples\\u2019 is the number of text queries, which is clarified in revised pdf. Ground truth comes from the object instance labels from Matterport3D. For table 3, we\\u2019ve mentioned in Section 5.2 that more than 40 images are sampled from each scene, ground truth comes from back-projecting image pixels into 3D according to ground truth pose and depth.\\n\\n**Image query localization:** Table 3 shows weighted average distance among all samples in a scene, using similarity as weight. Given that few points in orange may appear in other rooms as noise, the max distance of a single point from these points (similarity would be 0.3\\\\~0.6) would be less than 6\\\\~8 m, which counts relative little.\\n\\n**Ablation study:** For better understanding, we add the origin feature encoding strategy in CLIP-Field (Shafiullah et al., 2022) into Figure 7 and Table 4 as comparison to show the improvement more obviously. As for the improvement from Baseline1 to current Topo-Field, this metric is evaluated on more than 100~200k position samples on each scene, even if the number seems not growing in a large scale (1% ~ 3%), it\\u2019s a robust and obvious progress.\\n\\nFor the metrics, while there have been many works distilling object features and comparing semantic segmentation, our work focuses on the layout-level encoding and the integration with object-level. As you can see, the single object feature encoding branch nearly remains the same in our work. As for topometric graph, we aim to provide a pipeline to build this map based on neural implicit representation and evaluate its effectiveness with tradition graph-based plan method. More quantification for optimization and evaluation on robotic would be our under-going future work.\\n\\nBy the way, for fairness, the same input, metrics, and evaluation strategy are employed for our method and compared ones.\"}", "{\"comment\": \"Thank you for your insightful comments and suggestions.\\n\\nRegarding the motivation of our model, we were inspired by evidence that neurons in the postrhinal cortex (POR) exhibit a preference for the spatial layout of local scenes, which is determined by the geometry of regions, such as a room's boundaries. Based on this neurobiological evidence, we abstracted the spatial representation of regions to align with our spatial layout encoding of connected regions. This encoding aims to capture the spatial structure in a way that is consistent with the principles observed in POR.\\n\\nFor the integration of layout information in our method, the addition to the Topo-field is motivated by this brain-inspired approach, incorporating the spatial layouts connected by regions. To evaluate the impact of this addition, we conducted comparisons with and without the layout information, as shown in our ablation studies. These results demonstrate the contribution of layout information to the model's performance. As for architecture improvement, ablations from Baseline1 to Topo-Field already include as shown in Figure 7 and Table 4.\"}", "{\"summary\": \"This paper presents a method for training a neural implicit field that utilizes supervisory signals from pre-trained foundation models to capture semantic features. The proposed model is applicable to several critical downstream tasks in robotics, including text/image query localization, semantic navigation, and path planning. Experimental results demonstrate significant improvements in performance metrics, supported by qualitative evidence.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper addresses a compelling problem in semantic mapping and its applications for enabling robots to navigate real-world environments. The experimental results demonstrate impressive improvements in performance. Additionally, the supplementary materials, such as the code snippets and prompts, enhance the understanding of the proposed method's details and implementation.\", \"weaknesses\": \"Despite its potential, the system heavily relies on various input types, such as annotated room maps and camera poses, as well as off-the-shelf object detection methods for generating bounding boxes and masks. This dependence poses challenges in real-world applications, where inaccuracies in these inputs can lead to errors. Additionally, the system's reliance on ChatGPT complicates debugging and explanation when errors occur in complex real-world environments.\\n\\nEncoding semantic information and supervising it with pre-trained features alleviates some annotation burdens; however, this approach is already a common practice in the field of implicit representation for semantic mapping [1][2]. The overall system resembles a large engineering project, making it challenging to distill its theoretical contributions.\\n\\n[1] V. Tschernezki, I. Laina, D. Larlus and A. Vedaldi, \\\"Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations,\\\" 2022 International Conference on 3D Vision (3DV), Prague, Czech Republic, 2022, pp. 443-453, doi: 10.1109/3DV57658.2022.00056.\\n[2] Zhu, Siting, et al. \\\"Sni-slam: Semantic neural implicit slam.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\nTable 4 could benefit from clearer labeling, where Baselines 1-4 are not explicitly defined. A reference to Figure 7 could help.\\n\\nThe authors frequently reference the postrhinal cortex from the biological literature, but the connection to the proposed method is not clearly articulated. Topological mapping is indeed a common computer vision task relevant to navigation.\", \"questions\": \"The experimental results are impressive when compared to baseline performances; however, it is unclear whether the benchmarks used are new proposed by the authors or following existing ones, which raises concerns about the fairness of the evaluation. What are the primary factors driving the significant improvements?\", \"computation\": \"the paper mentions a large batch size of 12,544. It would be helpful to clarify what specific data is contained within this batch size.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the active discussion, for remaining problems:\\n\\n**ChatGPT:** As declared in Section 4.3.2, we leverage LLM to help validate the \\nconstruction of topometric graph edges. The input is two JSON files including \\nregion vertices and object vertices whose attributes have been declared in Section 4.3.2.\\nand samples are given in Appendix A.6. With the input and prompts listed, LLM is supposed\\nto output a new JSON file including the graph edges among vertices, where edge attributes\\nare declared in Section 4.3.2 and samples are given in Appendix A.6.\\n\\nAs for the phrase \\\"would not cause big error\\\", we mean in our practice we have not found \\nobvious error. One shortcoming may come as follows, the position relation of regions is defined\", \"as_one_of\": \"1) a is to the east of b, 2) a is to the west of b, 3) a is to the north of b, 4) a is to\\nthe south of b. In fact, a room could be located to the southeast-south of the other room, LLM\\nmay decide the relationship to be 1) a is to the east of b. As mentioned \\\"As mentioned, the GPT \\nis used in our approach to filter out unreasonable relationships and check vertices relations \\nwhich mainly decide problems like (1) whether a bike located in a bedroom is possible (2) the \\n3D location relationship of b-box[x1,y1,z1] and b-box[x2,y2,z2].\\\" LLM has the common knowledge\\nand ability to deal with these easy problems.\\n\\n**Cognitive Map:** As discussed in Section 2.2, traditional topo-map didn't include semantics (Zhang, 2015; Zhang et al., 2015; Garrote et al., 2018; Oleynikova et al., 2018; Badino et al., 2012). Concept-graph (Gu et al., 2024) makes a step forward utilizing LFM to model the object structure with a topo map, which introduces open-world semantics. CLIO Maggio et al. (2024) and HOV-SG Werby et al. (2024) propose using feature point cloud clustering and mapping in an incremental approach, which is not a cognitive inspired approach.\\n\\nOn the contrary, based on the mental representation of cognitive map, LaChance et al. in 2019 discoveried that population code in POR is strongly tuned to spatial layout than content. More recently, Zeng et al. in 2022 proposed that geometry representations of local layouts relative to environmental centers are needed to form a high-level cognitive map from egocentric perception to allocentric understanding. We propose to encode spatial layout and contents with a layout-object-position field. By querying the neural representation from egocentric perceptions, we form an allocentric high-level graph-like topometric map representing layouts with connected regions relative to its centers.\\n\\n**Contributions and Related Work:** In a word,\\nmost semantic feature fields learned in existing methods (Zhi et al., 2021; Fan et al., 2022; Xie et \\nal., 2021; Shafiullah et al., 2022; Huang et al., 2023; Kerr et al., 2023) focus on object \\nsemantics but do not include layout-level features. Works like RegionPLC (Yang et al., 2023) considered region\\ninformation by fusing multi-model features but no explicit representation of layout features is learned.\\nThe discussion has been included in Section 2.1 and 2.2 with more detail.\\n\\n**Evaluation:** The dataset splits used for our localization evaluation follows a general setting which is about 4:1 to 5:1 like most learning-based localization works (Kendall et al., 2016).\"}", "{\"comment\": \"I appreciate the authors' effort for all the responses! Still, there are many points that I am not convinced or not clear.\\n\\nFor example, there are still some unclear points from your method, I am just listing a few below.\\n1. Line 243, why CLIP can give you \\u201cper-pixel\\u201d feature? Isn\\u2019t that a global feature for each image? You still did not really answer my questions of using \\u201cLSeg/OpenSeg\\u201d for per-pixel feature.\\n2. You still did not justify my question \\u201cHow do you make sure your features on the 3D surface point are exactly the feature you render?\\u201d In Sec 4.2, I still don\\u2019t understand it\\n\\nMoreover, there are still unjustified points in experiments:\\n1. Ok, so you choose a cosine similarity score threshold of 0.6, why this value? How can such a single threshold value work well? For example, my personal experience with the CLIP cosine similarity score is that, threshold=0.4 might work well for \\u201cbed\\u201d, but might not be working well with \\u201cbed with a pillow on it\\u201d (just an example). Therefore, an experiment to justify your choice of threshold, and an explanation of why a single value of 0.6 can work well for all scenarios is necessary\\n2. Now I understand better why your image localization performance is good, but in your paper, you still did not add an explanation \\n3. Ablation: you partially answer my question, I appreciate it. However, you still did not answer the key question: why your ablation is only on the region prediction, not any other parts of your method (objects, the graphs, etc)\\n\\nBased on my concerns above, I am afraid that the paper right now has not reached my standard for publication. Please incorporate all the comments from every reviewer and submit them to the next venue.\"}", "{\"comment\": \"Thank you for agreeing on our contribution, after proofreading and formulation clarification, the paper includes more details and is helpful for understanding and reproducing. Revised version is attached in rebuttal with revised sections highlighted in red.\\n\\n**For ambiguity problem,** Table 4 and Figure 7 are revised, referenced, and described in section 5.4.\", \"for_other_problems\": \"**The system heavily relies on various input types, poses challenges in real-world applications. The system's reliance on ChatGPT complicates debugging and explanation when errors occur in complex real-world environments.**\\n\\nAs using NeRF, the posed images are needed and mentioned in Section 4.1, COLMAP (Sch\\u00a8onberger & Frahm, 2016) is widely employed method to provide these problems. This is the same with most other reconstruction and scene representation works. And off-the-shelf large foundation models are widely employed by approaches to provide labels without human labor. Indeed, besides Matterport3D which is a real-world dataset, a real-world apartment environment (Zhu et al., 2022) is also employed for evaluation, which proves effectiveness and practicability.\\nFor usage of GPT to help evaluate the vertices relationship, additional details have been clarified in Section 4.3.2. As mentioned, the GPT is used in our approach to filter out unreasonable relationships and check vertices relations which mainly decide problems like (1) whether a bike located in a bedroom is possible (2) the 3D location relationship of b-box[x1,y1,z1] and b-box[x2,y2,z2]. The GPT is well-prompted as in appendix and this would not cause big error.\\n\\n**The overall system resembles a large engineering project, making it challenging to distill its theoretical contributions, the connection to the proposed method is not clearly articulated:**\\n\\nA cognitive map is a mental representation used by an individual to order personal store of information about spatial environment, and the relationship of its component parts (Tolman, 1948, Psychological review). The cognitive map is embodied by Place cells (O\\u2019Keefe et al., 1971, Brain research) and population code in POR is strongly tuned to spatial layout than content (LaChance et al., 2019, Science). Although encoding the layout and contents to form a cognitive map seems a straightforward idea, it has been more than 70 years since the original concept raised.\", \"we_mimic_the_neural_mechanisms_of_spatial_representation_in_three_key_aspects\": \"1) The cognitive map corresponds to a topometric map, which uses graph-like representations to encode relationships among its components, e.g. layouts and objects. 2) The population of place cells is analogous to a neural implicit representation with position encoding, enabling location-specific responses. 3) POR, which prioritizes spatial layouts over content, aligns with our spatial layout encoding of connected regions.\\n\\nWe believe this work makes a step forward mimicking and applying mechanisms of spatial cognition on robotics. Our method describes a clear pipeline with details for reproductivity and experiments shows the ability to manage layout-related tasks and the effectiveness of the topo-map.\\n\\n**Whether the benchmarks used are new proposed by the authors or following existing ones, which raises concerns about the fairness of the evaluation.**\\n\\nExisting scene representation methods either evaluate semantic segmentation results like mIOU (Shafiullah et al., 2022; Huang et al., 2023) or simply evaluate the localization accuracy (Kerr et al., 2023). Based on the existing evaluation strategy and for detailed quantification, we improve this metric by employing point cloud distance of prediction and target and region localization accuracy according to our proposal. For fairness, our method and compared ones share the same input, metrics, and evaluation strategy.\\n\\n**Computation: the paper mentions a large batch size of 12,544. It would be helpful to clarify what specific data is contained within this batch size.**\\n\\nAs mentioned in (Shafiullah et al., 2022), a larger batch size helps the CLIP (Radford et al., 2021) series works reduce the variance in the contrastive loss function. As a reliable baseline, CLIP-Field (Shafiullah et al., 2022) used a batch size of 12,544 to maximize the VRAM usage. For fairness, we keep align with the same settings in our approach for the settings of MHE network.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Thank you for agreeing on our contribution and carefully reading to make this paper better. Sorry for the Ambiguity, proof-reading and formulation clarification on the methodology have been applied and the revised sections is highlighted red in rebuttal pdf.\\n\\n**For ambiguity problems**, environment partitioning detail is described in Section 4.1; discussion of proposed approach with bio-inspired theory is added in Introduction and contribution; citation error is fixed; formulations in Section 4.1, 4.2, 4.3 4.4 are checked and clarified; neural encoding including MHE and separate heads are declared in Section 4.2; Topo-mapping pipeline and matching method is explained in detail in Section 4.3.\", \"as_for_other_questions\": \"**How well the proposed approach may model the neural structures it claims to be inspired by:**\\n\\nA cognitive map is a mental representation used by an individual to order personal store of information about spatial environment, and the relationship of its component parts (Tolman, 1948, Psychological review). The cognitive map is embodied by Place cells (O\\u2019Keefe et al., 1971, Brain research) and population code in POR is strongly tuned to spatial layout than content (LaChance et al., 2019, Science). Although encoding the layout and contents to form a cognitive map seems a straightforward idea, it has been more than 70 years since the original concept raised.\", \"we_mimic_the_neural_mechanisms_of_spatial_representation_in_three_key_aspects\": \"1) The cognitive map corresponds to a topometric map, which uses graph-like representations to encode relationships among its components, e.g. layouts and objects. 2) The population of place cells is analogous to a neural implicit representation with position encoding, enabling location-specific responses. 3) POR, which prioritizes spatial layouts over content, aligns with our spatial layout encoding of connected regions.\\n\\nWe believe this work makes a step forward mimicking and applying mechanisms of spatial cognition on robotics. Our method describes a clear pipeline with details for reproductivity and experiments shows the ability to manage layout-related tasks and the effectiveness of the topo-map.\\n\\n**It would seem that the proposed approach also benefits from significantly more task-specific information for those tasks:**\\n\\nFor fairness, the same input, metrics, and evaluation strategy are employed for our method and all compared ones. So the better performance comes from our method explicitly modeling the layout structural and objective information and the hierarchical integration, powered by the whole pipeline mentioned before. \\n\\n**It would seem that the partitioning of the space requires human labeling? If that is the case, it is a significant limitation of the approach.**\\n\\nWe agree that labeling the scene regions needs human labor. However, in fact, partitioning the buildings needs little human labor, where in most human-made buildings spatial layouts are easily available divided by straight walls. Clarified in section 4.1. Layout information is available in datasets like Matterport3D. However, if not provided, the region distribution can be easily annotated taking little human labor. As in our practice, region annotation of a house with 8 rooms only takes 3 min by drawing lines from top-down view according to walls to form a rule to separate (x,y) coordinates, bounding 3D points to different regions.\\n\\n**Is MHE the only possible solution? How does it compare with other fast approaches discussed in the literature, like, for example, Gaussian Splatting:**\\n\\nSurely the MHE is not the only possible solution as you mention, other methods like plenoxels, octree, feature grids or others could be solution. Gaussian Splatting is a hot explicit scene representation research recently, however we believe the NeRF, being an implicit way to encode information, is more likely a neural way mimicking the bio-encoding way. There\\u2019s no research proving NeRF represented implicit strategy or GS represented explicit way which is better. Further comparison could be a good research, however would not be included in this paper.\"}", "{\"comment\": \"We double-checked the revised paper, as mentioned, we employ an implicit neural scene representation in an Instant-NGP (Muller et al., 2022) way rather than a NeRF-based network. The mentioned implicit neural representation or feature field is not the same as neural radiance field.\"}", "{\"comment\": \"I appreciate the authors' efforts in revising the manuscript and addressing the issues raised. However, it is unfortunate that they continue to emphasize the significance of their 'cognitive realization.' This bio-inspired perspective could be briefly mentioned as the motivation behind the paper, rather than being regarded as a key contribution. The design merely shares a philosophy with the cognitive map, but it can hardly be considered a 'cognitive realization.' The map structures themselves are quite common in the vision community. The information transfer and interactions among the maps have nothing to do with neuroscience. Simply identifying three related concepts separately does not justify calling the proposed method a 'cognitive realization.'\\nI strongly recommend that the authors focus on the specific advantages of their proposed method and the particular designs, as the hybrid map with semantic-aware topology is not novel. Take HOV-SG and CLIO for instance:\\n* Topometric map: HOV-SG and CLIO maintain Voronoi graphs that explicitly model the topology of free space. These methods also maintain scene graphs to encode objects and their relationships, which represent the realization of a 'topometric map.'\\n* Place cells: HOV-SG and CLIO maintain queryable features on the graph nodes, with dense point clouds or triangle meshes for each instance. By constructing a kd-tree of the point cloud/vertice, we can achieve the mapping function \\nF (Eq. 1) to retrieve features for any given coordinate x through nearest neighbor search. I don't think a hash-encoded MLP is more 'cognitive' as both can achieve the same function.\\n* POR: If the proposed method, with annotated room-type information, can be considered a realization of POR, then HOV-SG and CLIO already maintain explicit nodes for room types within their graph structure (without human annotations).\\n\\nTherefore, from a high-level perspective, HOV-SG and CLIO effectively achieve what the proposed method claims.\"}", "{\"comment\": \"Thank you for your effort in making this paper better. Although these two works (HOV-SG and CLIO) are very current, one published in Oct. and one in July, we discussed the differences between our work and theirs in Section 2.2. They construct the map in a feature point cloud clustering and incremental mapping way while we learn a neural implicit representation and construct the map by querying the neural representation. And we are the first to explain the theoretical basis and neuroscience reference to manage the hierarchical encoding of spatial layouts and contents in the form of objects and connected regions.\", \"the_differences_between_our_approach_and_their_approach_are_listed_as_follows_to_show_our_novelty\": [\"1) Different map construction approach: CLIO Maggio et al. (2024) built a task-driven scene graph forming task-relevant clusters of primitives. HOV-SG Werby et al. (2024) utilized feature point cloud clustering and managed the mapping in an incremental approach. We learn and represent the spatial embeddings with an implicit neural representation approach and form the topo-map graph by querying the learned representation. 2) Different graph structure: The Voronoi graph is built based on the exploring path-guided point cloud embeddings and clustering process. We query the learned representation in a two-level standard (object or region) separately with fewer vertices. Each vertice clearly represents only one object or region with its attributes. 3) Different scene representation approach: CLIO and HOV-SG form point cloud with features to explicitly represent the scene while we learn an implicit function mapping the 3D positions to the embeddings. That means our approach can interpolate and predict the embeddings of unseen areas and places with sparse or no point cloud.\", \"As suggested, we consider to update our contribution as follows:\", \"We develop a brain-inspired Topo-Field, which combines detailed neural scene representation with high-level efficient topometric mapping for hierarchical robotic scene understanding and navigable path planning. Various quantitative and qualitative experiments on real-world datasets are conducted, showing high accuracy and low error in position attributes inference and multi-modal localization tasks. Examples of topometric construction and path planning are also employed.\", \"We explain the theoretical basis and neuroscience reference to manage the hierarchical encoding of spatial layouts and contents in the form of objects and connected regions, according to the spatial mechanism of cognitive map with POR population and place cells.\", \"We propose to learn a Layout-Object-Position associated implicit neural representation with target features from separately encoded object instances and background contexts as objects and layouts. The process is explicitly supervised by LFM-powered strategy with little human labor.\", \"We propose a topometric map construction pipeline by querying the learned neural representation in a two-stage mapping and updating approach, leveraging LLM to validate edges conducted among vertices.\"]}", "{\"title\": \"Discussion\", \"comment\": \"Thank you to the authors for responding to the questions. However, I feel that some of the issues have not been fully addressed. Please see the details below.\\n\\n1. ChatGPT\\n\\nThank you for including the prompts in the Appendix. However, I am still unclear about the exact inputs and demo outputs for the system.\\n\\nAdditionally, the phrase \\\"would not cause big error\\\" is somewhat ambiguous. Could you clarify what potential errors might arise, and how the system can address these to ensure robust deployment in real-world, open-world environments?\\n\\n2. Cognitive Map\\n\\nWhile I understand that the theory of cognitive maps is well-established, I am unclear about what novel aspects are introduced into this paper from a cognitive perspective, as opposed to existing well-defined knowledge in the topology domain. While the narrative is compelling, I feel there is a weak logical connection between the ideas.\\n\\n3. Contributions and Related Work\\n\\nMost importantly, the authors have not addressed my primary concern regarding the contribution of this work relative to other papers. Specifically:\\n\\n\\\"Encoding semantic information and supervising it with pre-trained features alleviates some annotation burdens; however, this approach is already a common practice in the field of implicit representation for semantic mapping [1][2].\\\"\\n\\n3. Evaluation\\n\\nI understand the evaluation metrics presented. However, my key concern lies with the dataset splits. Specifically, I would like to know if the training and evaluation data used are consistent with those commonly employed in previously published work in this area.\"}", "{\"summary\": \"This article proposes a novel approach for encoding scene information into a topometric map, for improving localisation and planning. The proposed approach is based on a Layout-Object-Position (LOP) approach. Layout information from knowledge of the environment's rooms. Object information from semantic segmentation (Detic) and a joint encoding of the segmented object patch using clip and of the object-region labels using Sentence-BERT. Finally, position information is produced by a 3D reconstruction of the scene using Multi-scale Hashing Encoding (MHE). This information is combined into a single Topometric map coined Topo-Field.\\nThe proposed method is evaluated for the inference of position attributes and localisation and appears to clearly outperform the presented baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The combination of structural and semantic information in a way that is efficient for robotic system to query and plan on is a critical problem for robotics.\", \"The approach seems to perform very well on the evaluation, clearly outperforming the presented baselines on those tasks.\", \"The proposed approach is also reasonable in computational terms, as all experiments were performed on a single GPU (no information is given on training time though).\"], \"weaknesses\": [\"The description of the approach is lacking specifics, and the reader has to infer the architecture and information flow from the provided diagrams rather than formal description in mathematical and algorithmic terms.\", \"It is not fully clear how the partitioning of the environment (location) into rooms is performed and how well it would generalise to new environments.\", \"The motivation of the work from neuroscience is interesting, but remains very vague. Little discussion is provided on how well the proposed approach may model the neural structures it claims to be inspired by.\", \"The performance is very good compared to the discussed baselines, but it would seem that the proposed appraoch also benefits from significantly more task-specific information for those tasks (ie, the room information is provided directly). This is not a critical issue in my view, but it would be good to discuss the limitations of the presented baselines and issue of fairness of comparison some more.\", \"I note that the reference for Reimers & Gurewych should probably cite the published version of the article rather than the pre-print.\"], \"questions\": [\"In line 235: what is C and S? I assume it is the output of CLIP and Sentence-BERT? How are the regions r_p defined?\", \"In line 239: what is m in this equation?\", \"In page 5, line 241: It would seem that the partitioning of the space requires human labeling? If that is the case, it is a significant limitation of the approach.\", \"In line 242: Could you clarify the sentence \\\"the predicted implicit representation outputs are targeted to match the features from the pre-trained models separately\\\", what it means in practice or how this is achieved. I assume this is what is described in 4.2, but it would be good to make it unambiguous if that is the case, as F is not referenced in that section.\", \"Could you make the description in Section 4.2 more specific and formal? The only description of inputs/outputs and process we are provided is via the diagram in figure 1, it would be good to have a proper formal description of the process, description of the architecture, and format/dimensionality of inputs and outputs for each component, as well as a formal algorithm\", \"In line 254: Could you provide a more in-depth argument for using MHE? The computational cost of standard NeRFs is well known, but is MHE the only possible solution? How does it compare with other fast approaches discussed in the literature, like, for example, Gaussian Splatting?\", \"In line 258: Could you describe the mapping in more formal terms? Fig. 2 only provides a schematic description of the process.\", \"In line 268: How is the similarity between E_pi and {C_R, S_R} calculated? It would be good to have a formal equation for this operation.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Besides the lack of novelty, there is a significant misleading regarding the argument of the learned 'NeRF'. NeRF is an abbreviation for neural **radiance** field, the field should contain color information and is usually optimized through differentiable rendering. The term 'rendered feature' (response to Reviewer ms6U) is also adopted by the authors. However, as illustrated in Fig. 3 and according to Eq. 1, there is no such radiance field (no color and density channels, no differentiable rendering process) at all. The method simply supervises the mapping function F between coordinates and features (L268, 294, 295) given the projected point cloud (coordinate) and the pixel-wise feature pairs.\"}", "{\"comment\": \"The **BRAIN-INSPIRED** Topo-field draws intuitive lessons from biological evidence, with the primary differences lying in our perspective and then leading to different subsequent implementations. We were inspired by evidence that neurons in the postrhinal cortex (POR) exhibit a preference for the spatial layout of local scenes, defined by the geometry of regions, such as a room's boundaries. Based on this, we abstract the spatial representation of regions to align with our spatial layout encoding of connected regions. However, our goal is not to elucidate the neural mechanisms of POR firing patterns but rather to leverage these principles for practical spatial representation.\\n\\nHOV-SG (and CLIO) and Topo-field share similar intuitive ideas about graph-like representations with nodes for rooms and objects, likely stemming from shared human common sense. Supported by the preference for spatial representations observed in POR, we specifically conceptualize rooms as spatial layouts of local scenes, establishing a one-to-one node correspondence with regions in our topometric map. In contrast, the Voronoi graph in HOV-SG serves primarily to provide traversable areas with more detailed nodes and edges, whereas our topometric map emphasizes integrating spatial layout information and semantic details about rooms and objects. While the underlying ideas may seem similar, the distinct starting points result in significantly different implementation methods.\", \"for_the_remaining_issues\": [\"We incorporate the spatial layout information into Topo-Field supported by the biological evidence that the POR population prefers the spatial layout of local scene, corresponding to the geometry of regions, connected rooms in the topometric map.\", \"Neurons in POR prefer layouts of local scenes inspiring the one-to-one correspondence of rooms where the map maintains fewer nodes with explicit room and object semantic information.\", \"Obviously, our proposed method could take any 3D position in the map as input to predict the semantic information, unlike the point cloud or meshes.\", \"We build the topometric map by sampling among positions to get the semantic and metric information of room and object nodes as shown in Figure 2(c).\", \"As shown in Eq. 4 & 5 the bounding_box includes the center and extent information of objects and regions clearly providing the metric for the topometric map, which can be used for A* path planning.\"], \"for_the_issues_in_the_experiments\": [\"We learn and evaluate the annotated region information to validate that the neural representation is able to conduct a relationship between observed image background contexts and region vision-language embeddings. It is realized by mapping back-projected 3D locations to region embeddings.\", \"It is our contribution to explicitly represent and learn the region layouts. For other compared methods, it does not mean that they haven't considered the context information, the open-world embeddings\", \"distilled in their feature fields implicitly include the objects and contexts. The experiments show our advantages in explicitly\", \"constructing the layouts and relationships with objects and 3D positions.\", \"We choose 2 large scale and 2 small scale scenes as representative scenes in Matterport3D dataset for query\", \"localization as mentioned in Section 5.1.\", \"It would be included in the paper and our provided reproducible code as demo if accepted.\"]}", "{\"comment\": \"Thank you for the valuable advice. Sorry for the Ambiguity, proof-reading and formulation clarification on the methodology and settings have been applied and the revised sections are highlighted red in rebuttal pdf.\\n\\n**For ambiguity problems,** formulations in Section 3, 4 are clarified; Topo-mapping pipeline and matching method are explained in detail in Section 4.3; more training details of neural implicit representation are added in Section 4.4.\\n\\n**The novelty and the contribution of the proposed method are unclear.**\\n\\nA cognitive map is a mental representation used by an individual to order personal store of information about spatial environment, and the relationship of its component parts (Tolman, 1948, Psychological review). The cognitive map is embodied by Place cells (O\\u2019Keefe et al., 1971, Brain research) and population code in POR is strongly tuned to spatial layout than content (LaChance et al., 2019, Science). Although encoding the layout and contents to form a cognitive map seems a straightforward idea, it has been more than 70 years since the original concept raised.\", \"we_mimic_the_neural_mechanisms_of_spatial_representation_in_three_key_aspects\": \"1) The cognitive map corresponds to a topometric map, which uses graph-like representations to encode relationships among its components, e.g. layouts and objects. 2) The population of place cells is analogous to a neural implicit representation with position encoding, enabling location-specific responses. 3) POR, which prioritizes spatial layouts over content, aligns with our spatial layout encoding of connected regions.\\n\\nWe believe this work makes a step forward mimicking and applying mechanisms of spatial cognition on robotics. Our method describes a clear pipeline with details for reproductivity and experiments shows the ability to manage layout-related tasks and the effectiveness of the topo-map.\\n\\n**No relevant work (e.g., CLIO, HOV-SG) is referred to in the related work section or the experiments section.**\\n\\nAs suggested, the mentioned two very current works (Maggio et al., Oct 2024 RAL; Werby et al., July 2024 RSS) have been added to the related work and discussed. CLIO Maggio et al. (2024) built a task-driven scene graph inspired by Information Bottleneck principle to form task-relevant clusters of primitives. At the same time, HOV-SG Werby et al. (2024) proposed a hierarchical scene understanding pipeline, using feature point cloud clustering of zero-shot embeddings in a fusion scheme and realizing the mapping in an incremental approach. Unlike the incremental mapping and clustering-based graph construction method, we propose to build the topometric map based on querying the trained neural field which serves as knowledge-like memory base, whose nodes and edges include attributes representing object and layout information explicitly learned when training the specific neural encoding.\"}" ] }
2HjRezQ1nj
CLIPDrag: Combining Text-based and Drag-based Instructions for Image Editing
[ "Ziqi Jiang", "Zhen Wang", "Long Chen" ]
Precise and flexible image editing remains a fundamental challenge in computer vision. Based on the modified areas, most editing methods can be divided into two main types: global editing and local editing. In this paper, we choose the two most common editing approaches (\ie text-based editing and drag-based editing) and analyze their drawbacks. Specifically, text-based methods often fail to describe the desired modifications precisely, while drag-based methods suffer from ambiguity. To address these issues, we proposed \textbf{CLIPDrag}, a novel image editing method that is the first to combine text and drag signals for precise and ambiguity-free manipulations on diffusion models. To fully leverage these two signals, we treat text signals as global guidance and drag points as local information. Then we introduce a novel global-local motion supervision method to integrate text signals into existing drag-based methods by adapting a pre-trained language-vision model like CLIP. Furthermore, we also address the problem of slow convergence in CLIPDrag by presenting a fast point-tracking method that enforces drag points moving toward correct directions. Extensive experiments demonstrate that CLIPDrag outperforms existing single drag-based methods or text-based methods.
[ "Computer Vision", "Generative Model", "Diffusion Model", "Image Editing." ]
Accept (Poster)
https://openreview.net/pdf?id=2HjRezQ1nj
https://openreview.net/forum?id=2HjRezQ1nj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uH5FT6R8xm", "sNEe4Q1EGy", "pmAOZQVObn", "n4QrfRM1c4", "h3mTlzcxpU", "g8CGHntL1A", "dONzA6rWKz", "cEx8rCd730", "bKAAneflJA", "YUYSzjLUXv", "UqtJPIuosx", "RTu8SH7gHz", "R6FrEgBsCB", "R152ZlnXKI", "LwiaPstJz7", "J6z7euTCQs", "H7a7LRYIZC", "Gt9UZkDpfj", "BcPmvBbFcA", "9j4lQxIjms", "4rXIcc6u1m", "3uhQRCdefB" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1730371663948, 1732702356281, 1732528689438, 1733117205956, 1732465834933, 1732464973628, 1731117011594, 1732465310747, 1737523823997, 1730688663155, 1733117166037, 1732535410360, 1732464644733, 1732702270770, 1732464428412, 1729177455223, 1732463267676, 1732464386637, 1732465591096, 1732464726581, 1734763719356, 1732532269034 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7216/Reviewer_qFtD" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Reviewer_CxEc" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Reviewer_RgXk" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7216/Reviewer_R2Bm" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Reviewer_CxEc" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Reviewer_CxEc" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ], [ "ICLR.cc/2025/Conference/Submission7216/Area_Chair_KA2d" ], [ "ICLR.cc/2025/Conference/Submission7216/Authors" ] ], "structured_content_str": [ "{\"summary\": \"CLIPDrag combines text and drag-based controls to improve image editing, using text for broad guidance and drag points for precise adjustments. The author introduces Global-Local Motion Supervision, which combines gradients from both text and drag inputs, and Fast Point Tracking to speed up convergence. This method eliminates common issues like vagueness in text-only edits and ambiguity in drag-only edits.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe motivation is clear and effective, combining text and drag editing to leverage the strengths of both approaches, achieving more precise edits.\\n2.\\tThe Global-Local Gradient Fusion method is innovative, merging global text and local drag gradients to enhance editing quality, with experiments showing notable improvements in performance.\", \"weaknesses\": \"1.\\tThe illustration in Figure 2 is unclear in terms of workflow. If CLIP guidance is applied, the latent space should ideally be converted to the pixel domain to align with CLIP\\u2019s processing. However, the diagram uses SD rather than a VAE.\\n2.\\tCLIPDrag lacks comprehensive quantitative comparisons with other methods in image editing. The current evaluation only includes DragDiff in Figure 6, which is insufficient.\\n3.\\tThe ablation study also lacks more detailed quantitative comparisons. In Figure 8, the visual differences between (b) and (c) are subtle, making it hard to discern the impact of changes.\", \"questions\": \"The comparisons in this paper in insufficient, why only DragDiff is compared to in this paper? More comparisons should be added.\", \"flag_for_ethics_review\": \"['No ethics review needed.', 'Yes, Discrimination / bias / fairness concerns', 'Yes, Privacy, security and safety', 'Yes, Legal compliance (e.g., GDPR, copyright, terms of use)', 'Yes, Potentially harmful insights, methodologies and applications', 'Yes, Responsible research practice (e.g., human subjects, data release)', 'Yes, Research integrity issues (e.g., plagiarism, dual submission)', 'Yes, Unprofessional behaviors (e.g., unprofessional exchange between authors and reviewers)', 'Yes, Other reasons (please specify below)']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the author-reviewer discussion phase is approaching, we would like to check if our response addressed your concerns. If there are any remaining issues or if you require further clarification, please feel free to inform us.\\n\\nThanks!\"}", "{\"comment\": \"Thank you for your detailed rebuttal. Your clarification of the experimental methodology and additional data analysis effectively addresses my initial concerns.\\n\\nOne remaining concern is about the evaluation of FastDrag's inference time, as they report an inference time of less than 5 seconds.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer qFtD,\\n\\n**As the rebuttal discussion period ends in two days**, we would be grateful for your feedback on whether our responses have adequately addressed your concerns. We are ready to answer any further questions you may have.\\n\\nThank you for your valuable time and effort!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Response to Reviewer CxEc(2/2)\", \"comment\": \">w4: User Input Optimization. The authors could explore incorporating vision-language models like GPT-4V to automatically interpret the input image (as shown in the first column of Figure 4). This approach could significantly reduce user burden while maintaining the benefits of text-guided editing.\\n\\nThank you for your valuable suggestions. We agree that incorporating vision-language models, such as GPT-4V, to automatically generate prompts would make our method more user-friendly. Following your advice, we plan to implement the following modifications in our code to integrate a large vision-language model:\\n\\n1. If a text input is detected, it will be used for fine-tuning and editing.\\n2. If no text prompt is detected, the GPT-4V API will be called to generate a caption for the input image and the resulting caption will be used for subsequent operations.\\n\\nThis modification is included in the revised manuscript.(line 269)\\n\\n>Q1: Could you elaborate on how the addition of text signals specifically contributes to preserving the object's original identity during the edit? Additionally, are there specific conditions or types of text prompts that particularly enhance this preservation aspect within the CLIPDrag framework?\\n\\n**How does the addition of text signals specifically contribute to preserving the object's original identity during the edit?**\\n\\nText signals contribute to preserving the object's original identity by providing gradient information that guides both local edits and global preservation. While drag-based image editing focuses primarily on aligning local features, text-based editing inherently considers both region-specific modifications and the overall preservation of identity. The gradient of the text signals $G_g$ thus contains two components: one for editing specific regions and another for preserving global identity.\\n\\nIf the edit component is known, we can extract the identity-preservation information through decomposition (as shown in Figure 3(a)(b)). Since drag signals are focused on editing local regions, the direction perpendicular to them represents the identity-preserving direction. Consequently, as shown in Equation (6), when the edit direction of the text signals and drag signals align, the identity-preserving component of the text gradient is added to maintain global features.\\n\\n**Are there specific conditions or types of text prompts that particularly enhance this preservation aspect within the CLIPDrag framework?**\\n\\nYes, when the text prompt is simply a description of the object, we found that the gradient of the text signals and the drag signals are nearly orthogonal (i.e., $\\\\sin <G_g, G_l>$ close to 1 in Equation 6). This means that almost all of the gradient information from the text signals is used for preserving the global identity of the object, rather than altering local features.\\n\\n**References**\\n\\n[1]Shin, Joonghyuk, Daehyeon Choi, and Jaesik Park. \\\"InstantDrag: Improving Interactivity in Drag-based Image Editing.\\\" arXiv preprint arXiv:2409.08857 (2024).\\n\\n[2]Shi, Yujun, et al. \\\"LightningDrag: Lightning Fast and Accurate Drag-based Image Editing Emerging from Videos.\\\" arXiv preprint arXiv:2405.13722 (2024).\\n\\n[3]Cui, Yutao, et al. \\\"StableDrag: Stable Dragging for Point-based Image Editing.\\\" ECCV (2024).\\n\\n[4]Lu, Jingyi, Xinghui Li, and Kai Han. \\\"RegionDrag: Fast Region-Based Image Editing with Diffusion Models.\\\" ECCV (2024).\\n\\n[5]Zhao, Xuanjia, et al. \\\"FastDrag: Manipulate Anything in One Step.\\\" NeurIPS (2024).\\n\\n[6]Nie, Shen, et al. \\\"The blessing of randomness: Sde beats ode in general diffusion-based image editing.\\\"ICLR (2024).\"}", "{\"title\": \"Response to Reviewer qFtD(1/2)\", \"comment\": \"We sincerely appreciate your constructive comments to improve our paper and detail our response below.\\n\\n> The illustration in Figure 2 is unclear in terms of workflow. If CLIP guidance is applied, the latent space should ideally be converted to the pixel domain to align with CLIP\\u2019s processing. However, the diagram uses SD rather than a VAE.\\n\\nWe adopt the method from the DDIM paper to convert the latent $z_t$ into the corresponding image: the latent $z_t$ is first input into the diffusion part $\\\\epsilon_\\\\theta$ of the SD to obtain the predicted noise $\\\\epsilon_\\\\theta(z_t,t)$ and then get the predicted initial latent $\\\\hat{z_0}$ :\\n $$\\\\hat{z_0} = \\\\frac{z_t - \\\\sqrt{1-\\\\tilde{\\\\alpha_t}}\\\\cdot\\\\epsilon_\\\\theta(z_t)}{\\\\sqrt{\\\\tilde{\\\\alpha_t}}}$$\\n\\nNext, $\\\\hat{z_t}$ is converted to the pixel domain by the VAE part of SD. As you suggested, the leftmost part of Figure 2 should indeed represent the VAE, as the DDIM inversion does not involve the diffusion process. However, the subsequent projections should be attributed to SD, as both the diffusion part and the VAE are involved.\\n\\nWe recognize that this might have been misleading, and to clarify this, we have added a footnote in the manuscript.(line 215)\\n\\n\\n> W2: CLIPDrag lacks comprehensive quantitative comparisons with other methods in image editing. The current evaluation only includes DragDiff in Figure 6, which is insufficient.\\n\\nTo achieve a more comprehensive comparison of the drag edit experiment in Figure 6, we have added the results of two recent drag-based methods, InstantDrag and FastDrag.\\nThe corresponding visual results are as in Appendix E. \\n\\nBesides, here is the quantitative analysis on DragBench:\\n\\n**Mean Distance(MD), the lower the better**\", \"note\": \"the number(10,20,40,80,160) means the maximum iterations,\\n\\n| Method | 10 | 20 | 40 | 80 | 160 |\\n| ---------- | ---- | ---- | ---- | ---- | ---- |\\n| Ours | 49.5 | 45.1 | 39.2 | 35.8 | 32.3 |\\n| DragDiff | 51.3 | 50.6 | 42.9 | 38.8 | 35.1 |\\n| StableDrag | 50.8 | 48.8 | 42.3 | 39.0 | 34.8 |\\n| FastDrag | 52.1 | 51.1 | 44.4 | 41.9 | 36.9 |\\n\\n\\n\\n**Image Fedlity, the higher, the better**\\n\\n| Method | 10 | 20 | 40 | 80 | 160 |\\n| ---------- | ---- | ---- | ---- | ---- | ---- |\\n| Ours | 0.95 | 0.94 | 0.93 | 0.90 | 0.88 |\\n| DragDiff | 0.95 | 0.93 | 0.90 | 0.87 | 0.85 |\\n| StableDrag | 0.95 | 0.90 | 0.88 | 0.84 | 0.81 |\\n| FastDrag | 0.95 | 0.93 | 0.93 | 0.89 | 0.83 |\\n\\n\\n\\nWe observe that compared to DragDiff, StableDrag achieves more stable image edits, meaning the object's identity is better preserved. However, it may not always perfectly align the handle features with the target points. In contrast, FastDrag tends to perform the opposite, where the handle features are more accurately dragged to the target positions, but the stability of the image edit (and preservation of identity) is somewhat compromised.\"}", "{\"summary\": \"This paper introduces CLIPDrag, a novel image editing approach that integrates both text-based and drag-based controls to achieve more precise and flexible edits. Traditional text-based editing provides general guidance but often lacks specificity, while drag-based editing offers local control but can be ambiguous without context. CLIPDrag addresses these issues by using text as global guidance for overall image context and drag points for fine-grained, localized control. The model leverages a global-local motion supervision (GLMS) system and a fast point-tracking (FPT) method to streamline and accelerate the editing process. The paper is well written and easy to understand, the paper has comprehensive experimental results which show CLIPDrag outperforms both traditional drag- and text-based methods in accuracy and image fidelity. The detailed ablations make the hypothesis clear. The paper presents an interesting path for image editing and is theoretically grounded which should be shared within the community\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Novel Approach to combine local and global gradient: Building on text inversion methods to combine text and drag signals, CLIPDrag enables pixel-level control, offering both specific and contextually aware edits.\\n2. Efficient Convergence: The fast point-tracking method improves the editing process by guiding handle points toward their target positions faster.\\n3. Extensive Ablations: The paper has ablations for all different components such as point tracking, GLMS and controls with edit and text showing clear performance gain. \\n4. Qualitative Results: The papers presents representative set of results allowing easy intuition and help with clarity of the paper.\", \"weaknesses\": \"1. The need for identity preservation beyond citing DragDiffusion is not shared, given the improvement of base models, the intuition behind it is lacking.\\n2. Gradient accumulation is discussed assuming the latent code is continuous, formulating why the gradient manipulation will still lead to plausible images is unclear. \\n3. Assumption around using nearest neighbors in FPT moving monotonically towards target is not explained, given the optimization is highly non linear.\", \"questions\": \"1. What are some common failure cases for the editing, especially if the text and local edits conflict.\\n2. How are the number of iterations for denoising fixed for drag editing and how do the impact change with fewer to larger iterations. \\n3. One of example is shown to incorporate masks for editing, can it be explained how masks are incorporated in this framework ?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer qFtD(2/2)\", \"comment\": \">W3:The ablation study also lacks more detailed quantitative comparisons. In Figure 8, the visual differences between (b) and (c) are subtle, making it hard to discern the impact of changes.\\n\\nWe would like to provide some clarification regarding Figure 8. In this ablation study, we compare our FPT strategy with the point tracking (PT) strategy used in DragGAN/DragDiff. Because the primary goal of FPT is to accelerate the editing process, to ensure a fair comparison we have to make sure the visual differences between panels (b) and (c) are subtle, which means FPT achieves similar editing results as PT but at a faster pace. Therefore we can conclude that FPT can speed up the edit process without compromising the quality of the edits.\\n\\n\\n\\n\\n\\n> Q1: The comparisons in this paper is insufficient, why only DragDiff is compared in this paper? More comparisons should be added.\\n\\nThanks for the suggestion. To better show the performance and generalization of our CLIPDrag method, we have added the result of other drag-based methods (SDE-Drag, InstantDrag, RegionDrag, and LightningDrag), shown in Appendix A. As we can see\\uff0c all these methods have the problem of ambiguity(InstantDrag, LightningDrag) or identity-preserving(StableDrag, RegionDrag).\\n\\n**References**\\n\\n[1]Shin, Joonghyuk, Daehyeon Choi, and Jaesik Park. \\\"InstantDrag: Improving Interactivity in Drag-based Image Editing.\\\" arXiv preprint arXiv:2409.08857 (2024).\\n\\n[2]Shi, Yujun, et al. \\\"LightningDrag: Lightning Fast and Accurate Drag-based Image Editing Emerging from Videos.\\\" arXiv preprint arXiv:2405.13722 (2024).\\n\\n[3]Cui, Yutao, et al. \\\"StableDrag: Stable Dragging for Point-based Image Editing.\\\" ECCV (2024).\\n\\n[4]Lu, Jingyi, Xinghui Li, and Kai Han. \\\"RegionDrag: Fast Region-Based Image Editing with Diffusion Models.\\\" ECCV (2024).\\n\\n[5]Zhao, Xuanjia, et al. \\\"FastDrag: Manipulate Anything in One Step.\\\" NeurIPS (2024).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes a Text-Drag Editing framework to address text-based and drag-based editing limitations. To achieve this, the authors introduce global-local motion supervision that integrates the semantic aspects of text with drag guidance. They utilize a novel approach of gradient fusion, combining gradients from text and drag conditioning based on their directional alignment to provide a unified gradient.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"For the first time, the paper provides an algorithm that integrates text-guided editing with drag-guided editing. The proposed editing algorithm attempts to provide more precise global editing and reduce ambiguity in local editing. The independent guidance or supervision of text and drag is combined interestingly by disentangling the global gradient that is perpendicular and parallel to the local gradient.\", \"weaknesses\": \"1. Lack of Comprehensive Review of Diffusion-Based Image Editing Literature:\\nThe paper does not provide an adequate overview of diffusion-based image editing methods. A more thorough review of recent approaches in diffusion-based image editing is necessary to strengthen its background and situate the proposed method within the broader field. Specifically, the authors should consider discussing recent methods, such as\", \"sine\": \"SINgle Image Editing With Text-to-Image Diffusion Models (Zhang et al., CVPR 2023),\", \"paint_by_example\": \"Exemplar-based Image Editing with Diffusion Models (Yang et al., CVPR 2023),\", \"flexiedit\": \"Frequency-Aware Latent Refinement for Enhanced Non-Rigid Editing (Koo et al., ECCV 2024),\", \"and_regiondrag\": \"Fast Region-Based Image Editing with Diffusion Models (Lu et al., ECCV 2024).\\nIncorporating these examples will provide a more robust foundation and context for the reader, enabling a clearer understanding of how the current approach builds upon or diverges from existing work.\\n\\n2. Unconvincing Example in Figure 1:\\nThe example provided in Figure 1 does not convincingly illustrate the motivation of the study. The intention seems to highlight the limitations of drag-based and text-based editing approaches, yet the figure only demonstrates an instance where drag-based editing is ineffective. A more persuasive example might involve a scenario where drag-based editing produces a subtle change\\u2014such as adjusting a subject's smile\\u2014which could then be further refined by the proposed text-drag editing method to achieve a more detailed, natural effect. This change would clarify the benefits of text-drag editing over existing methods.\\n\\nAdditionally, the similarity between the proposed method's results and traditional drag-based editing in Figure 1 and the statute example raises questions about the added benefit of the proposed approach. If these similarities are not intentional, a different example or refinement of the illustrations might better demonstrate the unique advantages of the proposed method.\\n\\n3. Handling of Distinct Effect Regions in Text-Based and Drag-Based Editing\\nThe paper does not adequately explain how it manages distinct effect regions associated with text-based and drag-based editing despite these methods likely targeting different areas in an image. Clarifying how these regions are defined, integrated, or adjusted during editing would provide more specificity and improve understanding of the algorithm's functionality. This discussion is crucial to distinguish the contribution of the combined editing approach.\\n\\n4. Suggested Comparative Experiments for Method Validation\\nComparative experiments should include scenarios where text-based editing is applied after drag-based editing and vice versa to illustrate the proposed method's effectiveness better. This comparison would help demonstrate the practical advantage of combining both methods in the proposed approach and establish whether there are meaningful improvements when they are applied sequentially.\\n\\n5. Limited Novelty in Gradient Combination Approach\\nThe novelty presented in Equation (6), which combines the two editing approaches by decomposing one gradient into the other perpendicular component and then summing them, seems linear, and it is conceivable that a non-linear combination may provide a more effective result. Including alternative approaches as comparative experiments would strengthen the paper's case for its approach or help contextualize its performance relative to existing methods.\\n\\nThe paper introduces a combined text-drag editing approach but lacks a comprehensive literature review, convincing examples, clarity regarding region specificity, and evidence of sufficient novelty. Addressing these areas would help elevate the study\\u2019s contributions and clarify its position within diffusion-based image editing.\", \"questions\": \"Please provide examples with and without drag edit the following examples. \\\"The sculpture is smiling and not showing his teeth.\\\" and \\\"The sculpture is smiling and not raising his head\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer R2Bm,\\n\\n**As the rebuttal discussion period ends in two days**, we would be grateful for your feedback on whether our responses have adequately addressed your concerns. We are ready to answer any further questions you may have.\\n\\nThank you for your valuable time and effort!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for the reply, all my concerns have been fully addressed.\"}", "{\"title\": \"Response to Reviewer R2Bm(1/2\\uff09\", \"comment\": \"We sincerely appreciate your constructive comments on improving our paper. We detail our response below and have corrected the corresponding part in our revision.\\n\\n>W1: Lack of Comprehensive REview of Diffusion-Based Image Editing.\\n\\nThank you for the suggestions. We have updated the related work section in our latest manuscript version. Specifically, we have made the following improvements:\\n\\n1. A subsection was added to introduce the development of diffusion models(lines 128-141).\\n2. Included recent methods in text-based image editing, such as SINE, Paint by Example, and FlexEditlines 147-150).\\n3. Provided a more detailed overview of drag-based methods, including RegionDrag, FastDrag, FreeDrag, InstantDrag, and other recent approaches(lines 160-200).\\n\\n>W2: Unconvincing Example in Figure 1.\\n\\nThanks for the suggestion. To better show our motivation, We have made some modifications in Figure 1. Since the first example has already demonstrated the function of text signals, for the second case of the sculpture, we try to show the effect of drag signals. Specifically, we keep the text prompt \\\"The sculpture is smiling\\\" unchanged, by changing the position of drag points, CLIPDrag can control the extent of the smile: showing or not showing the teeth. We hope this modification will help readers understand our motivation.\\n\\n\\n>W3: Handling of Distinct Effect Regions in Text-Based and Drag-Based Editing.\\n\\n**How are effect regions defined?** \\n\\nThe effect regions in drag-based editing are clearly defined by the patch features of handle points and target points. However, the effect regions of text signals are not explicitly defined. Instead, they correspond to positions with high attention scores in the cross-attention map of the U-Net. For instance, in the text prompt \\\"A photo of a smiling woman,\\\" the effect regions would include the woman's mouth (related to 'smiling') and other areas like the face or eyes (related to 'woman').\\n\\nIn our method, we categorize all effect regions into two types:\\n(i) Edit regions: These are the areas where we aim to change features, such as regions around the drag points and tokens like \\\"smiling.\\\"\\n(ii) Identity regions: These are the areas where we want to preserve features, such as the regions corresponding to \\\"woman.\\\"\\n\\n\\n**How effect regions are integrated or adjusted?**\\n\\n\\nActually, we integrated these effect regions through gradient fusion, but the Gradient Fusion can be explained from the perspective of regions. The effect regions of the edit component of $G_g$ align with the edit regions, and similarly for the identity component.\\n\\nWhen the edit regions of the text signal are consistent with the drag points' effect region, we use the drag signals to modify the image and the text signals to preserve identity, as shown in Figure 3(a). When the edit regions of the text signals contradict the drag points' effect regions, we use the edit regions of the text signals to adjust the effect regions of the drag signals. The integration and adjustment of these regions are achieved through gradient fusion.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the author-reviewer discussion phase is approaching, we would like to check if our response addressed your concerns. If there are any remaining issues or if you require further clarification, please feel free to inform us.\\n\\nThanks!\"}", "{\"title\": \"Response to Reviewer RgXk(2/2\\uff09\", \"comment\": \"> Q1: what are some failure cases when the text and local edit conflicts?\\n\\nThank you for raising this concern. While our primary motivation is to use the text prompt to complement the local drag edit\\u2014ensuring that these two signals are consistent in most cases\\u2014we also explored scenarios where these guidance signals conflict. Based on our experiments, we observed two types of potential editing outcomes:\\n\\n1. Ambiguity or Neutralization at Moderate Text Strength ($\\\\lambda$<10). When the strength of the text signal is moderate, ambiguity can arise if the text guidance fails to provide accurate information. In other cases, the text signal may counteract the drag operation, effectively neutralizing its effect. For instance, in Figure 4(e), the drag instruction combined with the prompt \\\"Make the heel of the shoes higher\\\" might yield a result akin to \\\"Make the heel not so high.\\\"\\n\\n2. Implausible Results at High Text Strength($\\\\lambda=100$). When the text guidance is excessively strong, it overwhelms the denoising process, making it difficult to handle the perturbation. This can result in implausible or unrealistic edits.\\n\\nYou can see the corresponding example in Appendix D.\\n\\n\\n\\n\\n\\n> Q2: How are the number of iterations for denoising fixed for drag editing and how does the impact change with fewer to larger iterations?\\n\\n\\nThank you for your questions. Below, we address them individually:\\n\\n\\n**How are the (max) number of iterations fixed?**\\n\\nThe number of iterations is a hyperparameter in drag-based methods, representing the maximum number of optimization steps allowed during editing. If the handle points fail to reach the target positions within this limit, the model halts latent optimization and directly denoises the latent to generate the final image. In previous methods, this value was typically set to 80; however, in our work, we increased it to 2000 to explore the effects more comprehensively.\\n\\n**How does the impact change with fewer to larger iterations?**\\n\\nWhen the maximum number of iterations is larger, handle points are more likely to reach the target positions. However, this comes at the potential cost of image quality due to error accumulation during the denoising process. \\n\\nIt is about the error accumulation in the denoising process. During each iteration, drag methods optimize the latent to move the features of handle points to target points slightly (Motion Supervision) and then relocate the position of new handle points in the feature map of the optimized latent(Point Tracking). This iterative process can be thought of as applying small perturbations to the latent representation, and as the iterations increase, the perturbation accumulates. Therefore, if the handles reach targets within small iterations, the perturbation is minor and can be handled by the denoising process due to the robustness of the diffusion model. Consequently, the edited image is plausible and the semantics are changed precisely.\\n\\nHowever, in many cases(like the example of Fig. 8), the handles may move in the wrong direction or even form a loop, which means more iterations are needed. Then there is a trade-off between the semantic revision and image fidelity: \\n\\n1. Fewer iterations: Handle points may fail to reach the targets, leading to incomplete semantic revisions.\\n2. Larger iterations: While achieving better alignment between handle and target points, the excessive perturbation can compromise image fidelity.\\n\\n\\n\\n> Q3: One of example is shown to incorporate masks for editing, can it be explained how masks are incorporated in this framework?\\n\\nThanks for the suggestion. The mask is used to specify an editable region to achieve the desired editing and is specified by the user. If a mask is given, a related term is added to $L_{ms}$ in Equation (5) :\\n$$||(z^k_t - sg(z^0_t)) \\\\odot (\\\\mathbb{1}-M)||_1$$\\nwhere, $z^k_t$, $z^0_t$ are the t-th and 0-th step latent, $M$ is the corresponding mask. As we can see, this term encourages the unmasked area to remain unchanged in the motion supervision phase. (line 284)\"}", "{\"summary\": \"This manuscript introduces CLIPDrag, a novel method that integrates text-based and drag-based signals for image editing, leveraging both for precise control and reduced ambiguity. The method utilizes Global-Local Motion Supervision (GLMS) and Fast Point Tracking (FPT) to enhance the image editing process, aiming to outperform existing methods by combining the strengths of both editing approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **Innovative Integration**: The paper presents a compelling approach by combining text and drag inputs to guide image editing. This dual-input strategy addresses the limitations of each method when used independently, potentially offering more controlled and precise edits.\\n2. **Technical Depth**: The introduction of GLMS shows a deep understanding of the challenges in image editing, particularly in handling the complexities associated with combining different types of editing signals.\\n3. **Experimental Validation**: Extensive experiments, including ablation studies, demonstrate the effectiveness of CLIPDrag against state-of-the-art methods. The results are well-presented and support the claims of improved performance in terms of both precision and ambiguity resolution.\", \"weaknesses\": \"1. **Novelty of FPT**: The paper should acknowledge that searching for handle points along the path from handle points to targets has been previously explored in methods like DragGAN and DragDiffusion. To clarify the unique contributions of FPT, the authors should provide side-by-side comparisons of point searching strategies, highlighting any improvements or distinctions in their approach.\\n\\n2. **Comprehensive Comparisons**: While the paper compares CLIPDrag with some existing methods, it would benefit from more extensive comparisons or discussions with recent techniques such as InstantDrag, LightningDrag, StableDrag, and RegionDrag. Although these methods may use different training approaches or inputs, incorporating their text-supervision signals could demonstrate CLIPDrag's ability to address ambiguities present in these methods, showcasing its generalizability. Additionally, these methods should be thoroughly discussed in the related work section to provide a more complete context.\\n\\n3. **Performance Metrics**: The paper should include a discussion or report on inference time comparisons. This information is crucial for understanding the practical applicability of CLIPDrag in real-world scenarios and how it compares to other methods in terms of computational efficiency.\\n\\n4. **User Input Optimization**: While the text prompt is provided in DragBench, it's worth noting that the original DragGAN paper did not require text input. The additional text prompt in CLIPDrag may increase user effort. To address this, the authors could explore incorporating vision-language models like GPT-4V to automatically interpret the input image (as shown in the first column of Figure 4). This approach could significantly reduce user burden while maintaining the benefits of text-guided editing.\", \"questions\": \"In the context of drag-based editing where maintaining the original identity of objects while manipulating specific features is a major challenge, your manuscript suggests the integration of text-based inputs to guide the editing process. Could you elaborate on how the addition of text signals specifically contributes to preserving the object's original identity during the edit? Additionally, are there specific conditions or types of text prompts that particularly enhance this preservation aspect within the CLIPDrag framework?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response to All Reviewers.\", \"comment\": [\"We appreciate their suggestions and comments and carefully revise our paper accordingly. Our major revisions include the following four aspects:\", \"1. In the Introduction, we revised the second example in Figure 1 for better illustration.\", \"2. In the Related Work, we included the discussion of diffusion model(lines 128-141) and provided a more comprehensive discussion of drag-based methods(lines 160-200).\", \"3. In the Experiments:\", \"We added four more baselines in the drag-text edit setting(line 355).\", \"We added explanations of the pipeline diagram(line 214,269).\", \"We compared two more methods in the quantitative experiment of drag-based editing(Figure 6).\", \"4. In the Appendix:\", \"Appendix A shows more drag-based results on text-drag edit.\", \"Appendix B shows four more examples to explain the motivation.\", \"Appendix C shows results when applying text and drag guidance sequentially.\", \"Appendix D shows examples when text and drag signals conflict\", \"Appendix E shows examples of StableDrag, and FreeDrag on drag-based editing.\", \"Appendix F shows the inference time comparisons.\", \"Please note that we colorized (blue) the revisions in the new version of the paper.\"]}", "{\"title\": \"Response to Reviewer RgXk(1/2\\uff09\", \"comment\": \"We thank the reviewer for acknowledging the novelty of the proposed method the promising experimental results, and the organization of the paper presentation. We will respond to your concerns one by one as follows:\\n\\n> W1: The need for identity preservation beyond citing DragDiffusion is not shared.\\n\\nWe appreciate the reviewer\\u2019s feedback and agree that providing the intuition behind the identity preservation step will enhance the clarity of the paper.\\n\\nIdentity preservation is crucial because drag-based methods primarily focus on regional features during image editing\\u2014i.e., they optimize latent variables based on differences between localized feature patches. This localized focus can inadvertently compromise global information, such as the object\\u2019s structure. By fine-tuning a pre-trained diffusion model in advance, identity preservation helps the model maintain the image\\u2019s overall structural integrity throughout the editing process.\\n\\nThis rationale is further supported by the ablation study by DragDiffusion. The study demonstrates that while image editing without identity preservation can achieve semantic modifications, it often alters global attributes such as the background. Consequently, identity preservation has become a standard step in drag-based methods, including the one proposed in this paper.\\n\\n\\n> W2: why the gradient manipulation will still lead to plausible images.\\n\\nThank you for the question. The key reason for this behavior is the robustness of diffusion models to noise. The result of gradient manipulation is the modification of the optimized latent. Each gradient manipulation corresponds to a small perturbation of the original latent, and these perturbations accumulate across iterations.\\n\\nWhen the handle points reach the target points within just a few iterations, the perturbation remains minor, allowing the denoising process to effectively handle it and produce a plausible image. However, if the handles take more iterations to reach the targets, the accumulated perturbation becomes larger, which may exceed the diffusion model's ability to handle it, potentially affecting image quality.\\n\\n> W3: why the FPT strategy can make handles move monotonically towards targets.\\n\\nBoth our FPT strategy and DragDiff\\u2019s point tracking (PT) mechanism use a nearest-neighbor search to update the positions of handles. However, as you noted, this approach does not inherently ensure that handle points will move monotonically toward target points. This limitation arises because the na\\u00efve point tracking method searches for new handles within a square patch centered on the current handle positions, as defined by:\\n\\n$$ h^{k+1} _ i = \\\\arg\\\\min _ {a\\\\in\\\\Omega(h^k _ i,r _ 2)} || F_q(z^{k+1} _ t)-F_{h^k _ i}(z^0 _ t)|| $$\\n\\nTo address this issue, we introduced a simple constraint on the search patch: $dist(a,g_i)< dist(h^k_i,g_i)$. This constraint ensures that our FPT method only considers candidate points that are closer to the target points, effectively converting the optimization into a monotonic process:\\n\\n$$ h^{k+1} _ i = \\\\arg\\\\min _ {a\\\\in\\\\Omega(h^k _ I, r _ 2) \\\\And dist(a,g _ i)< dist(h^k _ i,g _ i)} || F_q(z^{k+1} _ t)-F_{h^k _ i}(z^0 _ t)|| $$\\n\\nWith this adjustment, the new handles move closer to the target points after each iteration, as candidates farther from the targets are excluded from consideration.\"}", "{\"title\": \"Response to Reviewer CxEc(1/2)\", \"comment\": \"Thanks for the constructive comments. We try to address your concerns point by point as follows:\\n\\n> W1:Novelty of FPT: The paper should acknowledge that searching for handle points along the path from handle points to targets has been previously explored in methods like DragGAN and DragDiffusion. To clarify the unique contributions of FPT, the authors should provide side-by-side comparisons of point tracking strategies, highlighting any improvements or distinctions in their approach.\", \"the_principle_underlying_all_point_tracking_methods_is_consistent\": \"they use a nearest-neighbor search algorithm to update the positions of handle points. The key distinction between our FPT strategy and the approach used in DragGAN/DragDiffusion lies in the search area, as illustrated in Figure 3(c).\\n\\nSpecifically, DragGAN/DragDiffusion searches for new handle points within a square patch centered around the current handle positions:\\n\\n$$h^{k+1} _ i = \\\\arg\\\\min _ {a\\\\in\\\\Omega(h^k _ i,r _ 2)} || F_ q(z^{k+1} _ t)-F _ {h^k _ i}(z^0 _ t)|| $$\\n\\nHowever, this approach has notable limitations, as shown in Figure 8. Handle points may move in the wrong direction or even form a loop, leading to inefficient optimization and requiring more iterations to complete the drag edit.\\n\\nTo address this issue, we introduced a simple constraint on the search area: $dist(a, g _ i)< dist(h^k_i,g _ i)$. This ensures that our FPT method only considers candidate points that are closer to the target points, effectively transforming the optimization into a monotonic process:\\n\\n$$h^{k+1} _ i = \\\\arg\\\\min _ {a\\\\in\\\\Omega(h^k _ i,r _ 2) \\\\And dist(a,g _ i)< dist(h^k _ i,g _ i)} || F_q(z^{k+1} _ t)-F _ {h^k _ i}(z^0 _ t)|| $$\\n\\nAs demonstrated in Figure 8(d), our FPT strategy achieves similar editing results with significantly fewer iterations, thereby accelerating the editing process.\\n\\n\\n\\n>W2: More extensive comparisons or discussions with recent techniques such as InstantDrag, LightningDrag, StableDrag, and RegionDrag. Additionally, these methods should be thoroughly discussed in the related work section to provide a more complete context.\\n\\nThanks for the constructive suggestions. To better demonstrate CLIPDrag's ability to address the ambiguity phenomenon, we give the result of InstantDrag, LightningDrag, StableDrag, and RegionDrag in our appendix(Appendix A). As we can see\\uff0c all these methods have the problem of ambiguity(InstantDrag, LightningDrag) or identity-preserving(StableDrag, RegionDrag).\\n\\n\\n\\nAlso, we revise the content of the related work and discuss these methods in detail.\\n\\n\\n>W3: Performance Metrics. The paper should include a discussion or report on inference time comparisons. \\n\\nWe understand the concern and report the average inference time of CLIPDrag, DragDiff, SDE-Drag and FreeDrag as follows:(the result is calculated on a single 3090 GPU by averaging over 100 examples sampled from the DragBench.) This part is also included in Appendix F.\\n\\n| Method | Inference Time |\\n| -------------- | -------------- |\\n| DragDiff | 80.3s |\\n| FreeDrag | 69.4s |\\n| FastDrag | 75.5s |\\n| SDE-Drag | 70.0s |\\n| StableDrag | 72.3s |\\n| CLIPDrag(Ours) | 47.8s |\\n\\nAs shown in the table, our method is significantly faster than previous works. This is because the inference time is directly correlated with the number of optimization iterations. In CLIPDrag, the text guidance helps to indicate the correct optimization direction, while our FPT strategy prevents handles from moving in the wrong direction or forming loops. Both of these factors reduce the number of iterations required to move the handle points to their target positions, resulting in faster editing speeds.\"}", "{\"title\": \"Response to Reviewer R2Bm(2/2)\", \"comment\": \">W4: Suggested Comparative Experiments for Method Validation.\\n\\nWe understand the reviewer\\u2019s concern and would like to clarify why we did not consider applying the two types of guidance sequentially or in the opposite order.\\n\\n**Why not text-based edit after drag-based edit?**\\n\\nThe goal of our paper is to perform drag-based editing while eliminating ambiguity with the help of a text prompt. If we were to apply drag-based editing first, the optimization direction of the latent could be incorrect, meaning that the ambiguity problem would occur before the text guidance is applied.\\n\\nFor example, consider Figure 4(d): If the drag signal is applied first, the model would edit the image towards the direction of \\\"Enlarging the woman's face.\\\" Consequently, the prompt \\\"make the woman smile\\\" would modify the image based on an incorrect initial edit, leading to results that are not consistent with the user\\u2019s intention.\\n\\nAdditionally, if text-based editing were applied after the drag operation, the position of the target points would be altered. As a result, after the text-based edit, the final positions of the handles would no longer align with the targets, which contradicts the core principle of drag-based methods.\\n\\n\\n**Why not drag-based edit after text-based edit?**\\n\\n\\n This approach would also introduce ambiguity. When text-based editing is applied first, the position of the handle points is altered. This alteration can mislead the subsequent drag operation. For example, in Figure 4(d), after applying the text guidance \\\"make the woman smile\\\", the new handle points might end up at the top right of the target points. As a result, when the drag operation tries to move the handles to the targets, it could imply a semantic change like \\\"Make the woman not smile,\\\" which contradicts the original intent.\\n\\nIn addition to this analysis, we also give some results in Appendix C when the two signals are applied sequentially.\\n\\n>W5: Limited Novelty in Gradient Combination Approach.\\n\\n\\nWe completely agree that a non-linear combination of local and global gradients could improve performance, and designing better gradient fusion strategies is a promising direction for future work. In fact, our gradient fusion strategy is implemented as a plug-and-play method in the official code, making it easy to experiment with different fusion approaches.\\n\\nHowever, we would like to emphasize that the main focus of our work is to address the ambiguity problem in drag-based image editing. Since no prior work has explored incorporating text signals in this context, our primary contribution is to propose a novel idea for combining these two signals from the perspective of gradients. The gradient combination approach (GLGF) was introduced to demonstrate that gradients can serve as an effective medium for merging the two signals. This is why we chose not to delve into the specifics of gradient fusion design in this paper.\\n\\n>Q1:provide the examples.\\n\\nYes, according to your suggestion, we have added four results with and without drag edit using the prompt: \\\"The sculpture is smiling and not showing his teeth.\\\" and \\\"The sculpture is smiling and not raising his head\\\". These results are included in Appendix B.\"}", "{\"metareview\": \"This paper proposes an image editing method called CLIPDrag, which aims to combine text and drag signals for unambiguous manipulations on diffusion models.\\n\\nThis paper received mixed initial reviews with ratings of 8, 6, 5, and 3. After the rebuttal, one reviewer increased the score from 6 to 8, while others maintained their initial ratings. The final ratings remained varied at 8, 8, 5, and 3. It should be noted that the reviewers who gave negative ratings of 3 and 5 were unresponsive during the rebuttal and discussion phases. The area chair had encouraged their involvement but did not get further feedback. As a result, the area chair considered lowering the weight of their reviews. \\n\\nDespite the unresponsiveness of those two reviewers, the area chair felt that most of the concerns raised were adequately addressed by the authors, for example,\\n* Adding subsections to review diffusion models, text-based image editing methods, and drag-based methods;\\n* Modifying Figure 1 to illustrate the idea of Text-Drag Edit;\\n* Explaining how are effect regions defined and integrated;\\n* Clarifying why not considering applying the two types of guidance sequentially.\\n\\nOn the other hand, the reviewer who increased the rating from 6 to 8 acknowledged the detailed rebuttal from the authors, stating that *\\\"[Y]our clarification of the experimental methodology and additional data analysis effectively addresses my initial concerns.\\\"* After further discussion, the reviewer's additional concern regarding the inference time was also fully addressed.\\n\\nGiven that two reviewers expressed definitive support for the paper, the area chair concurs with their suggestions and recommends accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers pointed out concerns about the FPT strategy, such as\\n* Why the FPT strategy can make handles move monotonically towards targets?\\n* To clarify the unique contributions of FPT, the authors should provide side-by-side comparisons of point searching strategies, highlighting any improvements or distinctions in their approach.\\n\\nThe authors are encouraged to include the discussions in Sec. 3.3 of the paper.\"}", "{\"title\": \"Follow-up from reviewer CxEc.\", \"comment\": \"Thanks for your concern. This is because the setting in our paper is: **Making sure all handle points can reach target points** (lines 320,321). So we have a larger **maximum iteration number** (2000), while in previous methods like DragDiff/StableDrag, this hyper-parameter is set to 80, which means the optimization will stop even if handle points are far away from the target points.\\nSo for fairness, although FastDrag is a one-step editing method, it needs to run its algorithm many times if the handle points are checked to be far away from the target points, which significantly increases the edit time.\\n\\nWe hope this answers your questions. Thank you again for your valuable feedback, and please don\\u2019t hesitate to let us know if there are follow-up questions.\"}" ] }
2HdZPEQUig
Efficient Object-Centric Learning for Videos
[ "Rickard Maus", "Atsuto Maki" ]
This paper introduces a method for efficiently learning video-level object-centric representations by bootstrapping off a pre-trained image backbone, which we term Interpreter. It presents a novel hierarchical slot attention architecture with local learning and an optimal transport objective that yields fully unsupervised video segmentation. We first learn to compress images into image-level object-centric representations. Interpreter then learns to compress and reconstruct the object-centric representations for each frame across a video, allowing us to circumvent the costly process of reconstructing full frame feature maps. Unlike prior work, this allows us to scale to significantly longer videos without resorting to chunking videos into segments and matching between them. To deal with the unordered nature of object-centric representations, we employ Sinkhorn divergence, a relaxed optimal transport objective, to compute the distance between unordered sets of representations. We evaluate the resulting segmentation maps on video instance segmentation in both realistic and synthetic settings, using YTVIS-19 and MOVi-E, respectively. Interpreter achieves state-of-the-art results on the realistic YTVIS-19 dataset and presents a promising approach of scaling object-centric representation learning to longer videos.
[ "Object-Centric Learning", "Representation Learning", "Video", "Segmentation", "Video Object Segmentation" ]
https://openreview.net/pdf?id=2HdZPEQUig
https://openreview.net/forum?id=2HdZPEQUig
ICLR.cc/2025/Conference
2025
{ "note_id": [ "p4TNJBWIQU", "UihS1adkx7", "IEkyHCP5rV", "HLF9baavR7", "Cnc3oMuLTZ", "5nvCGfFnxk" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1730557298612, 1732540190288, 1730701087147, 1730648073948, 1730587799437, 1732540138390 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10066/Reviewer_ipv7" ], [ "ICLR.cc/2025/Conference/Submission10066/Authors" ], [ "ICLR.cc/2025/Conference/Submission10066/Reviewer_CfWA" ], [ "ICLR.cc/2025/Conference/Submission10066/Reviewer_cf9z" ], [ "ICLR.cc/2025/Conference/Submission10066/Reviewer_jWcK" ], [ "ICLR.cc/2025/Conference/Submission10066/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a hierarchical slot attention approach for handling temporal context in video segmentation. To achieve this, it incorporates a video-level slot that aggregates temporal information across all frame-level slots. Additionally, to smoothly apply the video-level slot for video representation prediction, the paper proposes an attention map propagation technique. For loss calculation, Sinkhorn Divergence is utilized. With these components, the proposed model, Interpreter, achieves state-of-the-art performance on the YTVIS-19 dataset in terms of mIoU and on the MOVi-E dataset in terms of FG-ARI.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper introduces a straightforward design for unsupervised video object segmentation using slot attention. This architecture demonstrates remarkable performance on the real-world dataset YTVIS-19.\", \"weaknesses\": \"**1. Limited Architectural Contribution**\\n\\nThe primary contribution of Interpreter lies in its hierarchical architecture, a concept previously introduced in the video instance segmentation method, VITA [1]. Similar to VITA, Interpreter employs a hierarchical design where video-level queries aggregate temporal context from frame-level queries. Apart from differences in target tasks and objective functions, the overall architectural design remains largely similar to that of VITA.\\n\\n**2. Insufficient Experimental Support**\\n\\nAdditional experiments are necessary to validate the proposed methods. In Tables 1 and 2, as noted by the authors, results on the MOVi-E dataset show a trend that significantly deviates from results on YTVIS-19. Section 4.3 discusses specific cases with limited examples to interpret these discrepancies. However, since these two sets of results exhibit opposite trends, it remains challenging to conclude that the proposed method is generally applicable. To address these contradictions, further analysis\\u2014such as statistical investigation\\u2014would be beneficial. Additionally, only two ablation studies are presented, even for critical hyperparameters, and key factors like the effect of varying the number K are not explored.\\n\\n**3. Limited Readability**\\n\\nThe overall structure of the paper hinders readability and comprehension. In particular, the experimental section is challenging to follow, as it combines main experimental results, qualitative findings, and ablation studies within the same section, making it difficult to discern the purpose and implications of each individual experiment. Furthermore, Figures 3 and 4 display segmentation results without the original samples, which complicates the reader\\u2019s ability to fully interpret the analysis.\\n\\n[1] Heo, Miran, et al. \\\"Vita: Video instance segmentation via object token association.\\\" Advances in Neural Information Processing Systems 35 (2022): 23109-23120.\", \"questions\": \"**Q1. What are the main architectural differences between Interpreter and VITA?**\\n\\nIn terms of architectural design, what distinguishes Interpreter from VITA? Could you specify the unique aspects of Interpreter\\u2019s approach, especially in how it addresses hierarchical design and temporal context aggregation?\\n\\n**Q2. Is there any statistical basis for the analysis beyond observations on a few samples?**\\n\\nBeyond observations from a limited set of examples, does the paper offer any statistical foundation for its analysis? For instance, is there evidence that specific factors like object movement or motion changes hinder the performance of slot-based approaches? A more comprehensive investigation into such cases could help substantiate the findings.\\n\\n**Q3. Are additional ablation studies provided to validate the proposed method\\u2019s effectiveness?**\\n\\nApart from the current experiments, are there further ablation studies examining key factors, such as the influence of varying the number of slots (K) or the impact of end-to-end fine-tuning in the second stage?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors present Interpreter, a VOS method based on hierarchical slot attention that consists of separate image-level and video-level processing. To compute the image-level attention slots, Interpreter uses implicit slot attention to learn object-centric features from an image-trained backbone. Implicit slot attention is also used at the video level to learn object representations across frames, relying on the Sinkhorn divergence to learn correspondence between sets of slots across different frames. Experiments are conducted on the YTVIS-19 and MOVi-E datasets to compare Interpreter to other slot-attention-based methods. An ablation study is carried out on YTVIS-19 to determine the effect of number of slots and clustering distance threshold.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Significant qualitative results are included, including failure cases.\\n2. The technical contribution appears to be novel for VOS.\", \"weaknesses\": \"1. Issues with Experimental Evaluation.\\n\\n a. The paper claims Interpreter is a VOS method but performs its evaluation using a Video Instance Segmentation (YTVIS-19) and Video Semantic Segmentation dataset (MOVi-E). If Interpreter is a VOS method then it should be evaluated on VOS datasets such as DAVIS [1], and compared to the state of the art VOS methods, in order to assess the contribution of the work.\\n\\n b. The paper claims Interpreter targets long videos but the length of the videos in the datasets chosen are on the order of seconds not minutes, making it difficult to verify this claim.\\n\\n c. Qualitative results are included for Interpreter but not for competing methods, making it difficult to assess the performance quality of Interpreter.\\n\\n2. Exposition of method lacks mathematic details. In particular, Sinkhorn Divergence is never defined mathematically and the final loss function is not included. This makes it difficult to understand the method beyond a surface level.\\n\\n3. Related works section lacks mention of query-key-value retrieval-based methods such as STM [2] for VOS, which is a major and important direction for the task. The motivation for using slot attention based methods is not clear.\\n\\n4. Writing is not direct. For example, it should be explained why Interpreter performs \\\"unexpectedly well\\\" in l. 471 and what is \\\"surprising\\\" in l. 474. As another example, the phrase in l. 339 \\\"The last row shows a cute cat.\\\" seems out of context. \\n\\n[1] \\\"The 2017 DAVIS Challenge on Video Object Segmentation\\\". J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbel\\u00e1ez, A. Sorkine-Hornung, and L. Van Gool. arXiv:1704.00675, 2017.\\n[2] \\\"Video Object Segmentation using Space-Time Memory Networks\\\". Seoung Wug Oh, Joon-Young Lee, Ning Xu, Seon Joo Kim. ICCV, 2019.\", \"questions\": \"Please address the points brought up in the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method called Interpreter, aimed at efficient, unsupervised video-level object-centric representation learning. Interpreter introduces a hierarchical slot attention architecture where image-level representations are compressed first, then video-level representations are derived from them using a relaxed optimal transport objective, Sinkhorn Divergence, for unsupervised segmentation. This approach circumvents the typical computational load associated with reconstructing frame-level feature maps, allowing Interpreter to process longer videos effectively. Experiments show that Interpreter achieves strong results on the YTVIS-19 dataset and synthetic datasets like MOVi-E.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper is well-written and structured, with clear explanations of the novel hierarchical slot attention mechanism and its advantages in scaling object-centric representation to longer videos.\\n\\n2. The approach of per-frame slot attention followed by video-level slot attention is both novel and elegant, allowing the model to handle temporal dependencies across the entire video without chunking.\", \"weaknesses\": \"1. The second-level slot number does not stay under ten (8 for YTVIS-19), which contradicts the paper\\u2019s claim of handling extensive temporal context effectively (L101).\\n\\n2. Results on the DAVIS-17-unsupervised dataset are absent, and performance on metrics (FG-ARI and mIOU) shows considerable variation across different benchmarks, suggesting limitations in the model\\u2019s generalizability.\\n\\n3. The discussion around FG-ARI and mIoU metrics lacks sufficient depth, especially in explaining the model\\u2019s inability to perform consistently across both benchmarks. It remains unclear why the method does not yield strong outputs on both metrics concurrently\\u200b.\", \"questions\": \"1. Could the authors clarify the significant discrepancy observed between FG-ARI and mIoU performance in this model? In my view, both FG-ARI and mIoU should be high if object segmentation remains accurate over time.\\n\\n2. Have the authors considered using only frame-wise slot representations at the second level (where the same slot index per frame corresponds to the same object), rather than applying slot attention at the video level? What would be the implications of this approach?\\n\\n3. To what extent is the DINOv2 feature extractor crucial for this model? Would the method fail without it?\\n\\n4. Why is a different number of second-level slots used for the YTVIS-19 and MOVi-E datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work introduces an unsupervised approach to object-based segmentation of video sequences. The approach comprises two stages. The first stage follows previous work and trains an autoencoder that decomposes an input image into a set of slot tokens. In the second stage, another autoencoder learns to represent the set of slot tokens, extracted from each frame in a video, with a more compact set of video-level slot tokens. To train this autoencoder, the approach leverages Sinkhorn divergence, which establishes a (relaxed and differentiable) correspondence between the set of predicted tokens and the input set. The final output -- a temporally consistent segmentation -- is the result of attention propagation, which relies on the similarity between image-specific slots and the video-level slots. The results on YouTube-VOS and synthetic MOVi-E demonstrate impressive segmentation quality, but the quantitative results are a bit mixed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"I like the work\\u2019s technical contribution, the Sinkhorn divergence. However, I\\u2019d encourage the authors to include more details (what\\u2019s behind SH function in (2)).\", \"The approach is technically sound. It makes a lot of sense to represent a video with a compact set of slot tokens and computing the segmentation through attention propagation, as described in ll. 206-215.\", \"Fig. 1 provides a great overview for the approach, which helps in following the technical details. (Remark: It could\\u2019ve been more compact and use vectorised graphics).\", \"I enjoyed that the text does not stop after the mixed quantiative results, but instead makes a good effort to analyse and explain them.\"], \"weaknesses\": [\"The exposition, especially the technical part, feels way to congested. I would have preferred more technical details in Sec. 3.2 than the Figures 2-4 loading two full pages, which feel a bit like space-fillers. For example, the work does not really explain how the Sinkhorn divergence is computed in Eq. (2), nor does it really explain the architecture of the encoders/decoders in the two stages of training, etc.\", \"The results are obviously mixed: On YouTube-VOS the approach discriminates between the objects well, but falls behind on foreground-background segmentation and vice-versa on MOVi-E. I like that the text discusses these weaknesses, but the analysis would have been more convincing with more informative qualitative examples (including the ground truth and the output from previous work).\", \"The title falls short on the promise of efficiency. Perhaps the method is efficient, but I did not find convincing arguments or corresponding experiments to support this point.\", \"The experiments are bit too brief. I would be curious to see the approach with another pre-trained backbone and dataset (e.g. DAVIS).\"], \"questions\": [\"How is the Interpreter more efficient tham previous work? (e.g. STEVE, BA, VideoSAUR).\", \"How does the model compare to previous work if normalised for the pre-trained architecture? E.g. BA uses ViT-s/8, while ViT-B/14 is used here.\", \"Interpreter is developed with long videos in mind. What is the definition of \\u201clong\\u201d in this work and how is this reflected in the experimental setup?\", \"How would approach compare to move naive objectives, e.g. matching the slots with Hungarian matching and minimising the corresponding distance (e.g. L1/L2)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We kindly thank the reviewers for providing their valuable feedback.\\n\\nAfter reviewing the provided criticisms, we agree that the paper needs more work. In particular, the paper should provide a more rigorous exposition of Sinkhorn, model architecture and training, and a better motivation for the methods efficiency. Additionally, more ablations (e.g. testing different backbones), and quantitative results should be added to strengthen the existing mixed results, with a more thorough investigation into the proposed method's points of failure. Further comparisons with existing methods should also be added, in addition to more clarity showing how Interpreter is distinguished from prior works.\\n\\nWe once again thank the reviewers for their valuable and thorough feedback, and for highlighting both the strengths and weaknesses of our submission.\"}" ] }
2HN97iDvHz
LLM-Powered Predictive Decision-Making for Sustainable Data Center Operations
[ "Hanzhao Wang", "Jingxuan Wu", "Yu Pan", "Yumeng Li", "Yansong Wang", "Helang Liu", "Fuqiang Wang", "Guanting Chen" ]
The growing demand for AI-driven workloads, particularly from Large Language Models (LLMs), has raised concerns about the significant energy and resource consumption in data centers. This work introduces a novel LLM-based predictive scheduling system designed to enhance operational efficiency while reducing the environmental impact of data centers. Our system utilizes an LLM to predict key metrics such as execution time and energy consumption from source code, and it has the potential to extend to other sustainability-focused metrics like water usage for cooling and carbon emissions, provided the data center can track such data. The predictive model is followed by a real-time scheduling algorithm that allocates GPU resources, aiming to improve sustainability by optimizing both energy consumption and queuing delays. With fast inference times, the ability to generalize across diverse task types, and minimal data requirements for training, our approach offers a practical solution for data center scheduling. This framework demonstrates strong potential for advancing sustainability objectives in AI-driven infrastructure. Through our collaboration with a data center, we achieved a 32% reduction in energy consumption and a 30% decrease in waiting time.
[ "Large Language Models", "Generative AI", "Sustainability", "Real-time decision-making" ]
https://openreview.net/pdf?id=2HN97iDvHz
https://openreview.net/forum?id=2HN97iDvHz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pD5E8SsWg2", "8ONzfyE31V", "7WxANba4D7", "5LG4Q0N5jG" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1732778420216, 1730079097748, 1730715426296, 1730675832283 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13409/Authors" ], [ "ICLR.cc/2025/Conference/Submission13409/Reviewer_vdc1" ], [ "ICLR.cc/2025/Conference/Submission13409/Reviewer_X2L8" ], [ "ICLR.cc/2025/Conference/Submission13409/Reviewer_Hxm4" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a compound-AI based technique to predict the performance, energy consumption, and other key operational metrics for data center operations. The paper motivates the problem well, showing how a pre-trained LLM can possibly be used to predict the performance of workloads on different hardware types. This can be further used in scheduling of workloads on devices. The authors then device a scheduling optimization problem along with two algorithms to show how such a deployment can help datacenter operators. The authors run simulations based on a dataset acquired from a production system from a small datacenter over a period of about 2 months. The dataset has an aggregate task counts of less than 200 tasks. They adapt the pretrained model using 500 source codes. To label the data (and run their experiments), the authors use two GPU models A100 and A6000.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. Thank you for submitting your paper to ICLR. I enjoyed reading the paper as it is well written generally.\\n2. The paper covers an important topic that many datacenter operators care about, how to better utilize accelerator resources.\\n3. The paper uses data from a data center and I believe is the first paper to suggest a compound AI system with two LLMs to assign GPU resources.\", \"weaknesses\": \"I think the paper however has several shortcomings that I will aim to detail next. The paper is neither strong on the systems side nor on the ML side, and this is the main shortcoming in my opinion. I will detail what I mean by that in the next points:\\n1. To start with, I am not entirely sure if for the scale of the problem you define, the LLM is doing much. For an ICLR submission, I think It would have been better to focus more on the ML side of the problem and not the decisions making. After all, you have only provided an overview of the prediction results in the paper in Table 1. However, non of these results show tradtitional metrics that one would expect on the predictions, e.g., accuracy, recall, F1-Score, MAPE, etc. I would like to see some of these aspects for the method. \\n2. There is not much novelty in the ML side of the paper, except maybe with the Align-LLM part. However, the authors treat this in passing only, with very little to no evaluations on even how this extra LLM helps. It would help the paper to do an ablation study with Align-LLM. In addition, you effectively have only two classes for your classifier, A100 and A6000. I wonder how your system would expand to a larger system with say 10s of accelerator types?\\n3. From a systems perspective, I think there are way too many shortcomings. Since ICLR is not a systems conference, I will only include the following few shortcomings. First of all, you have a total of less than 200 tasks over a period of 2 month. That is about three tasks per day. Since you are running this in simulations, you can scale this up by, e.g., creating a set that resembles the original tasks you have. There are also multiple other public traces now available, e.g., from Alibaba with GPU workloads (https://github.com/alibaba/clusterdata/tree/master/cluster-trace-gpu-v2020#pai_task_table) . That being said, you do not need even GPU traces, you can easily simluate larger systems. \\n\\n- Second, what is your use-case? A task that runs for short periods? How would you know how long this task runs in a real datacenter unless it is a repetitive workload? Third, how would your system scale with 30+ accelerator types and 10s to 100s of tasks arriving per minute, per second, and per hour?\", \"questions\": \"1. My first question is, what is the use-case that you are trying to solve?\\n2. What are the accuracy and prediction metrics of your system?\\n3. What is the scale of the datacenter you collaborate with?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes using LLMs to predict performance metrics of jobs submitted to a data center. Metrics include runtime, waiting time, and energy consumption. The main idea is to leverage the power of LLMs in creating meaningful representations of complex data, such as the source code, which can then be trained into specific predictions. Based on these predictions, the authors propose two scheduling algorithms, which they apply to a data center use case to show savings of up to approximately 30%.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea to leverage an LLM to create a powerful representation instead of hand-crafted features. This also brings a series of positive properties, as listed in the paper.\", \"Good results on the presented data center use case.\", \"Includes discussions on practical problems in applying the scheme.\"], \"weaknesses\": [\"Lack of details on what is considered a job and its source code. Implicitly (especially in the introduction), authors seem to assume Python scripts for machine learning, but real-world workloads might differ from this assumption. Please be more clear on any assumptions and restrictions on the jobs considered.\", \"Authors estimate the model performance metrics solely from the source code. Still, the execution time (and all other performance metrics) for some models, e.g., for LLMs due to their autoregressive nature, depends heavily on the generated output based on the input/prompt and not only on the source code. (see also next point)\", \"The paper lacks results on the achieved prediction accuracy of the considered metrics. Here, it would be nice to have some statistics on the achieved error between predicted and real values as well as a comparison with some of the mentioned related works for job prediction.\", \"Also, an ablation study to see if the improvement stems more from the smarter scheduling algorithm or from the more precise predictions would have been nice. This would also likely allow to hint into how well the approach might generalize to other data centers that might use a different baseline scheduling.\"], \"questions\": \"What is the prediction error for runtime and energy consumption (also in comparison with baselines)?\\n\\nIt is well known that the lack of (remaining) run time of jobs is one of the main issues in achieving schedulings that are proven optimal under some aspects, i.e., minimizing waiting time. Can some variant of shortest job first (SJF) be used here?\\n\\nDifference between simple and greedy alsgotiehms should be better highlighted.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents an LLM-powered automatic predictive scheduling system for AI workloads in data center, with the goal of optimizing both performance(job completion time) and energy consumption. The system consists of two main components: 1) An LLM-based predictive model that takes job's source code as input, and predicts its execution time and energy consumption; 2) A decision-making model that uses these predictions for deciding GPU resource allocation to each job. Through collaboration with a data center, the authors demonstrated 32% reduction in energy consumption and 30% decrease in waiting time. The key innovation is using LLMs to generate code representations that enable generalizable prediction across diverse task types.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper designs a novel end-to-end solution using LLMs for predictive data center resource allocation. Also, their combination of LLM and probe network reduces the number of training data needed.\\n\\nTheir framework can generalize to diverse job task types including composite and unseen tasks, making it more flexible than traditional methods that required separate models for different task types.\\n\\nThe writing is easy to follow and clearly explains their model architecture.\", \"weaknesses\": \"The paper lacks information about which pre-trained LLM was used, details about its output representation, and how did they leverage LLM to generate the output representation (eg, prompting method).\\n\\nThe evaluation section seems to be incomplete. More comprehensive evaluation details are necessary to evaluate whether their proposed solution works.\\n\\nThe proposed method could cause potential privacy concerns when sending confidential user-submitted code to LLM for analysis. \\n\\nNo discussion of the computational and cost overhead of running the LLM-based prediction framework.\", \"questions\": [\"Could you provide more details about the LLM model used in the experiments, including\", \"LLM model type, and how was it pre-trained or fine-tuned for this task\", \"the dimension of the LLM output representations\", \"how to leveage LLM to generate the output representation (eg, prompting method)\", \"What is the computational and cost overhead of running the LLM-based prediction framework? How does this compare to the benefits gained in terms of improved resource allocation and reduced energy consumption?\", \"Can you provide more details about the evaluation, including\", \"detailed experimental setup\", \"data center scale\", \"the number and distribution of different task types\", \"baseline scheduling algorithms to compare with\", \"evaluation metrics\", \"ablation study\", \"How does the framework handle prediction errors? Is there any mechanism to adapt predictions based on actual execution results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2H6KhX1kJr
Transformers and slot encoding for sample efficient physical world modelling
[ "Francesco Petri", "Luigi Asprino", "Aldo Gangemi" ]
World modelling, i.e. building a representation of the rules that govern the world so as to predict its evolution, is an essential ability for any agent interacting with the physical world. Recent applications of the Transformer architecture to the problem of world modelling from video input show notable improvements in sample efficiency. However, existing approaches tend to work only at the image level thus disregarding that the environment is composed of objects interacting with each other. In this paper, we propose an architecture combining Transformers for world modelling with the slot-attention paradigm, an approach for learning representations of objects appearing in a scene. We describe the resulting neural architecture and report experimental results showing an improvement over the existing solutions in terms of sample efficiency and a reduction of the variation of the performance over the training examples.
[ "Transformers", "world modeling", "slot attention" ]
Reject
https://openreview.net/pdf?id=2H6KhX1kJr
https://openreview.net/forum?id=2H6KhX1kJr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vaiDrq2uPH", "sDXtQbfFjV", "qhZCGk0y8k", "i7lf3VsJsZ", "YruEud1peK", "E078tNkFSq", "8hXTBqGum1", "5WAslvING0" ], "note_type": [ "decision", "official_review", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_review" ], "note_created": [ 1737523819586, 1730644895240, 1732097405416, 1730562576865, 1732097428815, 1729223972527, 1734661876954, 1730564303029 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7138/Reviewer_NsJS" ], [ "ICLR.cc/2025/Conference/Submission7138/Authors" ], [ "ICLR.cc/2025/Conference/Submission7138/Reviewer_XAqU" ], [ "ICLR.cc/2025/Conference/Submission7138/Authors" ], [ "ICLR.cc/2025/Conference/Submission7138/Reviewer_AfcX" ], [ "ICLR.cc/2025/Conference/Submission7138/Area_Chair_m79u" ], [ "ICLR.cc/2025/Conference/Submission7138/Reviewer_XiUu" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper present a slotted recurrent network model which uses transformers as the main backbone for \\\"world modeling\\\". In this context the resulting model is an \\\"object centric\\\" learning model which cross attends into VQ-VAE encoded input frame, updates the current state and then predicts the next state using a transformer. The model is trained for state prediction (with two variants, either next state prediction or current state prediction) and is demonstrated to mildly work better than a single external baseline (STEVE) and one ablation model (decoder only, where there is not explicit state representation, just prediction and decoding). The experiments are run on a physical reasoning task (PHYRE) and the output is a classification readout.\", \"soundness\": \"2\", \"presentation\": \"The experimental result figures are not to the level I would expect to see in an ICLR paper - raw training curves are fine if they tell a clear story. Here, however, they do not - there is very little signal there to observe. Export quality is also quite low and does not at the level I would expect.\", \"contribution\": \"2\", \"strengths\": \"Originality:\\nThe model presented is a very mild variation on previously published works - using VQ-VAE encodings is nice (though probably requires a bit more analysis) and the general recurrent setup is appealing.\", \"quality\": \"The proposed model variants (pre and default) are interesting and probably a good step towards analyzing the model's behaviour.\", \"clarity\": \"The paper is nicely structured and well written.\", \"significance\": \"The context of the work is important, but see below for criticism.\", \"weaknesses\": \"Unfortunately the paper suffers from several weaknesses.\", \"experimental_validation\": \"The method is only validated on one task and even on that task results are not very convincing. The models perform very closely to one another and the claims for efficient learning with the model are not well supported.\", \"analysis\": \"In general I don't mind when results of a model are not competitive with baselines or ablations as long as there is good analysis of why that is the case, and how this can improve our understanding of the model or problem. Here, however, these are absent - there's very little analysis of what the model learns, how it does that and what determines its performance.\", \"novelty\": \"While usually I don't think novelty is a determining factor for a paper, I feel here this is quite lacking and the proposed model is indeed quite close to existing literature (SAVI++, PARTS and more ). These are cited in the paper, so I have no complaints on that side, but given the generally weak results and analysis I think this hurts the paper.\", \"questions\": \"My main question is the use of \\\"slot attention\\\" - as far as I can understand there is actually no slot attention in this model, am I right? it seems that the corrector just uses cross-attention and not slot attention? (the difference would be the soft-max axis).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank all the reviewers for their thorough work and insightful comments.\", \"let_us_address_some_common_concerns_in_this_general_comment\": \"\", \"slot_attention\": \"it is true that our model does not, in fact, use the exact slot attention paradigm. This work was originally motivated by a desire to apply the same concepts to a more transformer-like structure, but many details got lost in translation. We apologize for the confusion.\", \"experimental_evaluation\": \"given the time and compute restrictions we faced before the submission deadline, we decided to focus on testing the generalization capabilities of our model by having it face many different task variations of the same dataset. We still have our eyes on the Physion dataset and other relevant work, though.\"}", "{\"summary\": \"The paper addresses the challenge of creating sample-efficient models for physical world modeling, focusing on predicting object interactions in dynamic environments.\\n\\nThe authors propose an architecture that combines Transformers with slot encoding to improve sample efficiency and stability in world modeling. Unlike existing models that operate at the image level, this model incorporates object-based representations, enabling it to capture and predict interactions more accurately.\\n\\nTheir model, named Future-Predicting Transformer Triplet (FPTT), uses a corrector-predictor-decoder triplet of Transformers. The corrector aligns the internal state representation with the actual video evolution to prevent model drift, the predictor forecasts the next state based on the corrected representation, and the decoder converts this predicted state back into tokens for further training.\\n\\nExperiments using the PHYRE dataset (a benchmark for physical reasoning) show that FPTT achieves greater sample efficiency and training stability compared to baseline models like STEVE. The model\\u2019s structured approach enables it to generalize well in physical environments simulated with basic Newtonian physics.\\n\\n\\nIn summary, the paper presents a architecture that leverages the strengths of Transformers and slot encoding for efficient and stable world modeling, demonstrating improvements in tasks requiring understanding and predicting object dynamics in a physical environment\\u200b\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper focuses on world modeling which is an important problem.\", \"The writing and presentation is clean, which makes understanding the paper easy.\", \"The paper compares efficiency and accuracy, which helps understand the trade-offs.\"], \"weaknesses\": [\"The contribution is not significant having an internal representation of the previous timesteps is common in world modeling architectures, for instance: Dreamer: https://arxiv.org/pdf/2301.04104, Slotformer: https://arxiv.org/pdf/2210.05861. It's unclear to me how this work is a better architecture than Dreamer or Slotformer or other recent works.\", \"The evaluations and baselines are weak, the paper only compares against STEVE. I dont think STEVE is a fair comparision as their objective was to get interpretable object representations and not necessarily the metric/benchmark paper uses for evaluation. Further the Decoder-Only model seems to perform as well as the proposed architecture on almost all tasks except efficiency.\", \"Lastly the work only compares on a single benchmark, which is not being used in the baseline works such as STEVE. I think a fair thing to do would be to compare on benchmarks shown in prior baselines, so we assume they are tuned well.\"], \"questions\": [\"How would the architecture compare against Dreamerv3 or Slotformer in world modeling?\", \"What happens if u try to make the decoder only model more efficient by reducing the number of tokens or dimensionality of the token?\", \"How would the paper compare against the baselines in benchmarks proposed in Steve or DreamerV3 or SlotFormer?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Regarding the second question (as the other ones are addressed in the general comment), we kept the decoder-only model with the same number of parameters as the decoder in the complete model for a fair ablation study. Studying the efficiency of the model as the number of parameters varies is left for future work.\"}", "{\"summary\": \"This paper proposes a world-modeling architecture that captures object-level interactions in the scene, instead of the scene itself. The architecture consists of three transformer-based models: a corrector, a predictor, and a decoder, alongside a VQ-VAE tokenizer for image encoding.\\n\\nTo evaluate the proposed model\\u2019s performance as a world model, the authors provide a physical reasoning task using the PHYRE benchmark, demonstrating that their model outperforms STEVE, the baseline, in terms of prediction accuracy and sample efficiency.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"The authors' approach of capturing slot-like internal representations from VQ tokens, instead of CNN embeddings, was intriguing and showed promising results.\", \"They propose a novel evaluation protocol for testing world model architecture, utilizing the shared benchmark with different protocols.\"], \"weaknesses\": \"- **Lack of Novelty and Justification**: The main idea and direction of the paper have been explored in several existing works (OCVT[1], SlotFormer[2]). Although the authors are likely aware of this, they fail to convincingly demonstrate why their approach is unique and necessary for the proposed direction, compared to previous works.\\n- **Architecture and Design Choices**: The proposed architecture appears to be a combination of SAVi[3] and STEVE[4] architectures, but with a predictive loss function instead of a reconstructive loss function. While this variant may be promising, the paper lacks sufficient details to justify the design choices, such as thorough ablation studies. Furthermore, it is unclear whether the proposed model can outperform existing works, as it is not comprehensively compared.\\n- **Lack of Clarity in Architecture and Experiment Description**: The architecture section of the paper lacks clarity and detail, particularly in the description of the core components: corrector transformer, predictor transformer, and decoder transformer. Although the author provides a high-level overview of these architectural concepts, the explanation is insufficient given the emphasis on this part as the paper's core contribution. To thoroughly understand and investigate the proposed architecture, a detailed formulation of these components is necessary, including their implementation details and mathematical representations.\\n \\n Furthermore, the experiment section lacks sufficient details about the metrics used in the evaluation. To ensure transparency and reproducibility, it is essential to provide a clear explanation of each metric, including how the metric is calculated and what it represents.\\n \\n- **Limited Evaluation of Proposed Architecture:** The author only provides a single task to evaluate the proposed architecture, which is insufficient to demonstrate its generality and versatility. To thoroughly assess the world modeling ability of the proposed architecture, it is essential to evaluate it on a diverse range of tasks that require the model to infer and understand the relationships between objects and scenes. Additionally, to facilitate a fair comparison with existing works, the authors may consider including several generation tasks (e.g. OBJ3D[1], CLEVR[5], Physion[6]), as has been done in prior research.\\n \\n To further demonstrate the effectiveness of the proposed model, the author could compare it with a broader range of baselines, such as SlotFormer, OCVT, SAVi, and other relevant models mentioned in the paper. Although these models are typically used for generation tasks, their predicted representations can be evaluated using the same protocol as the proposed model. Additionally, comparing with image-based world models would illustrate the advantages of object-level world models over their image-based counterparts. This approach would provide a more comprehensive understanding of the proposed model's performance and allow for a more accurate assessment of its strengths and limitations relative to existing approaches.\\n \\n- **Ablation Results Raise Questions about Proposed Model:** The ablation results indicate that the \\u2018decoder-only\\u2019 model performs comparably to the proposed models. This suggests that VQ-tokenization and predictive loss might be sufficient to drive performance without explicitly enforcing object-level representations. This outcome seems misaligned with the paper's main theme, which emphasizes the importance of object-level representations. Consequently, this misalignment raises questions about the necessity and effectiveness of the proposed model's architecture.\\n\\n[1] Wu, Yi-Fu, Jaesik Yoon, and Sungjin Ahn. \\\"Generative video transformer: Can objects be the words?.\\\" International Conference on Machine Learning. PMLR, 2021.\\n\\n[2] Wu, Ziyi, et al. \\\"Slotformer: Unsupervised visual dynamics simulation with object-centric models.\\\" arXiv preprint arXiv:2210.05861 (2022).\\n\\n[3] Kipf, Thomas, et al. \\\"Conditional object-centric learning from video.\\\" arXiv preprint arXiv:2111.12594 (2021).\\n\\n[4] Singh, Gautam, Yi-Fu Wu, and Sungjin Ahn. \\\"Simple unsupervised object-centric learning for complex and naturalistic videos.\\\" Advances in Neural Information Processing Systems 35 (2022): 18181-18196.\\n\\n[5] Johnson, Justin, et al. \\\"Clevr: A diagnostic dataset for compositional language and elementary visual reasoning.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.\\n\\n[6] Bear, Daniel M., et al. \\\"Physion: Evaluating physical prediction from vision in humans and machines.\\\" arXiv preprint arXiv:2106.08261 (2021).\", \"questions\": [\"It appears that slot attention (or inverted attention) is absent, with only cross attention being mentioned. Is this an oversight in the explanation, or is it indeed absent? If it's truly absent, how can we be confident that it captures object-level dynamic understanding?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces a framework for world modelling using tokenised (object-centric) representations. A key part of the method is the combination of a transformer architecture with a slot-attention-like mechanism. Experiments are run on the PHYRE simulated physical reasoning benchmark.\\n\\nThe reviewers acknowledged that this paper addresses an important problem, namely whether structured representations can benefit physical world models.\\n\\nAll reviewers raised significant concerns about clarity and quality of presentation, validity and strength of experimental evaluation, and novelty of the method, resulting in a clear reject decision.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer consensus was to reject the paper.\"}", "{\"summary\": \"The paper proposes the Future-Predicting Transformer Triplet (FPTT), an architecture aimed at modeling of physical world dynamics from video data. It employs Transformers to learn object-centric representations, enabling the model to predict physical interactions between objects more effectively. The architecture is tested on synthetic video dataset PHYRE. The authors also perform an ablation study to understand the contribution of different components.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The authors address an important problem of physical world modeling using structured latent representations.\", \"weaknesses\": \"* __Missing slot encodings and object-centricity.__\\n\\nWhile the paper includes _slot encoding_ in its title, the approach itself appears to lack this feature. As I understand it, $\\\\Lambda$ was intended to serve as slot encodings, but it is not even referred to as such. From the description, $\\\\Lambda$ seems more like a standard intermediate representation in transformer layers rather than a distinct slot encoding.\\n\\nIn Appendix A2, the authors reference [1] for their transformer implementation, where they also mention using four slots. However, the referenced implementation does not include a parameter for the number of slots, leaving it unclear how slot encodings are actually integrated into the proposed approach, or if they are implemented at all.\\n\\nFurthermore, the authors state that _\\\"the representation remains opaque and lacks interpretability,\\\"_ which raises questions about the motivation for using _slot encodings_ in the first place.\\n\\n\\n* __Experimental methodology.__ \\n\\nFirstly, the authors evaluate their model on a single, very simplistic dataset, while using a relatively large number of parameters. This limited evaluation setup may not provide sufficient empirical evidence to support their claims.\\n\\nA more significant issue lies in their positioning among related works and choice of baselines. The authors overlook most recent related work (e.g., [2, 3, 4, 5]) and rely solely on STEVE as a baseline, aside from variations of their own approach.\\n\\n\\n* __Presentation.__ \\n\\nIn addition to unclear explanations of their approach and its novelty, the authors fail to position it effectively within the existing literature, lacking a comparative analysis with prior work.\", \"all_the_figures_also_present_issues\": \"they are unnecessarily large, some are in low resolution, and it is often unclear what the authors aim to demonstrate.\\n\\n\\\\\", \"references\": \"[1]: Andrej Karpathy. nanoGPT: The simplest, fastest repository for training/finetuning mediumsized GPTs (Generative Pretrained Transformers), 2023. URL https://github.com/karpathy/nanoGPT. \\\\\\n[2]: Nakano, A., Suzuki, M. and Matsuo, Y., 2023. Interaction-based disentanglement of entities for object-centric world models. In The Eleventh International Conference on Learning Representations.\\\\\\n[3]: Villar-Corrales, A., Wahdan, I. and Behnke, S., 2023, October. Object-centric video prediction via decoupling of object dynamics and interactions. In 2023 IEEE International Conference on Image Processing (ICIP) (pp. 570-574). IEEE.\\\\\\n[4]: Wu, Z., Dvornik, N., Greff, K., Kipf, T. and Garg, A., 2022. Slotformer: Unsupervised visual dynamics simulation with object-centric models. arXiv preprint arXiv:2210.05861.\\\\\\n[5]: Daniel, T. and Tamar, A., DDLP: Unsupervised Object-centric Video Prediction with Deep Dynamic Latent Particles. Transactions on Machine Learning Research.\", \"questions\": [\"__Slot encodings.__ How and where do the authors utilize slot encodings, and what specific benefits do they offer in this context?\", \"__Experimental design.__ Why do the authors limit their evaluation to a single dataset? Additionally, what is the rationale for selecting STEVE as the sole baseline, excluding other relevant related works?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2GwMazl9ND
Algorithmic Stability Based Generalization Bounds for Adversarial Training
[ "Runzhi Tian", "Yongyi Mao" ]
In this paper, we present a novel stability analysis of adversarial training and prove generalization upper bounds in terms of an expansiveness property of adversarial perturbations used during training and used for evaluation. These expansiveness parameters appear not only govern the vanishing rate of the generalization error but also govern its scaling constant. Our proof techniques do not rely on artificial assumptions of the adversarial loss, as are typically used in previous works. Our bound attributes the robust overfitting in PGD-based adversarial training to the sign function used in the PGD attack, resulting in a bad expansiveness parameter. The peculiar choice of sign function in the PGD attack appears to impact adversarial training both in terms of (inner) optimization and in terms of generalization, as shown in this work. This aspect has been largely overlooked to date. Going beyond the sign-function based PGD attacks, we further show that poor expansiveness properties exist in a wide family of PGD-like iterative attack algorithms, which may highlight an intrinsic difficulty in adversarial training.
[ "algorithmic stability", "generalization", "adversarial training" ]
Accept (Poster)
https://openreview.net/pdf?id=2GwMazl9ND
https://openreview.net/forum?id=2GwMazl9ND
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zSKWe7qNm6", "yPzyYK0cqg", "vJRb437G6R", "uz8QJNpipQ", "tVZ5vLm32X", "npIuu5liZw", "lF3z7CTKKR", "lBslg78tkh", "gSyTSpIGub", "fr2qWXNtRJ", "eMZTquZILI", "daQbs0RXyd", "d2cr21TSta", "cFeCFL1RUU", "WruvVnqz0U", "V1Ek8aQPOv", "Tbwzg4Wp7S", "T47IxvE5kL", "QFekwNzx5X", "O5CroOzZR3", "J9nNTWQycB", "ILcRcrDbVZ", "H3wrsJ0Bx2", "GLqpKBcJoP", "CIJf3XX0yQ", "8FJ6mouw9x", "4wTtgIAS8v", "4WPpk4JU9F", "3H5Lvb1tnP", "0mAESngzB3" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732929563858, 1733186386623, 1732342889642, 1734069705609, 1733025588941, 1732343333785, 1732343897261, 1732664886023, 1732672239726, 1737523667431, 1732667285684, 1732052587451, 1733025916632, 1729167522595, 1732051977822, 1733193215743, 1733026335702, 1732929253329, 1729546677336, 1730228944916, 1732929290902, 1732666145605, 1729637845520, 1732665085266, 1732344032279, 1732929308552, 1732527477260, 1732666305367, 1732665397441, 1732052349710 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_2VnC" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_2VnC" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Area_Chair_AUr2" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_Jg1M" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_Jg1M" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_2VnC" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_Fpbq" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_e9T9" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_2VnC" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_2VnC" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_2VnC" ], [ "ICLR.cc/2025/Conference/Submission4877/Reviewer_e9T9" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ], [ "ICLR.cc/2025/Conference/Submission4877/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> the threat model [...] remains unchanged in the experiments of Section 6.\\n\\nI see. Thanks for the clarification. I mistakenly thought that you changed the threat model, instead of the step size. The experiments now make more sense, and basically do what I originally asked for. However, a question remains about the number of inner iterations: how did you change this value when changing the inner steepest ascent method? Did you vary it at all?\"}", "{\"comment\": \"Thank you for your reply and your clarifications.\\n\\n> We would like to clarify that the number of iterations for each $G_p-$PGD variants remains unchanged.\\n\\nFor future reference, I think an exploration of learning step together with number of iterations would have been interesting.\\n\\nI revisited the paper and most of my concerns regarding presentation and contributions have not been met unfortunately. However, I do see some value in the experiments of Section 6 (which I missed in my original evaluation), so I updated my scores accordingly. However, I still recommend (borderline) rejection for most of the points raised in the Weaknesses section of my review.\"}", "{\"title\": \"Reply to your questions and concerns\", \"comment\": \"Thank you very much for your careful review! To address your concerns and questions, we have performed additional theoretical analysis and experiments and include the results temporarily in Appendix E.2 (from line 1124) of our paper.\", \"regarding_your_questions_and_concerns\": \">(1) While the paper considers the algorithmic stability of PGD attack, a missing component is the convergence of PGD attack. Intuitively, if we always use a fixed attack direction, then the algorithmic stability is not largely affected by the attack. However, the attack is not efficient. When using PGD attack, there is supposed to have a trade-off: with more iterations, the stability gets worse, but the attack becomes stronger. If at the test stage the attacker uses a very strong attack, e.g., AA attack, then balancing the attack effectiveness and stability is essential to obtain a better robust testing performance. Could the authors elaborate more from this perspective?\\n\\nThank you very much for this interesting question and perspective. We now have included a new section Appendix E.2 in the revised paper, which provides a convergence analysis of the PGD attacks. We now summarize the results here (for more details and more precise statements of our results, please refer to Appendix E.2)\\n\\nWe consider PGD attacks with the mapping $G$ that satisfies the following condition: $\\\\nabla_{x}f(w,x,y)^{T}G(\\\\nabla_{x}f(w,x,y))>0$, for any $(x,y)\\\\in {\\\\cal X}\\\\times {\\\\cal Y}$ and any $w\\\\in{\\\\cal W}$. Note that this condition simply requires the direction of the modified gradient $G(\\\\nabla_{x}f(w,x,y))$ aligned near the direction of the original gradient, within 90 degree angle. Then we have the following result (Lemma E.1 in the revised paper)\\n\\n**Lemma E.1** Suppose that $f(w, x, y)$ satisfies the condition (22).Let $ x^*= J^* (x;y,w)$ and suppose that $\\\\nabla_x f(w, x^*,y)=0$. Suppose $\\\\|G(\\\\nabla_{x}f(w,x,y))\\\\|^2\\\\le C$ for any $(w,x,y)$. Then performing the $K-$step PGD with step size $\\\\lambda=\\\\frac{1}{\\\\sqrt{K}}$ results in\\n\\n$ f(w, x^*, y)-\\\\frac{1}{K}\\\\sum_{k=1}^{K}f(w, x^k, y)\\\\le \\\\frac{(2C+d^*)}{2K} + \\\\frac{d^* (\\\\eta^2 + \\\\eta + 1)}{2} $\\n\\nwhere $d^* = \\\\max\\\\limits_{k\\\\in \\\\{ 1,\\\\cdots, K \\\\} } \\\\Vert x^k -x^* \\\\Vert ^2$ and $x^k := T^k _{x, y} (x; w)$ denotes the perturbed instance generated by the $k-$step PGD with $k\\\\le K$.\\n\\nThis result bounds the difference between the maximal loss $f(w, x^*, y)$ and the average of the losses achieved by $K$-step PGD (averaged over the $K$ steps). If the achieved loss $f(w, x^k, y)$ increases over the K steps, the result implies\\n\\n $ f(w, x^*, y)-f(w, x^K , y)\\\\le \\\\frac{(2C+d^*)}{2K} + \\\\frac{d^* (\\\\eta^2 + \\\\eta + 1)}{2} $\\n\\n\\nNotably this upper bound decays with $K$, but converges to a positive constant. This should come as no surprise since without stronger conditions or knowledge on $f$ (e.g., concavity), it is hopeless to have PGD attacks to reach the true maximal loss value $f(w,x^*, y)$. \\n\\nIf we further assume loss functions $f(w,x,y)$ to be concave in $x$ and consider the \\\"raw-gradient (RG)\\\"-PGD where the mapping $G$ is taken as the identity map, we have the following convergence upper bound for PGD by directly adapting the Theorem 3.7 in [Bubeck et al, 2015convex] (Lemma E.2 of the revised paper):\\n\\n**Lemma E.2** Suppose that $f(w, x, y)$ satisfies the condition (22) and is concave in $x$. Let the mapping $G$ be the identity map. Then the $K-$ step PGD with step size $\\\\lambda=\\\\frac{1}{\\\\eta}$ satisfies\\n\\n $ f(w, x^*, y) - f(w, x^K, y) \\\\le \\\\frac{3\\\\eta \\\\|x - x^*\\\\|^2 + f(w, x^*, y)-f(w, x, y)}{K} $\\n\\nwhere $x^*= J^* (x;y,w)$ and $x^K:=T^K _{x, y}(x; w)$.\\n\\nThe bound obviously vanishes with $K$.\"}", "{\"metareview\": \"The theory of adversarial training has been substantially explored by many works. But, this paper present a novel stability analysis of adversarial training and prove generalization upper bounds in terms of an expansiveness property of adversarial perturbations. The proof technique used in this paper is totally different from the previous papers. The derived bound is more general than existing works. This is really a good theory job in adversarial training. Congratulations!\", \"additional_comments_on_reviewer_discussion\": \"A reviewer seems to have bias on this paper and ask some quesitons that do not really matter.\"}", "{\"title\": \"Reply to your follow-up questions and your remaining concerns (part 1)\", \"comment\": \"Thank you very much for taking the time to revisit our paper. We address your further comments as below.\\n\\n\\n>- This is a matter of style, but I consider it good practice to remind the reader of previously defined quantities in the statement of a theorem (rather than only directing them to previous definitions). This is not crucial.\\n\\n>- Right. In line 258, you specifically talk about the case where $J=J^{\\\\rm id}$ and $\\\\pi$ is a particular perturbation. The point is that you could simply call this quantity as \\\"standard generalization gap of a predictor trained robustly\\\", instead of \\\"mis-matched generalization gap\\\". Again, this is not crucial.\\n\\n\\nThank you for the suggestion and we will consider it during next revision -- currently we are not allowed to revise the manuscript.\\n\\n\\n>- Indeed. My critique of \\\"However, they do not show any benefits in terms of robustness with the new method.\\\" was not relevant. However, the rest of my critique still holds as far as I see: \\\"Furthermore, the fact that for small $\\\\gamma$ we do not observe overfitting and the generalization gap is small appears to be a trivial observation, as the evaluation basically approaches the standard case of no perturbations. In short, it is not a good method for finding worst-case $\\\\ell_\\\\infty$ perturbations.\\n\\n \\n When $\\\\gamma$ is zero or close to zero, we agree with your comment that it \\\"basically approaches the standard case of no perturbations.\\\" However, beyond such an extreme case, for example when $\\\\gamma$ takes a relatively large value (e.g., $\\\\gamma=10, 10^2, 10^3$), the relationship between robust generalization and the strength of the perturbation (or its optimality with respect to achieving the worst-case $\\\\ell_\\\\infty$ perturbation) used in AT is not clear. \\n \\nFor instance, as demonstrated in Figure 1, adversarial training (AT) using three-step sign-PGD is far from optimal for finding the worst-case perturbations, yet it results in robust overfitting. On the other hand, ${\\\\tanh}_{\\\\gamma}$-PGD with small $\\\\gamma$ is similarly non-optimal for finding the worst-case perturbations, it however enables AT to achieve better generalization performance. \\n\\nTherefore, the generalization performance for ${\\\\tanh}_{\\\\gamma}$-PGD-AT in relation to the value of $\\\\gamma$ is not automatically clear without theoretical study or experimental investigation. As such, our experimental results are not trivial.\"}", "{\"comment\": \"**Trade-off between robustness and generalization** We now discuss the tradeoff between robustness and generalization as was brought up in your comments.\\n\\nWe will rewrite K-step PGD perturbation as\\n\\n $ \\\\pi^{\\\\rm PGD} _{K}(x; y, w):= T^K _{x, y}(x; w) $\\n\\nto emphasize its dependency on $K$ in the PGD attack and define the expected robustness gap (on training set) as \\n\\n $ {\\\\rm RG}(J^*, \\\\pi):=\\\\mathbb{E} _ {S,A} \\\\left[R_{S}[A_{\\\\pi}(S), J^*] - R_{S}[A_{\\\\pi}(S), \\\\pi] \\\\right] $\\n\\nThis term characterizes the robustness of a model on the training set against $J^*$ when it is trained by AT using some other adversarial perturbation $\\\\pi$.\\n\\nFor shorter notation, let $w=A_{\\\\pi}(S)$ and consider ${\\\\rm RG} (J^*, \\\\pi^ {\\\\rm PGD} _ {K})$. We can show that\\n\\n${\\\\rm RG} (J^*, \\\\pi^ {\\\\rm PGD} _ {K}) \\\\le \\\\sup\\\\limits _{(x,y,w)} \\\\left[ f(w, x^*, y)- f(w, x^K, y) \\\\right] $\\n\\nwhere $x^*= J^* (x;y,w)$ and $x^K :=\\\\pi^{\\\\rm PGD} _{K}(x ;y, w)$. \\n\\nThis result and the result above (Lemma E.1 of our revised paper) apply to arbitrary choice of $(w,x,y)$. They suggest that a smaller robustness gap ${\\\\rm RG}(J^*, \\\\pi^ {\\\\rm PGD} _ {K})$ can be achieved for $\\\\pi^{\\\\rm PGD} _ {K}$ with larger $K$. Lemma 5.1 on the other hand suggests that $\\\\pi^ {\\\\rm PGD} _ {K}$ with smaller $K$ tends to achieve a smaller expansiveness parameter $q_{c}(\\\\pi^ {\\\\rm PGD} _ {K})$ and therefore the corresponding generalization gap ${\\\\rm GG} _ {n}(J^*, A_{\\\\pi})$ with $\\\\pi=\\\\pi^ {\\\\rm PGD} _ {K}$ tends to be smaller for smaller $K$. \\n\\nIn summary, this theoretical analysis characterizes the potential trade-off between generalization and the \\\"effectiveness of PGD attack\\\" (measured by ${\\\\rm RG}(J^{*}, \\\\pi^{\\\\rm PGD}_{K})$) as was brought up in your comments -- We thank you for this pointer, which has helped improve this paper.\"}", "{\"comment\": \"Regarding your concerns:\\n>(2) Please highlight the technical challenges for the theoretical contributions in this paper.\\n\\nA key challenge in this development lied in identifying the impact of the perturbation operators on the generalization of adversarially trained models. Extensive experiments had been conducted before we came to the recognition that the perturbation operator may play different roles in defining the loss function for evaluation and in the training process and should be isolated for theoretical analysis. Then the stability framework appeared to be a natural option for our analysis. But it remained difficult to find an appropriate measure to characterize the property of the perturbation operator suitable for this analysis. It took a number of iterations before we were able to find an appropriate notion of expansiveness for the perturbation operators.\\n\\n>(3) Please consider using some SOTA methods from RobustBench, e.g., leveraging synthetic data in adv training, to conduct the experiments. While improving the sign function seems to be helpful as illustrated by this paper, there is no enough evidence to demonstrate that this is one of the key issues in adversarial training.\\n\\n\\nWe have conducted additional experiments on the CIFAR-10 dataset following the AT framework in [Wang et al, 2023] where the model is trained to minimize the TRADES loss proposed in [Zhang et al, 2019] and an additional 1M synthetic dataset is used in the training. Detailed description and results can be found in Appendix E.2 of the revised paper (under \\\"TRADES\\\")\\n\\nWe conduct experiments to observe if replacing the sign function with the ${\\\\tanh} _ {\\\\gamma}$ function would affect the generalization performance of TRADES. We follow the same setup and hyper-parameter settings in [Wang et al, 2023] and perform TRADES with $G={\\\\tanh} _ {\\\\gamma}$ for $\\\\gamma= 1,10,100,10^3, 10^5 $. Specially, we call this type of TRADES as the ${\\\\rm tanh} _ {\\\\gamma}-$TRADES. \\n\\nModels in each experiments are trained for 200 epochs. The trained models are then evaluated by the $J$-(0-1) loss with $J$ taken from ${\\\\tanh} _ {\\\\gamma}$-PGD, sign-PGD, $J ^{\\\\rm id}$. \\n\\nExperimental results are presented in Figure 9(a) where a phenomenon similar to that in Figure 2(a) is observed. When model is trained by ${\\\\tanh}_{\\\\gamma}$-TRADES with smaller $\\\\gamma$, reduced generalization gaps are observed (indicated by the reduced gaps between the dot-shaped and star-shaped curves within each color category).\\n\\nComparing Figure 2(a) with Figure 9(a), one may notice that for larger $\\\\gamma$, the generalization gaps of ${\\\\rm tanh} _ {\\\\gamma}$-TRADES appears to be smaller than those of ${\\\\rm tanh} _ {\\\\gamma}$-PGD-AT. This difference is likely due to the additional 1M synthetic data used in ${\\\\rm tanh} _ {\\\\gamma}$-TRADES while our PGD-AT experiments only utilize the original training dataset which contains far less number of training examples. \\n\\nWe have also measured the $J$-(0-1) loss with $J$ taken as the ${\\\\rm tanh} _ {\\\\gamma}$-PGD along the training trajectories of ${\\\\rm tanh} _ {\\\\gamma}$-TRADES on both the training and the testing sets. The results, shown in Figure 9(b), use different colors to distinguish ${\\\\rm tanh} _ {\\\\gamma}$-TRADES with different $\\\\gamma$ values. Solid and dashed curves respectively represent the $J$-(0-1) loss on the training and the testing set. It shows that the solid curves drops faster than the dashed curves, indicating that $J$-(0-1) loss decreases more rapidly for the ${\\\\rm tanh} _ {\\\\gamma}$-TRADES with smaller $\\\\gamma$.\\n\\n\\nIn summary, the experimental results indicate that, similar to PGD-AT, the choice of perturbation operators in TRADES also affects its training and generalization performance. On the other hand, we also note that the current analysis in this paper does not fully address the impact of the SIGN function in other adversarial training frameworks, particularly those involving delicate regularization terms, such TRADES. The key difference between TRADES and our set up is in the form of perturbation: our set up restricts the perturbation to a transformation of the gradient of the standard loss, whereas in TRADES alike approaches, the perturbation is a transformation of the gradient of other quantities. Nevertheless, we expect that the general methodology presented in this paper can be adapted to broader families of adversarial training frameworks. -- We sincerely thank the reviewer for bringing up this question, and we will make an effort in that direction.\"}", "{\"title\": \"Thank you for raising your score\", \"comment\": \"Dear reviewer,\\n\\nThank you for raising your score! We have now included comparison of our work with [1, 2] in Appendix F. We also would like to elaborate this aspect in the main body of the paper. But at current stage, in order not to disturb line numbers for the ease of all reviewers, we will postpone this when we prepare next version of the paper.\"}", "{\"comment\": \"Thank you for your reply, I have no further questions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to your concerns and questions (part 4)\", \"comment\": \"Regarding your questions:\\n\\n>- Which part of the analysis bypasses prior work and makes the bounds decay as $O(\\\\frac{1}{n})$?\\n\\n\\nThe work in Xiao et al. (2022b) define the notion of \\\"$\\\\eta-$approximate smoothness\\\" for loss functions and derive generalization bounds based on this quantity. The subsequent work by Wang et al. (2024) builds upon this framework. We believe that the primary reason their bounds include terms that do not vanish with $n$ is that the approximate smoothness paramete $\\\\eta$ is independent of $n$. A more detailed discussion of their framework is provided in Appendix F of our revised manuscript. \\n\\n\\n\\nOur work, on the other hand, derives generalization bounds from a different perspective. We define the notion of $c-$expansiveness for the perturbation operations used in AT. Our bounds derived based on this quantity then address the limitations in Xiao et al. (2022b) and Wang et al. (2024), and vanish with $O(\\\\frac{1}{n})$ when the expansiveness parameter is finite.\\n\\n\\n\\n\\n\\n>- In the experiments of Section 6, why do you change the threat model (finding different values of $\\\\lambda_p$)? One could imagine experiments with different steepest descent algorithms for the solution of the inner maximization problem, where the threat model does not change (i.e., projecting every time to the same $\\\\ell_\\\\infty$ balls around the original points). Of course, different steepest ascent algorithms (besides the commonly used sign gradient ascent) will perform worse in finding adversarial examples, so the number of inner iterations should be adjusted appropriately. However, I believe this could be an interesting experiment to conduct.\\n\\n\\n\\nAs explained above, the threat model (i.e., $\\\\mathbb{B} _ {\\\\infty}(x, \\\\epsilon)$) remains unchanged in the experiments of Section 6. What varies is $\\\\mathbb{B} _ {p}(x ^k, \\\\lambda _ p)$, which is used to determine $G_p$ for the gradient ascent step of PGD. Finding different values of $\\\\lambda_p$ is to maintain the same volume for the balls $\\\\mathbb{B} _{p}(x ^k,\\\\lambda _ p)$ across different $p$ values.\"}", "{\"comment\": \"Regarding your questions:\\n\\n\\n- \\\"How does the bound in [1,2] change when considering ${\\\\rm tanh}_{\\\\gamma}$-PGD?\\\"\\n\\nAs we have shown above, the bound remains unchanged in [1,2] when considering ${\\\\rm tanh}_{\\\\gamma}$-PGD with different $\\\\gamma$.\\n\\n- \\\"Have you tried larger $\\\\gamma$ s? $\\\\gamma=10^5$ seems to be very far from sign-PGD in Figure 2 (b). It would be interesting to see how does tanh-PGD behave when it\\u2019s close to sign-PGD.\\\"\\n\\n\\nThe reviewer might have misread Figure 2b. In fact, in that figure when $\\\\gamma = 10^5$, ${\\\\rm tanh}_{\\\\gamma}$ function closely approximates the sign function and $\\\\gamma = 10^5$ is actually sufficiently large. \\n\\nTo illustrate, we have plotted the training trajectory of ${\\\\rm tanh}_{\\\\gamma}$-PGD AT with $\\\\gamma=10^5$ and the trajectory of the sign-PGD AT in Figure 7 (a), Appendix E.1. (from line 1074). The results show that their trajectories overlap almost entirely.\\n\\nFurther, we have plotted the $\\\\tanh_\\\\gamma$ function with $\\\\gamma=10^5$ and the sign function in Figure 7 (b), Appendix E.1, to show that they are indeed very close.\\n\\n- \\\"Can you construct an experiment where the dependence on $n$ is displayed? For example taking a smaller number of samples from the studied datasets in order to see how the generalization gap grows. A synthetic distribution could also be employed where more and more samples are drawn and the gap decreases to zero for finite $\\\\gamma$.\\\"\\n\\nUpon your request, we have performed AT experiments using various fraction of the training set from CIFAR-10 and SVHN. The generalization gaps are estimated and plotted in Figure 8 in Appendix E.1. It is clear that the generalization gaps reduce with the increase of training sample size $n$. Notably, in the limit when $n$ approaches infinity, the model is effectly trained on the entire data distribution, and the generalization gap must approach zero.\\n\\nAlso observed from the figure is the phenomenon that smaller $\\\\gamma$ gives smaller generalization gap. This is consistent with our theoretical analysis.\"}", "{\"title\": \"Reply to your follow-up questions and your remaining concerns (part 2)\", \"comment\": \">- No, I think I understood what you wrote in the paper in the first place. Perhaps my comment about steepest descent has not been clear enough. I understand that that Section is about the update rule, and not the projection step. Maximizing the linear approximation of the loss around the current iterate around a ball induced by a norm $|\\\\cdot|$ is equivalent to using steepest descent (ascent) on the loss with respect to this norm -- see, for instance, Section 9.4 in 'Convex Optimization by Boyd and Vandenberghe'.\\n\\nWe are glad that your concern did not arise from the confusion between the norm ball used in gradient asecent and that used in the projection operation. Now returning to your original comment \\\"the experiments fail to highlight anything new\\\", we remark the following.\\n\\n\\nFrom Sections 5, we see that the gradient operator $G$ has a significant impact on generalization. In Section 6, we recognized that sign operator, as the gradient operator, results from solving the inner maximization problem by a locally linear approximation of the loss function. Changing the range of locally linear approximation from $\\\\ell_\\\\infty$ ball to other norm balls, results in a family of gradient operators, i.e., the operators $G_p$'s. The results in this section are NEW in three aspects:\\n1. We observe that the impact of the operator $G_p$ on generalization of AT, depends on the value of $p$. \\n2. We present a theoretical explanation of the above observations. Specifically in Lemma 6.1, we relate the Lipschitz constant $\\\\alpha_p$ of the $G_p$ operator to the value of $p$. The Lipschitz constant in turn affects the expansiveness of $G_p$-PGD (as shown in Lemma 5.1), and hence impacts generalization (as suggested by equation (15)).\\n3. The family of $G_p$ gradient operators appear to result in poor generalization. This potentially points to a fundamental limitation in the approach to solve inner maximization using locally linear approximation of the loss function.\\n\\n>- However, a question remains about the number of inner iterations: how did you change this value when changing the inner steepest ascent method? Did you vary it at all?\\n\\nWe would like to clarify that the number of iterations for each $G_p-$PGD variants remains unchanged. We however have adjusted the value of $\\\\lambda_p$ (the step size of the inner iteration) to ensure that the volumes of $\\\\mathbb{B}_{p}(x^k, \\\\lambda_p)$ are the same across different values of $p$ for a fair comparison.\\n\\n\\nThank you again for your response. We hope that our clarifications have addressed your remaining concerns. Meanwhile, as several of your earlier concerns are now resolved, we hope you consider raising your score. Should there be other issues for which you need our clarification, please let us know and we will make an effort to explain.\"}", "{\"summary\": \"This paper provides a new stability bound for adversarial training, where the inner maximization perturbation $J$ and the evaluation perturbation $\\\\pi$ can be different. The introduced term expansiveness can partly explain robust overfitting and experiments are conducted to validate the theoretical results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The new stability theory differs from existing ones in terms of assumptions and the form of bounds. I like the separation of adversarial training perturbation $J$ and the evaluation perturbation $\\\\pi$, which means that the theory in this paper is a more abstract framework and can be applied in many cases.\", \"The writing is good.\"], \"weaknesses\": [\"It seems that the framework in this paper can not provide a proper description for the $Q(c^*)$ term, we need to calculate it according to the concrete choice of $J, \\\\pi$. However, to make this framework more significant, examples of how to calculate $Q(c^*)$ and how to choose $c^*$ should be added. Note: I mean examples in practice but not examples with overly simple assumptions (such as the assumption on the second moment in lines 293-294 and the assumption in Corollary 5.2 that a term is bounded by $B$ with probability 1). Just like the VC dimension, if we can not calculate the VC dimension of some hypothesis classes, the bound with the VC dimension is meaningless.\", \"Typo: in line 149, it should be \\\"the label space $\\\\mathcal{Y}$ is finite\\\"\", \"A minor issue: many equations in this paper are numbered, in fact, the equations that are not used later need not be numbered. For example, equation (2) is not used.\", \"In lines 87-88, the paper says that \\\"the bound convergence to a constant, this helps explain the robust overfitting phenomenon\\\". In fact, a lower bound of the generalization gap that converges to a constant can explain overfitting. However, an upper bound can not because your bound may not be tight enough.\"], \"questions\": \"In total, I think this is a good paper, but there are some points that can improve this paper. Please refer to the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to your concerns and questions\", \"comment\": \"Thank you very much for your careful review! We have fixed the typo in our paper. To address your concerns and questions, we have conducted additional experiments and include the results temporarily in Appendix E of our paper.\", \"regarding_your_concerns\": \"- \\\"Authors claim that their upper bound converges to zero with increasing number of training samples and the ones of [1,2] do not. This is misleading as [1,2] do not consider the expansiveness of the attack operator and the bound provided in this work, in the same setup as [1,2] does not vanish with $n$ (see lines 369-371).\\\"\\n\\nWe acknowledge that our bound does not vanish when $J$-loss defined using sign-PGD. However, we note that although the behavior of our bound in this specific setting is similar to that in [1,2], our setup for this result and the setup of the results in [1,2] should not be taken as the same. For example, the main result in [2] (Theorem 5.1) states, using our notations, that for any $J$-loss satisfying the various assumptions in [2], the geneneralization bound does not vanish. However, in our paper, we show that only for $J$-loss defined using sign-PGD, our bound does not vanish. \\n\\nFor $J$-losses satisfying the conditions of Theorem 5.1 in [2], we show that there exists a broad family of $J$-losses for which our generalization bound does vanish. That is, for this family of $J$-losses (namely, those having bounded expansiveness), our results are stronger than that in [1, 2]. This is further elaborated when we address your next comment.\\n\\n- \\\"The difference with the previous bounds is not clearly covered in the paper. The proof technique and assumptions are very similar to [1,2], nevertheless the bounds in [1,2] are not presented in the work and there is no discussion about how to integrate previous bounds with the expansiveness setup introduced in this work, making it difficult to assess the contributions. It would be nice to add a discussion about which role expansiveness plays in the result of [1,2], i.e., can it result in a vanishing upper bound with $n$? It would also be good to have a table comparing the different upper bounds.\\\"\\n\\nSince the work of [1] is built upon the framework in [2], we here only present the connections and differences between [2] and our work. \\n\\n**Summary of generalization bounds in [2]:** First we would like to note that our problem setting includes the setting in [2] as a special case. Specifically, the generalization gap discussed in [2] corresponds to the generalization gap ${ \\\\rm GG} _ {n} (J^*, A _ {J^*})$ defined in our work, where the perturbations in both $J-$loss and the AT algorithm are taken as the optimal adversarial perturbation $J^{*}$.\\n\\nOur work and [2] both take the Lipschitzness and smoothness conditions of the standard loss $f$ as the starting point, but derive generalization bounds from different perspectives: the work in [2] defines and proposes to study the $\\\\eta-$approximate smoothness of the adversarial loss ( $f^*$ in our notation) and derive generalization bounds based on this quantity. Our work define the notion of $c-$expansiveness of the perturbation operator (e.g., $J^{*}$) and show how this quantity affects generalization performance of AT.\\n\\nFor completeness, we here present the definition of $\\\\eta-$approximate smoothness, rewrite the Definition 4.1 of [2] using our notations.\\n\\n**Definition** ($\\\\eta-$approximate smoothness [2]) A loss function $f_J$ is called $\\\\eta-$approximately $\\\\beta-$gradient Lipschitz if there exists $\\\\beta>0$ and $\\\\eta>0$ such that for any $(x,y)\\\\in {\\\\cal X}\\\\times {\\\\cal Y}$ and for any $w_1, w_2 \\\\in {\\\\cal W}$ we have \\n\\u200b $$ \\\\Vert \\\\nabla f_J(w_1, x, y)-\\\\nabla f_J(w_2, x, y) \\\\Vert \\\\le \\\\beta \\\\Vert w_1-w_2 \\\\Vert + \\\\eta$$ \\nThe work in [2] then derives generalization bounds for loss functions that are $\\\\eta-$approximately smooth. For example, after replacing the notations in [2] with ours, Theorem 5.1 of [2] shows that if $f_{J}$ is $\\\\eta-$approximately $\\\\beta-$gradient Lipschitz, convex in $w$ for all $(x,y)$ and the standard loss $f$ satifies the same Lipschitz condition in (6) of our paper (or Assumption 4.1. in [2]), then their bound in Theorem 5.1 becomes\\n\\u200b $${\\\\rm GG}_ {n} (J, A _ {J}) \\\\le \\\\frac{L _ {\\\\cal W}}{\\\\beta}\\\\eta T + \\\\frac{2L_{\\\\cal W}^{2}}{n\\\\beta}T$$ \\nThe authors of [2] show that the adversarial loss $f^*$ satisties $\\\\eta$-approximately $\\\\beta$-gradient Lipschitz with $\\\\eta = 2\\\\Gamma _{\\\\cal X}\\\\epsilon$ so that the generalization bound above gives their generalization bound for adversarial training. In their determination of the $\\\\eta$ parameter, they have assumed that the standard loss $f$ satisfies certain Lipschitz and smoothness condition; this condition is effectively equivalent to our condition (7).\\n\\nIt is worth noting that the generaliztion bounds derived based on the approximate smoothness parameter $\\\\eta$ contain a term unrelated to the sample size $n$ because of the independence of $\\\\eta$ on $n$.\"}", "{\"comment\": \"Dear reviewer,\\n\\n\\nThank you for revisiting the paper and raising the score. We are however surprised to see your comment that \\\"most of my concerns regarding presentation and contributions have not been met unfortunately\\\". Based on our discussions and your responses, we believe that most of your concerns have been effectively addressed, as can be seen from a revisit of your initial comments below:\\n\\n- \\\"Poor presentation ...\\\"\\n\\n In your earlier comments, you noted, *\\\"This is a matter of style... This is not crucial.\\\"* where you indicate that this is less critical for your evaluation. \\n\\n- \\\"Introduction of many ad-hoc terms to denote well-established concepts ...\\\"\\n\\n Similarly, you mentioned, *\\\"Again, this is not crucial.\\\"* Hence, we interpreted this as a minor stylistic issue rather than a significant drawback.\\n\\n- \\\"Unclear contributions...\\\"\\n\\nIn response to our earlier clarification, you remarked, *\\\"The included discussion seems to be good. I also welcome the fact that the comments on the smoothness assumptions of prior works have been rectified.\\\"* We interpreted this as an acknowledgment that the concern has been addressed.\\n\\n- \\\"Unclear motivation for experiments ...\\\"\\n\\nBased on your feedback \\\"Indeed. My critique of 'However, they do not show any benefits in terms of robustness with the new method.' was not relevant.\\\", we suppose this concern has been partially addressed in our first round of clarification.\\n\\nYou then mentioned that \\\"However, the rest of my critique still holds as far as I see....\\\". We have provided further clarifications addressing the remaining concerns and would like to know if there are any aspects still needing elaboration.\\n\\n- \\\"Results of Section 6...\\\"\\n\\nIn your initial feedback, you mentioned, \\\"Thanks for the clarification... The experiments now make more sense.\\\" Furthermore, in your current comment, you acknowledged, \\\"I do see some value in the experiments of Section 6 (which I missed in my original evaluation).\\\" Based on this, your concerns about Section 6 appear to have been resolved.\\n\\n\\nIn summary, from your previous feedback during the discussions, we felt that most of your concerns have been addressed adequately. Now that you suggest this is not the case, we will appreciate that you elaborate on your concerns so that we can make our best effort clarifying.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the rebuttal period is ending, we hope to hear your comments on our reply to your review. In case you have further questions, please let us know in your earliest convenience, so that we can make an effort to respond before the rebuttal period ends. If we have addressed all your concerns, please consider raising your score. Thank you.\"}", "{\"comment\": \"Thank you for your response. I will try to be to the point and reply to your answers:\\n\\n> we would like to note that in Theorem 4.1 of our original manuscript, we have explicitly stated at the beginning that \\\"Suppose that $f$ satisfies the conditions (6) and (7). \\\". The constants $\\\\beta, \\\\Gamma_X$ are introduced in conditions (6) and (7).\\n\\nThis is a matter of style, but I consider it good practice to remind the reader of previously defined quantities in the statement of a theorem (rather than only directing them to previous definitions). This is not crucial.\\n\\n> we suspect that the reviewer might have mis-read this line of statement: this notion is not the standard generalization gap for a predictor trained robustly, but includes it as a special case. Specifically, the \\\"standard generalization gap of a predictor trained robustly\\\" is a type of \\\"mis-matched generalization gap\\\" when $J=J^{\\\\rm id}$ and $\\\\pi$ is a particular perturbation.\\n\\nRight. In line 258, you specifically talk about the case where $J=J^{\\\\rm id}$ and $\\\\pi$ is a particular perturbation. The point is that you could simply call this quantity as \\\"standard generalization gap of a predictor trained robustly\\\", instead of \\\"mis-matched generalization gap\\\". Again, this is not crucial.\"}", "{\"summary\": \"This paper studies the algorithmic stability of adversarial training with a focus on how the inaccuracy of PGD attack affects the stability. Theoretical analysis are provided to justify that the sign function in PGD updates can significantly harm the stability, leading to a worse generalization (gap).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) This paper is clear and easy to understand.\\n\\n(2) This paper studies the algorithmic stability of adversarial training from an interesting angle of the PGD attack.\\n\\n(3) Experiments demonstrate that using tanh to replace sign function can improve the generalization performance.\", \"weaknesses\": \"(1) While the paper considers the algorithmic stability of PGD attack, a missing component is the convergence of PGD attack. Intuitively, if we always use a fixed attack direction, then the algorithmic stability is not largely affected by the attack. However, the attack is not efficient. When using PGD attack, there is supposed to have a trade-off: with more iterations, the stability gets worse, but the attack becomes stronger. If at the test stage the attacker uses a very strong attack, e.g., AA attack, then balancing the attack effectiveness and stability is essential to obtain a better robust testing performance. Could the authors elaborate more from this perspective?\\n\\n(2) Please highlight the technical challenges for the theoretical contributions in this paper.\\n\\n(3) Please consider using some SOTA methods from RobustBench, e.g., leveraging synthetic data in adv training, to conduct the experiments. While improving the sign function seems to be helpful as illustrated by this paper, there is no enough evidence to demonstrate that this is one of the key issues in adversarial training.\\n\\n(4) Minor: In one line of research, to save the computation budget of adversarial training, algorithms have been proposed to explore fast adversarial training: instead of calculating the attack at each iteration, they update the attack for each sample for one step at each iteration, e.g., \\n\\nCheng, Xiwei, Kexin Fu, and Farzan Farnia. \\\"Stability and Generalization in Free Adversarial Training.\\\" arXiv preprint arXiv:2404.08980 (2024).\\n\\nI'm wondering if the authors can provide some comments on algorithms of this type.\", \"questions\": \"Please address my comments above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Authors study the generalization of adversarial training with projected gradient descent. They provide uniform stability upper bounds of the generalization gap that consider the expansiveness of the adversarial attack operator. In the particular case of replacing the $\\\\text{sign}(x)$ operation in the PGD attack with $\\\\text{tanh}(\\\\gamma\\\\cdot x)$, they can show that their generalization upper bound decays to zero with the number of samples $n$ for a finite value of $\\\\gamma$. The experimental evaluation shows the tradeoff between generalization and robustness given by $\\\\gamma$, where smaller values of $gamma$ obtain good generalization but poor robustness and the opposite happens for larger $\\\\gamma$.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Simple theory and easy to follow paper. I didn\\u2019t read the proofs in full detail, but the main paper is easy to follow and the analysis and experiments are reasonable.\", \"I found the analysis of the expansiveness of the adversarial attack operator very interesting and up to my knowledge, this has not been considered before.\", \"When considering finite $\\\\gamma$, authors can show that their generalization upper bound converges to zero with increasing number of training samples $n$.\"], \"weaknesses\": [\"Authors claim that their upper bound converges to zero with increasing number of training samples $n$ and the ones of [1,2] do not. This is misleading as [1,2] do not consider the expansiveness of the attack operator and the bound provided in this work, in the same setup as [1,2] does not vanish with $n$ (see lines 369-371).\", \"The difference with the previous bounds is not clearly covered in the paper. The proof technique and assumptions are very similar to [1,2], nevertheless the bounds in [1,2] are not presented in the work and there is no discussion about how to integrate previous bounds with the expansiveness setup introduced in this work, making it difficult to assess the contributions. It would be nice to add a discussion about which role expansiveness plays in the result of [1,2], i.e., can it result in a vanishing upper bound with $n$? It would also be good to have a table comparing the different upper bounds.\"], \"questions\": [\"Some small typos:\", \"Line 190: pernutation -> permutation\", \"Line 267: exist -> exists\", \"Line 483: It then curious -> It is then curious\", \"How does the bound in [1,2] change when considering $\\\\text{tahn}_{\\\\gamma}$-PGD?\", \"Have you tried larger $\\\\gamma$s? $\\\\gamma = 10^{5}$ seems to be very far from sign-PGD in Figure 2 (b). It would be interesting to see how does $\\\\text{tahn}_{\\\\gamma}$-PGD behave when it\\u2019s close to sign-PGD.\", \"Can you construct an experiment where the dependence on $n$ is displayed? For example taking a smaller number of samples from the studied datasets in order to see how the generalization gap grows. A synthetic distribution could also be employed where more and more samples are drawn and the gap decreases to zero for finite $\\\\gamma$.\", \"**References:**\", \"[1] Wang et al., Data-dependent stability analysis of adversarial training, ArXiv 2024.\", \"[2] Xiao et al., Stability Analysis and Generalization Bounds of Adversarial Training, NeurIPS 2022\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> we note that this has been clarified in our response to Reviewer e9T9 and we also have incorporated the detailed comparison with Xiao et al. (2022b) and Wang et al. (2024) into the revised manuscript from Line 1392 in Appendix F. We invite the reviewer to read that discussion.\\n\\nThe included discussion seems to be good. I also welcome the fact that the comments on the smoothness assumptions of prior works have been rectified.\\n\\n> the purpose, as we stated in the original paper (line 387 in the current manuscript), is \\\"To investigate how the expansiveness property affects generalization\\\". It is not proposing a new AT algorithm. The experiments are designed to empirically validate our theoretical analysis.\\n\\nIndeed. My critique of \\\"However, they do not show any benefits in terms of robustness with the new method.\\\" was not relevant. However, the rest of my critique still holds as far as I see: \\\"Furthermore, the fact that [...] for finding worst-case perturbations.\\\"\"}", "{\"title\": \"Reply to your concerns and questions (part 2)\", \"comment\": \">- Unclear contributions: The paper does not clearly describe how the derived bounds differ from those of Xiao et al. (2022b) and Wang et al. (2024). In particular, the bounds from these prior works are not presented, and they are solely critiqued on the basis that they do not vanish with increasing sample size. Furthermore, the criticism of the non-smoothness of the loss function adopted in prior work seems unfounded (\\\"The source of the non-smoothness is, however, not explained in their work\\\"). Even for linear models under $\\\\ell_\\\\infty$ perturbations, a cross-entropy loss is non-smooth. Hence, the property of non-smoothness is well-motivated.\\n\\n\\n\\nRegarding your comment that \\\"The paper does not clearly describe how the derived bounds differ from those of Xiao et al. (2022b) and Wang et al. (2024)...\\\", we note that this has been clarified in our response to Reviewer e9T9 and we also have incorporated the detailed comparison with Xiao et al. (2022b) and Wang et al. (2024) into the revised manuscript from Line 1392 in Appendix F. We invite the reviewer to read that discussion.\\n\\nRegarding your next critique on our statement that \\\"The source of the non-smoothness is, however, not explained in their work\\\", we believe there might have been some misinterpretation. To clarify, this statement is not intended as a critique of Xing et al. (2021) for assuming non-smoothness of the adversarial loss. Rather, it highlights that the non-smoothness property used in Xing et al (2021) is not characterized at any quantitative level. In fact, the development of Xing et al (2021) does not rely on any quantitative specification of the non-smoothness and directly invokes the previous result of Bassily et al (2020) .\\n\\nNonetheless we recognize that this statement wasn't clear enough, and we have revised it to \\\"The non-smoothness is however not quantitatively characterized in their work\\\".\\n\\n\\n\\n\\n\\n\\n\\n>- Unclear motivation for experiments: The authors seem to identify the sign function in the solution of the inner maximization problem in the robust objective as problematic, and they suggest an alternative based on a smooth approximation. However, they do not show any benefits in terms of robustness with the new method. Furthermore, the fact that for small $\\\\gamma$ we do not observe overfitting and the generalization gap is small appears to be a trivial observation, as the evaluation basically approaches the standard case of no perturbations. In short, it is not a good method for finding worst-case $\\\\ell_\\\\infty$ perturbations.\\n\\n\\nWe suspect that the reviewer misunderstood this part of the paper. \\n\\nThe motivation of introducing ${\\\\rm tanh}_{\\\\gamma}$ function in our experiments stems directly from the theoretical analysis presented in the previous section, where the purpose, as we stated in the original paper (line 387 in the current manuscript), is \\\"To investigate how the expansiveness property affects generalization\\\". It is not proposing a new AT algorithm. The experiments are designed to empirically validate our theoretical analysis.\"}", "{\"summary\": \"This paper studies generalization bounds for robust training by leveraging the framework of uniform stability. The authors analyze $\\\\ell_\\\\infty$ perturbations and derive several upper bounds on the generalization gap of predictors. They then investigate experimentally the performance of adversarially trained models using several algorithms to solve the inner maximization problem of the robust objective.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper studies an interesting problem: the large generalization gap of robust empirical risk minimization (adversarial training) in neural networks. This work leverages the framework of uniform stability, which has been rather unexplored in the robust learning community, and could potentially provide insights on this topic. Based on the theoretical analyses, the authors propose a sensible relaxation of the commonly used PGD attack, using the $tanh$ function instead. Finally, I agree with the authors that the optimization algorithm in the inner maximization problem has not received adequate attention in the literature, and thus, its study is welcome (despite its limitations\\u2014see below).\", \"weaknesses\": [\"The paper is unfortunately difficult to follow, making it challenging to assess its content due to presentation issues. Furthermore, the conclusions seem unoriginal to me. In particular, I identified the following weaknesses:\", \"Poor presentation: There are many instances where the text is not polished, with numerous grammatical errors (see the non-comprehensive list at the end of the Weaknesses). Additionally, the presentation of the technical results could be substantially improved (e.g., Theorem 4.1: remind the reader of the constants $\\\\beta, \\\\Gamma_X$). Furthermore, the authors should mention in the introduction that all of their results are solely about $\\\\ell_\\\\infty$ perturbations.\", \"Introduction of many ad-hoc terms to denote well-established concepts: In many places, the authors use obscure words to define concepts that are well-defined in learning theory. For instance, lines 258-259: \\\"mis-matched generalization gap\\\" \\u2014 this is just the standard generalization gap of a predictor trained robustly. Several such choices make it difficult for readers to comprehend the contributions of this work. Similarly, with so-called \\\"RG-PGD\\\" and the \\\"expansiveness property\\\" (a relaxed notion of Lipschitz continuity).\", \"Unclear contributions: The paper does not clearly describe how the derived bounds differ from those of Xiao et al. (2022b) and Wang et al. (2024). In particular, the bounds from these prior works are not presented, and they are solely critiqued on the basis that they do not vanish with increasing sample size. Furthermore, the criticism of the non-smoothness of the loss function adopted in prior work seems unfounded (\\\"The source of the non-smoothness is, however, not explained in their work\\\"). Even for linear models under $\\\\ell_\\\\infty$ perturbations, a cross-entropy loss is non-smooth. Hence, the property of non-smoothness is well-motivated.\", \"Unclear motivation for experiments: The authors seem to identify the sign function in the solution of the inner maximization problem in the robust objective as problematic, and they suggest an alternative based on a smooth approximation. However, they do not show any benefits in terms of robustness with the new method. Furthermore, the fact that for small $\\\\gamma$ we do not observe overfitting and the generalization gap is small appears to be a trivial observation, as the evaluation basically approaches the standard case of no perturbations. In short, it is not a good method for finding worst-case $\\\\ell_\\\\infty$ perturbations.\", \"Results of Section 6: The authors mention the connection between adversarial training and steepest descent methods, but it is clear that this has been the motivation for the iterative solution of the inner maximization problem since the introduction of adversarial training. Furthermore, the experiments fail to highlight anything new, in my understanding (basically optimising an $\\\\ell_\\\\infty$ objective yields better coverage against $\\\\ell_\\\\infty$ attacks).\", \"Grammatical errors (non comprehensive list):\", \"in the abstract: \\\"These expansiveness parameters appear not only govern the vanishing rate of the generalization error but also govern its scaling constant.\\\"\", \"line 190: \\\"perturnation\\\" -> perturbation\", \"line 202: \\\"related with\\\" -> related to\", \"line 241: \\\"draw\\\" -> draws\", \"line 245, 256: \\\"descend\\\" -> descent\", \"line 316: \\\"independent with\\\" -> independent of\", \"lines 536-537: \\\"Like all up-bound based theoretical results, such an approach is adequate for understanding performance guarantees but may be inadequte to explain poor generalization.\\\"\"], \"questions\": [\"Which part of the analysis bypasses prior work and makes the bounds decay as $O(\\\\frac{1}{n})$?\", \"In the experiments of Section 6, why do you change the threat model (finding different values of $\\\\lambda_p$)? One could imagine experiments with different steepest descent algorithms for the solution of the inner maximization problem, where the threat model does not change (i.e., projecting every time to the same $\\\\ell_\\\\infty$ balls around the original points). Of course, different steepest ascent algorithms (besides the commonly used sign gradient ascent) will perform worse in finding adversarial examples, so the number of inner iterations should be adjusted appropriately. However, I believe this could be an interesting experiment to conduct.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to your comments\", \"comment\": \"Thank you very much for your careful review and your positive feedback!\", \"regarding_your_comments\": \">- It seems that the framework in this paper can not provide a proper description for the $Q(c^*)$ term, we need to calculate it according to the concrete choice of $J, \\\\pi$. However, to make this framework more significant, examples of how to calculate $Q(c^*)$ and how to choose $c^*$ should be added. Note: I mean examples in practice but not examples with overly simple assumptions (such as the assumption on the second moment in lines 293-294 and the assumption in Corollary 5.2 that a term is bounded by $B$ with probability 1). Just like the VC dimension, if we can not calculate the VC dimension of some hypothesis classes, the bound with the VC dimension is meaningless.\\n\\n\\nWe agree that it would be nice to have a better characterization of the term $Q(c^*)$. In this paper, the introduction of this term is to bring in a better handle for driving the bound to decay at the rate of $1/n$. It mainly serves as an analytic technique at the moment. Nonetheless we appreciate your suggestion and will look into the possibility of better characterizing the term when more structural assumptions are incorporated in the loss function or in the network structure.\\n\\n\\n\\n\\n>- Typo: in line 149, it should be \\\"the label space $\\\\mathcal{Y}$ is finite\\\"\\n\\n\\n\\nThank you very much for pointing out this typo. We have fixed the typo.\\n\\n\\n\\n>- A minor issue: many equations in this paper are numbered, in fact, the equations that are not used later need not be numbered. For example, equation (2) is not used.\\n\\n\\n\\nThank you very much for pointing this out. For the time being, to avoid causing any inconvenience for the reviewers during the rebuttal phase, we will retain these equation numbers, allowing the reviewers to easily refer to specific equations in the paper. The unused equation numbers will be removed in the final revised version of the paper.\\n\\n>- In lines 87-88, the paper says that \\\"the bound convergence to a constant, this helps explain the robust overfitting phenomenon\\\". In fact, a lower bound of the generalization gap that converges to a constant can explain overfitting. However, an upper bound can not because your bound may not be tight enough.\\n\\n\\nYou are correct and we fully agree. The statement lacks rigor. We have removed the statement in our manuscript.\"}", "{\"comment\": \"Regarding your last question:\\n\\n>(4) Minor: In one line of research, to save the computation budget of adversarial training, algorithms have been proposed to explore fast adversarial training: instead of calculating the attack at each iteration, they update the attack for each sample for one step at each iteration, e.g.,\\nCheng, Xiwei, Kexin Fu, and Farzan Farnia. \\\"Stability and Generalization in Free Adversarial Training.\\\" arXiv preprint arXiv:2404.08980 (2024).\\nI'm wondering if the authors can provide some comments on algorithms of this type.\\n\\n\\nThanks for pointing out this paper to us. The paper studies the generalization performance of free adversarial training and the fast adversarial training (AT). Their theoretical analysis and empirical results suggest that Free AT and Fast AT allows a better generalization performance compared with the vanilla PGD-AT.\\n\\nUnder our framework and with the notations in our paper, Fast AT can be treated as $A_{ \\\\pi}$ with $\\\\pi$ taken as the one-step PGD. As suggested in our Lemma 5.1, PGD with less number of steps $K$ tends to have a smaller expansiveness parameter and the corresponding AT tends to achieve a smaller generalization gap. Our theory therefore also supports the conclusion in that paper, namely, that Fast AT tends to achieve a better generalization performance.\\n\\nFree AT, however, has a quite different dynamics from the vanilla PGD-AT discussed in our paper. Specifically, Free AT may be regarded as a modification of Fast AT, where 1-step PGD does not start from the orignial example $x$, but starts from the previous perturbed version $x^{\\\\rm adv}$ of $x$, until $x$ has undergone $K$ steps of perturbation. This dynamics does not fit immediately to our framework. However, a similar approach as we propose here may be adapted to analyzing Free AT.\"}", "{\"comment\": \"No, I think I understood what you wrote in the paper in the first place. Perhaps my comment about steepest descent has not been clear enough. I understand that that Section is about the update rule, and not the projection step. Maximizing the linear approximation of the loss around the current iterate around a ball induced by a norm $\\\\|\\\\cdot\\\\|$ is equivalent to using steepest descent (ascent) on the loss with respect to this norm -- see, for instance, Section 9.4 in \\\"Convex Optimization by Boyd and Vandenberghe\\\".\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Dear Authors,\\n\\nI am happy with the clarifications and appreciate your efforts in this rebuttal. I was expecting the analysis regarding the comparison with [1,2] to be included in the revised version. I believe this **must** be included in the manuscript. Overall I am satisfied and have increased my score.\"}", "{\"title\": \"Reply to your concerns and questions (part 3)\", \"comment\": \">- Results of Section 6: The authors mention the connection between adversarial training and steepest descent methods, but it is clear that this has been the motivation for the iterative solution of the inner maximization problem since the introduction of adversarial training. Furthermore, the experiments fail to highlight anything new, in my understanding (basically optimising an $\\\\ell_\\\\infty$ objective yields better coverage against $\\\\ell_\\\\infty$ attacks).\\n\\n\\n\\nThere have been a misunderstanding regarding Section 6 of our paper. This section is not intended to explain \\\"the connection between adversarial training and steepest descent methods,\\\" as stated by the reviewer. The beginning of the section intends to explain why the sign function (or the sign gradient) is specifically used in PGD rather than the raw gradient. Note that this peculiar choice of sign function is not related to the $\\\\infty-$norm ball used in the projection step of PGD. We will elaborate this below. \\n\\nIn Section 6, we investigate the effects of replacing the sign function with alternative operators $G_p$. Importantly, the projection step in PGD consistently operates with respect to the same $\\\\infty-$norm ball, regardless of the choice of $G_p$.\\n\\n\\n\\nEach iteration of PGD involves a gradient ascent step followed by a projection step. The emergence of the sign function arises from treating the **gradient ascent step** as maximizing a locally linear approximation of the loss function. \\n\\n\\n\\nSpecifically, at the $k-$th iteration of PGD where $x^ k$ is to be updated by PGD, the loss function is approximately considered to be linear within a $p-$norm ball (i.e., $ \\\\mathbb{B} _ p (x^ k , \\\\lambda _ p) $ ) around $x^ k$ (not around $x$) with radius $\\\\lambda _ p$ . If this norm ball is chosen as $\\\\mathbb{B} _ {\\\\infty}(x^ k, \\\\lambda)$, the sign function naturally arises in the gradient ascend step of the PGD. However, this does not mean choosing $\\\\mathbb{B} _ p (x^ k, \\\\lambda _ p)$ as $\\\\mathbb{B} _ {\\\\infty}(x^ k, \\\\lambda)$ is the only option. We therefore investigate the norm ball $\\\\mathbb{B}_{p}(x^k, \\\\lambda_p)$ with other choices of $p$. This merely corresponds to the linear approximation of the loss function within different local ranges. Consequently, $G_p$ in different forms results.\\n\\nIt is also important to emphasize that both $\\\\mathbb{B} _ {p}(x^ k, \\\\lambda _ p)$ and $\\\\mathbb{B} _ {\\\\infty}(x^ k, \\\\lambda)$ are different from $\\\\mathbb{B} _ {\\\\infty}(x, \\\\epsilon)$ that is used in the projection step of PGD. Regardless the choice of $\\\\mathbb{B} _ {p}(x^ k, \\\\lambda _ p)$ in determing $G _ p$ for the gradient ascend step of PGD, the projection step always operates using the same $\\\\infty-$norm ball $\\\\mathbb{B}_{\\\\infty}(x, \\\\epsilon)$ . We suspect the reviewer might have confused these two types of norm balls when first reading our paper, leading to a misunderstanding of Section 6.\\n\\n\\nFinally, to the best of our knowledge, the perspective of replacing the sign function with $G_p$ and analyzing its effects is novel. Contrary to the comment that this approach \\\"fails to highlight anything new,\\\" we believe this original angle offers valuable insights into understanding the generalization of AT.\"}", "{\"title\": \"Reply to your concerns and questions (part 1)\", \"comment\": \"Thank you for taking the time to read our paper. We acknowledge that our manuscript contains minor typos and has room for improvement. Given the extensive use of mathematical notations, we understand that fully grasping the content may require some patience.\\n\\nHowever, we would like to note that the other three reviewers have found our paper to be \\\"clear\\\", \\\"easy to follow\\\" and \\\"easy to understand\\\", or \\\"the writing is good\\\". We have now fixed the typos and kindly invite you to revisit the paper.\", \"for_your_comments_and_questions\": \">- Poor presentation: There are many instances where the text is not polished, with numerous grammatical errors (see the non-comprehensive list at the end of the Weaknesses). Additionally, the presentation of the technical results could be substantially improved (e.g., Theorem 4.1: remind the reader of the constants $\\\\beta, \\\\Gamma_X$). Furthermore, the authors should mention in the introduction that all of their results are solely about $\\\\ell_\\\\infty$ perturbations.\\n\\nWe have fixed the typos and the grammatical errors that you mentioned. \\n\\nRegarding your next critique that \\\" the presentation of the technical results could be substantially improved (e.g., Theorem 4.1: remind the reader of the constants $\\\\beta, \\\\Gamma_X$).\\\", we would like to note that in Theorem 4.1 of our original manuscript, we have explicitly stated at the beginning that \\\"Suppose that $f$ satisfies the conditions (6) and (7). \\\". The constants $\\\\beta, \\\\Gamma_X$ are introduced in conditions (6) and (7). \\n\\n\\n\\nRegarding your suggestion that \\\"the authors should mention in the introduction that all of their results are solely about $\\\\ell_\\\\infty$ perturbations.\\\", this has been clearly stated in the original version of the paper (Section 3, current line number 152), as stated \\\"Each adversarial attack (or adversarial perturbation) on input $x$ is assumed to live in an $\\\\infty$-norm ball \\\"\\n\\nWe will consider emphasizing this in the Introduction section of the final manuscript, provided that page limit allows. \\n\\n>- Introduction of many ad-hoc terms to denote well-established concepts: In many places, the authors use obscure words to define concepts that are well-defined in learning theory. For instance, lines 258-259: \\\"mis-matched generalization gap\\\" \\u2014 this is just the standard generalization gap of a predictor trained robustly. Several such choices make it difficult for readers to comprehend the contributions of this work. Similarly, with so-called \\\"RG-PGD\\\" and the \\\"expansiveness property\\\" (a relaxed notion of Lipschitz continuity).\\n\\n\\n\\nRegarding your critique that \\\"For instance, lines 258-259: 'mis-matched generalization gap' \\u2014 this is just the standard generalization gap of a predictor trained robustly.\\\", we suspect that the reviewer might have mis-read this line of statement: this notion is not the standard generalization gap for a predictor trained robustly, but includes it as a special case. Specifically, the \\\"standard generalization gap of a predictor trained robustly\\\" is a type of \\\"mis-matched generalization gap\\\" when $J=J^{\\\\rm id}$ and $\\\\pi$ is a particular perturbation. \\n\\nWe acknowledge that our paper introduces several new terms for the ease of reference and improved clarity. They seem to be have been appreciated by other reviewers.\"}", "{\"comment\": \"**The limitation of the framework in [2]:** We would like to note that when the standard loss $f$ satisfies the Assumption 4.1 in [2] (or condition (7) in our paper), in fact every $J-$loss (for any arbitrary $J$, including but not limited to $J^*$) is $2\\\\Gamma_{\\\\cal X}\\\\epsilon-$approximately smooth. To see this:\\n\\n$\\\\Vert \\\\nabla_{w_1}f_{J}(w_1, x, y)-\\\\nabla_{w_2} f_{J}(w_2, x, y)\\\\Vert$\\n\\n$=\\\\Vert\\\\nabla_{w_1}f(w_1, J(x;y, w_1), y)-\\\\nabla_{w_2}f(w_2, J(x;y, w_2), y)\\\\Vert$\\n\\n$\\\\le \\\\beta \\\\Vert w_1 - w_2 \\\\Vert + \\\\Gamma_{\\\\cal X} \\\\Vert J(x;y, w_1) - J(x;y, w_2) \\\\Vert \\\\quad (1)$\\n\\n$\\\\le \\\\beta \\\\Vert w_1 - w_2 \\\\Vert + \\\\Gamma_{\\\\cal X} (\\\\Vert J(x;y, w_1) - x \\\\Vert+ \\\\Vert x-J(x;y, w_2) \\\\Vert)\\\\quad (2)$\\n\\n$\\\\le \\\\beta \\\\Vert w_1 - w_2 \\\\Vert + 2\\\\Gamma_{\\\\cal X}\\\\epsilon \\\\quad (3)$\\n\\nwhere inequality (1) follows from Assumption 4.1 in [2]. Inequality (2) and (3) are derived by using the triangle inequality and the condition that $\\\\Vert J(x;y,w)-x \\\\Vert \\\\le \\\\epsilon$ for any $w \\\\in {\\\\cal W}$.\\n\\nDue to the fact that all the $J-$losses have the same approximate smoothness parameter $\\\\eta$, the generalization bounds derived for different $J-$loss, based on the framework in [2], will be the same. This type of generalization bound ignores the influence of the perturbations used in AT on generalization and is therefore unable to explain the experimental observations in our work where different choices of perturbations indeed have distinct impact on generalization.\\n\\n**Difference of our approach from [2]:** In this paper, we depart from the approach of [2], which ignores the specific properties of perturbation $J$, and take a different route which considers the impact of $J$ measured via its expansiveness parameter. Our approach allows us to analyze how different perturbations used in AT affect its generalization performance. Our bounds, derived based on the expansiveness parameter, also avoid having the non-vanishing term (like the first term in Theorem 5.1 of [2]) when the expansiveness parameter is finite. Only in the case when the expansiveness parameter is unbounded, our results are similar to [2], where the generalization bound contains a non-vanishing term.\\n\\nThe UAS parameter of AT characterizes the gap $\\\\Vert w-w' \\\\Vert$ where $w=A(S)$ and $w'=A(S')$ are the model parameters produced by the AT algorithm on two nearly identical datasets $S\\\\simeq S'$. Intuitively, the difference between $w$ and $w'$ arises from the single different example in $S$ and $S'$ (where larger training sample size $n$ tends to reduce the probability of using that single different example to update model parameters in AT), and gets \\\"magnified\\\" by the perturbation $J$ along the AT training trajectory. The expansiveness parameter of $J$ that we define effectively captures this \\\"magnification\\\" factor. Thus, the eventual difference between $w$ and $w'$ depends on not only the sample size $n$ but also the expansiveness parameter of $J$. Then our exploitation of the expansiveness of $J$ brings sample size $n$ into our bound.\"}" ] }
2GcR9bO620
I Can Hear You: Selective Robust Training for Deepfake Audio Detection
[ "Zirui Zhang", "Wei Hao", "Aroon Sankoh", "William Lin", "Emanuel Mendiola-Ortiz", "Junfeng Yang", "Chengzhi Mao" ]
Recent advances in AI-generated voices have intensified the challenge of detecting deepfake audio, posing risks for scams and the spread of disinformation. To tackle this issue, we establish the largest public voice dataset to date, named DeepFakeVox-HQ, comprising 1.3 million samples, including 270,000 high-quality deepfake samples from 14 diverse sources. Despite previously reported high accuracy, existing deepfake voice detectors struggle with our diversely collected dataset, and their detection success rates drop even further under realistic corruptions and adversarial attacks. We conduct a holistic investigation into factors that enhance model robustness and show that incorporating a diversified set of voice augmentations is beneficial. Moreover, we find that the best detection models often rely on high-frequency features, which are imperceptible to humans and can be easily manipulated by an attacker. To address this, we propose the F-SAT: Frequency-Selective Adversarial Training method focusing on high-frequency components. Empirical results demonstrate that using our training dataset boosts baseline model performance (without robust training) by 33%, and our robust training further improves accuracy by 7.7% on clean samples and by 29.3% on corrupted and attacked samples, over the state-of-the-art RawNet3 model.
[ "Deepfake audio detection", "Audio augmentations", "Frequency-Selective Adversarial Training" ]
Accept (Poster)
https://openreview.net/pdf?id=2GcR9bO620
https://openreview.net/forum?id=2GcR9bO620
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ydjcgr1P4p", "vVIjCHzjoM", "t6GH7eKfPg", "lEOWHSPv0s", "kd5AP2W4gt", "jryJvLGXq4", "hxSj5KuCs0", "eqKvi6sVpy", "eNUIJtj79i", "bUMow1NKWm", "YuLdQUS7xV", "WpibbkZuux", "VzNgTuYKpl", "VlpY29w0F2", "V5A3fHtbCN", "TJsqSKmy7n", "SsVje6vm18", "SZcLseThTx", "RxGgwxuqqk", "PrBFa5nFXl", "IZGjzf0X7Z", "FwD3v5NWNb", "DoeAWiGmLw", "BsrQWFcNy8", "9bLBEAkNn2", "81xG74C6bu", "5CfgGHG1Z1" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732699771552, 1732512073310, 1732510815144, 1732510626151, 1734531493245, 1737523404592, 1732620258163, 1732529013532, 1730566031243, 1732510697686, 1732619399395, 1732510152604, 1732510326625, 1732558729572, 1732511022663, 1733365738110, 1732509774106, 1732511745866, 1730556828630, 1732773298728, 1732575015783, 1729862159460, 1730675115549, 1732622219261, 1732658228843, 1732511317852, 1732586979890 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission584/Reviewer_NjHP" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Area_Chair_KT4S" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_NjHP" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_NjHP" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_ijUY" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_ijUY" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_EGHT" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "~Andrew_C._Cullen1" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_erKc" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_NjHP" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_EGHT" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_NjHP" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Authors" ], [ "ICLR.cc/2025/Conference/Submission584/Reviewer_erKc" ] ], "structured_content_str": [ "{\"comment\": \"Okay, that works for me. I maintain my score.\"}", "{\"title\": \"We ran the experiments asked by the reviewer and updated the paper. Edits are in blue.\", \"comment\": \"We sincerely thank all reviewers for their thoughtful and insightful feedback. We are encouraged by the recognition of the significance of our dataset and the effectiveness of our robust training methods. We ran all the experiments asked by the reviewers and integrated them to strengthen our paper. We indicate our paper edits in blue.\"}", "{\"title\": \"Thank you for your thoughtful review and useful suggestions\", \"comment\": \"We appreciate the reviewer's insightful feedback on our work. Key strengths highlighted include our comprehensive dataset, advanced deepfake detection method, and robust defense against high-frequency adversarial attacks. These contributions collectively push forward the boundaries in deepfake detection research. Further details and additional insights are discussed in the subsequent sections.\\n\\n**W - contribution on our dataset**\\n\\nThank you for your suggestions. We are glad that the reviewer recognize our dataset to be well organized and processed. As suggested, we will remove the claim of \\u201clargest\\u201d in the revised version. Our dataset comprehensively summarizes existing TTS and VC models, evaluating over 30 AI-synthesis models proposed within the last 2-3 years, including both open-source and commercial models. We utilized 14 of these models, which we believe are significant to the community and merit publication.\\n\\n**W - \\\"Compared with other defense method**\\n\\nThank you for the suggestion. We have included a discussion of the paper \\u2018High-frequency Adversarial Defense for Speech and Audio\\u2019 (2021) in the related work section and present comparative results in the experiments section. We appreciate Reviewer EGHT's recommendation for this comparison.\\n\\nWe compared our method against MAD smoothing[1], as suggested in the paper, and also included Gaussian smoothing[3], and adversarial training in the time domain[2], which are used as baselines in the reference paper. We employed the same parameters as those in [1]. The results of these comparisons are outlined below. As the table shows, our method outperforms all other methods.\\n\\n| Approach | Ori Real | Ori Fake | Ori Avg | Att(T) Real | Att(T) Fake | Att(T) Avg | Att(F) Real | Att(F) Fake | Att(F) Avg |\\n|------------------------|----------|----------|---------|-------------|-------------|------------|-------------|-------------|------------|\\n| RawNet3+RandAug | **97.6%** | 97.0% | 97.3% | 74.7% | 66.0% | 70.4% | 63.0% | 62.4% | 62.7% |\\n| +AT(Time) | 94.7% | 83.1% | 88.9% | 87.1% | 12.4% | 49.8% | 66.5% | 16.5% | 41.5% |\\n| +Gaussian smooth | 57.8% | 49.4% | 53.6% | 53.2% | 51.3% | 52.2% | 46.2% | 51.4% | 48.8% |\\n| +MAD smoothing | 96.2% | 91.4% | 93.8% | 60.2% | 59.0% | 59.6% | 56.3% | 57.4% | 56.8% |\\n| +F-SAT (Ours) | 97.5% | **98.4%** | **98.0%** | **90.2%** | **87.0%** | **88.6%** | **93.3%** | **92.8%** | **93.1%** |\\n\\nWe also found that robust deepfake voice detection lacks enough related work in recent years. However, with GenAI making the deepfake generation increasingly realistic and widespread, it is urgent and crucial to design tools to detect them in a robust way. As far as we know, Our work is the first to tackle deepfake audio detection in a real-world setup after the boom of recent GenAI. Our work will be important in publishing and mitigating the urgent risk from generated audio.\\n\\n[1] Olivier, Raphael, Bhiksha Raj, and Muhammad Shah. \\\"High-frequency adversarial defense for speech and audio.\\\" ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021\\n\\n[2] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, \\u201cTowards deep learning models resistant to adversarial attacks,\\u201d international conference on learning representations, 2018.\\n\\n[3] Jeremy Cohen, Elan Rosenfeld, and Zico Kolter, \\u201cCertified adversarial robustness via randomized smoothing,\\u201d in Proceed- ings of the 36th International Conference on Machine Learning, Kamalika Chaudhuri and Ruslan Salakhutdinov, Eds. 09\\u2013 15 Jun 2019, vol. 97 of Proceedings of Machine Learning Research, pp. 1310\\u20131320, PMLR.\\n\\n**W - DeepFakeVox-HQ can not indicate out-of-distribution generalization**\\n\\nApologies for any confusion. The DeepFakeVox-HQ test set featured in our paper comprises entirely out-of-distribution (OOD) samples. As noted in the caption for Table 2, our experiments include seven fake sources not utilized in the training set. This also applies to real samples from recent YouTube videos, which are OOD as well.\\n\\nFor future research, we provide additional test samples from seven fake sources that were included in the training set, although these were not used in the current study. We have clarified this at the beginning of the experiments section to prevent any further confusion.\"}", "{\"title\": \"Thank you for your thoughtful review and constructive suggestions\", \"comment\": \"We thank the reviewers for their insightful feedback. We are glad that the reviewers recognize the broad impact that releasing the DeepFakeVox-HQ dataset would have on the community. Additionally, our robust training method has been highlighted as a significant advancement in deepfake detection. We address further details and reviewer suggestions in the sections below.\\n\\n**W1 - Baseline models using contemporary adversarial methods would provide a fairer comparison**\\n\\nThank you for your question. RawNet3 is the best baseline model among all the baseline we study. As reviewer suggested, we've applied standard adversarial training to RawNet3 and compared it with F-SAT in Table 3. F-SAT exceed standard Adversarial training method by 9% on orign data and by Average of 43% on attacked data.\\n\\n| Approach | Origin Real | Origin Fake | Origin Avg | Attack (Time) Real | Attack (Time) Fake | Attack (Time) Avg | Attack (Frequency) Real | Attack (Frequency) Fake | Attack (Frequency) Avg |\\n|-------------------------------|-------------|-------------|------------|---------------------|---------------------|--------------------|--------------------------|--------------------------|-------------------------|\\n| RawNet3+AT(Time) | 94.7% | 83.1% | 88.9% | 87.1% | 12.4% | 49.8% | 66.5% | 16.5% | 41.5% |\\n| RawNet3+F-SAT | **97.5%** | **98.4%** | **97.9%** | **90.2%** | **87.0%** | **88.6%** | **93.3%** | **92.8%** | **93.1%** |\\n\\nWe will include experiments that apply adversarial training to all baseline models in the revised manuscript.\\n\\n**W2 - The rationale behind the reliance on high frequencies**\\nThe key rationale for our algorithm is that high frequency features help distinguish deepfake audio but are vulnerable to attacks, while low frequency features, though robust, are insufficient for training an effective detector. Our rationale is supported by three experiments:\\n\\n1. We find the state-of-art Detection Model automatically rely on high frequencies for decision-making, as shown in Figure 2.\\n2.\\tFigure 8(b) \\uff08Figure 7b in revised version\\uff09 demonstrates that adversarial attacks on high frequencies reduce model performance more significantly, suggesting their vulnerability. Additionally, human ears are less sensitive to high frequencies, making those attacks even harder to notice, highlighting the need for secure high-frequency features.\\n3. Low frequencies feature alone cannot adequately distinguish deepfake audio. As indicated in the table, employing a Biquad-lowpass filter to remove high frequency features during training and testing will reduces accuracy on original data.\\n\\n| Approach | Origin Real | Origin Fake | Origin Avg |\\n|--------------------|----------|----------|---------|\\n| RawNet3+RandAug | 97.6% | 97.0% | 97.3% |\\n| + Biquad-lowpass | **98.9%**\\uff08+0.7%\\uff09| 86.5%\\uff08-10.5%\\uff09 | 92.7%\\uff08-4.6%\\uff09|\\n| + F-SAT | 97.5% \\uff08-0.1%\\uff09 | **98.4%** (+1.4%) | **98.0%** (+0.7%) |\\n\\nTherefore, to develop a powerful and robust detector, retaining high-frequency features is essential, but they must be secured. This is the rationale behind our F-SAT approach.\\n\\n**W3 - F-SAT's training efficiency**\\n\\nThank you for your advice. We have included an evaluation of F-SAT's training efficiency in the appendix. F-SAT\\u2019s training efficiency is influenced by hyperparameters such as attack iterations and restart counts, which identify the worst-case perturbation. Drawing on insights from \\\"Fast Is Better Than Free: Revisiting Adversarial Training,\\\" we optimized these parameters by setting restarts to one and attack iterations to one or two, while employing a larger attack magnitude to enhance robustness. Training time is shown in the table below.\\n\\n| Description | w/o Adversarial Training | Standard Adversarial Training | F-SAT |\\n|----------------------------|---------------|--------------------------------|------------|\\n| Training Duration (Days) | 2 | 4.5 | 8 |\\n| Number of Epochs | 15 | 15 | 15 |\\n| Hardware Used | Single A100 GPU | Single A100 GPU | Single A100 GPU |\\n\\n\\nAlthough F-SAT requires longer training times, it improves accuracy by an average of 9% on original data and 43% on attacked data compared to Standard Adversarial Training. We should not compromise accuracy merely to accelerate training.\"}", "{\"metareview\": \"This paper makes significant advancements in deepfake detection through three key contributions: (1) Introduction of DeepFakeVox-HQ: A comprehensive dataset comprising high-quality synthetic and real speech recordings, designed to benefit the research community. (2) Development of an enhanced data augmentation method: Tailored to improve the performance of deepfake detectors. (3) Proposal of frequency-selective adversarial training to enhance detector robustness against adversarial attacks. The reviewers have recognized the paper as well-written, with the dataset being a valuable resource for the community and the proposed methods effectively improving detection accuracy. However, the final version should address the reviewers\\u2019 suggestions to enhance clarity and reproducibility. Key revisions include: (1) Adversarial training details: Adding the setup specifics for adversarial training and incorporating the Equal Error Rate (EER) as an evaluation metric. (2) Claims revision: Refining the claims of the paper\\u2019s contributions, as indicated by Reviewer erKc.\\n(3) Additional details: Providing more information in response to Reviewer NjHP's feedback. Overall, this work is meaningful to the community but requires these revisions to further ensure clarity and reproducibility.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers provided meaningful comments, checked the responses of the authors, and provided feedback to the authors. At the first round of reviewing, reviewers agree the work is meaningful to the community, the problem is important, and the proposed methods are effective. The reviewers concerned about the missing setup details of adversarial training and incorrect claims. The authors' rebuttal addresses the concerns of reviewers well.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you for responding.\", \"comment\": \"Thank you for updating the paper. It was essential to avoid conflicting numbers in the future and make the change in the experimental setting explicit. That way, future work won't have to make impossible comparisons. I wish to maintain my original score.\"}", "{\"title\": \"Thank you for answering my questions\", \"comment\": \"Dear authors,\\n\\nMy last remaining concern is with Q8 (Table 2). Is it possible to explicitly state that the results are not comparable to the original wavefake paper? Furthermore, Wavefake does not exclusively contain LJSpeech samples. It also has a JSUT part. Is it possible to fix the caption?\"}", "{\"summary\": \"This paper addresses the challenge of deepfake audio detection, presenting two major contributions: (1) the creation of DeepFakeVox-HQ, the largest and most diverse public dataset for deepfake audio detection, which enables realistic testing conditions and exposes limitations in existing models, and (2) the introduction of Frequency-Selective Adversarial Training (F-SAT), a novel approach that improves detection robustness by focusing on high-frequency audio components. The work is well-written and logically structured, making complex concepts accessible, and holds significant potential for advancing the robustness and reliability of deepfake audio detection models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.DeepFakeVox-HQ stands out as a substantial addition to the field, with over 1.3 million samples, including 270,000 high-quality deepfake samples from 14 sources. This dataset addresses the limitations of existing datasets in diversity and scale, making it a valuable resource for benchmarking future detection models. Releasing this dataset would have a broad impact on the community.\\n\\n2.The F-SAT method is an important innovation, targeting high-frequency features that are critical for detection but vulnerable to adversarial manipulation. This frequency-focused adversarial training enhances model robustness without compromising accuracy on clean data, addressing a key gap in existing deepfake detection methods.\\n\\n3.Comprehensive Experimental Evaluation: \\n The experimental design is extensive, evaluating performance across standard benchmarks (ASVspoof2019 and WaveFake) as well as the authors' own test dataset. F-SAT demonstrates clear improvements in robustness across multiple corruption and adversarial attack scenarios. The addition of an ablation study further supports the effectiveness of the proposed method.\\n\\n4. Extending RandAugment from image processing to audio is an inventive adaptation that helps improve model robustness on both clean and corrupted audio. This demonstrates the authors' resourcefulness in leveraging existing techniques and could be beneficial for future work in audio data augmentation.\", \"weaknesses\": \"1. The paper does not specify whether baseline models were subjected to adversarial training. If only the F-SAT model received this enhancement, it could bias the results. Including adversarially-trained versions of baseline models using contemporary adversarial methods would provide a fairer comparison and highlight F-SAT\\u2019s unique advantages.\\n\\n2. While F-SAT\\u2019s focus on high-frequency components is intriguing, the rationale behind the reliance on high frequencies for detecting deepfake audio could be further elaborated. \\n\\n3.Adversarial training, especially in the frequency domain with iterative updates, can be computationally demanding. Assessing F-SAT's efficiency, particularly compared to baseline models, would improve the paper's practicality.\", \"questions\": \"1.Given that F-SAT focuses on high-frequency perturbations, have the authors considered whether these perturbations might be perceptible to human listeners?\\n\\n\\n\\n\\n2. Were all baseline models subjected to similar adversarial training procedures as the proposed F-SAT model? Consistency in adversarial training across baseline models is essential to ensure a fair comparison of robustness improvements. If not, would the authors consider including adversarially trained baselines in future comparisons?\\n\\n3 . How sensitive is F-SAT to the choice of hyperparameters, particularly the frequency ranges and perturbation magnitudes used for adversarial training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continue\", \"comment\": \"**Q1 - Whether attack is perceptible to human listener**\\n\\nThank you for bringing this up. The attack we use to evaluate model robustness in our paper is imperceptible. We confirmed this through a human study where we presented 20 pairs of attacked and unattacked audio clips to 10 participants, who were asked to identify the attacked ones. The average prediction accuracy was 55% with a standard deviation of 7.58%, indicating that the attacked audio is nearly indistinguishable from the unattacked, resembling random guessing.\\n\\nAdditionally, following Reviewer EGHT's advice, we utilized Signal-to-Noise Ratio (SNR) to quantitatively assess the perceptibility of the attack. An SNR greater than 40 dB indicates that the attack is almost imperceptible.\\n\\n| | Mean | Standard Deviation | Minimum | Maximum |\\n|-----------------------|-------|---------------------|---------|---------|\\n| Frequency Domain Attack | 68.76 | 5.80 | 40.07 | 84.83 |\\n| Time Domain Attack | 58.39 | 5.38 | 32.14 | 71.07 |\\n\\n**Q2 - Consistency in adversarial training across baseline models.**\\n\\nThank you for your question. Among all the baseline models we evaluated, RawNet3 emerged as the best. In Table 3, we've applied standard adversarial training (AT) to RawNet3 and compared it with F-SAT. F-SAT significantly outperformed standard AT on both unattacked and attacked data.\\n\\nIn the revised manuscript, we will include experiments applying adversarial training to all baseline models.\\n\\n**Q3 - How sensitive is F-SAT to the choice of hyperparameters, particularly the frequency ranges and perturbation magnitudes**\\nThank you for highlighting this. We have included an ablation study of hyperparameters, such as frequency ranges and magnitudes, in Figure 9 (Figure 8 in revised version) of the main paper. The results demonstrate that our approach is not sensitive to these parameters.\"}", "{\"title\": \"Thanks for the response.\", \"comment\": \"Thank you very much to the authors for addressing the concerns I raised. The issues I was concerned about have been satisfactorily resolved. Considering the overall quality and contribution of the paper, I am willing to raise my score to 8.\"}", "{\"title\": \"Continue\", \"comment\": \"**W1.4.5 - Quantitative measurement of synthetic speech quality using metrics like DNSMOS, or NORESQA.**\\n\\nThanks for your advice, we have used DNSMOS [1] to quantitatively measure synthetic speech quality across these models, on a scale from 1 to 5, where higher values indicate better quality. We utilized sections of the VCTK and In-The-Wild datasets for this evaluation to assess model performance under both clean and noisy conditions.\\n\\n**Table: MOS Scores for VCTK Speaker_id: p244**\\n**Table for VCTK Speaker: p244**\\n| **Model** | **Ovrl MOS** | **Sig MOS** | **Bak MOS** | **P808 MOS**|\\n|-----------------|----------|---------|---------|----------|\\n| Real refer | 3.26 | 3.56 | 4.04 | 3.61 |\\n| metavoice | **3.29** | **3.58**| 4.05 | 3.63 |\\n| StyleTTS v2 | 3.28 | 3.56 | **4.08**| **3.87** |\\n| XTTS v2 | 3.13 | 3.41 | 4.00 | 3.78 |\\n| VoiceCraft | 3.16 | 3.51 | 3.94 | 3.61 |\\n| Whisperspeech | 3.28 | 3.56 | 4.07 | 3.82 |\\n| Vokan-TTS | 3.23 | 3.55 | 4.01 | 3.71 |\\n\\n\\n**Table for In-the-Wild Speaker: Alan Watts**\\n| **Model** | **Ovrl MOS** | **Sig MOS** | **Bak MOS** | **P808 MOS** |\\n|----------------|--------------|-------------|-------------|--------------|\\n| Real refer | 3.02 | 3.40 | 3.74 | 3.57 |\\n| metavoice | 3.15 | 3.52 | 3.88 | 3.55 |\\n| StyleTTS v2 | **3.28** | **3.57** | **4.06** | **3.83** |\\n| XTTS v2 | 3.11 | 3.41 | 3.98 | 3.70 |\\n| VoiceCraft | 3.01 | 3.34 | 3.80 | 3.43 |\\n| Whisperspeech | 3.15 | 3.44 | 3.99 | 3.59 |\\n| Vokan-TTS | 2.94 | 3.39 | 3.66 | 3.60 |\\n\\n[1] Reddy, C. K., Gopal, V., & Cutler, R. (2021, June). DNSMOS: A non-intrusive perceptual objective speech quality metric to evaluate noise suppressors. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6493-6497). IEEE.\\n\\n**W2 - The choice of accuracy as a metric**\\nThank you for the suggestion. We will include both F1-score and EER in our revised version. The accompanying table presents comparisons with other defenses, detailing F1 and EER results. These trends align with our original performance metrics and relate to our response in reply-W4 to you.\\n\\n| Model | Origin F1 | Origin EER | Attack (Time) F1 | Attack (Time) EER | Attack (Frequency) F1 | Attack (Frequency) EER |\\n|------------------------|-----------|------------|---------------|----------------|---------------|----------------|\\n| RawNet3+RandAug | 0.9729 | 0.0238 | 0.6932 | 0.3048 | 0.6308 | 0.3671 |\\n| +AT(Time) | 0.8799 | 0.1175 | 0.2105 | 0.6921 | 0.2184 | 0.6829 |\\n| + Gaussian smoothing | 0.5267 | 0.4206 | 0.5424 | 0.4397 | 0.4903 | 0.5306 |\\n| + MAD smoothing | 0.9366 | 0.0603 | 0.5880 | 0.4079 | 0.5712 | 0.4298 |\\n| + F-SAT | 0.9810 | 0.0190 | 0.9048 | 0.0876 | 0.9297 | 0.0694 |\\n\\n**W3 - previous work relevant to adversarial attacks in the frequency domain and reverting to the temporal domain**\\n\\nThank you for bringing up the paper \\u201cCross-representation Transferability of Adversarial Attacks.\\u201d While that study focuses on adversarial attacks, our work concentrates on defense. Additionally, our research conducts an in-depth study on the impact of adversarial training across various frequency domains, providing more insights and achieving state-of-the-art robustness results. We have cited and discussed this paper in the related work section of our revised version.\"}", "{\"title\": \"Continue\", \"comment\": \"**W4 - Compared with other defense method**\\n\\nThank you for directing our attention to the paper titled \\\"High-frequency Adversarial Defense for Speech and Audio.\\\" [1] Following your suggestion, we compare ours methods with [1] and the baselines used in [1], including Gaussian Smoothing [3], and adversarial training [2] in the time domain\\u2014techniques. We employed the same parameters as those in [1]. The results of these comparisons are outlined below. As the table shows, our method outperforms all other methods.\\n\\n| Approach | Ori Real | Ori Fake | Ori Avg | Att(T) Real | Att(T) Fake | Att(T) Avg | Att(F) Real | Att(F) Fake | Att(F) Avg |\\n|------------------------|----------|----------|---------|-------------|-------------|------------|-------------|-------------|------------|\\n| RawNet3+RandAug | **97.6%** | 97.0% | 97.3% | 74.7% | 66.0% | 70.4% | 63.0% | 62.4% | 62.7% |\\n| +AT(Time) | 94.7% | 83.1% | 88.9% | 87.1% | 12.4% | 49.8% | 66.5% | 16.5% | 41.5% |\\n| +Gaussian smooth | 57.8% | 49.4% | 53.6% | 53.2% | 51.3% | 52.2% | 46.2% | 51.4% | 48.8% |\\n| +MAD smoothing | 96.2% | 91.4% | 93.8% | 60.2% | 59.0% | 59.6% | 56.3% | 57.4% | 56.8% |\\n| +F-SAT (Ours) | 97.5% | **98.4%** | **98.0%** | **90.2%** | **87.0%** | **88.6%** | **93.3%** | **92.8%** | **93.1%** |\\n\\n[1] Olivier, Raphael, Bhiksha Raj, and Muhammad Shah. \\\"High-frequency adversarial defense for speech and audio.\\\" ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021\\n\\n[2] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, \\u201cTowards deep learning models resistant to adversarial attacks,\\u201d international conference on learning representations, 2018.\\n\\n[3] Jeremy Cohen, Elan Rosenfeld, and Zico Kolter, \\u201cCertified adversarial robustness via randomized smoothing,\\u201d in Proceedings of the 36th International Conference on Machine Learning, Kamalika Chaudhuri and Ruslan Salakhutdinov, Eds. 09\\u2013 15 Jun 2019, vol. 97 of Proceedings of Machine Learning Research, pp. 1310\\u20131320, PMLR.\\n\\n**W5 - SNR to evaluate how perceptible the adversarial attack is**\\n\\nThank you for the suggestions. We have incorporated Signal-to-Noise Ratio (SNR) to assess the perceptibility of adversarial attacks in our main paper. The SNR for attacks in the time domain is approximately 58.4, while in the frequency domain it is 68.7. These values indicate the imperceptibility of the attacks.\\n\\n\\n**W6 - Clarity issues**\\n\\nThank you for the suggestions, we have revised the caption for Figure 9 (Figure 8 in revised version) and added results for the 0-8K range in Figure 8b (Figure 7b in revised version).\\n\\n**Q1- What frequency range is the spec-magnitude attack applied over in Figure 8a?**\\n\\nThe spec-magnitude attack is applied across a frequency range of 0-8 kHz.\\n\\n**Q2 - Why Use F-SAT Instead of Adversarial Training in the Time Domain and Employ a Bandpass Filter for Post-Processing Perturbations?**\\n\\nThank you for your question. Our F-SAT approach targets the magnitude component of audio signals, not affecting the phase. This focused strategy empirically enhances the effectiveness of our method.\\n\\nAdversarial training in the time domain has been shown to be ineffective, as evidenced by our experiments (referenced in Table 3) and supported by reference paper, \\\"High-frequency adversarial defense for speech and audio\\\" (Table 1). The ineffectiveness stems from time-domain attacks impacting both the magnitude and phase of the audio. Even using a bandpass filter does not alleviate the phase change. Our studies show that phase-focused adversarial training in the frequency domain degrades the performance of the RawNet3 model on natural data.\\n\\n**Q3 - Why is the performance of the model trained on DeepFakeVox-HQ so low on the In-the-wild dataset**\\n\\nTraining on all datasets except \\u2018In-the-wild\\u2019 and testing on \\u2018In-the-wild\\u2019 leads to poor performance. Conversely, training on \\u2018In-the-wild\\u2019 and testing on all other datasets results in even worse outcomes. This is likely due to significant distributional differences in the \\u2018In-the-wild\\u2019 dataset. Factors such as the synthesis models used, the quality of reference audio, and other methods of generating deepfake samples contribute to these disparities.\"}", "{\"comment\": \"Thank you for providing detailed responses. I find that my concerns have been address and I have increased my score.\\n\\nPlease ensure that the tables below have been included and appropriately referenced in the manuscript, many of them have not yet been included. I have the following comments:\\n\\nW1.4.3 -- For several of these datasets the speaker demographic data is available. I would advise that that data be compiled into a table and included in the paper. My main concern is that if the data is heavily biased towards a specific demographic, then the generalizability of the results may be questionable. For example, if most of the data is from male speakers, does that impact the ability to detect deepfakes with female speakers? \\n\\nW4 -- please update results to use EER.\\n\\nI would recommend that all plots and tables in the main text use EER.\", \"q2\": \"Please include this explanation in the paper if it is not already there.\", \"title\": \"Thank you for the comprehensive responses\"}", "{\"title\": \"Thank you for your your thoughtful review and helpful experiment suggestions!\", \"comment\": \"We appreciate the reviewer's positive feedback on our paper. It is encouraging to see the recognition of our tackling a significant problem, the interest in our adversarial attack perspective, and our efforts to keep results current by integrating samples from commercial models. We discuss these contributions and address reviewer suggestions in the sections below.\\n\\n**W1 - MP3 compression removes high-frequency content, which may impact the effectiveness of F-SAT**\\n\\nTraditional compression algorithms like MP3 typically remove frequencies above 16 kHz. However, in our experiment, the audio is resampled to 16,000 Hz before being input to the model, capping the maximum representable frequency at 8,000 Hz (per the Nyquist theorem). Thus, MP3\\u2019s high-frequency removal has no impact on our data, as it already falls below this range.\\n\\n**W2 - Robustness to compression**\\n\\nThank you for your suggestion. We have added results on compressed audio in the appendix. To evaluate the robustness of our detection model to compression, we tested two lossy formats: MP3 and AAC. The evaluation utilized RawNet3 combined with RandAug and F-SAT. As shown in the table, both MP3 and AAC compression had minimal impact on detection accuracy.\\n\\n| Format | Real | Fake | Avg |\\n|----------------------------|---------|---------|---------|\\n| Origin (90% wav + 10% mp3) | 97.50% | 98.40% | 98.00% |\\n| MP3 | 97.50% | 97.60% | 97.60% |\\n| ACC | 96.90% | 98.60% | 97.80% |\\n\\n**W3 - Dataset details like the length in hours or training hyperparameters like the learning rate are missing**\\n\\nThank you for bringing this up. We have added dataset details and training hyperparameters in the revised version.\\n##### Training Hyperparameters for RawNet3 on DeepfakeVox\\n- **Learning Rate (lr):** `1e-5`\\n- **Epochs:** `15`\\n- **Batch Size (bs):** `16`\\n- **Optimizer:** `adam`\\n- **Augmentation Number (aug_num):** `1` or `2`\\n- **Augmentation Probability (aug_prob):** `0.9`\\n\\n##### LR Scheduler (Warmup Cosine)\\n- **Warm-up Epochs:** `1`\\n- **Warm-up LR:** `1e-6`\\n- **Minimum LR:** `1e-7`\\n\\n##### Attack Hyperparameters\\n- **Attack Type:** `l_inf`\\n- **Epsilon:** `0.005`, **Alpha:** `0.002`\\n- **Gamma (control ratio of clean loss and robust loss):** `0.1`\\n- **Attack Iterations:** `2`\\n- **Restarts:** `1`\\n\\n##### Mixup Hyperparameters\\n- **Mixup Alpha:** `0.5`\\n\\n#### Dataset Details\\nFor the training set, we used six open-source models to generate data from four datasets: VCTK (12k samples), LibriSpeech-clean-100 (28k samples), AudioSet (narration) (12k samples), and In-The-Wild (real parts: 9k samples), resulting in a dataset nearly six times larger than the real samples and evenly distributed across these sources. Additionally, we used one commercial model, ElevenLabs, to generate 2,500 samples. The total duration is detailed below.\\n\\n| | MetaVoice | StyleTTS v2 | XTTS v2 | VoiceCraft | Whisperspeech | Vokan-TTS | Elevenlabs |\\n|----------------|-----------|-------------|---------|------------|---------------|-----------|------------|\\n| Duration (hrs) | 189.1 | 186.6 |175.5 | 119.9 | 155.2 | 161.7 | 3.3 |\\n\\nWe also combined our generated data with existing public datasets to enhance its diversity. For the test set, we included 14 different fake sources, with approximately 200 samples each.\\n\\n**Q1 - Which model was used to create the plot on the left side in Figure 2**\\n\\nRawNet3 Model (without F-SAT)\\n\\n**Q2 - How many hours of speech does the dataset encompass exactly? How long are the samples?**\\n\\nThe total duration of our dataset is approximately 2700 hours, comprising 1400 hours of real audio and 1300 hours of fake audio. The fake audio includes 300 hours from a previous dataset and 1000 hours we generated. Sample lengths vary across subdatasets, ranging from 4 to 15 seconds. Specifically, for VCTK and the corresponding fake audio, the average duration is about 4 seconds per sample. In contrast, for AudioSet (narration), the average duration extends to around 15 seconds per sample.\\n\\n\\n**Q3 - Are the samples aligned? Do all models synthesize speech using the same input sentences?**\\n\\nFor the high-quality deepfake samples we generated, alignment is ensured. All models synthesize speech from the same input sentences.\"}", "{\"comment\": \"Public comment re Ethics\\n\\nAs someone who is not reviewing this paper, I'd just like to flag up that the authors have been sourcing data from social media and YouTube, without appearing to acknowledge that this may be a) against the terms of service of these platforms and b) may constitute human involved research. Both of these factors may be considered as research practices that should require ethical oversight, and, at the very least, demonstration of having been issued appropriate licenses by copyright holders. These issues are exacerbated when one of the contributions of this paper is to produce a dataset that may be distributed, which is almost certainly against the terms of any licenses granted to the authors. \\n\\nThe Youtube terms of service explicitly state that users are not allowed to \\n- \\\"access, reproduce, download, distribute, transmit, broadcast, display, sell, license, alter, modify or otherwise use any part of the Service or any Content except: (a) as expressly authorized by the Service; or (b) with prior written permission from YouTube and, if applicable, the respective rights holders;\\\" \\n- \\\"access the Service using any automated means (such as robots, botnets or scrapers) except (a) in the case of public search engines, in accordance with YouTube\\u2019s robots.txt file; or (b) with YouTube\\u2019s prior written permission;\\\" \\n- \\\"use the Service to view or listen to Content other than for personal, non-commercial use (for example, you may not publicly screen videos or stream music from the Service); or\\\"\", \"see\": \"https://www.youtube.com/static?template=terms&gl=AU\\n\\nAll three of those above provisions could be seen to be violated by the authors scraping data from YouTube without a license.\\n\\nThanks\"}", "{\"title\": \"Thank you for your thoughtful review and suggestions on the experiments\", \"comment\": \"We thank the reviewer for their thoughtful feedback. We are glad that the reviewer finds our dataset to be significant and worthy publishing, as well as the effectiveness of our approach to improve deepfake dectection. We address the reviewer\\u2019s questions below:\\n\\n**W1.1 - The settings used for adversarial training during AT, F-SAT**\\n\\nThank you for the suggestion. We have included the training hyperparameters for AT and F-SAT in our revised version, as shown in the table below.\\n\\n| **Method** | **Type** | **Epsilon** | **Alpha** | **Attack Iter** | **Restart** | **Gamma (Loss Ratio)** |\\n| ---------- | -------- | ----------- | --------- | --------------- | ----------- | ---------------------- |\\n| AT | l_inf | 1.00E-4 | 4.00E-5 | 2 | 1 | 0.1 |\\n| F-SAT | l_inf | 5.00E-3 | 2.00E-3 | 2 | 1 | 0.1 |\\n\\n**W1.2 - Attack Settings for Evaluation and SNR-based Perceptibility Assessment**\\n\\nThank you for the suggestion. We have detailed the attack settings for evaluation in the table below and included them in our revised paper.\\n| **Attack** | **Type** | **Epsilon** | **Alpha** | **Iter** | **Restart** | **SNR (mean)** | **SNR (std)** |\\n|------------------|----------|---------------------------|---------------------------|----------|-------------|----------------|---------------|\\n| Waveform(Time) | l_inf | 1.00E-4 | 4.00E-5 | 2 | 1 | 58.4 | 5.4 |\\n| Magnitude (Frequency) | l_inf | 1.00E-3 | 4.00E-4 | 2 | 1 | 68.7 | 5.8 |\\n| Phase (Frequency) | l_inf | 2.00E-1 | 1.00E-1 | 2 | 1 | 39.7 | 4.9 |\\n\\n**W1.3 - Randaug Hyperparameter**\\n\\nThank you for the advice. We have included the parameters of the augmentations used in RandAugment in the appendix. In our experiments, we configured aug_num=1 and set aug_prob=0.9.\\n\\n**W1.4.1 - Generate Deepfakes**\\n\\nThank you for the advice. We have included the Dataset Details in appendix. We use the list of deepfake models (XTTS v2, StyleTTS v2, Metavoice, Whisperspeech, Vokan-TTS, and VoiceCraft and Elevenlabs) to generate deepfake voice for training set. to generate voices for the training set. Additionally, we employ Cosyvoice, PlayHT 2.0, Resemble, LOVO AI, and Lipsynthesis to create deepfake voices for the test set. we generate the noisy deepfakes with postprocessing augmentations.\\n\\n**W1.4.2 - The number of audios from each deepfake generation system**\\n\\nWe utilize four real datasets\\u2014VCTK (12.0k), Librispeech-clean-100 (28.5k), Audioset (narration) (12.2k), and In-The-Wild (real parts: 9.3k)\\u2014to generate deepfake voices for the training set, with the number of samples from each source listed in the table below. For the test set, each source contributes 200 samples.\\n| **Model** | **metavoice** | **StyleTTS v2** | **XTTS v2** | **VoiceCraft** | **Whisperspeech** | **Vokan-TTS** | **Elevenlabs** |\\n|-------------------|---------------|-----------------|-------------|----------------|-------------------|---------------|----------------|\\n| **Samples** | 61.7k | 61.6k | 61.8k | 59.4k | 61.9k | 61.6k | 3.2k |\\n\\n**W1.4.3 - the demographic distribution of the real and fake speakers**\\n\\nWe are sorry about that. Our dataset, drawn from multiple sources, makes it difficult to calculate the overall demographic distribution of speakers. However, for the deepfake audio we generated, the speaker distribution is a composite of VCTK, In-the-wilds, Audioset (narration), and Librispeech. This is because each source uses the same sentence inputs as the real audio.\\n\\n**W1.4.4 the number of utterances used from each of the datasets from prior works**\\n\\nThe details of the reference datasets used are provided in the table below.\\n| **Real** | **Fake** |\\n|--------------------------------|----------------------------|\\n| Libri-clean-100 28k | Wavefake (English) 134k |\\n| Libri-clean-360 104k | ASRspoof2019-train 2k |\\n| Libri-other-500 149k | ASRspoof2019-dev 2k |\\n| Audioset (narration) 12k | ASRspoof2019-eval 6k |\\n| VCTK 12k | |\\n| In-The-Wild 9k | |\\n| ASRspoof2019-train 2k | |\\n| ASRspoof2019-dev 2k | |\\n| ASRspoof2019-eval 7k | |\\n| VoxCeleb1 149k | |\\n| **Total** 474k | **Total** 144k |\"}", "{\"title\": \"Continue\", \"comment\": \"**Other Concerns \\u2014 Open Source of Data and Augmentation Instructions**\\n\\nWe have outlined our data augmentation setup in the file dataset_aug_all.py, included in the supplemental materials. The hyperparameters for augmentation and detailed instructions for running the commands are provided in the README.\\n\\nUpon acceptance, we will open-source our code and data, and provide a comprehensive README to guide users.\"}", "{\"summary\": \"Glad to review the paper.\\nThis paper proposes a novel method, F-SAT for deepfake audio detection.\\nThe topic of this work is promising, and the paper is easy to follow.\\nI believe this work has reference values to domain-related researchers.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Three main contributions involved in this work include (1) a carefully organized dataset, (2) a deepfake detection method, and (3) the ability against adversarial attacks (with the setting focusing on high-frequency signals).\\nIn general, the contributions of this work are multi-fold.\", \"weaknesses\": \"My major concern is whether the contributions (or advantages) of this work are over-claimed.\\nRegarding the dataset, although it is well organized and processed, the samples are generated using existing approaches, thus, \\\"the largest\\\" is not a significant contribution.\\nRegarding generalization, as in Table 2, the significantly superior results of the proposed method are achieved on the self-organized dataset, DeepFake Vox-HQ. However, as the author introduced in Section 3, there are overlapped synthesis methods between training and testing data in this group of results (as in Figure 6). Thus the results in DeepFake Vox-HQ can not indicate out-of-distribution generalization.\\nRegarding enhancing robustness, in the last paragraph of the related section, the referenced solutions were published in 2019,2018 and 2018, I am not sure whether any recent works focus on the adversarial issue, that should be discussed or compared.\", \"questions\": \"My main concern is about the generalization and robustness issues listed above.\\nI will consider changing my score based on the author's responses and other reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your additional suggestions\", \"comment\": \"Thanks for your advice.\\n\\n**W1.4.3**\\n\\nFor demographic, we are using the \\\"wav2vec2-large-robust-24-ft-age-gender\\\" model, available on Hugging Face, to analyze the age and gender distribution of our dataset. Due to the computational demands, this analysis requires several days to complete. We will include it in our appendix once finished. \\n\\n\\n**Q2** \\n\\nWe have included explanation of F-SAT in revised paper (line 516-527).\\n\\n\\n**W4** \\n\\nWe have updated all tables to display the F1 score in the main text, while both F1 and EER scores are provided in Appendix A.8. All tables and figures in main text are clearly readable right now. We continue to report separate accuracy figures for real and fake samples, as it offers deeper insights. Specifically, 1. as datasets scale up, detecting fake audio proves more challenging than real audio because obtaining diverse real audio is always easier than generating diverse fake audio; 2. fake features concentrate at higher frequencies more than real features.\\n\\nRegarding the use of EER as a metric, we wish to highlight a potential limitation. Our experiments with RawNet3, trained using rand-augmentations (different noises, time stretch, pitch shift) and mixup on the ASVspoof2019 and tested on \\u2018In-the-Wild\\u2019 dataset [1], have shown results that greatly surpass current state-of-the-art methods. Although the EER is low, it does not reliably indicate detection effectiveness. When transferring from ASVspoof2019 to \\u2018In-the-Wild\\u2019, the distribution of real and fake samples shifted equally, causing the decision boundary to adjust from 0.5 to 0.9. This resulted in lower accuracy but an artificially improved EER. Therefore, we employ both F1 and EER to provide a more comprehensive analysis.\\n\\n| Approach | Real Acc | Fake Acc | EER |\\n|---------------------------|----------|----------|---------|\\n| RawNet3, Noise, MixUp | 0.01 | 0.99 | 0.11 |\\n| RawNet2 [1] | - | - | 0.34 |\\n| [2] | - | - | 0.24 |\\n\\n[1] M\\u00fcller, Nicolas M., et al. \\\"Does audio deepfake detection generalize?.\\\" arXiv preprint arXiv:2203.16263 (2022).\\n\\n[2] Yang, Yujie, et al. \\\"A robust audio deepfake detection system via multi-view feature.\\\" ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024.\"}", "{\"title\": \"Thanks for your suggestion\", \"comment\": \"Dear Reviewer:\\nThank you for your guidance. We have revised the caption of Table 2 to clearly state that the results are not directly comparable to those in the original WaveFake paper. Additionally, Section 5.1, \\u201cIntroduction to WaveFake,\\u201d has been thoroughly updated to detail the differences in experimental settings and dataset utilization.\"}", "{\"summary\": \"The paper \\\"I CAN HEAR YOU: SELECTIVE ROBUST TRAINING FOR DEEPFAKE AUDIO DETECTION\\\"\\nintroduces the DeepFakeVox-HQ data set, which contains audio from 14 sources.\\nIn addition to the dataset, the authors introduce Frequency-Selective Adversarial Training (F-SAT), a training method that focuses on the high-frequency part of the spectrum. In addition to FSAT, this paper evaluates robustness concerning various input perturbations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles a significant problem.\", \"The related work is well-researched and described.\", \"The adversarial attack perspective is interesting.\", \"Authors ensure their results are up to date, combining existing datasets with samples from commercial models.\"], \"weaknesses\": [\"Traditional compression algorithms like MP3 remove high-frequency content; according to line 84, FSAT focuses on this part of the spectrum.\", \"If I understand correctly, compression is not part of the corruption set, as shown in Figure 7. Including compression would have been important for real-world applicability.\", \"Data-set details like the length in hours or training hyperparameters like the learning rate are missing.\"], \"questions\": [\"Which model was used to create the plot on the left side in Figure 2?\", \"How many hours of speech does the dataset encompass exactly? How long are the samples?\", \"Are the samples aligned? Do all models synthesize speech using the same input sentences?\", \"Is it possible to add a data sheet that outlines the exact sources and utterance lengths per source?\", \"Are the WaveFake test samples also part of the DeefFakeVox-HQ test set?\", \"WaveFake contains Japanese language JSUT samples.\", \"Are these part of the dataset?\", \"Should the Caption of Table 1 make this explicit? Since WaveFake is listed as\", \"English-language data set, I assume JSUT is not considered a part of WaveFake in this paper.\", \"Do the Utterance numbers in table one exclude JSUT?\", \"If yes, should this be mentioned somewhere else?\", \"Is it possible to include leading works from the audio classification world, like the Audio Spectrogram Transformer (AST)[1], in the evaluations? Related work [2] found it to perform well on the WaveFake-dataset. It would be interesting if it also outperforms other methods on DeepFakeVox-HQ.\", \"The Wavefake paper [3] trains with binary settings with fake audio from a single source and measures generalization. Training on which source network led to the numbers in Table 2? Are the numbers comparable to the related work?\", \"Which software libraries have been used to implement this project?\", \"Which hyperparameters underpin the network training?\"], \"related_work\": \"[1] AST: Audio Spectrogram Transformer, https://arxiv.org/pdf/2104.01778,\\n[2] Towards generalizing deep-audio fake detection networks, https://arxiv.org/pdf/2305.13033,\\n[3] WaveFake: A Data Set to Facilitate Audio Deepfake Detection, https://arxiv.org/abs/2111.02813\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper attempts to improve deepfake detection by (1) proposing a large training and evaluation dataset called DeepFakeVox-HQ containing diverse synthetic and real speech recordings, (2) proposing a data augmentation method similar to randaugment for deepfake detectors and (3) proposing a frequency-selective adversarial training (F-SAT) method to make deepfake detectors more robust to adversarial attacks.\\n\\nDeepFakeVox-HQ is a large dataset containing real and synthetic speech from existing datasets, speech generated by SOTA speech synthesis models as well as deepfakes found in-the-wild (on social media, etc.). Results show that models trained on DeepFakeVox-HQ generally perform better on existing deepfake while the models trained on the existing datasets have weak performance on DeepFakeVox-HQ, which indicates that DeepFakeVox-HQ includes information that prior works do not provide. DeepFakeVox-HQ will likely be a useful resource in deepfake research. \\n\\nThe proposed RandAugment scheme for deepfake detection utilizes a large bank of audio augmentations during training and yields significant improvements in deepfake detection accuracy.\\n\\nThe key contribution of F-SAT is to add adversarial perturbations to only certain frequency bands, which apparently results in lesser degradation of accuracy on un-perturbed data while providing greater robustness than standard adversarial training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Paper is generally well-written and easy to read but some important details are missing\\n1. DeepFakeVox-HQ is a novel dataset containing data from prior datasets as well as novel deepfakes generated from SOTA speech synthesis models. I appreciate that the authors have curated a test set containing deepfake generation methods not covered in the training set _as well as deepfakes gathered from the internet_. I encourage the authors to consider uploading the dataset to a platform like Huggingface Hub.\\n1. The proposed randaugment data augmentation method is effective at improving deepfake detection for RawNet3 models and is likely to be widely adopted if the source code is easy to use (I looked at the README in the attached supplemental material but did not find any instructions for the augmentation).\\n1. The proposed adversarial training method improves deepfake detection accuracy on clean and adversarially perturbed recordings (though I have some reservations regarding the experimental setup).\", \"weaknesses\": \"1. Some important details about the proposed approaches are not mentioned in the paper.\\n \\n 1. The value of $\\\\epsilon$ and $p$ (or $q$) used in adversarial training methods should be mentioned in the main body of the paper. Currently, it is mentioned in the caption of a table in the appendix\\n 1. The settings used for adversarial attacks during AT, F-SAT and evaluation need to be mentioned.\\n 1. The parameters of the augmentations used in randaugment need to be mentioned at least in the appendix.\\n 1. Detailed composition of DeepFakeVox-HQ needs to be mentioned including \\n\\n 1. the method used for generating deepfakes (particularly noisy deepfakes), \\n 1. the number of audios from each deepfake generation system, \\n 1. the demographic distribution of the real and fake speakers,\\n 1. the number of utterances used from each of the datasets from prior works, \\n 1. quantitative measurement of synthetic speech quality using metrics like DNSMOS, or NORESQA.\\n\\n1. The choice of accuracy as a metric seems to be inappropriate for a binary classification task. I would suggest using F1-score and equal error rate as the metrics. Moreover, reading tables and plots with two accuracy metrics for accuracy is a little confusing.\\n1. Conducting adversarial attacks in the frequency domain and reverting to the temporal domain is not novel and has been done before [2].\\n1. There is no comparison with other adversarial defenses for audio models. Many of the defenses created for speech and speaker recognition will also apply to the deepfake detection scenario. One method that is quite simple is [3]\\n1. The common practice is to use signal-to-noise ratio (SNR) as the bound for adversarial attacks in the audio domain [1] instead of $\\\\ell_p$ bounds. I would highly recommend the authors use SNR as well. It is fairly straightforward to convert SNR to $\\\\ell_2$ bounds and vice-versa. The main advantage of using SNR is that one has an idea how _perceptible_ the adversarial attack is.\\n\\n1. Clarity issues:\\n 1. The caption Figure 9 needs to state that the results are of F-SAT\\n 1. Add results for 0-8K in Figure 8b \\n\\n\\n[1] Carlini, Nicholas, and David Wagner. \\\"Audio adversarial examples: Targeted attacks on speech-to-text.\\\" 2018 IEEE security and privacy workshops (SPW). IEEE, 2018. \\n\\n[2] Koerich, Karl Michel, et al. \\\"Cross-representation transferability of adversarial attacks: From spectrograms to audio waveforms.\\\" 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.\\n\\n[3] Olivier, Raphael, Bhiksha Raj, and Muhammad Shah. \\\"High-frequency adversarial defense for speech and audio.\\\" ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021.\", \"questions\": \"1. What frequency range is the spec-magnitude attack being applied over in Figure 8a?\\n1. Do you have any _formal_ explanation for why the time-domain attack is less successful than the spec-magnitude attack? To me this seems counter-intuitive because the STFT is a linear and (mostly) invertible function so, from the perspective of the optimization, it should not matter if the attack was computed in the frequency domain or the time domain. I would be very interested in seeing more explanation for why the time-domain attack is unable to reach the solution acheived by the frequency-domain attack. Please also provide the detailed settings for all the adversarial attacks (time, frequency and phase domain) used in AT, F-SAT and during evaluation.\\n1.In principle, a frequency selective adversarial attack could be constructed entirely in the time domain by applying a band-pass filter to the adversarial perturbation after each optimization step (i.e. include the BP filter as part of the projection operation). This might be less computationally intensive than the proposed approach. Can you provide some discussion on why the proposed approach was favored?\\n1. Why is the performance of the model trained on DeepFakeVox-HQ so low on the In-the-wild dataset (see Figure 3)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A last concern\", \"comment\": \"Dear authors,\\nI have had another look at lines 359-362 of the paper\\n`` Our method\\nachieved state-of-the-art results across all three benchmarks. Compared with RawNet3, our method\\nshows improvements of 7.7% points on DeepFakeVox-HQ (test), 8.4% points on ASVspoof2019,\\nand 0.1% points on WaveFake. ``\\n\\nI am not sure the claim on WaveFake is valid since the experimental setup is different. Furthermore, RawNet3 did not produce good results in the WaveFake study. Would it be possible to reproduce the original WaveFake setup and compare FSAT to the GMM that performed best in the original study? I think a fair comparison would be critical here. If that is not possible in the remaining time, it's probably better not to make claims regarding performance on WaveFake.\"}", "{\"title\": \"Thank you for sharing your concern\", \"comment\": \"Thank you for sharing your concern. To address potential misunderstandings, we have removed the results related to WaveFake. We would also like to clarify that the key contribution of our work lies in the robust training method applied to the detection model, rather than the detection model itself. In Table 2, our objective is to demonstrate that our adversarial training method maintains accuracy on the original data (RawNet3 vs. ours). The comparison of RawNet3 with other baseline models was conducted solely to determine the most suitable detection model for our training dataset. That's the reason why we did not replicate the exact experimental settings or compare our results directly with those in the original paper.\"}", "{\"title\": \"Continue\", \"comment\": \"**Q4 - Add a data sheet that outlines the exact sources and utterance lengths per source**\\n\\nThanks for you suggestions. We have added a data sheet that outlines the exact sources and utterance lengths per source in revised version.\\n\\n| Real Source | VCTK | Librispeech(Train) | In-The-Wilds | ASRspoof2019(LA) | Voxceleb1 | Audioset(Narration) |\\n|------------------------|--------|----------------------|--------------|-----------------------|-----------|---------------------|\\n| Total Duration (Hours) | 14.1 | 961.1 | 14.6 | 11.9 | 340.4 | 50.1 |\\n| Audio Count (k) | 12.0k | 281.2k | 9.3k | 12.5k | 148.6k | 12.2k |\\n| Mean Duration (Seconds)| 4.2 | 12.3 | 5.7 | 3.4 | 8.2 | 14.8 |\\n\\n\\n| Fake Source | Metavoice | StyleTTS-v2 | XTTS-v2 | VoiceCraft | Whisperspeech | Vokan-TTS | Elevenlabs | ASRspoof2019(LA) | Wavefake(English) |\\n|--------------------------|-----------|-------------|---------|------------|---------------|-----------|------------|-----------------|--------------------|\\n| Total Duration (Hours) | 189.1 | 186.6 | 175.5 | 119.9 | 155.2 | 161.7 | 3.3 | 97.8 | 198.7 |\\n| Audio Count | 61.7k | 61.6k | 61.8k | 59.4k | 61.9k | 61.6k | 3.2k | 109.0k | 117.9 |\\n| Mean Duration (Seconds) | 11.0 | 10.9 | 10.2 | 7.3 | 9.0 | 9.4 | 3.7 | 3.2 | 6.1 |\\n\\n**Q5 - Are the WaveFake test samples also part of the DeefFakeVox-HQ test set?**\\n\\nNo, WaveFake utilizes six AI synthesis models: MelGAN, ParallelWaveGAN, Multi-band MelGAN, Full-band MelGAN, HiFi-GAN, and WaveGlow, none of which are covered in our test set.\\n\\n**Q6 - Clarify for WaveFake Dataset**\\n\\nWe apologize for any confusion caused; our experiment utilized only the English portion of the WaveFake dataset. We have updated our paper accordingly, revising Table 1 to indicate that WaveFake includes both English and Japanese languages. Additionally, in Section 5.1, we have clearly specified that we used only the English component of the dataset in revised version.\\n\\n**Q7 - Audio Spectrogram Transformer performance on DeefFakeVox-HQ**\\n\\nAudio Spectrogram Transformer does not perform as well as RawNet3 on DeepFakeVox-HQ. We use the same training hyperparameters (learning rate schedule, optimizer, batch size, etc.) and same augmentation.\\n\\n| Model | Real | Fake | Avg |\\n|--------------------|--------|--------|--------|\\n| AST + Randaug | 99.4% | 78.0% | 88.7% |\\n| RawNet3 + Randaug | 97.6% | 97.0% | 97.3% |\\n\\n**Q8 - Setting for training Wavefake in Table 2**\\n\\nSorry for the confusion, the training settings for the WaveFake dataset in our paper differ from those in the original paper to maintain consistency across all datasets. We used all available sources and divided them into training, validation, and testing sets with a ratio of 7:1.5:1.5.\\n\\n**Q9 - Which software libraries have been used to implement this project?**\\n\\nFor the detection model, we used standard libraries such as Torch, Torchaudio, Scikit-learn, and Librosa. Detailed information on each package and its version is included in the supplementary file env.txt, which lists all dependencies. The software libraries used by TTS (Text-to-Speech) and VC (Voice Conversion) models to generate deepfake audio, however, vary. If accepted, we will open-source our code and data and provide a comprehensive README.\"}", "{\"title\": \"Thanks for the response.\", \"comment\": \"Thank the authors for your responses, which have addressed my concerns to some extent, and I have raised my score to 6.\\n\\nRegarding the generalization issue evaluated through a self-built dataset, I suggest the authors to provide additional explanations in the final version, if the work could be accepted.\"}" ] }
2GEiBzs2Do
Simple and Fast CNN for Vision
[ "Shenqi Lai", "Hao Zhang", "Zheng Yang", "Haifeng Liu", "Deng Cai", "Wenxiao Wang", "Kaipeng Zhang" ]
Traditional Convolutional Neural Networks (CNNs) tend to use $3\times 3$ small kernels, but can only capture limited neighboring spatial information. Inspired by the success of Vision Transformers (ViTs) in capturing long-range visual dependencies, recent CNNs have reached a consensus on utilizing large kernel convolutions (e.g., astonishingly, 111 kernel). Nevertheless, these approaches are unfriendly to hardware, imposing a serious computation burden on training or inference. This paper introduces a Simple and Fast Convolutional Neural Network (SFCNN) that employs a sequence of stacked $3\times 3$ convolutions but surpasses state-of-the-art CNNs with larger kernels. In particular, we build a thin and deep model, which encourages more $3\times 3$ convolutions to capture more spatial information under the limited computing complexity rather than opting for a heavier and shallower architecture. To further enlarge the receptive field, we redesign the traditional inverted residual bottleneck with two $3\times 3$ depthwise convolutions. In addition, we propose a novel Global Sigmoid Linear Unit (GSiLU) activation function to capture global coarse-grained spatial information. Our SFCNN performs better than state-of-the-art CNNs and ViTs on various tasks, including ImageNet-1K image classification, COCO instance segmentation, and ADE20K semantic segmentation. It also has good scalability and outperforms existing state-of-the-art lightweight models. All materials containing codes and logs have been included in the supplementary materials.
[ "Convolutional Neural Network", "Vision Backbone", "Lightweight", "Fast" ]
Reject
https://openreview.net/pdf?id=2GEiBzs2Do
https://openreview.net/forum?id=2GEiBzs2Do
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yO4tlQXDwn", "qkeo1WxMKt", "q5IybN8zk5", "p1a178XeUt", "iAtKPQ9L3X", "B8k0RzDAQW", "AeiU411XtM", "4MlwLeqZO6" ], "note_type": [ "official_review", "official_comment", "meta_review", "official_review", "official_review", "official_review", "decision", "official_comment" ], "note_created": [ 1730603092212, 1732945180429, 1734533127559, 1730083041893, 1729968771719, 1730562425987, 1737523767559, 1733191942316 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6406/Reviewer_nywT" ], [ "ICLR.cc/2025/Conference/Submission6406/Reviewer_nywT" ], [ "ICLR.cc/2025/Conference/Submission6406/Area_Chair_HEJa" ], [ "ICLR.cc/2025/Conference/Submission6406/Reviewer_d9Q7" ], [ "ICLR.cc/2025/Conference/Submission6406/Reviewer_fTbR" ], [ "ICLR.cc/2025/Conference/Submission6406/Reviewer_pTXi" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6406/Reviewer_nywT" ] ], "structured_content_str": [ "{\"summary\": \"This paper targets the computational inefficiency and hardware compatibility issues of recent ConvNets that rely on large kernels to capture long-range dependencies. The authors propose a new ConvNet architecture SFCNN for vision tasks, which has shown impressive performance through a thin-and-deep design philosophy. Concretely, it combines a dual 3\\u00d73 depth-wise convolutions branch with Global Sigmoid Linear Unit (GSiLU) activation, which captures both local and global dependencies without large kernels. The proposed SFCNN is evaluated on mainstream vision benchmarks, such as ImageNet-1K classification, COCO instance segmentation, and ADE20K semantic segmentation, demonstrating great performance while maintaining better hardware efficiency across different platforms (GPU, TensorRT, iPhone). The experiments seem to strongly support the claims about achieving better accuracy-efficiency trade-offs compared to existing ConvNets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"**(S1) A critical research question with real-world significance:**\\nThe paper shows great industrial relevance by addressing a critical need in real-world deployment scenarios, in which computational efficiency is crucial, particularly in edge devices and mobile applications. The proposed SFCNN is notably cost-effective, providing a more resource-efficient alternative to existing approaches while maintaining or improving performance metrics. The presented thin-and-deep architecture appears to show great scalability, demonstrating computational efficiency across different model sizes, and making it highly adaptable to various resource constraints. From a practical impact perspective, this work has the potential to significantly reduce infrastructure costs for computer vision applications at scale, making it valuable for industrial applications rather than merely pushing the limit of accuracy metrics.\\n\\n**(S2) Thorough experiments and validation:** \\nExtensive experiments are conducted on multiple mainstream computer vision tasks, such as ImageNet-1K classification, COCO detection, and ADE20K semantic segmentation. The consistency of performance across different scales is noteworthy. Ablation studies are also conducted, providing a detailed analysis of the contribution of each component to the overall performance. More importantly, the authors present a clear demonstration of the impact of model depth vs. width, supported by the evaluation of different activation functions and receptive field analysis. Hardware performance evaluation is particularly thorough, encompassing cross-platform testing on GPU, TensorRT, and iPhone, with detailed latency and throughput measurements under various scenarios. All these experiments strongly support the paper\\u2019s claim.\", \"weaknesses\": \"**(W1) Technical Originality:**\\nThe basic building blocks of SFCNN, including 3\\u00d73 depth-wise convolutions and point-wise convolutions, largely rely on well-established techniques without significant technical originality. The GSiLU activation also bears considerable similarity to existing approaches like CBAM and SE modules. The thin-and-deep philosophy, while effectively implemented, has been explored in previous works. The theoretical foundation could be strengthened significantly, as it currently lacks enough theoretical insights into the nature of convolution operations and their relationships with model depth. A more thorough analysis of the relationship between depth and receptive field would strengthen the paper's contributions.\\n\\n**(W2) Technical Soundness & Empirical Analysis:**\\nWhile mobile testing is included, more empirical analysis could benefit this work and improve its technical soundness. For example, the Grad-CAM heat map visualization and training dynamics investigation would provide insightful and straightforward support for understanding the technical strengths of SFCNN. Moreover, the discussion of failure cases and limitations is inadequate, potentially leaving practitioners without clear guidance on the architecture's boundaries. The exploration of model behavior under extreme resource constraints could provide valuable insights for edge deployment scenarios. I strongly recommend that the authors carry out more empirical analyses that lead to more systematic conclusions for efficient ConvNet design. The thin-and-deep design philosophy is inspiring but not specific and systematic enough. Also, this work first tries stacking multiple depth-wise convolutions in a single block rather than just one. How it works for better representation capacity is still worth digging deep.\\n\\n**(W3) Presentation Clarity and Details:**\\nThe writing organization exhibits several points that require further improvement. The technical content sometimes lacks coherence, with important methodological details scattered across different sections rather than presented in a unified manner. The description of the architecture could benefit from a more structured approach, particularly in explaining the interaction between different components. Several key concepts are offered within dense paragraphs, making it challenging for readers to extract crucial implementation details. In addition, the method description, while comprehensive, could be reorganized to better highlight the progressive development of ideas and design choices. Moreover, the tables of experimental results presentation would benefit from highlighting the performance advantages. The formatting consistency across tables and figures needs attention, with some inconsistencies in style and presentation detracting from the overall appearance. For example, the thickness of table lines is inconsistent. I recommend the authors to first go through the entire manuscript for a thorough refinement.\", \"questions\": \"**(Q1) Trade-offs Analysis and Discussions:**\\nThe paper's analysis of various trade-offs deserves deeper exploration. The proposed SFCNN shows great superiority in speed. However, I have noticed that some architectures like MogaNet show better parameter efficiency while at lower speeds. Thus, a more detailed investigation of parameter efficiency versus computational speed would provide valuable insights for practitioners choosing between different model configurations. Moreover, there are several points that are tightly associated with this work that deserve further exploration: First, the memory-compute trade-off analysis could be expanded to include different hardware scenarios and deployment conditions. Second, the relationship between training efficiency and inference efficiency deserves more attention, since these can often have different optimal choices. Third, the model scaling properties, particularly regarding the relationship between model depth and width at different computational budgets. \\n\\n**(Q2) Broader Architecture Considerations:**\\nThe scope of this paper lies in ConvNets in vision tasks. However, there are more kinds of architecture emerged these years. A thorough comparison with emerging architectures like Vision Mamba and RWKV models would provide valuable context for the field's evolution. Besides, there are various efficient computation techniques proposed to boost the computational efficiency of these new architectures. The evaluation against attention-based alternatives could provide insights into the relative strengths and weaknesses of different vision backbone architectures. These expanded analyses and discussions would significantly strengthen the soundness and contribution of this paper and provide valuable guidance for future research in the community.\\n\\n---\\n**Additional Comment:**\\n\\nI hope my review helps to further strengthen this paper and helps the authors, fellow reviewers, and Area Chairs understand the basis of my recommendation. I also look forward to the rebuttal feedback and further discussions, and would be glad to raise my rating if thoughtful responses and improvements are provided.\\n\\n---\\n\\n## **------------------- Post-Rebuttal Summary --------------------**\\n\\nThe authors have not provided any response to the concerns and suggestions raised in my initial review during the rebuttal stage. The lack of engagement makes it difficult to assess whether or how the authors might address these fundamental concerns. Given that no clarification or improvement has been offered, I maintain my original comment that this submission falls below the acceptance threshold for ICLR.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Suggestions from Reviewer nywT to Submission 6406\", \"comment\": \"Dear Authors,\\n\\nAs the November 27th manuscript revision deadline has passed and we are now in the discussion period until December 3rd, I feel it important to provide my current evaluation and suggestions for Submission 6406.\\n\\nThis work presents a fundamental exploration of ConvNet architectures for vision tasks, yet the key concerns outlined in my detailed review regarding technical originality and theoretical foundations remain unaddressed. The technical points raised by myself and fellow reviewers have not yet received responses, leaving several critical questions unresolved.\\n\\nI strongly encourage the authors to address these technical points with your responses during the remaining discussion period, which could provide valuable insights for strengthening this work, whether for the current stage or future submissions. I am also confident that the collaborative discussion process with reviewers could help further refine this research to meet the high standards expected in the community.\\n\\nGiven the current stage of the review process, while I maintain my rating of 5, I remain highly engaged and look forward to any responses or clarifications the authors may provide. \\n\\nBest regards,\\n\\nReviewer nywT\"}", "{\"metareview\": \"The paper proposes a Simple and Fast Convolutional Neural Network (SFCNN) for vision tasks, which uses stacked 3\\u00d73 convolutions and a novel Global Sigmoid Linear Unit (GSiLU) activation function to achieve state-of-the-art performance while maintaining computational efficiency.\\n\\nAll reviewers have provided consistently negative ratings and the authors did not provide a response to address these issues. The final consensus of negative ratings lead to a rejection for this submission.\", \"additional_comments_on_reviewer_discussion\": \"The authors did not provide a response to address the original concerns from reviewers. All reviewers remain the original ratings.\"}", "{\"summary\": \"This paper presents a new convolutional neural network architecture, called SFCNN. Unlike recent popular CNN works that mainly aim to explore how to better take advantage of large-kernel convolutions, this paper explains that using thin but deep network architecture with only 3x3 convolutions can still achieve good results. In addition, the authors also rethink of the design of the SiLU activation function and propose a new one, which involves in global information based on SiLU. Experiments show that the classification performance on ImageNet is better than most previous CNN-based models. In terms of latency, the proposed approach achieves better results as well.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The motivation of this paper is clear. Using small-kernel convolutions may lead to faster inference but makes the receptive field of CNNs not large enough to capture the target objects. This paper presents to add more 3x3 convolutions to enlarge the receptive field of the proposed network.\", \"This paper receives better trade-off between model classification performance and latency. Compared to recent CNN-based models, the proposed method has compact model architecture but better performance.\"], \"weaknesses\": [\"The authors claim that large receptive field is important for CNNs. In Fig. 3, it is shown that the proposed approach has large effective receptive field. However, it is not as large as the one of UniRepLKNet. According to the numerical results on ImageNet, the proposed approach gets better numbers. Does this mean that large effective RF is not an important measurement for building CNNs?\", \"The authors claim that their bottleneck block design with two 3 \\u00d7 3 DWConvs is novel. However, as far as I know, adding two depthwise convolutions in a single basic block has been explored before, e.g., MobileNeXt (ECCV'20, Zhou et al.). Though the order of the layers is a bit different, the design intention is similar. So, I do not think this can be viewed as a core contribution for this paper.\", \"From the paper, it seems that CNNs with thin but deep architecture and small kernel convolutions perform more efficient than those with large kernels. However, the macro model architecture of the proposed method is not actually the same to previous large-kernel CNNs. I think the authors should conduct more specific experiments to demonstrate this.\", \"In Table 5, it is good to see the results on instance segmentation but the methods the authors compare with are not new already. I have no idea why the results of the recent published works on CNNs are not reported here.\", \"It seems that the 7th table has two captions? Where is Table 8?\", \"From the ablation results, it seems that the proposed GSiLU indeed performs better than other activation functions. However, have the authors analyzed why global information should be added into activation functions? The motivation of designing such an activation function is not clear. In addition, as GSiLU is already used, why the original SiLU is still used?\"], \"questions\": [\"The contributions of this paper should be further explained.\", \"More analysis on the advantages of using multiple small-kernel convolutions should be elaborated more.\", \"The motivation of introducing global information in activation functions should be made clearer.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper is a very traditional architecture design paper. The major motivation of this work is: large kernel vs small kernel. This debate has been in the community since 10 AlexNet. In VGG, they change the large kernel from 7x7 to a stack of 3x3. Recent years observe a reverse trend that moving back to extremely large kernels. In this work, the authors argue again to use stack of small kernels, due to \\\"Nevertheless, these approaches are unfriendly to hardware, imposing a serious computation burden on training or inference\\\"\\n\\nBased on this motivation, the authors carefully craft a new architecture, SFCNN. The architecture shows minor performance gain on IN1k (\\\"+0.1% accuracy compared to SwiftFormer (Shaker et al., 2023) with 87% FLOPs\\\"), as well as downstream tasks like COCO and ADE20k.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"1\", \"strengths\": \"1. The paper writing is clear, including the motivation, main results summary, architecture design, main results, architecture ablations. All these necessary components are easy to find and comprehend.\\n\\n2. The experiments are adequate for a traditional architecture design work, including main results on 1N1k, coco and ADE20k. There are also component contribution ablations and architecture variant.\", \"weaknesses\": \"1. The motivation / idea of this work is not new (from large kernels to a stack of smaller kernels.)\\n\\nThis idea dates back to VGG (2014). Authors can refer to sec 2.3 in the paper for more discussions. Placing this as the main motivation largely harm the overall contribution, because this makes the paper more like a revisit / conversation in the debate.\\n\\n2. Minor performance gain vs large variance in different architecture hyperparams.\\n\\nThe performance gain over SOTA models is minor, compared with performance variance in similar architectures with different hyperparams. As shown in Table9, searching a best setup for 1N1k is critical (min 81.3 vs max 82.6), while the performance gain over sota is only 0.x% level. This is also reflected in Table 8.\\n\\nI deeply appreciate the efforts in searching a best setup for the architecture. However, this makes the major performance contribution more in the \\\"searching\\\" part but no in the architecture itself. Currently, due to the development of NAS, such searching efforts can be largely automated.\\n\\n3. (Minor) Table 7 and 8 are mixed together in the manuscript. It is confused.\", \"questions\": \"1. Minor performance gain vs large variance in different architecture hyperparams. This deserves a deep discussion.\\n\\n2. Due to the nature of carefully manually crafted CNN (which may be overfitted on IN1k), I am wondering how the architecture perform on IN22k-pretraining + IN1k-finetuning? \\n\\n*This is not a must-do due to the training cost. However, if this is provided, my concern on the performance perspective can be alleviated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A.\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a CNN architecture for visual recognition. The main contribution is a small CNN architecture called Simple and Fast CNN with a core idea of stacking 3x3 convolutions to design a deep architecture. The work proposes an inverted residual bottleneck with two 3x3 depth-wise convolutions. Also, this paper proposes a Global Sigmoid Linear Unit activation function to capture global information.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper performed experiments of the proposed approach on several visual recognition tasks.\\n2. The architecture presents good results in comparison with other approaches.\", \"weaknesses\": \"1. The main concern is regarding the lack of novelty. The proposed contributions are already well known and explored in literature for quite a long time. For instance, using a sequence of stacked 3x3 convolutions to enlarge the receptive field is an approach deeply explored in computer vision community for many years (ResNets, VGG-Nets, MobileNets, etc). Depth-wise convolutions are also explored intensively for efficiency gains (Xception, MobileNet, etc). Including the proposed Global Sigmoid Linear Unit is just a form of the existing work (already quite old) Squeeze-and-Excitation Networks. After reading this work I could not find anything novel or some new insight that is not already known to the vision community.\\n2. Besides lack of novelty, this work does not compensate with some new experimental findings or some new insights for practitioners. \\n\\nOverall, I find the contributions of this work to be quite limited to qualify for publishing the work at such high venue. Maybe a workshop contribution can be more appropriate.\", \"questions\": \"Please see above my main concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Official Comments by Reviewer nywT\", \"comment\": \"Dear Authors and ACs,\\n\\nAs the reviewer designated for this submission, I note that the authors have not provided any response to the concerns and suggestions raised in my initial review during the author-reviewer discussion period. While the original submission presented some insights, several critical issues remain unaddressed.\\n\\nThe lack of engagement makes it difficult to assess whether or how the authors might address these fundamental concerns. Given that no clarification or improvement has been offered, I maintain my original rating of 5, indicating that this submission falls below the acceptance threshold for ICLR.\\n\\nThis conclusion is not a judgment on the potential merits but rather reflects the current state of the manuscript and the missed opportunity to address reviewers\\u2019 concerns in the rebuttal stage.\\n\\nBest regards,\\n\\nReviewer nywT\"}" ] }
2G021ZqUEZ
From Commands to Prompts: LLM-based Semantic File System for AIOS
[ "Zeru Shi", "Kai Mei", "Mingyu Jin", "Yongye Su", "Chaoji Zuo", "Wenyue Hua", "Wujiang Xu", "Yujie Ren", "Zirui Liu", "Mengnan Du", "Dong Deng", "Yongfeng Zhang" ]
Large language models (LLMs) have demonstrated significant potential in the development of intelligent LLM-based agents. However, when users use these agent applications to perform file operations, their interaction with the file system still remains the traditional paradigm: reliant on manual navigation through precise commands. This paradigm poses a bottleneck to the usability of these systems as users are required to navigate complex folder hierarchies and remember cryptic file names. To address this limitation, we propose an LLM-based Semantic File System (LSFS) for prompt-driven file management in LLM Agent Operating System (AIOS). Unlike conventional approaches, LSFS incorporates LLMs to enable users or agents to interact with files through natural language prompts, facilitating semantic file management. At the macro-level, we develop a comprehensive API set to achieve semantic file management functionalities, such as semantic file retrieval, file update summarization, and semantic file rollback). At the micro-level, we store files by constructing semantic indexes for them, design and implement syscalls of different semantic operations, e.g., CRUD (create, read, update, delete), group by, join. Our experiments show that LSFS can achieve at least 15% retrieval accuracy improvement with 2.1× higher retrieval speed in the semantic file retrieval task compared with the traditional file system. In the traditional keyword-based file retrieval task (i.e., retrieving by string-matching), LSFS also performs stably well, i.e., over 89% F1-score with improved usability, especially when the keyword conditions become more complex. Additionally, LSFS supports more advanced file management operations, i.e., semantic file rollback and file sharing and achieves 100% success rates in these tasks, further suggesting the capability of LSFS . The code is available at https://github.com/agiresearch/AIOS-LSFS.
[ "Large Language Model", "Semantic File System" ]
Accept (Poster)
https://openreview.net/pdf?id=2G021ZqUEZ
https://openreview.net/forum?id=2G021ZqUEZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yw7MYvaLAv", "yi8738lEFw", "yHoQSQGwVk", "y8C1tFJ393", "y2AB05Pna5", "wkh4Qa8wkW", "l0GsEu2UeT", "jGXDrdRAMp", "i7Z1067fVH", "gDpkS25BBi", "df98yLgAE8", "cxqjt28lDb", "YkNhPl0Bm0", "YZZXRKEqgD", "Y3YbPTKoTJ", "VpQzC6JSTv", "RwTasrwUaj", "RiUsJWLPko", "Ra9j6bmoWw", "R6j5T5RQZB", "QEMcAhxTdi", "NrwjZxBiWv", "NDp2KuRvpp", "MHpqLrlg2I", "JmDEJJHhVJ", "JkvS0r6DEk", "ISdOBaq0WN", "ES4yjB5w08", "78AiPQPGPx", "6q3NaMbVNM" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730649757654, 1732118367019, 1732118749666, 1732596263919, 1732695409476, 1733060862329, 1732255910925, 1732257855859, 1732119253864, 1732119066523, 1732118859332, 1732532892435, 1737523715042, 1732119507555, 1732118389782, 1732118910580, 1732118690334, 1733060800410, 1730720329212, 1732352846519, 1732118296191, 1733060905773, 1729464744745, 1732508712509, 1732119463119, 1734773807402, 1732696179164, 1732258129919, 1731052677530, 1732119018370 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_a3yw" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_aTqT" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_iyLz" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_aTqT" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_aTqT" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Area_Chair_fvJM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_iyLz" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_aTqT" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Area_Chair_fvJM" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_aTqT" ], [ "ICLR.cc/2025/Conference/Submission5592/Reviewer_QPMt" ], [ "ICLR.cc/2025/Conference/Submission5592/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose, implement and describe an LLM-based semantic file system, where commands are replaced by prompts. They describe APIs and in several experiments compare how this filesystem is used and performs, compared to a conventional file system.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"very interesting and original setup\", \"includes interesting examples how to use the file system\"], \"weaknesses\": [\"weak evaluation, based on examples and not on more extensive use cases and human evaluation, evaluation setup not described in detail / cannot be reproduced\", \"unclear for which users this could be helpful\", \"unclear how robust the system is when translating NLP input into actual actions\", \"unclear how the new API maps and extends conventional file handling APIs, and why setting up a new API set is superior to adding some APIs to a conventional file system\"], \"questions\": \"1. Your systems seems to have advantages for searching and retrieving files based on keyword or semantic search. This could be implemented on top of a conventional file system, why implement a new API for that?\\n2. Is the accuracy of the LSFS parser of about 90% enough for meaningful work? That means 10% incorrect results. How did you collect the NLP commands for this evaluation?\\n3. How exactly did you perform the evaluation summarized in table 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response(2/3)\", \"comment\": \"> ***W4. The experimental results need more explanation. The traditional file system needs to execute commands to extract files, which should be faster than calling one LLM, even though these LLMs are light-weight. The authors should also list the inference time of LLMs, which should be also counted as the interaction time between users and the file system. Then the authors can also list the time that users manually write commands, which should be a good point to prove the point that -- LSFS can not only speed up file retrieving accuracy and speed, but also can reduce the interaction time between users and file system.***\\n\\nWhile LSFS operates as an intermediate layer for traditional file systems, its file operations exhibit longer latency compared to conventional systems due to LLM inference and vector database retrieval times. Despite this, LSFS simplifies operations by reducing the time users spend learning and inputting commands. To provide a clearer understanding of the time costs, we present a comprehensive time analysis below.\", \"traditional_command_execution_typically_follows_this_workflow\": \"learn command parameters (via ChatGPT or search engines) > locate appropriate file paths through OS lookup mechanisms > input the complete command > and finally obtain results. In contrast, LSFS streamlines this process into a simpler workflow: input natural language commands with file names or semantic keywords > confirm LSFS suggested file operations and obtain results.\\n\\nIn order to quantitatively analyze the time difference between traditional file systems and LSFS, we adopt FS commands and LSFS commands to achieve different file operations shown below. 10 Ph.D. student are invited perform file operations through using Linux commands in traditional file systems and using natural language commands in LSFS, respectively. \\n\\n| operation | Traditional FS | LSFS |\\n|---------|---------|---------|\\n| Keyword_retrieve | \\u00b7 find /path -type f -exec grep -l \\u201dkeyword1\\u201d \\\\; -exec grep -l \\u201dkeyword2\\u201d \\\\; | Find the file contains 'keyword1' and 'keyword2' |\\n| Rollback | \\u00b7 btrfs subvolume snapshot /path/to/directory /path/to/snapshot <br> \\u00b7 btrfs subvolume delete /path/to/directory <br> \\u00b7 btrfs subvolume snapshot /path/to/snapshot /path/to/directory | Rollback the 'filename' to the version in 'date' |\\n| Group by | \\u00b7 mkdir -p /path/to/new_folder <br> \\u00b7 find /path/to/search_folder -type f -exec grep -l \\\"keywords\\\" {} \\\\; -exec mv {} /path/to/new_folder/ \\\\; | *group_kewords* with input : search_folder, keywords, new_folder | Change | cat /path/to/source_file.txt | tee -a /path/to/destination_file.txt | Change \\\"source_file\\\" by '/path/to/destination_file.txt' |\\n| Join | cat /path/to/file1.txt /path/to/file2.txt > /path/to/new_file.txt | *file_join* syscall with input: file1, file2, new_file|\\n| Link | ln -s /home/user/file_name /home/user/shortcut/data_link | Create a link for file_name |\\n\\nThe breakdown of time in different steps for the students to operate files correctly is collected and the average time in each step is calculated as below. \\n\\n**LSFS**\\n| Input Command | LSFS Parser | Task Execution | Total|\\n| ---------|---------|---------|---------|\\n| 11.43s | 4.21s | 11.95s | 27.59s |\\n\\n**Traditional FS**\\n| Learn Command | Find Path | Input Command |Task Execution | Total|\\n| ---------|---------|---------|---------|---------|\\n| 153.61s | 28.23s | 30.30s | 0.02s | 212.16s |\\n\\nWhile LSFS takes longer for executing file operations due to the inference time of the LLM, the total time it takes for users to successfully perform the file operations is significantly shorter than that of traditional file systems. This is because traditional systems require users much more time to learn through trial-and-error during the \\\"Learn Command\\\" step. In contrast, LSFS simplifies the process through natural language commands, reducing overall time.\"}", "{\"title\": \"Author Response(2/4)\", \"comment\": \"> ***4. Very disappointed to have the future directions (as Appendix H) not included at all in the main article, but instead, pushed as optional reading material at the very very end of the .pdf\\u2026***\\n\\nThank you for recognizing the future directions that can be built on our method. The part of future directions has been added in our revised version in Section 6. \\n\\n> ***5. Finally, the overall article, from \\u00a73 onwards especially, reads a bit like a technical report and lacks -to me- a step-back to better highlight the novelties and the perspectives while guiding the reading with more examples / intuitions directly in the text.***\", \"we_would_like_to_highlight_our_work_addresses_a_fundamental_problem_in_agent_systems\": \"the lack of a systematic approach to semantic file management and would like to present the two major novelties we propose when designing this system as the following.\\n- The first novelty is we close the semantic gap from prompts contain file indentions to actual file operations. Traditional file systems rely on exact matches and rigid hierarchical structures, making it difficult to perform operations based on semantic understanding such as 'find documents about machine learning'. To address this challenge, we propose the semantic-based file index structure using vector databases to capture and store file content meanings. Also, we design the semantic parser that accurately translates semantic queries into new designed file operational APIs. \\n- The second novelty is that simple architecture designs that directly map from language prompts to file system calls can pose safety vulnerabilities. For instance, a misinterpreted command could lead to unintended file deletions or destructive overwrites, potentially causing irreversible damage. To address this challenge, we design the layered architecture with high-level file operation API to isolate the direct access to low-level file syscalls. We also design multiple verification mechanisms at both APIs and syscalls to validate operations before execution and design the rollback mechanism that can reverse potentially harmful operations, ensuring operation safety. \\n\\n### Response to Minor Comments:\\nThank you for your valuable writing suggestions. The content has been updated accordingly in our revised version. \\n\\n### Response to Questions\\n> ***Q1: Intro (line 57) \\\"The community still lacks a more general LSFS to serve as a common foundation that can be used by various agents on the application-level.\\\" Do you have some references backing up this lack? I mean, have people expressed somewhere they'd like/need a FS organized by an intelligent agent, i.e., an LLM here?***\\n\\nThank you for your question of the use cases for our LSFS. Semantic-based file systems are designed to cater to a broad range of users. For instance, as highlighted in [1], traditional file systems often impose a cumbersome approach to organizing documents, posing significant challenges for small and medium-sized enterprises (SMEs), public administration bodies, and individual users. LSFS addresses this gap by enabling more efficient organization and management of document content, reducing operational complexity for non-computer practitioners, including SMEs and public administration agencies. Furthermore, [2] identifies semantic file systems as a critical development trend, emphasizing their versatility and applicability across various domains in information technology (IT). Additionally, LSFS can fill a significant gap in current LLM-based multi-agent systems, as noted in [3], which often lack robust mechanisms for managing interaction records and background knowledge. By providing a more intuitive file management framework, LSFS benefits both end-users and system designers by enhancing file access and reducing administrative overhead.\\n\\n> ***Q2: Related Work (line 133) \\\"Besides, it integrates comprehensive semantic information across all aspects of the file system\\u2013from storage and file operations to practical applications\\\" This sentence is a bit vague, could you name some of the semantic information here, so to guide the reader?***\\n\\nIn LSFS, \\\"comprehensive semantic information\\\" reflects our focus on integrating semantic insights across various stages of file management. Examples include:\\n\\n1. **Semantics for File Storage:** Beyond traditional metadata (size, timestamp), LSFS adds details like themes and keywords to enrich file descriptions. \\n2. **Semantics for File Content:** Each file is indexed semantically, enabling operations like finding a file based on descriptors such as \\\"a Hollywood science fiction movie.\\\" Files can also be grouped by related topics. \\n3. **Semantics for File Operations:** LSFS includes a parser that translates natural language commands into specific actions using semantic understanding.\"}", "{\"comment\": \"Got it, thanks for taking it into consideration.\"}", "{\"title\": \"Regarding the experiments with the 10 PhD students\", \"comment\": \"Thank you very much for taking the time to design and run this experiments. (It was more a suggestion than a requirement of mine to be honest; I know how time consuming this can be, so I appreciate the efforts!)\\n\\nRegarding the results themselves, I do not think the current comparison is fair, as typically:\\n1. command users usually know how to use commands and therefore the `learn command` is close to zero in the long run, (and in addition the training time of the LLM isn't here considered).\\n2. lay users tend to use the OS graphic user interface, and typically the file system search to perform some of the operations listed, for instance MacOS users may use directly `spotlight`.\\n\\nNevertheless, I know how hard it is to run such experiments (and also to have a somehow representative cohort of testers) and I acknowledge that LSFS seems to bring a layer of simplification while being efficient compared to traditional command manipulations, especially if such commands have to be run over a system on which the users aren't very familiar yet.\"}", "{\"comment\": \"Dear reviewer a3yw,\\n\\nWe highly appreciate the constructive comments and insightful suggestions you have offered for our work. As the deadline for the extended discussion period is nearing, in order for us to have sufficient time to address any additional questions you may have, we kindly encourage you to engage in the ongoing discussion and share any further insights or clarifications you may have.\\n\\nThank you very much for your time. We look forward to hearing from you soon.\\n\\nBest Regards,\\n\\nAll authors\"}", "{\"comment\": \"This answers my questions. Thanks!\"}", "{\"comment\": \"This clearly answers my questions. Thanks a lot for elaborating on it\"}", "{\"title\": \"Author Response(1/3)\", \"comment\": \"Thank you very much for taking your valuable time to review our paper and provide us with many constructive suggestions. Below are our detailed responses to your concerns:\\n> ***W1: The paper could have used an example to walk through the implementation. Each component description could have been presented with design diagrams or a flowchart that is easy to understand; visual representation always helps!***\\n\\nThank you for raising this question. A finer-grained example to walk through the components has been added in the Fig.1. in the revised version. \\n\\n> ***W2: More evaluations to prove their architecture is better than the traditional ones based on performance, latency, operational burden, and cost.***\\n\\n> ***Q2: How much cost are we saving with the new architecture?***\\n\\n> ***Q4: How much of an operational overhead is this architecture based on traditional architecture?*** \\n\\nCompared to traditional file systems, LSFS offers enhanced functionality, such as grouping and semantic retrieval, and performs better in keyword-based retrieval tasks. However, LSFS introduces additional overhead, including LLM inference and database operations, leading to longer execution times compared to traditional systems. Despite this, LSFS simplifies operations by reducing the time users spend learning and inputting commands. To provide a clearer understanding of the time costs, we present a comprehensive time analysis below.\\n\\n| operation | Traditional FS | LSFS |\\n|---------|---------|---------|\\n| Keyword_retrieve | \\u00b7 find /path -type f -exec grep -l \\u201dkeyword1\\u201d \\\\; -exec grep -l \\u201dkeyword2\\u201d \\\\; | Find the file contains 'keyword1' and 'keyword2' |\\n| Rollback | \\u00b7 btrfs subvolume snapshot /path/to/directory /path/to/snapshot <br> \\u00b7 btrfs subvolume delete /path/to/directory <br> \\u00b7 btrfs subvolume snapshot /path/to/snapshot /path/to/directory | Rollback the 'filename' to the version in 'date' |\\n| Group by | \\u00b7 mkdir -p /path/to/new_folder <br> \\u00b7 find /path/to/search_folder -type f -exec grep -l \\\"keywords\\\" {} \\\\; -exec mv {} /path/to/new_folder/ \\\\; | *group_kewords* with input : search_folder, keywords, new_folder | Change | cat /path/to/source_file.txt | tee -a /path/to/destination_file.txt | Change \\\"source_file\\\" by '/path/to/destination_file.txt' |\\n| Join | cat /path/to/file1.txt /path/to/file2.txt > /path/to/new_file.txt | *file_join* syscall with input: file1, file2, new_file|\\n| Link | ln -s /home/user/file_name /home/user/shortcut/data_link | Create a link for file_name |\\n\\nIn order to quantitatively analyze the time difference between traditional file systems and LSFS, we invited 10 Ph.D. to execute the above commands with Linux commands in traditional file systems and natural language commands in LSFS, and calculated the time consumption of each part according to our above time division. The results are as follows:\\n\\n**LSFS**\\n| Input Command | LSFS Parser | Task Execution | Total|\\n| ---------|---------|---------|---------|\\n| 11.43s | 4.21s | 11.95s | 27.59s |\\n\\n**Traditional FS**\\n| Learn Command | Find Path | Input Command |Task Execution | Total|\\n| ---------|---------|---------|---------|---------|\\n| 153.61s | 28.23s | 30.30s | 0.02s | 212.16s |\\n\\n> ***W3: The paper didn't touch on any security concerns while using the LLMS. Are there guardrails in place to restrict the LLMs to scanning through the personal data?***\\n\\n> ***Q1: Are there guardrails in place to restrict the LLM to not scan through personal data?***\\n\\nIn LSFS, our user filter is implemented to prevent LLMs from scanning to personal information. When LSFS retrieves an eligible file to perform subsequent tasks, it will first send the user confirmation, and if the user finds that the retrieval results contain personal information, he can cancel sending the file to LLM.\"}", "{\"title\": \"Author Response(2/2)\", \"comment\": \">***W3: unclear how robust the system is when translating NLP input into actual actions.***\\n\\n> ***Q2: Is the accuracy of the LSFS parser of about 90% enough for meaningful work? That means 10% incorrect results. How did you collect the NLP commands for this evaluation?***\\n\\nWe conduct the case study of the incorrect results and find that LSFS parser sometimes performs bad in parsing complex commands due to the capability and inherited randomness of the LLM. To further improve the reliability of our system, we make the following enhancements. We add the failure case to the prompt like ***the result of {failure case} is wrong, you should refer to the case and regenerate it.*** Then let LSFS parser generate the answer again, the experiment results as follow:\\n\\n**Gemini-1.5**\\n| Operation | First Parsing Success Rate | Second Parsing Success Rate |\\n|--------------|-----------------------------------|-----------------------------|\\n| Retrieve-Summary API| 100%| - |\\n| Change-Summary API | 96.7% | 100%|\\n| Link API | 100% | - |\\n| Rollback API | 83.3% | 100% |\\n\\n**GPT-4o-mini**\\n| Operation | First Parsing Success Rate | Second Parsing Success Rate |\\n|--------------|-----------------------------------|-----------------------------|\\n| Retrieve-Summary API| 91.3% | 100% |\\n| Change-Summary API | 100% | - |\\n| Link API | 100% | - |\\n| Rollback API | 100%| - |\\n\\n**Qwen2:7b**\\n| Operation | First Parsing Success Rate | Second Parsing Success Rate |\\n|--------------|-----------------------------------|-----------------------------|\\n| Retrieve-Summary API| 86.7% | 100% |\\n| Change-Summary API | 100% | - |\\n| Link API | 100% | - |\\n| Rollback API | 83.3% | 100% |\\n\\n**Gemma:2b**\\n| Operation | First Parsing Success Rate | Second Parsing Success Rate |\\n|--------------|-----------------------------------|-----------------------------|\\n| Retrieve-Summary API| 76.7% | 85.7% |\\n| Change-Summary API | 96.7% | 100%|\\n| Link API | 91.3% | 100% |\\n| Rollback API | 100%| - |\\n\\nAfter a second parsing using the use cases that were incorrectly parsed the first time, we can see that all LLM backbone achieved 100% accuracy on each task except Gemma-2, which did not achieve 100% accuracy on the Retrieve-Summary API. This means that our parser can parse the task correctly at most twice on most tasks and llm backbone, so we consider the parser to be useful for mapping work. For data collection, we invited 10 Ph.D. students to write preliminary instructions for related tasks. Then the results are used for testing, and the instruction information is fine-tuned according to the test results, and finally the instruction with the highest score is selected. \\n\\n> ***W4: unclear how the new API maps and extends conventional file handling APIs, and why setting up a new API set is superior to adding some APIs to a conventional file system.***\\n\\n> ***Q1:Your systems seems to have advantages for searching and retrieving files based on keyword or semantic search. This could be implemented on top of a conventional file system, why implement a new API for that?***\\n\\nThank you for the recognition of the advantages of our LSFS. The building of APIs by simply combining system calls to achieve functionalities can not consider the semantic information of file in the file's lifecycle including creation, update, deletion. Therefore, we can not use them to achieve flexible semantically related APIs. Under this consideration, we design a new set of APIs to build the semantic index of file and support more flexible semantic file management operations. \\n\\n\\n> ***Q3: How exactly did you perform the evaluation summarized in table 2?***\\n\\nIn the \\\"w/o LSFS\\\" scenario, we input each text directly into the LLM sequentially, checking if it meets the target condition. If the condition is satisfied, the subsequent action is executed; otherwise, the text is skipped. In contrast, under the \\\"w/ LSFS\\\" setup, LSFS performs an initial search and prompts the user for confirmation. The corresponding file from the search results is then directly fed into the LLM for further processing. Additional details can be found at Appendix D.\\n\\n[1] D. Di Sarli, F. Geraci, \\u201cGFS: a Graph-based File System Enhanced with Semantic Features\\u201d, Proceedings of the 2017 International Conference on Information System and Data Mining, pp. 51-55, April 1-3, 2017\\n\\n[2] Mashwani, S.R. and Khusro, S. 2018. The Design and Development of a Semantic File System Ontology. Engineering, Technology & Applied Science Research. 8, 2 (Apr. 2018), 2827\\u20132833. DOI:https://doi.org/10.48084/etasr.1898.\\n\\n[3] Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, and Lingpeng Kong. Os-copilot: Towards generalist computer agents with self-improvement. arXiv preprint arXiv:2402.07456, 2024\"}", "{\"title\": \"Author Response(3/4)\", \"comment\": \"> ***Q3: Could you add a positioning sentence in \\u00a72.3 to explain clearly where LSFS delineates from this research axis?***\\n\\nWe appreciate this feedback. In response, a positioning statement has been added: \\\"While researches of agent systems primarily focus on build of LLM applications that can leverage file resources, our represents a fundamental innovation in the infrastructure that manages file resources based on semantics to support LLM-based agent systems.\\\"\\n\\n> ***Q4: In this stage many operations from traditional FS aren't there, from what I understood, this is typically the case for right modification of files or group affiliation\\u2026 These would be particularly helpful so to \\\"propagate\\\" these rights to any retrievers, preventing typically to see the private/exclusive file of someone else appearing in my search results, wouldn't it?***\\n\\n\\nPrivate information is important for file system. Our LSFS system operates as a middleware layer over a traditional file system, inheriting all file permissions from the underlying system, ensuring no file has multiple user affiliations. To further safeguard user privacy, LSFS incorporates a defensive mechanism that prevents large language models (LLMs) from accessing personal data without explicit consent. When a user requests file access through LSFS, the system prompts for confirmation before sharing data with the LLM. Users retain control, with the option to cancel the operation at any point, thereby preventing unintended data exposure.\\n\\n> ***Q5: In \\u00a74,2 (line 294), the authors mentioned that supervisor updates \\\"periodically\\\" what is the period between each check and therefore how expensive resource-wise is it? Did the authors check various values for this, searching for the sweet-spot between resource-consumption and freshness of the LFSF data? Also how does it scale in terms of file number and disk footprint?***\\n\\nThe LSFS supervisor checks for file updates every 10 seconds. To assess its resource usage and scalability, we measured its response time across different file counts and monitored its CPU usage. The results are as follows:\\n\\n| Number of Files | Response Time (seconds) |\\n|-----------------|--------------------------|\\n| 100 | 0.0006 |\\n| 200 | 0.0011 |\\n| 400 | 0.0021 |\\n| 800 | 0.0042 |\\n| 1600 | 0.0044 |\\n\\n**CPU Usage**: Consistently between 0.1% and 0.2%.\\n\\nThese results show that the supervisor is efficient, with millisecond-level response times and low CPU usage remained, even as the number of files increases.\\n\\n> ***Q6: Overall, \\u00a74.4 seems to be more or less an NL2CSV tool, filling fields of a JSON, right? In such case, this is something that the community has been exploring a lot these past two years, so maybe adding some pointers wouldn't hurt. This goes also for the \\u00a75.1 associated with RQ1.***\\n\\nThank you for mentioning the related works. References of related works have been updated in the revised version.\\n\\n> ***Q7: Are authors considering releasing their test data for \\u00a75.1? Also, it would be good to have some examples in the body of the article.***\\n\\nThe test data has been included in our anonymous code link. An example of the natural prompt command is presented in Fig.1. and more examples of the test data are given in the appendix due to the space limit. \\n\\n> ***Q8: In \\u00a75.2 why no QWen or Gemma in the experimental run for Table 2, and no Gemma in Figure 6?***\\n\\nWe conducted the above experiments for both Qwen-2 and Gemma-2 but excluded these models from the final results due to their poor performance on the task and hallucinated outputs. For example, in Table 2, when judging files without the LSFS system, both Qwen-2 and Gemma-2 produced outputs containing inaccuracies. Using identical text and prompts to check for keyword1 and keyword2, their responses included irrelevant stories about Michael Jordan. Similarly, in Figure 6, Gemma-2 demonstrated unstable outputs. This instability makes it challenging to assess the model's reliability and to derive meaningful conclusions from the system's performance, and the discussion of this has been updated in the revised version.\"}", "{\"title\": \"Action Required: Respond to Author Rebuttals - Nov 27\", \"comment\": \"Dear ICLR Reviewers,\\n\\nThe author discussion phase is ending soon. Please promptly review and respond to author rebuttals for your assigned papers. Your engagement is critical for the decision-making process.\", \"deadlines\": \"\", \"november_26\": \"Last day for reviewers to ask questions to authors.\", \"november_27\": \"Last day for authors to respond to reviewers.\", \"november_28___december_10\": \"Reviewer and area chair discussion phase.\\nThank you for your timely attention to this matter.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author Response(3/3)\", \"comment\": \"> ***Q7: Is there an Andon Cord mechanism to stop the LLM to give out hallucinations and wonky results to the user?***\\n\\nFirst, our parser helps mitigate the hallucination problem by incorporating the fault-tolerance mechanism. If the LSFS parser incorrectly parses a user's command and fails to map it to the API, the failure case will be added into the prompt like *the result of {failure case} is wrong, you should refer to the case and regenerate it.* Then let LSFS parser generate the answer again. Secondly, our rollback mechanism provides a safeguard against potential LLM hallucinations during file operations. If the LLM erroneously modifies a file, the user can restore it to its previous version using the rollback feature.\\n\\n\\n> ***Q8: While scanning through the files, is the data saved in memory? Does the data contain PII information (ppersonal information about the user)?***\\n\\nIn the LSFS architecture, large language models (LLMs) serve as downstream processing tools. Users cannot add files to LSFS through LLMs, nor does LSFS provide interactive memory functionalities for LLMs. Consequently, files scanned or referenced during LLM interactions are not automatically added to the LSFS system's memory. Instead, the files present in LSFS are explicitly added by the user. Besides, to help protect the personal information of user, we provide the user confirmation mechanisms, which is detailed in our response to Q1.\\n\\n[1] D. Di Sarli, F. Geraci, \\u201cGFS: a Graph-based File System Enhanced with Semantic Features\\u201d, Proceedings of the 2017 International Conference on Information System and Data Mining, pp. 51-55, April 1-3, 2017\\n\\n[2] Mashwani, S.R. and Khusro, S. 2018. The Design and Development of a Semantic File System Ontology. Engineering, Technology & Applied Science Research. 8, 2 (Apr. 2018), 2827\\u20132833. DOI:https://doi.org/10.48084/etasr.1898.\\n\\n[3] Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, and Lingpeng Kong. Os-copilot: Towards generalist computer agents with self-improvement. arXiv preprint arXiv:2402.07456, 2024\\n\\n[4] Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel:Communicative agents for \\\"mind\\\" exploration of large language model society. Advances in Neural Information Processing Systems, 36, 2023.\\n\\n[5] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li,Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023.\"}", "{\"title\": \"Author Response(3/3)\", \"comment\": \"> ***W5. Safety insurance mechanisms is pointed out as one contribution, however, there is no description of this mechanism and no experimental comparison between the performance of LSFS with and without the safety insurance mechanisms.***\", \"we_designed_some_security_mechanisms\": \"1. We added a process lock to LSFS to prevent consistent reads and writes to the same file\\n2. We design a user confirmation step: When a user makes a change to a file, the user will be asked to confirm the changed object twice\\n3. We designed rollback operations: if the user makes a wrong change to the file, they can roll back to the correct version\\n\\nFor first and third mechanisms, the file operation reliability can be improved as long as these two mechanisms are enabled.\\n\\nFor the second mechanism, we conducted a quantitative evaluation of two aspects: the probability of file misoperation and the proportion of risky operations. We compared these metrics with and without the confirmation mechanism enabled. \\n\\nIn the current experiment, we used the retrieval function to locate target files for operations. The table below shows the probability of retrieval errors with and without the confirmation step:\\n\\n| Number of Files | Without User Confirmation | With User Confirmation |\\n|------------------|---------------------------|-------------------------|\\n| 10 | 13% | 0% |\\n| 20 | 16.7% | 0% |\\n| 40 | 15.8% | 0% |\\n| 120 | 14.8% | 0% |\\n\\nAdditionally, we evaluated the proportion of potentially dangerous operations executed (e.g., write, update, or delete) across all file management APIs. The results below demonstrate that the confirmation mechanism in LSFS effectively prevents unintended dangerous operations:\\n\\n| Without User Confirmation | With User Confirmation |\\n|---------------------------|-------------------------|\\n| 36.8% | 0% |\\n\\nThese results highlight that enabling the confirmation mechanism significantly enhances the safety and reliability of file operations in LSFS.\"}", "{\"title\": \"Author Response(4/4)\", \"comment\": \"> ***Q9: In \\u00a75.2 still, what about very large number of files?***\\n\\nWe increased the number of files to 120, including binary file types \\\".pdf \\\", \\\".doc\\\" and plain text files \\\".txt\\\". The experimental results are as follows:\\n| LLM-backbone | Retrieval Accuracy w/o LSFS | Retrieval Accuracy w/ LSFS | Retrieval Time w/o LSFS | Retrieval Time w/ LSFS |\\n|---------|---------|---------|---------|---------|\\n| Gemini-1.5-flash| 35.2% | 92.9%(164%&#8593;) | 605.59s |48.08s(92.1%&#8595;) |\\n| GPT-4o-mini| 63.8% | 92.9%(45.6%&#8593;) | 938.68s | 88.93s(90.5%&#8595;)|\\n\\n\\nAs shown in the table and Tab. 2, retrieval time increases with the number of files, but LSFS maintains a linear growth trend and offers more stable retrieval accuracy compared to the pure LLM method. This demonstrates the scalability of LSFS. However, at very large scales, system overhead may further delay retrieval time and we plan to evaluate our system's performance under such large-scale scenarios in our future work.\\n\\n> ***Q10: Ibid., same for the number of versions?***\\n\\nIn our experiments, we tested up to 40 versions and observed that the rollback time does not increase exponentially with the number of versions rolled back, as shown in Fig. 6. This is because the rollback API stores each file version independently, allowing efficient retrieval. However, when the number of files scales up significantly, the rollback time could still increase due to heavier file storage overheads will affect the whole system's latency and exploration on large-scale scenarios in our future work.\\n\\n[1] D. Di Sarli, F. Geraci, \\u201cGFS: a Graph-based File System Enhanced with Semantic Features\\u201d, Proceedings of the 2017 International Conference on Information System and Data Mining, pp. 51-55, April 1-3, 2017\\n\\n[2] Mashwani, S.R. and Khusro, S. 2018. The Design and Development of a Semantic File System Ontology. Engineering, Technology & Applied Science Research. 8, 2 (Apr. 2018), 2827\\u20132833. DOI:https://doi.org/10.48084/etasr.1898.\\n\\n[3] Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, and Lingpeng Kong. Os-copilot: Towards generalist computer agents with self-improvement. arXiv preprint arXiv:2402.07456, 2024\"}", "{\"title\": \"Author Response(1/4)\", \"comment\": \"Thank you very much for providing us with many constructive suggestions. Below are our detailed responses to your concerns:\\n### Response to General Remarks\\n> ***1. Even though the Introduction is clear, I'd have liked a more concrete / detailed example, maybe having a finer-grained figure 1 would help.*** \\n\\nThank you for this suggestion. A finer-grained figure 1 has been updated in our revised version of paper.\\n\\n> ***2. The balance between \\u00a73 Vs. \\u00a74 is unexpected, I would have imagined a more detailed architecture section (\\u00a73), explaining the various design choices. Instead, the authors motivated their architecture. In addition to not being referenced, this motivation -to me- would have been better positioned directly in the Introduction. Similarly, the overview of the architecture and the description of Figure 2 would have benefit also the introduction of an example, especially since Fig2.a doesn't contain precise information, but rather e.g. a list of blocks entitled API.***\\n\\nThank you for your suggestion of the structure of the paper, we have updated all of them to the paper and a detailed example and figure has been added to \\u00a73.\\n\\n> ***3. Overall, for \\u00a75, it would have been very interesting and convincing to see an experiment involving users' usage and performances, using a TFS and the presented LSFS. The authors could have reviewed the success rate and the time efficiency of users in both settings together with collecting feedback from them, following the traditional user studies.***\\n\\nThank you for the suggestion of collecting user feedback. The comparison of keyword retrieval accuracy of users and LSFS can be seen in Table.3. In addition, we invited 10 Ph.D students to supplement the following experiments to evaluate the time cost to fully complete the task:\", \"traditional_command_execution_typically_follows_this_workflow\": \"learn command parameters (via ChatGPT or search engines) > locate appropriate file paths through OS lookup mechanisms > input the complete command > and finally obtain results. In contrast, LSFS streamlines this process into a simpler workflow: input natural language commands with file names or semantic keywords, which are then processed by the LSFS parser for task execution.\\n\\nIn order to quantitatively analyze the time difference between traditional file systems and LSFS, we adopt FS commands and LSFS commands to achieve different file operations shown below. 10 Ph.D. student are invited perform file operations through using Linux commands in traditional file systems and using natural language commands in LSFS, respectively. \\n\\n| operation | Traditional FS | LSFS |\\n|---------|---------|---------|\\n| Keyword_retrieve | \\u00b7 find /path -type f -exec grep -l \\u201dkeyword1\\u201d \\\\; -exec grep -l \\u201dkeyword2\\u201d \\\\; | Find the file contains 'keyword1' and 'keyword2' |\\n| Rollback | \\u00b7 btrfs subvolume snapshot /path/to/directory /path/to/snapshot <br> \\u00b7 btrfs subvolume delete /path/to/directory <br> \\u00b7 btrfs subvolume snapshot /path/to/snapshot /path/to/directory | Rollback the 'filename' to the version in 'date' |\\n| Group by | \\u00b7 mkdir -p /path/to/new_folder <br> \\u00b7 find /path/to/search_folder -type f -exec grep -l \\\"keywords\\\" {} \\\\; -exec mv {} /path/to/new_folder/ \\\\; | *group_kewords* with input : search_folder, keywords, new_folder | Change | cat /path/to/source_file.txt | tee -a /path/to/destination_file.txt | Change \\\"source_file\\\" by '/path/to/destination_file.txt' |\\n| Join | cat /path/to/file1.txt /path/to/file2.txt > /path/to/new_file.txt | *file_join* syscall with input: file1, file2, new_file|\\n| Link | ln -s /home/user/file_name /home/user/shortcut/data_link | Create a link for file_name |\\n\\nThe breakdown of time in different steps for the students to operate files correctly is collected and the average time in each step is calculated as below. \\n\\n**LSFS**\\n| Input Command | LSFS Parser | Task Execution | Total|\\n| ---------|---------|---------|---------|\\n| 11.43s | 4.21s | 11.95s | 27.59s |\\n\\n**Traditional FS**\\n| Learn Command | Find Path | Input Command |Task Execution | Total|\\n| ---------|---------|---------|---------|---------|\\n| 153.61s | 28.23s | 30.30s | 0.02s | 212.16s |\\n\\nAlthough LSFS takes longer for executing file operations due to the inference time of the LLM, the total time it takes for users to successfully perform the file operations is significantly shorter than that of traditional file systems. This is because traditional systems require users much more time to learn through trial-and-error during the \\\"Learn Command\\\" step. In contrast, LSFS simplifies the process through natural language commands, reducing overall time.\"}", "{\"comment\": \"Dear reviewer QPMt,\\n\\nWe highly appreciate the constructive comments and insightful suggestions you have offered for our work. As the deadline for the extended discussion period is nearing, in order for us to have sufficient time to address any additional questions you may have, we kindly encourage you to engage in the ongoing discussion and share any further insights or clarifications you may have.\\n\\nThank you very much for your time. We look forward to hearing from you soon.\\n\\nBest Regards,\\n\\nAll authors\"}", "{\"summary\": \"In this article, the authors based their efforts on the hypothesis that Large language models (LLMs) have the potential to improve file management systems by enabling interactions through natural language rather than traditional manual commands. Following this idea, they proposed LLM-based Semantic File System (LSFS) to address some of the current File System limitations (to the users), by allowing typically semantic file management through natural language prompts. LSFS has through some APIs for semantic file operations, achieves better retrieval accuracy and speed compared to traditional systems or the use of standalone LLMs respectively. It supports complex tasks like semantic file retrieval, rollback, and sharing with high success rates.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"Well motivated, especially through the Introduction.\", \"Clearly written\", \"Very nice figures\", \"Hot topic nowadays, with many LLM-based applications reshaping the ways we interact with machines\"], \"weaknesses\": [\"### General Remarks\", \"Even though the Introduction is clear, I'd have liked a more concrete / detailed example, maybe having a finer-grained figure 1 would help.\", \"The balance between \\u00a73 Vs. \\u00a74 is unexpected, I would have imagined a more detailed architecture section (\\u00a73), explaining the various design choices. Instead, the authors motivated their architecture. In addition to not being referenced, this motivation -to me- would have been better positioned directly in the Introduction. Similarly, the overview of the architecture and the description of Figure 2 would have benefit also the introduction of an example, especially since Fig2.a doesn't contain precise information, but rather e.g. a list of blocks entitled API.\", \"Overall, for \\u00a75, it would have been very interesting and convincing to see an experiment involving users' usage and performances, using a TFS and the presented LSFS. The authors could have reviewed the success rate and the time efficiency of users in both settings together with collecting feedback from them, following the traditional user studies.\", \"Very disappointed to have the future directions (as Appendix H) not included at all in the main article, but instead, pushed as optional reading material at the very very end of the .pdf\\u2026\", \"Finally, the overall article, from \\u00a73 onwards especially, reads a bit like a technical report and lacks -to me- a step-back to better highlight the novelties and the perspectives while guiding the reading with more examples / intuitions directly in the text.\", \"### Minor Comments:\", \"Abstract \\\"agent applications **TO** perform file operations\\\"\", \"Introduction (line 95), \\\"a LLM-based\\\" should be **an**\", \"Table 1 (line 235), typo on \\\"Hybrid retrieval\\\" better to put everything lower-case as the rest of the table\", \"In \\u00a74.1, in Composite Syscall of LSFS, it would be better if the authors could make explicit the composition of atomic calls, i.e. for each entry, adding a generic formula (or examples) of how the composite call is practically chaining the atomic ones.\", \"In Figure 4, \\\"Please summary all paper from AAA University about LLM\\\" there's a typo in the second word: **summarize**.\", \"Similarly, still in Figure 4, \\\"Please use file A update the content of file B\\\" misses the word **to** before 'update'.\", \"In \\u00a75.2, (line 450), typo: \\\"vary the the number of rollback versions\\\" remove one **the**.\", \"In \\u00a75.3, (line 478), \\\"Therefore, we make two enhanced versions, named as TFS-grep and TFS-grep* to make the comparison\\\" I would be great to tell there differences in a line instead of relying on the Appendix, so to make the article (before page 10) self-contained.\"], \"questions\": \"1. Intro (line 57) \\\"The community still lacks a more general LSFS to serve as a common foundation that can be used by various agents on the application-level.\\\" Do you have some references backing up this lack? I mean, have people expressed somewhere they'd like/need a FS organized by an intelligent agent, i.e., an LLM here?\\n2. Related Work (line 133) \\\"Besides, it integrates comprehensive semantic information across all aspects of the file system\\u2013from storage and file operations to practical applications\\\" This sentence is a bit vague, could you name some of the semantic information here, so to guide the reader?\\n3. Could you add a positioning sentence in \\u00a72.3 to explain clearly where LSFS delineates from this research axis?\\n4. In this stage many operations from traditional FS aren't there, from what I understood, this is typically the case for right modification of files or group affiliation\\u2026 These would be particularly helpful so to \\\"propagate\\\" these rights to any retrievers, preventing typically to see the private/exclusive file of someone else appearing in _my_ search results, wouldn't it?\\n5. In \\u00a74,2 (line 294), the authors mentioned that supervisor updates \\\"periodically\\\" what is the period between each check and therefore how expensive resource-wise is it? Did the authors check various values for this, searching for the sweet-spot between resource-consumption and freshness of the LFSF data? Also how does it scale in terms of file number and disk footprint?\\n6. Overall, \\u00a74.4 seems to be more or less an NL2CSV tool, filling fields of a JSON, right? In such case, this is something that the community has been exploring a lot these past two years, so maybe adding some pointers wouldn't hurt. This goes also for the \\u00a75.1 associated with RQ1.\\n7. Are authors considering releasing their test data for \\u00a75.1? Also, it would be good to have some examples in the body of the article.\\n8. In \\u00a75.2 why no QWen or Gemma in the experimental run for Table 2, and no Gemma in Figure 6?\\n9. In \\u00a75.2 still, what about very large number of files?\\n10. Ibid., same for the number of versions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your recognition of our work. Guardrails are an important and worthwhile issue to explore, and we will aim to address this issue in future research.\"}", "{\"comment\": \"We are deeply grateful for your valuable time and insightful feedback. Below are our detailed responses to your concerns:\\n\\n> ***W1. The motivation is not concretely convincing, especially the first challenge mentioned in Introduction. In the Intro section, the authors mentioned that \\\"For instance, if two files have similar content\\u2013such as different versions of the same document\\u2013traditional file systems lack the capability to organize or retrieve these files based on their content similarity.\\\" Why should the files be organized by the similarity of their content? What are the benefits and what are the practical application scenarios? It would be better to at least add one short discussion or a few examples.***\\n\\nThank you for raising this question regarding the motivation. We would like to clarify that certain types of documents, such as legal or contract documents, often have multiple versions with very similar content, typically differing by only one or two entries. To be specific, POSIX file system interface only provide basic operations to store and retrieve data, and do not provide any interfaces or semantics based on the contents of the files. However, in LSFS, users can perform content-based retrieval by instruction such as: \\\"Please help me search for the xxx file, which does not include xxx information\\\".\\n\\n> ***W2. This paper does not point out what key problem they want to solve. Compared to a research paper, it seems more like a technical report.***\", \"we_would_like_to_highlight_our_work_addresses_a_fundamental_problem_observed_in_agent_systems\": \"the lack of a systematic approach to semantic file management and we would like to emphasize the two major research challenges we encounter when designing this system as the following.\\n- The first challenge is the semantic gap from prompts contain file indentions to actual file operations. Indeed, none of the existing file systems support semantics of data store and retrieval based on the content of files. These traditional file systems rely on exact matches and rigid hierarchical structures, making it difficult to perform operations based on semantic understanding such as 'find documents about machine learning'. To address this challenge, we propose the semantic-based file index structure using vector databases to capture and store file content meanings. Also, we design the semantic parser that accurately translates semantic queries into new designed file operational APIs. \\n- The second challenge is that simple architecture designs that directly map from language prompts to file system calls can pose security vulnerabilities. For instance, a misinterpreted command could lead to unintended file deletions or destructive overwrites, potentially causing irreversible damage. To address this challenge, we design the layered architecture with high-level file operation API to isolate the direct access to low-level file syscalls. We also design multiple verification mechanisms at both APIs and syscalls to validate operations before execution and design the rollback mechanism that can reverse potentially harmful operations, ensuring operation safety. \\n\\n> ***W3. The experimental setting is questionable. No baselines and introduction of datasets.***\\n\\nIn our experiments, our dataset contains various types of files collected from the Web (e.g., google scholar), such as plain text files \\\".txt\\\", binary text files \\\".pdf\\\", and \\\".doc\\\".\", \"here_is_our_detailed_experimental_setup\": \"1. In Experiment 5.1, we validate the effectiveness of our parser using four different LLM backbone. For each API, we created 30 use cases, each with a unique language structure. The test data is provided in the anonymous link.\\n\\n2. In Experiment 5.2, we evaluated the effectiveness of LSFS in semantic retrieval. Since traditional file systems lack semantic retrieval capabilities, we used LLM alone as the baseline for file retrieval in our evaluation. Specifically, we measured the time and accuracy of executing a semantic file retrieval task by simply feeding file content into LLM to retrieve files and retrieving files using LSFS.\\n\\n3. In Experiment 5.2, in order to check the robustness of our rollback function when the number of files increases, we tested the rollback time under 5-40 versions respectively, and obtained the robustness of the rollback time.\\n\\n4. In Experiment 5.3, we compared LSFS with traditional file retrieval methods based on Precision, Recall, and F1 score. The methods include:\\n - TFS-search-window: Uses the computer's search window (e.g., MacOS Spotlight) to retrieve both binary and plain text files.\\n - TFS-grep: Uses the Linux terminal command grep, which can only retrieve plain text files.\\n - TFS-grep*: An enhanced version of TFS-grep that first converts binary files into plain text before using grep to retrieve.\\n\\n5. In the second part of Experiment 5.3, we compared LSFS and pure prompting to different LLMs as baselines to generate code for creating sharable file links.\", \"title\": \"Author Response(1/3)\"}", "{\"comment\": \"Thank you again for your constructive suggestions!\\n\\nPlease let us know if you have any further questions. If you find that our response addresses your concerns, would you kindly consider raising your rating score for our paper? We greatly appreciate your consideration.\\n\\nBest regards,\"}", "{\"summary\": \"The paper represents a problem in the current scenario of semantic file matching algorithms; currently, we use the traditional way of semantic matching algorithms based on file name, size, and timestamps. This involved remembering syntax and filenames. This fails in scenarios where two files have similar text; here its hard to distinguish files based on pure string matching. The paper introduces LLM with traditional file systems to do LLM based semantic file management.\\n\\nLSFS extracts semantic features from the file content and generates corresponding embedding vectors. LSFS incorporates semantic information into its file operations. In Linux, if we need to change a file, i.e., replace a file with another, we need to remember the path, but with LSFS the users don't need to remember the file name and can talk to LLM to make the changes for them. They have introduced a LLM-based Semantic File System (LSFS), an LSFS parser, and safety insurance mechanisms to the traditional file matching algorithms. The paper has done a great job at explaining the traditional way and modifications done with NLP. They have elaborately explained the API changes they have made over traditional architecture and given diagrams to explain the architecture. Also, they have demonstrated how components of LSFS interact with each other to achieve different functionalities.\\n\\nEvaluations are carried out based on success, performance, and performance on non-semantic based tasks like file sharing over sample data/files.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper touches on an existing problem that exists in the day-to-day lives of developers and Mac OS users of remembering the file names and directory where the files are present and need modification. There is no way to solve this problem at the present. Even while using LLMs, sometimes developers have to hard code the file path for retrieval. LLM based file retrival system is new and useful for anyone who is fed up of the traditional based systems.They did pretty well work on describing the APIs to be used in the new framework and a commendable job in comparing the APIs to the traditional ones. The quality of the paper was good and the presentation with diagrams were very useful to get the context of the paper.\\nThe architecture of the new framework was explained in detail and they have done a good job in explaining how each component in the architecture is integrated with LLMs. Evaluations are carried out based on success, performance, and performance on non-semantic based tasks like file sharing over sample data/files and are pretty easy to follow.\", \"weaknesses\": \"The paper could have used an example to walkthrough the implementation. Each component description could have been presented with design diagrams or a flowchart that is easy to understand; visual representation always helps! More evaluations to prove their architecture is better than the traditional ones based on performance, latency, operational burden, and cost. The paper didn't touch on any security concerns while using the LLMS. Are there guardrails in place to restrict the LLMs to scanning through the personal data? One more thing the paper lacked was elaborating on the use cases where this architecture can be used.\", \"questions\": \"1) Are there guardrails in place to restrict the LLM to not scan through personal data?\\n2) How much cost are we saving with the new architecture?\\n3) Are there any security concerns for using this architecture?\\n4) How much of an operational overhead is this architecture based on traditional architecture ?\\n5) What are the other use cases of this architecture in real life scenarios?\\n6) This seems like an ongoing problem that needs to be resolved; are there any similar existing architectures? Have you looked at those papers?\\n7) Is there an Andon Cord mechanism to stop the LLM to give out hallucinations and wonky results to the user?\\n8) While scanning through the files, is the data saved in memory? Does the data contain PII information (ppersonal information about the user)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewers/ACs/SACs/PCs,\\n\\nWe would like to summarize the strengths of this work acknowledged by the reviewers, and the responses we have made to address all the reviewer\\u2019s concern.\\n\\nStrengths acknowledged by the reviewers\\n\\n1. Novelty (**Reviewer aTqT, Reviewer iyLz**): Our work is based on current hot topics and proposes an approach to a problem that no one has solved so far.\\n2. Practicability (**Reviewer aTqT, Reviewer QPMt**): Our work can effectively solve the problems that users will encounter when using traditional file systems, and simplify the interaction of file systems. It makes the operation more intuitive and effective\\n3.Well extensible (**Reviewer aTqT, Reviewer QPMt**): Our work proposes a set of apis and syscall that are very easy to follow\\n4. Clear writing and presentation skills (**Reviewer aTqT, Reviewer iyLz, Reviewer a3yw**): Our work clearly introduces and evaluates LSFS through rigorous experiments and clear presentation\\n\\nThere are some main conerns raised by reviewers\\n1. Load reduction of our approach compared to traditional file systems (**Reviewer aTqT, Reviewer iyLz,Reviewer QPMt**)\\n2. What are the specific user groups and reference scenarios of our approach? (**Reviewer a3yw, Reviewer aTqT**)\\n3. Specific implementation strategy of our method on security (**Reviewer aTqT, Reviewer QPMt**)\\n4. More specific experimental setup and use case description of our API. (**Reviewer QPMt, Reviewer iyLz, Reviewer a3yw**)\\n\\nAll of these main concerns have been successfully addressed during the rebuttal phase, and we hope that the improvements we made during this stage will be taken into consideration.\\n\\nWe sincerely appreciate your valuable time!\\n\\nThanks and regards,\\n\\nAuthors\"}", "{\"title\": \"Author Response(2/3)\", \"comment\": \"> ***W4: One more thing the paper lacked was elaborating on the use cases where this architecture can be used.***\\n\\n> ***Q5: What are the other use cases of this architecture in real life scenarios?***\\n\\nSemantic-based file systems are designed to cater to a broad range of usecases. For instance, as highlighted in [1], traditional file systems often impose a cumbersome approach to organizing documents, posing significant challenges for small and medium-sized enterprises (SMEs), public administration bodies, and individual users. LSFS addresses this gap by enabling more efficient organization and management of document content, reducing operational complexity for non-computer practitioners, including SMEs and public administration agencies. Furthermore, [2] identifies semantic file systems as a critical development trend, emphasizing their versatility and applicability across various domains in information technology (IT). Additionally, LSFS can fill a significant gap in current LLM-based multi-agent systems, as noted in [3], which often lack robust mechanisms for managing interaction records and background knowledge. By providing a more intuitive file management framework, LSFS benefits both end-users and system designers by enhancing file access and reducing administrative overhead.\\n\\n> ***Q3: Are there any security concerns for using this architecture?***\\n\\nThank you for raising the question. We designed several mechanisms to address potential security concerns:\\n1. We added a process lock to LSFS to prevent consistent reads and writes to the same file\\n2. We design a user confirmation step: When a user makes a change to a file, the user will be asked to confirm the changed object twice\\n3. We designed rollback operations: if the user makes a wrong change to the file, they can roll back to the correct version\\n\\nFor first and third mechanisms, the file operation reliability can be improved as long as these two mechanisms are enabled.\\n\\nFor the second mechanism, we conducted a quantitative evaluation of two aspects: the probability of file misoperation and the proportion of risky operations. We compared these metrics with and without the confirmation mechanism enabled. \\n\\nIn the current experiment, we used the retrieval function to locate target files for operations. The table below shows the probability of retrieval errors with and without the confirmation step:\\n\\n| Number of Files | Without User Confirmation | With User Confirmation |\\n|------------------|---------------------------|-------------------------|\\n| 10 | 13% | 0% |\\n| 20 | 16.7% | 0% |\\n| 40 | 15.8% | 0% |\\n| 120 | 14.8% | 0% |\\n\\nAdditionally, we evaluated the proportion of potentially dangerous operations executed (e.g., write, update, or delete) across all file management APIs. The results below demonstrate that the confirmation mechanism in LSFS effectively prevents unintended dangerous operations:\\n\\n| Without User Confirmation | With User Confirmation |\\n|---------------------------|-------------------------|\\n| 36.8% | 0% |\\n\\nThese results highlight that enabling the confirmation mechanism significantly enhances the safety and reliability of file operations in LSFS.\\n\\n> ***Q6: This seems like an ongoing problem that needs to be resolved; are there any similar existing architectures? Have you looked at those papers?***\\n\\nAs highlighted in our related work, existing semantic file systems integrate semantics only into metadata and do not leverage large models to create a fully semantic file system. \\n\\nTo the best of our knowledge, no work has systematically proposed semantic file management based on LLMs. Existing LLM-agent systems focus on enhancing agent functionalities through leveraging file content while neglecting the fundamental file management based on semantics. Agent frameworks with storage mechanism enabled, such as Autogen[4], Camel[5], and OS-Copilot[3], build user profiles and obtain knowledge from files. Knowledge in these systems typically stores agents' past interactions or acquired different files. However, agent systems built by these frameworks face limitations due to their reliance on traditional file management methods. One significant challenge is the need for developers to explicitly set up file paths for agents, which becomes increasingly cumbersome as the number of agents and agent-related files (e.g., task completion records and knowledge bases) grows. This limitation hinders scalability and efficiency in deploying multiple agents. Recognition of the limitation in existing agent systems inspires us to propose a semantic file system that supports the development of LLM-based agents, enabling more efficient and scalable semantic file management to support building of agents.\"}", "{\"metareview\": \"The paper proposes LSFS (LLM-based Semantic File System), a novel approach that enhances traditional file systems with semantic understanding through LLMs. It enables natural language interactions for file operations with some designed safety mechanisms. The authors demonstrate LSFS's effectiveness through evaluations on file retrieval and management tasks.\\n\\nThe reviewers value the paper's clear motivation addressing real-world file management challenges, practical architecture design, user studies and expanded experiments with more files and different LLM models. Key concerns discussed include the fairness of user study comparisons between experienced and lay users, security considerations for LLM access to files, parser accuracy and robustness, and limited evaluation on file types beyond basic tasks. \\n\\nThe authors have engaged constructively with reviewer feedback and provided extensive additional experiments and clarifications. While some concerns about security mechanisms and evaluation comprehensiveness remain to be addressed in future work, I support acceptance at ICLR given the paper's novel contribution to semantic file management and demonstrated practical benefits.\", \"additional_comments_on_reviewer_discussion\": \"Some improvements are recommended, particularly regarding security mechanisms for handling personal information and more detailed experiments with larger-scale file systems (traditional file systems are robust and efficient on managing larger-scale files) for future work.\"}", "{\"title\": \"Fairness of the experiment\", \"comment\": \"Thanks for your suggestions:\\nAs you pointed out, the time required for the **learn command** tends to approach zero over the long run. However, the time a user spends locating the relevant path and entering commands remains long. Additionally, as demonstrated in Table 4 of the paper, even graphical user interfaces such as **Spotlight** often return numerous irrelevant files, requiring further filtering by the user, which incurs additional time costs. Our results show that, even after subtracting the learning time, traditional file systems still take significantly more time to complete tasks compared to LSFS. However, your suggestions are very valuable to us and we plan to conduct a larger-scale user study covering users with different expertises in computer use in the future exploration.\"}", "{\"comment\": \"\\\"In LSFS, our user filter is implemented to prevent LLMs from scanning to personal information. When LSFS retrieves an eligible file to perform subsequent tasks, it will first send the user confirmation, and if the user finds that the retrieval results contain personal information, he can cancel sending the file to LLM.\\\" - The guardrails should be there in place to avoid passing personal data , there should be guardrails for input and output from LLM. It should not be a manual process where user has to interviene and cancel sending the file to LLM.\"}", "{\"summary\": \"This paper introduces an LLM-based Semantic File System (LSFS), designed to improve file management through natural language prompts, rather than traditional command-based interactions. LSFS integrates large language models (LLMs) to facilitate semantic file operations like retrieval, summarization, and rollback. At its core, LSFS uses a vector database to create semantic indexes for files, enabling high-level file operations that consider the content and context of files. It also includes a comprehensive set of APIs that allow complex operations, such as CRUD, grouping, and semantic retrieval, to be executed through natural language prompts. Experimental results show that LSFS outperforms traditional systems in retrieval accuracy (with a 15% improvement) and speed (2.1x faster), proving especially effective for semantic file tasks that go beyond conventional keyword searches.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. Semantic file systems enhance file management by incorporating content context, enabling more intuitive and effective operations, which is an important direction.\\n\\nS2. LSFS simplifies interactions with file system, making file management more accessible and user-friendly.\\n\\nS3. Integrating LLMs in system-level tasks expands functionality, enabling intelligent, responsive, and user-focused file extraction.\", \"weaknesses\": \"W1. The motivation is not concretely convincing, especially the first challenge mentioned in Introduction.\\n\\nIn the Intro section, the authors mentioned that \\\"For instance, if two files have similar content\\u2013such as different versions of the same document\\u2013traditional file systems lack the capability to organize or retrieve these files based on their content similarity.\\\" Why the files should be organized by the similarity of their content? What are the benefits and what are the practical application scenarios? It would be better to at least add one short discussion or a few examples. \\n\\nW2. This paper does not point out what key problem they want to solve. Compared to a research paper, it seems more like a technical report.\\n\\nW3. The experimental setting is questionable. No baselines and introduction of datasets.\\n\\nW4. The experimental results need more explanation. The traditional file system needs to execute commands to extract files, which should be faster than calling one LLM, even though these LLMs are light-weight. The authors should also list the inference time of LLMs, which should be also counted as the interaction time between users and the file system. Then the authors can also list the time that users manually write commands, which should be a good point to prove the point that -- LSFS can not only speed up file retrieving accuracy and speed, but also can reduce the interaction time between users and file system.\\n\\nW5. Safety insurance mechanisms is pointed out as one contribution, however, there is no description of this mechanism and no experimental comparison between the performance of LSFS with and without the safety insurance mechanisms.\", \"questions\": \"W1 - W5\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response(1/2)\", \"comment\": \"We are deeply grateful for your valuable time and insightful feedback. Below are our detailed responses to your concerns:\\n\\n> ***W1: weak evaluation, based on examples and not on more extensive use cases and human evaluation, evaluation setup not described in detail / cannot be reproduced***\\n\\nWe would like to further clarify our experiment setting as below. \\n1. In Experiment 5.1, we validate the effectiveness of our parser using four different LLM backbone. For each API, we created 30 use cases, each with a unique language structure. The test data is provided in the anonymous link. \\n2. In Experiment 5.2, we evaluated the effectiveness of LSFS in semantic retrieval. Since traditional file systems lack semantic retrieval capabilities, we used LLM alone as the baseline for file retrieval in our evaluation. Specifically, we measured the time and accuracy of executing a semantic file retrieval task by simply feeding file content into LLM to retrieve files and retrieving files using LSFS. Our target files contain various types of files such as \\\".txt,.pdf,.doc\\\", etc. To make our experiments more extensive use cases, we supplemented with experiments under 120 files, and the results are as follows:\\n\\n| LLM-backbone | Retrieval Accuracy w/o LSFS | Retrieval Accuracy w/ LSFS | Retrieval Time w/o LSFS | Retrieval Time w/ LSFS |\\n|---------|---------|---------|---------|---------|\\n| Gemini-1.5-flash| 35.2% | 92.9%(164%&#8593;)| 605.59s |48.08s(92.1%&#8595;) |\\n| GPT-4o-mini| 63.8% | 92.9%(45.6%&#8593;) | 938.68s | 88.93s(90.5%&#8595;)|\\n\\n\\n3. In Experiment 5.2, in order to check the robustness of our rollback function when the number of files increases, we tested the rollback time under 5-40 versions respectively, and obtained the robustness of the rollback time.\\n\\n4. In Experiment 5.3, we compared LSFS with traditional file retrieval methods based on Precision, Recall, and F1 score. The methods include:\\n - TFS-search-window: Uses the computer's search window (e.g., MacOS Spotlight) to retrieve both binary and plain text files.\\n - TFS-grep: Uses the Linux terminal command grep, which can only retrieve plain text files.\\n - TFS-grep*: An enhanced version of TFS-grep that first converts binary files into plain text before using grep to retrieve.\\n\\n5. In the second part of Experiment 5.3, we compared LSFS and pure prompting to different LLMs as baselines to generate code for creating sharable file links. \\n\\nFinally, we would like to highlight that our code is provided through the anonymous link and **it's all reproducible**. \\n\\n> ***W2: unclear for which users this could be helpful***\\n\\nSemantic-based file systems are designed to cater to a broad range of users. For instance, as highlighted in [1], traditional file systems often impose a cumbersome approach to organizing documents, posing significant challenges for small and medium-sized enterprises (SMEs), public administration bodies, and individual users. LSFS addresses this gap by enabling more efficient organization and management of document content, reducing operational complexity for non-computer practitioners, including SMEs and public administration agencies. Furthermore, [2] identifies semantic file systems as a critical development trend, emphasizing their versatility and applicability across various domains in information technology (IT). Additionally, LSFS can fill a significant gap in current LLM-based multi-agent systems, as noted in [3], which often lack robust mechanisms for managing interaction records and background knowledge. By providing a more intuitive file management framework, LSFS benefits both end-users and system designers by enhancing file access and reducing administrative overhead.\"}" ] }
2FMdrDp3zI
Is Complex Query Answering Really Complex?
[ "Cosimo Gregucci", "Bo Xiong", "Daniel Hernández", "Lorenzo Loconte", "Pasquale Minervini", "Steffen Staab", "Antonio Vergari" ]
Complex query answering (CQA) on knowledge graphs (KGs) is gaining momentum as a challenging reasoning task. In this paper, we show that the current benchmarks for CQA are not really complex, and the way they are built distorts our perception of progress in this field. For example, we find that in these benchmarks most queries (up to 98% for some query types) can be reduced to simpler problems, e.g., link prediction, where only one link needs to be predicted. The performance of state-of-the-art CQA models drops significantly when such models are evaluated on queries that cannot be reduced to easier types. Thus, we propose a set of more challenging benchmarks, composed of queries that require models to reason over multiple hops and better reflect the construction of real-world KGs. In a systematic empirical investigation, the new benchmarks show that current methods leave much to be desired from current CQA methods.
[ "complex query answering", "knowledge graph", "multi-hop reasoning" ]
Reject
https://openreview.net/pdf?id=2FMdrDp3zI
https://openreview.net/forum?id=2FMdrDp3zI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ybeFKtb46j", "yQmFt9aMgL", "y9qeugFLq6", "uFub8HUJYV", "sSQaigwiV4", "qtGcXvhwDZ", "qYV4V2taJk", "plffc0SJhz", "m3KYolybSC", "ly6IqrRv7L", "lorxZeBWR6", "lh8tfT0MK7", "kwfr8GKV08", "kL7rR3vYdF", "iwIC35g3a5", "i0J6dQ8HgP", "eR43doEaS7", "bl1IgXqsfe", "Ywaxjbf0EX", "XpgwPt6Q6s", "XbRISBQNnA", "X13AE6VCsK", "Vz9c8Zm7CJ", "UqR4kIy96N", "T7wIUq405q", "RPOM0GVEMG", "QbIAQ9LIbZ", "PYTXHK8kSa", "OgQn5eqdYO", "Ofd3mLvDpb", "NnmtJ4lbHe", "LXW7ABHRi3", "Kz33gVKVju", "JmyYUGqgaw", "HGxtzknEw1", "GuxmnIXERD", "Fhg929eSGJ", "DrENprOix3", "BX57pgyhqn", "6B0CqmjzLJ", "45nCIWZxRp", "1Yg9osAjSw", "1UQRFsxPDC", "0McmLNWBLU" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732105659127, 1732527360980, 1732104064153, 1733220639535, 1732555012934, 1733079695555, 1732635898141, 1732757488039, 1732703464716, 1733062574714, 1732637288197, 1733156394490, 1732105404437, 1732641796927, 1732729913082, 1730643650458, 1733062600095, 1733081612416, 1730604093965, 1732104928968, 1732787052876, 1733153333023, 1734584444479, 1732625944213, 1732703191912, 1733225391852, 1732971910914, 1733073409419, 1732105138562, 1733046295965, 1732554778457, 1732609245186, 1732536857209, 1733226714222, 1732637553832, 1733046241640, 1732105970941, 1737523867749, 1730708791308, 1733154379711, 1732971894872, 1733046679273, 1730142979685, 1733073262149 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z2gA" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Area_Chair_URtb" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_fPSh" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z2gA" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_DNKW" ], [ "ICLR.cc/2025/Conference/Submission7818/Area_Chair_URtb" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_fPSh" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z2gA" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_z9wR" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ], [ "ICLR.cc/2025/Conference/Submission7818/Reviewer_DNKW" ], [ "ICLR.cc/2025/Conference/Submission7818/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their comments and for deeming our work well written, easy to follow and highlighting an important issue.\\n\\n> The baselines lack of the symbolic methods like QTO and FIT, which are the mainstream of CQA methods; I am curious that the performance of symbolic method QTO and FIT as they already have the hybrid information\\n\\nWe thank the reviewer for pointing out the existence of such symbolic methods. In the revised version of the paper, we will also add a comparison with other neuro-symbolic methods like QTO. We do not consider FIT as it is equivalent to QTO on the query types we considered (see appendix G.2, and Table 5 of FIT). See our global answer concerning QTO reporting new results. We are in the process of adding complete results for the new benchmarks. The bottom line is that QTO performance is comparable to the other solvers: that is the new benchmarks reveal the \\u201ctrue hardness\\u201d of CQA. Our story and contribution still holds. \\n\\n>BetaE have three KGs but only two KGs are presented in the paper.\\n\\nAs we mentioned in the paper \\u201c We do not consider FB15k [1] as it suffers from data leakage [2]\\u201d, we do not use FB15k as it was found to suffer from a data leakage problem [2].\\n\\n[1] Bordes, Antoine, et al. \\\"Translating embeddings for modeling multi-relational data.\\\" Advances in neural information processing systems 26 (2013).\\n\\n[2] Toutanova, Kristina, and Danqi Chen. \\\"Observed versus latent features for knowledge base and text inference.\\\" Proceedings of the 3rd workshop on continuous vector space models and their compositionality. 2015.\\n\\n> 'reduced to easier types' is weird because query types with less constraint will be easy to solved than original query types, for example the performance of 3i is good than 2i. \\n\\nThe performance of 3i being much better than 2i is an indication that the old benchmarks are not good for evaluating true complex query answering performance. In fact, when adding another constraint there should be a drop in performance, as we\\u2019re adding an additional prediction that carries an error. \\nInstead, in the old benchmarks there is a significant increase in performance because such added links were already present in the training data, making the query easier, not harder! \\n\\n\\n\\n>CQD-hybrid is not the first an hybrid solver. QTO and FIT use the information from observed KG and trained link predictor \\n\\nWe thank the reviewer for pointing out the existence of such symbolic methods. We amended the line \\u201cTo the best of our knowledge, this is the first time such an hybrid solver is proposed in the CQA literature.\\u201d and added more lines to contextualize QTO on page 8. We remark that CQD-Hybrid is not the main contribution of the paper, see our global answer above.\\n\\n\\n\\n> Do you vary your argument in train queries? I am wondering the phenomenon that existed CQA models fails is caused by the train datasets have too many partial inference answers. Thus I am curious about the performance of symbolic search methods where these methods don not use queries to train.\\n\\nNo, we re-used the same training queries. Which \\u201carguments\\u201d are you exactly referring to?\"}", "{\"title\": \"Thanks for your contribution\", \"comment\": \"Dear Authors,\\n\\nI acknowledge the efforts during the rebuttal period to add new baseline models (QTO) and new data (queries with logical negation). The empirical results obtained are certainly of high quality and with great detail. I also appreciate the author's efforts to populate new and hard benchmarks (C3).\\n\\nThis paper's central claim is based on the fact that most of the samples in the old dataset are **partially inference queries**. It then claims that the scores from the old dataset are conflated because of memorization, so the scores are biased. \\nHowever, I still hold another view for (C1) and (C2) even after carefully reading, comprehending, and reflecting on the quantitative results and claims. The disagreement is in how to interpret the results:\\n- Firstly, it is too reckless to simplify reasoning as predicting new triples (inference-based).\\n - The definition of partial inference relies on the observation that removing one trivial edge in the reasoning tree can reduce the original query to a simpler query. This reduction is valid only when the logical calculus is conducted under boolean truth values and exhaustive search. It is **NOT** valid under a more realistic and machine-learning scenario, which is also suggested in Table A.3. I'm afraid I have to disagree with the differentiation of reasoning hardness by solely the categorization of full and partial inference queries.\\n - Table A.3 also supported my point. If the reduction is valid, one can predict the partial-inference 2p performance (2p-1p) with the full-inference 1p performance (1p-1p). However, the fact is that the relation is not even monotonic, see (GNN-QE 1p-1p < ConE 1p-1p but GNN-QE 2p-1p > ConE 2p-1p). This also happens for perfect memorization methods (CQD-hybrid 1p-1p > QTO 1p-1p but CQD-hybrid 2p-1p < QTO 2p-1p). The non-monotonic performances revealed that the performance of partial inference queries (even already reduced to 1p) is not dominated by the link prediction performance.\\n - I think this happens because different methods model the logical variables, quantifiers, and connectives in different ways. The various ways of parameterization/calculation make the actual implementation largely deviate from the **bipolar narrative of link memorization vs. link prediction suggested in this paper**. In other words, reasoning or query answering is more than link prediction.\\n- Acknowledging that the reasoning is just more than predicting new links, the argument that **because 98% of queries reduce to 1p leads to a clear artifact** also becomes questionable. At least, this argument does not apply to the neural models without explicit memorization and boolean logical calculus because the reduction is not valid. Unfortunately, none of the evaluated baselines satisfy such a reduction.\\n\\nBest\"}", "{\"comment\": \"Dear reviewers,\\n\\nThanks for the insightful feedback and questions. Before addressing your questions individually, we would like to remark the following points.\\n\\n**Scope and motivation.** We remark that ***our main contribution is not to propose a new method for CQA, but to reveal that the reported performance of standard benchmarks are inflated due to the presence of training links***. Table 1 helps understand our claim: very few queries are full-inference, and the aggregate performance people report in papers (even QTO, see below), is mostly due to the ability to memorize links and solve (1-hop) link prediction tasks. While we analyze only positive queries, our analysis can be easily transferred to negative queries and to benchmarks beyond FB15k-237 and NELL995, as the presence of training links depends on *how* a benchmark is created, rather than on the specific query types it contains. \\n\\n**Negation queries.** In fact, analyzing the queries involving negation in FB15k-237 and NELL995, reveals training links in the non-negative reasoning tree of the query-answer pairs (as expected). From our counts, 95.4% of 3in query-answer pairs and 98.4% of pin in FB15k-237 have existing links present in the non-negative part of their reasoning tree. \\nFurthermore, also negated links appear in the training, thus potentially leaking information (how this propagates to performance is less clear than the positive-part, as each method treats negation differently). We report these values in Table A.1 in Appendix A.1.\\nWe are running the full analysis for all remaining query types and will provide the remaining results in the revised version of the paper and in this discussion.\", \"the_bottom_line_is\": \"also ***negated queries are affected by the leak*** and potentially all queries that are generated in the same way.\\n\\n\\n**QTO and hybrid solvers.** We were not aware of the existence of other hybrid solvers in the literature. We thank the reviewers for pointing us to QTO and FIT. We amended the line \\u201cTo the best of our knowledge, this is the first time such an hybrid solver is proposed in the CQA literature.\\u201d and added more lines to contextualize QTO on page 8. We remark that CQD-Hybrid is not the main contribution of the paper, see above. \\n\\nWe introduce CQD-Hybrid to test our hypothesis that performance on the old benchmarks are inflated. We support our claim by showing that even the remarkable performance reported by QTO in its original paper is also an artifact of the fact that ~98% of all test queries require predicting a single link (see Table 1) as it happens for all baselines.\\nIn fact, if we run our stratified analysis for FB15k-237 with QTO to as we do in Table 2, it follows the same pattern of the other methods:\\n\\n\\n| Query type | all | 1p | 2p | 3p | 2i | 3i | 1p2i | 2i1p | \\n|------------|---------|-------|------|------|------|------|------|------|\\n| 1p | 46.7 | 46.7 | - | - | - | - | - | - | \\n| 2p | 16.6 | 16.7 | 4.0 | - | - | - | - | - | \\n| 3p | 15.6 | 15.8 | 4.5 | 5.0 | - | - | - | - | \\n| 2i | 39.7 | 40.8 | - | - | 5.7 | - | - | - | \\n| 3i | 54.6 | 56.4 | - | - | 15.4 | 5.4 | - | - | \\n| 1p2i | 33.8 | 35.9 | 15.8 | - | 6.2 | - | 7.3 | - | \\n| 2i1p | 24.7 | 25.3 | 10.3 | - | 8.6 | - | - | 8.1 | \\n\\n\\n\\ni.e for 2p queries: the aggregate statistic reported in the paper (\\u2018all\\u2019) has mrr=16.6, and 2p queries that are reducible to 1p have mrr=16.7. But 2p queries that are not reducible to any query type have mrr=4.0, much lower. We report the complete values for FB15k-237 and NELL995 in the updated table A.3 and A.4.\\n\\nThis highlights that the issue is to be found not in \\u201cold\\u2019\\u2019 baselines we use, but in the classical benchmark and affects all baselines.\\n\\n**QTO on new benchmarks.** We also evaluate QTO on our new benchmarks, and we find that performance indeed degrades as for the other baselines. We are still running the last experiments and we will report full data in Table 5. Here is an extract of the table for FB15k-237+H:\\n\\n| Model | 1p | 2p | 3p | 2i | 3i | 1p2i | 2i1p | 2u | 2u1p |\\n|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| GNN-QE |42.8 | **6.5** | **4.2** | 10.3 | 10.3 | 4.6 | 7.8 | 36.9 | 7.2 |\\n| ULTRAQ | 40.6 | 5.5 | 3.7 | 8.2 | 7.9 | 4.1 | 7.9 | 33.8 | 5.1 |\\n| CQD | **46.7** |6.3 | 2.7 | **18.4** | **16.2** | **5.6** | **9.4** | 34.9 | 7.3 |\\n| ConE | 41.8 | 5.7 | 3.9 | 16.8 | 13.9 | 4.0 | 6.6 | 25.9 | 5.3 |\\n| QTO | **46.7** | 5.9 | 3.5 | 13.5 | 11.8 | 4.7 | 8.8 | **37.3** | **7.4** |\", \"title\": \"Global answer to all reviewers\"}", "{\"title\": \"Final debunking?\", \"comment\": \"> On my side, I think we can blame the link predictor because the algorithm implemented by QTO/FIT follows the standard evaluation of existential queries if the adjacency matrix used by them is perfect (by link predictor is perfect, but this does not happen usually). Please check Chapter 4 in the literature [8].\\n\\n\\nYou mentioned `QTO, GNN-QE and CQD` first and now you are providing some ex-post explanation only for QTO. ***This is goalpost shifting*** and it happened several times, starting from the first review that only criticizes the paper about missing baselines and query types. ***Empirical evidence is rejecting your claim that improving link prediction is sufficient to improve CQA***, as follows.\\n\\nFirst, GNN-QE and CQD do not even try to approximate Dan Suciu\\u2019s algorithm, as QTO does, and yet they can perform better than QTO in many scenarios (see our Tables on new and old benchmarks). This is even more striking when we re-run QTO using the same link predictor we use for CQD. ***You cannot blame the link predictor alone***, as CQD with the same link predictor and a much simpler algorithm fares better than QTO. Your approximation of Dan\\u2019s algorithm in QTO, if it were true reasoning, would score better than the cruder algorithm in CQD. This is not the case, and we provided reproducible code. \\n \\nFurthermore, you are confusing formal reasoning with probabilistic reasoning, even if you were to implement exactly Dan\\u2019s algorithm with perfect numerical precision, you would not be guaranteed to predict the correct answers, but just the correct distribution. When you move to the mode of the distribution your performance can drop. Another hint that the current benchmarks can distort the sense of performance gains we have.\\n\\n> Please see my above and explain what is the reason if I can not produce the correct answer only because I had a bad adjacency matrix produced by the imprecise link predictor.\\n\\nYou are confusing an adjacency matrix with a probability tensor (that is definitely non-sparse, unless you enforce constraints as in [A]). And confusing logical reasoning with probabilistic reasoning. Current score-based models are not giving you (calibrated) probabilities [A].\\n\\n[A] Loconte, Lorenzo, et al. \\\"How to turn your knowledge graph embeddings into generative models.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n> I have no intention of changing your personal beliefs because this is your free will, and I left it to the community\\u2019s judgment. \\n\\nIt is not a matter of changing beliefs, it is supporting beliefs with solid evidence. In case you do not have evidence for yours, you should at least admit \\u201cI do not know\\u201d instead of claiming the others\\u2019 are wrong. If they are equivalently probable, you have no reason to reject.\\n\\n> Please add to it if I missed anything about your math.\\nYes, please re-read again what we wrote above. You have to take logical constraints into account, as we wrote above and therefore the number of valid triples drops from 50+ billions. However, even if the number were to drop by 99.99%, you would still have 5 million triples. Much larger than the current observed number of 200k triples.\\n\\nAs a last remark, we are quite baffled that simple math and logical arguments can be taken lightly. ***The point of a review process is not to impose personal beliefs about future KGs or speculative opinions about what models are doing**, it is to exchange solid evidence about what we observe and what we can quantify properly.\", \"this_is_one_of_the_aims_of_our_paper\": \"properly and rigorously measuring what we are reporting in tables, disentangling the hype from science.\"}", "{\"title\": \"Follow up response (part 2)\", \"comment\": \"> If the reduction is valid, one can predict the partial-inference 2p performance (2p-1p) with the full-inference 1p performance (1p-1p).\\n\\nThis is not true, and we never claim it is true. **A reduction is a syntactic property of two query classes, it does not imply that a ML model should be equivalently good on both query classes**. The fact that a reasoner can benefit from it depends on how the reasoner is implemented. Note that **memorization in non-hybrid solvers is a spectrum**, and depends on the learning process and how the model is implemented. It is very well known that for a large embedding space, neural link predictors are able, in principle, to exactly reconstruct the original training tensor, see e.g., [7] and the literature of tensor factorizations and universal approximation of neural networks. However, for a small embedding size, and limited capacity, and given training dynamics, they can only partially reconstruct it in practice.\\n\\nThis is expected whenever we are learning some ML model, there is *noise* in the optimization, and the learning problem is highly non-convex.\\n\\nThis is why, from a clear ML perspective, predicting 2p->1p might not match exactly the performance of 1p->1p given the noise and the learning issues we mentioned above. Nevertheless, we do not need performance to match exactly to make our claim that the query type \\u201c2p->1p\\u201d is overrepresented and dominates the overall scores, see our points (A-D) above. Furthermore, the comparison with hybrid-solvers, who memorize exactly, brings further evidence that for non-hybrid models this happens for a certain kind of memorization (which again is not exact memorization, but some relaxation).\\n\\n[7] Trouillon, Th\\u00e9o, et al. \\\"Complex embeddings for simple link prediction.\\\" International conference on machine learning. PMLR, 2016.\\n\\n> The non-monotonic performances revealed that the performance of partial inference queries (even already reduced to 1p) is not dominated by the link prediction performance.\", \"we_also_do_not_need_additional_empirical_evidence_to_say_that_this_is_a_form_of_memorization_happening\": \"hybrid-solvers, which memorize exactly, score better by design on the old benchmarks, and much worse on the new benchmarks (see the performance drop of QTO). You cannot deny this empirical evidence.\\n\\nNote that *monotonicity is not needed* and is a confounder (that maybe is distracting you from our points A-D): monotonicity would only make sense if the models were not learning and there were no noise. As explained above, learning poses several challenges and we cannot match performance exactly. But there is already a clear statistical trend that can be measured: **performance drops when we move to harder classes**. This happens systematically (up to noise), and a regression plot is sufficient to see this.\\n\\n\\n> The various ways of parameterization/calculation make the actual implementation largely deviate from the bipolar narrative of link memorization vs. link prediction suggested in this paper. In other words, reasoning or query answering is more than link prediction.\\n\\nThis is essentially what we said above (and in the paper!). We invite you to re-read the paper with fresh eyes: we never claim an opposition between link memorization vs link prediction.\\n\\nIf some specific line in the paper is still bothering you, please report it here and we are happy to discuss it further but also to amend it if needed, to make our point crystal clear.\"}", "{\"title\": \"Response to \\\"Wishful thinking or misunderstanding of logic?\\\" and \\\"Debunking non-factual assumptions\\\"\", \"comment\": \"> This is factually wrong. You can reconstruct the training triple tensor perfectly, but fail terribly at reasoning (this is what ML models are trying to do!), because you do not know how to emulate exactly a formal reasoner in a ML (even hybrid) model.\\n\\nThis is a mental experiment. As you said, it will never be achieved. The goal of raising this is to highlight the main cause of the hardness of the proposed benchmark, and this point will further be part of my argument that the new benchmark will be unnecessary but potentially confusing. I will explain why next.\\n\\n> This is wishful thinking...\\n\\nYou must have misunderstood my point. Of course, **improving link prediction implies improving CQA** can not be derived from the claim **improving CQA implies improving link prediction**. But my claim is not made by the abovementioned logic.\\n\\nPlease keep in mind the sufficient condition of perfect CQA performance, which is how the data is sampled.\\n\\n*(A) Perfect link prediction* $\\\\land$ *(B) perfect graph traversal* $\\\\land$ *(C) perfect logical calculus* $\\\\to$ *(D) perfect CQA performance*.\\n\\nThe claim **improving link prediction implies improving CQA** is a relaxed version of the sufficient condition above. The relaxation is that we replace the \\\"perfection\\\" in (A), (B), (C), and (D) by some scores s(A), s(B), s(C), s(D) in [0, 1], 1 means perfect. \\n\\nMathematically speaking, **improving link prediction implies improving CQA** is equivalent s(D) increases with the growth of s(A) up to a noise, with fixed s(B) and s(C).\\n\\nFirst, let's consider QTO, GNN-QE and CQD. For those models, (B) and (C) are almost satisfied by how they store their search stats and conduct the triangular norms. My mental experiments before wanted to suggest that if (A) is satisfied, then (D) is satisfied because (B) and (C) are satisfied too. If you didn't see any flaws here, it means s(D), as a function of s(A), say s(D) = f(s(A)), satisfies f(1) = 1 already. Also, f(0) = 0 is self-evident. So, given a snapshot of model checkpoints where more and more links are successfully predicted, it seems that f(x) is an increasing function up to a noise.\\n\\nFor general neural models, (B) and (C) are only partially satisfied but fairly good, as suggested by your table A.3. The link predictors should work well for both prediction and collaboration with other components implementing functions in (B) and (C). The general trend, however, is still there and might be revealed by a regression plot. Is it also wishful thinking? I am curious about your opinion.\\n\\n-----\\n\\n# Response to \\\"Debunking non-factual assumptions\\\"\\n\\nWe still have different beliefs that distort our sense of the importance of various parts of data, such as you want to stress hard benchmarks, and I think distorting $S_{X,II}$ is also necessary. I believe this is why we need other reviewers. I would like to leave the right and wrong to the others to decide and promise to weaken this impact on the final score.\\n\\nI need to emphasize that this very difference in belief does not sufficiently confuse me. My position is that this new setting is somehow invalid because of the combination of two arguments: the first one is the attention paid to different parts of data. I think we already agreed; the second one is that it is confusing when working with old benchmarks, especially if a better link prediction solves it; as I described earlier, what is the value added by evaluating the new benchmark that conveys the old message?\"}", "{\"title\": \"Follow up response\", \"comment\": \"We thank the reviewer for recognizing that we addressed some of their concern, and we hope this can be reflected in a score update. We proceed by addressing the left concerns.\\n\\n> Regarding the discussion of the union query, if there must be two missing links, then what is the difference between a conjunctive query and a disjunctive query? 2u and 2i will be exactly the same! I think that violates the very definition of the disjunctive query.\\n\\nLet\\u2019s break this into the following points:\\n\\n1) Overall, when answering a 2u union query, we\\u2019re interested in any link between the two. This is not the issue, however. The issue is that during ***evaluation*** in the old benchmarks, we have to ***deal with non-existing links***, that is, links that do not belong either to the train or test splits. Evaluating a query-answer pair that has one non-existing link and one missing link in their reasoning tree is problematic. ***This is why we filter-out non-existing links and only evaluate on answers that have two missing links in their reasoning tree.*** \\n\\n2) Why is the presence of non-existing links problematic for evaluating 2u queries?\\nModels are trained on existing links and non-existing links will have a very low probability to exist. Therefore, if at evaluation time we ask a model to perform well to non-existing links, the reported MRR will be lower than expected, as these links are out-of-distribution.\\nThis can be clearly seen in the old benchmarks, where 2u queries had a much lower MRR than 1p queries: on these benchmarks we are asking models to score triples that do not belong to the original KG.\\n\\n3) Therefore we filter out non-existing links. Such filtering is done in the same spirit of filtering out easy answers in Q2B/BetaE datasets. By filtering out some answers we do not say they are *invalid*, rather that ***we do not evaluate on them***. While easy answers are filtered-out because they can be directly retrieved without predictions, answers that have non-existing links in their reasoning tree are filtered-out to evaluate the models\\u2019 performance in performing the union ***between missing links only.***\\n\\n4) Even after the filtering, 2i and 2u are not the same, as ***still*** we\\u2019re evaluating two different operators. Logically, 2u is as hard as 1p, you should agree with this statement, because we just need the model to correctly predict one of the two missing links, while for 2i we still need both to hold. You can see that this was not reflected in the old benchmarks, but it is now reflected in the new benchmarks. 2u and 1p are roughly the same in terms of MRR and there is now a significant performance difference between 2u and 2i in the new benchmarks (Table 5). \\n\\n> **that queries with negation contain a \\u201cpositive\\u201d sub-query and a single negative link** is only true based on the Q2B/BetaE dataset, it is far from something that can be taken as granted and can be wrong in more advanced dataset, therefore the whole discussion is really questionable when the query doesn't meet this condition and the definition of ``full-inference'' becomes dubious\\n\\nWhich benchmarks are you referring to? The vast majority of CQA benchmarks that are commonly used, interpret negation queries in this way.\\n\\nWe furthermore remark (again) that **regardless of the query type**, the analysis that we did can be extended to other benchmarks. Applying it to negation queries is just an example to show that this can be carried out in different ways to different query types. The only condition that we impose is that every link in the reasoning tree has to be only in the test graph; such condition can be applied to *any* query structure. However, if one wants to bring our analysis to other benchmarks, one need to check the way query are generated to make sure this condition is satisfied.\\n\\nWe hope that we addressed all your concerns now, and we are happy to discuss it further if needed. We also encourage you to point to specific lines in the paper that might lead to a confusion.\"}", "{\"comment\": \"Thank you for providing the anonymous repository to reproduce the experimental results.\\n\\nHowever, I encountered some issues while trying to run your experiments. First, several necessary files are missing, including **dataset.py** and the **src/ directory**, as well as the **benchmark data**. After obtaining the necessary files and data from the QTO repository and your anonymous repository referenced in the main paper, I still faced a data loading issue, specifically: **Exception has occurred: UnpicklingError invalid load key, 'v'**.\", \"title\": \"Reproduce issues\"}", "{\"title\": \"Follow up response (part 2)\", \"comment\": \">Suggesting such a new benchmark sends readers important messages to outweigh this particular data distribution when optimizing their models.\\n\\nRather than outweigh a particular data distribution a model can learn to reason without relying on training data. Would you rather continue to report performances that are clearly inflated due to memorization but claiming generalization instead? Note that the hybrid-reasoner class is tiny w.r.t. all the other classical machine learning models for CQA.\\n\\n> I am worried that the measurement from this distribution might be biased towards certain subsets of triples/entities (at least they are 10% missing triples). \\n\\nWe already had such analysis on the new benchmark in Table C.1, which shows that there is no evident bias in the new benchmarks.\\n\\n> Then, my question is whether this new data distribution remains valid and effective in evaluating the CQA method's comprehensive capability (for logical connectives and variables).\\n\\nWe remark again that this is not a new data distribution. We are simply getting rid of the simplifying assumption that triples are independent. We could turn the question the other way around: do you have any evidence that assuming triples are independent is more realistic?\\n\\n>Should we outweigh such particular data distribution from a developing point of view? Also, following my understanding of the ratio of full-inference query-target pairs, my opinion is no. The reason is that the advancement of LLMs further reduces the hardness of knowledge harvesting. I conjecture that the ratio of missing links in KG will decay, so the ratio of full-inference pairs will also decay even faster.\\n\\nWe remark that presenting links to be inferred as \\u201cmissing links\\u201c in the sense that they could be known and people did not just put in sufficient effort, is a mistake. Links are missing not because people did not put enough effort (this happens, too), but because they cannot be known. In this context, they are not just \\\"missing links\\\" but rather \\\"unknown missing links\\\". These two things are not identical. We want to make inferences over unknown missing links. And we cannot assume that just more data in the future will solve the problem, this is wishful thinking.\\n\\n> it is still hard for people to select the most suitable model per query type with multiple benchmarks. Should they make decisions by averaging the one used before and the new one proposed here? If one is willing to choose a pure neural model, is it right for the model selection process to underweight the prediction of observed links?\", \"we_believe_you_agreed_that_averages_done_blindly_are_evil_and_are_distorting_the_perception_of_progress\": \")\\nAveraging across the two datasets is problematic, and we discourage it. However, both benchmarks can still be used: the authors should ***report a stratified analysis on the old benchmarks*** and an aggregate analysis on the new benchmarks. We do not see a problem here. As we already stated in our takeaway messages.\"}", "{\"title\": \"Response to \\\"You are assuming triple independence, why is this realistic? (I and II)\\\"\", \"comment\": \"Dear authors,\\n\\nThanks very much for your continuous engagement in the process that we make the question we discussed more precise. If you feel the goalpost was ever moved, the reason might be that you dodged or misinterpreted my questions from time to time. The title of your post \\\"You are assuming triple independence, why is this realistic?\\\" is a perfect example suggesting that you didn't listen to me when I put forward a series of notations.\\n\\nIn this round, I hope I fully understand the misunderstandings. I hope those questions can explain why there is such a radical disagreement between us. And I hope we can face those questions faithfully and let the community and chairs of higher levels judge. I prepare to state the questions here and explain my opinion afterward.\\n1. Is the ratio of missing edges in a KG large or small?\\n - In your opinion: \\\"In a given KG, the number of real missing edges is far greater than the number of seen edges.\\\"\\n2. Is the definition of $T_{X, I}, T_{X, II}, T_{X, III}$ and $S_{X, I}, S_{X, II}, S_{X, III}$ independent from sampling?\\n - In your opinion, this is dependent.\\n\\n## Q1. Is the ratio of missing edges in a KG large or small?\\nThis question concerns an axiom-level assumption to the knowledge graphs we handle. You cannot prove or disprove it.\\n**I believe the ratio is small** and will be smaller for the knowledge graphs. Here is why:\\n1. Industrial-level knowledge graphs and their embeddings already supported a vast number of applications. If a majority of knowledge cannot be predicted by existing edges and ML models, many applications we saw will not happen.\\n2. In the future, the ratio of missing knowledge will be smaller because\\n 1. LLMs that learn from large corpus support question-answering applications quite well with the assistance of existing knowledge graphs or knowledge bases. That means the training corpus of LLMs + existing KG or KB consists of a great amount of knowledge we need.\\n 2. Information extraction from the sources above (training corpus of LLMs + existing KG or KB) is easy, given the success of LLM in natural language understanding.\\n 3. Based on 1 and 2, I think the ratio of missing edges in KG will be small to satisfy people's real needs.\\n3. Your argument, as I quoted here, is problematic to me.\\n> Consider just the proportion of edges in the current FB, NELL or even WikiData, where one can find millions of entities, this implies the possibility to have thousands of billions of possible triples. But only a fraction are observed. So in the real world $|E_m| > |E_o|$.\\n\\nYou assume that there could be pair-wise connections in KG first. **This assumption reflects your belief**. But please see my arguments above, and they imply that if there is a certain amount of knowledge graph edges sufficient **to support people**, we can approach them with the technology by far, so your belief is not correct for **application purpose**. Of course, you can check whether there are pair-wise connections and play the endless game of relation enrichment to connect entities for logical completeness. I am very passive about how active this research direction can be.\\n\\n## Q2. Is the definition of $T_{X, I}, T_{X, II}, T_{X, III}$ and $S_{X, I}, S_{X, II}, S_{X, III}$ independent from sampling?\\nLet's define it with an algorithmic example.\\n\\nBy saying $T_{2p}$ is a set of **all** reasoning trees for 2p (2-path), I am suggesting that you can obtain $T_{2p}$ by iterating every entity in KG and conducting a DFS with depth of 2. Then you get a set $T_{2p}$. This set contains all possible reasoning trees because it is the largest already, no matter how you sample the data in your script. \\n\\nFor the query type $X$ other than 2p, we always keep $T_{X}$ the largest set, no matter how you sample them.\\n\\nBy saying $S_{X}$ is a set of **all** query-target pairs, I am suggesting that you can populate a query from each reasoning $t\\\\in T_{X}$ and $S_{X}$ contains all of them. So, $S_{X}$ is already the largest set of query-target pairs (because every query-target pair has a reasoning tree), no matter how you sample the benchmark.\\n\\nThen, each $T$ or $S$ is separated into three splits depending on whether the reasoning tree (or all reasoning trees in a query) is a subgraph of $E_o$ (Type I), $E_m$ (Type III), and otherwise (Type II). Those three splits are non-overlapping by definition.\\n\\nPlease note that for now, we don't need a specific sampler to define those three types. Your comment in the post, \\\"*You are assuming triple independence, why is this realistic? (II)*\\\" exactly reflects that you are unaware of this problem structure.\\n\\nThen, no matter how you sample the dataset, the dataset you constructed is only contained in $S_{X, III}$ but never $S_{X, II}$. Here is why I think your benchmark does not have sufficient coverage, which totally ignores the performance on $S_{X,II}$.\"}", "{\"title\": \"Follow up - Global answer to all reviewers\", \"comment\": \"Dear reviewers,\\n\\nWe uploaded a new version of the paper, where we completed all the experiments:\\n\\n**Negation queries**: We run the full analysis for all negation queries, ***confirming that even by just looking at the positive reasoning tree, the vast majority of query-answer pairs contain existing links*** thus pointing at the same issues we already analyzed for positive-only queries in the paper. We report these values in Table A.1 in Appendix A.1.\\n\\n**QTO on old benchmarks**: The additional experiments on the old benchmarks with QTO, table A.3, A.4, ***confirm that QTO aggregated performance is due to 1p queries and it drops for the portion of full-inference query-answer pairs, as any other solver***. Moreover, even when QTO is the SoTA on a certain query type, most of the time it is not so on the portion of full-inference query-answer pairs only, showing that improving performance on the partial-inference query-answer pairs does not necessarily result in improvements over the full-inference ones. \\n\\n**QTO on new benchmarks**: We completed the experiments for QTO on the new benchmarks, see Table 5. The experiments show that QTO have similar performance with CQD, a much simpler and older baseline, enforcing our claim for which the perception of progress in the field has been inflated due to the presence of a massive amount of partial-inference query-answer pairs in the old benchmarks.\"}", "{\"comment\": \">considering the limitations of the motivational statement of this paper (the comparison between inductive settings and the link leaks in transductive settings)\\n\\nWhich motivational statement are you referring to? We do not see the point, as transductive and inductive are two distinct settings that are evaluated separately.\\nWe remark that we do not propose a benchmark for the inductive setting, rather propose a benchmark on the transductive setting that truly evaluates reasoning performance of transductive models on performing complex query answering. \\n\\n> some missing discussions with some related work\\n\\nWe added a discussion of QTO in Appendix A.2, along with experiments of QTO for the old benchmarks, Tables A.3,A.4 and the new benchmark in Table 5. We remark that we do not consider FIT as it is equivalent to QTO on the query types we considered (see appendix G.2, and Table 5 of FIT). Could you provide specific references for the related works that are missing?\"}", "{\"comment\": \"We thank the reviewer for considering our paper interesting and praising our deep analysis. We proceed by answering their questions, believing all can be easily addressed.\\n\\n> almost all researches conducted on complex query answering in recent years included negative queries \\n \\nNote that, since queries with negation contain a \\u201cpositive\\u201d sub-query and a ***single*** negative link, our analysis can be easily carried over as the positive sub-query can be reduced to a simpler type in the presence of training links.\\nIn fact, in FB15k-237 and NELL995, for queries involving negation, we analyzed the presence of existing links in the non-negative reasoning tree of the (q,a) pairs, and we can reveal that 95.4% of 3in query-answer pairs and 98.4% of pin (q,a) pairs in FB15k-237 have existing links present in the non-negative part of their reasoning tree. We will provide such percentages for the rest of the queries involving negation in the revised version of the paper in Table A.1 in Appendix A.1\\n\\n>SOTA CQA models fail significantly on so-called full-inference pair is questionable, as it doesn't include recent models that are built by symbolic search algorithms, like QTO[1] and FIT[2], which use neural link predictors combined with searching algorithms and seems to bypass the challenges proposed by full-inference pair\\n\\nIn the revised version of the paper we will show that this is quite the opposite. Those hybrid models work great on the old benchmark, mainly because they\\u2019re constituted in the vast majority by partial-inference query-answer pairs, which QTO, FIT, and CQD-Hybrid exploit. In fact, when evaluating QTO on the portion of full-inference 2p queries of FB15k-237, the state-of-the-art remains ConE (table A.4), confirming how the perception of the progress in the field has been distorted due the the high presence of existing links. We do not consider FIT as it is equivalent to QTO on the query types we considered (see appendix G.2, and Table 5 of FIT).\\nWe are open to run more baselines if the reviewer has additional suggestions.\\n\\n> definition of union query just requires one link to hold in the graph, I do not see the necessity to do such filtering as Figure A.1 as it more resembles 2i query type after filtering.\\n\\n\\nWe do the filtering because some links are impossible.\\nEven if it\\u2019s true that only one link needs to hold in the graph, union queries should evaluate how good CQA models are in performing such a union between two ***missing*** links. However, if only one link is missing and the other does not exist at all, we are measuring something different, which gives a \\u2018\\u2019false\\u2019\\u2019 sense of hardness of union queries: 2u queries should be as hard as 1p, not harder!\\n\\nIn fact, 2u was reported to have an MRR much much lower than 1p, but this was not due to the hardness of the union itself, rather due to the presence of non-existing links in their reasoning tree. By filtering out such query-answer pairs we obtain performance that are similar to 1p.\"}", "{\"comment\": \"We kindly ask the reviewer to review our response and our changes in the revised submission. We completed all the experiments we presented in the previous response, showing that even the performance of QTO drop when evaluated on the full-inference query-answer pairs of the old benchmark (diagonal of Tables A.3, A.4), and on the new benchmark (Table 5), with its performance being similar to CQD, a much simpler and older baseline.\"}", "{\"comment\": \"Dear reviewers,\\n\\nThank you for your efforts reviewing this paper. Can you please check the authors' latest response and see if your concerns have been addressed? Please acknowledge you have read their responses. Thank you!\"}", "{\"summary\": \"This paper studies the complex query answering task on knowledge graph and questions whether the data in existing dataset is unqualified. To be specific, the author proposes to term those pairs of complex query & answers which the corresponding reasoning process can leverage some parts of the knowledge in the training graph as partial inference pair and thus evaluate existing CQA models on full-inference pairs. This paper conducts extensive experiments to showcase this observation and analysis on some certain query types like 2u.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The key observation of this paper is interesting. The partial-inference pair is prevailing in existing datasets and the paper shows that full-inference pair is empirically much harder than partial-inference pair and thus the reasoning ability of SOTA CQA models may be less powerful than our imagination.\\n\\n2. This paper's case study and deep analysis are praiseworthy. For example, the paper studies the query type with union and additionally finds that if we filter out such pairs that can be accessed by just one link, the performance of 2u will increase significantly, similar to that of the 1p query type.\", \"weaknesses\": \"1. Firstly, the discussion of the query type is constrained in this paper. Most dominantly, almost all researches conducted on complex query answering in recent years included negative queries yet this paper avoids that completely. Perhaps it's a drawback of their model design originating from the initial CQD paper, or perhaps the reasoning process defined in this paper fails in a negative query. Either way, it's problematic as the scope of the query type it investigated is strictly contained.\\n\\n2. The claim of SOTA CQA models fail significantly on so-called full-inference pair is questionable, as it doesn't include recent models that are built by symbolic search algorithms, like QTO[1] and FIT[2], which use neural link predictors combined with searching algorithms and seems to bypass the challenges proposed by full-inference pair. As the paper itself proposes a symbolic search method, the missing baselines in other symbolic search methods is questionable.\", \"questions\": \"The comparison of 2u-filter is dubious. As the definition of union query just requires one link to hold in the graph, I do not see the necessity to do such filtering as Figure A.1 as it more resembles 2i query type after filtering.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continued\", \"comment\": \"Whether $S_{X, II}$ is significant or not is determined by both graph topology and the ratio of $E_m$ and $E_o$ (in my belief $|E_m|$ is smaller, but I cannot change your belief so let the future readers decide it.). I want to note that this argument is already free from my previous rough estimation, so please provide further justification if you want to reject it. In addition, if one accepts my belief that $|E_m|$ is smaller and will be even smaller than $|E_o|$, I think it is just natural if two graphs $E_m$ and $E_o$ follow a mildly similar graph property, the counter-example can of course happen for example when $E_m$ is complete and $E_o$ is sparse, but the existence of mild graph property is again, my belief. I believe some empirical counting can be side evidence to justify the size of $S_{X, II}$. I will do that later.\"}", "{\"title\": \"Non sequitur?\", \"comment\": \"> This is a mental experiment. As you said, it will never be achieved.\\n\\nWe fail to see the value of this mental experiment that will never be achieved and follows some \\\"interesting\\\" non-classical logic. We comment more below.\\n\\n> You must have misunderstood my point. Of course, improving link prediction implies improving CQA can not be derived from the claim improving CQA implies improving link prediction. But my claim is not made by the abovementioned logic. [...] The claim improving link prediction implies improving CQA is a relaxed version of the sufficient condition above\\n\\nWe fail to understand what this made-up relaxation of a non-sufficient condition can imply rigorously. Please, ***let's stick to rigorous logical reasoning***, or bring concrete evidence when criticizing precise claims.\\n\\n> First, let's consider QTO, GNN-QE and CQD. For those models, (B) and (C) are almost satisfied by how they store their search stats and conduct the triangular norms\\n\\nThis is again factually wrong, ***there is no concrete evidence that B and C are \\\"almost satisfied***, but there is plenty of evidence of the opposite. In fact, if it were true, then QTO, GNN-QE and CQD would perform much better for simple queries such as 2p on the new benchmarks. Here you can have a reason why the new benchmarks are important: you can exactly disentangle A, from B and C.\\n\\n> If you didn't see any flaws here, it means s(D), as a function of s(A), say s(D) = f(s(A)), satisfies f(1) = 1 already. Also, f(0) = 0 is self-evident. So, given a snapshot of model checkpoints where more and more links are successfully predicted, it seems that f(x) is an increasing function up to a noise.\\n\\nWe see several flaws in this reasoning. You are projecting what you would like to see and achieve in a scenario that has no concrete basis to exist. The experimental evidence -- ***please re-run our experiments*** -- (and see also our answer to z2gA) says the opposite. Here you are claiming that the \\\"growth\\\" of any of these property is also the same for all of them. We wish this could be the case -- *hence all CQA would be easily solvable by link prediction and no more papers on complex reasoning are necessary* -- but rigorous empirical evidence claims the opposite.\\n\\n> For general neural models, (B) and (C) are only partially satisfied but fairly good, as suggested by your table A.3. The link predictors should work well for both prediction and collaboration with other components implementing functions in (B) and (C). The general trend, however, is still there and might be revealed by a regression plot. Is it also wishful thinking? I am curious about your opinion.\\n\\nYes it is wishful thinking unfortunately, as Table A.3 highlights the opposite, the ***MRR suddenly drops already for the `2p` column***. Where do you see in it that `(B) and (C) are only partially satisfied but fairly good`?\\n\\n> We still have different beliefs that distort our sense of the importance of various parts of data\\n\\nWe provided solid math evidence on the number of missing links being much larger on real-world KGs. Could you please comment on that? Do you really believe that among the 50+ billion possible triples of FB15k-237 there are less \\\"truly\\\" missing links than 200k observed ones?. Even if you (arbitrarily) decide that 49 billion triples are meaningless somehow, there will be more missing links in the remaining 1+ billion.\"}", "{\"summary\": \"The hard answers studied in complex logical queries are those that cannot be retrieved due to gaps in the knowledge graph (KG). This paper reclassifies these hard answers into two categories: full-inference and partial-inference The authors argue that partial inference can be reduced into simpler query types and partial inference occupies the majority of existing datasets, BetaE. They discover that current models perform poorly on full inference tasks and propose a new benchmark to highlight this issue. Additionally, they introduce a new model specifically designed to tackle partial inference answers by explicitly retrieving existing links from the training knowledge graph.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper find a interesting weakness of existing CQA dataset and propose a useful method and benchmark.\\n2. This paper is well written and easy to follow.\", \"weaknesses\": \"1. The baselines lack of the symbolic methods like QTO and FIT, which are the mainstream of CQA methods. The used CQD is a old symbolic method.\\n2. BetaE have three KGs but only two KGs are presented in the paper.\\n3. The argument of 'reduced to easier types' is weird because query types with less constraint will be easy to solved than original query types, for example the performance of 3i is good than 2i. I suggest the authors use a preciser expression.\\n4. I disagree your arguments that your proposed CQD-hybrid is the first an hybrid solver. QTO and FIT use the information from observed KG and trained link predictor to construct the matrix and can use the hybrid information of train edges and pre-training embeddings.\\n5. Because of Weak 4, I am curious that the performance of symbolic method QTO and FIT as they already have the hybrid information.\", \"questions\": \"1. Do you vary your argument in train queries? I am wondering the phenomenon that existed CQA models fails is caused by the train datasets have too many partial inference answers. Thus I am curious about the performance of symbolic search methods where these methods don not use queries to train.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We believe the main concerns of the reviewer can be fully addressed, as discussed in the general answer to all reviewers and in the single comments below.\\n\\n>The dataset discussed only covers the query types in [1], which is outdated today. In 2024, several datasets covering far more complex queries are also proposed, including [2] for queries with logical negation, [3] for cyclic queries, and [4] for multi-answer variables.\\n\\nWe agree that more sophisticated queries are possible but i) the benchmarks we analyze are still widely used (see our global answer as well) and therefore our message to the community using them is valid and ii) more sophisticated benchmarks such as those using negation are still based on the same principle, and can be affected in the same way.\", \"see_our_global_answer_above\": \"training links are also present in negation queries. We will update the paper with full results. For example, in 95.4% of 3in query-answer pairs and 98.4% of pin in FB15k-237 there are training links in the non-negative part of their reasoning tree. We report these values in Table A.1 in Appendix A.1.\\n\\n\\n\\nThe bottom line is that regardless of how intricated a query structure is, the hardness of a query-answer pair depends on the number of training links that can be found in its reasoning tree\\n\\n>For the ``fair'' split of triples, the answer is also unaware of existing studies on temporal CQA [5]\\n\\n We remark we are not solving temporal CQA. Instead, we use the temporal information to better sample classical triples (no temporal information is retained). \\n\\nMoreover, in [5] they use kg splits obtained by sampling uniformly at random the knowledge graph , while we create new splits s.t. the test only contains future triples (w.r.t the triples contained in train and valid). \\n\\n> baselines discussed are also no later than the year 2023\\n\\nThat\\u2019s not true, as ULTRA-Query was published on Neurips 2024 (yet to be presented!). We acknowledge that QTO was not present, and we added to our revised paper. See our global answer where we show that also the performance of QTO depends on the data leak. In fact, if we perform our stratified analysis of QTO on FB15k-237,\\n\\n\\n| Query type | all | 1p | 2p | 3p | 2i | 3i | 1p2i | 2i1p | 2u | 2u1p |\\n|------------|---------|-------|------|------|------|------|------|------|------|------|\\n| 1p | 46.7 | 46.7 | - | - | - | - | - | - | - | - |\\n| 2p | 16.6 | 16.7 | 4.0 | - | - | - | - | - | - | - |\\n| 3p | 15.6 | 15.8 | 4.5 | 5.0 | - | - | - | - | - | - |\\n| 2i | 39.7 | 40.8 | - | - | 5.7 | - | - | - | - | - |\\n| 3i | 54.6 | 56.4 | - | - | 15.4 | 5.4 | - | - | - | - |\\n| 1p2i | 33.8 | 35.9 | 15.8 | - | 6.2 | - | 7.3 | - | - | - |\\n| 2i1p | 24.7 | 25.3 | 10.3 | - | 8.6 | - | - | 8.1 | - | - |\\n| 2u | 37.0 | - | - | - | - | - | - | - | 37.0 | - |\\n| 2u1p | 12.2 | 11.6 | - | - | - | - | - | - | 35.3 | 11.3 |\\n\\n\\n\\n\\nit provides evidence that the aggregate performance reported in the papers is essentialy due to the ~98% of queries that can be reduced to 1p (see Table 1). For the full stratified analysis please refer to table A.3 and A.4.\\n>the proposed CQD-hybrid method is fundamentally identical to the QTO [6] proposed in ICML'23. CQD-hybrid in this paper, is not new to the community because it is practiced in QTO [6] and later followed by FIT [3]. It hardly says why these findings are essential.\\n\\nWe thank the reviewer for pointing out the existence of such hybrid solvers. We included it in our revision (page 8) and highlight how QTO performance is inflated (as CQD-Hybrid and all other solvers) by the current benchmarks. \\n\\nWe remark that , CQD-Hybrid is definitely not identical to QTO. The only point in common between the two methods is setting a score=1 to the training links. However, QTO is much more sophisticated than CQD-Hybrid, as they 1) calibrate the scores with a heuristics, 2) store a matrix $|V|\\\\times|V|$ for each relation containing the score for every possible triples, 3) have a forward/backward mechanism in the reasoning. \\nCQD-Hybrid only set the score for the existing triples =1, proving that a pre-trained link predictor and memorization of the training triples *alone* are enough to obtain SoTA performance on the old benchmarks. We hence need a new benchmark where we cannot leverage existing links to truly evaluate a model\\u2019s reasoning capabilities and advance the field of complex query answering.\"}", "{\"title\": \"Files re-upload\", \"comment\": \"Thank you very much for the effort in trying to reproduce our results. The error might have been related to the .pkl files in the repository \\u2013 hence, we re-uploaded them here: https://github.com/anonsubmission7818/test-QTO/releases/tag/v.1.0.1. Also, please note that you have to recompute the adjacency matrix using our pre-trained models for a fair comparison. To ensure that, it\\u2019s enough to remove any precomputed .pt files in QTO-main (\\u201crm -f QTO-main/*.pt\\u201d), and the adjacency matrices will be recomputed automatically.\\n\\nPlease let us know if this solves the issue! In case of additional problems, please let us know, and please try to provide as many details about the issue as possible, such as the terminal commands and stack traces, so that we can resolve it in a timely manner.\"}", "{\"comment\": \"Thanks for your reply. However, considering the limitations of the motivational statement of this paper (the comparison between inductive settings and the link leaks in transductive settings) and some missing discussions with some related work, I think this paper needs a major revision, so I tend to maintain the score.\"}", "{\"metareview\": \"This paper shows that current benchmarks for CQA are not really complex, and that in these benchmarks most queries (up to 98% for some query types) can be reduced to simpler problems, e.g., link prediction, where only one link needs to be predicted. The performance of state-of-the-art CQA models drops significantly when such models are evaluated on queries that cannot be reduced to easier types. Thus, the results reported on existing CQA benchmarks might distort researchers\\u2019 perception of progress in this field. This paper then proposes a set of more challenging benchmarks, composed of queries that require models to reason over multiple hops and better reflect the construction of real-world KGs, and show that there is a lot of room to further improve current CQA methods on the new benchmarks.\", \"strengths\": \"Reviewers generally agree that this paper provides an in-depth study and analysis of existing benchmarks and offers interesting observations.\", \"weaknesses\": \"Reviewers actively engaged in discussions with the authors, and many comments raised in the original reviews were addressed through the rebuttal period. Here is a summary of the unresolved (or partially resolved) issues after rebuttal:\\n\\n1. The distribution of the proposed benchmark by Reviewer z9wR (relatedly, the discussion around query types raised by Reviewer fPSh), and how this impacts the proposed benchmark\\u2019s usefulness (or, why it is necessary) still remain unclear. \\n\\nWhile I really appreciate the authors\\u2019 efforts in actively responding to reviewers\\u2019 comments, there are still issues that remain unresolved after the discussion period. In addition, some issues that did get resolved in this process will need to be clarified in the revised version. I believe the paper could benefit from another round of revision which integrates the discussions during the rebuttal period, particularly addressing the necessity of the proposed benchmark.\", \"additional_comments_on_reviewer_discussion\": \"(Partially) Addressed weaknesses:\\n\\nMost reviews mentioned the lack of baselines such as QTO and FIT originally, which the authors added during the rebuttal period. However, the newly added results of QTO seem problematic to Reviewer z2gA, who even tried to reproduce the results but encountered some issues. I did not consider this to be a major issue in making my decision, but do encourage the authors to consider doing more comprehensive experiments using these baselines in the revised version.\"}", "{\"title\": \"Follow up response\", \"comment\": \"We first remark that, as specified in the Appendix D, we use the same link predictor for CQD, CQD-Hybrid and QTO for a fair comparison. Then, we simply run the code as available in the QTO repository. We provide the scripts and models to re-run our experiments in the anonymous repository here: https://github.com/anonsubmission7818/test-QTO. *We encourage you to reproduce our results and let us know if you cannot*.\\n\\nAnalyzing the results, we stress that the drop in performance on the ***new benchmarks*** is expected. In fact, for full-inference query-answer pairs, hybrid solvers as QTO (but also CQD-Hybrid) have no advantage over other baselines since no training triple is in their reasoning tree.\\n\\nInstead, if you refer to the ***old benchmarks***, for partial-inference queries, QTO is the SoTA in most cases, as shown in non-diagonal cells of Table A.3, A.4. The performance drop you might refer to comes from using the pre-trained link predictor we used in our experiments.\\n\\nAs you mention that we addressed all your previous concerns, we now hope that no one is missing and that the score will be updated accordingly. Please let us know if something is still not clear.\"}", "{\"title\": \"Follow up response (part 1)\", \"comment\": \"We are happy that you are now agreeing with us on all points we make in the paper but one, we believe we have the perfect mathematical argument to solve it!\\n\\n> First, it is very important to understand why full-inference query-target pairs consist of a minor part of the query. Not surprisingly, this is the direct result of the fact that missing links (valid or test triples) are only a small ratio of all links in the investigated knowledge graph splits\", \"we_believe_that_you_fell_prey_of_a_logical_fallacy\": \"***The fact that the current scripts generate queries in this way does not imply this is the best way nor the only possible way to do it.***\\n\\nFirst, consider that ***the number of missing links present in the benchmarks is arbitrary***, as it depends on the proportion of test triples selected in the original datasets (which were designed for link prediction many years ago). No one dictates that the test split should be 15%. Note that this is just a way to **simulate unknown links** (see our answer below, regarding missingness as unknowns).\\n\\nAs this is arbitrary, try to perform this mental experiment: we could change this proportion, and you would see all the MRR results drop because more queries would be full-inference in the old benchmarks! In Machine learning, instead, changing the size of the test set should not change dramatically performance, because we assume i.i.d..\\nAgain, these dataset splits were designed for 1p queries only, and used as-is, without much critical thinking for CQA.\\n\\nSecond, ***the number of missing links does not necessarily imply that the number of full-inference queries is low***. In fact, we are able to get more than 50000 full-inference query instances for each type easily (*note we are not changing the train/val/test splits!*). We simply do not sample queries in the old way (that we discuss next).\\n\\n> Why are there roughly 98% percent of partial-inference query target pairs for 2p? The answer is simply because the missing links (valid+test links) comprise about 15%. Then, the probability of two links being missing could be roughly estimated as 15% * 15% = 2.25%. I think this estimation is valid since the ratio of non-reducible 2p in FB15k237 is 1.9% and 2.4% in NELL955\\n\\nThe computation correctly computes the fraction of triples in the old benchmarks, but this is not representative of the \\u201ctrue\\u201d fraction of all possible queries. In fact, we can easily sample 50000 full-inference queries. Your wrong deduction assumes that one has to sample triples independently, but that is just an *assumption* and a non-realistic one. ***Triples are never independent in the real world***. It is however the simplest assumption one can make, and we conjecture it is in the old benchmarks because it was the simplest to implement.\\n\\nWe not only get rid of that assumption, but also experiment with sampling queries according to a temporal pattern, which is more realistic.\\n\\n\\n> The difficulty of such query-target pairs, evident by the low MRR, is the result of selecting difficult samples from the original datasets, and the difficult sample is intuitively defined by how many edges in the queries are missing\\n\\nThis is true, we want difficult samples, and we want to highlight that there is a correlation between the number of missing edges and low MRR, something that went unnoticed by the community and that you also seemed not to notice before our last message. We remark that ***difficult queries are not less frequent*** in the overall distribution.\\n\\n> This means the full-inference query-target pairs become the significant ONLY when the ratio of missing links grows significantly. The ratio decays of full-inference queries will be less than 1% in many settings.\\n\\nAgain, the fact that the current ratio is 15% matters if you sample triples independently. We are questioning this way to sample triples, which biases the construction of the dataset towards easier partial-inference queries.\\nAs argued above, we are not obliged to sample triples in this way, nor to keep the test split to 15% (we are keeping this for keeping the performance of 1p - link prediction).\\n\\n\\n> To me, the new benchmark follows another distribution in the probabilistic space of all possible query-target pairs. The new distribution is featured by all the reasoning edges being missing and, of course, has a smaller support than the previous data distribution. How small is the support of the new distribution? My previous estimation might provide some straightforward but rough intuition suggesting it is small, with a rough ratio of about where the is the ratio of the missing edges, is the number of the predicates in a specific query type.\\n\\nYour estimation of the support is wrong, as you are sampling triples independently. The real support for full-inference queries is much larger, as we easily demonstrated in practice: we could flawlessly sample 50k full-inference query instances for all query types (and we can sample much more!)\"}", "{\"title\": \"How can you debunk with logical fallacy?\", \"comment\": \"> You mentioned QTO, GNN-QE and CQD first and now you are providing some ex-post explanation only for QTO. This is goalpost shifting.\\n\\nI think QTO is a valid example that made my arguments clear due to my limited time. But it seems that I failed :-). But thanks for your response and for being OKAY with my justification with QTO. Then, let me express my point why CQD and GNN-QE are also satisfied.\\n\\n1. For (B), the link predictor part. I say that QTO uses the adjacency matrix multiplication, and the link predictor is perfect if the **adjacency matrix produced by scores** is perfect, and added to that, **set projection modeled by adjacency matrix multiplication**. Now, you should see that the link predictor plays a crucial role with the combined (i) and (ii), which means the **perfect set projection**.\\n - Now you should look at CQD, I suppose you used CQD-beam, if you can just make your beam size larger, the argument is the same as QTO.\\n - If you look at GNN-QE, you should be aware that the the role of the backbone NBFNet plays the role exactly as **set projection**, which is made by a perfect link predictor.\\n\\n2. For (C), both use triangle norm.\\n\\n> Furthermore, you are confusing formal reasoning with probabilistic reasoning. [...] You are confusing an adjacency matrix with a probability tensor (that is definitely non-sparse, unless you enforce constraints as in [A])\\n\\nWhy do the link predictor produce a dense adjacency matrix? This very accurate phrase is named by its functionality of predicting links. A formal reasoner, if it predicts links, can also be named a link predictor. I am not talking about KG embedding; please read carefully. What do you mean by a paper about KG embedding? Don't **distort my words**.\\n\\nWhen a link predictor is perfect, the adjacency can be binary. I hope you can see that those two concepts (formal reasoning with probabilistic reasoning) are the same under this very circumstances, and this very discussion is also linked to the mental experiment, and also how you construct your hard dataset.\\n\\n> it is supporting beliefs with solid evidence.\\n\\nI am very cautious with your example. Your reasoning is flawed; I can only see what you are doing is like (1) the number of total candidates is large, but that is okay. (2) Let's assume there is a constant ratio of the missing links; if 1/49 does not seem convincing enough, let's try 1/10000. \\n\\nDo you think (1) and (2) is valid?\\n\\nBut wait, why is the ratio a constant? Have you ever thought about that critically?\\n\\n----\\n\\nThis is a friendly reminder. In your revised manuscript, the equation (1) about the definition of MRR is wrong; it seems to be a mean rank. :-)\"}", "{\"title\": \"continued responses\", \"comment\": \"## Why do I feel this benchmark is not necessary?\\n\\nAnother point you made in your feedback is that, even though only the proposed benchmark is insufficient, it is still meaningful for the readers to see the performance on a particular set of hard samples in $S_{X, III}$ in the so-called stratified analysis.\\n\\nHowever, this suggestion is actually not necessary for the following reasons:\\n- The hardness of $S_{X, III}$ is caused by the generalization gap of the link prediction. Let's conduct one mental experiment where the link prediction is perfect. Then, the performance drop due to the of the link prediction is gone, which is exactly how the proposed stratified analysis is constructed. The ONLY thing that the stratified analysis can reveal is how other logical elements performed on different parts of the data $S_{X, II}$ or $S_{X, III}$, which was identified as the confounder in your previous response and is not significant in previous practice. This mental experiment suggested that one could almost diminish the gap between $S_{X,II}$ and $S_{X,III}$ queries by directly improving the link prediction. \\n- Then, let's consider what the practitioners could do if they see the old averaged scores and new stratified analysis. When they see the old averaged score, what they will try to improve is, as you argued, almost just link prediction. But interestingly, the improved link prediction will also close the gap between $S_{X, III}$ and $S_{X,II}$ performances. Interestingly, the problem you raised can be solved by optimizing the old benchmark, with still sufficient attention on logical elements on $S_{X, II}$. On the other hand, when they see a stratified analysis, they think we might sacrifice some performance on dumb samples in $S_{X,II}$ criticized by you, but win some new points from the hard samples in $S_{X,III}$ encouraged by you. However, this practice is actually encouraging in that the model overfits a tiny portion of datasets, which could be possibly problematic as one loses attention to logical elements that make the queries **really logically complex** on a broad range of data in $S_{X,II}$, as I demonstrated before.\"}", "{\"title\": \"Ignoring math?\", \"comment\": \"Please revise carefully the evidence we provided above with the claims about MRR decrease if you just increase the ratio of artificial missing to 30%. That suffices to say that with the current benchmarks, performance will significantly change for CQA, implying the importance of how many triples one selects, something more than just \\\"believing they can be smaller\\\" as you are doing.\"}", "{\"comment\": \"> the existing benchmark is problematic\\\" is questionable and somehow self-contradictory with this paper's philosophy of choosing outdated simple queries\\n\\n\\nWe state again that the issues we find in simple queries are carried over more complex queries. See our analysis for negative queries. What matters is the way the queries are created.\\n\\n> scores on the previous benchmarks [1-5] are far from saturated because the average score is still less than 50 out of 100. \\n\\nWe did not claim scores are saturated, we claim that scores are inflated, in the sense that we are measuring the ability of a model to memorize training triples, not to reason at test time.\\nAnd it is definitely possible that models are not able to exactly memorize the whole training datasets (hybrid solvers can easily). This does not imply that they are not memorizing: they do! See our Tables.\\n\\n>Optimizing empirical models on previous benchmarks will also benefit the performance of the proposed \\\"hard\\\" benchmark.\\n\\n This is not necessarily true. As shown in Table 2, and later on the new benchmark in Table 5, more sophisticated baselines not always perform better on the full-inference queries.\\nEven QTO fails short on the new benchmarks. For example, for FB15k-237+H :\\n\\n| Model | 1p | 2p | 3p | 2i | 3i | 1p2i | 2i1p | 2u | 2u1p |\\n|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|\\n| GNN-QE | 42.8 | **6.5** | **4.2** | 10.3 | 10.3 | 4.6 | 7.8 | 36.9 | 7.2 |\\n| ULTRAQ | 40.6 | 5.5 | 3.7 | 8.2 | 7.9 | 4.1 | 7.9 | 33.8 | 5.1 |\\n| CQD | **46.7** | 6.3 | 2.7 | **18.4** | **16.2** | **5.6** | **9.4** | 34.9 | 7.3 |\\n| ConE | 41.8 | 5.7 | 3.9 | 16.8 | 13.9 | 4.0 | 6.6 | 25.9 | 5.3 |\\n| QTO | **46.7** | 5.9 | 3.5 | 13.5 | 11.8 | 4.7 | 8.8 | **37.3** | **7.4** |\\n\\nFor additional results on the new benchmark please see Table 5.\\n \\n\\n>Although the previous benchmark consists of too many observed triples, as shown in this paper, it can also be reasonable by arguing that the train graph consists of a sufficiently large portion of knowledge that users are interested in.\\n\\nThis is true, and we argued for it by presenting a hybrid solver. The issue with the current benchmarks however is that: i) 98% of queries reduce to 1p only, which is unrealistic in the real-world and is a clear artifact of the creation of the benchmark 2) people do not realize that the performance they are reporting is essentially link prediction performance. We not only highlight these issues, but propose a new benchmark that is much more challenging and designed to avoid conflating memorization with reasoning.\"}", "{\"title\": \"You are assuming triple independence, why is this realistic? (II)\", \"comment\": \"> When $|E_m| < |E_o|$, it is natural that $|T_{X, III}| << |T_{X, II}|$ and then $|S_{X,III}| << |S_{X,II}|$ [...] This fact does not affect how you sample the data. It is about the range of data you can sample from.\\n\\n\\nYou do not provide any rigorous reason why this should happened and unfortunately you cannot provide a \\u201cnatural\\u201d derivation here unless you i) fix a starting ratio and ii) assume a sampling process.\\n ***The numerosity of query-answer pairs in a benchmark cannot be detached from the way you sample them***. You never rebutted on the fact that ***assuming triples to be independent is a very simplistic way to model the real world***. This is needed to reach your conclusions. Otherwise, given fixed ratio, we can assume complex queries are distributed differently, yielding to another bias in the type numerosity.\\n\\nThis is basic probability. Assuming two random variables are independent induces a certain joint distribution you are sampling from (the simplest ever). ***If we do not assume independence, we can have that, for instance, if I sample a first missing link, the probability of sampling a second one given (conditioning) the first is much higher than sampling a known link***. This implies that there are relationship in the joint distribution you cannot break by independence, you cannot forget the conditioning part. And it is a more real-world assumption: ***if you do not have information about the first link, why would you have information about a link that depends on it (conditioning)?***\\n\\n> Please be aware that the discussion above holds regardless of the sampling algorithm. \\n\\nAgain, ***this is factually wrong***. We hope we clarified this point for the third time, with an argument from basic probability.\\n\\nLastly, we remark that it is easy to see why the benchmarks are skewed and inflating results, with the current sampling process (which implies triple independence). Do this mental experiment: increase the current ratio of artificially missing edges from 15% to 30%. This scenario still falls under your misunderstood condition that $|E_m| < |E_o|$. Given the current triple independence assumption, suddenly the number of easy query-answer pairs would drop from 98% to 80%, and current average MRR performance for CQA as reported in papers could drop up to 10-15 points, while the single link prediction one will be remaining stable (up to noise). ***This clearly highlights how the old benchmarks were not designed to measure the spectrum of CQA in mind and why we need more benchmarks***.\"}", "{\"title\": \"Follow up response (part 1)\", \"comment\": \"> I acknowledge the efforts during the rebuttal period to add new baseline models (QTO) and new data (queries with logical negation). The empirical results obtained are certainly of high quality and with great detail. I also appreciate the author's efforts to populate new and hard benchmarks (C3).\\n\\nWe are happy **we answered most of your previous concerns and requests regarding experiments, types of queries used, baselines and writing**. We hope this can be reflected in a score change, as only one point is left open, concerning the \\u201cmotivation\\u201d. We will address this below as we believe there is a clear misconception here that we can easily disentangle.\\n\\nIt is easier if we start from your last comment.\\n\\n> the argument that because 98% of queries reduce to 1p leads to a clear artifact also becomes questionable. At least, this argument does not apply to the neural models without explicit memorization and boolean logical calculus because the reduction is not valid.\", \"our_claim_is_based_on_just_statistics_and_we_think_you_will_agree\": \"98% of the queries of old benchmarks share a \\u201csyntactic\\u201d property, all their links but one are present in the training set. Even if you disagree on calling them \\u201creducible to 1p\\u201d, you have to agree on the following points:\\n\\nA) this is a statistical property of the query distribution, we just counted number of links\\n\\nB) having 98%+ of this \\u201ctype\\u201d of queries means that the overall performance \\u2013 a simple arithmetic mean \\u2013 will overcount the performance of this type more than other types\\n\\nC) this \\u201ctype\\u201d (column \\u201c1p\\u201d in Tables 2, A.3, A.4) is incredibly **easier for all SOTA methods, regardless they are hybrid solvers or not**, than the \\u201cother types\\u201d which are much harder (lower MRR) as empirical evidence shows (other columns of the previous Tables) \\n\\nD) the fact that \\u201cother types\\u201d are harder is not a distorted statistic coming from evaluation on few triples, because as we increase the number of queries (for which we need the new benchmarks) the MRR stays very low \\n\\nAs such, from B+C+D it follows logically that the current benchmarks are skewed towards a certain \\u201ctype\\u201d of queries that is inherently easier than the rest of the queries, and as we are computing arithmetic means. This inflates performance as the easier type is overrepresented. This reasoning is solidly based on numbers and we are not pushing any additional interpretation over this \\u201ctype\\u201d. **This is enough to tell the community to be concerned with the current benchmarks** as we are measuring an aggregate performance over an overrepresented type.\\n\\n> Firstly, it is too reckless to simplify reasoning as predicting new triples (inference-based).\", \"this_is_a_crucial_misconception\": \"We never claim one should simplify reasoning to predict *only* full-inference query. However, we need a full-inference only dataset to check point D above, and to thus set a more challenging bar. We elaborate more next.\\n\\n> This reduction is valid only when the logical calculus is conducted under boolean truth values and exhaustive search. It is NOTvalid under a more realistic and machine-learning scenario, which is also suggested in Table A.3.\\n\\n\\nWe, in fact, say we do not want to discard the old benchmarks, but one has to use i) hybrid reasoners on them (**takeaway #2**) and ii) a stratified analysis on it (**takeaway #1**). Reporting aggregated means gives a misleading sense of performance. We agree with you that **reasoning in the real world needs hybrid reasoners**, but even with them, not doing a stratified analysis makes all performance collapse to the most represented, \\u201ceasy type\\u201d only.\"}", "{\"comment\": \"Dear Authors:\\n Thank you for making the rebuttals and I recognize your effort in adding QTO as one of your baselines to make the experiments more extensive.\\n However, I think the other concern is still valid. \\n\\n> We do the filtering because some links are impossible. Even if it\\u2019s true that only one link needs to hold in the graph, union queries should evaluate how good CQA models are in performing such a union between two missing links. However, if only one link is missing and the other does not exist at all, we are measuring something different, which gives a \\u2018\\u2019false\\u2019\\u2019 sense of hardness of union queries: 2u queries should be as hard as 1p, not harder!\\n \\nRegarding the discussion of the union query, if there must be two missing links, then what is the difference between a conjunctive query and a disjunctive query? 2u and 2i will be exactly the same! I think that violates the very definition of the disjunctive query.\\n\\n> Note that, since queries with negation contain a \\u201cpositive\\u201d sub-query and a single negative link, our analysis can be easily carried over as the positive sub-query can be reduced to a simpler type in the presence of training links. \\n\\n``that queries with negation contain a \\u201cpositive\\u201d sub-query and a single negative link`` is only true based on the Q2B/BetaE dataset, it is far from something that can be taken as granted and can be wrong in more advanced dataset, therefore the whole discussion is really questionable when the query doesn't meet this condition and the definition of ``full-inference'' becomes dubious.\"}", "{\"title\": \"Other concerns\", \"comment\": \"Thank you for your hard work in addressing my concerns. However, the experiments over QTO are strange, and the performance is even worse than ConE and CQD in certain query types. It does not make sense that the symbolic method has such poor performance.\"}", "{\"comment\": \"> Now, you should see that the link predictor plays a crucial role with the combined (i) and (ii), which means the perfect set projection.\\n\\nyou are assuming you have perfect B and C, which is not true. You started by saying you wrote a mistake, now you are saying again that it is a perfect reasoning. It is not. And as we said above, ***the proof is in the pudding: the empirical scores are supporting our claim***. In fact...\\n\\n> Now you should look at CQD, I suppose you used CQD-beam, if you can just make your beam size larger, the argument is the same as QTO.\\n\\n... the simple CQD-beam is better than QTO in many scenarios, see our tables and this happens with having the same link predictor. Can you comment on that? Why is QTO worse? No need to increase the beam size for CQD. Hence, the B and C are not perfect in QTO. \\n\\n> If you look at GNN-QE, you should be aware that the the role of the backbone NBFNet plays the role exactly as set projection, which is made by a perfect link predictor.\\n\\nalso ***the implementation in GNN-QE is not realizing perfect logical (or probabilistic) reasoning***. The backbone can not implement Dan's algorithm. No need to debate this. In any case, you are building now a wishful argument to defend your stance that link prediction is sufficient and that we do not need additional benchmarks. ***But by the same line of reasoning, you would not need the old CQA benchmarks, and just link prediction benchmarks.*** \\n\\n> When a link predictor is perfect, the adjacency can be binary.\\n\\nThis is wishful thinking, mistaking for \\\"perfection\\\" something that will never happen in ML, and something we do not want in probabilistic ML. There is no easy way to learn -- in our current differentiable pipelines -- a perfect 0 and 1, unless we manually crop/clip the probability values ex-post.\\n\\n> Do you think (1) and (2) is valid?\\n\\nNo, as you are setting the ratio based on the observed triples we have and on the test split ratio people arbitrarily set for the old benchmarks, which were defined for link prediction, and not CQA. ***We state this again, believing that link prediction alone is sufficient for CQA is misleading and factually not true.***\\n\\n> But wait, why is the ratio a constant? Have you ever thought about that critically?\", \"the_ratio_is_a_constant\": \"a quotient between two scalars. An Unknown constant. Your flawed assumption is that this is known and based on the artificial split.\\n\\n> This is a friendly reminder. In your revised manuscript, the equation (1) about the definition of MRR is wrong; it seems to be a mean rank. :-)\\n\\nThanks for finally spotting it (why a reminder?), we can easily fix it.\"}", "{\"title\": \"Thanks for your reply.\", \"comment\": \"Dear authors,\\n\\nThanks for your point-to-point response. I would also like to engage more in the crucial discussion and skip some wording issues here and there. To summarize, I think we have already reached an agreement on the following observation:\\n\\n- The mean scores for almost all query types from existing benchmarks statistically **bury** the performance of full-inference query-target samples.\\n\\nAnd also, the following two arguments:\\n\\n- CQA is more than link memorization/prediction; it also includes the treatment of logical connectives and variables.\\n- full-inference query-target pairs performed significantly worse than the average performance.\\n\\nOur major disagreement is rooted in how we understand such observations/arguments and what the implications are.\\n\\n- First, it is very important to understand why full-inference query-target pairs consist of a minor part of the query. Not surprisingly, this is the direct result of the fact that missing links (valid or test triples) are only a small ratio of all links in the investigated knowledge graph splits. \\n - Why are there roughly 98% percent of partial-inference query target pairs for 2p? The answer is simply because the missing links (valid+test links) comprise about 15%. Then, the probability of two links being missing could be roughly estimated as 15% * 15% = 2.25%. I think this estimation is valid since the ratio of non-reducible 2p in FB15k237 is 1.9% and 2.4% in NELL955.\\n - This means the full-inference query-target pairs become the significant **ONLY** when the ratio of missing links grows significantly. The ratio decays of full-inference queries will be less than 1% in many settings.\\n\\nIf my understanding of the nature of the full-inference pair is correct, then several concerns prevent me from accepting the proposed settings as being valid. \\n1. **Are the full-inference query-target pairs truly complex?** No, they do not relate to the ``complex'' logical structure and do not align with the original goal of studying complex logical queries. Although this concept replacement makes the title, and other parts such as intro, section 6, and conclusion more eye-catching, it does not change the nature of sample manipulation. The difficulty of such query-target pairs, evident by the low MRR, is the result of selecting difficult samples from the original datasets, and the difficult sample is intuitively defined by how many edges in the queries are missing.\\n2. To me, the new benchmark follows another distribution in the probabilistic space of all possible query-target pairs. The new distribution is featured by all the reasoning edges being missing and, of course, has a smaller support than the previous data distribution. How small is the support of the new distribution? My previous estimation might provide some straightforward but rough intuition suggesting it is small, with a rough ratio of about $x^k$ where the $x\\\\in[0, 1]$ is the ratio of the missing edges, $k$ is the number of the predicates in a specific query type.\\n3. Suggesting such a new benchmark sends readers important messages to pay more attention to this particular data distribution when optimizing their models. I am worried that the measurement from this distribution might be biased towards certain subsets of triples/entities (at least they are 10% missing triples). Then, **my question is whether this new data distribution remains valid and effective in evaluating the CQA method's comprehensive capability (for logical connectives and variables)**. From the description in the paper and the discussion before, I cannot see the answer.\\n5. Should we assign more attention to such particular data distribution from a developing point of view? Also, following my understanding of the ratio of full-inference query-target pairs, my opinion is no. The reason is that the advancement of LLMs further reduces the hardness of knowledge harvesting. I conjecture that the ratio of missing links in KG will decay, so the ratio of full-inference pairs will also decay even faster.\\n\\nI still acknowledge the authors' job of identifying the more difficult part in the space of query-target pairs for a given query type. Even though we separate this part of the samples from the averaging scores, it is still hard for people to select the most suitable model per query type with multiple benchmarks. Should they make decisions by averaging the one used before and the new one proposed here? If one is willing to choose a pure neural model, is it right for the model selection process to underweight the prediction of observed links?\\n\\nI am looking forward to your reply.\\n\\nBest\"}", "{\"title\": \"You are assuming triple independence, why is this realistic? (I)\", \"comment\": \"> With all due respect, I will try to make my response match the seriousness of yours :-).\\n\\n***We believe that scientific debate should be based on rigorous logic and facts and the tone of the conversation should be polite***. We are trying to steer the conversation towards this direction, only partially succeeding so far. And trying to navigate the continuous goalpost shifting. We are glad that reviews and discussion are public.\\n\\n> The full-inference query-target pairs / truly complex queries / irreducible query-answer pairs in your terminology are essentially hard samples. The reasons are (1) the concept of query reduction does not apply to neural models and (2) treating hard samples as logically more complex queries is a concept replacement.\\n\\nWe ***never agreed on these two points***. The concept of query reduction is model-independent, it is a syntactic property of query-answer pairs. ***The fact that \\u201cneural models\\u201d (i.e., non-hybrid models) are able to effectively memorize training triples is solid evidence in the community*** and spawn from the tensor literature and ML literature. Please read again the references we provided in our paper and answer above. \\n\\nThe fact that you admit them as \\u201chard sample\\u201d \\u2013 *something many in the community ignore, including you in your review and previous answers dismissed it* \\u2013 is already pointing to one of our contributions: **there is a curve of hardness, with \\u201ceasy samples\\u201d being those reducible to 1p. There is solid evidence all models find them easier than the rest. And these samples constitute 98%+ of all datasets.** Current aggregate statistics are inflating perceived progress.\\n\\n\\n> the ratio of hard queries is often small. [...] you may have confused the concept of ratio with frequency. Your reasoning is that because one can get a good number of hard samples, one can say that the underlying distribution of all hard samples has a good coverage.\\n\\nWe believe the confusion appears on your side (and a frequency is a ratio!). We will try again to break the reasoning process step by step. We believe the flawed reasoning stems from the underlying flawed assumptions:\\nthe current percentage of train/val/ test splits are highly representative of the real-world\\nmissing triples are less than known triples\\nthe sampling process is not flawed\\n\\n\\n\\n\\n> In this part, we assume that the knowledge graph is given and the ratio of (artificially) missing edges is small.\\n\\nThis is the first flawed assumption.You cannot do this and base it only on the fact that current benchmarks have been split in a certain way. \\n*Why should we split the KG to have artificially/missing edges?* To estimate performance on unseen scenarios. As in ML, we would like this performance not to change with the split ratio much.\\n\\nFirst, we remark that benchmarks have been created for link predictions, then adapted to CQA. Second, consider that ***in a given KG, the number of real missing edges is far greater than the number of seen edges***. Consider just the proportion of edges in the current FB, NELL or even WikiData, where one can find millions of entities, this implies the possibility to have thousands of billions of possible triples. But only a fraction are observed. So in the real world $|E_m| >> |E_o|$. Note that this is true even when you discard those edges than might violate logical constraints in the ontology. Third, ***this only increases when you start counting complex queries beyond 1p***.\\n\\nIf we want to measure CQA in real, challenging scenarios, we need to test our models under this perspective, not blindly inheriting benchmarks that were thought for simple link prediction.\\n\\n> For a fixed query type, say 2p, the set of all possible query-target pairs is a fixed set. [...] Resulting three subsets of $T_{2p}$, that is, $T_{2p,I}$, $T_{2p,II}$, and $T_{2p,III}$ of reasoning trees. Those subsets of reasoning trees directly relate to the query-answer pairs. And let's just use the same meaning of suffixes I, II, and III to justify the subset of query-target samples $S_{2p, I}, S_{2p, II}, S_{2p, III}$ due to its natural connection with the reasoning trees.\\n\\n\\nWe appreciate ***you are explaining to us what we wrote in the paper about reductions***. There is no need to introduce new notation, let\\u2019s stick to the precise nomenclature we proposed.\"}", "{\"comment\": \"We thank the reviewer for praising our motivation, contribution and presentation. We will address all the questions in the following. We believe we can do this easily and hope to reach full acceptance.\\n\\n>Lack of discussion of related work; CQD-Hybrid is very similar to the QTO [1] and I think the difference should be cited and discussed; difference between CQD-Hybrid and QTO?\\n\\nWe added a discussion of related works in the revised version of the paper (page 8), including a paragraph of hybrid solvers like QTO, highlighting the differences with CQD-Hybrid in Appendix A.2.\\nSee also our global answer.\\n\\n> As an effort to propose new benchmarks, the experiments for the new benchmark are a little less. More baselines, some case analysis, etc., should be added.\\n\\n\\nWe added QTO as a baseline for the new benchmarks in the revised version of the paper. We are happy to add more baselines and run further analysis, if you could be more specific.\\n\\n>The problem of information leakage in training graphs can be solved well by the inductive setting in naive knowledge graph reasoning (one-hop reasoning) task. \\n\\n\\nWe agree, but we remark that the transductive setting we address is widely adopted and complementary to the inductive scenario. Therefore, investigating the issue of transductive benchmarks and fixing them without changing setting is an important contribution.\\n\\nThat said, the fact that inductive benchmarks can be \\u201cmore challenging\\u201d will depend on how they are created. One would need to perform an in-depth analysis to the scripts and process that generate them as we did for the transductive. As in our case, the issue is not the transductive scenario, but the way the benchmarks have been generated and \\u201cblindly\\u201d used so far. \\n\\n>Actually, there have been some attempts to establish inductive settings in CQA[1][2], where there will be no information leakage because the training and test graphs are different. How do you think this paper differs from these works?\\n\\nWe remark that we do not propose a benchmark for the inductive setting, rather propose a benchmark on the transductive setting that truly evaluates reasoning performance of transductive models on performing complex query answering. We will cite those works as related works, but clarifying they are addressing two different problems. One can have a rigorous transductive benchmark with no leakage, as we shown.\\n\\n>In my opinion, link leaks in the training graph only affect the GNN based and neural link predictor based methods, while the embedding-based methods do not take advantage of the information in the training graph (except for 1p queries). Why does this type of approach also degrade on the new benchmark?\\n\\n Our experiments suggest that also the embedding-based methods are affected by this leakage. Due to their extensive training on several query types (they train on up to 150.000 of queries for 10 query types) we suspect that they implicitly take advantage of existing triples by memorizing them in the embeddings as they are trying to reconstruct the triple tensor [3]\\n\\n[3] Trouillon, Th\\u00e9o, et al. \\\"Complex embeddings for simple link prediction.\\\" International conference on machine learning. PMLR, 2016.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This manuscript presents a data-level study of knowledge graph complex query answering. The main argument is that the query-target pairs in existing datasets (q, t) can be somehow reduced to the easier ones (sub(q), t) if the required triple can be found in training KG. Therefore, the paper proposes to focus on the irreducible query answer pairs and empirically examine that the performance of all existing methods will drop significantly. The facts revealed above motivate a search approach highlighted by letting the edges in the train graph be memorized. The performance of the approach is compared against previous works on old and new benchmarks.\\n\\n---\\n\\n## Retrospective summary after the rebuttal period.\\n\\nThis author-reviewer discussion thread goes on for too long during the rebuttal period, and particularly, the later part of the debate becomes intense. I think it is necessary to provide such a summary to digest my concerns and how they are not fully addressed for future readers of this page. Finally, I respond to the authors' accusation of a goalpost shift. \\n\\n### My concerns and why they are or are not addressed.\\n\\nMy initial concerns are, as stated in the weakness in the very first review:\\n1. the missing but essential baseline QTO/FIT.\\n2. the missing discussion in similar datasets, such as the BetaE dataset, FIT dataset, and EFO_k dataset.\\n3. distribution of the new benchmark.\\n\\nThe situation of those concerns are\\n1. Addressed.\\n2. Addressed partially. The BetaE dataset was considered empirically, which satisfies my minimum standard. However, no discussion about how their methodology of benchmark construction applies to query types in FIT and EFO_k datasets can be found.\\n3. It is still under debate, and let me expand on it as follows.\\n\\nMy concerns about the distribution of the proposed benchmark are decomposed into several fine-grained issues during the rebuttal period.\", \"distribution_issues\": \"1. I proposed a new set of terminology that clearly describes two parts of samples $S_{X, II}$ and $S_{X, III}$ based on the split of training and testing edges. The old benchmark, although sampled in a synthetic data distribution (no matter how weird it is), still covers both sets. The new benchmark only focuses on the $S_{X, III}$. My view is that the ignorance of $S_{X, II}$ is worrisome because such ignorance failed to reflect CQA's performances on link prediction, logical connectives, and variables on $S_{X, II}$.\\n2. Besides, the new benchmark, even used jointly with the old ones, actually stresses the importance of $S_{X, III}$, which is picked by enforcing the missing links. By emphasizing such a subset of data, the author tries to encourage a better link prediction because the simplest way of eliminating the gap between the new benchmark and the old one is just to make a better link prediction.\\n\\nBoth two issues are not fully addressed.\\n\\n1. The authors acknowledge the new benchmark's ignorance. However, they tried to alleviate the negative impact in two ways. The first way is the joint usage or the stratified analysis, which leads to my second issue. The second way is to state that $S_{X, II}$ is way less important than $S_{X, III}$, which leads to our disagreement in non-factual belief. My belief is the missing link (which needs to be filled to satisfy practical application) is relatively small and will be smaller due to the advancement of knowledge graph construction and knowledge harvesting. The authors believe that the missing links in a KG are significantly large proportion. Their belief is claimed to be supported by valid math derivation, which I don't think their math can support their belief.\\n2. For the second issue, I think the performance gap between old and new benchmarks is caused by only picking the missing links, and it can be narrowed by proposing better link predictors that have better link prediction performances on the missing links in existing KG datasets. There are some neverending debates about whether the gap can be fully closed or if it really reflects the measurement of CQA. It does not change my view that, as a benchmark that encourages researchers to get higher and higher scores, it is very important that a benchmark does not have a clear shortcut deviating from the original goal of the task (logical connectives and variables besides link prediction). Clearly, If one method can achieve better link prediction performance on test data, it can first predict all missing links and just run symbolic algorithms. The limit of this solution is exactly one begins with a link predictor that overfits the test set, which is how the authors create benchmark data. I don't think this is a valid outlook for constructing a new benchmark to facilitate the study. Ironically, optimizing on the old benchmark, criticized by the authors that are mostly measuring link prediction (suppose the authors are correct here), will finally close the gap between new and old benchmarks, making the new benchmark less and less useful.\\n\\n### Some accusations from the authors.\\n\\nThe authors accused of continuous goalpost shifting. However, my concerns are centered on the data distribution of the new benchmark and how it will impact the study. \\n\\nI decomposed and expanded my claims to respond to the authors' words, such as \\\"we don't see any problems here.\\\". I believe that the active authors deserve to know why I am against them. The authors, although very eloquent, repeatedly misunderstood my terms and words. Such as narrowing down my mentions of link predictors to neural link predictors (or knowledge graph embeddings) and refusing to accept my thoughts by just stating that neural models can never achieve perfect performance. However, no matter whether the neural models are perfect or not, the emphasis of new benchmarks on link prediction still remains unchanged.\\n\\nI understand that people sometimes get emotional during the debate, and the emotional words from both the authors and me are already documented in the following threads. Please also let me express my apology if any of my words ever hurt anyone.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"This paper conducts an in-depth study of existing benchmarks and reveals biases regarding tree-shaped queries and union operators in several datasets.\"], \"weaknesses\": \"I have two concerns about the content discussed and the angle studied in this paper.\\n\\nFirstly, the content seems to be very old. I am not sure whether this paper has been recycled on a sufficiently long time, so the author is not aware of the recent progress in this field. \\n1. The dataset discussed only covers the query types in [1], which is outdated today. In 2024, several datasets covering far more complex queries are also proposed, including [2] for queries with logical negation, [3] for cyclic queries, and [4] for multi-answer variables. For the ``fair'' split of triples, the answer is also unaware of existing studies on temporal CQA [5].\\n2. The baselines discussed are also no later than the year 2023. ULTRAQ is almost the same as the GNN-QE.\\n3. Given the above ignorance, the proposed CQD-hybrid method is fundamentally identical to the QTO [6] proposed in ICML'23. Those two methods are all search-based approaches that involve memorizing the train edges, which is proposed in this paper and also reflected in Equation 4 in [6], noticing that normalizing link predictor scores into [0,0.9] will not change the order of solutions. \\n\\nI prefer to recognize methodological identicality as unawareness rather than plagiarism. Therefore I didn't raise an ethical review flag.\\n\\n\\nSecondly, saying that \\\"the existing benchmark is problematic\\\" is questionable and somehow self-contradictory with this paper's philosophy of choosing outdated simple queries. \\n- On the one hand, scores on the previous benchmarks [1-5] are far from saturated because the average score is still less than 50 out of 100. Optimizing empirical models on previous benchmarks will also benefit the performance of the proposed \\\"hard\\\" benchmark. Meanwhile, recognizing the importance of training edges, although motivating the CQD-hybrid in this paper, is not new to the community because it is practiced in QTO [6] and later followed by FIT [3]. It hardly says why these findings are essential.\\n- On the other hand, the paper only focuses on the simpler query forms proposed in [1]. One might argue that such simple query forms cover a sufficiently large portion of real-world user cases, so the choice of such forms is reasonable. The same practical point of view can also apply to the easy-hard contrast produced by whether the reasoning triples of a query are observed or not. Although the previous benchmark consists of too many observed triples, as shown in this paper, it can also be reasonable by arguing that the train graph consists of a sufficiently large portion of knowledge that users are interested in.\", \"references\": \"[1] Ren, H., Hu, W., & Leskovec, J. (2020). Query2box: Reasoning over knowledge graphs in vector space using box embeddings. arXiv preprint arXiv:2002.05969.\\n\\n[2] Ren, H., & Leskovec, J. (2020). Beta embeddings for multi-hop logical reasoning in knowledge graphs. Advances in Neural Information Processing Systems, 33, 19716-19726.\\n\\n[3] Yin, H., Wang, Z., & Song, Y. (2023). Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors. arXiv preprint arXiv:2304.07063.\\n\\n[4] Yin, H., Wang, Z., Fei, W., & Song, Y. (2023). ${\\\\rm EFO} _k $-CQA: Towards Knowledge Graph Complex Query Answering beyond Set Operation.\\n\\n[5] Lin, X., Xu, C., Zhou, G., Luo, H., Hu, T., Su, F., ... & Sun, M. (2024). TFLEX: temporal feature-logic embedding framework for complex reasoning over temporal knowledge graph. Advances in Neural Information Processing Systems, 36.\\n\\n[6] Bai, Y., Lv, X., Li, J., & Hou, L. (2023, July). Answering complex logical queries on knowledge graphs via query computation tree optimization. In International Conference on Machine Learning (pp. 1472-1491). PMLR.\", \"questions\": \"Please respond to my two concerns in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response in to \\\"Non sequitur?\\\"\", \"comment\": \"Dear authors,\\n\\nI am glad to see our discussion converge to a certain topic, indicating the debate will end soon. According to your post, I think now the frontier is about (1) Why I am saying the hardness of your hard class is caused by link prediction and can be fixed by the link predictor and (2) the \\\"belief\\\" about whether the ratio of missing edges in KGs is large or small.\\n\\n## About (1)\\n\\n> We fail to see the value of this mental experiment that will never be achieved and follows some \\\"interesting\\\" non-classical logic. \\n\\nIn fact, you have just simulated this mental experiment when you sample your new benchmark. You see all the missing edges in your sampling algorithm. You use the derived results as the gold answer.\\n\\n> This is again factually wrong, there is no concrete evidence that B and C are \\\"almost satisfied\\n\\nI don't need empirical evidence for that. \\n- For (B), please check the computation of QTO, and you should realize that QTO calculated graph traversal by multiplication of the adjacency matrix produced by the link predictor. (B) holds if you admit that the matrix multiplication can simulate graph traversal.\\n- For (C), please check the definitions of triangle norms. (C) holds if you admit that they collapse into the standard logic calculus when the input truth value is boolean. \\n\\nIf you don't like my wording, what would you call it if a computation process were implemented as its definition?\\n\\n> In fact, if it were true, then QTO, GNN-QE and CQD would perform much better for simple queries such as 2p on the new benchmarks.\\n\\nOn my side, I think we can blame the link predictor because the algorithm implemented by QTO/FIT follows the standard evaluation of existential queries if the adjacency matrix used by them is perfect (by link predictor is perfect, but this does not happen usually). Please check Chapter 4 in the literature [8].\\n\\nAlso, please explain your statement if you want to use it.\\n\\n[8] Suciu, D., Olteanu, D., R\\u00e9, C., & Koch, C. (2022). Probabilistic databases. Springer Nature.\\n\\n> Here you can have a reason why the new benchmarks are important: you can exactly disentangle A, from B and C.\\n\\nI don't understand this sentence. Please elaborate.\\n\\n> We see several flaws in this reasoning. You are projecting what you would like to see and achieve in a scenario that has no concrete basis to exist. \\n\\nAs I said before, this scenario is exactly how you produced your benchmark, which suggested one shortcut to exceed in your benchmark. It is okay if you are happier to call it unrealistic. But please also look at some related arguments.\\n\\n> The experimental evidence -- please re-run our experiments -- (and see also our answer to z2gA) says the opposite.\\n\\nPlease see my above and explain what is the reason if I can not produce the correct answer only because I had a bad adjacency matrix produced by the imprecise link predictor. \\n\\n--- \\n## About (2)\\n\\n> We provided solid math evidence for the number of missing links being much larger on real-world KGs. \\n\\n**I have no intention of changing your personal beliefs because this is your free will, and I left it to the community\\u2019s judgment. I stated that the impact of the disagreement in belief would be weakened in the final score**. But if you look into FB15k-237 for solid math evidence for your belief, here is my response because this is the real case I can analyze.\", \"let_me_recall_your_math_evidence_by_quoting_your_earlier_responses_as_follows\": \"> Consider FB15k-237 it has 14,541 entities and 237 relation types, so potentially 50,111,441,397 triples, but we observe only 310,116 of them.\", \"and_also_the_following_quote\": \"> Even if you (arbitrarily) decide that 49 billion triples are meaningless somehow, there will be more missing links in the remaining 1+ billion.\\n\\nPlease add to it if I missed anything about your math.\", \"you_must_want_to_distinguish_two_concepts\": \"(1) the total number of possible triples. (2) the missing triples we care about, that is, the triples can be justified as true. Recall that knowledge is justified as true.\\n\\n**Please look at the actual data, in particular the relations, in FB15k-237 before you tried to persuade yourself with those evidences**\\n\\nYou can look at your data or the public source I mentioned below. https://github.com/liuxiyang641/RAGAT/blob/main/data/FB15k-237/rel_cat.ipynb\\n\\nPlease go through every relation and ask some questions like the following examples.\\n- How many triples can be found for relation \\\"/people/person/place_of_birth\\\"? How many places of birth can one person have? Is it 14,541?\\n- How many triples can be found for relation \\\"/award/award_winning_work/awards_won\\\"? How many movie awards have been made in human history? Is it more than 1% of 14,541*14,541?\\n\\nYou can then see that your estimation is inflated by (1) counting triples with impossible relation types. (Can entity pair (a movie, an award) be connected by \\\"/people/person/place_of_birth\\\"?) (2) ignoring the fact that many relations are just sparse.\"}", "{\"title\": \"Thanks for your further engagement.\", \"comment\": \"Dear authors,\\n\\nI appreciate your patience and efforts in the discussion. I think this is what makes ICLR unique. With all due respect, I will try to make my response match the seriousness of yours :-). \\n\\n\\n## Recap of previous discussion\\n\\nIn my first round of responses, and acknowledged by the follow-up feedback, we reached the following agreement.\\n- **The full-inference query-target pairs / truly complex queries / irreducible query-answer pairs in your terminology are essentially hard samples.** The reasons are (1) the concept of query reduction does not apply to neural models and (2) treating hard samples as logically more complex queries is a concept replacement. You acknowledged those two points in your response. Then, the problem becomes how you blame the statistical average.\\n- CQA is more than link memorization and prediction. **Treatment of other logical elements (connectives and variables) is equally important**.\", \"in_my_second_round_response\": \"- I mention that even when you consider the hard samples and blame the statistical averaging, the ratio of hard queries is often small. Then, it is risky to construct a benchmark before we confirm whether the data is sampled from a distribution that can evaluate other logical elements with a **sufficiently wide coverage** (as we have already agreed on the importance of logical elements).\\nAfter reading your response, you may have confused the concept of **ratio** with **frequency**. Your reasoning is that because one can get a good number of hard samples, one can say that the underlying distribution of all hard samples has a good coverage.\\n\\n## The response in the third round\\n\\nIn this round, I will first discuss the coverage of hard samples precisely. Then, I will express again why building such a benchmark with ONLY hard samples of this kind is not necessary.\\n\\n### The coverage of hard samples.\\n\\nA knowledge graph contains the knowledge we observed, and we acknowledge there is also the knowledge that is missing. So in a knowledge graph dataset, the edges are split into two parts: the observed edges $E_{o}$ and the (artificially) missing edges $E_{m}$. In this part, we assume that the knowledge graph is given and the ratio of (artificially) missing edges is small.\\n\\nFor a fixed query type, say 2p, the set of all possible query-target pairs is a fixed set $S_{2p}$ and the set of all reasoning trees is also another fixed set $T_{2p}$\\n- Each $t\\\\in T_{2p}$ is a 2-path. Then there are three categories for a 2-path: (I) all edges are observed; (II) part of edges are (artificially) missing, and the other edges are observed; (III) all edges are (artificially) missing. Resulting three subsets of $T_{2p}$, that is, $T_{2p,I}$, $T_{2p,II}$, and $T_{2p,III}$ of reasoning trees.\\n- Those subsets of reasoning trees directly relate to the query-answer pairs. And let's just use the same meaning of suffixes I, II, and III to justify the subset of query-target samples $S_{2p, I}, S_{2p, II}, S_{2p, III}$ due to its natural connection with the reasoning trees.\\n- Notably, one query-answer pair can be explained by multiple reasoning trees, and if I understood the paper correctly, all reasoning trees $t$ for each full-inference query-target pair $s\\\\in S_{2p, III}$ should belong to $T_{2p, III}$.\\n\\nWhat should we care about? For a fixed query type X, samples from $S_{X,I}$ are not evaluated in previous practice because it is already solved by an existing database system, samples from $S_{X,II}\\\\cup S_{X,III}$ are then evaluated. And many methods solve $S_{X,II}\\\\cup S_{X,III}$ as the goal.\\n\\nWhen $|E_m| << |E_o|$, it is natural that $|T_{X, III}| << |T_{X, II}|$ and then $|S_{X,III}| << |S_{X,II}|$. For 2p case, $|T_{2p, III}|$ is the number of 2-paths with edge $|E_m|$ but $|T_{2p, II}|$ is the number of 2-path with one edge from $E_m$ and another from $E_o$. What the paper has found in this paper reflects these kinds of differences. This fact is not affected by how you sample the data. It is about the range of data you can sample from.\\n\\nConstructing benchmarks from a distribution over $S_{X, III}$ ignores the entire $S_{X, II}$. The original manuscript argues that $S_{X,II}$ can actually be \\\"reduced\\\" to type III samples that belong to a simpler sub-query type $Y$. $S_{Y, III}$ is already measured, so we don't need to repeat that job in $S_{X,II}$. However, we all agree that the reduction is NOT valid, as I mentioned earlier. And as a result, the performance of logical connectives and variables on $S_{X,II}$ are NOT measured in your benchmark.\\n\\nPlease be aware that the discussion above holds regardless of the sampling algorithm. Your previous responses about how you sample 50k samples in $S_{X,III}$ still do not contribute to any information in $S_{X,II}$.\"}", "{\"title\": \"Wishful thinking or misunderstanding of logic?\", \"comment\": \"> Let's conduct one mental experiment where the link prediction is perfect. Then, the performance drop due to the of the link prediction is gone\\n\\nThis is ***factually wrong***. You can reconstruct the training triple tensor perfectly, but fail terribly at reasoning (this is what ML models are trying to do!), because you do not know how to emulate exactly a formal reasoner in a ML (even hybrid) model.\\n\\n> . But interestingly, the improved link prediction will also close the gap between and performances\\n\\nThis is wishful thinking, and perhaps rooted in a logical fallacy: ***improving the accuracy of ML models on hard queries implies improving single link prediction performance***. The opposite is not true.\"}", "{\"summary\": \"In this paper, authors re-examine the existing problems of knowledge graph complex reasoning datasets. The authors propose that the current dataset cannot effectively measure the generalization ability of the reasoning model, that is, the complex queries in the dataset can be solved by the triples leaked in the training graph, and verifies their conjecture through extensive and sufficient experiments. Further, the authors propose a new set of benchmarks to more effectively measure the performance of complex reasoning models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Motivation of the paper is novel, and the re-examination of existing benchmarks is valuable.\", \"Experiments in this paper can support the conclusion well.\", \"Writing of the paper is good, the structure is clear, the layout is good, and it is easy to follow.\"], \"weaknesses\": [\"Lack of discussion of related work, if the space is limited, this part can be placed in the appendix.\", \"In Section 5.1, the author proposes a CQD-Hybrid solver. Actually, the practice described in the paper is very similar to the QTO [1] and I think the difference should be cited and discussed.\", \"As an effort to propose new benchmarks, the experiments for the new benchmark are a little less. More baselines, some case analysis, etc., should be added.\", \"Some typos, such as line.468: 50.000\", \"[1] Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization. In ICML2023\"], \"questions\": [\"The problem of information leakage in training graphs can be solved well by the inductive setting in naive knowledge graph reasoning (one-hop reasoning) task. Actually, there have been some attempts to establish inductive settings in CQA[1][2], where there will be no information leakage because the training and test graphs are different. How do you think this paper differs from these works?\", \"In my opinion, link leaks in the training graph only affect the GNN based and neural link predictor based methods, while the embedding-based methods do not take advantage of the information in the training graph (except for 1p queries). Why does this type of approach also degrade on the new benchmark?\", \"As mentioned in weakness, what's the difference between CQD-Hybrid and QTO?\", \"[1] Inductive Logical Query Answering in Knowledge Graphs. In NeruIPS 2022.\", \"[2] Type-aware Embeddings for Multi-Hop Reasoning over Knowledge Graphs. In IJCAI 2022.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Debunking non-factual assumptions\", \"comment\": \"We are trying our best to debunk non factual claims. Most of your comments are ***your beliefs*** as you write yourself. We rebut them in the following, highlighting how there is no concrete evidence to support them. We provided concrete evidence for ours, but even if you think also ours are just beliefs, why yours should be the right ones and should be the motivation to reject a paper?\\n\\n> This question concerns an axiom-level assumption to the knowledge graphs we handle. You cannot prove or disprove it. I believe the ratio is small [...] Industrial-level knowledge graphs and their embeddings already supported a vast number of applications. If a majority of knowledge cannot be predicted by existing edges and ML models, many applications we saw will not happen. In the future, the ratio of missing knowledge will be smaller because\\n\\nThis is wishful thinking. ***You are assuming that FUTURE KGs will be almost complete***, but even if we assume this (very hard to do, see next) this tells us nothing about the fact about the completeness of current KGs, nor about the current benchmarks, FB15k and NELL. \\n\\nIt is plainly inconsequential reasoning. It is like saying \\u201cwe believe future AI will be safe, otherwise many applications we saw did not happen, therefore it is useless to claim some instance of current AI is not safe\\u201d.\\n\\n> LLMs that learn from large corpus support question-answering applications quite well with the assistance of existing knowledge graphs or knowledge bases. That means the training corpus of LLMs + existing KG or KB consists of a great amount of knowledge we need.\\n\\nLLMs are notoriously unreliable, and \\u2013 more crucially \\u2013 out of the scope of the evaluation of CQA as done not only in our paper, but in twenty and more papers before us.\\n\\n> Based on 1 and 2, I think the ratio of missing edges in KG will be small to satisfy people's real needs.\\n\\nWe wish to believe you, but you either need to provide some concrete statistics or you are making some claim about a future that does not impact past benchmarks in any way. \\n> You assume that there could be pair-wise connections in KG first. This assumption reflects your belief. \\n\\nThis is not a belief, this is statistics (actually simple math) in action. In ***current real-world KGs we use as benchmarks, only a fraction of links is observed and used to create a dataset***. Consider FB15k-237 it has 14,541 entities and 237 relation types, so potentially 50,111,441,397 triples, but we observe only 310,116 of them. Even if you assume logical constraints (we did not find them in the schema definition),*** there will be billions of missing edges***.\\nYou miss an additional crucial point, the ML model does not know in advance which are invalid (do not satisfy logical constraints) or not, it is its job to output a probability score for every possible triple among those 50+ billions. So we have to consider the remaining ones missing.\\n\\n> I am suggesting that you can obtain $T_{2p}$ by iterating every entity in KG and conducting a DFS with depth of 2.\", \"there_is_a_profound_misconception_here\": \"what you are describing here is enumerating all reasoning trees, assuming they are all important. They are not, each one has associated a probability, and this follows an (unknown) joint probability distribution. We need it to i) model uncertainty (otherwise we will not be using ML models) and ii) select a relevant sample of reasoning trees for our dataset. ***There is always a sampling algorithm employed.*** But people not familiar with probabilities are ignoring the fact that the way they sample/construct the training set is dependent on certain (implicit) assumptions.\\n\\nNow, sampling uniformly is impossible because of the large sample space. Consider FB15k-237, as stated above there are 50+ possible triples, and therefore much more 2p queries. As we need a practical way to create a dataset, we need other sampling strategies. Hence people used the triple independence assumption.\\n\\n\\n> Please note that for now, we don't need a specific sampler to define those three types. Your comment in the post, \\\"You are assuming triple independence, why is this realistic? (II)\\\" exactly reflects that you are unaware of this problem structure.\\n\\n***The current benchmarks are created using a particular sampling strategy, assuming triple independence*** this is not a belief, but concrete evidence (you cannot ignore it!): it is in the code scripts everyone uses. This assumption allows to quickly generate some data, but this data is biased and the distribution of possible queries skewed. \\n\\n> Here is why I think your benchmark does not have sufficient coverage, which totally ignores the performance on $S_{X,II}$.\\n\\nThe fact ***we are extending the old benchmarks with new benchmarks only augments the current coverage*** (we are not saying to get rid of the old benchmarks). This profound confusion you have is not a good motivation to reject the paper.\"}" ] }
2F7MFqATdo
Intention Model: A Novel Explanation for In-context Learning
[ "Yonggang Zhang", "Hanzhe You", "Xinmei Tian", "Jie Lu" ]
In-context learning (ICL) has demonstrated remarkable success in enabling large language models (LLMs) to learn to do a downstream task by simply conditioning on a few input-output demonstrations. Distinct from traditional learning paradigms, ICL does not require model updates, thus attracting significant interest in understanding the mechanisms behind LLMs’ ICL capabilities. Advanced works aim to understand ICL through an empirical viewpoint to provide the multifaceted nature of ICL, while some works aim to explain how ICL can emerge theoretically. However, the current theoretical analysis exhibits a weak connection to empirical explorations due to strong assumptions, e.g., perfect LLMs and ideal demonstrations. This work proposes an intention model, providing a novel theoretical framework for explaining ICL. With mild assumptions, we present a ``no-free-lunch'' theorem for ICL: whether ICL emerges depends on the prediction error and prediction noise, which are determined by \emph{\textbf{i)}} LLMs' error of next-token prediction, \emph{\textbf{ii)}} LLMs' prediction smoothness, and \emph{\textbf{iii)}} the quality of demonstrations. Moreover, our intention model provides a novel explanation for the learning behavior of ICL under various input-output relations, e.g., learning with flipped labels. This is fortunately consistent with our experimental observations.
[ "In-context learning", "Large language models" ]
Reject
https://openreview.net/pdf?id=2F7MFqATdo
https://openreview.net/forum?id=2F7MFqATdo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vG57ePl6aI", "tAoTZDfQZD", "qahtbOXW7D", "pHjDnRXMZQ", "p9Z7kWndPn", "oe5RywVd3j", "ndpY1ZnYC7", "k417WEWk0x", "gzwfHFeDi2", "fF0YZbyc6o", "fAE3saMBW0", "e0wndxUaYx", "ZvK9ITOYGg", "W4bN7DgEpx", "Ti7nkY8yuS", "SRXZ6vfPbc", "LqGbnInJ4t", "KrFDcqeDwQ", "KadwCRtrh6", "JO0qTHasVC", "HZnFXPLg4u", "GpScKhHwB7", "9KvZwDAecO", "2LPgyj5I8S", "1PigMMQ0l6", "06FKCy2kEW" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732516398829, 1732255359551, 1732255892073, 1732256539195, 1733217627263, 1735519966271, 1732256305090, 1732326978384, 1732255122774, 1732769314335, 1732255746802, 1732479673928, 1737523957663, 1730062584630, 1732255474005, 1732769722126, 1730758020556, 1732769256895, 1732324275851, 1730661163637, 1732527750740, 1730371232221, 1733222798303, 1732760704247, 1733300535997, 1732254948368 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Area_Chair_rvwJ" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Area_Chair_rvwJ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_b9hR" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_A7mQ" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_A7mQ" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_6cPC" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_wUqa" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_wUqa" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_wUqa" ], [ "ICLR.cc/2025/Conference/Submission9067/Reviewer_6cPC" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ], [ "ICLR.cc/2025/Conference/Submission9067/Authors" ] ], "structured_content_str": [ "{\"title\": \"A revised version is uploaded\", \"comment\": \"Dear ACs and Reviewers,\\n\\nWe sincerely appreciate the time and effort you have invested in reviewing our work. In response to the constructive comments and valuable suggestions, we have uploaded a revised version with more detailed descriptions.\", \"we_have_marked_the_revisions_with_different_colors_corresponding_to_the_feedback_from_each_reviewer\": \"Green for Reviewer A7mQ;\\n\\nRed for Reviewer 6cPC;\\n\\nBlue for Reviewer wUqa;\\n\\nGray for Reviewer b9hR.\"}", "{\"title\": \"Responses to Reviewer 6cPC\", \"comment\": \"> **W. 3**: The intent recognition experiment is not totally convincing. Details of the task setup are also missing.\\n\\n***Ans for W. 3):*** Thanks for pointing out this potentially confusing configuration description. \\n\\nOur experiments are mainly inspired by the observation that induction heads are shown to be closely related to general ICL in LLMs [r1]. Thus, we aim to identify a set of induction heads for a given intention and verify the impact of these intentions on the generation process under a specific intention. Due to limited space, we defer the detailed setup to Appendix F. In response to your valuable comments, we will highlight these details on the main page as follows.\\n\\nFor the algorithm to locate induction heads, we draw inspiration from the work [r2]. Namely, we construct counterfactual examples to compare the induction heads when the intention of interest is activated and when it is not. Specifically, given a reference sample used for activating a certain intention, we construct a corresponding counterfactual example to deactivate the intention with minimal changes to the reference sample. Subsequently, we replace the induction heads of the reference sample with those of the counterfactual example. Thus, we can record the output changes when replacing each head. Consequently, induction heads that cause drastic changes in outputs are located as the candidate heads related to the current intention. Following previous work, we employ the model LLaMA-2-7B and the dataset SST-2 for the employed models and datasets.\\n\\nWe will highlight these details on the main page in response to your valuable comments.\\n\\n\\n> **W. 4**: Please discuss these experiments in the main body of the paper, state their conclusions and how they support the theory.\\n\\n***Ans for W. 4):*** Thanks for your valuable comments. Accordingly, we will add the following descriptions to the main text. We will also provide more details on the dataset preparation.\\n\\n\\nOur experiments are designed to assess intention recognizability, pinpoint intention instantiation, and verify theoretical insights. \\n\\n- Intention recognition. To make the concept of intention more vivid, we design experiments to verify the recognizability of intention. The intuition is straightforward, namely, the basic idea of our intention model is that LLMs can infer intentions from the demonstrations. This implies that intentions could be recognized when LLMs are conditioned on demonstrations. Thus, we collect $10,150$ prompts with $50$ distinct intentions and extract the features of these prompts using different LLMs, leading to numerous pairs of features and the corresponding label representing intentions. Some of these samples are used to train a classifier, and the left are used to evaluate the prediction accuracy of this classifier. The results are shown in Table 1. \\n\\n- Intention localization. It shows that induction heads are the mechanistic source of general ICL in LLMs [1]. This motivates us to take a step further beyond recognizing intentions, namely, we aim to identify a set of induction heads for a given intention and verify the impact of these intentions on the generation process under a specific intention. Thus, we design experiments to locate intentions with results are given in Figrue 1. These results show that we can pinpoint intention instantiation, making the concept of intention more vivid.\\n\\n- Insights verification. Our theorem shows that the ICL capacity of an LLM emerges depends on the prediction error and the prediction noise. However, it is challenging to calculate or estimate the related values, i.e., the error of next-token prediction $\\\\delta_3$, LLM\\u2019s prediction smoothness, and demonstration shift. To validate our theorem, we design experiments to implicitly test the impact of these factors. For instance, the error of next-token prediction $\\\\delta_3$ could be related to the LLMs' performance under general tasks. Thus, we could conclude that the $\\\\delta_3$ of GPT4 is less than that of LLaMa-7B. Applying an external transition matrix can increase the demonstration shift, which would lead to larger prediction noise according to the prediction noise in Eq. (14). To verify these theoretical insights, we evaluate the ICL performance of different LLMs under the scenario where the transition matrix is realized by an addition operation, i.e., realizing $\\\\mathcal{T}$ by y -> (y + 1) mod 5 or by a more complicated one y -> (3y + 1) mod 5. The results shown in Table 2 are consistent with our theoretical analysis.\"}", "{\"title\": \"Responses to Reviewer wUqa\", \"comment\": \"> **W. 4**: Experiment section is too small and severely cut (defered to the appendix). Also, the evidence is circumstantial.\\n\\n***Ans for W. 4):*** Thanks for your valuable comments. We will revise the paper to include more details and results in the main text. Accordingly, we will add the following descriptions to the revision.\\n\\nOur intention model shows that learning behaviors, e.g., learning with flipping labels, can be modeled by multiplying a transition matrix $\\\\mathcal{T}$ by the original transition matrix $\\\\theta$, as shown in Eq. (14), $y_{t}(\\\\mathcal{T}\\\\theta_g) = {\\\\rm \\\\mathop{arg\\\\; max}\\\\limits_{y}}\\\\; p(y| x_{t},\\\\mathcal{T}\\\\theta_g), \\\\text{with} \\\\ y(\\\\mathcal{T}\\\\theta_d) = {\\\\rm \\\\mathop{arg\\\\; max}\\\\limits_{y}}\\\\; p(y| x,\\\\mathcal{T}\\\\theta_d)$. Based on our intention model, we have three conclusions:\\n\\n- _Conclusion 1_: **introducing an external $\\\\mathcal{T}$** to modify original outputs makes ICL more challenging. This is because the transition matrix $\\\\mathcal{T}$ would lead to $KL(p(x,y|\\\\mathcal{T} \\\\theta_d) || p_M(x,y|\\\\mathcal{T}\\\\theta_g))=\\\\epsilon^\\\\prime \\\\geq \\\\epsilon=KL(p(x,y|\\\\theta_d) || p_M(x,y|\\\\theta_g))$. Consequently, this results in 1) larger prediction errors, as shown in Eq. (7); and 2) larger prediction noise, as shown in Eq. (13). Thus, introducing an external matrix $\\\\mathcal{T}$ will degrade ICL performance, which is consistent with the results shown in **Table 2**, i.e., changing $y$ to $(y + 1)\\\\ mod \\\\ 5$.\\n\\n- _Conclusion 2_: an LLM with smaller error of next-token prediction $\\\\delta_3$ performs **better in overriding semantic priors under flipped label scenarios**. This is because smaller prediction error $\\\\delta_3$ can reduce the prediction noise as shown in Eq. (13). Intuitively, $\\\\delta_3$ is related to the LLMs' performance under general tasks, i.e., $\\\\delta_3$ of GPT4 could be less than that of GPT2. Thus, we employ three LLMs to verify the point and report their performance in **Table 2**. Our results verify the theoretical insights, and fortunately, this conclusion aligns well with the experimental observations [r1].\\n\\n- _Conclusion 3_: **increasing the number of demonstrations $n$ under the random label scenario lead to decreasing performance**. This is because larger $n$ would magnify the impact of demonstration shift and the LLMs\\u2019 error of next-token prediction, as shown in Eq. (13). This conclusion aligns well with the experimental observations [r3]. Similarly, a small $n$ leads to good performance, which aligns with the experimental observations [r2].\\n\\n\\n\\n> **W. 5**: The paper is very hard to read and follow. \\n\\n***Ans for W. 5):*** We apologize for the clarity and writing issues in the paper. We will revise it to make it easier to read and follow and address all of the specific issues you pointed out. \\n\\n> **W. 6**: Presentation can be improved by removing some theoretical proofs by putting it into the appendix and describe a stronger link between the theoretical and empirical results.\\n\\n***Ans for W. 6):*** Thanks for your kind suggestion. We will put some theoretical proof into the appendix to improve the presentation. For instance, we will put Eqs. (10) and (12) in the appendix. We will add the corresponding experiments after the theoretical analysis to highlight the connection between the theoretical analysis and experiments.\"}", "{\"title\": \"Responses to Reviewer b9hR\", \"comment\": \"We would like to express our gratitude to the reviewer for the time and effort dedicated to reviewing our work. In response to your comments, we have provided detailed responses below.\\n\\n## Response to weaknesses:\\n\\n> **W.1**: The paper's theoretical framework largely follows the derivation approach from Xie et al. (2022). The contribution seems more like an incremental step rather than a major theoretical innovation.\\n\\n***Ans for W.1):*** We apologize for the misunderstanding. To clarify our contribution, we highlight the difference between our work and the mentioned work. \\n\\n- Xie et al.'s work provides an outstanding framework to explain the phenomenon of ICL [r1]. However, their work uses some strong assumptions: LLMs can exactly fit distributions and perfectly delineate task intention, under which they prove that the prediction of ICL can be asymptotically optimized as the number of examples increases.\\n\\n- We propose a novel theoretical framework, i.e., the intention model. This allows for common models and demonstrations and derives the no-free-lunch theorem of ICL. Namely, whether ICL emerges depends on the prediction error and prediction noise, which are determined by i) LLMs\\u2019 prediction error of the next token, ii) LLMs\\u2019 prediction smoothness, and iii) the quality of demonstrations.\\n\\n> **W. 2**: The use of the term \\\"No Free Lunch\\\" for the presented theorem seems a bit off\\n\\n***Ans for W. 2):*** We apologize for the misunderstanding.\\n\\nThe use of the term \\\"No Free Lunch\\\" may be confusing to readers. We chose this term to show that no demonstration works best for all problems. However, in our theorem, we are not emphasizing a universal algorithm but rather a specific condition for ICL\\n We will revise the terminology in the revised paper to make it clearer and more accurate.\\n\\n\\n> **W. 3**: The experimental section lacks clarity on how each of the theoretical components manifests in practice\\n\\n***Ans for W. 3):*** Thanks for your valuable suggestions. \\n\\nOur theorem shows that the ICL capacity of an LLM emerges depends on the prediction error and the prediction noise. However, it is challenging to calculate or estimate the related values, i.e., the error of next-token prediction $\\\\delta_3$, LLM\\u2019s prediction smoothness, and demonstration shift. To validate our theorem, we design experiments to test these factors' impact implicitly. \\n\\nFor instance, the error of next-token prediction $\\\\delta_3$ could be related to the LLMs' performance under general tasks. The prior error of LLMs can be regarded as the difference between the LLMs\\u2019 prediction and the actual distribution.\\n\\nThus, we could conclude that the errors of GPT-4 are less than those of LLaMa-7B. Applying an external transition matrix can increase the demonstration shift, which would lead to larger prediction noise according to the prediction noise in Eq. (14). To verify these theoretical insights, we evaluate the ICL performance of different LLMs under the scenario where the transition matrix is realized by an addition operation i.e., realizing $\\\\mathcal{T}$ by y -> (y + 1) mod 5 or by a more complicated one y -> (3y + 1) mod 5. The results shown in Table 2 are consistent with our theoretical analysis.\\n\\n\\n\\n> ***Reference***\\n> \\n> [r1] An explanation of in-context learning as implicit bayesian inference.\"}", "{\"title\": \"Window for discussion is closing\", \"comment\": \"Dear Reviewer wUqa,\\n\\nWe sincerely appreciate the time you have taken to review our work and provide insightful comments. We understand that your schedule is demanding. However, as the window for discussion is closing, we kindly request that you review our responses to ensure your concerns have been addressed. We would greatly value your further feedback and are committed to making any necessary improvements to enhance our work.\\n\\nBest regards,\\n\\nAuthors of 9067\"}", "{\"metareview\": \"This paper proposes an \\\"intention model\\\" as a theoretical framework to explain in-context learning (ICL) capabilities in large language models. The key scientific claim is that ICL emergence depends on three factors: the model's next-token prediction error, prediction smoothness, and demonstration quality. The authors present a \\\"no-free-lunch\\\" theorem showing these factors determine whether ICL will emerge successfully. They also use their framework to explain empirical phenomena around how models handle flipped and random labels during ICL. The reviewers appreciated the novelty of the theoretical analysis and it's focus on interpretable terms. However, the paper has several notable weaknesses, especially the gaps in clarity, high degree of similarity to related works such as Xie et al., gaps in theory relating to inconsistency in explaining random intentions as in Min et al., and lack of ability to measure the theoretical factors in real-world experiments. Some are natural eg, next-token prediction error, while others such as prediction smoothness could also potentially benefit from related work [1]. Overall, the paper was borderline and there was no strong champion for the work, and felt a revision of the work to address the weaknesses is necessary for acceptance.\\n\\n[1] https://arxiv.org/abs/2406.11233\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"title\": \"Responses to Reviewer wUqa\", \"comment\": \"## Response to questions:\\n\\n> **Q. 1**: Why is it called the no-free lunch theorem?\\n\\n***Ans for Q. 1):*** \\n\\nThe no-free lunch theorem in machine learning states that no learning algorithm is naturally met and universally superior to all others for all possible problems. In the context of this paper, the no-free lunch theorem refers to the fact that the ICL performance of an LLM will depend on the specific task and the instructions provided to the model.\\n\\n> **Q. 2**: Why do we need to have a neighborhood of intentions, which are exactly modeled, compared to Xie et al.\\u2019s exact intention, which may have some modeling error?\\n\\n***Ans for Q. 2):*** \\n\\nWe propose a framework that models a neighborhood of intentions rather than a single exact intention. In practice, a user's true intention may not be perfectly specified, and there may be multiple plausible interpretations of the user's input. By modeling a neighborhood of intentions, we can capture a range of possible interpretations and make predictions that are robust to variations in the user's input.\\n\\n\\n> **Q. 3**: What is the difference between Assumption 3 and 5?\\n\\n***Ans for Q. 3):*** Assumption 3 describes that the difference in probability of similar intention producing the same next-token prediction in the real task space is bounded. Assumption 5 characterizes that the probability difference of similar intention in the distribution predicted by the model produces the exact next-token prediction is bounded. Actually, a significant improvement in our work is to incorporate the model's predictive power into the factors influencing ICL.\\n\\n> **Q. 4**: Why is it difficult to estimate next-token error of LLMs? Where is the causal link that implies that the model is figuring out this new matrix T?\\n\\n***Ans for Q. 4):*** Thanks for your valuable comments. We present this external map to mainly show how difficult it is for the model to infer different tasks. In the multiple choice test, flipped label can just be expressed by the transition matrix, which will lead to the weakening of the model's ability to distinguish task intention, i.e., $KL(p(x,y|\\\\mathcal{T} \\\\theta_d) || p_M(x,y|\\\\mathcal{T}\\\\theta_g)) \\\\geq KL(p(x,y|\\\\theta_d) || p_M(x,y|\\\\theta_g))$, implicitly leads to an increase in $p_M(\\\\varTheta_{\\\\epsilon})$, and thus the model's performance under the flipped-label task will deteriorate.\\n\\n\\n\\n> ***Reference***\\n> \\n> [r1] Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-context learning as implicit bayesian inference.\\n\\n> [r2] Larger language models do in-context learning differently.\\n\\n> [r3] In-context learning learns label relationships but is not conventional learning.\"}", "{\"title\": \"Response\", \"comment\": \"We sincerely appreciate your prompt feedback and deeply understand your responsibility in not raising the score for the original draft. We are working on the new version by integrating your constructive and valuable comments.\"}", "{\"title\": \"Responses to Reviewer 6cPC\", \"comment\": \"We extend our heartfelt thanks to the reviewer for the time and effort invested in evaluating our submission. It is gratifying to learn that you acknowledge the novelty of our perspective on explaining ICL. Given your constructive comments, we have prepared comprehensive responses below to address each point raised.\\n\\n\\n## Response to weaknesses:\\n\\n> **W. 1**: The amount of detailed mathematical analysis in Sections 3 and 4 is dense and obscures the key take away messages from the theory. One suggestion to the authors is to reconsider whether to keep all of the technical details in the main paper, or describe the main takeaways and the theorem, but move the rest into the appendix\\n\\n***Ans for W. 1):*** Thanks for pointing out this potentially confusing manner of writing. \\nThere are many empirical studies on ICL phenomena, but few theoretical studies on the actual mechanism and influencing factors of ICL exist. Our study provides a possible theoretical framework and can be used to explain some of the ICL phenomena.\\nAccordingly, we will add detailed descriptions to our revision. And we will reconsider the placement of some mathematical analysis and consider moving some of them to the appendix to focus more on the main takeaways and theorem in the main paper.\\n\\n\\n> **W. 2**: The paper lacks direct empirical confirmation of its theoretical findings.\\n\\n***Ans for W. 2):*** Thanks for pointing out this potentially confusing experimental configuration. Accordingly, we will add the following descriptions to the revision.\\n\\nOur intention model shows that learning behaviors, e.g., learning with flipping labels, can be modeled by multiplying a transition matrix $\\\\mathcal{T}$ by the original transition matrix $\\\\theta$, as shown in Eq. (14), $y_{t}(\\\\mathcal{T}\\\\theta_g) = {\\\\rm \\\\mathop{arg\\\\; max}\\\\limits_{y}}\\\\; p(y| x_{t},\\\\mathcal{T}\\\\theta_g), \\\\text{with} \\\\ y(\\\\mathcal{T}\\\\theta_d) = {\\\\rm \\\\mathop{arg\\\\; max}\\\\limits_{y}}\\\\; p(y| x,\\\\mathcal{T}\\\\theta_d)$. Based on our intention model, we have three conclusions:\\n\\n- _Conclusion 1_: **introducing an external $\\\\mathcal{T}$** to modify original outputs makes ICL more challenging. This is because the transition matrix $\\\\mathcal{T}$ would lead to $KL(p(x,y|\\\\mathcal{T} \\\\theta_d) || p_M(x,y|\\\\mathcal{T}\\\\theta_g))=\\\\epsilon^\\\\prime \\\\geq \\\\epsilon=KL(p(x,y|\\\\theta_d) || p_M(x,y|\\\\theta_g))$. Consequently, this results in 1) larger prediction errors as shown in Eq. (7); and 2) larger prediction noise as shown in Eq. (13). Thus, introducing an external matrix $\\\\mathcal{T}$ will degrade ICL performance, which is consistent with the results shown in **Table 2**, i.e., changing $y$ to $(y + 1)\\\\ mod \\\\ 5$.\\n\\n- _Conclusion 2_: an LLM with smaller error of next-token prediction $\\\\delta_3$ performs **better in overriding semantic priors under flipped label scenarios**. This is because smaller prediction error $\\\\delta_3$ can reduce the prediction noise as shown in Eq. (13). Intuitively, $\\\\delta_3$ is related to the LLMs' performance under general tasks, i.e., $\\\\delta_3$ of GPT4 could be less than that of GPT2. Thus, we employ three LLMs to verify the point and report their performance in **Table 2**. Our results verify the theoretical insights, and fortunately, this conclusion aligns well with the experimental observations [1].\\n\\n- _Conclusion 3_: **increasing the number of demonstrations $n$ under the random label scenario lead to decreasing performance**. This is because larger $n$ would magnify the impact of demonstration shift and the LLMs\\u2019 error of next-token prediction, as shown in Eq. (13). This conclusion aligns well with the experimental observations [3]. Similarly, a small $n$ leads to good performance, which aligns with the experimental observations [2].\"}", "{\"title\": \"Responses to Reviewer 6cPC\", \"comment\": \"Dear Reviewer #6cPC,\\n\\nThank you for your invaluable support! We greatly appreciate your feedback and contributions to improving our work.\\n\\nShould you have any outstanding questions or require further clarification on any issues, please do not hesitate to reach out. We would be more than happy to receive your constructive comments and address or discuss them with you promptly.\\n\\nBest regards,\\n\\nAuthors of #9067\"}", "{\"title\": \"Responses to Reviewer wUqa\", \"comment\": \"We would like to express our gratitude to the reviewer for the time and effort dedicated to reviewing our work. We appreciate that you find our theoretical framework generally interesting and reasonable, and the provided link between next token prediction ability and ICL is valid and interesting. In response to your valuable comments, we have provided detailed responses below. We hope that our responses have satisfactorily addressed your concerns, thereby enhancing the overall quality of our work.\\n\\n\\n## Response to weaknesses:\\n\\n> **W. 1**: The assumption 4 is strong. Moreover, there is no way to get predictions from the LLM given the same context and some different intentions \\\\theta_different.\\n\\n***Ans for W. 1):*** We apologize for the potentially confusing assumption. Accordingly, we will add the following descriptions to the revision.\\n\\nAssumption 4 only considers cases where the model infers a relatively correct intention, and if the model infers a wrong intention, i.e., $\\\\theta \\\\notin \\\\varTheta_{\\\\epsilon}$, we expect this probability to be small and bounded by Eq. (12). So this assumption is loose. \\n\\nUnder our theoretical framework, task demonstrations are generated based on a given task intention. It is a tiny probability event to generate the same demonstration under very different intentions, so the model will have different tendencies for the intention distribution $p_M\\\\left(y|S^n\\\\left(\\\\theta_d\\\\right), x_{t},\\\\theta_g\\\\right)$ generated by demonstrations generated by different intentions.\\n\\n> **W. 2**: The no-free lunch theorem in this paper does not explain the task learning capabilities of LLMs on completely novel tasks (\\\\theta not in the intention family) unrelated to the pretraining text.\\n\\n***Ans for W. 2):*** Thanks for your valuable comments. Trained on large amounts of data, LLMs generate amazing emergence capabilities, allowing the model to handle previously unseen tasks. We will discuss potential limitations and future work to address novel tasks.\\n\\n\\n> **W. 3**: The whole external mapping thing does not make much sense to me. Users do not provide an external mapping when getting the model outputs; they directly present demonstrations with this transformation. If the LLM infers this mapping, it can only be implicit. Making it a part of the original intention family. It is hard to tell if a mapping like flipped labels is present in the intention family learnt by the model during pretraining. If the mapping is randomly generated, this becomes a contradiction as it is indeed not present in the pretraining corpus.\\n\\n***Ans for W. 3):*** Thanks for your valuable comments. We present this external map to show more clearly how difficult it is for the model to infer different tasks. In the multiple choice test, the flipped label can be expressed by the transition matrix, which weakens the model's ability to distinguish task intention. The external mapping matters because different mapping makes a difference in the task's difficulty. e.g. The model performed worse on task $y->3y+1(\\\\text{mod}5)$ than on task $y->y+1(\\\\text{mod} 5)$\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible and verify if your questions and comments have been adequately addressed.\\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards,\\nAC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a theoretical framework called the \\\"intention model\\\" to explain ICL behaviors. The authors present a \\\"no-free-lunch\\\" theorem for ICL, showing that its emergence depends on prediction error, prediction noise, the model's smoothness, and demonstration quality. Unlike previous approaches with strong assumptions, this work relaxes the assumptions on perfect model alignment and demonstration representation. The intention model helps bridge the gap between theoretical explanations and empirical observations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The theoretical analysis breaks away from the typical assumptions about perfect model alignment. It feels like it\\u2019s providing a more grounded explanation, making it easier to connect theory with the real behaviors of LLMs.\\n\\n2. The writing is generally clear, and the mathematical notation is thoroughly defined, which makes it easier for readers to follow.\", \"weaknesses\": \"1. The paper's theoretical framework largely follows the derivation approach from Xie et al. (2022), particularly leveraging Bayesian Inference. Although it extends the original work by adding an error term between the LLM and the real distribution, this extension doesn\\u2019t feel groundbreaking. The contribution seems more like an incremental step rather than a major theoretical innovation.\\n\\n2. The use of the term \\\"No Free Lunch\\\" for the presented theorem seems a bit off. The typical connotation of \\\"No Free Lunch\\\" is about the impossibility of a universal solution that works optimally in all scenarios. Here, the theorem implies that LLM performance depends on factors like prediction error, prediction noise, and demonstration quality. While there is indeed an implication of trade-offs, if the theorem isn\\u2019t emphasizing a broad, universal limitation but rather a specific condition for ICL, then this choice of terminology could easily confuse readers.\\n\\n3. The experimental section lacks clarity on how each of the theoretical components, particularly the terms in Equation (13), manifests in practice. It\\u2019s unclear how specific terms like \\\"error in predicting the next token,\\\" \\\"prediction smoothness,\\\" and \\\"distribution smoothness\\\" are reflected in the real experimental observations. This disconnect makes it difficult for readers to see how well the theory aligns with the empirical results, and it weakens the overall support for the claims made in the theoretical part.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer 6cPC\", \"comment\": \"> **W. 5**: The writing of the paper needs significant editing and proofreading.\\n\\n***Ans for W. 5):*** Thanks for your kind suggestions. We apologize for any writing errors or unclear phrases in the paper. We will carefully rewrite and edit the paper to address these issues.\\n\\n## Response to questions:\\n\\n> **Q. 1**: What is \\u201cn\\u201d in Table 1? How does Table 1 \\u201cshow that larger LLMs can capture the intention\\u201d? \\n\\n***Ans for Q. 1):*** \\u201cn\\u201d refers to the number of examples in the demonstrations in Table 1. \\n\\nOur theorem shows the no-free-lunch nature of ICL, involving prediction error and noise. Intuitively\\uff0c The **error of next-token prediction** $\\\\delta_3$ could be related to the LLMs' performance under general tasks, i.e., $\\\\delta_3$ of GPT-4 is less than that of GPT-2. Thus, we compare the ICL capability of different models, i.e., LLaMa-7B, Mistral-7B, and GPT-4, with results in **Table 1**, verifying that LLMs with smaller $\\\\delta_3$ exhibit higher ICL performance. \\n\\n\\n> **Q. 2**: Can you provide more details or samples for the dataset preparation?\\n\\n\\n***Ans for Q. 2):*** Details, such as the dataset description and experimental settings, are deferred to the Appendix. We are glad to provide further details.\\n\\n\\n> **Q. 3**: F.2 do the induction heads identified here affect intent recognition in section F.1? I.e, if you \\u201cknock out\\u201d the heads then extract features, does the intent prediction performance degrade?\\n\\n***Ans for Q. 3):*** Yes. Knocking out heads usually leads to performance degradation, especially for the heads related to the current intention. However, limited performance degradation is observed if we randomly knock out heads. This is consistent with our results shown in Figure 1.\\n\\n\\n> ***Reference***\\n\\n> [r1] In-context learning and induction heads\\n\\n> [r2] Localizing model behavior with path patching\"}", "{\"title\": \"Responses to Reviewer b9hR\", \"comment\": \"Dear Reviewer #b9hR,\\n\\nWe sincerely appreciate your dedicated time and effort in reviewing our work.\\n\\nIf you have any additional questions or issues that require further clarification, please do not hesitate to let us know. We would be more than happy to address them promptly.\\n\\nThank you once again for your invaluable support and contributions to improving our work. We greatly appreciate your feedback.\\n\\nBest regards,\\n\\nAuthors of #9067\"}", "{\"summary\": \"The current paper attempts development of a unified theoretical model of in-context learning that can help reconcile the incoherent empirical results seen in prior literature (e.g., the effect of data to label map randomization). To achieve this, the authors explicitly model the notion of \\\"intention\\\", i.e., the task the user wants the model to perform on a given datapoint, and assess what conditions lead to the inferred task from the model matching the intended task. This leads to a three-part decomposition of ICL error: (i) error of next-token prediction (essentially the autoregressive training loss); (ii) smoothness of predictions (how drastically they change if context is altered); and (iii) \\\"quality\\\" of demonstrations. All terms have intuitively expected effects and hence reconcile past empirical results.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The prime contribution from a theoretical standpoint in this paper is introduction of the user intent as a first-class citizen in theory. This helps accommodate phenomenology around experiments where alteration of the context lead to the same outputs---if the user intent remains the same, the model is likely to produce the same output. That said, I have some apprehensions on relation to past work and general presentation / experimentation.\", \"weaknesses\": [\"**Relation to past work.** A crucial missing reference seems to be Lin and Lee (\\\"Dual operating model of ICL\\\", ICML 2024). Therein, authors define a prior over tasks the model can solve and assess the effect of context size scaling to reconcile prior empirical results. It would help if authors can delineate how their contributions differ from that of Lin and Lee.\", \"**Experiments.** I understand the paper is theoretically driven, but there are several papers on the theory of ICL at this point and it is unclear which theoretical framework is in fact correct. I hence encourage authors to take a prediction-centric perspective: what predictive claim does your theory offer, and can you demonstrate that said claim checks out experimentally? I am happy with an entirely synthetic experiment. The currently existing experiments suggest induction heads may be the mechanism for intention inference, but that claim is mostly speculative and is not well corroborated by the current induction head knockout experiments (by knocking out induction heads, you might be removing general ICL ability, and it is unclear if where the model is failing is inference of the correct intent).\", \"**General presentation.** I found the writing quite convoluted in several parts of the paper. For example, the introduction has several typos and grammatical errors, and at times has unnecessarily complicated phrasing (e.g., \\\"Numerous outstanding works have revealed the enigmatic characteristics inherent to ICL\\\"). The citation commands are also incorrectly used---only \\\\citet was used, with no \\\\citep usage. If citations are to go in parentheses, then \\\\citep should be used.\"], \"questions\": [\"In Section 4.2, it is mentioned that the effect of high quality demonstrations is multiplicative on ICL error; however, other terms like next-token prediction accuracy and prediction smoothness have an additive effect. Intuitively, I don't follow why this would be the case. It seems to me the first factor in equation 13 (i.e., the factor with several additive terms) is merely assessing how well the model is at pretraining and modeling the distribution at hand, and the second factor (i.e., demonstration shift) assesses how good the demonstrations are at eliciting the desired capabilities. Is this the right interpretation? If not, can you better explain why the first term has additive influence of next-token error and prediction smoothness (which I would have expected to themselves have a multiplicative effect)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer wUqa\", \"comment\": \"Dear Reviewer #wUqa,\\n\\nThank you for your invaluable feedback, which provides essential opportunities to clarify our contributions. We greatly appreciate your insights and would like to address your core concerns below.\\n\\n> **W.1**: The paper is still hard to read and follow. It would benefit from some restructuring.\\n\\n**Response**:\\n\\nIn response to your suggestion, we have moved several equations to the appendix to enhance the clarity of the main text. The current structure is as follows, and we welcome any further suggestions you may have:\\n\\n- _Introduction_: We explain what ICL is and highlight the difference between ICL and traditional learning by reviewing significant works. Then, We introduce various empirical observations and studies, discuss some theoretical explanations, and outline our main contributions: **First, we propose a no-free-lunch Theorem for ICL, demonstrating that the conditions for ICL emergence are not naturally met. Second, our intention model bridges the gap between theoretical and empirical results by providing a novel explanation for ICL behavior when LLMs are conditioned on varying input-output mappings**. We also acknowledge the contributions of [ref1], while clarifying that these two points are not addressed in [ref1].\\n\\n- _Preliminary_: We give the defination of ICL and detail some basic concepts used in ICL.\\n\\n\\n- _Setup of Intention Model_: We introduce all assumptions used in our work.\\n\\n- _Explaining In-Context Learning with Intention Model_: \\n\\n- - Section 4.1: We capture the target to explain ICL and propose to model the discrepancy between the predicted and ground-truth distributions. We find that ICL emerges by producing the ground-truth output using the ground-truth intention and inferring the ground-truth intention by reducing the probability of incorrect intentions.\\n\\n- - Section 4.2: We analyze the intention and output errors, leading to our no-free-lunch theorem for ICL.\\n\\n- - Section 4.3: We explain the learning behavior of ICL under flipped label and random label scenarios.\\n\\n\\n- _Experiments_: We illustrate the intention model by locating task-related intentions and validate our theorem through a flipped label scenario.\\n\\n- Finally, we review related works in Sec. 6, list some limitations in Sec. 7, and make a brief conclusion in Sec. 8.\\n\\nWe greatly respect your expertise in writing and structuring, and we would highly appreciate any further invaluable suggestions.\\n\\n> **W.2**: Lack of significant improvement in theory over Xie et al. \\n\\n**Response:**\\n\\nWe apologize for any misunderstanding. Please allow us to clarify our contributions in comparison to [ref1].\\n\\n- [ref1] employs relatively strong assumptions: i) LLMs can exactly fit the training distribution, focusing mainly on the discrepancy between training and testing distributions, and ii) demonstrations perfectly delineate task intention, which may be challenging in studying flipped label scenarios. Furthermore, their results suggest that ICL predictions can be asymptotically optimized as the number of examples increases.\\n\\n- In contrast, we propose a novel theoretical framework, the intention model, which accommodates common LLMs and demonstrations. Our no-free-lunch theorem indicates that the emergence of ICL depends on the prediction error and prediction noise, determined by i) LLMs\\u2019 prediction error for the next token, ii) LLMs\\u2019 prediction smoothness, and iii) the quality of demonstrations.\\n\\nWe respect and appreciate the contributions of [ref1], and we have highlighted our inspiration from their work. However, the above clarifications support our unique contributions and significant theoretical improvements.\\n\\n\\n> **Q.1**: What intention is being inferred with random labeling instead of the explained flipped labeling? What transition matrix would correspond to that? The problem is that task recall even after random labelling can not be explained with \\\"intentions\\\" learned during pre-training. Although the intention model might be able to explain meta learning in small transformers with toy data, it is not convincing for real ICL in LLMs.\\n\\n**Response:**\\n\\nThank you for your comments. As stated in our work, analyzing a random mechanism is more challenging because each demonstration would be generated with a distinct intention. This relates to mixed-intention scenarios, where even humans may struggle to infer the correct intentions due to random labels. We plan to explore this challenging scenario in future work.\\n\\n[ref1] An explanation of in-context learning as implicit bayesian inference. Xie et al. \\n\\nBest regards,\\n\\nAuthors of #9067\"}", "{\"title\": \"Response\", \"comment\": \"I appreciate the authors' response. I would have hoped for a draft update, but I'm nonetheless happy to keep my score in accordance with the original draft.\"}", "{\"summary\": \"This paper proposes a latent variable model to study in-context learning (ICL) of large language models (LLM). It contributes a theoretical framework based on hidden \\u201cintentions\\u201d which are sampled during LLM generation. A hidden Markov model is used to model the overall generation process of tokens from an intent. The paper proves a novel no free-lunch theorem for the intent model which describes the conditions for when ICL emerges in an LLM (small next-token prediction error and prediction noise). In addition, the paper relates the selection of demonstrations for ICL to matching the original intent, and provides theoretical insights for the process. Empirically, the paper reports experiments on the ability of LLMs to adapt to randomized labels in-context, linear probing for intents, and identifying induction heads for intent inference.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This work introduces a latent variable \\u201cintent\\u201d based model for understanding ICL. The model is a reasonable plausible model for ICL in LLMs, outlining the weak assumption used in the theoretical analysis. Based on the intent model, several theoretical results are given, including conditions for when ICL can emerge in LLMs, under the intent model. The model also provides theoretical understanding to explain the phenomenon of demonstration selection for ICL, and adapting to random label changes (or other task shifts) using ICL.\\n\\nThe paper provides some experimental confirmation of their intent model and theoretical analysis. The experiments show the performance of LLMs under task shifts which appear to support the analysis. Moreover, experiments that use (2-layer) probes to classify intents and isolation inductions heads for intents are included, which provide some justification for their model.\\n\\nThe idea of changing the value of isolated induction heads for intent and observing its effect on the LLM is interesting. The results appear to confirm the importance of the identified heads.\", \"weaknesses\": \"The amount of detailed mathematical analysis in Sections 3 and 4 is dense and obscures the key take away messages from the theory. For example, after detailing the many assumptions for the intent model and deriving the no-lunch theorem in 4.2, the conclusion of the theorem appears to be \\u201cLLMs with weak capability in prediction the next token and non-qualified (?) demonstrations fail to exhibit ICL capability.\\u201d This is very well-known from empirical evidence (since GPT4), so it is not very surprising that the intent model, under reasonable assumptions, arrived at this result. As a key result of this paper, its relevance to a broader ICLR audience is unclear. One suggestion to the authors is to reconsider whether to keep all of the technical details in the main paper, or describe the main takeaways and the theorem, but move the rest into the appendix.\\n\\nThe paper lacks direct empirical confirmation of its theoretical findings. In 5.2 it states that \\u201cit is challenging to calculate or estimate these values (values predicted as necessary for ICL in Theorem 1)\\u201d hence indirect experiments must be done. This is a significant weakness for the theory, as it essentially cannot be experimentally confirmed or falsified. Can the values be estimated in a toy setting? \\n\\nThe intent recognition experiment is not totally convincing. It sets up an intent prediction task and uses features extracts from different layers of LLMs\\u2019, along with a 2-layer network to predict intent. Can this task be solved without using an intent model? Please consider including a baseline that plausibly does not implement an intent model. Details of the task setup are also missing. For example, what are some of the 50 intents? Are they instructions or tasks? How are train/test splits done?\\n\\nA lot of the content in the appendix is highly What is \\u201cn\\u201d in Table 1?\\nHow does Table 1 \\u201cshow that larger LLMs can capture the intention\\u201d? Isn\\u2019t the result just scaling?\\nL1182 \\u201cgroup them into 2 to 5 categories\\u201d which ones? Can you provide more details or samples for the dataset preparation?\\nF.2 do the induction heads identified here affect intent recognition in section F.1? I.e, if you \\u201cknock out\\u201d the heads then extract features, does the intent prediction performance degrade?\\n\\nrelevant to the paper. For example, Appendix D which discusses the theoretical and empirical challenges. Moreover, the experiments that actually try to confirm the plausibility of the intent model within real LLMs are in Appendix F. Please discuss these experiments in the main body of the paper, state their conclusions and how they support the theory.\\n\\nWriting of the paper needs significant editing and proofreading.\", \"just_a_few_examples\": \"L076 \\u201cIntroducing an external to modify\\u201d external what?\\nL225 \\u201cerror of next-token predicting\\u201d\\nL375 \\u201cIt shows that ICL\\u201d what is \\u201cit\\u201d? \\nL385 \\u201ccan be wrapped in the way\\u201d what does \\u201cwrapped\\u201d mean?\\nL498 \\u201cGTP-4\\u201d, \\u201cachieves exciting performance\\u201d what does \\u201cexciting\\u201d mean?\\nL501 \\u201cmatrixes\\u201d\", \"questions\": \"What is \\u201cn\\u201d in Table 1?\\nHow does Table 1 \\u201cshow that larger LLMs can capture the intention\\u201d? Isn\\u2019t the result just scaling?\\nL1182 \\u201cgroup them into 2 to 5 categories\\u201d which ones? Can you provide more details or samples for the dataset preparation?\\nF.2 do the induction heads identified here affect intent recognition in section F.1? I.e, if you \\u201cknock out\\u201d the heads then extract features, does the intent prediction performance degrade?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the detailed response to my concerns. However, my core concerns still remain:\\n1. The paper is still hard to read and follow. It would benefit from some restructuring. \\n2. Lack of significant improvement in theory over Xie et al. The presented intention model does not explain any currently unexplained phenomena (like Min et al.). What intention is being inferred with random labelling instead of the explained flipped labelling? What transition matrix would correspond to that? The problem is that [task recall](https://arxiv.org/abs/2305.09731) even after random labelling can not be explained with \\\"intentions\\\" learned during pre-training. Although the intention model might be able to explain meta learning in small transformers with toy data, it is not convincing for real ICL in LLMs.\\n\\nHence, I will keep my score.\"}", "{\"summary\": \"A theory about In-Context Learning similar to Xie et al's Bayesian Inference theory. Aims to explain some characteristics of ICL noted but not explained by prior works, for example perturbations in the label space. Aims to break down the error in predictions to interpretable quantities like LLMs' performance, quality of demonstrations.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The theory is very similar to Xie et al's Bayesian Inference theory with some modifications like neighbourhood of an intention, etc.\", \"The authors provide an interpretable way to connect LLM performance on next work prediction and the quality of demonstrations to the performance on ICL tasks (under their theory), which is nice.\"], \"weaknesses\": [\"Next token error is conditioned on \\\\theta in assumption 4. Even if the LLM infers intention, that would be solely determined by o_1:i-1, say \\\\theta_inferred. If the LLM is well trained, we can assume that it mostly infers the right intention and hence the condition with \\\\delta_4,1 can be satisfied. But in cases when it fails to infer the right intention, this error may be quite large. So the assumption is strong. Moreover, there is no way to get predictions from the LLM given the same context and some different intention \\\\theta_different, as the intention inference (if that is what happens in LLMs) is implicit and can not be disentangled. The LLM will always infer the same distribution over intentions given the same context, so I don\\u2019t understand assumption 4.\", \"Like Xie et al, the no free lunch theorem in this paper does not explain task learning capabilities of LLMs, on completely novel tasks (\\\\theta not in the intention family) unrelated to pretraining text.\", \"The whole external mapping thing does not make much sense to me. Users do not provide an external mapping when getting the model outputs; they directly present demonstrations with this transformation. If the LLM infers this mapping, it can only be implicit. Making it a part of the original intention family. It is hard to tell if a mapping like flipped labels is present in the intention family learnt by the model during pretraining. If the mapping is randomly generated, this becomes a contradiction as it is surely not present in the pretraining corpus. The authors say that they will explore this in future work, but it is an important point that makes Xie et al\\u2019s theory and this paper\\u2019s theory inconsistent with Min et al\\u2019s results where the model is able to infer the right intention with randomly generated labels.\", \"Experiment section is too small and severely cut (defered to the appendix). Which model? What ICL task? It is an important part of the paper and needs to be put in the main text. Also, the evidence is circumstantial. Intervening on model activations can imply so many things related to completely different theories. How can we claim that these results imply anything specifically about the intention model? This also highlights the difference between theory and practice, as the presented theory does not elicit easily verifiable causal experiments.\", \"The paper is very hard to read and follow. Like\", \"section 3.3, should define \\\\delta_1,1, 1,2, 4,1, etc. What do forbidden tokens mean, what are forbidden transitions?\", \"citations are placed very poorly. Sometimes before the sentence, sometimes after, sometimes unrelated; without proper usage of citet/citep.\", \"[nitpicky] \\u201cAdvanced works\\u201d: what is advanced, and compared to what?\", \"\\u201cfortunately consistent\\u201d: while good to know that the authors felt relieved that the method worked, it maybe inappropriate in a technical report. Some words feel too artificially placed like \\u201cenigmatic characteristics\\u201d.\", \"\\u201cThese intriguing phenomena highlight the difference between traditional learning and ICL Kossen et al. (2024). These seminal explorations provide a fruitful guide for understanding and explaining ICL.\\u201d These sentences don\\u2019t flow well. which works?\", \"\\u201ca relatively weak connection to empirical investigations, potentially due to strong assumptions\\u201d [ICML 2024 paper](https://arxiv.org/abs/2310.08540) illustrates this and may be appropriately cited.\", \"Line 77: \\u201cIntroducing an external to modify \\u2026\\u201d, external what?\", \"Line 116: definition of Sn can be confusing to read.\", \"o is used for both delimiter and document tokens. confusing.\", \"Line 297: \\\\theta_g is now called inferred intention, previously it was ground truth. confusing.\", \"Line 292: where does m come from? What does it mean? Unclear.\", \"Table 1 is referred to as Table 2 in the text.\", \"Many more ...\", \"Although I don't believe in reducing review scores for small clarity and writing issues, this paper seriously suffers from them and hampers the readers ability to understand the concepts conveyed. I would recommend a clear rewrite with more help from experienced co-authors.\"], \"questions\": [\"Why is it called the no-free lunch theorem? No one expects ICL to emerge in models with high next-token prediction errors, or models to perform well on under-specified ICL tasks.\", \"Why do we need to have a neighborhood of intentions, which are exactly modeled; compared to Xie et al\\u2019s exact intention which may have some modeling error (as is generally the case with all ML models).\", \"What is the difference between Assumption 3 and 5?\", \"Section 5.2: Why is it difficult to estimate next-token error of LLMs? And the results of this section don't mean much. Everyone would expect the ICL performance to go down with a more complex task. This does not imply that the model is performing inference of a complex intention as presented by the theory. There is no \\\"introduction of an external matrix T\\u201d, it is all in the theory. Where is the causal link that implies that the model is figuring out this new matrix T?\", \"In all, I find it hard to justify this paper because the theory it presents does not make any verifiable new predictions that Xie et al did not already make, and in my opinion does not explain the previously unexplained phenomena like Min et al.\", \"I will increase the score if the paper is clearly rewritten. I know that this is a long paper and hard to put concisely in 10 pages, but in this sort of work, the paper would greatly benefit if some of the underspecified theory (which makes it hard to understand) is moved completely to the appendix, and some more readable results like the experiments are moved to the front. The distinction between the results presented in this paper and Xie et al are unclear which could have been greatly improved in the introduction section. These are just some of my personal suggestions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' efforts during the rebuttal period. And I believe there is merit in the intention model theory but it needs to be consistent.\\n\\nIt is true that inferring the transition matrix for random labelling will be challenging under this intention model, but Min et al actually showed that models can easily figure out this case where the labels are not useful and maintain performance. Pan et al then showed that this ability (called task recall) does not depend on pairing of inputs and outputs, is present in smaller models and exhibited when the number of samples are small. Larger models and more demos start to learn (task learning) this newly demonstrated mapping (even if it is random). \\n\\nUnder the intention model, task recall can not be explained properly. We should always have a high error with random mapping under the intention model as the transition matrix should be hard to figure out. As the data shows this is not the case, and is different from the theory, there is something missing in the theory.\\n\\nI believe that the theory will benefit from another revision taking care of this aspect so that there are no inconsistencies between model behavior and the theory.\\n\\nI am increasing the score to 5 to reflect the potential usefulness of this theory, but will still reject it due to this inconsistency.\"}", "{\"title\": \"Reply to rebuttal\", \"comment\": \"The reviewer would like to thank the authors for taking the effort to submit detailed responses to my comments and revising the paper. I've raise my score to reflect that some of my concerns have been address and questions clarified.\"}", "{\"title\": \"Responses to Reviewer wUqa\", \"comment\": \"Dear Reviewer wUqa,\\n\\nWe would like to express our sincere gratitude for your invaluable assistance during this busy period. Your in\\u2010depth comments and suggestions have significantly enhanced the quality of our work. For instance, our paper did not initially address the issue of inconsistency. In response to your insightful comments, we will incorporate the following descriptions to clarify this inconsistency in the revised paper.\\n\\nOur proposed intention model can elucidate the paradoxical views on whether ICL predictions depend on in\\u2010context labels. \\n\\n- Min et al. observe that leveraging randomly assigned labels for the inputs barely affects ICL performance [r1]. \\n\\n- Conversely, Kossen et al. reject the null hypothesis that ICL predictions do not depend on the conditional label distribution of demonstrations, concluding that label randomization adversely affects ICL predictions [r2].\\n\\nAccording to our intention model theory, the random label scenario can be formulated using a random transition matrix as shown in Eq. 10. Each demonstration is assigned a distinct random transition matrix $\\\\mathcal{T}^r$ to modify the intention of demonstrations, leading to a significant demonstration shift $\\\\delta^r = KL(q_M(x,y|\\\\mathcal{T}^r\\\\theta_d)||q_M(x,y|\\\\mathcal{T}^r\\\\theta_d))>KL(q_M(x,y|\\\\theta_d)||q_M(x,y|\\\\theta_d))=\\\\delta$. Thus, introducing random transition matrix leads to a more extensive demonstration shift. Given this background, we can address the question-- why are these contradictory results observed in [r1] and [r2]? Our intention model provides a clear explanation for this.\\n\\n- 1) _The demonstration shift matters_. According to our intention model theory, the output error is decomposed into three terms, $\\\\eta_e$, $\\\\eta_{n_1}$, and $\\\\eta_{n_2}$. The first term $\\\\eta_e<k_t\\\\delta_{4,1}=\\\\delta_{4,1}$ depends on LLMs' error of next-token prediction. Here, $k_t = 1$ as all considered datasets are classification and multi-choice tasks. The second term decreases exponentially as the number of demonstrations increases. The demonstration shift scales the last term $p_M(\\\\varTheta_{\\\\epsilon})$. Thus, a larger demonstration shift results in more significant output errors.\\n\\n- 2) _The number of demonstration matters_. According to the prediction noise $\\\\eta_{n_2}$ (Eq. 9), more demonstrations lead to higher prediction noise, i.e., worse predicted outcomes, given the same demonstration shift. Thus, increasing the number of demonstrations leads to worse ICL performance. In this regard, [r1] used merely 16 demonstrations, while [r2] leveraged more than 50 demonstrations to evaluate ICL performance. Consequently, [r1] observed relatively small prediction errors, while [r2] observed higher ones.\\n\\nThank you once again for your in-depth comments, which have significantly improved the quality of our work. We are so excited to discuss ICL further with you. \\n\\n[r1] Rethinking the role of demonstrations: What makes in-context learning work? Min et al. EMNLP, 2022\\n\\n[r2] In-context learning learns label relationships but is not conventional learning. Kossen et al. ICLR, 2024\"}", "{\"title\": \"Responses to Reviewer A7mQ\", \"comment\": \"We would like to express our gratitude to the reviewer for the time and effort dedicated to reviewing our work. We appreciate your encouraging remarks on the applicability of our proposed theoretical framework. In response to your valuable comments, we have provided detailed responses below. We hope that these responses can satisfactorily address your concerns.\\n\\n## Response to weaknesses:\\n\\n> **W. 1**: A crucial missing reference seems to be Lin and Lee (\\\"Dual operating model of ICL\\\", ICML 2024). It would help if authors can delineate how their contributions differ from that of Lin and Lee.\\n\\n***Ans for W. 1):*** Thanks for your constructive comments. Accordingly, we highlight the difference between our work and the mentioned work. \\n\\n- Lin et al.'s work [r1] considers a specific regression task and assesses the performance of ICL [1]. Their work focuses on context length's impact on ICL and explains two Real-World phenomena.\\n- We propose a novel theoretical framework, i.e., the intention model. This allows for common models and downstream tasks and derives the no-free-lunch theorem of ICL. Namely, whether ICL emerges depends on the prediction error and prediction noise, which are determined by i) LLMs\\u2019 prediction error of the next token, ii) LLMs\\u2019 prediction smoothness, and iii) the quality of demonstrations.\\n\\nIn response to your kind suggestions, we will add the above discussions to the revision.\\n\\n> **W. 2**: I hence encourage authors to take a prediction-centric perspective: what predictive claim does your theory offer, and can you demonstrate that said claim checks out experimentally? \\n\\n***Ans for W. 2):*** According to your suggestions, we have highlighted the theoretical claims verified by experiments in the revision. \\n\\nOur theorem shows the no-free-lunch nature of ICL, involving prediction error and prediction noise. Thus, a straightforward approach to validate the theorem is to calculate the prediction error and noise. However, it is challenging to calculate or estimate these related values, i.e., the **error of next-token prediction** $\\\\delta_3$, LLM\\u2019s **prediction smoothness**, and **demonstration shift**. Thus, providing a quantitive analysis to verify the theorem is challenging.\\n\\nFortunately, some related factors can be controlled implicitly. \\n- The **error of next-token prediction** $\\\\delta_3$ could be related to the LLMs' performance under general tasks, i.e., $\\\\delta_3$ of GPT4 is less than that of GPT2. Thus, we compare the ICL capability of different models, i.e., LLaMa-7B, Mistral-7B, and GPT-4, with results in **Table 1**, verifying that LLMs with smaller $\\\\delta_3$ exhibit higher ICL performance. \\n\\n- The **demonstration shift** is captured by $\\\\epsilon = KL(p(x,y|\\\\theta_d) || p_M(x,y|\\\\theta_g))$. According to Eq. (14), the demonstration shift would vary with the external transition matrix $\\\\mathcal{T}$, leading to larger demonstration shift, i.e., $KL(p(x,y|\\\\mathcal{T} \\\\theta_d) || p_M(x,y|\\\\mathcal{T}\\\\theta_g)) = \\\\epsilon^\\\\prime \\\\geq \\\\epsilon = KL(p(x,y|\\\\theta_d) || p_M(x,y|\\\\theta_g))$. According to Eq. (13), applying $\\\\mathcal{T}$ leads to larger prediction noise. Thus, we evaluate ICL performance under different matrix $\\\\mathcal{T}$ with results in **Table 2**, where $\\\\mathcal{T}$ is realized using different mappings, i.e., $y=y$, $y = (y+1)\\\\ mod\\\\ 5$, and $y = (3y+1)\\\\ mod\\\\ 5$. This verifies that a larger demonstration shift degrades ICL performance.\\n\\nAligning with your valuable comments, we failed to provide direct experimental validation related to the LLM\\u2019s prediction smoothness. Thus, we will explore some implicit approach to reflecting the nature of LLMs in our future work. \\n\\n> **W. 3**: I found the writing quite convoluted in several parts of the paper.\\n\\n***Ans for W. 3):*** We apologize for the convoluted writing and grammatical errors in our paper. We will carefully revise the paper to make it more straightforward and more concise and correct all citation and grammatical errors.\\n\\n## Response to questions:\\n\\n> **Q. 1**: Can you better explain why the first term has additive influence of next-token error and prediction smoothness (which I would have expected to themselves have a multiplicative effect)?\\n\\n***Ans for Q. 1):*** Following your valuable question, we have added detailed explanations of the additive influence of next-token error and prediction smoothness. \\nWe agree that your intuition about next-token error and prediction smoothness is correct. Within the intention neighborhood inferred by LLMs, the actual effects of these two factors accumulate gradually by multiplying, leading to a more complex error term. We want this error to be easier to understand, so we use a binomial expansion and shrink the higher-order term to a constant. This provides a more intuitive understanding of the actual effect of the error term.\\n\\n> ***Reference***\\n> \\n> [r1] Dual Operating Modes of In-Context Learning, ICML 2024\"}" ] }
2Ez4dhU3NG
SPLR: A Spiking Neural Network for Long-Range Temporal Dependency Learning
[ "Biswadeep Chakraborty", "Saibal Mukhopadhyay" ]
Spiking Neural Networks (SNNs) offer an efficient framework for processing event-driven data due to their sparse, spike-based communication, making them ideal for real-time tasks. However, their inability to capture long-range dependencies limits their effectiveness in complex temporal modeling. To address this challenge, we present a **SPLR (SPiking Network for Learning Long-range Relations)**, a novel architecture designed to overcome these limitations. The core contribution of SPLR is the **Spike-Aware HiPPO (SA-HiPPO)** mechanism, which adapts the HiPPO framework for discrete, spike-driven inputs, enabling efficient long-range memory retention in event-driven systems. Additionally, SPLR includes a convolutional layer that integrates state-space dynamics to enhance feature extraction while preserving the efficiency of sparse, asynchronous processing. Together, these innovations enable SPLR to model both short- and long-term dependencies effectively, outperforming prior methods on various event-based datasets. Experimental results demonstrate that SPLR achieves superior performance in tasks requiring fine-grained temporal dynamics and long-range memory, establishing it as a scalable and efficient solution for real-time applications such as event-based vision and sensor fusion in neuromorphic computing.
[ "spiking neural networks", "long range dependencies", "event data modelling", "hippo matrix", "state space models" ]
Reject
https://openreview.net/pdf?id=2Ez4dhU3NG
https://openreview.net/forum?id=2Ez4dhU3NG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yegCEKy5V2", "ru3X7gYkBp", "rqXS6EMNwE", "nqehm7hSGc", "nOusHGfZEO", "lVLB7ykBN6", "kvLczdYhI1", "jAntSjLU2f", "hECrVebUOG", "e1NZsQQeao", "dwbT49EMz9", "c3UipeZJUZ", "byLXRCMhBc", "ZU1qR9ooKg", "W5FkmwDpi8", "UtAiqp4gEP", "UCjP2M0JWX", "Ss65vXMc2s", "LlG7hw445z", "KmlA7Qi3Dd", "JNd8jQ5NDk", "HSLE2Jw1tR", "G3cA9z1Yeh", "EdRkLYcKX7", "DzvnuwSY4x", "DBhc5IYhp0", "9mbghNGJV9", "8OBldJoIvs" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730853646825, 1732076304548, 1732075479358, 1732075174458, 1730717294986, 1732068215838, 1730714283802, 1730542224998, 1732067903611, 1732074230502, 1732075678063, 1732075746515, 1732073468560, 1732071547640, 1732073579864, 1737523964702, 1732072136164, 1734749359185, 1732075324291, 1732074199210, 1732075979355, 1732077039931, 1732075834634, 1732067154372, 1729485684906, 1732074670775, 1732075438439, 1732074411297 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9155/Reviewer_Y4sN" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Reviewer_ARZ5" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Reviewer_LQNp" ], [ "ICLR.cc/2025/Conference/Submission9155/Reviewer_TiCG" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Area_Chair_UEMa" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Reviewer_tvp3" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ], [ "ICLR.cc/2025/Conference/Submission9155/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The manuscript introduces SPLR, a spiking neural network model designed to capture long-range temporal relationships by integrating state-space dynamics with spiking neuron models and augmenting the HiPPO framework to handle spike-driven inputs. The proposed model reportedly achieves high performance comparable to other models on event-based datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The manuscript presents an innovative approach by augmenting spiking dynamics with state-space model dynamics, potentially enabling spiking models to tackle more challenging tasks that require capturing long-range temporal dependencies. This direction could be of significant interest in broadening the applications of spiking models in complex temporal tasks.\", \"weaknesses\": \"While the proposed method is intriguing and addresses the relevant challenge of enabling spiking models to capture long-term dependencies, the manuscript has several critical weaknesses.\\n\\nFirstly, the presentation lacks clarity, making it difficult to fully grasp how the method works, interpret the experimental results, or potentially reproduce the findings. Essential details expected in a research paper, such as a discussion of related works, are missing (e.g., [1]). Additionally, fundamental concepts necessary to understand the work are not well-introduced; although state-space models (SSMs) have gained popularity recently, they are not widely understood in machine learning, so a brief overview would be beneficial.\\n\\nThe manuscript also omits essential citations, including the original HiPPO framework, which is central to this work, and does not offer a proper explanation of how it functions. The equations are unclear; while convolutions are frequently mentioned, no equations illustrate how or where convolutions are applied. Variable definitions are sometimes confusing or incomplete; for instance, on line 187, $\\\\Delta t$ is described as the time difference between spikes $i$ and $j$, but it is unclear what $i$ and $j$ refer to in the context of the matrix $F_{ij}$, as it operates over the hidden state rather than directly on spikes.\\n\\nRegarding the experiments, the manuscript lacks details about the setup, hindering the interpretability of the results. For example, it\\u2019s unclear what \\u201cSequential CIFAR-10\\u201d entails, such as the sequence length or frame generation process. Similarly, for the DVS Gesture dataset, it's ambiguous whether the processing was done for independent events or if events were accumulated into event frames.\\n\\n[1] Stan, MI., Rhodes, O. Learning long sequences in spiking neural networks. Sci Rep 14, 21957 (2024). https://doi.org/10.1038/s41598-024-71678-8\", \"questions\": [\"How does the dendritic attention layer differ from a current-based (CUBA) leaky integrate-and-fire (LIF) neuron model?\", \"How are spikes produced between the SPLR convolution layers?\", \"How are convolutions applied within the proposed model?\", \"In equation (1), is the variable $u(t)$ a binary vector representing input spikes?\", \"How does the inclusion of a decay matrix in the HiPPO framework enhance memory retention?\", \"Could you clarify the setup for the Sequential CIFAR-10 and CIFAR-100 tasks? How are frames sequenced? Similarly, could you elaborate on the experimental setup for the other datasets?\", \"For clarification, could you specify what spikes $i$ and $j$ refer to in line 187?\", \"Is the manuscript proposing a new type of spiking neuron, or an entire network architecture?\", \"Since the manuscript emphasizes improving SNNs' capacity to handle long-term dependencies, could you elaborate on why simple LIF models face challenges with this?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Weaknesses Part 3\", \"comment\": \"> Is the author's claim of dendrite-based spatio-temporal pooling just spatial pooling after the output of DH-LIF? Where is there temporal pooling?\\n\\n Thank you for raising this point. To clarify, our claim of dendrite-based spatio-temporal pooling refers to both **temporal** and **spatial pooling**, not just spatial pooling after DH-LIF neurons.\\n\\n- **Temporal Pooling**: Each DH-LIF neuron has multiple dendritic branches with distinct timing factors, acting as temporal filters that capture input dynamics at different timescales. These branches integrate temporal information by selectively emphasizing or suppressing features based on their temporal properties, performing pooling across time before further processing. \\n- **Spatial Pooling**: After temporal pooling at the dendritic level, spatial pooling is applied to reduce the spatial dimensionality of the feature maps while preserving the extracted temporal features.\\n\\nThis two-step process allows DH-LIF neurons to first aggregate temporal information through dendritic mechanisms and then reduce spatial complexity, enabling efficient extraction and integration of spatio-temporal features from asynchronous spike inputs.\\n\\nWe have added this distinction in the revised manuscript to ensure the role of temporal pooling at the dendritic level is explicitly highlighted. Thank you for pointing this out.\\n___\\n\\n> I suggest that the authors compare their method with other optimized spiking neural networks capable of long-range temporal modeling, such as [2,3,4].\\n\\nWe appreciate the reviewer\\u2019s suggestion and have carefully considered how our method compares to the referenced models. Below, we outline the key differences and include a quantitative comparison in the table provided.\\n\\n1. **Autaptic Synaptic Circuit (Wang & Yu, ICML 2024)**: \\n The Autaptic Synaptic Circuit enhances temporal memory and spatial coordination using two adaptive pathways. While similar in leveraging learnable timing factors, our approach extends this by incorporating a dendrite-level attention mechanism, allowing dynamic weighting of spatio-temporal features and enabling diverse temporal processing at the dendritic level.\\n\\n2. **TC-LIF: Two-Compartment Spiking Neuron Model (Zhang et al., AAAI 2024)**: \\n TC-LIF uses separate dendritic and somatic compartments to model long-term temporal dependencies. Our DH-LIF neurons extend this concept by introducing heterogeneous timing factors across dendritic branches, which support richer temporal diversity and are further enhanced by our dendrite-based pooling mechanism for spatio-temporal integration.\\n\\n3. **TIM: Temporal Interaction Module (Shen et al., IJCAI 2024)**: \\n TIM integrates into spiking transformers to handle temporal information by combining historical and current inputs. In contrast, our model focuses on biologically inspired processing at the dendritic level, where temporal diversity is handled through the DH-LIF neurons and dendritic attention rather than modifications to the transformer\\u2019s attention mechanism.\\n\\n| **Dataset** | **TIM [2] Params** | **TIM [2] Acc** | **TCLIF [3] Params** | **TCLIF [3] Acc** | **STC-LIF [4] Params** | **STC-LIF [4] Acc** | **SPLR Normal Params** | **SPLR Normal Acc** | **SPLR-Tiny Params** | **SPLR-Tiny Acc** |\\n|-------------|---------------------|-----------------|-----------------------|-------------------|-------------------------|---------------------|-------------------------|---------------------|----------------------|-------------------|\\n| SHD | 2.59M | 86.3 | 0.142M | 88.91 | | | 0.513M | 94.68 | 0.033M | 86.24 |\\n| DVS128 | | | | | 3.922M | 83.0 | 0.513M | 96.5 | 0.033M | 89.2 |\\n| SSC | 110.8K | 61.09 | | | | | 0.513M | 87.52 | 0.033M | 72.19 |\\n\\n\\nOur SPLR model combines dendritic heterogeneity, adaptive attention, and spatio-temporal pooling to balance biological inspiration and computational efficiency. As demonstrated in the table, SPLR achieves higher accuracy with significantly fewer parameters in most cases, particularly on complex datasets such as SHD and DVS128. We will include quantitative comparisons and expand this discussion in the revised manuscript to highlight our method's advantages in long-range temporal modeling.\\n\\n___\", \"edit\": \"We updated the table for the STC-LIF numbers are for DVS-Gesture and not for the SHD dataset\"}", "{\"title\": \"Rebuttal to Weaknesses Part 6\", \"comment\": [\"> Innovation.The originality of this work is limited. The dendrite modeling directly uses DH-LIF, and the SSM modeling is simply an SNN combined with HIPPO. I did not observe any standout contributions in this approach.\", \"We respectfully disagree with the assessment that the originality of this work is limited. While our work builds on foundational elements such as DH-LIF and HiPPO, the key innovations lie in how these components are adapted, extended, and integrated into a unified framework specifically tailored for spiking neural networks (SNNs). Below, we highlight the unique contributions of this work:\", \"1. **Spike-Aware HiPPO (SA-HiPPO):**\", \"The HiPPO framework has been extensively used for memory retention in continuous-time models. However, adapting it for the sparse and asynchronous nature of spiking inputs required significant innovation.\", \"SA-HiPPO introduces a decay matrix that dynamically adjusts memory retention based on the timing of incoming spikes. This extension ensures that recent spikes are prioritized while retaining a compressed representation of older information, enabling effective long-term memory retention in SNNs.\", \"This adaptation is crucial for leveraging HiPPO in event-driven systems, and to the best of our knowledge, this is the first work to introduce this mechanism in the context of spiking neural networks.\", \"2. **SPLR Convolution Layer:**\", \"Our SPLR Convolution Layer integrates spike-driven state-space dynamics with advanced techniques such as Normal Plus Low-Rank (NPLR) decomposition and FFT-based convolutions.\", \"While NPLR and FFT have been used in other contexts, their combination within a spike-driven framework is novel. This enables the model to achieve computational efficiency and scalability without sacrificing the ability to capture complex spatio-temporal dependencies.\", \"The use of state-space dynamics with event-driven updates provides a structured mechanism for handling both short- and long-term dependencies in SNNs, setting SPLR apart from conventional approaches.\", \"3. **Unified Architecture:**\", \"SPLR represents a cohesive framework that seamlessly integrates dendritic mechanisms (via DH-LIF), SA-HiPPO, and spike-driven state-space dynamics. This integration was specifically designed to address two key challenges in SNNs: long-term memory retention and asynchronous processing.\", \"Unlike prior works, which primarily adapt artificial neural network (ANN) architectures to spiking formats, SPLR is built from the ground up to natively handle the unique constraints and opportunities of spiking computation.\", \"4. **Substantial Experimental Improvements:**\", \"Our experimental results demonstrate that SPLR achieves substantial improvements over state-of-the-art methods across multiple event-based benchmarks (e.g., DVS Gesture, Celex-HAR, SSC). The model excels in tasks requiring long-range temporal dependencies while maintaining computational efficiency, highlighting the practical advantages of our innovations.\", \"These results underscore the impact of combining SA-HiPPO, state-space modeling, and dendritic mechanisms within a unified spiking framework.\", \"**Revisions to Manuscript:**\", \"To address potential misunderstandings about the novelty of our work, we will:\", \"Expand the discussion in the **Introduction** and **Methods** sections to emphasize the unique aspects of SA-HiPPO, SPLR Convolution, and the unified architecture.\", \"Highlight the differences between SPLR and prior work, particularly in terms of memory retention mechanisms, computational efficiency, and spiking-specific adaptations.\", \"Include additional details in the **Related Work** section to contextualize our contributions relative to DH-LIF, HiPPO, and other state-space modeling approaches.\", \"We believe these innovations collectively represent a significant advancement in the field of spiking neural networks and event-based processing. By bridging the gap between spiking computation and state-space modeling, SPLR establishes itself as a novel and impactful contribution. We hope this explanation clarifies the originality and significance of our work, and we would be glad to address any further questions or concerns.\"]}", "{\"title\": \"Rebuttal to Weaknesses Part 3\", \"comment\": \"> While sparse properties and FLOPS reduction are mentioned, the evaluation details are not provided.\\n\\n Thank you for highlighting the need to provide clearer evaluation details regarding FLOPS reduction and sparsity. We recognize the importance of explicitly connecting these properties to the empirical results, and we have clarified this in the revised manuscript as follows:\\n\\n1. SPLR achieves **significant FLOPS reduction** compared to state-of-the-art methods, as demonstrated in Figures 2 and 3. These results are particularly pronounced on high-resolution, event-based datasets like DVS Gesture and Celex-HAR, where SPLR variants (Normal, Small, Tiny) provide superior accuracy and computational efficiency. For instance:\\n - SPLR Tiny achieves a FLOPS count as low as 0.034 GFLOPs on Celex-HAR while maintaining competitive accuracy, showcasing its suitability for resource-constrained applications.\\n\\n2. The **FLOPS-accuracy trade-off** illustrated in Figures 2 and 3 aligns with our theoretical predictions about SPLR\\u2019s efficiency. Key contributing factors include:\\n - **NPLR Decomposition and FFT Convolutions**: These components allow SPLR to handle high-dimensional inputs with significantly reduced computational costs compared to standard dense convolution methods.\\n - **Sparsity-Driven Efficiency**: By leveraging the asynchronous, sparse nature of event-driven data, SPLR reduces redundant computations, further enhancing its efficiency for real-time applications.\\n\\n3. The event-driven processing capabilities of SPLR naturally exploit **sparsity in the input**, enabling selective updates and reducing overall computational overhead. This property is evident in SPLR\\u2019s ability to maintain both high accuracy and low computational cost across various datasets.\\n\\n\\n> (4) The supplementary materials contain excessive repetition and are overly lengthy, making it difficult for readers to stay engaged.\", \"we_have_revised_the_supplementary_materials_as_follows\": \"1. Repetitive sections have been consolidated to streamline the content and **reduce redundancies.** We have prioritized the inclusion of essential information and moved less critical details, such as exhaustive derivations and auxiliary results, to the supplementary materials. \\n\\n2. The **supplementary materials have been reorganized** into clearly defined sections with concise summaries. This makes it easier for readers to navigate specific topics, such as dataset descriptions, theoretical proofs, or experimental setups.\\n\\n3. Additional formatting changes, such as the inclusion of a table of symbols and variables, have been implemented to **improve readability** and accessibility for readers.\\n\\n---\\n\\n> (5) Overuse of Abbreviations: The excessive use of abbreviations makes the paper difficult to follow. For instance, I cannot understand why \\\"Spiking Network for Learning Long-Range Relations\\\" is abbreviated as \\\"SPLR\\\"\\u2014what does \\\"P\\\" represent here? Is there a difference between \\\"Spiking Network\\\" and \\\"Spiking Neural Networks (SNNs)\\\"? Additionally, what does HIPPO stand for? Shouldn\\u2019t the authors explain these abbreviations first? \\n\\nWe have reviewed all abbreviations in the manuscript to ensure they are clearly defined at first use, reducing unnecessary jargon to enhance readability.\\n\\n1. The abbreviation \\\"SPLR\\\" stands for \\\"SPiking Network for learning Long-Range Relations,\\\" \\n\\n2. \\\"Spiking Neural Network (SNN)\\\" refers to the general category of spiking neural architectures, while \\\"Spiking Network\\\" in \\\"SPLR\\\" was used for brevity. We have revised the text to consistently use \\\"Spiking Neural Network\\\" (SNN) for clarity.\\n\\n3. HiPPO stands for \\\"Highly Optimized Polynomial Projection.\\\" This framework provides memory retention in continuous-time models. [1]\\n\\n**References**\\n- Gu, A., Dao, T., Ermon, S., Rudra, A. and R\\u00e9, C., 2020. Hippo: Recurrent memory with optimal polynomial projections. Advances in neural information processing systems, 33, pp.1474-1487.\\n\\n---\"}", "{\"summary\": \"This work introduces SPLR (Spiking Network for Learning Long-Range Relations), designed to efficiently capture long-range temporal dependencies while maintaining the hallmark efficiency of spike-driven architectures. SPLR integrates a state-space convolutional layer and a Spike-Aware HiPPO (SA-HiPPO) layer, addressing the limitations of conventional SNNs in complex temporal modeling. The SPLR convolutional layer leverages state-space dynamics to enhance feature extraction, capturing spatial and temporal complexities in event-driven data while preserving the efficiency of sparse spike-driven processing. The SA-HiPPO layer adapts the HiPPO framework to spike-based formats, enabling efficient long-term memory retention. Through dendrite-based spatiotemporal pooling and FFT-based convolution techniques, SPLR demonstrates scalability when processing high-resolution event streams and outperforms traditional methods across various event-driven tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors propose SPLR which effectively captures long-range temporal dependencies, addressing limitations in traditional SNNs and enhancing temporal modeling capabilities for complex event-driven tasks.\\n2. The experiments show good results. SPLR achieves both computational efficiency and scalability.\", \"weaknesses\": \"The primary weakness of this paper lies in its writing, which significantly hinders clarity and understanding. A reorganization is recommended to improve readability and logical flow.\\n\\n1. Writing and Structure. For instance, Section 2 presents the SPLR components sequentially but lacks an overview that connects each part to the overall model structure, making it difficult for readers to understand how the parts interact. Section 3 is dense with theoretical content and proofs but does not clearly convey the main ideas, making it hard to follow the section\\u2019s intended focus.\\n\\n2. Lack of Citations. The paper frequently omits citations in crucial areas. For example, although modifications to the HiPPO framework are proposed, no supporting references are provided. Furthermore, DH-LIF is introduced without citation, and the reference for this component is missing from the bibliography, weakening the academic rigor of the paper.\\n\\n3. Confusions and Errors. There are several errors and confusions throughout the paper, such as the incorrect abbreviation of the Spiking State-Space Model as SPLR in line 855. Such errors further impact the readability and precision of the work.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Questions Part 1\", \"comment\": \">- How does the dendritic attention layer differ from a current-based (CUBA) leaky integrate-and-fire (LIF) neuron model?\\n\\nThe dendritic attention layer builds upon the standard CUBA LIF model by introducing multiple dendritic branches, each with its own distinct timing parameter $\\\\tau_d$. In the CUBA LIF model, inputs are uniformly integrated at the soma with a single timescale for current integration, limiting its ability to process complex temporal patterns. In contrast, the dendritic attention layer performs independent temporal filtering across multiple dendritic branches before aggregating the signals at the soma. These branches enable the neuron to dynamically process inputs from different temporal windows, providing a mechanism for selective attention to asynchronous and temporally distributed inputs.\\n\\nThis architectural extension allows the model to capture complex spatio-temporal patterns, enhancing its adaptability and flexibility compared to the single-timescale processing of CUBA LIF neurons. Additionally, the dendritic attention mechanism aligns more closely with biological processing by incorporating diverse temporal dynamics across dendritic branches.\\n\\nTo ensure clarity, we have updated the main text to explicitly describe this distinction and included further details in **Supplementary Section C.**\\n\\n\\n> -How are spikes produced between the SPLR convolution layers?\\n \\nSpikes are generated through the dynamics of the dendritic and soma compartments modeled by the **Dendrite Attention Layer** and **DH-LIF neurons**. The mechanism can be summarized as follows:\\n\\n1. **Dendrite Dynamics:** \\n Each dendritic branch accumulates and processes inputs over time, acting as a temporal filter. The dynamics of the dendritic current are governed by:\\n$$\\n i_d(t+1) = \\\\alpha_d i_d(t) + \\\\sum_{j \\\\in \\\\mathcal{N}_d} w_j p_j,\\n $$\\n where $ \\\\alpha_d = e^{-\\\\frac{1}{\\\\tau_d}} $ is the decay rate determined by the dendritic branch\\u2019s time constant $ \\\\tau_d $, $ w_j $ represents the synaptic weight of the input $p_j $, and $ \\\\mathcal{N}_d $ denotes the set of presynaptic inputs to dendrite $ d $.\\n\\n2. **Soma Integration:** \\n The currents from all dendritic branches are aggregated at the soma, where they are further integrated over time. The soma's membrane potential evolves according to:\\n $$\\n V(t+1) = \\\\beta V(t) + \\\\sum_d g_d i_d(t),\\n $$\\n where $ \\\\beta = e^{-\\\\frac{1}{\\\\tau_s}} $ is the soma\\u2019s decay factor, and $ g_d $ is the coupling strength of dendrite $ d \\\\$ to the soma.\\n\\n3. **Spike Firing:** \\n A spike is generated when the membrane potential $ V(t) $ exceeds a predefined threshold $ V_{\\\\text{th}}$. After firing, the membrane potential resets. These spikes propagate forward as inputs to the next SPLR convolution layer, maintaining the asynchronous event-based nature of the system.\\n\\nWe have clarified this process in **Supplementary Section C** under SPLR Convolution Layer. Thank you for highlighting this question, as it allowed us to ensure the description of the spike generation mechanism is more comprehensive.\"}", "{\"summary\": \"This paper presents a Spiking Network for Learning Long-Range Relations (SPLR). The proposed SPLR model comprises the dendrite attention layer, the Spike-Aware HiPPO (SA-HiPPO) layer, and the SPLR convolution layer. These modules enhance the long-range temporal dependency learning capability of SPLR. Experimental results demonstrate that SPLR outperforms prior methods in tasks requiring both fine-grained temporal dynamics and the retention of long-range dependencies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-written and technically solid. Each module in SPLR is introduced in detail and highlighted in different colors. This paper presents a detailed theoretical analysis, including the long-range dependency capability and stability of SPLR.\\n2. The proposed SPLR model achieves competitive accuracy with less computational overhead than other state-of-the-art models on the Celex-HAR dataset.\", \"weaknesses\": \"The proposed SPLR model incorporates several non-spike operations, including the NPLR decomposition and FFT convolution. It makes SPLR a hybrid architecture instead of a pure spiking neural network. The hybrid nature may compromise its hardware compatibility and make it difficult to deploy on neuromorphic hardware.\", \"questions\": \"1. This paper only compares FLOPs vs. accuracy between the proposed SPLR and other models. Does SPLR have an advantage over other methods in terms of inference latency?\\n2. The ablation studies only examine the effects of removing the dendrite attention layer and replacing SA-HiPPO with LIF. What if we replace NPLR decomposition and FFT convolution with standard convolution?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes a spike SSM method named Spiking Network for Learning Long-Range Relations (SPLR). The proposed SPLR convolutional layer leverages state-space dynamics to enhance feature extraction while retaining the efficiency of sparse, event-based\\nprocessing, and incorporates a Spike-Aware HiPPO (SA-HiPPO) matrix that allows SPLR to effectively maintain long-range memory by adapting the HiPPO framework for discrete, spike-driven inputs. The authors tested their method on several datasets, such as Celex HAR, DVS128 Gesture, Sequential CIFAR-10/100\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This work explores the combination of Spiking Neural Networks (SNN) and State Space Models (SSM), which is an interesting direction. Using state-space methods to improve SNNs' ability to model long-term dependencies holds promise.\\n2. The experimental comparisons in this work are extensive.\", \"weaknesses\": \"This work requires comprehensive improvements, with the main weaknesses outlined as follows.\\n\\n1. Writing: The writing in this work requires careful and comprehensive improvement, covering the overall organization of the paper, paragraph structure, and numerous details that need refinement. (1) The authors placed the related work section in the supplementary materials and omitted citations to many key works, which can confuse readers. For instance, the authors did not cite relevant papers when HIPPO was first mentioned (in fact, there are almost no citations in paragraphs 3 and 4 of the introduction). (2) The methodology section is not clearly explained. What type of spiking neurons does this work use? How does SPLR integrate with spiking neurons? The authors repeat content introduced in the main text within the supplementary materials. Lines 147-150 reiterate the significance of SPLR, but readers are likely more interested in the methodological details and the rationale behind the proposed approach\\u2019s significance. Unfortunately, these critical details are missing. (3) In the theoretical discussion, the authors present several theorems but do not clarify why these are necessary. While sparse properties and FLOPS reduction are mentioned, the evaluation details are not provided. \\n(4) The supplementary materials contain excessive repetition and are overly lengthy, making it difficult for readers to stay engaged. (5) Overuse of Abbreviations: The excessive use of abbreviations makes the paper difficult to follow. For instance, I cannot understand why \\\"Spiking Network for Learning Long-Range Relations\\\" is abbreviated as \\\"SPLR\\\"\\u2014what does \\\"P\\\" represent here? Is there a difference between \\\"Spiking Network\\\" and \\\"Spiking Neural Networks (SNNs)\\\"? Additionally, what does HIPPO stand for? Shouldn\\u2019t the authors explain these abbreviations first? (6) Overstatements: The paper is filled with terms like \\\"spike-driven,\\\" \\\"asynchronous,\\\" and \\\"real-time.\\\" As I understand it, \\\"spike-driven\\\" implies a purely additive network[2], yet the pink section in Figure 1 seems unable to achieve this. Regarding \\\"asynchronous,\\\" the authors\\u2019 explanation in lines 92-95 is too brief, making it difficult to discern what kind of preprocessing the network applies to the data.\\n\\n2. Motivation. The authors repeatedly state that the proposed SPLE can address the challenge of modeling both short- and long-term dependencies in SNNs. However, they fail to analyze why SNNs have limitations in this area and why their proposed method can solve this issue. For instance, this is mentioned in lines 58-67, 147-149, and 1846-1850 without providing the necessary analysis.\\n\\n3. Innovation.The originality of this work is limited. The dendrite modeling directly uses DH-LIF, and the SSM modeling is simply an SNN combined with HIPPO. I did not observe any standout contributions in this approach.\\n\\n4. Experiments. The datasets chosen by the authors, DVS128 Gesture and Sequential CIFAR-10/100, do not effectively test the model's ability to handle long-range dependencies. The authors could consider more challenging datasets, such as LRA. Additionally, the fact that no one in the SNN field has addressed the Celex HAR dataset does not imply that SNNs cannot handle it. The authors have not even provided a complete description of the size and scope of the Celex HAR dataset. If the authors aim to compare with other SNNs on challenging DVS datasets, they might try HAR-DVS[1]. Furthermore, the authors overlook comparisons with many recent SOTA methods on the Gesture dataset, where SNN performance has already surpassed 99.5%[2].\\n\\n---\\n[1] Hardvs: Revisiting human activity recognition with dynamic vision sensors. In AAAI 2024.\\n[2] Spike-driven transformer. In NeurIPS 2023.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Weaknesses 2\", \"comment\": \"> Variable definitions are sometimes confusing or incomplete; for instance, on line 187, $\\\\Delta t$ is described as the time difference between spikes $i$ and $j$, but it is unclear what $i$ and $j$ refer to in the context of the matrix $F_{i,j}$, as it operates over the hidden state rather than directly on spikes.\\n\\nIn our model, $i$ and $j$ index specific spike events associated with neurons within the network, while $\\\\Delta t$ represents the time difference between these events. Specifically, $F_{ij}(\\\\Delta t)$ is a decay matrix where $\\\\Delta t = t_j - t_i $ defines the time difference between spikes $i$ and $j$, and $\\\\alpha_{ij}$ is a parameter controlling the decay rate. Although $F_{ij}$ operates on the hidden state rather than directly on spike events, this temporal decay mechanism allows recent spikes to exert a stronger influence on the evolution of the hidden state, while the impact of older spikes diminishes exponentially. This design effectively emphasizes recent information while preserving a compressed history of prior events, enhancing the model's stability and responsiveness to event-driven inputs.\\n\\nWe have revised the manuscript to explicitly clarify these definitions and their roles in the state evolution dynamics. \\n\\n\\n> Regarding the experiments, the manuscript lacks details about the setup, hindering the interpretability of the results. For example, it\\u2019s unclear what \\u201cSequential CIFAR-10\\u201d entails, such as the sequence length or frame generation process. Similarly, for the DVS Gesture dataset, it's ambiguous whether the processing was done for independent events or if events were accumulated into event frames.\\n\\nWe have revised the manuscript to provide detailed descriptions of the experimental protocols. \\n\\n1. **Sequential CIFAR-10**: The \\\"Sequential CIFAR-10\\\" dataset is adapted from the PSN paper (Fang et al., NeurIPS 2023) to evaluate long-term temporal dependencies. In this setup, each image is processed column by column, treating the 32 image columns as a sequence of 32 time steps. This sequential representation mimics how temporal data unfolds, enabling the evaluation of the model's ability to learn dependencies over long time horizons. The frame generation process follows the methodology described in the PSN paper, where each column is treated as a single time step without further modifications.\\n\\n2. **DVS Gesture Dataset**: We explicitly stated in the revised manuscript that the SPLR model processes the DVS Gesture dataset on an **event-by-event** basis. Each spike is processed independently as it occurs, with the model dynamically updating its hidden state for every incoming event. This approach ensures fine-grained temporal modeling and avoids accumulating events into frames, preserving the dataset's asynchronous nature.\\n\\nTo enhance the interpretability of the results, we have added a dedicated section in the Supplementary Materials (Section B) providing detailed descriptions of all datasets and their processing pipelines. Additionally, key experimental details for DVS Gesture, HAR-DVS, Celex-HAR, and Long Range Arena datasets are now included in the main text, while results for other datasets, including Sequential CIFAR-10 and CIFAR-100, are summarized in the supplementary material.\\n\\nFinally, as part of our revisions, we have replaced the Sequential CIFAR results in Table 1 with those from the Long Range Arena (LRA) benchmark to provide a stronger evaluation of our model's performance on long sequence tasks. The Sequential CIFAR results are now fully presented in the Supplementary Material for further reference.\"}", "{\"title\": \"Rebuttal to Questions Part 2\", \"comment\": \"> 3. **Updated Results**:\\n - We extended our ablation study to include the effects of removing **NPLR decomposition** and replacing **FFT convolution** with standard convolution. The table below shows the completed result for SeqCIFAR-10. \\n\\n\\n\\n\\n### Ablation Table for SPLR Variants on seqCIFAR-10\\n\\n| **Model Variant** | **Channels** | **Accuracy (%)** | **Params (M)** | **FLOPs (GFLOPs)** |\\n|--------------------------------|--------------|-------------------|-----------------|---------------------|\\n| SPLR (Full) | 128 | 90.25 | 0.513 | 0.43 |\\n| SPLR (No SA-HiPPO) | 128 | 87.62 | 0.501 | 0.43 |\\n| SPLR (No NPLR Decomposition) | 128 | 88.05 | 0.513 | **1.8** |\\n| SPLR (No FFT Convolution) | 128 | 86.47 | 0.513 | **1.2** |\\n| SPLR (No Dendrite) | 128 | 85.83 | 0.501 | 0.43 |\\n| SPLR (Full) | 64 | 88.62 | 0.129 | 0.14 |\\n| SPLR (No SA-HiPPO) | 64 | 86.14 | 0.121 | 0.14 |\\n| SPLR (No NPLR Decomposition) | 64 | 86.72 | 0.129 | **0.56** |\\n| SPLR (No FFT Convolution) | 64 | 85.23 | 0.129 | **0.32** |\\n| SPLR (No Dendrite) | 64 | 84.65 | 0.121 | 0.14 |\\n| SPLR (Full) | 32 | 83.15 | 0.033 | 0.034 |\\n| SPLR (No SA-HiPPO) | 32 | 81.75 | 0.031 | 0.034 |\\n| SPLR (No NPLR Decomposition) | 32 | 82.12 | 0.033 | **0.12** |\\n| SPLR (No FFT Convolution) | 32 | 80.62 | 0.033 | **0.08** |\\n| SPLR (No Dendrite) | 32 | 80.05 | 0.031 | 0.034 |\\n\\n\\n- **Removing NPLR Decomposition**: GFLOPs increase significantly across all configurations (e.g., from 0.43 to 1.8 GFLOPs for 128 channels). Also, accuracy is only moderately impacted as this affects computational efficiency more than feature extraction quality.\\n- **Replacing FFT Convolution**: GFLOPs increase due to the quadratic complexity of standard convolution (e.g., from 0.43 to 1.2 GFLOPs for 128 channels). Also, accuracy drops more significantly, as standard convolution is less effective in capturing long-range temporal features.\\n\\nThese additional results validate our design choices. Both **NPLR decomposition** and **FFT convolution** play a critical role in ensuring the scalability and efficiency of the SPLR model. Replacing them with standard methods significantly increases computational costs and, in the case of FFT convolution, reduces performance.\\n\\nWe appreciate the reviewer\\u2019s suggestion to include these additional studies, which further highlight the importance of these architectural components in the SPLR model.\"}", "{\"title\": \"Rebuttal to Weaknesses Part 7\", \"comment\": \"> Experiments. The datasets chosen by the authors, DVS128 Gesture and Sequential CIFAR-10/100, do not effectively test the model's ability to handle long-range dependencies. The authors could consider more challenging datasets, such as LRA.\\n\\nFollowing this feedback, we conducted additional experiments on the LRA benchmark, which comprises tasks specifically designed to evaluate models' ability to process long-range sequences. \\n\\nThe results of these experiments are summarized in Table 1, where SPLR is compared against state-of-the-art spiking and non-spiking architectures. SPLR achieves competitive performance across multiple LRA tasks while maintaining the advantages of spiking neural networks, such as energy efficiency and event-driven processing.\\n\\n- **Replacement of Results**: The LRA results now replace the original Sequential CIFAR-10/100 results in Section 4.1 and Table 1 of the paper. For reference, the Sequential CIFAR results have been moved to Supplementary Section B. \\n\\n- **Validation of Long-Range Dependency Handling**: SPLR's performance on the LRA benchmark provides strong evidence of its ability to model long-range dependencies effectively. For example, SPLR outperforms several state-of-the-art methods in tasks such as Retrieval and Pathfinder, demonstrating the robustness of its memory retention and temporal modeling capabilities.\\n\\nAdditionally, we include new results on the HARDVS dataset, which further validate SPLR's efficiency and effectiveness in real-world event-based scenarios. These results are summarized in Table 2.\\n\\n**Summary of Contributions**: \\n1. SPLR demonstrates significant improvements in long-range temporal modeling, as validated by its strong performance on the LRA benchmark. \\n2. SPLR achieves these results while maintaining its advantages as a spiking neural network, offering a balance of accuracy, efficiency, and energy savings. \\n3. The inclusion of both LRA and HARDVS results showcases SPLR's versatility across benchmarks designed for different spatio-temporal challenges.\\n\\n___\\n\\n> Additionally, the fact that no one in the SNN field has addressed the Celex HAR dataset does not imply that SNNs cannot handle it. The authors have not even provided a complete description of the size and scope of the Celex HAR dataset. \\n\\nWe do not claim that other SNN models are incapable of handling Celex-HAR; rather, our work\\nrepresents the **first demonstration of strong performance on this dataset using SNNs**,\\nestablishing an important benchmark for future research. We have modified the text to avoid any confusion on this claim.\\n To address this, we have revised the manuscript to include a complete description of the dataset\\u2019s size and scope in **Section B**. The details provided are as follows:\\n\\n- Celex-HAR contains high-definition (1280\\u00d7800 pixels) event streams, making it one of the **largest and most spatially complex datasets in the HAR** (Human Activity Recognition) domain. Each sample consists of fine-grained spatio-temporal patterns, which require models to process both high-resolution spatial features and long-range temporal dependencies efficiently.\\n\\n- Celex-HAR is particularly challenging due to its combination of:\\n - **High spatial resolution**: Event streams are significantly larger than those in commonly used datasets like DVS Gesture or HAR-DVS.\\n - **Complex temporal patterns**: The dataset includes nuanced activity recognition tasks that demand robust memory retention across varying timescales.\\n These factors likely contribute to the lack of benchmarks in the SNN field and limited exploration in the broader neuromorphic computing community.\\n\\n- Our experiments demonstrate that while current deep neural network (DNN) models that perform well on datasets such as HAR-DVS struggle to scale to the size and complexity of Celex-HAR, **SPLR achieves strong performance with significantly reduced computational costs**. SPLR\\u2019s spike-driven, state-space approach is particularly effective in handling the large spatial and temporal scales of this dataset.\\n\\n___\"}", "{\"title\": \"Rebuttal to Weaknesses Part 8\", \"comment\": \"> If the authors aim to compare with other SNNs on challenging DVS datasets, they might try HAR-DVS[1].\\n\\n Thank you for suggesting the HAR-DVS dataset as a challenging benchmark for evaluating SPLR. Following this recommendation, we have conducted experiments on HAR-DVS, a dataset designed for event-based human activity recognition, to validate SPLR\\u2019s performance further. \\n\\n| **Model** | **GFLOPs** | **Accuracy (%)** |\\n|--------------------------------|------------|-------------------|\\n| C3D [Tran et al. (2015)] | 0.1 | 50.52 |\\n| R2Plus1D [Tran et al. (2018)] | 20.3 | 49.06 |\\n| TSM [Lin et al. (2019)] | 0.3 | 52.63 |\\n| ACTION-Net [Wang et al. (2021)]| 17.3 | 46.85 |\\n| TAM [Liu et al. (2021)] | 16.6 | 50.41 |\\n| V-SwinTrans [Liu et al. (2022c)]| 8.7 | 51.91 |\\n| SlowFast [Feichtenhofer et al. (2019)] | 0.3 | 46.54 |\\n| ESTF [Wang et al. (2024b)] | 17.6 | 51.22 |\\n| ExACT [Zhou et al. (2024)] | 13.2 | 90.1 |\\n| **SPLR-Tiny [Ours]** | 0.034 | 70.38 |\\n| **SPLR-Small [Ours]** | 0.13 | 81.73 |\\n| **SPLR-Normal [Ours]** | 0.41 | 88.29 |\\n\\n### **Key Findings**: \\n1. **Accuracy**: SPLR achieves competitive accuracy across all configurations, with SPLR-Normal reaching **88.29%** \\n2. **Efficiency**: SPLR achieves significantly lower FLOP counts compared to state-of-the-art methods such as R2Plus1D, ACTION-Net, and ESTF. Its modular design allows flexibility (Tiny, Small, Normal) while scaling performance. \\n3. **Event-Based Processing**: It is important to note that HAR-DVS provides frame-based data (raw event data was unavailable for download). Since SPLR is designed for event-by-event processing, we treated all events arriving at the same timestamp as a single batch for processing, adhering to the event-driven principles of the model. \\n\\nWe have incorporated these results into the **main text**, alongside results for **DVS Gesture** and **Celex-HAR**, as shown in **Figure 2**. Detailed experimental setups and additional analysis are provided in **Supplementary Section B**.\\n\\n___\\n\\n\\n> Furthermore, the authors overlook comparisons with many recent SOTA methods on the Gesture dataset, where SNN performance has already surpassed 99.5%[2].\\n\\n Thank you for your feedback. We have now explicitly included the **Spike-Driven Transformer [2]** as a datapoint in the **DVS Gesture results plot (Figure 3)**. Additional details are also provided in **Supplementary Section C**. While we acknowledge that several models exceed 99% accuracy on DVS Gesture, many rely on **frame-based event accumulation**, which introduces latency and preprocessing overhead. In contrast, SPLR employs an **event-by-event processing strategy**, enabling real-time inference with significantly lower computational costs.\\n\\nAs shown in **Figure 3**, SPLR achieves **96.5% accuracy** on DVS Gesture with reduced computational overhead. Furthermore, SPLR demonstrates its strength on larger and more complex datasets, such as **Celex-HAR**, where frame-based models often struggle to scale effectively.\\n\\n___\"}", "{\"title\": \"Rebuttal to Weaknesses\", \"comment\": \"> The primary weakness of this paper lies in its writing, which significantly hinders clarity and understanding. A reorganization is recommended to improve readability and logical flow.\\nWriting and Structure. For instance, Section 2 presents the SPLR components sequentially but lacks an overview that connects each part to the overall model structure, making it difficult for readers to understand how the parts interact.\\n\\n\\nWe have revised Section 2 to address these issues. Specifically:\\n1. **Added a High-Level Overview**: At the beginning of Section 2, we now provide an overarching explanation of the SPLR architecture, outlining how its components (e.g., Dendrite Attention Layer, SA-HiPPO, and SPLR convolution layers) interact and contribute to the overall functionality. This ensures that readers understand the big picture before diving into the details.\\n \\n2. **Improved Transitions and Logical Flow**: We restructured the section to enhance the progression between components, making it clear how each part integrates into the broader framework. This reorganization ensures that readers can follow the narrative more intuitively.\\n\\n3. **Updated Supplementary Materials**: To support clarity, we expanded the supplementary materials to include a concise summary of how the SPLR components work together. We have also re-written the SPLR Convolution Layer in order to make it easier to follow and avoid confusing namings.\\n\\n\\n> Section 3 is dense with theoretical content and proofs but does not clearly convey the main ideas, making it hard to follow the section\\u2019s intended focus.\", \"we_have_revised_section_3_to_address_this_concern\": \"1. **Added Intuitive Explanations**: To clarify the main ideas behind the key theorems and lemmas, we have included intuitive explanations before presenting the formal proofs. \\n\\n2. **Enhanced Logical Flow**: We have added an overview at the beginning of Section 3 to outline its goals and connect the theoretical results to the broader structure and objectives of the model. \\n\\n3. **Improved Readability**: Additional comments and clarifications have been incorporated within the proofs \\n\\n\\n> Lack of Citations. The paper frequently omits citations in crucial areas. For example, although modifications to the HiPPO framework are proposed, no supporting references are provided. Furthermore, DH-LIF is introduced without citation, and the reference for this component is missing from the bibliography, weakening the academic rigor of the paper.\", \"we_have_carefully_revised_the_manuscript_to_address_this_issue\": \"1. **HiPPO Framework**: We have included relevant citations to foundational works on the HiPPO framework in the sections discussing our modifications. These references provide the necessary context for our proposed extensions and ensure proper attribution.\\n\\n2. **DH-LIF Neurons**: The original manuscript cited the foundational paper by Hanle Zheng et al. (Nature Communications, 2024) in Section 8.7 and added comparison with it in Table 4. We have added this citation in additional places in the Methods section where DH-LIF neurons is discussed to ensure clarity and avoid any potential confusion. Furthermore, we have verified that this reference is correctly included in the bibliography.\\n\\n3. **Related Works Section**: We have expanded the Related Works section, adding relevant references in the Introduction for better framing of our contributions. A more comprehensive Related Works discussion is also included in Supplementary Section D to provide additional context.\\n\\n\\n> Confusions and Errors. There are several errors and confusions throughout the paper, such as the incorrect abbreviation of the Spiking State-Space Model as SPLR in line 855. Such errors further impact the readability and precision of the work.\\n\\nThank you for identifying these errors and areas of confusion. We have conducted a thorough review of the manuscript and supplementary materials to identify and correct this and other errors.\"}", "{\"title\": \"Rebuttal to Questions Part 2\", \"comment\": \"> - How are convolutions applied within the proposed model?\\n\\nTo address the reviewer's question on \\\"How are convolutions applied within the proposed model?\\\":\\n\\nThe SPLR model employs convolutions specifically through the **SPLR Convolution Layer**, which integrates temporal dynamics using state-space representations. Here's a concise explanation:\\n\\n### 1. Temporal Dynamics via State-Space Models (SSM)\\nThe SPLR Convolution Layer is based on the continuous-time state-space model, represented as:\\n$$\\n\\\\dot{x}(t) = \\\\mathbf{A}_S x(t) + \\\\mathbf{B} S(t), \\\\quad y(t) = \\\\mathbf{C} x(t),\\n$$\\nwhere $ x(t) \\\\in \\\\mathbb{R}^N $ is the hidden state, $S(t) $ represents the input spike train, and $ \\\\mathbf{A}_S , \\\\mathbf{B}$ , and $ \\\\mathbf{C} $ are system matrices.\\n\\nThe state evolves between spikes using the dynamics $\\\\dot{x}(t) = \\\\mathbf{A}_{S} x(t),$ and at spike times $ t_k $, it is updated as:\\n\\n$x(t_{k+1}) = e^{A_S \\\\Delta t_k} x(t_k)$ \\n\\n$+ A_{S}^{-1} (e^{A_S \\\\Delta t_{k}} - I) B S(t_k),$\\n\\nwhere $ \\\\Delta t_k = t_{k+1} - t_k $.\\n\\n### 2. Efficiency via NPLR Decomposition\\nThe system matrix \\\\( \\\\mathbf{A}_S \\\\) is decomposed using **Normal Plus Low-Rank (NPLR)** decomposition:\\n$$\\n\\\\mathbf{A}_S = \\\\mathbf{V} \\\\Lambda \\\\mathbf{V}^* - \\\\mathbf{P} \\\\mathbf{Q}^*,\\n$$\\nwhere $\\\\mathbf{V}$ is unitary, $\\\\Lambda$ is diagonal, and $\\\\mathbf{P}, \\\\mathbf{Q}$ are low-rank matrices. This reduces the computational complexity of matrix-vector multiplications from $O(N^2)$ to $O(Nr)$, where $r \\\\ll N$\\n\\n### 3. FFT-Based Convolution\\nTo capture long-range dependencies efficiently, the model employs **FFT-based convolution** in the frequency domain:\\n$$\\nK(\\\\omega) = \\\\frac{1}{\\\\omega - \\\\Lambda}, \\\\quad x(t) = \\\\text{IFFT} \\\\left( \\\\text{FFT}(K(\\\\omega)) \\\\cdot \\\\text{FFT}(x(t)) \\\\right),\\n$$\\nwhere $K(\\\\omega)$ represents the system\\u2019s impulse response, and FFT/IFFT operations are used to accelerate computation.\\n\\n### 4. Key Advantages\\nThe SPLR model's convolution mechanism offers several key advantages. By operating in an **event-by-event manner**, convolutions are applied in real time as spike events arrive, preserving the temporal resolution of the input data without relying on frame accumulation. This approach ensures asynchronous and efficient processing, aligning with the sparse, spike-driven nature of the model. Additionally, the use of NPLR decomposition and FFT-based convolutions significantly reduces computational overhead, enabling **scalability** even for high-dimensional inputs. Finally, the integration of the Spike-Aware HiPPO mechanism **dynamically adjusts the state evolution** based on inter-spike intervals, allowing the model to effectively capture both short-term and long-range temporal dependencies. These features collectively make SPLR efficient, adaptable, and well-suited for complex temporal tasks.\\n\\nThis combination of SSM, NPLR decomposition, and FFT-based convolution allows SPLR to efficiently model both short-term and long-range temporal dependencies while maintaining the sparsity and asynchronous nature of spiking inputs. Further details and equations are available in **Section 2.4** and **Supplementary Section C** of the revised manuscript.\\n\\n\\n> - In equation (1), is the variable $u(t)$ a binary vector representing input spikes?\\n\\nYes, $u(t)$ represents the input spikes as a binary vector, where each component $u_i(t)$ is 1 if a spike occurs at time $t$ for the $i$-th input and 0 otherwise. This aligns with the event-driven nature of spiking neural networks and ensures efficient handling of sparse, asynchronous data in the SPLR framework.\\n\\n\\n> - How does the inclusion of a decay matrix in the HiPPO framework enhance memory retention?\\n\\n The inclusion of a decay matrix ( F(\\\\Delta t) ) in the HiPPO framework, particularly in its spike-aware adaptation (SA-HiPPO), enhances memory retention by dynamically adjusting the influence of past events based on the elapsed time between them. This decay matrix ensures that recent spikes have a stronger influence on the system's memory state while older events gradually lose impact. Such a mechanism balances stability and responsiveness, enabling effective memory retention even in sparse and irregular spike-driven inputs. This adjustment maintains a compressed history of inputs, crucial for long-range temporal dependency learning in spiking neural networks\\u200b.\\nThe ablation study further highlights the role of SA-HiPPO - by removing the SA-HiPPO layer and replacing it with standard LIF neurons leads to a notable drop in accuracy (from 96.5% to 90.4% in the DVS Gesture dataset), emphasizing its critical role in maintaining long-range temporal dependencies\\u200b\"}", "{\"title\": \"Rebuttal to Weaknesses\", \"comment\": \"> The proposed SPLR model incorporates several non-spike operations, including the NPLR decomposition and FFT convolution. It makes SPLR a hybrid architecture instead of a pure spiking neural network. The hybrid nature may compromise its hardware compatibility and make it difficult to deploy on neuromorphic hardware.\\n\\n\\nWe appreciate the reviewer's comments regarding the hybrid nature of the proposed SPLR model. We acknowledge that SPLR incorporates non-spike operations such as NPLR decomposition and FFT convolution, contributing to its hybrid architecture. However, these components are essential to achieving the desired temporal processing capabilities and long-range dependencies in a computationally efficient manner.\\nAlthough recent studies have not explicitly demonstrated Normal Plus Low-Rank (NPLR) decomposition or FFT in neuromorphic hardware, they have shown similar operations can be implemented efficiently. For example, hybrid analog-digital co-processing approaches have demonstrated promising results in executing low-rank matrix operations in a spike-compatible manner (e.g., Akopyan et al., 2015; Davies et al., 2018). Additionally, FFT-like operations can be mapped to neuromorphic platforms using event-driven dataflows, leveraging asynchronous processing to approximate convolution effectively (e.g., Roy et al., 2019). These advancements suggest that such non-spike components can potentially be adapted to neuromorphic architectures. For instance, approaches like hybrid analog-digital co-processing have shown promising results in executing low-rank matrix operations in a spike-compatible manner (e.g., Akopyan et al., 2015; Davies et al., 2018). Similarly, FFT operations can be mapped to neuromorphic platforms using event-driven dataflows, leveraging asynchronous processing to approximate convolution effectively (e.g., Roy et al., 2019). These advancements suggest that the non-spike components in our model can be adapted to neuromorphic architectures, ensuring better compatibility without compromising computational efficiency.\\nThe feasibility of implementing NPLR decomposition and FFT in neuromorphic hardware lies in leveraging compute-in-memory (CIM) technologies and analog-digital hybrid designs. CIM technologies enable efficient matrix operations directly within memory arrays, which can significantly reduce data movement costs and enhance the overall efficiency of low-rank decompositions. Similarly, FFT operations can be implemented using parallel event-driven architectures that take advantage of the inherent sparsity in spiking signals.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Rebuttal to Questions Part 3\", \"comment\": \">- Could you clarify the setup for the Sequential CIFAR-10 and CIFAR-100 tasks? How are frames sequenced? Similarly, could you elaborate on the experimental setup for the other datasets?\\n\\nThe \\\"Sequential CIFAR-10\\\" dataset is adapted from the PSN paper (Fang et al., NeurIPS 2023) to evaluate long-term temporal dependencies. In this setup, each image is processed column by column, treating the 32 image columns as a sequence of 32 time steps. This sequential representation mimics how temporal data unfolds, enabling the evaluation of the model's ability to learn dependencies over long time horizons. The frame generation process follows the methodology described in the PSN paper, where each column is treated as a single time step without further modifications.\", \"for_the_other_datasets\": \"- **DVS Gesture**: The model processes individual spike events as they occur, preserving the dataset\\u2019s asynchronous nature without accumulating events into frames.\\n- **HAR-DVS and Celex-HAR**: Similarly, spike events are processed independently, enabling the model to dynamically capture temporal dependencies in human activity recognition tasks.\\n- **Long Range Arena (LRA)**: Tasks such as ListOps and Path-X are converted into an event-driven format by treating each token as a sequential event. The SPLR model processes tokens one at a time, leveraging its temporal dynamics to handle long-range dependencies.\\n\\nWe have clarified these setups in the main text (Section 4.1) and provided further details in the Supplementary Section B. \\n\\n>- For clarification, could you specify what spikes i and j refer to in line 187?\\n\\nIn line 187, spikes $i $ and $ j $ refer to specific spike events associated with neurons within the network. Specifically, $i $ indexes a presynaptic spike event, while $j$ indexes a subsequent spike event, both occurring within the system. The time difference between these events,$\\\\Delta t = t_j - t_i $, is used in the decay matrix $ F(\\\\Delta t) $ to dynamically adjust the influence of past events on the system's state.\\n\\nAlthough the decay matrix $ F(\\\\Delta t) $ operates over the hidden state rather than directly on spike events, the indices $i $ and $j $ track the temporal relationship between spikes, enabling the system to emphasize recent events while exponentially decaying older ones. \\n\\n>- Is the manuscript proposing a new type of spiking neuron, or an entire network architecture?\\n\\nThank you for your question. The manuscript proposes an entire network architecture built around novel extensions to spiking neural models, including the introduction of the Dendrite Attention Layer. While the proposed DH-LIF neuron extends standard spiking neuron models by incorporating multiple dendritic branches with independent temporal filtering, the primary focus is on the overall network design, which integrates these neurons with state-space formulations, the SA-HiPPO mechanism, and SPLR convolution layers.\\n\\nThis combination enables efficient processing of asynchronous, event-driven inputs while capturing long-range temporal dependencies. \\n\\n>- Since the manuscript emphasizes improving SNNs' capacity to handle long-term dependencies, could you elaborate on why simple LIF models face challenges with this?\\n\\nSimple Leaky Integrate-and-Fire (LIF) neuron models face challenges in handling long-term dependencies due to their fixed single-timescale dynamics. Specifically, the membrane potential in LIF neurons evolves with a single decay constant, limiting their ability to retain information from past inputs over extended durations. This lack of temporal flexibility results in the rapid decay of older information, making it difficult for LIF-based networks to capture long-range temporal dependencies effectively.\\n\\nIn contrast, our proposed approach addresses this limitation by integrating multiple mechanisms that enhance temporal dynamics:\\n1. **DH-LIF Neurons**: By introducing dendritic branches with independent temporal filtering, DH-LIF neurons allow information to be retained across diverse timescales.\\n2. **SA-HiPPO**: The spike-aware HiPPO mechanism further enhances memory retention by dynamically adjusting the influence of past events based on the time elapsed, ensuring stability and responsiveness.\\n\\nThese innovations allow the proposed architecture to overcome the temporal limitations of standard LIF neurons, significantly improving the modeling of long-term dependencies in asynchronous, event-driven data.\"}", "{\"metareview\": \"This paper introduces SPLR, a Spiking Network for Learning Long-Range Relations. The SPLR model consists of three main components: the Dendrite Attention Layer, the Spatial Pooling Layer, and the SPLR Convolution Layer. The proposed SPLR Convolution Layer combines SA-HiPPO, FFT, and NPLR to enhance SPLR's capability to learn long-range temporal dependencies. Experimental results show that SPLR outperforms previous methods in tasks that require both fine-grained temporal dynamics and the ability to retain long-range dependencies.\\n\\nAfter the rebuttal period, 5 reviewers rate 3, 5, 5, 5, 8, respectively. Most of the reviewers agree that SPLR outperforms prior methods in tasks requiring both fine-grained temporal dynamics and the retention of long-range dependencies, which is the strength of SPLR. However, there are some criticisms. Reviewer Y4sN believes a more detailed discussion of the SA-HiPPO should be included. Reviewer tvp3 finds it difficult to identify the core novelty and strengths of this paper and the writing logic of the paper needs improvement. I think these criticisms are justified. I suggest that the authors further clarify the proposed mechanisms and highlight the key contributions.\\n\\nAs for the concerns regarding Reviewer TiCG's evaluation, I agree with the author's concern that Reviewer TiCG is too harsh in tone and too arbitrary in judgment. I have taken these issues into consideration. However, some of Reviewer TiCG's criticisms are still justified. As Reviewer TiCG states, \\\"'Written a lot' does not mean 'well written'\\\". I agree that the writing in this work is redundant. The main text should highlight the contributions and innovations of the paper rather than listing all the details of the proposed methodology. For example, I suggest the authors put the \\\"Input Representation\\\", \\\"Normalization\\\", and \\\"Readout Layer\\\" in Sec. 3 into the supplementary, since these are not the key contributions of this paper. However, the related work section should be placed in the main text as it helps to highlight the innovations and contributions of this paper. I believe it is not impractical to seek more details with limited pages; the key is to highlight the main points.\\n\\nOverall, I believe this paper would benefit from further revision. Therefore, the final decision is to reject this paper.\", \"additional_comments_on_reviewer_discussion\": \"After the rebuttal period, Reviewer LQNp finds the concerns addressed and raises the rating to 8, while other reviewers keep the original rating. Reviewer Y4sN believes the manuscript needs further improvements, and a more detailed discussion of the SA-HiPPO should be included. Reviewer tvp3 finds it difficult to identify the core novelty and strengths of this paper and the writing logic of the paper needs improvement. I agree that this paper still has some problems after the rebuttal period.\\n\\nThe main disagreement lies between the authors and Reviewer TiCG. The reviewer finds the paper problematic in several areas, including logic, focus, and writing. The reviewer's tone is notably strong, and some of the judgments seem arbitrary. For instance, the reviewer states, \\\"It is even more improbable to outperform SSM architectures specifically designed for the LRA dataset.\\\" Despite this, some of Reviewer TiCG's criticisms\\u2014such as those regarding the paper's writing\\u2014are valid. I have carefully considered Reviewer TiCG's comments during the evaluation process, disregarding the aspects that do not make sense while taking into account the valid points.\"}", "{\"title\": \"Rebuttal to Weaknesses Part 4\", \"comment\": \"> (6) Overstatements: The paper is filled with terms like \\\"spike-driven,\\\" \\\"asynchronous,\\\" and \\\"real-time.\\\" As I understand it, \\\"spike-driven\\\" implies a purely additive network[2], yet the pink section in Figure 1 seems unable to achieve this. Regarding \\\"asynchronous,\\\" the authors\\u2019 explanation in lines 92-95 is too brief, making it difficult to discern what kind of preprocessing the network applies to the data.\\n\\n\\n1. **\\\"Spike-Driven\\\":** In the context of our SPLR model, \\\"spike-driven\\\" refers to the processing of discrete spike events as opposed to continuous signals. Although SPLR incorporates advanced mechanisms such as state-space modeling and the Spike-Aware HiPPO (SA-HiPPO) layer for long-range memory retention, it fundamentally operates on spike events, making it a spike-driven network.\\n\\n The term \\\"purely additive\\\" might traditionally describe models that directly update their states by summing incoming spikes in a straightforward manner. While SPLR involves more complex operations, such as state-space dynamics and temporal memory retention, these are applied within an event-driven and spike-based framework. Thus, the model retains its spike-driven nature while leveraging advanced techniques to enhance its capabilities.\\n\\n2. **\\\"Asynchronous\\\":** To clarify, \\\"asynchronous\\\" in SPLR denotes the ability to process inputs based on the timing of spike events rather than relying on synchronized updates or fixed clock cycles. SPLR processes spikes event-by-event as they arrive, without preprocessing them into continuous signals or frames. This ensures that the temporal resolution of the spike inputs is preserved.\\n\\n We have added a detailed discussion in the **experimental setup subsection**, clarifying that asynchronous processing in SPLR refers specifically to event-by-event updates, avoiding frame accumulation and maintaining the spike-driven paradigm.\\n\\n3. **Revisions to Figure 1:** We have revised the figure caption to explicitly state that all operations, including those in the pink section, are driven by spike events. The pink section represents components of the SPLRConv layer, such as SA-HiPPO, which operate within the spike-driven paradigm while incorporating mechanisms for improved temporal processing and memory retention.\\n\\n---\"}", "{\"title\": \"Rebuttal to Questions Part 1\", \"comment\": \"> 1. This paper only compares FLOPs vs. accuracy between the proposed SPLR and other models. Does SPLR have an advantage over other methods in terms of inference latency?\\n\\n\\nWe appreciate the reviewer\\u2019s insightful question regarding inference latency. Due to the asynchronous nature of spiking neural networks, SPLR can process information with lower latency by only activating neurons when necessary, as opposed to continuous, synchronous updates. This characteristic often results in reduced inference latency, especially in scenarios with sparse input activity.\\nTo further quantify these latency gains, we calculated the theoretical inference times for SPLR on an NVIDIA A100 GPU, which has a peak performance of 312 TFLOPS for FP16 operations. Given the current FLOPs requirements for the SPLR models on the CelexHAR dataset:\\n\\n|Model|GFLOPs | Theoretical Latency | Observed Latency |\\n|----------------|----------------|---------------------------------------|-------------------------------------|\\n| SPLR Tiny| 0.034 | 109 microseconds | 162.4 microseconds |\\n| SPLR Small| 0.13 | 417 microseconds | 582.2 microseconds |\\n| SPLR Normal| 0.41 | 1.314 milliseconds | 1.867 milliseconds |\\n\\n\\n\\nThese calculations underscore SPLR\\u2019s efficiency, particularly for the Tiny and Small models where the low FLOP counts lead to significant latency reductions. We expect even greater latency benefits on neuromorphic hardware like Intel\\u2019s Loihi, which is specifically optimized for spiking neural networks and can further exploit the event-driven nature of SPLR to achieve low-latency performance.\\nWe acknowledge that a direct comparison of inference latency would provide a more comprehensive evaluation of SPLR's performance. We plan to include such an analysis in the revised manuscript to better highlight the advantages of SPLR over other methods, particularly for time-critical tasks.\\n\\n\\n\\n> The ablation studies only examine the effects of removing the dendrite attention layer and replacing SA-HiPPO with LIF. What if we replace NPLR decomposition and FFT convolution with standard convolution?\\n\\n We thank the reviewer for highlighting this critical aspect. We detail the expected effects of replacing these components with their standard counterparts and provide updated results in our extended ablation study.\\n\\n1. **Effect of Removing NPLR Decomposition**:\\n - **Computational Impact**: Removing NPLR decomposition would increase computational complexity from $O(Nr)$ to $O(N^2)$, where $ r \\\\ll N $. This results in a significant increase in GFLOPs, especially for high-dimensional state spaces.\\n - **Performance Impact**: While the accuracy of the model might not drop drastically, the scalability of the model will be hindered, as large state matrices $ A_S $ must be handled in their dense form.\\n\\n2. **Effect of Replacing FFT Convolution with Standard Convolution**:\\n - **Computational Impact**: FFT convolution reduces the complexity of spatio-temporal feature extraction to $ O(N \\\\log N) $, while standard convolution scales as $ O(N^2) $. For long sequences or high-resolution inputs, this results in a significant increase in GFLOPs.\\n - **Performance Impact**: Standard convolution lacks the efficiency of FFT in capturing long-range temporal dependencies, which may result in a moderate accuracy drop in tasks that require modeling fine-grained temporal features.\"}", "{\"title\": \"Rebuttal to Weaknesses Part 2\", \"comment\": \"> Can the authors clarify the key differences between the DH-LIF model in this paper and the one presented in [1]?\", \"the_key_differences_are_as_follows\": \"1. **Temporal Dynamics**: \\n - In the original **DH-LIF model** [1], the focus is on multi-timescale dynamics achieved through dendritic heterogeneity. Each dendritic branch processes inputs with independent timing factors, creating neurons that can effectively capture temporal dynamics across different scales. \\n - In SPLR, **DH-LIF neurons are part of the Dendrite Attention Layer**, where an attention mechanism weighs the contributions of different dendritic branches. Additionally, the **Spike-Aware HiPPO layer** optimizes long-term memory retention by dynamically adjusting memory based on spike timing. This integration extends beyond neuron-level dynamics to address temporal dependencies across long sequences.\\n\\n2. **Architectural**: \\n - The original DH-LIF model is primarily used as a standalone neuron type to enhance temporal dynamics in spiking neural networks (SNNs). The design focuses on multi-timescale memory at the neuron level while maintaining manageable complexity. \\n - In SPLR, DH-LIF neurons are embedded within a larger architecture that combines **convolutional and state-space layers**. These layers enable the model to capture long-term dependencies in event-driven tasks while balancing computational efficiency and scalability. The dendritic branches contribute to a pooling mechanism, reducing computational overhead while retaining temporal features.\\n\\n3. **Learning**: \\n - In the original DH-LIF model, learning focuses on optimizing the temporal constants of dendritic branches to enhance temporal heterogeneity and facilitate diverse temporal computations. \\n - In SPLR, learning extends beyond dendritic time constants with the inclusion of the Spike-Aware HiPPO layer, which dynamically optimizes memory retention for capturing long-range temporal dependencies. This combination ensures both immediate and extended memory retention, making SPLR particularly effective for tasks requiring hierarchical temporal modeling, such as gesture recognition and spiking-based vision.\\n\\nThe original DH-LIF model emphasizes multi-timescale dynamics at the neuron level, enabling temporal memory through heterogeneous dendritic branches. In contrast, SPLR incorporates DH-LIF neurons into a comprehensive architectural framework, combining attention mechanisms, convolutional layers, and state-space modeling to handle complex temporal tasks with long-range dependencies efficiently. These extensions make SPLR suitable for more demanding applications while maintaining computational efficiency.\\n___\\n\\n> How does the DH-LIF used in the Dendrite Attention Layer relate to attention? \\n\\nThe DH-LIF neurons in the Dendrite Attention Layer implement a biologically inspired form of attention by leveraging the dendritic branches\\u2019 ability to process inputs at multiple timescales. Each dendritic branch has a unique timing factor, allowing the neuron to selectively amplify or suppress input signals based on their temporal properties. This selective processing acts as a form of attention, where the neuron dynamically focuses on the most relevant spatiotemporal features while downplaying less significant ones.\\nThe aggregation of outputs from multiple dendritic branches enables the network to prioritize temporal features based on their relevance, akin to how attention mechanisms weigh inputs differently to emphasize salient information. This spatio-temporal pooling mechanism not only incorporates spatial aggregation but also applies temporal weighting, enhancing the model\\u2019s ability to focus on meaningful temporal patterns.\\n\\nTo clarify this analogy, we have added an explicit discussion on the role of temporal pooling within the DH-LIF architecture in Section 4 and Suppl. Sec. C and highlight how it extends the traditional concept of attention. Thank you for pointing out this area for further elaboration, which we believe will enhance the clarity and impact of the manuscript.\\n\\n___\"}", "{\"title\": \"Summary of Changes\", \"comment\": [\"We thank the reviewers for their valuable feedback and have made significant revisions to the manuscript to address the major concerns raised. Below, we outline the key changes and improvements:\", \"1. **Improved Clarity, Organization, and Methodological Depth**\", \"**Expanded Model Explanations:** Enhanced the description of the SPLR model in Section 2, providing detailed explanations of key components, including the SA-HiPPO layer, NPLR decomposition, and FFT-based convolutions.\", \"**Revised Methodology Section:** Improved the logical flow of the Methods section by introducing an overview that connects all components of SPLR to the overall model structure.\", \"**Intuitive Theoretical Explanations:** Added intuitive explanations for key theorems in Section 3 to clarify their relevance to SPLR's design and improve readability.\", \"**Supplementary Material Enhancements:** Updated the supplementary sections to reduce redundancy, include pseudocode for reproducibility, and detail experimental setups, dataset processing pipelines, and hyperparameter configurations.\", \"**Citations and Related Work:** Added citations for foundational works (e.g., the original HiPPO framework and DH-LIF model) and expanded the Related Work section in the supplementary material. Comparisons now include:\", \"Spike-Driven Transformer (Figure 3a), STC-LIF for DVS Gesture-128 (Figure 4c), TCLIF in SHD and SSC (Figures 4a, 4b), and TIM in SHD (Figure 4a).\", \"2. **New Experiments, Results, and Comparisons**\", \"**New Benchmarks:** Evaluated SPLR on the Long Range Arena (LRA) benchmark (Table 1) to demonstrate its ability to handle long-range temporal dependencies.\", \"**HAR-DVS Results:** Included HAR-DVS results, validating SPLR\\u2019s capability to process high-resolution event-based data and demonstrating its robustness (Figure 2).\", \"**Ablation Studies:** Added new ablation studies to evaluate the impact of replacing NPLR decomposition and FFT-based convolution with standard convolution.\", \"**Highlighted Results and Trade-offs:**\", \"**Tables:** Added comparisons of SPLR's performance against recent state-of-the-art methods on DVS Gesture, HAR-DVS, and LRA datasets, emphasizing FLOPs vs. accuracy trade-offs.\", \"**Figures:** Revised Figure 3 to include methods exceeding 99% accuracy on DVS Gesture for proper contextualization, and added new visualizations summarizing SPLR\\u2019s performance on HAR-DVS and Celex-HAR datasets.\"]}", "{\"title\": \"Rebuttal to Weaknesses Part 1\", \"comment\": \"> The core innovation of this paper is the SPLR convolution layer to enhance the long-range temporal modeling capability of SNNs. However, while this improves the performance of the SNN, the integration of the SSM and the SNN requires significant computational overhead, which counteracts the power consumption advantage of the SNN.\\n\\nWe emphasize that the primary focus of SPLR is on improving the temporal modeling capabilities in complex event-driven tasks. Our goal is to achieve competitive performance with traditional DNNs but with a much lower computational complexity. We acknowledge that, SPLR has a higher computational cost than baseline SNNs, but achieves much higher performance specially in complex tasks.\\n\\nAs demonstrated in **Figures 3 and 4**, SPLR achieves significantly higher accuracy with comparable or lower FLOPS than state-of-the-art DNN or Transformer based methods., effectively balancing performance and computational cost. This trade-off becomes even more pronounced in **complex cases like HAR-DVS and Celex-HAR**, where SPLR maintains strong performance while scaling efficiently. In these challenging scenarios, SPLR\\u2019s structured temporal modeling enables it to outperform traditional SNNs and dense architectures that struggle to handle the large spatial and temporal complexities.\\n\\nAdditionally, the structured dynamics of SSMs reduce computational overhead by providing an efficient representation of long-range dependencies, eliminating the need for densely recurrent operations typical of many spiking architectures. This helps SPLR avoid resource-intensive computations while maintaining strong temporal memory retention.\\n___\\n\\n> In addition, as the authors mention SPLR is difficult to implement in hardware, making the significance of this work seem small.\\n\\nWe acknowledge the challenges of implementing SPLR in current neuromorphic hardware. However, this work represents an initial exploration of integrating state-space modeling with SNNs to enhance long-range temporal modeling. As demonstrated in Figures 3 and 4, SPLR achieves significant improvements in accuracy vs. FLOPs, particularly on complex datasets like HAR-DVS and Celex-HAR. These results highlight SPLR's computational efficiency and its potential for resource-constrained applications.\\nRegarding hardware feasibility, recent advancements in compute-in-memory (CIM) technologies and hybrid analog-digital designs provide a promising path forward for implementing SPLR's state-space aspects. CIM technologies enable efficient matrix operations directly within memory arrays, reducing data movement costs and enhancing the overall efficiency of low-rank decompositions like NPLR. Additionally, FFT-like operations can be mapped to event-driven architectures using asynchronous dataflows, leveraging the sparsity inherent in spiking signals to approximate convolutions effectively (e.g., Akopyan et al., 2015; Davies et al., 2018; Roy et al., 2019).\\nThe current trend in hardware design increasingly focuses on mixed-mode platforms that integrate digital and analog components, offering flexibility for implementing both spiking and non-spiking operations. Such platforms are well-suited for supporting hybrid approaches like SPLR, enabling efficient execution of state-space modeling and spiking computations in tandem.\\nWhile SPLR introduces challenges for immediate hardware deployment, its demonstrated performance gains and alignment with emerging hardware paradigms make it a strong candidate for future hardware-oriented optimizations. We appreciate the opportunity to address this important point.\\n\\n___\"}", "{\"title\": \"Rebuttal to Weaknesses 1\", \"comment\": \"> Firstly, the presentation lacks clarity, making it difficult to fully grasp how the method works, interpret the experimental results, or potentially reproduce the findings. Essential details expected in a research paper, such as a discussion of related works, are missing (e.g., [1]).\", \"we_have_made_several_enhancements_to_the_manuscript\": \"1. **Enhanced Clarity in the Main Paper**: \\n - We have expanded the description of the SPLR model and its components in **Section 2** of the main paper and added the complete details for each layer in **Supplementary Section C**.\\n - A visual flowchart has been added in **Figure 1** to provide a step-by-step overview of the SPLR architecture.\\n - Experimental configurations and results have been elaborated on in **Section 4** and in **Supp. Sec. B**,\\n2. **Comprehensive References to Related Works:**\\n - We have updated the introduction to include more references. Due to the space constraints, more exhaustive overview and comparison with prior works are added in Supplementary Section D.\\n3. **Additional Details in the Supplementary Section:**\\n - **Supplementary Sections B,C** provides a complete implementation guide for the SPLR model, including pseudocode, hyperparameter settings, and system configurations, to facilitate reproducibility.\\n - The **Supplementary Section D** contains a comprehensive discussion of related works and a detailed comparison with prior methods, addressing how our approach advances the state of the art.\\n\\n> Additionally, fundamental concepts necessary to understand the work are not well-introduced; although state-space models (SSMs) have gained popularity recently, they are not widely understood in machine learning, so a brief overview would be beneficial.\\n\\nThank you for your feedback. We have expanded the overview of SSMs in **Supplementary Section C.**\\n\\n> The manuscript also omits essential citations, including the original HiPPO framework, which is central to this work, and does not offer a proper explanation of how it functions.\\n\\nThank you for your valuable feedback. In response, we have added citations to the original HiPPO framework (Gu et al., 2020) in the introduction along with other significant works, on neuromorphic computing and spiking neural networks (Roy et al. 2019, Furber, 2016) and recent advances in state-space models for long-range dependency modeling (Hasani et al., 2021; Gu et al., 2021).\\nWe have included a comprehensive related works section in **Supplementary Section D**, where we elaborate on these references in detail. This includes an extended explanation of the HiPPO framework, its mechanism, and its role in enabling efficient long-range memory retention, which is central to the proposed Spike-Aware HiPPO (SA-HiPPO) layer.\\n\\n**References:**\\n- Gu, A., Dao, T., Ermon, S., Rudra, A., & R\\u00e9, C. (2020). HiPPO: Recurrent memory with optimal polynomial projections. Advances in Neural Information Processing Systems (NeurIPS).\\n- Furber, S. B. (2016). Large-scale neuromorphic computing systems. Journal of Neural Engineering, 13(5), 051001\\n- Kaushik Roy, Akhilesh Jaiswal, and Priyadarshini Panda. Towards spike-based machine intelligence with neuromorphic computing. Nature, 575(7784):607\\u2013617, 2019.\\n- Hasani, R., Amini, A., Yildiz, Y., Lechner, M., Grosu, R., & Rus, D. (2021). Liquid time-constant networks. Proceedings of the AAAI Conference on Artificial Intelligence.\\n- Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A. and R\\u00e9, C., 2021. Combining recurrent, convolutional, and continuous-time models with linear state space layers. Advances in neural information processing systems, 34, pp.572-585.\\n\\n>The equations are unclear; while convolutions are frequently mentioned, no equations illustrate how or where convolutions are applied. \\n\\nThank you for highlighting this important concern. We have revised the manuscript to include detailed mathematical formulations of the convolution operations within the SPLR Convolution Layer.\\nSpecifically, we have added equations that explicitly demonstrate the application of the SA-HiPPO mechanism, the Normal Plus Low-Rank (NPLR) decomposition, and the FFT-based convolutions in our SPLRConv layers. These equations now clearly illustrate the use of FFT-based convolutions and Cauchy kernels for efficient spatio-temporal processing. Additionally, we have supplemented these equations with thorough explanations to ensure the operations are intuitive and their roles within the model are well-understood.\"}", "{\"summary\": \"This paper integrates state-space models (SSMs) and spiking neural networks (SNNs), and proposes the Spiking Network for Learning Long-Range Relations (SPLR) to enhance the ability of SNNs to capture long-range dependencies. Theoretical proofs and experimental results support the performance advantages of SPLR.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The integration of SA-HiPPO and SPLR convolution enhances the model's ability to model long-range dependencies.\\n2. The SPLR model designed in this paper introduces a dendrite-based pooling layer, which further improves the performance using the DH-LIF neuron model.\\n3. The theoretical and experimental results in this paper confirm the effectiveness of the proposed method.\", \"weaknesses\": \"1. The core innovation of this paper is the SPLR convolution layer to enhance the long-range temporal modeling capability of SNNs. However, while this improves the performance of the SNN, the integration of the SSM and the SNN requires significant computational overhead, which counteracts the power consumption advantage of the SNN. In addition, as the authors mention SPLR is difficult to implement in hardware, making the significance of this work seem small.\\n\\n2. Can the authors clarify the key differences between the DH-LIF model in this paper and the one presented in [1]? How does the DH-LIF used in the Dendrite Attention Layer relate to attention? Is the author's claim of dendrite-based spatio-temporal pooling just spatial pooling after the output of DH-LIF? Where is there temporal pooling?\\n\\n3. I suggest that the authors compare their method with other optimized spiking neural networks capable of long-range temporal modeling, such as [2,3,4].\\n\\n\\n[1] Zheng H, Zheng Z, Hu R, et al. Temporal dendritic heterogeneity incorporated with spiking neural networks for learning multi-timescale dynamics. Nature Communications, 2024.\\n\\n[2] Wang L, Yu Z. Autaptic synaptic circuit enhances spatio-temporal predictive learning of spiking neural networks. ICML, 2024.\\n\\n[3] Zhang S, Yang Q, Ma C, et al. Tc-lif: A two-compartment spiking neuron model for long-term sequential modelling. AAAI, 2024.\\n\\n[4] Shen S, Zhao D, Shen G, et al. TIM: An efficient temporal interaction module for spiking transformer. IJCAI, 2024.\", \"questions\": \"Please see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Weaknesses Part 2\", \"comment\": \">(2) The methodology section is not clearly explained. What type of spiking neurons does this work use? How does SPLR integrate with spiking neurons? The authors repeat content introduced in the main text within the supplementary materials. Lines 147-150 reiterate the significance of SPLR, but readers are likely more interested in the methodological details and the rationale behind the proposed approach\\u2019s significance. Unfortunately, these critical details are missing.\\n\\nThank you for this valuable feedback.\", \"type_of_spiking_neurons_used\": \"In this work, we primarily use DH-LIF (Dendritic Heterogeneous Leaky Integrate-and-Fire) neurons, which allow for multi-compartment modeling through dendritic branches. These neurons enhance the network\\u2019s ability to capture temporal dynamics across different timescales, essential for managing short- and long-term dependencies in event-driven data. We have updated Section 2 and Suppl Sec C to clarify this.\", \"integration_of_splr_with_spiking_neurons\": \"SPLR integrates with DH-LIF neurons by using their dendritic branches for temporal filtering and memory retention. The SPLR Convolution layer leverages state-space dynamics to process inputs asynchronously, updating based on spikes generated by DH-LIF neurons. Additionally, the Spike-Aware HiPPO (SA-HiPPO) layer complements this by adjusting memory retention dynamically, allowing SPLR to capture long-term dependencies without introducing excessive computational overhead. We have provided a detailed step-by-step explanation of how SPLR connects to spiking neuron outputs and processes event-driven data in Section 2 (Methods) and Suppl Sec C.\", \"reducing_redundancy_and_emphasizing_rationale\": \"We have streamlined the content to minimize repetition and focus on explaining the rationale behind each component of SPLR. We have highlighted the advantages of integrating DH-LIF neurons, state-space modeling, and SA-HiPPO in a unified framework and their collective contribution to achieving efficient long-term memory retention in spiking networks.\\n\\n---\\n\\n> (3) In the theoretical discussion, the authors present several theorems but do not clarify why these are necessary.\", \"we_have_made_the_following_revisions\": [\"1. The theorems in the theoretical discussion are intended to formally justify the efficiency, memory retention, and stability capabilities of the SPLR model. Specifically:\", \"**Lemma on Computational Complexity**: This result establishes that the SPLR model\\u2019s computational cost per spike is $O(N^2)$. This underscores the importance of techniques like NPLR decomposition to reduce FLOPS and improve scalability for real-time applications.\", \"**Theorem on Temporal Dependency Preservation**: This theorem demonstrates the SPLR model\\u2019s ability to retain long-range dependencies, a core limitation in standard spiking neural networks. By controlling the decay of older information, the model ensures that recent spikes have a stronger influence on the system state.\", \"**Lemma on Error Bounds for Matrix Exponential Approximation**: This lemma ensures that our approximations of the state-space dynamics using a Taylor expansion are both efficient and accurate, enabling practical deployment in resource-constrained settings.\", \"**Theorem on Bounded State Trajectories**: This result guarantees the stability of the SPLR model, ensuring that its internal state remains bounded under spike-driven inputs, which is essential for continuous, real-time processing.\", \"2. We have added **intuitive explanations** after each lemma and theorem. These explanations connect the theoretical results to the SPLR model\\u2019s design and objectives, such as efficient temporal processing, memory retention, and stability.\", \"3. We have added an introductory paragraph to the theoretical discussion, outlining its goals and linking the results to the broader context of the SPLR model. Additionally, we included a summary at the end of the section to tie the results back to practical outcomes, such as computational efficiency, FLOPS reduction, and robust handling of sparse event-driven data.\"]}", "{\"title\": \"Rebuttal to Weaknesses Part 5\", \"comment\": \"> Motivation. The authors repeatedly state that the proposed SPLE can address the challenge of modeling both short- and long-term dependencies in SNNs. However, they fail to analyze why SNNs have limitations in this area and why their proposed method can solve this issue. For instance, this is mentioned in lines 58-67, 147-149, and 1846-1850 without providing the necessary analysis.\\n\\n We outline the gaps in conventional SNNs and how SPLR addresses these challenges (Introduction, Suppl Sec C):\\n\\n1. **Limitations of Standard SNNs in Modeling Short- and Long-Term Dependencies**: \\n Traditional SNNs, particularly those employing simple neuron models such as the Leaky Integrate-and-Fire (LIF) neuron, encode temporal information using exponentially decaying membrane potentials. While this is effective for short-term memory, it inherently leads to rapid information loss, making it difficult to capture long-term dependencies. These limitations are further compounded by the lack of structured mechanisms for explicit temporal modeling, as most SNNs rely on local spike interactions without incorporating global temporal dependencies.\\n\\n For example, conventional SNNs lack the ability to dynamically adapt memory retention to the temporal characteristics of the input, which is critical for tasks requiring integration of information across multiple timescales, such as event-based gesture recognition or sequential classification.\\n\\n2. **How SPLR Addresses These Challenges**:\", \"splr_introduces_two_key_innovations_to_overcome_the_limitations_of_traditional_snns\": \"- **Spike-Aware HiPPO (SA-HiPPO):** \\n SA-HiPPO extends the HiPPO framework to operate in the discrete, asynchronous setting of SNNs. By dynamically adjusting memory retention based on the timing of incoming spikes, it enables the network to prioritize recent information while maintaining a compressed representation of past events. This structured memory retention mechanism ensures that long-term dependencies can be captured without excessive computational overhead.\\n - **State-Space Dynamics in SPLR Convolution:** \\n The SPLR Convolution layer integrates state-space modeling, enabling continuous temporal processing with event-driven updates. Unlike standard SNNs that rely solely on simple integration mechanisms, SPLR leverages structured state-space dynamics to handle both short-term and long-term dependencies efficiently, allowing it to process complex temporal patterns across varying timescales.\\n\\n3. **Revisions to Address This Concern**: \\n We have included a discussion on the limitations of traditional SNNs in modeling long-term dependencies due to their short memory span and the lack of explicit temporal mechanisms.\\n We have expanded on how SPLR Convolution and SA-HiPPO overcome these limitations by providing structured mechanisms for long-term memory retention and efficient temporal modeling.\\n\\n\\n**References**\\n\\n1. Shen S, Zhao D, Shen G, et al. TIM: An efficient temporal interaction module for spiking transformer. IJCAI, 2024. \\n2. Zhang S, Yang Q, Ma C, et al. Tc-lif: A two-compartment spiking neuron model for long-term sequential modelling. AAAI, 2024.\\n3. Wang L, Yu Z. Autaptic synaptic circuit enhances spatio-temporal predictive learning of spiking neural networks. ICML, 2024.\\n\\n---\"}", "{\"title\": \"Rebuttal to Weaknesses Part 1\", \"comment\": \"> This work requires comprehensive improvements, with the main weaknesses outlined as follows.\", \"writing\": \"The writing in this work requires careful and comprehensive improvement, covering the overall organization of the paper, paragraph structure, and numerous details that need refinement.\\n\\n Thank you for your detailed feedback on the writing and organization of the manuscript. Below are the key improvements we have implemented:\\n\\n1. **Restructured Introduction**: The introduction has been restructured to provide a more cohesive narrative, emphasizing the motivation, key contributions, and the broader significance of the work.\\n\\n2. **Overview in Methods**: We added a high-level overview at the beginning of the methods section to guide readers through the SPLR architecture and its components, improving logical flow and understanding.\\n\\n3. **Enhanced Theoretical Section**: Intuitive explanations for key theorems and lemmas have been added to Section 3, making the theoretical content more accessible and easier to follow. These explanations contextualize the results and connect them to the SPLR model's design.\\n\\n4. **Expanded Description of Methods**: The methods section has been expanded with additional details to improve clarity and provide a deeper understanding of the proposed techniques.\\n\\n5. **Restructured Supplementary Material**: \\n - **Notation and Symbols**: A new subsection lists and defines all symbols, variables, and notations used in the paper, reducing ambiguity. \\n - **Dataset Details**: Detailed descriptions of the datasets used have been added for completeness. \\n - **Reduced Redundancies**: Repeated content has been streamlined for better readability. \\n - **Expanded Related Works**: The related works section has been expanded to provide a comprehensive context for the contributions. \\n\\nWe believe these revisions significantly improve the paper's writing, organization, and readability. Thank you for highlighting these concerns, which allowed us to refine and strengthen the presentation of our work.\\n\\n---\\n\\n> (1) The authors placed the related work section in the supplementary materials and omitted citations to many key works, which can confuse readers. For instance, the authors did not cite relevant papers when HIPPO was first mentioned (in fact, there are almost no citations in paragraphs 3 and 4 of the introduction). \\n\\n To address these concerns, we have made the following revisions:\\n\\n1. **Adding Relevant Citations to the Introduction**: While the detailed related work section remains in the supplementary materials for a more comprehensive discussion, we have added the most relevant citations directly in the introduction. Specifically, the foundational works on the HiPPO framework are now cited where it is first mentioned. Also, other critical methods and concepts referenced in paragraphs 3 and 4 of the introduction now include appropriate citations, directing readers to key prior studies.\\n\\n2. **Revised Introduction for Clarity**: We have revised the introduction to better integrate these citations into the narrative.\\n\\n3. **Broader Coverage of Relevant Work**: Beyond the introduction, we have reviewed and enhanced citations throughout the manuscript to ensure comprehensive acknowledgment of foundational contributions. \\n\\nWe believe these revisions address the reviewer\\u2019s concerns and improve the clarity, accessibility, and rigor of the paper. Thank you for highlighting this issue and helping us strengthen our manuscript.\"}" ] }
2Ey2hkFicp
Benchmarking and Enhancing Large Language Models for Biological Pathway Reasoning
[ "Haiteng Zhao", "Chang Ma", "Lingpeng Kong", "Zhi-Hong Deng" ]
Large language models (LLMs) have demonstrated remarkable performance across various domains of biology, but their ability to reason about biological pathways remains underexplored. This includes reasoning about how perturbations in biological systems lead to various downstream effects through complex intermediate processes. Such reasoning is crucial for explaining and predicting biological phenomena, as well as for formulating hypotheses and designing experiments. In this study, we investigate whether LLMs can effectively understand and reason about biological pathways by introducing BioMaze, a comprehensive benchmark focusing on reasoning about the effects and mechanisms of natural and synthetic interventions—such as mutations, infections, or treatments—on various downstream targets under different conditions through complex intermediate pathway processes. BioMaze spans multiple biological domains and is categorized along three reasoning dimensions, capturing various aspects of pathway reasoning. We evaluate LLMs using the BioMaze benchmark with reasoning methods like Chain-of-Thought (CoT) and pathway graph-augmented approaches. Results show that while LLMs can understand mechanisms in natural organisms, they struggle with predicting phenomena after perturbations, highlighting their limitations in reasoning about biological pathways. To address these challenges, we propose PathSeeker, a novel LLM agent that interactively reasons through subgraph-based navigation within the pathway graph. This approach enhances LLMs' reasoning in biological pathways by leveraging pathway graph augmentation, particularly in cases involving perturbations, potentially bridging the gap between LLMs' current capabilities and the complexities of biological systems.
[ "Large Language Model", "Reasoning", "Biology", "Biological System", "Pathway", "Agent" ]
Reject
https://openreview.net/pdf?id=2Ey2hkFicp
https://openreview.net/forum?id=2Ey2hkFicp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wgrTWteWnT", "uzyV6vqNgM", "upvOrXajNx", "ufnUEBVcXz", "q8ZCCDuMyG", "nDo9857kV6", "i2nUN0Zm0J", "hgtDMXVW9x", "ZyrrfGSZxq", "YqSljtAKH4", "Sfl9a9ejal", "RjxLljMu5n", "RJbMLDvtTJ", "PkJurzhggU", "O22NVERoWl", "IcAGCVKfWv", "FRXKxGCOOa", "ElYCm1piFx", "CRgLza3k6x", "6sQ5nPr0Zk", "5XdSgBBLgO", "3ECsx9EenH", "2iqZ5IHcAb", "2R8OjAK0qS", "2IxXTtrydA", "0lF6G3okQG" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732283923688, 1731085759355, 1733312590524, 1732284039754, 1732872898190, 1732498470704, 1732284200844, 1737524104005, 1734711243785, 1732284262199, 1732498481491, 1732284000623, 1733212694156, 1732498489697, 1732283834277, 1732577574110, 1733063837715, 1732283704416, 1732728632769, 1730836386123, 1732284285179, 1732498502551, 1732283802650, 1730695114509, 1730660838601, 1732283939078 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Reviewer_TncJ" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11113/Area_Chair_H5sk" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Reviewer_m7QK" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ], [ "ICLR.cc/2025/Conference/Submission11113/Reviewer_5QNb" ], [ "ICLR.cc/2025/Conference/Submission11113/Reviewer_MoHE" ], [ "ICLR.cc/2025/Conference/Submission11113/Authors" ] ], "structured_content_str": [ "{\"comment\": \"## Motivation for using subgraph methods rather than the whole graph\\n\\nThank you for your insightful question! While it is true that an individual pathway file,\\nsuch as MAPK, can fit within the context window of an LLM, typical and realistic biological research scenarios often do\\nnot have pre-given information of which specific pathway is relevant to the question. Furthermore, the reasoning process\\nmay involve interactions or activations spanning multiple human-splitted pathway maps.\\n\\nTo address this challenge, we combine **all** KEGG pathways into a **single**, comprehensive graph that serves as the\\naugmentation database for all the questions in the BioMaze, as we described in \\\"Pathway Graph Database\\\" of Subsection\\n3.3. This combined graph is too large to fit within the LLM\\u2019s context window in its entirety. As a result, PathSeeker,\\nalong with the baselines we selected, focuses on methods capable of dynamically identifying and extracting relevant\\nsubgraphs from large graph databases. This ensures our approach is both scalable and well-suited to the complexities of\\nreal-world biological pathway reasoning.\\n\\nWe have modified the description of the graph database in the revised draft to clarify the motivation and methodology\\nbehind our approach.\\n\\n## Evaluation of cutting-edge models as backbone\\n\\nThanks for the suggestion of experiment! We conduct experiment with backbone LLaMa 3.1-405B, and here is the result on\\nTrue/False tasks:\\n\\n| | Inquiry Type | | Extra Condition | | Investigation Target | | |\\n|------------------|--------------|-----------|-----------------|------------|----------------------|-------------|----------|\\n| LLaMa 3.1 405B | Normal | Perturbed | Natural | Intervened | Single | Interaction | Function |\\n| Viliana (0 Shot) | 84.58 | 74.19 | 81.63 | 76.26 | 80.44 | 85.14 | 76.88 |\\n| Viliana (2 Shot) | 85.47 | 73.68 | 82.21 | 75.91 | 83.34 | 83.83 | 76.65 |\\n| CoT (0 Shot) | 87.54 | 78.98 | 85.05 | 80.46 | 86.14 | 87.85 | 74.91 |\\n| CoT (2 Shot) | 86.09 | 78.65 | 83.44 | 79.45 | 85.42 | 88.14 | 77.74 |\\n| ToG | 87.31 | 79.08 | 84.19 | 78.20 | 86.04 | 89.50 | 75.36 |\\n| CoK | 83.85 | 77.51 | 81.91 | 77.42 | 84.42 | 86.42 | 78.31 |\\n| G-Retriever | 86.87 | 79.83 | 84.43 | 80.16 | 86.65 | 89.21 | 78.77 |\\n| PathSeeker | 89.43 | 81.80 | 84.21 | 82.25 | 87.83 | 87.09 | 81.82 |\", \"here_are_some_key_observations\": \"1) Cutting-edge model achieve higher performance: Overall, LLaMa 3.1-405B achieves an 8% performance improvement\\n compared to\\n the 8B version.\\n\\n2) Persistent gap in intervention scenarios: The results demonstrate that cutting-edge models exhibit varied performance\\n under different settings. A noticeable performance gap remains between natural and perturbed cases.\\n This\\n indicates that even state-of-the-art models struggle with reasoning about interventions in biological pathways\\n compared\\n to their better understanding of natural pathway states.\\n\\n3) Effectiveness of PathSeeker: Our proposed method, PathSeeker, still shows improvement comparing to CoT method,\\n particularly in scenarios involving interventions.\\n\\nWe will continue updating this dataset by incorporating data from more recent publications that were not available\\nduring the model\\u2019s pretraining phase, aiming to enhance the evaluation of LLMs in realistic biological research\\nscenarios involving pathway reasoning.\\n\\n## Further related work\\nThank you for highlighting this relevant area of research! The graph-augmented baselines we explore\\u2014such as ToG, CoK,\\nand G-Retriever\\u2014are indeed closely tied to graph-based retrieval methods, as discussed in Section 2 of the paper. We\\nhave incorporated additional related works, such as GraphRAG and GraphReader, in the revised draft.\"}", "{\"summary\": \"This paper addresses a gap in LLMs to reason biological pathways, especially with complex perturbations, interventions, and varying conditions. To address this gap, the authors first introduce a new benchmark, BioMaze, that contains 1.3k high-quality questions for biological pathways reasoning.\\n\\nNext, the paper evaluates LLMs on BioMaze with existing reasoning methods and finds that they struggle with perturbations. Then the authors propose a new reasoning approach, PathSeeker, that reasons through subgraph-based navigation within pathway graph. PathSeeker achieves better performance in biological reasoning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Clear identification of research gap: I think it is an interesting question whether LLMs can reason on biological pathways, and how well they do it. The authors have identified the limitations here clearly.\\n\\n2. Innovative benchmark: BioMaze is a valuable contribution to the field, providing a systematic evaluation framework for assessing LLM performance across various dimensions of biological pathway reasoning.\", \"weaknesses\": \"1. Data presentation is not very clear. For example, when the paper evaluates the performance of different models and reasoning methods, it simply writes \\\"performance\\\" without defining the metrics. Therefore, it is not clear whether a higher number means a better performance. In Table 2 and 3, the authors underline the lowest results, which is confusing.\\n\\n2. Baseline choice is not clear. The paper uses CoT as a baseline in 5.3.1 Task Analysis. I think a better baseline may be a method with pathway graph augmentation since PathSeeker also uses pathway graph augmentation.\\n\\n3. Analysis is not thorough enough. If the authors want to claim that PathSeeker reduces the performance gap between natural and intervened/perturbed groups, then they should provide more evidence and analysis on them.\", \"questions\": \"1. In Figure 4, how are the lines fitted? For the right figure (open-ended questions), the gap between CoT and PathSeeker is very small. What is the standard deviation?\\n\\n2. In Table 2, Table 3, Table 6, and Figure 5, please add what metrics and units are used. Also add evaluation method in Experiment section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Summary of Questions and Responses\", \"comment\": \"We thank the reviewers for all their valuable questions and suggestions. We sincerely hope our response and revisions\\nhave addressed their concerns.\\n\\nFor clarity, we summarize all questions raised by reviewers and our corresponding responses here:\\n\\n### Additional Experiments to Address Specific Questions:\\n\\n1. **Evaluation of Cutting-Edge Models as Backbone** \\n We included experimental results using larger language model backbone (LLaMA 3.1 405B) on BioMaze.\\n\\n2. **Additional Baselines for Analysis** \\n We analyzed additional baselines, including ToG, and explained why we use PathSeeker as the representative\\n graph-augmented method during analysis.\\n\\n3. **Handling Multi-Step Reasoning Decline in Chain-of-Thought (CoT) Approaches** \\n We evaluated the hierarchical CoT reasoning approach as proposed by the reviewer.\\n\\n### Additional Analysis:\\n\\n1. **Evidence for PathSeeker Reducing the Gap Between Natural and Perturbed Groups** \\n We illustrated the gap between natural and intervened/perturbed groups to provide evidence of PathSeeker's\\n effectiveness.\\n\\n2. **Error Categorization and Analysis** \\n We added error cases and detailed analyses for each category in Appendix A.2 to enhance understanding.\\n\\n### Clarification of Confusion and Misunderstandings:\\n\\n1. **Motivation for Using Subgraph Methods Instead of Whole Graphs** \\n We employed subgraph-based methods because all the KEGG pathway graphs are merged together as database, which cannot\\n be\\n processed as a single context input.\\n\\n2. **Answer Validation During Data Creation and Filtering** \\n We explained the ground truth validation process. During data creation, LLaMA 3.1 (405B) was explicitly instructed to\\n verify answers against the original paper's content five times. Only questions consistently answered correctly were\\n retained, followed by a final expert review to ensure data quality.\\n\\n### Discussion of Open Questions:\\n\\n1. **Evaluation of Open-Ended Answers** \\n We discussed the challenges of evaluating open-ended answers using rule-based approaches, similarity metrics, or\\n ROUGE scores. We also discussed the feasibility of using more cost-effective models, such as GPT-3.5.\\n\\n2. **Limitations of Pathway Graph Data** \\n We added error cases and analyses in Appendix A.2, and discussed challenges such as self-circulatory or multi-branch\\n structures in pathway graphs.\\n\\n3. **Improvement in Pathway Graph Searching** \\n We discussed the potential of further improvement in pathway graph searching methods.\\n\\n4. **Integration of RAG (Retrieval-Augmented Generation)** \\n We discussed how to combine RAG into our method, and the meaning of it.\\n\\n### Paper Modifications:\\n\\n1. **Data Presentation (Metrics and Units)**\\n2. **Interpretation of Lowest Results in Tables 2 and 3** \\n3. **Figure 4's fitting method and standard deviation** \\n4. **Further related work** \\n5. **Error cases and analysis added in Appendix A.2** \\n\\nFor details, please refer to the **Draft Modification Summary** in the Response.\"}", "{\"comment\": \"## Data correctness validation\\n\\nTo ensure question quality, we employ a two-step process. First, we filter questions using an advanced language model (\\nLLaMa 3.1-405B) to assess their relevance and clarity. Subsequently, each question undergoes a final quality check by\\nhuman reviewers.\\n\\nTo validate the answer quality, we require the LLM (LLaMa 3.1-405B) to answer the questions **based on the original\\npaper content rather than by themselves**. The model is explicitly instructed to respond with **Undetermined** if it\\ncannot confidently generate an answer. Each question is tested five times, and only questions that are consistently\\nanswered correctly (i.e., aligned\\nwith the intended label) and not marked as Undetermined in any of the trials are retained. This process helps eliminate\\nquestions with incorrect labels, ambiguous phrasing, or poor structure.\\n\\nIn the final stage, human experts perform an additional quality check to refine the questions further. Approximately 5\\\\%\\nof the data is filtered out at this stage, primarily due to issues such as hint leakage in the question, overly complex\\nphrasing (e.g., asking for multiple facts), or poorly defined structure. During this stage, human reviewers also verify\\nlabel correctness, ensuring the dataset's overall reliability and usability.\\n\\nThrough this comprehensive validation pipeline\\u2014particularly the human review step\\u2014we strive to ensure high data quality,\\nwith a focus on minimizing LLM errors and enhancing the accuracy of ground truth answers.\"}", "{\"title\": \"A Kind Reminder for Reading the Response\", \"comment\": \"We sincerely thank all the reviewers once again for their time and effort in providing insightful questions and suggestions regarding our work. We have made every effort to address these questions and incorporate the suggestions into our revisions. We sincerely hope our responses and updates adequately address your concerns.\\n\\nWe kindly request that you review our responses to ensure they effectively address your feedback. We would greatly appreciate any additional comments or suggestions and remain available to further discuss or address any remaining questions or concerns.\\n\\nThank you once again for your valuable feedback and guidance.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"We sincerely hope that our response addresses your questions. We remain available to address any further questions you may have.\"}", "{\"comment\": \"## Open-ended answer evaluation method\\n\\nThank you for the insightful question! For the evaluation of open-ended tasks, we\\nutilized LLaMA-3.1 405B, which we found to provide evaluation quality comparable to GPT-4. Additionally, we experimented\\nwith other models, including GPT-3.5 and LLaMA-3.1 70B, and obtained the following results verified by human assessment:\\n\\n| | LLaMA 3.1 405B | GPT 4 | GPT 3.5 | LLaMA 3.1 70B |\\n|-------------------------------|----------------|-------|---------|---------------|\\n| Acc with human validation (%) | 96 | 96 | 94 | 93 |\\n\\nThe observed inconsistencies between model and human evaluations often arise in cases where the generated answer is\\nclose to the ground truth but expressed in a general manner or lacks specific details. This highlights a trade-off\\nbetween evaluation precision and computational cost. For instance, GPT-3.5 is more cost-efficient, while GPT-4 offers\\nhigher accuracy at a greater expense.\\n\\nRegarding evaluation methods like rule-based approaches, similarity metrics, or ROUGE scores, they are not well-suited\\nfor open-ended generation tasks. The key challenges include:\\n\\n1) Matching Biological Entities: For instance, evaluating answers that equate terms like coatamer protein II complex and\\n COPII requires advanced engineering efforts that rule-based methods struggle to handle.\\n\\n2) Different Expressions of the Same Fact: For example, the standard answer might state, \\\"NleF slows the intracellular\\n trafficking of tsVSVG from the endoplasmic reticulum to the Golgi,\\\" while the model-generated response suggests, \\\"\\n NleF\\n causes a delay or blockage in the anterograde transport trafficking of tsVSVG, leading to changes in its\\n intracellular\\n localization.\\\" While semantically equivalent, such variations are difficult to assess using rule-based, similarity,\\n or\\n ROUGE metrics.\\n\\nGiven the additional costs and challenges of evaluating open-ended tasks, we developed the True/False task in BioMaze to\\naddress these problems, which offers both greater convenience and accuracy. The True/False questions are designed\\nas\\nprobing forms of biological pathway queries, which, while still challenging as our experiments\\ndemonstrate, simplify evaluation.\\n\\n## Pathway graph limitations (potential sources of error when using the pathway graph data)\\n\\nThank you for your suggestion! To enhance the clarity of the error categories and provide\\nmore detailed insights, we have included representative examples for each error category in Appendix A.2 of the revised\\ndraft (due to space constraints in the main paper). In particular, we present error cases related to the pathway\\ngraph-augmented method, as illustrated in Figures 10 and 12.\\n\\nOne source of errors in the pathway-augmented method arises from the inherent complexity of graph data, especially in\\npathways with self-circulatory or multi-branch structures. For example, in Figure 10, the question asks:\\n\\\"What is the effect of heparin deficiency on the formation and degradation of Ang II in these peritoneal cell cultures?\\\"\\nHere, the model's reasoning process considered the pathway involving the degradation of Ang II but overlooked the more\\ncritical pathway concerning the conversion of Angiotensin I to Angiotensin II. This omission led to an incorrect\\nconclusion.\\n\\nThe challenge arises from the textual representation of pathway graphs. Although we developed a DFS-based graph\\nsequentialization method to better capture graph features, sequential LLMs still face difficulties in understanding and\\nreasoning about complex graph structures. This limitation is especially pronounced when they need to perform deductive\\nreasoning across multiple branches or navigate self-circulatory pathways simultaneously.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper focuses on using LLMs to reason on biological pathways, specifically considering natural or synthetic interventions.\\n\\nAn important contribution is the BioMaze benchmark, compiling questions/answers based on pathways from the literature.\\n\\nA first finding is that the considered LLMs perform better when considering un-perturbed pathways than when considering interventions (L2 level as opposed to L1 level, w.r.t. the causality ladder). I am not sure this finding is surprising (L2 is notoriously more difficult to reason with and the data is less wealthy). \\n\\nThe proposed PathSeeker module leverages subgraphs information (as the overall graph integrating e.g. all Kegg pathway graphs is too large),\", \"additional_comments_on_reviewer_discussion\": \"Some issues are not adequately addressed, e.g., the quality of the BioMaze benchmark (circular assessment, after Rev. 5QNb) or the comparison with RAG approaches (although the authors argue that the use of pathways is the most natural one for the domain).\\n\\nThe area chair encourages the authors to pursue on this very promising line of research.\"}", "{\"comment\": \"## Handling multi-step reasoning decline during cot\\n\\nThank you for this inspiring idea! To address this, we conducted additional\\nexperiments by designing a hierarchical reasoning method. This approach requires the LLM to first outline the pathway\\npotentially involved in the question as a reasoning plan, and then conduct reasoning based on this self-proposed\\npathway. This method, which we denote as CoT-Self-Pathway, is similar to the pathway-augmented reasoning method, except\\nthat the pathway is self-generated by the LLM rather than provided by an external database.\\n\\nBelow are the results comparing CoT-Self-Pathway to standard CoT and our PathSeeker on the True/False task:\\n\\n| | | Inquiry Type | | Extra Condition | | Investigation Target | | |\\n|-----------|------------------|--------------|-----------|-----------------|------------|----------------------|-------------|----------|\\n| | | Normal | Perturbed | Natural | Intervened | Single | Interaction | Function |\\n| GPT-3.5 | CoT | 77.03 | 67.13 | 73.65 | 68.92 | 68.92 | 79.26 | 71.85 |\\n| | CoT-Self-Pathway | 79.11 | 67.81 | 75.14 | 66.19 | 69.27 | 81.23 | 71.02 |\\n| | PathSeeker | 78.85 | 74.44 | 77.63 | 74.36 | 78.01 | 81.66 | 73.78 |\\n| LLaMA3 8B | CoT | 81.77 | 71.63 | 79.04 | 70.67 | 79.73 | 84.35 | 71.52 |\\n| | CoT-Self-Pathway | 79.64 | 72.83 | 81.35 | 70.53 | 81.95 | 82.39 | 68.41 |\\n| | PathSeeker | 83.08 | 75.84 | 82.14 | 72.27 | 81.07 | 86.62 | 75.01 |\\n\\nAs the results indicate, CoT-Self-Pathway generally achieves performance comparable to standard CoT. We\\nobserved that the self-generated pathways tend to be more abstract and less detailed or comprehensive compared to\\npathways retrieved from graph databases. In some cases, these self-proposed pathways are more contextually aligned with\\nthe question, which can be an advantage.\\n\\nThe performance of this method is primarily constrained by the quality of the pathways generated by the LLM itself.\\nSince the LLM-generated pathways lack the details and accuracy of those from a dedicated pathway graph database,\\nCoT-Self-Pathway typically does not perform as well as pathway-augmented methods such as PathSeeker.\\n\\n## Pathway graph searching improvement \\nThank you for your insightful observation! The limitation of pathway graph\\naugmentation, particularly omissions in reasoning, can indeed be mitigated through database expansion and enhanced graph\\nsearch algorithms.\\n\\nOur method, PathSeeker, specifically focuses on improving graph search efficiency by employing\\nLLM-guided subgraph navigation, which has resulted in superior performance compared to other graph augmentation methods.\\nFor example, we conduct an error analysis including LLM-pruning graph BFS method ToG. Below are the error type\", \"classification_results_from_the_open_ended_task\": \"#### Percentage of Errors Across All Data in Open-Ended Tasks\\n\\n| | | unresolved conclusion | incomplete answer | omission in reasoning | faulty in reasoning |\\n|-----------|------------|-----------------------|-------------------|-----------------------|---------------------|\\n| GPT3.5 | CoT | 2.2 | 6.6 | 9.1 | 9.9 |\\n| | ToG | 3.2 | 4.4 | 18.5 | 4.6 |\\n| | PathSeeker | 0.5 | 6.0 | 10.7 | 6.1 |\\n| LLaMA3 8B | CoT | 1.5 | 5.5 | 8.2 | 8.3 |\\n| | ToG | 1.6 | 2.9 | 14.3 | 2.8 |\\n| | PathSeeker | 0.7 | 5.4 | 11.1 | 3.6 |\\n\\nThese results indicate that as an LLM-pruning graph BFS method, ToG is more prone to errors from omissions in reasoning,\\ndue to its less efficient graph navigation strategy. As a result, PathSeeker illustrate higher accuracy in the BioMaze\\nevaluation. We believe that further advancements in graph search efficiency could enhance pathway graph recall,\\nultimately boosting\\nthe model's reasoning capacity.\\n\\nIn this work, we primarily utilized the KEGG pathway graph database. Moving forward, we\\nplan to incorporate additional pathway databases, such as Reactome, into the project. This expansion will allow us to\\ncover a broader range of scenarios and improve the overall robustness of the method.\"}", "{\"comment\": \"We sincerely hope that our response addresses your questions. We remain available to address any further questions you may have.\"}", "{\"comment\": \"## Error categorization reason analysis\\n\\nThank you for your insightful suggestions! To improve the clarity of the error categories and provide more detailed\\ninsights, we have added representative examples of each error category in Appendix A.2 of the revised draft (due to\\nspace constraints in the main paper). While it is challenging to succinctly summarize the reasons behind each type of\\nerror, we believe these examples will help improve understanding.\\n\\nBelow, we briefly illustrate a few examples, but we strongly recommend referring to Appendix A.2 for a more\\ncomprehensive analysis.\\n1. **Omission in Reasoning** \\n This error occurs when critical steps in the reasoning process are omitted, leading to an incorrect final answer. \\n **Example:** \\n **Question:** Does BAMBI enhance or inhibit Wnt-promoted cell cycle progression? \\n **Label:** BAMBI increases Wnt-promoted cell cycle progression. \\n **Error Description**:The model's reasoning only identified BAMBI as a target of beta-catenin but failed to account for its interactions\\n with key components of the Wnt signaling pathway, such as LRP6, FZD10, and DVL1. This omission led to an incorrect\\n conclusion.\\n\\n2. **Faulty Reasoning** \\n This error occurs when the reasoning path aligns with the question context but contains significant errors in\\n deducing the events or relationships within that pathway. \\n **Example:** \\n **Question:** What is the effect of GogB-deficient Salmonella on tissue damage and colonization in the gut during long-term\\n chronic infections? \\n **Label:** GogB-deficient Salmonella cause extensive tissue damage and enhanced colonization in the gut during\\n long-term chronic infections. \\n **Error Description**:In this case, the model considered the NF-kappaB signaling pathway but failed to deduce that GogB deficiency leads to\\n increased inflammation. This critical oversight resulted in an incorrect conclusion.\\n\\n3. **Unresolved Conclusion** \\n This error occurs when the model cannot provide a definitive answer, indicating uncertainty or suggesting that the\\n answer is unknown. \\n **Example:** \\n **Question:** What is the effect of the E2A-Pbx1 fusion protein on the expression of the novel WNT gene, WNT-16, in pre-B\\n ALL cells? \\n **Label:** The E2A-Pbx1 fusion protein activates the expression of WNT-16 in pre-B ALL cells. \\n **Answer:** Further investigation would be needed to determine the exact effect of the E2A-Pbx1 fusion protein on\\n WNT-16 expression in pre-B ALL cells. \\n **Error Description**:In this case, the model is unable to determine the specific effect of the fusion protein on WNT-16 expression.\\n\\n4. **Incomplete Answer** \\n This error occurs when the response omits essential details, such as specific effects or key components of the\\n labeled answer. \\n **Example:** \\n **Question:** How is continual efferocytosis enhanced in macrophages? \\n **Label:** Continual efferocytosis is enhanced in macrophages by the metabolism of apoptotic cell-derived arginine\\n and ornithine to putrescine by macrophage arginase 1 (Arg1) and ornithine decarboxylase (ODC). \\n **Answer:** When macrophages engulf apoptotic cells (ACs), they not only clear the ACs but also metabolize the\\n engulfed cargo. This metabolic reprogramming involves the activation of specific metabolic pathways, such as\\n glycolysis and fatty acid oxidation, to generate energy and metabolites necessary for the efficient clearance of\\n ACs. \\n **Error Description**:In this case, the model highlights metabolic reprogramming but fails to mention the critical role of arginine and\\n ornithine metabolism in continual efferocytosis, resulting in an incomplete answer.\"}", "{\"title\": \"A Kind Reminder for Any Feedback\", \"comment\": \"As the discussion period comes to a close, we kindly request that reviewers provide any feedback they have on our\\nresponse and remain available to address\\nany additional questions or concerns.\\n\\nThank you,\\n\\nThe Authors\"}", "{\"comment\": \"We sincerely hope that our response addresses your questions. We remain available to address any further questions you may have.\"}", "{\"comment\": \"## Figure 4's fitting method and standard deviation\\nWe fitted the lines using a third-order polynomial curve with NumPy's polyfit method.\\n\\nThank you for suggesting the inclusion of standard deviation! In the revised draft, we have incorporated the standard\\nerror, calculated from five independent test runs, into Figure 4. The results show that the performance gap is\\nsignificant compared to the standard deviation, particularly for questions requiring a larger number of reasoning steps.\\n\\nThe observed phenomenon where \\\"the gap between CoT and PathSeeker is very small\\\" predominantly occurs for questions\\ninvolving fewer reasoning steps. For questions requiring more reasoning steps, however, the gap becomes more pronounced.\"}", "{\"title\": \"A Kind Reminder for Reading the Response\", \"comment\": \"Thank you once again for your thoughtful questions and suggestions! We sincerely hope our response and revisions have addressed your concerns. As the rebuttal period is closing soon, we kindly ask if you could read our response to ensure it effectively mitigates your concerns. We would greatly appreciate your feedback and remain available to address any additional questions or concerns.\\n\\nThank you,\\n\\nThe Authors\"}", "{\"title\": \"A Kind Reminder for Reading the Response\", \"comment\": \"As the rebuttal period is nearing its end, we kindly request the reviewers to review our response to ensure it effectively addresses your concerns. We would greatly value your feedback and are available to clarify or address any additional questions you may have.\\n\\nThank you for your time and consideration.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Draft Modification Summary\", \"comment\": \"We thank all the reviewers for their insightful suggestions and questions regarding the paper. Below, we summarize the\", \"modifications_made_to_the_paper_for_clarity\": \"1. **Metric Descriptions** \\n As suggested by reviewer TncJ, we have added detailed descriptions of the metrics used in Tables 2, 3, and 6, as well\\n as Figure 5. Additionally, we have clarified the meaning of the lowest results presented.\\n\\n2. **Standard Deviation in Figure 4** \\n Based on reviewer TncJ's suggestion, we have included the standard deviations in Figure 4.\\n\\n3. **Description of Pathway Graph Database** \\n Following reviewer m7QK's feedback, we have provided a more detailed explanation of the pathway graph database. This\\n database consists of a single, large graph created by merging all KEGG pathway maps and pathways.\\n\\n4. **Related Work** \\n As recommended by reviewer m7QK, we have expanded the Related Work section to include discussions of GraphRAG and\\n GraphReader.\\n\\n5. **Failure Reason Cases** \\n In response to the queries from reviewers 5QNb and MoHE, we have added case examples for each type of failure reason\\n in Appendix A.2. These examples aim to provide a better understanding of how errors occur during the reasoning\\n process.\\n\\n6. **Other Modification in Representation** \\n We revised some of the paper's presentation to enhance clarity, such as the illustration in Table 1.\\n\\nFor the convenience of review, we have highlighted the modifications in blue.\"}", "{\"comment\": \"## Additional result of evaluating cutting-edge models as backbone\\nWe further evaluated LLaMA 3.1-405B on BioMaze open-ended tasks. We apologize for the delay, as to minimize potential\\nmodel\\nbias during evaluation, we employed GPT-4 as the evaluation LLM for LLaMA 3.1-405B. Our analysis of the evaluator\\ndemonstrated that GPT-4\\nexhibits evaluation accuracy comparable to LLaMA 3.1-405B, with both achieving 96% consistency with human. The\\nperformance\", \"results_are_as_follows\": \"| | Inquiry Type | | Extra Condition | | Investigation Target | | |\\n|----------------|--------------|-----------|-----------------|------------|----------------------|-------------|----------|\\n| LLaMa 3.1 405B | Normal | Perturbed | Natural | Intervened | Single | Interaction | Function |\\n| CoT (0 Shot) | 85.38 | 75.91 | 82.17 | 74.96 | 79.91 | 74.05 | 81.26 |\\n| CoT (2 Shot) | 84.71 | 75.70 | 81.65 | 74.81 | 77.79 | 83.50 | 80.31 |\\n| ToG | 86.43 | 79.25 | 84.75 | 76.78 | 84.39 | 79.09 | 81.37 |\\n| CoK | 84.12 | 73.42 | 79.71 | 74.19 | 80.73 | 70.03 | 77.48 |\\n| G-Retriever | 85.78 | 80.17 | 85.80 | 75.10 | 83.00 | 77.47 | 83.28 |\\n| PathSeeker | 88.24 | 83.82 | 88.20 | 79.97 | 86.54 | 82.31 | 85.76 |\\n\\nThe key observations are similar to the result on True/False tasks:\\n\\n1) Cutting-edge model achieve higher performance: Overall, LLaMA 3.1-405B achieves a 5% performance improvement compared\\n to the 8B version.\\n\\n2) Persistent gap in intervention scenarios: A noticeable performance gap remains between natural and\\n perturbed/intervened cases. This is one of the key conclusions of our benchmarking. Despite the backbone model's\\n stronger knowledge and reasoning abilities, interventions in biological systems still pose significant challenges for\\n the LLM's reasoning.\\n\\n3) Effectiveness of PathSeeker: Our proposed method, PathSeeker, demonstrates improved performance compared to the CoT\\n method, particularly in scenarios involving interventions.\"}", "{\"summary\": \"This paper introduces a benchmark to evaluate LLMs' reasoning abilities about biological pathways including perturbed pathways. The benchmark is diverse and covers different biological domains and scenarios.\\n\\nThe authors' evaluations show that while LLMs understand natural mechanisms well, they struggle with intervention scenarios\\n\\nThe authors propose PATHSEEKER, an LLM agent that navigates pathway graphs using subgraph-based exploration. This approach improves reasoning accuracy, including accuracy for intervention scenarios.\", \"key_contributions\": [\"BioMaze Benchmark\", \"Evaluation of LLMs on benchmark\", \"PATHSEEKER Agent, analysis of its performance on benchmark, its failure modes, and ablation study\"], \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The benchmark is a solid contribution. The authors did good work in breaking down the benchmark by various categories.\", \"PATHSEEKER has promise, though I wish it were better motivated and contextualized within related work within systems biology as well as graph reasoning tasks with LLMs as well as graph-based RAG techniques.\", \"The breakdown of failure modes for LLM reasoning over pathways, particularly in terms of causality, and showing how the graph augmentation helps is useful. Breaking down the reasons for failure with human validation is also a useful contribution and I wish I saw more of that.\"], \"weaknesses\": [\"With PATHSEEKER, I think there is a lack of motivation for explore the pathways via subgraphs other than \\\"Inspired by how humans browse web networks\\\". I don't disagree with this approach per se, but I don't think the authors motivate doing it this way as opposed to, say for example, adding including the whole graph or a big chunk of it into the prompt template. Indeed, as an experiment I pasted an XML file of a MAPK KEGG map into GPT-4's context window and it fits. And if something doesn't fit, context windows will get bigger. I think the authors should motivate the local approach, for example, by citing work that demonstrates failure modes for graph-based reasoning with LLMs, and citing work that shows how local approaches do better.\", \"I find it concerning that the authors did not include results for a cutting edge model like GPT-4, Claude, PALM 2 and limited tests to GPT-3.5 and Llama-3 8b, neither of which were fined-tuned for performance in this domain. The gap between GPT-3.5 and GPT-4, as an example, on general medical QA performance is quite large. This makes me worry the benchmark might be already saturating on more advanced models. Budget could have been an issue, but could have fine-tuned GPT-3.5 (perhaps on hold-out data from their benchmark), or they could have used their instance of LLaMa-3.1-405B to answer questions as well as evaluate them. Similarly, they could have used other fine-tuned open source models to evaluate.\"], \"questions\": \"\\\"We then apply multiple data filters and validation steps to ensure the correctness, quality, and relevance\\nto biological pathways. The correctness of each question is validated by checking whether LLMs\\ncan answer it accurately using the original paper content, allowing us to exclude question-label pairs\\nwith errors. Question quality is ensured through several filters, removing questions that are poorly\\ndefined, unpredictable (e.g., asking for specific measurement values), query more than one fact, are\\ntrivial with answers revealed in the question\\u2019s context, or are unrelated to biological pathways. After\\nall the filters, BioMaze contains 1.3k high-quality questions for biological pathways reasoning\\\"\\nCan you give me more confidence that these questions all have a single right answer that can be answered from the context? To what degree are the manually verified? Filters are great, but where does the buck stop? \\n\\nA pathway is essentially a knowledgebase. It would be good to connect this work to recent approaches that use knowledgebase graph structure in RAG, such as GraphRAG. Indeed, generally speaking, contextualization within prior work could be stronger.\", \"biggest_question\": \"Why did you not run eval on cutting-edge larger models or larger open-source models like your LLaMa-3.1-405B, or fine-tuned SLMs. Bit sus. Willing to upgrade review if this concern is addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Using RAG\\n\\nThank you for your insightful feedback! Our current graph database already leverages textual content for\\nretrieving nodes and edges, which aligns with the concept of \\\"retrieval from a pathway database.\\\" A potential\\nRetrieval-Augmented Generation (RAG) approach could indeed incorporate textual modalities to provide additional\\ncontextual information in the retrieved content.\\n\\nIn practice, existing literature or pathway databases are more accessible and widely used, making this a highly\\npractical\\nsolution. Our method, PathSeeker, is adaptable to any database as long as graph navigation\\u2014particularly local subgraph\\nsearching\\u2014can be developed based on the database. We believe this capability is critical for efficiently navigating\\ngraph-structured databases.\\n\\nOn the other hand, using pure graph data provides an idealized framework for formalizing the reasoning process,\\nspecifically for testing the capacity of LLMs to perform deductive reasoning on pathway graphs.\\nSince this work focuses on Pathway Reasoning, we believe a graph-form database offers the most suitable foundation for\\nevaluating the biological pathway reasoning capabilities of LLMs.\"}", "{\"comment\": \"We sincerely hope that our response addresses your questions. We remain available to address any further questions you may have.\"}", "{\"comment\": \"## Data (metrics and units) presentation\\nThank you for the helpful feedback. We apologize for any confusion caused by the description of our metric. We\\nintroduced the metric in Subsection 5.1. For True/False tasks, we compute accuracy averaged across the True and False\\nlabels to\\naccount for label imbalance in the dataset (50% for random guessing baseline). For open-ended tasks, the LLM is used to\\nevaluate the accuracy of generated answers by comparing them to the ground truth and determining whether they are\\ncorrect or incorrect. In this study, we use the LLaMa 3.1-405B model as the evaluator, with five in-context examples.\\nThe performance of the evaluator is further analyzed in Appendix A.8. We improved the metric description subsection.\\n\\nWe add the metric description to Table 2, 3, 6 and Figure 5 in the revised draft. Thank you for the feedback on this\\nmatter.\\n\\n## Meaning of lowest results in Tables 2 and 3\\nIn Tables 2 and 3, higher metric values indicate better performance. In these results, we underline the lowest values in\\neach dimension to highlight the more challenging setting. We further explain the meaning of the underline in the revised\\ndraft.\\n\\n## Additional baselines for analysis experiment\\nThank you for your thoughtful suggestion. We chose CoT and PathSeeker as baselines to represent two distinct reasoning\", \"approaches\": \"independent reasoning by LLMs and reasoning augmented by graph structures.\\nAs shown in the main experiment in Subsection 5.2, PathSeeker effectively utilizes pathway graphs,\\nmaking it the most representative method for illustrating how graph-augmented reasoning works.\\n\\nTo further explore other graph augmentation methods, we also analyzed the ToG method. Below are the error type\", \"classification_results_from_the_open_ended_task\": \"#### Percentage of Errors Across All Data in Open-Ended Tasks\\n\\n| | | unresolved conclusion | incomplete answer | omission in reasoning | faulty in reasoning |\\n|-----------|------------|-----------------------|-------------------|-----------------------|---------------------|\\n| GPT3.5 | CoT | 2.2 | 6.6 | 9.1 | 9.9 |\\n| | ToG | 3.2 | 4.4 | 18.5 | 4.6 |\\n| | PathSeeker | 0.5 | 6.0 | 10.7 | 6.1 |\\n| LLaMA3 8B | CoT | 1.5 | 5.5 | 8.2 | 8.3 |\\n| | ToG | 1.6 | 2.9 | 14.3 | 2.8 |\\n| | PathSeeker | 0.7 | 5.4 | 11.1 | 3.6 |\\n\\nThese results indicate that as an LLM-pruning graph BFS method, ToG is more prone to errors from omissions in reasoning,\\nlikely due to its less efficient graph navigation strategy.\\nInterestingly, the phenomenon that ToG with GPT 3.5 as the backbone performed worse than with LLaMA3 8B could be\\nattributed to GPT 3.5's shorter context length (4096 tokens) compared to LLaMA3\\u2019s (8192 tokens),\\nwhich limits the extent of graph navigation and may exacerbate pathway omissions.\\n\\n## Evidence for PathSeeker reducing gap between natural and intervened/perturbed groups\\nThank you for this valuable suggestion! To better demonstrate how PathSeeker enhances intervention reasoning,\\nwe present a comparison of the performance gap between natural and intervened/perturbed groups below:\\n\\n#### True/False Task: Natural - Intervened/Perturbed (Lower is Better)\\n\\n| | | Inquiry Type Gap | Extra Condition Gap |\\n|-----------|--------------|------------------|---------------------|\\n| GPT 3.5 | CoT (2 Shot) | 9.89 | 4.73 |\\n| | PathSeeker | 4.41 | 3.27 |\\n| LLaMa3 8B | CoT (2 Shot) | 10.14 | 8.36 |\\n| | PathSeeker | 7.24 | 9.87 |\\n\\n#### Open-ended Task: Natural - Intervened/Perturbed (Lower is Better)\\n\\n| | | Inquiry Type Gap | Extra Condition Gap |\\n|-----------|--------------|------------------|---------------------|\\n| GPT 3.5 | CoT (2 Shot) | 9.02 | 6.97 |\\n| | PathSeeker | 9.93 | 4.79 |\\n| LLaMa3 8B | CoT (2 Shot) | 12.64 | 8.54 |\\n| | PathSeeker | 10.42 | 6.24 |\\n\\nThe results indicate that PathSeeker achieves smaller performance gaps between natural and intervened/perturbed groups\\ncompared to CoT in most scenarios.\\nThis suggests that leveraging pathway graphs improves reasoning for intervention cases.\"}", "{\"summary\": \"This paper introduces BioMaze, a large-scale benchmark for evaluating large language models' ability to reason about biological pathways. The authors also introduced PATHSEEKER, a new approach to enhance LLMs' performance on these tasks.\\nThey found that while LLMs can understand basic biological mechanisms but LLMs struggle when asked to reason about perturbations or interventions in biological systems. \\nThrough their experiments, they observed that LLMs perform worse on perturbed systems compared to normal conditions.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The study is very comprehensive. I like the rigorous experimental design that systematically evaluates different aspects of pathway reasoning.\\n2. It contributed to the field of BIOLOGICAL PATHWAY REASONING by making benchmarks and the problem formulation combining biological pathway reasoning with LLM capabilities.\\n3. I also found myself enjoy reading the paper and like the well-structured presentation progressing logically from problem motivation to solution.\", \"weaknesses\": \"1. The author presents error categorization but it doesn't provide detailed analysis of when and why particular types of errors occur. If the authors can provide more analysis of the occurance, it would be nice.\\n2. The validation of ground truth answers relies heavily on LLMs themselves (LLaMA 3.1-405B and GPT-4). This circular dependency could reinforce existing model biases.\", \"questions\": \"1. Maybe the author can try to answer the reason why particular types of errors occur in categorization.\\n2. May using other models to do ground truth answers validation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This study explores the under-examined ability of LLMs to reason about biological pathways, particularly focusing on how system perturbations affect downstream biological processes. The authors introduce the BioMaze dataset, a benchmark designed to assess LLMs\\u2019 reasoning on how various interventions, like mutations, infections, or treatments, impact downstream targets through complex pathway mechanisms across different biological contexts. With this dataset, the authors then test LLMs with reasoning techniques such as Chain-of-Thought (CoT) and graph-augmented methods, and they find that while LLMs can understand basic biological mechanisms, they struggle with predicting effects after perturbations. To enhance the reasoning ability of LLMs, the authors also developed PathSeeker. In this novel approach, the LLM agent navigates pathway subgraphs to improve performance in pathway reasoning, particularly in scenarios with biological perturbations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. BioMaze benchmark for biological pathway reasoning: The authors present BioMaze, a benchmark dataset designed to evaluate LLMs\\u2019 reasoning abilities within a biological context. BioMaze focuses on assessing how well LLMs comprehend and reason about complex biological pathway phenomena, including cause-effect relationships in natural and perturbed conditions. Curated from the literature, this dataset includes high-quality questions and answers generated with Llama 3.1405B and GPT-4. Covering multiple biology subfields, BioMaze undergoes extensive filtering and validation to ensure relevance, accuracy, and diversity of pathway scenarios.\\n2. Pathway graph augmentation via PATHSEEKER agent model: Given that biological pathways are naturally structured as networks, the authors incorporate pathway graph data to improve LLM reasoning. They introduce PATHSEEKER, a novel graph-augmented agent that navigates pathway subgraphs to enrich LLM understanding and support reasoning in complex pathway contexts. This approach allows LLMs to access and utilize structural information essential for nuanced pathway reasoning, particularly in scenarios involving biological interventions.\\n3. Comprehensive evaluation and analysis: The paper conducts a thorough evaluation across multiple LLM models and experimental settings, systematically analyzing LLM performance with and without pathway graph augmentation. Additionally, the ablation study of PATHSEEKER explores its effectiveness by examining API usage, step distribution, and performance impact. These analyses further strengthen the value of pathway augmentation, validating the importance of PATHSEEKER in enhancing LLMs\\u2019 reasoning capabilities in biological pathway contexts.\", \"weaknesses\": \"1. Limited evaluation method for open-ended questions: outputs from different LLMs are evaluated by another LLM, specifically using the Llama 3.1 405B model, which is considerably powerful but would be costly to replicate the results. It would be more helpful if the authors could consider some alternatives, such as using rule-based keyword-matching or for example, using ROUGE score or embedding-based summarization methods to compare how similar or dissimilar answers from LLMs are to the ground truth answers. Another alternative could be to construct different evaluation methods based on the failure modes discovered later from the error analysis study.\\n2. see questions\", \"questions\": \"1. Pathway graph limitations: This paper highlights that faulty reasoning persists even with pathway augmentation, especially with perturbations. Could the authors provide more insight into potential sources of error in the pathway graph data? Is it the case that some specific cases or graph structures are more challenging for the LLM to navigate, and some are easier for LLMs to handle?\\n2. Handling multi-step reasoning decline: Given that CoT reasoning shows decreased accuracy with increased steps, have the authors considered alternative strategies or mechanisms, such as hierarchical reasoning, to mitigate this drop in performance, or are those questions just naturally challenging? \\n3. Error analysis: The error analysis indicates that omissions remain an issue with PATHSEEKER. What approaches might the authors consider to address these issues, especially when key pathway branches are missed? Could further database expansion, enhanced subgraph search criteria, or developing a different graph search algorithm improve the performance?\\n4. Using RAG: would authors consider incorporating RAG into this framework given the graph structure of biological pathways? Specifically, RAG could allow the model to retrieve specific or relevant information from related literature or pathway databases. This retrieval would provide the LLM with dynamic access to more detailed and more recent biological knowledge, instead of the graph structure constructed from a fixed database KEGG, as currently used in the paper. \\n5. Evaluator setting in this paper: this paper proposes using the llama 405B model as the evaluator model for LLM's outputs, as this is costly to run multiple times, would authors consider any alternative evaluation approaches such as applying rule-based methods or using alternative LLMs to strengthen the statistical validity of the benchmarking results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Answer validation during data creating and filtering\\nTo ensure question quality, we employ a two-step process. First, we filter questions using an advanced language model (\\nLLaMa 3.1-405B) to assess their relevance and clarity. Subsequently, each question undergoes a final quality check by\\nhuman reviewers.\\n\\nTo validate the answer quality, we require the LLM (LLaMa 3.1-405B) to answer the questions based on the original\\npaper's content. The model is explicitly instructed to respond with **Undetermined** if it cannot confidently generate\\nan\\nanswer. Each question is tested five times, and only questions that are consistently answered correctly (i.e., aligned\\nwith the intended label) and not marked as Undetermined in any of the trials are retained. This process helps eliminate\\nquestions with incorrect labels, ambiguous phrasing, or poor structure.\\n\\nIn the final stage, human experts perform an additional quality check to refine the questions further. Approximately 5\\\\%\\nof the data is filtered out at this stage, primarily due to issues such as hint leakage in the question, overly complex\\nphrasing (e.g., asking for multiple facts), or poorly defined structure. During this stage, human reviewers also verify\\nlabel correctness, ensuring the dataset's overall reliability and usability.\"}" ] }
2ErS9Bkc3O
Towards unlocking the mystery of adversarial fragility of neural networks
[ "Jingchao Gao", "Ziqing Lu", "Raghu Mudumbai", "Xiaodong Wu", "Jirong Yi", "Catherine Xu", "Hui Xie", "Weiyu Xu" ]
In this paper, we study the adversarial robustness of deep neural networks for classification tasks. The adversarial robustness of a classification algorithm is defined as the smallest magnitude of possible additive perturbations that can change the output of the classification algorithm. We provide a matrix-theoretic explanation of the adversarial fragility of deep neural network. In particular, our theoretical results show that neural network's adversarial robustness can degrade as the input dimension $d$ increases. Analytically we show that neural networks' adversarial robustness can be only $1/\sqrt{d}$ of the best possible adversarial robustness. Our matrix-theoretic explanation is consistent with an earlier information-theoretic feature-compression-based explanation for the adversarial robustness of neural networks.
[ "deep learning", "adversarial attack", "adversarial robustness" ]
https://openreview.net/pdf?id=2ErS9Bkc3O
https://openreview.net/forum?id=2ErS9Bkc3O
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yusgFZo3ZW", "rLxbbyNhUN", "j0RpIOEsfz", "OgCYcozLpY", "Je7RIthcpq" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730204415443, 1730730846972, 1730460286885, 1730676721143, 1733155031912 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7956/Reviewer_wdU2" ], [ "ICLR.cc/2025/Conference/Submission7956/Reviewer_zPdU" ], [ "ICLR.cc/2025/Conference/Submission7956/Reviewer_Jbfe" ], [ "ICLR.cc/2025/Conference/Submission7956/Reviewer_wWt5" ], [ "ICLR.cc/2025/Conference/Submission7956/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper studied the smallest magnitude of perturbations that could alter the model output, particularly in linear cases under several assumptions. The authors demonstrated that the adversarial robustness of models degraded as the input dimension $d$ increased. Besides, they analytically showed that the adversarial robustness of linear networks could be $1/\\\\sqrt{d}$ of that of the minimum-distance classifier.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"$\\\\bullet$ Exploring the smallest magnitude of perturbations that can change model output is intriguing. The paper also provided detailed derivations and proofs.\\n\\n$\\\\bullet$ The comparison between the adversarial robustness of minimum-distance classifiers and DNNs is also noteworthy.\", \"weaknesses\": \"1. Several theorems were based on assumptions that were too strong, and provided little assurance that the analysis of \\\"adversarial robustness of neural networks\\\" can be generalized to any two-layer DNN, including:\\n\\n$\\\\bullet$ Theorem 1 analyzed the robustness of a two-layer linear network under the following assumptions: (a) each dimension of the training samples follows a standard Gaussian distribution, (b) the activation layer is identity matrix $I$, (c) the linear matrix $H$ is an orthogonal matrix, and (d) for the i-th sample, each i is a distinct label, with the model outputting a score of 1 for category i, and a score of 0 for all other categories, as stated in Eq (1). \\n\\n$\\\\bullet$ Theorem 4 used similar assumptions.\\n\\nTo improve it, the authors could include detailed discussions of how Theorems 1, 4, 6 and 7 might extend to, or provide insights into, more general neural network architectures used in practice. \\n\\n2.\\tWriting: The authors could revise the manuscript to highlight the physical significance of each theorem, the limitations, and the insights for the DNN robustness community. For example, the authors could discuss potential implications of Theorem 7 for adversarial attacks on real-world DNNs.\\n\\n$\\\\bullet$ It is suggested to focus on the potential significance and application scenarios of each theorem and lemma, and the general ideas of the proofs in the main text, while moving the detailed proofs (e.g. Lines 151-244) to the appendix. For example, for Theorem 7, what is the actual context in which \\\"the classifier wrongly think the input is $x+\\\\epsilon x_2$ instead of $x+\\\\epsilon x_1$\\\"? \\n\\n$\\\\bullet$ Since each theorem has different assumptions, it would be beneficial to make a table that clearly lists the assumptions of each theorem, indicating which theorems represent purely ideal cases and which can be generalized to typical DNNs.\\n\\n3.\\tThe abstract and introduction contained overclaims. Most of the theorems presented in the main text were derived under strong assumptions, and some conclusions had its restrictions, e.g., in linear networks. For example, in the abstract, the authors demonstrated \\\"neural network\\u2019s adversarial robustness can degrade \\u2026 only $1/\\\\sqrt{d}$ of the best possible adversarial robustness.\\\" What is \\\"best possible adversarial robustness\\\"? Do these results apply to any DNN, or only to linear networks, or only to two-layer linear networks? Are there some strict assumptions implicit in this conclusion? The authors should revise the abstract and introduction to clarify the conditions under which the conclusions are applicable.\\n\\nBesides, the authors could conduct validation experiments on typical DNNs (e.g. validating Theorem 7 on the convolutional neural networks such as the LeNet or ResNet-18). And a thorough discussion of the limitations is recommended.\", \"questions\": \"1.\\tBased on the assumptions in Eq. (1) and Eq. (5), under what conditions can $w_i$ satisfy these assumptions? Does this imply that new constraints have been added to $w_i$? The authors should discuss the practical feasibility of these conditions and how they relate to real-world DNNs.\\n\\n2.\\tIn Theorems 1 and 4, what does \\\"with high probability\\\" specifically refer to? Please provide a rigorous definition in the main text or appendix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper provides a theoretical analysis of adversarial robustness in several specific contexts. It examines the dataset and adversarial perturbations across different settings, starting with a random linear network and progressing to trained multi-layer non-linear networks and arbitrary multi-class datasets. The authors conduct experiments with 12-dimensional synthetic data and linear or two-layer networks to support their theoretical findings regarding the sizes of adversarial perturbations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Writing: The paper is well-written and effectively communicates its final goal from the outset. The intuitions behind the proofs are presented in a highly accessible manner.\", \"thoroughly_detailed_experiments\": \"The experiments are described with great clarity and organization, providing all the necessary details for readers to fully understand the methodology and findings.\", \"weaknesses\": \"Lack of Related Research: The paper overlooks existing theoretical work on random networks, such as \\\"Adversarial Examples in Multi-Layer Random ReLU Networks\\\" by Bartlett et al.\", \"concepts\": \"With the exception of Theorem 7, the settings discussed are largely unrelated to each other or to real-world scenarios. Theorem 7 relies heavily on linearity, (although the claim to apply on for highly non-linear networks) state only that changes in output due to input perturbations can be captured through projections on the relevant gradients.\", \"overstating_generality\": \"The paper makes broad claims about phenomena related to dimensionality that are primarily observed in random networks, a point that is only briefly mentioned in the introduction and not sufficiently discussed throughout the rest of the paper.\", \"questions\": \"1. Doesn't the perturbation e chosen is a one gradient step attack with simple targeted loss?\\n2. Does x + \\\\epsilon x1, and x+ \\\\epsilon x2 necessarily classified differently? it seem plausible that the classification is different only for very big \\\\epsilon. have you tested it?\\n3. What does the experiments shows?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a theoretical analysis of why neural networks classifiers are susceptible to adversarial perturbation, ie, adversarial fragility: specifically why small, targeted perturbations can dramatically change their classification outputs. The authors challenge existing theories, which attribute this fragility to factors like smoothness of decision functions or curvature of decision boundaries, arguing these approaches only partially address the problem. The authors present a matrix-theoretic analysis of this problem and explore how neural networks' robustness declines as the input dimension increases, theorizing that their adversarial robustness is inherently limited to approximately $1/\\\\sqrt \\ud835\\udc51$ of optimal robustness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A better understanding of adversarial attacks and robustness of neural networks remains an important topic.\", \"weaknesses\": [\"To the best of my knowledge, the paper's conclusion that adversarial robustness can only be $1/\\\\sqrt d$ is already known [1, 2] and has been shown in more general settings.\", \"The theoretical analysis is weak, as all the theorems make important and unrealistic assumptions, i.e. normal distribution of the data, constraints on the weight matrices.\", \"$\\\\ell_2$ is the only distance considered, several other papers have proposed theoretical analysis with respect to the $\\\\ell_p$ norm (see [1, 2]).\", \"Theorem 1 spans 2 full pages to show a probabilistic bound over a linear network with several assumptions, but it's unclear why the authors came to all this work, since the distance to the decision boundary for a linear network can be computed in closed form.\", \"The paper proposes a total of 7 theorems, each of which is accompanied by a proof.\", \"The paper does not propose any related work\", \"The paper does not provide usable results\", \"The experimental section only proposes toy experiments\"], \"suggestions_for_improving_the_paper\": [\"Instead of presenting a list of theorems, the authors should motivate their analysis and explain why it's interesting. How can these results help the community? Even if the theoretical analysis has assumptions, how can it be useful for real-world applications?\", \"Authors should propose a related work section and compare their analysis with other work. How is their analysis better or novel than the competing work?\", \"Authors should propose real-world experiments, adversarial robustness is now a mature research topic, and large-scale (e.g. ImageNet) experiments should be performed.\", \"[1] Yang et al. Randomised smoothing of all shapes and sizes. ICML 2020\", \"[2] Kumar et al. Curse of Dimensionality on Randomised Smoothing for Certifiable Robustness. ICML 2020\"], \"questions\": \"See suggestions for improving the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript provides a theoretical investigation into the robustness of DNNs in classification tasks. Through rigorous matrix-theoretic analysis, they establish that the minimum adversarial perturbation\\u2014the smallest input modification required to change a network's classification decision\\u2014exhibits an intrinsic relationship with input dimensionality.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"I appreciate the clear presentation and detailed theoretical derivation of this manuscript.\", \"weaknesses\": [\"I am not an expert in this theoretical area, thus I cannot check all proof details and judge the theoretical contribution.\", \"From my perspective, the conclusion of this work---adversarial robustness can degrade as the input dimension d increases---is not rigorous.\", \"What if the additional dimension of $\\\\bf x$ is correlated with other dimensions? I.e., the new dimension does not bring any new imformation, would it degrade the robustness?\", \"On the other hand, if the new dimension brings new information, the new $\\\\bf x \\\\in R^{d+1}$ and the prior $\\\\bf x \\\\in R^{d}$ are drawn from different data distributions. How to compare the robustness of DNNs over different data distributions?\", \"How to compare the norm for variables with different dimensions? I.e., let $\\\\bf \\\\delta_1\\\\in R^d$ and $\\\\bf \\\\delta_2\\\\in R^{d+1}$, can we directly compare $||\\\\delta_1||_2$ and $||\\\\delta_2||_2$? They are in different dimensions, for example, can we say volume > area > length?\"], \"questions\": \"ref weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
2ET561DyPe
Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement
[ "Bryan Bo Cao", "Lawrence O'Gorman", "Michael Coss", "Shubham Jain" ]
We propose Few-Class Arena (FCA), as a unified benchmark with focus on testing efficient image classification models for few classes. A wide variety of benchmark datasets with many classes (80-1000) have been created to assist Computer Vision architectural evolution. An increasing number of vision models are evaluated with these many-class datasets. However, real-world applications often involve substantially fewer classes of interest (2-10). This gap between many and few classes makes it difficult to predict performance of the few-class applications using models trained on the available many-class datasets. To date, little has been offered to evaluate models in this Few-Class Regime. We conduct a systematic evaluation of the ResNet family trained on ImageNet subsets from 2 to 1000 classes, and test a wide spectrum of Convolutional Neural Networks and Transformer architectures over ten datasets by using our newly proposed FCA tool. Furthermore, to aid an up-front assessment of dataset difficulty and a more efficient selection of models, we incorporate a difficulty measure as a function of class similarity. FCA offers a new tool for efficient machine learning in the Few-Class Regime, with goals ranging from a new efficient class similarity proposal, to lightweight model architecture design, to a new scaling law. FCA is user-friendly and can be easily extended to new models and datasets, facilitating future research work. Our benchmark is available at https://github.com/bryanbocao/fca.
[ "Few-Class", "lightweight", "small neural network", "benchmark", "scaling law", "image similarity", "convolutional neural network", "CNN", "transformer" ]
Accept (Poster)
https://openreview.net/pdf?id=2ET561DyPe
https://openreview.net/forum?id=2ET561DyPe
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xIOy9Soakw", "wA450MLy52", "vukY7CuRyo", "vP1RRdkvaL", "tbUQvLs1Yq", "sSnbk0hmtU", "nqj7ukKZOF", "mi6mCmNLge", "lG4si6VIzF", "jrsFFrll13", "iAlMRyChxy", "g8wL8YLBnN", "fuBQOEqyHU", "dQsWsVjBi9", "d1Fx5pwolH", "Zqm1ljCv2s", "XyRVb6C2yA", "UsaRaRewsk", "RCWcYSEqDZ", "PtHmlcxoHI", "O31V5k4moO", "NYbxj2uyTZ", "Iec0TXiXtx", "D26KjAsMsl", "8zVihq9vC3", "6aPvt8jXRj", "4k8pRng5nh", "3sWmCwedNg", "2JS5WXytq6", "1gEUfmsGuT" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730671766488, 1732769774432, 1733017628408, 1733020917182, 1732942693892, 1730836441101, 1732943093032, 1732761424537, 1732941001566, 1733019121677, 1732646628579, 1732771100286, 1732772280732, 1732647428573, 1732594808158, 1732647166218, 1732764163149, 1732770321566, 1735187580828, 1732643602351, 1733021123968, 1732944189030, 1730657420017, 1733207124510, 1737524137658, 1733018384423, 1732943346347, 1730702693195, 1733293582371, 1732937025950 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11666/Reviewer_2XhF" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Reviewer_trsf" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Reviewer_o3dH" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Area_Chair_YtJK" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Reviewer_o3dH" ], [ "ICLR.cc/2025/Conference/Submission11666/Reviewer_o3dH" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Reviewer_6NMw" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ], [ "ICLR.cc/2025/Conference/Submission11666/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a new benchmark for the \\u201cfew-class\\u201d problem, which is a classification problem with very few classes. Most of the scientific literature focuses on datasets with many classes while practitioners often encounter the few-class scenario. The benchmark consist of several selected datasets and several settings, such as training on large set of classes and evaluating on a smaller set, and popular vision models are evaluated and compared. Finally, an analysis of what happens in few-class regimes is proposed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The benchmark is well executed and will be useful for \\u201cfew-class\\u201d adaptation research. There are many models evaluated with many datasets and the analysis is thorough. In particular, the author study in depth the evolution of the performance of models trained on large set of classes compared to specialized models, as a function of the number of classes, and show the importance, and therefore propose a metric for evaluating this adaptation.\", \"The similarity benchmark is a nice addition. It correlates well with the performance while being easy to evaluate and with a modest cost.\", \"The presentation and writing are very clear. The figures are very informative.\"], \"weaknesses\": [\"The motivation behind few-class evaluations is not fully convincing. In practice, one will take a large model and fine-tune it (without the classification layer) to a target set of classes, hence obtaining a specialized model. Evaluating the capabilities of a full model on few-class is only interesting when there are too many subsets to consider ? When does that happen in practice, and could you just not use small adaptation layers on top of frozen backbone for each of the subsets ?\", \"One thing that is missing from the paper is a recommendation for practitioners on which vision model to use for someone interested in the few-class problem. Basically discussing in more details the results from Table 1 and providing comparison between models in Section 4.2 and 4.3. One interesting question is, do models that perform really well on the many classes setup are the same that also perform well on the few-class setup ?\", \"Some of the findings in the paper are fully expected. The fact that a model specifically trained on the target subset of classes perform better that a larger model trained on a superset is not very surprising or novel.\"], \"questions\": [\"What original research do you expect will use this benchmark and what do you hope it will achieve or unlock ?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Q3: Discussion on the low correlation for DCN-Full\", \"comment\": \"__Q3__: \\\"_I agree with your description and ad-hoc interpretation of Fig. 5. However, I am missing a discussion on why we see the low correlation for DCN-Full. Do you have any interpretation of this?_\\\"\\n\\n__Ans__: Apologize for the missing discussion. In a nutshell, the models in DCN-Full, trained by the overall training paradigm in full datasets may have learned __general features__ that could include __extraneous parameters__ not beneficial to some sub classes. This leads to __high variance__ of accuracies within the same sub-classes, meaning that such models' performance does not represent the upper bound of achievable sub-class accuracy. Our proposed __Nearest Inter-Class Similarity__ $S'_{\\\\beta}$ is designed to capture the inherent difficulty of a target (sub)dataset, which, in theory correlates with the highest empirical accuracies (as verified in Fig. 5 (c) and (d)). The fact that the __high variance__ in DCN-Full, which __fails to reflect these sub-class upper limits__, results in the __low correlation__ shown in Fig. 5 (a) and (b).\\n\\nWe hypothesize that models in DCN-Full (derived from the full models in large full datasets) have learned __general visual representations__ not specific to the target sub-classes. The conventional training paradigm enforces the model to perform __equally well__, if balanced, on all many classes in the full dataset. As a result, some parameters that could potentially be beneficial to a certain sub-class set may need to be __adjusted__, __sacrificing performance on some sub-classes__ for others in order to minimize the overall objective function (typically cross-entropy loss) among all many classes. When such full models are deployed directly to the Few-Class Regime, high variance occurs as the overall cross-entropy loss function does not encourage a model to learn representations specific to each target subset. This leads to the scattered pattern observed in Fig. 5 (a) and (b).\\n\\nWe would like to bring these under-explored questions to the community, which inspired the development of the Few-Class Arena tool for further study in this area. A summarized discussion is included in Section 4.3 RESULTS ON FC-SIM in blue in the revised version.\"}", "{\"title\": \"W1: Few classes selection and semantic closeness\", \"comment\": \"We thank Reviewer 6NMw for recognizing the utility of our benchmark, the behavioral analysis of models, and the advantages of the proposed similarity metric in correlating with model performance. We would address each concern separately.\\n\\n__W1__: \\\"_...there are no details discussed on how these few classes have been selected and how semantically close these few classes to each other?_\\\"\\n\\n__Ans__: __Few classes are randomly sampled by seed numbers from 0 - 4 by default.__ The details were specified on __Line 82__: \\\"_Each model is tested on 5 subsets whose_ $N_{CL}$ _classes are randomly sampled from the original 1000 classes._\\\", __Line 187__: \\\"_Few-Class Arena generates the specific model and dataset configuration files for each subset, where subset classes are randomly extracted from the full set of classes, as specified by the seed number._\\\" and __Line 412__: \\\"_For reproducible results, we use seed numbers from 0 to 4 to generate 5 subsets for one_ $N_{CL}$ _by default._\\\" For extension to Object Detection and Segmentation, these are detailed from __Line 1610 to 1613__: \\\"_The procedure is consistent with the method outlined in Section3.3 and 3.4. Specifically, for a specific_ $N_{CL}$ (2 in this example), we randomly sample the_ $N_{CL}$ _classes from the full dataset of COCO, where each consists of five subsets with seed numbers from 0 to 4. We performed experiments with 2, 5, 80._\\\"\\n\\nThe proposed Similarity-Based Silhouette Score (SimSS) is designed to capture the __semantic cluster__ characteristics of few classes, in particular \\\"_the (1) tightness of a class cluster and (2) distance to other classes of class clusters, are features that characterize the inherent class difficulty,_\\\" described on __Line 315__. We kindly recommend that the reviewer examine __Fig. 18__ closely for detailed insights into the semantic closeness across ten datasets, spanning from the Many-Class to the __Few-Class Regime__.\"}", "{\"title\": \"Q1: Full model predictions in the Few-Class Regime\", \"comment\": \"__Q1__: \\\"_For FC-Full, when_ $N_{CL}$ _decreases, how to make sure that model predicts only few classes? Are the logits of those few classes are selected to get the prediction and discard logits of all other classes?_\\\"\\n\\n__Ans__: __No modification of the logits (or any layers).__ The few classes are a subset of the full dataset, we let the full models output the labels (predictions) and compare them against the ground truth label to calculate the Top-1 accuracy.\"}", "{\"title\": \"W3: Novel findings\", \"comment\": \"__W3__: \\\"_Some of the findings in the paper are fully expected. The fact that a model specifically trained on the target subset of classes perform better that a larger model trained on a superset is not very surprising or novel._\\\"\\n\\n__Ans__: Our paper presents in a progressive way: first revisit some well observed findings in prior work (which are expected) \\\"_(a) Sub-models attain higher upper-bound accuracy than full models.\\\", \\\"(b) The range of accuracy widens for full models at few-classes, which increases the uncertainty of a practitioner selecting a model for few classes. In contrast, sub-models narrow the range._\\\" The __scaling law__ is an emerging general law but mainly describing how models __scale up__: \\\"_(c) Full models follow the scaling law [1] in the dimension of model size - larger models (darker red) have higher accuracy from many to few classes_.\\\" Then we present our __novel findings__ of __scaling down__ in the __Few-Class Regime__: \\\"_(d) Surprisingly, the scaling law is violated for sub-models in the Few-Class Regime where larger models do not necessarily perform better than smaller ones._\\\" described from Line 94 - 102 and Fig. 1. The role of __image similarity__ (measured by SimSS as a proxy of dataset difficulty) is __more pronounced__ in the __Few-Class Regime__ than in the Many-Class Regime, as demonstrated across ten datasets with a wide range of $N_{CL}$ in Fig. 18 in Section A.10. Notably, this type of dataset difficulty measurement has not been considered in existing scaling laws, as mentioned in Line 1467.\\n\\n[1] Jared et al. Scaling laws forneural language preprintarXiv:2001.08361,2020.\"}", "{\"summary\": \"This paper tackles the problem of choosing image classifiers for tasks with only a small number of categories (\\\"Few-Class\\\"). To do so, they introduce a new benchmark, termed \\\"Few Class Arena\\\" (FCA) on which they train and evaluate a range of models on subsets of various full datasets (e.g wit so-called sub-models trained on between 2 and all 1k ImageNet categories). The FCA benchmark is open-sourced with code available on GitHub. The paper provides detailed discussion of how the open-source package can be used for model selection in the few-class setting.\\n\\nOverall, the authors show that models trained on specific sub-classes (sub-models) are better than models trained on the full dataset and evaluated on the same sub-classes, across model sizes. They further show that there is no single best model architecture for a given dataset, and that training models on different datasets result in different rankings of architecture. The authors also propose a \\\"dataset difficulty\\\" metric which can be computed without training a model, and correlates well with the few-class performance of a model on a dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Tab 1 / Fig 2b shows the results when training different base architectures on different classification datasets from scratch. Interestingly, not all results are correlated with the trend on ImageNet-1K, indicating the optimal architecture choice depends on the dataset.\", \"The code is open sourced and well documented. It seems that it would be simple for a researcher to reproduce the authors claim with limited effort (though I have not run the code myself).\", \"It is interesting that sub-models consistently outperform full models on ImageNet. The fact that full models have seen more training datapoints in total may have compensated for fewer classes, which makes the result not totally intuitive.\"], \"weaknesses\": \"My main issue with this paper is in overall utility. The high level goal of the paper is to provide a tool with which practitioners can select a model (dominantly through the lens of model *architecture*) for a few-class classification task. The tool basically allows authors to train a model (with most results presented from scratch) on subsets of a given dataset. However, this does not align with the practical problem to me, where practitioners might take a model pretrained on a large amount of *data* (e.g DINOv2 or CLIP) is finetuned for a given task (note that lightweight variants of these models are also open-source).\\n\\nGiven that this paper is predominantly an empirical examination which proposes a practical open-source library, I feel that the lack of experiments with pretrained models prevents acceptance.\", \"other_issues\": [\"The citation format makes the main text quite difficult to parse\", \"L52: Main text does not seem to describe Figure 1 accurately?\", \"There is no discussion of few-shot literature, which is at least tangentially related to this problem\"], \"questions\": [\"I may have missed something, but I cannot understand exactly how the proposed difficulty metric is intended to be used?\", \"The proposed difficulty metric seems expensive to compute, with pairwise similarity scores required for large subsets of the data. How does this compare to the cost of conducting a single model training run?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Q: Original research to unlock\", \"comment\": \"__Q__: \\\"_What original research do you expect will use this benchmark and what do you hope it will achieve or unlock ?_\\\"\\n\\n__Ans__: By using this tool, we aim to explore key research questions in the Few-Class Regime: What is the simplest baseline model capable of meeting performance criteria within this context? What is an effective dataset difficulty measurement method to assist in model selection? How does the scaling law behave in this setting?\\n\\nWhen scaling up to a larger dataset, the scaling Law might guide us in scaling up models in terms of data size, model size etc. However, we show that image similarity (as a proxy of dataset difficulty) plays a more and more important role as $N_{CL}$ decreases to the Few-Class Regime, as demonstrated across all datasets in Fig. 18. We argue that simply downscaling a model blindly without considering class similarity may yield a model selection with sub-optimal efficiency. Therefore, our benchmark tool can assist in the next (down) scaling law that takes image similarity (as a way to measure dataset difficulty) into consideration for model selection.\\n\\nThese issues have been discussed from Line 1463 - 1471. We have introduced a new argument on Line 47 in the revised draft.\"}", "{\"title\": \"W2 & Q1: Discussion on fine-tuned models by pre-trained self-supervised models\", \"comment\": \"__W2__: \\\"*The experiments cover classification tasks based on supervised (pre-)training. However, there is an increasing trend that classification models are fine-tuning based on self-supervised pre-trained models. This paradigm is not covered in this study, and therefore, the findings are limited to the more traditional fully supervised paradigm. The authors could discuss these aspects or add experimentation with self-supervised pre-trained models (if available).*\\\"\\n\\n__Q1__: \\\"*(Related to W2) The finding that the scaling law w.r.t. model size is violated for submodes is interesting. I would be curious if this only applies to supervised training or also to self-supervised pre-training combined with minimal fine-tuning or linear probing. Did you conduct any experiments into this direction or do you have an intuition on that?*\\\"\\n\\n__Ans__: We appreciate the suggestion of pre-trained models by Self-Supervised Learning (SSL) methods, which can potentially bring new insights into the Few-Class Regime study.\", \"we_would_like_to_first_highlight_the___research_goal_differences__\": \"in the __Few-Class Regime__, our goal is to identify the __smallest baseline__ model that achieves the desired performance accuracy while utilizing the __minimal set of features__ necessary for the target __few-class applications__. This direction is fundamentally different from the SSL framework that aim to learn __general representations__ by leveraging the knowledge from large amounts of __unlabeled data__, typically via a a pretext task. This can further enable models adapted to a target application from a large well pre-trained SSL model. However, we hypothesize that such a (SSL) pre-trained model may have included __extraneous__ features that are not required in the target application, which can contradict with the aforementioned \\\"efficiency\\\" goal in the __Few-Class Regime__ (discussed on Line 384 in Section 4.2 RESULTS ON FC-SUB). To investigate the Few-Class problems, we strategically select sub-models as they are only trained on the target few-classes. We apologize for the lack of clarity regarding their relations and differences, and have addressed this issue in the revised draft. Please refer to Line 47 and Line 845 in A.3 EXTENDED RELATED WORK within the Appendix for a detailed explanation. The new addition is highlighted in blue.\\n\\nWe have included additional study on comparing with fine-tuned (SSL) models in Table 2 in 4.4 COMPARISON WITH FINE-TUNED MODELS in the revision. Table 2 shows the Top-1 Accuracies for different configurations on CIFAR100. $N_{CL} \\\\in$ \\\\{2, 4, 100\\\\}. Best scores are highlighted in bold.\\n\\nFor ViT, we fine-tune a ViT-B model initialized with weights from the CLIP pre-trained backbone. A linear layer is added on top, and the model is trained for 10 epochs. This setup is indicated by the star symbol (*). \\n\\n---\\n\\n__Table 2__\\n| Model Type | $N_{CL}$ | ResNet18 | ResNet50 | MobileViT-Small | ViT-B |\\n|----------|----------|----------|----------|----------|----------|\\n| Full | 100 | 76.11 | 73.71 | 73.83 | 32.54 |\\n| Full | 4 | 75.10 |\\t72.20 | 72.35 | 36.15 |\\n| Fine-tuned | 4 | \\t87.60 |\\t\\t__90.55__ | __90.00__ | __91.16__* |\\n| Sub | 4 | __90.65__ | 90.15 | 89.45 | 85.40 |\\n| Full | 2 | 75.00 |\\t71.30 | 71.80 | 40.80 |\\n| Fine-tuned | 2 | \\t87.90 |\\t\\t93.70 | 90.50 | 95.20* |\\n| Sub | 2 | __96.30__ | __95.30__ | __95.50__ | __95.90__ |\\n\\n---\\n\\nWe conclude that the fine-tuned models exhibit patterns and trends consistent with the observations presented in Fig. 1. Note our focus of this work is to leverage the proposed difficult measurement method, FC-Sim, to efficiently estimate the achievable model accuracy, thereby assisting in model selection in the Few-Class Regime. Sub-models can offer insights into the minimal visual features required for a specific real-world scenario as they are trained exclusively on the target classes. In contrast, weights pre-trained on large full datasets -- whether through fully supervised or self-supervised manner -- may include extraneous features that are irrelevant to the target classes. We hereby prioritize sub-model study in this work.\"}", "{\"title\": \"W2: Recommendation for practitioners\", \"comment\": \"__W2.1__: \\\"_One thing that is missing from the paper is a recommendation for practitioners on which vision model to use for someone interested in the few-class problem._\\\"\\n\\n__Ans__: We have added A.2 BENCHMARK USAGE GUIDELINE in the revised draft. We would like to distinguish between the use cases of practitioners (P) and researchers (R): Ps are primarily interested in selecting the optimal, model while Rs would like to study many models in the Few-Class Regime. For P, users can compute the proposed SimSS score to identify and select the model that best satisfies their accuracy and hardware constraints. For R, we assume the main interest is the study of model comparison (full and sub-models in FC-Full and FC-Sub), and accessing difficulty measurements in FC-Sim. Our benchmark tool features in configuration files that cover various scenarios by allowing specifications such as $N_{CL}$, seed numbers and so forth. It provides streamlined interfaces, sparing users from managing such tedious implementations for users to conduct large-scale experiments, detailed in Section 3 FEW-CLASS ARENA (FCA).\\n\\n__W2.2__: \\\"_Basically discussing in more details the results from Table 1 and providing comparison between models in Section 4.2 and 4.3._\\\"\\n\\n__Ans__: We have already provided the details (in the original draft) of ten models on the ten datasets from Fig. 8 to Fig. 17 in Section A.8. Due to the large amount of result data, we summarized the comparisons in Fig. 3 and 4 in Section 4.2 and 4.3 and discussed the general observations. We encourage a more concrete question if possible.\\n\\n__W2.3__: \\\"_One interesting question is, do models that perform really well on the many classes setup are the same that also perform well on the few-class setup ?_\\\"\\n\\n__Ans__: __Not necessarily__. We show that the rankings differ dramatically for different models on various datasets in Fig. 2 (b) and Fig. 7.\"}", "{\"title\": \"W3: How is C_hat acquired?\", \"comment\": \"__W3__: \\\"_To compute SimSS, a score called Nearest Inter-Class Similarity requires a nearest class (C_hat) to the target class (C), it is not clear how this C_hat is acquired._\\\"\\n\\n__Ans__: As described on Line 326, it is \\\"_a scalar describing the similarity among instances between class C and the closest class of each instance in C_\\\".\\n\\nIt is computed simply using a __max()__ function (where a higher similarity score indicates greater closeness) applied to the class candidate list (sim_c_p_ls) in our released code:\\n\\nmax(sim_c_p_ls)\", \"https\": \"//github.com/fewclassarena/fca/blob/aa796880953a58f79b243a855d7aad3a221b8587/configs/_base_/sim.py#L220\"}", "{\"comment\": \"Thank you for sharing the additional experiments w.r.t. W1 which indeed align with your initial findings for image classification.\\n\\nI understand that additional experiments are time consuming and not everything can be covered within a rebuttal period.\\nHowever, I would be curious about your thoughts and/or intuition on W2, Q1, Q2, Q3 as well.\"}", "{\"title\": \"Q1: Difficulty metric usage guideline\", \"comment\": \"__Q1__: \\\"_I may have missed something, but I cannot understand exactly how the proposed difficulty metric is intended to be used?_\\\"\\n\\n__Ans__: A user only needs to provide the target dataset with class labels, it will compute the difficulty score. Recall our benchmark tool is designed for both practitioners and researchers. A practitioner can follow the guidelines detailed in the README.md file in the GitHub link (https://github.com/fewclassarena/fca) and run the execute command. The returned difficulty score can be further used to index the smaller target range of models. In contrast, a research may have interests in analyzing a set of difficulty scores in various sub-class sets, our tool includes the feature of utilizing configuration files to conduct large-scale experiments. By specifying these configurations, a research can only execute once and wait for all results. We include a high-level guidelines in A.2 BENCHMARK USAGE GUIDELINE. We encourage users to refer to our GitHub link for the detailed usage.\"}", "{\"title\": \"Q2: difficulty metric compute complexity, comparison to a single model training run\", \"comment\": \"__Q2__: \\\"_The proposed difficulty metric seems expensive to compute, with pairwise similarity scores required for large subsets of the data. How does this compare to the cost of conducting a single model training run?_\\\"\\n\\n__Ans__: Since the difficulty metric involves pairwise computations, the time complexity is quadratic, which can be a problem when the number of classes is large. However, our target usage is in the Few-Class Regime with few classes. Based on our empirical testing on ten datasets, it takes around a few minutes to obtain results for $N_{CL}<10$. The cost is affordable compared to the hours or days required for model training.\\n\\nWe would like to emphasize our motivation that without such a difficult score, it requires the re-evaluation of published models or even retraining (for many training and testing runs) to find an optimal model in an expensive architectural search space.\"}", "{\"title\": \"Making progress\", \"comment\": \"Thank you for your response! I am summarizing the results and will address other questions shortly. Please stay tuned. Appreciate your patience.\"}", "{\"title\": \"W1: Generalizing to other vision tasks (e.g. object detection and segmentation)\", \"comment\": \"We thank Reviewer o3dH for the constructive feedback on __recognizing the motivation__, __acknowledging the significant finding that has not been previously overlooked in the literature__ as well as __appreciating the utility of our benchmark tool__. The comments have been instrumental in refining our work. We would like to address each concern separately and apologize for the delay in our response due to our commitment to conducting additional experiments.\\n\\n__W1: Generalizing to other vision tasks (e.g. object detection and segmentation)__\\n\\n__Ans:__ We agree on the review that this study on few-class models should be extended to other vision tasks as mentioned in line 115 in our original submission. As the first work and benchmark tool to assist in application and research in the few-class regime, we intend to __remain our focus solely on image classification (IC) in this work__ with object detection (OD) and segmentation (Seg) as future work. This strategic decision aligns with the progressive advancements in the computer vision community, which often begins with IC (such as the ResNet [1], EfficientNet [2], ViT [3], MMPretrain [4]) before moving to more complex tasks of OD or S, such as the R-CNN series [5-6], EfficientDet [7], ViT for OD [8], MMOD[9] etc.\\n\\nTo __assess the generalization of the few-class properties to OD and Seg__, we conducted additional __validation__ experiments using YOLOv8. The procedure is consistent with the method outlined in the Figure 1 caption.\\nSpecifically, for a specific $N_{CL}$ (2 in this example), we randomly sample the (2) classes from the full dataset of COCO, where each $N_{CL}$ consists of five subsets with seed numbers from 0 to 4. We performed experiments with $N_{CL} = $\\\\{2, 5, 80\\\\}. The YOLOv8-nano model was chosen since we focus on efficiency. Image size of 320x320 was used. Model performance was evaluated using the standard metric, mean average precision at an IoU threshold of 0.5 (__mAP@50__). The tables below summarize our results:\\n\\n---\\n\\n__Table of Average Score\\u2191__\\n| Model | $N_{CL}$ | OD mAP@50 | Seg mAP@50 |\\n|----------|----------|----------|----------|\\n| YOLOv8-nano F | 2 | \\t0.488 |\\t\\t0.475 |\\n| __YOLOv8-nano S__ | 2 | \\t__0.538 (0.150+)__ |\\t\\t__0.482 (0.007+)__ |\\n| YOLOv8-nano F | 5 | \\t0.456 |\\t\\t0.435 |\\n| __YOLOv8-nano S__ | 5 | \\t__0.503 (0.047+)__ |\\t\\t__0.474 (0.039+)__ |\\n| YOLOv8-nano F | 80 | \\t0.405 |\\t\\t0.378 |\\n\\n---\\n\\n__Table of Standard Deviation\\u2193__\\n| Model | $N_{CL}$ | OD mAP@50 | Seg mAP@50 |\\n|----------|----------|----------|----------|\\n| YOLOv8-nano F | 2 | \\t0.161 |\\t\\t0.180 |\\n| __YOLOv8-nano S__ | 2 | \\t__0.106 (0.055-)__ |\\t\\t__0.152 (0.028-)__ |\\n| YOLOv8-nano F | 5 | \\t0.090 |\\t\\t0.098 |\\n| __YOLOv8-nano S__ | 5 | \\t__0.069 (0.021-)__ |\\t\\t__0.084 (0.014-)__ |\\n| YOLOv8-nano F | 80 | \\t0.195 |\\t\\t0.200 |\\n\\n---\\n\\nwhere each row summarizes the results of 5 models for a given $N_{CL}$. F: Full model; S: Sub-model. Since there is no \\\"few class\\\" for the full dataset ($N_{CL}=80$), the corresponding model consists only of the full model (using the downloaded pre-trained weights). The arrow signs shows the direction of a better value, e.g., \\u2191 means higher is better.\\n\\nOverall, these results in OD and S align with the main findings: _Sub-models attain higher upper-bound\\naccuracy than full models_, as indicated by the scores highlighted with the (+) sign in Table of Average Scores\\u2191; _The range of accuracy widens for full models at few-classes, which increases the uncertainty of a practitioner selecting a model for few classes. In contrast, sub-models narrow the range_, as shown in Table of Standard Deviation\\u2193.\\n\\nWe will append these results and discussions in the draft shortly.\\n\\n[1] He et al. \\\"Identity mappings in deep residual networks.\\\" ECCV, 2016.\\n\\n[2] Tan et al. \\\"Efficientnet: Rethinking model scaling for convolutional neural networks.\\\" PMLR, 2019.\\n\\n[3] Dosovitskiy et al. \\\"An image is worth 16x16 words: Transformers for image recognition at scale.\\\" arXiv:2010.11929 (2020).\\n\\n[4] MMPreTrain Contributors. Openmmlab\\u2019s pre-training toolbox and benchmark. https://github.com/open-mmlab/mmpretrain, 2023.\\n\\n[5] He et al. \\\"Mask r-cnn.\\\" ICCV. 2017.\\n\\n[6] Ren et al. \\\"Faster R-CNN: Towards real-time object detection with region proposal networks.\\\" TPAMI 2016.\\n\\n[7] Tan et al. \\\"Efficientdet: Scalable and efficient object detection.\\\"CVPR 2020.\\n\\n[8] Li et al. \\\"Exploring plain vision transformer backbones for object detection.\\\" ECCV 2022.\\n\\n[9] Chen et al. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019.\"}", "{\"title\": \"Responses to Weaknesses MO1 & M02\", \"comment\": \"We sincerely thank Reviewer trsf for __acknowledging the limitations we identified in commonly used many-class datasets__, as well as __our efforts in open sourcing our code with detailed documentation__. We also appreciate Reviewer trsf's __recognition of our new findings in the few-class regime__ that challenge the prevailing assumptions and motivate us to explore further into this direction. This has led to the development of the first benchmark specifically designed for few-class problems.\\n\\nWe would like to emphasize that, in addition to releasing our benchmark tool, we have also conducted extensive large-scale experiments. Our key findings, summarized in the Statement of Contributions encompassing a diverse range of architectures of CNNs and transformers, evaluated on 10 datasets with a total of 1591 training and testing runs.\\n\\nThe original submission includes comprehensive results for pre-trained models, referred to as Full-models, which are summarized in Figure 3. These Full-models, including ResNet50, VGG16, ConNeXt V2 Base, Inception V3, EfficientNet V2 Medium, ShuffleNet V2, MobileNet V3 Small, ViT Base, Swin Transformer V2 Base and MobileViT Small have been evaluated on ten datasets for various number of classes. Detailed results for each model are presented in Figures 8-17 in the Appendix.\\n\\n\\n__MO1: Citation format.__\\n\\n__Ans__: We have corrected the citation format. Please check the revision.\\n\\n\\n__MO2: L52: Main text does not seem to describe Figure 1 accurately?__\\n\\n__Ans__: The main text summarizes the key findings in Figure 1. Sorry, I do not understand in what specific aspect the main text does not describe Figure 1. It would be great if you can elaborate the points concretely.\"}", "{\"title\": \"Q2: Differences between model architectures\", \"comment\": \"__Q2.1__: \\\"_You mention that you conducted experiments on different architectures such as ResNet\\u2019s and ViT\\u2019s. However, the results are presented in an aggregated way. Did you find any significant differences between model architectures?_\\\"\\n\\n__Ans__: __No clear differences__ between model architectures have been observed so far in terms of the trend from full classes to few classes. Despite some differences in training various architectures, my high-level observations on the ten models on ten datasets indicate that the main factors are the scale of the models and the target dataset difficulty, which motivates us to design and develop the Similarity-Based Silhouette Score to quantify this difficulty.\\n\\n__Q2.2__: \\\"_Do the findings of Fig 1 equally apply for both architectures?_\\\"\\n\\n__Ans__: __Yes.__ We have included the __additional__ experiments on CNNs (ResNet18, 50) and Vision Transformer-Base (ViT-B). Results are summarized in Table 7. The overall trend of __ViT-B__ is __consistent__ with the observations for ResNet described in Fig. 1. We included the results in the A.11 COMPARING VARIOUS ARCHITECTURES section in the revised draft.\\n\\n---\\nTable 7\\n\\n| Model Type | $N_{CL}$ | ResNet18 | | ResNet50 | | ViT-B |\\n|----------|----------|----------|----------|----------|----------|----------|\\n| | | Top-1Acc.\\u2191 | STDEV\\u2193 | Top-1Acc.\\u2191 | STDEV\\u2193 | Top-1Acc.\\u2191 | STDEV\\u2193 |\\n| Full | 1000 | 69.90 | 0 | 76.55 | 0 | 82.37 | 0 |\\n| Full | 10 | 66.68 | 4.372 | 73.76 | 3.675 | 79.52 | 2.704 |\\n| Full | 5 | \\t65.68 | 9.680 | 72.16 | 9.302 | 79.12 | 8.146 |\\n| Full | 2 | 62.80 | 14.18 | 70.60 | 14.67 | 78.00 | 14.47 |\\n| Sub | 10 | 91.88 | 1.640 | 91.48 | 2.265 | 82.16 | 3.508 |\\n| Sub | 5 | 93.68 | 1.213 | 92.96 | 3.170 | 89.60 | 2.227 |\\n| Sub | 2 | 96.80 | 1.304 | 94.80 | 2.168 | 95.60 | 1.949 |\\n\\nNote that ViT-B outperforms both ResNet18 and ResNet50 in the full dataset, achieving a Top-1 accuracy of 82.37%. However, smaller models like ResNet18 or ResNet50 can achieve competitive performance compared to the ViT-B model. While it is well-established in the literature that ViTs generally perform better with large amounts of data, here we aim to revisit the study from a fresh Few-Class perspective, which sets our work apart from prior studies.\"}", "{\"title\": \"MO3: Discussion on Few-Shot Learning\", \"comment\": \"__MO3__: \\\"_There is no discussion of few-shot literature, which is at least tangentially related to this problem_\\\"\\n\\n__Ans__: Thank you for the suggestion of discussion the relation between Few-Shot Learning (FSL) and our work. We would like to clarify that the fundamental research questions in FSL differ from ours in the Few-Class Regime. The FSL framework aims to address the problem of __data scarsity__ with the goal for a model to leverage the representations from very __few samples__ (or none, in the case of Zero-Shot Learning), or prior knowledge that can __generalize__ effectively to other tasks or domains.\\n\\nIn stead of proposing a new learning frameworks, our Few-Class Arena focuses on the research problem of selecting the most __efficient__ model with __minimal__ features needed for the target application deployment.\\n\\nWe have clarified this on line 47 and provided a discussion on FSL in Section A.3 EXTENDED RELATED WORK, in the revised draft.\"}", "{\"metareview\": \"This work aims to benchmark image classification in the few-class regime as a proxy for performance on real-world tasks of this size. Such a focus is complementary to existing large-scale benchmarks such as ImageNet, COCO, etc. with 10s of classes or 1000 classes. The contributions include experiments over many subsets of classes on ImageNet, evaluations of popular model architectures like ResNets and ViTs, and a metric that incorporates class similarity into a difficulty score. The application of these contributions is to guide model selection, to identify the most efficient architecture for a given set of classes, and provide scaling predictions in this regime, and these applications are shown by this work. The benchmark is released as an open-source project for reproduction and future work.\", \"strengths\": \"The topic is complementary to the majority of work focusing either on (A) the large-scale regime of many classes and a lot of data or (B) the few-shot regime of scarce data. The experiments are thorough in covering different model architectures, numbers of classes, and learning with and without transfer. The proposed difficulty metric can save experimentation time (6NMw).\", \"weaknesses\": \"The evaluation of the full-class models is fair but incomplete in not conditioning on the subclasses at all: the full models could predict the argmax of the logits for the classes intersecting with a given subset. There is doubt about the utility of the contributed benchmark and analysis to the community, because transfer learning and prompting are nearly ubiquitous (trsf, 6NMw).\", \"decision\": \"This is a borderline submission, and reviewers questioned the motivation and the justification of its experimental scope. However, the revision and additional results have proven the use of the benchmark, even though it is has a somewhat narrow scope of minimal/efficient architectures that are sufficient for few-class accuracy, and reviewers were convinced to maintain or improve their ratings. The proposed difficulty metric and how it might guide model and dataset scaling brings another lens to scaling questions that are now faced in research and practice alike. Given these uses, and the full open-source release of the benchmark, the meta-reviewer sides with acceptance. Congratulations!\", \"additional_comments_on_reviewer_discussion\": \"Reviewers are borderline with ratings of marginal accept (6NMw: 6, 2XhF: 6, o3dH: 6) and marginal reject (trsf: 5). Reviewers shared key concerns including the unspecified relationship to few-shot learning, the exclusion of fine-tuning from pre-training in the scope of the benchmark and experiments even though it is a standard practice, and some concerns about the motivation and use of the benchmark. The authors provide a thorough and multi-step response to each review. 1/4 reviewers discusses with the authors (03dH) and 2/4 reviewers (6NMw, 03dH) discuss with the AC following the rebuttal and author discussion phase. During AC discussion acknowledge the rebuttal and additional experiments, and raise (6NMw: 5 to 6) or maintain (o3dH: 6) their borderline positive rating, but do not champion the paper. In the absence of discussion by other reviewers, the meta-reviewer has closely examined the points of each review and the resulting thread of responses by the authors. The main issue raised by trsf was the lack of experiments with pre-training and fine-tuning, which is addressed by the revision and rebuttal experiments and the new fine-tuning option in the proposed library, and the main issue raised by 2XhF is motivation and use, which is addressed by the application to guiding model selection.\\n\\nBy fixing the serious omission of fine-tuning, and showing an application for which the proposed benchmark can spare total experimentation resources, the revision and rebuttal have addressed the most serious shortcomings as weighed by the meta-reviewer.\"}", "{\"title\": \"Author-year citations\", \"comment\": \"Thank you for the suggestion of the correct use of author-year citations. We have updated the draft accordingly.\\n\\nFor instance,\\n\\n\\\"_Typical examples include 1000 classes in ImageNet Deng et al. (2009) for image classification, and 80 object categories in COCO Lin et al. (2014) for object detection. Previous benchmarks also extend vision to multimodal research such as image-text Lee et al. (2024); Le et al. (2024); Laurenc\\u00b8on et al. (2024); Bitton et al. (2022)._\\\"\\n\\nhas been corrected as\\n\\n\\\"_Typical examples include 1000 classes in ImageNet (Deng et al., 2009) for image classification, and 80 object categories in COCO (Lin et al., 2014) for object detection. Previous benchmarks also extend vision to multimodal research such as image-text (Lee et al., 2024; Le et al., 2024; Laurenc \\u0327on et al., 2024; Bitton et al., 2022)._\\\"\"}", "{\"title\": \"Q2: Benchmark usage guideline\", \"comment\": \"__Q2__: \\\"_if a user has a custom dataset with few classes and want to find a model that works better on this custom dataset, it would be helpful to have an explanation on how this benchmark can assist the user in this case._\\\"\\n\\n__Ans__: We have added A.2 BENCHMARK USAGE GUIDELINE in the revised draft. Please refer to the response to \\\"Q1: Difficulty metric usage guideline\\\" by Reviewer trsf and \\\"W2: Recommendation for practitioners\\\" by Reviewer 2XhF.\"}", "{\"title\": \"Actionable suggestions\", \"comment\": \"Dear Reviewer 6NMw, we really appreciate your valuable suggestions. The feedback of \\\"_suggest authors to extend their analysis_\\\" is somewhat vague for us, we would greatly appreciate any actionable suggestions for experiments to help extend our analysis and contribute to improving our work. Thank you!\"}", "{\"summary\": \"The paper introduces a benchmark designed to evaluate and select efficient image classification models in scenarios with a limited number of classes. This setting, common in real-world applications (e.g., 2-10 classes), contrasts with widely used benchmarks like ImageNet and COCO, which involve hundreds or thousands of classes. The paper presents FCA as a tool to help researchers and practitioners efficiently select models for few-class tasks. The paper coins the term ``few-class regime'' and presents some interesting non-intuitive insights regarding the performance of models that are pre-trained with many class datasets and then applied in few-class settings.\\nIn addition, they introduce a dataset difficulty metric by inverting image similarity measured via CLIP and DINOv2 features.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The necessity of a few-class benchmark is well-motivated by a strong finding that models pre-trained on many-class datasets perform worse than expected on few-class datasets which is an issue not addressed in literature thus far.\\n2. The paper presents some interesting insights that contradict expected model behavior based on intuition.\\n3. The authors provide a ready to use code base integrated with other frameworks and libraries ensuring usability.\", \"weaknesses\": \"1. The scope of the study only covers image classification, while it is in principle also applicable to dense prediction tasks where object classes are present (e.g., object detection, semantic segmentation). It would be insightful if the same findings hold for these tasks. The authors could add a discussion or experiments (if available) for other vision tasks.\\n2. The experiments cover classification tasks based on supervised (pre-)training. However, there is an increasing trend that classification models are fine-tuning based on self-supervised pre-trained models. This paradigm is not covered in this study, and therefore, the findings are limited to the more traditional fully supervised paradigm. The authors could discuss these aspects or add experimentation with self-supervised pre-trained models (if available).\", \"questions\": \"1. (Related to W2) The finding that the scaling law w.r.t. model size is violated for submodes is interesting. I would be curious if this only applies to supervised training or also to self-supervised pre-training combined with minimal fine-tuning or linear probing. Did you conduct any experiments into this direction or do you have an intuition on that?\\n2. You mention that you conducted experiments on different architectures such as ResNet\\u2019s and ViT\\u2019s. However, the results are presented in an aggregated way. Did you find any significant differences between model architectures? Do the findings of Fig 1 equally apply for both architectures?\\n3. I agree with your description and ad-hoc interpretation of Fig. 5. However, I am missing a discussion on why we see the low correlation for DCN-Full. Do you have any interpretation of this?\\n\\n\\n**Minor comments:**\\n\\n- The correct use of author-year citations would improve readability. I.e.: Author (2024) for in-text citations and (Author, 2024) elsewise.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for clarifying my questions. I also appreciate the effort of adding additional experiments.\\n\\nThere are still an open questions regarding the utility / practical usage of your approach (as also pointed out by reviewers trsf and 2XhF):\\n\\nIn your reply to to review 2XhF you mention that your benchmark is useful when my practical domain \\\"shares a subset of the object categories with the large pre-trained model, but ground truth labels are unavailable\\\".\\nI agree that when a practitioner wants to use a pre-trained model off-the-shelf for a use case with a specific number of sub-classes, they can refer to your full model (F) benchmark.\\n\\nWhen would a practitioner use the sub-model (S) benchmark? Do I understand correctly that this is to be used in a different scenario when labels and resources to train a model are available?\\n\\nAlso, could you confirm that I understood the following aspects correctly, as I did not find them explicitly stated in the manuscript:\\n\\na) Sub-models are trained from scratch rather than using the full model weights as a starting point?\\n\\nb) For both (S) and fine-tuning (FT) you train the full model not just the last linear layer?\\n\\nAssuming my assumptions a) and b) are true, I would like to add the following question:\\nWhen looking at accuracies of (S) and (FT), the differences are small, e.g. Table 2 (reported without errors) and Table 8. I would argue that (S) and (FT) are roughly equal and their accuracies are highly correlated. Taking into account that (FT) is computationally cheaper than (S), I would make an argument that (FT) is the more useful few-class metric in practice. What are your thoughts on that? Am I missing a certain property that is unique for (S)?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"W2: Transfer Learning\", \"comment\": \"__W2__: \\\"_The aspect of transfer learning has not been discussed. It is a common practice to finetune ImageNet pretrained models like ResNet50 or ViT, or recent foundation models like CLIP and DINOv2 to different downstream tasks that include adapting or finetuning them on few classes. The analysis presented in the paper is missing this exploration. Is it better to train the models from scratch on the few classes or finetuning these models work better for few classes? Does SimSS score also align on the finetuned models?_\\\"\\n\\n__Ans__: For transfer learning, we conducted __additional__ experiments during the rebuttal period. Please kindly refer to the response to _W1: Adaptation Layers in Transfer Learning_ by Reviewer 2XhF and _W2 & Q1: Discussion on fine-tuned models by pre-trained self-supervised models_ by Reviewer o3dH. The empirical observation is that the performance of the finetuned models is close to the sub-models, therefore the SimSS scores should align with the finetuned models.\"}", "{\"title\": \"All Responses Ready\", \"comment\": \"Dear Reviewer o3dH, thank you for taking your time to review our paper. We have provided responses to address all weaknesses and questions you raised. It would be appreciated if you may reconsider these responses. If any more questions arise, we are happy to provide additional clarity.\\n\\nThank you again for your valuable feedback!\"}", "{\"summary\": \"The paper proposes a benchmark tool called Few-Class Arena to benchmark models on different datasets with smaller number of classes (e.g. < 10) and propose a similarity metric called SimSS to measure dataset difficulty measure. They show that ResNet family models trained on full ImageNet 1K classes show reduced performance when tested only for few ImageNet classes (< 10 classes). On the other hand, the same models when trained on smaller number of ImageNet classes from scratch show higher performance on these classes when compared to models trained on all classes of ImageNet 1k. They show that the proposed SimSS metric can serve as a proxy to estimate the upper bound accuracy of model performance on few-class datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"\\u2022\\tAddressed an important problem by proposing the benchmarking tool.\\n\\n\\u2022\\tThe tool is designed to be user friendly and allows to run wide range of experiments by setting few hyper-parameters. The tool allows to benchmark on custom models and datasets.\\n\\n\\u2022\\tProvided a behavioral understanding between models trained on large number of classes vs smaller number of classes in the few-class regime.\\n\\n\\u2022\\tThe proposed similarity metric shown to be linearly corelated with the model performance on small number of classes. Such proxy helps to save computation cost and time of conducting various experiments.\", \"weaknesses\": \"Despite focusing on an interesting problem setting, the analysis shown in the paper has limited scope. Authors shown experiments on models evaluated or trained on smaller number of classes, however there are no details discussed on how these few classes have been selected and how semantically close these few classes to each other? Would the analysis presented differ by choosing the those few classes differently?\\n\\nThe aspect of transfer learning has not been discussed. It is a common practice to finetune ImageNet pretrained models like ResNet50 or ViT, or recent foundation models like CLIP and DINOv2 to different downstream tasks that include adapting or finetuning them on few classes. The analysis presented in the paper is missing this exploration. Is it better to train the models from scratch on the few classes or finetuning these models work better for few classes? Does SimSS score also align on the finetuned models?\\n\\nTo compute SimSS, a score called Nearest Inter-Class Similarity requires a nearest class (C_hat) to the target class (C), it is not clear how this C_hat is acquired.\\n\\nOverall, I appreciate the motive and tool for benchmarking few-class regime, however the analysis presented in the paper is incomplete, and I suggest authors to extend their analysis.\", \"questions\": \"1. For FC-Full, when N_{CL} decreases, how to make sure that model predicts only few classes? Are the logits of those few classes are selected to get the prediction and discard logits of all other classes?\\n\\n2. If a user has a custom dataset with few classes and want to find a model that works better on this custom dataset, it would be helpful to have an explanation on how this benchmark can assist the user in this case.\\n\\n---------------------------------------------------------------------------------------\", \"final_review\": \"I appreciate authors for their detailed responses to the comments from all the reviewers. Authors comprehensive responses, clarifications, and additional experiments have addressed the key weaknesses. I now vote towards borderline acceptance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank Reviewer o3dH03 for taking your time in reviewing our comments, as well as responses to other reviewers. The comments are thoughtful and helpful in improving our work.\\n\\nBefore addressing each concern in detail, we would like to draw the reviewers' attention to the broader perspective of our benchmark. This framework may require a shift in mindset for both researchers and practitioners as they consider the central research question: \\\"what is the simplest baseline model capable of meeting performance criteria within this Few-Class Regime?\\\" (Line 47).\\n\\n__Q4__: \\\"_When would a practitioner use the sub-model (S) benchmark? Do I understand correctly that this is to be used in a different scenario when labels and resources to train a model are available?_\\\"\\n\\n__Ans__: Yes, especially when the pre-trained models are difficult to obtain from public resources. FC-Sub, the sub-model (S) benchmark, would be useful in the following use cases of (1) Out-of-distribution (OOD) and (2) Bias Propagation (BP). \\n\\n(1) (OOD) is the opposite scenario of the previously mentioned practical domain that \\\"shares a subset of the object categories with the large pre-trained model\\\". For example, unusual objects like spaceships, stars or items in medical images, stretches (assuming the full models are unavailable) may not benefit from fine-tuning from datasets like ImageNet.\\n\\n(2) In some use cases that rely heavily on shape information like classifying sketches, CNNs pre-trained on ImageNet may be biased towards texture [1] (although ViT can help mitigate the texture-bias problem).\\n\\nIn such cases, it would be preferable to train a model from scratch in FC-Sub. Note that our benchmark has already included the Quickdraw345 and Textures47 datasets.\\n\\n\\\"_Also, could you confirm that I understood the following aspects correctly, as I did not find them explicitly stated in the manuscript:_\\\"\\n\\n__Q5.a__ \\\"_Sub-models are trained from scratch rather than using the full model weights as a starting point?_\\\"\\n\\n__Ans__: Yes.\\n\\n__Q6.b__ \\\"_For both (S) and fine-tuning (FT) you train the full model not just the last linear layer?_\\\"\\n\\n__Ans__: Yes. Due to time constraints, we did not attempt freezing the backbone and training only the last linear layers. However, the limited number of training epochs during fine-tuning should not significantly alter the visual backbones.\\n\\n__Q7.a__ \\\"_Assuming my assumptions a) and b) are true, I would like to add the following question: When looking at accuracies of (S) and (FT), the differences are small, e.g. Table 2 (reported without errors) and Table 8._\\\"\\n\\n__Ans__: Yes. Mentioned on Line 285 in :\\\"Then Few-Class Arena generates bash scripts for model training on each\\nsubset.\\\" in Section 3.4. We will further clarify sub-models by explicitly mentioning \\\"training from stretch\\\" in the final version.\\n\\n__Q7.b__ \\\"_I would argue that (S) and (FT) are roughly equal and their accuracies are highly correlated._\\\"\\n\\n__Ans__: Yes, based on the results.\\n\\n__Q7.c__ \\\"_Taking into account that (FT) is computationally cheaper than (S), I would make an argument that (FT) is the more useful few-class metric in practice. What are your thoughts on that? Am I missing a certain property that is unique for (S)?_\\\"\\n\\n__Ans__: While I agree that for a __single run of training__, FT is computationally cheaper than S as typically a smaller number of layers (such as the last fully-connected layers while freezing the visual backbone, or additional layers from the backbone can be selectively updated) are being trained for only a few epochs for FT. It will still take __many runs of training__ in FT in order to find the most efficient models (Line 49). The propose SimSS metric in FC-Sim is a __one-time compute on the dataset__ to index that model.\\n\\n(7.c.1) One of the goals of our benchmark is __Generality__ stated in A.1 GOALS. We aim at making it general for all use cases. Therefore we focus on models trained from scratch in FC-Sub, covering the cases of OOD and BP.\\n\\n(7.c.2) For researchers, we want the models to fully adapt to the sub-classes without potential extraneous from full datasets (Line 461) for further analysis.\\n\\nBoth (7.c.1) and (7.c.2) make us prioritize sub-models for analysis in our current work.\\n\\nGiven the amount of questions regarding fine-tuning (FT), we have added an option of (FT) for users in our benchmark code: https://github.com/fewclassarena/fca (Search the keyword \\\"Fine-Tuning sub-models\\\"). \\n\\n[1] Geirhos et al. \\\"ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.\\\" In International Conference on Learning Representations 2019.\"}", "{\"title\": \"W1: Adaptation Layers in Transfer Learning\", \"comment\": \"We sincerely thank Reviewer 2XhF for appreciating the breadth of our work (\\\"many models with many datasets\\\"), the thoroughness and depth of the trained models, the strong correlation demonstrated between the proposed similarity measure and performance, as well as the clarity and quality of our presentation and writing.\\n\\n__W1.1__: _\\\"The motivation behind few-class evaluations is not fully convincing. In practice, one will take a large model and fine-tune it (without the classification layer) to a target set of classes, hence obtaining a specialized model. \\\"_\\n\\n__Ans__: We agree on the commonly used fine-tuning method to adapt to a target sub classes.\", \"we_would_like_to_first_highlight_the___research_goal_differences__\": \"in the __Few-Class Regime__, our goal is to identify the __smallest baseline__ model that achieves the desired performance accuracy while utilizing the __minimal set of features (MF)__ necessary for the target __few-class applications__. Fine-tuning, however, does __not explicitly__ take MF into consideration. We hypothesize that such a large pre-trained model may have included __extraneous__ features that are not required in the target application, which can contradict with the aforementioned \\\"efficiency\\\" goal in the __Few-Class Regime__ (discussed on Line 384 in Section 4.2 RESULTS ON FC-SUB). Therefore, we argue that full model evaluation is necessary to investigate the Few-Class problems. Please refer to Line 47 and Line 845 in A.3 EXTENDED RELATED WORK within the Appendix for a detailed explanation. The new addition is highlighted in blue.\\n\\nWe have included __additional__ study on comparing with __fine-tuned__ pre-trained models in Table 2 in 4.4 COMPARISON WITH FINE-TUNED MODELS in the revision. We also include the results of transfer learning from a Self-Supervised pre-trained model ViT-B. Table 2 shows the Top-1 Accuracies for different configurations on CIFAR100. $N_{CL} \\\\in$ \\\\{2, 4, 100\\\\}. Best scores are highlighted in bold.\\n\\nFor ViT, we fine-tune a ViT-B model initialized with weights from the CLIP pre-trained backbone. A linear layer is added on top, and the model is trained for 10 epochs. This setup is indicated by the star symbol (*). \\n\\n---\\n\\n__Table 2__\\n| Model Type | $N_{CL}$ | ResNet18 | ResNet50 | MobileViT-Small | ViT-B |\\n|----------|----------|----------|----------|----------|----------|\\n| Full | 100 | 76.11 | 73.71 | 73.83 | 32.54 |\\n| Full | 4 | 75.10 |\\t72.20 | 72.35 | 36.15 |\\n| Fine-tuned | 4 | \\t87.60 |\\t\\t__90.55__ | __90.00__ | __91.16__* |\\n| Sub | 4 | __90.65__ | 90.15 | 89.45 | 85.40 |\\n| Full | 2 | 75.00 |\\t71.30 | 71.80 | 40.80 |\\n| Fine-tuned | 2 | \\t87.90 |\\t\\t93.70 | 90.50 | 95.20* |\\n| Sub | 2 | __96.30__ | __95.30__ | __95.50__ | __95.90__ |\\n\\n---\\n\\nWe conclude that the fine-tuned models exhibit patterns and trends consistent with the observations presented in Fig. 1. Note our focus of this work is to leverage the proposed difficult measurement method, FC-Sim, to efficiently estimate the achievable model accuracy, thereby assisting in model selection in the Few-Class Regime.\\n\\n__W1.2__: \\\"_When does that happen in practice, and could you just not use small adaptation layers on top of frozen backbone for each of the subsets ?_\\\"\\n\\n__Ans__: It happens when the usage environment __shares a subset of the object categories__ with the large pre-trained model, but __ground truth labels are unavailable__ in the target environment, which is very common when users would like to simply apply off-the-shelf models whose weights are pre-trained from large many-class datasets. Popular examples include deploying pre-trained YOLO models directly for industrial use cases, such as detecting general objects like pedestrians and vehicles. (For details on extending to object detection, please see Section A.12 in the revision and our Response to W1: Generalizing to other vision tasks (e.g., object detection and segmentation) for Reviewer o3dH. In this work, however, we focus specifically on image classification.)\\n\\nWhen a large model encounters novel classes in the deployment that were not included in its training dataset, ground truth labels are typically required in order to train new adaptation layers for classification. If we don't use any adaptation layers trained by the novel class ground truth labels, the model will simply output the class label with the highest confidence, but the correctness of this prediction cannot be guaranteed.\\\"\"}" ] }
2E6OK8cSoB
Semantic-Aware Diffusion Model for Sequential Recommendation
[ "Yaoqi Chen", "Jianjin Zhang", "Qi Chen", "Weihao Han", "Zhengxin Zeng", "Mingzheng Li", "Xue Wu", "Yujing Wang", "Hao Sun", "Xu Tan", "Weiming Zhang", "Jiang Bian", "Weiwei Deng", "Feng Sun", "Qi Zhang" ]
Sequential recommendation aims to predict the next click for a particular user based on their historical interacted item sequences. Recently, diffusion-based methods have achieved the state-of-the-art performance in sequential recommendation. However, they fail to effectively utilize the rich semantic information embedded in items during the diffusion process to accurately guide the generation, leading to sub-optimal results. To address this limitation, we designed SDRec, a **S**emantic-aware **D**iffusion model for sequential **Rec**ommendation. Our model introduces a novel architecture, the Semantic Fusion Layer, which leverages the embedding table from the encoder to incorporate item semantics into the diffusion process through an attention mechanism. Together with the well-designed contrastive and generative losses, SDRec effectively utilizes the item semantics in diffusion model, unleashing the potential of sequential recommendation. Our experiments show that SDRec has over 10% relative gain with superior efficiency compared with existing methods.
[ "Diffusion Model", "Sequential Recommendation" ]
https://openreview.net/pdf?id=2E6OK8cSoB
https://openreview.net/forum?id=2E6OK8cSoB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qPX6nOkMj6", "QA2jMMpODe", "7u49hpNVnA", "0BNQzDhuf8" ], "note_type": [ "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1729132587313, 1732004987608, 1730002616432, 1730544089982 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9990/Reviewer_JRN5" ], [ "ICLR.cc/2025/Conference/Submission9990/Authors" ], [ "ICLR.cc/2025/Conference/Submission9990/Reviewer_Tx8o" ], [ "ICLR.cc/2025/Conference/Submission9990/Reviewer_Yd3X" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes SDREC, a semantic-aware diffusion model for sequential recommendation tasks, which aims to predict the next item a user is likely to interact with based on their historical interaction sequence. The authors highlight the limitations of existing diffusion-based recommendation models, which often fail to incorporate item semantics effectively, leading to suboptimal recommendations. To address this, SDREC introduces a Semantic Fusion Layer, an innovative component designed to enhance the diffusion process by integrating item semantic information through an attention mechanism. This approach, combined with contrastive and generative losses, ensures that item semantics are fully utilized, improving the model\\u2019s accuracy in predicting user preferences.\\n\\nThe experimental results show that SDREC outperforms state-of-the-art models, achieving over a 10% relative improvement in performance while maintaining computational efficiency, making it suitable for real-time applications. The paper demonstrates SDREC\\u2019s superiority through experiments on multiple datasets, underscoring the importance of integrating item semantics in diffusion-based sequential recommendation systems.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"#### 1. **Originality**\\nThe paper presents a novel approach to sequential recommendation by introducing the **SDREC** model, which leverages a **Semantic Fusion Layer** to effectively incorporate item semantics into the diffusion process. This contribution stands out for several reasons:\\n - It addresses a critical limitation in current diffusion-based recommendation methods, which often fail to utilize semantic information effectively.\\n - The combination of a **contrastive learning framework** with a generative diffusion process is a creative and unique approach that differentiates this work from existing models.\\n\\nOverall, the originality arises from the integration of **semantic-aware mechanisms** in diffusion-based recommendation, which fills a significant gap in the current literature.\\n\\n#### 2. **Quality**\", \"the_paper_demonstrates_decent_quality_in_terms_of_both_methodological_rigor_and_empirical_validation\": [\"The authors provide a **clear and detailed description** of the SDREC model, including theoretical motivations and the design choices behind its components (e.g., the Semantic Fusion Layer).\", \"Extensive experiments are conducted on multiple **real-world datasets** (e.g., Amazon Beauty, Amazon Toys, and Movielens), showing consistent and significant improvements over state-of-the-art baselines.\", \"The implementation details, including the training strategies and parameter settings, are well-documented, ensuring reproducibility and transparency.\", \"#### 3. **Clarity**\"], \"the_paper_is_well_structured_and_clear_in_its_presentation\": [\"The introduction effectively outlines the problem and motivates the need for the proposed model.\", \"The model design and methodology are described systematically, with helpful visual aids (e.g., figures and diagrams) that clarify complex concepts, such as the diffusion process and the role of the Semantic Fusion Layer.\", \"The results and analysis are presented in an organized manner, making it easy for the reader to understand the comparative performance and the benefits of SDREC over baseline models.\", \"The clarity of explanation, coupled with structured figures, enables readers to follow the technical details without ambiguity, contributing to the paper's accessibility.\", \"#### Summary of Strengths\", \"In summary, the paper excels across multiple dimensions:\", \"**Originality**: Innovative integration of semantics in diffusion processes.\", \"**Quality**: Methodologically rigorous with comprehensive empirical validation.\", \"**Clarity**: Clear presentation supported by visual aids and structured explanations.\", \"The combination of these strengths makes this paper a valuable addition to the literature on sequential recommendation systems.\"], \"weaknesses\": \"#### 1. **Unclear Motivation and Explanation of Semantic Utilization**\\nWhile the paper introduces SDREC as a model that integrates item semantics through the **Semantic Fusion Layer**, the motivation behind why and how semantics are critical in the diffusion process remains insufficiently explained. Although the authors mention that traditional models do not effectively leverage semantics, the paper does not provide a clear, detailed rationale for why this limitation specifically impairs recommendation accuracy. Additionally, the semantic information utilized (e.g., item categories, attributes) is not well-defined, leaving readers uncertain about what constitutes the \\\"semantics\\\" and how exactly it is encoded or represented.\\n\\n**Recommendation for Improvement**: \\n - **Motivation**: The paper would benefit from a stronger motivation section that explicitly explains why semantics are crucial in sequential recommendation tasks and why their integration into the diffusion process is expected to enhance performance. The authors could provide theoretical justifications or empirical evidence showing the gap in current models and how the proposed method aims to bridge this.\\n - **Clarification of Semantics**: The authors should clearly define what they mean by \\\"item semantics.\\\" Providing specific examples (e.g., movie genres, product categories, textual descriptions) and explaining how these elements are encoded and utilized within the model would make the approach more transparent. Additionally, it would be helpful to include an illustration or case study demonstrating how semantic information influences the diffusion process and leads to better recommendations.\\n\\nBy improving the clarity of motivation and the description of semantic use, the paper could strengthen its theoretical foundation and make its contributions more accessible and convincing to readers.\\n\\n#### 2. **Scalability Concerns with Large-Scale Datasets**\\nAlthough SDREC demonstrates efficiency on moderate-sized datasets (e.g., Amazon and Movielens), the paper does not provide evidence of its scalability on **larger, real-time recommendation systems** that involve millions of users and items. Given that the diffusion process involves multiple steps and attention mechanisms, it is important to understand whether SDREC can scale without compromising latency and computational resources in a production environment.\\n\\n**Recommendation for Improvement**: Including a scalability analysis or experiments on larger datasets (e.g., a full-scale Amazon dataset or Netflix prize data) could strengthen the paper\\u2019s claims about the model\\u2019s efficiency and its readiness for real-world deployment. \\n\\n\\n\\n\\n#### Summary of Weaknesses\\nIn summary, while SDREC shows promise, the following areas need improvement:\\n - Improving the clarity of motivation and the description of semantic use.\\n - Evaluating the model's scalability with larger datasets.\", \"questions\": \"1. **Can You Clarify What \\\"Semantics\\\" Are Used and How They Are Encoded?**\\n - The term \\\"item semantics\\\" is central to the model, but the specifics are not clearly defined. Could you provide examples (e.g., item attributes, categories) and explain how these semantics are encoded and integrated into the model?\\n\\n2. **What Is the Theoretical Justification for the Semantic Fusion Layer?**\\n - The Semantic Fusion Layer is a novel component, but its role and theoretical basis in the diffusion process are not fully explained. Could the authors elaborate on why this specific mechanism enhances the recommendation performance compared to other methods?\\n\\n3. **Is SDREC Scalable to Large-Scale Real-World Applications?**\\n - The paper shows efficiency on moderate-sized datasets, but how does SDREC scale to millions of users and items in real-world settings? Have the authors conducted any scalability tests or optimizations to demonstrate its readiness for large-scale deployment?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper falls into the sequential recommendation, where a novel diffusion recommender that considers global awareness of item semantics is introduced. The proposed encode-decoder architecture is well-designed to learn from global semantics. However, the motivation is not reasonable, and the proposed technique is not practical in real-world scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Study the problem of unawareness of global semantics in diffusion recommenders.\\n\\n\\n2. Design an encoder-decoder architecture to address the issue and demonstrate the performance on three datasets across multiple baselines.\", \"weaknesses\": \"1. The motivation is not reasonable. The sequential recommendation aims to predict the next item. The recommender generally outputs the probability distribution on the item set in this setting. In other words, items with high probabilities should be ranked first by design. According to the user's past behavior in Figure 2, category 6 is the user's major interest. As a result, the concentration of distribution among the top 10 predictions is reasonable.\\n\\n\\n2. The proposed solution is not practical. A real-world recommender usually handles millions of items. The computational complexity of the proposed solution is related to the number of items, which makes it challenging to scale up and handle new items. Thus, the inference time comparison in Table 4 will have a different conclusion when using a large-scale dataset.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a semantic-aware diffusion model SDREC for sequential recommendation tasks. SDREC enhances the model's use of item semantic information by introducing the Semantic Fusion Layer, making the recommendation generation process more accurate. This layer fuses the semantic features in the embedding table to help the model better understand the user's interest dynamics when making recommendations, thereby improving the quality of recommendations. In addition, SDREC uses a contrastive learning framework to improve the model's adaptability to different sequence patterns. Experimental results show that on multiple real datasets, SDREC outperforms a variety of existing methods in recommendation accuracy and computational efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Clear motivation: the article identifies the noise problem of the diffusion model in the recommendation task, and proposes to use semantic information as a conditional input to reduce the impact of noise. This motivation is reasonable and meets the actual needs of the recommendation system.\", \"the_experimental_design_is_relatively_sufficient\": \"the paper conducts comprehensive comparative experiments with mainstream recommendation methods on multiple real data sets, demonstrating the advantages of SDREC in recommendation quality. The experimental design is relatively reasonable and verifies the effectiveness of the model.\", \"weaknesses\": \"Lack of clear formulas and detailed descriptions: A key component of the article is the Semantic Fusion Layer, but the specific implementation details of this module lack clear formula support and detailed descriptions of its design points. This makes it difficult for readers to fully understand the actual role of this module in the model and its contribution to the recommendation effect.\", \"noise_in_user_interaction_sequences\": \"The paper mentions that \\\"the encoder receives clean user sequences and explicitly captures the semantic relationship between items through contrastive learning.\\\" I understand that the author's definition of clean here refers to the original interaction sequence (no noise is introduced). But my question is that the original user interaction sequence is often not clean and may contain noisy data such as misclicks and unexpected behaviors. Existing research points out that user behavior data usually contains noise, and unprocessed click data may cause the recommendation model to deviate from the user's true preferences [r1]. Therefore, whether user sequences that have not been processed with noise can generate high-quality semantic embeddings and whether such semantic embeddings are conducive to diffusion models require in-depth analysis.\\n\\n[r1] Hongyu Lu, Min Zhang, and Shaoping Ma. 2018. Between Clicks and Satisfaction: Study on Multi-Phase User Preferences and Satisfaction for Online News Reading. In Proceedings of the International SIGIR Conference on Research and Development in Information Retrieval. ACM, 435\\u2013444\", \"questions\": \"The paper mentions using \\\"clean user sequences\\\" to generate semantic embeddings, but actual user interaction data usually contains noise (such as accidental clicks). Will the interaction sequences that have not been denoised affect the quality of semantic embeddings?\\n\\nThe description of Semantic Fusion Layer in the method section is relatively brief. Can you give a clearer explanation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2E2q9t1MFp
Impact of Data Distribution on Fairness Guarantees in Equitable Deep Learning
[ "Yan Luo", "Congcong Wen", "Hao Huang", "Minghan Li", "Min Shi", "Yi Fang", "Mengyu Wang" ]
Fairness in machine learning is paramount to human society because machine learning systems increasingly influence various aspects of our daily lives, particularly in consequence-critical tasks such as medical diagnosis. Deep learning models for medical diagnosis often exhibit biased performance across diverse demographic groups. Theoretical analyses to understand unfairness in AI-based medical diagnosis systems are still lacking. This work presents a comprehensive theoretical analysis of the impact of disease prevalence and data distributions on the fairness guarantees of deep learning models for medical diagnosis. We formalize the fairness problem, introduce assumptions, and derive fairness error bounds, algorithmic complexity, generalization bounds, convergence rates, and group-specific risk bounds. Our analysis reveals that fairness guarantees are significantly influenced by the differences in disease prevalence rates and data distributions across demographic groups. We prove that considering fairness criteria can lead to better performance than standard supervised learning. Empirical results on diverse datasets, including FairVision, CheXpert, HAM10000 and FairFace, corroborate our theoretical findings, demonstrating the impact of disease prevalence and feature distribution disparities on the equitable performance of deep learning models for tasks such as glaucoma, diabetic retinopathy, age-related macular degeneration, and pleural effusion detection. The code for analysis is publicly available via \url{https://github.com/anonymous2research/fairness_guarantees}.
[ "Fairness in Machine Learning", "Equitable Deep Learning", "Fairness Error Bound" ]
https://openreview.net/pdf?id=2E2q9t1MFp
https://openreview.net/forum?id=2E2q9t1MFp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jwKVx4p0gz", "V765jX26xd", "HwNIBE3y5G", "67oztrsMyw" ], "note_type": [ "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1731649073199, 1731019966599, 1730237269400, 1730584832139 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2870/Authors" ], [ "ICLR.cc/2025/Conference/Submission2870/Reviewer_JtZC" ], [ "ICLR.cc/2025/Conference/Submission2870/Reviewer_cpA1" ], [ "ICLR.cc/2025/Conference/Submission2870/Reviewer_rrPY" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents a theoretical framework for analyzing fairness in medical domains across diverse demographic groups. The authors present several strong analytical results, under some statistical assumptions. The authors evaluate on 4 datasets on two deep learning models with fairness over racial groups.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Overall, the evaluation seems reasonable within the specific domain. The authors present 4 real-world datasets for different detection tasks.\\n\\n2. The analytical results seem quite strong. Specifically Thm 7 and Cor 2 could be quite useful results for analyzing fairness under Gaussian assumptions.\", \"weaknesses\": \"1. Overall, the evaluation in this work is quite weak. The primary result, Fig 1 is still a mystery to me. I dont have intuition for what the feature distribution 'ought' look like. So this seems the authors present mostly AUC over 4 detection tasks.\\n\\nUnless I missed it, this work doesn't actually present any bias mitigation strategy, except some discussion about sufficiently large sampling. \\n\\n2. there seems to be some assumptions of normality within this work that might not \\n\\n3. The overall scope of this work is somewhat limited. I didn't quite get the specifics that make these bounds hold under *medical domains* specifically (vs. domain independent). Why this is a domain paper is still a mystery to me, as none of the problem setting particularize it to medical.\", \"small\": \"While the high level analytical results are fairly intuitive. I did seem to get lost in the theorem specifics. This could be an issue with me, or the notation, which I often found impenetrable without further highlighting or description in text. e.g. Thm 1, 6. Corr. 1,2.\\n\\nOverall, in my (unconfident) estimation, this could be a reasonable domain paper with strong analytical results. However, without a mitigation strategy, with some difficult interpreting the theorems, and without understanding the domain specificity (thus narrowing the paper), I'm not over the accept threshold on this.\", \"questions\": \"1. I actually don't see that the main results (like Thm 7) have the sample size as a factor in the bound? Is this correct? Shouldn't the bound improve as a factor of n?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents an interesting learning-theory-inspired analysis of fair machine learning, focussing on fairness measures that aim to equalize the loss across all groups. Specifically, the fairness measure is of a similar spirit as equalized odds/equal opportunity and is defined as the differences in expected loss across various demographic groups. Next, they derive interesting complexity bounds on this loss difference using statistical learning techniques like Hoeffding's inequality, VC dimension, and the symmetrization lemma. They have also included experimental results on several datasets alongside their theoretical contributions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"-- Interesting mathematical analysis using techniques from learning theory such as Hoeffding bound, VC dimension, and Symmetrization lemma.\\n-- The paper is generally well-written and the ideas are quite nicely presented.\\n-- They have also included experimental results to show how their upper bound can be computable in several scenarios. The main strength is that the theoretical bound seems to be computable from the experiments, so the research has both depth and applicability.\", \"weaknesses\": \"-- Though they mention AI-based medical diagnosis here and there including the abstract, I don't think the paper has anything unique to medical diagnosis here. I think the emphasis on medical diagnosis is a bit of a distraction and can be discussed only in experiments if needed.\\n\\n -- While the derivation of generalization bounds for a loss function in itself is not new, their main nuance (as per my understanding) lies in bounding the difference of the losses. They also make Lipschitz assumptions. I can increase my rating if the novelty of the analysis is distilled out.\\n\\n-- The problem statement is closer to accuracy-fairness tradeoffs. While the paper has referenced several early papers in this area of accuracy-fairness tradeoffs, a lot of other prior works in the last 2-3 years that are quite closely related to this work have not been discussed. \\n[1] Menon, A. K. and Williamson, R. C. The cost of fairness in binary classification. In Proceedings of the Conference on\\nFairness, Accountability and Transparency, 2018.\\n[2] Zhao, H. and Gordon, G. J. Inherent tradeoffs in learning fair representation. arXiv preprint arXiv:1906.08386, 2019.\\n[3] Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush Varshney. Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing. International Conference on Machine Learning 2020.\\n[4] Garg, S., Kim, M. P., and Reingold, O. Tracking and improving information in the service of fairness. In Proceedings\\nof the ACM Conference on Economics and Computation, pp. 809\\u2013824, 2019.\\n\\nFor instance, [1] also considers similar fairness metrics. [3] also looks into tradeoffs using equalized-odds-like measures and difference in errors across groups.\\n\\n-- I would also be curious if this type of analysis has been explored in the context of fairness in federated learning attempting to characterize the worst gap in loss across multiple clients.\\n\\n-- Another possible limitation: It might be difficult to extend this to demographic parity?\", \"questions\": \"-- Could you highlight the main steps or nuances in the mathematical analysis that arises due to difference of loss in comparison to standard generalization bounds on loss functions?\\n\\n-- In the experiments section, are you comparing an upper bound with an upper bound?\\n\\n-- What would be the main takeaway of the experiments section?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents theoretical results regarding the fairness losses of machine learning models. These theoretical results are then validated on different medical datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper derives a range of theoretical results involving fairness error bounds, algorithmic complexity, generalization bounds, convergence rates, and group-specific risk bounds\\n2. The paper also conducts extensive experiments on a variety of medical datasets to confirm the theoretical findings.\", \"weaknesses\": \"1. Firstly, the authors frame the problem as specifically for AI-based medical diagnosis systems. This is also reflected by the mention of specifically the medical setting in both, the abstract and the introduction. However, the setting being considered is much more general, and therefore should not be framed as being specific to the medical setting.\\n2. The motivation behind the paper is not very clear. The authors derive a number of theoretical results, however, these results are not well motivated. For example, it is not clear how these results can be useful in practice, or how they can help improve fair model training. \\n3. There is no discussion of the implications of theorem 1. Why is it useful and what insights does it provide?\\n4. The disease prevalence $r_i$ is not defined formally before assumption 1. \\n5. In Theorem 2, what is the loss that the optimal function f^* minimises?\\n6. I am not convinced that the result in Theorem 2 is correct. Firstly, there is no assumption on how close $\\\\hat{f}$ is to $f^*$. So in theory, $\\\\hat{f}$ could be very \\u2018far away\\u2019 from $f^*$ if it is not trained correctly. In this case, even as the number of data $n$ increases, the fairness errors of model $\\\\hat{f}$ could be very different from that of $f^*$. In specific, in the proof of this result, how do you get from line 790 to line 791 (i.e. from the second inequality to the third)?\\n7. $\\\\epsilon$-optimality is not defined\\n8. > The theorem suggests that to achieve a smaller fairness risk, one should have a larger sample size, a smaller VC dimension, and a smaller number of demographic groups (lines 263-265)\\n\\nThis is not necessarily true. This just means that the upper bound is small in this case, but does not necessarily mean that these parameters lead to a smaller fairness risk\\n\\n9. fairness risk in line 273 $R(f)$ is not defined explicitly.\\n10. There is no discussion on how realistic the assumptions made are, and how robust the theoretical and empirical results are to these assumptions.\", \"questions\": \"> We prove that under certain conditions, the local optima of the fairness problem can outperform those of the supervised learning problem, highlighting the importance of considering fairness criteria in model development.\\n\\nCan you please elaborate what this means?\\n\\nPlease see the weakness section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
2DD4AXOAZ8
Inference-Friendly Models With MixAttention
[ "Shashank Rajput", "Ying Sheng", "Sean Owen", "Vitaliy Chiley" ]
The size of the key-value (KV) cache plays a critical role in determining both the maximum context length and the number of concurrent requests supported during inference in modern language models. The KV cache size grows proportionally with the number of attention heads and the tokens processed, leading to increased memory consumption and slower inference for long inputs. In this work, we explore the use of MixAttention, a model architecture modification closely related to a blog published by Character.AI. MixAttention combines sliding window attention, where only a small subset of recent tokens is stored in the KV cache, with KV cache sharing across layers. Our experiments demonstrate that MixAttention significantly reduces memory usage and improves inference speed without sacrificing model performance in both short and long-context tasks. We also explore various configurations of this architecture, identifying those that maintain quality across evaluation metrics while optimizing resource efficiency.
[ "language models", "inference", "transformers", "architecture" ]
Reject
https://openreview.net/pdf?id=2DD4AXOAZ8
https://openreview.net/forum?id=2DD4AXOAZ8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xJWGVi2elR", "i7SJQZpgiE", "fRVDxsJlVH", "dtm78PFvlc", "YkClMmz8xE", "Kx4S3vnosV" ], "note_type": [ "official_review", "official_review", "meta_review", "official_review", "official_review", "decision" ], "note_created": [ 1730645152038, 1730625407434, 1734735486773, 1730593209203, 1729401469562, 1737523892642 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8185/Reviewer_xieg" ], [ "ICLR.cc/2025/Conference/Submission8185/Reviewer_wFCF" ], [ "ICLR.cc/2025/Conference/Submission8185/Area_Chair_vm6X" ], [ "ICLR.cc/2025/Conference/Submission8185/Reviewer_fhax" ], [ "ICLR.cc/2025/Conference/Submission8185/Reviewer_mii4" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The authors introduce MixAttention, an architecture that employs sliding window attention to store only recent tokens while sharing KV caches across layers. They train and evaluate four different variants and report the corresponding results.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The idea is simple and clear, the experimental setup is also quite clear.\", \"weaknesses\": \"1. This paper lacks innovation; both the recent window and multi-layer attention are established techniques. The paper simply combines these two methods without any improvements.\\n\\n2. The experimental results are presented solely as bar charts. I believe it would be beneficial to include a table with some precise values.\\n\\n3. This paper resembles more of a technical report rather than an innovative and well-developed research paper, which does not meet the high standards of ICLR.\", \"questions\": \"Refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to optimize the inference efficiency of LLMs by reducing the amount of KV cache. The core intuition of this paper is to combine two existing approaches, i.e., sliding window attention and layer-wise sharing of KV cache, to further reduce the memory cost of inference. Although this kind of combination has already been proposed by some blog and papers, this paper aims to explore the effectiveness of this kind of method from an empirical perspective.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The combination of sparsifying the token of sequence and sharing the KV cache across layers seems to be a promising method to reduce the inference cost. This paper conducts some interesting experiments, from pre-training to evaluation, to give us some insights regarding the impact of different choices of the setups of such combination.\\n2. The experiment setup is reasonably designed.\", \"weaknesses\": \"1. The novelty is limited in two ways. Firstly, it is a straightforward combination of two existing techniques without many adjustments. Secondly, this combination has already been explicitly described in the blog of character.ai, as cited by the authors.\\n2. I can get that the value of this paper is to provide some empirical guidelines of this combination method, but still, the new information brought by this paper is also limited. For example, \\u201c\\u2026having the standard KV cache computed in the deeper layers is more important for long context abilities than the standard KV cache of the first few layers.\\u201d has been declared by some existing studies. In general, the experiment conclusions of this paper are some high-level phenomenons, instead of some practical methodology.\\n3. The experiments are all based on a 5B MoE model, which makes the generalisability of the conclusions less convincing. \\n4. There are quite a few new hyper-parameters getting involved, e.g., for a N-layer model, how to decide which layers are standard attention, which layers are sliding window? how many layers for a KV-sharing group? These decisions are pre-defined in this paper, but what\\u2019s really interesting is how to make these decisions wisely given a new model.\", \"questions\": \"Please refer to the Weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes MixAttention, an architectural modification for language models that combines sliding window attention with KV cache sharing across layers to reduce memory usage and improve inference speed.\\n\\nThe main concerns were the lack of novelty as it primarily reproduced and evaluated ideas already published in a blog post. Additionally, reviewers noted that combining existing techniques (sliding window attention and KV cache sharing) without meaningful improvements or deeper analysis did not yet meet ICLR's standards.\\nWe hope the feedback helps to strengthen the paper for a future occasion.\", \"additional_comments_on_reviewer_discussion\": \"Authors did not provide a response in the feedback phase, so this remained a clear case.\"}", "{\"summary\": \"This paper ablates over a particular modification to the transformer architecture where kv-caches are shared across layers and a portion of layers use sliding window attention, for the purpose of reducing compute and memory while retaining performance.\\nTheir main findings show that sharing the KV-cache from the first layer, throughout the entire network hurts performance on RULER (at 32k ctx), and so the KV-cache for a non-sliding window attention layer should be computed at least once in deeper layers, while also controlling for the level of kv-cache sharing on the sliding window attention layers.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Cache sharing across layers has not been extensively studied and ablated over, and so this paper provides additional sample points that show the relationship between cache sharing approach and performance.\", \"The authors tested their results on RULER which is a long-context benchmark and more conventional evals such as MMLU and HellaSwag through the Gauntlet evals framework which unveils differences in performance between different KV-cache sharing approaches.\", \"Some of these KV-cache sharing variants perform as well as standard attention while being significantly cheaper in compute and memory.\"], \"weaknesses\": [\"Lack of insight or discussion as to why certain cache-sharing approaches perform better or worse.\", \"The paper lacks novelty, as it mostly relies on architectural configurations proposed by a blog by CharacterAI [1], and as a consequence, it lacks explanation as to why these configurations were selected in the first place.\", \"In general, the main critique is that the paper presents only surface level analysis of the observations and does not contribute much to a deeper understanding of why certain cache-sharing approaches perform better than others.\", \"[1] Character.AI. Optimizing AI Inference at Character.AI \\u2014 research.character.ai. https://research.character.ai/optimizing-inference/, 2024.\"], \"questions\": [\"It would be interesting to see trends between performance and degree of cache-sharing for both standard attention and sliding window attention, as this would give us a better understanding of the rate at which the performance worsens.\", \"More explanation for why certain choices were made for the experiments such as the eval benchmark of choice, selection of cache-sharing variants.\", \"More discussion and analysis of the results that leads to deeper insights.\", \"More discussion about the differences between this and the other cache-sharing paper [1].\", \"[1] William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, and Jonathan Ragan Kelly. Reducing transformer key-value cache size with cross-layer attention. arXiv preprint arXiv:2405.12981, 2024.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposed an approach called MixAttention which is interleaving standard attention with sliding window attention. Their MixAttention approach also shares KV-cache across the layers. All these optimizations lead to reduce memory usage for the model during inference without significantly deteriorating the model accuracy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is easy to follow and unlike most approaches that use custom device-level code to make inference efficient, the approach doesn't require any custom kernels. This makes the approach easier to adapt to slight changes in the model architecture or running inference on hardware from other vendors.\", \"weaknesses\": \"1. There is no novelty in the approach. The paper just evaluates the approach proposed in the [blog](https://research.character.ai/optimizing-inference/) by character.AI with slight modifications. Also, there is nothing new written in the paper different from the blog.\\n2. The authors have not put in enough effort for the paper. There is no optimization done in SGLang to optimize the inference for sliding window attention baseline.\\n3. The paper is poorly written and there are some typos in the paper. For instance, line 199 uses the word 'sequence' twice in succession.\\n4. The paper also says to refer to the appendix for a few experiments, however, there is no appendix in the paper.\\n5. I don't believe that any amount of experiments can make the paper in an acceptable format since there is no novelty.\", \"questions\": \"1. There is no Pareto improvement shown. How does the proposed approach compare to a smaller standard MoE model with similar KV-cache size? It would be ideal to see a Pareto-improvement curve with KV-cache memory on the X-axis and model accuracy on Y-axis.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
2D0uXQbntW
InfiniBench: A Comprehensive Benchmark for Large Multimodal Models in Very Long Video Understanding
[ "Kirolos Ataallah", "Chenhui Gou", "Eslam Mohamed BAKR", "Khushbu Pahwa", "Jian Ding", "Mohamed Elhoseiny" ]
Understanding long videos, ranging from tens of minutes to several hours, presents unique challenges in video comprehension. Despite the increasing importance of long-form video content, existing benchmarks primarily focus on shorter clips. To address this gap, we introduce InfiniBench a comprehensive benchmark for very long video understanding which presents 1)very long video duration, averaging 52.59 minutes per video 2)The largest number of question-answer pairs, 108.2K 3) Diversity in questions that examine nine different skills and include both multiple-choice questions and open-ended questions 4) Memory questions, such as Global Appearance that require remembering and tracking the visual aspects through the video. Using InfiniBench, we comprehensively evaluate existing Large Multi-Modality Models (LMMs) on each skill, including the commercial models such as GPT-4o and Gemini 1.5 Flash and the recent open-source models. The evaluation shows significant challenges in our benchmark. Our findings reveal that even leading AI models like GPT-4o and Gemini 1.5 Flash face challenges in achieving high performance in long video understanding, with average accuracies of just 56.01 % and 43.32 %, and average scores of 3.25 and 2.79 out of 5, respectively. Qwen2-VL matches Gemini's performance in the MCQ skills but lags significantly in open-ended question tasks. We hope this benchmark will stimulate the LMMs community towards long video and human-level understanding.
[ "video understanding", "benchmark", "long video benchmark", "long video understanding" ]
Reject
https://openreview.net/pdf?id=2D0uXQbntW
https://openreview.net/forum?id=2D0uXQbntW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z8LT94u4cf", "wyOMfzaLyP", "wsPjg19XE6", "vkSrmxzcn4", "tXTs535vna", "rJA4cALQSm", "r31gDXzFYu", "qRFViZb0ky", "nGYz1ueDE4", "mgmP9qYUBo", "hl69HiAVNu", "g0J0jTp9kX", "eETE346KpW", "cltowg8Aq8", "Zfj5mXf9BG", "ZAkQPuSW95", "VLOPOD9I1k", "R0a3zVqosd", "PpsOorCrsL", "OhYEh5kJWC", "MBr0wRlWgD", "L1h1sXpNdB", "Ku1Y9F1rt1", "J2l6ko6wkn", "DJGyDCKpoW", "CRNdL8cpy6", "BH8im85Q2F", "APDAwAVRTS", "A1LhvREmx4", "9fmthG2PPK", "9Un77b3fxP", "9TsTf9IyKF", "9OQTh2dwtv", "7adybPAkJP", "4lf3a5Io0L", "4FWdXxdRtf", "1fQ3NAD3GJ", "1RgjVPKyeB" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733047173891, 1733046298766, 1732564872696, 1732560330511, 1732559024952, 1732563829230, 1732647594451, 1732560157554, 1733046028667, 1733046079416, 1733186436637, 1737523493812, 1733157111841, 1730628877621, 1730010230918, 1732557593999, 1733186650279, 1732564165656, 1732564257959, 1732558851933, 1730567275033, 1733186793489, 1732564948339, 1732564810671, 1732564367102, 1732559858872, 1733046136994, 1730613870276, 1732644669809, 1733184844219, 1732562996927, 1734858661731, 1732565010394, 1730454807959, 1732562909394, 1733045951158, 1732559752298, 1732559275528 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_dcjq" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_6qyh" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_6qyh" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_dcjq" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_oZkF" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_147D" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_oZkF" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Area_Chair_zP6n" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Reviewer_iJs3" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ], [ "ICLR.cc/2025/Conference/Submission2250/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks\", \"comment\": \"We sincerely thank you for your thoughtful feedback and confirming that we have addressed your questions. We also greatly appreciate your detailed comments and the time you\\u2019ve taken to evaluate our work and for maintaining your positive rating.\"}", "{\"title\": \"Kind reminder: We are looking forward to your reply\", \"comment\": \"Dear Reviewer iJs3,\\n\\nWe kindly ask if our response has addressed your concerns. Fortunately, we still have till December 3rd to discuss. Therefore, please feel free to share any additional questions or feedback, and we\\u2019ll be happy to provide further clarification.\\n\\nBest regards, The Authors\"}", "{\"title\": \"Response 1, Part 2\", \"comment\": \"---\", \"q3\": \"(Continued)\\n\\n| GPT-4o | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| Video + sub + question | 45.98 | 46.35 | 35.32 | 68.02 | 81.7 | 3.46 | 3.38 | 2.72 | 3.47 | 55.474 | 3.2575 |\\n| Question + Video Info | 20.83 | 22.51 | 17.18 | 42.82 | 17.17 | 1.7 | 0.37 | 0.68 | 0.7 | 24.102 | 0.8625 |\\n| Question | 14.81 | 24.08 | 15.78 | 42.35 | 16.44 | 1.75 | 0.36 | 0.67 | 0.67 | 22.692 | 0.8625 |\\n\\n\\n**[Data Leakage vs. Common Sense]**\\n\\nIn contrast, only the blind Qwen on one skill, the ``Character Actions'', achieves closer performance than the Qwen, which takes the visual input, 36.6 and 36, respectively.\\nThis could be interpreted as the model using its common sense to answer the question.\\nThe choices in this skill contain valid actions, and only their order is wrong. \\nThus, we argue that the model could perform well using common sense to order the events.\\n\\nTo test our hypothesis, we assess the model performance on this skill as an open-ended question without choices. \\nWe leverage GPT-4o to score the models' outputs out of 5, where 0 is the worst and 5 is the best. The detailed prompt used while scoring is depicted in Figure 7.\\nAs expected, when we remove the visual input, the accuracy drops significantly from 0.79 to 0.003 as shown in the table below.\\n\\n| Inputs | GPT-4o Score |\\n|----------------------------|-------------------|\\n| Questions | 0.003 |\\n| Video + Questions | 0.79 |\\n\\n\\n---\\n> Q4. Evaluate more long-video models (e.g., Qwen2VL) and different input frame rates (1, 8, 32, 128, and more).\", \"we_have_included_three_new_recent_long_video_models\": \"1. Qwen2VL\\n2. InternVL\\n3. LLava OV\\n\\nIn addition, we tested the best-performing model on our benchmark, GPT-4o, with different input frame rates. Due to time constraints, these experiments were conducted on 20% of the dataset.\\n\\n**[Findings]**\\n\\n1. Influence of Input Frame Rate:\\n\\n* Feeding more frames intuitively improves accuracy, but the degree of improvement varies across models.\\n* For instance, GPT-4o benefits the most from higher frame rates, while LLaVA-OV\\u2019s performance remains almost unchanged despite using an 8x higher frame rate.\\n\\n2. Analysis of LLaVA-OV\\u2019s Behavior:\\n\\n* The limited benefit of higher frame rates for LLaVA-OV may be attributed to its training strategy. \\n* LLaVA-OV is trained jointly on single images, multi-images, and videos.\\n* This strategy employs a balanced visual representation approach, aggressively downsampling video inputs to ensure parity with image-based scenarios.\\n* While effective for general tasks, this aggressive downsampling likely hurts LLaVA-OV\\u2019s ability to understand long videos, limiting its benefit from higher frame rates.\\n\\n3. Skill-Specific Insights:\\n* Specific skills benefit more from higher frame rates. For example, the ``local vision+text'' skill improves most as it relies on short sequential shots.\\n* Increasing the frame rate reduces the chance of missing critical shots related to the answer, thereby boosting accuracy for such tasks.\\n\\nThe results demonstrate that while higher frame rates generally improve performance, the degree of improvement depends on the model\\u2019s design and training strategy. Models like GPT-4o, optimized for sequential inputs, show significant gains, whereas models like LLaVA-OV, which aggressively downsample videos, see minimal benefits.\"}", "{\"title\": \"Response 1, Part 4\", \"comment\": \"---\\n> Q5: **[Copyrights]** The copyright implications of using movies and TV shows and possibly releasing the dataset are not discussed and raise ethical concerns.\\n\\nWe appreciate the reviewer\\u2019s concern about copyright implications and ethical considerations related to using movies and TV shows in our dataset. Below, we clarify our approach and steps to ensure compliance with legal and ethical standards:\\n\\n1. Use of Existing Datasets (e.g., MovieNet and TVQA):\\n\\n * We rely on publicly available datasets like MovieNet and TVQA, which provide downsampled versions of video content.\\n\\n * Both datasets are widely used in academic research and distribute videos in a legally compliant manner by heavily reducing the frame rate (e.g., 1 FPS). This downsampling transforms the videos into low-resolution derivatives primarily for research purposes, which may mitigate copyright concerns.\\n\\n * While we cannot definitively state that downsampling resolves all legal and ethical issues, its widespread use in academia suggests it is generally accepted within the research community.\\n\\n2. Benchmark Without Redistribution of Videos:\\n\\n * Our benchmark does not redistribute video content directly. Instead, we provide annotations, question-answer pairs, and evaluation scripts.\\nUsers are required to independently download the videos from official sources (e.g., MovieNet or TVQA).\\n\\n * This ensures we do not claim ownership of the original video content, nor do we host or distribute it ourselves.\\n\\n * This approach aligns with practices used in existing benchmarks, where users are required to obtain the original images independently.\\n\\n3. Ethical Considerations:\\n\\n * We acknowledge that relying on copyrighted material, even in downsampled form, can raise ethical questions.\\n\\n * We are committed to transparency and ensuring our benchmark is used responsibly. To that end, we will include a clear disclaimer stating:\\n\\n1. The benchmark does not redistribute or modify original video content.\\n\\n2. Users must adhere to the terms and conditions of the original video sources.\\n\\n**Conclusion:**\\n\\nWhile we cannot claim definitive resolution of all legal or ethical concerns, our approach of using downsampled versions from widely accepted datasets, combined with ensuring users obtain the videos independently, reduces our direct responsibility for copyright compliance.\\nIf further legal concerns arise, we will explore ways to refine our approach in collaboration with legal experts and ensure the benchmark remains compliant and ethical.\\n\\n\\n---\\n> Q6: **[Duplicates]** \\nSince the dataset has around 100 questions per video, it is likely that there are (near) duplicate questions. \\n\\nWe appreciate the reviewer\\u2019s suggestion and agree that addressing potential duplicates is an important aspect of the benchmark creation process.\\n\\n**[Deduplication Methodology]**\\n\\nTo ensure the dataset is free from (near) duplicate questions, we implemented the following approach:\\n\\n1. Encoding and Similarity Calculation:\\n\\n * We used M3-Embedding [1] to encode the questions and answer choices into vector representations.\\n\\n * Cosine similarity was then calculated to identify potential duplicates.\\n\\n2. Thresholds for Evaluation:\\n\\n * To account for varying degrees of similarity, we evaluated three thresholds: 90%, 95%, and 98% cosine similarity.\\n\\n3. Findings:\\n\\n * As shown in Figure X of the revised paper, the vast majority of questions across all skills are unique, with no duplicates detected.\\n\\n * Two skills, Temporal Questions and Character Actions, showed instances of potential duplicates.\\n\\n**[Analysis of Detected Duplicates]**\\n\\nUpon investigation, we found that the detected duplicates were false positives. For example, the following pair of questions was flagged as duplicates because they differ only by the words \\\"before\\\" and \\\"after\\\". However, this small difference completely changes the meaning of the question:\\n\\n1. Q1: Did the event flashback to Phoebe completing a mile on a hippity-hop before turning thirty, happen before the event Monica makes breakfast with chocolate-chip pancakes?\\n\\n2. Q2: Did the event flashback to Phoebe completing a mile on a hippity-hop before turning thirty, happen after the event Monica makes breakfast with chocolate-chip pancakes?\\n\\nThese examples highlight the importance of semantic context in evaluating question similarity, as minor lexical differences can significantly alter meaning.\\n\\n**Conclusion**\\n\\nBased on our analysis, we are confident that the dataset is free from true duplicates, and our pipeline effectively identifies and handles potential near-duplicates. The false positives flagged by the similarity detection process underscore the complexity of semantic evaluation, especially in nuanced question construction.\\n\\n\\n[1] Multi-Granularity, Multi-Linguality Multi-Functionality. \\\"M3-Embedding: Multi-Linguality, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation.\\\"\"}", "{\"title\": \"Response 1, Part 2\", \"comment\": \"---\\n> Q3: \\n\\n> Q3.1: On GPT-4o evaluation, only 250 frames are selected. Are the 250 frames selected uniformly? \\n\\nYes, we sample 250 frames uniformly.\\n\\n> Q3.2: Have you tried to reduce the frame size and squeeze more frames into GPT-4o?\\n\\n1. Influence of Input Frame Rate:\\n\\n* Feeding more frames intuitively improves accuracy, but the degree of improvement varies across models.\\n\\n* For instance, GPT-4o benefits the most from higher frame rates, while LLaVA-OV\\u2019s performance remains almost unchanged despite using an 8x higher frame rate.\\n\\n2. Analysis of LLaVA-OV\\u2019s Behavior:\\n\\n* The limited benefit of higher frame rates for LLaVA-OV may be attributed to its training strategy. \\n\\n* LLaVA-OV is trained jointly on single images, multi-images, and videos.\\n\\n* This strategy employs a balanced visual representation approach, aggressively downsampling video inputs to ensure parity with image-based scenarios.\\n\\n* While effective for general tasks, this aggressive downsampling likely hurts LLaVA-OV\\u2019s ability to understand long videos, limiting its benefit from higher frame rates.\\n\\n3. Skill-Specific Insights:\\n\\n* Specific skills benefit more from higher frame rates. For example, the ``local vision+text'' skill improves most as it relies on short sequential shots.\\n* Increasing the frame rate reduces the chance of missing critical shots related to the answer, thereby boosting accuracy for such tasks.\\n\\nThe results demonstrate that while higher frame rates generally improve performance, the degree of improvement depends on the model\\u2019s design and training strategy. Models like GPT-4o, optimized for sequential inputs, show significant gains, whereas models like LLaVA-OV, which aggressively downsample videos, see minimal benefits.\\n\\n| Qwen2VL | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|------|-------|-----|---------|--------|-----|-----|------|------|-----|----|-----|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 36.6 | 30.2 | 36.64 | 50.23 | 59.89 | 0.67 | 2.05 | 1.39 | 2.82 | 42.712 | 1.7325 |\\n| 128 Frames | 32.58 | 28.64 | 34.59 | 49.33 | 54.98 | 0.59 | 1.87 | 1.31 | 2.75 | 40.024 | 1.63 |\\n| 16 Frames | 30.8 | 20.83 | 32.2 | 46.59 | 42.9 | 0.3 | 1.53 | 1.16 | 2.44 | 34.664 | 1.3575 |\\n\\n| GPT-4o | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|--------------------|-------------------------|-------------------|-------------------|--------------------|-------------------|---------------|----------------------------|-------------------|-------------------------|---------|-----------|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 45.98 | 46.35 | 35.32 | 68.02 | 81.7 | 3.46 | 3.38 | 2.72 | 3.47 | 55.474 | 3.2575 |\\n| 128 Frames | 18.98 | 29.84 | 17.92 | 43.12 | 22.1 | 1.78 | 0.37 | 0.61 | 0.69 | 26.392 | 0.8625 |\\n| 16 Frames | 20.37 | 31.93 | 16.38 | 42.32 | 20.22 | 1.68 | 0.35 | 0.63 | 0.65 | 26.244 | 0.8275 |\"}", "{\"title\": \"Response 1, Part 7\", \"comment\": \"----\\n> Q8 (Continued)\\n\\n| Llava OV | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|--|---|-----|----|----|----|----|---|---|----|----|----|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 36.6 | 23.95 | 25.911 | 45.49 | 48.6 | 0.55 | 1.79 | 1.3 | 2.58 | 36.1102 | 1.555 |\\n| 16 Frames | 41.51 | 24.47 | 25.97 | 44.27 | 40.15 | 0.48 | 1.48 | 1.33 | 2.3 | 35.274 | 1.3975 |\\n\\nUsing 250 frames per video is not applicable for LLava-OV and InternVL, as they are standard video models not designed to handle too-long videos.\\nAccordingly, we hit the maximum context length for these models, which prevents us from running on higher sampling rates.\\n\\n---\\n> Q9: Could authors explain how spoiler questions are generated and provide the prompt used?\\n\\nThe spoiler questions in our benchmark are not generated by GPT-4o or any other AI model. Instead, they are sourced from the IMDB website, where all questions and answers are created by humans.\", \"process_for_spoiler_questions\": \"1. We scraped spoiler-related questions from IMDB, which are manually written and answered by human contributors.\\n\\n2. To ensure quality and completeness, we filtered out any unanswered questions, keeping only those with valid human-provided answers.\\n\\n3. We verify 10% of them and the human verification accuracy is 95.34%\\n\\nThis approach guarantees that the spoiler questions are realistic and reflect genuine human reasoning.\\n\\n\\n---\\n> Q10: How can a model\\u2019s performance be lower than random?\\n\\nWe appreciate the reviewer\\u2019s insightful question. While it might seem counterintuitive, there are valid reasons why a model's performance can fall below random chance in a multiple-choice question (MCQ) setting.\", \"key_reasons\": \"1. Overfitting or Bias Towards Incorrect Patterns:\\n\\n* Some models may overfit to spurious correlations or patterns in the training data, leading to systematic biases in their predictions.\\n\\n* Instead of distributing predictions randomly across all answer choices, the model might consistently select incorrect answers due to these biases, resulting in performance worse than random.\\n\\n2. Instruction-Following Issues:\\n\\n* In some cases, the model might fail to correctly follow the format of the question or ignore the instruction to choose from the given options.\\n\\n* This behavior can result in answers that are invalid or unrelated to the choices, further reducing accuracy.\\n\\n---\\n> Q11: For the human verification, how were human responses on open-ended questions evaluated?\\n\\nFor human verification of open-ended questions, we asked annotators to evaluate the model's responses on a five-point scale based on their correctness level relative to the ground truth (GT), similar to the evaluation process used by GPT-4o.\\n\\nThis approach ensures a consistent and fair evaluation of open-ended responses by aligning human judgments with the predefined GT criteria.\\n\\n---\\n> Q12: Do authors have evidence to the quality of TVQA annotations and summaries obtained from the web?\\n\\n1. TVQA Annotations:\\n\\n* The quality of TVQA questions and answers is well-documented in the TVQA paper (Section 3.2, Table 8).\\n\\n* It explicitly mentions that \\u201cThe negative answers in TVQA are written by human annotators. They are instructed to write false but relevant answers to make the negatives challenging.\\u201d\\n\\n* This demonstrates that the questions and options are carefully crafted and verified by humans, ensuring their quality and relevance.\\n\\n2. IMDB Summaries:\\n\\n* According to IMDB\\u2019s contribution guidelines [source](https://help.imdb.com/article/contribution/titles/plots/G56STCKTK7ESG7CP#):\\nContributors must follow strict instructions when submitting plot summaries.\\n\\n* Each contribution is reviewed and approved by the IMDB team before publication.\\n\\n* This rigorous process ensures that IMDB summaries are reliable, accurate, and already human-verified.\"}", "{\"comment\": \"I appreciate the detailed responses and comments from the authors. The authors have addressed all my questions. I'd like to maintain my rating.\", \"a_minor_comment\": \"in my statement \\\"The variety of TV show sources is limited since there are only 6 different TV shows.\\\", my concern was on the quantity (6) not the content. I agree that movies and TV shows in general are great for evaluating video understanding capabilities.\"}", "{\"title\": \"Response 1, Part 3\", \"comment\": \"---\\n> Q4: **[Multi-Modal Benchmark]** It is unclear how much the benchmark relies on multimodal reasoning. It would be interesting to see an ablation that uses (1) No context, only the question itself (2) Only the question and subtitles (3) the question, subtitles and video frames.\\n\\nTo evaluate the contribution of multimodal reasoning, we conducted experiments on Qwen using four input variants:\\n\\n1. Video + Subtitle\\n2. Video\\n3. Question + Subtitle\\n4. Question\\n\\n**Findings:**\\n\\n1. [Visual Clues] \\n\\n * Dropping subtitles did not significantly impact performance, demonstrating that our benchmark focuses on visual cues and that textual information alone is insufficient to answer most questions.\\n\\n2. [Blind Models] \\n\\n * To assess the importance of visual inputs, we tested models using subtitles alone (termed Blind Models) without video frames.\\n\\n * Results showed that blind models achieved near-random performance, emphasizing the critical role of visual inputs in answering questions.\\n\\n3. [Blind and Deaf Models] \\n\\n * Similarly, removing both video frames and subtitles (termed Blind and Deaf Models) resulted in random performance.\\n\\n * This further highlights the significant role of visual inputs, with textual inputs alone contributing minimally, as evidenced by the similarity in performance between blind and blind-and-deaf models.\\n\\nTables summarizing these findings have been included in the revised paper.\\n\\n| Qwen | Global Apperance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|----|-------|---|---|-----|------|------|-----|----|---|----|-----|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| Video + Subtitle | 36.6 | 30.2 | 36.64 | 50.23 | 59.89 | 0.67 | 2.05 | 1.39 | 2.82 | 42.712 | 1.7325 |\\n| Video | 36.97 | 28.12 | 36.97 | 49.06 | 47.56 | 0.35 | 1.69 | 1.26 | 2.49 | 39.736 | 1.4475 |\\n| Question + Subtitle | 18.05 | 25.65 | 17.85 | 44.48 | 20.79 | 1.86 | 0.43 | 0.66 | 0.69 | 25.364 | 0.91 |\\n| Question | 18.75 | 19.27 | 29.62 | 45.49 | 38.29 | 0 | 0.97 | 0.76 | 1.7 | 22.692 | 0.8625 |\"}", "{\"title\": \"Kind reminder: We are looking forward to your reply\", \"comment\": \"Dear Reviewer 6qyh,\\n\\nWe kindly ask if our response has addressed your concerns. Fortunately, we still have till December 3rd to discuss. Therefore, please feel free to share any additional questions or feedback, and we\\u2019ll be happy to provide further clarification.\\n\\nBest regards, The Authors\"}", "{\"title\": \"Kind reminder: We are looking forward to your reply\", \"comment\": \"Dear Reviewer 147D,\\n\\nWe kindly ask if our response has addressed your concerns. Fortunately, we still have till December 3rd to discuss. Therefore, please feel free to share any additional questions or feedback, and we\\u2019ll be happy to provide further clarification.\\n\\nBest regards, The Authors\"}", "{\"title\": \"Thanks and Further Clarifications\", \"comment\": \"We thank the reviewer for acknowledging our benchmark's usefulness and reliability and raising the score. We also appreciate your suggestion that future work could benefit from a stronger focus on novel methodological contributions beyond quantitative expansion.\\nHowever, we would like to highlight several key aspects that underscore the significance and innovation of our work:\\n\\n1. **Importance of Scale:**\\n\\n * Existing long-video benchmarks are extremely limited in size; for example, the largest has only ~2k questions. In contrast, our benchmark is approximately 50\\u00d7 larger, containing 108k questions.\\n * Scale matters because smaller benchmarks may lead to misleading conclusions due to limited coverage. A larger dataset ensures robustness and diversity, enabling more reliable evaluation of long-video models.\\n2. **Introduction of New and Challenging Skills:**\\n\\n * We introduce several novel and challenging evaluation skills, such as spoiler questions, global appearance reasoning, and linking events. These skills go beyond existing benchmarks and test nuanced multi-modal reasoning capabilities.\\n\\n3. **Complexity of Movies and Series:**\\n\\n * Movies and TV series present uniquely challenging scenarios with complex relationships, non-linear storytelling, and twists that demand deep multi-modal reasoning. Our benchmark leverages this complexity to push the boundaries of video understanding models.\\n\\n4. **Holistic Coverage:**\\n\\n * Compared to existing long-video benchmarks, ours is the most holistic in terms of the number of skills evaluated and the breadth of models covered. This makes it a comprehensive resource for the community and provides a clear roadmap for advancing long-video understanding research.\\n\\n5. **Scalability Beyond Testing:**\\n\\n * Due to its scale and diversity, our benchmark is not limited to evaluation; it can also serve as a valuable resource for training and pretraining video models, further accelerating progress in the field.\\n\\n6. **Rigorous Quality Control:**\\n\\n * Despite the large scale of the dataset, we have implemented rigorous human verification to ensure reliability. This careful balance between scale and quality sets our benchmark apart.\\n\\nWe hope these points further highlight the contributions of our work and its potential impact on advancing long-video understanding. Thank you again for your valuable feedback and constructive suggestions, which have helped us strengthen our work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Official Comment by Reviewer 6qyh\", \"comment\": \"Thank you for the detailed responses and additional experiments. I am upgrading my rating to 5. Given the existence of several long-video benchmarks like Video-MME, LVBench, MLVU and LongVideoBench, while your benchmark provides valuable validation through human verification and blindness tests, I find that merely scaling up video length and dataset size represents an incremental rather than innovative advancement. The rigorous quality controls and comprehensive model evaluations are commendable, but future work would benefit from novel methodological contributions beyond quantitative expansion.\"}", "{\"summary\": \"This paper introduces InfiniBench, a video understanding benchmark dataset featuring the longest video duration (average 52.59 minutes per video) and the largest number of question-answer pairs (108.2K) to evaluate 9 different video understanding tasks.\\n\\nThe authors conducted comprehensive evaluations of existing large multimodal models (including commercial models like GPT-4V, Gemini 1.5 Flash, and open-source models). Experiments show that even leading AI models still face challenges in long video understanding, with the best models GPT-4V and Gemini 1.5 Flash achieving average accuracy rates of only 49.16% and 42.72% respectively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The questions are comprehensive and well-structured, covering multiple dimensions and employing diverse construction strategies for different types of questions.\\n2. The evaluation methods are reasonable, adopting different assessment metrics for multiple-choice and open-ended questions.\", \"weaknesses\": \"1. The paper lacks discussion of related work. For example, benchmarks proposed in Video-MME, LVBench, and Long VideoBench published in June 2024 are very similar to InfiniBench.\\n\\n2. Most of the question-answer pairs are generated by GPT-4o. Although multiple information sources were used as input, it's difficult to guarantee the quality of the dataset.\\n\\n3. Part of the data comes from IMDB content, which likely appeared multiple times in the training corpus of LLMs used by video models, potentially leading to dataset leakage issues.\", \"questions\": \"1. Add references and discussions of related work.\\n2. It would be better to evaluate more long-video models (e.g., Qwen2VL) and different input frame rates (1, 8, 32, 128, and more).\\n3. Since most question-answer pairs are generated by GPT-4o, could this lead to inflated evaluation results for GPT-4o? Analysis is needed regarding dataset quality, hallucination rates, and potential information leakage issues.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a benchmark, called InfiniBench, for the evaluation of long video understanding. The dataset consists of 1219 videos. The average length of the videos is 52.59 minutes. There are 108.2K (video, question) pairs. The questions are divided into 9 categories. Some categories require the ability to make associations across a longtime span. Some categories require in-depth understanding and reasoning capabilities. It is a very interesting new benchmark for long video understanding.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The videos are very long with an average length 52 minutes.\\n\\nThe number of (question, answer) pairs is large (108k)\\n\\nSome of the questions are unique such as spoiler questions, global appearance, and scene transitions. \\n\\nCompared to the existing benchmarks, this benchmark contains much longer videos and contains some new interesting types of questions. It'll be very useful to the researchers who work on long video understanding.\", \"weaknesses\": \"The variety of TV show sources is limited since there are only 6 different TV shows.\", \"questions\": \"Can the authors comment on the limited variety of TV shows? What about sports events like NBA, NFL, Tennis, etc.\\n\\nOn GPT-4o evaluation, only 250 frames are selected. Are the 250 frames selected uniformly? Have you tried to reduce the frame size and squeeze more frames into GPT-4o?\\n\\nWill all the videos be released to public? Are there any legal issues?\\n\\nAre there text scripts (screenplay) associated with all the videos (movies and TV shows)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 1\", \"comment\": \"---\\n> **Q1: Compare with MoVQA.**\\n\\nWe appreciate the contributions of MoVQA and acknowledge its value in advancing video question-answering research. However, our benchmark is significantly larger in scale and broader in scope, covering a wider range of skills, video durations, and models.\\n\\n**[Key Differentiation Points:]**\\n\\n1. Scale:\\n\\n * Our benchmark is 5\\u00d7 larger than MoVQA, with 108.2K QA pairs compared to 21.953K QA pairs in MoVQA.\\n\\n2. QA Diversity:\\n\\n * We support both multiple-choice questions (MCQ) and open-ended evaluations, while MoVQA is limited to MCQ only.\\n\\n3. Video Sources:\\n\\n * Our dataset features videos from both TV shows and movies, whereas MoVQA focuses solely on movies.\\n\\n4. Video Length:\\n\\n * The average video length in our benchmark is significantly longer (52.59 minutes) compared to MoVQA (16.53 minutes).\\n\\n5. Model Coverage:\\n\\n * We evaluate 10 long-video models, whereas MoVQA evaluates only 4 models (MPlug-Owl, Otter, VideoChatGPT, and VideoChat).\\n\\n\\n**[Comparison with Other Recent Benchmarks:]**\\n\\nIn addition to comparing with MoVQA, we have expanded our evaluation to include other recent benchmarks, such as Video-MME, LVBench, LongVideoBench, and MLVU.\", \"here_are_some_key_highlights\": \"1. Video Length:\\n\\n * Most benchmarks focus on short videos, with an average length of around 10 minutes.\\n\\n * The exception is LVBench, which includes 1-hour-long videos, making it comparable in duration to our dataset.\\n\\n2. Scale:\\n\\n * Our benchmark is 70\\u00d7 larger than LVBench.\\n\\n3. QA Types:\\n\\n * We support both MCQ and open-ended QA, while other long-video benchmarks, such as LVBench and LongVideoBench, are limited to MCQ.\\n\\n4. QA Resources:\\n\\n * Our QA resources include the video script and summary, providing additional context for question-answering.\\n\\n5. Challenging Capabilities:\\n\\n * While most benchmarks, including LVBench and LongVideoBench, focus on visual understanding, our benchmark evaluates combined subtitle + visual understanding, making it more challenging.\\n\\n**Conclusion:**\\n\\nAs shown in Table 1 of the revised paper, our benchmark surpasses MoVQA and other recent benchmarks in terms of scale, diversity, and the breadth of evaluation. We believe these advancements contribute significantly to the development of long-video understanding models.\\n\\n---\\n> **Q2: The writing and paper organization needs refinement.**\\n\\nThank you for highlighting this issue. We have carefully revised and polished the writing and organization of the paper to ensure clarity and improve readability. Please refer to the revised version for the updated changes.\"}", "{\"title\": \"Kind reminder #2: We are looking forward to your reply\", \"comment\": \"Dear Reviewer 147D,\\n\\nWe sincerely appreciate your dedicated time and effort in reviewing our paper.\\n\\nSince there are only a few hours remaining for reviewers to post messages to authors, we kindly ask if our additional clarifications and new results have sufficiently addressed your main concerns or if there are any remaining questions we can further address.\\n\\nThank you once again for your valuable feedback. Incorporating these clarifications and experiments has helped strengthen the paper further.\"}", "{\"title\": \"Response 1, Part 1\", \"comment\": \"---\\n> **Q1: The benchmark only uses movies and TV shows, which is too limited.**\\n**They should add more casual videos like vlogs and livestreams to make the testing more realistic.**\\n\\nWe appreciate the reviewer's suggestion and agree that including a wider variety of video sources, such as vlogs or live streams, could add value to future benchmarks. However, we argue that movies and TV shows are highly suitable and effective for assessing long-video understanding for the following reasons:\\n\\n1. Diverse and Complex Contexts:\\n\\n* Movies and TV shows are rich in content, featuring intricate character relationships, evolving storylines, and multi-layered themes. These elements introduce dynamic and complex reasoning challenges beyond the repetitive or monotonic scenarios often found in daily-life videos like vlogs or live streams.\\n\\n* For instance, movies often include unexpected actions, rapid shifts in context, and non-linear narratives that demand a higher level of understanding, making them ideal for testing models' ability to handle long-term dependencies and reasoning tasks.\\n\\n2. Variety of Skills Tested:\\n\\n* Our benchmark is designed to include diverse questions that evaluate different levels of video understanding, from surface-level observations to deeper reasoning about characters, events, and causality.\\n\\n* This holistic design ensures that the benchmark challenges models on a wide range of skills, as demonstrated by the low performance of current state-of-the-art models.\\n\\n3. Limitations of Casual Videos:\\n\\n* Vlogs and daily-life videos often revolve around singular, straightforward activities (e.g., riding a bicycle or walking through a park). These scenarios typically lack the nuanced interplay between characters and the layered storytelling in cinematic content.\\n\\n* While casual videos may help test immediate perception tasks, they are less suited for evaluating the reasoning and complex relational understanding required for proper long-video comprehension.\\n\\n4. Storytelling Patterns:\\n\\n* While movies and TV shows follow storytelling patterns, these patterns are not uniform and vary greatly across genres, cultures, and creators. This inherent variability further enriches the benchmark by introducing a broad spectrum of reasoning and comprehension challenges.\\n\\n**Acknowledgment and Future Work:**\\n\\nWe acknowledge that including additional categories of videos, such as vlogs, live streams, or documentaries, could further diversify the benchmark and make it more comprehensive. We plan to explore incorporating these sources in an extended version of the work or future studies.\\n\\n---\\n> **Q2: Without scripts or captions, the benchmark can't test how well AI models understand regular videos people watch and share online.**\\n\\nWe appreciate the reviewer\\u2019s observation and would like to clarify an important distinction regarding the use of transcripts and subtitles in our benchmark:\\n\\n**[Transcript vs. Subtitles]**\\n\\n1. Role of Transcripts:\\n\\n * Transcripts are detailed documents created by movie or TV show writers. They provide comprehensive information beyond spoken dialogue, including: \\n 1. Scene descriptions.\\n 2. Context about settings, locations, and character actions.\\n 3. Camera angles or shot compositions.\\n\\n * Transcripts serve as blueprints for visual and narrative elements, helping us extract visual insights and design challenging, reliable benchmark questions.\\n\\n * Key Point: Transcripts are used only during the benchmark creation process to ensure robustness and question diversity, not during inference or evaluation.\\n\\n2. Role of Subtitles:\\n\\n * Subtitles focus solely on translating spoken dialogue into text, typically extracted by transcribing the video's audio.\\n\\n * Subtitles are optional inputs for the AI model during inference, representing an additional modality when available.\\n\\n3. Clarifying the Inputs:\\n\\n * During inference, the model's input consists of video frames and, optionally, subtitles, as shown in Table 3.\\n\\n* Transcript Example:\\n\\n```\\nTranscript of episode 1, season 1 of Friends TV shows:\\n[Scene: Central Perk, Chandler, Joey, Phoebe, and Monica are there.]\", \"monica\": \"There's nothing to tell! He's just some guy I work with!\\n...\\n(Ross gestures his consent.)\", \"joey\": \"Strip joint! C'mon, you're single! Have some hormones!\\n(Rachel enters in a wet wedding dress and starts to search the room.)\\n```\\n\\n* Subtitle Example:\\n\\n```\\nSubtitle of the same episode\\n1\", \"00\": \"00:57,892 --> 00:00:59,962\\nThere's gotta be something wrong with him.\\n```\\nBy distinguishing between the roles of transcripts and subtitles, we ensure that the benchmark tests true long-video understanding during inference while using transcripts only for reliable benchmark construction.\"}", "{\"title\": \"Response 1, Part 2\", \"comment\": \"---\\n> **Q3: InfiniBench's testing does not cover current mainstream open-source models such as Qwen2VL, LLaVA-Onevision, and InternVL2**\", \"we_have_included_three_new_recent_long_video_models\": \"1. Qwen2VL\\n2. InternVL\\n3. LLava OV\\n\\nDue to time constraints, these experiments were conducted on 20% of the dataset.\\n\\n| Qwen2VL | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|----|----|----|---|----|----|----|----|---|---|----|---|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 36.6 | 30.2 | 36.64 | 50.23 | 59.89 | 0.67 | 2.05 | 1.39 | 2.82 | 42.712 | 1.7325 |\\n| 128 Frames | 32.58 | 28.64 | 34.59 | 49.33 | 54.98 | 0.59 | 1.87 | 1.31 | 2.75 | 40.024 | 1.63 |\\n| 16 Frames | 30.8 | 20.83 | 32.2 | 46.59 | 42.9 | 0.3 | 1.53 | 1.16 | 2.44 | 34.664 | 1.3575 |\\n\\n\\n| GPT-4o | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|----|----|----|---|----|----|----|----|---|---|----|---|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 45.98 | 46.35 | 35.32 | 68.02 | 81.7 | 3.46 | 3.38 | 2.72 | 3.47 | 55.474 | 3.2575 |\\n| 128 Frames | 18.98 | 29.84 | 17.92 | 43.12 | 22.1 | 1.78 | 0.37 | 0.61 | 0.69 | 26.392 | 0.8625 |\\n| 16 Frames | 20.37 | 31.93 | 16.38 | 42.32 | 20.22 | 1.68 | 0.35 | 0.63 | 0.65 | 26.244 | 0.8275 |\\n\\n\\n| InternVL | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|----|----|----|---|----|----|----|----|---|---|----|---|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 25.89 | 21.35 | 24.12 | 44.33 | 41.62 | 0.72 | 1.69 | 1.27 | 2.53 | 31.462 | 1.5525 |\\n| 16 Frames | 23.21 | 20.83 | 25.18 | 44.82 | 31.95 | 0.7 | 1.54 | 1.28 | 2.6 | 29.198 | 1.53 |\"}", "{\"title\": \"Response 1, Part 1\", \"comment\": \"---\\n> Q1: Can the authors comment on the limited variety of TV shows? What about sports events like NBA, NFL, Tennis, etc.\\n\\nWe appreciate the reviewer's suggestion and agree that including a wider variety of video sources, such as sports events, could add value to future benchmarks. However, we argue that movies and TV shows are highly suitable and effective for assessing long-video understanding for the following reasons:\\n\\n1. Diverse and Complex Contexts:\\n\\n* Movies and TV shows are rich in content, featuring intricate character relationships, evolving storylines, and multi-layered themes. These elements introduce dynamic and complex reasoning challenges beyond the repetitive or monotonic scenarios often found in daily-life videos like vlogs or live streams.\\n\\n* For instance, movies often include unexpected actions, rapid shifts in context, and non-linear narratives that demand a higher level of understanding, making them ideal for testing models' ability to handle long-term dependencies and reasoning tasks.\\n\\n2. Variety of Skills Tested:\\n\\n* Our benchmark is designed to include diverse questions that evaluate different levels of video understanding, from surface-level observations to deeper reasoning about characters, events, and causality.\\n\\n* This holistic design ensures that the benchmark challenges models on a wide range of skills, as demonstrated by the low performance of current state-of-the-art models.\\n\\n3. Limitations of Casual Videos:\\n\\n* Vlogs and daily-life videos often revolve around singular, straightforward activities (e.g., riding a bicycle or walking through a park). These scenarios typically lack the nuanced interplay between characters and the layered storytelling in cinematic content.\\n\\n* While casual videos may help test immediate perception tasks, they are less suited for evaluating the reasoning and complex relational understanding required for true long-video comprehension.\\n\\n4. Storytelling Patterns:\\n\\n* While movies and TV shows follow storytelling patterns, these patterns are not uniform and vary greatly across genres, cultures, and creators. This inherent variability further enriches the benchmark by introducing a broad spectrum of reasoning and comprehension challenges.\\n\\n**Acknowledgment and Future Work:**\\n\\nWe acknowledge that including additional categories of videos, such as sports events, could further diversify the benchmark and make it more comprehensive. We plan to explore incorporating these sources in an extended version of the work or future studies.\\n\\n\\n---\\n> Q2: Will all the videos be released to public? Are there any legal issues?\\n\\n\\nWe appreciate the reviewer\\u2019s concern about copyright implications and ethical considerations related to using movies and TV shows in our dataset. Below, we clarify our approach and steps to ensure compliance with legal and ethical standards:\\n\\n1. Use of Existing Datasets (e.g., MovieNet and TVQA):\\n\\n* We rely on publicly available datasets like MovieNet and TVQA, which provide downsampled versions of video content.\\n\\n* Both datasets are widely used in academic research and distribute videos in a legally compliant manner by heavily reducing the frame rate (e.g., 1 FPS). This downsampling transforms the videos into low-resolution derivatives primarily for research purposes, which may mitigate copyright concerns.\\n\\n* While we cannot definitively state that downsampling resolves all legal and ethical issues, its widespread use in academia suggests it is generally accepted within the research community.\\n\\n2. Benchmark Without Redistribution of Videos:\\n\\n* Our benchmark does not redistribute video content directly. Instead, we provide annotations, question-answer pairs, and evaluation scripts.\\nUsers are required to independently download the videos from official sources (e.g., MovieNet or TVQA).\\n\\n* This ensures we do not claim ownership of the original video content, nor do we host or distribute it ourselves.\\n\\n* This approach aligns with practices used in existing benchmarks, where users are required to obtain the original images independently.\\n\\n3. Ethical Considerations:\\n\\n* We acknowledge that relying on copyrighted material, even in downsampled form, can raise ethical questions.\\n\\n* We are committed to transparency and ensuring our benchmark is used responsibly. To that end, we will include a clear disclaimer stating:\\n\\n1. The benchmark does not redistribute or modify original video content.\\n\\n2. Users must adhere to the terms and conditions of the original video sources.\\n\\n**Conclusion:**\\n\\nWhile we cannot claim a definitive resolution of all legal or ethical concerns, our approach of using downsampled versions from widely accepted datasets and ensuring users obtain the videos independently reduces our direct responsibility for copyright compliance.\\nIf further legal concerns arise, we will explore ways to refine our approach in collaboration with legal experts and ensure the benchmark remains compliant and ethical.\"}", "{\"summary\": \"The paper proposes InfiniBench, a novel benchmark for long video understanding based on movies and TV shows. The benchmark has 108.2k question-answer pairs on 1,219 videos that average 52.59 minutes in length. The benchmark tests 9 different reasoning abilities including visual, long-context and local reasoning. This makes InfiniBench the largest-scale long video understanding benchmark to date. InfiniBench was constructed by combining and augmenting from two existing video benchmarks, TVQA and MovieNet. Most question types were generated by prompting GPT-4 with the transcript of the video while a custom pipeline was used to generate questions on changes in character appearance. The paper presents benchmark results of 8 long video understanding models, including 6 open source ones and 2 commercial ones, and discusses insights into their performance across various tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The presented benchmark has an impressive scale with 108.2k questions on 1,219 videos that average 52.59 minutes in length.\", \"There are 9 different question types that test long video understanding models across a variety of skills.\", \"The paper presents results of 8 long video models and draws interesting conclusions on their performance.\", \"There is a large gap between human performance and model performance, suggesting the benchmark has ample room for improvement.\", \"The paper has a good in-depth discussion of related work.\"], \"weaknesses\": [\"The question-answer pairs in the benchmark were generated fully automatically without any human intervention. This raises questions about soundness and of the questions and potential bias. A human evaluation is performed on a subset of the data, but good human performance is no proof that questions are well-formed and free of hallucinations.\", \"Most questions are generated from transcripts that authors obtained online, but it is unclear what information these transcripts contain, whether they are complete and error-free. It is also unclear how much visual information the transcripts contain and therefore it is unclear to what degree this is a multimodal benchmark.\", \"The use of movies and TV shows raises questions about generalizability. Most MLLMs likely know the plots of popular movies and shows because their summaries or transcripts were part of their training data. So, they may be able to answer the questions in the dataset without any context, which is not the case for most videos from the web. The effect of this is not examined.\", \"It is unclear how much the benchmark relies on multimodal reasoning. Questions about movies and TV shows could often be answerable from subtitles alone, which are provided as context in the evaluation. It would be interesting to see an ablation that uses (1) No context, only the question itself (2) Only the question and subtitles (3) the question, subtitles and video frames.\", \"The copyright implications of using movies and TV shows and possibly releasing the dataset are not discussed and raise ethical concerns.\", \"Since the dataset has \\\\~100 questions per video, it is likely that there are (near) duplicate questions. However there is no analysis of this and no mention of a filtering stage to remove duplicates.\", \"There are several issues with the presentation such as redundant figures, tables that are not referenced, and wrong references. The limitations section also exceeds the 10-page limit.\"], \"questions\": [\"Given the concerns listed above, I have doubts that this paper is suitable for publication at ICLR. I hope that authors can provide evidence to address my concerns as well as answers to the following questions.\", \"Could authors provide evidence of transcript quality? How accurate and complete are they? How much focus do they have on vision? Could authors provide examples?\", \"Why are multiple-choice questions evaluated by asking the model to generate an answer and then using GPT to match this answer to the options? Authors state in the appendix that the reason is that models often do not follow the prescribed answer format, but from my experience at least the larger VLMs are good at following instructions about the answer format.\", \"I am worried that using GPT for option matching introduces additional bias. I believe this could be measured by evaluating GPT or Gemini again by giving it the answer options in the prompt and asking it to respond with only the answer letter. Results could then be compared against the GPT-matched results.\", \"Also to the above point, did authors verify that event ordering type questions get matched correctly with GPT? These answers only differ in their ordering of options, so I am wondering whether GPT matches them correctly.\", \"The benchmark was constructed using GPT, and GPT is the best performing model across all tasks. It would be interesting to quantify if there is bias towards GPT, e.g. by generating part of the data with Gemini and checking if relative model performance is consistent with the original benchmark.\", \"How are copyright concerns handled? Did authors obtain permission from the copyright owners to use the video material for this purpose and to reproduce this content in a publication? If the dataset will be publicly released, how are copyright concerns handled?\", \"l. 198: \\u201cTo address this limitation, we transformed the TVQA dataset from a collection of short clips into a long video dataset by gathering and sequencing the clips corresponding to each episode thereby reconstructing the full episode frames.\\u201c How was this done and what data source was used?\", \"Appendix l. 12: \\u201cThe remaining two skills, i.e., local visual questions and summarizing, do not need human verification, as the first one is adopted from the TVQA dataset, and the latter is scrapped from human responses on the web.\\u201d I do not fully agree with this statement since existing benchmarks and the humans writing the summaries that were pulled from the web could still contain errors. Do authors have evidence to the quality of TVQA annotations and summaries obtained from the web?\", \"How does the number of video frames provided affect the model accuracy?\", \"Appendix B is quite important to understand the evaluation results presented, so I think it would be better suited to be in the main text.\", \"Appendix B mentions that the benchmark videos have no audio, so video and subtitles are provided to the model separately. Does this mean that alignment between frames and subtitles is missing? Did authors measure the effect of this?\", \"Could authors explain how spoiler questions are generated and provide the prompt used?\", \"How does the \\u201cI don\\u2019t know\\u201d option affect results? How accurately does GPT match model answers to this option?\", \"Fig. 5 (left) is redundant with Tab. 3, so one of them should be removed.\", \"l. 363: The explanation of local vision and text questions is not clear. It is not explained what these questions are nor how they were generated.\", \"It would be good to have random accuracy in Tab. 5 for direct comparability. Then, Tab. 4 could be omitted.\", \"l. 482: \\u201cAs shown in the table 5, MiniGPT4-video and LLaVA-NeXT-Interleave match lower than the random performance\\u201d What random performance is being compared to here? It would help to add this to the table as suggested above.\", \"l. 482, l. 505: How can a model\\u2019s performance be lower than random?\", \"l. 488: \\u201cOne reason may be that eliminating the noisy information and focus on only the related information helps more in answering the questions\\u201c How does the Goldfish model eliminate noisy information?\", \"For the human verification, how were human responses on open-ended questions evaluated?\", \"Minor points\", \"Tab. 1: I would not agree with the \\u201chuman\\u201d checkmark for InfiniBench since questions were generated fully automatically.\", \"Tab. 2 is never referenced.\", \"Appendix B: It would be helpful to express this in tabular form so readers can see at a glance how many frames and what modalities were used in each model.\", \"Tab. 5.: I would suggest to organize this into one big table with one column per task type. Also would be nice to visualize as a radar chart.\", \"It would be helpful to annotate question types in Sec 3.2.2 and Fig. 1 with whether they are MCQ or OE.\", \"It would be helpful to see a listing of modalities (vision, summary, transcript) used to generate each question.\", \"Please use \\\\\\\\citep for citations to place citations in parentheses.\", \"In tables, please right-justify numerical columns and use a consistent number of digits after the decimal point.\", \"Fig. 4: The font size in these charts is very small in print. I suggest increasing it. Also I would suggest to change the pie chart into a bar chart for easier readability.\", \"Fig. 5: Same concern as above about the font size.\", \"l. 373: Here, the reference to Fig. 4 is repeated, but Fig. 5 is wrongly referenced. Suggest correcting this sentence to refer to Fig. 3\\\\.\", \"l. 406: Broken reference.\", \"l. 413: The reference should point to Sec. B in the supplementary material.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The following are my original concerns which have been mitigated:\\n\\nI have a concern about potential copyright infringement in this work. The proposed dataset is based on copyrighted content (video frames and subtitles of movies and TV shows) that authors have downloaded and used for experiments. The paper also includes figures of frames from TV shows. It is unclear whether the authors obtained permission from copyright owners for their use of the data. Authors do not mention whether they intend to release the dataset publicly, but if they do, this would raise further concerns.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kind reminder #2: We are looking forward to your reply\", \"comment\": \"Dear Reviewer iJs3,\\n\\nWe sincerely appreciate your dedicated time and effort in reviewing our paper.\\n\\nSince there are only a few hours remaining for reviewers to post messages to authors, we kindly ask if our additional clarifications and new results have sufficiently addressed your main concerns or if there are any remaining questions we can further address.\\n\\nThank you once again for your valuable feedback. Incorporating these clarifications and experiments has helped strengthen the paper further.\"}", "{\"title\": \"Response 1, Part 3\", \"comment\": \"> Q4: (Continued)\\n\\n| Qwen2VL | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 36.6 | 30.2 | 36.64 | 50.23 | 59.89 | 0.67 | 2.05 | 1.39 | 2.82 | 42.712 | 1.7325 |\\n| 128 Frames | 32.58 | 28.64 | 34.59 | 49.33 | 54.98 | 0.59 | 1.87 | 1.31 | 2.75 | 40.024 | 1.63 |\\n| 16 Frames | 30.8 | 20.83 | 32.2 | 46.59 | 42.9 | 0.3 | 1.53 | 1.16 | 2.44 | 34.664 | 1.3575 |\\n\\n\\n| GPT-4o | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 45.98 | 46.35 | 35.32 | 68.02 | 81.7 | 3.46 | 3.38 | 2.72 | 3.47 | 55.474 | 3.2575 |\\n| 128 Frames | 18.98 | 29.84 | 17.92 | 43.12 | 22.1 | 1.78 | 0.37 | 0.61 | 0.69 | 26.392 | 0.8625 |\\n| 16 Frames | 20.37 | 31.93 | 16.38 | 42.32 | 20.22 | 1.68 | 0.35 | 0.63 | 0.65 | 26.244 | 0.8275 |\\n\\n\\n| InternVL | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 25.89 | 21.35 | 24.12 | 44.33 | 41.62 | 0.72 | 1.69 | 1.27 | 2.53 | 31.462 | 1.5525 |\\n| 16 Frames | 23.21 | 20.83 | 25.18 | 44.82 | 31.95 | 0.7 | 1.54 | 1.28 | 2.6 | 29.198 | 1.53 |\"}", "{\"comment\": \"---\\n> **Q1: The paper lacks discussion of related work, e.g., Video-MME, LVBench, and Long VideoBench.**\\n\\nWe have expanded our evaluation to include other recent benchmarks, such as Video-MME, LVBench, and LongVideoBench.\", \"here_are_some_key_highlights\": \"1. Video Length:\\n\\n* Most benchmarks focus on short videos, with an average length of around 10 minutes.\\n\\n* The exception is LVBench, which includes 1-hour-long videos, making it comparable in duration to our dataset.\\n\\n2. Scale:\\n\\n* Our benchmark is 70\\u00d7 larger than LVBench.\\n\\n3. QA Types:\\n\\n* We support both MCQ and open-ended QA, while other long-video benchmarks, such as LVBench and LongVideoBench, are limited to MCQ.\\n\\n4. QA Resources:\\n\\n* Our QA resources include the video script and summary, providing additional context for question-answering.\\n\\n5. Challenging Capabilities:\\n\\n* While most benchmarks, including LVBench and LongVideoBench, focus on visual understanding, our benchmark evaluates combined subtitle + visual understanding, making it more challenging.\\n\\n\\n**Conclusion:**\\n\\nAs shown in Table 1 of the revised paper, our benchmark surpasses the recent benchmarks in scale, diversity, and breadth of evaluation. We believe these advancements contribute significantly to the development of long-video understanding models.\\n\\n---\\n> **Q2: [The quality of the dataset] Most of the question-answer pairs are generated by GPT-4o. Although multiple information sources were used as input, it's difficult to guarantee the dataset's quality.**\\n\\n[**Human Verification of 10%**]\\n\\nWe have conducted a human verification of our benchmark for 10% (10.8k questions) to verify the dataset's quality. \\nThe verification of the 10% of the data takes around 400 human hours.\\nThe results show the average of correct questions is (95.8), and humanly correct the rest of the dataset. So the final humanly verified set is 100% accurate. The remaining set is now considered as weak labels with 96% expected accuracy which we believe can be a valuable resource for training. We are expanding the human verification process to cover the whole set which is expected to take around 4249 human hours.\", \"the_detailed_accuracy_per_skill_is_reported_in_the_table_below\": \"|Skill Name |Number of Questions| Accuracy |\\n|----|----|----|\\n|Character Actions | 667 | 94.9 |\\n|Deep Context Understanding | 2172 | 96.50 |\\n|Global Appearance | 135 | 89.62 |\\n|Linking Multiple Events | 2297 | 98.00 |\\n|Scene Transitions | 103 | 88.34 |\\n|Spoiler Questions | 43 | 95.34 |\\n|Temporal Questions | 2927 | 94.08 |\\n\\n[**Human Verification of 100%**]\\n\\nWe have 923 episodes, whereas one requires 3 hours on average for verification. In addition, we have 296 movies, each taking around five hours to be verified.\\nWe have a contract with a data annotation company, with 10 annotators working full-time on our benchmark. Therefore, all the data should be verified and ready in 20 working days.\\n\\n---\\n> **Q3: [Dataset Leakage] Part of the data comes from IMDB content, which likely appeared multiple times in the training corpus of LLMs used by video models, potentially leading to dataset leakage issues.**\\n\\n**[Blindness Experiment]**\\n\\nTo genuinely assess the data leakage, we deliberately drop the video and only feed the question and some context about the episode or the movie without any visual inputs.\\n\\nFor instance, here is the input prompt in the blindness case:\\n\\n``\\nThis is a question for a video from {show} {season_num} {episode_num}, use your knowledge to answer this question:\\n{question}\\n''\\n\\nWe have conducted the blindness experiments using two models, Qwen and GPT-4o.\\nAs shown in the tables below, in most skills, the blind models' performance is too close to the random performance.\\n\\nFor instance, on the \\\"global appearance\\\" and the \\\"scene transitions\\\" skills, Qwen achieves 19.6 and 21, while GPT-4o achieves 20.8 and 22.5, approximately equal to the random performance of 17 for both skills.\\n\\n| Qwen | Global Apperance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Question Type | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| Video + sub + question | 36.6 | 30.2 | 36.64 | 50.23 | 59.89 | 0.67 | 2.05 | 1.39 | 2.82 | 42.712 | 1.7325 |\\n| Question + Video Info | 19.64 | 21.35 | 35.05 | 46.57 | 39.71 | 0.28 | 1.7 | 1.48 | 2.6 | 32.464 | 1.515 |\\n| Question | 18.75 | 19.27 | 29.62 | 45.49 | 38.29 | 0 | 0.97 | 0.76 | 1.7 | 30.284 | 0.8575 |\", \"title\": \"Response 1, Part 1\"}", "{\"title\": \"Response 1, Part 3\", \"comment\": \"> Q3: (Continued)\\n\\n| Llava OV | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|----|----|----|---|----|----|----|----|---|---|----|---|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 36.6 | 23.95 | 25.911 | 45.49 | 48.6 | 0.55 | 1.79 | 1.3 | 2.58 | 36.1102 | 1.555 |\\n| 16 Frames | 41.51 | 24.47 | 25.97 | 44.27 | 40.15 | 0.48 | 1.48 | 1.33 | 2.3 | 35.274 | 1.3975 |\\n\\nAfter adding the new methods, a complete leaderboard can be seen in Table 5 of the paper.\\n\\n---\\n> Q4: In Table 1, the benchmark comparison is insufficient, especially regarding recent video benchmarks such as Video-MME and LongVideoBench. Additionally, the author's definition of \\\"very long\\\" is problematic - MLVU and MovieChat have only a 3-minute gap, yet MLVU is defined as very long. This is not reasonable.\\n\\nWe appreciate the reviewer's feedback and agree that our previous taxonomy, based on a hard threshold (e.g., 10 minutes), is not robust.\\n\\n**[Updated Taxonomy]**\\n\\nInspired by this discussion, we have adopted a more adaptive and reliable categorization method using K-means clustering:\\n\\n* We set $k=3$ to categorize benchmarks into three groups based on average video length.\\n\\n* This method adaptively determines the categories without relying on arbitrary thresholds.\\n\\nThe updated categorization addresses inconsistencies in defining \\\"very long\\\" benchmarks and ensures a fair comparison.\\n\\n**[Expanded Comparison]**\\n\\nAdditionally, we have included comparisons with recent benchmarks, such as Video-MME and LongVideoBench, in Table 1 of the revised paper.\\n\\nWe believe these changes improve the reliability and comprehensiveness of our benchmark comparison. Thank you for raising this important point!\\n\\n\\n---\\n> Q5: How is GPT-4V's scoring aligned with human evaluation?\\n\\nWe conducted a human evaluation on 10% of the dataset to assess the alignment between GPT-4o's scoring system and human preferences.\\n\\n**Evaluation Methodology:**\\n\\n* We designed a simple GUI that displayed responses from two models side by side for each question.\\n\\n* Annotators were asked to select which model provided the better response based on quality and relevance.\\n\\n* We then measured the Pearson correlation between human preferences and GPT-4o's scoring or ranking systems.\\n\\n**Results:**\\n\\n* The correlation between human preferences and GPT-4o's scoring system was 96%, indicating that GPT-4o's scoring is highly reliable and closely aligned with human judgment.\\n\\nThese results validate the robustness of GPT-4o's scoring system as a reliable evaluation metric.\"}", "{\"title\": \"Response 1, Part 2\", \"comment\": \"---\\n> Q3: **[Data Leakage]**\\nMost MLLMs likely know the plots of popular movies and shows because their summaries or transcripts were part of their training data. So, they may be able to answer the questions in the dataset without any context\\n\\n**[Blindness Experiment]**\\n\\nTo genuinely assess the data leakage, we deliberately dropped the video and only fed the question and some context about the episode or the movie without any visual inputs.\\n\\nFor instance, here is the input prompt in the blindness case:\\n\\n``\\nThis is a question for a video from {show} {season_num} {episode_num}, use your knowledge to answer this question:\\n{question}\\n''\\n\\nWe have conducted the blindness experiments using two models, Qwen and GPT-4o.\\nAs shown in the tables below, in most skills, the blind models' performance is too close to the random performance.\\n\\nFor instance, on the \\\"global appearance\\\" and the \\\"scene transitions\\\" skills, Qwen achieves 19.6 and 21, while GPT-4o achieves 20.8 and 22.5, approximately equal to the random performance of 17 for both skills.\\n\\n\\n| Qwen | Global Apperance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Question Type | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| Video + sub + question | 36.6 | 30.2 | 36.64 | 50.23 | 59.89 | 0.67 | 2.05 | 1.39 | 2.82 | 42.712 | 1.7325 |\\n| Question + Video Info | 19.64 | 21.35 | 35.05 | 46.57 | 39.71 | 0.28 | 1.7 | 1.48 | 2.6 | 32.464 | 1.515 |\\n| Question | 18.75 | 19.27 | 29.62 | 45.49 | 38.29 | 0 | 0.97 | 0.76 | 1.7 | 30.284 | 0.8575 |\\n\\n\\n| GPT-4o | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|------------------------|-------------------------|-------------------|-------------------|--------------------|-------------------|---------------|----------------------------|-------------------|-------------------------|---------|-----------|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| Video + sub + question | 45.98 | 46.35 | 35.32 | 68.02 | 81.7 | 3.46 | 3.38 | 2.72 | 3.47 | 55.474 | 3.2575 |\\n| Question + Video Info | 20.83 | 22.51 | 17.18 | 42.82 | 17.17 | 1.7 | 0.37 | 0.68 | 0.7 | 24.102 | 0.8625 |\\n| Question | 14.81 | 24.08 | 15.78 | 42.35 | 16.44 | 1.75 | 0.36 | 0.67 | 0.67 | 22.692 | 0.8625 |\\n\\n\\n\\n**[Data Leakage vs. Common Sense]**\\n\\nIn contrast, only the blind Qwen on one skill, the ``Character Actions'', achieves closer performance than the Qwen, which takes the visual input, 36.6 and 36, respectively.\\nThis could be interpreted as the model using its common sense to answer the question.\\nThe choices in this skill contain valid actions, and only their order is wrong. \\nThus, we argue that the model could perform well using common sense to order the events.\\n\\nTo test our hypothesis, we assess the model performance on this skill as an open-ended question without choices. \\nWe leverage GPT-4o to score the models' outputs out of 5, where 0 is the worst and five is the best. The detailed prompt used while scoring is depicted in Figure 7.\\nAs expected, when we remove the visual input, the accuracy drops significantly from 0.79 to 0.003 as shown in the table below.\\n\\n| Inputs | GPT-4o Score |\\n|----------------------------|-------------------|\\n| Questions | 0.003 |\\n| Video + Questions | 0.79 |\"}", "{\"title\": \"Kind reminder: We are looking forward to your reply\", \"comment\": \"Dear Reviewer oZkF,\\n\\nWe kindly ask if our response has addressed your concerns. Fortunately, we still have till December 3rd to discuss. Therefore, please feel free to share any additional questions or feedback, and we\\u2019ll be happy to provide further clarification.\\n\\nBest regards, The Authors\"}", "{\"summary\": \"This paper introduces InfiniBench, an innovative and comprehensive benchmark focused on evaluating large multimodal models' performance in understanding very long videos. InfiniBench is notable for its ultra-long video duration (averaging 52.59 minutes per video) and massive question-answer pairs (108.2K), covering nine different skills including multiple-choice and open-ended questions. These questions are designed to be both diverse and human-centric, with videos primarily sourced from movies and TV shows. Experimental results show that even leading AI models like GPT-4V and Gemini 1.5 Flash face significant challenges in long video understanding, achieving average accuracies of only 49.16% and 42.72%, with mean scores of 3.22 and 2.71 (out of 5) respectively. This indicates that while these models perform relatively well on local skills, they still have limitations in skills requiring global reasoning and deep contextual understanding, such as scene transitions and movie spoiler questions. Open-source models generally perform below random chance on multiple-choice questions, highlighting long-sequence global reasoning as a major challenge for existing models. Additionally, models relying on both video and text information perform poorly without caption input, emphasizing the importance of processing both visual and textual information for long video understanding. The introduction of InfiniBench aims to fill the gap in long video understanding benchmarks, drive the development of open-source large language models, and motivate multimodal large models toward more human-like long video understanding and reasoning capabilities, despite current limitations such as video source restrictions and script dependency.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. InfiniBench provides a comprehensive evaluation of large multimodal models' capabilities in long video understanding through including the longest video duration and a large number of question-answer pairs, as well as designing diverse question types (multiple-choice and open-ended questions) covering nine different skills, thus thoroughly examining models' performance across multiple dimensions of long video understanding.\\n\\n2. By evaluating various models including both commercial and open-source models, InfiniBench reveals the challenges and limitations of existing models in long video understanding, especially in tasks requiring deep contextual understanding and critical thinking. This in-depth assessment helps identify model deficiencies and provides clear directions for future research and model improvements.\\n\\n3. InfiniBench's design not only tests models' technical capabilities but also drives models toward more human-like understanding and reasoning abilities. Through proposing human-centric questions, such as movie spoiler questions, it promotes model performance improvement in long video understanding tasks, which is significant for achieving more advanced AI applications and advancing the field of artificial intelligence.\", \"weaknesses\": \"1. The benchmark only uses movies and TV shows for testing, which is too limited. It should include more types of videos that show different parts of real life, like nature documentaries or home videos. The problem is that movies and TV shows follow certain storytelling patterns, so AI models might just learn these patterns instead of truly understanding the videos. They should add more casual videos like vlogs and livestreams to make the testing more realistic.\\n\\n2. The benchmark needs written scripts to create its questions and answers. This is a big problem because most real-world videos don't come with scripts. Without scripts or captions, the benchmark can't test how well AI models understand regular videos that people actually watch and share online.\\n\\n3. InfiniBench's testing does not cover current mainstream open-source models such as Qwen2VL, LLaVA-Onevision, and InternVL2. This makes it difficult to obtain a more comprehensive and in-depth comparison between open-source and closed-source models.\\n\\n4. In Table 1, the benchmark comparison is insufficient, especially regarding some recent video benchmarks such as Video-MME and LongVideoBench. Additionally, the authors' definition of \\\"very long\\\" is problematic - MLVU and MovieChat have only a 3-minute gap, yet MLVU is defined as very long. This is not reasonable.\", \"questions\": \"1. How is GPT-4V's scoring aligned with human evaluation?\\n2. Why weren't the latest models tested, and why wasn't there comparison and discussion of the latest benchmarks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 1, part 8\", \"comment\": \"> Q13 It would be helpful to see a listing of modalities (vision, summary, transcript) used to generate each question.\\n\\nThe table below shows the different modalities used to generate each type of questions.\\n\\n|Skill name | Input Modality|\\n|-----------|---------------|\\n|Character actions | Summary +Transcript |\\n|Deep context understanding | Summary +Transcript |\\n|Global appearance | Vision |\\n|Linking multiple events | Summary|\\n|Scene transitions | Transcript |\\n|Spoiler questions | web scraping |\\n|Temporal questions | Transcript |\\n|Local vision text questions | Adopted from TVQA|\\n|Summarization | web scraping |\"}", "{\"comment\": \"I\\u2019d like to thank the authors for their very elaborate response and the many new experiments performed. Overall, the authors were able to address most of my concerns while also making significant improvements to the paper. I am still concerned about potential bias towards GPT4. While my 3rd question (bias in eval protocol) has been mitigated by the experiments for Q7, my concern about bias in the questions still stands. Nevertheless, since most of my concerns have been addressed, I am increasing my score to 8\\\\. I am also withdrawing my call for an ethical review based on the authors\\u2019 response regarding potential copyright issues.\", \"please_find_my_detailed_responses_below\": [\"**Q1**: Thank you for the clarification\\\\! From reading the paper, I got the impression that the human evaluation simply asked human labelers to answer the questions. I see that significant care is being taken to ensure data quality. I would suggest filtering out those QAs that labelers marked as wrong.\", \"**Q2**: Thank you for the response. This clarifies my understanding of what a transcript is and mitigates my concern of insufficient visual data being present. I\\u2019d suggest adding this to the paper or supplementary material.\", \"**Q3**: Thank you for this additional analysis. This somewhat mitigates my concern about prior knowledge existing in models. Interestingly, the blind Qwen model does well on character actions and local vision+context while the blind GPT4o is barely better than random. For the sake of transparency, it would be good to add this blindness experiment to the supplementary material.\", \"**Q4**: Thank you for this interesting analysis\\\\! This mitigates my concern about multimodality. It\\u2019s great to see that both vision and subtitles contribute positively to accuracy and there are complimentary effects when combining them. The insights per question type are also very valuable.\", \"**Q5**: Thank you for the detailed response\\\\! The fact that this dataset only consists of annotations on top of other datasets and does not include frames from the TV shows alleviates my ethical concerns.\", \"**Q6**: I appreciate the authors performing an additional analysis and including the results in the paper. This somewhat alleviates my concern about duplication. I would call the example given a near-duplicate though since the same reasoning is needed to answer both question, except that the answer is flipped.\", \"**Q7**: Thank you for this additional analysis\\\\! Since open-source VLMs are not as good at instruction following, especially with large contexts, I think this evaluation method makes sense. But reporting the exact matching metric is a good idea since it allows for a cheaper evaluation. Another thing worth trying would be, instead of penalizing a model if its answer does not parse, select its answer by random choice, which gives it a 1-in-N chance of answering correctly and is more fair. (I\\u2019m not asking this for the rebuttal, but just giving this as a suggestion on how the gap between the evaluation methods could be closed.)\", \"**Q8**: Thank you for the many additional experiments and insightful findings\\\\!\", \"**Q9**: Thank you for the additional explanation.\", \"**Q10**: I\\u2019m still not sure this explains performance lower than random, since any learned biases would have to actively steer the model away from the correct answer in order for the score to be lower than random.\", \"**Q11**: Thank you for the clarification\\\\!\", \"**Q12**: Thank you for this reassuring clarification\\\\!\"]}", "{\"title\": \"Response1, Part 6\", \"comment\": \"---\\n> Q8 (Continued)\\n\\n| Qwen2VL | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|--|---|-----|----|----|----|----|---|---|----|----|----|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 36.6 | 30.2 | 36.64 | 50.23 | 59.89 | 0.67 | 2.05 | 1.39 | 2.82 | 42.712 | 1.7325 |\\n| 128 Frames | 32.58 | 28.64 | 34.59 | 49.33 | 54.98 | 0.59 | 1.87 | 1.31 | 2.75 | 40.024 | 1.63 |\\n| 16 Frames | 30.8 | 20.83 | 32.2 | 46.59 | 42.9 | 0.3 | 1.53 | 1.16 | 2.44 | 34.664 | 1.3575 |\\n\\n\\n| GPT-4o | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|--|---|-----|----|----|----|----|---|---|----|----|----|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | 45.98 | 46.35 | 35.32 | 68.02 | 81.7 | 3.46 | 3.38 | 2.72 | 3.47 | 55.474 | 3.2575 |\\n| 128 Frames | 18.98 | 29.84 | 17.92 | 43.12 | 22.1 | 1.78 | 0.37 | 0.61 | 0.69 | 26.392 | 0.8625 |\\n| 16 Frames | 20.37 | 31.93 | 16.38 | 42.32 | 20.22 | 1.68 | 0.35 | 0.63 | 0.65 | 26.244 | 0.8275 |\\n\\n\\n| InternVL | Global Appearance | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|--|---|-----|----|----|----|----|---|---|----|----|----|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 25.89 | 21.35 | 24.12 | 44.33 | 41.62 | 0.72 | 1.69 | 1.27 | 2.53 | 31.462 | 1.5525 |\\n| 16 Frames | 23.21 | 20.83 | 25.18 | 44.82 | 31.95 | 0.7 | 1.54 | 1.28 | 2.6 | 29.198 | 1.53 |\"}", "{\"metareview\": \"The paper presented a benchmark for long video understanding which is an important and challenging problem in the field. The paper received mixed ratings from the reviewers. There are some critical concerns raised by the reviewers. First, one reviewer is concerned about the diversity of the testing long videos. The videos are mainly from TV shows or movies which largely restricts the applicability of the benchmark to be used to model realistic video understanding problems. Another reviewer also mentioned that the testing videos are from only a limited number of channels which may bring bias in testing the performance of different video understanding models. Second, the reviewers are also criticizing the benchmark baseline models being used. There are several important video understanding models not tested in the benchmark. It brings limitations to understanding how challenging or useful the proposed benchmark is. Finally, there are also comments regarding the presentation quality of the submission. Based on these key points, AC decided to recommend a rejection for this time. The authors are encouraged to further polish the paper to submit it another time.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers requested further clarification on important details of the benchmark and asked for more experiments on state-of-the-art video understanding models. The reviewers are not fully satisfied with the authors' rebuttal.\"}", "{\"title\": \"Response 1, Part 4\", \"comment\": \"> Q4: (Continued)\\n\\n| Llava OV | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 36.6 | 23.95 | 25.911 | 45.49 | 48.6 | 0.55 | 1.79 | 1.3 | 2.58 | 36.1102 | 1.555 |\\n| 16 Frames | 41.51 | 24.47 | 25.97 | 44.27 | 40.15 | 0.48 | 1.48 | 1.33 | 2.3 | 35.274 | 1.3975 |\\n\\nUsing 250 frames per video is not applicable for LLava-OV and InternVL, as they are standard video models not designed to handle too-long videos.\\nAccordingly, we hit the maximum context length for these models, which prevents us from running on higher sampling rates.\\n\\n\\n---\\n> Q5: Since most question-answer pairs are generated by GPT-4o, could this lead to inflated evaluation results for GPT-4o?\\n\\nWe appreciate the reviewer\\u2019s suggestion and share the concern about ensuring a reliable and unbiased evaluation. To address this point, we conducted the following experiments:\\n\\nTo address this point, we conducted several experiments:\\n1. **[Reliable Benchmark]** \\n* We performed a human evaluation of the generated question-answer pairs.\\n* The results show that the generated pairs align with human annotations by more than 95%, demonstrating that the benchmark is sufficiently reliable.\\n2. **[Poor Performance]** \\n* Despite GPT-4o being used to generate the question-answer pairs, its evaluation performance is far from acceptable.\\n* For example, it achieves only 35% accuracy on the \\\"Character Actions\\\" skill, while random performance is 16%. This indicates that the evaluation is not artificially inflated.\\n3. **[Input Richness]** \\n* An important question arises: If GPT-4o can generate reliable, human-like question-answer pairs, why does it perform poorly on these questions?\\n* The answer lies in the difference between the inputs used during data creation and evaluation:\\n* During data creation, we provide GPT-4o with the transcript and the video summary, which contain rich visual and contextual information, including semantics, event context, character personalities, and other specifics.\\n* To assess long-video understanding capabilities during the evaluation, we omit the transcript and the summary and provide only the video input.\\n* This discrepancy in input richness justifies the gap between GPT-4o\\u2019s performance during testing and the benchmark creation.\\n\\nWe hope these points clarify why the evaluation results are not inflated and further justify the reliability of the benchmark and experimental setup.\"}", "{\"summary\": \"In this work, the authors propose an InfiniBench for very long video understanding. To contain local/global events and understand visual/contextual content, they define a long video understanding covering nine skills through four critical aspects in movies and tv shows.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1 Important Topic. Long-form video understanding is a challenging but important problem. Hence, how to develop benchmark to evaluate this problem is critical.\\n\\n2 Experiments. The experimental results are sufficient to support the claim of benchmark.\", \"weaknesses\": \"1 Similar work has been proposed in the literature. For example, [MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding, arXiv:2312.04817]. Please clarify the difference.\\n \\n2 The writing and paper organization is not good. Please refine it for easy reading.\", \"questions\": \"Please see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response1, Part 5\", \"comment\": \"---\\n> Q7: Why are multiple-choice questions evaluated by GPT not exact matching?\\n\\nWe appreciate the reviewer\\u2019s observation. The primary reason for using GPT-4o for evaluation instead of exact matching is to address cases where models do not strictly follow instructions and fail to output the selected choice directly.\\n\\n**Why GPT-4o Evaluation?**\\n\\n* The flexibility of GPT-4o allows us to focus on assessing the specific skill being tested rather than penalizing models for instruction-following errors.\\n\\n* This approach ensures that the evaluation emphasizes the core reasoning or understanding ability of the model, rather than its adherence to formatting requirements.\\n\\n**Comparison of Evaluation Methods**\\n\\nTo assess the impact of these two evaluation strategies\\u2014exact matching (restrictive) and GPT-4o evaluation (flexible)\\u2014we conducted experiments on two models: GPT-4o and MiniGPT4_video.\", \"findings\": \"* GPT: Performs equally well under both exact matching and flexible evaluation. \\nGPT strictly follows the instructions and outputs the exact choices, showing no advantage from flexibility.\\n\\n* MiniGPT4_video: Benefits significantly from the flexibility of GPT-4o evaluation. \\nThis model sometimes deviates from the instruction format but still demonstrates reasonable understanding of the question. Flexible evaluation allows us to capture its actual performance more effectively.\\n\\n**Conclusion and Future Reporting**\\n\\n* Inspired by this discussion, we will report results using both evaluation methods:\\n\\n1. Exact Matching: A restrictive approach that directly evaluates instruction adherence.\\n\\n2.GPT-4o Evaluation: A flexible approach that focuses on the skill being tested.\\n\\nThis dual reporting will provide a more comprehensive understanding of model performance, balancing strict evaluation with flexibility where appropriate.\\n\\n| | Global Appearance | Scene transitions | Character actions | Temporal questions | Local vision+text |\\n|------------------------------|-------------------|-------------------|-------------------|--------------------|--------------------|\\n| GPT4o (GPT matching) | 45.98 | 46.35 | 35.3 | 68.02 | 81.70 |\\n| GPT4o (Exact matching) | 44.76 | 45.83 | 35.2 | 67.78 | 79.48 |\\n| MiniGPT4-video (GPT) | 3.57 | 1.5 | 2.7 | 39.54 | 13.97 |\\n| MiniGPT4-video (Exact match) | 2.76 | 1.04 | 1.6 | 7.63 | 0.1 |\\n\\nThis ablation only for 20 % of the benchmark\\n\\n---\\n> Q8: How does the number of video frames provided affect the model accuracy?\\n\\n1. Influence of Input Frame Rate:\\n\\n* Feeding more frames intuitively improves accuracy, but the degree of improvement varies across models.\\n\\n* For instance, GPT-4o benefits the most from higher frame rates, while LLaVA-OV\\u2019s performance remains almost unchanged despite using an 8x higher frame rate.\\n\\n2. Analysis of LLaVA-OV\\u2019s Behavior:\\n\\n* The limited benefit of higher frame rates for LLaVA-OV may be attributed to its training strategy. \\n\\n* LLaVA-OV is trained jointly on single images, multi-images, and videos.\\n\\n* This strategy employs a balanced visual representation approach, aggressively downsampling video inputs to ensure parity with image-based scenarios.\\n\\n* While effective for general tasks, this aggressive downsampling likely hurts LLaVA-OV\\u2019s ability to understand long videos, limiting its benefit from higher frame rates.\\n\\n3. Skill-Specific Insights:\\n\\n* Specific skills benefit more from higher frame rates. For example, the ``local vision+text'' skill improves most as it relies on short sequential shots.\\n* Increasing the frame rate reduces the chance of missing critical shots related to the answer, thereby boosting accuracy for such tasks.\\n\\nThe results demonstrate that while higher frame rates generally improve performance, the degree of improvement depends on the model\\u2019s design and training strategy. Models like GPT-4o, optimized for sequential inputs, show significant gains, whereas models like LLaVA-OV, which aggressively downsample videos, see minimal benefits.\"}", "{\"title\": \"Summary of our Rebuttal\", \"comment\": [\"We sincerely thank the reviewers and area chairs for their thoughtful feedback and constructive comments. Your insights have been instrumental in strengthening our work, improving its clarity, and providing additional evidence to support our claims.\", \"In the revised manuscript, we have incorporated all feedback, with changes clearly highlighted in blue for your convenience.\", \"Below, we summarize the key updates and experiments added to address your concerns:\", \"1. Inclusion of More Models:\", \"We have incorporated additional recent long-video understanding models into our benchmark to ensure they are comprehensive and holistic.\", \"As a result, we have updated the findings in the benchmark, providing the community with more robust insights to guide future research directions.\", \"2. Human Verification of the Benchmark:\", \"We conducted a human verification of 10% of the benchmark and evaluated all models on this subset.\", \"The strong correlation between model scores on the verified subset and the full benchmark demonstrates the reliability of our dataset.\", \"3. Human Evaluation of Models:\", \"We evaluated the correlation between our benchmark scores and human preferences, observing a high agreement of 95%.\", \"This further validates the benchmark\\u2019s effectiveness in assessing model performance.\", \"4. Dataset Leakage Analysis:\", \"We performed additional experiments to assess dataset leakage and found it to be minimal due to the careful design of the benchmark.\", \"This ensures the integrity of our evaluation and strengthens confidence in our results.\", \"5. Influence of Subtitles:\", \"We added ablation studies to analyze the role of subtitles, showing that their impact is limited.\", \"This reinforces that our benchmark primarily evaluates visual understanding, focusing on vision-based reasoning rather than textual cues.\", \"6. Enhanced Writing and Visuals:\", \"We replaced nearly all figures with enhanced versions for better presentation and clarity.\", \"Additionally, we revised several sections to improve readability and ensure a smoother flow of ideas.\", \"We hope these updates comprehensively address your concerns and further demonstrate the robustness and value of our contributions. Thank you again for your invaluable feedback, which has greatly improved the quality of our work.\"]}", "{\"title\": \"Response 1, Part 1\", \"comment\": \"---\\n> Q1: A human evaluation is performed on a subset of the data, but good human performance is no proof that questions are well-formed and free of hallucinations.\\n\\nWe appreciate the reviewer\\u2019s concern and would like to clarify the steps we have taken to ensure the quality and validity of our benchmark:\\n\\n[**Human Verification of 10% of the Data**]\\n\\nWe conducted a human verification process on 10% of the dataset (10.8k questions) to assess the dataset's quality. By \\\"dataset quality,\\\" we evaluate:\", \"by_saying_dataset_quality_we_mean\": \"1. Validity of the Questions: Ensuring that the questions are relevant to the video.\\n\\n2. Correctness of the Answers: Verifying whether the answers provided for valid questions are accurate.\\n\\n3. Plausibility of Question-Answer Pairs: Checking if the format and phrasing of the pairs are clear and free of vagueness.\\n\\nThis verification process ensures a holistic assessment of the benchmark\\u2019s reliability.\\n\\n**Results:**\\n\\n* On average, 95.8% of the questions were deemed correct and valid.\\n\\n* A breakdown of accuracy per skill is provided below:\\n\\n|Skill Name |Number of Questions| Accuracy |\\n|----------------------------|-------------------|-------------------------|\\n|Character Actions | 667 | 94.9 |\\n|Deep Context Understanding | 2172 | 96.50 |\\n|Global Appearance | 135 | 89.62 |\\n|Linking Multiple Events | 2297 | 98.00 |\\n|Scene Transitions | 103 | 88.34 |\\n|Spoiler Questions | 43 | 95.34 |\\n|Temporal Questions | 2927 | 94.08 |\\n\\nWe are continuously working to verify and correct the rest of the dataset and estimate that full verification will require approximately 4249 human hours.\\n\\n[**Human Verification of 100% of the Data**]\", \"the_full_dataset_consists_of\": \"* 923 episodes, each requiring 3 hours on average for verification.\\n\\n* 296 movies, each taking approximately 5 hours to verify.\\n\\nTo expedite this process, we have partnered with a data annotation company with ten full-time annotators working on the benchmark. Based on this setup, we estimate that the entire dataset will be fully verified and corrected within 20 working days.\\n\\nWe believe these efforts demonstrate our commitment to ensuring the benchmark is free of hallucinations and robust enough for long-video understanding tasks.\\n\\n---\\n> Q2: **[Transcript Details]** \\nCould authors provide evidence of transcript quality? How accurate and complete are they? How much focus do they have on vision? Could authors provide examples?\\n\\n1. Role of Transcripts:\\n\\n* Transcripts are detailed documents created by movie or TV show writers. They provide comprehensive information beyond spoken dialogue, including: \\n1. Scene descriptions.\\n2. Context about settings, locations, and character actions.\\n3. Camera angles or shot compositions.\\n\\n* Transcripts serve as blueprints for visual and narrative elements, helping us extract visual insights and design challenging, reliable benchmark questions.\\n\\n2. Transcript Example:\\n\\n```\", \"transcript_of_episode_1_season_1_of_friends_tv_shows\": \"**[Scene: Central Perk, Chandler, Joey, Phoebe, and Monica are there.]**\", \"monica\": \"There's nothing to tell! He's just some guy I work with!\", \"joey\": \"Strip joint! C'mon, you're single! Have some hormones!\", \"ross\": \"I don't want to be single, okay? I just... I just- I just wanna be married again!\\n**(Rachel enters in a wet wedding dress and starts to search the room.)**\\n```\", \"as_shown_in_the_above_example_the_transcript_contains_a_lot_of_visual_clues_such_as\": \"* Rachel enters in a wet wedding dress and starts to search the room.\\n\\n* Scene: Central Perk, Chandler, Joey, Phoebe, and Monica are there.\\n\\n* Ross gestures his consent.\\n\\n3. Subtitle Example:\\n\\nIn contrast, the subtitle does not provide any visual clues therefore, we use it only during testing only.\\n\\n```\\nSubtitle of the same episode\\n1\", \"00\": \"00:57,892 --> 00:00:59,962\\nYou're going out with a guy.\\nThere's gotta be something wrong with him.\\n```\\n\\nBy distinguishing between the roles of transcripts and subtitles, we ensure that the benchmark tests true long-video understanding during inference while using transcripts only for reliable benchmark construction.\"}", "{\"title\": \"Response 1, Part 3\", \"comment\": \"---\\n> Q 3.2 (Continued)\\n\\n| InternVL | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|--------------------|-------------------------|-------------------|-------------------|--------------------|-------------------|---------------|----------------------------|-------------------|-------------------------|---------|-----------|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 25.89 | 21.35 | 24.12 | 44.33 | 41.62 | 0.72 | 1.69 | 1.27 | 2.53 | 31.462 | 1.5525 |\\n| 16 Frames | 23.21 | 20.83 | 25.18 | 44.82 | 31.95 | 0.7 | 1.54 | 1.28 | 2.6 | 29.198 | 1.53 |\\n\\n\\n| Llava OV | Global Appearance (ACC) | Scene transitions | character actions | Temporal questions | Local vision+text | Summarization | Deep context understanding | Spoiler questions | Linking Multiple Events | AVG acc | AVG score |\\n|--------------------|-------------------------|-------------------|-------------------|--------------------|-------------------|---------------|----------------------------|-------------------|-------------------------|---------|-----------|\\n| | MCQ | MCQ | MCQ | MCQ | MCQ | Open-Ended | Open-Ended | Open-Ended | Open-Ended | | |\\n| Random Performance | 17 | 17 | 16 | 42 | 20 | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 250 Frames | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |\\n| 128 Frames | 36.6 | 23.95 | 25.911 | 45.49 | 48.6 | 0.55 | 1.79 | 1.3 | 2.58 | 36.1102 | 1.555 |\\n| 16 Frames | 41.51 | 24.47 | 25.97 | 44.27 | 40.15 | 0.48 | 1.48 | 1.33 | 2.3 | 35.274 | 1.3975 |\\n\\n\\nUsing 250 frames per video is not applicable for LLava-OV and InternVL, as they are standard video models not designed to handle too-long videos.\\nAccordingly, we hit the maximum context length for these models, which prevents us from running on higher sampling rates.\\n\\n---\\n> Q4: Are there text scripts (screenplay) associated with all the videos (movies and TV shows)?\\n\\nYes, transcripts are available for all the movies and TV series included in our benchmark.\\n\\n* We collected these transcripts from publicly available sources on the internet and will publish them on our website to ensure convenience and facilitate future research.\\n\\n* However, as discussed earlier, we will not release the video content itself. Instead, we will refer users to the original sources, such as the MovieNet and TVQA datasets, where the videos can be independently accessed.\\n\\nThis approach ensures compliance with copyright and ethical standards while making the benchmark resources easily accessible for the research community.\"}" ] }
2Chkk5Ye2s
Be More Diverse than the Most Diverse: Optimal Mixtures of Generative Models via Mixture-UCB Bandit Algorithms
[ "Parham Rezaei", "Farzan Farnia", "Cheuk Ting Li" ]
The availability of multiple training algorithms and architectures for generative models requires a selection mechanism to form a single model over a group of well-trained generation models. The selection task is commonly addressed by identifying the model that maximizes an evaluation score based on the diversity and quality of the generated data. However, such a best-model identification approach overlooks the possibility that a mixture of available models can outperform each individual model. In this work, we numerically show that a mixture of generative models on benchmark image datasets can indeed achieve a better evaluation score (based on FID and KID scores), compared to the individual models. This observation motivates the development of efficient algorithms for selecting the optimal mixture of the models. To address this, we formulate a quadratic optimization problem to find an optimal mixture model achieving the maximum of kernel-based evaluation scores including kernel inception distance (KID) and Rényi kernel entropy (RKE). To identify the optimal mixture of the models using the fewest possible sample queries, we view the selection task as a multi-armed bandit (MAB) problem and propose the *Mixture Upper Confidence Bound (Mixture-UCB)* algorithm that provably converges to the optimal mixture of the involved models. More broadly, the proposed Mixture-UCB can be extended to optimize every convex quadratic function of the mixture weights in a general MAB setting. We prove a regret bound for the Mixture-UCB algorithm and perform several numerical experiments to show the success of Mixture-UCB in finding the optimal mixture of text and image generative models. The project code is available in the [Mixture-UCB Github repository](https://github.com/Rezaei-Parham/Mixture-UCB).
[ "Multi-Armed Bandits", "Evaluation of generative models", "Kernel-based evaluation scores", "Mixture-UCB", "Diversity in data generation" ]
Accept (Poster)
https://openreview.net/pdf?id=2Chkk5Ye2s
https://openreview.net/forum?id=2Chkk5Ye2s
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zphZCHmUss", "zjoXTTzljq", "qSRIl7Nbd6", "iuhLQ90IYz", "fPxz94v3Ez", "f37kD9L6Sj", "dAfeNzTu4p", "a8xuSWLY2s", "ZJjjfNRLDO", "Z53rjMa02x", "Yj26qQJmee", "XjnZt2xWv3", "XjF4uFuzj5", "TKsVQuqH5l", "RF6cKnK9Jq", "Q6mqGni5t2", "Q3iLQImlNt", "JQ0CyD9nok", "IOoWmPmzci", "GWnq10QNcJ", "DYxlppIkys", "BUfchdW90N", "2dAzTRWHKK", "0xqLc8FLgV", "0dTTDTmQUj" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732900621726, 1734687743636, 1731378818636, 1733248236231, 1732894512813, 1732305067727, 1732305307088, 1732900536124, 1732305692593, 1732305804649, 1733224718472, 1732305458953, 1732693206421, 1732529749808, 1732900706115, 1737524281897, 1733161523331, 1732900374399, 1732413067447, 1731426507399, 1732834437048, 1733096210507, 1730758316106, 1730942453152, 1732678525443 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Area_Chair_prmw" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_GUyR" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_pb52" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_mqps" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_x3ew" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_x3ew" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_nCW5" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_pb52" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_mqps" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_mqps" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_nCW5" ], [ "ICLR.cc/2025/Conference/Submission13785/Reviewer_mqps" ], [ "ICLR.cc/2025/Conference/Submission13785/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank Reviewer pb52 for the feedback on our response. We are pleased to hear that the response addressed the reviewer's comments.\"}", "{\"metareview\": \"This paper addresses the problem of online selection to obtain a mixture of generative models.\\nTo minimize the number of sample queries needed to form the optimal mixture model, the authors propose the Mixture Upper Confidence Bound (Mixture-UCB) online learning approach, by maximizing kernel-based evaluation scores such as the Kernel Inception Distance (KID) and Renyi Kernel Entropy (RKE). \\nBoth theoretical results (e.g., regret bounds) and empirical results are provided to demonstrate the effectiveness of the proposed method.\\n\\nFollowing the rebuttal phase, the reviewers reached a consensus that the (novel) problem under study is well-motivated and that the contribution of this work is significant. \\nAs such, I recommend accepting the paper. \\nHowever, **please update** the final submission to reflect the discussions with all reviewers (and particularly Reviewers x3ew and mqps), including a discussion of the potential limitations (of the setting, and/or of the proposed approach).\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised the following points:\\n\\n- Motivation (of online selection, or to minimize the number of samples, raised by all reviewers): The authors successfully convinced the reviewers that the problem setting is both significant and of interest.\\n- Limitations of the current setting and/or proposed approach (raised by Reviewers x3ew and mqps): This concern was **only partially resolved** by the authors during the rebuttal phase.\\n\\nI have carefully considered all of the above points in making my final decision.\"}", "{\"summary\": \"In this paper, the authors focus on the online selection of generative models, and in particular, the optimal linear mixture among a set of such models.\\nThe problem appears novel, and the authors make interesting connections to the maximization of some kernel-based scores and multi-armed bandit. \\nBased on this, the authors propose Algorithms 1 and 2 to solve this online selection of mixture efficiently, with performance guarantee given in Theorem 1 and 2, respectively. (Although I have some concerns on the settings and theoretical results, see below).\\nThese methods can be used for widely used kernel inception distance (KID) and Renyi kernel entropy (RKE), and are tested on realistic image and text data in Section 6.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper consider online selection of generative mixture models, which, to the best of knowledge, is a novel problem of interest.\", \"By making interesting connection to kernel-based scores and multi-armed bandit, the authors propose efficient methods to solve the above problem, with some theoretical guarantee.\", \"Experiments on realistic data are provided, showing the practical applicability of the proposed approaches.\"], \"weaknesses\": [\"It would be great to discuss the limitation of the proposed approach, see below for my detailed comments/questions.\", \"some settings and theoretical results need clarification, see below for my detailed comments/questions.\"], \"questions\": \"Below are a few questions and/or comments.\\n\\n1. The problem appears novel, so I believe it makes sense to better motivative it. For example, in which context are we interested in picking a model to generate a sample at each round, why it is of interest to use \\\"the fewest possible sample queries\\\"? How the proposed method performs in an offline setting, with respect to performance and/or scalability?\\n2. When summarizing the contribution of this paper, could the authors also provide (forward) pointers to the precise results? For example, \\\"proposing an online learning framework in Section ??\\\". I personally believe that this may facilitate the interested readers to quickly grasp the main contribution of the paper.\\n3. Is the working assumption of linearly mixed model somewhat restrictive? Is there something else in the literature, or even such linear combination is (the first time) proposed by the authors in this paper? In fact, on the top row of Figure 3, there is a linearly mixtured \\\"dog\\\" that appears a bit bizarre: is this due to some limitation of this linear mixture? \\n4. I personally find Theorem 1 a bit surprising: To me, kernel matrix \\\"estimation\\\" problem plus some online selection problem, and solving the former problem in general requires a lot of samples to have a tight spectral norm control on the estimated kernel matrix. I believe that the authors avoid this issue by assuming/focusing on the case of bounded. Could the authors comment more on this? For example, does this bounded kernel/loss function setting limit the practical interest of the proposed methods? Also, could the authors comment on the observed sample size $n_i$ for the proposed OGD method to make sense? We do not see this in Theorem 2 and this has an impact on the computational complexity I believe?\\n5. a tiny side remark: Figure 3 appears in the main text but commented in the appendix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank Reviewer mqps for the feedback on our response. Please find our responses below.\\n\\n**1- About Density and Precision** \\n\\nWe have clarified in our last response that what we meant was that Density and Precision are in the form $\\\\mathbb{E}[f(X)]$, which is an expected value of a function $f$. We cannot conclude that $X_g$ has a similar distribution as $X_r$ only by looking at $\\\\mathbb{E}[f(X_g)]$ and $\\\\mathbb{E}[f(X_r)]$. FID and KID are more suitable for measuring distances between probability distributions.\\n\\nFor example, for Precision (eqn. (1) in Kynk\\u00e4\\u00e4nniemi et al. 2019), the function $f$ is taken to be $f(x) = 1$ if $x$ is in the manifold of the real images (defined via k-th nearest neighbor by Kynk\\u00e4\\u00e4nniemi et al), and $f(x) = 0$ otherwise. We can see that Precision $\\\\mathbb{E}[f(X_g)]$ can attain its maximum value $1$ when the model is mode-collapsed and always output the same sample that lies in the manifold of the real images. Therefore, a high Precision does not indicate that the model approximates the true distribution well.\\n\\n\\n**2- About normal distribution and FID** \\n\\nThe FID formula provided by the reviewer actually attains its optimal value at a mixture distribution. According to the formula by Reviewer mqps, $$\\\\mathbb{E}[{\\\\rm FID}]=a^2\\\\mathbb{E}\\\\bigl[\\\\Vert\\\\epsilon\\\\Vert^2\\\\bigr]+\\\\tilde{a}^2\\\\mathbb{E}\\\\bigl[\\\\Vert\\\\tilde{\\\\epsilon}\\\\Vert^2\\\\bigl],$$ \\nwhere $\\\\tilde{a} = 1-a$. This quadratic function of $a$ is minimized at $$a = \\\\frac{\\\\mathbb{E}[\\\\Vert\\\\tilde{\\\\epsilon}\\\\Vert^2]}{\\\\mathbb{E}[\\\\Vert\\\\tilde{\\\\epsilon}\\\\Vert^2] + \\\\mathbb{E}[\\\\Vert\\\\epsilon\\\\Vert^2]}$$ \\nThis represents a mixture distribution (unless one of $\\\\mathbb{E}[\\\\Vert\\\\epsilon\\\\Vert^2]$ and $\\\\mathbb{E}[\\\\Vert\\\\tilde{\\\\epsilon}\\\\Vert^2]$ is zero, which is extremely unlikely since it means that one model is completely accurate). \\n\\n(To address a potential source of confusion, note that since $a$ and $\\\\tilde{a}$ are mixture weights, we have $a + \\\\tilde{a} = 1$ instead of $a^2 + \\\\tilde{a}^2 =1$. If we incorrectly assumed $a^2 + \\\\tilde{a}^2 =1$, we would arrive at the incorrect conclusion that FID is minimized at a single model.)\\n\\nTherefore, even the reviewer's example shows that the FID-minimizing model will be a mixture of the models with non-zero weights. We hope that this example, together with the experiment results in Figure 2 and Tables 1,2, will finally convince the reviewer that \\\"generative models normally do not complement each other\\\" is not a weakness of the mixture approach.\\n\\n**Additional Note:** We believe the actual formula of FID (including the covariance term) is more complicated than the reviewer's provided formula. In this response, we adopt the reviewer's formula for the sake of simplicity, but would like to note that the similar conclusion that FID is generally minimized by a mixture model remains valid for the correct FID formula with the covariance term. This conclusion is also supported by our numerical results in Tables 1 and 2.\"}", "{\"comment\": \"Thanks for the response. The authors have addressed my concerns and I will keep the score.\"}", "{\"title\": \"Authors' General Response\", \"comment\": \"We thank the reviewers for their constructive and thoughtful feedback. An updated paper has been submitted (changes are highlighted in blue). Here we address a common question from the reviewers. Responses to the other comments of the reviewers are posted under each review.\\n\\n**1-Motivations behind the online selection of generative models** \\n\\n \\nWe argue that the combination of generative models is inherently an online learning problem. Generating samples from a model is costly, in terms of computational time and resources, and perhaps monetary cost for commercial models, e.g. Dall-E and Flux 1.1 Pro. Therefore, we should naturally generate a small number of samples from each model and evaluate them, before we decide which model to use next, and keep using the new samples to guide our selection of the next model to use. This is similar to how a human would act when given a number of generative models that are costly to use. This paper proposes an algorithm to automate this process, with a tight bound on its regret (see updated discussion after Theorem 2).\\n\\nAn offline two-stage method (where we first generate a fixed large number of samples from each model in Stage 1, and then use them to compute the mixture distribution for the remaining samples in Stage 2) can be suboptimal since we are not discarding the obviously suboptimal arms in the middle of Stage 1, and we are not utilizing the information in the new samples in Stage 2 to update the mixture distribution.\"}", "{\"comment\": \"We thank Reviewer pb52 for the thoughtful feedback. We are pleased to hear that Reviewer pb52 finds our paper well-written and easy to follow. Please find our responses below.\\n\\n**1-Online selection of well-trained generative models might have few applications.** \\n\\nLarge generative model inference being costly is precisely the motivation of our online approach. If we have several large models where generating one sample can take about 9 seconds (for the SD-XL model on one A100 GPU), then generating a batch of 1000 samples from each model to perform conventional score-based evaluation would take several hours. Instead, we should generate the samples one by one in an online manner, quickly ruling out the obviously suboptimal models, while generating more samples from the apparently better models. Please refer to \\\"Motivations behind the online selection of generative models\\\" in our general response.\\n\\n\\n\\n**2-Theoretical guarantees about Mixture-UCB-OGD.** \\n\\nThe analysis of Mixture-UCB-OGD would be more challenging than that of Mixture-UCB-CAB, due to the difficulty of keeping track of the proportion vector $n^{(t)}/t$. The analysis of Mixture-UCB-OGD would involve a complicated dynamical system with the state being the proportion vector and the matrix $\\\\hat{\\\\mathbf{K}}$.\\n\\n**3-About FID metric.**\", \"we_note_that_the_fid_metric_can_be_decomposed_into_the_sum_of_two_terms\": \"1) a quadratic cost on the embedded means' difference norm $\\\\Vert \\\\mu_{G} -\\\\mu_X \\\\Vert_2^2$ and 2) a non-quadratic cost on the embedded covariances $\\\\Vert \\\\Sigma_G^{1/2} - \\\\Sigma_X^{1/2}\\\\Vert^2_F$. Finding a quadratic approximation of the second FID component will be an interesting future direction to extend our online evaluation method to the FID score.\"}", "{\"comment\": \"We thank Reviewer x3ew for the feedback on our response. We are pleased to hear that the response addressed the reviewer's comments.\"}", "{\"comment\": \"We thank Reviewer mqps for the thoughtful feedback. We are pleased to hear that Reviewer mqps finds the paper well written and easy to follow. Please find our responses below.\\n\\n**1-Motivation of finding a good mixture of different generative models.** \\n\\nWhile it is true that prior works focus on finding a single best model, we hope that our methods can open a new avenue of combining generative models. Our experiments show that our methods can select a mixture of models that outperforms any single one of those models in terms of diversity. Also, we believe that the models being trained in an independent manner is beneficial to our methods, since a mixture of independent samples will likely improve diversity. This is indeed the case for large-scale text-to-image models, which are usually trained on different training datasets. Figure 1 in the revised paper shows one example where three standard text-to-image models generate differently-styled cartoon giraffe pictures. In such cases, considering the mixture of generative models can significantly add to the diversity of output data.\\n\\n\\n\\n**2-Usefulness of the online learning approach.** \\n\\nNote that the optimal mixture will not necessarily involve every model. An online selection algorithm will prevent the models that are not in the optimal mixture (or has a low percentage in the optimal mixture) from being sampled frequently. Also, the problem of combining generative models is online in nature. Please refer to \\\"Motivations behind the online selection of generative models\\\" in our general response on the top of the page for details.\\n\\nAlso, we have a sparse version of our algorithm presented in Appendix 8.2, which attempts to choose a mixture involving a small subset of the models. In this setting, we have to avoid sampling too frequently from suboptimal models not present in the optimal mixture. An online algorithm can discard a model as soon as we can confidently tell that it is suboptimal.\\n\\n\\n**3-Other works on mixtures of generative models.** \\n\\nTo the best of our knowledge, our work is the first to propose the *selection of a mixture of the distributions* of multiple generative models. Specifically, we aim to highlight the possible improvements in the diversity of generated data by using a mixture of several generative models. \\n\\n**4-About Density and Precision.** \\n\\nWe have evaluated the Precision, Density (quality scores) as well as recall, coverage (diversity scores) for the KID-based experiments on the real image datasets LSUN-bedroom and FFHQ. We note that these scores cannot be used in our experiments on images generated by text-to-image models, because the reference dataset needed by the scores is unknown. \\n\\nWe have included Table 1 and the following explanations in the appendix of the updated paper. In our quantitative evaluation, we observed that the Precision of the optimal mixture is similar to that of the maximum Precision score among individual models. On the other hand, the Recall-based diversity improved in the mixture case. However, the quality-measuring Density score slightly decreased for the selected mixture model, as Density is a linear score for quality that could be optimized by an individual model. On the other hand, the Coverage score of the mixture model was higher than each individual model.\\n\\nNote that Precision and Density are scores on the average quality of samples. Intuitively, the quality score of a mixture of models is the average of the quality score of the individual models, and hence the quality score of a mixture cannot be better than the best individual model. On the other hand, Recall and Coverage measure the diversity of the samples, which can increase by considering a mixture of the models. To evaluate the net diversity-quality effect, we measured the FID score of the selected mixture and the best individual model, and the selected mixture model had a better FID score compared to the individual model with the best FID.\\n\\n**5-\\\"Is $\\\\hat{L}(\\\\mathbf{a};\\\\mathbf{x}^{(t)})-(\\\\mathbf{\\\\epsilon}^{(t)})^{\\\\rm T}\\\\mathbf{a})$ a lower or upper bound of $L(\\\\mathbf{a})$?\\\"** \\n\\nIt is a lower bound. In the conventional multi-armed bandit setting, the goal is to maximize the reward, so an upper confidence bound is used. In comparison, our goal is to minimize the loss function $L(\\\\mathbf{a})$, so we require a lower confidence bound. We still use the term \\\"upper confidence bound\\\" to conform with standard terminology. (Although we can flip the sign and consider the negative loss function to be the reward and use an upper confidence bound, this will make the reward always negative and is a negative definite function, which is unnatural.)\"}", "{\"comment\": \"We thank Reviewer nCW5 for the thoughtful feedback. We are pleased to hear that Reviewer nCW5 finds the formulation of the problem interesting. Please find our responses below.\\n\\n**1.1-Motivations behind online selection.** \\n\\nWe argue that the combination of generative models is inherently an online problem. Please refer to \\\"Motivations behind the online selection of generative models\\\" in our general response on the top of the page for details.\\n\\n**1.2-Importance of the ability to generate diverse samples.**\\n\\nDiversity has been an important criterion in the evaluation of generative models in the literature. The well-known evaluation scores of Recall and Coverage have been proposed to exclusively assess the diversity of a generation model. The evaluated diversity scores can be used to ensure the model's sufficient grasp of the contents in the data distribution.\\n\\n**1.3-Importance of saving samples in the selection process.**\\n\\nNote that generating high-resolution image and video samples would be computationally and financially expensive. Our goal is to identify the optimal mixture of generation models by using the minimum number of queries to sub-optimal generative models. This can save the unnecessary costs of creating samples from weaker models to detect their lack of optimality. \\n\\n\\n**2-Comparing Theorem 2 to offline approach.** \\n\\nNote that an offline approach will also have a worst-case error approximately $O(\\\\sqrt{1/T})$. For conventional multi-armed bandit without the quadratic kernel term, Bubeck and Cesa-Bianchi (2012) showed a minimax lower bound on the regret per round that scales like $\\\\sqrt{1/T}$ as $T$ increases, so Theorem 2 in our paper is tight within a logarithmic factor (this discussion is added to the updated paper after Theorem 2). This $\\\\sqrt{1/T}$ order of growth of the error is also applicable to offline approaches. If we have two arms where the difference between their average rewards is $\\\\sqrt{1/T}$, we cannot reliably tell which arm is better even if we are given the $T$ samples in an offline manner, and choosing the wrong arm will result in a $\\\\sqrt{1/T}$ gap from the optimum. This shows the tightness of Theorem 2, in the sense that even if we do not have the quadratic kernel term, and even if $T$ samples are given to us in an offline manner, there is still a $\\\\sqrt{1/T}$ error in the worst case, so Theorem 2 is tight within a logarithmic factor.\\n\\n**3-About Theorems 1 and 2, $\\\\alpha$ and the curse of dimensionality.** \\n\\nTheorem 1 holds for any *fixed* $\\\\alpha$. We also have a bound in Lemma 1 in the appendix which is a worst-case bound that holds simultaneously for every $\\\\alpha$, which is a key step in the proof of Theorem 2. We will clarify this in the revised paper.\\n\\nThe worst-case bound in Lemma 1 does not suffer from the curse of dimensionality, even though the bound holds simultaneously for every $\\\\alpha$ and every possible sequence $n_1,\\\\ldots,n_m$, where $n_i$ is the number of times arm $i$ is pulled up to the current round. This is because each quadratic term $\\\\hat{\\\\mathbf{K}}_{i,j}$ only depends on two models $i$ and $j$, so we only require taking union bound over choices of $n_i,n_j$ ($O(T^2)$ number of choices), rather than over choices of the whole sequence $n_1,\\\\ldots,n_m$ ($O(T^m)$ number of choices). The quadratic structure of the loss function helps us avoid the curse of dimensionality.\\n\\nTheorem 2 does not suffer from the curse of dimensionality. The right hand side of Theorem 2 scales with $m$, but not exponentially. This is because we only require sufficient samples from the $m$ models in the worst case in order to obtain a good estimate. Therefore, we only need a sample size $T$ that is approximately proportional to $m$ in the worst case, as shown in Theorem 2. We do not require the samples to cover a large $m$-dimensional space.\\n\\n**4-About the analysis on the dependency between the selections.** \\n\\nThe dependency between $x^{(1)},\\\\ldots,x^{(T)}$ is indeed a major challenge in the analyses. The samples $x^{(s)},x^{(t)}$ are dependent, so $\\\\kappa(x^{(s)},x^{(t)})$ does not have a tractable distribution. We bound the dependency between $x^{(s)},x^{(t)}$ via the chain rule of mutual information and Pinsker's inequality in the proof of Theorem 2. Intuitively, each $x^{(t)}$ may depend on $x^{(1)},\\\\ldots,x^{(t-1)}$, but cannot significantly depend on every one of $x^{(1)},\\\\ldots,x^{(t-1)}$ since the mutual information between $x^{(t)}$ and $x^{(1)},\\\\ldots,x^{(t-1)}$ is bounded by the entropy of the choice of the arm, which is upper-bounded by $\\\\log m$. This allows us argue that $\\\\kappa(x^{(s)},x^{(t)})$ cannot significantly deviate from the situation where $x^{(s)},x^{(t)}$ are independent, and $T^{-2} \\\\sum_{s,t \\\\in [T]} \\\\kappa (x^{(s)},x^{(t)})$ can be approximated by $T^{-2} \\\\sum_{s,t \\\\in [T]} \\\\mathbf{K}_{i_s, i_t}$ where $i_t$ is the arm pulled at round $t$.\"}", "{\"comment\": \"I thank the authors for their reply. Below are some follow-up remarks.\\n\\n**Measures that only compare the expected values of two distributions**\\n\\nI thought what the authors meant by saying that Precision and Density only compare \\\"expected values\\\" of the real distribution $P_r$ and the generative model $P_g$ is that these two measures only take into account the means $E_{x_r\\\\sim P_r}[x_r]$ and $E_{x_g\\\\sim P_g}[x_g]$. According to the reference (Kynk\\u00e4\\u00e4nniemi et al, NeurIPS 2019) pointed out by the authors, it does not seem to be the case for Precision defined in (1) of this paper. \\n\\n**Complementarity of generative models**\\n\\nAs we keep coming back to my statement \\\"generative models normally do not complement each other\\\", I think it is better that I try to explain with a concrete example. In this example, we will use FID and consider that the real distribution is a normal distribution $\\\\mathcal{N}(\\\\mu_r, I)$ of unknown mean $\\\\mu_r$ and known identity covariance (the knowledge of identity covariance is assumed to simplify the discussion). Let $\\\\mathcal{N}(\\\\mu_g, I), \\\\mathcal{N}(\\\\mu_{\\\\tilde{g}}, I)$ be two generative models obtained in an independent manner so that their errors $\\\\epsilon=\\\\mu_r-\\\\mu_g, \\\\tilde{\\\\epsilon}=\\\\mu_r-\\\\mu_{\\\\tilde{g}}$ are independent as well. It is easy to see that the expected value of a mixture $\\\\mathcal{M}=a\\\\mathcal{N}(\\\\mu_g, I)+\\\\tilde{a}\\\\mathcal{N}\\\\mu_{\\\\tilde{g}}, I)$ is given by $a\\\\mu_g+\\\\tilde{a}\\\\mu_{\\\\tilde{g}}$, which is equal to $\\\\mu_r+a\\\\epsilon+\\\\tilde{a}\\\\tilde{\\\\epsilon}$. Then we have ${\\\\rm FID}=\\\\Vert a\\\\epsilon+\\\\tilde{a}\\\\tilde{\\\\epsilon}\\\\Vert^2$. As $E[{\\\\rm FID}]=a^2E[\\\\Vert\\\\epsilon\\\\Vert^2]+\\\\tilde{a}^2E[\\\\Vert\\\\tilde{\\\\epsilon}\\\\Vert^2]$, ${\\\\rm FID}$ tends to be minimized by taking the best single model with the smallest error, specially in very high dimensions where ${\\\\rm FID}\\\\simeq E[{\\\\rm FID}]$ as a consequence of $\\\\epsilon^{\\\\rm T}\\\\tilde{\\\\epsilon}\\\\simeq 0$.\\n\\nOf course, many real data can not be well approximated by a single normal distribution, specially when they are multimodal. This is why I mentioned the case where the target distribution is a Gaussian mixture. Let us consider now that the real distribution is a mixture of two Gaussian components $\\\\mathcal{N}_1,\\\\mathcal{N}_2$. If we have two generative models $g,\\\\tilde{g}$ with $g$ performing better w.r.t. $\\\\mathcal{N}_1$ and $\\\\tilde{g}$ w.r.t. $\\\\mathcal{N}_2$, then combining them would probably lead to a better approximation of the target Gaussian mixture. However, when $g$ happens to perform better for both $\\\\mathcal{N}_1$ and $\\\\mathcal{N}_2$, using a mixture of $g,\\\\tilde{g}$ is likely to result in a decreased quality of generated samples. I think both cases are probable scenarios in practice, which is what I referred to as a \\\"potential limitation\\\" of the mixture approach.\"}", "{\"comment\": \"We thank Reviewer GUyR for the thoughtful feedback. We are pleased to hear that Reviewer GUyR finds the connection to kernel-based scores and multi-armed bandit interesting. Please find our responses below.\\n\\n**1-Context in which we are interested in picking a model to generate a sample at each round.** \\n\\nAn online selection method where we pick a model at each round is a natural approach to combine generative models, that can reduce the cost of generating from sub-optimal models. Please refer to \\\"Motivations behind the online selection of generative models\\\" in our general response on the top of the page for details.\\n\\n\\n**2-Forward pointers to the precise results.** \\n\\nThank you for the suggestion. Pointers have been added to the updated paper.\\n\\n**3-\\\"Is the working assumption of linearly mixed model somewhat restrictive?\\\"** \\n\\nNote that we treat each generative model as a black box. We are not allowed to examine the architectures and parameters of the models in order to combine them. We are also not allowed to combine the pixels or embedding vectors of the images generated by different models (if we decide to pull an arm, we must use the image generated by the arm as is). Therefore, we can only combine them through choosing a mixture (i.e., assigning a percentage to each model, e.g., 70 percent of the samples come from Model 1, and 30 percent come from Model 2). We make this assumption for the sake of full generality. By treating each model as a black box, our methods can combine any set of models, including models with vastly different architectures, without any prior knowledge on how we can combine the parameters of different models in a reasonable manner. The use of linear mixture is a consequence of this general assumption on the models.\\n\\n\\n\\n**4-About Theorem 1.** \\n\\nWe would like to clarify that the matrix $\\\\mathbf{K}$ in the paper is not the usual $n \\\\times n$ kernel matrix (where $n$ is the sample size), but rather the $m \\\\times m$ \\\"average kernel matrix\\\" ($m$ is the number of models), where the entry $\\\\mathbf{K}_{i,j}$ is the average of $\\\\kappa(X,X')$ where $X$ is a random sample from model $i$ and $X'$ is a random sample from model $j$. Since $m$ is usually much smaller than $n$, the matrix $\\\\mathbf{K}$ is significantly easier to estimate than the usual kernel matrix.\\n\\nIndeed, the bounds in Theorem 1 and 2 are possible because the loss function (4) is assumed to be bounded (see the beginning of Section 5.1). Please note that boundedness is not a big obstacle, since several popular kernels (e.g., Gaussian kernel) are bounded by default. In case the kernel function is unbounded (when the input data is unbounded), we can still compute a bound on the data in order to give a bound on the loss function.\\n\\nAbout the sample size needed by OGD, although Theorem 2 only applies to Mixture-UCB-CAB, we expect a similar $O(\\\\sqrt{\\\\frac{\\\\log T}{T}})$ error to apply to OGD, since this is a general phenomenon for regret bounds that do not depend on the sample distributions of the arms (see updated discussions after Theorem 2). Nevertheless, proving a regret bound for OGD would be more challenging than CAB due to the difficulty of keeping track of the proportion vector $n^{(t)}/t$, so we leave the analyses of the sample size needed by OGD to future studies.\\n\\n**5-About Figure 3.** \\n\\nThank you for pointing this out. This has been fixed in the updated paper.\"}", "{\"comment\": \"Thanks for the insightful response. I agree that the proposed algorithm better suits a \\\"remote setting\\\" where memory is less concerned. Overall, I'm supportive of the paper's acceptance.\"}", "{\"summary\": \"Given a group of generative models, this paper studies the problem of improving the diversity (and quality) of generated outputs by combining them into an (optimal) mixture. The authors present the Mixture-UCB framework, encompassing two specific algorithms, Mixture-UCB-CAB and Mixture-UCB-OGD, designed by iteratively optimizing a quadratic objective (wrt the mixture weights) on kernel-based eval metrics and efficiently formulating the mixture of models under an online bandit setting. Specific metrics include Kernel Inception Distance (KID) and R\\u00e9nyi Kernel Entropy (RKE). Theoretical regret bounds have provided adequate support for Mixture-UCB-CAB, and experimental evaluations demonstrate the advantages of both algorithms across various datasets and model types.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is generally well-written and well-structured, with clear definitions and visualizations.\\n2. The focus on mixtures of generative models to achieve superior diversity (and quality) appears innovative and addresses a limitation in traditional model selection approaches, which aim to find only a single best-performing model. Being able to customize the support size of the mixture is a good plus. \\n3. The theoretical analysis for Mixture-UCB-CAB is well-formulated and provides near-optimal guarantees (i.e., up to logarithmic factors of $m$ and $T$).\\n4. The diverse experiments demonstrate the algorithms' practical applications and performance gains, especially in exciting domains such as text-to-image generation.\", \"weaknesses\": \"1. While Mixture-UCB-OGD seems computationally more efficient than Mixture-UCB-CAB, the absence of a theoretical guarantee akin to Theorem 2 for CAB leaves an open question about its convergence and reliability.\\n2. Linear mixtures show their ability to enhance diversity. Still, the data distributions might produce mixtures that lack coherence, as some of the visual examples hint at (e.g., in Figure 3, the mixture model generated both realistic and unrealistic car images). In other words, optimizing the single diversity metric may not capture users' needs (e.g., a model that can generate images of cars with different sizes, poses, coloring styles, and backgrounds may be more natural to be said \\\"diverse\\\").\\n3. The mixture model approach may not be suitable for memory-efficient use cases, such as deployment on end devices like smartphones or smart home modules. Storing, updating, and switching among multiple generative models (e.g., this might require loading new parameters into memory) could significantly increase memory requirements and other costs, making the approach impractical for some critical applications. A possible mitigation strategy could be distilling a mixture of large models into a single, smaller-scale model, thereby retaining the benefits of the mixture while reducing resource needs.\", \"questions\": \"Could the authors kindly consider the weaknesses highlighted above and share any thoughts, feedback, or responses they might have? Also, I wonder about the typical scenarios where the mixture models fail to improve diversity or quality.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank Reviewer nCW5 for the feedback on our response.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": [\"We thank Reviewer mqps for the feedback on our response. Please find our responses below.\", \"We respectfully disagree that \\\"generative models normally do not complement each other\\\" is a potential limitation of the proposed mixture approach. Figures 2(a) and 2(b) show that the mixture approach outperforms each of the individual popular generative models applied on standard datasets, without truncation. For Figures 2(a) and 2(b), we did not apply any special treatment to make the models complement each other. As we stated in the text, the mentioned generative models are pre-trained and made available by the well-cited dgm-eval repository (Stein et al, NeurIPS 2023), which are not trained jointly to fit our mixture approach.\", \"Using FID and KID for evaluating generative models is a standard approach. We believe that our method outperforming individual models in terms of FID and KID (while having acceptable quality scores such as Precision and Density) is sufficient to show the merits of our method.\", \"Regarding the concern that our approach results in sub-optimal quality scores such as Precision and Density, we emphasize that the Precision and Density scores of our approach are not poor. In some situations, we can even improve upon FID without reducing Precision. For example, in Table 2, our approach ranks 1st (tied) in Precision and 2nd in Density among 5 models (while being 1st in FID and Coverage, and 2nd in Recall), giving an almost overall improvement upon the best individual model. In Table 1, our approach ranks 4th among the 6 models for Precision and Density (while being 1st in FID and Recall).\", \"While the reviewer's concern appears to be based on the Precision and Density scores, we remark that these scores are not measures of goodness of distribution approximation, and are **maximized by a mode-collapsed generative model** that always outputs the same high-quality sample. A reasonable model almost always has lower Precision and Density scores compared to the mode-collapsed model. Therefore, not maximizing the Precision and Density scores is not an indicator of weakness. As also emphasized by (Sajjadi et al, NeurIPS 2018) and (Naeem et al, ICML 2020) who proposed the scores, Precision and Density should be examined together with Recall and Coverage to provide a holistic evaluation of a generative model.\"], \"to_answer_the_specific_questions_of_the_reviewer\": [\"**Density, precision and expected value.** To see why Density and (improved) Precision (Kynk\\u00e4\\u00e4nniemi et al, NeurIPS 2019) are expected values, note that their definitions involve an average over the generated samples, in the form $\\\\frac{1}{M}\\\\sum_{i=1}^M f(x_i)$ where $x_1,\\\\ldots,x_M$ are the generated samples and $f$ is a certain function. Therefore, they are in the form $\\\\mathbb{E}[f(X)]$ where $X$ is a random generated sample. An average can always be maximized by a degenerate distribution (a mode-collapsed model) at the point $x$ that maximizes $f(x)$.\", \"**Regarding \\\"why not use other metrics to access the quality of generated data\\\".** We use FID and KID to assess our method. Unlike Density and Precision, FID and KID are mathematical (pseudo)metrics to measure the distance between probability distributions, and quantify how close the distribution of the generated samples is to the true distribution. In the case of KID (with a universal kernel function, e.g. Gaussian kernel), the distance is zero if and only if the reference and generative model's distributions are the same.\", \"To address a potential confusion, note that there are two notions that are termed \\\"quality\\\": the average quality of samples as measured by Precision and Density, and the quality of the whole generative model (i.e., how well the model approximates the true distribution) as measured by FID and KID. The average quality of samples is only one aspect of the quality of the whole model. Here, we are using the word \\\"quality\\\" to mean the average quality of samples (as in Sajjadi et al, 2018), and use \\\"goodness of distribution approximation\\\" to mean the quality of the whole model.\"]}", "{\"comment\": [\"We thank Reviewer mqps for the thoughtful feedback on our response. We are glad to hear that our previous response addressed several of the reviewer's questions. Regarding the reviewer's remaining comments, we would like to raise the following:\", \"We respectfully disagree with the assertion that \\\"using a mixture of generative models that normally do not complement each other probably does not give a better approximation of the target distribution\\\". For example, Figure 2 in the updated paper (was Figure 1 in the old version) shows that for models trained on the FFHQ and LSUN-Bedroom datasets, our mixture model (Mixture-UCB-CAB and OGD) offers a better approximation than each individual model (One-Arm Oracle is the best individual model), where the goodness of approximation is measured by KID. Tables 1 and 2 in the updated paper also suggest that our mixture model can offer an improvement in terms of the FID score. Also, note that quality scores such as Density and Precision are not measures of goodness of distribution approximation themselves, as they are merely measuring the average quality of samples (one cannot conclude that two distributions are similar merely because their expected values are similar). A measure of goodness of distribution approximation (e.g., FID and KID) has to take diversity into account to capture any mismatch between the higher-order moments (e.g. the covariance matrix) of the distributions.\", \"We are not advocating that we should favor diversity over quality, as we believe both of them are important. Standard scores such as FID and KID naturally take both quality and diversity into account. In the numerical results in Tables 1, 2 and Figure 2 in the updated paper, we observe that the FID and KID scores of the proposed mixture model improve upon the scores of each individual generative model. This shows that the proposed mixture model not only improves diversity, but is also generally favorable in terms of minimizing FID and KID.\", \"The reason why this paper emphasizes on diversity is because this is the main improvement provided by the proposed mixture model. As demonstrated by our experimental results (Tables 1, 2 and Figure 2 in the updated paper), the improvement in diversity is so significant that the standard scores (FID and KID, which take both quality and diversity into account) also improve. Our method is suitable not only to users who favor diversity, but also to users who value both diversity and quality. (Admittedly, the mixture model does not offer improvement to users who only value quality, but this position is uncommon, since only valuing quality would mean that the users prefer a mode-collapsed model that always produces the same high-quality sample.)\", \"Our Mixture-UCB algorithms are designed with both quality and diversity in mind. The loss function in equation (4) includes both a linear quality term $\\\\mathbb{E}\\\\bigl[f(X)\\\\bigr]$ and a quadratic diversity term $\\\\mathbb{E}\\\\bigl[\\\\kappa(X,X')\\\\bigr]$ (the linear term and the quadratic term indeed correspond to quality and diversity for KID in equation (3)). Our method is flexible, and the relative weights of quality and diversity can be adjusted by the user. In the extreme scenario where the user assigns zero weight to the diversity term (i.e, the user only considers the quality factor), the Mixture-UCB algorithm will reduce to applying standard UCB to optimize the quality score and, as guaranteed by Theorem 2, will converge to the generative model with the maximum quality score. However, this scenario would be uncommon in practice as a typical user will likely consider both diversity and quality factors.\"]}", "{\"comment\": \"I'd like to thank the authors for their responses. I'll keep my current rating.\"}", "{\"summary\": \"This paper aims to solve the online selection task over a group of well-trained generation models. It explores the selection of a mixture of multiple generative models and formulate a quadratic optimization problem to optimize the kernel-based evaluation scores including kernel inception distance (KID) and Renyi kernel entropy (RKE). Specifically, it proposes an online learning approach called Mixture Upper Confidence Bound (Mixture-UCB). Theoretically, regret analysis is provided for one method (Mixture-UCB-CAB). Experimental results illustrate the effectiveness of the proposed method for text-based and image-based generative models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Overall, this paper is well-written and easy to follow.\\n2. The proposed method (Mixture-UCB) is somehow novel, although it is inspired by classical UCB in the multi-armed bandit setting.\\n3. Theoretical results about the regret bound are provided for the proposed Mixture-UCB-CAB. The proof seems right although I have not checked the proof line-by-line.\\n4. Empirical results illustrate the effectiveness of the proposed method in finding the optimal mixture of text-based and image-based generative models.\", \"weaknesses\": \"1. I am afraid that the online selection of well-trained generative models might have few applications because it is already costly for the (large) generative model inference, then why do we need online selection rather than batch selection? Discussions about practical applications can be added.\\n2. Experimental results show that Mixture-UCB-OGD might be better than Mixture-UCB-CAB. However, theoretical guarantees about Mixture-UCB-OGD are missing. I know it might be more challenging and more detailed discussions can be added to clarify why.\", \"questions\": \"In practice, FID metric is widely-used in the evaluation of generative models. Can this paper cover this metric and why?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the reviewers for their clear reply, which adequately addressed several of my questions. As reflected in my initial review, I appreciate the presentation quality and the extensive experimentation of this paper. My biggest concern is about the interest of improving diversity by selecting a mixture of generative models, which is not entirely resolved by the authors' reply.\\n\\nFirst what I meant by \\\"trained in an independent manner\\\" is not \\\"trained on independent samples\\\", but *trained by independent optimisation formulations as opposed to a joint optimisation*. My reasoning is that if the target distribution is a Gaussian mixture model (GMM), combining single models that generate from **different** Gaussian components in the target GMM would make perfect sense, however it is unlikely that single generative models happen to match different components unless they are trained through a joint optimization with a penalty that encourage them to learn different Gaussian components.\", \"this_is_why_i_raised_the_question_about_the_quality_score\": \"using a mixture of generative models that normally do not complement each other probably does not give a better approximation of the target distribution. As confirmed by the authors, using a mixture of generative models has little benefit in improving the quality. In contrary, it can lead to a degraded quality compared to the best single model.\\n\\nThen I think the value of this contribution depends crucially on whether it has practical interest to favor diversity over quality when evaluating generative models. Unless the authors could provide stronger arguments on this point, my current stand on this paper is towards rejection.\"}", "{\"comment\": [\"I thank the authors for their reply, and would like to raise some follow-up questions.\", \"In Figure 2 of the revision, we can see that using mixture of models is particularly effective on FFHQ truncated generators that \\\"generate diversity-controlled images centered on eight randomly selected points\\\". I think this supports my point that \\\"generative models normally do not complement each other\\\" unless they were trained in a joint manner such as the FFHQ truncated generators in Figure 2. To me, this is an important point to be discussed in the paper as a potential limitation of the proposed mixture approach. Also could the authors explain why they think Density and Precision only compare the expected values between the generative model and the target distribution? And if the authors believe that Density and Precision are \\\"not measures of goodness of distribution approximation themselves\\\", why not use other metrics to access the quality of generated data?\", \"I do find the improvement in FID and KID very interesting, which is the main reason of my positive rating in the initial review.\", \"My concern is not exactly about the focus on the improvement of diversity, but about the improvement of diversity (potentially) at the cost of quality. This is why I particularly asked about the quality scores.\", \"I agree that the linear quality term $\\\\mathbb{E}\\\\bigl[f(X)\\\\bigr]$ allows a control on the quality, however it only compares the expected values, which, as pointed out by the authors, can be insufficient. Moreover, this optimization formulation does not automatically guarantee that mixtures of generative models give better values on the linear quality term $\\\\mathbb{E}\\\\bigl[f(X)\\\\bigr]$ than single models, and the paper does not seem to provide empirical evidence on this point.\"]}", "{\"summary\": \"This paper study online selection for generative models, in order to generate diverse samples. The authors formulated the problem as a mixture multi armed bandit problem and developed two algorithms for that: Mixture-UCB-CAB and Mixture-UCB-OGD. The authors developed theoretical guarantees for the Mixture-UCB-CAB algorithm. The authors conduct many experiments to show the efficacy of their developed methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"It's interesting to see the authors formulated the generative model selection problem as an online selection problem. The authors also developed two algorithms for this new setting and provide theoretical guarantees for one of them. Experimental results demonstrate the efficacy of the proposed algorithms.\", \"weaknesses\": \"1. Since this is a new problem, can authors provide more motivations for online selection of generative models, e.g., how important is the ability to generate diverse samples? And how important is to save samples in the selection process.\\n2. The authors provide a convergence guarantee for Mixture-UCB-CAB in Thm 2. For comparison, what is the rate of convergence for the offline approach that randomly generate $T$ samples and then optimize over $\\\\alpha$?\\n3. Does Thm 1 holds for all $\\\\alpha$? Also, the guarantee in Thm 2 doesn't suffer the curse of dimensionality even if the algorithm is selection $\\\\alpha \\\\in R^m$; can authors explain why does that happen?\\n4. Compared to standard bandit problem where one gets an intermediate regret term at each round, it seems that the studied problems gets $O(t)$ (averaged) terms (the first Eq in Section 5), and all these terms are related to the previous selections $x_1, \\\\cdots, x_{t-1}$. Can authors elaborate how do they deal with these terms in the analysis? What are some technical contributions?\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The main goal of this work is to maximize the diversity of generated samples by selecting not a single but a mixture of generative models. Formulating first a population loss of quadratic form that can translate into evaluation scores including kernel inception distance (KID) and Renyi kernel entropy (RKE), this article proposes two online algorithms based on continuum-armed bandit and gradient descent, to find the optimal mixture through minimizing an upper confidence bound of the quadratic population loss. Experiments show that the proposed algorithms are efficient at approaching the optimal mixture.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"This paper is well written and easy to follow.\", \"The theoretical framework underlying the proposed algorithms is well grounded.\", \"Extensive experiments were carried out to demonstrate the performance of the proposed algorithms.\"], \"weaknesses\": [\"According to the literature review of this article, there seems to be little interest in finding a good mixture of different generative models. Indeed, if the goal is to approach the target distribution, it makes more sense to select the single best generative model than to use a mixture of different generative models, which are usually trained in an independent manner, therefore unlikely to complement each other.\", \"It is true that when the objective is to find the single best generative model, the online approach can help prevent sampling from suboptimal models. However, as using a mixture of generative models requires sampling from all member models, the online approach seems to be less useful in this setting.\"], \"questions\": \"* Can the authors find some other works that also aim to find good mixtures of generative models, and compare their method to these works?\\n\\n* Can the authors provide the quality scores Density (Naeem et al., 2020). and Precision (Kynkaanniemi et al., 2019) in the experiments that they conducted?\\n\\n* Small question regarding Lines 257&259: is $\\\\hat{L}(\\\\mathbf{a};\\\\mathbf{x}^{(t)})-(\\\\mathbf{\\\\epsilon}^{(t)})^{\\\\rm T}\\\\mathbf{a})$ a lower or upper bound of $L(\\\\mathbf{a})$?\\n\\n\\n\\n=======================================================================================================\\n\\n**Update after rebuttal**\\n\\nI would like to apologize for an important error that I made in the example presented in my last comment to the authors. I was so rushed to post this last comment before the discussion deadline to give the authors a chance to respond that I did not get to double check my math. As there is no other means for me to reach out to the authors, I have to rectify this error for the authors' information by editing my official review (at the suggestion of AC).\\n\\nThe error, which I realized after the deadline, came from the application of the FID formula. Under the assumption of Gaussian distributions, the FID is measured by \\n$$d_{\\\\rm F}\\\\left(P_r,P_g\\\\right)=\\\\Vert\\\\mu_r-\\\\mu_g\\\\Vert^2+{\\\\rm tr}\\\\left(\\\\Sigma_r+\\\\Sigma_g-2(\\\\Sigma_r\\\\Sigma_g)^{\\\\frac{1}{2}}\\\\right)$$\\nwhere $\\\\mu_r,\\\\mu_g$ and $\\\\Sigma_r,\\\\Sigma_g$ are respectively means and covariances of $P_r,P_g$.\\n\\nAs in my example, the covariances of the real distribution $\\\\mathcal{N}(\\\\mu_r,I)$ and the two generative models $\\\\mathcal{N}(\\\\mu_g,I),\\\\mathcal{N}(\\\\mu_{\\\\tilde{g}},I)$ are identity matrices, the FIDs are simply distances between the means:\\n$$d_{\\\\rm F}\\\\left(\\\\mathcal{N}(\\\\mu_r,I),\\\\mathcal{N}(\\\\mu_g,I)\\\\right)=\\\\Vert\\\\mu_r-\\\\mu_g\\\\Vert^2=\\\\Vert\\\\epsilon_g\\\\Vert^2,$$\\n$$d_{\\\\rm F}\\\\left(\\\\mathcal{N}(\\\\mu_r,I),\\\\mathcal{N}(\\\\mu_{\\\\tilde{g}},I)\\\\right)=\\\\Vert\\\\mu_r-\\\\mu_{\\\\tilde{g}}\\\\Vert^2=\\\\Vert\\\\epsilon_{\\\\tilde{g}}\\\\Vert^2.$$\\n\\nThe error occurred when I calculated the FID between $\\\\mathcal{N}(\\\\mu_r,I)$ and a mixture $\\\\mathcal{M}=\\\\alpha\\\\mathcal{N}(\\\\mu_g,I)+\\\\tilde{\\\\alpha}\\\\mathcal{N}(\\\\mu_{\\\\tilde{g}},I)$ of the two generative models. While the mean of $\\\\mathcal{M}$ is indeed $\\\\alpha\\\\mu_g+\\\\tilde{\\\\alpha}\\\\mu_{\\\\tilde{g}}$ as I said in my comment to the authors, the covariance, however, should be $I+\\\\alpha\\\\tilde{\\\\alpha}(\\\\mu_g-\\\\mu_{\\\\tilde{g}})(\\\\mu_g-\\\\mu_{\\\\tilde{g}})^{\\\\rm T}$, **not** $I$. \\n\\nTherefore, $d_{\\\\rm F}\\\\left(\\\\mathcal{N}(\\\\mu_r,I),\\\\mathcal{M}\\\\right)\\\\approx\\\\Vert\\\\mu_r-\\\\alpha\\\\mu_g-\\\\tilde{\\\\alpha}\\\\mu_{\\\\tilde{g}}\\\\Vert^2+\\\\alpha\\\\tilde{\\\\alpha}\\\\Vert\\\\mu_g-\\\\mu_{\\\\tilde{g}}\\\\Vert^2$ for large $\\\\Vert\\\\mu_g-\\\\mu_{\\\\tilde{g}}\\\\Vert^2$, \\nleading to\\n$$d_{\\\\rm F}\\\\left(\\\\mathcal{N}(\\\\mu_r,I),\\\\mathcal{M}\\\\right)\\\\approx\\\\Vert\\\\alpha\\\\epsilon_g+\\\\tilde{\\\\alpha}\\\\epsilon_{\\\\tilde{g}}\\\\Vert^2+\\\\alpha\\\\tilde{\\\\alpha}\\\\Vert\\\\epsilon_g-\\\\epsilon_{\\\\tilde{g}}\\\\Vert^2.$$Then we have, for independent $\\\\epsilon_g,\\\\epsilon_{\\\\tilde{g}}$, that\\n$$E\\\\\\\\{d_{\\\\rm F}\\\\left(\\\\mathcal{N}(\\\\mu_r,I),\\\\mathcal{M}\\\\right)\\\\\\\\}\\\\approx\\\\alpha^2E\\\\\\\\{\\\\Vert\\\\epsilon_g\\\\Vert^2\\\\\\\\}+\\\\tilde{\\\\alpha}^2E\\\\\\\\{\\\\Vert\\\\epsilon_{\\\\tilde{g}}\\\\Vert^2\\\\\\\\}+\\\\alpha\\\\tilde{\\\\alpha}E\\\\\\\\{\\\\Vert\\\\epsilon_g\\\\Vert^2\\\\\\\\}+\\\\alpha\\\\tilde{\\\\alpha}E\\\\\\\\{\\\\Vert\\\\epsilon_{\\\\tilde{g}}\\\\Vert^2\\\\\\\\}=\\\\alpha E\\\\\\\\{\\\\Vert\\\\epsilon_g\\\\Vert^2\\\\\\\\}+\\\\tilde{\\\\alpha}E\\\\\\\\{\\\\Vert\\\\epsilon_{\\\\tilde{g}}\\\\Vert^2\\\\\\\\},$$\\nwhich is always minimized by taking the single model with the smallest error.\\n\\nThe intuitive reasoning behind this example is that a mixture of Gaussians is unlikely to match better the target Gaussian distribution than (the best) single Gaussians. The lack of discussion on the limitations and the risks of the mixture approach is my main criticism of this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank Reviewer x3ew for his/her thoughtful feedback. We are glad to hear that Reviewer x3ew finds our paper \\\"well-written\\\" and \\\"well-structured\\\". Please find our responses below.\\n\\n**1-Theoretical guarantee for OGD.**\\n\\nProving a theoretical guarantee for Mixture-UCB-OGD would be more challenging compared to Mixture-UCB-CAB. A theoretical guarantee for Mixture-UCB-OGD would require analyzing a complicated dynamical system, where the state is the proportion vector and the matrix $\\\\hat{\\\\mathbf{K}}$. Therefore, we only give a theoretical guarantee for Mixture-UCB-CAB in this paper, and instead rely on experimental results to show the performance of Mixture-UCB-OGD. Theoretical guarantees for Mixture-UCB-OGD are left for future studies.\\n\\n\\n**2-Coherence of generated samples.**\\n\\nWhile Reviewer x3ew has raised an interesting point of distinguishing diversity of styles (e.g., realistic and unrealistic, where some may regard as a lack of coherence in style) and diversity of the depicted objects (e.g., cars of different sizes), the choice of which kind of diversity to prefer seems to be rather subjective. A user who prefers diversity in terms of coloring styles (as mentioned by the reviewer) may also prefer diversity in terms of realistic/unrealistic styles. \\n\\nIn our method, the quantification of diversity using RKE and KID scores depends on the choice of embedding model used in the evaluation process. In our experiments, we adopted the standard DINOv2 model to embed image samples, following the recommendation of Stein et al. in [1]. The results indicate that DINOv2 embeddings recognize cartoonish styles as a form of diversity. We note that the proposed method is adaptable and can be applied with alternative embedding models or custom features that represent the aspects of diversity most relevant to the user\\u2019s goals. Using a different embedding model would shift the emphasis in diversity quantification to the styles and features prioritized by the new embedding, potentially aligning more closely with the user's preferences.\\n \\n\\n**3-About memory efficiency.**\\n\\nTo run our algorithm locally with no access to external computing resources, we indeed require loading all the generative models. However, the strength of our algorithm is more apparent in a remote setting, where the user sends requests to several online services that host the generative models. Our algorithm only requires black-box access to the generative models, and hence it can be applied on all the popular commercial generative models. In the remote setting, the memory usage is much smaller since the user only needs to store the generated samples, not the whole models.\\n\\nThe reviewer's suggestion of distilling a mixture of large models into a single small model is very interesting. Nevertheless, we note that it is rather different from the black-box approach in our paper. Distilling large models using black-box access likely requires a significantly large number of samples. If we require white-box access to the models (e.g., the weights of the neural networks), this can limit the application of the method, since it may be difficult to combine models with significantly different architectures in this manner, and white-box access is unavailable for proprietary models. Also, a small distilled model may require more computational resources than the aforementioned remote setting.\\n\\nIn sum, the black-box approach of our method has two major advantages: applicability to general (open-source and proprietary) generative models with different architectures, and applicability to the remote setting suitable for devices with limited computational resources. We have added some discussions related to this point to the updated introduction.\\n\\n\\n**4-Typical scenarios where the mixture models fail to improve diversity or quality.**\\n \\nThe effectiveness of using a mixture of generative models depends on their probability distributions. When all the models represent the same distribution, the potential benefit of combining them is limited. However, if the models generate samples from different distributions, their mixture can lead to an improvement in diversity scores. This is particularly relevant for state-of-the-art prompt-based generative models, where variations in architecture and training data often result in different probability distributions across the trained models. As shown in Figure 1 of the revised introduction, three standard text-to-image models produce visually different samples for the prompt \\u201cgreen giraffe\\u201d in a \\u201ccartoon style.\\u201d In such cases, combining these generative models improves the diversity scores. We believe this scenario is common in many applications of prompt-driven generative AI.\\n\\n[1] Stein et al., \\u201cExposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models\\u201d, NeurIPS 2023\"}" ] }
2Cg4YrsCMA
Data-Centric Human Preference Optimization with Rationales
[ "Hoang Anh Just", "Ming Jin", "Anit Kumar Sahu", "HUY PHAN", "Ruoxi Jia" ]
Reinforcement learning from human feedback plays a crucial role in aligning language models towards human preferences, traditionally represented through comparisons between pairs or sets of responses within a given context. While many studies have enhanced algorithmic techniques to optimize learning from such data, this work shifts focus to improving preference learning through a data-centric approach. Specifically, we propose enriching existing preference datasets with machine-generated rationales that explain the reasons behind choices. We develop a simple and principled framework to augment current preference learning methods with rationale information. Our comprehensive analysis highlights how rationales enhance learning efficiency. Extensive experiments reveal that rationale-enriched preference learning offers multiple advantages: it improves annotation efficiency, accelerates convergence to higher-performing models, and reduces verbosity bias and hallucination. Furthermore, this framework is versatile enough to integrate with various preference optimization algorithms. Overall, our findings highlight the potential of re-imagining data design for preference learning, demonstrating that even freely available machine-generated rationales can significantly boost performance across multiple dimensions.
[ "dpo", "preference learning", "alignment" ]
Reject
https://openreview.net/pdf?id=2Cg4YrsCMA
https://openreview.net/forum?id=2Cg4YrsCMA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zVAefsEym8", "yjvTWbPdPn", "upg4N60NDg", "rl2lletSKd", "qX5Gj6GNJO", "q5786YidGJ", "nR8COd1P9t", "kvN7Fh9cUT", "khkRrfaXW7", "gD81Guo5M1", "dC4F9ZqYcB", "d2bgrQGYll", "cO5FpuXJQs", "baJGFHdPek", "a91Y8sDavD", "X0P3EyFoZg", "VSJYXMfzlm", "UtgsbFJg7P", "R2MVVTUI2s", "PH9vwkQlbI", "OzExb34qgf", "MdtmIS0t4x", "McSUqzebLc", "LvSadHVR3f", "KrMouYAdEx", "J2XLGQbRHp", "HCfAdx7pPb", "DBD67qLQ8H", "A0ldu1YSPO", "8UmefLTuTd", "8R26EK5S2J", "73CHyAJC4E", "5LYsLnuDo3", "5Jdf6xMGYT", "4idSommczR", "0yg67xrytB", "0YOWJ5l4ur" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732261582335, 1733014407358, 1733163454684, 1732527571179, 1732261778684, 1732262822768, 1732527085601, 1733014133722, 1732657721702, 1730603967000, 1732769338414, 1732527265920, 1733163526955, 1732262517287, 1732261302284, 1732641724899, 1732261723754, 1732769716485, 1732261231289, 1732262573144, 1732527165816, 1733163505591, 1732720118395, 1732657762661, 1733014791259, 1732547281683, 1732262736989, 1737523917503, 1730822072103, 1734920853788, 1732261257367, 1729679378934, 1730569533311, 1732262339305, 1733014568757, 1732261450487, 1732261363984 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Reviewer_5bwS" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Reviewer_jgVq" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Reviewer_ZabL" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Reviewer_EsKw" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8553/Reviewer_ZabL" ], [ "ICLR.cc/2025/Conference/Submission8553/Area_Chair_TyFx" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Reviewer_EsKw" ], [ "ICLR.cc/2025/Conference/Submission8553/Reviewer_jgVq" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ], [ "ICLR.cc/2025/Conference/Submission8553/Authors" ] ], "structured_content_str": [ "{\"title\": \"Experimental Setup is Weak\", \"comment\": \"> Explanation of Figures 2 and 3\\n\\nIn Figure 2, we compare DPO and RDPO against the SFT model, but both DPO and RDPO involve varying data sizes. As the reviewer pointed out, DPO outperforms RDPO compared to SFT at 1K data points. In Figure 3, we fixed the DPO model at 12K data points for Ultrafeedback and varied the RDPO training data size, which may have contributed to the potential confusion. We apologize for this oversight. For the plots, we conducted the winrate evaluation three times and observed consistent scores with negligible standard deviation. We apologize for any misunderstanding.\\n\\n> Why focus on a small dataset size?\\n\\nWhile training on larger datasets is possible, our primary goal is to enhance data quality for preference learning by augmenting the dataset, as an alternative to simply increasing annotated data. We demonstrate data efficiency by reducing the number of annotated pairs needed\\u2014by 2 to 4 times compared to DPO\\u2014while achieving similar performance. Although increasing annotated data could potentially improve performance, this is not guaranteed. Many popular pairwise preference datasets are similar in scale to ours, such as the OpenHermes2.5-DPO dataset (https://huggingface.co/datasets/argilla/OpenHermes2.5-dpo-binarized-alpha), the multi-turn preference dataset Capybara Preferences (https://huggingface.co/datasets/argilla/Capybara-Preferences-Filtered), and the code preference dataset CodeUltraFeedback (https://huggingface.co/datasets/coseal/CodeUltraFeedback_binarized).\\nRegarding the Ultrafeedback dataset (original version), it appears that DPO does not significantly improve performance over SFT, although DPO still achieves a majority winrate against the SFT model. However, for the ORCA dataset, we observe a performance increase with the DPO model, as shown in Figure 2.\\n\\n> Choices of DPO model.\\n\\nWe selected the model that had converged on the winrate later in training, as it had been exposed to more preference data.\\n\\n> RewardBench/AlpacaEval Benchmarks\\n\\nWe evaluated our results on the AlpacalEval 2.0 dataset as a benchmark. In our experiments, we focused on the direct winrates of DPO and RDPO against their own responses to assess performance, rather than comparing against a third-party model response (GPT-4 in the case of AlpacalEval). This approach allowed us to directly observe the impact of adding rationales to the training, providing a clearer comparison with models trained without them.\\n\\n> Poor quality rationales\\n\\nWhen RDPO is trained with low-quality rationales, we observe a degradation in performance, with standard DPO (without rationales) performing better. To improve the quality of the rationales, we recommend using stronger models for generation (in our case, Llama-3-8B-Instruct is sufficient, though more powerful models are emerging with ongoing developments). Additionally, after generating the rationales, we suggest verifying their correctness by leveraging a mixture of verifiers for robust assessment. For high-stakes domains, we envision having experts supervise the rationale generation process to ensure correctness and maximize the informativeness of the rationales. Additionally, we can leverage multiple models to improve the quality of the rationale, which can boost the valuable cues for the model to learn.\\n\\n> RDPO performance in Figure 5\\n\\nIn Figure 5, we examine the effect of the source model on the generation of rationales and their impact on performance. We observe that RDPO still achieves a higher winrate over DPO when using Llama-3-8B-Instruct. However, when the rationales are generated using the weaker source model, Phi3-Mini-4K, the winrate gap narrows. This suggests that for the Llama-3-8B-Instruct model, the rationale should be generated by a stronger model than Phi3-Mini-4K, highlighting the importance of the source model. This finding aligns with our theoretical result, which indicates that more informative rationales lead to improved preference prediction.\"}", "{\"title\": \"Friendly Reminder\", \"comment\": \"Dear Reviewer jgVq,\\n\\nThank you for your feedback! We have addressed your concerns and would greatly appreciate any additional feedback you may have. If there are further suggestions to improve our work, we would be happy to address them.\\n\\nWith Appreciation,\\\\\\nPaper8553 Authors\"}", "{\"title\": \"Discussion Period Ends Today\", \"comment\": \"Dear Reviewer 5bwS,\\n\\nAs the discussion period wraps up today, we want to emphasize how much we value your feedback. Having addressed your questions, we now kindly ask for your insights.\\nWe sincerely appreciate you taking the time to share your thoughts.\\n\\nWarm Regards,\\\\\\nPaper8553 Authors\"}", "{\"title\": \"Rebuttal period ending soon\", \"comment\": \"Dear Reviewer ZabL,\\n\\nWe want to once again thank you for your helpful comments. We have addressed your questions. Please let us know if you have any more questions, we would be happy to address them before the rebuttal period ends.\\n\\nKind Regards,\\nPaper8553 Authors\"}", "{\"title\": \"We appreciate your review\", \"comment\": \"Dear Reviewer jgVq,\\n\\nWe want to thank you for your helpful feedback, which led to valuable discussion about our work. We have carefully addressed your questions and welcome any additional feedback. Please let us know if you have any further questions, we will be happy to address them.\\n\\nKind Regards, \\\\\\n Paper8553 Authors\"}", "{\"title\": \"Answers to Questions\", \"comment\": \"> **Runtime and average length**\\n\\nWe appreciate the reviewer's suggestion.\\nWe report the runtime for RDPO and DPO for one epoch on Llama-3.1-8B-Instruct, using 12,000 Orca examples, as follows:\\n\\n+ RDPO General: 6770 seconds\\n+ RDPO Specific: 6950 seconds\\n+ DPO: 3583 seconds\\n\\nWhile processing additional tokens nearly doubles the runtime, RDPO compensates for this by requiring fewer annotations while achieving comparable or superior performance to DPO.\\nAdditionally, we report the average response lengths for the Orca dataset:\\n\\n+ Chosen responses: 786 \\n+ Rejected responses: 981 \\n+ Rationale responses: 411 \\n\\n> **Winrate computation**\\n\\nTo compute the winrates against DPO and SFT, we generated responses from each model for 512 fixed test samples from a given dataset. For each comparison (RDPO vs. DPO and RDPO vs. SFT), we used an LLM judge to select the better response. If the judge could not determine a preferred response, the comparison was marked as a draw. The win rate is calculated as the number of RDPO responses preferred, divided by the total of 512 samples.\"}", "{\"title\": \"Rebuttal period ending\\u2013we anticipate your feedback!\", \"comment\": \"Dear Reviewer EsKw,\\n\\nWe want to thank You for Your helpful comments, which led to a number of interesting discussions. We have responded to each of Your concerns and questions. Hopefully, You will find that they adequately address Your concerns. Before the rebuttal phase is over, please let us know if You have any more questions or need any clarification. We would be happy to address them.\\n\\nBest Wishes,\\\\\\nPaper8553 Authors\"}", "{\"title\": \"Thank you!\", \"comment\": \"We would like to thank you for your positive assessment!\\n\\nKind Regards, \\\\\\nPaper8553 Authors\"}", "{\"title\": \"We thank you for your valuable comment (1/2).\", \"comment\": \"**We appreciate the reviewer\\u2019s emphasis on the importance of experimental baselines.** However, it is crucial to clarify that our method and the referenced works ([1, 2, 3]) are **not directly comparable** as they **address different problems in preference learning.** The **referenced methods focus on generating synthetic preference datasets.** In contrast, **our work tackles a distinct challenge: enhancing existing preference datasets with rationales to improve how models learn from human preferences.** While both approaches contribute to preference learning, they **serve different stages of the pipeline and target different use scenarios** - data generation creates new annotated preference pairs, while our method enriches existing preference annotations with explanatory depth. This fundamental difference in objectives and application scenarios means direct comparisons between these approaches would not effectively evaluate our specific contribution.\\n\\n**Furthermore, including the referenced methods as baselines could create a misleading impression of direct competition, overlooking their complementary potential.** For instance, preference data generation and rationale augmentation address separate challenges within the preference learning pipeline and can coexist synergistically. \\n\\nGiven these differences, we believe our evaluation strategy of focusing on how adding rationales can enhance existing preference learning frameworks better aligns with our core contribution. **To address the reviewer\\u2019s concerns, we will revise the paper to further clarify the unique roles of data generation and rationale augmentation in preference learning. This discussion will also elaborate on the specific challenges associated with self-synthetic data generation and why our approach represents a distinct and complementary contribution to the field.**\"}", "{\"summary\": \"This paper investigates if incorporating rationales along with binary preference data can help improve alignment preference. To this end the authors propose rationale-DPO, an extension to the popular alignment method DPO. They compare the two algorithms with on different datasets (orca, ultrafeedback) and for different models (Mistral-7B-v0.1, Zephyr-7B-Beta etc.). The authors also propose a simplified information theoretic analysis to better understand rationale based preference modeling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This study is well motivated and adds to the literature of incorporating richer feedback (additional of rationales in this case) to do more efficient RLHF alignment.\", \"weaknesses\": \"Weakness:\\n1. While the problem is well motivated, the methodology to maximize the likelihood of generating the given preference and the rational is a very intuitive and simple method and is not significantly novel.\\n2. Difficulty collecting data: procuring rationales can be more expensive as compared to getting just binary feedback. In addition, for certain cases like when you are comparing artwork it might not be possible to humans to explain their choice. While using LLMs to generate rationales is an efficient way of scaling the method, there is a risk of getting a misaligned response if that model is misaligned (for ex. not harmless) and it may also lead to an echo chamber as no new perspective beyond what the rationale generating LLM believes is true will be in the dataset. How do you envision addressing these challenges?\\n3. In Figure 2, it seems that DPO win rate is only lagging behind RDPO by ~5-8% for the same amount of data points, however, RDPO requires a lot more text for a single data point.\", \"questions\": \"1. Instead of plotting data points w.r.t performance metrics, it will be worthwhile to plot the total number of text tokens used for training w.r.t the performance metrics. For example, if the rationale itself is quite longer than the original texts for comparison it can contain a lot more information which might explain the improvement in performance. Additionally, it is also worthwhile to report the average training time for the both procedures.\\n\\n2. For the vs DPO and vs SFT section, can you please provide the exact formula you used to compute the win rates? Are there any tie-breaking rules?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your feedback! We appreciate your suggestion!\", \"comment\": \"We sincerely appreciate the reviewer\\u2019s insightful feedback and fully agree on the importance of analyzing the contributions of the rationale SFT loss and the pairwise alignment loss (e.g., DPO) independently, as well as exploring their synergistic effects. To address this, we conducted a series of experiments to isolate the impact of each component and evaluate their combined effect. Specifically, we investigated an extreme case where the rationale loss alone drives preference optimization, with the DPO alignment loss set to zero. This approach was based on the hypothesis that rationales inherently encode preferences by combining preference-response pairs, the preferences themselves, and the associated reasoning processes, thereby providing a rich and effective training signal.\\n\\n| | RDPO (Preference + Rationale) | DPO (Preference-Only) | Rationale-Only |\\n|:---------|:-----------:|:--------------:|:-------------:|\\n| General | 64.5 | 59.1 | 61.8 |\\n| Detailed | 64.4 | 59.1 | 61.3 | \\n\\nFor these experiments, we fine-tuned Mistral-7B-Instruct-v0.2 on the Orca dataset across three settings: RDPO (combining DPO and rationale loss), DPO (excluding rationale loss), and Rationale-Only (excluding DPO loss). The results, as shown in the table above, reveal that rationales alone can substantially improve model performance, achieving a high win rate of over 61% without explicit pairwise preference modeling. This improvement likely stems from the informational richness embedded in rationales, which compensates for the absence of pairwise alignment.\\nWhile DPO also demonstrated a majority win rate against the SFT baseline (above 59\\\\%), training with both rationale and preference losses (RDPO) consistently achieved the highest win rate (64.5%) across both general and detailed settings. This highlights the benefit of integrating rationales into the preference objective, effectively leveraging the strengths of both losses to produce superior performance.\\n\\nTo further investigate how rationales enhance DPO preference learning, we examined the reward margin metrics. As shown in the table below, RDPO not only achieved higher reward margins between chosen and rejected responses but also demonstrated faster convergence compared to DPO. This can be explained through the following: while DPO explicitly aims to maximize reward margins, the inclusion of rationales provides an implicit quality signal, offering explanations for the differences between chosen and rejected responses. This signal reinforces the model's ability to improve reward margins by guiding it toward more informed preferences.\\n\\n|Training Points | 0 | 1000 | 2000 | 3000 | 4000 | 5000 | 6000 | 7000 | 8000 | 9000 | 10000 | 11000 |\\n|:-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| -------| \\n| DPO | 0.00 | 0.05 | 0.19 | 0.32 | 0.42 | 0.49 | 0.54 | 0.58 | 0.62 | 0.63 | 0.65 | 0.66 | \\n| RDPO | 0.00 | 0.1 | 0.25 | 0.46 | 0.67 | 0.76 | 0.83 | 0.85 | 0.86 | 0.87 | 0.89 | 0.91 | \\n\\n\\nThese findings underscore the complementary nature of the rationale SFT loss and the pairwise alignment loss. While DPO explicitly optimizes reward margins, the rationale prediction loss provides supplementary supervision, enabling the model to learn the reasoning underlying response preferences. This integration not only strengthens the selection process but also accelerates training convergence. By combining these two approaches, RDPO amplifies their individual strengths, resulting in more efficient and effective preference learning.\"}", "{\"title\": \"Rebuttal period closing soon\\u2013we anticipate your feedback!\", \"comment\": \"Dear Reviewer 5bwS,\\n\\nWe would like to thank You for Your comments and suggestions, we tried our best to address every question raised. We hope that our answers could resolve Your concerns. We are happy to address additional suggestions. Since the rebuttal period is closing soon, we would love to be able to respond any further questions.\\n\\nBest Wishes,\\\\\\nPaper8553 Authors\"}", "{\"title\": \"Discussion Period Ends Today\", \"comment\": \"Dear Reviewer EsKw,\\n\\nAs the discussion period wraps up today, we want to emphasize how much we value your feedback. Having addressed your questions, we now kindly ask for your insights. We sincerely appreciate you taking the time to share your thoughts.\\n\\nWarm Regards,\\nPaper8553 Authors\"}", "{\"title\": \"Experimentation lacks rigor and thoroughness\", \"comment\": \"> **Winrate against a fixed opponent**\\n\\nWe agree with the reviewer that using a fixed comparison would be easier to interpret than a relative one. To address this, we have used a fixed SFT model for comparison [Figure 2]. However, while RDPO and DPO achieving the same win rate against the fixed model may suggest similar performance, it does not necessarily imply that RDPO and DPO are equivalent. Therefore, we also conducted direct comparisons between the two models to obtain their head-to-head win rate [Figure 3]. Additionally, for some experiments, we evaluated the models using AlpacaEval 2.0 [Figures 9 and 10].\\n\\n> **DPO is not fully optimized**\\n\\nWe completely agree with the reviewer that DPO could potentially be further optimized with better hyperparameters. In our work, we used the hyperparameter settings commonly adopted for DPO and ORPO. Using the available dataset, we were able to achieve an improved LC Winrate on the AlpacaEval 2.0 for both models and datasets. The higher win rates observed in other models are likely due to their use of different datasets and preference learning objectives. For instance, both SimPO and SPPO use the Ultrafeedback dataset but generate responses based on scores from an external reward model. Therefore, we can only assess the benefit of rationales within the context of the given dataset and learning objectives we used, where we observed positive results.\\nAdditionally, the Ultrafeedback dataset, created by SimPO with assistance from PairRM, is highly optimized to score well on the AlpacaEval benchmark. However, a closer inspection reveals that some response pairs do not have clear preferences, such as:\", \"a\": \"\\\"According to the passage, Quinn would be described as a 'very good friend'.\\\"\\\\\", \"b\": \"\\\"According to the paragraph, Quinn would be described as a 'very good friend'. This is a direct reference to the phrase 'because he was a very good friend'.\\\"\\n\\nThese examples suggest that the method may rely on a reward model tailored to excel on a specific benchmark, rather than providing the model with a true understanding of human preferences.\", \"or\": \"\", \"dataset_links\": \"\\\\\", \"https\": \"//huggingface.co/datasets/princeton-nlp/mistral-instruct-ultrafeedback\\n\\n> **Produced Text Length**\\n\\nIn our experiments, we observe that RDPO generates shorter responses compared to DPO for the test sets of their respective datasets.\\nIn the AlpacaEval 2.0 benchmark, both RDPO and RORPO produce shorter responses on average than the original model, and in most cases, shorter than DPO and ORPO. However, as the reviewer noted, the difference is minimal in some instances. This could be attributed to the style of questions in the AlpacaEval 2.0 benchmark, which often require longer responses to address open-ended queries. In contrast, the datasets evaluated in the main paper include a mix of both closed-ended and open-ended questions.\"}", "{\"title\": \"We appreciate your review\", \"comment\": \"Dear Reviewer EsKw,\\n\\nWe want to thank you for your elaborate comments and suggestions, which led to valuable extension of our work. We have carefully addressed your concerns and welcome any additional feedback. Please let us know if you have any further questions, we will be happy to address them.\\n\\nKind Regards,\\\\\\n Paper8553 Authors\"}", "{\"title\": \"Thank you for the explanation\", \"comment\": \"I thank the authors for their detailed response. My concerns about the experiments have been addressed.\\n\\nHowever, I believe more effort should be devoted to examining the function of the two components: the pairwise alignment loss (e.g., DPO, IPO, etc.) and the rationale SFT loss. The core contribution of the paper lies in proposing the rationale SFT loss and demonstrating its benefit to alignment when used in combination with the pairwise alignment loss.\\n\\nIt is therefore important to evaluate how much the rationale SFT loss contributes on its own. As acknowledged by the authors and noted by other reviewers, the rationale SFT loss does not appear to be novel, at least in isolation. Nevertheless, the combination of these components is reasonable and valuable. For this paper to establish its position in the literature, a deeper examination of the interaction between the DPO loss and the rationale loss is necessary.\"}", "{\"title\": \"Questions\", \"comment\": \"> How are draws measured in Figure 2?\\n\\nWe use the LLM as a judge to determine which response is preferred between models. However, in cases where the judge cannot decide on a preferred response, we consider it a draw. Additionally, we shuffle the responses to prevent any ordering bias from influencing the judge\\u2019s decision.\\n\\n> Clarification of the drawing rate\\n\\nWe apologize for the confusing phrasing of our claim. In the experiment, we observed that as the data size increases, the RDPO winrate improves relative to DPO, while the draw rate remains constant. This indicates that the increase in RDPO's winrate over DPO is due to converting previously losing cases into winning ones. We hope this clarifies our claim.\\n\\n> Typo\\n\\nThank you for catching that.\"}", "{\"title\": \"Thank you for your feedback! We have incorporated the suggested changes into the updated version of our paper.\", \"comment\": [\"We would like to thank the reviewers for their valuable feedback, which significantly helped us improve our work and better position it.\", \"In response, we have incorporated related work on synthetic preference data generation to clarify the positioning of our contributions and highlight the specific problems addressed by our study (a concern raised by Reviewer EsKw).\", \"We added an explanation regarding the performance of LLaMA 3.1 in Table 4 (Figure 5 in the original version), discussing potential reasons for its lower performance increase (a concern raised by Reviewer EsKw).\", \"To enhance the understanding of rationale contributions in preference learning objective, we have studied each component in isolation and how they contribute to preference learning (a concern raised by Reviewer jgVq).\", \"Additionally, we included a detailed cost analysis, covering both runtime and token generation costs (a concern raised by Reviewer ZabL and Reviewer 5bwS).\", \"Finally, we corrected typos throughout the manuscript.\", \"We appreciate your comments and we welcome further discussion until the discussion period is closing.\"]}", "{\"title\": \"We appreciate your review\", \"comment\": \"Dear Reviewer ZabL,\\n\\nWe want to thank you for your positive assessment. We appreciate the economic perspective on the utility of the dataset which aligns with our motivation to improve the effectiveness of data. We appreciate your suggestion on guiding the project owners with limited annotation budget to show the practicality of the method. Please let us know if you have further questions, we will be happy to address them.\\n\\nKind Regards, \\\\\\n Paper8553 Authors\"}", "{\"title\": \"We appreciate your review\", \"comment\": \"Dear Reviewer 5bwS,\\n\\nWe want to thank you for your helpful comments, which led to valuable discussion about our work. We have carefully addressed your concerns and welcome any additional feedback. Please let us know if you have any further questions, we will be happy to address them.\\n\\nKind Regards,\\\\\\n Paper8553 Authors\"}", "{\"title\": \"Rebuttal period ending soon\\u2013we anticipate your feedback!\", \"comment\": \"Dear Reviewer jgVq,\\n\\nWe want to thank You for Your helpful comments, which led to a number of important extensions of our work. We have addressed each of Your questions. Please let us know if You have any more questions, we would be happy to address them within our allowed period.\\n\\nBest Wishes,\\\\\\nPaper8553 Authors\"}", "{\"title\": \"Discussion Period Ends Today\", \"comment\": \"Dear Reviewer jgVq,\\n\\nAs the discussion period wraps up today, we want to emphasize how much we value your feedback. Having addressed your questions, we now kindly ask for your insights. We sincerely appreciate you taking the time to share your thoughts.\\n\\nWarm Regards,\\nPaper8553 Authors\"}", "{\"comment\": \"I thank the authors for their responses. Most of my concerns are addressed in meaningful details. Therefore, I remain my comment on this paper.\"}", "{\"title\": \"We thank you for your valuable comment (2/2).\", \"comment\": \"We thank the reviewer for their thoughtful attention to the Llama-3.1-8B-Instruct experiment presented in Figure 5. While this case exhibits a marginal performance gap, with a win rate of 52\\u201345% (normalized to 54\\u201346%), we respectfully emphasize that our method **consistently delivers significant improvements across multiple tested settings.** Specifically:\\n\\n+ **Across datasets:** Mistral-7B-Instruct-v0.2 trained with rationale-enhanced preference optimization surpasses its counterpart trained without rationales on both the Orca and Ultrafeedback datasets, achieving win rates exceeding 60%.\\n+ **Across models:** RDPO consistently outperforms DPO on Mistral-7B-Instruct-v0.1, Mistral-7B-Instruct-v0.2, and Zephyr-beta models, achieving win rates consistently above 55%.\\n+ **Across methods:** Both RDPO and RORPO demonstrate improvements over DPO and ORPO, respectively, as shown in evaluations on Mistral-7B-Instruct-v0.2 and Llama-3.1-8B-Instruct using the AlpacaEval 2.0 benchmark.\\n\\n\\nThese findings reinforce **RDPO\\u2019s value as a broadly applicable preference-learning framework that robustly improves existing approaches**, such as DPO and ORPO, rather than being tailored to a specific model or dataset.\\nRegarding the Llama-3.1-8B-Instruct experiment, the modest improvements can be attributed to two key factors. First, the inherent capability of Llama-3.1-8B-Instruct surpasses Mistral-7B-Instruct-v0.2, making substantial gains more challenging to achieve on this stronger baseline. Second, the general preference datasets like Orca and Ultrafeedback, which include pre-existing responses, may not be fully optimized for Llama-3.1-8B-Instruct. For example, prior works [1-5] **generate new response pairs dynamically during training, producing synthetic datasets that differ substantially from our setting that improves pre-existing preference data in an offline manner.** While their online generation strategy may explain their larger improvements, **this observation reveals an exciting opportunity to extend our method**: creating preference pairs and incorporating rationales in an online manner. We appreciate the reviewer's insight in highlighting this direction and have added it to our discussion of future work.\\n\\nThis broad pattern of improvement aligns with established practices in machine learning research, where **techniques are evaluated based on their overall effectiveness across multiple scenarios rather than performance in any single setting. [6-9]** We have expanded our discussion of the Llama-3.1-8B-Instruct experiment in the paper to provide deeper analysis. However, **we respectfully suggest that this single case be viewed in the context of RDPO\\u2019s broader demonstrated effectiveness, which remains its key contribution to the field.** We believe this comprehensive evaluation provides strong evidence for our method's contribution to preference learning.\\n\\n[1] RLCD: Reinforcement Learning from Contrastive - Distillation for Language Model Alignment. Yang et al., 2024. \\\\\\n[2] West-of-N: Synthetic Preference Generation for Improved Reward Modeling. Pace et al., 2024. \\\\\\n[3] Self-Taught Evaluators. Wang et al., 2024. \\\\\\n[4] Meng, Yu, Mengzhou Xia, and Danqi Chen. \\\"Simpo: Simple preference optimization with a reference-free reward.\\\", 2024 \\\\\\n[5] Wu, Yue, et al. \\\"Self-play preference optimization for language model alignment.\\\", 2024 \\\\\\n[6] Ethayarajh, Kawin, et al. \\\"Kto: Model alignment as prospect theoretic optimization.\\\", 2024 \\\\\\n[7] Park, Ryan, et al. \\\"Disentangling length from quality in direct preference optimization.\\u201d, 2024 \\\\\\n[8] Pal, Arka, et al. \\\"Smaug: Fixing failure modes of preference optimisation with dpo-positive.\\\", 2024 \\\\\\n[9] Guo, Yiju, et al. \\\"Controllable preference optimization: Toward controllable multi-objective alignment.\\\", 2024\"}", "{\"title\": \"A friendly reminder\", \"comment\": \"Dear Reviewer 5bwS,\\n\\nThank you for your thoughtful feedback, which contributed to meaningful discussions about our work. We have thoroughly addressed your questions and updated our work to reflect the changes. With the deadline approaching, we welcome any additional input and are more than happy to address any further questions.\\n\\nBest Regards,\\\\\\nPaper8553 Authors\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your rebuttal. Unfortunately, I retain core concerns regarding experimental rigor and missing baselines.\\n\\nI disagree that because the methods mentioned in my review are designed for preference modeling, they do not constitute important experimental baselines. You state that \\u201cour primary goal is to enhance data quality for preference learning by augmenting the dataset, as an alternative to simply increasing annotated data\\u201d. These works share the same motivation; it is therefore important for the reader to know whether to invest in generating rationales for RDPO or in one of these alternative augmentation strategies.\\n\\nRegarding experiments, I still do not consider a 52% win rate to be evidence for the superiority of RDPO when trained on Llama-8B. This is a similar-sized, arguably better model than the Mistral 7B model on which most of your results are based. Since RDPO does not provide performance gains across different models, it is challenging to be convinced of its added value. I recommend authors investigate the reasons for this lack of performance. I also find it very surprising that you obtain confidence intervals of zero for evaluation results as noisy as preference judgments -- I think it would be worth discussing your uncertainty estimation framework in more detail.\"}", "{\"title\": \"Responses to Weaknesses\", \"comment\": \"> **The method is very intuitive and simple.**\\n\\nWe present a novel, data-centric approach to advancing the field of preference learning. Our method\\u2019s simplicity enables seamless integration with existing techniques, such as DPO, ORPO, and potentially SimPO or SPPO, as discussed in our paper. By incorporating rationales, we demonstrate how models can more effectively and efficiently learn from preference datasets by explicitly understanding the reasoning behind human choices. This rationale-driven approach shifts the focus from algorithm-centric improvements to uncovering the underlying logic of preferences. Our findings reveal that providing sufficient explanations significantly enhances model performance, offering a fresh and previously underexplored perspective on preference learning.\\n\\n> **Difficulty of collecting data**\\n\\nWe appreciate the reviewer's perspective on data collection. As illustrated in Figure 2, simply increasing the number of preference pairs does not always lead to better performance. Our method prioritizes extracting deeper insights from existing preference data. In specialized domains, generating additional valid preference pairs can be particularly challenging due to limited domain expertise. Our approach seeks to maximize learning from expert-curated data while addressing these limitations.\\nAdditionally, obtaining high-quality human preference data is often expensive. By leveraging rationales, our method enhances the utility of existing data without requiring further collection efforts. We believe rationales help guide the model to adopt correct reasoning patterns, avoiding pitfalls like reliance on superficial features (e.g., response length), which are common in traditional preference learning approaches [1,2]. Although generating rationales introduces additional LLM inference costs, the expense for a 500K dataset can remain within $100. This cost is offset by the significant gains in learning efficiency, making it especially valuable when high-quality preference data is scarce or costly.\\nMoreover, in cases where human annotators themselves struggle to determine a preferred answer due to unclear reasoning, it is inherently challenging for a model to discern the rationale behind a choice. These scenarios demonstrate the inherent difficulty of the task, regardless of the approach used.\\nFinally, when addressing model misalignment, we recommend employing multiple diverse models to mitigate biases and outliers in the generated rationales. This strategy can improve the quality of the rationale signals provided to the model, further enhancing its learning outcomes.\\n\\n[1] Azar, Mohammad Gheshlaghi, et al. \\\"A general theoretical paradigm to understand learning from human preferences.\\\" International Conference on Artificial Intelligence and Statistics. PMLR, 2024.\\\\\\n[2] Park, Ryan, et al. \\\"Disentangling length from quality in direct preference optimization.\\\" arXiv preprint arXiv:2403.19159 (2024).\\n\\n> **Figure 2 and 3 explanation**\\n\\nWhile Figure 2 shows that RDPO outperforms DPO by approximately 5-8 percentage points, our approach also achieves the same win rate against SFT as DPO, but with significantly fewer pairwise data points. By enhancing the data with rationales, we reduce the annotation effort by a factor of three.\\nAdditionally, Figure 2 highlights the win rate against the SFT model, serving as an intermediate comparison between DPO and RDPO. In the direct comparison, depicted in Figure 3, RDPO achieves a win rate of 60-65% against DPO across various training data sizes for both the Orca and Ultrafeedback datasets.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents a data-centric approach to RLHF by enriching preference datasets with machine-generated rationales. These rationales offer explanations for choices between preferred and non-preferred responses, addressing ambiguity and enhancing the effectiveness of preference learning. The proposed framework integrates rationales into the training process, can save annotation costs by 3x, and lands the fine-tuned model at better performance. Extensive experiments demonstrate that rationale-enriched learning outperforms traditional methods, with benefits across various preference optimization algorithms.\\n\\nThis work underscores the potential of rationale-based data augmentation in preference learning, paving ways for more effective language model alignment and encouraging further exploration of unpaired preference learning scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. This paper is well written. The notations are clear.\\n\\n2. It provides up-to-date literature on RLHF techniques. It underscores the potential of rationale-based data augmentation in preference learning, paving ways for more effective language model alignment and encouraging further exploration of unpaired preference learning scenarios.\\n\\n3. Among many lines of work addressing the economic utility of dataset design and construction in RLHF, mechanism design has been recently explored to enhance the overall economic utility of dataset construction in RLHF.\\nThe merits of introducing mechanism design are well supported by game theory studies, both theoretically and practically:\\n\\nZhang, G., & Duan, J. (2024). VickreyFeedback: Cost-efficient Data Construction for Reinforcement Learning from Human Feedback.\", \"https\": \"//arxiv.org/abs/2409.18417\\n\\nMatsushima, H., Noda, S.: Mechanism design with general ex-ante investments. Journal of Mathematical Economics 106, 102831 (2023)\\n\\n4. The experiments on Orca and UltraFeedback are convincing, with rational theoretical analysis using mutual information as a tool and in-depth ablation discussion in the appendix B.2.\", \"weaknesses\": \"This paper underlines the impact of including rationale in the RLHF fine-tuning process. In other words, the proposed method generally leverages auxiliary data to enhance the model performance.\\n\\nHowever, generating qualitative rationales alongside existing datasets might increase the annotation cost in dollar terms. Therefore, the breakeven analysis and the operating guidance could have been more straightforward to project owners with a limited annotation budget in dollar terms.\", \"questions\": \"It would be great if the total cost (rationale annotation cost vs. fine-tuning performance) breakeven could be revealed in dollar terms, and the operating guidance could be discussed for project owners with a limited annotation budget.\\n\\nOne way could be to provide a detailed cost-benefit analysis, including estimated costs for generating rationales (e.g., API costs if using a language model) versus the potential savings from reduced annotation needs. This would give project owners more concrete information to assess the method's practicality within their budget constraints.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Thank you for your submission to ICLR. This paper aims to equip preference learning methods with machine-generated rationales, which explain the reason behind preference choices. The authors show that these rationales can increase learning efficiency.\\n\\nThe reviewers agree that the problem is well motivated, and that it is valuable to develop frameworks such as this for incorporating rationales into preference tuning procedures. However, there were still a number of concerns from the reviewers about this paper. In particular, there were concerns about the novelty of the presented methodology, experimental rigor/precision, and both the choice of and performance against baseline methods. In the end, a majority of the reviewers remained unconvinced about their concerns.\", \"additional_comments_on_reviewer_discussion\": \"During the response period, there was some healthy discussion between the authors and most of the reviewers. In their rebuttal, the authors responded back to questions and comments given by reviewers. However, the majority or the reviewers remained unconvinced and did not convert to a positive score.\"}", "{\"title\": \"Breakeven Analysis\", \"comment\": \"As highlighted by the reviewer, our goal is to enhance the utility of the dataset by improving the information efficiency of the annotated pairs. By providing rationales alongside the annotations, we aim to not only enable more efficient learning from preferences\\u2014thus reducing the need for excessive annotation\\u2014but also to potentially improve the interpretability and understanding of these preferences.\\n\\nWe appreciate the reviewer for the suggestion. To assist project owners in evaluating the trade-offs, we present a cost-benefit analysis of the approach. The table outlines the cost of using the API to generate rationales for a given number of annotations. It also highlights the RDPO win rate compared to the SFT model for each data budget and estimates the number of annotations that could potentially be saved from using DPO to achieve the same level of performance as RDPO.\\n\\n| **API Rationale Cost** |\\\\\\\\$0.13|\\\\\\\\$0.19| \\\\\\\\$0.26 | \\\\\\\\$0.32|\\\\\\\\$0.39| \\n|:------------------|-------:|-------:|-------:|-------:|-------:| \\n| **Annotations Used**| 1K| 1.5K | 2K | 2.5K | 3K | \\n| **Annotations Saved** |3K| 6K|6.5K|6.8K|>10K| \\n| **vs SFT Winrate** | 54%| 56%| 58%| 60%|62%| \\n\\nWhile we used open-weight models to generate the rationales in our study, the table illustrates the associated costs when utilizing an API model, specifically gpt-4o-mini. This model is referenced with pricing of \\\\\\\\$0.150 per 1M input tokens and \\\\\\\\$0.600 per 1M output tokens. We show the results on the Mistral-7B-v0.2-Instruct trained on the Orca dataset.\"}", "{\"summary\": \"The paper presents a new direct preference optimization method that leverages preference rationales (natural language explaining the reasons why one response is better than another). The proposed method adds a supervised loss term to the DPO objective, jointly training/anchoring the model to generate a valid preference rationale. Each preference rationales are generated with an LLM-as-Judge, augmenting a conventional binary preference dataset.\\n\\nThe method can be seen as a form of hybrid distillation from both preference data (DPO) and from LLM-as-Judge rationales.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The area of generative reward modeling is important and gaining traction.\", \"Promising experimental results across two datasets, performing comparable or better than DPO.\"], \"weaknesses\": \"1. Limited novelty and poor positioning with respect to the growing literature on synthetic preference generation and generative reward modeling (see missing references below, to be discussed in the paper). In addition, the authors focus entirely on direct preference optimization as an alignment method, but reward modeling + reinforcement learning remain a major paradigm for LM alignment. How does this work translate to this setting and compare to the following baselines?\\n\\nReferences\", \"rlcd\": \"Reinforcement Learning from Contrastive Distillation for Language Model Alignment. Yang et al., 2024.\", \"west_of_n\": [\"Synthetic Preference Generation for Improved Reward Modeling. Pace et al., 2024.\", \"Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Zheng et al., 2024.\", \"Self-Taught Evaluators. Wang et al., 2024.\", \"2. I found the theoretical analysis and motivation for the method unclear.\", \"Equation 2 (L230) -> why is the joint probability decomposed in this way? Why doesn\\u2019t the preference label also depend on the rationale? Surely there isn\\u2019t a single ground-truth preference considering what is discussed in the intro (multiple valid preference labels based on different raters\\u2019 values)? In fact, does Section 5 not use the opposite formulation (\\u201cthe preference inferred from the rationale\\u201d)?\", \"Information-theoretic results may be interesting but are completely relegated to the appendix, so they cannot be counted as a contribution of the paper. Authors state \\u201cOur analysis demonstrates a closed-form relationship between rationale informativeness and its alignment with true preferences\\u201d, without including any explanation for this claim. What does this mean and what is the form of this relationship?\", \"3. Finally, the experimental setup is too weak to demonstrate the added value of the proposed method.\", \"Is performance improvement statistically significant? Fig 2 suggests that DPO > RDPO with 1K Ultrafeedback data, but we obtain the opposite result in Fig 3. If the result is due to statistical uncertainty, this should be measured and shown on plots (RDPO outperforms DPO by a similar margin in Fig 2, which could therefore not be statistically significant).\", \"Preference dataset sizes are typically >>11K (see top-perfoming RMs on RewardBench, for example). Why did the authors focus their analysis on such a small, non-representative dataset size. Also, why is there no improvement in performance with DPO beyond 1K preferences?\", \"Related: L353, why pick the DPO model trained with 12K Ultrafeedback preferences as baseline, if its SFT performance is lower than that or models trained on less data?\", \"Why not evaluated model performance on established benchmarks such as RewardBench/AlpacaEval?\", \"How does RDPO with poor quality rationales (e.g. permuted / opposite) perform against standard DPO? I imagine much worse, since we are training on biased information. How can practitioners ensure that their rationales\\u2019 quality is sufficiently high to afford gains and not harm performance?\", \"Why is RDPO performing similarly to DPO then trained on Llama-3-8B in Figure 5?\"], \"questions\": [\"See weakness points above.\", \"Some additional questions I had when reading the paper, that I believe should be clarified:\", \"How are draws measured in Fig 2?\", \"I don\\u2019t understand this sentence: \\u201cthe drawing rate for the RDPO model is stable and low across different training data sizes, which shows that RDPO winning rate is higher not due to flipping the draw points but the losing points.\\u201d Can authors clarify?\", \"Fig2 caption typo: Winrare\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a new method for incorporating machine-generated rationales into preference fine-tuning, enhancing language models\\u2019 performance without extra human annotation. The authors demonstrate that maximizing rationale likelihood alongside preference loss improves model efficacy.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes a new approach to integrate model-generated rationales in preference tuning, avoiding the need for additional human labels.\\n2. Experimental results show that optimizing rationale likelihood alongside preference loss boosts model performance, reducing annotation needs and training data volume.\", \"weaknesses\": [\"1. The proposed method can be seen as a combination of the preference loss such as DPO and the rationale log-likelihood. The paper lacks further exploration of how the two components contribute to improved performance. A few questions are:\", \"a. In the ablation study on $\\\\gamma$, it seems the scale of gamma (from 1.0 to 10.0) does not matter at all. Did the authors try smaller $\\\\gamma$ or extremely large $\\\\gamma$?\", \"b. How does tuning solely on rationale likelihood without DPO loss affect performance? Will the performance increase?\", \"c. Justification is needed for a variable $\\\\gamma$ given the theoretical suggestion of $\\\\gamma=1$.\", \"2. Experimentation lacks rigor and thoroughness:\", \"a. Reporting win-rate against DPO alone does not fully capture the rationale\\u2019s benefit. It is hard to evaluate the absolute improvement brought by the rationale loss. It would be better to report win-rate against a fixed opponent such as GPT-4 on AlpacaEval 2.0. This can ensure that the baseline DPO model is properly trained to a satisfactory performance.\", \"b. Another related question is that there is no evidence that the DPO model in this paper is fully optimized. One may question if the dataset is weak or if the hyperparameters are adequately explored. For example, Llama3-8b-instruct + Ultrafeedback annotated by PairRM (see SimPO\\u2019s GitHub page for their dataset and model performance) can achieve a 40% LC win-rate, and the LC win-rate reported in the appendix is below 25%. I understand that SimPO did not release their training configuration, but the point here is that one cannot effectively conclude that the rationale loss significantly improves the performance.\", \"c. The length bias is a key issue in preference fine-tuning. In the main text, it is reported that RDPO can produce much shorter responses and maintain a higher win-rate against DPO. This is quite surprising and deserves more analysis or explanation from the authors. On the other hand, in section B.4, the length on the AlpacaEval 2.0 dataset remains close to DPO or the original model.\"], \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Exploration of components\", \"comment\": \"> **Exploration of components**\\n\\nWhile we agree with the reviewer that conducting more in-depth experiments to evaluate the interaction between the preference training objective and the rationale objective would be valuable, we have undertaken several analyses to explore this relationship. Specifically, we have studied the impact of incorporating rationales into the original preference learning objectives. Our investigations include assessing performance improvements by varying dataset sizes, evaluating rationales of different quality, conducting parameter ablation studies, and integrating rationales into various preference learning methods (e.g., DPO, ORPO) across different models and datasets. Notably, we observed that adding rationales designed to promote specific properties (e.g., conciseness) consistently imparts those traits across models and datasets\\u2014a behavior not evident in the original DPO approach. This suggests an intriguing avenue for future research: exploring whether rationales can transfer certain meta-properties to models.\\n\\n> **Scale of gamma**\\n\\nAs gamma decreases towards 0, we observe performance degrading to that of vanilla DPO/ORPO. Starting around a value of 0.2, we see the benefit of incorporating rationales into the preference dataset, which enhances the performance of the rationale-enhanced model compared to the vanilla preference-trained model. Performance then stabilizes as gamma approaches 10, with a slight decline in performance occurring as gamma increases beyond 10, up to 100. However, it is important to note that for different preference objectives, the optimal value of gamma may vary, highlighting the significance of gamma in achieving the best performance.\\n\\n> **Rationale without DPO**\\n\\nWe appreciate the reviewer\\u2019s suggestion. While our framework was introduced to complement existing preference learning objectives for more efficient data learning, we have not yet explored the use of rationales independently, without the preference objective. This suggestion is akin to the concept of Self-Taught Evaluators, where the preference learning objective is removed and models are supervised fine-tuned. We value this input and see it as a promising direction for future work.\\n\\nWang, Tianlu, et al. \\\"Self-taught evaluators.\\\" arXiv preprint arXiv:2408.02666 (2024).\\n\\n> **Gamma in theory and in practice**\\n\\nThe gamma introduced in the theoretical formulation ($\\\\\\\\beta = 0.5 + \\\\\\\\gamma$) represents the informativeness or quality of the rationale, while the gamma used in our experiments refers to the influence of the rationale loss on the preference training, which is independent of the rationale's quality. We apologize for any confusion caused by the notation.\"}", "{\"title\": \"A Friendly Reminder\", \"comment\": \"Dear Reviewer EsKw,\\n\\nThank you for your insightful feedback, which sparked valuable discussions about our work. We have thoroughly addressed your questions. As the deadline approaches, please do not hesitate to reach out with further questions, as we would be happy to address them.\\n\\nWarm Regards,\\\\\\nPaper8553 Authors\"}", "{\"title\": \"Unclear Theoretical Analysis\", \"comment\": \"> Joint probability decomposition\\n\\nWe do not assume the conditional independence of preference and rationale given the prompt x. Therefore, if we were to apply Bayes' rule to the joint probability in the reverse order, we would obtain:\\n$p^*(y_w \\\\\\\\succ y_l, r | x) = p^*(y_w \\\\\\\\succ y_l |x, r) \\\\\\\\cdot p^*(r | x)$,\\nwhich demonstrates that the preference also depends on the rationale. We choose to decompose the probability in the original way because it allows us to separate the common preference term from the rationale term, which can be easily integrated into most pairwise preference learning methods.\\n\\n> Information-theoretic results\\n\\nDue to space constraints, we have moved our theoretical results to the Appendix and kept a general overview of our results in the main text. We apologize for any inconvenience or potential confusion this may cause. Our first result (Theorem 1) establishes the connection between the mutual information of the true preferences and rationale given the prompt, I(Z;R\\u2223S), and the informativeness of the rationale. Specifically, this dependency indicates that as the informativeness of the rationale about the preference increases, the mutual information also increases. This suggests that higher-quality rationales can enhance the understanding of the true preferences. Our second result (Theorem 2) shows that training with rationales can reduce generalization error, especially when the rationale is useful for predicting the preference, which can eventually boost learning efficiency.\"}", "{\"title\": \"Limited novelty and poor positioning\", \"comment\": \"We appreciate the references provided by the reviewer, as they represent significant contributions to the field. Our work primarily focuses on the DPO extension, highlighting its adaptability to other pairwise preference learning methods, such as ORPO, due to our formulation's ability to incorporate a rationale loss. Since RLCD shares similarities with DPO, it could potentially support contrastive prompts with the appropriate rationales. The same reasoning applies to the West-of-N method, which selects the best and worst responses from a pool for pairwise preference training. Our framework could extend this approach by including rationales for each response pair. Self-Taught Evaluators also share some similarities with our approach, as they create judgments to identify better responses. However, their judgments are typically limited to selection without providing detailed rationales. Moreover, they consider their method in the SFT setting, flattening all data into instructions, which differs from our preference learning objective. Additionally, in the synthesis generation process, they generate one general response and one clearly orthogonal response, which is not assumed in our work.\"}" ] }
2CflgSMLoK
Data-Efficient Training by Evolved Sampling
[ "Ziheng Cheng", "Zhong Li", "Jiang Bian" ]
Data selection is designed to accelerate learning with preserved performance. To achieve this, a fundamental thought is to identify informative data samples with significant contributions to the training. In this work, we propose **Evolved Sampling** (**ES**), a simple yet effective framework for *dynamic* sampling performed along the training process. This method conducts *batch* level data selection based on *differences* of historical and current losses, significantly reducing the back propagation time with modest additional overheads while maintaining the model performance. Due to its conciseness, ES is readily extensible to incorporate *set* level data selection for further training accelerations. As a plug-and-play framework, ES consistently achieves lossless training accelerations across various models (ResNet, ViT, ALBERT), datasets (CIFAR, ImageNet, GLUE), and optimizers (SGD, Adam), saving up to 40\% wall-clock time. Particularly, the improvement is more significant under the *noisy supervision* setting. When there are severe corruptions in labels, ES can obtain accuracy improvements of approximately 20\% relative to the standard batched sampling. Our results motivate further investigations on the data efficiency aspect of modern large-scale machine learning.
[ "learning efficiency", "evolved sampling", "data selection", "loss dynamics" ]
Reject
https://openreview.net/pdf?id=2CflgSMLoK
https://openreview.net/forum?id=2CflgSMLoK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w8ILlLx6Al", "tE9sC9Gexs", "osb7IVymcV", "mi16KlBEel", "m3yiD1hMBU", "lRbrMHIZQ3", "jQogEyGEjL", "j6zyNq2U3r", "gWdA9ZDql7", "ZUPEMjEgMm", "YTXZGF9DF0", "UZKnwerRAy", "TgdxlyuAWm", "RjMAQmw8Kx", "RWGLcAApQR", "I4TG1NzT7k", "E0yoG7WJMC", "ARD4PlcO95", "9XOGXlFU0e", "8aYRm7sdr1", "89Il5I751k", "7LnM3xYF8n", "3Umvv9yFjf", "0DgAw0qzaY" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732669446971, 1732669990225, 1733181341375, 1730004519714, 1737524117246, 1730573687195, 1734592434350, 1733181546101, 1730459063160, 1733183537393, 1730578907869, 1732670085765, 1732669896437, 1732669297526, 1732669761673, 1732670153146, 1732695709151, 1732669676667, 1733298161711, 1732670215347, 1732669603430, 1732746478674, 1733213031960, 1733183468186 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Reviewer_4oRD" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11321/Reviewer_G9Hm" ], [ "ICLR.cc/2025/Conference/Submission11321/Area_Chair_eeF2" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Reviewer_R7ur" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Reviewer_M8jH" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Reviewer_G9Hm" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ], [ "ICLR.cc/2025/Conference/Submission11321/Reviewer_4oRD" ], [ "ICLR.cc/2025/Conference/Submission11321/Reviewer_R7ur" ], [ "ICLR.cc/2025/Conference/Submission11321/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer M8jH (continue)\", \"comment\": \">**Q3**: While the method helps reducing the number of backpropagation steps performed during training, it still requires feedforward running of all samples through the network, which is still computationally expensive. Indeed, while the results are positive, the measured gains are not particularly game-changing.\\n\\n**A3**: For the forward computation: \\n- We point out that the additional computation introduced by forward passes is modest, since compared to the baseline (no data selection), one only needs to additionally compute the losses on *selected mini*-batches with reduced sizes compared to original (meta-)batches. \\n- We note that the overhead of forward passes is much less than that of backward passes. Therefore, reducing the backward propagation computation as in our method would be dominantly effective to accelerate the training. The acceleration effect has been reflected by overall reduced time, as is shown in extensive experiments in Section 4.1 in the original manuscript.\", \"for_the_measured_gains\": \"- We have demonstrated the superiority of our method though extensive experiments in Seciton 4, by showing ES(WP)'s significant training accelerations on both small-scale and large-scale clean datasets (Seciton 4.1) and considerable learning accuracies enhancements on noisy datasets (Seciton 4.2). \\n- Particularly, we emphasize that \\\"InfoBatch\\\" ([2]) is a strong data selection baseline (ICLR 2024 Oral, possibly the most recent SOTA), but our method surpasses InfoBatch in both learning accuracies and training accelerations (see Table 2, 3, 4, 5 and Figure 3(a)). \\n\\n>**Q4**: Minor: I am not sure \\\"evolved\\\" is the right term here; \\\"evolved\\\" and \\\"ES\\\" remind strongly of evolutionary optimization and \\\"Evolution Strategies\\\", which can introduce confusion. \\n\\n**A4**: Thanks for the reminder. We plan to replace the method name with e.g. sampling by diff-loss re-weighting. Any further suggestions are welcomed. \\n\\n>**Q5**: It would be interesting to read more about the increased robustness to label noise; I might have expected the proposed method to perform worse, since samples with wrong labels would yield higher losses (unless/until the network memorizes the whole training set).\\n\\n**A5**: In Figure 3(b), we plot the noisy gradient ratio (ngr) within training, i.e. illustrating the relative magnitudes of gradients evaluated on corrupted samples over the whole mini-batch along the training dynamics. This ratio is mathematically defined as $\\\\text{ngr}:=\\\\parallel\\\\sum _ {i \\\\in \\\\mathfrak{b} _ t: y _ i=\\\\tilde{y} _ i}\\\\nabla _ {\\\\theta} \\\\ell _ {i}(\\\\theta(t))\\\\parallel _ 2/\\\\parallel\\\\sum _ {i \\\\in \\\\mathfrak{b} _ t}\\\\nabla _ {\\\\theta} \\\\ell _ {i}(\\\\theta(t))\\\\parallel _ 2$ with $\\\\mathfrak{b} _ t$ as the selected mini-batch for any training time $t$ and $\\\\tilde{y}$ as the corrupted label. \\n\\n- As is shown in Figure 3, one can observe *strong dynamical correlations* between learning accuracies ($\\\\text{acc}$) in Figure 3(a) and noisy gradient ratios ($\\\\text{ngr}$) in Figure 3(b). That is, $\\\\text{acc}$ always benefits from bounded $\\\\text{ngr}$ and degrades from increased $\\\\text{ngr}$, and their variations are almost *simultaneous* along the training dynamics (except Ordered SGD (\\\"Order\\\") due to its sensitivity with respect to losses (i.e., Ordered SGD always selects samples with top losses)). \\n- However, despite that ES(WP) seems to select more noisy samples due to its loss re-weighting scheme, this intuition is not *provably* correct, since the variations in both the numerator and denominator of $\\\\text{ngr}$ are indefinite due to offsets among terms. \\n- In fact, as is shown in Figure 3, ES(WP) (and also basic loss re-weighting (\\\"Loss\\\")) really underperform at initial training stages. However, the $\\\\text{acc}$ of ES(WP) and loss re-weighting somehow begins to upgrade (with non-increased $\\\\text{ngr}$) and finally outperform. Since this phase transition occurs in the middle of training, the underlying mechanism may involve detailed learning dynamics, and the quantitative dynamical analysis can be quite difficult and complex, which is beyond the scope of the current algorithmic paper and left as the future work. \\n\\n**References**\\n\\n[1] Ravi Raju, Kyle Daruwalla, and Mikko Lipasti. Accelerating deep learning with dynamic data pruning. *arXiv preprint arXiv:2111.12621*, 2021. \\n\\n[2] Ziheng Qin, Kai Wang, Zangwei Zheng, Jianyang Gu, Xiangyu Peng, Zhaopan Xu, Daquan Zhou, Lei Shang, Baigui Sun, Xuansong Xie, and Yang You. InfoBatch: Lossless training speed up by unbiased dynamic data pruning. In *International Conference on Learning Representations*, 2024.\"}", "{\"title\": \"Response to Reviewer R7ur (continue)\", \"comment\": \">**Q5**: The use of wall-clock time as a measure of speedup also presents challenges. Since wall-clock time is influenced by multiple factors, including the specific point of reference and the extent to which reference performance is met or exceeded, this metric is not straightforward. No details are provided on the variability of wall-clock measurements, which could make these results more challenging to interpret. An additional, complementary metric\\u2014such as the number of examples seen (similar to token counts in LLM training)\\u2014could yield a more direct and comparable measurement of processing efficiency, especially since the baseline approach involves higher computational requirements.\\n\\n**A5**: The experiments in the original manuscript mainly follow InfoBatch ([2]; ICLR 2024 Oral) to evaluate data selection methods running with *fixed budgets in epochs*. **To avoid possible interpretation issues, as suggested, we further add plots regarding the global learning dynamics, i.e. test accuracies versus the number of samples used for back-propagation. See Section B.3 (Figure 6) in the revised manuscript for details.** \\n\\n>**Q6**: Regarding robustness to label noise, Figure 3(a) indicates that while the method outperforms the baseline, the speedup advantage is lost under noisy conditions. This finding implies that the method may benefit from integrating the baseline up to its peak performance before switching to the proposed scheme. Such a hybrid approach could potentially leverage the best of both methods, maintaining efficiency without sacrificing performance under challenging conditions.\\n\\n**A6**: This is a sharp observation and promising insight. In fact, as is pointed out in **A3**, we have already partially realized this hybrid approach by tuning the annealing ratio $\\\\text{ar}$. The remaining question is how to decide the moment to switch. As an outlook, the most direct way is to monitor the overall performance and try to identify where saturations occur. More advanced techniques may be inspired by [3]: When indexes of mislabeled data samples are known and not dominant in numbers, one can turn to monitor training errors on mislabeled samples, and decide to switch when these errors are maximized. This is based on the empirical findings that gradient descent is biased to fit the clean data first during initial phases of training, and then fit noises. Note that this phenomenon is also observed in Figure 3, where the learning accuracy of \\\"Baseline\\\" first increases and then deceases (Figure 3(a)), with the corresponding noisy gradient ratio first deceasing (bounded) and then increasing (Figure 3(b)). \\n\\n>**Q7**: In Figure 3(b), the gradients under comparison lack clarity. It is uncertain whether the gradients displayed encompass all examples (both corrupted and uncorrupted), necessitating additional forward passes and potentially affecting wall-clock measurements, or if the results only include corrupted examples selected by the method. The latter case would introduce a selection bias, affecting the integrity of the reported results. A more informative and balanced approach would be to calculate the proportion of non-informative examples selected per epoch, providing a relative measure of their influence on learning. This would give a clearer picture of how these less useful samples affect training efficiency and could allow for more balanced comparisons.\\n\\n**A7**: We first clarify that Figure 3(b) is plotted to intepret Figure 3(a), not as a part of the proposed method. For further clarifications and discussions, you can refer to **A5** in **Response to Reviewer M8jH** for details. \\n\\n>**Q8**: In Table 5, the ground-truth results are presented without a corresponding baseline for corruption-free performance. Including such a baseline would clarify the upper bound achievable in the absence of noise, providing a benchmark against which the \\\"superior\\\" performance in noisy conditions could be assessed.\\n\\n**A8**: Thanks for your suggestion. We have added the corresponding results into Table 5 in the revised version. In fact, these results are exactly the second column of Table 2.\"}", "{\"title\": \"Response to Reviewer G9Hm\", \"comment\": \"Thanks for your comments. To clarify, we rerun the above experiments in Table 1 with further fine-tuning steps on classification tasks. The results are as follows.\\n\\n- Table 1 (new): The test accuracies ($\\\\\\\\%$) and reduced time of (MAE-based) pre-training the ViT-Large model on the ImageNet-1K dataset for 300 epochs (with 4xA100), and fine-tuning for 50 epochs.\\n| |Baseline|ES|ESWP\\n|----|----|----|----|\\n|Top-1 accuracy|$84.9$|$84.8$|$84.7$| \\n|Top-5 accuracy|$97.2$|$97.2$|$97.1$| \\n|Time $\\\\downarrow$|$-$|$12.1\\\\\\\\%$|$17.7\\\\\\\\%$| \\n\\nIt is observed that with fixed budgets of total training epochs (which is often required in practice to determine the learning rate schedule in advance, and \\\"fixed total training epochs\\\" is the typical default setting of many former related works, e.g. InfoBatch ([1]; ICLR 2024 Oral), which is possibly the most recent SOTA and also compares with *only \\\"Baseline\\\"*), ES(WP) can achieve comparable performance with the baseline, but lead to significant accelerations. In this experiment, the configuration of all hyper-parameters remains the same with other experiments in the manuscript, except that the mini-batch size is enlarged as $b=192$ (the meta-batch size is still $B = 256$). We hope that these newly updated results solve your concerns. \\n\\n\\n\\n**References**\\n\\n[1] Ziheng Qin, Kai Wang, Zangwei Zheng, Jianyang Gu, Xiangyu Peng, Zhaopan Xu, Daquan Zhou, Lei Shang, Baigui Sun, Xuansong Xie, and Yang You. InfoBatch: Lossless training speed up by unbiased dynamic data pruning. In *International Conference on Learning Representations*, 2024.\"}", "{\"summary\": \"This paper functions as a well-thought-out \\\"momentum optimizer\\\" in the data space. Instead of considering the presentation of data as fixed as in SGD, we take a more expansive view and think of the data space as another component of the model to optimize.\\n\\nThe work is somewhat novel in the large model training space.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The paper builds upon good theoretical foundations.\\n\\nThe paper well cites related work and the literature that leads to this contribution.\\n\\nThe paper creates an efficient heuristic based approach to solve a practical problem which rests on the previous theoretical contributions.\\n\\nThe paper well considers ablation studies and robustness studies.\\n\\nThe paper's theoretical arguments are well constructed.\", \"weaknesses\": \"This should be better justified: This can be inefficient since different samples may have varied importance. Can you look at the influence functions or coresets literature?\\n\\nThis statement needs to be better motivated and explained, why is evolved sampling \\\"natural?\\\"\\nIn general machine learning tasks, the typical behaviors of loss curves often appear decent trends\\noverall, but can oscillate meanwhile due to certain noises. This introduces the sensitivity or instability\\nissue of the sampling scheme (3.6). A natural smoothing operation is to use the exponential moving\\naverage (EMA) of losses\\n\\nThe proof presentations are somewhat lacking. It's difficult for me to quickly match up concepts from the optimization literature to some of the theoretical arguments made, for example, the EMA to the minimax problem.\\n\\nIt may be worthwhile in explaining this better with regards to the control theory literature, specifically, control theory also deals with oscillations and rectifies them in similar manners:\\n\\nDecoupled EMA. To sufficiently leverage the loss dynamics in a more robust sense, we propose to\\ncalculate the sampling probability as\\npi(t) \\u221d wi(t) = \\u03b21si(t \\u2212 1) + (1 \\u2212 \\u03b21)\\u2113i(\\u03b8(t)),\\nsi(t) = \\u03b22si(t \\u2212 1) + (1 \\u2212 \\u03b22)\\u2113i(\\u03b8(t)), si(0) = 1/n (3.8)\\nwith \\u03b21, \\u03b22 \\u2208 [0, 1] as two hyper-parameters. Here, the intermediate series {si(t)}t\\u2208N, updated in\\nthe EMA scheme, is also referred as the score (for the i-th sample). The scheme (3.8) is the so-called\\ndecoupled EMA,\\n2 which reduces to (3.7) when \\u03b21 = \\u03b22 = \\u03b2. In Figure 1, it is shown by the red curve\\nand appears an \\u201cinterpolation\\u201d between the original loss and single EMA: When losses oscillate,\\nthe decoupled EMA reacts moderately by not only capturing detailed dynamics of losses, but also\\nremaining necessary robustness , exhibiting the flexibility to trade-off (by tuning two betas).\\nIntuitively, by setting (\\u03b21, \\u03b22) \\u2192 (0+, 1\\n\\u2212), we are able to exploit the long-term historical information\\nalong the training (via \\u03b22), while focusing on the importance of current losses (via \\u03b21) and thus can\\nget the best of both world. This simple and elegant design turns out to be surprisingly beneficial in\\npractice, which is further verified in numerous experiments in Section 4.\\n\\n\\nThis should really be better explained. Again, this paper is moving into the \\\"total optimization landscape\\\" where both data and model parameters are considered components of the system to be optimized. It's not immediately clear whether this is a consequence of the problem the authors were solving, or the key insight that led to the approach.\\n\\n(ii) ES to solve a DRO problem. From another perspective, ES can be also reformulated as a\\nsolution to the minimax problem...\", \"questions\": \"Can the key idea of the paper: optimization of the data space, be more cohesively or clearly presented? Currently, it's still difficult to understand the key idea of the paper without significant theoretical and literature knowledge.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces a method called Evolved Sampling (ES) for efficient data selection in training machine learning models. The core contribution is a dynamic sampling framework that identifies informative data samples based on the evolution of loss values throughout the training process. By adjusting the selection of data at the batch level according to changes in loss values, ES significantly reduces the required training time while maintaining model accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Novelty - The paper introduces decoupled exponential moving averages, which leverage first-order loss differences for more stable and robust sampling, effectively combining ideas from loss and gradient-based sampling with robust optimization principles.\\n\\n2. Quality - The paper provides theoretical proofs and experiments across models and datasets, demonstrating consistent gains in efficiency and robustness, especially under noisy labels.\\n\\n3. Writing - The paper is clearly structured, with well-organized sections and visual aids that clarify ES\\u2019s advantages over traditional methods, though some theoretical sections may be dense for general readers.\\n\\n4. Relevance - ES offers practical relevance for reducing computational costs without accuracy loss, making it impactful for both research and industry applications in large-scale ML.\", \"weaknesses\": \"1. Significance - Much of the computation cost of foundation models occurs during pre-training, which is mostly self-supervised (auto-regressive, contrastive learning, auto-encoders). All the experiments in the paper are for labeled datasets, which represent fine-tuning use cases where the computation cost is not a major concern. Thus, the significance of the method is not clearly demonstrated.\\n\\n2. Scalability - The paper claims that ES has only modest overheads, but lacks an in-depth analysis of computational and memory costs associated with the decoupled EMA calculations, especially in large-scale tasks or datasets.\\n\\n3. Assumptions - Some assumptions in theoretical analysis may not hold in practice, e.g., smoothness of loss functions, especially for complex architectures and non-convex losses. A discussion of how the method performs when assumptions deviate from theory, or empirical analysis on non-smooth tasks, would help clarify the applicability.\\n\\n4. Hyperparameter Sensitivity - Introducing 2 hyperparameters could be a major concern for the proposed method. The current analysis (Figure 5) is too limited, e.g., what's the impact of hyperparameters on efficiency? Besides, it does seem that hyperparameters introduce a large variance in performance. For fair comparisons, the cost of searching hyperparameters should also be considered in the overall task (e.g., on a smaller dataset to test hyperparameters and then apply to a large dataset.)\\n\\n5. Lack of Baselines for Noise - In the experiments on label noise, ES performs well, but the comparison is limited mainly to non-specialized sampling methods. \\n\\nnit - ES in this literature often refers to 'Evolution Strategy', so would be nice to have a different abbreviation for the proposed method.\", \"questions\": \"1. Could the authors provide more insight into the sensitivity of the hyperparameters $(\\\\beta_1, \\\\beta_2)$ across different datasets and architectures?\\n\\n2. ES appears computationally feasible for single-machine training, but would its performance gains hold up in distributed training settings?\\n\\n3. ES with Pruning (ESWP) combines batch and set-level selection, but it is not entirely clear how this combination impacts overall performance in practice.\\n\\n4. How can ES be used for self-supervised training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"This paper proposes a method (ESWP) for supervised training, where in general, data points with higher losses are deemed more important for backpropagating, rather than regular batched gradient descent over uniformly chosen data points.\", \"The core idea is to keep track of an Exponentially Moving Average (EMA) of data point importances (determined by loss vlaues), and then using these to determine which datapoints to actually be used for gradient descent. Specifically, Algorithm 1 is a loop consisting of:\", \"\\\"Pruning\\\" down the entire dataset by performing importance sampling\", \"Recomputing importance weights according to uniform batches using EMA updates.\", \"\\\"Annealing\\\": Selecting minibatches according to their importances, OR sometimes performing uniform sampling again\"], \"experimental_results_are_conducted_over\": [\"Standard ResNet + CIFAR-10/100 settings with comparisons to other data selection methods\", \"NLP tasks using ALBERT as a base model\"], \"additional_ablation_studies_are_conducted_to_understand\": [\"If there is label noise, the EMA allows for choosing the \\\"right\\\" data points\", \"The importance of combining all effects (annealing and EMA)\", \"Choices of method hyperparameters (batch size / minibatch sizes, EMA update coefficients).\", \"## Strengths:\", \"Experiments are fairly comprehensive and the method is solidly investigated.\", \"Method is reasonable and well-motivated.\", \"## Weaknesses:\", \"Being blunt, the paper can easily fall into the category of \\\"increasing complexity, but only gaining incremental improvements\\\". The method introduces many new hyperparameters which may have to be tuned, and overall the improvements don't appear to be significant.\", \"The paper doesn't provide us with any new profound lessons or conclusions.\"], \"additional_comments_on_reviewer_discussion\": \"The reviewer scores were a (5,5,5,8), signaling that the majority of the reviewers are lukewarm, with the exception of Reviewer 4oRD who gave the 8.\", \"the_most_common_and_core_issues_are\": [\"\\\"Lack of novelty\\\" - i.e. the method is mainly a combination of EMA, Annealing, and Pruning which have been investigated in previous papers.\", \"Speedup gains - i.e. the method doesn't actually produce significant gains - e.g. slight percentage increases in accuracy on CIFAR-10, and slight percentage improvements in wall-clock time. The fairness of these results were also debated by multiple reviewers.\", \"Reviewer 4oRD still wishes to champion the paper, on the basis of:\", \"The paper doesn't need to be validated over new datasets (e.g. language modelling) since it requires large amounts of compute.\", \"We can agree to this, although given the CIFAR-10 results provided already, it's not convincing that we would obtain huge gains in LLM training.\", \"The paper raises the general notion that optimizing data selection and weights are both important.\", \"As mentioned by other reviewers, this paper isn't the first to make this conclusion however, and it isn't new.\", \"Seeing as most of the reviewers don't agree for acceptance, the decision is to reject.\"]}", "{\"title\": \"Response to Reviewer 4oRD\", \"comment\": \"Thank you for your reply! We are happy to see that our response addressed your concerns. Thanks again for your valuable feedback to help us improve our paper and for increasing your rating.\"}", "{\"summary\": \"The paper introduces a novel framework called Evolved Sampling (ES) (and with Pruning ES-WP) aimed at enhancing data efficiency in machine learning. The authors propose a dynamic sampling method that selects informative data samples based on the evolution of losses during training. This approach aims to reduce backpropagation time while maintaining model performance across various architectures (ResNet, ViT, ALBERT) and datasets (CIFAR, ImageNet, GLUE). Key contributions include: (i) Dynamic Sampling: ES utilizes historical and current loss differences to inform data selection, allowing for batch-level sampling without the need for pre-trained models. (ii)Efficiency Gains: The method achieves up to 40% reduction in wall-clock time during training and shows improved accuracy (approximately 20%) in scenarios with noisy labels; and (iii) Theoretical Justifications: The authors provide theoretical insights into how their method alleviates loss oscillations and can be viewed through the lens of distributionally robust optimization.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"### Originality:\\n\\nThe main proposition lies in the recursive definition of an Exponentially Moving Average over the losses of individual examples to deselect them from the training process to gain speedups and improved; i.e. stable, learning dynamics. The single-level EMA itself is a well-known approach that is applied to this setting with a recursive definition. The other techniques, i.e. annealing and pruning, are mere adaptations from prior work and are only a minor contribution to the originality. The bridge between batch and set-level data selection, which their method allows them to do is a nice feature, but not the main contribution. The theoretic analysis is interesting overall. But insights like decoupled EMA is in fact a convolution over hyperparameters\\u2019 powers of historical losses \\u2013 so their results are not really surprising.\\n\\n### Quality: \\n\\nQuite a few experimental issues are present, which I will detail in the weaknesses section.\\n\\n### Clarity:\\n\\nOverall the paper is clearly and concisely written. With the main exception of when exactly we are collecting the loss values of pruned examples; which might bias the calculation of their weight.\\n\\n### Significance: \\n\\nThe efficiency of modern machine learning algorithms and neural networks is a great issue, as it results in huge energy demand. Reducing the footprint is a critical point. One angle of attack pursued in this paper is being selective about the order and the subset of consumed examples. This is indeed an important and interesting avenue.\", \"weaknesses\": \"Besides the weak overall originality, my main criticism is connected to the empirical evaluation:\\n\\nThe necessity for a burn-in period, where standard training must occur to initialize the loss adequately before applying the Exponential Moving Average (EMA) scheme, points to a limitation in the approach. This dependency on a specific loss initialization suggests that the method might not be entirely robust across various starting conditions. It would benefit the study to explore a more systematic ablation of this burn-in period as a hyperparameter. Additionally, understanding whether variations in the burn-in length affect performance could provide insight into the model's dependency on initialization stability and might even reveal opportunities to shorten or eliminate this requirement.\\n\\nAnother area where clarity is needed is the reporting of statistical measures. The number of seeds used for evaluation and averaging remains unspecified, and no standard deviations are provided. This omission raises questions about whether noise rather than true performance gains might influence observed differences in performance between the proposed method and baseline competitors. Including standard deviations would allow readers to assess the consistency of the results, providing a clearer understanding of the variability in performance.\\n\\nThe use of wall-clock time as a measure of speedup also presents challenges. Since wall-clock time is influenced by multiple factors, including the specific point of reference and the extent to which reference performance is met or exceeded, this metric is not straightforward. No details are provided on the variability of wall-clock measurements, which could make these results more challenging to interpret. An additional, complementary metric\\u2014such as the number of examples seen (similar to token counts in LLM training)\\u2014could yield a more direct and comparable measurement of processing efficiency, especially since the baseline approach involves higher computational requirements.\\n\\nRegarding robustness to label noise, Figure 3a indicates that while the method outperforms the baseline, the speedup advantage is lost under noisy conditions. This finding implies that the method may benefit from integrating the baseline up to its peak performance before switching to the proposed scheme. Such a hybrid approach could potentially leverage the best of both methods, maintaining efficiency without sacrificing performance under challenging conditions.\\n\\nIn Figure 3b, the gradients under comparison lack clarity. It is uncertain whether the gradients displayed encompass all examples (both corrupted and uncorrupted), necessitating additional forward passes and potentially affecting wall-clock measurements, or if the results only include corrupted examples selected by the method. The latter case would introduce a selection bias, affecting the integrity of the reported results. A more informative and balanced approach would be to calculate the proportion of non-informative examples selected per epoch, providing a relative measure of their influence on learning. This would give a clearer picture of how these less useful samples affect training efficiency and could allow for more balanced comparisons.\\n\\nIn Table 5, the ground-truth results are presented without a corresponding baseline for corruption-free performance. Including such a baseline would clarify the upper bound achievable in the absence of noise, providing a benchmark against which the \\\"superior\\\" performance in noisy conditions could be assessed.\", \"further_minor_issues\": [\"Ablations:\", \"choices of \\\\beta. The presented heatmap tables are way too broad. I suggest using some Sobol or Latin Hypercube design and then reporting the heat surfaces. This way, we get a far more fine-grained perspective on the hyperparameters\\u2019 behavior.\", \"Pruning is not ablated\", \"The notation 0^+ and 1^- should probably be introduced or replaced by intervals (0, 1) instead of [0, 1]\", \"The notation is at times slightly overburdened (e.g. the additional vector notation in 320), instead of just writing the actual values in there directly.\"], \"questions\": \"I would like to get a clarification regarding Eq. 3.8. We have access to the current loss of an example to decide whether or not we want to sample it for that epoch. I interpret this as doing the forward pass on an example that we later deselect to be part of the backward pass calculation. This means that we still maintain the gradient of that example until we deselect it. The main cost saved then is the amount of bwd passes. In Algorithm 1, the necessity for forward passes seems to be mitigated in Line 284 at least during the pruning by taking the historically weighed score s instead of the weight function. This seemingly implies that to select examples, only historic losses are considered. But this poses yet another question: How do we adjust an example\\u2019s loss if the example is no longer selected? Because then we yet again will need a fwd pass and we could have calculated the full weight. This seems to be what is done in 289; i.e. only the loss over the batch examples is calculated. The only thing to mitigate the issue of disregarding bad losses (almost) completely is in Remark 1 and discounting the existing values. Either way, this introduces non-trivial and dead-lock-ish dynamics I would like to see investigated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer M8jH,\\n\\nAs the rebuttal phase nears its end, we want to gently remind you of our responses and would greatly appreciate your further feedback.\\n\\nIn the above rebuttal, we have addressed all your concerns, comments and additional questions with the utmost care and detailed responses. Hope our efforts meet your expectations. If you feel your concerns have been adequately addressed and find our updates satisfactory, with the utmost respect, we invite you to consider a score adjustment. If you have any remaining concerns, we would be happy to discuss them further, and we hope to make the most of the remaining day to provide further clarifications.\\n\\nWe deeply appreciate the time and effort you have dedicated to reviewing our paper. Regardless of the final outcome, we want to express our heartfelt gratitude for your thoughtful feedback and contributions to improving our work. Thank you for your time and consideration!\\n\\nBest, \\\\\\nAuthors\"}", "{\"summary\": \"The paper proposes \\\"Evolved Sampling\\\" (ES), a dynamic sampling method aimed at improving data efficiency during training. The method selects informative samples based on the loss values during training using a decoupled Exponential Moving Average (EMA) scheme. This reduces the number of samples needed for backpropagation, saving up to 40% in wall-clock time while maintaining model performance. The method was tested on a thorough evaluation across many different models (ResNet, ViT, ALBERT) and datasets (CIFAR, ImageNet, GLUE).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"ES shows a reduction in training time without loss in performance, which is promising for computationally expensive tasks.\", \"The use of loss evolution for sampling is an interesting approach that addresses the shortcomings of previous static and simple dynamic sampling methods.\", \"The results on datasets with noisy labels are interesting.\", \"Evaluation is sufficiently complete.\"], \"weaknesses\": [\"Limited novelty: the paper largely builds on existing sampling concepts with incremental improvements.\", \"The description of the method can be simplified considerably.\", \"While the method helps reducing the number of backpropagation steps performed during training, it still requires feedforward running of all samples through the network, which is still computationally expensive. Indeed, while the results are positive, the measured gains are not particularly game-changing.\", \"Minor: I am not sure \\\"evolved\\\" is the right term here; \\\"evolved\\\" and \\\"ES\\\" remind strongly of evolutionary optimization and \\\"Evolution Strategies\\\", which can introduce confusion.\", \"It would be interesting to read more about the increased robustness to label noise; I might have expected the proposed method to perform worse, since samples with wrong labels would yield higher losses (unless/until the network memorizes the whole training set).\"], \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer R7ur (continue)\", \"comment\": \">**Q9**: Ablations of betas. The presented heatmap tables are way too broad. I suggest using some Sobol or Latin Hypercube design and then reporting the heat surfaces. This way, we get a far more fine-grained perspective on the hyperparameters\\u2019 behavior.\\n\\n**A9**: We plot Figure 5 to convey the main message that default configurations of hyperparameters $(\\\\beta_1, \\\\beta_2)$ are basically okay: The default betas are consistently validated to be roughly (locally) optimal in small-scale models and datasets, and their *superior* effectiveness and efficiency remain in other experiments under different settings (e.g. noisy supervision, Table 5 and Figure 3) and with larger scales (e.g. Table 3). **We also extend Figure 5 for denser betas around the default values to verify its (local) optimality.**\\n\\n- Table 3: Test accuracies (\\\\%) for different betas in ES (ResNet-18, CIFAR-100).\\n |$\\\\beta_2$ \\\\ $\\\\beta_1$|$0.15$|$0.2$|$0.25$\\n |----|----|----|----|\\n |$0.95$|$78.5$|$78.8$|$78.1$\\n |$0.9$|$78.4$|$\\\\textbf{78.8}$|$78.6$\\n |$0.85$|$78.3$|$78.4$|$78.3$\\n\\n>**Q10**: Pruning is not ablated. \\n\\n**A10**: By combining both the batch and set level data selection, we aim to achieve more aggressive data pruning. **We further perform ablation studies on the pruning ratio $\\\\text{pr}$ to demonstrate its impacts on the overall performance.**\\n\\n- Table 4: The effect of pruning ratios in ESWP (ResNet-18, CIFAR-10).\\n |$\\\\text{pr}$|$0.1$|$0.2$|$0.3$|$0.4$|\\n |----|----|----|----|----|\\n |accuracy (\\\\%)|$95.3$|$95.3$|$95.2$|$94.9$\\n\\n>**Q11**: The notation $0^+$ and $1^-$ should probably be introduced or replaced by intervals $(0, 1)$ instead of $[0, 1]$. \\n\\n**A11**: These notations represent the single-side limit. Roughly speaking, $(\\\\beta_1, \\\\beta_2) \\\\to (0^+, 1^-)$ means that $\\\\beta_1$ is close to $0$ with $\\\\beta_1>0$, and $\\\\beta_2$ is close to $1$ with $\\\\beta_2<1$. \\n\\n>**Q12**: The notation is at times slightly overburdened (e.g. the additional vector notation in 320), instead of just writing the actual values in there directly. \\n\\n**A12**: Thanks for your suggestion. We have rewritten the suggested contents in the revised version. \\n\\n>**Q13**: I would like to get a clarification regarding Eq. 3.8. We have access to the current loss of an example to decide whether or not we want to sample it for that epoch. I interpret this as doing the forward pass on an example that we later deselect to be part of the backward pass calculation. This means that we still maintain the gradient of that example until we deselect it. The main cost saved then is the amount of bwd passes. In Algorithm 1, the necessity for forward passes seems to be mitigated in Line 284 at least during the pruning by taking the historically weighed score s instead of the weight function. This seemingly implies that to select examples, only historic losses are considered. But this poses yet another question: How do we adjust an example\\u2019s loss if the example is no longer selected? Because then we yet again will need a fwd pass and we could have calculated the full weight. This seems to be what is done in 289; i.e. only the loss over the batch examples is calculated. The only thing to mitigate the issue of disregarding bad losses (almost) completely is in Remark 1 and discounting the existing values. Either way, this introduces non-trivial and dead-lock-ish dynamics I would like to see investigated.\\n\\n**A13**: Please refer to **A2** for details. \\n\\n**References**\\n\\n[1] Ravi Raju, Kyle Daruwalla, and Mikko Lipasti. Accelerating deep learning with dynamic data pruning. *arXiv preprint arXiv:2111.12621*, 2021. \\n\\n[2] Ziheng Qin, Kai Wang, Zangwei Zheng, Jianyang Gu, Xiangyu Peng, Zhaopan Xu, Daquan Zhou, Lei Shang, Baigui Sun, Xuansong Xie, and Yang You. InfoBatch: Lossless training speed up by unbiased dynamic data pruning. In *International Conference on Learning Representations*, 2024. \\n\\n[3] Saurabh Garg, Sivaraman Balakrishnan, J. Zico Kolter, and Zachary C. Lipton. RATT: Leveraging unlabeled data to guarantee generalization. *Proceedings of the 38th International Conference on Machine Learning*, PMLR 139:3598-3609, 2021.\"}", "{\"title\": \"Response to Reviewer R7ur (continue)\", \"comment\": \">**Q2**: Overall the paper is clearly and concisely written. With the main exception of when exactly we are collecting the loss values of pruned examples; which might bias the calculation of their weight.\\n\\n**A2**: For the set level selection (line 282-284), we do not compute the current loss as line 289-290 do (for batch level selection). The main reason is to avoid unnecessary computation overheads. For the bias of weights, firstly, we use sampling instead of ranking to select training data, as is stated in Remark 1, indicating that even samples with small scores can be still selected probably into $\\\\mathcal{D}_e$. \\n\\nMoreover, we elaborate the intution behind ESWP as follows: \\n(1) The annealing strategy at the initial stage is conducive to data samples towards similar score scales at the first few epochs. \\n(2) Suppose the loss value during training has a decaying trend, the score is also likely to decrease. \\n(3) If a sample $z_i$ has a relatively small score initially and thus not been selected, its score will remain the same, while the scores of other selected samples are basically decreasing. \\n(4) In this way, at some later stages, the score of $z_i$ will become relatively large compared to others, and hence $z_i$ is likely to be selected. Otherwise, the score of $z_i$ is too small, and $z_i$ can be regarded as unimportant or well-fitted sample. \\n\\n>**Q3**: The necessity for a burn-in period, where standard training must occur to initialize the loss adequately before applying the Exponential Moving Average (EMA) scheme, points to a limitation in the approach. This dependency on a specific loss initialization suggests that the method might not be entirely robust across various starting conditions. It would benefit the study to explore a more systematic ablation of this burn-in period as a hyperparameter. Additionally, understanding whether variations in the burn-in length affect performance could provide insight into the model's dependency on initialization stability and might even reveal opportunities to shorten or eliminate this requirement.\\n\\n**A3**: Regarding the annealing technique, we respond by two points: \\n- Introducing the annealing leads to a hybrid data selection mechanism, which can be viewed as a homotopy or interpolation regime: When the annealing ratio is $\\\\text{ar}:=E_a/E=0$, it is pure ES(WP); when $\\\\text{ar}=0.5$, it reduces to the baseline without any data selection. Therefore, there exist optimal annealing ratio values $\\\\text{ar}^*$ potentially better than both methods. In this work, we set a default $\\\\text{ar}=5$\\\\% for all experiments, resulting in a burn-in period that is far from adequate. \\n- **We further perform ablation studies on the annealing ratio $\\\\text{ar}$ to demonstrate its impacts on the overall performance.** \\n\\n - Table 5: The effect of annealing ratios in ES (ResNet-18, CIFAR-100).\\n |$\\\\text{ar}$ |$0.0$|$0.05$|$0.075$|\\n |----|----|----|----|\\n |accuracy (\\\\%)|$78.60$|$78.79$|$78.32$|\\n\\n>**Q4**: Another area where clarity is needed is the reporting of statistical measures. The number of seeds used for evaluation and averaging remains unspecified, and no standard deviations are provided. This omission raises questions about whether noise rather than true performance gains might influence observed differences in performance between the proposed method and baseline competitors. Including standard deviations would allow readers to assess the consistency of the results, providing a clearer understanding of the variability in performance.\\n\\n**A4**: Regarding the statistical measures, we respond by four points: \\n- We *do* run multiple seeds and take the average. In Line 384-385 in the original manuscript: \\\"All the reported results are evaluated on the average of 2-4 independent random trials.\\\" All the methods have similar standard deviations, which are between 0.1 and 0.2 in Table 2 and Table 3. We omit this metric for concise presentations.\", \"in_fact\": [\"For tasks on clean datasets, the gaps of learning accuracies are not that significant, and the acceleration is our main focus. One can observe that in all experiments, the gaps of reduced training time among data selection methods are not marginal, not likely caused by noises (Table 2, 3, 4).\", \"For tasks on datasets with noisy labels, the gaps of learning accuracies are significantly large, impossibly caused by noises in training (Table 5, Figure 3(a)).\"]}", "{\"title\": \"Response to Reviewer M8jH\", \"comment\": [\">**Q1**: Limited novelty: the paper largely builds on existing sampling concepts with incremental improvements.\", \"**A1**: The existing sampling concepts where this work is motivated by are mainly loss re-weighting and EMAs. However, the present work differentiates from them with at least three points as follows:\", \"Loss re-weighting: Although the loss re-weighting is one of our motivations, its effectiveness was only numerically verified in applications and no theoretical characterizations were provided before. In this work, we first mathematically prove the convergence acceleration achieved by loss re-weighting (Proposition 1), which is certainly novel.\", \"EMAs: Although the standard EMA has been adopted in data selection (e.g. UCB ([1])), its practical performance is not that satisfying (e.g. [1]). In this work, we propose a new EMA scheme (decoupled EMA), acting as a non-trivial extension in the following sense:\", \"In formulation, the decoupled EMA \\\"space\\\" contains former loss re-weighting based data selection methods. It is easy to see that decoupled EMA reduces to standard EMA by setting $\\\\beta_1=\\\\beta_2=\\\\beta$, and further reduces to basic loss re-weighting and uniform sampling by setting $\\\\beta=0$ and $\\\\beta=1$, respectively.\", \"Intuitively, we illustrate the comparisons among dynamics of EMAs (Figure 1). It is shown that the decoupled EMA acts as an interpolation between the loss re-weighting and standard EMA: When losses oscillate, the decoupled EMA reacts moderately by not only capturing detailed dynamics of losses (which is ignored by standard EMA due to its over-smoothing effect), but also remaining certain robustness (while loss re-weighting is quite sensitive to loss variations). The decoupled EMA is also flexible to trade-off between these two regimes (details and smoothing) by tuning two betas (left $\\\\to$ middle $\\\\to$ right in Figure 1).\", \"In theory, we mathematically prove the novel Proposition 2 to demonstrate that the decoupled EMA is in fact a **first-order** modification of standard EMA, resulting in the flexibility to balance between losses and their **differences**. This potentially gives chances to better leverage the loss dynamics for data selection. Moreover, this first-order modification is *implicitly* introduced, meaning that only quantities involving losses are evaluated, without significant overheads as former first-order data selection methods based on gradients.\", \"(This point is also recognized by other reviewers, say Reviewer G9Hm: \\\"Novelty - The paper introduces decoupled exponential moving averages, which leverage *first-order* loss differences for more stable and robust sampling, effectively combining ideas from loss and gradient-based sampling with robust optimization principles.\\\")\", \"Empirically, we observe non-trivial improvements in all experiments presented in Section 4, especially for datasets with noisy labels (*limited* investigations on this setting in former data selection references). Particularly, for the superiority of decoupled EMA over standard EMA, the theoretical insight derived in Proposition 2 is also numerically verified in Table 2, 3 and 5, where ES(WP) outperforms UCB, and abaltions in Table 6, where Loss + DE outperforms Loss + E for multiple architectures and datasets.\", \"Additionally, the proposed method performs data selection in both the batch and set (epoch) level, which is also novel compared to former data selection literature (Table 1). This leads to more aggressive data pruning with better training accelerations and efficiency in appications.\", \">**Q2**: The description of the method can be simplified considerably.\", \"**A2**: Currently, we aim to provide a complete description of our method. The method part is developed in a logical order as follows:\", \"Preliminaries (Section 3.1): We present basic problem formulations.\", \"Motivations (Section 3.2): We theoretically prove the convergence acceleration via loss re-weighting (Proposition 1), and mention several variants based on loss re-weighting in former references for completeness.\", \"Analytical developments towards ES(WP) (Section 3.3): By viewing loss re-weighting as $0$-order EMA, we naturally extend the data selection framework to EMAs with higher orders, i.e. standard EMA ($1$-order) and decoupled EMA ($2$-order). Their intuitive comparisons are illustrated in Figure 1. Together with other general techniques (i.e. annealing and pruning), we finally achieve Algorithm 1 (illustrated in Figure 2).\", \"We would be appreciate if you can provide more *specific* suggestions to the description part, and we are certainly open to adjust the corresponding contents accordingly.\"]}", "{\"title\": \"Response to Reviewer R7ur\", \"comment\": [\">**Q1**: The main proposition lies in the recursive definition of an Exponentially Moving Average over the losses of individual examples to deselect them from the training process to gain speedups and improved; i.e. stable, learning dynamics. The single-level EMA itself is a well-known approach that is applied to this setting with a recursive definition. The other techniques, i.e. annealing and pruning, are mere adaptations from prior work and are only a minor contribution to the originality. The bridge between batch and set-level data selection, which their method allows them to do is a nice feature, but not the main contribution. The theoretic analysis is interesting overall. But insights like decoupled EMA is in fact a convolution over hyperparameters\\u2019 powers of historical losses \\u2013 so their results are not really surprising.\", \"**A1**: Regarding the originality, this work is motivated by mainly loss re-weighting and EMAs. However, the present work differentiates from these existing concepts with at least three points as follows:\", \"Loss re-weighting: Although the loss re-weighting is one of our motivations, its effectiveness was only numerically verified in applications and no theoretical characterizations were provided before. In this work, we first mathematically prove the convergence acceleration achieved by loss re-weighting (Proposition 1), which is certainly novel.\", \"EMAs: Although the standard EMA has been adopted in data selection (e.g. UCB ([1])), its practical performance is not that satisfying (e.g. [1]). In this work, we propose a new EMA scheme (decoupled EMA), acting as a non-trivial extension in the following sense:\", \"In formulation, the decoupled EMA \\\"space\\\" contains former loss re-weighting based data selection methods. It is easy to see that decoupled EMA reduces to standard EMA by setting $\\\\beta_1=\\\\beta_2=\\\\beta$, and further reduces to basic loss re-weighting and uniform sampling by setting $\\\\beta=0$ and $\\\\beta=1$, respectively.\", \"Intuitively, we illustrate the comparisons among dynamics of EMAs (Figure 1). It is shown that the decoupled EMA acts as an interpolation between the loss re-weighting and standard EMA: When losses oscillate, the decoupled EMA reacts moderately by not only capturing detailed dynamics of losses (which is ignored by standard EMA due to its over-smoothing effect), but also remaining certain robustness (while loss re-weighting is quite sensitive to loss variations). The decoupled EMA is also flexible to trade-off between these two regimes (details and smoothing) by tuning two betas (left $\\\\to$ middle $\\\\to$ right in Figure 1).\", \"In theory, we mathematically prove the novel Proposition 2 to demonstrate that the decoupled EMA is in fact a **first-order** modification of standard EMA, resulting in the flexibility to balance between losses and their **differences**. This potentially gives chances to better leverage the loss dynamics for data selection. Moreover, this first-order modification is *implicitly* introduced, meaning that only quantities involving losses are evaluated, without significant overheads as former first-order data selection methods based on gradients.\", \"(This point is also recognized by other reviewers, say Reviewer G9Hm: \\\"Novelty - The paper introduces decoupled exponential moving averages, which leverage *first-order* loss differences for more stable and robust sampling, effectively combining ideas from loss and gradient-based sampling with robust optimization principles.\\\")\", \"Empirically, we observe non-trivial improvements in all experiments presented in Section 4, especially for datasets with noisy labels (*limited* investigations on this setting in former data selection references). Particularly, for the superiority of decoupled EMA over standard EMA, the theoretical insight derived in Proposition 2 is also numerically verified in Table 2, 3 and 5, where ES(WP) outperforms UCB, and abaltions in Table 6, where Loss + DE outperforms Loss + E for multiple architectures and datasets.\", \"Combining theoretical justifications and numerical verifications gives that the *general* convolutional forms are *not* the keys, and introducing additional loss *differences* as the high-order information of loss variations is novel, effective and non-trivial due to its efficiency (recall that this first-order modification is *implicitly* introduced, meaning that only quantities involving losses are evaluated, without significant overheads as former first-order data selection methods based on gradients).\", \"Additionally, the proposed method performs data selection in both the batch and set (epoch) level, which is also novel compared to former data selection literature (Table 1). This leads to more aggressive data pruning with better training accelerations and efficiency in appications.\"]}", "{\"title\": \"Response to Reviewer 4oRD\", \"comment\": \">**Q1**: This should be better justified: This can be inefficient since different samples may have varied importance. Can you look at the influence functions or coresets literature?\\n\\n**A1**: For coresets: \\n- We have cited related coresets literature in Section 2 (see the \\\"Static sampling\\\" paragraph, Line 118-122 in the original manuscript). As we have discussed, static sampling methods require *extra* training, leading to considerable costs in both computation and memory.\", \"for_influence_functions\": \"- The definition of influence functions involves calculations of high-dimensional gradients, Hessians and their inverses, whose computation overheads are considerably significant. \\n\\n>**Q2**: This statement needs to be better motivated and explained, why is evolved sampling \\\"natural?\\\" In general machine learning tasks, the typical behaviors of loss curves often appear decent trends overall, but can oscillate meanwhile due to certain noises. This introduces the sensitivity or instability issue of the sampling scheme (3.6). A natural smoothing operation is to use the exponential moving average (EMA) of losses. \\n\\n**A2**: Sorry for the wrong description. We actually mean \\\"commonly-used\\\" here. We have updated this in the revised version. \\n\\n>**Q3**: The proof presentations are somewhat lacking. It's difficult for me to quickly match up concepts from the optimization literature to some of the theoretical arguments made, for example, the EMA to the minimax problem.\\n\\n**A3**: Te proof of Proposition 3 is deferred to Appendix A.3. Proposition 3 means that, solving the minimax optimization problem (3.12) with gradient-based iterations is formally equivalent to gradient descent iterations integrated with the decoupled EMA sampling. This provides a novel perspective to understand EMA-based data selection methods, which has not been discussed in the optimization literature to the best of our knowledge. \\n\\n>**Q4**: It may be worthwhile in explaining this better with regards to the control theory literature, specifically, control theory also deals with oscillations and rectifies them in similar manners. \\n\\n**A4**: Thanks for your insightful suggestions. Yes, control theory in general deals with behaviors of dynamical systems, and it is straightforward to formulate the training with data selection as certain constrained optimization problems, and possibly derive corresponding necessary conditions with tools in optimal control (e.g. Pontryagin\\u2019s maximum principle). However, it seems that this direction can suffer from significant computation loads due to additionally involved calculations of gradients and Hessians. We would appreciate if you can suggest more related references.\"}", "{\"comment\": \"I appreciate the authors' responses. My main concern remains that the proposed method's significance, either on performance or speed, is not convincing enough. I'm also unsure how to interpret the pre-training experiment results. The reconstruction loss does not really indicate downstream performance, and the training time is different, what is the purpose here?\\n\\nOverall, I believe the paper is not ready to be published at its current stage, so my score remains unchanged.\"}", "{\"comment\": \">**Q4**: Hyperparameter Sensitivity - Introducing 2 hyperparameters could be a major concern for the proposed method. The current analysis (Figure 5) is too limited, e.g., what's the impact of hyperparameters on efficiency? Besides, it does seem that hyperparameters introduce a large variance in performance. For fair comparisons, the cost of searching hyperparameters should also be considered in the overall task (e.g., on a smaller dataset to test hyperparameters and then apply to a large dataset.)\\n\\n**A4**: We plot Figure 5 to convey the main message that default configurations of hyperparameters $(\\\\beta_1, \\\\beta_2)$ are basically okay: The default betas are consistently validated to be roughly (locally) optimal in small-scale models and datasets, and their *superior* effectiveness and efficiency remain in other experiments under different settings (e.g. noisy supervision, Table 5 and Figure 3) and with larger scales (e.g. Table 3). **We also extend Figure 5 for denser betas around the default values to verify its (local) optimality.**\\n\\n- Table 3: Test accuracies (\\\\%) for different betas in ES (ResNet-18, CIFAR-100).\\n |$\\\\beta_2$ \\\\ $\\\\beta_1$|$0.15$|$0.2$|$0.25$\\n |----|----|----|----|\\n |$0.95$|$78.5$|$78.8$|$78.1$\\n |$0.9$|$78.4$|$\\\\textbf{78.8}$|$78.6$\\n |$0.85$|$78.3$|$78.4$|$78.3$\\n\\n>**Q5**: Lack of Baselines for Noise - In the experiments on label noise, ES performs well, but the comparison is limited mainly to non-specialized sampling methods.\\n\\n**A5**: We point out that the scope of this work is within comparisons between general data selection methods. In fact, whether there are label noises or not and the portion of noises are often unknown in practical applications, hence data selection methods specialized for these noisy settings (if any) are reasonably not the first choices. \\n\\n>**Q6**: nit - ES in this literature often refers to 'Evolution Strategy', so would be nice to have a different abbreviation for the proposed method. \\n\\n**A6**: Thanks for the reminder. We plan to replace the method name with e.g. sampling by diff-loss re-weighting. Any further suggestions are welcomed. \\n\\n>**Q7**: Could the authors provide more insight into the sensitivity of the hyperparameters across different datasets and architectures? \\n \\n **A7**: Please refer to **A4** for details.\\n\\n>**Q8**: ES appears computationally feasible for single-machine training, but would its performance gains hold up in distributed training settings? \\n\\n**A8**: Please refer to **A1** for details. \\n\\n>**Q9**: ES with Pruning (ESWP) combines batch and set-level selection, but it is not entirely clear how this combination impacts overall performance in practice. \\n\\n**A9**: By combining both the batch and set level data selection, we aim to achieve more aggressive data pruning. **We further perform ablation studies on the pruning ratio $\\\\text{pr}$ to demonstrate its impacts on the overall performance.**\\n- Table 4: The effect of pruning ratios in ESWP (ResNet-18, CIFAR-10).\\n |$\\\\text{pr}$|$0.1$|$0.2$|$0.3$|$0.4$|\\n |----|----|----|----|----|\\n |accuracy (\\\\%)|$95.3$|$95.3$|$95.2$|$94.9$\\n\\n>**Q10**: How can ES be used for self-supervised training? \\n\\n**A10**: Please refer to **A1** for details.\", \"title\": \"Response to Reviewer G9Hm (continue)\"}", "{\"title\": \"Response to Reviewer R7ur\", \"comment\": \"Thanks for your comments. We have outlined the novelty in details in the above rebuttal (**A1** in **Response to Reviewer R7ur**). Particularly, you can check **Table 1 (new)** in **Response to Reviewer G9Hm** for the newly updated core empirical results (large-scale unsupervised learning with distributed training).\"}", "{\"title\": \"Response to Reviewer 4oRD (continue)\", \"comment\": \">**Q5**: \\\"Decoupled EMA. To sufficiently leverage the loss dynamics in a more robust sense, we propose to calculate the sampling probability as $p_i(t) \\\\propto w_i(t) = \\\\beta_1 s_i(t-1)+(1-\\\\beta_1)\\\\ell_i(\\\\theta(t))$, $s_i(t) = \\\\beta_2 s_i(t-1)+(1-\\\\beta_2)\\\\ell_i(\\\\theta(t))$, $s_i(0) = 1/n$ with $\\\\beta_1, \\\\beta_2 \\\\in [0,1]$ as two hyper-parameters. Here, the intermediate series $\\\\{s_i(t)\\\\}_{t\\\\in\\\\mathbb{N}}$, updated in the EMA scheme, is also referred as the score (for the i-th sample). The scheme (3.8) is the so-called decoupled EMA, which reduces to (3.7) when $\\\\beta_1=\\\\beta_2=\\\\beta$. In Figure 1, it is shown by the red curve and appears an \\u201cinterpolation\\u201d between the original loss and single EMA: When losses oscillate, the decoupled EMA reacts moderately by not only capturing detailed dynamics of losses, but also remaining necessary robustness , exhibiting the flexibility to trade-off (by tuning two betas). Intuitively, by setting $(\\\\beta_1, \\\\beta_2) \\\\to (0^+, 1^-)$, we are able to exploit the long-term historical information along the training (via $\\\\beta_2$), while focusing on the importance of current losses (via $\\\\beta_1$) and thus can get the best of both world. This simple and elegant design turns out to be surprisingly beneficial in practice, which is further verified in numerous experiments in Section 4.\\\"\\nThis should really be better explained. Again, this paper is moving into the \\\"total optimization landscape\\\" where both data and model parameters are considered components of the system to be optimized. It's not immediately clear whether this is a consequence of the problem the authors were solving, or the key insight that led to the approach. \\n\\n**A5**: For statements' explainations:\\n- Relations of EMAs: Figure 1 illustrates the comparisons among dynamics of EMAs. It is shown that the decoupled EMA acts as an *interpolation* between the loss re-weighting and standard EMA: When losses of certain samples oscillate, the decoupled EMA reacts moderately by not only capturing detailed dynamics of losses (which is ignored by standard EMA due to its over-smoothing effect), but also remaining certain robustness (while loss re-weighting is quite sensitive to loss variations). The decoupled EMA is also flexible to trade-off between these two regimes (details and smoothing) by tuning two betas (left $\\\\to$ middle $\\\\to$ right in Figure 1). \\n- Effect of $(\\\\beta_1, \\\\beta_2)$: \\n - Since $p_i(t) \\\\propto w_i(t) = \\\\beta_1 s_i(t-1)+(1-\\\\beta_1)\\\\ell_i(\\\\theta(t))$, $\\\\beta_1 \\\\in [0,1]$, it is obvious that smaller $\\\\beta_1$ gives a larger coefficient of the current loss $\\\\ell_i(\\\\theta(t))$, hence we are focusing on the importance of current losses by setting $\\\\beta_1 \\\\to 0^+$. \\n - Since $s_i(t) = \\\\beta_2 s_i(t-1)+(1-\\\\beta_2)\\\\ell_i(\\\\theta(t))$, $\\\\beta_2 \\\\in [0,1]$, it is obvious that larger $\\\\beta_2$ gives a larger coefficient of the historical score $s_i(t-1)$, hence we are focusing on the importance of historical weights by setting $\\\\beta_2 \\\\to 1^-$. \\n\\nFor \\\"total optimization landscape\\\": \\n- We agree with this viewpoint, but it can be quite general in terms of induced formulations. In our opinion, specific realizations of \\\"total optimization landscape\\\" should be simple in the sense to introduce additional computation as light as possible. We currently view data selection with high-order loss-based re-weighting (i.e. decoupled EMA) as an economical candidate (see reasons in the 3rd and 5th sub-point of the 2nd point in **A1** in **Response to Reviewer R7ur**). \\n\\n>**Q6**: (ii) ES to solve a DRO problem. From another perspective, ES can be also reformulated as a solution to the minimax problem...\\n\\n**A6**: Please refer to **A3** for details.\\n\\n>**Q7**: Can the key idea of the paper: optimization of the data space, be more cohesively or clearly presented? Currently, it's still difficult to understand the key idea of the paper without significant theoretical and literature knowledge. \\n\\n**A7**: For the discussion of key ideas of this paper, you can refer to the last point in **A5** for details. For the writing part, although it is recognized by other reviewers, say Reviewer G9Hm: \\\"The paper is clearly structured, with well-organized sections...\\\"; Reviewer R7ur: \\\"Overall the paper is clearly and concisely written\\\", we are certainly open to more specific suggestions.\"}", "{\"title\": \"Response to Reviewer G9Hm\", \"comment\": \">**Q1**: Significance - Much of the computation cost of foundation models occurs during pre-training, which is mostly self-supervised (auto-regressive, contrastive learning, auto-encoders). All the experiments in the paper are for labeled datasets, which represent fine-tuning use cases where the computation cost is not a major concern. Thus, the significance of the method is not clearly demonstrated.\\n \\n**A1**: First, we clarify that the fine-tuning task studied in this work (Table 3) is for *full fine-tuning*, also possessing considerable computation costs when scaling to large models (ViT-Large) and datasets (ImageNet-1K). **In addition, we also add the corresponding pre-training experiments under the distributed learning setting as follows.** \\n\\n- Table 1: The re-construction loss and running time of (MAE-based) pre-training the ViT-Large model on the ImageNet-1K dataset for 300 epochs (with 4xA100).\\n | |Baseline|ES|ESWP\\n |----|----|----|----|\\n |Loss|$0.425$|$0.439$|$0.433$| \\n |Time (h)|$48.7$|$42.8$|$40.1$| \\n\\n>**Q2**: Scalability - The paper claims that ES has only modest overheads, but lacks an in-depth analysis of computational and memory costs associated with the decoupled EMA calculations, especially in large-scale tasks or datasets. \\n\\n**A2**: For computation: \\n- We note that the additional computation only arises from forward passes, whose overhead is much less than that of backward passes. Therefore, reducing the backward propagation computation as in our method would be dominantly effective to accelerate the training. The acceleration effect has been reflected by overall reduced time, as is shown in extensive experiments in Section 4.1 in the original manuscript. \\n- We claim that the additional computation introduced by forward passes is modest, since compared to the baseline (no data selection), one only needs to additionally compute the losses on *selected mini*-batches with reduced sizes compared to original (meta-)batches.\", \"for_memory\": \"- It is straightforward to deduce from Eq. (3.8) that the additional memory is $O(n)$ ($n$: sample size), since we need to store the score value of each data sample for only one single training step. The additional $O(n)$ memory costs are negligible since only $O(1)$ extra space is required for each data sample consisting of high-dimensional tensors. \\n- **We numerically test the overall memory costs of ES(WP), which are shown to be reduced compared to the baseline (no data selection).** \\n - Table 2: The averaged memory usage under the default configuration of batch sizes ($b=64$, $B=256$) when (full) fine-tuning the ViT-Large model on the ImageNet-1K dataset (with 1xA100 (80GB)).\\n | |Baseline|ES|ESWP\\n |----|----|----|----|\\n |Memory (GB)|$52.4$|$49.7$|$49.1$ \\n\\n>**Q3**: Assumptions - Some assumptions in theoretical analysis may not hold in practice, e.g., smoothness of loss functions, especially for complex architectures and non-convex losses. A discussion of how the method performs when assumptions deviate from theory, or empirical analysis on non-smooth tasks, would help clarify the applicability. \\n\\n**A3**: We respond by two points: \\n- We clarify that Proposition 1 is derived just aiming to theoretically *motivate* data selectiom methods based on loss re-weighting. With its mathematically proved convergence accelerations, and by viewing loss re-weighting as $0$-order EMA, we achieve analytical developments towards ES(WP) (Section 3.3). That is, one can naturally extend the data selection framework to EMAs with higher orders, i.e. standard EMA ($1$-order) and decoupled EMA ($2$-order) used in ES(WP). \\n- In practice, we *do* provide empirical analysis in general settings. In fact, all experimental results under the \\\"Loss\\\" data selection method in Section 4 are desired results.\"}", "{\"title\": \"Response to author\", \"comment\": \"I am convinced of the merits of the paper. I will be raising my score.\\n\\nThe authors should consider the feedback of other reviewers as well and think about improving the presentation, as well as the validation to put forward a more persuasive argument.\"}", "{\"comment\": \"Thank you very much for your replies. I'm very sorry to answer so late -- the openreview emails are always classified as spam and thus I often miss them.\\n\\nOverall, I agree with most of your arguments and appreciate your clarifications. (Unfortunately, changes in the paper are not highlighted and the openreview diff does not work -- so, I cannot easily check what was changed in the paper.) I will go through the paper again in the next few days and have a close look again -- sorry that I wasn't able to do it before the discussion period ended.\\n\\nNevertheless, I still believe that the novelty is rather limited and the empirical results do not fully convince me. Therefore, I will only increase my score to 5 for the moment.\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer R7ur,\\n\\nAs the rebuttal phase nears its end, we want to gently remind you of our responses and would greatly appreciate your further feedback.\\n\\nIn the above rebuttal, we have addressed all your concerns, comments and additional questions with the utmost care and detailed responses. Hope our efforts meet your expectations. If you feel your concerns have been adequately addressed and find our updates satisfactory, with the utmost respect, we invite you to consider a score adjustment. If you have any remaining concerns, we would be happy to discuss them further, and we hope to make the most of the remaining day to provide further clarifications.\\n\\nWe deeply appreciate the time and effort you have dedicated to reviewing our paper. Regardless of the final outcome, we want to express our heartfelt gratitude for your thoughtful feedback and contributions to improving our work. Thank you for your time and consideration!\\n\\nBest, \\\\\\nAuthors\"}" ] }
2CYZkawsmz
MDTREE: A Masked Dynamic Autoregressive Model for Phylogenetic Inference
[ "ChenRui Duan", "Zelin Zang", "Siyuan Li", "Stan Z. Li" ]
Phylogenetic tree inference, crucial for understanding species evolution, presents challenges in jointly optimizing continuous branch lengths and discrete tree topologies. Traditional Markov Chain Monte Carlo methods, though widely adopted, suffer from slow convergence and high computational costs. Deep learning methods have introduced more scalable solutions but still face limitations. Bayesian generative models struggle with computational complexity, autoregressive models are constrained by predefined species orders, and generative flow networks still fail to fully leverage evolutionary signals from genomic sequences. In this paper, we introduce MDTree, a novel framework that redefines phylogenetic tree generation from the perspective of dynamically learning node orders based on biological priors embedded in genomic sequences. By leveraging a Diffusion Ordering Network to learn evolutionarily meaningful node orders, MDTree autoregressively positions nodes to construct biologically coherent trees. To further push its limits, we propose a dynamic masking mechanism that accelerates tree generation through parallel node processing. Extensive experiments show that MDTree outperforms existing methods on standard phylogenetic benchmarks, offering biologically interpretable and computationally efficient solutions for tree generation.
[ "Phylogenetic Inference", "Genome Language Model", "Transformer", "Graph Structure Generation", "DNA", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=2CYZkawsmz
https://openreview.net/forum?id=2CYZkawsmz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qEmquTi6is", "g3LZnZK9Ty", "bLhDaloGYe", "VEkwP3IVWr", "QXuaW9nDyC", "Ktkpgz5e1d", "EXwnvJmqNo", "2e2zS6BRxE" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "official_review", "meta_review", "decision" ], "note_created": [ 1730826132988, 1730065553900, 1730675383007, 1730614163004, 1730684521727, 1730670290262, 1734898099339, 1737523533683 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2808/Reviewer_zB4G" ], [ "ICLR.cc/2025/Conference/Submission2808/Reviewer_oF9c" ], [ "ICLR.cc/2025/Conference/Submission2808/Reviewer_5UDw" ], [ "ICLR.cc/2025/Conference/Submission2808/Reviewer_ESW1" ], [ "ICLR.cc/2025/Conference/Submission2808/Reviewer_DTGp" ], [ "ICLR.cc/2025/Conference/Submission2808/Reviewer_i3bg" ], [ "ICLR.cc/2025/Conference/Submission2808/Area_Chair_mYbE" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper provides a new deep learning based method that incorporates language model to extract biological priors to find a node insertion ordering. They improve on state of the art methods using autoregressive models and provide comprehensive experiments.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper improves on existing methods, notably ARTree, for deep learning for phylogenetic inference.\", \"The proposed central problem, that of finding a proper taxon insertion order is an important piece of any phylogenetic inference algorithm and deserves more highlights in deep learning-based approaches.\", \"The use of language models to extra biological priors is quite novel and general.\", \"Experiments are relatively extensive, at the base line of deep learning-based approaches.\"], \"weaknesses\": [\"The paper's main contribution is methodological but compared to the most closely related method, ARTree, there are only marginal improvements across datasets. This is also in light of the fact that the proposed methodology is extremely more computational intensive, both in terms of runtime and carbon footprint. With so much more computation, it is not too unfair to expect a more pronounced difference. Perhaps it is advisable to find conditions where dynamic node ordering strongly affects the tree reconstruction methods. If it is too hard to find such conditions, perhaps it is not as bad a problem as stated in the paper.\", \"The paper's key insights (compared to literature) is a method to learn an insertion ordering of the taxa. However, it is not clear that the proposed methodology to find such an ordering is clearly advantageous compared to other different orderings. The baseline considered against use a lexicographical ordering, which is just arbitrary. What happens when a different ordering is used?\", \"Related to the topic of choosing the right taxa ordering: theoretically, given just one correct planar ordering of the taxa (draw the true tree onto the plane and number leaves from left to right, there is a trivial greedy algorithm to find the correct tree structure and branch length from tree distance approximated from DNA sequences. As a result, find the correct order is one of the hardest subproblem of tree inference.\", \"There are other line of work that uses Prim ordering of the distance matrix between taxa as the ordering to add taxa into the tree and implemented with maximum likelihood heuristics (Zhang, Rao, Warnow 2019 Constrained incremental tree building: new absolute fast converging phylogeny estimation methods with improved scalability and accuracy; Le et al. 2021 Using Constrained-INC for Large-Scale Gene Tree and Species Tree Estimation). These are not deep learning-based methods so it's not directly comparable, but at least a discussion on the existing orders that have been considered is warranted. It would also be interesting to see how the Prim ordering works in these experiments.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript introduces MDTree, a novel approach to phylogenetic tree inference. MDTree addresses the issues of model complexity and computational efficiency. By leveraging a Diffusion Ordering Network, MDTree dynamically learns the optimal node order, enabling more accurate and efficient tree construction. This approach incorporates graph neural networks (GNNs) and language modeling techniques to capture complex evolutionary relationships. Additionally, a dynamic masking mechanism allows for parallel node processing, further accelerating the inference process. The authors benchmark the performance in several aspects to show the effectiveness of MDTree.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The authors proposed a novel method and conducted a comprehensive evaluation by comparing MDTree with several baseline methods across various datasets and metrics.\", \"weaknesses\": \"The paper's experimental evaluation is hindered by several aspects. Firstly, the parameter settings for baseline methods are not well-documented, potentially impacting the strength of reported performance. Secondly, the absence of publicly available code limits reproducibility and hinders independent verification of the results. Additionally, the methods compared in each table are inconsistent, lacking clear explanations for these choices. For example, while MrBayes is included in Table 2, it is absent from Table 1, raising questions about the rationale behind these decisions.\\n\\nWhile the paper introduces a novel approach to phylogenetic tree inference, the literature review in the introduction appears to conflate different concepts. For instance, the discussion of Solis-Lemus & An\\u00e9, 2016 and Zhang et al., 2018, which focus on network inference from gene trees/multiple sequence alignments under the multispecies network coalescent model, seems to be mixed with the concept of gene tree inference from sequence data, the primary focus of the proposed MDTree method. A clearer distinction between these approaches would enhance the paper's clarity and contextual understanding.\\n\\nBesides the major concerns, below are some minor concerns.\", \"figure_1\": \"left and right are opposite. Run time unit is missing.\\n\\nThe name of the proposed method is not consistent in tables. E.g., Table 1: MDTree, Table 2: Ours.\", \"questions\": \"My suggestion is to address the weaknesses above.\\n1. describe baseline method settings\\n2. provide code availability to reproduce the result\\n3. compare all methods in each metric or explain why a certain method is not included\\n4. review the related work and discuss existing method and gap in a more clear way\\n5. proofread the paper\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new node ordering network to be able to better utilise autoregressive models to generate phylogenetic trees.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"Several new ideas on the technical side.\", \"(Marginally? hard to judge with the units and lack of error bars) better results.\", \"Improvements in run-time.\"], \"weaknesses\": [\"There are no statistical error margins (e.g. standard deviation) for the results. This is okay if the computational cost is huge (e.g. often the case in LLMs) if so please state this clearly as well as the running cost in GPU hours, the GPUs used, etc.\", \"Table 2 confuses me. For instance for DS1: three are numbers are bold, but not even the 3 highest ones (when MLL higher is better according to the caption), e.g. VBPI-GNN also has the third highest score -7108.41. Also the difference between some of the results seem incredibly marginal. Furthermore, negative MLL might be clearer rather than having a minus sign in front of every number.\", \"The method has many components, which on the hand is impressive that the authors managed to build this system and make it work, but also comes with limitations that aren't adequately discussed in my opinion. There are a great number of hyperparameters, but far too little space is dedicated to ablating them or acknowledging the difficulty of choosing them.\", \"The related work is quite brief given that the model borrows many techniques related work it compares against. A clearer delineation would be helpful to the reader.\"], \"clarity\": [\"The way indices i and t are re-used is confusing.\", \"It is imperative to give the citation of each baseline method considered. In the paper they are merely named, but not cited. This allows for confusion if two methods share the same name for instance and is generally poor practice.\", \"The Table captions could be improved, what are the numbers in paranthesis in Table 2? What is the grey background mean? Why are multiple numbers in bold per dataset?\", \"The y-axis range in Figure 3 makes the results really hard to discern.\"], \"minor\": [\"Line 164 \\\"As discussed, ...\\\" please add a link to where this is discussed, this helps non-linear reading, which is standard for papers.\", \"Equation 1 LHS says h_i, the text says h_t\"], \"questions\": [\"Why is G_t a graph? I would have thought it is a DAG.\", \"How do you define \\\\tau and B_\\\\tau mathematically?\", \"The mapping F takes a single species sequence to a tree topology, but in the text it states that F depends on G, which is not reflected in the notation. In addition, why is each species sequence sent to a separate tree topology? There is only a single evolutionary timeline we want to analyse.\", \"Line 186 beta increases montonically as what value goes from 0 to 1?\", \"What positional encoding function PE is being used?\", \"Is the categorical distribution in Eq 3 the forward process of the diffusion process?\", \"What is the MLL metric? Please either expand the acronym or give a citation.\", \"-Table 3: What was the hardware used?\", \"I presume alpha in Equation 5 is highly sensitive to N? How come the authors choose to take the softmax of a softmax rather than directly adding the alpha term to L_i?\", \"What is the importance of branch lengths?\", \"You have 3 distinct components, could you clarify how the gradient flows? I presume that at the boundary between the modules the gradient is stopped due to the discrete decision boundary? If so, how does the ordering network for instance get any signal.\", \"What modules are pretrained? Which are trained from scratch? How do you initialise the weights?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new framework for phylogenetic inference MDTree. Traditional methods like MCMC and previous deep learning methods are limited by their high computational cost, low efficiency and low inference accuracy. The new framework uses multiple techniques to effectively resolve these limitations, including a diffusion ordering network (DON) to generate node orders according to evolutionary relationships, autoregressive constriction module with dynamic masking mechanism to generate tree structures in parallel. The model uses a dual-pass traversal to estimate the tree branch lengths.\\nThis study includes an extensive evaluation which indicate that MDTree has a robust performance on datasets with variant taxa number and sequence length. Its computational cost and running time outperformed the state-of-the-art methods. This study also includes a comprehensive ablations study on models and hyper parameters to demonstrate the contribution and robustness of the modules.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"This is a quite impressive work that solved multiple pain points in phylogenetic inference.\\nThe idea proposed by this paper is innovated and effective, improves the phylogenetic tree inference on both efficiency and accuracy. \\nThe paper is well organized with a clear writing style. It is easy to follow the author\\u2019s idea. \\nThe experiment design is comprehensive. The author considered about multiple aspects of phylogenetic inference, such as running time, tree quality, model robustness, empirical study. The results are convincing.\", \"weaknesses\": \"There are certain weakness about this study. The complex architecture and multi-layered optimization requirements may limit the practical application. It is worth to consider about pack the framework into a user friendly package or online service. This will not only help people who are interested in this study, but also increase the impact of this impressive work.\\nThere are some details about the method and the evaluation metrics are ignored in the paper, such as how does DON determine the node order based on genomic embeddings? What is the impact of sequence divergence and species evolutionary relationship distance to the node order and inferred phylogenies? Why generating highly diverse tree topology is necessary, especially in biological analysis.\", \"questions\": \"1. How does DON determine the node order based on genomic embeddings? How much does it impact the final inference if the species sequence order differs?\\n2. How does the mask rate selection impact the parallel computation of nodes insertion and overall model running efficiency? \\n3. There is no summarization of of the sequence divergence and evolutionary relationship distances about the dataset used in this study. It is necessary to evaluate the impact of sequence divergence on the model performance. The author can also consider adding simulated datasets experiments to better control the sequence divergence. \\n4. What is the purpose of generating highly diverse tree topologies in biological research? What type of practical application needs such diverse tree topology instead of a highly confident and accurate phylogenetic tree? \\n5. Consider adding bootstrap analysis for phylogenetic support estimation to better indicate how confident is the inferred phylogenies.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new method for phylogenetic tree inference which extends beyond autoregressive models and is claimed to improve both computational efficiency and accuracy. Specifically, it focuses on learning a better ordering strategy for adding a new node into the phylogenetic tree from GLM priors as opposed to using fixed orders (lexicographical order) in autoregressive models. The authors framed the problem as masked dynamic autoregressive tree generation and introduced a Dynamic Ordering Network (DON), which is an absorbing diffusion model for learning the node adding order from pre-trained genomic language models (GLMs) to better leverage the biological and evolutional information. They further introduced several techniques for efficiency improvement, including dynamic masking and parallel processing, dual-pass tree traversal for branch length estimation, and LAX model for variance reduction. Extensive experiments show improved accuracy and efficiency of the proposed framework.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"The paper studies an important problem in bioinformatics concerning phylogenetic tree inference. The main highlight is the introduction of DON which infers node orders and enables sampling multiple nodes for parallel computation, which makes use of the strong prior in pretrained GLM and might be more flexible than the fixed order AR method.\\n\\nThe method is novel and adds a significant improvement to AR method. Additional techniques are introduced to further improve efficiency, generation consistency, and optimization. The perspective of studying the influence of node orders on phylogenetic accuracy is also novel.\\n\\nThe authors conduct extensive experiments across multiple tasks and datasets related to phylogenetic tree inference and consistently outperform the baselines including ARTree which is likely the previous SOTA. They also showed strong metrics on computational efficiency, and a thorough ablation study showing the importance of DON.\", \"weaknesses\": \"The paper studies a very specific task, phylogenetic tree generation problem, within the bioinformatics domain. Although the task might be important in the domain, the introduced methodology seems to be highly specified for this problem alone, which might limit its significance in the general graph generation area.\\n\\nThe biggest concern lies in the writing clarity, particularly the DON description. Multiple key information is missing and several notations are inconsistent across the main text. Firstly, the DON module seems to assume some graph structure already known among the sequence (e.g., figure 2.A has a ring structure). How is this graph constructed? It cannot be the tree structure as the phylogenetic tree has not been generated yet at this stage.\\n\\nThe presentation of DON in 3.1 can be largely improved, with many key notations and parameters unexplained. The biggest gap is the lack of a proper definition of the forward and backward diffusion process, with clear correspondence to time t. It starts with directly \\u201cupdating node features $h_t$\\u201d without defining what t means. It is also not clear what positional encoding $PE_t(g_i)$ means, it is used with subscription $t$ but isn\\u2019t the position of node i fixed? How does PE vary with time t and why is it varying with t? It is not clear whether the transition probability in (2) defines a forward corruption process from t=N to 0 or t=0 to N? How can we make sure only a single node is selected to be absorbed at each time step? The notation of $h_t$ is also confusing, is it a single node embedding or embedding matrix for all nodes? There is a mixed use of $h_t$ and $h_i$.\\n\\nThe node order generation process after the entire graph is absorbed is also not explained well. Equation (3) defines a conditional probability between node embeddings $q(h_t|h_0, h_{(<t))}$, how can this be used for order determination? Shouldn\\u2019t one predict the probability of unmasking a node in a diffusion setting? It seems the transition matrix Qt only allows jumping from a non-masking state to a mask. How can this be reused for computing a cumulative transition matrix in the opposite direction (i.e. from masked to unmasked)?\\n\\nFinally, it is not clear whether the DON is trained (e.g., with a certain score matching loss, and if so what is the training target given that optimal order is not available ahead of time?), or it is just a hand-crafted discrete forward diffusion process which is completely determined by the hyperparameters $\\\\beta_{t,i}$. There is no description regarding how network parameters of the relational graph convolutional network used for node feature computation are trained either. There is a large discrepancy between what is described in 3.1 and training loss (10) in 3.4, where $q_\\\\sigma(\\\\sigma_t|G_0,\\\\sigma_{(<t)})$ suddenly appears without definition. \\n\\nIn section 3.2 tree construction, a multi-head attention block with a query matrix Q is introduced MHA($Q, h_i, h_i$), what is the goal of Q here? It is initialized to an Identity matrix with size (N-3)*100, but was not mentioned later. \\n \\nThere are several typos and inconsistencies in naming terminology. E.g., the DON is sometimes referred as Diffusion Ordering Network and sometimes Dynamic Ordering Network.\", \"questions\": \"1.\\tHow does node count measure runtime efficiency in figure 1 and why a lower node count is preferred?\\n2.\\tHow do we get the initial graph structure that is used as input to DON?\\n3.\\tIs DON a completely separate and preceding step from the dynamic AR tree construction? Or the order determination step is roll out iterative after each node insertion step?\\n4.\\tSee other questions in Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present MDTree, a technique for inferring phylogenetic trees (topology + branch lengths) from a set of genomic sequences. To motivate their model, the authors reframe phylogenetic tree construction from the perspective of DART: dynamic autoregressive tree generation, which differs from its autoregressive predecessors by incorporating a node order learning step. To this end, MDTree uses a Diffusion Ordering Network (DON) using genomic language model embeddings to sort sequences. This enables better autoregressive generation and even makes it possible to add nodes in parallel. The authors benchmark MDTree on 8 classic phylogenetics datasets, comparing it to classical MCMC, structure generation, and autoregressive methods. In almost all benchmarks, they show state-of-the-art performance as well as improvements in secondary properties like computation speed, parsimony, and diversity.\", \"edit\": \"updated score from 5 to 6 following discussion with the authors.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"**Strong benchmark performance** is the main strength of this paper. Across almost all dataset + task benchmarks, the authors claim state-of-the-art performance. This is shown in Tables 1--4 and Figures 3--4.\", \"**Extensive secondary benchmarks** characterize MDTree's runtime reduction, parsimony, topological diversity, and bipartition frequencies compared to ARTree (and sometimes other models).\", \"**Biological ordering** is a desirable addition to autoregressive models, ensuring that the phylogenetic tree construction task can aggregate as much information across nodes of the tree as possible. This addresses a key limitation of previous autoregressive methods.\", \"**Parallelism**, enabled by the biological ordering, is a desirable property for a computational method and appears to improve processing speeds substantially (as shown in Figure 1).\", \"**Use of embeddings** eliminates the restriction that all sequences are the same length common in other models; it is also likely to unlock improvements in MDTree \\\"for free\\\" as better genomic foundation models are trained.\"], \"weaknesses\": [\"**Relationship to ARTree** is somewhat unclear, and although comparisons favor MDTree, the ARTree score is oftentimes quite close.\", \"The authors should make it explicit what distinguishes MDTree from ARTree.\", \"The ablations should make it clear which ablated forms of MDTree (if any) are worse than base ARTree\", \"Since the benchmark scores of MDTree and ARTree are often quite close together, I am less impressed by \\\"state of the art\\\" results. If the authors could convince me why this position is mistaken, and their method is a *significant* improvement over ARTree, I would be amenable to improving my score.\", \"**Lack of motivation** for specific architectural choices. Most notably, the diffusion ordering network (DON) is justified in terms of the limitations of other autoregressive methods like ARTree; however, the specific choice of architecture is presented as arbitrary/self-evident. To this end, I have several questions:\", \"What other options have the authors considered/tested? Why was the DON ultimately chosen?\", \"How does the DON compare to a simple baseline that produces biologically meaningful orders without relying on deep learning? The authors may have a better sense of what a good baseline might be. However, I propose the following baseline as a reasonable starting point:\", \"1. Compute pairwise edit distances between genomic sequences (e.g. Levenshtein distances)\", \"2. Perform agglomerative clustering on the pairwise distances to get a crude tree\", \"3. Use an inorder traversal of the resulting tree to sort the leaves. This is your input order.\", \"You cite \\\"evidence of robustness across different taxa orders\\\" in ARTree (line 163), but here you simply say \\\"the influence of node orders on phylogenetic accuracy has not been thoroughly examined.\\\" The ablation-based evidence presented in Table 7 suggests that node order has a weak influence on model performance, but it would be more convincing to see a non ablation-based characterization (e.g. what is the variance in MLLs for random permutations of node order?)\", \"**Unintuitive choice of representations** to seed the DON. It is not apparent that genomic LM representations are the best candidates here, as the LMs are not actually trained to estimate evolutionary distances. Moreover, vector space representations of genetic sequences will always incur distortion, as the geometry of phylogenetic trees is inherently non-Euclidean as a result of the four-point condition.\", \"**The DART formulation** seems unnecessary. What is the advantage of reformulating phylogenetic tree construction (which we have a perfectly good description of already: learning a topology and a set of branch lengths), besides that it attempts to justify the use of a DON? If that is all, I would argue that \\\"proper node orders improve phylogenetic inference\\\" is a sufficient claim.\", \"If other problems in phylogenetics are better viewed from the DART perspective, I would be interested in such an example. This would go a long way towards changing my mind on the value of this part of the paper.\", \"**Presentation** is unrefined throughout:\", \"Figures are often cramped, and combined into figures with no clear logic (e.g. Figure 1 includes a cartoon and a runtime comparison)\", \"Model details are crammed in pages 4 and 5. It is unclear without substantial cross-referencing and careful reading how all of the pieces fit together. While I understand the need to fit within the page limit, I would be interested in seeing the full architecture described in the Appendix.\", \"\\\"Mask rate modulated by a cosine function\\\" (200) seems to be an essential detail of the autoregressive tree, but the equation is not given anywhere\", \"**Related work** does not discuss Bayesian methods since VBPI (except for VaiPhy). There have been many developments in this field since then.\", \"**Missing experiments**: oftentimes, certain models are missing from evaluations. For instance, MrBayes and structure representation models are missing from Table 1; many models are missing from Table 4; comparisons in terms of runtime, diversity, bipartition, etc., are only run for 2-3 models at a time. It is possible that these results are infeasible to generate, but the authors should make this explicit.\"], \"questions\": [\"Why is it better that \\\"closely related species should be placed earlier in the tree\\\" (175) versus, e.g. simply clustering together? Is this robust to all topologies? For instance, what happens if you have two very distantly related subtrees, each of which has many species who are closely related to another?\", \"Similarly, I am interested in worst-case performance of the clustering algorithm. For instance, if you had a linear tree, would you still be able to parallelize your algorithm?\", \"What should the reader take away from the Angiosperm tree in Figure 8? How does this compare to/improve on the trees generated by other models?\", \"Bayesian phylogenetics methods will typically include a proof that their estimators are consistent and unbiased. Is it possible to do something similar in the case of this method? If not, the authors should justify why it is worth abandoning such guarantees in favor of their model\", \"Will the model be made publicly available? If so, is it easy to use and install? Does it work on a variety of machines and operating systems?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"**Summary:**\\nThis work focuses on phylogenetic tree inference from a set of genomic sequences. Motivated by issues with the existing methods (low efficiency, low inference accuracy, predetermined node orderings, or high computational cost), the authors introduce a deep learning method MDTree for inferring the topology and branch lengths of the phylogenetic tree from genomic data. Specifically, unlike the existing paradigm that relies on fixed orderings such as lexicographic, they advocate learning the order in which nodes are added to the phylogenetic tree. The main contribution is extending the existing autoregressive tree (ART) generation method with an absorbing state diffusion model called the \\u2018Diffusion Ordering Network\\u2019 (DON). DON sorts the genomic sequences, using embeddings from a pretrained genomic language model, to better capture the similarities between the species from biological and evolutionary perspectives. Furthermore, MDTree can perform parallel node processing. Empirical validation is conducted on several phylogenetic datasets.\\n\\n**Strengths:**\\nReviewers acknowledged the contributions of this paper, noting the (a) novelty of incorporating dynamic (biological) ordering within the autoregressive setup, (b) efficiency afforded by parallelism, (c) benefits of leveraging pretrained embeddings for phylogenetic inference, (d) enhanced flexibility over existing ART paradigm, and (e) comprehensive experimental design (encompassing model robustness, runtime, and diversity etc.). \\n\\n\\n**Weaknesses:**\\nSome reviewers raised concerns that MDTree seemed to provide rather marginal improvement over ARTree across datasets (despite the very significant computational overhead and increased footprint), so wondered whether dynamic tree ordering was indeed a critical issue for tree reconstruction in practice.\\nQuestions and concerns were also raised about (a) insufficient/at times inaccurate literature review that did not discuss recent related work on Bayesian methods, or conflated different works, (b) lack of motivation for design and complexity of specific architectural choices, (c) missing evaluations for some models, (d) lack of reproducibility of the proposed method, as well as missing information about parameter settings used to obtain results with the baselines, and (e) issues with presentation that hampered clarity and understanding of the work. \\n\\n**Recommendation:**\\nThe authors actively participated in the rebuttal phase, satisfactorily addressing some questions and concerns. However, some concerns remained; e.g., reviewer zB4G maintained that extra computation and carbon footprint did not seem to justify the incommensurately small performance gain. Reviewer i3bg echoed this sentiment, stating they were not convinced about contributions of this work were significant enough. I fully agree these are valid concerns. Given the breadth of experiments and the idea of incorporating dynamism and parallelism in autoregressive generation, I was willing to consider the experimental part to be sufficient.\\n\\nReviewers 5UDw and DTGp remained unconvinced about the clarity and presentation, which they found to be confusing. 5UDw maintained that in the current form the work was too tailored to the current setting, and expressed concern that it might not be of interest to the broader ICLR community. DTGp also emphasized several issues with exposition - in particular about the description of DON. \\n\\nIn order to be able to make an informed recommendation, I proceeded to take a closer look at the revised manuscript myself. Despite being (very) familiar with several technical components of this work (such as GCN and diffusion), I found the writing (especially about the technical parts) to be extremely confusing and rather underwhelming. I\\u2019m afraid the work in its current form suffers from serious readability issues (as pointed out by 5UDw and DTGp), and a significant effort in form of a major revision (that includes clear mathematical descriptions) needs to be invested before it is ready for publication.\\n\\nTechnical presentation issues aside, I think the paper will significantly benefit from separating the description of key methodological contributions (i.e., incorporating dynamic ordering and parallelism) - which should be clearly developed formally via a mathematical formulation - from the phylogenetic-specific details. Not only will doing so significantly enhance the readability of the paper (and increase confidence about the technical machinery being correct), I believe it will also make the work more broadly accessible.This way the authors could position their methodological contributions better in the context of previous work on ordering-related issues for autoregressive models (see, e.g., Xu et al. Anytime sampling for Autoregressive Models via Ordered Autoencoding. ICLR 2021), while still being able to demonstrate the benefits of their overall approach for phylogenetic inference.\", \"additional_comments_on_reviewer_discussion\": \"Details already provided above.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
2CQa1VgO52
Enhancing Deep Symbolic Regression via Reasoning Equivalent Expressions
[ "Nan Jiang", "Ziyi Wang", "Yexiang Xue" ]
Symbolic regression seeks to uncover physical knowledge from experimental data. Recently a line of work on deep reinforcement learning (DRL) formulated the search for optimal expressions as a sequential decision-making problem. However, training these models is challenging due to the inherent instability of the policy gradient estimator. We observe that many numerically equivalent yet symbolically distinct expressions exist, such as $\log(x_1^2 x_2^3)$ and $2\log(x_1) + 3\log(x_2)$. Building on this, we propose Deep Symbolic Regression via Reasoning Equivalent eXpressions (DSR-Rex). The high-level idea is to enhance policy gradient estimation by leveraging both expressions sampled from the DRL and their numerically identical counterparts generated via an expression reasoning module. Our DSR-Rex (1) embeds mathematical laws and equalities into the deep model, (2) reduces gradient estimator variance with theoretical justification and (3) encourages RL exploration of different symbolic forms in the search space of all expressions. In our experiments, DSR-Rex is evaluated on several challenging scientific datasets, demonstrating superior performance in discovering equations with lower Normalized MSE scores. Additionally, DSR-Rex computes gradients with smaller empirical standard deviation, compared to the previous DSR method.
[ "symbolic regression", "deep reinforcement learning", "symbolic reasoning" ]
https://openreview.net/pdf?id=2CQa1VgO52
https://openreview.net/forum?id=2CQa1VgO52
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxq7AK0wIX", "xlYL5diFWs", "upoSan6Obq", "hGt8lLcWIi", "aW2AjFkBuv", "WH99ySVqUW", "VeLhfsPwmv", "TKrTzreEBn", "SaOtYCUpGT", "SL6YPpBE6v", "MCa7i6F83w", "HPnzEEIo6B", "70iOpCKcsN", "5oTednQtql", "344D78wkhD" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733152676333, 1733049768364, 1730723075890, 1730606585559, 1732598144561, 1732805698189, 1732594103562, 1737093744200, 1732598996650, 1730652676277, 1730559844794, 1732596127372, 1732631064058, 1730113770232, 1732595021580 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_zjYU" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_Pc43" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_zjYU" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_p6DY" ], [ "ICLR.cc/2025/Conference/Submission5648/Authors" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_ooAT" ], [ "ICLR.cc/2025/Conference/Submission5648/Authors" ], [ "ICLR.cc/2025/Conference/Submission5648/Authors" ], [ "ICLR.cc/2025/Conference/Submission5648/Authors" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_ooAT" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_Pc43" ], [ "ICLR.cc/2025/Conference/Submission5648/Authors" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_ej1f" ], [ "ICLR.cc/2025/Conference/Submission5648/Reviewer_ej1f" ], [ "ICLR.cc/2025/Conference/Submission5648/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your clarification. To strengthen the quality of the experimental results, I recommend the author benchmark their method on the SRBench dataset with state-of-the-art baselines. Based on the current manuscript, I will keep my score.\"}", "{\"comment\": \"Thank you very much for the clarification of the author. Considering your clarification of all the reviewers, I decided to maintain my current score. I wish you every success in your work.\"}", "{\"summary\": \"The paper identifies a problem of deep symbolic regression (DSR) for symbolic regression problems, that failure to capture equivalent expressions results in high variance of gradients and unstable training for the policy gradient estimator. The author proposed to address the problem by appending the symbolic reasoning module to a batch sampling of DSR to capture the equivalent expressions and adopting a new policy gradient method based on the group of equivalent expressions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**1)** The paper is well written with clear notations, concrete technical details, and illustrative figures to explain the problem.\\n\\n**2)** The paper is well-motivated by an interesting topic of expression equivalency in the symbolic regression (SR) area, which is promising to attain better performance with existing SR models and develop new SR models.\\n\\n**3)** Theoretical analysis provides the performance lower-bound as DSR.\", \"weaknesses\": \"**1)** Expression equivalency problems exist in nearly all SR methods. Compared with the large landscape of SR model families, the baseline model DSR is a little bit out-of-date. For example, GPMeld, the successor of DSR in Figure 2, exhibits better performance than DSR, and a similar performance to DSR-REX. Besides, the benchmarking models adopted in the experiments only encompass Reinforcement Learning (RL) based methods and one RL and genetic programming hybrid method GPMeld. To make stronger conclusions, more types of SR models should be considered, such as AI Feynman 2.0 as cited in the paper which studies similar expression equivalency problems.\\n\\n**2)** Figure 3 only compares the efficiency between the steps within the DSR-REX with different architectures. The comparison of efficiency between DSR and DSR-REX would bring in more insights.\", \"questions\": \"**1)** Can you include more types of SR models in benchmarking, or explain the advantages of DSR-Rex over AI Feynman 2.0 in capturing equivalent expressions?\\n\\n**2)** In equation (4), You mentioned that $\\\\mathbb{I}$ \\\\{ $\\\\cdot$ } $=1$ if $\\\\tau$ can be converted to $\\\\phi$, however, according to the definition in line 85, $\\\\phi$ is one specific expression. How do you obtain the probability of the equivalent group? Do you mean $\\\\phi$ represents all equivalent expressions to $\\\\phi$ here? In line 181, \\\"all possible sequences\\\" refers to all the sequences in the same equivalent group, or all the expressions have been sampled?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes DSR-Rex, which adds a mathematical equivalence reasoning module to deep symbolic regression to improve the efficiency and stability in the training process (variance reduction). Based on a re-expression of the objective after grouping mathematically equivalent but symbolically different equations, the algorithm uses standard encoding/decoding modules of DSR plus a novel reasoning module that enumerates equivalent expressions of the generated equations, and then modifies the training objective of DSR. It is proved the equivalence of objective functions of DSR-Rex and DSR and reduced variance of the estimated objective using DSR. The performance of DSR-Rex is evaluated on Feymann datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Improving the performance of symbolic regression is an important problem, and this paper is likely to have impact for the important method of DSR.\\n\\n2. The motivating point of addressing equivalent symbolic equations is interesting and insightful. \\n\\n3. The paper is clearly structured.\", \"weaknesses\": \"1. The presentation clarity of the paper needs to be improved, including notations and method details. Please see my detailed questions below.\\n\\n2. The motivation needs to be strengthened to justify why numerically equivalent but symbolically different equations will pose challenges to DSR training and why the proposed methods. \\n\\n3. The experiments can be enhanced by adding more benchmark comparisons.\", \"questions\": \"1. Details/notations about problem setup need to be more precise. For instance,\\n- for $\\\\tau=(\\\\tau_1,\\\\dots,\\\\tau_k)$, what is $k$? How is it determined? \\n- What is each $\\\\tau_i$ -- is each of them a math operator/variable/coefficient? \\n- In equations (1) and (2), the reward is defined for each sequence $\\\\tau$, but right after that, the notations override previous ones, where $\\\\{\\\\tau_1,\\\\dots,\\\\tau_N\\\\}$ represents multiple sequences, so here each $\\\\tau_i$ is a sequence, instead of an element in a sequence? \\n\\nPlease revise the notations and be rigorous about their meanings. \\n\\n2. I understand that numerically equivalent but symbolically different expressions exist, and it is reasonable to try to avoid them. However, for the motivation of this work, I was wondering how this might negatively affect DSR. Why does it make it less stable or less efficient, as the authors claim? \\n\\n3. Line 196, the sentence \\\"Since we cannot directly use the probability distribution q\\u03b8 to sample a group of sequences with the\\nsame reward. Instead,...\\\" seems to be grammatically incorrect. \\n\\n4. More details of the method for equivalent expressions are needed for clarity: It is claimed that \\\"In practice, equation 7 is not computed by enumerating every expression in \\u03a6 (as indicated by the inner summation).\\\" and the details are in Section 3.2. However, Section 3.2 seems difficult to understand. What is the generated equivalent expressions for? How are they used in equation 7? Or is there an equivalent way to compute equation 7 after generating the equivalent expressions? \\n\\n5. What if the equivalent expressions in Section 3.2 cannot enumerate all possible choices? What is the consequence, and how would limiting the number of them impact the results? \\n\\n6. Setup for Section 5.2: Is Figure 2 the result for one dataset, or aggregating results from multiple datasets? Please consider showing the results for all 10 datasets. \\n\\n7. How are the 10 tasks selected from the Feymann dataset? It would also be helpful to consider larger benchmarks like SRBench [1]. \\n\\n[1] La Cava, William, et al. \\\"Contemporary symbolic regression methods and their relative performance.\\\" Advances in neural information processing systems 2021.DB1 (2021): 1.\\n\\n8. The high-level idea of addressing numerically equivalent expressions seems widely applicable. Would similar ideas be useful beyond the context of DSR? It would be helpful to have some discussion on the broader scope.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"### 1. Notation Confusion\\nThank you for pointing out the notation inconsistency. In our work, $\\\\tau$ represents a sequence of grammar rules, where each $\\\\tau_i$ corresponds to one mathematical operator, variable, or coefficient. The maximum length of $\\\\tau$ is set to be $k$. We previously used $\\\\tau_1, \\\\ldots, \\\\tau_N$ to denote a batch of sequences, which may have caused confusion. We will revise the notation system throughout the paper to ensure consistency and clarity.\\n\\n------\\n\\n### 2. Impact of the Proposed Module on Classic DSR\\nClassic DSR relies solely on the reward of an equation to guide its search. Over many iterations, it may implicitly learn numerically equivalent but symbolically different expressions. Our method explicitly incorporates this information into the model, encouraging it to adapt more quickly to such equivalencies.\\n\\n------\\n\\n### 3. Grammar Errors\\nWe will carefully proofread the content to eliminate grammar errors and typos, ensuring the manuscript is polished and professional.\\n\\n\\n------\\n\\n### 4. Misinformation Between Equation 7 and Section 3.2\\nEquation 7 provides the theoretical foundation for the proposed idea, while Section 3.2 describes its empirical implementation. The empirical results serve as an approximation of the theoretical estimator because, in theory, the group size can be infinitely large. Limiting the maximum group size introduces an approximation error. We will conduct additional ablation studies to analyze the impact of this approximation in detail.\\n\\n------\\n\\n### 5. Experimental Result Presentation\\nWe will rewrite Section 5.2 to provide a more detailed comparison with baselines, covering all instances in the dataset for a comprehensive evaluation.\\n\\n------\\n\\n### 6. Ten Tasks from the Feynman Dataset\\nThank you for raising this concern. Our intention was to demonstrate the effectiveness of the proposed method on challenging instances from the Feynman dataset, as summarized in Table 2. We will include a detailed learning comparison over these selected hard instances in a future revision.\\n\\n### 7. Future extension to other base methods\\nThanks for your suggestion We will try to incorporate this idea into a wider range of baselines to show the effectiveness of the proposed idea.\", \"title\": \"Greatly appreciate for your constructive feedback!\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for your response. The proposed improvements could strengthen future revisions of this work. To address the theoretical-practical gap, I would encourage analyzing how the group size parameter and distribution of equivalent expressions affect performance through ablation studies. Additionally, comprehensive experiments on standard benchmarks could better demonstrate the empirical benefits of leveraging equivalent expressions.\\n\\nGiven that these important changes remain to be implemented, I maintain my score.\"}", "{\"title\": \"Thank you for your valuable feedback.\", \"comment\": \"### 1. Limited Evaluation Scope\\nThank you for your feedback on the experimental evaluation. We appreciate your suggestion and will include the mentioned datasets in a future revision to broaden the evaluation scope.\\n\\n------\\n### 2. Comparison with Recent Baselines\\nThank you for bringing recent methods to our attention. The proposed module is currently applied to the classic DSR method. We believe it can also be applied to and compared against recent methods in symbolic regression, and we plan to include such comparisons in future work.\\n\\n------\\n### 3. Insufficient Theoretical Analysis\\nWe have included a detailed theoretical analysis in Appendix B, focusing on the improvement of the empirical variance of the policy gradient estimator. Could you provide additional details or suggestions on what further theoretical analysis could help demonstrate the effectiveness of the proposed module?\\n4. Impact of Hyperparameters on the Proposed Module\\nThank you for raising this concern. We will perform an ablation study to evaluate the impact of group size on the estimated policy gradient value and include this analysis in a future revision.\\n\\n------\\n### 5. Potential Bias with Different Group Sizes\\nWe appreciate your comments on the effect of different group sizes. In Theorem 1, we demonstrate that the new objective (over probability q) is equivalent to the original objective (over probability p), indicating that group size does not theoretically affect the model. However, empirically, using a maximum group size to sample equations (rather than considering all equations in the group) introduces estimation bias and variance. We will conduct additional ablation studies to analyze this effect in detail in future work.\\n\\n------\\n### 6. Experimental Analysis\\nThank you for your concerns regarding the experimental analysis. We acknowledge that the current experimental setting and results could be presented more clearly. In the future, we will carefully revise the content to ensure clarity and provide a thoroughly proofread version.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Thanks for your constructive feedback!\", \"comment\": \"### 1. Limited Evaluation Dataset and Baselines\\nThank you for your feedback regarding the experimental evaluation. We appreciate your suggestion and will incorporate the mentioned datasets and recent baselines in a future revision to broaden the evaluation scope. We believe the proposed module is well-suited for application to and comparison with state-of-the-art methods in symbolic regression. Such comparisons will be included in our future work. \\n\\nThanks for the suggestions. We will extend the analysis in Figure 3 by experimenting with different neural network architectures.\\n\\n------\\n\\n### 2. Notation Confusion\\nThank you for highlighting the issues with our notation. We acknowledge the inconsistencies in both the notation and its definitions and will address these in a future revision. To clarify, with some slight abuse of notation, we use $\\\\phi$ to denote a group of sequences that can be transformed into the equation $\\\\phi$ (i.e., sequences that yield the same reward value). In line 181, the term \\\"all possible sequences\\\" refers to the entire search space of expressions.\"}", "{\"summary\": \"This paper presents Deep Symbolic Regression via Reasoning Equivalent Expressions (DSR-REX), an enhancement to deep reinforcement learning-based symbolic regression (DSR). The key innovation is leveraging numerically equivalent mathematical expressions to reduce policy gradient estimate variance while maintaining unbiasedness. The method incorporates a symbolic reasoning module that generates equivalent expressions through mathematical transformations, leading to improved convergence and performance compared to baseline deep RL methods. The authors provide theoretical guarantees for their approach and demonstrate empirical improvements on several datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Interesting approach to leveraging equivalent expressions for variance reduction in symbolic regression, with supporting theoretical analysis\", \"The methods to reason and find equivalent expressions are straightforward and fast, making them easy-to-use for future works.\"], \"weaknesses\": [\"Limited evaluation scope using primarily trigonometric datasets and a small subset of Feynman equations, rather than standard benchmarks like SRBench (all Feynman equation, black-box datasets)\", \"Comparison against outdated baselines (DSR, neural guided GP) rather than current SOTA methods like PySR, uDSR, E2E, TPSR, and SPL\", \"Insufficient analysis of how the theoretical guarantees translate to practical scenarios, particularly regarding the sampling distribution of equivalent expressions\", \"Lack of ablation studies on the impact of different group sizes and reasoning rules\"], \"questions\": \"1. The theoretical guarantees assume fair sampling of all equivalent sequences for each expression, but in practice this may not hold. Consider two expressions \\u03c6\\u2081 and \\u03c6\\u2082, where \\u03c6\\u2081 finds only two equivalent forms, while \\u03c6\\u2082 finds N>>2 equivalent forms through the designed reasoning rules. This could lead to q(\\u03c6\\u2082) > q(\\u03c6\\u2081) simply due to having more discoverable equivalent forms (e.g., there are lots of trigonometric equivalences compared to other operations), rather than actual learning preference. How does this potential bias affect the training process?\\n\\n2. What is the value of max group size parameter, and how sensitive is the method to this parameter?\\n\\n3. Could you clarify if the results shown in Fig. 2 (right) are averaged across all benchmark datasets or specific ones?\\n\\n4. How are the 10 Feynman datasets selected? Why not evaluate on standard benchmarks like SRBench and compare against more recent SOTA methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes DSR-REX, which improves the performance of the algorithm by embedding mathematical laws and equalities into deep models. Moreover, the variance of the gradient estimator is theoretically guaranteed to decrease. Finally, in various experimental tests, DSR-REX shows good performance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"##### Strengths\\n\\n1. In this paper, DSR-REX has achieved good results in comparison with other baselines.\\n2. Achieving variance reduction of the gradient estimator with a theoretical guarantee.\", \"weaknesses\": \"##### Weaknesses\\n\\n1. I think the chapter arrangement of this article is unreasonable. For example, the related work is actually behind the method, which makes the article very messy. I spent an hour not understanding what the author was doing. I think The **Related work** can be put behind the **Introduction**. The **Motivation ** part of **METHODOLOGY ** can be appropriately deleted and put into the **introduction ** part...\\n2. Many related works are not mentioned.\\n**Reinforcement Learning for Scientific Discovery.** such as TPSR(Discovering mathematical formulas from data via gpt-guided Monte Carlo tree search), SR-GPT(Discovering mathematical formulas from data via gpt-guided monte carlo tree search), RSRM(Reinforcement Symbolic Regression Machine)...\\n\\n**Symbolic Regression with Domain Knowledge:** NSRwH(Controllable neural symbolic\\nregression), MLLM-SR(MLLM-SR: Conversational Symbolic Regression base Multi-Modal Large Language Models\\n), LLM-\\nSR(LLM-SR: Scientific Equation Discovery via Programming with Large Language Models)...\\n3. This article only mentioned the symbolic regression method using reinforcement learning, but symbolic regression is not the only one, other methods should appear in the comparison method, e.g. SNIP,(https://doi.org/10.48550/arXiv.2310.02227) MMSR(https://doi.org/10.1016/j.inffus.2024.102681), DSO(NGGP)(https://doi.org/10.48550/arXiv.2111.00053), TPSR(Transformer-based Planning for Symbolic Regression), and so on \\n4. The author should test your algorithm on the SRBench dataset.\", \"questions\": \"##### Questions\\n\\n1. The third innovation point of the paper, 'Encours-ages RL exploration of different symbolic forms in the search space of all expres- sions' Is to make the probability of model sampling more random? Like adding entropy loss?\\n2. Article line 151, additional sequences generated by a symbolic expression reasoning module. How does symbolic expression reasoning module generate additional sequences and what is their role?\\n3. In Figure 1, the **Reasoned expressions** can improve the performance of the algorithm. Please analyze the reasons for the improvement in the performance of the algorithm more carefully in the article.\\n4. Although your idea is good, I think it is inappropriate for the words \\\"high-level idea\\\" to appear in an academic paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your valuable feedback!\", \"comment\": \"### 1. Paper Presentation and Writing\\nThank you for your feedback regarding the paper's organization. Our intention was to first present the novel methodology and then contrast it with prior work to highlight our contributions. In the revised version, we will reorganize the paper to improve clarity and flow. \\n\\n### 2. Missing Related Work \\nWe appreciate your comments regarding the related work section. We acknowledge that a more comprehensive review of the literature would provide a better context for our contributions. In future revisions, we will include discussions on the additional models you mentioned. \\n\\n### 3. Clarification on the Third Innovation Point \\nWe apologize for any confusion caused by the description of the third innovation point. Our intention was to convey that the extra equations assist the Deep RL model in exploration. This is conceptually similar to the idea of an \\\"empty loss,\\\" which prevents the model from becoming overly confident in specific expressions. We will carefully revise this section to ensure the message is rigorous and clear. \\n\\n### 4. Clarification on Line 151 \\nThe position of the proposed module is described in Line 159, while the mechanism behind it is explained in Line 240. We will ensure this relationship is explicitly referenced to avoid any confusion in the revised manuscript. \\n\\n### 5. Analysis of the Improvement Brought by the Proposed Module \\nThank you for your valuable feedback. The theoretical benefits of the proposed module are outlined in Theorems 1 and 2. The key idea is that the module generates additional equations, reducing the variance of sample estimates and thereby improving the quality of estimation. This, in turn, enhances the model's stability. We will expand the analysis in the revised manuscript to provide further clarity and detail. \\n\\n### 6. Inappropriate Use of \\\"High-Level Idea\\\" \\nThank you for pointing out the inappropriate use of terminology. We will replace \\\"high-level idea\\\" with more precise and formal phrasing to maintain academic rigor throughout the paper.\"}", "{\"title\": \"Official Review by Reviewer ej1f\", \"comment\": \"Thank you for your response. As the experiment is not yet complete, I will not adjust my evaluation score at this time. However, I look forward to your future updates, including detailed information on evaluations across multiple datasets and baselines, as well as an ablation study on hyperparameters.\"}", "{\"summary\": \"The performance comparison between DSR-REX and previous models like DSR and NGGP highlights its superiority. The complexity of the case study equations effectively showcases the model\\u2019s symbolic regression capabilities. Additionally, the paper provides clear details of the algorithm and experimental processes.\", \"weaknesses\": \"The paper lacks comparisons with other tasks beyond DSR, such as SPL[1], TPSR[2], and uDSR[3], across different benchmarks like SRbench[4]. It also does not discuss how this method could be applied to these models.\\n\\n[1]Sun F, Liu Y, Wang J X, et al. Symbolic physics learner: Discovering governing equations via monte carlo tree search[J]. arXiv preprint arXiv:2205.13134, 2022. \\n\\n[2]Shojaee P, Meidani K, Barati Farimani A, et al. Transformer-based planning for symbolic regression[J]. Advances in Neural Information Processing Systems, 2023, 36: 45907-45919. \\n\\n[3]Landajuela M, Lee C S, Yang J, et al. A unified framework for deep symbolic regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 33985-33998. \\n\\n[4]La Cava W, Orzechowski P, Burlacu B, et al. Contemporary symbolic regression methods and their relative performance[J]. arXiv preprint arXiv:2107.14351, 2021.\", \"questions\": \"1. Could you provide DSR-REX\\u2019s results on SRBench[4] or SRSD-Feynman[5] to assess the model's stability under varying noise levels and complexities with scientific implications?\\n\\n2. What level of improvement might your method bring if applied to other models like SPL[1], TPSR[2], and uDSR[3] for training?\\n\\n3. Could you share the recovery rate for each expression in Chapter 5.2, Experimental Analysis?\\n\\n4. Could you include an ablation study on parameters in Appendix Section D?\\n\\n5. Could you compare DSR-REX with models like SPL, TPSR, and uDSR [1, 2, 3] in the experiments in Chapter 5\\uff1f\\n\\n[1]Sun F, Liu Y, Wang J X, et al. Symbolic physics learner: Discovering governing equations via monte carlo tree search[J]. arXiv preprint arXiv:2205.13134, 2022.\\n\\n[2]Shojaee P, Meidani K, Barati Farimani A, et al. Transformer-based planning for symbolic regression[J]. Advances in Neural Information Processing Systems, 2023, 36: 45907-45919. \\n\\n[3]Landajuela M, Lee C S, Yang J, et al. A unified framework for deep symbolic regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 33985-33998. \\n\\n[4]La Cava W, Orzechowski P, Burlacu B, et al. Contemporary symbolic regression methods and their relative performance[J]. arXiv preprint arXiv:2107.14351, 2021. \\n\\n[5]Matsubara Y, Chiba N, Igarashi R, et al. Rethinking symbolic regression datasets and benchmarks for scientific discovery[J]. arXiv preprint arXiv:2206.10540, 2022.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The performance comparison between DSR-REX and previous models like DSR and NGGP highlights its superiority. The complexity of the case study equations effectively showcases the model\\u2019s symbolic regression capabilities. Additionally, the paper provides clear details of the algorithm and experimental processes.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your valuable feedback.\", \"comment\": \"### 1. Limited Evaluation Dataset and Baselines\\nThank you for your valuable feedback on the experimental evaluation. We greatly appreciate your suggestions and will incorporate the recommended datasets and recent baselines in a future revision to enhance the comprehensiveness of our evaluation. Additionally, we will conduct further experiments on metrics such as recovery rate to provide a more robust analysis. \\n\\nWe also believe that the proposed module can be empirically applied to and compared against other existing methods. We will include these comparisons in our future revisions to strengthen the empirical evaluation. \\n\\n---\\n\\n### 2. Ablation Study on Hyperparameters in the Proposed Module \\nThank you for highlighting the need for an ablation study. We acknowledge the importance of evaluating the impact of hyperparameters, such as group size, on the estimated policy gradient value. We will perform this analysis and include the results in a future revision to provide deeper insights into the behavior of the proposed module.\"}" ] }
2BtFKEeMGo
Learning from weak labelers as constraints
[ "Vishwajeet Agrawal", "Rattana Pukdee", "Maria Florina Balcan", "Pradeep Kumar Ravikumar" ]
We study programmatic weak supervision, where in contrast to labeled data, we have access to \emph{weak labelers}, each of which either abstains or provides noisy labels corresponding to any input. Most previous approaches typically employ latent generative models that model the joint distribution of the weak labels and the latent ``true'' label. The caveats are that this relies on assumptions that may not always hold in practice such as conditional independence assumptions over the joint distribution of the weak labelers and the latent true label, and more general implicit inductive biases in the latent generative models. In this work, we consider a more explicit form of side-information that can be leveraged to denoise the weak labeler, namely the bounds on the average error of the weak labelers. We then propose a novel but natural weak supervision objective that minimizes a regularization functional subject to satisfying these bounds. This turns out to be a difficult constrained optimization problem due to discontinuous accuracy bound constraints. We provide a continuous optimization formulation for this objective through an alternating minimization algorithm that iteratively computes soft pseudo labels on the unlabeled data satisfying the constraints while being close to the model, and then updates the model on these labels until all the constraints are satisfied. We follow this with a theoretical analysis of this approach and provide insights into its denoising effects in training discriminative models given multiple weak labelers. Finally, we demonstrate the superior performance and robustness of our method on a popular weak supervision benchmark.
[ "unsupervised learning", "weak supervision", "learning theory" ]
Accept (Poster)
https://openreview.net/pdf?id=2BtFKEeMGo
https://openreview.net/forum?id=2BtFKEeMGo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vsg7SQvy71", "qcl0lx8jih", "lapXRsxVE0", "iTtW88g0kE", "dLCdH87xCg", "c0EIfy5cGT", "WEHvV51TbC", "Siezr3vAxL", "S7x1LECtTa", "PNnBzHHdtG", "LzU88LVH3g", "LLexoKVQQk", "KurmHMr845", "GAGp8iavet", "EIr9uEh62l", "EGw01V18eP", "BMSdjQumxH", "69gDIz9EzX", "25wtu2bFKv" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1732654565671, 1732500098947, 1732654681566, 1732415473753, 1732415871587, 1730579410837, 1732749011242, 1734858312933, 1732414598428, 1732415109235, 1732655260223, 1733119158828, 1732415323858, 1730683864274, 1737523493975, 1732414535978, 1730600826555, 1732505841528, 1730352358349 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Reviewer_chyr" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Reviewer_chyr" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Area_Chair_Trhd" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Reviewer_chyr" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Reviewer_X6nV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2254/Authors" ], [ "ICLR.cc/2025/Conference/Submission2254/Reviewer_NPc4" ], [ "ICLR.cc/2025/Conference/Submission2254/Reviewer_NPc4" ], [ "ICLR.cc/2025/Conference/Submission2254/Reviewer_jPxa" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer chyr Reply (1/3)\", \"comment\": \"**C1: There are several other weak supervision baselines better than Snorkel**\\n\\nWe want to highlight that the main contribution of our paper is to provide a novel, simple and principled method of learning from weak labelers by incorporating information about their average errors and avoiding making modeling assumptions about them. Our method is thus qualitatively different from any prior work in this area. \\n\\nThe purpose of experiments was to validate our approach and provide a sanity-check that it performs at least competitively compared to prior approaches. The choice of our baselines covers two broad approaches - one that first infer pseudo labels using weak labels and then subsequently train a downstream model (Snorkel, Majority vote), and another - that trains an end to end model (LoL). There are several other methods in the prior work but many of them require many hyperparameters, or are limited to binary classification settings, or require auxiliary sources of information, such as dependency graphs between weak labelers. For instance, the popular triplet method [1] was only proposed for binary classification. Here, we provide an additional comparison of [1] vs ours on datasets with 2 classes\\n\\n| Data | Triplet Mean | Ours (V) |\\n|:------------|:---------------|:--------------|\\n| Bioresponse | $57.2\\\\pm2.0$ | $62.6\\\\pm1.8$ |\\n| CDR | $65.6\\\\pm0.7$ | $67.9\\\\pm1.1$ |\\n| IMDB | $76.0\\\\pm1.6$ | $72.5\\\\pm2.4$ |\\n| Yelp | $74.0\\\\pm1.3$ | $75.3\\\\pm1.5$ |\\n| Youtube | $82.2\\\\pm1.7$ | $90.2\\\\pm3.1$ |\\n\\n[1] Daniel Y. Fu, Mayee F. Chen, Frederic Sala, Sarah M. Hooper, Kayvon Fatahalian, and Christopher R\\u00e9. 2020. Fast and three-rious: speeding up weak supervision with triplet methods. In Proceedings of the 37th International Conference on Machine Learning (ICML'20), Vol. 119. JMLR.org, Article 307, 3280\\u20133291.\\n\\n\\n**C2: I am still not convinced of the value of the theoretical results either.**\\n\\nWe believe our proposed method and the corresponding theoretical analysis offer meaningful contributions, as they address a novel setting for learning from weak labelers\\u2014a perspective highlighted as valuable by reviewers (jPxa, NPc4, X6nV). We are more than willing to provide additional clarification or further elaboration on our theoretical contributions if that would be helpful.\"}", "{\"comment\": \"There are several other weak supervision baselines better than Snorkel. The details of the experimental protocol are not given and it is not clear why in Figure 2 there is no effect due to noise, at some point (level of noise) things will start to break down. I am still not convinced of the value of the theoretical results either. What's the trade-off between coverage, noise levels, and number of labelers?\"}", "{\"title\": \"Response to Reviewer chyr Reply (2/3)\", \"comment\": \"**C3: The details of the experimental protocol are not given**\\n \\nFor our main results in Table 1, we provided details in our main text in section \\u201cExperiment Details\\u201d. To repeat, for all methods, we used a two-layer neural network of hidden size 16 on pre-trained BERT embeddings and trained it on full batch gradient descent for 500 epochs using Adam optimizer. We used a validation set of size 100 for hyperparameter tuning. Two hyperparameters were used: learning rate in [0.01, 0.003, 0.001] and weight decay (L2 regularization) in [0.01, 0.003, 0.001]. \\n\\nFor experiments on noisy bounds (Figure 2), we used the exact same setup, except we add random noise on the true bounds sampled from a uniform distribution. \\n\\nFor the additional experiments on simulating different scenarios for weak labelers as requested by the reviewer, we used the exact same setup for training except we filtered or duplicated the weak labelers as described in our earlier response.\\n\\n**C4: It is not clear why in Figure 2 there is no effect due to noise, at some point (level of noise) things will start to break down.**\\n\\nWe want to clarify a misunderstanding here. In Figure 2, we can see that as the noise level increases, the accuracies do decrease across all datasets. Although for some datasets the reduction is minor. Since we added noise sampled uniformly between $[-a, a]$ where $a$ ranged till $0.5$, the error bounds on average are still informative, and so it is possible that performance is not reduced much for some datasets. We see this as a strength as it speaks to the robustness of our method. Here we provide another experiment by adding a fixed noise to the true $\\\\eta$ for every weak labeler. One can clearly see a clear trend that performance decreases as noise is increased. When noise is positive there is a clear trend of performance being reduced as the corresponding constraints are relaxed. When the noise is negative, sometimes the performance is unaffected (CDR, Chemprot, IMDB, Youtube) because the constraints are tightened and since our loss trades off between satisfying the constraints and minimizing the L2 regularization, it may still find a good classifier.\\n\\n| Data \\\\Noise | -0.4 | -0.2 | 0.0 | 0.2 | 0.4 | 0.6 | 0.8 |\\n|:------------|:-------------|:-------------|:--------------|:-------------|:-------------|:-------------|:-------------|\\n| Bioresponse | $59.3\\\\pm0.9$ | $64.0\\\\pm1.4$ | $66.8\\\\pm1.8$ | $54.3\\\\pm2.3$ | $46.8\\\\pm2.2$ | $44.8\\\\pm2.0$ | $46.4\\\\pm1.9$ |\\n| CDR | $68.8\\\\pm1.2$ | $69.1\\\\pm2.1$ | $68.4\\\\pm1.3$ | $54.4\\\\pm2.3$ | $37.5\\\\pm3.1$ | $34.9\\\\pm3.1$ | $35.6\\\\pm3.7$ |\\n| Chemprot | $56.4\\\\pm1.3$ | $56.0\\\\pm2.3$ | $54.3\\\\pm1.0$ | $45.6\\\\pm7.5$ | $22.5\\\\pm3.9$ | $3.8\\\\pm0.9$ | $0.3\\\\pm0.2$ |\\n| IMDB | $69.9\\\\pm4.1$ | $67.9\\\\pm5.5$ | $70.8\\\\pm2.9$ | $56.0\\\\pm0.6$ | $38.5\\\\pm5.1$ | $40.6\\\\pm3.5$ | $41.8\\\\pm3.1$ |\\n| Semeval | $71.1\\\\pm4.7$ | $77.1\\\\pm4.8$ | $76.2\\\\pm10.9$ | $74.3\\\\pm5.6$ | $65.1\\\\pm9.1$ | $57.1\\\\pm8.9$ | $26.6\\\\pm4.3$ |\\n| Trec | $55.1\\\\pm5.2$ | $61.6\\\\pm4.0$ | $67.1\\\\pm6.5$ | $57.4\\\\pm9.3$ | $49.6\\\\pm8.5$ | $36.3\\\\pm3.4$ | $10.4\\\\pm4.6$ |\\n| Yelp | $67.6\\\\pm0.3$ | $66.4\\\\pm6.5$ | $70.0\\\\pm5.9$ | $59.8\\\\pm1.5$ | $44.2\\\\pm2.0$ | $47.8\\\\pm0.9$ | $43.2\\\\pm7.0$ |\\n| Youtube | $83.7\\\\pm2.5$ | $85.5\\\\pm3.1$ | $83.8\\\\pm6.0$ | $71.6\\\\pm4.0$ | $41.5\\\\pm7.2$ | $32.0\\\\pm6.4$ | $27.7\\\\pm5.6$ |\\n| Mean | $66.5\\\\pm2.5$ | $68.4\\\\pm3.7$ | $69.7\\\\pm4.5$ | $59.2\\\\pm4.1$ | $43.2\\\\pm5.1$ | $37.2\\\\pm3.6$ | $29.0\\\\pm3.8$ |\\n\\nThe numbers show mean +- standard deviation across 5 random train - val splits of the data. We trained using Adam optimizer for $500$ epochs using a fixed learning rate of $0.01$ and tuned L2 weight decay in $[0.001, 0.003, 0.01]$.\"}", "{\"title\": \"Response to Reviewer chyr (2/2)\", \"comment\": \"**Scenario 2**: For each dataset, the highest coverage weak labeler with accuracy less than the median accuracy is duplicated 2 * m times where m is the number of weak labelers.\\n\\n| Dataset | Ours | Snorkel | Maj Vote | LoL(S) |\\n|:-------------|:-------------|:-------------|:---------------|:-------------|\\n| Bioresponse | $64.4\\\\pm1.0$ | $54.4\\\\pm2.4$ | $55.8\\\\pm1.7$ | $52.3\\\\pm1.0$ |\\n| CDR | $67.7\\\\pm1.0$ | $61.0\\\\pm1.1$ | $67.6\\\\pm2.3$ | $64.0\\\\pm1.5$ |\\n| Chemprot | $53.1\\\\pm0.4$ | $51.4\\\\pm2.5$ | $55.0\\\\pm2.9$ | $47.3\\\\pm0.8$ |\\n| IMDB | $74.9\\\\pm0.9$ | $74.8\\\\pm1.6$ | $73.6\\\\pm2.2$ | $69.7\\\\pm0.8$ |\\n| Semeval | $68.3\\\\pm3.2$ | $59.0\\\\pm1.9$ | $61.6\\\\pm0.7$ | $54.2\\\\pm2.4$ |\\n| Trec | $60.0\\\\pm0.9$ | $33.6\\\\pm1.4$ | $36.2\\\\pm1.2$ | $33.7\\\\pm2.1$ |\\n| Yelp | $72.7\\\\pm0.4$ | $75.0\\\\pm2.5$ | $75.0\\\\pm2.7$ | $67.6\\\\pm0.4$ |\\n| Youtube | $85.8\\\\pm1.6$ | $64.8\\\\pm2.6$ | $76.8\\\\pm2.7$ | $75.0\\\\pm1.8$ |\\n| mean | $68.4\\\\pm1.2$ | $59.2\\\\pm2.0$ | $62.7\\\\pm2.0$ | $58.0\\\\pm1.3$ |\\n| Average Rank | $1.3$ | $3.1$ | $1.9$ | $3.7$ |\\n\\nOur method continues to perform better for most datasets in both of these scenarios.\\n\\n3. **Theoretical results do not explain how learning in the proposed setup leads to a classifier with good generalization error. It naively depends on the summation of errors of individual labelers. On the other hand several of the baselines provide results showing how the labelers cancel their noises and eventually lead to a classifier with comparable generalization error to a model trained on clean labels. They do make certain assumptions on labelers to get there. What can be said more specifically in this setup with similar assumptions? Even a naive majority vote with labelers with random noise of \\u03b7j could be shown to give good generalization error going down with the number of samples and the number of weak labelers.**\\n\\nWe believe our theoretical results do provide insights into how our proposed setup can lead to a classifier with good generalization error beyond naive summation of errors of individual labelers. We presented an argument based on the agreement region (Figure 1); where one can intuitively understand how multiple weak labeler constraints can provide a denoising effect. Further we also provided a bound based on conflict between different weak labelers (Theorem 4.5). The theorem provides a bound which is smaller than simply summing the weak labeler errors by considering the effect of conflicting regions. We also show that a classifier with low error on the coverage set also has low error across the entire space, assuming that the underlying probability distribution is smooth (Theorem 4.6). To this end, our first two results suggest that weak labeler constraints have a denoising effect leading to a low error in the coverage set and the final result suggest that we would have a low error on the whole space as well. \\n\\nWhile noise assumption can lead to a better generalization in alternative approaches (e.g. majority vote), this has no impact on our error bound since our proposed method does nott rely on any additional assumption on the weak labeler noise rather than the bound on the error of each weak labelers. We see this a strength rather than a weakness since this implies that our method is more general and more robust to the setting when the noise assumption does not hold. \\n\\n4. **I do not find the empirical results convincing. Ours(C) and Ours(V) rely on either hand tuning \\u03b7j or estimating from validation data. Did you estimate source quality in other WS setups using the same validation data?**\\n\\nIn Ours(C), we set $\\\\eta_j$ to be a constant value $\\\\eta$ for all weak labelers then we pick the $\\\\eta$ that has the best validation accuracy. In Ours(V), we estimate $eta_j$ using the label from the validation set. Other WS baselines do not have a mechanism to use validation sets to estimate the source accuracy, for instance the method used by Snorkel and Majority vote are based on aggregating weak labels from different weak labelers. However these methods still use validation set for hyperparameter selection and for early stopping when training on the soft-labels. Therefore we think that the comparison is still fair.\\n\\n\\n5. **Can you provide some simulations with different \\u03b7j and labeling sources outputs, clearly showing how the method works in different scenarios?**\\n\\nIn Figure 2, we provided an ablation where we added a uniform random noise to \\u03b7j. It shows that our method is quite robust to misspecified \\u03b7j. In response to the point 2 above, we provided results for the two more scenarios, 1) where we used a subset of weak labelers that are disjoint from each other 2) we duplicated a weak labeler many times. In both of these scenarios it can be seen that our method performs better than the baselines.\"}", "{\"title\": \"Response to Reviewer jPxa\", \"comment\": \"We thank the reviewer for their comments on the paper. Below we would like to address some concerns raised by them.\\n\\n1. **The problem addressed in this paper is certainly interesting, but as the authors themselves mention, it has strong connections to areas like crowdsourcing, noisy label learning, semi-supervised learning, and ensemble learning. Each of these fields already has well-established techniques that could be adapted, with only minor modifications, to solve the problem presented here. However, the paper dismisses these connections too quickly, with phrases like \\\"not directly applicable to our setting\\\" and \\\"relies on the implicit inductive bias.\\\" I find this explanation insufficient, as it limits the paper's significance and impact. A deeper exploration of these connections, along with additional comparative experiments, would have been much more convincing.**\\n\\nWe did not find these related works to be readily applicable to our setting for various reasons.\\n\\nIn noisy label learning, prior works usually make stringent assumptions about noise. For instance, [1] consider a single noisy label source where noise at any input is assumed to be independent, in particular each label is assumed to be passed through a noisy channel. In our case weak labelers are often in the form of deterministic rules/ classifiers, we cannot make such assumptions about the noise model. Specifically, we cannot assume that the errors of a weak labeler on different data points are independent. In fact the independence assumption is almost always violated, a rule/ classifier is usually smooth over the input space implying that error that is disagreement of weak labeler from target classifier will be highly spatially correlated. \\n\\nIn crowdsourcing, we are usually given multiple weak labels for each input. However, in our setting, we may only have one or very few weak labels for any input, rendering techniques that rely on multiple weak labels from crowdsourcing not applicable.\\n\\n[1] Natarajan, N., Dhillon, I. S., Ravikumar, P. K., & Tewari, A. (2013). Learning with noisy labels. Advances in neural information processing systems, 26.\\n\\n\\n2. **The authors propose two objectives and frame the problem as a constrained optimization task, introducing corresponding optimization methods. While the paper's main contribution is centered on optimization through projection, I have to admit that I'm not an expert in optimization, this approach feels somewhat intuitive. It doesn't strike me as a particularly novel or non-intuitive solution.**\\n\\nIn both of the settings - constraints on classifier and constraints on distribution, we required multiple non trivial steps to reduce the problem into a tractable algorithm. The first step of introducing a lifted variable $q$ and using an alternating minimization procedure of alternating between gradient descent and projection onto constraint sets may look similar to an expectation maximization (EM) style algorithm, but without an efficient projection step such an alternating minimization remains infeasible. \\n\\nFor instance in the case of constraints on classifiers (Section 3.1), the optimizing variable $q$ is continuous whereas the constraints are discrete involving argmax over $y$ of $q(x)_y$ for each $x$. This is not a standard form, and requires a non-trivial observation of reducing it to an optimization on a discrete variable $\\\\text{clf}(q)$ (Proposition 3.5) which then results in a standard ILP. Given $\\\\text{clf}(q)$, then estimating the continuous $q$ (as described in Appendix D.2) in an efficient manner was also not trivial. For the case of constraints on distribution (Section 3.2), the proposition 3.7 and subsequent development of an efficient and scalable algorithm also requires multiple non trivial steps.\\n\\n3. **Additionally, regarding the problem setup and experiments, I would like to see more details about the coverage rate and noisy rate of each weak labeler and the collective coverage of all labelers. This seems crucial to the model\\u2019s performance and yet isn\\u2019t discussed in enough detail.**\\n\\nWe have provided the number of classes, number of weak labelers and their average error and coverage in Table 2 in Appendix G in the revised version.\"}", "{\"summary\": \"This paper proposes a method for learning from programmatically generated weak labels by treating weak labelers as constraints rather than relying on traditional generative models. This approach uses side information in the form of bounds on weak labelers\\u2019 error rates, which are applied as constraints within a constrained optimization framework. The authors introduce an alternating minimization algorithm to iteratively project model predictions onto the feasible region defined by these constraints. They evaluate the method on multiple weak supervision benchmarks and demonstrate that it improves upon traditional weak supervision techniques, such as Snorkel, by incorporating this constraint-based learning approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper presents an interesting alternative to traditional generative models for weak supervision. By viewing weak labelers as error-bound constraints, the approach avoids common assumptions about label independence or probabilistic structure, which may not hold always.\\n\\n2. The authors provide an upper bound on error (in the union of covered region by all weak labelers) of any predictor satisfying all the constraints. The upper bound is summation of upper bounds on the errors in each weak labelers and the probability of region where weak labelers have a conflict. Implying better bound with more conflict.\\n\\n3. The method is evaluated on weak supervision benchmarks, where it demonstrates improved accuracy over other weak supervision methods.\", \"weaknesses\": \"1. The model relies on accurate estimates of weak labelers\\u2019 error bounds to define constraints. However, obtaining these estimates is challenging, and inaccurate bounds could lead to suboptimal model performance. In the experiments these are estimated using validation data, which could also be hard to obtain. In contrast several baselines in weak supervision (those based on generative modeling) estimate 'labelers quality' using only the unlabeled samples.\\n\\n2. Assumptions made on the weak labelers are not clear. In particular, what are the assumptions that the labelers have to satisfy to ensure the method will work as expected and the theoretical results will hold. Naively putting an upper bound of $\\\\eta_j$ on each labeler could lead to several scenarios e.g. all could have $\\\\eta_j$ error in different parts of the input space (~independence) or highly overlapping parts (~highly correlated). Could you explain how the method and results will turn out in these two extremes? \\n\\n3. Theoretical results do not explain how learning in the proposed setup leads to a classifier with good generalization error. It naively depends on the summation of errors of individual labelers. On the other hand several of the baselines provide results showing how the labelers cancel their noises and eventually lead to a classifier with comparable generalization error to a model trained on clean labels. They do make certain assumptions on labelers to get there. What can be said more specifically in this setup with similar assumptions? Even a naive majority vote with labelers with random noise of $\\\\eta_j$ could be shown to give good generalization error going down with the number of samples and the number of weak labelers.\", \"questions\": \"Please see the weaknesses above. And,\\n1. I do not find the empirical results convincing. Ours(C) and Ours(V) rely on either hand tuning $\\\\eta_j$ or estimating from validation data. Did you estimate source quality in other WS setups using the same validation data? \\n\\n2. Can you provide some simulations with different $\\\\eta_j$ and labeling sources outputs, clearly showing how the method works in different scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-Up on Addressing Feedback\", \"comment\": \"Dear Reviewer chyr,\\n\\nWe want to kindly ask you to reconsider your score if we have addressed your concerns. We are happy to engage with any other concerns you have.\"}", "{\"metareview\": \"This paper addresses the problem of learning from programmatic weak supervision, where labeled data is replaced with weak labelers that either abstain or provide noisy labels. Traditional methods rely on latent generative models, which often depend on assumptions like conditional independence that may not hold in practice. This paper proposes a constraint-based framework for weak supervision, which can leverage side information about weak labelers' accuracy bounds to train models effectively. Experimental results demonstrate the effectiveness of the proposed method.\\n\\nThe merits of this paper lie in that it introduces a learning objective minimizing a regularization function while satisfying weak labelers' error constraints. It develops a scalable alternating minimization algorithm for projecting model outputs to satisfy these constraints. Both theoretical insights and experimental validation are supportive.\", \"additional_comments_on_reviewer_discussion\": \"This paper finally receives the scores of 8, 6, 6, 6. The authors' rebuttal has addressed the reviewers' concerns, so finally all the reviewers gave a positive score to this paper. Considering this situation, I recommend accepting this paper.\"}", "{\"title\": \"Response to Reviewer X6nV\", \"comment\": \"We thank the reviewer for their positive comment on the paper. We address the mentioned weaknesses:\\n\\n1. **The paper misses a conclusion and future extensions paragraph**\\n\\nThank you for pointing it out, we added a conclusion and future extension paragraph in the uploaded revised version.\\n\\n2. **On some datasets, the margin of the proposed methods and competing methods are small. Would it be helpful to run some statistical tests to compare their performances?**\\n\\nWe agree that statistical tests would provide a more rigorous basis for comparison. Nevertheless, we remark that with our current result, the proposed method doesn\\u2019t significantly underperform any baseline, while performing significantly better than all on some datasets. In our results table, we also bolded methods that are within a standard error of the best-performing method.\"}", "{\"title\": \"Response to Reviewer NPc4\", \"comment\": \"We thank the reviewer for the their positive comments on the paper. We addressed the missing references ?? in the appendix in the revised version of the paper. We also incorporated suggestions on the plot.\\n\\nAs asked by the reviewer we did another experiment with L1 regularization and provide the results below.\\n\\n| Dataset| L1 reg | L2 reg |\\n|:--------------------|:-------------------------|:-------------------------|\\n| Bioresponse | $\\\\textbf{61.8}\\\\pm 1.6$ | $\\\\textbf{62.9}\\\\pm 1.0$ |\\n| CDR | $65.1\\\\pm 0.7$ | $\\\\textbf{68.2}\\\\pm 0.5$ |\\n| Chemprot | $49.4\\\\pm 0.7$ | $\\\\textbf{53.4}\\\\pm 0.7$ |\\n| IMDB | $\\\\textbf{72.4}\\\\pm 0.7$ | $\\\\textbf{72.9}\\\\pm 1.0$ |\\n| Semeval | $73.3\\\\pm 1.5$ | $\\\\textbf{78.6}\\\\pm 2.3$ |\\n| Trec | $55.9\\\\pm 2.3$ | $\\\\textbf{60.8}\\\\pm 2.0$ |\\n| Yelp | $\\\\textbf{74.2}\\\\pm 1.8$ | $\\\\textbf{74.6}\\\\pm 1.5$ |\\n| Youtube | $\\\\textbf{88.0}\\\\pm 2.3$ | $\\\\textbf{88.2}\\\\pm 1.5$ |\\n| Mean | $\\\\textbf{67.5}\\\\pm 1.4$ | $\\\\textbf{69.9}\\\\pm 1.3$ |\\n\\n\\n\\nIn both columns we use either L1 regularization or L2 regularization weight in [0.001, 0.003, 0.01] as a hyperarameter and use the one that achieves the best validation accuracy. L2 regularization performs a few points better than L1 regularization for most datasets.\"}", "{\"title\": \"Response to Reviewer chyr Reply (3/3)\", \"comment\": \"**C5: What's the trade-off between coverage, noise levels, and number of labelers?**\\n\\nWe would like to point out that in our framework, there is no trade-off between the coverage, the noise levels (average errors) and the number of weak labelers since we treat each weak labeler as an individual constraint. On the impact of these factors to the final error bound, our theoretical results suggested that \\n1. As the noise level (average error) is small, we would have a better error bound (Theorem 4.5).\\n2. As there are more weak labelers, denoising effect among them can also lead to a better error bound (Lemma 4.3, Theorem 4.5).\\n3. As the weak labeler has more coverage, the full instance space must be closer to the coverage set and this also leads to a better error bound (Theorem 4.6).\\n\\nOn the empirical side, we have provided 3 additional experiments to investigate what happen when 1) weak labelers have disjoint coverages, 2) duplicated weak labelers, 3) additional experiments when we add a fixed noise to the true error rates. We hope that these additional results help clarify your concerns.\\n\\nBy noise levels if they mean noise in the error bounds, as we showed in Figure 2 and the additional experiment, our method is robust to a certain level of noise especially if the provided error is an underestimate, but at some point the performance will start to break down if the error bounds are too relaxed.\"}", "{\"comment\": \"Thank you for the clarifications. I have revised my scores.\"}", "{\"title\": \"Response to Reviewer chyr (1/2)\", \"comment\": \"We thank the reviewer for providing critical feedback on the paper. We addresses the weaknesses and questions asked by the reviewer as follows:\\n\\n1. **The model relies on accurate estimates of weak labelers\\u2019 error bounds to define constraints. However, obtaining these estimates is challenging, and inaccurate bounds could lead to suboptimal model performance. In the experiments these are estimated using validation data, which could also be hard to obtain. In contrast several baselines in weak supervision (those based on generative modeling) estimate 'labelers quality' using only the unlabeled samples.**\\n\\nOur method is quite robust and works even when the provided error bounds are noisy. This is evident in Figure 2, where we added uniform random noise to the true error rates of each weak labeler to create error bounds, and observed that its performance only decreases slowly as the noise level is increased. The validation set we used to estimate the error bounds in our main experiments was also quite small (only 100 data points), which is not not sufficient to train a good model, as demonstrated in the Sup(V) baseline. In addition, the estimated error are quite noisy compared to the true errors. On the other hand, other baselines also require this validation data for hyperparameter selection and for early stopping to avoid overfitting.\\n\\n2. **Assumptions made on the weak labelers are not clear. In particular, what are the assumptions that the labelers have to satisfy to ensure the method will work as expected and the theoretical results will hold. Naively putting an upper bound of \\u03b7j on each labeler could lead to several scenarios e.g. all could have \\u03b7j error in different parts of the input space (independence) or highly overlapping parts (highly correlated). Could you explain how the method and results will turn out in these two extremes?**\\n\\nContrary to prior work, we do not assume any assumptions on the weak labelers, they can be completely arbitrary and unrelated to each other. Our theoretical results hold as long as the error bound is an upper bound of the true error of each weak labeler since this ensures that the target function satisfies our constraints. Our method however involves relaxing the constrained estimator with an unconstrained one (equation 8) that trades off satisfying constraints and minimizing regularization term. Thus, even if the given error bounds are inaccurate and noisy or less than the true error rates, our algorithm can still work reasonably well as shown in Figure 2.\\n\\nIn the below tables, we show results for the two extremes 1) where we used a subset of weak labelers that are disjoint from each other to simulate error in different parts of input space (independence), 2) we duplicated a weak labeler to simulate the case of labelers being highly correlated.\\n\\n**Scenario 1** : 50% weak labelers are chosen that are as disjoint as possible from each other (determined by a heuristic algorithm.) The following table shows the mean and max IOU (intersection over union) between the coverage sets of pairs of weak labelers in in percentage for the original weak labelers (normal) and the filtered set (disjoint).\\n\\n| dataset | Normal (mean)| Disjoint (mean) | Normal (max) | Disjoint (max) |\\n|:------------|--------------:|----------------:|-------------:|---------------:|\\n| Bioresponse | 4.92 | 1.38 | 61.17 | 7.82 |\\n| CDR | 2.94 | 0.43 | 84.08 | 4.58 |\\n| Chemprot | 1.8 | 0.9 | 15.61 | 5.73 |\\n| IMDB | 3.72 | 0.28 | 23.87 | 1.11 |\\n| Semeval | 0 | 0 | 16.81 | 1.77 |\\n| Trec | 0.88 | 0.03 | 100 | 1.82 |\\n| Yelp | 6.53 | 4.32 | 22.18 | 9.21 |\\n| Youtube | 7.36 | 3.73 | 31.98 | 9.65 |\\n\\n\\nBelow table shows the results for this case.\\n\\n| Dataset | Ours | Snorkel | Maj Vote | LoL(S) |\\n|:-------------|:-------------|:-------------|:---------------|:-------------|\\n| Bioresponse | $59.3\\\\pm1.2$ | $56.2\\\\pm1.1$ | $55.6\\\\pm2.5$ | $54.1\\\\pm0.5$ |\\n| CDR | $63.0\\\\pm0.8$ | $60.6\\\\pm3.4$ | $59.8\\\\pm3.5$ | $62.0\\\\pm0.5$ |\\n| Chemprot | $46.0\\\\pm0.9$ | $46.4\\\\pm1.7$ | $45.6\\\\pm2.2$ | $46.9\\\\pm1.2$ |\\n| IMDB | $73.9\\\\pm0.8$ | $73.4\\\\pm2.5$ | $74.8\\\\pm1.2$ | $73.5\\\\pm0.5$ |\\n| Semeval | $74.4\\\\pm1.8$ | $70.2\\\\pm1.5$ | $62.6\\\\pm0.5$ | $57.5\\\\pm2.1$ |\\n| Trec | $55.6\\\\pm3.4$ | $39.4\\\\pm3.0$ | $38.6\\\\pm1.9$ | $37.8\\\\pm2.4$ |\\n| Yelp | $73.8\\\\pm1.3$ | $67.2\\\\pm1.7$ | $73.4\\\\pm1.5$ | $67.3\\\\pm0.4$ |\\n| Youtube | $80.2\\\\pm0.8$ | $71.2\\\\pm1.0$ | $76.4\\\\pm2.0$ | $70.5\\\\pm0.9$ |\\n| mean | $65.8\\\\pm1.4$ | $60.6\\\\pm2.0$ | $60.8\\\\pm1.9$ | $58.7\\\\pm1.1$ |\\n| Average Rank | $1.3$ | $2.8$ | $2.7$ | $3.2$ |\"}", "{\"summary\": \"The paper explores programmatic weak supervision by treating weak labelers as constraints in a classification task.The authors propose a constrained optimization approach that integrates weak labeler error bounds directly into the learning objective. This forms a complex optimization problem and is solved with a novel alternating minimization algorithm.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The idea is novel and the theory is rigorous. The proposed algorithms lead to significant improvements in empirical evaluations on some datasets.\", \"weaknesses\": \"1. The paper misses a conclusion and future extensions paragraph.\\n2. On some datasets, the margin of the proposed methods and competing methods are small. Would it be helpful to run some statistical tests to compare their performances?\", \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"General Comment to All Reviewers\", \"comment\": [\"We thank all the reviewers for their reviews and feedback. The main questions raised by the reviewers revolve around the performance of our proposed method under different settings and connection with the prior works. We remark that since we proposed a method for learning from weak labelers while avoiding particular assumptions on the weak labelers or their noise model, we would expect our methods to perform well under many settings. We conducted the following new experiments to support our claim:\", \"Using L1 regularization instead of L2. (Results provided in response to Q3 of reviewer NPc4)\", \"Duplicated weak labelers (to simulate when weak labelers are highly correlated) (Results provided in response to Q2 of reviewer chyr)\", \"Only using 50% of the available weak labelers, and we selected them in a way that are as disjoint from each other as possible (to simulate when weak labelers are independent) (Results provided in response to Q2 of reviewer chyr)\", \"Overall, we found that our proposed method still performs well in these scenarios. We also thank the reviewers for pointing out typos and missing references, which we have incorporated in the revised version. We will also answer the reviewers\\u2019 specific questions in individual responses. Thank you all again for your hard work and consideration!\"]}", "{\"summary\": \"By using accuracy restrictions on weak labelers as learning constraints, this work introduces a novel method for programmatic weak supervision. The paper makes three primary contributions:\\n\\n1. create a constraint-based method for aggregating weak labelers; \\n2. present a scalable optimization problem; and \\n3. offer a theoretical analysis of the suggested constrained estimator.\\n\\nThe suggested method is technically sound and well-motivated, and the paper is well-written. The empirical evaluation shows the efficacy of the suggested approach, while the theoretical analysis sheds light on the denoising consequences of several weak labelers.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. Novel approach: This work presents a novel constraint-based objective that specifically considers accuracy bounds.\\n2. Scalable: A linear program for classifier constraints and a convex optimization problem for distribution constraints can effectively execute the paper's efficient alternating minimization approach.\\n3. Thorough theoretical analysis: The paper offers a thorough theoretical examination of the suggested approach. These analyses offer assurances on the trained classifier's inaccuracy and draw attention to the denoising impacts.\\n4. Excellent empirical performance: According to an experimental evaluation on a well-known weak supervision benchmark, the suggested approach outperforms current baselines, proving its efficacy and resilience.\", \"weaknesses\": \"1. The authors admitted that in the case of learning on classifier solving the ILP can still be slow even with LP relaxation. Additionally, because the stochastic gradient descent relies on the population means of the weak labeler accuracies, the method is unable to use a small batch size.\", \"questions\": \"1. I suggest the authors use different markers and line styles for different datasets instead of only using color to differentiate different lines.\\n2. There are several ?? in the paper. For example, on line 1357, 1359, 1418: ??\\n3. On line 491, the author mentioned that they implemented Algorithm 1 with an L2 regularization. I wonder what are the impacts of other regularization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank authors for the responses and spending their precious time to do the experiments. I maintain my recommendation for acceptance.\"}", "{\"summary\": \"This paper introduces a novel approach for learning from weak labelers by framing the problem as a constrained optimization task. Instead of relying on generative models or conditional independence assumptions, the paper proposes using known upper bounds on the error rates of weak labelers to guide the learning process. The paper develops an alternating minimization algorithm to iteratively generate soft pseudo-labels that satisfy the constraints and train the model accordingly. Theoretical analysis is provided to explain the denoising effects of the method, and experiments on benchmark datasets demonstrate its effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper moves away from traditional generative models, avoiding the often unrealistic assumption of conditional independence between weak labelers, making it more flexible for real-world applications.\\n\\n2. The theoretical analysis is quite thorough, especially with the introduction of projection techniques and alternating minimization, showing how to effectively build a classifier without labeled data.\\n\\n3. Good writting.\", \"weaknesses\": \"The problem addressed in this paper is certainly interesting, but as the authors themselves mention, it has strong connections to areas like crowdsourcing, noisy label learning, semi-supervised learning, and ensemble learning. Each of these fields already has well-established techniques that could be adapted, with only minor modifications, to solve the problem presented here. However, the paper dismisses these connections too quickly, with phrases like \\\"not directly applicable to our setting\\\" and \\\"relies on the implicit inductive bias.\\\" I find this explanation insufficient, as it limits the paper's significance and impact. A deeper exploration of these connections, along with additional comparative experiments, would have been much more convincing.\\n\\nThe authors propose two objectives and frame the problem as a constrained optimization task, introducing corresponding optimization methods. While the paper's main contribution is centered on optimization through projection, I have to admit that I'm not an expert in optimization, this approach feels somewhat intuitive. It doesn't strike me as a particularly novel or non-intuitive solution.\\n\\nAdditionally, regarding the problem setup and experiments, I would like to see more details about the coverage rate and noisy rate of each weak labeler and the collective coverage of all labelers. This seems crucial to the model\\u2019s performance and yet isn\\u2019t discussed in enough detail.\", \"questions\": \"Please see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2Akf4BBCKo
KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing
[ "Yifei Yang", "zouying cao", "Qiguang Chen", "Libo Qin", "Dongjie Yang", "Zhi Chen", "hai zhao" ]
The development of large language models (LLMs) has significantly expanded model sizes, resulting in substantial GPU memory requirements during inference. The key and value storage of the attention map in the KV (key-value) cache accounts for more than 80\% of this memory consumption. Nowadays, most existing KV cache compression methods focus on intra-layer compression within a single Transformer layer but few works consider layer-wise compression. In this paper, we propose a plug-and-play method called \textit{KVSharer}, which shares the KV cache between layers to achieve layer-wise compression. Rather than intuitively sharing based on higher similarity, we discover a counterintuitive phenomenon: sharing dissimilar KV caches better preserves the model performance. Experiments show that \textit{KVSharer} can reduce KV cache computation by 30\%, thereby lowering memory consumption without significantly impacting model performance and it can also achieve at least 1.3 times generation acceleration. Additionally, we verify that \textit{KVSharer} is compatible with existing intra-layer KV cache compression methods, and combining both can further save memory.
[ "Large Language Model", "KV Cache", "KVSharer" ]
https://openreview.net/pdf?id=2Akf4BBCKo
https://openreview.net/forum?id=2Akf4BBCKo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yJgBYw3WNp", "wyA3vdxw0z", "vUYjEt1Uba", "tvXz12Xp7J", "mIg3AZ2BmX", "dbv5UzVAWL", "Ptl5PnvGJK", "JT6PD4YFw1", "IheQd55dNW", "FwcwYp0RZ6", "ApDDfnM3Ma", "7TeTemSm7I", "6FcxKH4Vjo", "5rb3M65TTa", "0S7Z4Hll5P" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729758730393, 1732544911601, 1733023459697, 1730447133537, 1732545033916, 1730682124598, 1732544855755, 1730627878105, 1730534707932, 1734334037557, 1732806489395, 1732545301112, 1732544971435, 1732959682516, 1733029081412 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_fUgR" ], [ "ICLR.cc/2025/Conference/Submission7482/Authors" ], [ "ICLR.cc/2025/Conference/Submission7482/Authors" ], [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_fzUL" ], [ "ICLR.cc/2025/Conference/Submission7482/Authors" ], [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_fCuJ" ], [ "ICLR.cc/2025/Conference/Submission7482/Authors" ], [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_5E3J" ], [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_jb9o" ], [ "ICLR.cc/2025/Conference/Submission7482/Authors" ], [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_5E3J" ], [ "ICLR.cc/2025/Conference/Submission7482/Authors" ], [ "ICLR.cc/2025/Conference/Submission7482/Authors" ], [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_fCuJ" ], [ "ICLR.cc/2025/Conference/Submission7482/Reviewer_jb9o" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel approach to sharing the key-value (KV) cache across different layers in a new dimension, which can lead to more efficient memory usage and improved performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This idea offers new insights into how memory size can be further reduced, potentially leading to more efficient model deployments and optimized hardware utilization.\", \"weaknesses\": \"1) The paper lacks a comparison with other cache-sharing methods, which would provide a clearer understanding of its advantages.\\n\\n2) It should consider the scenario when the KV cache is quantized, as quantization is often used during inference to save energy.\\n\\n3) The paper also lacks a scalability analysis, which is crucial for evaluating how well the proposed method performs as model size and complexity increase.\", \"questions\": \"What is the time scalability of the proposed approach? Will the inference time remain acceptable when scaling up to models with over 400 billion parameters? It would be valuable to provide an estimation or analysis to address this concern.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 5E3J\", \"comment\": \"Thanks for your feedback. We will address your concerns as followed:\\n\\n**Q#1: Whether such a highly aggregated metric(the averaged value of the KV-cache) is informative or not.**\\n\\nSince previous work rarely considered layer-level KV cache compression, as you mentioned, they mainly focused on token- or head-level KV cache representations. However, our work requires comparing the similarity between entire layers. To preserve the layer-level KV cache representation for each sample in the calibration dataset as much as possible, it is reasonable to average the layer-level KV cache representations across all samples without bias toward any particular sample. Moreover, this averaging approach is very common and is frequently used in prior work to generate hidden states or heatmaps for attention maps.\\n\\n**C#1: It would be critical to evaluate the performance of the proposed method over long-context benchmarks.**\\n\\nThank you for your suggestion. We will include experiments on long-context benchmarks in the future version.\"}", "{\"title\": \"Response to Reviewer fCuJ\", \"comment\": \"Thank you for your response and the further clarification on Q5. We will include the experiment you mentioned in the next revision to improve the manuscript. Thank you!\"}", "{\"summary\": \"This paper introduces KVSharer, a post-training method for layerwise KV cache sharing. Based on the counterintuitive observation that sharing KV caches between layers with dissimilar, rather than similar, KV caches leads to less performance degradation, KVSharer employs a systematic search strategy for KV sharing. As a result, KVSharer reduces GPU memory consumption while maintaining model performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Does not require training\", \"Provides an interesting and novel insight that sharing dissimilar KV caches yields better performance.\", \"Offers diverse and insightful evaluation results.\"], \"weaknesses\": [\"Results show a noticeable performance drop even at low compression rates (e.g., 12.5%, 25%), which may limit the practicality of the method.\", \"Lacks an explanation for why sharing dissimilar KV caches yields better performance, leaving an essential aspect of the method's effectiveness rather unclear.\"], \"questions\": [\"Why is it better to share dissimilar KV caches? Since the authors themselves describe this as counterintuitive, providing an explanation for this phenomenon would be highly valuable for the community.\", \"What happens if KVSharer is unable to find $C$ pairs of layers to share KV caches while satisfying the threshold $T$? It would be helpful to include a guideline on setting this threshold and any evaluation showing its impact on search performance.\", \"In Table 2, why does the memory usage reduction exceed the compression rate? Additionally, what is the source of the observed increase in generation throughput? Since KV cache sharing reduces memory usage but likely not memory bandwidth, it is unclear how this improves inference throughput.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer fzUL\", \"comment\": \"Thanks for your review. We address your concerns accordingly.\\n\\n**C#1: Results show a noticeable performance drop even at low compression rates (e.g., 12.5%, 25%), which may limit the practicality of the method.**\\n\\nMaintaining over 90% of the model's performance with a 25% compression rate is sufficient for many use cases. Additionally, as shown in our case study on page 16, the responses generated by our model are both fluent and knowledgeable, which can meet the requirements of various scenarios. \\n\\n**C#2&Q#1: Lacks an explanation for why sharing dissimilar KV caches yields better performance, leaving an essential aspect of the method's effectiveness rather unclear.**\\n\\nWe are working on providing both theoretical and empirical evidence. However, the KVSharer proposed in this manuscript effectively reduces memory usage while maintaining high performance, and its counterintuitive findings offer insights for future model improvements, highlighting our contribution.\\n\\n**C#3: What happens if KVSharer is unable to find C pairs of layers to share KV caches while satisfying the threshold T?**\\n\\nAs noted in the footnote of page 6, this phenomenon does not occur when the threshold is set to a reasonable value, such as our recommended 0.5.\\n\\n**Q#2: Why does the memory usage reduction exceed the compression rate? What is the source of the observed increase in generation throughput?**\\n\\nFor memory reduction, we speculate that it might be due to PyTorch's underlying mechanisms utilizing fragmented memory more efficiently. Of course, we will conduct a more in-depth analysis. As for inference speed, we suspect that the acceleration could be attributed to reduced memory read/write operations, even though the computation load hasn't decreased. We are also exploring this further.\"}", "{\"summary\": \"This paper introduces a new inter-layer KV cache compression technique through layer-wise KV cache dis-similarity search and sharing. The layers are ranked pairwise in accordance with their dis-similarity score. For each pair, an earlier layer's KV will be shared and reused by an later layer for efficient pre-filling and generation.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper and the technique introduced have the following strengths:\\n\\n1. Paper writing is easy to follow with good figures and illustrations.\\n2. The experiment sections demonstrate KVSharer can be used in orthogonal with other intra-layer KV compression techniques like H2O and PyramidInfer to achieve higher memory saving and more significant speedup.\\n3. The paper brings up a new angle\", \"weaknesses\": \"I have several concerns about the paper:\\n\\n1. Even though layer pairs are ranked from high dis-similarity to low dis-similarity, whether to use the pair still depends on the cosine similarity between the the KV-cache compression model and the original model. There is a possibility that the cosine similarity check, rather than dis-similarity ranking, plays a major role.\\n\\n2. A major claim in the paper is dis-similarity metrics is better than similarity metrics when it comes to inter-layer KV cache sharing. Empirical evidences are provided in Section 5.1 and Figure 6 when changing the Euclidean-distance based ranking from descending order (dis-similarity) to ascending order (similarity). However, I didn't find any theoretical and empirical evidence that \\\"Euclidean distance for KV cache is a sufficient good metrics\\\" in comparison with the other SOTAs. More specifically, how does KVSharer compare with other layer-wise compression strategies, for example miniCache [1], LCA [2], CLLA [3] and simpleLayerKV [4]? Without the experiment results, I don't think the paper is ready at this stage for publication.\\n\\n[1] Liu, Akide, et al. \\\"MiniCache: KV Cache Compression in Depth Dimension for Large Language Models.\\\" arXiv preprint arXiv:2405.14366 (2024).\\n\\n[2] Brandon, William, et al. \\\"Reducing Transformer Key-Value Cache Size with Cross-Layer Attention.\\\" arXiv preprint arXiv:2405.12981 (2024).\\n\\n[3] Yang, Zhen, et al. \\\"Lossless KV Cache Compression to 2%.\\\" arXiv preprint arXiv:2410.15252 (2024).\\n\\n[4] Zhang, Xuan, et al. \\\"SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction.\\\" arXiv preprint arXiv:2410.13846 (2024).\", \"questions\": \"I think the paper will be much more ready if the authors could address the following questions (from high to low priority):\\n\\n1. Could the authors provide comparisons with other layer-wise compression strategies in terms of accuracy and system performances?\\n\\n2. Did the authors investigate the relationship between dis-similarity ranking and the acceptance rate by the thresholding condition? It's possible that the cosine similarity check, rather than dissimilarity ranking, plays a primary role. In principle, if the \\\"higher dis-similarity --> inter-layer KV cache sharing gives better performance\\\" hypothesis holds, then a higher rank should correspond to a higher acceptance rate. Could the authors provide additional results and justification on this point?\\n\\n3. There is an important threshold in this work: cos-similarity (representation similarity) threshold that determines whether to accept a KV cache pair, Can the authors provide explanations on how the value is determined/searched? Moreover, the number of target shared KV cache layers is also an important hyper-parameter, and this is discussed in the paper an ablation study on in Table 1. But can the authors provide some guidance/calculation on how this number translate to memory saving and inference speedup?\\n\\n4. For KV cache dissimilarity distance, why did the authors choose Euclidean distance? Could the authors ablate on other distance metrics? Similarly, for cosine similarity from the final layer hidden states, what if some other metrics like angular distance is used (less important, just wondering)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer fCuJ\", \"comment\": \"Thanks for your comments. We address your concerns as follows.\\n\\n**C#1&Q1: There is a possibility that the cosine similarity check, rather than dis-similarity ranking, plays a major role. Did the authors investigate the relationship between dis-similarity ranking and the acceptance rate by the thresholding condition.**\\n\\nWhen preparing the manuscript, we have found that KVSharer selects pairs mostly ranked within the top 30% for dissimilarity, confirming it uses dissimilarity for KV cache sharing rather than cosine similarity check. We will include this analysis in the future version.\\n\\n**C#2&Q#2: No theoretical and empirical evidence that \\\"Euclidean distance for KV cache is a sufficient good metrics\\\". Why did the authors choose Euclidean distance? Could the authors ablate on other distance metrics?**\\n\\nWe chose to use Euclidean distance based on our experimental findings, and we also compared it with cosine similarity while preparing this manuscript. We conducted experiments on Llama2-7B-Chat, the following are the supplementary results:\\n\\n| Metric | Similar | Layers | PPL |\\n|------------------|-----|---|-------|\\n| Cosine | Similarity | 4 | 8.96 |\\n| Cosine | Dissimilarity | 4 | 8.57 |\\n| Cosine | Similarity | 8 | 15.68 |\\n| Cosine | Dissimilarity | 8 | 15.11 |\\n| Cosine | Similarity | 12 | 42.81 |\\n| Cosine | Dissimilarity | 12 | 30.67 |\\n\\nThe experimental results indicate that when cosine similarity is used instead of Euclidean distance similarity, the observed pattern also remains consistent: leveraging dissimilarity for sharing performs better than using similarity for sharing. Moreover, since models with the same compression rate achieve better perplexity (PPL) when using Euclidean distance for sharing compared to cosine similarity (as shown in Figure 5 of the manuscript), we chose to use Euclidean similarity as the metric. We will include these analyses in the future version.\\n\\n**C#3&Q#3: Provide comparisons with other layer-wise compression strategies.**\\n\\nThe methods in [2], [3], and [4] you mentioned all require post-training to be utilized, whereas our KVSharer is training-free, making a direct comparison less necessary. Additionally, [3] and [4] were public after the ICLR submission deadline, so they are not relevant for comparison in this manuscript. Furthermore, [4] is not strictly a layer-wise compression strategy; it focuses on dropping tokens within the KV cache of certain layers.\\nWe have already discussed [1] and [2] in the related work section. However, [1] lacks publicly available code, and we are actively working on reproducing it. The results will be included in the future version.\\n\\n**Q#4: Can the authors provide explanations on how the cos-similarity (representation similarity) threshold is determined/searched?**\", \"as_noted_in_the_footnote_of_page_6\": \"During strategy searching, the similarity of the last layer's hidden state between the compressed and original models is usually above 0.8. A 0.5 threshold is set to avoid rare cases of output collapse. Since this is infrequent, we did not conduct an ablation study on T. And we recommend setting the threshold around 0.5.\\n\\n**Q#5: Provide guidance/calculation on how the number of target shared KV cache layers to memory saving and inference speedup.**\\n\\nAs mentioned in Lines 319-320, we recommend setting the KV cache compression rate to around 25% to maintain good model performance.\"}", "{\"summary\": \"This paper first presents a counterintuitive phenomenon when attempting to leverage the cross-layer pattern to improve the efficiency of the LLM generative inference computation, where sharing dissimilar KV caches better preserves the model performance. Based on this observation, this paper introduces a method named KVSharer, which integrates this observation to implement efficient cross-layer KV cache sharing. An empirical study has been conducted to verify the effectiveness of the proposed methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"S1. This paper explores an important problem of improving the efficiency in utilizing KV cache in LLM generative inference.\", \"S2. The related work and research context are well summarized.\"], \"weaknesses\": [\"W1. Heuristic-based on aggregated information. As enumerated in Section 3.1.2, the proposed method uses the averaged value of the KV-cache to consider the similarity between different layers -- it is a little confusing why such highly integrated information could guide the sharing policy, considering lots of recent work has been exploring the KV-cache utilization at token, layer, and head level jointly. My concern is whether such a highly aggregated metric is informative or not.\", \"W2. My main concern is about the experimental setup. There is a significant mismatch between the motivation example in the introduction, e.g., \\\"During the LLM inference phase, the KV cache typically accounts for 80% of the total memory usage.\\\" and the benchmarked settings, where the context window is set to just a few thousand, e.g., up to 1024+4096 in Table-2. Unless batching to an extremely large value (not mentioned in the paper), there is a significant gap between the motivation and the experiments. I think it would be critical to evaluate the performance of the proposed method over long-context benchmarks (e.g., Infiniti-bench) where the model's context window should be from 32K to 128K (or even longer). Otherwise, the truly useful scenario is not evaluated.\"], \"questions\": \"Please address the corresponding concern listed in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Not applicable.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces KVSharer, a plug-and-play method for compressing the key-value (KV) cache of large language models (LLMs) during inference. Unlike the intuitive approach of sharing similar KV caches, KVSharer is based on a counterintuitive observation: sharing different KV caches across layers does not significantly degrade model performance. KVSharer employs a search strategy to identify the optimal KV cache sharing policy across different layers, substantially reducing GPU memory usage while retaining most of the model\\u2019s performance. Additionally, KVSharer is compatible with existing intra-layer KV cache compression methods, offering a complementary approach to memory optimization for LLMs.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. This paper addresses a good research topic: efficient LLM inference.\\n \\n2. The paper is well-organized.\\n \\n3. The proposed method is clearly presented.\", \"weaknesses\": \"1. **Lack of novelty and research depth:** This main technique is to share the dissimilar KV cache for efficient inference, which is quite simple. Although authors claim that this originates from a counterintuitive observation, there is no motivation provided in the methodology section. Therefore, both of the novelty and the research depth of this paper are not qualified for the top AI conference.\\n \\n2. **Unreasonable observation without further analysis:** The observation that the sharing the dissimilar KV cache brings in better accuracy than sharing the similar one sounds unreasonable, the dissimilar KV states output different attention scores, making the LLM attend to different part of the query token. It is more convincing that the obtained conclusion is just a coincidence and varies across the models and datasets, considering that no in-depth analysis has been provided.\\n \\n3. Lack Needle-in-a-Haystack experiment.\", \"questions\": \"See Above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We appreciate the suggestions from the AC and all reviewers, and we will refine the paper further in subsequent versions.\"}", "{\"comment\": \"Thank you for your reply! I hope the review will help make this paper better in the future!\"}", "{\"title\": \"Response to Reviewer fUgR\", \"comment\": \"Thanks for your review. We address your concerns accordingly.\\n\\n**C#1: The paper lacks a comparison with other cache-sharing methods.**\\n\\nAs described in our related work section, layer-wise KV cache compression is currently rare, and existing methods require training, while ours is training-free, making direct comparisons less reasonable. We are searching for baselines and will consider introducing them in the future version.\\n\\n**C#2: It should consider the scenario when the KV cache is quantized.**\", \"we_conducted_additional_experiments_using_the_gptq_quantized_llama2_7b_chat_model\": \"| Model | Compression Layers | PPL |\\n|------------------------------|--------|--------|\\n| Llama-2-7B-Chat-GPTQ | 0 | 8.61 |\\n| Llama-2-7B-Chat-GPTQ | 4 | 10.40 |\\n| Llama-2-7B-Chat-GPTQ | 8 | 16.68 |\\n| Llama-2-7B-Chat-GPTQ | 12 | 25.89 |\\n\\nWe also find that KVSharer does not significantly increase the model's PPL within a 25% compression rate, further demonstrating its effectiveness. We will include this result in future versions.\\n\\n**C#3: The paper also lacks a scalability analysis, which is crucial for evaluating how well the proposed method performs as model size and complexity increase.**\\n\\nWe have validated our method across multiple model families, such as the Llama2 and InternLM series, and on mainstream model sizes ranging from 7B, 13B, and 20B to 70B. The consistent conclusions can demonstrate the effectiveness of our approach.\\n\\n**Q#1: What is the time scalability of the proposed approach? Will the inference time remain acceptable when scaling up to models with over 400 billion parameters? It would be valuable to provide an estimation or analysis to address this concern.**\\n\\nWe have validated our method on multiple model families and mainstream model sizes. Our approach has been proven to significantly accelerate inference speed. Conducting experiments on a 400B model exceeds our hardware capacity, requiring 1000GB or more of memory, which is beyond the reach of most researchers. Additionally, 400B models are rarely used in practice, making such demands uncommon. Our existing experiments already demonstrate the effectiveness and stability of our method.\"}", "{\"title\": \"Response to Reviewer jb9o\", \"comment\": \"Thanks for your review. We address your concerns accordingly.\\n\\n**C#1: Lack of novelty and research depth: no motivation provided in the methodology section.**\\n\\nWe respectfully disagree with your viewpoint. Our method is derived from experimental observations, as described in Lines 064-067. While we currently cannot provide a theoretical proof, we have validated our observations through extensive experiments across various models. Moreover, many existing works focus on sharing based on parameter or representation similarity. In contrast, our counterintuitive findings offer a more novel perspective.\\n\\n**C#2: Unreasonable observation without further analysis: The observation sounds unreasonable and the obtained conclusion is just a coincidence and varies across the models and datasets.**\\n\\nWe also respectfully disagree with your viewpoint. We have obtained consistent conclusions across a wide range of datasets and models, and our results are certainly not coincidental. While we will attempt to conduct deeper analyses in future versions, we do not accept the doubts about our experimental results.\\n\\n**C#3: Lack Needle-in-a-Haystack experiment.**\\n\\nWe will include the experiments on related long-context benchmarks in the future version.\"}", "{\"comment\": \"For C#1&Q#1 + C#2&Q#2, I believe adding more detailed results and incorporating them into future revisions is important toward making this paper publication-ready.\\n\\nFor C#3 & Q#3: while I understand that some relevant works have been released only recently, it is essential to benchmark the proposed approach against baseline methods that share similar goals of generation acceleration and memory savings through KV cache compression. In addition to benchmarking KVSharer against itself with different compression ratios (and w/ vs. w/o applying other intra-layer compression techniques).\\n\\nRegarding Q#5, my question was: \\\"How does this number **translate** into memory savings and inference speedup?\\\" THe reason I asked was because choosing different compression ratios is a trade off between speedup/memory saving vs. performance. I believe the paper would benefit from a more thorough system profiling. Such profiling should illustrate how system performances are affected by varying compression rates.\"}", "{\"comment\": \"Thanks for the comments. I have raised my score and hope the author could revise the paper based on all the reviewers' suggestions in the future. Thank you.\"}" ] }
2AWZTv6kgV
Projected Neural Differential Equations for Learning Constrained Dynamics
[ "Alistair White", "Anna Büttner", "Maximilian Gelbrecht", "Valentin Duruisseaux", "Niki Kilbertus", "Frank Hellmann", "Niklas Boers" ]
Neural differential equations offer a powerful approach for learning dynamics from data. However, they do not impose known constraints that should be obeyed by the learned model. It is well-known that enforcing constraints in surrogate models can enhance their generalizability and numerical stability. In this paper, we introduce projected neural differential equations (PNDEs), a new method for constraining neural differential equations based on projection of the learned vector field to the tangent space of the constraint manifold. In tests on several challenging examples, including chaotic dynamical systems and state-of-the-art power grid models, PNDEs outperform existing methods while requiring fewer hyperparameters. The proposed approach demonstrates significant potential for enhancing the modeling of constrained dynamical systems, particularly in complex domains where accuracy and reliability are essential.
[ "neural differential equations", "neural ordinary differential equations", "constraints", "dynamics", "scientific machine learning", "ai for science" ]
https://openreview.net/pdf?id=2AWZTv6kgV
https://openreview.net/forum?id=2AWZTv6kgV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xIbXqSRiJ5", "cSkfsnWwBo", "bWO9wGK1nx", "Tzj8mjNRVE", "PKMqyhDGdZ" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730613046587, 1729684481511, 1732231756234, 1730487959229, 1730629797443 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13504/Reviewer_eEkq" ], [ "ICLR.cc/2025/Conference/Submission13504/Reviewer_MdTC" ], [ "ICLR.cc/2025/Conference/Submission13504/Authors" ], [ "ICLR.cc/2025/Conference/Submission13504/Reviewer_TExi" ], [ "ICLR.cc/2025/Conference/Submission13504/Reviewer_1Zyj" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, a method for learning differential equations while preserving conservation laws is proposed. Specifically, to preserve conservation laws, the authors project the learned vector field onto the tangent bundle of the manifold defined by the conserved quantities.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is well-written and easy to read. Some experiments were conducted to support the effectiveness of the proposed method.\", \"weaknesses\": \"The proposed method is not novel because this method has already been proposed; the continuous-time model is shown in [1], and also the discrete-time model is shown in [2].\\n\\n[1] Kasim, M.F. and Lim, Y.H. (2022) Constants of motion network, NeurIPS2022\\n\\n[2] Matsubara, T. and Yaguchi, T. (2023) FINDE: Neural Differential Equations for Finding and Preserving Invariant Quantities, ICLR2023\\n\\nIn [1], the learned vector field is designed to be orthogonal to the gradient vectors of the conserved quantities. Precisely, the learned vector field is projected onto the tangent space at each point of the manifold defined by the conserved quantities, which is the same as the approach proposed in this paper. In [1], the QR decomposition is used for orthogonalization and hence the method of computing the projection operator is a little different from that of this paper, which uses the pseudo inverse.\\n\\nIn [2], exactly the same approach as this paper is proposed; in [2], the manifold defined by conserved quantities is first introduced. Then, they consider tangent bundles of this manifold, and project the leaned vector field onto the tangent space at each point. More precisely, in [2], a continuous-time model is first considered. Equation (6) in [2], which represents the continuous-time model, is completely identical to (7) in this paper. The pseudo-inverse matrix is used for projection in [2], the pseudo-inverse matrix is specifically computed though (so the model looks a little different.) In addition, in [2] a discrete-time model is also discussed. In the discrete-time model, the discrete gradient, which is a discrete version of the gradient, is considered, and the discrete tangent space is defined using the discrete gradient. The discrete-time model is essentially the projection onto this discrete tangent space.\\n\\nIn addition, it seems that the conserved quantities are assumed to be given in this paper; however, the methods shown in the above papers can handle cases where these quantities are unknown. \\n\\nConsidering the above, the contributions of this paper is quite limited.\", \"questions\": \"My concerns regarding this paper are as explained above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Projected Neural Differential Equations (PNDEs), a novel method designed to incorporate known constraints into neural differential equations. Specifically, PNDEs project the vectors of the vector field onto the tangent spaces of the manifold defined by known constraints. This allows for the enforcement of various constraints without the need for specific coordinate systems. The paper provides empirical evidence through experiments showing that PNDEs satisfy constraints more accurately than state-of-the-art baselines.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents an approach to integrating known constraints into neural differential equations, which is a relatively unexplored area in the field of machine learning and dynamical systems. The introduction of the projection method to enforce constraints on the vector field is innovative.\\n\\n2. The empirical results demonstrate that PNDEs outperform existing methods in terms of accuracy and stability. The experiments conducted on challenging examples, including chaotic systems and power grid models, further validate the robustness of the proposed method.\\n\\n3. The paper is well structured and clearly written, making complex concepts accessible to readers.\", \"weaknesses\": \"1. There is a potential ambiguity in the notation used in the paper. Specifically, the definition of $\\\\mathcal{E}$ is lacking; is it $\\\\mathbb{R}^n$? The notations $f_{\\\\theta}$ and $\\\\bar{f}_{\\\\theta}$ in equations 1, 3, and 8 are not entirely consistent.\\n\\n2. The paper primarily focuses on known constraints. In many cases\\u2014such as the first example presented\\u2014it shows that if the constraints are known, the corresponding total differential equation can be determined.\\n\\n3. The effect of the ODE solver is not discussed in this paper. Ideally, the constraints should be preserved exactly, as stated by Proposition 1 in the paper. However, the numerical results indicate that they are only preserved approximately, albeit with small errors. These discrepancies may arise from numerical errors introduced by the ODE solver.\\n\\n4. The setup of the first example differs from that in Section 2. Is it possible to explicitly write out the manifold for the example?\\n\\n5. The empirical results demonstrate that the proposed PNDEs outperform existing methods in terms of the consistent error. However, the improvement in terms of prediction error is less pronounced. I am not certain whether the ability to preserve the given constraints is the most critical indicator. If we aim to improve this, the simplest and most straightforward approach would be to project the predicted state onto the known manifold during the prediction phase.\", \"questions\": \"1. Could the paper provide a detailed formula of computing the \\\"Relative State Error\\\" and \\\"Constraint Error\\\"?\\n\\n2. Could the paper explain in detail how the trajectory of the figure was selected?\\n\\n3. What does the vertical axis of the leftmost subgraph in Figure 2 mean\\uff1f\\n\\n4. Section 4.2: Why does NDE using generalized coordinates that satisfy constraints perform poorly, and does this mean that preserving constraints cannot directly indicate better predictions? Can the results highlight the importance of constraints?\\n\\n5. Section 4.3: Do we know the governing function for this example, or are the dynamics learned from the given data? Additionally, the statement 'apply random perturbations to each grid (see Appendix A) and learn the dynamics from the resulting transients' feels unclear. Could the paper clarify this? Also, what are the sizes of the training and test datasets used for this example?\\n\\n6. For all examples, the paper only presents the state error for a single test trajectory. Could the paper provide more comprehensive quantitative evaluations? \\n\\n7. The assumption of known constraints may be too restrictive. It would be helpful to discuss scenarios where the constraints are known, but the governing function is unknown and data is provided.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to thank all of the reviewers for their thorough and constructive feedback on the paper. Unfortunately, we have to agree with Reviewer eEkq that the proposed method was already derived in prior works. We were not aware of these works, and we thank the reviewer for bringing them to our attention.\\n\\nWhile we think our experiments demonstrate the benefits of the approach in a number of new and interesting directions, the method itself is not new and we are withdrawing the paper.\\n\\nAgain, we would like to thank the reviewers for their time and for the thoughtful reviews.\"}", "{\"summary\": \"This paper presents a projection-based approach to ensure hard constraint satisfaction in constrained dynamics, which is important for several real-world applications. However, several concerns outlined in the weaknesses section remain, and further investigation of these limitations is needed to improve its practical applicability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Constrained dynamics are present in real-world problems, and existing NODEs that do not consider these constraints run the risk of failing to satisfy them. In contrast, the proposed method achieves hard constraint satisfaction through projection.\", \"weaknesses\": \"1.\\tThe methodology is based on the assumption that the analytic form of the constraint function $g$ is known, which seems impractical. In real situations where the dynamics are unknown and the states $u$ are given only by the data, there are often many cases where the analytic form of $g$ is also unknown.\\n2.\\tWhat is the difference and advantage of the proposed method of projecting the forcing function $f$ onto the tangent space compared to projecting the predicted state $u$ onto the constraint manifold? Projecting $u$ onto the constraint manifold seems simpler than projecting $f$ onto the tangent space, while still satisfying the constraints.\\n3.\\tThe authors suggest restricting $f$ to the tangent space to satisfy the constraints. This leads to a question related to the one above: Is the original problem Eq. (1) with constraints on the state equivalent to the constrained problem Eq. (8) considered by the authors with the forcing term in the tangent space? Eq. (8) may limit the range of expressible dynamics.\\n4.\\tThere is a concern that the computation of the adjoint differential and pseudoinverse in Eq. (7) would be quite difficult for general $g$.\\n5.\\tInstead of enforcing hard constraints, constraints could be incorporated into the loss function by penalizing it. For instance, this could involve adding the $L^2$ norm of $g(u)$=$(g(u)-0)$ as a regularization term to the existing NODE loss. An experimental comparison with this approach seems necessary.\\n6.\\tWhile hard constraints ensure that the constraints are satisfied, they are not necessarily superior to soft constraints. Hard constraints can limit the representational power of the network and may negatively impact training because of their complex computational structure. It is crucial to understand and experimentally verify the trade-off between satisfying constraints and the model's capacity.\", \"questions\": \"Please address the concerns mentioned in the Weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the challenge of learning constrained dynamical systems in the context of neural differential equations (i.e., NDEs herafter). This term of NDEs includes the 2018 class of NODEs and generalization therefore such UDEs and bagging them together the authors are interested since indeed they allow for flexible modeling of dynamical systems by parameterizing the vector field f_\\\\theta with neural networks. The authors observe however, that they do not inherently enforce possible known constraints that the system may have, such as conservation laws, holonomic constraints (both applying quite well in Hamiltonian systems for example), or algebraic relationships and this can lead to learned models that to different levels of severity may violate essential properties of the system, resulting in poor generalization and numerical instability.\\n\\nTo overcome this limitation, the authors introduce the \\\"Projected Neural Differential Equations\\\" (PNDEs) whose key idea is to enforce constraints by projecting the neural network vector field f_\\\\theta onto the tangent space T_M of the constraint manifold M, which is defined by (algebraic equations) g(u) = 0. Specifically, they define the projected vector field as Proj_u (f_\\\\theta) \\\\in T_uM\\nwhere \\\\mathrm{Proj}_u is the orthogonal projection operator of from T_uE onto T_uM. By integrating this projected vector field, the solutions remain on the manifold M, ensuring that the constraints are satisfied for all time. So in some sense, they try to \\\"get rid\\\" of the components of the vector field/neural net that would be learned but would live outside the constrained submanifold the physical system actually lives on.\\n\\nThe authors provide a detailed derivation of the projection operator using common decomposition techniques. They demonstrate that for an embedded submanifold M defined by smooth constraint functions g(u) the projection can be explicitly computed using the Jacobian of the latter, i.e., the constraints. This allows for efficient computation of the projected vector field during numerical integration.\\n\\nTo validate their approach, the authors conduct experiments on several challenging dynamical systems: the Fermi\\u2013Pasta\\u2013Ulam\\u2013Tsingou lattice system, a damped pendulum system with holonomic constraints and power grid models that incorporate power flow equations. \\n\\nCompared to various existing methods they cite, such as SNDEs, the experiments seem to verify claims that the proposed method offers exact constraint enforcement without introducing additional hyperparameters and/or suffering from stiffness issues in numerical integration which arguably distinguishes PNDEs from penalty-based methods or those that incorporate constraints as soft losses during training, which may not guarantee constraint satisfaction during inference.\\n\\nOverall, the paper presents a principled and general framework for incorporating hard constraints into neural differential equations by projecting the neural network vector field onto the tangent space of the constraint manifold. This approach enhances the modeling of constrained dynamical systems, improving accuracy, generalizability, and numerical stability.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper presents a substantial advancement in the field of neural differential equationss given that the authors address a, indeed, crucial limitation of standard NDEs\\u2014the inability to enforce known constraints in the learned dynamical systems\\u2014which often leads to poor generalization and numerical instability. While many solutions to this have been discussed, e.g. within Hamiltonian neural nets, the strengths of the paper are several and go beyond the literature, to the best of my knowledge. c\\n\\nThe introduction of PNDEs is novel contribution and the authors provide a principled method to incorporate hard constraints directly into the learning process. Their approach differs from e.g., Stabilized Neural Differential Equations by ensuring that the constraints are satisfied exactly, rather than asymptotically or approximately. Using projection operators is creative and less common within \\\"deep learning\\\" as opposed to more traditional convex optimization.\\n\\nThe paper demonstrates rigorous, but bried, theoretical development and provides clear mathematical derivations. The authors derive the projection operator using the Jacobian of the defining constraint functions, and in detail explain and ensure that the projected vector field remains within the tangent space of M and they further go to propose in detail that solutions to the PNDE remain on the constraint manifold is well-founded, and the proof is succinct yet thorough. Some extra discussion as well as graphical illustration would be beneficial here! The experimental section is robust, and covers a range of systems.\\n\\nIn terms of clarity, I am happy to see a quite clear and well-written paper which manages to convey complex ideas effectively. That said, certain sections would be harder to follow for people with more ML/DL background but I won't count this as a limitation. Note that the motivation behind enforcing hard constraints in NDEs is very clearly articulated, and the limitations of existing methods are adequately discussed. The derivation of the projection operator is presented step-by-step (again, a graphical illustration would do miracles here), making it accessible to mathematically include readers with a background in differential geometry and dynamical systems. The experimental figures and tables are informative and enhance the understanding of the results. The experimental setup is described in sufficient detail, allowing for reproducibility (although I could not locate a link with a repo).\\n\\n\\nAs mentioned earlier, incorporating hard constraints into NDEs has significant implications for modeling realistic dynamical systems that inherently possess constraints, such as conservation laws and algebraic relationships. The ability of PNDE to enforce these constraints exactly enhances the reliability and accuracy of the models, which is crucial in safety-critical applications like power grid management. However, many practicioners, especially for this example, would claim that the lack of rigorous guarantees is a problem. Returnign to the main ideas of the paper, when suitable examples are considered, improving generalization and numerical stability, PNDEs contribute to advancing the state-of-the-art in data-driven modeling of dynamical systems. This work arguably opens up new possibilities for applying NDEs to a broader class of problems where constraints play a vital role.\", \"weaknesses\": \"While the paper, as discussed, presents a novel and effective method for incorporating hard constraints into neural differential equations there are several areas where the work could be improved.\\n\\nOne of the (few) main weaknesses lies in the discussion of related work and positioning of the proposed method within the existing literature. The paper focuses primarily on comparing PNDEs to SNDEs. However, there is a rich body of research on incorporating physical constraints and conservation laws into neural network models of dynamical systems that is not adequately addressed.\\n\\nFor instance, the (indeed) cited HNNs [Greydanus et al., 2019] and Symplectic ODE-Net [Zhong et al., 2020] (since the author do mention inductive bias in the intro) are significant contributions that leverage the symplectic structure of Hamiltonian systems to enforce conservation of energy and other invariants. These methods learn the Hamiltonian function directly and ensure that the learned dynamics preserve the symplectic form, inherently satisfying certain physical constraints. Therefore, it's not clear to as wether the PNDEs would be relevant in systems where HNNs seem to perform very well. As a matter of fact, in recent work on learning generalized Hamiltonians using fully symplectic mappings [Choudhary et al. 2024], addresses the challenge of modeling non-separable Hamiltonian systems using implicit symplectic integrators within neural networks which should be a class of problems where previously I would had assumed PNDEs to be prime candidates to work on but its just not clear to me as to what the best approach would be in such situations. So, overall, I would prefer a more thoorough discussion here. Finally, I would be keen for the authors to portray further unerstanding of the literature of projections. For example, it is known that such projections, introduce certain symmetries. These symmetries ideally can be quotioned out in order to fascillitate easier training, see for example a similar construction in convex optimization and SDPs where the tangential projection symmetries need be addressed [Bellon et. al. 2210.08387].\\n\\n\\nWhile the paper provides a clear derivation of the projection operator and proves that solutions to the PNDE remain on the constraint manifold, the theoretical analysis could be strengthened. Specifically, the paper lacks a discussion on the computational complexity and scalability of the projection operation in high-dimensional systems or with complex constraints. Maybe it's too hard? From the practicioner's point of view this is important too. Given the experiments discuss power grid (we would use BnC methods normally and not gradient based methods for a number of reasons) this is important. Also, computing the projection onto the tangent space requires solving a system involving the Jacobian of the constraints, which can be computationally intensive for large-scale systems.\\n\\nMoreover, the paper does not provide theoretical guarantees on the convergence or stability of the PNDEs beyond the preservation of the constraints. Are there some assumptions that can be made that would allow for an analysis of the numerical errors introduced by the projection and their impact on the overall solution accuracy? Additionally, insights into how the method performs under approximate constraints or in the presence of noise would enhance the understanding of its robustness.\\n\\n\\nRe the experimental section, while demonstrating the effectiveness of PNDEs on several systems, could be expanded to provide a more comprehensive evaluation. The experiments focus on systems where the constraint manifold is relatively straightforward to compute. It would be valuable to test PNDEs on \\\"less trivial\\\" systems with high-dimensional constraints or where the constraint manifold has nontrivial topology maybe? \\n\\nFurthermore, the comparison is primarily with SNDEs and unconstrained NDEs. Including additional baseline methods, such as HNNs, Symplectic Neural Networks, or other constraint-enforcing techniques, would strengthen the empirical evaluation. This would provide a clearer picture of the advantages and limitations of PNDEs relative to existing approaches.\\n\\nWhile the paper is generally well-written, certain sections could be clarified for better accessibility. The derivation of the projection operator, although mathematically rigorous, might be challenging for readers not deeply familiar with differential geometry. Providing more intuitive explanations or illustrative examples could help bridge this gap.\\n\\nAdditionally, the notation used in some equations, such as the use of adjoints and pseudoinverses, could be explained in more detail. Ensuring that all symbols and operations are clearly defined would improve the readability of the paper.\\n\\n\\nAnother point is that the proposed method assumes that the constraints can be expressed as explicit algebraic equations and that the Jacobian of the constraints is full rank. In practice, many systems might have constraints that are implicit, differential-algebraic, or have singular Jacobians. What happens then? Discussing how PNDEs could be extended or adapted to handle such cases would enhance the significance and applicability of the work.\", \"questions\": \"Can the authors add some \\\"cartoons\\\" to ensure that readers less inclined with differential geometry can still understand the main (pictorially easy to be fair) intuition behind the paper?\\n\\nCan the authors expand the discussion on relevant literature (maybe with some table comparison?) as per the \\\"Weaknesses\\\" section? \\n\\nCan you provide insights into the computational complexity of the projection operation and discuss potential scalability issues (needs not be FOCS-style theory). Include analysis on the numerical stability and error propagation introduced by the projection maybe if possible?\\n\\nRe the comment above \\\"The experiments focus on systems where the constraint manifold is relatively straightforward to compute. It would be valuable to test PNDEs on \\\"less trivial\\\" systems with high-dimensional constraints or where the constraint manifold has nontrivial topology maybe? \\\" can you maybe design such a larger instance and maybe harder topology problem? I would not decline the paper based on this but I do think it would massively strengthen the paper. \\n\\nNote the unique typos I found was that no $ sign was used in a couple of T_uM instances, just make sure you fix this. \\n\\nBy addressing these points, the paper would be strengthened in terms of positioning within the existing literature, theoretical rigor, empirical validation, and clarity, ultimately enhancing its significance and impact on the field and would make it a super strong paper for ICLR 2025.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
2ATD8a8P3C
Conformal Structured Prediction
[ "Botong Zhang", "Shuo Li", "Osbert Bastani" ]
Conformal prediction has recently emerged as a promising strategy for quantifying the uncertainty of a predictive model; these algorithms modify the model to output sets of labels that are guaranteed to contain the true label with high probability. However, existing conformal prediction algorithms have largely targeted classification and regression settings, where the structure of the prediction set has a simple form as a level set of the scoring function. However, for complex structured outputs such as text generation, these prediction sets might include a large number of labels and therefore be hard for users to interpret. In this paper, we propose a general framework for conformal prediction in the structured prediction setting, that modifies existing conformal prediction algorithms to output structured prediction sets that implicitly represent sets of labels. In addition, we demonstrate how our approach can be applied in domains where the prediction sets can be represented as a set of nodes in a directed acyclic graph; for instance, for hierarchical labels such as image classification, a prediction set might be a small subset of coarse labels implicitly representing the prediction set of all their more fine-descendants. We demonstrate how our algorithm can be used to construct prediction sets that satisfy a desired coverage guarantee in several domains.
[ "Conformal Prediction", "Structured Prediction", "Integer Programming" ]
Accept (Poster)
https://openreview.net/pdf?id=2ATD8a8P3C
https://openreview.net/forum?id=2ATD8a8P3C
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zR8eFtmfP6", "xuywlW25U7", "x4YZvqv9nn", "tomk6KuKpj", "stYe4mfr87", "nVEXrxTqxo", "mUeTZNZBXp", "jeBywPJNTL", "iKBinYFjm9", "etS2ixdxel", "e4kSOxp4oE", "bUDpzDgq4C", "aoLs03oQhb", "af5ONwP7hQ", "XhB161oa6t", "OYfYxMzrg6", "NVl0HsF8yL", "I5BByqGT3o", "HTlCNLRWjp", "7vJHTYKnR6" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1732222138610, 1730784125646, 1733101769262, 1732163739589, 1733077885813, 1732162424256, 1732163260560, 1733045627867, 1732586570317, 1732163946354, 1735033910915, 1730685605189, 1733100597504, 1733100679549, 1730783048717, 1732491300479, 1733176071597, 1737523696545, 1732161915622, 1732161380170 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Reviewer_ydCP" ], [ "ICLR.cc/2025/Conference/Submission5291/Reviewer_FJWK" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Reviewer_ydCP" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Area_Chair_gbVz" ], [ "ICLR.cc/2025/Conference/Submission5291/Reviewer_FJWK" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Reviewer_JTJU" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ], [ "ICLR.cc/2025/Conference/Submission5291/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We would like to add an additional comment regarding the question on the sensitivity of $m$. We apologize for omitting this in the previous response.\\n\\nThe sensitivity of the hyperparameter $m$ depends on the DAG structure and the performance of the underlying model. Across the four tasks we evaluated, $m$ is more sensitive in the question-answering problem compared to the other experiments, due to the more ambiguous problem setting and poorer underlying model performance. Since all nodes tend to have similar probability mass usually without any dominant ones, changes in $m$ can significantly affect the selected nodes when $m$ is small (e.g. 1 or 2; see Figure 4a, 4b in our paper). However, it is a common behavior that $m$ becomes less sensitive to the prediction set as it becomes larger (e.g. all tasks tend to exhibit smaller changes in prediction set size when $m$ is increased from 4 to 8).\"}", "{\"summary\": \"This paper proposes a framework for conformal structured prediction i.e., conformal prediction in the structured prediction setting. The proposed framework outputs structured prediction sets that achieve marginal or PAC coverage guarantees while minimizing prediction set size. In the context of a set of nodes in a directed acyclic graph, the prediction set as a small subset of coarse labels corresponds to the prediction set of fine-grained descendants of the coarse labels. The paper presents empirical analysis of the approach in three domains to demonstrate the performance of their approach.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-organized for the most part.\\n2. The paper is technically sound in its description of problem formulation and the marginal and PAC guarantees.\\n3. Construction of prediction sets in the structured prediction setting and in the context of nodes in a directed acyclic graph is an important problem.\", \"weaknesses\": \"1. Missing discussion of important related work [1, 2]: I believe the paper misses citing and comparison with important related work on conformal risk control [1]. [1] considers hierarchical image classification in ImageNet similar to the paper and controls the graph distance between nodes. Additionally, RAPS method in [2] is a conformal prediction method that introduces regularization to encourage smaller and stable sets, and is worth comparing to given the focus of the paper on reducing average set size.\\n2. The empirical evaluation can certainly benefit from more analysis. In the current form, the contribution and significance of the method are not demonstrated very clearly:\\n - It is hard to understand the utility of the method without comparison with more baselines. I believe doing this is especially possible for the marginal guarantees. Qualitative comparison of the prediction sets will also help demonstrate the utility of structured prediction sets. I see the paper discusses one example in the main text, however there is certainly value in adding more examples in this case (also mentioning the error level used for standard conformal prediction and other details for fair comparison).\\n - Following from above, I appreciate Table 1 in the paper as it helps understand the influence of hyperparameters better. I would suggest adding similar examples for other datasets as well.\\n\\n3. The motivation of the paper is not very clear in the beginning and only becomes clearer as the method and examples are discussed later in the paper. While the introduction has sufficient details about the method, I would suggest making the motivation for structured prediction sets clearer early on.\\n\\n\\n\\n\\n**Minor comments:**\\n1. L60: parameter $\\\\tau$ has not been defined and referenced early I believe without sufficient context here.\\n2. Similar comment for Figure 2. The caption makes reference to $\\\\tau$, whereas the notation has not been introduced earlier in text or in the caption.\\n3. L306: typo/incomplete -> (2)\\n4. L416-417: possibly missing word after \\u2018values\\u2019; \\u201c in contrast, for the PAC guarantee, coverage for all values within one standard deviation...\\u201d\\n\\n[1] Anastasios Nikolas Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. Conformal Risk Control. International Conference on Learning Representations, 2024.\\n\\n[2] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty Sets for Image Classifiers using Conformal Prediction. International Conference on Learning Representations, 2021.\", \"questions\": \"How should $m$ be selected in practice? From the experiments, this selection appears as an important choice for the quality of prediction sets, however the paper lacks discussion on this aspect.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for addressing all the comments. I will maintain my positive score of acceptance.\"}", "{\"comment\": \"(continued from the previous comment)\\n\\nAs can be seen, the MNIST 2-digit and ImageNet problems exhibit similar computation times, both demonstrating fast solve times. As explained in the paper, the MNIST-2 tree has a depth of 3, 100 leaves, and 111 nodes, whereas the ImageNet tree is a lot larger, with 18 layers, 1000 leaves, and 1816 nodes. In our framework, increasing the scale of the DAG does not necessarily result in increased computation time for the IP problem.\\n\\nHowever, when extending to the MNIST 3-digit problem, the computation time for the IP slightly becomes slower. The MNIST-3 tree has a depth of 4, 1000 leaves, and 1111 nodes, which significantly increases the number of nodes compared to the MNIST-2 tree and notably increases the tree density compared to the ImageNet tree. The computation for IP can become intensive as both the number of nodes and the density of the DAG scale up. A practical strategy to alleviate this computational burden is to simplify the DAG by removing some internal nodes while preserving the overall hierarchy. In particular, if a node $v$ is removed, its parent nodes become the parents of each of $v$'s children. This approach allows us to maintain the structured prediction set property while improving computational efficiency.\\n\\nFinally, the structure for the question-answering task is not a tree but a DAG, where a single node can have multiple parents. The DAG structure used for this task has 51 layers, 51 leaves, and 650 nodes, making it relatively sparse compared to structures for the other three domains. In this case, there is a small jump in computation time when $m = 2$, but the overall computation time remains manageable.\\n\\n**Have you considered comparing this approach to other recent extensions of conformal prediction tailored to structured outputs or complex tasks (e.g., those applied to natural language or image data)?**\\n\\nWe have added a comparison to [1], which considers a tree structure in the code domain (specifically, the abstract syntax tree), since this approach also targets conformal structured prediction in a specific domain. We have extended their approach to general structured prediction settings to perform our comparison; see below for details.\\n\\nAnother paper applying conformal prediction to natural language is [2]; however, they still use sets of samples, which may suffer from lack of interpretability. Finally, [3] considers conformal prediction for question answering; their algorithm is closely related to the learn-then-test algorithm. However, they do not consider a tree structure of structured prediction sets; instead, they only consider a single sequence of increasingly coarse-grained labels. Thus, their approach is not applicable to our problem formulation.\\n\\nNow, we return to the comparison to [1]. While the strategy proposed in [1] is specialized to the code domain, we generalize their framework to apply to arbitrary DAG structures. Our results (below) show that we significantly outperform this baseline, both in terms of prediction set size and computational cost. The main shortcoming of this approach is that it leverages existing PAC prediction set algorithms, which require that the monotonicity assumption holds. Thus, their algorithm restricts the structure of the prediction sets across different values of $\\\\tau$ to enforce monotonicity. In contrast, our approach proves a novel conformal prediction bound (Theorem 3.2 in our paper) to avoid the need for monotonicity. We show results for the question-answering task here (prediction set sizes: https://ibb.co/rp2VWqs; computational cost: https://ibb.co/ss1B9wp). These plots show the prediction set sizes using the same annotations as Figure 4 of our paper. The baseline results are represented by dashed lines. For both prediction set size and computational cost, our approach outperforms the baseline for almost every parameter setting, and significantly so for $m=2$ and $m=4$. Results in other domains exhibit similar trends, and we will add them in an updated version of the paper (which we plan to submit by the end of the rebuttal period).\\n\\n[1] A Khakhar, et al. PAC Prediction Sets for Large Language Models of Code, ICML 2023.\\n\\n[2] V Quach, et al. Conformal Language Modeling, 2023.\\n\\n[3] C Mohri, T Hashimoto. Language Models with Conformal Factuality Guarantees, 2024.\\n\\n(continued on next comment)\"}", "{\"comment\": \"Thanks for taking the time to read our response and updated paper!\\n\\nWe did not have time during the response period to improve the clarity of our paper but we will work hard to do so for our next revision.\"}", "{\"comment\": \"Thank you for taking the time and effort to read and provide comments on our work. We provide answers to the questions below.\\n\\nAs with the broader conformal prediction literature, our goal is not to achieve better AUC, but to provide statistical guarantees on the coverage of our algorithm. In fact, the conformal prediction setting is closely related to the P/R curve: \\u201ccoverage\\u201d is equivalent to recall, and \\u201cprediction set size\\u201d is closely related to precision (in particular, if we ignore miscoverages, then precision set size = 1/precision; it is straightforward to modify this equation to account for miscoverages). The key distinction is the focus on obtaining *provable* finite-sample guarantees on the coverage (a.k.a., recall). Our main contribution is an algorithm for constructing a conformal predictor that satisfies this guarantee. While we propose an algorithm for solving this problem, we emphasize that it is much more generally applicable than just hierarchical label spaces; instead, we are targeting general structured prediction algorithms. Thus, we do not believe comparisons to existing approaches for hierarchical label prediction are useful comparisons for understanding the properties of our approach.\\n\\nNevertheless, we appreciate the reviewer\\u2019s suggestion to include a comparison with a baseline to highlight our method\\u2019s contribution. To this end, we have compared to a baseline from the conformal prediction literature.\\n\\nWe compare our method with a baseline strategy adapted from [1]. While the strategy proposed in [1] is specialized to the code domain, we generalize their framework to apply to arbitrary DAG structures. Our results (below) show that we significantly outperform this baseline, both in terms of prediction set size and computational cost. The main shortcoming of this approach is that it leverages existing PAC prediction set algorithms, which require that the monotonicity assumption holds. Thus, their algorithm restricts the structure of the prediction sets across different values of $\\\\tau$ to enforce monotonicity. In contrast, our approach proves a novel conformal prediction bound (Theorem 3.2 in our paper) to avoid the need for monotonicity. We show results for the question-answering task here (prediction set sizes: https://ibb.co/rp2VWqs; computational cost: https://ibb.co/ss1B9wp). These plots show the prediction set sizes using the same annotations as Figure 4 of our paper. The baseline results are represented by dashed lines. For both prediction set size and computational cost, our approach outperforms the baseline for almost every parameter setting, and significantly so for $m=2$ and $m=4$. Results in other domains exhibit similar trends, and we will add them in an updated version of the paper (which we plan to submit by the end of the rebuttal period).\\n\\nFinally, to understand the relationship to the P/R curve, consider the this scatter plot (https://ibb.co/ZBgcpqS), which shows the relationship between recall (i.e., coverage rate) and precision (i.e., average inverse prediction set size) for the question answering task, with $m=4$ for both methods. As can be seen, our approach significantly outperforms the baseline.\\n\\n[1]: Adam Khakhar, Stephen Mell, and Osbert Bastani. Pac prediction sets for large language models of code. In International Conference on Machine Learning, In International Conference on Machine Learning, pp. 16237\\u201316249. PMLR, 2023.\"}", "{\"comment\": \"Thank you for taking the time and effort to read and provide comments on our work. We provide answers to all the remarks and questions below.\\n\\n**How does the proposed method for conformal structured prediction fundamentally differ from or improve upon prior hierarchical prediction approaches?**\\n\\nThank you for sharing this related work. We note that our goals are significantly different from the goals in [1]. In particular, the goal in [1] is to find the prediction set with the highest probability mass (as indicated by Eq. (3) in [1]), whereas our goal is to find the smallest possible prediction sets under a constraint that the coverage rate meets some desired level. As indicated by Table 2 of [1], their approach would obtain much lower coverage (e.g., <50%), whereas our results demonstrate that our approach achieves the desired coverage rate.\\n\\n[1] T Mortier, et al. Set-Valued Prediction in Hierarchical Classification with Constrained Representation Complexity, 2022.\\n\\n**Conduct tests in more complex environments. (e.g. multi-label classification, multi-class classification in healthcare, or real-world document settings).**\\n\\nThank you for this suggestion. We have added a new experiment to demonstrate the application of our framework to predict the emotion labels in a given piece of text. In particular, we use the GoEmotion dataset [2], which consists of 27 emotion categories annotated on 58,000 English Reddit comments. They also provide a hierarchical structure on these categories (Figure 2 in [2]). We show coverage rates and average prediction set sizes here (https://ibb.co/pvttLc9) using the same format as the other results in our paper. We will add these results to an updated version of our paper.\\n\\n[2] Demszky, D., Movshovitz-Attias, D., Ko, J., Cowen, A., Nemade, G., & Ravi, S. (2020). GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 4040-4054).\\n\\n**The framework assumes that the label space is represented by a DAG. How does this assumption impact generalizability to label structures that are non-hierarchical, cyclical, or have overlapping dependencies?**\\n\\nWe emphasize that our DAG structure is a structure on the space of prediction sets, and can differ from the structure of the label space. In applications where the label space has a tree or DAG structure, we can naturally consider structured prediction sets that conform to this structure, but this is not a requirement. For instance, if the label space has a graph structure, we could still construct prediction sets representing sets of labels, and impose a DAG structure on these prediction sets (e.g., based on set inclusion). Thus, our approach can be flexibly applied to more complex domains. If you have any suggestions about specific kinds of label structures that might be of interest, we are happy to discuss them in detail.\\n\\n**Could you elaborate on practical scenarios or domains where marginal coverage versus PAC coverage would be preferable? How should a practitioner decide between the two guarantees in real-world settings?**\\n\\nThe main difference between the two is the margin of safety that is provided. Marginal guarantees are on average over both the training set and the new examples. Thus, for a given training set, the resulting classifier might fail to satisfy the guarantee. In contrast, PAC prediction sets hold with high probability over the training set. This difference is illustrated by comparing the coverage rate plots for the two different approaches in our paper. When using marginal guarantees, the average coverage across different random seeds is above the desired coverage level, but any individual random seed may fall above or below this level. In contrast, for PAC guarantees, the coverage is above the desired coverage level for almost all random seeds.\\n\\nGiven these properties, PAC coverage generally provides greater reliability than marginal coverage. Thus, in domains where it is critical for the deployed model to satisfy the coverage guarantee, PAC prediction sets should be used; otherwise, marginal guarantees may suffice.\\n\\n**Integer programming (IP) can be computationally intensive, particularly for large DAGs. Have you measured or benchmarked the runtime performance and scalability of the IP formulation, especially in tasks with larger label hierarchies?**\\n\\nThe plot here (https://ibb.co/2y276tG) illustrates the average time required to solve each IP problem as we vary $m$ while fixing $\\\\epsilon=0.1$, for each of our four domains.\\n\\n(continued on next comment)\"}", "{\"comment\": \"Thank you for including discussion of relevant related work. I also appreciate the additional empirical analysis. The paper can certainly benefit from more examples and greater clarity in writing to better motivate the work (as I suggested earlier as well). However, given the responses to the raised concerns, I would like to update my score towards acceptance.\"}", "{\"title\": \"Summary of Revision Updates\", \"comment\": \"Dear Reviewers,\\n\\nWe have uploaded a revision of the paper, incorporating our responses to all the reviews from the previous round. The revised texts are written in blue. Below are the major updates:\\n1. We clarified the definition of $\\\\tau$ and the error level used in Figure 1 in the **Introduction**.\\n2. We expanded the analysis in the **Related work** section and included the papers suggested by the reviewers to highlight how our work improves upon prior studies.\\n3. We further clarified the generalizability of our DAG structure in representing the space of prediction sets in **Section 4**.\\n4. We included a new experiment in the domain of emotion prediction and described the baseline strategy used for comparison in **Section 5.1**.\\n5. In **Section 5.2**, we added:\\n - A comparison of results with the baseline,\\n - An analysis of difference between marginal and PAC guarantees and how a practitioner should decide between the two,\\n - An discussion on the sensitivity to the hyperparameter $m$, and\\n - How $m$ should be selected in practice.\\n6. We updated the quantitative plots by removing the case for $\\\\epsilon=0.4$ to focus on the more pivotal value of $\\\\epsilon=0.15$ and added baseline results to the original prediction sets size plots. (The plots are the same as the ones we provided earlier in the responses.)\\n7. In the **Appendix**, we added:\\n - A discussion on computational cost and scalability,\\n - Analyses of additional qualitative examples,\\n - Multiple plots showcasing quantitative results from the new experiments, along with baseline comparisons, and\\n - A helpful illustration demonstrating the relationship between our study and the P/R curve literature.\\n\\nWe hope this revision addresses your concerns. If you have any further questions or comments, we would be happy to discuss and address them before the discussion period ends. We look forward to any thoughts you may have!\"}", "{\"comment\": \"(continued from the previous comment)\\n\\n**How sensitive is the framework to the hyperparameter $m$ (the maximum number of nodes in the prediction set)? Is there a recommended method for tuning $m$ based on the domain or task?**\\n\\nIn general, the choice of $m$ depends on the needs of the given application domain. It governs the trade-off between the interpretability and granularity of the resulting prediction sets. Specifically, larger values of $m$ allow more labels to be included in the set, often capturing finer-grained categories such as \\u201cattire\\u201d or \\u201cBlenheim spaniel\\u201d. Conversely, smaller values of $m$ result in sets with fewer labels, which can be more interpretable for users. We suggest that practitioners try different values of $m$, and manually examine the resulting prediction sets to determine which choices offer the best tradeoff between interpretability and coverage.\"}", "{\"metareview\": \"The paper tackle ways to efficiently apply conformal prediction to structured prediction.\\nIn the proposed framework, labels are organized in a Directed Acyclic Graph (DAG). Each node corresponds to a label or a set of labels that represent either coarse-grained categories (e.g., \\\"Animal\\\") or fine-grained categories (e.g., \\\"Dog,\\\" \\\"Cat\\\") while Edges capture hierarchical or inclusion relationships between labels. This ensures that prediction sets can balance granularity, interpretability, and coverage guarantees, making it suitable for structured prediction tasks.\\n\\nThe core technical contribution lies in formulating the selection of prediction sets as an integer programming problem aimed at minimizing the size of the prediction set. To ensure coverage, the approach sums the probabilities of leaf nodes covered by the selected nodes in the DAG. Marginal coverage guarantees are achieved by leveraging a previously established method that applies learn-then-test techniques to estimate thresholds for the scoring function.\\n\\nComputational time is a core issue for such approach and deserves a clear description in the main paper in my opinion (not in the appendix).\", \"additional_comments_on_reviewer_discussion\": \"The reviewers appreciated the novel framework for structured conformal prediction, highlighting its ability to construct interpretable prediction sets with formal marginal and PAC coverage guarantees. Strengths included the generalizability to structured output spaces (e.g., DAGs) and applications to tasks like MNIST, ImageNet, and question answering. Initial weaknesses identified included missing comparisons to related works (e.g., Conformal Risk Control, RAPS), limited experimental baselines, and unclear motivation in the introduction. The authors addressed these by adding comparisons, qualitative examples, and a new experiment on the GoEmotion dataset, while clarifying hyperparameter sensitivity and computational performance. While reviewers noted that writing clarity and real-world applicability could be further improved, most leaned toward acceptance, recognizing the framework\\u2019s theoretical contributions and practical value.\"}", "{\"summary\": \"The paper \\\"Conformal Structured Prediction\\\" introduces a novel framework to extend conformal prediction methods to complex, structured output spaces. Conformal prediction typically provides sets of labels with a high probability of containing the true label, giving a quantified uncertainty. However, for tasks involving structured or hierarchical outputs\\u2014such as text generation, image classification with hierarchical categories, or question answering with date ranges\\u2014traditional conformal methods produce prediction sets that can be large and hard to interpret.\\n\\nThe authors propose a method to generate interpretable structured prediction sets using a conformal predictor that works within a directed acyclic graph (DAG) representing the label hierarchy. This approach maintains coverage guarantees while reducing prediction set complexity. The paper introduces algorithms for efficiently determining the smallest structured prediction set that meets a specified confidence level, and it adapts these methods to provide both marginal and Probably Approximately Correct (PAC) guarantees.\\n\\nThe authors demonstrate the utility of their approach through experiments on MNIST digits, hierarchical ImageNet classification, and a question answering dataset focused on years as answers. The results show that their framework achieves desired coverage rates while keeping prediction sets concise and interpretable.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Extension of conformal prediction to structured outputs using DAGs, combining conformal prediction with hierarchical representations.\\n2. Rigorous theoretical development with both marginal and PAC coverage guarantees, validated through experiments in diverse domains.\\n3. Generally well-organized with clear explanations and helpful visual aids, making complex concepts accessible.\\n4. Addresses an important gap, potentially impacting applications that require structu\", \"weaknesses\": \"1. While the paper\\u2019s application of conformal prediction to structured outputs is valuable, similar approaches have been explored in hierarchical classification and structured prediction. For instance, previous works have used hierarchical structures (e.g., DAGs or trees) to improve interpretability in label prediction. The paper could benefit from a more thorough comparison to these existing methods, as well as a deeper explanation of what sets its approach apart. Highlighting any unique technical advancements in how the proposed framework constructs prediction sets in hierarchical settings (e.g., advantages of the DAG-based approach) would further clarify its contributions.\\n\\nSet-Valued Prediction in Hierarchical Classification with Constrained\\nRepresentation Complexity. Mortier et al. (2022): This work focuses on hierarchical classification with constrained set-valued predictions, utilizing hierarchical label spaces in which classes are structured in a directed acyclic graph (DAG) or tree. It emphasizes creating interpretable predictions that adhere to hierarchical constraints, much like structured conformal prediction but without formal coverage guarantees.\\n\\n\\n2. The experiments are limited to specific domains (MNIST digits, ImageNet, and SQuAD). Although these domains represent a variety of structured prediction tasks, they are relatively controlled environments and may not fully reflect the challenges of deploying conformal structured prediction in more complex real-world applications. For instance, the framework\\u2019s performance on prediction sets for multi-label classification or in contexts with high label ambiguity (e.g., complex multi-class categorization in medical or legal documents) remains untested.\\n\\nDeploying conformal structured prediction in a healthcare setting, particularly for diagnostic tasks with hierarchical or multi-label structures (e.g., identifying conditions or diseases from imaging data or lab results), would offer insights into the model's reliability and interpretability under more variable, high-stakes conditions. This field often requires nuanced coverage guarantees and interpretability to support clinical decision-making.\\n\\nReal-world document classification often involves hierarchical categories (e.g., legal documents, financial reports) and multi-label classifications. Testing in this setting could reveal how well the model scales with complex, unbalanced label hierarchies, providing additional insights into its generalizability to larger, noisy datasets that are typical in business and legal contexts.\\n\\n3. The paper assumes that the label hierarchy can be represented by a DAG, which works well for hierarchical classification but may be restrictive for tasks with overlapping or cyclical dependencies. In complex scenarios where relationships between classes are non-hierarchical or not acyclic, this assumption may not hold, potentially limiting the framework\\u2019s applicability to structured outputs with more intricate dependencies.\\n\\n4. The inclusion of PAC guarantees is a strong point, but the differences between the marginal and PAC guarantees could be better explored. The PAC guarantee is inherently conservative, and the experiments demonstrate that it leads to larger prediction sets in some cases. However, there is little analysis of scenarios where a PAC guarantee might be more beneficial than a marginal guarantee, or vice versa, depending on the task or risk tolerance.\", \"questions\": \"1. How does the proposed method for conformal structured prediction fundamentally differ from or improve upon prior hierarchical prediction approaches?\\n\\n2. The framework assumes that the label space is represented by a DAG. How does this assumption impact generalizability to label structures that are non-hierarchical, cyclical, or have overlapping dependencies?\\n\\n3. Integer programming (IP) can be computationally intensive, particularly for large DAGs. Have you measured or benchmarked the runtime performance and scalability of the IP formulation, especially in tasks with larger label hierarchies?\\n\\n4. Could you elaborate on practical scenarios or domains where marginal coverage versus PAC coverage would be preferable? How should a practitioner decide between the two guarantees in real-world settings?\\n\\n5. Have you considered comparing this approach to other recent extensions of conformal prediction tailored to structured outputs or complex tasks (e.g., those applied to natural language or image data)?\\n\\n6. How sensitive is the framework to the hyperparameter m\\n(the maximum number of nodes in the prediction set)? Is there a recommended method for tuning \\nm based on the domain or task?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking Forward to Your Feedback Before the DDL\", \"comment\": \"Thank you once again for taking the time to review our work. We have carefully addressed the concerns you raised to the best of our ability, and your feedback has been invaluable in improving the quality and clarity of our submission.\\n\\nAs the discussion period is nearing its end, we would greatly appreciate the opportunity to confirm if we have addressed your concerns satisfactorily. We hope to have the chance to discuss this further today before the discussion period closes. Have we addressed all your comments?\"}", "{\"title\": \"Looking Forward to Your Feedback Before the DDL\", \"comment\": \"Thank you once again for taking the time to review our work. We have carefully addressed the concerns you raised to the best of our ability, and your feedback has been invaluable in improving the quality and clarity of our submission.\\n\\nAs the discussion period is nearing its end, we would greatly appreciate the opportunity to confirm if we have addressed your concerns satisfactorily. We hope to have the chance to discuss this further today before the discussion period closes. Have we addressed all your comments?\"}", "{\"summary\": \"The authors propose conformal structured predictions which addresses the interpretability issue of conformal predictions when the output set is complex. Their algorithm also differs from existing algorithms as in conformal structured predictions the search space over $\\\\tau$ is not monotone anymore. This approach can be applied to tasks where the output space can be represented as a directed acyclic graph and has a hierarchical structure. The authors provide formal coverage guarantees using PAC and marginal coverage and evaluate their approach in numbers predictions with MNIST digits, ImageNet classification and temporal Q&A.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"It is an interesting problem, particularly how best to use the external structure of the labels to generate a better 'curve', i.e. recall at given output size.\", \"The experimental setups were quite interesting, e.g. MNIST with number ranges.\", \"The proposed method seems to extend well to DAG spaces (beyond trees). Though I suppose it is still restricted to DAG instead of Graphs to sum up the probs of final leaf nodes.\"], \"weaknesses\": [\"I would love to see a baseline where we dont use the structure at all and instead rely on regular P/R curve characteristics. Does the AUC of this model behave better? It is not clear to me as such.\", \"Even if we do use the external structure and forced to only predict internal nodes in the DAG (as opposed to arbitrary set of leaf nodes), it would still be useful to understand the P/R curve look significantly different with the proposed models. There are plenty of baselines where we can do prediction on internal nodes in addition to leaf nodes.\"], \"questions\": \"(see above)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you for your thoughtful and detailed feedback on our submission. We have carefully addressed your comments and provided clarifications. As the rebuttal deadline approaches, we kindly request you to review our responses to ensure that we have adequately addressed your concerns. If there are any remaining unclear points, we would be happy to provide further clarifications.\\n\\nIf you find that all your questions and concerns have been satisfactorily addressed, we kindly ask you to consider adjusting your scores accordingly.\\n\\nThank you for your time and consideration!\"}", "{\"title\": \"Follow-up before discussion ends\", \"comment\": \"We would greatly appreciate your feedback on whether we have addressed all your comments before the discussion period ends **today**. If your concerns and questions have been resolved, we would appreciate it if you reconsider your score.\\n\\nThank you once again for your review.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"(continued from the previous comment)\\n\\n**Provide examples similar to Table 1 for other datasets**\\n\\nWe provide the table on the above discussed example (the wig picture) in Table 2 (https://ibb.co/GWYSQ3s). Similar to our findings in Table 1, with smaller $m$, our prediction sets contain fewer labels, making them more interpretable, while prediction sets with larger $m$ contain more fine-grained labels. Also, with higher epsilon, our algorithm is allowed to make more errors, resulting in much smaller prediction sets.\\n\\n**Clarify motivation earlier in the paper**\\n\\nWe are happy to update our paper to clarify our goals.\\n\\n**How should $m$ be selected in practice?**\\n\\nIn general, the choice of $m$ depends on the needs of the given application domain. It governs the trade-off between the interpretability and granularity of the resulting prediction sets. Specifically, larger values of $m$ allow more labels to be included in the set, often capturing finer-grained categories such as \\u201cattire\\u201d or \\u201cBlenheim spaniel\\u201d. Conversely, smaller values of $m$ result in sets with fewer labels, which can be more interpretable for users. We suggest that practitioners try different values of $m$, and manually examine the resulting prediction sets to determine which choices offer the best tradeoff between interpretability and coverage.\"}", "{\"comment\": \"Thank you for taking the time and effort to read and provide comments on our work. We provide answers to all the remarks and questions below.\\n\\n**Discuss and compare to related works:**\\n\\n*Conformal Risk Control*\\n\\n*Uncertainty Sets for Image Classifiers using Conformal Prediction*\\n\\nWe thank the reviewer for pointing out these two related works. We want to note the following differences between our proposed algorithm and these works.\\n\\nIn general, the goals in the \\u201cConformal Risk Control\\u201d paper are qualitatively very different \\u2013 their goal is to minimize more general risk functions (instead of prediction set size), whereas our goal is to extend conformal prediction to structured prediction. In Section 3.3, their purpose is to control an alternative risk function based on the hierarchical label structure, while still constructing traditional prediction sets. Indeed, we believe that our approach could be combined with theirs to further improve interpretability of the resulting prediction sets. For this specific task, their algorithm can be viewed as producing a structured prediction set; however, even in this context, their search is constrained to only parent nodes of the class with the highest estimated probability $\\\\hat y$, whereas our search space is much more general (and can be much more flexibly specified by the user).\\n\\nThe goal of the \\u201cUncertainty Sets for Image Classifiers using Conformal Prediction\\u201d paper is also very different from ours. The goal of their paper is to reduce the average prediction set size by adding a regularization term (Eq. (4) in their paper), whereas our goal is to compute structured prediction sets.\\n\\n**Compare with more baselines**\\n\\nWe compare our method with a baseline strategy adapted from [1]. While the strategy proposed in [1] is specialized to the code domain, we generalize their framework to apply to arbitrary DAG structures. Our results (below) show that we significantly outperform this baseline, both in terms of prediction set size and computational cost. The main shortcoming of this approach is that it leverages existing PAC prediction set algorithms, which require that the monotonicity assumption holds. Thus, their algorithm restricts the structure of the prediction sets across different values of $\\\\tau$ to enforce monotonicity. In contrast, our approach proves a novel conformal prediction bound (Theorem 3.2 in our paper) to avoid the need for monotonicity. We show results for the question-answering task here (prediction set sizes: https://ibb.co/rp2VWqs; computational cost: https://ibb.co/ss1B9wp). These plots show the prediction set sizes using the same annotations as Figure 4 of our paper. The baseline results are represented by dashed lines. For both prediction set size and computational cost, our approach outperforms the baseline for almost every parameter setting, and significantly so for $m=2$ and $m=4$. Results in other domains exhibit similar trends, and we will add them in an updated version of the paper (which we plan to submit by the end of the rebuttal period).\\n\\n[1]: Adam Khakhar, Stephen Mell, and Osbert Bastani. Pac prediction sets for large language models of code. In International Conference on Machine Learning, In International Conference on Machine Learning, pp. 16237\\u201316249. PMLR, 2023.\\n\\n**Include more qualitative examples (e.g. Figure 1) with details, such as the error level used for standard conformal prediction.**\\n\\nWe first thank the reviewer for reminding us of the missing error level information. The error level for both the standard conformal prediction and conformal structured set is 0.05 (i.e., the desired coverage level is 0.95).\\n\\nAs suggested by the reviewer, we provided another qualitative example here https://ibb.co/VvRQFM2, comparing our method to the standard PAC prediction set algorithm. We set the error level to 0.05 and the confidence level to 0.99 ($\\\\delta$=0.01), for both the standard PAC prediction set and our algorithm. As can be seen from the example, when constructing the prediction set using the standard PAC prediction set algorithm, the resulting prediction set is much larger than the one constructed by our algorithm. The large prediction set includes labels from very distant categories, making it much harder to interpret compared to having just two coarse-grained labels that summarize the labels. In our prediction set, the different dog breeds have been summarized as simply \\u201cdog\\u201d, and the different man-made artifacts have been summarized as \\u201cartifact\\u201d. The fact that the labels are quite coarse reflects the inherent uncertainty in the prediction for this image.\\n\\n(continued on next comment)\"}" ] }
29sul3tAEa
HyperAdapter: Generating Adapters for Pre-Trained Model-Based Continual Learning
[ "Qizhe Zhang", "Ruichuan An", "Bocheng Zou", "Zhi Zhang", "Shanghang Zhang" ]
Humans excel at leveraging past experiences to learn new skills, while artificial neural networks suffer from the phenomenon of catastrophic forgetting during sequential learning. Efforts have been made to alleviate forgetting by introducing a rehearsal buffer into the model, but this way is impractical in real-world scenarios with data privacy. Recently, pre-trained model-based continual learning methods have provided new insights into addressing this issue by effectively utilizing the powerful representational capabilities of pre-trained models to avoid catastrophic forgetting without a rehearsal buffer. In this work, we propose a novel pre-trained model-based continual learning framework, HyperAdapter, which utilizes a hypernetwork to generate adapters based on the current input, adapting the pre-trained model to the corresponding task. This paradigm requires fewer additional parameters as the number of tasks increases, which is a critical advantage for scaling to long sequences continual learning. Unlike methods that partition task-related knowledge into relatively independent subspaces, it promotes positive knowledge transfer across tasks. Comprehensive experiments across various datasets demonstrate that HyperAdapter consistently outperforms all existing methods and even exceeds the upper bounds of multi-task learning, establishing a new state-of-the-art for pre-trained model-based continual learning. Our code will be released.
[ "hypernetworks", "adapter tuning", "class-incremental learning" ]
https://openreview.net/pdf?id=29sul3tAEa
https://openreview.net/forum?id=29sul3tAEa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tz1IY6aiwS", "mmHsiF7zxZ", "gyJTsH8c2G", "Pst5ZmMPyg", "Omwb5XkbzT", "FLLhv9lCr1" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730847807074, 1730588014396, 1730613773979, 1729170924053, 1730697233998, 1731658339669 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9828/Reviewer_BBsP" ], [ "ICLR.cc/2025/Conference/Submission9828/Reviewer_bCqR" ], [ "ICLR.cc/2025/Conference/Submission9828/Reviewer_j536" ], [ "ICLR.cc/2025/Conference/Submission9828/Reviewer_4wJE" ], [ "ICLR.cc/2025/Conference/Submission9828/Reviewer_c4Eb" ], [ "ICLR.cc/2025/Conference/Submission9828/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a novel rehearsal-free approach for continual learning based on hypernetworks that generate so-called adapters to adapt the pre-trained model to different tasks.\", \"the_idea_is_intuitive_and_at_high_level_is_brain_inspired\": [\"the task dictionary is sort of an episodic memory in the hippocampus\", \"the hypernetwork is sort of the neocortex, storing past knowledge.\", \"task-specific embeddings are updated rapidly; and the general hypernetwork is updated slowly\", \"The empirical results on the introduced by the authors CL-100 benchmark look promising.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"An intuitive high-level connection to the Complementary Learning Systems theory.\", \"A simple approach with promising performance wrt accuracy.\", \"Promising approach wrt scalability - HyperAdapter allows to avoid the excessive number of adapters.\", \"Overall, the paper is well-written and is comprehendable to a wide audience.\"], \"weaknesses\": [\"It is not fully clear how the technical novelty is positioned, as well as what baselines should be included for demonstrating what ideas actually work (e.g. other hypernetworks-basd CL? other rehearsal-free CL approaches?).\", \"Overall, the technical novelty is rather modest: hypernetworks have been used for CL in the past. The ideas of exploring a combination of faster and slower adapting models have been explored in the past. The idea of recognizing already observed/recurrent tasks/concepts have been studies in the past too. (however, the studied combination of ideas in the context of adapting pre-trained models is novel to the best of my knowledge).\", \"The code and the experimental workbench is not available yet. Hence it is not easy to reproduce the results.\"], \"questions\": [\"what are the closest existing approaches to the proposed HyperAdapters, only multiple adapters?\", \"what are the goals of the experiments?\", \"is it established that CL-100 is a better benchmark / good enough to be able to obtain conclusive empirical evidence (wrt the main goal of the experiments)?\", \"why the hyperparameters in Equations 3 and 10 are both set to 0.1\", \"how good/representative the initial pre-trained model should be for the main conclusions to hold?\", \"what is the expected efficiency gain compared to learning more adapters? does it depend on how similar/dissimilar different tasks are?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the problem of catastrophic forgetting in continual learning. The authors introduce a pre-trained model-based continual learning framework, HyperAdapter, which utilizes a hypernetwork to generate adapters based on the current input, adapting the pre-trained model to the corresponding task. A key to the method is that HyperAdapter uses representative features from pre-trained models, eliminating the necessity to know the task identities during inference or the dependence on any rehearsal buffers. Experimentation shows that it outperforms other methods.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The research will be of interest to the ICLR community.\", \"originality\": \"There is currently a lot of interest in avoiding catastrophic forgetting in the continual learning setting. The authors have summarised and categorised the main approaches.\", \"experimentation\": \"The experimentation is carried out on the standard data sets.\", \"reproducibility\": \"I believe that the details are sufficient for reproducibility of the experiments.\", \"weaknesses\": \"Clarity: The paper discusses the main approaches at a high level and fails to clearly describe the key novelty of the proposed approach. Additionally, the comparison of the proposed method with how people learn is repeated in a number of places. The points around this are well known and widely documented. Removing the replication provides space to describe the novelty in more detail and room to discuss the implications of the results.\", \"typos\": \"Please check your paper carefully for typos including the repeated use of \\u201cregularation\\u201d on page 3. A grammar checker will pick up some of the errors.\", \"discussion_of_results\": \"The paper is missing a discussion of the results. Adding this will provide a deeper understanding of the advantages of the approach.\", \"discussion_of_limitations\": \"The paper is missing a discussion of the limitations of the approach and potential ways to address them. Adding this will provide a more balanced presentation of the research.\", \"discussion_of_braider_impact\": \"What are the open problems or future directions related to your work? Adding this to the paper would improve the paper's discussion of broader impact and potential future work.\", \"questions\": \"The values of the hyperparameters are stated on page 7. How does varying the hyperparameter values affect performance?\\n\\nWhat are the memory requirements, and what do they depend on?\\n\\nWhat are the time requirements, and what do they depend on?\\n\\n\\u201cImproving this selection mechanism is left as a direction for future work.\\u201d \\u2013 can you suggest some possible directions for improvement?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a class-incremental continual learning framework HyperAdapter, which uses hypernetworks to generate adapters to adapt a pre-trained model to different tasks. Specifically, it uses the pre-trained model's class embedding as a key to query the task embedding that generates the adapter parameters. Extensive experiments are performed on image classification benchmarks, showing improved performance over other regularization-based and prompt-based CL frameworks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Generates parameters specifically for pre-trained model adapters rather than the entire network, enhancing training efficiency.\", \"Introduces a task embedding dictionary for efficient retrieval of task embeddings for incoming tasks.\", \"Provides a thorough and detailed experimental analysis.\"], \"weaknesses\": [\"The approach appears to be a direct adaptation of a hypernet-based continual learning framework to adapter-based fine-tuning. Task relationships rely solely on the pre-trained model\\u2019s class token embedding for input, making it resemble a method of training separate models for distinct classes, with conditional parameter generation handled by the hypernet. This setup may not effectively handle scenarios where new classes can not be easily matched with labels of the pre-trained model, such as domain-specific tasks. e.g. for complex facial emotion classification tasks, the pre-trained model would give similar class embeddings (e.g. human, face, eye glasses, etc) regardless of which emotion class the image belongs to.\", \"The paper draws an analogy between the proposed method and the brain\\u2019s Complementary learning system (CLS). However, unlike the human brain, which can dynamically adjust its internal representations, such as merging or acquiring new concepts, the task dictionary here has keys fixed by the pre-trained classes, lacking the true flexibility of dictionary learning to adapt and integrate new concepts. It's suggested to consider ways to make the task dictionary more dynamic or adaptive over time.\"], \"questions\": [\"How is the task dictionary initialized? The definition of $q(x)$ in Section 4.1 requires more clarity: is [CLS] a one-hot vector, or does it represent the class token embedding of ViT (which is then multiplied by f(x))?\", \"Does the size of the task dictionary correspond to the number of training tasks (i.e., classification tasks) or the total number of classes across these tasks?\", \"How is Equation 10 optimized? Including the training algorithm or detailed description of the training process of different model parts would be beneficial.\", \"What is the parameter scale unit in Figure 3? Does it measure the parameters of the hypernetwork or the generated model parameters (e.g., U,W) during inference?\", \"How does the proposed method compare with other hypernetwork-based continual learning approaches?\", \"e.g.\", \"Ding, F., Xu, C., Liu, H., Zhou, B., & Zhou, H. (2024). Bridging pre-trained models to continual learning: A hypernetwork based framework with parameter-efficient fine-tuning techniques. Information Sciences, 674, 120710.\", \"Hemati, Hamed, Vincenzo Lomonaco, Davide Bacciu, and Damian Borth. \\\"Partial hypernetworks for continual learning.\\\" In Conference on Lifelong Learning Agents, pp. 318-336. PMLR, 2023.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed pre-trained model-based continual learning method that employs a hypernetwork to generate adapters based on the current input. The proposed method features positive transfer and fewer additional parameters as the number of tasks increases. It outperforms a variety of continual learning methods in many representative benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is essentially well-organized and easy to follow.\\n\\n2. The proposed hypernetwork seems to be a simple but effective strategy, applicable to both adapter-based and LoRA-based parameter-efficient tuning.\\n\\n3. Less requirement for additional parameters is a good feature for continual learning that ensures scalability.\", \"weaknesses\": \"1. As acknowledged by the authors, the hypernetwork itself is a large linear layer, and the use of a separate hyper parameter for each layer results in much more parameter cost. For fairness, it is therefore desirable to compare the total parameter cost with other baselines methods.\\n\\n2. Does the hypernetwork need to be updated in continual learning? If so, how does it overcome catastrophic forgetting?\\n\\n3. The authors only considered one particular pre-trained checkpoint of supervised ImageNet-21K. Does the proposed method apply to other pre-trained checkpoints, especially for self-supervised pre-training?\\n\\n4. The authors compared only a representative selection of pre-trained model-based continual learning methods. It would be more informative to consider other concurrent competitors, such as SLCA (ICCV\\u201923), LAE (ICCV\\u201923), RanPAC (NeurIPS\\u201923), HiDe (NeurIPS\\u201923), etc.\", \"questions\": \"Please refer to the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper tackles the problem of catastrophic forgetting in continual learning, highlighting the limitations of traditional rehearsal buffer methods in data-sensitive contexts. The authors introduce HyperAdapter that employs hypernetworks to generate task-specific adapters for pre-trained models, thereby requiring fewer additional parameters as the number of tasks increases and promoting positive knowledge transfer across tasks. Comprehensive experiments demonstrate that HyperAdapter consistently outperforms existing methods on benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed HyperAdapter leverages hypernetworks to generate task-specific adapters for pre-trained models, addressing data privacy concerns and enabling effective knowledge transfer.\", \"HyperAdapter requires fewer additional parameters as the number of tasks increases, making it suitable for long-sequence continual learning.\", \"Experiments demonstrate that HyperAdapter consistently outperforms some methods in rehearsal-free continual learning.\", \"The paper is clearly written and easy to follow.\"], \"weaknesses\": \"- The core idea proposed in this paper, using hypernetworks to generate model parameters (whether for all parameters, some parameters, or even applied to pre-trained models) to tackle the continual learning problem, has already been extensively explored in the literature [1-6]. This paper merely applies these existing methods in the context of prompting-based continual learning with pre-trained models, which significantly limits its novelty and contribution.\\n- Several of the innovative designs introduced, such as block-wise hyper-adapters, bear strong similarities in motivation and methodology to chunk embeddings and network partitioning discussed in [1]. This further constrains the novelty of the work.\\n- One of the claimed main advantages, \\\"eliminating the necessity of knowing the task identities during inference,\\\" was previously addressed in [1] under the concept of unknown task identity inference. Additionally, the query-key matching mechanism commonly used in prompt-based continual learning to address this issue is a well-established practice [7-9].\\n\\n[1] Continual learning with hypernetworks. ArXiv:1906.00695 2019.\\n\\n[2] Continual learning with dependency preserving hypernetworks. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2023.\\n\\n[3] Continual model-based reinforcement learning with hypernetworks. 2021 IEEE International Conference on Robotics and Automation.\\n\\n[4] Hypernetworks for continual semi-supervised learning. ArXiv:2110.01856 2021.\\n\\n[5] Partial hypernetworks for continual learning. Conference on Lifelong Learning Agents 2023.\\n\\n[6] Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.\\n\\n[7] Bridging pre-trained models to continual learning: A hypernetwork based framework with parameter-efficient fine-tuning techniques. Information Sciences, 2024.\\n\\n[8] Learning to Prompt for Continual Learning. CVPR 2022.\\n\\n[9] DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning. ECCV 2022.\", \"questions\": [\"What fundamental differences exist between hyperadapters and existing methods that utilize hypernetworks for continual learning [1-6], aside from variations in application scenarios? Please clarify the key innovations.\", \"Is it possible to extend this approach to other continual learning settings, such as class-incremental, domain-incremental, or task-incremental learning?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
29p13QihRM
Language-Guided Object-Centric World Models for Predictive Control
[ "Youngjoon Jeong", "Junha Chun", "Soonwoo Cha", "Taesup Kim" ]
A world model is essential for an agent to predict the future and plan in domains such as autonomous driving and robotics. To achieve this, recent advancements have focused on video generation, which has gained significant attention due to the impressive success of diffusion models. However, these models require substantial computational resources. To address these challenges, we propose a world model leveraging object-centric representation space using slot attention, guided by language instructions. Our model perceives the current state as an object-centric representation and predicts future states in this representation space conditioned on natural language instructions. This approach results in a more compact and computationally efficient model compared to diffusion-based generative alternatives. Furthermore, it flexibly predicts future states based on language instructions, and offers a significant advantage in manipulation tasks where object recognition is crucial. In this paper, we demonstrate that our latent predictive world model surpasses generative world models in visuo-linguo-motor control tasks, achieving superior sample and computation efficiency. We also investigate the generalization performance of the proposed method and explore various strategies for predicting actions using object-centric representations.
[ "Object-Centric Representation", "World Model", "Predictive Control" ]
Reject
https://openreview.net/pdf?id=29p13QihRM
https://openreview.net/forum?id=29p13QihRM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wJYB5pnAHM", "vhUTa3Ek2Y", "vYov5NvR17", "r6c0S1QgYr", "qYeK6b6LgY", "nOwrnqhXQv", "kKQQocVLp4", "QHvgF4pIgG", "PDeV8FBP6z", "Ox1JdMCW4k", "IG9nsiI5Yj", "EKpV6nSh6E", "E0rgLcdPxV", "DwPqWYS9hT", "D6IVa8740s", "3S6wQZfoLk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732840697507, 1732841291520, 1732841034445, 1732840935154, 1734745300044, 1730703391219, 1732840997158, 1732840853813, 1729114707368, 1730405619662, 1732841172580, 1732841259231, 1737523728421, 1732840507708, 1730132285255, 1732840748020 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Area_Chair_GUhp" ], [ "ICLR.cc/2025/Conference/Submission5844/Reviewer_NMFa" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Reviewer_Eu5p" ], [ "ICLR.cc/2025/Conference/Submission5844/Reviewer_cCRo" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ], [ "ICLR.cc/2025/Conference/Submission5844/Reviewer_vMaT" ], [ "ICLR.cc/2025/Conference/Submission5844/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your valuable feedback. We are currently running additional experiments to address reviewers\\u2019 concerns, and below are our responses to the specific concerns if answerable without additional experimental results. We will fully cover all concerns during the extended review timeline, thanks for your understanding!\\n\\n\\n> [Weakness 1] The contribution in terms of \\\"object-centric\\\" design feels limited, as it primarily substitute the encoder for SAVi without introducing distinct object-centric innovations.\\n\\n While most object-centric literatures are limited to image reconstruction and segmentation, integration to more practical downstream tasks can enrich the usability of object-centric representation. Therefore, rather than proposing a novel method of object-centric representation, we focus on properly utilizing object-centric representation which is done simply and efficiently using transformers with permutation invariance.\\n\\n\\n> [Weakness 2] The lack of an experiment comparing your proposed model with a VAE-based variant (ours + VAE in Tab.1) makes it difficult to conclusively justify the benefits of slot attention.\\n\\n The reason we did not explicitly include \\u201cours + VAE\\u201d in the main text is that the primary purpose of our world model is not to decode into the RGB space but to predict states in the latent space of slots. We considered it somewhat unnatural to decode into the RGB space and then re-encode it using a VAE. However, your observation is valid, and we are currently conducting additional experiments to address this question. We will provide a response as soon as we have the results.\\n\\n\\n> [Weakness 3] Comparison against video diffusion models would be more appropriate than models like instructpix2pix, as diffusion models are more aligned with the proposed model's multi-frame prediction capability.\\n\\n One of the baseline models, Seer, is a language-conditioned video diffusion model conditioned on 6 frames to generate 10 future frames like our method. \\n\\nThe motivation behind our research stems from studies such as UniSim (Yang et al., 2023) and UniPi (Du et al., 2023), which employ video diffusion models as world models but require substantial time and computational resources. For instance, UniPi utilizes 256 TPU-v4s to train video diffusion models for 2 million steps with a batch size of 2048. Similarly, UniSim uses 512 TPU-v3s to train video diffusion models for 1 million steps over 20 days, processing nearly 821 million data examples. Furthermore, these diffusion models require several sampling steps during inference, which can also become a significant obstacle in practical applications. \\n\\nTo address these challenges, we aimed to implement a world model using a latent predictive approach. To evaluate the time efficiency of our method, we selected Seer (Gu et al., 2023), a latent video diffusion model known for its strength in time efficiency, as a baseline. As you pointed out, SuSiE (Black et al., 2023) may not align perfectly in terms of multi-frame prediction capability. However, if we consider its functionality solely as a world model, it can be seen as more efficient than video diffusion models like Seer due to its ability to generate only the essential frames. Additionally, because it generates frames autoregressively, it tends to produce coarser videos. From this perspective, we concluded that SuSiE, based on instruct2pix2pix, is a suitable comparison point for time efficiency as a generative world model. Its strong time efficiency aligns well with our goal of benchmarking the proposed model against generative methods in this aspect.\"}", "{\"comment\": \"> [Question 1] It\\u2019s interesting that Seer gets basically 0% on all tasks. What are the qualitative failure cases there?\\n\\n In Figure 3 and Figure 6 and 7 in Appendix, Seer fails to predict accurate arm and object locations in the first hand. Though Seer fails to accurately recover image domains of language table environments, if it predicts arm and object information, the action decoder should be able to accurately predict actions. From this failed video prediction, in Figure 8 and 9 in Appendix, action predictions from Seer totally fail to solve the task with behavior of taking meaningless actions.\\n\\n> [Question 2] Since SuSiE was already evaluated on CALVIN, which is simulated, why not evaluate your approach in that setting?\\n\\n To show our method in a more challenging environment, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). Please refer to the general comment for additional experiments. \\n\\n\\nFinally, we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions.\"}", "{\"comment\": \"> [Question 1] Is language table the de-facto setting for studying object-centric control? It seems fairly limited and biased towards object centric approaches since it is clearly possible to discard the background information quite easily. Studying it in cases of ambiguity, where sometimes the background is obvious to ignore and sometimes not would bring more of the community to investigate this topic.\\n\\nSince slot attention maps each input feature to a slot without exception, background is not processed differently or ignored, rather ideally, created as a background slot. Then, the world model learns to predict both objects and the background slot dynamics. The background slot can be ignored when training the action decoder as it learns to attend to necessary information for predicting the action.\\n\\nWhile the language table is in a less complex domain, it being our choice is done considering its multi-object property, availability of language instruction, offline data and evaluation environment and our default setting\\u2019s (SAVi with simple CNN) expressiveness capability. As we expected that our method can be applied when replacing image encoder with a more expressive network, we are currently running additional experiments to address this issue.\\n \\n\\n> [Question 2] In section 4.6, Is a world model really necessary? - Have the authors reported a pixel2action baseline, basically that does the same learning procedure, and except for learning from images directly, extract from some off the shelf network. The current results only ablate the absence of future slots, which makes sense, but that doesn't answer the question generally about needing a world model or not.\\n\\nTo address your general question about whether a world model is necessary, we conducted an additional experiment. Specifically, we used the VAE encoder from Stable Diffusion v1.5 to extract features from the current frame and trained an action decoder using these features combined with the instruction. The results showed that only 1 out of 200 episodes succeeded, reinforcing our claim that future prediction is crucial for control tasks. For more details, please refer to [Answer 4] to Reviewer NMFa04.\\n\\nFinally, we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions.\"}", "{\"comment\": \"> [Weakness 4] Some experimental results would benefit from further analysis: for example, it not clear why using language conditioning for the agent itself is decreasing success rate.\\n\\n As stated in \\u201cDoes adding instructions into the action decoding help?\\u201d ablation, since the world model predicts a slot trajectory that already realizes given language instruction, the slot trajectory is sufficient for predicting actions for the agent. While performance drop is minor (1~4%p) for transformer action decoders in Table 3, redundantly providing instruction information to the agent can rather hinder the training process.\\n\\n> [Weakness 5] Some potentially missing related work in video-based object-centric learning, control with object-centric representation and world models based on the object-centric representations\\n\\nThanks for pointing out some missing related works, we checked and cited them.\\n\\n**Page 1:** Meanwhile, object-centric representation derived from Locatello et al. (2020) has recently garnered significant attention as a method for encoding images or videos in several studies (... Zadaianchuk et al., 2023).\\n\\n**Page 3:** A world model predicts future states based on current environments. Recent advancements in diffusion models have led to active research on video generation-based world models (Du et al., 2024a;b; Yang et al., 2024), but these methods demand substantial data and lengthy training and inference times. Some studies address these challenges by learning dynamics within the latent space, either by extracting full-image representations (Hafner et al., 2019; 2020; 2023; Babaeizadeh et al., 2017; Franceschi et al., 2020) or by using masked inputs (Seo et al., 2023). Others propose object-centric approaches, focusing on state representations (Wu et al., 2022; Collu et al., 2024) or combining states and actions (Ferraro et al., 2023; Feng & Magliacane, 2023). We introduce the first object-centric world model guided by natural language. Our model is more computationally efficient than diffusion-based approaches and outperforms non-object-centric methods in action prediction tasks. Unlike prior studies that train goal-conditioned policies using object-centric representations requiring goal images at test time (Zadaianchuk et al., 2020; Haramati et al., 2024), our approach predicts future states based on instructions and uses these predictions to train an action decoder, enabling manipulation tasks without goal images.\\n\\n\\u2014\\n\\n> [Question 1] What is trained during \\\"Predict future dynamics\\\" Figure 1(b)? If nothing remove \\\"fire\\\" sign near the world model?\\n\\n The figure shows the autoregressive prediction process of the slot world model. Autoregressive prediction is done both in training and inference for long-term consistency of the world model, which means captions of the figure were misleading. We changed the captions to resolve this ambiguity.\\n\\n **Page 2 Figure 1 caption:** (b) More specifically, the world model utilizes the predicted slots from the previous steps to autoregressively predict future slots.\\n\\n\\n> [Question 2] \\\"nature of slot attention not being robust to variable object number\\\" this could be clarified\\n\\n Slot attention needs a predefined number of slots near optimal to properly train the object discovery capability. Original slot attention literature shows experiments of being robust to increase in slot numbers at test time, however, this is when slot attention is already trained with a slot number of maximum object number + 1(background) which is optimal. A large slot number during training causes over-segmentation of objects harming its object discovery capability and robustness to numbers of objects. Since it would be common for a maximum number of objects to be not available (or has no upper bound) for manipulation tasks in reality unlike simulated benchmark environments, it is a critical limitation and promising direction for future research.\\n\\nFinally, we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions.\"}", "{\"metareview\": \"The paper introduces a language-guided, object-centric world model for predictive control, which is both computationally efficient and effective in robotic and autonomous tasks. Using slot attention for object-focused representation and language guidance, it outperforms diffusion-based models in task success, speed, and generalization.\\n\\nWhile the paper tackles an interesting problem with some promising results, reviewers had a number of concerns regarding the novelty and contribution of the approach as well as the baselines considered.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers and authors had a discussion but all reviewers remained negative and had remaining concerns at the end of the rebuttal period.\"}", "{\"summary\": \"The paper introduces a language-guided, object-centric world model for predictive control, which is both computationally efficient and effective in robotic and autonomous tasks. Using slot attention for object-focused representation and language guidance, it outperforms diffusion-based models in task success, speed, and generalization.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Effectively uses SAVi to extract object-centric frame features, enhancing computational efficiency and model accuracy.\", \"Compares against two baseline models (Seer and Susie), highlighting the advantages in efficiency and success rate of the proposed approach.\", \"Demonstrates generalization capabilities to unseen tasks and objects, showing robustness in diverse environments.\"], \"weaknesses\": [\"The contribution in terms of \\\"object-centric\\\" design feels limited, as it primarily substitute the encoder for SAVi without introducing distinct object-centric innovations.\", \"The lack of an experiment comparing your proposed model with a VAE-based variant (ours + VAE in Tab.1) makes it difficult to conclusively justify the benefits of slot attention.\", \"Comparison against video diffusion models would be more appropriate than models like instructpix2pix, as diffusion models are more aligned with the proposed model's multi-frame prediction capability.\", \"The analysis suggesting that future state prediction alone suffices for action decoding is questionable; the low accuracy for \\\"instruction + 0 future steps\\\" (2.5%) compared to near-zero performance for Seer implies that baseline results may lack rigor, potentially outperforming when future states are not predicted.\", \"The dataset used is overly simplistic, limiting the scope of validation for the world model. Testing across multiple, varied environments would better demonstrate the model\\u2019s general applicability and robustness.\"], \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable feedback. We are currently running additional experiments to address reviewers\\u2019 concerns, and below are our responses to the specific concerns if answerable without additional experimental results. We will fully cover all concerns during the extended review timeline, thanks for your understanding!\\n\\n> [Weakness 1] The authors do not have a SlotFormer baseline, which does not use any language conditioning. Given that one of the key claims of the paper is that language conditioned object centric world models help downstream tasks, checking the importance of being language centric is critical. Adding that baseline would be helpful.\\n\\nWe conducted an additional experiment for the suggested SlotFormer baseline replacing LSlotFormer to SlotFormer with identical experiment settings only without language instruction. Slotformer prediction successfully reconstructed in-domain images, but semantically fails to perform block-to-block tasks. Action decoder trained with SlotFormer successed 2/200 (1%) of environment evaluation tasks. \\n\\nThis is because in a language table environment, the agent does not have any information other than language instruction to perform given tasks. One can claim that even without instruction, when successfully learned, the model could be able to generate random A-to-B trajectory resulting in a success rate of 2/(n(n-1)); 2 for B-to-A execution leading to success. However, the predicted trajectories do not show a long-term task solving behavior due to its task ambiguity and inconsistency.\\n\\n> [Weakness 2] For the evaluation of this approach, the authors have used the language table simulation environment, which involves some objects to be manipulated on a table top setting. This makes sense since there is a clear distinction between foreground (the objects) and background, which favors object centric approaches over general video generative models. However, showcasing some other scenarios or evaluation setups where maybe lets say intuitively a video generative model would have an edge, would have been interesting and more convincing to see.\\n\\n We expect video generative models, especially when pretrained, would have an edge in more complex image domains. To show our method in a more challenging environment, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). Please refer to the general comment for additional experiments.\\n\\n\\n> [Weakness 3] Minor: The qualitatives in figure 3 are not the easiest to parse, if the author\\u2019s method works well, but having a video to see the predictions would make the difference much clearer, I couldn\\u2019t find anything in the suppmat.\\n\\n Thanks for pointing out the difficulty in parsing the qualitative results in the figures, we added video prediction visualizations in supplementary materials to make the results clearer and more interpretable.\"}", "{\"comment\": \"Thanks for your valuable feedback. We are currently running additional experiments to address reviewers\\u2019 concerns, and below are our responses to the specific concerns if answerable without additional experimental results. We will fully cover all concerns during the extended review timeline, thanks for your understanding!\\n\\n> [Weakness 1] Overall, the proposed method is a simple modification of the SlotFormer adding language goal conditioned predictions and trained on a large dataset of the demonstrations. On it its own it is not a big problem, if the proposed methods would be studied on diverse and challenging environments and compared with other methods that are state-of-the-art world models\\n\\n> [Weakness 3] It is not clear how the methods compare to the standard baselines on this task: while outperforming diffusion models for video prediction, it is not clear if usage of world model with object-centric representations are comparable or not with state-of-the-art algorithms using the same data for training.\\n\\n To show our method in a more challenging environment, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). Please refer to the general comment for additional experiments.\\n\\nFor the baselines, we considered language conditioned diffusion video/image prediction models as world model baselines. Our motivation is to substitute diffusion world models with more efficient object-centric world models. Although direct comparison between these methods can be difficult due to their difference in design, our approach shows better performance over diffusion models, both when trained from scratch on the target dataset and when fine-tuned from a pre-trained model, while also offering greater computational efficiency.\\n\\nThe primary motivation behind our research stems from studies such as UniSim (Yang et al., 2023) and UniPi (Du et al., 2023), which employ video diffusion models as world models but require substantial time and computational resources. For instance, UniPi utilizes 256 TPU-v4s to train video diffusion models for 2 million steps with a batch size of 2048. Similarly, UniSim uses 512 TPU-v3s to train video diffusion models for 1 million steps over 20 days, processing nearly 821 million data examples. Furthermore, these diffusion models require several sampling steps during inference, which can also pose significant challenges in practical applications. \\n\\nTo address these challenges, we aimed to implement a world model using a latent predictive approach. Therefore, we compare our method against SOTA diffusion-based generative world models as baselines. While we greatly appreciate your suggestion to compare with other types of world models, we believe that doing so could divert the focus from the primary objectives of our work. \\n\\nAdditionally, as shown in Table 2, we provide a comparison of the data usage between diffusion-based generative world models and our approach. Notably, we outperform Seer-S, a version of Seer trained from scratch using the same amount of data as our method. This result further underscores the efficiency and effectiveness of our proposed approach.\\n\\n\\n> [Weakness 2] While the improved performance on the synthetic dataset is encouraging, it is still not clear how the method would perform in more realistic scenarios where both object-centric models and corresponding agents can struggle. As mentioned by authors, recently it was shown that object-centric methods are able to decompose much challenging images or videos (e.g. see DINOSAUR (Seitzer et al. (2023)) for images or VideoSAUR [5] /SOLV (Aydemir et al., 2024) for videos). Thus, it would be important to test how object-centric world models perform in more realistic environments with visual more complex scenarios, e.g. by training LSlotFormer on VideoSAUR or SOLV slots on environments like ManiSkill2).\\n\\n To show our method in a more challenging environment, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). Please refer to the general comment for additional experiments.\"}", "{\"summary\": \"The authors propose to train a language-conditioned latent dynamics model whose state representations are object-centric \\u201cslots\\u201d provided by a frozen pre-trained model. They then train an inverse dynamics model that predicts the actions corresponding to the transitions of the autoregressively-generated latent slot representations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The paper is written quite clearly, I think the authors presented their ideas quite well\", \"I appreciate the ablation discussions in 4.6.\"], \"weaknesses\": [\"Despite the paper being relatively clearly written, I would highly recommend using ICLR 2025\\u2019s extended page limit to increase the size and quality of your visualizations. For instance, for Figure 3, it\\u2019s very hard to see where the cube and moon are in the scene. I likewise cannot see any empirical quality differences between your approach\\u2019s generations and SuSiE\\u2019s.\", \"The approach is not tested on a wide range of tasks \\u2013 only the simulated LanguageTable benchmark. It\\u2019s not at all clear to me that it would generalize to the real world. Given that SuSiE was evaluated both in sim (CALVIN) and on real-world robots on Bridge-like tasks and showed good performance (compared to strong real-world baselines like RT-2), it is unclear if the present paper\\u2019s approach would similarly scale to such more complex tasks.\", \"Similarly, the authors claim: \\u201cHowever, the major drawback of language-guided video-generation models is the requirement of large-scale labeled language-video datasets and the corresponding high computational cost. Therefore, latent predictive models, which abstract video to predict forward in compact latent state spaces, can serve as an alternative from a computational efficiency perspective.\\u201d\", \"If this is true, it seems more sensible to evaluate in the real world, where data is more limited than in sim.\", \"Additionally, SuSiE does show that an off-the-shelf image generation model pre-trained on general image-language data can be fine-tuned to work well with robot data just on existing relatively limited robot datasets. If that\\u2019s the case, it seems highly unclear that sample efficiency is a problem.\", \"\\u201cThe task is to move a block to another block based on the description of the colors or shapes \\u2026 which are the red moon, blue cube, green star, and yellow pentagon.\\u201d This likewise seems very limited \\u2013 I understand that there are generalization experiments, but the Bridge dataset used for SuSiE\\u2019s real-world experiments contain a much wider range of actions and objects, and thus also a much wider range of language (including many noisy labels). It has thus demonstrated to be scalable to a wider range of language and visual entities, which I think would similarly benefit this approach (as it stands, being able to generate latent state trajectories for such a limited number of objects and actions does not say much about its scalability).\", \"As it stands, given that the approach was only evaluated on a single task setting and said setting is not that representative of real-world language-conditioned visuo-motor robotics tasks, I do not think that this approach has sufficiently demonstrated its general applicability. I think more experiments in a wider variety of domains would be very helpful, especially in real world experiments.\", \"Finally, I think it would be important to include results that showcase in what settings visual and object-centric world models each excel or break down. I can imagine some cases wherein image or video generation is bad: for example, if I ask a robot to fetch me something from a closed cabinet, the image generator would have to effectively \\u201cimagine\\u201d what the inside of that cabinet looks like. However, I do not have corresponding intuition for object-centered world models (though it seems like their weaknesses might be quite similar). See last question for more.\"], \"questions\": [\"It\\u2019s interesting that Seer gets basically 0% on all tasks. What are the qualitative failure cases there?\", \"Since SuSiE was already evaluated on CALVIN, which is simulated, why not evaluate your approach in that setting?\", \"In what qualitative settings would object-centered world models be more or less effective than image ones? Do the authors have any intuitive examples of this, and is there any experimental evidence to back that up?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes to extend Slot Former to conditioned on language instruction object-centric dynamics prediction model. Such model could be used for decoding future actions for given state and instruction. Such predictions are in tern used for decoding the best action for the next time step. The paper showed that in the synthetic environment with large dataset, using such structured repression leads to better performance in comparison to using diffusion models for future state prediction. In addition, authors showed that such model is able to generalize to unseen tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and mostly easy to follow.\", \"The authors provide a comparison with several image generation baselines adapted for the robotics domain, showing large gap from them.\", \"The authors study how robust their method to some changes in the environment, such as changing the block type or changing the task to unseen one.\"], \"weaknesses\": \"- Overall, the proposed method is a simple modification of the SlotFormer adding language goal conditioned predictions and trained on a large dataset of the demonstrations. On it its own it is not a big problem, if the proposed methods would be studied on diverse and challenging environments and compared with other methods that are state-of-the-art world models (e.g. [6, 7])\\n\\n- While the improved performance on the synthetic dataset is encouraging, it is still not clear how the method would perform in more realistic scenarios where both object-centric models and corresponding agents can struggle. As mentioned by authors, recently it was shown that object-centric methods are able to decompose much challenging images or videos (e.g. see DINOSAUR (Seitzer et al. (2023)) for images or VideoSAUR [5] /SOLV (Aydemir et al., 2024) for videos). Thus, it would be important to test how object-centric world models perform in more realistic environments with visual more complex scenarios, e.g. by training LSlotFormer on VideoSAUR or SOLV slots on environments like ManiSkill2). \\n\\n- It is not clear how the methods compare to the standard baselines on this task: while outperforming diffusion models for video prediction, it is not clear if usage of world model with object-centric representations are comparable or not with state-of-the-art algorithms using the same data for training. \\n\\n- Some experimental results would benefit from further analysis: for example, it not clear why using language conditioning for the agent itself is decreasing success rate. \\n\\n- Some potentially missing related work in video-based object-centric learning, control with object-centric representation and world models based on the object-centric representations: \\n\\n1. Focus: Object-centric world models for robotics manipulation - also proposed a world model using object-centric representations. (https://arxiv.org/abs/2307.02427)\\n2. Learning Dynamic Attribute-factored World Models for Efficient Multi-object Reinforcement Learning, NeurIPS 2023 (https://arxiv.org/abs/2307.09205) - learns dynamics graph for more efficient policies. \\n3. Self-Supervised Visual Reinforcement Learning with Object-Centric Representations, ICLR 2020 - proposed a goal-conditioned transformer based policy (or action decoder in authors notation), https://arxiv.org/abs/2011.14381\\n4. Entity-Centric Reinforcement Learning for Object Manipulation from Pixels (https://arxiv.org/pdf/2404.01220)\\n5. Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities (extention of SAVi to more complex real-world videos using DINOSAUR), https://arxiv.org/abs/2306.04829\\n\\n\\n6. TD-MPC2: Scalable, Robust World Models for Continuous Control\\n7. PWM: Policy Learning with Large World Models\", \"questions\": [\"What is trained during \\\"Predict future dynamics\\\" Figure 1(b)? If nothing remome \\\"fire\\\" sign near the world model?\", \"\\\"nature of slot attention not being robust to variable object number\\\" this could be clarified\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable feedback. We are currently running additional experiments to address reviewers\\u2019 concerns, and below are our responses to the specific concerns if answerable without additional experimental results. We will fully cover all concerns during the extended review timeline, thanks for your understanding!\\n\\n> [Weakness 1] Despite the paper being relatively clearly written, I would highly recommend using ICLR 2025\\u2019s extended page limit to increase the size and quality of your visualizations. For instance, for Figure 3, it\\u2019s very hard to see where the cube and moon are in the scene. I likewise cannot see any empirical quality differences between your approach\\u2019s generations and SuSiE\\u2019s.\\n\\n Thanks for pointing out the difficulty in parsing the qualitative results in the figures and visualizations, we improved figure visibility to make the results clearer and more interpretable. To aid understanding, we have added a video related to Figure 3 in the supplementary material. The video includes four segments: original.gif (the original video), ours.gif (the video decoded by our method), and seer_s.gif and seer_f.gif (videos generated by the Seer baselines). While Seer is restricted to generating 10 frames, our method, being autoregressive, has no such limitation. Additionally, as per your suggestion, we have increased the size of Figures 3 and 4 to improve readability.\\n\\n However, both Seer and our method appear to lose shape details when decoding into the pixel space in Figure 3. In our case, this is likely due to the limitations of the image decoder, as well as the fixed resolution, which constrains the potential for quality improvement. We believe this limitation is acceptable, as the primary contribution of our method is not in producing high-quality frame decodings but in accurately predicting future states within the latent space. \\n\\n\\n> [Weakness 2] The approach is not tested on a wide range of tasks \\u2013 only the simulated LanguageTable benchmark. It\\u2019s not at all clear to me that it would generalize to the real world. Given that SuSiE was evaluated both in sim (CALVIN) and on real-world robots on Bridge-like tasks and showed good performance (compared to strong real-world baselines like RT-2), it is unclear if the present paper\\u2019s approach would similarly scale to such more complex tasks.\\n\\n To show our method in a more challenging environment, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). Please refer to the general comment for additional experiments.\\n\\n\\n> [Weakness 3] Similarly, the authors claim: However, the major drawback of language-guided video-generation models is the requirement of large-scale labeled language-video datasets and the corresponding high computational cost. Therefore, latent predictive models, which abstract video to predict forward in compact latent state spaces, can serve as an alternative from a computational efficiency perspective.\\n\\n> - If this is true, it seems more sensible to evaluate in the real world, where data is more limited than in sim.\\n> - Additionally, SuSiE does show that an off-the-shelf image generation model pre-trained on general image-language data can be fine-tuned to work well with robot data just on existing relatively limited robot datasets. If that\\u2019s the case, it seems highly unclear that sample efficiency is a problem.\\n\\n We currently have no access to real world demonstration data and physical manipulation devices. Also, complex domain cases like real-world are limited due to our default setting\\u2019s (SAVi with simple CNN) expressiveness capability, which is addressed in additional experiments. Extending and testing in real-world scenarios is a priority for future work once the necessary resources become available.\\n\\nWe considered internet scaled pre-training also as a part of the dataset and computation burden, which can be resolved leveraging an open-sourced model. However, the scale of limited fine-tuning data in their work is not very trivial compared to pre-training data scale. Also, our main contribution is computational efficiency during training and inference.\"}", "{\"comment\": \"> [Weakness 4] \\u201cThe task is to move a block to another block based on the description of the colors or shapes \\u2026 which are the red moon, blue cube, green star, and yellow pentagon.\\u201d This likewise seems very limited \\u2013 I understand that there are generalization experiments, but the Bridge dataset used for SuSiE\\u2019s real-world experiments contain a much wider range of actions and objects, and thus also a much wider range of language (including many noisy labels). It has thus demonstrated to be scalable to a wider range of language and visual entities, which I think would similarly benefit this approach (as it stands, being able to generate latent state trajectories for such a limited number of objects and actions does not say much about its scalability).\\n\\n> [Weakness 5] As it stands, given that the approach was only evaluated on a single task setting and said setting is not that representative of real-world language-conditioned visuo-motor robotics tasks, I do not think that this approach has sufficiently demonstrated its general applicability. I think more experiments in a wider variety of domains would be very helpful, especially in real world experiments.\\n\\nRegarding concerns about language diversity and scalability, we agree that the current experimental setup involves a relatively limited range of objects and language inputs and more comprehensive evaluation of our method's scalability is needed. To show our method in a more diverse environment, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). Please refer to the general comment for additional experiments. Also, as stated in [Answer 3], we currently have no access to real world demonstration data and physical manipulation devices. Extending and testing in real-world scenarios is a priority for future work once the necessary resources become available.\\n\\n\\n> [Weakness 6] Finally, I think it would be important to include results that showcase in what settings visual and object-centric world models each excel or break down. I can imagine some cases wherein image or video generation is bad: for example, if I ask a robot to fetch me something from a closed cabinet, the image generator would have to effectively \\u201cimagine\\u201d what the inside of that cabinet looks like. However, I do not have corresponding intuition for object-centered world models (though it seems like their weaknesses might be quite similar). See last question for more.\\n\\n> [Question 3] In what qualitative settings would object-centered world models be more or less effective than image ones? Do the authors have any intuitive examples of this, and is there any experimental evidence to back that up?\\n\\nSince object-centric representations of individual entities are given as input, the world model explicitly learns relationships between objects such as physical interactions or compositional understanding. Prior works (Heravi et al., 2023; Yoon et al., 2023; Driess et al., 2023) and our experiments empirically show that using object-centric representation can benefit in robotics manipulation tasks compared to non-object-centric features. Also, Intuitively, object-centric world models are expected to excel in cases where there are multiple objects and especially when their motions are independent, for example, swarm robotics or pedestrian monitoring. Though this property is out of our work\\u2019s scope, it is a promising direction for future research.\\n\\nOn the other hand, object-centric representations might be less effective in scenarios where their advantages are diminished, such as environments with very few objects or where one object dominates the scene, reducing the need for relational reasoning. In such cases, where no significant interactions occur, image-based models could perform better. \\n\\nRegarding the example of \\\"imagination\\\" such as a robot predicting the contents of a closed cabinet, we agree that object-centric world models face similar challenges. Their reliance on explicit object representations and interactions could limit performance in highly stochastic or ambiguous environments where generating plausible visualizations is more critical than understanding object relationships.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"General Response\", \"comment\": \"Thanks for all the reviewers' valuable feedback. The main concerns for our work that all reviewers had is that our experiments are conducted in a limited and simple environment setting, lacking diversity and scalability to more complex environments. We are aware that the language table environment is not sufficiently complex to represent the model\\u2019s general applicability, and would like to address this in the general comment.\\n\\nOriginal SAVi with simple CNN image encoder can be applied to complex simulated image domain (MOVi++) with conditioning slot priors on segmentations or bounding boxes and training with optical flow objective from additional prediction network (PWC-Net). As stated in Appendix A.1, we stick to the non-conditional slot prior and reconstruction loss setting of SAVi for its simplicity and fully self-supervised pipeline without additional networks, which limits slot encoder\\u2019s applicability in complex domains. \\n\\nWe have tried using our slot encoder setting(SAVi with simple CNN) to train in Libero dataset and environment, however, expressiveness capabilities for this slot encoder setting couldn\\u2019t able to properly bind objects in complex domains. In theory, since our method is applicable with any slot encoder other than SAVi, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). \\n\\nDuring the rebuttal period, we attempted to extend our methodology by applying SOLV to Libero dataset. However, it has been challenging to complete this within the given period. Unfortunately, our current experiments have not yet demonstrated satisfactory performance when applying SOLV to Libero. Visualizations of the execution results reveal that the robotic arm can identify and move the objects necessary to complete the episode. However, the lack of precision in the actions seems to prevent successful episode completion. \\n\\nThis issue likely stems from the difference in action complexity between the two environments. In the language table setting, actions are represented as simple 2D displacements in the x and y directions, whereas Libero requires more complex 7D actions, including displacements in x, y, z directions, rotation, and the state of the gripper.\", \"we_explored_two_approaches_for_training_solv\": \"training from scratch using only Libero and fine-tuning from the checkpoint trained on the YouTube-VIS 2019 dataset (Yang et al., 2019) provided by the authors. While fine-tuning appeared to improve object segmentation, it did not have a significant impact on the success rate during evaluation.\\n\\nAdditionally, we investigated replacing the Invariant Slot Attention used in SOLV with standard Slot Attention, considering that Invariant Slot Attention might exclude object-related information such as position and size from the slot representation, which might be crucial for manipulation tasks. However, this modification also did not improve episode success. \\n\\nObserving that the world model trained with SOLV slots fine-tuned using Invariant Slot Attention achieved a significantly lower loss while the action decoder\\u2019s loss increased, we hypothesized that the slots lacked critical information for manipulation tasks such as object position, size, and rotation. To address this, we adjusted the training of the world model to also predict relative grid information that reflects object position, size, and rotation. Simultaneously, the action decoder was trained to use both the slots and the relative grid information to predict actions. While this approach slightly increased the world model\\u2019s loss, it resulted in improved action decoder performance. Unfortunately, this improvement did not translate into higher episode success rates. \\n\\nBased on these experiments, we conclude that the primary reason for failure during evaluation lies not in SOLV or the world model but in the action decoder\\u2019s inability to generate sufficiently precise actions. Further refinement of the action decoder is likely needed to overcome this limitation. Nevertheless, as our method is compatible with different action decoders, we believe this issue can be addressed by introducing a more sophisticated action decoder in the future. This would enable our method to handle the increased complexity of Libero\\u2019s action space effectively.\"}", "{\"summary\": [\"The work proposes to inject language control in object centric world models and show its effectiveness in control.\", \"It argues that object centric models, specifically based on slots as studied in the paper, are more efficient and performant than large scale video generation models based on diffusion.\", \"They conduct experiments on a simulated table top manipulation benchmark to justify their method and various design choices.\", \"They present an analysis on how to tune these world models in terms of action decoding, look ahead steps, access to past states to achieve good performance.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and easy to understand.\", \"The problem of building a world model for predictive control is a useful and relevant one to solve.\", \"The authors have ablated the components of their approach fairly well, including how to do the best action decoding, how many past steps to use in the world model etc.\"], \"weaknesses\": [\"The authors do not have a SlotFormer baseline, which does not use any language conditioning. Given that one of the key claims of the paper is that language conditioned object centric world models help downstream tasks, checking the importance of being language centric is critical. Adding that baseline would be helpful.\", \"For the evaluation of this approach, the authors have used the language table simulation environment, which involves some objects to be manipulated on a table top setting. This makes sense since there is a clear distinction between foreground (the objects) and background, which favors object centric approaches over general video generative models. However, showcasing some other scenarios or evaluation setups where maybe lets say intuitively a video generative model would have an edge, would have been interesting and more convincing to see.\", \"Minor: The qualitatives in figure 3 are not the easiest to parse, if the author\\u2019s method works well, but having a video to see the predictions would make the difference much clearer, I couldn\\u2019t find anything in the suppmat.\"], \"questions\": [\"A few questions (some overlapping with Cons. section above):\", \"Is language table the de-facto setting for studying object-centric control? It seems fairly limited and biased towards object centric approaches since it is clearly possible to discard the background information quite easily. Studying it in cases of ambiguity, where sometimes the background is obvious to ignore and sometimes not would bring more of the community to investigate this topic.\", \"In section 4.6, Is a world model really necessary? - Have the authors reported a pixel2action baseline, basically that does the same learning procedure, and except for learning from images directly, extract from some off the shelf network. The current results only ablate the absence of future slots, which makes sense, but that doesn't answer the question generally about needing a world model or not.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> [Weakness 4] The analysis suggesting that future state prediction alone suffices for action decoding is questionable; the low accuracy for \\\"instruction + 0 future steps\\\" (2.5%) compared to near-zero performance for Seer implies that baseline results may lack rigor, potentially outperforming when future states are not predicted.\\n\\n Your observation that it might be possible for baseline methods to achieve success using only the current frame and instruction, without future-predicted slots, is valid, especially since our proposed method demonstrated around a 2.5% success rate under these conditions. To test this hypothesis, we trained an action decoder (policy) using features extracted by inputting the current RGB frame into a VAE, along with the instruction. We evaluated this approach in the same language table setting described in the paper across 200 episodes. The resulting success rate was only 0.5%, supporting our claim that control tasks are challenging without future prediction, relying solely on the current state and instruction. \\n\\nYour implication that the Seer results with future prediction may lack rigor is also reasonable. However, as presented in the main text and supplementary material (Figure 3, Figure 6-7), this is likely due to Seer\\u2019s consistent failure in future prediction. \\n\\n> [Weakness 5] The dataset used is overly simplistic, limiting the scope of validation for the world model. Testing across multiple, varied environments would better demonstrate the model\\u2019s general applicability and robustness.\\n\\n To show our method in a more challenging environment, we conducted additional experiments on more complex simulated environments (Libero) using our method with SAVi and with state-of-the-art video slot encoder model (SOLV). Please refer to the general comment for additional experiments.\\n\\nFinally, we hope our response could address the concerns, and we thank the reviewer again for the helpful comments. We are glad to discuss further comments and suggestions.\"}" ] }