forum_id: string, length 9–20
forum_title: string, length 3–179
forum_authors: sequence, length 0–82
forum_abstract: string, length 1–3.52k
forum_keywords: sequence, length 1–29
forum_decision: string, 22 classes
forum_pdf_url: string, length 39–50
forum_url: string, length 41–52
venue: string, 46 classes
year: date, 2013-01-01 00:00:00 to 2025-01-01 00:00:00
reviews: sequence
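The `reviews` column stores each forum note's content as a JSON-encoded string inside `structured_content_str`, so pulling out structured fields (e.g. reviewer ratings) takes a second `json.loads` pass after the row itself is loaded. A minimal stdlib-only sketch, using a hypothetical abbreviated row whose field names match the schema above but whose values are stand-ins, not real dataset contents:

```python
import json

# Hypothetical row mirroring the dataset schema; values are abbreviated stand-ins.
row = {
    "forum_id": "E1SaL8aK7k",
    "forum_decision": "Reject",
    "venue": "ICLR.cc/2025/Conference",
    "reviews": {
        "note_type": ["official_review", "official_comment", "official_review"],
        # Each entry of structured_content_str is itself a JSON-encoded string.
        "structured_content_str": [
            json.dumps({"summary": "...", "rating": "5", "confidence": "4"}),
            json.dumps({"comment": "Thanks for the rebuttal."}),
            json.dumps({"summary": "...", "rating": "6", "confidence": "3"}),
        ],
    },
}

# Decode the nested JSON strings and collect ratings from official reviews only.
ratings = [
    int(json.loads(s)["rating"])
    for t, s in zip(row["reviews"]["note_type"],
                    row["reviews"]["structured_content_str"])
    if t == "official_review"
]
print(ratings)  # [5, 6]
```

Note that comments and decisions share the same `structured_content_str` list as reviews, so filtering on the parallel `note_type` list is required before decoding review-specific keys.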
E1SaL8aK7k
SafeVision: Efficient Image Guardrail with Robust Policy Adherence and Explainability
[ "Peiyang Xu", "Minzhou Pan", "Zhaorun Chen", "Xue Lin", "Chaowei Xiao", "Bo Li" ]
As image generation models become increasingly prevalent, the need for efficient and transparent guardrails against unsafe content is more critical than ever. Traditional unsafe image classifiers, limited to predefined categories, often misclassify content because they rely on pure feature-based learning rather than semantic reasoning, and they struggle to adapt to emerging threats. The time and resources required for retraining on new harmful categories further hinder their ability to respond to evolving threats. To address these challenges, we propose SafeVision, a novel image guardrail system that integrates human-like understanding and reasoning with scalability. Within SafeVision, we propose an effective data collection and generation framework, a policy-following training pipeline, and a customized loss function. In particular, we propose an efficient diverse QA generation and training strategy to enhance the effectiveness of the training process. SafeVision can follow given safety policies at inference time to guard against new risk categories, thus avoiding expensive retraining, while providing accurate risky-content predictions and precise explanations. SafeVision operates in two modes: 1) a rapid classification mode, and 2) a comprehension mode that provides both classification and human-readable explanations. In addition, considering the limitations of existing unsafe image benchmarks, which contain either only binary or limited categories, we provide VisionHARM-500K, a high-quality unsafe image benchmark comprising over 500k images covering a wide array of risky categories. This dataset significantly broadens the scope and depth of unsafe image benchmarks. Through comprehensive experiments, we show that SafeVision achieves state-of-the-art performance in both efficiency and accuracy, with an accuracy of 91.77% on the VisionHARM-500K test set (17.77% higher than GPT-4o) and an inference time of 0.0979 seconds per image (over 50 times faster than GPT-4o).
SafeVision sets a new standard for comprehensive, policy-following, and explainable image guardrail models, delivering state-of-the-art performance while aligning with human reasoning and enabling scalable adaptation to emerging threats.
[ "AI safety", "Large language model", "Multi modality", "Image moderation" ]
Reject
https://openreview.net/pdf?id=E1SaL8aK7k
https://openreview.net/forum?id=E1SaL8aK7k
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uxRmfPjTGd", "qvO3wtiaa4", "qHSREPL4MM", "pUegmTxQl2", "oFaSE5wA6Y", "n0BrxpDzgA", "el7LcuyTku", "eM5ha8fIFM", "dulyCJMGkp", "c5DrDPho9h", "aP9oUQJH5w", "YeRxYbhbgg", "R3gQH9sXsU", "NuHvF4YPIn", "KihVmi5gL5", "Gj1mkXSgQF", "C02hQd9Qeu", "8HrFMGH8OA", "4Ezr5uU5yW", "1pgFrbubRp" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1732679418705, 1731555116503, 1732426165978, 1737523958337, 1732679652035, 1732679591503, 1733175423350, 1732425434208, 1732425762154, 1732430079159, 1732425231486, 1732679491314, 1732424965930, 1733196198870, 1732426061105, 1730329039821, 1734851281936, 1732425642883, 1730409290863, 1730104764904 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Reviewer_Kfxg" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Reviewer_RhxE" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Reviewer_frKA" ], [ "ICLR.cc/2025/Conference/Submission9077/Area_Chair_ZSUX" ], [ 
"ICLR.cc/2025/Conference/Submission9077/Authors" ], [ "ICLR.cc/2025/Conference/Submission9077/Reviewer_RhxE" ], [ "ICLR.cc/2025/Conference/Submission9077/Reviewer_CZEm" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer Kfxg:\\n\\nWe sincerely appreciate your thoughtful feedback and the time you have invested in reviewing our paper. Please let us know if you have further suggestions or comments. If there are any additional questions or issues you'd like to discuss, we are fully committed to engaging further to enhance our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes an unsafe image classification model, in which it particularly provides several features: 1) the classification results can be coupled with a human-readable explanation (i.e. why such image is classified as unsafe/harmful), 2) zero-shot ability to support user-defined novel classes (i.e. the text description/definition of the novel unsafe class), and 3) the model has fast inference time with having the output in JSON format. Moreover, this paper also proposes a VISIONHARM-500K dataset, which is large-scale, diverse (cover wide range of unsafe categories), and richly-annotated (e.g. explanations and QA-pairs) to support various training objectives.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed method is experimentally shown to have superior performance with respect to different baselines (including four VLM guardrails and nine classifier guardrails) across several datasets (three multi-label datasets and six binary label datasets), with having better trade-off between model performance and computational overhead.\"], \"weaknesses\": [\"The InternVL2-8B itself is already having comparable performance with respect to GPT-4o and the proposed method SafeVision-8B (which takes InternVL2-8B as its backbone) for the novel classes (cf. 
Figure 5), where the additional training procedure or even the model designs in the proposed method seem to not offer better zero-shot transferability.\", \"The comparison might be not fair enough, as the proposed SafeVision is trained on the proposed VISIONHARM-500K dataset where its harmful categories are actually the super-set (or union) of all the other multi-label and binary-label datasets. Moreover, the self-refinement training scheme used in the proposed method actually can be treated as an ensemble framework of utilizing the consensus of several strong VLMs (i.e. Qwen-VL-Chat, InternVL2-26B, LLaVA-v1.6-34B, and the continuously-updated SafeVision model). In summary, the proposed method adopts larger training set, leverages the ensemble framework during training, and is built upon a stronger backbone (i.e. InternVL2), it is hence not surprising to have superior performance than the other baselines, leading to potential concern of unfair comparison (perhaps there should be baselines of training the open-source VLM guardrails on the proposed dataset?).\", \"From the ablation study, it looks like the proposed method is quite sensitive to the hyper-parameter tuning (e.g. critical token weight ratio and the weights of VLMs in the self-refinement training scheme, while the value and the varying schedule of these hyper-parameters are manually set) and the format of few-shot samples.\", \"The design for the decoder of SafeVision for having fast inference is actually not new (i.e. 
having a list of special token in the tokenizer to improve the inference efficiency).\", \"The overall organization seems to be problematic, as the details of dataset collection and proposed method are mostly provided in the supplementary, leading to the concern of self-containment for the main paper.\"], \"questions\": \"The authors should carefully address the concerns as listed in the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer CZEm\", \"comment\": \"Thank you very much for your valuable suggestions and thoughtful feedback. We appreciate your recognition of the superior performance of SafeVison and the contribution of the VisionHARM-500K dataset. Below, we address the specific weaknesses and questions point by point and hope these can help address your concerns.\\n\\n***Q1:** The categories defined by the dataset should be reflected in the main body.*\\n\\n**A1:** Thank you for this valuable feedback. We will add a detailed description of VISIONHARM-500K's categories in Section 3 in our revised submission, VISIONHARM-500K dataset includes 10 categories: Safe, Hate Humiliation Harassment, Violence Harm Cruelty, Sexual, Criminal Planning, Weapons Substance Abuse, Self Harm, Animal Cruelty, Disasters Emergencies, and Political.\\n\\n***Q2**: How does SAFEVISION judge the content of new categories? Is it controlled only by prompts?*\\n\\n**A2**: As shown in our evaluation in Section 5.5.3, using only simple prompt changes will cause performance degradation. SAFEVISION's handling of new categories extends beyond simple prompt control. Our system employs 1) A **dynamic policy framework** that allows a flexible definition of new categories through structured guardrail policies, 2) A **text-based few-shot learning** approach that leverages our pretrained multimodal representations. 
The prompt template in Appendix A.5 is just one component of this comprehensive system. We will clarify this design in the revised paper.\\n\\n***Q3**: In Section 4.2, is the improvement to the tokenizer to add category nouns to the vocabulary library?*\\n\\n**A3**: The tokenizer enhancement encompasses two key innovations: 1) **class tokens** for the predefined unsafe categories. 2) **structural tokens** for faster and more stable formative response generation. Such tokenizer redesign contributes significantly to both accuracy and inference speed improvements. We will expand this discussion in Section 4.2.\\n\\n***Q4**: This work declares the contribution of the dataset, but there is no related open-source plan in the text. Will the code and dataset of this paper be released?*\\n\\n**A4**: Yes, we are committed to fostering reproducible research. Our release plan includes: 1) The complete VISIONHARM-500K dataset with detailed documentation, 2) SAFEVISION's model implementation and training code, 3) Evaluation scripts and benchmarking tools. We will make these resources available through a public GitHub and Huggingface repository as soon as the anonymity ends.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear reviewer CZEm:\\n\\nWe sincerely appreciate your thoughtful feedback and the time you have invested in reviewing our paper. Please let us know if you have any further suggestions or comments. If there are any additional questions or issues you'd like to discuss, we are fully committed to engaging further to enhance our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer frKA :\\n\\nWe sincerely appreciate your thoughtful feedback and the time you have invested in reviewing our paper. Please let us know if you have any further suggestions or comments. 
If there are additional questions or issues you'd like to discuss, we are fully committed to engaging further to enhance our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to authors - Reviewer RhxE\", \"comment\": \"Thanks for the feedback and additional efforts in the rebuttal. After reading the feedback and revised version on the novelty explanation and extra experiments, I decide to upgrade my rating to \\\"6: marginally above the acceptance threshold\\\", even though I still don't believe the novelty of this work is its value. The authors need to clarify the novelty and model details in the experiments (maybe in appendix) for a strong publication, if this work is accepted. Good luck.\"}", "{\"title\": \"Response to Reviewer Kfxg(Part 3)\", \"comment\": \"***Q3**: From the ablation study, it looks like the proposed method is quite sensitive to the hyper-parameter tuning (e.g. critical token weight ratio and the weights of VLMs in the self-refinement training scheme, while the value and the varying schedule of these hyper-parameters are manually set) and the format of few-shot samples.*\\n\\n**A3**: Regarding the critical token weight ratio, our initial plot had a limited y-axis range of only 3 percent, and we only included SafeVision without any other baselines. In the updated figure, we have added GPT-4O and InternVL2 as baselines and tested both SafeVision-2B and SafeVision-8B. We also adjusted the y-axis scale to be more reasonable. Please refer to Section 5.5 Figure 6 for the updated figure. We can observe that while weighted loss does enhance model performance, the accuracy changes within a small range of less than 3 percent. 
This indicates that the model performance is actually quite stable.\\n\\nFor the weights of different VLMs in the self-refinement training scheme, the weight for our model is set as $w \\\\cdot \\\\sqrt{\\\\text{epoch}}$, while the other three VLMs share the same weight, calculated as $\\\\frac{1 - w \\\\times \\\\sqrt{\\\\text{epoch}}}{3}$. Therefore, the only parameter we need to adjust is $w$. Moreover , since $w$ can\\u2019t be too large initially or too small after several epochs, its range is actually limited. We have added a new experiment to demonstrate the influence of $w$, as shown in the table below. We set $w$ to 0.05,0.1,0.15 and 0.2 in the beginning and applied self-refinement training to a subset of the training data over multiple epochs and calculated the percent of remaining data after each epoch. From the results, it is evident that the data removed in each epoch is stable and not significantly affected by the choice of $w$.\\n\\n| Epoch | $w$ = 0.05 | $w$ = 0.10 | $w$ = 0.15 | $w$ = 0.20 |\\n| ----- | ---------- | ---------- | ---------- | ---------- |\\n| 0 | 100% | 100% | 100% | 100% |\\n| 1 | 97.5% | 98.5% | 98.4% | 98.3% |\\n| 2 | 96.5% | 96.4% | 96.2% | 95.2% |\\n| 3 | 96.0% | 95.2% | 95.0% | 93.9% |\\n| 4 | 94.7% | 94.0% | 94.1% | 93.8% |\\n\\nRegarding the format of few-shot samples, we found that using a detailed JSON format yields the best performance for SafeVision, so we continue to use this format in our evaluation. This is due to the inherent nature of the model and can be addressed through further model training, which we plan to explore in future work.\\n\\n***Q4**: The design for the decoder of SafeVision for having fast inference is actually not new (i.e. having a list of special token in the tokenizer to improve the inference efficiency).*\\n\\n**A4**: Thank you for your comments. 
While we acknowledge that using special tokens to improve inference efficiency is not novel in general, we want to clarify that this is not our primary contribution. Importantly, our work is the first to apply this approach specifically within the image guardrail task, where it has led to substantial improvement in both performance and inference speed. This unique adaptation effectively addresses the task\\u2019s specialized requirements and complements our main contributions.\\n\\n***Q5**: The overall organization seems to be problematic, as the details of dataset collection and proposed method are mostly provided in the supplementary, leading to the concern of self-containment for the main paper.*\\n\\n**A5**: Thank you for your valuable feedback. We acknowledge the concern regarding self-containment and agree that the dataset collection and proposed method sections could be better integrated into the main paper. We will revise these sections to ensure they are clearly presented within the main text. Please refer to our updated paper for these modifications.\"}", "{\"title\": \"Response to Reviewer RhxE (Part2)\", \"comment\": \"***Q2.2**: From Figure 5, I am not convinced that the proposed model is significantly better than other models, as the proposed method trained on VisionHARM-500K and tested on VisionHARM-500K. In the new category experiment, we can see GPT-4o is better than this method.*\\n\\n**A2.2**: Thank you for your question. We would like to clarify that the purpose of Table 5 is to demonstrate that the SafeVision training framework does not hurt the performance of the model in unseen categories. Other VLM-as-guardrail methods (LLAVAGuard, LLamaGuard) generate **less than 0.2 F1 scores** in unseen categories, while SafeVision maintains the performance of the original VLM backbone (InterVL-8B) and even achieves a better average performance. 
We will update the paper to clearly include our goal in our revision.\\n\\nRegarding the performance comparison with GPT-4O, we acknowledge that GPT-4O slightly outperforms SafeVision in some unseen categories(cult). However, it is important to note that SafeVision has **only 8B parameters,** and as evaluated in Table 2, the inference time overhead for SafeVision is **15 times** faster than GPT-4O. Moreover, SafeVision outperforms GPT-4O in trained categories. We believe we have successfully demonstrated the advantages of our method, which enables smaller models to achieve performance exceeding the existing state-of-the-art models on existing categories through low-cost fine-tuning, while maintaining the model's performance in untrained categories and greatly improving the model's efficiency.\\n\\nTo further evaluate the model's performance in unseen categories, we conducted an extra evaluation using several public datasets containing novel categories: Bullying[1], Guns[2], Bloody[3], Fire[4], Alcohol[5], Cocaine[5], and Tobacco[5]. This resulted in a large test set containing 3,223 images. We use the F1 score to evaluate the performance of each model. The results are shown in the table below:\\n\\n| Model/Category | Safe | Alcohol | Bloody | Bullying | Cocaine | Fire | Guns | Average |\\n| -------------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |\\n| LLamaGuard3 | 0.258 | 0.086 | 0.685 | 0.000 | 0.000 | 0.099 | 0.000 | 0.349 |\\n| LLaVAGuard | 0.272 | 0.836 | 0.000 | 0.000 | 0.095 | 0.018 | 0.025 | 0.077 |\\n| GPT-4o | 0.680 | **0.932** | 0.942 | 0.453 | 0.773 | 0.671 | 0.997 | 0.908 |\\n| InternVL2 | 0.649 | 0.721 | 0.810 | 0.377 | 0.780 | 0.743 | 0.994 | 0.892 |\\n| SafeVision-8B | **0.727** | 0.887 | **0.961** | **0.504** | **0.824** | **0.789** | **0.997** | **0.929** |\\n\\nSafeVision outperforms GPT-4O in all categories except for a slight lag in the Alcohol category. 
We hope our response clarifies our evaluation goal and demonstrates the performance comparison with GPT-4O, addressing your concerns.\\n\\n[1] [https://huggingface.co/datasets/Zoooora/BullyingAct](https://huggingface.co/datasets/Zoooora/BullyingAct) \\n\\n[2] [https://huggingface.co/datasets/JoseArmando07/gun-dataset](https://huggingface.co/datasets/JoseArmando07/gun-dataset)\\n\\n[3] [https://huggingface.co/datasets/NeuralShell/Gore-Blood-Dataset-v1.0](https://huggingface.co/datasets/NeuralShell/Gore-Blood-Dataset-v1.0) \\n\\n[4] [https://huggingface.co/datasets/EdBianchi/SmokeFire](https://huggingface.co/datasets/EdBianchi/SmokeFire) \\n\\n[5] [https://huggingface.co/datasets/luisf1xc/data_drugs_class](https://huggingface.co/datasets/luisf1xc/data_drugs_class) \\n\\n***Q3**: In Table 1 & 2, citations are necessary for each comparing method.*\\n\\n**A3**: Thank you for pointing this out, we will add citations for all the baselines and datasets in Table 1 & 2. Please refer to our updated paper for these modifications.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their insightful comments and constructive suggestions to improve our paper. We are pleased that the reviewers appreciated the importance of our research topic, recognized the novelty and quality of our VISIONHARM-500K dataset and guardrail model, and found our evaluation comparing SafeVision to existing safeguarding methods comprehensive and clear.\", \"the_reviewers_raised_some_important_points_that_we_will_address_in_detail_in_our_response_and_paper_revision\": \"1. We have designed additional experiments on new categories in our response. The results further demonstrate the effectiveness of our training pipeline in maintaining SafeVision\\u2019s performance in untrained categories while significantly improving its efficiency (**Reviewer Kfxg, RhxE**).\\n2. 
We have clarified the novelty of our training methodology and provided more comprehensive experiments and analysis in our response. The results highlight the effectiveness and superiority of our proposed pipeline and dataset (**Reviewer Kfxg, RhxE**).\\n3. We have included more detailed ablation studies on the hyper-parameters used in our training pipeline in our response and Section 5.5. The results show the robustness of SafeVision against variations in hyper-parameter choices (**Reviewer Kfxg**).\\n4. To address potential concerns about test set leakage, we have provided more details about our training pipeline, where the test set remains completely isolated and is used solely for final evaluation and misclassified instances will be identified exclusively using a separate validation set (**Reviewer frKA**).\\n5. We have compared our approach to handling misclassified instances with prior work. In our approach, we leverage misclassified instances to update the guardrail policy, addressing edge cases through policy evolution and improving the model\\u2019s generalization to new categories (**Reviewer frKA**).\\n6. We have clarified the novelty of our model architecture and analyzed the reason for deliberately avoiding architectural modifications (**Reviewer RhxE**). \\n7. We have enhanced the paper's presentation, formatting, and references based on the detailed feedback (**Reviewer Kfxg, frKA,CZEm**).\\n8. We have provided more details on the dataset categories, how SafeVision handles new categories beyond prompts, specifics of the tokenizer improvements, and our plans for open-sourcing the code and data (**Reviewer Kfxg, CZEm**).\\n\\nWe sincerely appreciate the reviewers' feedback and believe that these revisions have strengthened our paper. 
We hope our responses and the updated manuscript address all concerns and the reviewers will appreciate our work towards building safer AI systems.\"}", "{\"comment\": \"***Q2**: The comparison might be not fair enough...... Moreover, the self-refinement training scheme used in the proposed method actually can be treated as an ensemble framework......leading to potential concern of unfair comparison.*\\n\\n**A2**: Thank you for the valuable suggestions. We would like to address your concerns point by point:\\n\\n***Q2.1**: The comparison might be not fair enough, as the proposed SafeVision is trained on the proposed VISIONHARM-500K dataset where its harmful categories are actually the super-set (or union) of all the other multi-label and binary-label datasets\\u2026. perhaps there should be baselines of training the open-source VLM guardrails on the proposed dataset?*\\n\\n**A2.1**: Thank you for your question! During the evaluation, we also evaluated SafeVision on the test splits from the datasets used to train the other models, as shown in Tables 1 & 2. The results demonstrate that our model consistently outperforms the other models even on their own test sets. For example, LlavaGuard achieves **0.688 accuracy** on the LlavaGuard dataset, while SafeVision attains **0.808 accuracy**. Although our dataset includes their categories, we didn't collect data from identical sources, so their train-test distributions should be more similar to each other than to our method. This just proves that our method and dataset have better generalizability.\\n\\nWe acknowledge the importance of an ablation study to demonstrate the effectiveness of our training pipeline. We used the same backbone model fine-tuned under three different settings: (1) using the VISIONHARM-500K dataset without our training pipeline, (2) using our training pipeline with the LlavaGuard dataset, and (3) using both the VISIONHARM-500K dataset and our training pipeline. 
We use accuracy to evaluate the performance of each model. The results are shown in the table below:\\n\\n| Model | Baseline | VISIONHARM-500K without training pipeline | Llavaguard train set + training pipeline | VISIONHARM-500K + training pipeline |\\n| :------------: | :------: | :---------------------------------------: | :--------------------------------------: | :---------------------------------: |\\n| Llavaguard-13b | 68.9% | 85.7% | 74.4% | 93.0% |\\n| Internvl2-2b | 36.9% | 63.1% | 73.4% | 91.8% |\\n\\nThe results show that even when using the Llavaguard train set instead of VISIONHARM-500K, the model still generates better results with our training pipeline. For instance, the performance of internvl2-2b improves **from 36.9% to 73.4%** when trained on the Llavaguard train set using our pipeline, surpassing its performance when trained on VISIONHARM-500K without the pipeline (63.1%). This suggests that the training pipeline contributes more to the performance than the dataset itself. However, the best performance is achieved when both VISIONHARM-500K and the training pipeline are used together.\\n\\n***Q2.2**: Moreover, the self-refinement training scheme used in the proposed method actually can be treated as an ensemble framework of utilizing the consensus of several strong VLMs (i.e. Qwen-VL-Chat, InternVL2-26B, LLaVA-v1.6-34B, and the continuously-updated SafeVision model)....leverages the ensemble framework during training, and is built upon a stronger backbone (i.e. InternVL2), it is hence not surprising to have superior performance than the other baselines*\\n\\n**A2.2**: We greatly appreciate the valuable comments! To explain, while the models used in our self-refinement training scheme may be stronger on some general task benchmarks, Tables 1 and 2 show that their individual performance on the image guardrail task is not stronger than the final model we trained. 
This highlights that our method has achieved better performance by using only weaker models, which can be considered a significant contribution. Furthermore, we only use InternVL2-2B and InternVL2-8B as the backbone, which are smaller models, yet they still outperform InternVL2-26B after training.\\n\\nIn addition to performance, our proposed method also increases the efficiency of the model for the guardrail task, making it an important contribution that enables low-cost deployment in real-world industry applications.\\n\\nMoreover, since our proposed method consists of a dataset and training pipeline, it can be adopted by almost all VLM models. This means that when stronger models become available in the future, we can replace the current backbone with a better model, making our method scalable.\", \"title\": \"Response to Reviewer Kfxg (Part 2)\"}", "{\"comment\": \"Dear Reviewer RhxE:\\n\\nWe sincerely appreciate your thoughtful feedback and the time you have dedicated to reviewing our paper. Please let us know if you have any further suggestions or comments. If there are any additional questions or issues you would like to discuss, we are fully committed to engaging further to enhance our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you very much for your valuable suggestions and thoughtful feedback. We appreciate your recognition of the superior performance and lower computational overhead of SafeVison. Below, we address the specific weaknesses and questions point by point and hope these can help address your concerns.\\n\\n***Q1**: The InternVL2-8B itself is already having comparable performance with respect to GPT-4o and the proposed method SafeVision-8B (which takes InternVL2-8B as its backbone) for the novel classes (cf. Figure 5), where the additional training procedure or even the model designs in the proposed method seem to not offer better zero-shot transferability.*\\n\\n**A1**: Thank you for your insightful comment. 
We agree that InternVL2-8B, which serves as the backbone for SafeVision-8B, demonstrates comparable performance to GPT-4O and SafeVision-8B in the novel classes, as shown in Figure 5. However, we would like to emphasize that the primary goal of the SafeVision training framework is not to improve zero-shot transferability but to enhance the model's performance in trained categories while maintaining its performance in untrained categories and significantly improving its efficiency. We will update the paper to further clarify this and add related discussion.\\n\\nAs demonstrated in Tables 1 and 2, SafeVision-8B outperforms an even larger model from the same family (InternVL2-26B) and other baselines in trained categories across multiple independent datasets at **15 times faster** than the baseline. \\n\\nTo further evaluate the model's performance in unseen categories, we conducted an extra evaluation using several public datasets containing novel categories following the reviewer\\u2019s suggestions: Bullying[1], Guns[2], Bloody[3], Fire[4], Alcohol[5], Cocaine[5], and Tobacco[5]. This resulted in a large test set containing 3,223 images. We use the F1 score to evaluate the performance of each model. 
The results are shown in the table below:\\n\\n| Model/Category | Safe | Alcohol | Bloody | Bullying | Cocaine | Fire | Guns | Average |\\n| -------------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- |\\n| LLamaGuard3 | 0.258 | 0.086 | 0.685 | 0.000 | 0.000 | 0.099 | 0.000 | 0.349 |\\n| LLaVAGuard | 0.272 | 0.836 | 0.000 | 0.000 | 0.095 | 0.018 | 0.025 | 0.077 |\\n| GPT-4o | 0.680 | **0.932** | 0.942 | 0.453 | 0.773 | 0.671 | 0.997 | 0.908 |\\n| InternVL2 | 0.649 | 0.721 | 0.810 | 0.377 | 0.780 | 0.743 | 0.994 | 0.892 |\\n| SafeVision-8B | **0.727** | 0.887 | **0.961** | **0.504** | **0.824** | **0.789** | **0.997** | **0.929** |\\n\\nSafeVision outperforms GPT-4O in all categories except for a slight lag in the Alcohol category. We hope our response clarifies our evaluation goal and demonstrates the performance comparison with GPT-4O, addressing your concerns.\\n\\n[1][https://huggingface.co/datasets/Zoooora/BullyingAct](https://huggingface.co/datasets/Zoooora/BullyingAct) \\n\\n[2][https://huggingface.co/datasets/JoseArmando07/gun-dataset](https://huggingface.co/datasets/JoseArmando07/gun-dataset)\\n\\n[3] [https://huggingface.co/datasets/NeuralShell/Gore-Blood-Dataset-v1.0](https://huggingface.co/datasets/NeuralShell/Gore-Blood-Dataset-v1.0) \\n\\n[4] [https://huggingface.co/datasets/EdBianchi/SmokeFire](https://huggingface.co/datasets/EdBianchi/SmokeFire) \\n\\n[5] [https://huggingface.co/datasets/luisf1xc/data_drugs_class](https://huggingface.co/datasets/luisf1xc/data_drugs_class)\", \"title\": \"Response to Reviewer Kfxg (Part 1)\"}", "{\"comment\": \"Dear Reviewer RhxE,\\n\\nThank you very much for your thoughtful feedback and for taking the time to review our work. We sincerely appreciate your recognition of our efforts in addressing your concerns. If our paper is accepted, we will certainly further clarify the novelty and model details in our camera-ready version. 
\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer frKA\", \"comment\": \"Thank you very much for your valuable suggestions and thoughtful feedback. We appreciate your recognition of the research topic of SafeVision and the contribution of the VisionHARM-500K dataset. Below, we address the specific weaknesses and questions point by point and hope these can help address your concerns.\\n\\n***Q1**: The proposed self-refinement training involves a testing procedure where the new version of the model is evaluated on the test set and misclassified instances are extracted & analyzed to curate new policies (Line 267 - Line 269). The paper should argue why and how such a procedure avoids test set leakage. If the test set information is encoded in the renewed policy, the procedure could lead to inflated performance on the test set because the model has already captured the exact test information during training. Also, the paper should compare how existing works handle misclassified instances in the test set.*\\n\\n**A1**: We appreciate this important concern about potential test set leakage. To clarify our methodology:\\n\\n- We maintain three distinct datasets: **training, validation, and test sets**. The self-refinement process utilizes only the validation set, not the test set. The test set is completely isolated and is used solely for final evaluation.\\n\\n- For the policy refinement process, misclassified instances are identified exclusively using the validation set. Policy updates are made based on feedback from the validation set, ensuring that the test set remains untouched throughout the entire training process.\\n\\nAdditionally, existing guardrail models do not handle misclassified instances in their training or validation processes. Such a refinement process is a unique contribution of SafeVision. The dynamic approach offers several advantages:\\n\\n- Better handling of edge cases through policy evolution. 
\\n- Improved generalization to new categories.\\n\\n- More robust policy adaptation capabilities.\\n\\nWe will clarify this approach and the difference with the existing method in the revised paper.\\n\\n***Q2:** The paper presentation needs significant improvement. For example, Appendix C.5 refers readers to a null table (Line 1407) and the prompt description on page 20 exceeds the page boundary (Line 1026 - Line 1079). The quotation marks in the prompt description are often monotonously right quotations (page 16, page 19, page 22). The reference list is also not well-curated. For example, the referenced website addresses often far exceed the page boundary and are obscured (Line 665, Line 778). Note that the Llama-Guard Team's paper is referenced as \\\"Team (2024)\\\" in the paper (Line 144, Line 215, ...), which reads strange. Also, there are many lines of unspecified space on page 18 (Line 922 - Line 958). While a few minor errors in the paper presentation will not affect the rating, too frequent observation of them will harm the rating since they are not aligned with the proceeding guidelines and are not beneficial for future readers.*\\n\\n**A2:** Thank you for your thorough review of our paper's presentation. 
We have made comprehensive improvements:\\n\\n**Technical Corrections:**\\n\\nFixed the missing table in Appendix C.5\\n\\nAdjusted prompt descriptions to fit within page boundaries\\n\\nStandardized quotation marks throughout the document\\n\\nProperly formatted website references with appropriate line breaks\\n\\nUpdated the Llama-Guard Team citation to a more appropriate format\\n\\nRemoved unnecessary spaces on page 18\\n\\n**Layout and Formatting:**\\n\\nEnsured all content fit within page boundaries\\n\\nStandardized formatting across all sections\\n\\nImproved reference formatting for better readability\\n\\nFixed all spacing issues\\n\\nPlease refer to our revised submission for these improvements.\"}", "{\"summary\": \"The paper proposes SafeVision, a novel image guardrail system based on Vision-Language Models to detect and comprehend synthetic images. The system consists of two key modules: fast classification for general filtering scenarios and multimodal comprehension for policy-specified guardrails. The framework is evaluated with multiple safeguard scenarios and compared with diverse models. The proposed dataset is sufficiently scalable compared to existing datasets.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The research topic of safeguarding image generators is important and intriguing.\\n2. The proposed dataset is novel with good contributions.\\n3. The evaluation of various existing safeguarding methods is fair.\", \"weaknesses\": \"1. The proposed self-refinement training involves a testing procedure where the new version of the model is evaluated on the test set and misclassified instances are extracted & analyzed to curate new policies (Line 267 - Line 269). The paper should argue why and how such a procedure avoids **test set leakage**. 
If the test set information is encoded in the renewed policy, the procedure could lead to inflated performance on the test set because the model has already captured the exact test information during training. Also, the paper should compare how existing works handle misclassified instances in the test set.\\n\\n2. The paper presentation needs significant improvement. For example, Appendix C.5 refers readers to a null table (Line 1407) and the prompt description on page 20 exceeds the page boundary (Line 1026 - Line 1079). The quotation marks in the prompt description are often monotonously right quotations (page 16, page 19, page 22). The reference list is also not well-curated. For example, the referenced website addresses often far exceed the page boundary and are obscured (Line 665, Line 778). Note that the Llama-Guard Team's paper is referenced as \\\"Team (2024)\\\" in the paper (Line 144, Line 215, ...), which reads strange. Also, there are many lines of unspecified space on page 18 (Line 922 - Line 958). **While a few minor errors in the paper presentation will not affect the rating, **too frequent observation** of them will harm the rating since they are not aligned with the proceeding guidelines and are not beneficial for future readers.**\", \"questions\": \"Please address my concerns stated in the weakness section. Although the novelty and presentation are limited, I still appreciate the contribution of this new dataset for image safeguarding. I give this submission an initial rating of borderline reject, and I look forward to the authors' response.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a new framework for guardrailing computer vision systems for improved safety. 
The proposed system effectively triages predictions for decision-making, such that for less certain predictions, a more human-like/chain-of-thought response must accompany the prediction. The response by the reviewers is mixed and mostly on the borderline negative side. During the post-rebuttal discussion phase, the reviewers reaffirmed that the evaluation is not rigorous enough, and that it is peculiar to have as many target improvement in the known classes but novel ones, since this is what you want from such a system. Thus, the paper does not pass the bar.\", \"additional_comments_on_reviewer_discussion\": \"The design leading to faster inference for the decoder of SafeVision is not new, as admitted by the authors. Together with the fact that InternVL2 backbone itself already strikes a better balance between model size and performance, the claimed contribution of the proposed method in terms of both efficiency and accuracy is hence weak.\\n\\nConcerns regarding using ensemble and larger dataset to achieve better performance (which is good and reasonable but can not be well considered as a significant contribution) are well resolved by the rebuttal, although the effort by the authors effort to provide additional experimental results is appreciated.\"}", "{\"title\": \"Response to Reviewer RhxE (Part1)\", \"comment\": \"Thank you very much for your valuable suggestions and thoughtful feedback. We appreciate your recognition of the superior performance of SafeVison and the contribution of VisionHARM-500K dataset. Below, we address the specific weaknesses and questions point by point and hope these can help address your concerns.\\n\\n***Q1**: This paper shows fewer novelties or contributions in model architecture for VLM learning and inference. In addition, this paper discusses very few about the network architecture. I prefer to learn more about model details about the proposed method. 
I cannot see how the vision encoder and the policy prompt encoder are fused with each other.*\\n\\n**A1**: We appreciate the reviewer's comments and would like to clarify that our paper's focus deliberately avoids architectural modifications for several reasons:\\n\\n- Architectural modifications would require training large-scale VLMs from scratch\\u2014an approach that is both computationally intensive and resource-prohibitive. In SafeVision, a unique contribution is our training approach that preserves the pre-trained capabilities of the VLM model while enhancing its guardrail abilities.\\n\\n- More importantly, our pipeline is designed to be model-agnostic and forward-compatible. While architectural changes often become model-specific and lack transferability, our methodology can be adapted to enhance future VLMs into better guardrail models.\\n\\n- Even without modifying the model architecture, we achieve state-of-the-art performance through innovations in data quality (diverse QA pairs), training methodology (novel self-refinement), and loss function design (carefully weighted objectives). This demonstrates that our approach brings significant novelty to existing guardrail models while maintaining architectural simplicity.\\n\\nRegarding the fusion between the vision and policy prompt encoders, we leverage the proven methodology from our backbone model, InternVL2 [1], utilizing QLLaMA as a language middleware to align visual and linguistic features. This choice maintains consistency with established approaches while allowing us to fully leverage the well-trained capabilities of the base model.\\n\\n[1] Chen, Zhe, et al. \\\"Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n***Q2.1**: The experiments are not very convincing. For the compared methods, how are those models trained? Are those models trained on the VisionHARM-500K dataset? 
It is unclear whether the proposed method is significantly better than other methods because of potential data distribution bias.*\\n\\n**A2.1**: Thank you for raising this valuable question! To clarify, we used the baseline models' pre-trained weights directly without fine-tuning them on the VisionHARM-500K dataset. We want to highlight that such a choice was made because the VISIONHARM-500K dataset is part of our proposed method and one of the key contributions of our work.\\n\\nWe also acknowledge that our evaluation has not conclusively proven the superiority of our training pipeline and dataset. To address this concern and provide a more comprehensive analysis, we conducted an ablation study as per your suggestion under three settings:\\n\\n- using the VisionHARM-500K dataset without our training pipeline\\n\\n- using our training pipeline with the dataset from Llavaguard\\n\\n- using both the VisionHARM-500K dataset and our training pipeline\\n\\nWe use accuracy to evaluate the performance of each model. The results are shown in the table below.\\n\\n| Model | Baseline | VISIONHARM-500K without training pipeline | Llavaguard train set + training pipeline | VISIONHARM-500K + training pipeline |\\n| :------------: | :------: | :---------------------------------------: | :--------------------------------------: | :---------------------------------: |\\n| Llavaguard-13b | 68.9% | 85.7% | 74.4% | 93.0% |\\n| Internvl2-2b | 36.9% | 63.1% | 73.4% | 91.8% |\\n\\nThe results show that even when using the Llavaguard train set instead of VISIONHARM-500K, the model still generates better results with our training pipeline. For instance, the performance of internvl2-2b improves **from 36.9% to 73.4%** when trained on the Llavaguard train set using our pipeline, surpassing its performance when trained on VISIONHARM-500K without the pipeline (63.1%). This suggests that the training pipeline contributes more to the performance than the dataset itself. 
However, the best performance is achieved when both VISIONHARM-500K and the training pipeline are used together.\"}", "{\"summary\": \"This paper presents a dataset and model for detecting harmful visual content, with explanation and adaptation capability to new policies. The proposed method creates the VisionHARM-500K dataset from the LAION dataset by using VLM filtering and image captioning. Built on the proposed VisionHARM-500K dataset, this paper also presents a model for detecting harmful content, which supports two modes. The first mode simply outputs classification results, and the second mode also generates a textual explanation with the harm score. To demonstrate the effectiveness, this paper compares the proposed method with many other models on binary classification, multi-classification, and new category harm classification. This paper presents stronger results in most settings than previous work. Overall, the paper is clearly written, but I have some concerns about the details, especially in the comparison with other methods. Therefore, I lean slightly toward rejecting this work, even though this paper presents a lot. I could change my mind after rebuttal.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper presents a new dataset, the VisionHARM-500K dataset, which can be used by the community to study the vision safety problem. Researchers can also utilize the results presented in this paper as baselines to move forward and achieve better results.\", \"This paper presents better results than many previous works in terms of binary classification and multi-classification tasks. Ablation studies are also presented.\"], \"weaknesses\": [\"This paper shows few novelties or contributions in model architecture for VLM learning and inference. In addition, this paper discusses very little about the network architecture. I would prefer to learn more about the model details of the proposed method. 
I cannot see how the vision encoder and the policy prompt encoder are fused with each other.\", \"The experiments are not very convincing. For the compared methods, how are those models trained? Are those models trained on the VisionHARM-500K dataset? It is unclear whether the proposed method is significantly better than other methods because of potential data distribution bias. From Figure 5, I am not convinced that the proposed model is significantly better than other models, as the proposed method is trained on VisionHARM-500K and tested on VisionHARM-500K. In the new category experiment, we can see that GPT-4o is better than this method.\", \"In Table 1 & 2, citations are necessary for each compared method.\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work explores multimodal large models for detecting harmful content. Existing work suffers from low generalization and difficulty in handling new categories of hazards. Therefore, the authors built VISIONHARM-500K, a high-quality unsafe image benchmark comprising over 500k images to cover a wide array of risky categories. Based on this benchmark, the authors proposed SAFEVISION, which supports multiple modes and provides precise explanations. Experiments show the effectiveness and efficiency of the proposed method.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. This paper constructs VISIONHARM-500K, which is conducive to the further development of image moderation.\\n\\n2. The proposed SAFEVISION can better distinguish harmful content and has extremely high efficiency.\", \"weaknesses\": \"1. The categories defined by the dataset should be reflected in the main body.\\n2. How does SAFEVISION judge the content of new categories? Is it controlled only by prompts?\\n3. 
In Section 4.2, does the improvement to the tokenizer consist of adding category nouns to the vocabulary?\\n4. This work declares the contribution of the dataset, but there is no related open-source plan in the text. Will the code and dataset of this paper be released?\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
E1N1oxd63b
ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
[ "Tianchen Zhao", "Tongcheng Fang", "Haofeng Huang", "Rui Wan", "Widyadewi Soedarmadji", "Enshu Liu", "Shiyao Li", "Zinan Lin", "Guohao Dai", "Shengen Yan", "Huazhong Yang", "Xuefei Ning", "Yu Wang" ]
Diffusion transformers have demonstrated remarkable performance in visual generation tasks, such as generating realistic images or videos based on textual instructions. However, larger model sizes and multi-frame processing for video generation lead to increased computational and memory costs, posing challenges for practical deployment on edge devices. Post-Training Quantization (PTQ) is an effective method for reducing memory costs and computational complexity. When quantizing diffusion transformers, we find that existing quantization methods face challenges when applied to text-to-image and video tasks. To address these challenges, we begin by systematically analyzing the source of quantization error and conclude with the unique challenges posed by DiT quantization. Accordingly, we design an improved quantization scheme: ViDiT-Q (**V**ideo \& **I**mage **Di**ffusion **T**ransformer **Q**uantization), tailored specifically for DiT models. We validate the effectiveness of ViDiT-Q across a variety of text-to-image and video models, achieving W8A8 and W4A8 with negligible degradation in visual quality and metrics. Additionally, we implement efficient GPU kernels to achieve practical 2-2.5x memory optimization and a 1.4-1.7x end-to-end latency speedup.
[ "video generation", "low-bit quantization", "diffusion model" ]
Accept (Poster)
https://openreview.net/pdf?id=E1N1oxd63b
https://openreview.net/forum?id=E1N1oxd63b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oqszlQVvSm", "fUmxJ3r6gX", "OZWmCWV7jT", "OU25MsyvIM", "MVUCRZaouQ" ], "note_type": [ "official_review", "meta_review", "decision", "official_review", "official_review" ], "note_created": [ 1730730731669, 1734841213005, 1737523468230, 1730709422968, 1730714373015 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1762/Reviewer_2N42" ], [ "ICLR.cc/2025/Conference/Submission1762/Area_Chair_Zc26" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1762/Reviewer_GGBr" ], [ "ICLR.cc/2025/Conference/Submission1762/Reviewer_9VJP" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces ViDiT-Q, a novel quantization method designed to address the unique challenges faced by diffusion transformers (DiTs) in text-to-image and video generation tasks. Large model sizes and multi-frame processing in video generation pose significant computational and memory costs, making efficient deployment on edge devices challenging.\\nThe authors propose ViDiT-Q, a tailored quantization method for DiTs. This scheme effectively manages quantization errors by addressing specific challenges such as data distribution variations and channel imbalance. ViDiT-Q uses channel balancing to reduce color deviations and dynamic quantization to handle temporal variations in video sequences.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"ViDiT-Q reduces the incoherence of data distribution, thereby lowering quantization error, by combining scaling and rotation-based channel balancing methods. 
Specifically, the scaling method addresses the \\\"static\\\" channel imbalance at the initial denoising stage, while the rotation method handles the \\\"dynamic\\\" distribution variations over time.\\n\\nViDiT-Q uses channel balancing to reduce color deviations and dynamic quantization to handle temporal variations in video sequences.\\n\\nViDiT-Q is validated on various text-to-image and video generation models, demonstrating minimal degradation in visual quality and metrics even at W8A8 and W4A8 quantization levels.\\n\\nQualitative results show that ViDiT-Q maintains high image quality and text-image alignment, while naive PTQ methods produce highly blurred or noisy images.\", \"weaknesses\": \"**Please note that since I am not an expert in model quantization and do not have any background in this field, the weaknesses I provide may not be sufficient to reveal the shortcomings of the work.**\\n\\n1. While ViDiT-Q performs well at W8A8 and W4A8 quantization levels, there is a noticeable performance drop at lower activation bit-widths (such as W4A4 or W4A2). This indicates that the current mixed precision design has room for improvement, especially in fully leveraging the acceleration potential of 4-bit weights.\\n\\n2. ViDiT-Q introduces multiple quantization parameters (such as different $\\\\alpha$ values) to handle data variations across different timesteps. This complex parameter management increases the model's complexity.\\n\\n3. I believe the model can further introduce 8-bit Attention (SageAttention) to improve model efficiency, which has already been integrated into some video diffusion model libraries. I wonder if 8-bit attention mechanisms or some linear attention acceleration mechanisms can further improve your solution?\", \"questions\": \"**Please note that since I am not an expert in model quantization and do not have any background in this field, the weaknesses I provide may not be sufficient to reveal the shortcomings of the work.**\\n\\n1. 
While ViDiT-Q performs well at W8A8 and W4A8 quantization levels, there is a noticeable performance drop at lower activation bit-widths (such as W4A4 or W4A2). This indicates that the current mixed precision design has room for improvement, especially in fully leveraging the acceleration potential of 4-bit weights.\\n\\n2. ViDiT-Q introduces multiple quantization parameters (such as different $\\\\alpha$ values) to handle data variations across different timesteps. This complex parameter management increases the model's complexity.\\n\\n3. I believe the model can further introduce 8-bit Attention (SageAttention) to improve model efficiency, which has already been integrated into some video diffusion model libraries. I wonder if 8-bit attention mechanisms or some linear attention acceleration mechanisms can further improve your solution?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Summary:\\n\\nThe paper presents a post-training quantization method for Diffusion Transformer models for image and video generation. It provides a detailed analysis of the source of degradation and proposes a tailored quantization method, achieving W8A8 and W4A8 with negligible degradation in visual quality and metrics.\", \"strength\": [\"Provide a detailed analysis of the sources of degradation. The proposed method is well-motivated.\", \"very well-written paper.\", \"demonstrating minimal visual quality and metrics degradation even at W8A8 and W4A8 quantization levels.\"], \"weakness\": [\"There were concerns about the lack of lower bit-width quantization, ablation study, and comparisons with relevant techniques. But the rebuttal addressed them well.\"], \"justification\": \"All three reviewers' concerns have been adequately addressed. The responses and additional experiments are very detailed, according to the reviewers. 
The AC reads the reviews and responses, and agrees with the reviewers that this is a solid contribution. The AC thus recommends acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The AC believes that there are three main initial concerns:\\n\\n1) Lower Bitwidth experiments (by Reviewer 2N42 and Reviewer GGBr):\\n\\nThe authors provide additional experiments on W4A4, W2A8 using the Pixart-Sigma model and report the FID, CLIP score, and ImageReward metrics. Both reviewers are satisfied with the additional exploration. \\n\\n2) Lack of ablation study (Reviewer GGBr)\\n\\nThe authors present the ablation studies in Table 2 of the main paper. Specifically, they validate the importance of quantization parameters, channel balance, and mixed precision. \\n\\n3) Lack of comparison with general quantization methods.\\n\\nThe authors provided the results in Table 6 in Appendix D.5. The results show general quantization methods like AdaRound and Brecq have moderate performance drops while the proposed ViDiT-Q achieves comparable results with the FP16 baseline.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper introduces ViDiT-Q (Video & Image Diffusion Transformer Quantization), a quantization scheme designed for Diffusion Transformers (DiTs) to reduce memory and computational demands in visual generation tasks like text-to-image and video synthesis. To address the unique challenges of DiT quantization, such as large data variation and time-varying channel imbalance, ViDiT-Q introduces fine-grained dynamic quantization for timestep-specific adjustments, a static-dynamic channel balancing technique, and a metric-decoupled mixed precision approach that allocates bit-widths based on layer sensitivity to visual quality metrics. 
Experiments demonstrate that ViDiT-Q achieves substantial hardware efficiency gains\\u2014up to 2-2.5x in memory savings and 1.4-1.7x in latency reduction\\u2014while maintaining high visual quality, making it a viable solution for deploying DiTs on constrained devices.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a unique approach tailored for Diffusion Transformers, addressing specific challenges like time-varying channel imbalance and data variation, which are rarely explored in quantization research.\\n\\n2.ViDiT-Q\\u2019s methodology is grounded in thorough quantization error analysis, with each technique validated through extensive experiments on both text-to-image and text-to-video tasks, showing careful consideration of practical performance.\\n\\n3. By achieving substantial memory and latency reductions, ViDiT-Q makes deploying DiTs on constrained devices feasible, enabling real-world applications for visual generation in resource-limited environments.\\n\\n4. The paper is well-structured and clearly explains each component of ViDiT-Q, supported by effective figures and diagrams that make the methodology and results accessible and easy to follow.\", \"weaknesses\": \"1. While the paper addresses W4A8 quantization, there is limited exploration of even lower bit-width configurations, such as W4A4 or W2A8, which are often critical for more aggressive compression on edge devices. A deeper analysis of these configurations would broaden the applicability of ViDiT-Q.\\n\\n2. Although ViDiT-Q integrates several techniques, the paper lacks ablation studies that isolate the impact of each component, such as fine-grained dynamic quantization and static-dynamic channel balancing. Detailed ablations would provide clarity on the individual benefits of these methods.\\n\\n3. 
The paper primarily compares ViDiT-Q to diffusion-specific quantization methods but lacks benchmarks against general quantization techniques (e.g., adaptive quantization). Including these comparisons would better contextualize ViDiT-Q\\u2019s performance.\\n\\n4. The evaluation is restricted to text-to-image and text-to-video tasks, but the applicability of ViDiT-Q to other DiT applications (e.g., super-resolution or other image manipulation tasks) remains unexplored. Testing ViDiT-Q on these tasks could expand its impact and highlight potential limitations.\\n\\n5. The hardware efficiency results are limited to a single hardware setup (NVIDIA A100). Testing on additional platforms, particularly lower-power devices, would provide more comprehensive insights into ViDiT-Q\\u2019s practicality for diverse deployment environments.\", \"questions\": \"1. Could you explore or discuss the feasibility of further quantizing ViDiT-Q to configurations like W4A4 or W2A8? Understanding its performance at lower bit-widths would clarify its limitations and potential for more aggressive compression.\\n\\n2. Could you provide an ablation study showing the individual contributions of fine-grained dynamic quantization, static-dynamic channel balancing, and metric-decoupled mixed precision?\\n\\n3. Have you considered benchmarking ViDiT-Q against broader quantization methods, such as adaptive quantization or knowledge distillation for compression?\\n\\n4. In your metric-decoupled mixed precision approach, did you observe any limitations or challenges with determining sensitivity dynamically? 
Further details on how this sensitivity analysis adapts to different models or layers could provide practical guidance for real-world implementation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a quantization method tailored for diffusion Transformer models used in image and video generation. The authors conduct a detailed analysis of data scale variations across different components of the model, leading to the design of fine-grained grouping, dynamic quantization, and static-dynamic channel balance. To address the issue that quantization errors do not accurately reflect the quality of generation, they also introduce a metric-decoupled mixed-precision design. Experimental results show that the proposed method effectively improves the quality of generated outputs after quantization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The paper provides a detailed analysis of the sources of degradation in generation quality post-quantization and then proposes corresponding solutions. The article is easy to understand and is written with a clear logical flow.\\n\\n(2) Experimental results demonstrate that, compared to existing methods, the proposed approach shows a significant improvement in generation quality.\", \"weaknesses\": \"The article is written with a relatively clear logic and has a certain degree of innovation. However, some parts still require further elaboration. Both the fine-grained grouping and dynamic quantization strategies are closely linked to existing methods. Yet, the author only briefly describes the differences without providing detailed, intuitive, or quantitative explanations. 
For instance, the specific distinctions between \\\"channel-wise\\\" and \\\"output-channel-wise\\\" are not clearly articulated, nor is it explained why \\\"timestep-wise quantization parameters\\\" would be more costly compared to the method proposed in this paper.\", \"questions\": \"Some questions have already been pointed out in Weakness.\\n\\nHere, I have an additional question regarding the metric decoupled mixed-precision design. The paper emphasizes that different parts of the model affect generation quality in different ways. I would like to know to what extent these sensitivities are decoupled, and whether there are any modules that are sensitive to multiple metrics simultaneously. If such modules exist, should a joint sensitivity analysis involving multiple metrics be conducted?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
E1ML0nEReb
Exploring contextual modeling with linear complexity for point cloud segmentation
[ "Yong Xien Chng", "Xuchong QIU", "Henry Zheng", "Yizeng Han", "Yifan Pu", "Jiewei Cao", "Gao Huang" ]
Point cloud segmentation is an important topic in 3D understanding that has traditionally been tackled using either the CNN or Transformer. Recently, Mamba has emerged as a promising alternative, offering efficient long-range contextual modeling capabilities without the quadratic complexity associated with Transformer's attention mechanisms. However, despite Mamba's potential, early efforts have all failed to achieve better performance than the best CNN-based and Transformer-based methods. In this work, we address this challenge by identifying the key components of an effective and efficient point cloud segmentation architecture. Specifically, we show that: 1) Spatial locality and robust contextual understanding are critical for strong performance, and 2) Mamba features linear computational complexity, offering superior data and inference efficiency compared to Transformers, while still being capable of delivering strong contextual understanding. Additionally, we further enhance the standard Mamba specifically for point cloud segmentation by identifying its two key shortcomings. First, the enforced causality in the original Mamba is unsuitable for processing point clouds that have no such dependencies. Second, its unidirectional scanning strategy imposes a directional bias, hampering its ability to capture the full context of unordered point clouds in a single pass. To address these issues, we carefully remove the causal convolutions and introduce a novel Bidirectional Strided SSM to enhance the model's capability to capture spatial relationships. Our efforts culminate in a novel architecture named MEEPO that effectively integrates the strengths of CNN and Mamba. MEEPO surpasses the previous state-of-the-art method, PTv3, by up to +0.8 mIoU on multiple key benchmark datasets, while being 42.1\% faster and 5.53$\times$ more memory efficient. Our code will be released.
[ "point cloud segmentation", "efficient", "contextual modeling" ]
Reject
https://openreview.net/pdf?id=E1ML0nEReb
https://openreview.net/forum?id=E1ML0nEReb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzE6luYbbg", "xZpJT8Wqyk", "uUxYxpW2CW", "s3DcliHfeK", "mGocvsUdfC", "m11hXJnMtV", "hRuquTu3xa", "h5wVbInCw2", "RsUhIfXxT9", "OjaT6Zy1Hg", "OT3HJXaRek", "MXQYqbtPm3", "IX43WHGyhV", "GOT72QPlCk", "FB1H9kSYQK", "F6FaPsc9TH", "E5OmJT5hxS", "DXyIlPfQ2J", "6g87iOxqEj", "04hvsH8JTk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732685605975, 1731481132031, 1731479106568, 1732088499175, 1732088560781, 1730681455160, 1729937594982, 1732088337952, 1732534309372, 1732088269657, 1730616961818, 1731473694564, 1737523443151, 1730642447535, 1732088659805, 1732503960434, 1734332016520, 1732874013227, 1729172074533, 1731473710702 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_XaY8" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_fEVg" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_U9M8" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_XaY8" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_wi14" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_HJ9v" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_fEVg" ], [ "ICLR.cc/2025/Conference/Submission1247/Area_Chair_tiu5" ], [ 
"ICLR.cc/2025/Conference/Submission1247/Authors" ], [ "ICLR.cc/2025/Conference/Submission1247/Reviewer_fEVg" ], [ "ICLR.cc/2025/Conference/Submission1247/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewers and Area Chair,\\n\\nWe sincerely thank you for your valuable time and effort in reviewing our paper.\\n\\nIn this paper, we provide a comprehensive analysis of architectural properties of commonly used point cloud segmentation operators, investigate the reasons behind Mamba-based networks' under-performance in this task, and propose simple yet effective solutions to address these issues. These insights culminate in the development of a novel architecture for point cloud segmentation, named MEEPO, which achieves state-of-the-art performance. Our research offers many valuable insights for designing efficient and effective point cloud segmentation networks.\\n\\nAll the reviewers have acknowledged several strengths of our paper, including its **clear structure and readability (HJ9v, fEVg)**, **meaningful and thorough analysis (HJ9v, U9M8, wi14)**, and **comprehensive ablation experiments (HJ9v, wi14)**. Additionally, there is unanimous agreement regarding the **impressive performance and efficiency** of our proposed network.\\n\\nMost weaknesses mentioned are minor issues related to writing presentations and clarity. In response to the constructive feedback given, we have carefully revised the manuscript, making the following key improvements:\\n\\n1. Enhanced the explanation of PTv3's performance degradation with increasing window size (lines 234 and 298)\\n2. Added a stride=1 comparison in Table 7(e) (line 446)\\n3. Incorporated a Strided SSM ablation study in Table 7(d) (line 441)\\n4. Clarified parameter count details in Table 7(a) (lines 196 and 469)\\n5. Provided more detailed explanation of Mamba's point cloud processing order (line 189)\\n6. Expanded the discussion of Mamba's local bias (line 822 in the Appendix)\\n7. 
Included data demonstrating MEEPO's effectiveness in processing increasingly larger point sets (line 841 in the Appendix)\\n8. Reduced bold text usage (lines 357 and 370)\\n9. Corrected the Figure 5 reference (line 251)\\n10. Added citation to Bi-Mamba+ (line 518)\\n\\nWe are pleased to note that all the engaged reviewers expressed satisfaction with these revisions, awarding positive scores (8, 6, 6, 6) after their evaluation. We have also carefully addressed the weaknesses raised by reviewer HJ9v but were unable to obtain a response. All in all, we strongly believe that our revised paper has met the rigorous standards required for an ICLR submission.\\n\\nThank you again for your thoughtful feedback and support. \\n\\nSincerely,\\n\\nThe Authors\", \"title\": \"General Response\"}", "{\"title\": \"Point Cobra is a paper reviewed at NeurIPS 2024\", \"comment\": \"Sorry for misidentifying the NeurIPS paper as an AAAI submission. The author\\u2019s response has alleviated my concerns about duplicate submission or plagiarism.\", \"i_still_have_some_questions_regarding_the_method\": \"1. While the new model structure aligns with the previous version (NeurIPS 2024), there are differences in latency and memory usage. I am interested in understanding the source of these improvements.\\n2. This paper introduces an additional CNN module to capture the local features of the point cloud. Could the Causal-Free Conv1D in the Mamba block be removed, given that these two modules appear to serve the same role in the model?\"}", "{\"title\": \"Sincerely sorry for the mistake, I'll update my comments soon\", \"comment\": \"I'm sorry that I mistook the NeurIPS paper for an AAAI submission. I will update my comments soon.\"}", "{\"comment\": \"Thank you for your detailed review. We are glad that you find our analysis of existing architectures to be thorough and comprehensive! 
Below are our responses to the weaknesses:\\n\\n**[W1] Missing Citation**\\n> We have already cited Vision Mamba in our paper. We apologize for omitting Bi-Mamba+ and have now added this citation to the related work section (line 518) in the updated paper. \\n\\n**[W1] Additional comparison with stride 1**\\n> Using a stride of 1 is equivalent to repeating the Bidirectional SSM twice. Following your suggestion, we include this setting in Tab. 7(e) of the updated paper, which confirms that strided computation matters.\\n\\n**[W2] Data demonstrating Mamba's advantage in performance and efficiency**\\n> Mamba's performance and efficiency advantages are already illustrated in Fig. 1(a) and (b), which show that our proposed Mamba model, MEEPO, outperforms PTv3 at its best performance with a sequence length of 1024 (78.0 vs. 77.5), while being 42.1% faster and 5.53x more memory-efficient.\\n> \\n>  \\n>\\n> In contrast, Fig. 4 primarily focuses on PTv3's performance degradation with increasing sequence length. Comparable data for Mamba is not presented because window partitioning is unnecessary for our model due to its linear complexity. In fact, our proposed MEEPO utilizes all points. Nevertheless, to provide a similar comparative analysis, we implement a modified version of MEEPO with window partitioning. The results, included in Fig. 8 of the updated paper's appendix (line 841), demonstrate that MEEPO can effectively leverage long contexts, showing progressive performance improvements with increased point utilization.\\n\\n**[W3] Clarification on parameter count in Tab. 7**\\n> The difference in parameter count is due to different channel sizes, which are adjusted to make them comparable. When using the same channel size, these different operators vary significantly in their parameter counts. For example, a 3D convolution operator has many more parameters than attention or SSM operators when using the same channel size. 
We have updated lines 196 and 469 to clarify this methodology.\\n\\n**[W4] Figure reference fix:** \\n> Thanks for the suggestion! We have fixed this error in the updated paper.\"}", "{\"comment\": \"**[Q1] Differences from NeurIPS version**\\n> They are not exactly the same. Since then, we have made several hyper-parameter optimizations, such as reducing the MLP ratio and adding an extra layer to the stem.\\n\\n**[Q2] Can causal conv in SSM be removed?**\\n> Following your suggestion, we try removing the convolution. However, it gives a worse mIoU score on ScanNet (77.3 vs 77.5).\"}", "{\"summary\": \"This paper focuses on a simple yet practical target in 3D understanding: how to enhance the accuracy of a Mamba-based framework while preserving its efficiency rooted in linear complexity. The paper seeks to achieve this through an in-depth analysis of the components that contribute to the strong performance of PTv3, summarized as contextual understanding and spatial locality. These findings lead to the two major design elements of this work: Causal-free Mamba and Bidirectional Strided SSM. Overall, it is encouraging to see the method achieve solid performance on major scene-level point cloud semantic segmentation benchmarks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. **[Analysis-driven Methodology]** First of all, I appreciate the paper's analysis-driven approach: rather than being experiment-driven, it presents insights based on previous work and derives its methodology from this understanding. This makes the paper easier to follow and the proposed method more convincing.\\n\\n2. **[Strong performance]** The paper achieves strong performance on several major point cloud semantic segmentation benchmarks and appears to be efficient as well.\", \"weaknesses\": \"The insightful analysis, convincing approach, and solid performance demonstrate that this manuscript makes **a valuable empirical contribution**. 
However, the following weaknesses **limit its broader impact**:\\n\\n1. **[Scaling up]** \\n**(a)** Many observations and analyses (such as the ablation on PTv3 window size) are based on training from scratch on ScanNet (1,500 samples), which is relatively small in scale. However, many model properties may change significantly when scaled up with more data. For instance, using the PTv3 window size ablation as an example, attention mechanisms are inherently adaptive to kernel size, meaning accuracy should not be negatively impacted by the window size. The degradation observed when increasing the window size beyond 1024 is likely due to insufficient data to support this adaptive capacity. \\n**(b)** It is good to see the Mamba framework achieve both higher accuracy and efficiency compared to previous SOTA. However, the true value of superior scratch accuracy and efficiency lies in its capacity for further scaling. I am particularly curious about the model's accuracy and efficiency when scaling up training through multi-dataset joint training, as well as its performance when scaling up parameters with larger data volumes.\\n\\n2. **[Go beyond semantic segmentation]**\\nWhy do point cloud perception and representation learning prioritize semantic segmentation? This is because it is the simplest way to evaluate the quality of learned representations using a single linear layer. However, this does not mean that research on point cloud backbones should be limited to semantic segmentation alone. The claims of the paper would be stronger if more downstream tasks, such as instance segmentation and object detection, were included. It may not be necessary to achieve the highest performance; instead, demonstrating more properties of the proposed method could be more informative.\", \"questions\": \"1. 
In Figure 1(b), PTv3 consumes even more memory compared to PTv2, which seems unusual since window-based attention should be significantly more memory-efficient than neighborhood attention. The only reason I can imagine is that FlashAttention may be disabled while using a large kernel, resulting in larger matrix multiplications. Could you provide a more detailed explanation of this setup?\\n\\n2. Maybe a shorter, more impactful title could make it easier for readers to remember? The current version is too lengthy.\\n\\n3. Maybe the table arrangement could be improved? I don\\u2019t recommend using resizebox, as it can lead to uneven text sizes. You might consider referring to the LaTeX source code from the PTv3 paper for tips on adjusting table formatting.\\n\\n4. It might be better to reduce the use of bold text? For instance, consider changing `\\\\textbf{Proposed Solution:}` to `\\\\textit{Proposed solution:}`, and avoid bolding certain numbers in the main text.\\n\\n(Minor suggestions for reference only)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper analyzes several popular neural network architectures, including CNN-based and Transformer-based designs, as well as the recently introduced Mamba model. It proposes a new point cloud segmentation architecture, MEEPO, which combines the strengths of CNNs and Mamba, surpassing previous state-of-the-art methods like PTv3 on multiple key benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The model is highly efficient and achieves accuracy that exceeds prior state-of-the-art methods.\\n2. The design of the Strided Bidirectional SSM effectively enhances the model's understanding of spatial relationships.\", \"weaknesses\": \"1. 
The proposed hyper-structure is based on the analysis of existing architectures, which makes the technical innovation somewhat limited.\", \"questions\": \"I am unsure whether the authors of this paper are aware of *Point Cobra*. I would like the authors to address the numerous similarities between the two papers and provide an explanation.\\n\\n1. While the new model structure aligns with the previous version (NeurIPS 2024), there are differences in latency and memory usage. I am interested in understanding the source of these improvements.\\n\\n2. This paper introduces an additional CNN module to capture the local features of the point cloud. Could the Causal-Free Conv1D in the Mamba block be removed, given that these two modules appear to serve the same role in the model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Sorry for misidentifying the NeurIPS paper as an AAAI submission. The author\\u2019s response has alleviated my concerns about duplicate submission or plagiarism.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed review. We are glad that you find our analysis to be meaningful! Below are our responses to the weaknesses and questions:\\n\\n**[W2] Regarding simple fix to Mamba**\\n> We would like to clarify that our paper extends beyond providing simple fixes to Mamba. It offers a comprehensive analysis of architectural properties of commonly used point cloud segmentation operators, investigates the reasons behind Mamba-based networks' underperformance in this task, and proposes simple yet effective solutions to address these issues. Our research provides valuable insights for designing efficient and effective networks, and our experiments demonstrate that these solutions significantly mitigate the identified problems. 
While more complex approaches are possible and could be explored in future work, we favor simple solutions for their clarity, ease of explanation, and broader applicability.\\n\\n**[Q1] Additional evidence/explanation for the respective effectiveness of CNN and Transformer for local modeling and contextual modeling**\\n> Following your suggestion, we provide further statistical evidence by computing the overall improvements for the 'door' and 'table' classes, which are used to illustrate the benefits of contextual and local modeling in the visualizations. The 'door' class demonstrates the importance of contextual modeling, as door positions are heavily influenced by surrounding objects in a scene. In contrast, the 'table' class highlights the value of local modeling due to its varied shapes and structural features. The results align with our analysis and are presented below:\\n| case | 'Door' mIoU (contextual modeling) | 'Table' mIoU (local modeling) |\\n| -------- | ------- | ------- |\\n| Pure Transformer | 61.1 | 51.1 |\\n| Pure Mamba | 61.0 | 54.2 |\\n| Pure CNN | 55.9 | 62.7 |\\n> \\n>  \\n>\\n> Aside from that, we would like to clarify that our analysis is primarily inspired by multiple studies in 2D domains [1,2,3], which highlight the presence of architectural biases. The visualizations in Fig. 3 and Fig. 5 help to validate these intuitions and complement the quantitative experiments presented in Tab. 2. These experiments demonstrate the significance of both local and contextual modeling, as removing either component results in considerable performance degradation. \\n1. Pan et al., On the Integration of Self-Attention and Convolution, CVPR 2022\\n2. Pan et al., 3D Object Detection with Pointformer, CVPR 2021\\n3. 
Li et al., UniFormer: Unifying Convolution and Self-attention for Visual Recognition, TPAMI 2023\\n\\n**[Q2] Clarification of similarity of Bidirectional SSM with Vision Mamba**\\n> Yes, the Bidirectional SSM is indeed identical to that of Vision Mamba. We apologize for the unclear Fig. 6(b) and have improved it in the updated paper.\\n\\n**[Q2.5] Explanation for the inconsistency with Vision Mamba**\\n> We would like to clarify that there is *no* inconsistency here, as we are addressing fundamentally different tasks. While Bidirectional SSM can significantly improve performance in 2D tasks (as shown in Vision Mamba), its modest 0.1% improvement in point cloud segmentation can be attributed to the added complexity of the third dimension. In 3D space, objects exhibit much greater variability in shape and structure, often requiring more than just bidirectional scanning to fully capture all required details, particularly in dense prediction tasks like segmentation.\\n> \\n>  \\n>\\n> Our strided scan approach offers an effective solution to address this limitation by helping to capture additional contextual information. We try using Strided SSM alone (without bidirectional scanning), but the experiment shows that it alone *cannot* match the performance of the combined Bidirectional Strided SSM (77.7 vs 78.0). This result highlights that the two techniques are complementary and should be used together to achieve optimal performance. We have included the additional Strided SSM result in Table 7(d) of the updated paper.\\n\\n**[Q3] Explanation for Causal-Free Mamba**\\n> As indicated in line 357, Causal-Free Mamba simply replaces the causal convolution with standard convolution (using torch.nn.Conv1d). \\n\\n**[Q4,Q5] Hyperlink and notation fix**\\n> Thanks for the suggestions! We have fixed the incorrect link and notation in the updated paper.\"}", "{\"comment\": \"Thank you for your response! 
We\\u2019re delighted to hear that our reply addressed your concerns and appreciate your recognition of our work.\"}", "{\"comment\": \"Thank you for your detailed review. We are glad that you find our analysis and approach to be informative and valuable! Below are our responses to the weaknesses and questions:\\n\\n**[W1] Better explanation for performance degradation of PTv3 with respect to increasing window size**\\n\\n> We agree that the observed degradation may be attributed to limited data availability rather than the model's inherent capacity limitations. We have improved the corresponding sections on lines 234 and 298 of the updated paper to emphasize this point.\\n\\n**[W2] Additional 3D object detection experiment**\\n\\n> Following your suggestion, we provide an additional 3D object detection result on the ScanNet v2 dataset:\\n| backbone | mAP@0.25 |\\n| -------- | ------- |\\n| PTv3 | 71.3 |\\n| MEEPO | 72.2 (+0.9) |\\n\\n**[Q1] Clarification of PTv3 experimental setup**\\n\\n> We conduct our experiments mainly on V100 GPUs, which do not support FlashAttention. Based on our tests, PTv3 does consume more memory in this setup.\\n\\n**[Q2] Possibility of changing to a more impactful title**\\n\\n> Thanks for your suggestion! We will certainly take this into consideration. However, we haven't settled on a suitable title yet.\\n\\n**[Q3,Q4] Presentation improvement suggestions**\\n\\n> Thanks for your suggestions! We have followed them to make the table fonts slightly larger and reduce the use of bold text.\"}", "{\"summary\": \"This paper presents a novel architecture named MEEPO, designed for point cloud segmentation with a focus on efficient contextual modeling. It introduces Mamba, a state-space model (SSM) that achieves linear complexity compared to traditional Transformers, which have quadratic complexity. 
The authors identify Mamba's limitations, such as enforced causality and directional bias, and propose solutions through causal-free convolutions and bidirectional strided state-space modeling. MEEPO outperforms previous models, like PTv3, in accuracy, latency, and memory efficiency across several benchmark datasets (e.g., ScanNet, S3DIS, nuScenes).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tMEEPO introduces a new method by integrating CNN and Mamba components, achieving good contextual modeling with reduced computational costs.\\n2.\\tThe paper thoroughly explores the limitations of existing architectures (CNN, Transformer, and SSM) and provides comprehensive ablations and comparisons to validate its design choices.\\n3.\\tThe introduction of causal-free convolutions and bidirectional strided SSM in Mamba addresses some original limitations in Mamba for this task.\", \"weaknesses\": \"1.\\tIn Section 4.2, the authors claim that they propose a 'Bidirectional Strided SSM' method; however, bidirectional SSMs have already been proposed in works like [1][2] to address the issue of forgetting in unidirectional sequences. I believe that not referencing these articles while claiming authorship is not rigorous. Additionally, what baffles me is that in Table 7, a comparison is made for strides ranging from 2 to 16, showing a decreasing trend in performance. So, why not similarly compare and discuss the results for a stride of 1?\\n2.\\tThe manuscript presents in Fig. 4 the mIoU performance of the Transformer-based PTv3 under different window sizes. The advantage of the Mamba model lies in its better memory capabilities for long sequences. 
The authors emphasize this advantage in line 240, but they do not provide specific data to validate this conclusion on point cloud data (the authors could show a figure in a similar format to Fig. 4).\\n3.\\tTable 7 in the article indicates that the parameter volume of CNN+Mamba is smaller than that of pure Mamba. However, based on the authors' description, compared to pure Mamba, they have added a CNN module to Mamba and employed a bidirectional mechanism for computation. Why the parameter count is lower than that of pure Mamba needs further clarification from the authors.\\n4.\\tIn line 251, the figure referred to is not Fig. 4; it should be Fig. 5.\\n\\n\\n[1] Bi-Mamba+: Bidirectional Mamba for Time Series Forecasting\\n[2] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks for the comment. We would like to clarify that this paper is an improved version of a rejected paper previously submitted to *NeurIPS 2024*. However, we absolutely did NOT submit to *AAAI 2025*. Therefore, we can think of two possibilities:\\n\\n(1) The *NeurIPS* paper has been mistakenly identified as an *AAAI* submission.\\n\\n(2) Someone else has resubmitted our *NeurIPS* paper to *AAAI* without our knowledge.\\n\\nCould you please kindly check and confirm that you have indeed seen this paper at *AAAI 2025*? If so, this is a very serious academic issue and we will need to involve the PCs to help investigate it.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper focuses on the 3D point cloud segmentation task. 
It first compares the performance of three types of blocks: CNN, Transformer, and Mamba, and then improves the existing Mamba architecture. The authors provide extensive visualization results and ablation experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"[S1] It is meaningful to compare the performance of the three types of blocks\\u2014CNN, Transformer, and Mamba\\u2014on the same task, point cloud segmentation.\\n[S2] The paper is detailed and easy to read.\\n[S3] The paper conducts detailed ablation experiments on the newly proposed module.\\n[S4] The paper achieves state-of-the-art results on multiple datasets.\", \"weaknesses\": \"[W1] The comparison of CNN, Transformer, and Mamba blocks lacks strong evidence. I believe that concluding \\\"CNN is more effective for local modeling and Transformer is better at handling contextual information\\\" based on just one example for each is insufficient.\\n[W2] The proposed method is more like a small fix to the Mamba-based method. \\n[W3] The two solutions proposed in this paper: 1) the causal-free block is described in very little detail and lacks a clear explanation; 2) bidirectional SSM seems to have already been introduced in VisionMamba (Zhu 2024), and this paper's work only adds n-stride.\", \"questions\": \"[Q1] Could the authors provide more detailed explanations of W1, such as whether there is theoretical evidence to support it or if there are sufficiently extensive experiments across multiple datasets?\\n[Q2] Is the Bidirectional SSM mentioned by the authors consistent with VisionMamba (Zhu 2024)? I do not see a bidirectional structure in Fig. 6(b), only the Strided SSM.\\n[Q2.5] If the answer to Q2 is yes, the result you reported in the second line of Tab. 7(d) shows that this structure only brings a 0.1% improvement, which seems inconsistent with the effects reported in VisionMamba (Zhu 2024) regarding this structure. 
From an efficiency perspective, is the introduction of the bidirectional structure necessary if it only brings a 0.1% improvement?\\n[Q3] The causal-free conv block is not clearly explained. If this is something new you are proposing, it would be helpful to include a more detailed explanation or illustration.\\n[Q4] Standardize the notation for SSM. In the abstract and introduction, it is referred to as \\\"Strided Bidirectional SSM,\\\" while later it is called \\\"Bidirectional Strided SSM.\\\"\\n[Q5] There\\u2019s something wrong with your hyperlink in Sec. 3.3. It should be Fig. 5, but yours is Fig. 4.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed review. We are glad that you find our paper to be well-structured and our results to be impressive! Below are our responses to the weaknesses:\\n\\n**[Q1] More explanation for Mamba's local bias**\\n> The effectiveness of Mamba for local processing due to its locally-biased forget gate has been rigorously examined in the recent NeurIPS 2024 paper \\\"Demystify Mamba in Vision: A Linear Attention Perspective\\\" by Han et al. In Sec. 4.2 and Fig. 4, the authors demonstrate that Mamba tends to focus predominantly on recent tokens. To give more statistical data to support this claim, we reproduce their findings and include a similar plot for point cloud segmentation in Fig. 7 of our updated paper's appendix (line 822). Additionally, we add this citation to our updated paper.\\n\\n**[Q2] Improved explanation for performance degradation of PTv3 with respect to increasing window size**\\n> Thanks for the suggestion! We agree with your intuition and have improved the corresponding sections on lines 234 and 298 of the updated paper to emphasize this point.\\n\\n**[Q3]: Regarding minimal structural change to Mamba**\\n> We would like to clarify that our paper extends beyond providing structural changes to Mamba. 
It offers a comprehensive analysis of architectural properties of commonly used point cloud segmentation operators, investigates the reasons behind Mamba-based networks' underperformance in this task, and proposes simple yet effective solutions to address these issues. Our research provides valuable insights for designing efficient and effective networks, and our experiments demonstrate that these solutions significantly mitigate the identified problems. While more complex approaches are possible and could be explored in future work, we favor simple solutions for their clarity, ease of explanation, and broader applicability.\\n\\n**[Q4] Additional 3D object detection experiment**\\n> Following your suggestion, we provide an additional 3D object detection result on the ScanNet v2 dataset:\\n| backbone | mAP@0.25 |\\n| -------- | ------- |\\n| PTv3 | 71.3 |\\n| MEEPO | 72.2 (+0.9) |\\n\\n**Figure reference fix** \\n> Thanks for the suggestion! We have corrected the figure reference error in the updated paper.\\n\\n**Clarification of Mamba's point cloud processing order** \\n> The order is the same as the one used in PTv3. We have added this clarification to line 189 of the updated paper.\"}", "{\"comment\": \"The authors' responses have addressed my concerns well; I'll slightly improve my score.\"}", "{\"metareview\": \"The paper receives 4 positive and 1 negative rating after rebuttal. Although the paper has some merits, like competitive results with faster runtime and lower memory cost, the reviewers pointed out a few critical concerns about 1) technical contributions compared to other Mamba-based approaches, and 2) results other than semantic segmentation. After taking a close look at the paper, rebuttal, and discussions, the AC agrees with the reviewers' feedback, especially regarding the minor architectural changes to existing Mamba methods applied to the studied task. 
Without further explanations about the main contributions or additional results, e.g., other point cloud tasks, the paper does not convincingly demonstrate enough technical novelty, and hence the AC suggests rejection. The authors are encouraged to improve the paper based on the feedback for the next venue.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal, some of the concerns, like technical clarity, are addressed by the authors. However, even though the discussions were not active during the post-rebuttal discussion period, the AC finds that the authors have failed to provide detailed responses to most reviewers' critical questions, e.g., 1) model training with more data from reviewer U9M8, 2) tasks beyond segmentation from reviewers U9M8 and fEVg (only one additional experiment was provided), 3) main technical contributions compared to Mamba-based methods from reviewers HJ9v, wi14, XaY8, and fEVg, especially on the over-claimed Bidirectional SSM, which VisionMamba already proposed. This significantly limits the technical novelty of the proposed framework, given the marginal performance improvement (also, the improved runtime and memory saving are naturally brought by Mamba anyway). Overall, the AC took a close look at all the contents and agrees that the authors have not addressed the above concerns well in the rebuttal; the paper still requires significant improvement before it is ready for publication.\"}
If you need further clarification on any aspect of our work, please don\\u2019t hesitate to let us know.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"Point cloud segmentation is crucial for 3D understanding, traditionally tackled using CNNs or Transformers. Recently, Mamba has emerged as an efficient solution for contextual modeling, though it has struggled to outperform leading CNN and Transformer methods. This work identifies key requirements for effective segmentation: strong spatial locality and robust contextual understanding. Enhancing Mamba, the authors remove causality and introduce a Strided Bidirectional SSM to address directional biases in unordered point clouds. The resulting architecture, MEEPO, merges the strengths of CNNs and Mamba, achieving up to +0.8 mIoU over state-of-the-art methods on benchmarks like ScanNet and nuScenes, with notable gains in speed and memory efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-structured with a clear analysis-driven approach.\\n2. The experimental results are impressive.\", \"weaknesses\": \"1. In lines 238-241, it's unclear why Mamba is considered effective for local processing, given its design goal of modeling long sequences with linear complexity. The reference to a \\\"locally-biased forget gate\\\" needs further explanation and detailed analysis or statistical data beyond visualization.\\n\\n2. In lines 300-302, the statement that long-range attention is unnecessary may only partially reflect the issue. The core reason might be the sparsity of 3D point clouds. Transformers generally require extensive data and prolonged training to surpass CNNs, so it might be more accurate to say that insufficient data limits the full potential of Transformers rather than implying long-range attention is irrelevant.\\n\\n3. The innovations appear somewhat limited, with minimal structural changes. 
The modifications, such as Causal-Free Mamba and Bidirectional Strided SSM, feel more like tricks.\\n\\n4. Presenting results on downstream perception tasks like segmentation and detection would add further value.\", \"minor_errors\": [\"Line 251: Fig.4 should be Fig.5.\", \"clarify the point cloud processing order in Mamba. Is it similar to PTv3's order?\"], \"questions\": \"see above weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThanks for the comment. We would like to clarify that this paper is an improved version of a rejected paper previously submitted to *NeurIPS 2024*. However, we absolutely did NOT submit to *AAAI 2025*. Therefore, we can think of two possibilities:\\n\\n(1) The *NeurIPS* paper has been mistakenly identified as an *AAAI* submission.\\n\\n(2) Someone else has resubmitted our *NeurIPS* paper to *AAAI* without our knowledge.\\n\\nCould you please kindly check and confirm that you have indeed seen this paper in *AAAI2025*? If so, this is a very serious academic issue and we will need to involve the help of PC to help investigate this issue.\"}" ] }
E1HLZcRZI1
Arti-PG: A Procedural Toolbox to Synthesize Large-Scale and Diverse Articulated Objects with Rich Annotations
[ "Jianhua Sun", "Yuxuan Li", "Jiude Wei", "Longfei Xu", "Nange Wang", "Yining Zhang", "Cewu Lu" ]
The acquisition of substantial volumes of 3D articulated object data is expensive and time-consuming, and consequently the scarcity of 3D articulated object data becomes an obstacle for deep learning methods to achieve remarkable performance in various articulated object understanding tasks. Meanwhile, pairing these object data with detailed annotations to enable training for various tasks is also difficult and labor-intensive to achieve. In order to expeditiously gather a significant number of 3D articulated objects with comprehensive and detailed annotations for training, we propose Articulated Object Procedural Generation toolbox, a.k.a. Arti-PG toolbox. Arti-PG toolbox consists of i) descriptions of articulated objects by means of a generalized structure program along with their analytic correspondence to the objects’ point cloud, ii) procedural rules about manipulations on the structure program to synthesize large-scale and diverse new articulated objects, and iii) mathematical descriptions of knowledge (e.g. affordance, semantics, etc.) to provide annotations to the synthesized object. Arti-PG has two appealing properties for providing training data for articulated object understanding tasks: i) objects are created with unlimited variations in shape through program-oriented structure manipulation, ii) Arti-PG is widely applicable to diverse tasks by easily providing comprehensive and detailed annotations. Arti-PG now supports the procedural generation of 26 categories of articulated objects and provides annotations across a wide range of both vision and manipulation tasks, and we provide exhaustive experiments which fully demonstrate its advantages. We will make Arti-PG toolbox publicly available for the community to use. More details, analysis and discussions are provided in technical appendices.
[ "Articulated Object", "Articulated Object Manipulation", "Robotics" ]
https://openreview.net/pdf?id=E1HLZcRZI1
https://openreview.net/forum?id=E1HLZcRZI1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jTzfBu969k", "j80AoUMkgi", "hS7Fz7QfuM", "bOART7oLPU", "PqC6RR6z7K" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730452752402, 1730800795942, 1730727538303, 1729970014590, 1731653245728 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7206/Reviewer_ivN2" ], [ "ICLR.cc/2025/Conference/Submission7206/Reviewer_5aE3" ], [ "ICLR.cc/2025/Conference/Submission7206/Reviewer_An9k" ], [ "ICLR.cc/2025/Conference/Submission7206/Reviewer_8Yrx" ], [ "ICLR.cc/2025/Conference/Submission7206/Authors" ] ], "structured_content_str": [ "{\"summary\": [\"Observing the data sparsity issue for collecting articulated objects, this paper proposes to use procedural generation to assist the creation of 3D articulated objects with various annotations.\", \"By taking an existing object as input, the system represents an articulated object as a combination of a macro spatial structure and a micro geometric detail. A structure program is used to specify the primitive geometry and connectivity for each part as a descriptor of the general structure of the input object. Then the geometric details is describe as the deformation from the primitives to the original surface by finding the point-correspondence.\", \"Once representing the object in the program, it takes two steps to synthesize a new 3D articulated object: use mathematical rules to randomize the structure first, and then recover the geometry with point-wise correspondence. Following the 3D synthesis, a series of mathematical rules is applied to annotate the object automatically. 
All these components are integrated into the proposed Articulated Object Procedural Generation toolbox (ArtiPG-toolbox).\"], \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"This paper identifies an important data gap in the field of articulated object modeling and contributes to densifying and diversifying the synthetic data for articulated objects.\", \"This paper designs a practical system to automate the process of creating alternative versions of the input object with various annotations.\", \"This paper shows a user-friendly interface to help users to synthesize new objects using the system.\"], \"weaknesses\": [\"**Paper writing should be improved**. The paper writing is overall difficult to follow, mainly due to 1) the repeating contents in the introduction and related work sections that can be better structured; 2) the explanation in the method and experiment sections is not specific enough to understand the details; 3) some content is inconsistent throughout the paper, e.g., the number of the objects collected using Arti-PG (3096 in introduction, 2133 in line 394).\", \"**Unclear contribution**. It is unclear what the main contributions are claimed and demonstrated in this work. Based on my understanding, the proposed system is helpful to automate the augmentation process of the existing datasets, but it is not really creating new objects at scale as it is essentially using the existing datasets as a library to composite more objects in a combinatorial way under certain assumptions and with lots of manually crafted rules.\", \"**Experiment purpose is vague**. It is unclear what the experiment section is trying to validate. Why the three tasks are chosen to do the benchmark? Why only the PointWOLF is reported as the baseline? For the data itself, how is the realism of the synthesized objects evaluated and guaranteed?\", \"**Insufficient discussion on the assumption and limitation**. 
There are many assumptions made in constructing the Arti-PG toolbox but never discussed. It is also unclear what this system is good at and what its limitation is.\"], \"questions\": [\"**Questions**\", \"About data annotation: what annotations are provided for the data? Where are the annotations from? Is the affordance manually crafted for each primitive?\", \"About the exception handling module: how does it work exactly? How can it make sure the parts are geometrically and kinematically plausible?\", \"About Discrete Parameter Alteration (DPA): Does the change to the part only affect the part itself? Would other related parts adapt accordingly? For example, if I want to change a cabinet originally with one door to two doors, would the cabinet body/frame adjust to make more space for the additional door?\", \"**Suggestions**\", \"As mentioned in the `Weaknesses` section, this work can benefit from explicitly summarizing the main contributions, which also helps readers to understand other sections better.\", \"This work can be better contextualized by re-organizing the related work section with more references and discussion on the connections with the prior work, e.g. incorporating the subsection of \\\"articulated object synthesis\\\", and combining sections 2.1 and 2.3 into one block.\", \"**Some closely related references that are missing**:\", \"Lei, Jiahui, Congyue Deng, William B. Shen, Leonidas J. Guibas, and Kostas Daniilidis. \\\"Nap: Neural 3d articulated object prior.\\\" Advances in Neural Information Processing Systems 36 (2023): 31878-31894.\", \"Liu, Jiayi, Hou In Ivan Tam, Ali Mahdavi-Amiri, and Manolis Savva. \\\"CAGE: Controllable Articulation GEneration.\\\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 17880-17889. 2024.\", \"Luo, Rundong, Haoran Geng, Congyue Deng, Puhao Li, Zan Wang, Baoxiong Jia, Leonidas Guibas, and Siyuang Huang. 
\\\"PhysPart: Physically Plausible Part Completion for Interactable Objects.\\\" arXiv preprint arXiv:2408.13724 (2024).\", \"Liu, Jiayi, Manolis Savva, and Ali Mahdavi-Amiri. \\\"Survey on Modeling of Human-made Articulated Objects.\\\" arXiv preprint arXiv:2403.14937 (2024).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Articulated Object Procedural Generation toolbox (Arti-PG toolbox) that speeds up generating 3D articulated objects with part annotations. This toolbox contains (1) structure programs with correspondence to surface point cloud, (2) procedural manipulation (3) mathematical knowledge description. Experiments demonstrate that the data generated by Arti-PG can improve models' performance on both vision and robotic manipulation tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper is well-motivated -- many tasks related to articulated objects face the problem of insufficient amount of data. To address this problem, this paper proposes novel components and procedures that generate 3D articulated shapes with large variations and detailed annotations.\\n\\n2. The paper is well-written and easy to follow. Implementation details have been sufficiently provided in both main paper and supplementary. So this paper should be reproducible.\\n\\n3. Experiments for both vision and robotic tasks (part segmentation, part pose estimation, point cloud completion and object manipulation) have shown that training on data generated by Arti-PG toolbox can indeed improve the performance, which proves the usefulness of this toolbox and its data.\", \"weaknesses\": \"1. The paper mostly shows articulated objects with one/two joint with very few exceptions (boxes with four joints). Is there any limitation on generating objects with much more joints (e.g., 10)?\\n\\n2. 
Although advanced primitives are available, the generated shapes still seem to lack fine geometric details, which typically exist in real-world objects. Without these details, I was wondering if the authors have any thoughts on how this would affect the sim-to-real gap.\\n\\n3. How does this method handle big structural variations within a category, e.g., a chair with four straight legs vs. a swivel chair?\", \"questions\": \"If the parts have some complicated geometries (e.g., concave shapes), then the joint does not necessarily lie on the boundary of the OBB. In this case, the parts at the two ends of the joint may not connect well or may have some unwanted intersection. What are the authors' thoughts on this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a toolbox for annotating 3D articulated objects. It consists of 3 components: (i) descriptions of articulated objects by a generalized structure program and point correspondences; (ii) variations on the structure program to synthesize diverse new articulated objects; (iii) additional annotations such as affordance and semantics.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Datasets for articulated objects with high-quality annotations are indeed inadequate nowadays. The paper is addressing this important problem with an effective framework for data annotation.\"], \"weaknesses\": [\"To my understanding, this paper is positioned as a dataset and benchmark paper, with its main contribution being a toolbox for data annotation. If this understanding is correct, I would expect more data analysis:\", \"Simply 3096 objects does not sound like enough for a dataset contribution, but it may also be reasonable given the complex data annotation for 3D articulated objects. 
How does this number compare to existing datasets?\", \"What are the qualities, distributions, and other features of the dataset (or in general, the annotated data with the toolbox)? It would be good to show more statistics about that.\", \"The writing of the paper is a bit hard to follow. For example, it would be good to show an overview figure of what the procedural generation process is like. Also, why is the sequence of operations \\\"procedural\\\"? It would be really good to have more high-level explanations, including figures, in addition to the technical details, to help readers better understand the overall framework.\"], \"questions\": [\"As discussed in weaknesses, I may want to see more analysis of the dataset, such as its data statistics.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript proposes an articulated object synthesis framework, Arti-PG. The method is designed based on the insight that the articulated object is structural and can be viewed as created via basic primitives in a structural way. Arti-PG proposes to decompose the generation process into global structure creation by manipulating structures and generating detailed local geometry by fitting and aligning the point clouds. With this hierarchical generation philosophy, Arti-PG proposes a two-stage generation pipeline, together with a labeling and annotation process. Arti-PG can successfully create large numbers of articulated objects with diverse shapes. Experiments demonstrate that models trained using these generated articulated objects can perform better than those trained with a small amount of data. 
Concerns lie in the quality of the generated shapes and the efficiency of the system.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Hierarchical shape modeling and generation is a smart and effective representation. The paper is well-motivated. Leveraging the structural prior and cross-instance local geometry relations, Arti-PG proposes a promising way for generating articulated objects.\", \"The development of an articulated object generation toolbox for various applications. Generating articulated objects in a convenient and efficient manner is quite crucial for many downstream tasks that have a high demand for large-scale articulated object data.\", \"Extensive experiments with real-world evaluations. The authors present a wide range of generated results to demonstrate the effectiveness of the method and the quality and diversity of the generated objects. Besides, various downstream applications are included in the experiments, demonstrating the value of a large scale of diverse articulated objects.\", \"Detailed visualizations including videos are provided in the Appendix and Supp. Besides, very detailed implementations covering pseudo code are provided in the supplementary material.\"], \"weaknesses\": [\"Reasonability of the method. The quality of the generated results is not naturally guaranteed in the method. The design in generating the local details cannot ensure the validity and the quality of the generated object. For instance, simply leveraging this method without a human-in-the-loop design can easily result in invalid outputs, e.g., parts collide with each other in the articulated motion.\", \"The efficiency and the applicability. The method is semi-autonomous and relies on human efforts. Therefore it is indeed questionable whether the method can help with scaling up the articulated object dataset in a reasonable time budget.\", \"Quality and diversity. 
Although the authors have shown appealing generation results, it seems that many instances are still limited to the original articulated objects available in their considered datasets. Thus it is questionable w.r.t. the sample diversity. Creating objects with limited diversity using a method that makes it hard to generate brand-new objects would downweight the value of the method.\"], \"questions\": \"Can the generation method itself ensure the physical fidelity of the generated objects?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
E1EHO0imOb
Scaling FP8 training to trillion-token LLMs
[ "Maxim Fishman", "Brian Chmiel", "Ron Banner", "Daniel Soudry" ]
We train, for the first time, large language models using FP8 precision on datasets up to 2 trillion tokens --- a 20-fold increase over previous limits. Through these extended training runs, we uncover critical instabilities in FP8 training that were not observable in earlier works with shorter durations. We trace these instabilities to outlier amplification by the SwiGLU activation function. Interestingly, we show, both analytically and empirically, that this amplification happens only over prolonged training periods, and link it to a SwiGLU weight alignment process. To address this newly identified issue, we introduce Smooth-SwiGLU, a novel modification that ensures stable FP8 training without altering function behavior. We also demonstrate, for the first time, FP8 quantization of both Adam optimizer moments. Combining these innovations, we successfully train a 7B parameter model using FP8 precision on 256 Intel Gaudi2 accelerators, achieving on-par results with the BF16 baseline while delivering up to a $\sim$ 34 % throughput improvement. A reference implementation is supplied in https://github.com/Anonymous1252022/Megatron-DeepSpeed
[ "quantization", "fp8", "llms", "training", "acceleration", "compression" ]
Accept (Spotlight)
https://openreview.net/pdf?id=E1EHO0imOb
https://openreview.net/forum?id=E1EHO0imOb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "n0t5wiTbMr", "iviS5BTOqs", "i3UV708Nx9", "hkvrQFGKSa", "g3jtJcK2p9", "cLnolGIKEb", "c77lcAO4mS", "ZapilFkL05", "TakT3kc1dW", "MwWZ7SHGVO", "M0GMoF6BaJ", "KTBBnEEJbY", "IGE9wV4rWx", "Bz5W9Le1IV", "6VUtIu3dLe", "4z6NCdsxjl", "3JogXSWYQe", "17OOb2guCm" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "comment" ], "note_created": [ 1730515697723, 1730004938646, 1732975591920, 1732432918364, 1729969441340, 1732433237393, 1733036721551, 1737523599381, 1732432492650, 1732433374079, 1732433270348, 1733303037042, 1732432724017, 1732908617294, 1734944434048, 1732649535504, 1730700011169, 1733277151136 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3789/Reviewer_p6vg" ], [ "ICLR.cc/2025/Conference/Submission3789/Reviewer_1QzP" ], [ "ICLR.cc/2025/Conference/Submission3789/Reviewer_p6vg" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Submission3789/Reviewer_KJr6" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Submission3789/Authors" ], [ "ICLR.cc/2025/Conference/Submission3789/Reviewer_1QzP" ], [ "ICLR.cc/2025/Conference/Submission3789/Area_Chair_kXew" ], [ "ICLR.cc/2025/Conference/Submission3789/Reviewer_8YGg" ], [ "ICLR.cc/2025/Conference/Submission3789/Reviewer_8YGg" ], [ "~Clarence_Lee3" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents a novel approach to 
training large language models (LLMs) using FP8 precision on datasets of up to 2 trillion tokens, revealing critical instabilities associated with FP8 training that were not observable in prior studies. It identifies that the SwiGLU activation function amplifies outliers over prolonged training, leading to instability. To mitigate this, the authors propose Smooth-SwiGLU, a modified activation function that maintains stability without altering performance. The paper also introduces FP8 quantization for both Adam optimizer moments, significantly improving memory usage and training throughput.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Very interesting findings!\\n\\nThe introduction of Smooth-SwiGLU and the quantization of Adam optimizer moments are significant advancements in the FP8 training methodology.\\n\\nThe paper provides thorough experimental results demonstrating the effectiveness of the proposed techniques, achieving comparable performance to BF16 with improved training throughput.\\n\\nSuccessfully training models on datasets up to 2 trillion tokens sets a new benchmark for FP8 training, addressing scalability issues in LLMs.\", \"weaknesses\": \"While the findings are significant, their applicability to other model architectures beyond those tested (like LLaMA2) could be explored further.\", \"questions\": \"How do the results generalize to other activation functions beyond SwiGLU?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a practical solution for quantized (FP8) training over trillion-token datasets.\\n\\nThe paper conducts LLM training over a genuinely large 2-trillion-token dataset, finds that SwiGLU is the main cause of instability when training using FP8, and thus proposes an optimized and smoothed method called Smooth-SwiGLU.\\n\\nWith Smooth-SwiGLU together with FP8, the authors achieve good model
convergence on a 2-trillion-token dataset for LLM training\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Training an LLM on a trillion-token-scale dataset makes the paper's results solid.\\n\\nAdopting a pure FP8 Adam optimizer (both momenta are FP8) and making training converge is a good contribution. \\n\\nIdentifying that SwiGLU causes model training instability and optimizing it into Smooth-SwiGLU is another contribution.\", \"weaknesses\": \"1 ) Paper novelty is limited. The only new thing is adding a scaling factor on top of SwiGLU, which may not be a big novelty. Previous work already studies similar smoothing methods on activation functions, e.g., Swish and SMU as smooth functions for ReLU.\\n\\n[1] Swish: https://arxiv.org/pdf/1710.05941\\n\\n[2] SMU: https://arxiv.org/pdf/2111.04682\\n\\n2 ) The FP8 optimizer contribution just extends previous work that used FP8 for one momentum to using FP8 for both momenta, which may lack a bit of novelty. For example, Figure 5 shows an empirical study with both FP8 momenta on LLaMA2 100m; it shows that the second momentum using the E5M2 format achieves a similar model loss curve to BF16. But how does it generalize to bigger models (more practical models like LLaMA 8B/70B)? Does the second momentum always need a larger dynamic range rather than higher precision?\\n\\n3 ) The only baseline is from this paper [3]; how does this approach compare with NVIDIA Transformer Engine FP8? There is no comparison in either design or evaluation results. Please include a comparison with NVIDIA's Transformer Engine FP8 implementation, both in terms of methodology and empirical results. This comparison would provide valuable context for the paper's contributions.\\n\\n[3] FP8-LM: Training FP8 Large Language Models\\n\\n4 ) Billion-token-level training is almost sufficient for most LLM downstream tasks (e.g. Apple [4]). This trillion-token improvement may not have wide application scenarios. 
Please discuss more on potential use cases where trillion-token training might be beneficial, and how this paper's method scales compared to existing approaches at different dataset sizes.\\n\\n[4] Apple Intelligence Foundation Language Models https://arxiv.org/pdf/2407.21075\", \"questions\": \"How does this paper compare with NVIDIA Transformer Engine's FP8 training with automatic mixed precision?\\n\\nIn Table 3, why is the micro batch 1? The reason for doing quantization is to support larger batch training (higher throughput); if limited to a micro-batch of 1, it is almost impossible to get good throughput/tokens-per-sec numbers.\\n\\nIn Table 4, DeepSpeed ZeRO-1 itself would reduce some GPU memory footprint compared with pure PyTorch DDP. Why use ZeRO-1 and not 2 or 3?\\n\\nIn Figure 6, why does the BF16 loss curve have more spikes compared with FP8 + Smooth-SwiGLU + FP8 Optimizer? To me, BF16 should have a better and smoother loss curve compared with any FP8 method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the reply\"}", "{\"title\": \"Response to reviewer p6vg\", \"comment\": \"$\\\\textbf{Q1}$: \\\"While the findings are significant, their applicability to other model architectures beyond those tested (like LLaMA2) could be explored further... How do the results generalize to other activation functions beyond SwiGLU?\\\"\\n\\n$\\\\textbf{A1}$: According to our analysis in section 4.2, replacing the Swish activation in SwiGLU with any other GLU activation function ([1]) will suffer from similar phenomena, leading to alignment and quadratic amplification. This is in contrast to other activation functions (e.g., ReLU, GeLU) which are linear at larger input magnitudes. We added this clarification in the new version of the paper. 
To validate the analysis, we added Section A.4 in the appendix, detailing FP8 training of a GPT-3 125m model using the GeLU activation function, for standard dataset length. The results demonstrate that no training stability issues were observed in this scenario. \\n\\n\\n[1] GLU Variants: https://arxiv.org/pdf/2002.05202\"}", "{\"summary\": \"This paper 1) identifies instability issues in FP8 training for large language models (LLMs) on trillion-token-scale datasets, 2) links the root cause to the alignment of two weight matrices of the SwiGLU neuron, and 3) proposes Smooth SwiGLU to mitigate outliers in the activations. Smooth SwiGLU achieves this by applying scaling factors before and after the last linear layer of the MLP components. Furthermore, they compared different FP8 formats to obtain the best configuration for the FP8 optimizer. The method successfully stabilizes FP8 LLM training on trillion-token-scale datasets and delivers a 5.1% throughput improvement over the baseline on 256 Intel Gaudi2 accelerators.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The paper uses comprehensive experiments and analysis to link the instabilities in FP8 training to SwiGLU weight alignment.\\n2. The paper provides an efficient implementation of Smooth-SwiGLU.\\n3. The paper is well-written.\", \"weaknesses\": \"1. The motivation of Smooth SwiGLU is that \\\"While disabling quantization of the SwiGLU output effectively prevents divergence, it reduces the potential acceleration benefits of FP8\\\". However, \\\"FP8 + Smooth SwiGLU\\\" is only 5.1% faster than the \\\"FP8 + SwiGLU output BF16\\\". Given that the performance gap is relatively small and the experiments were conducted only on Intel Gaudi2, it would be helpful to compare the two variants on more hardware configurations (i.e., NVIDIA and AMD GPUs) to verify whether the performance gap persists.\\n2. 
Table 2 currently only compares the accuracy and perplexity between the proposed FP8 and the standard BF16 configurations. To provide a more comprehensive evaluation, it would be beneficial to include additional comparisons, such as 'FP8 + Smooth SwiGLU,' 'FP8 + SwiGLU output BF16,' and 'FP8 + SwiGLU output BF16 + FP8 Optimizer'. If evaluating these variants is challenging or unnecessary, please provide the corresponding discussion.\", \"questions\": \"When training larger models, do similar issues arise earlier or later?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer 1QzP 1/2\", \"comment\": \"$\\\\textbf{Q1}$: \\\" Paper novelty is limited. The only thing new is adding a scaling factor on top of swiglu may not be a big novelty. Previous work already study similar smooth method on activation functions. e.g., Swish, SMU as smooth function for relu. \\\"\\n\\n$\\\\textbf{A1}$: There may have been a misunderstanding here: Smooth-SwiGLU\\u2019s scaling factors do not smooth out the activations (as in Swish and SMU), as they appear before (and after) the quantization function (see Fig. 4), not before the activation. Note that without the quantization function, these scaling factors will cancel out. This scaling ensures stability for FP8 training without modifying SwiGLU's functional properties and with minimal implementation complexity. Perhaps our choice of the name `Smooth SwiGLU' was not optimal and led to this misunderstanding. 
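A toy sketch of that point (our illustration, with made-up per-channel scales and a crude FP8 emulation; not the authors' implementation): dividing a tensor by per-channel scales before quantization and multiplying the scales back afterwards is an exact no-op when the quantizer is the identity, but with a range-limited FP8 quantizer it keeps an outlier channel representable.

```python
import numpy as np

E4M3_MAX = 448.0  # largest magnitude representable in FP8 E4M3

def fake_quant_e4m3(t):
    # crude FP8 E4M3 emulation: clip to range, keep ~3 mantissa bits
    t = np.clip(t, -E4M3_MAX, E4M3_MAX)
    m, e = np.frexp(t)
    return np.ldexp(np.round(m * 16.0) / 16.0, e)

x = np.array([0.5, -3.0, 2000.0])  # one outlier channel
w = np.array([1.0, 1.0, 1.0])      # toy next linear layer
exact = x @ w                      # 1997.5

# naive FP8: the outlier is clipped to 448 -> large error
naive = fake_quant_e4m3(x) @ w     # 445.5

# scaled variant: divide by per-channel scales before quantization and
# fold them back after; without the quantizer the scales cancel exactly
s = np.maximum(np.abs(x), 1.0)     # toy per-channel scaling factors
smooth = (fake_quant_e4m3(x / s) * s) @ w  # 1997.5

print(exact, naive, smooth)
```

As described above, the factors sit around the quantization function, so without quantization they cancel exactly; the toy only illustrates that mechanism, not the efficient per-channel implementation.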
\\n\\nMoreover, please note our paper has several other novel contributions besides the smooth-SwiGLU (listed in the bullet list in the introduction section), such as the first FP8 training on a trillion-token dataset and pinpointing the FP8 stability issue to SwiGLU's weight alignment problem.\\n\\n$\\\\textbf{Q2}$: \\\"FP8 optimizer contribution is just extending previous 1 momentum using FP8 to both momentum using FP8 may lack a bit novelty here. For example, Figure 5 shows empirical study for both FP8 momentum on llama2 100m, it shows second momentum using E5M2 format achieve similar model loss curve as bf16. But how does it generalize to bigger models (more practical models like llama 8b 70b)? Is second momentum always need larger dynamic range rather than higher precision? \\\"\\n\\n$\\\\textbf{A2}$: Please note the result in Fig. 6: there we show we can train a 7B model also with an FP8 optimizer (first moment E4M3, second moment E5M2) --- i.e. we show the generalization of the FP8 optimizer recipe also for a practical model, as the reviewer required. \\nAs noted in \\\"general comments\\\", training Llama2 7B takes about 2 weeks in 256 devices. Training a 70B model would require even more computational resources and time, which we do not currently have.\\nHowever, we indeed believe that the second moment would generally require a higher dynamic range than the first moment --- as it is natural to have a significantly larger range for the (estimator of the) square of the gradients than for the (estimator of the) gradients. We have clarified this point in the revised version of the manuscript. Thank you for bringing it to our attention.\\n\\n$\\\\textbf{Q3}$: \\\"The only baseline is from this paper [3], how does this approach compared with nvidia transformer engine fp8. There is no comparison in either design or evaluation results. 
Please include a comparison with NVIDIA's Transformer Engine FP8 implementation, both in terms of methodology and empirical results. This comparison would provide valuable context for the paper's contributions. \\\"\\n\\n$\\\\textbf{A3}$: Please note that both [3] and the Nvidia transformer engine use the same quantization configuration which includes delayed scaling, the E4M3 format for the forward phase, and the E5M2 format for the backward phase. In the paper, we use Gaudi's implementation for the transformer engine which is equivalent. We clarify this point in the experiment section (line 454). Unfortunately, there are no open-source models that were trained on FP8 with Nvidia's transformer engine with similar configurations, and running it requires extensive GPUs resources, which we do not have. Therefore, we cannot run FP8 training with Nvidia's transformer engine directly.\\n\\n$\\\\textbf{Q4}$: \\\"Billion level token training is almost sufficient for most LLM downstream tasks (e.g. Apple [4]). This trillion-token improvement may not have wide application scenarios. Please discuss more on potential use cases where trillion-token training might be beneficial, and how this paper's method scales compared to existing approaches at different dataset sizes.\\n\\n$\\\\textbf{A4}$: Perhaps this is a misunderstanding: our paper refers to trillions of tokens, but the models can still have billions of parameters. Please note models in [4] are also trained for trillions of tokens. For example, in section 3.2.1, the `AFM-server' paragraph in [4]: \\\"We train AFM server from scratch for 6.3T tokens...\\\", and in the AFM-on-device paragraph \\\"training for a full 6.3T tokens\\\". \\nMoreover, one can find many modern foundation models that were trained for trillions of tokens, such as llama2 (2T tokens), llama3 (15T tokens), and mistral (8T tokens).\\nThus, we believe trillion tokens training is a common real-world scenario. 
In this work, we show for the first time the ability to train in this scenario with FP8 precision.\"}", "{\"title\": \"General comment\", \"comment\": \"We sincerely thank the reviewers for their time and support. This constructive feedback has helped us improve the clarity and depth of the paper. We are happy that we have addressed all major concerns and strengthened the overall quality.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"General comment\", \"comment\": \"We thank all the reviewers for their detailed feedback and for expressing positive opinions. We uploaded a new revision of the paper and supplementary material to address all remarks. All the changes are marked in red.\\n\\nNotice that we now mention in the experiments section that $\\\\textbf{each training run takes more than 2 weeks with 256 devices}$. This limits us from doing many additional experiments.\\n\\nMoreover, in the Appendix (section A.3), we added an interesting study of the effect of Smooth-SwiGLU on BF16 training. We show two interesting conclusions: (1) Smooth-SwiGLU allows a smoother training curve using standard LR. (2) Smooth-SwiGLU enables us get a lower loss, especially with a larger LR. This suggests that Smooth-SwiGLU is beneficial for training stability in general, not only for FP8 training. We plan to investigate this direction on a large scale in future work.\\n\\nFor additional details, see the answer for each reviewer concerns. Please let us know if there are any additional comments.\"}", "{\"title\": \"Response to reviewer KJr6\", \"comment\": \"$\\\\textbf{Q1}$: \\\"The motivation of Smooth SwiGLU is that \\\"While disabling quantization of the SwiGLU output effectively prevents divergence, it reduces the potential acceleration benefits of FP8\\\". However, \\\"FP8 + Smooth SwiGLU\\\" is only 5.1\\\\% faster than the \\\"FP8 + SwiGLU output BF16\\\". 
Given that the performance gap is relatively small and the experiments were conducted only on Intel Gaudi2, it would be helpful to compare the two variants on more hardware configurations (i.e., NVIDIA and AMD GPUs) to verify whether the performance gap persists.\\\"\\n\\n$\\\\textbf{A1}$: Following the reviewer's request, we added in the Appendix (Section A.2) performance acceleration with the different configurations on Nvidia GPUs (A6000 Ada). Notice, the performance acceleration ratio is similar for Gaudi2 Vs Nvidia: 27.04\\\\% Vs 27.6\\\\% for 'FP8 + SwiGLU output in BF16', 33.52\\\\% Vs 34.16\\\\% for 'FP8 + Smooth SwiGLU' and 37.08\\\\% Vs 37.58\\\\% for 'FP8'. Unfortunately, we do not have access to AMD GPUs or H100 GPUs (to show better performance). \\n\\n$\\\\textbf{Q2}$: \\\"Table 2 currently only compares the accuracy and perplexity between the proposed FP8 and the standard BF16 configurations. To provide a more comprehensive evaluation, it would be beneficial to include additional comparisons, such as 'FP8 + Smooth SwiGLU,' 'FP8 + SwiGLU output BF16,' and 'FP8 + SwiGLU output BF16 + FP8 Optimizer'. If evaluating these variants is challenging or unnecessary, please provide the corresponding discussion.\\\"\\n\\n$\\\\textbf{A2}$: We thank the reviewer for pointing this out, we missed this. We updated Table 2 to contain the 2 options we run: 'FP8 + SwiGLU output in BF16' (Fig 3) and 'FP8 + Smooth-SwiGLU + FP8 optimizer' (Fig 6). Unfortunately, we didn't run the other options: 'FP8 + SwiGLU output in BF16 + FP8 optimizer' or 'FP8 + Smooth-SwiGLU', since each run takes more than 2 weeks on 256 devices (see `general comment' above). However, we expect similar accuracy since from our experiments the FP8 optimizer converges similarly to the full precision counterpart.\\n\\n$\\\\textbf{Q3}$: \\\"When training larger models, do similar issues arise earlier or later?\\\"\\n\\n$\\\\textbf{A3}$: Excellent question! 
Our analysis in section 4.2 is around a stationary point, so the alignment effect should become stronger as we converge closer to the stationary point. Also, as the reviewer can notice from Fig. 5 in the Llama2 paper [1], larger models often achieve a smaller loss in fewer steps. Combining both observations, we believe that, in larger models, the alignment effect (and its resulting outliers) should appear earlier during training (in terms of steps). Of course, this is something that we need to verify empirically --- and we plan to do so when we have enough resources.\\n\\n=============================\\n\\n[1] Llama2 - https://arxiv.org/pdf/2307.09288\"}", "{\"title\": \"Response to reviewer 1QzP 2/2\", \"comment\": \"$\\\\textbf{Q5}$: \\\"In table 3, why is the micro batch 1? The reason for doing quantization is to support larger batch training (higher throughput); if limiting the micro-batch to 1, it is almost impossible to get good throughput/token-per-sec numbers.\\\"\\n\\n$\\\\textbf{A5}$: The main purpose of this work is to show the ability to accelerate compute by using FP8 matrix multiplications instead of BF16. We use the standard recipe of low-precision training (similar to [3] and the Nvidia transformer engine), which includes storing the high-precision weights, and only quantizing the weights right before the matrix multiplication (done at the forward/backward phases). In general, this recipe does not allow using larger batch sizes than the BF16 training regime (which used a batch size of 1) since it does not reduce memory. We clarify the specific quantization recipe in the experiments section (line 454) of the updated version of the paper.\\n\\n$\\\\textbf{Q6}$: \\\"In table 4, deepspeed zero-1 itself would reduce some GPU memory footprint compared with pure pytorch DDP. 
Why use zero1 and not 2 or 3?\\\"\\n\\n$\\\\textbf{A6}$: We agree with the reviewer that using Zero2 or Zero3 can allow additional memory reduction --- however, we decided to use Zero1 since it is the method that reduces memory by sharding the optimizer moments, which are the focus of table 4, and it allows a simpler implementation. Notice also that the BF16 experiments in table 4 use Zero1, so it is a fair comparison with the proposed method. \\n\\n$\\\\textbf{Q7}$: \\\"In figure 6, why does the bf16 loss curve have more spikes compared with FP8 + Smooth-SwiGLU + FP8 Optimizer? To me, bf16 should have a better and smoother loss curve than any FP8 method.\\\"\\n\\n$\\\\textbf{A7}$: Excellent question! It was previously observed that the BF16 loss curve can have spikes (e.g. figure 4 in [5]). Indeed, as the reviewer pointed out, it seems that Smooth-SwiGLU has strong stabilizing properties --- so strong that FP8 training with Smooth-SwiGLU is more stable (less `spiky') than BF16 training. Following the reviewer's question, to further examine this stabilizing effect, we added to the appendix (Section A.3) a study on the effect of Smooth-SwiGLU on BF16 training. As we also mention in the ``general comment'', we find two additional interesting conclusions: (1) Smooth-SwiGLU allows a smoother training curve using a standard LR. (2) Smooth-SwiGLU lets us get to lower loss values, especially with a larger LR. So indeed, Smooth-SwiGLU has a significant stabilizing effect also in BF16 training. In future work, we plan to investigate this direction on a large scale. 
\\n\\n===================================\\n\\n[1] swish: https://arxiv.org/pdf/1710.05941\\n\\n[2] SMU: https://arxiv.org/pdf/2111.04682\\n\\n[3] FP8-LM: Training FP8 Large Language Models\\n\\n[4] Apple Intelligence Foundation Language Models https://arxiv.org/pdf/2407.21075\\\"\\n\\n[5] Micro scaling: https://arxiv.org/pdf/2310.10537\"}", "{\"title\": \"Comment to Clarence Lee\", \"comment\": \"Thank you for your thoughtful comment; we're delighted to hear that our work has been helpful to you.\"}", "{\"title\": \"Response to reviewer 8YGg\", \"comment\": \"$\\\\textbf{Q1}$: \\\"The paper has an extensive discussion about how feedforward SwiGLU affect FP8 training stability. However, there is no mention about how other model components like RMSNorm, MHA/GQA affect training stability. Could the authors also discuss whether each of the other model components affect FP8 training stability and provide quantitative results?\\\"\\n\\n$\\\\textbf{A1}$: Excellent question! As the reviewer can notice from Fig. 3 --- when we disable the FP8 quantization only at the SwiGLU output the loss converges similarly to the BF16 baseline. This means the other components, such as MHA or RMS Norm, do not cause significant instability with FP8 precision. Notice that GQA was not checked, since it is not part of Llama2 7B architecture. However, we plan in future work to run Llama3, where GQA is included. We added this clarification in the updated version.\\n\\n$\\\\textbf{Q2}$: \\\"It seems the SwiGLU weight alignment issue explains the occurrence of huge-magnitude outliers. However, can the authors comment more on the sporadic nature of these outliers? 
Is SwiGLU held accountable for that as well?\\\"\\n\\n$\\\\textbf{A2}$: To get these immense outliers several things need to occur --- weights alignments ($\\\\mathbf{w}_1\\\\sim \\\\pm \\\\mathbf{w}_2$), high norm weights ($||\\\\mathbf{w}_i||\\\\gg 1$), and specific tokens for which the layer input $\\\\mathbf{x}$ is aligned with the layer weight vectors $\\\\mathbf{w}_i$ (i.e., that $\\\\mathbf{x}$ and $\\\\mathbf{w}_i$ are not near-orthogonal). For these specific tokens, this causes high pre-activation values (i.e., that $|\\\\mathbf{w}_i\\\\cdot\\\\mathbf{x}|\\\\gg 1$), which can be amplified quadratically by the SwiGLU activation --- leading to instability with low-precision training.\\n\\n$\\\\textbf{Q3}$: \\\"If I understand correctly, smooth-SwiGLU does not directly address the weight alignment phenomenon. Rather, it is an alternative implementation specifically designed for FP8 training, which circumvents the overflow issue by preventing outliers in the inputs to the last linear layer. Is this accurate?\\\"\\n\\n$\\\\textbf{A3}$: Exactly, Smooth-SwiGLU doesn't prevent the alignment phenomenon, but it allows FP8 training even with this alignment. We did not try to prevent this alignment since in quantized training we aim to reduce numerical precision without significantly changing the original training regime. The reason FP8 training works now is because Smooth-SwiGLU seems to stabilize training in general, as mentioned in the ''general comment\\\" on the new section (A.3) that includes a study of the effect of Smooth-SwiGLU on BF16 training.\\n\\n$\\\\textbf{Q4}$: \\\"Figure 2 (b) on the dynamics of w1\\n and w2 norm correlation is insightful and interesting. Is the dynamics based on llama2's trianing hypermaraters? Do the authors have additional empirical results on how these dynamics change with different training hyperparameters?\\\"\\n\\n$\\\\textbf{A4}$: Yes, we used llama2 default hyperparameters as remarked in section 6.1. 
We believe it is reasonable to focus only on the default parameters since (1) llama's default hyperparameters are tuned to achieve the best results and (2) the purpose of this work is to allow FP8 training without requiring any additional tuning --- which requires many computational resources. Moreover, please notice in ``general comments'', we remark that the training time was over 2 weeks with 256 devices, and this limits us in checking additional training hyperparameters.\"}", "{\"title\": \"official comments by reviewer 1QzP\", \"comment\": \"Thank authors for the reply, which addressed my major concerns. I have updated my score, and I think this is a solid paper.\"}", "{\"metareview\": \"All reviewers agreed this paper should be accepted: it addresses an important problem, the method is thoughtfully-designed, and the paper is clearly written. A clear accept. Authors: you've already indicated that you've updated the submission to respond to reviewer changes, if you could double check their comments for any recommendation you may have missed on accident that would be great! The paper will make a great contribution to the conference!\", \"additional_comments_on_reviewer_discussion\": \"Three reviewers responded to the author feedback with very short responses, one raised their score. No authors engaged in further discussion of the paper. All reviewers agreed to accept. Reviewer p6vg wrote an extremely short review, I disregarded it. I wouldn't recommend inviting them for future ICLR review cycles.\"}", "{\"comment\": \"Thanks for the informative responses. This is overall a well-executed and thoughtfully presented work. I believe it makes a valuable contribution by advancing the understanding of large-scale FP8 training and showcasing a cost-effective technique for scaling it further.\"}", "{\"summary\": \"The paper presents new findings and addresses key challenges in large-scale FP8 training for LLMs on modern hardware that supports FP8 operations. 
The authors identified that one major challenge in scaling FP8 training is the occurrence of sporadic, high-magnitude activations. They pinpoint SwiGLU as the main source of these extreme outliers, providing the novel insight that a weight alignment process during training leads to substantial SwiGLU activations (as discussed in Section 4.1). To address this issue and stabilize large-scale FP8 training, the authors propose a smooth-SwiGLU activation function. This approach prevents outliers in the quantization of inputs to the last linear layer with an efficient per-channel scaling for better parallelism.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper demonstrates opportunities and challenges in scaling FP8 training to the trillion-token scale using the LLaMA2 architecture. In particular, the authors identify that with SwiGLU (which is important to contemporary LLMs), weight alignment issues can induce large-magnitude outliers and result in training stability issues that pose a significant challenge for FP8's limited dynamic range.\\n\\n2. Extensive experimental results are provided, showing the proposed Smooth-SwiGLU training with the FP8 optimizer (moment 1: E4M3, moment 2: E5M2) is able to converge to the baseline BF16 training. \\n\\n3. Experiments show around 30% improvement in training throughput, which provides significant energy savings for large-scale training.\\n\\n4. Paper writing and presentation are clear, with a well-identified bottleneck and detailed solutions.\", \"weaknesses\": \"1. The paper has an extensive discussion about how the feedforward SwiGLU affects FP8 training stability. However, there is no mention of how other model components like RMSNorm and MHA/GQA affect training stability. Could the authors also discuss whether each of the other model components affects FP8 training stability and provide quantitative results?\\n\\n2. 
It seems the SwiGLU weight alignment issue explains the occurrence of huge-magnitude outliers. However, can the authors comment more on the sporadic nature of these outliers? Is SwiGLU held accountable for that as well?\", \"questions\": \"I would like to ask the authors for several clarifications:\\n\\n1. If I understand correctly, smooth-SwiGLU does not directly address the weight alignment problem. Rather, it is an alternative implementation specifically designed for FP8 training, which circumvents the overflow issue by preventing outliers in the inputs to the last linear layer. Is this accurate?\\n\\n2. Figure 2 (b) on the dynamics of $\\mathbf w_1$ and $\\mathbf w_2$ norm correlation is insightful and interesting. Are the dynamics based on llama2's training hyperparameters? Do the authors have additional empirical results on how these dynamics change with different training hyperparameters?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Validating the relevancy of this work\", \"comment\": \"Dear authors and reviewers,\\n\\nI am a member of the public not affiliated with the authors, but I have worked extensively with pretraining. The problem that the authors tackle is highly relevant and impactful, and I personally have faced huge instabilities in fp8 training beyond 200B tokens, and there was no clear solution for how to tackle this problem. Given that the authors have clearly validated their work at trillion-token scale, I would like to thank the authors for this impactful work and hope that the reviewers can take into account the practical impact of this work when giving a final assessment.\\n\\nThank you!\"}" ] }
E1DGY1FXef
Modeling Abstract Style Prompts for Text-to-Speech Models
[ "Anuj Diwan", "Zhisheng Zheng", "David Harwath", "Eunsol Choi" ]
A recent trend in text-to-speech synthesis (TTS) is to construct models capable of generating naturalistic speech that adheres to a textual style prompt describing the speaker's voice and speaking style. In this paper, we propose a crisper definition of style-controlled TTS by categorizing style tags by how they can be collected (*automatic* tags obtainable using signal processing tools e.g. low-pitched and slow; *demographic* tags obtainable using speaker demographics e.g. male and American accent; and *abstract* tags which need human-annotations e.g. authoritative and awed) and what they represent (*intrinsic* tags inherent to speaker identity e.g. gender, average pitch, texture; and *situational* tags specific to utterance-level speaking styles e.g. emotion). Compared to previous work, we expand the space of style prompts substantially by covering 47 abstract tags, 10 demographic tags and 6 automatic tags. For abstract intrinsic tags, we annotate a subset of speakers from the VoxCeleb dataset. For abstract situational tags, we leverage existing speaking-style-based datasets Expresso and EARS. We train a style-prompted TTS model based on Parler-TTS using these datasets and find that our model outperforms baselines on speech-style consistency metrics. Our collected dataset and model will be open-sourced.
[ "text-to-speech", "style", "emotion", "datasets" ]
https://openreview.net/pdf?id=E1DGY1FXef
https://openreview.net/forum?id=E1DGY1FXef
ICLR.cc/2025/Conference
2025
{ "note_id": [ "eX89WhtlL4", "SijFiMxfET", "NzCti9rpsi", "GCxKiPiulA", "28vV48VeFS" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730707875371, 1731714603396, 1729871813940, 1730121342068, 1730561681745 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11729/Reviewer_cevn" ], [ "ICLR.cc/2025/Conference/Submission11729/Authors" ], [ "ICLR.cc/2025/Conference/Submission11729/Reviewer_gVmW" ], [ "ICLR.cc/2025/Conference/Submission11729/Reviewer_XHjY" ], [ "ICLR.cc/2025/Conference/Submission11729/Reviewer_zSYK" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a clearer categorization of style tags for text-to-speech synthesis, expanding to 63 tags across automatic, demographic, and abstract categories. Using datasets like StyledVoxCeleb and the Parler-TTS framework, the model improves on speech-style consistency while facing challenges in quality and content accuracy. The authors plan to open-source their dataset and model to aid further advancements in style-prompted TTS.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a detailed and clear categorization of speech style tags into abstract intrinsic, abstract situational, demographic, and automatic tags. This clear delineation facilitates a more nuanced understanding and application of style prompts in TTS models.\\n2. The trained TTS models exhibit improved performance on speech-style consistency metrics compared to baselines, demonstrating the effectiveness of the proposed approach in maintaining consistent speech styles.\\n3. The commitment to open-source the dataset and models upon publication promotes transparency and further research in the field, providing valuable resources for other researchers and developers.\", \"weaknesses\": \"Weaknesses:\\n1. 
The paper utilizes specific open-source datasets and designs corresponding style tag annotation strategies to enable human annotators to meet annotation requirements, thus expanding the range of abstract style tag data. This approach is relatively common in prior work, such as VoxEditor \\\\[1] although it does not cover all types comprehensively.\\n2. The current cost of manual annotation limits the scale of building a style tag dataset, which is undoubtedly insufficient for training an effective style-prompted TTS or even a zero-shot style-prompted TTS. This issue is evidenced in the experiments, where the speech MOS scores and content consistency metrics are inferior to those of the baseline models. Additionally, some tags leverage existing metadata from open-source datasets, such as Expresso and EARS, which are rich in emotional tags, but this restricts the expansion of dataset size due to the limited scale of available open-source emotional datasets. Expanding to larger datasets and more speakers is a crucial issue that needs resolution.\\n3. The experiments related to the style-prompted TTS, including comparisons with baselines, should include some demo audio examples to provide a more intuitive comparison. Additionally, for the constructed text style dataset, providing some sample audio style prompts to demonstrate the annotation effect would be beneficial.\\n\\n\\\\[1]: Sheng Z, Ai Y, Liu L J, et al. Voice Attribute Editing with Text Prompt[J]. arXiv preprint arXiv:2404.08857, 2024.\", \"questions\": \"See details in Paper Weaknesses.\\n\\nAnother issue is that this paper introduces a wide variety of style tags, which might lead to potential conflicts between tags, such as Rhythm and Speaking Rate Levels, or Pitch Levels and Emotion. For example, the emotion of Sleepy typically involves low pitch. 
If a style text prompt includes both High-pitched and Sleepy emotions and is input into the model, how would it perform?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We are actively working on revising the paper to improve scalability of our data collection approach, train on cleaner data to prevent degradation of TTS metrics, and perform more analyses on what abstract tags matter. However, this will result in a larger revision than possible during rebuttal, and hence we withdraw the submission.\"}", "{\"summary\": \"Although style-prompted TTS systems typically employ style tags to automatically constrcuct textual style prompts data, there has been a lack of systematic discussion on these tags. To address this gap, this work presents a systematic categorization of style tags across two dimensions. Compared with previous datasets, this work incorporates a broader range of style tags and annotates a 200-hour subset of the VoxCeleb dataset, focusing on tags that were previously underrepresented. The experiments on an open-source style-prompted TTS system validates the effectiveness of the proposed dataset.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The annotated dataset and model weights will be open-sourced.\", \"The categorization of style tags is beneficial for expanding the scope of text prompts constructed based on tags, thereby facilitating better generalization to style prompts encountered in real-world scenarios.\"], \"weaknesses\": [\"The baseline model exhibits noticeable instability, with the paper suggesting that the WER could surpass 20%. 
Such instability may substantially affect the assessment of stylistic elements.\", \"It may be inaccurate to claim that \\\"autoregressive TTS systems are inherently prone to decoding instabilities\\\" , as evidenced by robust autoregressive TTS systems like Valle 2.\", \"Additionally, I value the proposed dataset, but I have reservations about whether the expansion of style tags can be extended to real-world style annotations. This uncertainty arises from the fact that natural language cannot be entirely broken down into a mere combination of tags. To address these concerns, it would be insightful to know if the authors have considered testing the system with free-form style descriptions that go beyond simple tag combinations.\"], \"questions\": \"1. Could you provide further clarification on the rationale behind using different binning thresholds for pitch and speaking rate? Did this lead to significant discrepancies between speaker-level and utterance-level pitch labels, or were other factors at play?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The article introduces a crisper definition for style-prompted text-to-speech (TTS) systems, which categorizes the style tags along two dimensions: how the tags are collected (automatic, demographic, and abstract) and what aspects they represent (intrinsic to speaker identity or situational speaking styles). To support this comprehensive tagging definition, the authors annotated a subset of the VoxCeleb dataset with abstract intrinsic style tags, created the StyledVoxCeleb dataset, and utilized existing datasets Expresso and EARS for abstract situational tags. They then fine-tuned the Parler-TTS model using these diverse datasets. 
Their experiments demonstrate that the new model significantly outperforms baseline systems in maintaining speech-style consistency, achieving higher consistency MOS scores and better tag recall scores.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed crisper categorization of style tags successfully addresses some limitations of current style-prompted TTS approaches.\\n2. This work will release its model and collected data upon publication, which would contribute to related research fields.\\n3. The data collection methodology outlined in Section 3.1 is clear and straightforward. This is also helpful for future research.\", \"weaknesses\": \"The main weaknesses are:\\n1. Possibly inappropriate experimental setup. As described in Section 4.1, this paper combines the test splits of StyledVoxCeleb, Expresso, EARS, and LibriTTS-R to construct the dataset for evaluation, which come from different data domains. However, if the test set includes these three datasets, then ``Init.`` -> ``+LTTSR`` -> ``+LTTSP, Exp, EARS`` -> ``Ours`` in Table 5 will surely perform increasingly better as the domain gap gradually diminishes. Therefore, it can not be determined that the performance improvements are due to the proposed crisper categorization of style tags. To support the claims of this paper, the test set should be selected from out-of-domain data. \\n2. In Table 3, the recall is only 0.36, and the accuracy is just 75%, while similar works have an accuracy of around 85%. Does this suggest that the style description tags in this dataset are inaccurate, resulting in lower testing accuracy? \\n3. In Table 4, the inclusion of the Exp, EARS, and StyledVoxCeleb datasets led to a noticeable decrease in speech intelligibility. 
Although the authors hypothesize that this decline is due to the introduction of abstract speaking styles, small dataset sizes, and noisy data, there is a lack of further experiments to support these assumptions.\\n4. The lack of demo audio examples makes it harder to judge whether the experimental results are convincing. The authors could provide some audio samples.\", \"there_are_also_some_minor_issues\": \"1. Clarity issues. In Section 2, Line 139, the phrase ``this does not add a real signal to the dataset`` is unclear. What is the real signal? Are you referring to the variational information [1] in the speech signal? Additionally, in Line 415, there is a typo: ``on read audiobook data`` -> ``on reading-style audiobook data``.\\n2. In Section 8, the authors claim that the noisy samples from VoxCeleb negatively impact model performance and suggest that scaling to more speakers may mitigate this issue. However, wouldn\\u2019t increasing the number of noisy examples further degrade the model\\u2019s performance? Have the authors considered using cleaner large-scale datasets, such as Emilia [2]?\\n3. In Section 3.1, the paper says that ``we can only provide the celebrity\\u2019s name rather than the actual speech clip``. Have the authors experimented with any models that support audio input, such as Gemini-pro or Qwen-Audio 2?\\n\\nTo conclude, this paper presents an innovative approach to defining style prompt tags; however, it does not introduce any novel algorithms or model structures, so its contribution is moderately limited. Additionally, as noted in the main weaknesses part, there are still some issues with the experimental setups. Therefore, I give it a score of 5.\\n\\n[1] Ren, Yi, et al. \\\"Fastspeech 2: Fast and high-quality end-to-end text to speech.\\\" arXiv preprint arXiv:2006.04558 (2020). \\n[2] He, Haorui, et al. 
\\\"Emilia: An extensive, multilingual, and diverse speech dataset for large-scale speech generation.\\\" arXiv preprint arXiv:2407.05361 (2024).\\n[3] Ji, Shengpeng, et al. \\\"Textrolspeech: A text style control speech corpus with codec language text-to-speech models.\\\" ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024.\", \"questions\": \"My questions are included in the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The author proposes an improved approach for the style-prompted text-to-speech system capable of generating speech that aligns with specific style prompts. The authors categorize style tags into automatic, demographic, and abstract groups, differentiating between intrinsic tags related to speaker identity (e.g., gender, pitch) and situational tags tied to individual utterances (e.g., emotion). The authors conducted experiments based on the open-source model Parler-TTS, and while the model shows enhanced style consistency, challenges remain in balancing speech quality and content accuracy due to the diversity and quality of the training data. 
The authors intend to open source their dataset and model to facilitate further research.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors present a more detailed definition of style descriptions that was not considered in previous work.\", \"The authors have greatly expanded the existing space of stylistic tags to cover 63 tags, and this extensive tag coverage opens up the possibility of generating more personalized and diverse speech.\", \"With more comprehensive style labelling and a diverse dataset, the trained TTS model outperforms the baseline model in terms of speech style consistency metrics.\"], \"weaknesses\": [\"Creating the annotations required significant manual labour, and I acknowledge the effort put in by the authors, but apart from the extended style description definitions, I don't think the article shows the novelty of an ICLR-level paper.\", \"Both the production of the datasets and the evaluation of the models rely excessively on human subjective opinions. Despite the various measures taken by the authors to ensure quality, it is difficult to avoid introducing inherent biases.\", \"Despite the improvement in stylistic consistency, the model performs poorly in terms of speech naturalness (MOS) and content accuracy (WER). For TTS models, speech quality and content accuracy are basic requirements, the lack of which will directly affect the practical application value of the dataset.\", \"The authors present so many categories of labels, but don't analyse them in more detail, and I worry about whether some of the labels will actually make a difference.\"], \"questions\": \"See details in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
E0dTlxy1T4
MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct
[ "Run Luo", "Haonan Zhang", "Longze Chen", "Ting-En Lin", "Xiong Liu", "Yuchuan Wu", "Min Yang", "Yongbin Li", "Minzheng Wang", "Pengpeng Zeng", "Lianli Gao", "Heng Tao Shen", "Yunshui Li", "Xiaobo Xia", "Fei Huang", "Jingkuan Song" ]
The development of Multimodal Large Language Models (MLLMs) has seen significant advancements with increasing demands in various fields (e.g., multimodal agents, embodied intelligence). While model-driven approaches attempt to enhance MLLMs' capabilities through diverse architectures, the gains have become increasingly marginal. Conversely, data-driven methods, which scale up image-text instruction data, are more effective but face limited data diversity and complexity challenges. The absence of high-quality data constitutes a significant development barrier for MLLMs. To address the data quality bottleneck, we propose MMEvol, a novel multimodal instruction data evolution framework. This framework iteratively improves data quality through a refined combination of fine-grained perception, cognitive reasoning, and interaction evolution, generating a more complex and diverse image-text instruction dataset that empowers MLLMs with enhanced capabilities. Beginning with an initial set of instructions, SEED-163K, we utilize MMEvol to systematically broaden the diversity of instruction types, extend visual reasoning steps to improve cognitive reasoning abilities, and thoroughly explore fine-grained information within images to enhance visual understanding and robustness. To comprehensively evaluate the effectiveness of our approach, we conduct extensive qualitative analysis and quantitative experiments across 13 vision-language tasks. Compared to baseline models trained with the initial seed data, the results demonstrate that our method achieves an average accuracy improvement of 3.1 percentage points. Furthermore, our approach reaches state-of-the-art (SOTA) performance in nine tasks using significantly less data compared to state-of-the-art models.
[ "MLLM; MultiModal; Visual Reasoning" ]
Reject
https://openreview.net/pdf?id=E0dTlxy1T4
https://openreview.net/forum?id=E0dTlxy1T4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "za7wQB5ZP9", "z99K0aVJhM", "xN3n9lgLIG", "wamayP6YeW", "vnNeMsC2og", "uHTzniumZt", "syuahsVRQF", "rPcg6ERTTf", "psCUMeLnyS", "obtvQgUcZt", "nGjcTgxzRR", "n9iegCvnMg", "jzgqoNkcrj", "jb56dlfM0U", "jaX1WdmZhH", "iYjdCdsz6S", "iITzuNnEGm", "fnsvERkcz3", "fWoX8cyjkq", "ePYjdq9h1f", "cXmgE3IZXJ", "b94ovU5mGX", "ZO7zA3M7ih", "Yh9bX33RGG", "TSBLm7VZ5G", "Symdqh77mW", "SoOa4oNSQt", "Rypbd8nA3u", "RhC2eDRqf9", "NJPrEz0NYN", "KBYvc0IDRM", "INjEWgFScd", "I6TJff0SA4", "GsOcBc1nZ3", "GUaXNVhbY7", "Ch6NcGDSMY", "9ywmb9JRhD", "7wEk2j6LzY", "4QFTWGETYv", "3lMGZisUmZ", "1vhVvL2jnv", "1jqVRFMFoq", "1OEB0X236e" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732180976043, 1732427192054, 1733067280042, 1732274583443, 1732591532602, 1732181126122, 1732788748408, 1732519176928, 1732180790485, 1729223087709, 1732180894458, 1732427026715, 1732274484077, 1732730773505, 1732864811956, 1733129158991, 1732575331326, 1732274726471, 1729131358130, 1732035310077, 1733067360811, 1732181031494, 1732788719031, 1732519431115, 1732180934612, 1732519534290, 1732181107551, 1734869542887, 1731297275844, 1732427115943, 1732275176573, 1730712481674, 1732427145273, 1733112346296, 
1732519578789, 1730196346558, 1737523408559, 1732274416493, 1732519494476, 1732366068763, 1732180664535, 1732427070415, 1732180870179 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_h9M6" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_8Cfc" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_pN7Z" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_u5c1" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_u5c1" ], [ "~Lai_Wei7" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Area_Chair_K66o" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_1KdT" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_8Cfc" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_1KdT" ], [ 
"ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Reviewer_pN7Z" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ], [ "ICLR.cc/2025/Conference/Submission643/Authors" ] ], "structured_content_str": [ "{\"comment\": \"> **W 3**: Following 2, the proposed method evolves the instruction multiple times; will this lead to the error accumulation problem?\\n\\n**Response:** \\n\\nCumulative error is inevitable. However, it is possible to estimate the number of samples contributing to the cumulative error after three iterations as approximately $0.055^3$ \\u00d7 160K, which amounts to around 30. This is a relatively small quantity (30 vs. 160K) and exerts a marginal impact on the quality of the evolved data.\\n\\n> **W 4**: The paper focuses on improving the training data quality, while the provided example is quite limited. More data samples will help better evaluate the data quality.\\n\\n**Response:** \\n\\nThank you for your valuable suggestions. In the revised version, we have incorporated additional visual cases in Figures 21-23 (highlighted in red) to effectively illustrate the validity of our evolution.\\n\\nThank you again for your insightful comments. If you have other comments, we are happy to address them to polish this work. We look forward to contributing to the development of both the Multi-Modal research and the open-source community.\"}", "{\"comment\": \"Dear Reviewer u5c1,\\n\\nI hope this message finds you well. We have carefully considered your feedback and have made significant improvements to the manuscript. We truly value your insights, and your expertise has greatly contributed to enhancing the quality of our work. 
Could you please let us know if the revisions meet your expectations? We are eager to address any further queries you might have.\\n\\nThank you for your invaluable support and consideration.\\n\\nWarm regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer 1KdT,\\n\\nWe hope this message finds you well.\\n\\nWe have carefully addressed your questions and concerns in the rebuttal, including conducting additional experiments and providing detailed clarifications.\\n\\nAs the rebuttal deadline is approaching, we kindly invite you to join the discussion. We would greatly appreciate it if you could reconsider your rating, provided all your concerns have been addressed. If you have any additional questions, please do not hesitate to let us know. We are more than happy to provide further clarifications.\\n\\nThank you again for your careful review and valuable suggestions!\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer pN7Z:\\n\\nWe are truly grateful for your insightful comments and the guidance you provided during your review of our paper. We are pleased to inform you that we have addressed all points raised and have made significant improvements. As the discussion phase draws near, we kindly request your reevaluation at your earliest convenience. Should any questions remain, we are at your disposal to clarify them promptly.\\n\\nThank you for your time and understanding.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer u5c1:\\n\\nThank you for your reply. Your comments are very insightful and valuable for us to polish this paper. If you have any additional questions, please do not hesitate to let us know. 
We are more than happy to provide further clarifications.\\n\\nThank you again for your careful review and valuable suggestions!\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"> **Q 3**: Please provide a study on this to evaluate the robustness of the proposed method.\\n\\n**Response:** \\n\\nWe selected 1K data points with lower scores (<5) as low-quality seed data for three rounds of instruction evolution, resulting in 3K evolved data points. We conducted a comparative experiment between these evolved data and an equal quantity of 3K data points evolved from randomly selected seed data. The results are presented in the table below, and the default setting is random initialization.\\n\\n| Seed | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. |\\n| ------------------------------ | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- |\\n| Random-Initialization (3K) | 37.9 | 26.1 | 31.3 | 55.1 | 43.8 | 35.8 | 53.2 | 40.5 |\\n| Low-Score-Initialization (3K) | 36.5 | 25.4 | 29.9 | 54.8 | 43.1 | 35.0 | 52.6 | 39.7 |\\n\\nThe quality of evolutionary data is influenced by the initial instructions, although the impact is relatively minor. Nevertheless, high-quality instructional data can still be generated through multiple iterations, demonstrating the robustness of our method.\\n\\nThank you again for your insightful comments. If you have other comments, we are happy to address them to polish this work. We look forward to contributing to the development of both the Multi-Modal research and the open-source community.\"}", "{\"comment\": \"> **Q 3**: Prompt Sensitivity\\n\\n**Response:** \\n\\nThank you for your valuable suggestion. We are pleased to further elucidate the design rationale of our prompt, as elaborated in our paper. Initially, we identified three common deficiencies in existing multimodal data. 
To address these issues, we devised three evolutionary strategies tailored to each deficiency, forming a meaningful motivation for our approach. However, the core challenge in evolving multimodal instructions lies in assessing the quality of multimodal data, including its degree of complexity and diversity. It is only by quantifying these attributes that we can effectively enhance the data's complexity and diversity.\\n\\nTo tackle this challenge, we adopted the classification scheme outlined in Cambrian-1 [10], which categorizes multimodal capabilities into visually-centric and language-centric atomic capabilities. Each multimodal problem is decomposed into a combination of atomic capabilities and atomic goals, with the introduction of a visually-centric visual operation chain to measure the level of reasoning complexity. Through these designs, we can assess and specifically direct the evolution of multimodal data's complexity and diversity.\\n\\nWe adhere to three principles to maximize the length of the visual operation chain, increase data on atomic capabilities, and diversify atomic goal types. Ultimately, this approach yielded the most concise prompts across the three respective directions and successfully drove the evolution of multimodal instructions. Our prompt design rationale strictly adheres to the principles of making instructional evolution feasible and efficient through minimalistic design.\\n\\n[10] Shengbang Tong, et al. Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs (2024).\\n\\nWe appreciate your response and will actively participate in the discussion. If our feedback resolves your concerns or you have other concerns, kindly let us know. We will do our best to address them for you and enhance this work.\"}", "{\"comment\": \"Dear Reviewer 1KdT,\\n\\nWe deeply appreciate the time and effort you have invested in reviewing our paper. We have thoroughly addressed your valuable comments and made the necessary revisions. 
Could you kindly re-evaluate our manuscript at your earliest convenience? We are more than willing to discuss any remaining concerns you might have.\\n\\nThank you for your understanding and cooperation.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"> **W 4**: First, the seed dataset and one of the vision-centric capabilities is OCR, while there are few OCR-related benchmark results; more results on OCRBench, ChartQA, DocVQA, TextVQA would be very insightful.\\n\\n**Response:** \\n\\nWe sincerely appreciate your valuable feedback. We have included additional evaluations related to OCR, as shown in the table below. The results demonstrate that MMEvol significantly enhances the OCR capabilities of the MLLM compared with the Baseline.\\n\\n| Model | OCRBench | ChartQA | DocVQA | TextVQA |\\n| --------- | -------- | ------- | ------ | ------- |\\n| SEED-8B | 57.3 | 70.4 | 79.2 | 69.8 |\\n| MMEvol-8B | 61.2 | 74.6 | 84.6 | 74.6 |\\n\\n> **W 5**: Second, more ablation on the ratio of the three evolution methods in each round, how to decide and eliminate failed evolution, what\\u2019s the success/fail ratio for each round, what is the model quality gain for each round, etc. would be informative.\\n\\n**Response:** \\n\\nThank you for your detailed review. As shown in Fig. 7 in the main paper, we prompt GPT-4o mini to analyze the evolutionary gain and complexity levels of generated instruction data. As for data elimination, each generated sample will be rated on a difficulty scale of 1 to 10 according to the evaluation criteria in Fig. 7; samples that do not demonstrate significant evolutionary advancement, *i.e.*, \\u201cimproved=False\\u201d or \\u201cimproved=True\\u201d while receiving a score below 6, will be eliminated from further consideration. 
As suggested, we have tallied the failure rates for each round as shown in the table below.\\n\\n| Round-1 | Round-2 | Round-3 |\\n| ------- | ------- | ------- |\\n| 26% | 24% | 20% |\\n\\nMoreover, we conducted ablation experiments on the ratios of the three evolutionary directions during the evolution process of the 1K data, as shown in the table below. The results indicate that when the ratios of the three evolutionary directions are equal, the highest average performance can be achieved, thereby demonstrating that all three directions are equally important for the diversity and complexity of the evolutionary instruction data.\\n\\n| FP-Evol | I-Evol | CR-Evol | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. |\\n| ------- | ------ | ------- | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- |\\n| 2/3 | 1/6 | 1/6 | 36.9 | 26.3 | 31.0 | 54.0 | 44.8 | 34.4 | 51.4 | 39.8 |\\n| 1/6 | 2/3 | 1/6 | 34.3 | 25.4 | 29.2 | 53.2 | 43.5 | 35.8 | 52.6 | 39.2 |\\n| 1/6 | 1/6 | 2/3 | 36.3 | 26.7 | 32.5 | 54.3 | 44.0 | 35.2 | 51.1 | 40.0 |\\n| 1/3 | 1/3 | 1/3 | 37.9 | 26.1 | 31.3 | 55.1 | 43.8 | 35.8 | 53.2 | 40.5 |\\n\\n> **W 6**: Third, in comparison with other methods, InternVL2-8B (released 2024/07) and Qwen2-VL-7B (released 2024/08) should be included in Table 2 under the weight open-source section.\\n\\n**Response:** \\n\\nWe have revised the missing citation and made changes in Table 2, highlighting them in red. We appreciate your valuable feedback, which has contributed to the improvement of our work.\\n\\n> **Q 1**: Is it because the pretrained ViT / LLM lacks these capabilities in the first place, or have pretrained models already learned enough knowledge but somehow forget it with poor instruction data during fine-tuning? Do you think this data will help other pretrained VLMs trained with tens of billions of image/text tokens?\\n\\n**Response:** \\n\\nThis is a very interesting question. 
We believe the latter is correct. The pre-trained model has already acquired sufficient general knowledge and coarse-grained alignment. Therefore, during the supervised fine-tuning phase, it is only necessary to utilize high-quality, fine-grained instructional data to effectively activate this knowledge, which will lead to improved performance. We think our data can assist other pre-trained Vision-Language Models (VLMs) in achieving better training performance. Subsequently, we will release approximately 10 million high-quality evolutionary data points generated using MMEvol to support the open-source community in building more robust fully open-source Multimodal Large Language Models (MLLMs).\\n\\n\\n\\nThank you again for your insightful comments. If you have other comments, we are happy to address them to polish this work. We look forward to contributing to the development of both the Multi-Modal research and the open-source community.\"}", "{\"summary\": \"MMEvol addresses data quality and diversity challenges by proposing an iterative evolution of image-text instruction data. Starting with SEED-163K, it expands instruction types, enhances visual reasoning, and strengthens fine-grained perception and cognitive abilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work proposes an image-text instruction evolution framework, Evol-Instruct, to enhance the quality and quantity of instruction data. Using three distinct evolution methods and instruction elimination to remove harmful data, the approach increases the complexity and diversity of a limited seed dataset. 
After three rounds of evolution, the resulting data trains a new model that achieves SOTA performance across various benchmarks.\", \"weaknesses\": [\"Evolving a multimodal dataset makes sense and is very interesting, but the actual performance improvements are too marginal in my perspective (2~3%), because evaluating language ability can improve by a large margin if it is a truly contributive approach.\", \"Experiments should be compared for a fair comparison with the same architecture and the same dataset to demonstrate the effectiveness of MMEvol.\", \"What kind of dataset samples are more effective to be applied by MMEvol, like math, code, or anything else?\", \"---\", \"I will keep my score because it seems the improvements are more marginal than what I've expected (avg. 5%p~10%p).\"], \"questions\": \"Refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Q2&W 3&Q3**: The paper lacks a comparison of different prompts used in each evolution stage, leaving the impact of prompt templates on data refinement unclear. Is the rewritten output sensitive to changes in the prompt? What is the rationale behind the current prompt design? Have other prompt variations been compared?\\n\\n**Response:**\", \"base_version_of_prompt_as_below\": \"\", \"fp_evol\": \"I want you act as a Q&A Creator. Your objective is to draw inspiration from the given Q&A to create a brand new created Q&A.\", \"i_evol\": \"I want you act as a Q&A Rewriter. Your objective is to rewrite a given Q&A into a more complex form to meet real word interactive demand.\", \"cr_evol\": \"I want you act as a Q&A Rewriter. Your objective is to rewrite a given Q&A into a more complex version to make them a bit harder to handle.\\n\\nTo verify the effectiveness of our prompt design, we conducted an ablation study using the base prompt on 1K seed data, while maintaining equal evolutionary probabilities across three fixed directions. 
Additionally, we provided supplementary visual results using the base prompt in Figure 11 of the paper, highlighted in red.\\n\\n| FP-Evol | I-Evol | CR-Evol | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. |\\n| ------------ | ------------ | ------------ | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- |\\n| $\\\\checkmark$ | $\\\\checkmark$ | $\\\\checkmark$ | 34.7 | 25.7 | 29.9 | 54.1 | 42.1 | 35.5 | 49.8 | 38.8 |\\n| $\\\\checkmark$ | $\\\\checkmark$ | $\\\\times$ | 35.7 | 25.9 | 30.3 | 54.8 | 42.9 | 35.2 | 51.2 | 39.4 |\\n| $\\\\checkmark$ | $\\\\times$ | $\\\\times$ | 36.5 | 25.4 | 30.8 | 55.0 | 43.6 | 35.4 | 52.4 | 39.9 |\\n| $\\\\times$ | $\\\\times$ | $\\\\times$ | 37.9 | 26.1 | 31.3 | 55.1 | 43.8 | 35.8 | 53.2 | 40.5 |\\n\\nThe symbol $\\\\checkmark$ indicates that during the evolutionary process, the prompt has been replaced with the base version, as demonstrated in the table. Utilizing our meticulously designed prompts significantly enhances the diversity and complexity of the data, thereby making the evolutionary process more efficient.\\n\\n> **W 4&Q4**: When comparing the results of **MMEval-8B** with **Cambrian-1 8B** in Table 2, although **MMEval-8B** shows overall improvements, it exhibits significant performance declines on key benchmarks like **MMMU**, **AI2D**, and **MMStar**. Why does **MMEval-8B** perform poorly on **MMMU**?\\n\\n**Response:** \\n\\nCompared to Cambrian-1 8B, even with the utilization of only 8% additional data (480K vs. 6M), our model demonstrates comparable performance on key benchmarks such as **MMMU**, **AI2D**, and **MMStar**. Furthermore, continual enhancement of image quantity during the evolutionary process (for instance, through a simple rewriting of $VILA^2$ [1] rather than multiple iterations) would yield significant improvements on the MMMU benchmark. 
This suggests that for evaluative datasets like MMMU, which consists of college-level textbook questions, relying solely on limited image datasets for instruction evolution may yield relatively modest enhancements compared to text-centered capabilities if there is insufficient new image data available. However, it is important to note that the addition of image data and instruction evolution can be synergistically combined, resulting in more substantial improvements overall.\\n\\n[1] Yunhao Fang, et al. \\\"$VILA^2$: VILA Augmented VILA.\\\" *arXiv preprint arXiv:2407.17453* (2024).\\n\\nThank you again for your insightful comments. If you have other comments, we are happy to address them to polish this work. We look forward to contributing to the development of both the Multi-Modal research and the open-source community.\"}", "{\"comment\": \"Dear Reviewer 1KdT,\\n\\nThank you for contributing your time and expertise to review our manuscript. We've taken your insightful comments seriously and amended the paper accordingly. As we approach the discussion deadline, we are eager to hear your thoughts on the revised version. Should any points still need clarification, we're ready to assist promptly.\\n\\nThank you for your indispensable guidance.\\n\\nWith gratitude,\\n\\nAuthors\"}", "{\"title\": \"Responses to Authors' Comments\", \"comment\": \"I sincerely thank the authors for their comprehensive responses to my concerns. 
However, I still tend to maintain my initial score for the following reasons:\\n1. Scalability Concerns: The authors attempt to demonstrate the scalability of the proposed method by conducting an additional experiment using an open-source state-of-the-art (SoTA) method as a data rewriter. However, my concern remains that regardless of whether the rewriter is open-source or closed-source, it inevitably serves as a performance upper bound for the proposed method. This implies that surpassing the rewriter's performance is unattainable if the method merely distills knowledge from it as a teacher. I consider this a significant limitation of the paper, yet the authors neither address it adequately nor mention it explicitly in the manuscript.\\n2. Scalability and Practicality: The authors claim that their method is both scalable and practical. However, I find no concrete evidence supporting this assertion, such as experiments involving larger datasets or models. This raises the question: what exactly is meant by scalability in this context?\\n3. Prompt Sensitivity: Regarding my concern about prompt sensitivity, the authors provide a comparison between their prompt and a few naive alternatives. However, they fail to address critical aspects, such as the rationale behind their specific prompt design or whether the rewritten outputs are sensitive to variations in the prompt. Considering that the paper focuses on designing a data refinement pipeline where the prompt plays a pivotal role, the lack of insights in this area significantly weakens its contributions.\"}", "{\"comment\": \"Thanks for the detailed rebuttal, my main concern is solved and I keep my score as 6.\"}", "{\"comment\": \"Dear Reviewer 1KdT,\\n\\nThank you for taking the time to review our responses and for updating your rating! We sincerely appreciate your recognition of our efforts! We strongly agree that more exploration of synthetic data for VLMs is needed. 
The lack of synthetic data, especially data with diversity and complexity, seriously hinders the further improvement of VLM performance. Besides, we will add the ablations to the final version of this paper.\\n\\nThank you once again for your valuable suggestions and constructive feedback throughout the entire review process!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response. You have addressed most of my concerns so I'm glad to raise the rating to 6.\"}", "{\"comment\": \"Dear Reviewer u5c1:\\n\\nThank you for your extensive review and constructive comments on our manuscript. We have earnestly worked on resolving all issues highlighted and have provided comprehensive responses to your queries. As the review deadline is imminent, we humbly request your reevaluation of our revised submission. Please feel free to reach out if any further clarification is required from our side.\\n\\nYour support is highly valued, and we thank you in advance for your consideration.\\n\\nWith gratitude,\\n\\nAuthors\"}", "{\"summary\": \"The paper addresses the challenge of enhancing the quality and diversity of training data for Multimodal Large Language Models (MLLMs). Traditional model-driven approaches face diminishing returns, while existing data-driven methods are limited by the complexity and variety of available data. To overcome this, the authors propose MMEvol, a framework that iteratively refines image-text instruction data through fine-grained perceptual evolution, cognitive reasoning evolution, and interactive evolution. This process generates a richer and more diverse dataset, improving the models' visual understanding and reasoning capabilities. The approach leads to a significant performance boost across multiple vision-language benchmarks, achieving state-of-the-art results in several tasks with less data compared to existing methods. 
This paper contributes by advancing the capability of MLLMs through an innovative data evolution method that emphasizes the quality of instructions over sheer data volume.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Data Evolution Framework: The MMEvol framework effectively enhances the diversity and complexity of image-text instruction data through iterative evolution methods, such as fine-grained perceptual, cognitive reasoning, and interactive evolution. This approach significantly improves the quality of data used for training MLLMs, addressing a critical bottleneck in existing data-driven methods.\", \"empirical_improvement\": \"The proposed method achieves an average accuracy improvement of 3.1 percentage points over baseline models and reaches state-of-the-art performance in nine vision-language tasks. This demonstrates the efficacy of MMEvol in enhancing model capabilities with less training data compared to other approaches.\", \"data_balancing_and_quality_analysis\": \"The generated instructions are more compositional, longer in reasoning chains, and more balanced between objects. The statistics of the instructions suggest improved quality of instructions.\", \"weaknesses\": \"Evaluation Limitation: While the paper claims that the instruction generation method is a key contribution, it lacks a direct comparison with other multimodal instruction generation techniques, such as MIMIC-IT or LLaVA-NEXT. To strengthen this evaluation, I suggest that the authors generate instructions using MIMIC-IT and LLaVA-NEXT methods on the same seed data and compare the quality, diversity, and complexity of the resulting instructions with those generated by MMEvol. This would help demonstrate how MMEvol performs relative to similar methods in the field, addressing the largest weakness of the paper.\", \"absence_of_failure_case_study\": \"The paper does not sufficiently explore the limitations or failure scenarios of MMEvol. 
I recommend that the authors provide concrete examples of cases where MMEvol might fail or produce suboptimal results. Additionally, conducting an analysis of how performance changes with increasing rounds of evolution could help identify potential saturation points, offering insights into when the method might negatively impact model performance.\", \"technical_contribution\": \"The paper's technical contribution feels incremental compared to previous works like MIMIC-IT[1] or MM-Instruct[2], which also iteratively refine instructions from image metadata. To clarify the novelty of MMEvol, it would be helpful for the authors to explicitly outline which aspects of their method are similar to these prior works and what specific contributions MMEvol makes beyond them. This will help position the paper relative to existing methods and highlight any unique advancements.\\n\\n[1] Li et al., MIMIC-IT: Multi-Modal In-Context Instruction Tuning, 2023.\\n\\n[2] Liu et al., MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment, 2024.\", \"questions\": [\"Fig 12: why does negative scaling happen at step 6k-7k? Is it related to certain instruction augmentation techniques?\", \"Table 1: What are the number of instructions for each row?\", \"Table 2: What's Qwen2-7B baseline performance?\", \"The quality, and maybe quantity, of the seed instruction set may affect the quality of the generated instructions. Please provide a study on this to evaluate the robustness of the proposed method.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your interesting work\", \"comment\": \"Have you open-sourced the code of MM-Evol? This is a very useful method and I want to try it in other domains and tasks. 
Thank you :)\"}", "{\"comment\": \"Dear Reviewer h9M6,\\n\\nYour insights have been invaluable in refining our work, and we have diligently addressed each of your comments. As we approach the discussion deadline, we kindly ask if you could reassess our revised manuscript. We are more than willing to engage in further dialogue to ensure all your concerns are fully resolved.\\n\\nThank you for your attention to this matter.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for the valuable and encouraging comments! Our point-by-point responses to the reviewer's mentioned concerns are provided as follows.\\n\\n> **W 1**: Evolving a multimodal dataset makes sense and is very interesting, but the actual performance improvements are too marginal in my perspective (2~3%), because evaluations of language ability can improve by a large margin if the approach is truly contributive.\\n\\n**Response:**\\n\\nWe put our 2-3% performance enhancement in context with the following points:\\n\\n1. The data we evolved is relatively limited. The 163K dataset constitutes only 15% of the total dataset (1.1M).\\n\\n2. The rewriting model we employed, GPT-4o-mini, is cost-effective but not optimal. If we utilized a more advanced open-source model such as Qwen2VL with 72 billion parameters, we could achieve further improvements.\\n\\n3. In contrast to Cambrian's dataset of 7 million, which includes an additional 6 million high-quality instruction data points, we have achieved better results using only 480K data points (8%) evolved from a limited seed dataset.\\n\\nConsidering these three points, achieving a 2-3% overall performance improvement with such a small amount of evolved data is quite acceptable. Should we scale up our efforts and use a more powerful MLLM, the results would be very promising.\\n\\n| Data | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. 
|\\n| ---------------- | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- |\\n| GPT4o-mini (3K) | 37.9 | 26.1 | 31.3 | 55.1 | 43.8 | 35.8 | 53.2 | 40.5 |\\n| Qwen2VL-72B (3K) | 39.1 | 27.9 | 33.1 | 57.8 | 46.4 | 36.9 | 46.9 | 41.2 |\\n\\n> **W 2**: Experiments should be compared fairly, with the same architecture and the same dataset, to demonstrate the effectiveness of MMEvol.\\n\\n**Response:** \\n\\nTo conduct a fair comparison, we randomly downsampled the same number of data points from MMEvol for a rigorous comparative experiment. The results of the experiment are presented in the table below. Under the same data volume and model architecture, MMEvol-8B achieved an average improvement of approximately 2.7 points, demonstrating the effectiveness of our approach.\\n\\n| Seed | IT | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | MIA | BLINK | MMSInst | AVG. |\\n| ------------- | ---- | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- | ----- | ------- | ---- |\\n| LLaVA-Next-8B | 0.8M | 43.9 | 31.5 | 44.6 | 69.9 | 52.3 | 41.7 | 60.1 | 65.1 | 43.5 | 25.6 | 47.8 |\\n| MMEvol-8B | 0.8M | 48.4 | 41.3 | 45.9 | 71.3 | 47.6 | 40.2 | 61.2 | 72.6 | 45.2 | 30.9 | 50.5 |\\n\\n> **W 3**: What kinds of dataset samples benefit most from applying MMEvol, e.g., math, code, or anything else?\\n\\n**Response:** \\n\\nThank you for your insightful concern. As we focus solely on the textual portion of the instruction data during the evolution process, without supplementing new images, text-centric multimodal instruction data shows greater efficiency and a higher success rate in terms of evolution. For instance, the evolution success rate is significantly higher for data related to code and creative tasks. 
In contrast, for scientific diagrams (such as those in chemistry, physics, and mathematics), the success rate of evolution is relatively lower due to constraints imposed by the types and quantities of images available.\\n\\nThank you again for your insightful comments. If you have other comments, we are happy to address them to polish this work. We look forward to contributing to the development of both multimodal research and the open-source community.\"}", "{\"comment\": \"Thanks for the valuable and insightful comments! Our point-by-point responses to the reviewer's mentioned concerns are provided as follows.\\n\\n> **Q 1**: Scalability Concerns:\\n\\n**Response:** \\n\\nThank you for your valuable suggestion. Your main concern pertains to whether the data evolution in MMEvol can surpass the upper bound. Here, we provide further clarification. First, in the open-source multimodal community, there are numerous fully open-source works, such as LLaVA-OneVision [1] and Molmo [2]. These works have utilized a large amount of multimodal data constructed using closed-source multimodal teacher models, ultimately achieving results superior to those of the teacher models. Similarly, self-evolution works like VILA2 [3] have successfully surpassed the upper bound on MLLM performance through multiple cycles of simple rewriting on extensive datasets. These works collectively validate the efficacy of using multimodal teacher models to construct data that eventually trains stronger multimodal models, outperforming the original teacher models. Consequently, MMEvol emerges as a promising approach, offering a more efficient method for multimodal data construction and achieving this with less data than prior efforts. \\n\\nSecondly, the rapid progress in instruction evolution is grounded in iterative comparison during synthetic data generation, guiding data synthesis through clear directions in complexity and diversity. 
This approach generates more complex and diverse data, eventually outperforming the teacher models. Such methodologies have already been validated in domains like code [4], mathematics [5], and text [6]. Our approach successfully applies the principles of instruction evolution to the multimodal domain.\\n\\nLastly, we further elucidate that the breakthrough of the upper bound is a conclusion drawn from previous works [1,2,3]. The core contribution of MMEvol lies in providing a more efficient method for generating multimodal data. It can quickly construct a substantial amount of high-quality data from a limited base, intentionally enhancing diversity and complexity. Compared to simple rewriting methods like MIMIC-IT [7], ALLaVA [8], and MM-Instruct [9], which lack clear objectives, our method demonstrates higher efficiency and better outcomes. As shown in the table below, MMEvol surpassed the teacher model on both HallBench and POPE using only 15% of full data (1.1M) for evolution.\\n\\n| Model | HallBench | POPE |\\n| ---------------- | --------- | -------- |\\n| GPT4o-mini (API) | 61.9 | 86.1 |\\n| MMEvol | **64.1** | **87.8** |\\n\\n> **Q 2**: Scalability and Practicality\\n\\n**Response:** \\n\\nThank you very much for your detailed review. We have included the missing scalability study below. Due to the absence of models at the scale of Qwen2 and LLaMA3 13B, we have chosen Vicuna 1.5 to conduct the ablation experiments.\\n\\n| Model | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. 
|\\n| ----------------------- | -------- | ------------- | -------- | -------- | --------- | -------- | -------- | -------- |\\n| Vicuna-7B (seed 3K) | 28.7 | 20.7 | 22.9 | 38.9 | 39.6 | 29.3 | 43.2 | 31.9 |\\n| Vicuna-7B (evolved 3K) | 30.9 | 22.0 | 25.6 | 41.2 | 42.3 | 31.6 | 46.3 | 34.3 |\\n| Vicuna-7B (evolved 6K) | 31.4 | 23.2 | 28.6 | 43.5 | 44.6 | 32.3 | 48.1 | 36.0 |\\n| Vicuna-7B (evolved 9K) | 31.9 | 24.0 | 31.1 | 44.7 | 47.4 | 33.8 | 50.3 | 37.6 |\\n| Vicuna-13B (evolved 9K) | **34.6** | **26.1** | **34.5** | **50.6** | **52.3** | **36.1** | **54.5** | **41.3** |\\n\\nAs shown in the table, the model's multimodal capabilities can be further enhanced with increasing data volume and model scale. Larger-scale models exhibit greater performance improvements when trained on our high-quality data.\\n\\n[1] Bo Li, et al. LLaVA-OneVision: Easy Visual Task Transfer (2024).\\n\\n[2] Matt Deitke, et al. Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models (2024).\\n\\n[3] Yunhao Fang, et al. $VILA^2$: VILA Augmented VILA (2024).\\n\\n[4] Ziyang Luo, et al. WizardCoder: Empowering Code Large Language Models with Evol-Instruct (2023).\\n\\n[5] Haipeng Luo, et al. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct (2023).\\n\\n[6] Can Xu, et al. WizardLM: Empowering Large Language Models to Follow Complex Instructions (2023).\\n\\n[7] Li et al., MIMIC-IT: Multi-Modal In-Context Instruction Tuning, 2023.\\n\\n[8] Guiming et al., ALLaVA: Harnessing GPT4V-Synthesized Data for Lite Vision-Language Models, 2024.\\n\\n[9] Liu et al., MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment, 2024.\"}", "{\"comment\": \"Dear Reviewer 8Cfc,\\n\\nThank you very much for your detailed review and constructive feedback. We have carefully revised the manuscript to resolve the issues you've pointed out. 
With the discussion deadline approaching, we would be grateful if you could review our changes. Should you require any further clarifications, please let us know, and we will gladly provide them promptly.\\n\\nBest wishes,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for the valuable and encouraging comments! Our point-by-point responses to the reviewer's mentioned concerns are provided as follows.\\n\\n> **W 1**: This leads to an unfair comparison, as in most cases, more data leads to better performance.\\n\\n**Response:** \\n\\nWe sincerely appreciate the detailed review comments. To ensure a fair comparison, we present the following experimental results. We downsampled MMevol to 0.8M and conducted a comparative experiment using LLaVA-Next. As shown in the table below, MMevol achieved an overall improvement of 2.7, demonstrating the effectiveness of the method.\\n\\n| Seed | IT | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | MIA | BLINK | MMSInst | AVG. |\\n| ------------- | ---- | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- | ----- | ------- | ---- |\\n| LLaVA-Next-8B | 0.8M | 43.9 | 31.5 | 44.6 | 69.9 | 52.3 | 41.7 | 60.1 | 65.1 | 43.5 | 25.6 | 47.8 |\\n| MMEvol-8B | 0.8M | 48.4 | 41.3 | 45.9 | 71.3 | 47.6 | 40.2 | 61.2 | 72.6 | 45.2 | 30.9 | 50.5 |\\n\\n> **W 2**: Is the model capable of finding out bad cases generated by itself? Further, I'm wondering about the fail rate and elimination rate of the proposed method.\\n\\n**Response:** \\n\\nTo investigate the reliability of the rewrites produced by GPT-4-o-mini, we conducted a manual evaluation of the data before and after the evolution process. Specifically, we first extracted 30 images of various types from the seed data to ensure diversity, keeping 5 relevant question-answer pairs for each image. 
Subsequently, we carried out the corresponding evolution in three different directions, ultimately obtaining 450 evolved question-answer pairs, which were then subjected to scoring and filtering. The results were distributed among five experts for manual evaluation of the accuracy of the model evolution and the scoring filter. The data is summarized in the table below. From the table, it is evident that the average success rate of evolution using the MLLM reaches about 90%, while the accuracy of the scoring filter reaches about 94%, indicating the reliability of MMEvol. Additionally, we provide detailed scoring cases in Figure 15, highlighted in red.\\n\\n| data id | expert | image categories | FP-Evol (0-5) | I-Evol (0-5) | CR-Evol (0-5) | I-Elim (0-15)(450) |\\n| ----------------- | ------ | ---------------------------------------------------------- | ------------- | ------------ | ------------- | ------------------ |\\n| 0,1,3,4,5,6 | 0 | LandMark,OCR,Human&Clothes,Traffic,Living room,Sport | 5,4,4,5,5,4 | 5,4,3,4,5,4 | 5,3,4,5,4,4 | 15,13,13,14,13,14 |\\n| 7,8,9,10,11,12 | 1 | Kitchen,Office supplies&Tools,Plants,Animal,Sport,LandMark | 5,5,4,5,4,4 | 5,4,5,5,4,4 | 5,5,4,4,5,4 | 14,15,13,15,14,13 |\\n| 13,14,15,16,17,18 | 2 | Foods,LandMark,OCR,Human&Clothes,Traffic,Sport | 4,4,3,5,4,5 | 5,4,4,4,4,5 | 4,5,5,4,5,5 | 14,14,15,13,14,15 |\\n| 19,20,21,22,23,24 | 3 | Foods,Sport,LandMark,Office supplies&Tools,Plants,Traffic | 3,4,5,5,5,4 | 3,4,5,5,5,5 | 5,5,5,5,5,5 | 13,15,14,15,15,15 |\\n| 25,26,27,28,29,30 | 4 | Animal,Sport,Traffic,LandMark,Sport,Office supplies&Tools | 4,5,5,5,5,5 | 4,5,5,5,4,5 | 5,5,3,5,5,5 | 14,15,14,15,14,15 |\\n| | | | 89.3% | 88.7% | 92% | 94.5% |\"}", "{\"comment\": \"Dear Reviewer h9M6,\\n\\nWe are truly grateful for the thoughtful review you provided. We have taken all your feedback into consideration and revised the paper accordingly. Could you possibly re-evaluate our submission given the updates we have made? 
Your further feedback would be greatly appreciated, and we are prepared to clarify any remaining points of confusion.\\n\\nMany thanks,\\n\\nAuthors\"}", "{\"comment\": \"Thanks for the valuable and encouraging comments! Our point-by-point responses to the reviewer's mentioned concerns are provided as follows.\\n\\n> **W 1**: Evaluation Limitation\\n\\n**Response:** \\n\\nThank you for your valuable suggestion. In order to draw comparisons with MIMIC-IT, we constructed a dataset of 3,000 evolutionary data points using MIMIC-IT and GPT4o-mini based on a seed set of 1,000 data points. We conducted comparative experiments with MMEvol, with the results presented in the table below. Under stringent conditions of fairness (seed data, evolutionary API, model architecture), MMEvol achieved an average lead of 6.4 points, demonstrating particularly significant superiority on the RealWorldQA task. This highlights the effectiveness of MMEvol.\\n\\n| Seed | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. |\\n| ------------- | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- |\\n| MIMIC-IT (3K) | 32.1 | 24.3 | 26.4 | 47.6 | 41.9 | 31.5 | 34.5 | 34.1 |\\n| MMEvol (3K) | 37.9 | 26.1 | 31.3 | 55.1 | 43.8 | 35.8 | 53.2 | 40.5 |\\n\\n> **W 2**: Absence of Failure Case Study\\n\\n**Response:** \\n\\nThank you very much for your detailed review. We have included the missing Failure Case Study in Figure 14 and highlighted it in red.\\n\\n> **W 3**: Technical Contribution\\n\\n**Response:** \\n\\nWe further elucidate the differences between MMInstruct, MIMIC-IT, and MMEvol from the perspectives of complexity and diversity.\\n\\n1. MMEvol is characterized by its elegance and simplicity, enabling the completion of an arbitrary number of task expansions using a unified prompt and straightforward image data. In contrast, MIMIC-IT and MMInstruct employ more complex pipelines and require a limited set of pre-defined tasks. 
For instance, MIMIC-IT necessitates both image and video data to accomplish six specific tasks. MMEvol transcends the limitations of pre-set tasks by generating new tasks, thus eliminating the need for intricate manual multi-task designs, making it more efficient and effective, with a higher degree of diversity.\\n\\n2. MMEvol continuously iterates through comparison and evolution based on existing questions and answers via an instruction-driven evolution approach, generating more complex tasks. Conversely, MIMIC-IT and MMInstruct create new data by repeatedly inputting new images into pre-defined tasks through rewriting. The former has a clear objective of increasing complexity, while the latter merely reprocesses input data within a limited task framework. This results in MMEvol achieving superior data complexity and quality with only 480K evolution data, while MIMIC-IT requires 2.8M.\\n\\n> **Q 1**: why does negative scaling happen at step 6k-7k? Is it related to certain instruction augmentation techniques?\\n\\n**Response:** \\n\\nThe issue of multi-task training conflicts exists within the framework of multimodal instruction fine-tuning [1]. The various capabilities of multimodal models do not enhance comprehensively without conflict. It is reasonable that negative scaling occurs between steps 6k and 7k. We did not employ any instruction enhancement techniques; rather, we simply present the results of our experiments.\\n\\n[1] Chen Wei, et al.\\\"LLaVA-MoLE: Sparse Mixture of LoRA Experts for Mitigating Data Conflicts in Instruction Finetuning MLLMs\\\" *arXiv preprint arXiv:2401.16160* (2024).\\n\\n> **Q 2**: Table 1: What are the number of instructions for each row?\\n\\n**Response:** \\n\\nFor each row, we employ 6K instructions for the ablation study. 
Here, we clarify that we utilized an equivalent amount of 6K data for each row.\\n\\n> **Q 3**: Table 2: What's Qwen2-7B baseline performance?\\n\\n**Response:** \\n\\nThank you for your invaluable suggestions. We have added the Qwen2-7B baseline trained on the seed data (highlighted in red) to Table 2 of the paper.\"}", "{\"metareview\": \"This paper introduces a framework to improve multimodal models by evolving image-text instruction data. It shows some performance improvements with less data. The approach is interesting and shows promising results. However, concerns were raised about the comparison with other methods, and there's not enough exploration of failure cases or scalability. At this stage, the paper lacks strong support for acceptance, and the authors are encouraged to revise their work, providing more clarity in their experiments.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers raised concerns about fairness in comparisons with other methods, the clarity of the framework's technical details, and the lack of failure case exploration. While the authors made revisions, including more experiments and clarifications, the overall improvements were marginal, and scalability concerns remained. These points contributed to the decision to reject the paper.\"}", "{\"summary\": \"This paper introduces a method to systematically evolve seed instruction fine-tuning data for VLMs to a larger scale and enhance vision-centric capabilities for fine-grained visual understanding, visual reasoning, and human-interactive capabilities. The evolved dataset shows significant improvement on a wide range of perception and reasoning datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problems this paper tries to address are spot on; current VLMs are limited in terms of complex visual reasoning, natural interaction with humans, etc. 
The method of evolving current instruction fine-tuning data to enhance these capabilities is an interesting direction for improving the model's capabilities, as shown in the benchmark results.\\n\\nThe authors mention that data and code will be released, and I think it'll be a good contribution to the community to set a higher baseline for LLaVA-style models and maybe even beyond.\", \"weaknesses\": \"My major concern is a lack of technical clarity, and a minor concern is insufficient evaluation.\\n\\nFirst, the prompt is clear in Fig 4 - Fig 7; can you explicitly describe what model, commercial API, or other method is used to generate the evolution instruction data from the prompt templates?\\n\\nSecond, it is mentioned that pseudo function calling is used for visual reasoning evolution; can you describe the setup of the function call, the model or template used to generate the function call, and how it is integrated into the visual reasoning evolution process?\\n\\nThird, it is mentioned that an MLLM is used for rewriting; can you describe details on what MLLM is used, how it is used (the prompt used), and an evaluation or ablation of the significance of this rewriting step?\\n\\nOn the evaluation side, I think a more comprehensive evaluation of the model's different capabilities and some more ablations would provide more insights into the method and data. \\n\\nFirst, the seed dataset and one of the vision-centric capabilities is OCR, yet there are few OCR-related benchmark results; more results on OCRBench, ChartQA, DocVQA, and TextVQA would be very insightful. \\n\\nSecond, more ablation on the ratio of the three evolution methods in each round, how failed evolutions are identified and eliminated, what's the success/fail ratio for each round, what the model quality gain is for each round, etc., would be informative. 
\\n\\nThird, in comparison with other methods, InternVL2-8b (released 2024/07) and Qwen2-VL-7b (released 2024/08) should be included in Table 2 under the weight open-source section.\", \"questions\": \"Addressing the questions w.r.t. technical clarity and evaluation in the weakness section would impact my final judgement of the paper. The following questions are nice-to-have discussions which might not impact the final score.\\n\\nEnhancing vision-centric capabilities in fine-grained objects, CoT, and interaction seems to be effective in LLaVA-style VLMs; do you think it is because the pretrained ViT / LLM lacks these capabilities in the first place, or have the pretrained models already learned enough knowledge but somehow forget it with poor instruction data during fine-tuning? Do you think this data will help other pretrained VLMs trained with tens of billions of image/text tokens?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer pN7Z,\\n\\nFirstly, thank you for your meticulous review and constructive feedback. We have addressed all the points you raised and have clarified any ambiguities in the revised manuscript. As the deadline for discussions nears, we kindly ask if you could review our changes and provide any further comments. Your inputs are invaluable, and we're committed to ensuring the manuscript meets your standards.\\n\\nThank you for your continued support.\\n\\nBest wishes,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer h9M6:\\n\\nWe hope this message finds you well. We deeply appreciate your thoughtful feedback and the attention you've given to our manuscript. All concerns have been thoroughly addressed, and we wish to invite you to review the manuscript once more. With the deadline approaching, we would be grateful if you could confirm that all uncertainties have been resolved. 
We are ready to assist you with any further clarifications.\\n\\nThanks for your cooperation.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper addresses the lack of high-quality data in building stronger multimodal large language models and proposes a solution through the development of a multimodal instruction data evolution framework, MMEvol. The framework iteratively improves data quality across three evolution stages: fine-grained perception, cognitive reasoning, and interactive evolution. The authors demonstrate the effectiveness of the framework by refining a new dataset called SEED-163K and training a large multimodal model on this refined dataset. Evaluation results across 13 benchmarks further validate the effectiveness of the proposed framework.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper addresses the issue of data scarcity in instruction datasets for training multimodal large language models, presenting a well-motivated problem.\", \"The authors designed a sophisticated pipeline to enhance data quality, featuring three evolution stages and an instruction elimination stage.\", \"The authors evaluate the effectiveness of the three proposed evolution stages, as shown in Table 1.\", \"The performance of **MMEvol** MLLMs in Table 2 appears promising.\"], \"weaknesses\": [\"The entire data improvement framework relies on closed-source frontier models like GPT-4, which suggests a form of knowledge distillation from these models but may limit the ability to scale to larger datasets. 
Additionally, the strong dependence on models like GPT-4 reduces the framework's interpretability.\", \"The authors do not provide an analysis of the reliability of using GPT-4 as a data rewriter.\", \"The paper lacks a comparison of different prompts used in each evolution stage, leaving the impact of prompt templates on data refinement unclear.\", \"When comparing the results of **MMEvol-8B** with **Cambrian-1 8B** in Table 2, although **MMEvol-8B** shows overall improvements, it exhibits significant performance declines on key benchmarks like **MMMU**, **AI2D**, and **MMStar**.\"], \"questions\": [\"How reliable is GPT-4 as a data rewriter? How can the quality of the rewritten data be evaluated?\", \"Is the rewritten output sensitive to changes in the prompt?\", \"What is the rationale behind the current prompt design? Have other prompt variations been compared?\", \"Why does **MMEvol-8B** perform poorly on **MMMU**?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer h9M6,\\n\\nWe want to express our gratitude for your thorough review and helpful comments on our manuscript. We've diligently worked on incorporating your suggestions and believe the revised version is much stronger. The deadline for discussions is approaching, and we would appreciate your feedback on our revisions at your earliest convenience. Your thoughtful evaluation is crucial to us.\\n\\nThank you for your understanding and assistance.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for clarifying the technical details and conducting additional experiments. I've read all reviews and comment threads; overall I think this is a solid work that can contribute to the community, so I update my evaluation score to 6. 
I recommend the authors add these ablations (maybe run larger-scale experiments with Qwen2VL to get a SOTA model/data) to the paper, and data/code could be open-sourced as promised to further benefit this field.\\n\\nI'd also like to comment on another reviewer's remark about the upper bound of distillation from stronger VLMs. If we look at current LLM training (distilling knowledge from the entire noisy web), synthetic data has been shown to be very effective. Personally, I think we should explore more synthetic data generation methods for VLMs as well.\"}", "{\"comment\": \"Dear Reviewer u5c1,\\n\\nWe genuinely value the time and effort you have dedicated to reviewing our manuscript. We have responded comprehensively to your comments and made the necessary adjustments. As the deadline for discussion nears, we kindly ask if you could review our updated paper. We are eager to address any additional questions that may arise.\\n\\nThank you very much for your continued support and assistance.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"summary\": \"In this paper, the authors propose MMEvol, a novel multimodal instruction data evolution framework that augments existing multi-modal training data with better diversity and complexity. The experimental results show that the proposed method works well and helps MLLMs achieve better performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Please refer to Questions\", \"weaknesses\": \"Please refer to Questions\", \"questions\": \"### Strength\\n1. The paper is well-written and easy to follow\\n2. The proposed idea is novel and seems to work well\\n\\n### Weakness\\n1. My main concern is the comparison fairness in Table 2. The LLaVA-Next baseline uses the seed-set (163K data), while MMEvol uses an augmented set with more data (447K additional data). This leads to an unfair comparison, as in most cases, more data leads to better performance. This problem also exists in Table 1. 
The unfair comparison hinders the understanding of the actual effectiveness of the proposed method, and I think it would be easy to make it fairer, as the seed-set is sampled from existing open-source datasets and can easily be scaled up to a size similar to the augmented one.\\n\\n2. Both the evolution and elimination are realized by the same model (GPT4o-mini). Is the model capable of finding bad cases generated by itself? Further, I'm wondering about the fail rate and elimination rate of the proposed method.\\n\\n3. Following 2, the proposed method evolves the instructions multiple times; will this lead to an error accumulation problem?\\n\\n4. The paper focuses on improving the training data quality, while the provided examples are quite limited. More data samples would help better evaluate the data quality.\", \"i_like_the_proposed_idea_and_give_it_a_6\": \"marginally above the acceptance threshold. But there are still some unclear problems I mentioned above. I will adjust the final score based on the rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer 1KdT:\\n\\nWe greatly appreciate the time and effort you dedicated to reviewing our paper. We have carefully addressed all your insightful suggestions and clarified any ambiguous points to improve our work. As the deadline for the discussion is nearing, could you kindly reconsider your evaluation based on the revised version? We are open to any further queries you might have and are eager to provide any additional information needed.\\n\\nThank you for your understanding and support.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer pN7Z,\\n\\nYour insights have been invaluable in refining our work, and we have diligently addressed each of your comments. 
As we approach the discussion deadline, we kindly ask if you could reassess our revised manuscript. We are more than willing to engage in further dialogue to ensure all your concerns are fully resolved.\\n\\nThank you for your attention to this matter.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"comment\": [\"We express our gratitude for the insightful comments and constructive feedback from the reviewers on our manuscript. We are pleased to have received positive evaluations from the majority of the reviewers. Moreover, we are delighted to learn that the reviewers recognized the significance of the research problem and the novelty of the core idea (Reviewers 1KdT, pN7Z, and h9M6), as well as the convincing nature of the experiments (Reviewers 1KdT, pN7Z, 8Cfc, h9M6, and u5c1). Based on the reviews, we provide both a general response to the points raised by multiple reviewers and individual responses below to address each reviewer's concerns.\", \"1. Regarding the questions about the experiments, we have taken the following actions:\", \"For Reviewers 1KdT, pN7Z, 8Cfc, h9M6, and u5c1, we have either highlighted the locations of the required experiments corresponding to their comments in our paper or added the pertinent experiments accordingly.\", \"For Reviewer 1KdT, we have provided an ablation study of the significance of this rewriting step.\", \"For Reviewer 1KdT, we have provided more results on OCRBench.\", \"For Reviewer 1KdT, we have included ablation experiments with different ratios and information on the elimination rates after three rounds of evolution.\", \"For Reviewers 8Cfc and h9M6, we have conducted ablation experiments on different MLLM evolutions, demonstrating the robustness and scalability of our method.\", \"For Reviewer 8Cfc, we have provided ablation experiments on different prompt evolutions, showcasing the contribution of our prompt design.\", \"For Reviewers 8Cfc and pN7Z, we have supplied additional evaluations showing the consistency of 
expert evaluations and MLLM evolution scores, further illustrating the reliability of our method.\", \"For Reviewers pN7Z and h9M6, we conducted fair comparison experiments under equivalent architecture and data conditions.\", \"For Reviewer u5c1, we have provided comparative experimental results between MIMIC-IT and MMEvol, further elucidating the effectiveness and core contributions of our approach.\", \"For Reviewer u5c1, we present ablation results on seed data quality, further demonstrating the efficiency and robustness of our method.\", \"2. We have addressed the questions about the idea and technical details as follows:\", \"For Reviewer 1KdT, we further explained our technical details and added corresponding case explanations, providing insights regarding data quality and training methods.\", \"For Reviewer 8Cfc, we elaborated on the reasons for MMEvol's limited improvements on certain key benchmarks.\", \"For Reviewer pN7Z, we further analyzed the potential impact of cumulative error and added more visual cases as suggested.\", \"For Reviewer h9M6, we further explained MMEvol's significant potential under fair comparison conditions, and discussed what types of data have better evolutionary potential.\", \"For Reviewer u5c1, we further explained the differences between MMEvol and previous methods regarding data diversity and complexity, and method scalability, providing experimental evidence for the efficiency and technical contributions of our approach.\", \"For Reviewer u5c1, we supplemented experiments related to the Qwen2 baseline and further explained technical details.\", \"3. Missing reference:\", \"For Reviewer 1KdT, we have included the performance data for InternVL2-8b (released 2024/07) and Qwen2-VL-7b (released 2024/08) in the related work section in the revised draft.\", \"We have also revised the draft according to all the reviewers' suggestions, with the revisions highlighted in red. 
We sincerely thank all the reviewers for their constructive suggestions. Please feel free to let us know if further details or explanations would be useful.\", \"Yours sincerely,\", \"Authors of #643\"]}", "{\"comment\": \"Thanks for your professional and careful review. We respond to your concerns or questions as follows.\\n\\n> **W 1**: First, the prompt is clear in Fig 4 - Fig 7, can you explictly describe what model, commercial API or other method is used to generate the evolutoin instruction data from the prompt templates?\\n\\n**Response:** \\n\\nWe utilized GPT-4o-mini, and as mentioned in Appendix C and the Limitation Section, we will include a clearer description in the revised version of the main paper.\\n\\n> **W 2**: It is mentioned pseudo function calling is used for visual reasoning evolution, can you describe the setup of function call, the model or template used to generate the function call, and how it is integrataed to the visual reason evolution process?\\n\\n**Response:** \\n\\nWe have provided the following example regarding Figure 3 of the paper and offered a further explanation of the evolution of cognitive reasoning.\\n\\n**Seed Sample:**\\n\\n```json\\n{\\n \\\"objects\\\": [\\\"window\\\", \\\"couch\\\", \\\"vase\\\", ...],\\n \\\"skills\\\": [\\\"Grounding Ability\\\", \\\"Context Understanding Ability\\\", ...],\\n \\\"format\\\": \\\"Normal\\\",\\n \\\"question\\\": \\\"Which is closer to the window, the couch or the vase?\\\",\\n \\\"steps\\\": [],\\n \\\"answer\\\": \\\"The couch.\\\"\\n}\\n```\\n\\n**Evolved Sample:**\\n\\n```json\\n{\\n \\\"objects\\\": [\\\"window\\\", \\\"couch\\\", \\\"vase\\\", ...],\\n \\\"skills\\\": [\\\"Grounding Ability\\\", \\\"Context Understanding Ability\\\", ...],\\n \\\"format\\\": \\\"Complex reasoning\\\",\\n \\\"question\\\": \\\"Which is closer to the window, the couch or the vase?\\\",\\n \\\"steps\\\": [\\n {\\n \\\"manipulation\\\": \\\"grounding_1(window) -> bbx_1\\\",\\n \\\"description\\\": 
\\\"Locate the window in the room and return its bounding box as bbx_1.\\\"\\n },\\n {\\n \\\"manipulation\\\": \\\"grounding_2(couch) -> bbx_2\\\",\\n \\\"description\\\": \\\"Locate the couch in the room and return its bounding box as bbx_2.\\\"\\n },\\n {\\n \\\"manipulation\\\": \\\"grounding_3(vase) -> bbx_3\\\",\\n \\\"description\\\": \\\"Locate the vase in the room and return its bounding box as bbx_3.\\\"\\n },\\n {\\n \\\"manipulation\\\": \\\"Calculate([window, couch/vase]) -> res_i\\\",\\n \\\"description\\\": \\\"Calculate the spatial distance between the window and the couch/vase.\\\"\\n }\\n ],\\n \\\"answer\\\": \\\"First, based on the detected bounding boxes of the window [0.2, 0.23, 0.57, 0.4] and the couch [0.17, 0.43, 0.83, 0.79], it can be inferred that the couch is positioned below the window. Further, according to the location of the vase [0.5, 0.51, 0.58, 0.72], it is located near the center of the room. Therefore, the couch is closer to the window.\\\"\\n}\\n```\\n\\nIn this example, we instruct the model to generate a series of reasoning steps (the \\\"steps\\\" field) using pseudo function calls. These steps simulate operations such as grounding objects and calculating spatial relationships. By incorporating these steps into the evolution process, we enhance the model's ability to perform complex visual reasoning. It is important to note that we achieve both processes simultaneously by incorporating requirements in the prompt, which ultimately yields more complex instructions and enhances the model's reasoning capabilities.\\n\\n> **W 3**: Third, it is mentioned MLLM is used for rewriting, can you describe details on what MLLM is used, how it is used (prompt used), evalution or abalation of the significance of this rewriting step?\\n\\n**Response:** \\n\\nKindly refer to W1&W2 and the corresponding replies. 
We utilized the GPT-4o-mini model and formatted the instruction data directly based on the prompts, which were then fed into the model to generate and parse the evolved data. To investigate the impact of removing the chain of reasoning on inductive rewriting, we also conducted the following ablation experiments on 3K evolved data.\\n\\n| CR-Evol | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. |\\n| -------------- | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- |\\n| w/o rewriting | 37.2 | 25.4 | 30.1 | 54.7 | 43.6 | 34.2 | 51.3 | 39.5 |\\n| w/ rewriting | 37.9 | 26.1 | 31.3 | 55.1 | 43.8 | 35.8 | 53.2 | 40.5 |\\n\\nAs we can see from the table, the induction of reasoning chains and rewriting requirements into the answer is crucial.\"}
Our results demonstrate that its performance is comparable to that of the latter, while offering lower costs (**$15K vs. $600**). Additionally, the GPT-4o mini's performance aligns closely with established high-performance open-source alternatives, making it a favorable choice for our evolution. We present the results of 3K data evolved using the open-source model Qwen2VL 72B, as illustrated in the table below.\\n\\n| Data | MMStar | MathVista$^M$ | MME$^C$ | AI2D | HallBench | MMMU$^V$ | RWQA | AVG. |\\n| -------------- | ------ | ------------- | ------- | ---- | --------- | -------- | ---- | ---- |\\n| GPT4o-mini-3K | 37.9 | 26.1 | 31.3 | 55.1 | 43.8 | 35.8 | 53.2 | 40.5 |\\n| Qwen2VL-72B-3K | 39.1 | 27.9 | 33.1 | 57.8 | 46.4 | 36.9 | 46.9 | 41.2 |\\n\\nCompared to GPT-4o-mini, utilizing the more powerful open-source Qwen2VL 72B yields superior results, demonstrating the scalability and practicality of our approach.\\n\\n> **Q1&W 2**: The authors do not provide an analysis of the reliability of using GPT-4 as a data rewriter. How reliable is GPT-4 as a data rewriter? How can the quality of the rewritten data be evaluated?\\n\\n**Response:** \\n\\nTo investigate the reliability of the rewrites produced by GPT-4o-mini, we conducted a manual evaluation of the data before and after the evolution process. Specifically, we first extracted 30 images of various types from the seed data to ensure diversity, keeping 5 relevant question-answer pairs for each image. Subsequently, we carried out the corresponding evolution in three different directions, ultimately obtaining 450 evolved question-answer pairs, which were then subjected to scoring and filtering. The results were distributed among five experts for manual evaluation of the accuracy of the model evolution and the scoring filter. The data is summarized in the table below. 
From the table, it is evident that the average success rate of evolution using MLLM can reach 90%, while the accuracy of the scoring filter can achieve 94%, indicating the reliability of MMEvol. Additionally, we provide detailed scoring cases in Figure 15, highlighted in red.\\n\\n| data id | expert | image categories | FP-Evol (0-5) | I-Evol (0-5) | CR-Evol (0-5) | I-Elim (0-15)(450) |\\n| ----------------: | ------ | ---------------------------------------------------------- | ------------- | ------------ | ------------- | ------------------ |\\n| 0,1,3,4,5,6 | 0 | LandMark,OCR,Human&Clothes,Traffic,Living room,Sport | 5,4,4,5,5,4 | 5,4,3,4,5,4 | 5,3,4,5,4,4 | 15,13,13,14,13,14 |\\n| 7,8,9,10,11,12 | 1 | Kitchen,Office supplies&Tools,Plants,Animal,Sport,LandMark | 5,5,4,5,4,4 | 5,4,5,5,4,4 | 5,5,4,4,5,4 | 14,15,13,15,14,13 |\\n| 13,14,15,16,17,18 | 2 | Foods,LandMark,OCR,Human&Clothes,Traffic,Sport | 4,4,3,5,4,5 | 5,4,4,4,4,5 | 4,5,5,4,5,5 | 14,14,15,13,14,15 |\\n| 19,20,21,22,23,24 | 3 | Foods,Sport,LandMark,Office supplies&Tools,Plants,Traffic | 3,4,5,5,5,4 | 3,4,5,5,5,5 | 5,5,5,5,5,5 | 13,15,14,15,15,15 |\\n| 25,26,27,28,29,30 | 4 | Animal,Sport,Traffic,Landmark,Sport,Office supplies&Tools | 4,5,5,5,5,5 | 4,5,5,5,4,5 | 5,5,3,5,5,5 | 14,15,14,15,14,15 |\\n| | | | 89.3% | 88.7% | 92% | 94.5% |
E0UsEIRBQ8
Semi-Supervised Underwater Object Detection with Image Enhancement Guided by Attribute-based Data Distribution
[ "Wenzhang Zhou", "caixia xia", "Baojie Fan", "Leo Shawn", "Xiangzhu Meng", "Jiandong Tian" ]
Semi-supervised underwater object detection aims to improve the performance of detectors on unlabeled underwater images by leveraging knowledge from labeled ones. However, existing methods often overlook the distribution differences between labeled and unlabeled underwater images. In this paper, we propose a novel underwater image enhancement method guided by attribute-based data distribution (UIEG+), which focuses on reducing the discrepancies between enhanced and original unlabeled images across different attributes, thereby effectively addressing the challenges in semi-supervised underwater object detection. Specifically, we explore an underwater image enhancement strategy based on two attributes: color and scale distributions. For the color attribute, we construct a 3-dimensional grid memory, where each grid cell represents a color subspace and records the number of samples in that subspace. Similarly, for the scale attribute, we design a 1-dimensional vector memory that dynamically stores the number of samples in each scale subspace. Subsequently, we propose an effective sampling method to derive parameters for color and scale transformations based on the aforementioned distribution analysis, increasing the likelihood of transformations in low-distribution regions. To evaluate its effectiveness and superiority, extensive semi-supervised underwater object detection experiments on multiple datasets have been conducted by integrating UIEG+ into existing semi-supervised object detection frameworks. The code will be released.
[ "Semi-supervised learning; Underwater object detection" ]
https://openreview.net/pdf?id=E0UsEIRBQ8
https://openreview.net/forum?id=E0UsEIRBQ8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "kfk73zybYU", "hmPXi6quap", "Pd0m9mEttO", "OTQSyVod84", "425D7U7KMz" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732610637135, 1731078875537, 1730680618984, 1730816076044, 1729873190758 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2531/Authors" ], [ "ICLR.cc/2025/Conference/Submission2531/Reviewer_WyZx" ], [ "ICLR.cc/2025/Conference/Submission2531/Reviewer_PUQb" ], [ "ICLR.cc/2025/Conference/Submission2531/Reviewer_AXLT" ], [ "ICLR.cc/2025/Conference/Submission2531/Reviewer_ppxz" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"In this work, the authors propose a novel underwater image enhancement method guided by attribute-based data distribution, which focuses on reducing the discrepancies between enhanced and original unlabeled images across different attributes, thereby effectively addressing the challenges in semi-supervised underwater object detection. Experimental evaluations were performed on multiple datasets, and the experimental results look good.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper proposed a novel underwater image enhancement method guided by attribute-based data distribution (UIEG+), which aims to reduce distributional differences between enhanced and unlabeled underwater images by analyzing the distribution of unlabeled images in terms of color and scale attributes.\\n2. This paper incorporate the proposed UIEG+ into existing SSOD frameworks, thereby effectively addressing the challenges of semi-supervised underwater object detection.\", \"weaknesses\": \"1. The experiment is not sufficient. The authors have discussed some recent related work in 2024, but did not compared with them.\\n2. The contribution is somewhat limited. 
A novel underwater image enhancement method guided by attribute-based data distribution (UIEG+) is proposed in the detection model. If using recent SOTA image enhancement instead in the detection model, will it improve the performance?\\n3. The ablation experiments are inadequate. For example, only CTransfor and STransfor components on URPC are tested.\", \"questions\": \"1. More recent work should be compared to verify the superiority of the proposed model.\\n2. The authors may discuss the enhancement part, i.e., whether the SOTA enhancement methods can improve the performance or not. This can show the effectiveness of the contribution.\\n3. Please add more ablation experiments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a semi-supervised underwater object detection method (UIEG+) that tries to address distribution differences between labeled and unlabeled images. The authors use 3D color memory and 1D scale memory to track image distributions, guiding transformations to ensure better detection.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Semi-supervised learning is important for underwater imaging problems due to the high cost of annotation. UIEG+ is compatible with existing semi-supervised object detection (SSOD) frameworks and it introduces a unique method by considering both color and scale distributions.\", \"weaknesses\": \"Focusing on specific attributes like color and scale could be very limiting. The ablation study is good, but I'd prefer to see the mAP with only color transform.\\n\\nThe 3D color memory and 1D scale memory approach might introduce additional computational overhead. It might be better to include some information about time-complexity.\\n\\nIt's not completely clear whether an image enhancement framework is needed for object detection. 
The mAP doesn't show consistent improvement over existing methods.\", \"questions\": \"It's not completely clear whether an image enhancement framework is needed for underwater object detection or whether an end-to-end method would be better. It might be better to provide more detailed comparison with semi-supervised or self-supervised object detection methods.\\n\\nIt might be better to include some discussion about other attributes that could improve performance. More specifically, edge/shape/frequency-based attributes might be useful.\\n\\nThe results presented in the paper are quite close, making it difficult to assess the statistical significance of the improvements. Including error bars or confidence intervals would provide valuable insight into the robustness and reliability of these results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a paper on an image enhancement method for semi-supervised object detection in the underwater image domain. The proposed approach follows a teacher-student architecture, with the teacher initialized on labeled data and updated via EMA from the student, which is in turn trained on both labeled and pseudo-labeled images, the latter subject to the said augmentation approach. Experimental results on two datasets show that the method achieves performance at least on par with the state of the art.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The paper is well-written and easy to follow.\\n\\n2) The proposed approach is able to achieve good results using a simple enhancement procedure.\", \"weaknesses\": \"1) The authors claim that the proposed approach takes into account the appearance distribution of unlabeled images, unlike other approaches from the state of the art. 
These approaches don\\u2019t take specifically into account the unlabeled distribution, but process all images in a uniform way; hence, if there is a main color/scale mode in the distribution, it is the one that will be mostly represented by the state-of-the-art approaches. However, this seems to achieve the same result as what the authors are doing, i.e., explicitly select the mode of the distribution.\\n\\n2) Results lack confidence intervals or standard deviations, making it hard to assess the statistical significance of AP/mAP differences.\\n\\n3) Overall, the methodological novelty of the approach is limited. The whole framework follows an established paradigm, and the enhancement approach is really very simple (besides my notes in weakness 1).\", \"questions\": \"1) How is weak augmentation performed?\\n\\n2) The color and scale augmentations seem to always choose the most frequent bin in the corresponding memories. Why not employ a weighted sampling?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed a dataset-conditioned enhancement for underwater semi-supervised learning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The writing is clear.\", \"weaknesses\": \"The main weakness is that the novelty is very limited. This paper proposed a dataset-conditioned enhancement for underwater semi-supervised object detection. However, the solution is calculating the average colour/scale parameter from images and use it to augment training data. It may improve the performance, but it's trying to overfit the dataset. As the evaluation dataset is not a big dataset, overfitting may slightly improve the performance. The authors should prove the generalisation of this method. What if the evaluation data contains many objects in different colours and scales? 
Real underwater scenes are highly diverse, and the URPC data from the Zhangzi Island is heavily biased.\\n\\nThe performance is not good enough, for example, in Tab.1, Ours (PseCo) only improves the baseline by 0.4%.\\n\\nThus this paper does not meet ICLR's standards.\", \"questions\": \"1. Consistent-Teacher is the best baseline in your paper, why don't you integrate your augmentation with it?\\n2. I would suggest solving the abovementioned question/weakness to make this paper solid before submission.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
E040QmNETN
MuVi: Video-to-Music Generation with Semantic Alignment and Rhythmic Synchronization
[ "Ruiqi Li", "Siqi Zheng", "Xize Cheng", "Ziang Zhang", "Shengpeng Ji", "Zhou Zhao" ]
Generating music that aligns with the visual content of a video has been a challenging task, as it requires a deep understanding of visual semantics and involves generating music whose melody, rhythm, and dynamics harmonize with the visual narratives. This paper presents MuVi, a novel framework that effectively addresses these challenges to enhance the cohesion and immersive experience of audio-visual content. MuVi analyzes video content through a specially designed visual adaptor to extract contextually and temporally relevant features. These features are used to generate music that not only matches the video’s mood and theme but also its rhythm and pacing. We also introduce a contrastive music-visual pre-training scheme to ensure synchronization, based on the periodicity nature of music phrases. In addition, we demonstrate that our flow-matching-based music generator has in-context learning ability, allowing us to control the style and genre of the generated music. Experimental results show that MuVi demonstrates superior performance in both audio quality and temporal synchronization. The generated music video samples are available at muvi-v2m.github.io.
[ "Video-to-music generation", "music generation" ]
Reject
https://openreview.net/pdf?id=E040QmNETN
https://openreview.net/forum?id=E040QmNETN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yT8aSP7JOj", "xjzaxgqDdF", "wy9COw3MKl", "uWpxvyqaAc", "uKmcAxWJcX", "sQdQbv7LLo", "sODmbvwYOT", "pFwGBtgcKD", "mtEW15vgha", "mU0ISDBXE5", "l2HQMvVgVC", "kJUcM7iS3b", "jgIKPgTiQl", "iV5unsXfEd", "ej30Fp6k0T", "e1mlsogTA5", "cV8r4B5pwP", "b9MXXQEF57", "Z7dn4aoP4a", "UBdBgngGmw", "T6MQOF3EeO", "SUSxtueAZu", "Q6mJY3JOET", "PXHrkoIBNV", "P4C5SzFmdy", "JGn5A2TtbV", "IT7pTy9YN7", "FZ0YpBZbmc", "EXMyIx8Umu", "9PqbdkTZSs", "3Z4fJu1U7w", "39zHv5fijD" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment" ], "note_created": [ 1732635913508, 1732215153941, 1732215248081, 1732418808278, 1732214946656, 1732547669708, 1732524248615, 1732547654621, 1732547675843, 1732215598511, 1732216192525, 1732570643621, 1732579142462, 1733165637030, 1732524219038, 1734771301433, 1732547748546, 1732214920283, 1732216282484, 1732547775268, 1732570681483, 1732547788984, 1732540033292, 1730715264189, 1732216027620, 1730718128828, 1732215937381, 1732579149849, 1733165531768, 1730661059717, 1737524075117, 1732216080536 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_JLkN" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ 
"ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Area_Chair_qJ6U" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_cqcQ" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_cqcQ" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_JLkN" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ], [ "ICLR.cc/2025/Conference/Submission10758/Reviewer_yVkt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10758/Authors" ] ], "structured_content_str": [ "{\"title\": \"I will hold my rating score\", \"comment\": \"My main concern still lies on the generalizability of the proposed method (the rest concerns has been resolved). I think the proposed synchronization only works on monophonic like simple instrumental sounds generation now rather than complex multitrack music generation. And, there might be an extensive extra work to apply the proposed method for complex multitrack music generation. 
So, I think the authors should narrow down the scope of the work to be \\\"certain types of (instrumental) sounds\\\" generation rather than \\\"music\\\" generation.\"}", "{\"title\": \"Response to Reviewer cqcQ (Part 1/N)\", \"comment\": \"We thank the reviewer for the constructive and professional review and we are sorry about the unsupported claims.\\n\\n**[About in-context learning]**\\n\\n> The paper includes an additional section on the in-context learning (ICL) capability of music generation models, which I feel deviates from the main theme of the proposed task. From the limited experiments presented, this section does not add to the paper\\u2019s persuasiveness, nor does it convincingly demonstrate actual ICL capabilities. I suggest removing this contribution from the paper.\\n\\nThe reason for including this paragraph is to demonstrate the potential controllability of the method. Without incorporating specific training strategies for ICL, our method can only generate music with random instruments and types, although the rhythm and mood of the music are aligned with the video. We believe that the controllability of music generation is necessary but not primary, hence we included only this paragraph for explanation. Also, this section highlights the capability to generate music in any style, demonstrating the generalizability on the generation side. More intuitive effects of ICL capabilities can be seen on the demo page. \\n\\n**[About semantic synchronisation]**\\n\\n> The discussion on semantic synchronization is lacking. The paper initially introduces an example: \\u201cthe style, melody, and emotions of the music will evolve in harmony with the video content,\\u201d and Figure 2 also touches on this aspect. The authors attempt to measure semantic synchronisation using the SIM metric and imply that Section 4.2 will discuss this metric in detail. However, I found no mention of this metric in Section 4.2. 
Thus, I believe the discussion on semantic synchronisation is relatively insufficient. I recommend further analysis of the model\\u2019s performance in semantic synchronisation, perhaps by including additional case studies.\\n\\nThank you for raising this issue. Although we agree with reviewer cqcQ\\u2019s point that the discussion on semantic synchronization is not sufficiently thorough, we need to first clarify some statements in the paper and address any potential misunderstandings. In line 353, the statement \\\"which is discussed in detail in Section 4.2\\\" was originally used to refer to the choice of using VideoMAE V2 + Softmax combination as the visual encoder in the SIM measure, not the SIM measure itself. We believe it is necessary to specify what exactly the visual encoder in this SIM measure is. Directly stating our choice of the encoder in this paragraph might seem abrupt, so we have reserved the justification for this choice for Section 4.2, where we demonstrate that this choice is superior in the final generation results, in the first paragraph of Section 4.2. This is our intention, but indeed, the phrasing can be misleading. We will revise this section accordingly.\\n\\nNevertheless, the discussion on semantic synchronization is indeed relatively insufficient, although we indeed mentioned the SIM metric in line 430 to show the poor performance of M$^2$UGen in terms of synchronization. Due to the lack of mature algorithms for fine-grained music emotion recognition, it is challenging to use objective metrics to measure the semantic alignment and synchronization between music and video, especially for emotion and content transitions. As far as we know, SIM is the only objective metric currently available to us. However, during the subjective evaluation phase, we required the raters to focus more on fine-grained alignment (such as emotion transitions) in the generated music, and the results indicate a superior performance of our model. 
For an intuitive impression, the demo samples also demonstrate fine-grained and rapid-response emotion and content transition.\"}", "{\"title\": \"Response to Reviewer cqcQ (Part 2/N)\", \"comment\": \"**[About music training data]**\\n\\n> Training data concerns for the music generation model. I am curious about one particular issue: since the Jamendo dataset does not include semantic variations, requiring it to generate music that aligns with video semantic changes is effectively an out-of-distribution (OOD) problem. How do the authors plan to address this point?\\n\\nThe Jamendo dataset is not used to train the model to generate music that aligns with video. Basically, we utilize two sets of training data: the audio-only music data, and the audio-visual paired data. The audio-only data is used to pre-train the unconditional DiT to acquire a generator that has a certain general capability to generate music, or you can call it a parameter warm-up. The audio-visual paired data is used to finetune the DiT combined with the visual adaptor to empower the model to generate semantically and rhythmically aligned music with visual conditions. The reasons for pre-training are twofold: 1) we want to show that our method can be generalized to any simple music generator, because the generator is not the main focus of this paper; 2) without pre-training, the model would have been exposed to very limited data (about 280 hours of music), which would greatly restrict its generalization ability and the diversity of the generated samples. Specifically, if we omit the pre-training procedure, the performance drops dramatically, as shown in the first row of Table 5.
However, I would like to note that M2UGen is a weak baseline and performs poorly on the video-to-music (V2M) task. While not a strict requirement, I noticed that VidMuse has released its code. Considering that this baseline is also prominently discussed in the paper, I encourage the authors to include a comparison with it.\\n\\nThank you for your suggestions. VidMuse released its code on October 14th, which was after the paper submission deadline, and our own implementation of VidMuse turned out to perform much worse than our method and M$^2$UGen. Hence we did not include their results in our paper. Nevertheless, during the rebuttal phase, we managed to compare with VidMuse based on the newly released checkpoints, and the results are listed below. From the results, it can be seen that VidMuse is outperformed by our model in both terms of audio quality and synchronization. Subjectively, the music VidMuse generates contains noise and artifacts, and the rhythm is not synchronized with the video.\\n\\n| Methods | FAD | KL | IS | FD | BCS | BHS | SIM |\\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n| VidMuse | 8.13|4.88|1.50|43.82|81.35 |36.12|3.30 |\\n| ours | 4.28|3.52|1.63|28.15|104.17|49.23|19.18|\\n\\n---\\n\\nOnce again, thank you for your effort in reviewing our work and your valuable comments, which are of great significance to improving our work.\"}", "{\"title\": \"Looking Forward to Further Feedback\", \"comment\": \"Dear Reviewer yVkt,\\n\\nThank you again for your great efforts and valuable comments.\\n\\nWe have tried to address the main concerns you raised in the review and made huge efforts such as additional experiments. As the end of the rebuttal phase is approaching, we are looking forward to hearing your feedback regarding our answers. 
We are always happy to have a further discussion and answer more questions (if any).\\n\\nThanks in advance,\\n\\nSubmission10758 Authors\"}", "{\"title\": \"Response to Reviewer JLkN (Part 2/N)\", \"comment\": \"**[About limited video contents]**\\n\\n> Also, if we see Appendix, the collected video mostly includes Disney, Tom and Jerry, and Silent Films. I think this fact should also be described in the paper well since the proposed method is valid only on these kind of video contents for now. \\n\\nThank you for raising this issue. Due to the high production costs of such artistic works, their numbers are very scarce. However, we still managed to find a considerable number of videos, covering a wide range from animation to live-action videos. By combining a video encoder pre-trained on general video data with an adaptor specially designed to create an information bottleneck, we believe this method has a certain degree of general applicability, although this is not the main focus of this paper. We also demonstrated the capability of this method to handle various video content on the demo page.\\n\\n**[About voice interference]**\\n\\n> The authors mentioned that for the video that contains vocal singing, they excluded the vocal part through source separation technique, however, if there exist some narration or voice of the actors, the proposed technique will not be valid unless they delete speech parts.\\n\\nWe apologize for the unclearness. In lines 328-329, the \\\"vocals\\\" mentioned in the text include both singing voice and speech. In fact, the tool we used [2] will remove all human voices in the audio. \\n\\n---\\n\\nOnce again, thank you for your effort in reviewing our work and your acknowledgment.\\n\\n**References**\\n\\n[1] Copet, Jade, et al. Simple and controllable music generation. Advances in Neural Information Processing Systems 36 (2024).\\n\\n[2] Anjok07 and aufr33. Ultimate vocal remover. 
https://github.com/Anjok07/ultimatevocalremovergui, 2020.\"}", "{\"comment\": \"The authors acknowledge the difficulty of measuring semantic alignment, stating, \\\"Due to the lack of mature algorithms for fine-grained music emotion recognition, it is challenging to use objective metrics to measure the semantic alignment and synchronization between music and video.\\\" I agree with this statement; however, metrics such as the ImageBind Score[1], which are employed in existing works like M\\u00b2UGen and VidMuse, could offer a stronger evaluation framework for assessing semantic alignment.\\n\\n[1] Girdhar R, El-Nouby A, Liu Z, et al. Imagebind: One embedding space to bind them all[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 15180-15190.\"}", "{\"title\": \"Looking Forward to Further Feedback\", \"comment\": \"Dear Reviewer cqcQ,\\n\\nThank you again for your great efforts and valuable comments.\\n\\nWe have tried to address the main concerns you raised in the review and made huge efforts such as additional experiments. As the end of the rebuttal phase is approaching, we are looking forward to hearing your feedback regarding our answers. We are always happy to have a further discussion and answer more questions (if any).\\n\\nThanks in advance,\\n\\nSubmission10758 Authors\"}", "{\"comment\": \"(1) The authors state, \\\"Foremost, prior to our work, there were no studies that simultaneously focused on general video-to-music generation with semantic alignment and rhythmic synchronization.\\\"\\nThis is not accurate, as prior works such as VBMG [1], V2Meow [2], and VMAS [3] have already addressed general video-to-music generation focusing on both semantic alignment and rhythmic synchronization. Therefore, this task is not new. 
The authors should clearly articulate their specific contributions beyond these existing methods to justify the novelty of their work.\\n\\n(2) The authors claim, \\\"M\\u00b2UGen, from the perspective of music generation, is a strong baseline, because it incorporates a strong music generator, MusicGen.\\\"\\nHowever, MusicGen [4] was introduced in June 2023 ([arXiv link](https://arxiv.org/abs/2306.05284v1)), and newer, more advanced music generators such as AudioLDM2 [5], MusicLDM [6], and Stable-Audio-Open [7] have since been developed. Additionally, M\\u00b2UGen is a multi-task model not specifically designed for video-to-music generation. I agree with Reviewer cqcQ's assessment that \\\"M\\u00b2UGen is a weak baseline and performs poorly on the video-to-music (V2M) task.\\\"\\n\\n(3) The authors state, \\\"The ultimate quality of music generation is not the focus of this paper; it only needs to be satisfactory. The emphasis of this paper is on the semantic alignment and rhythmic synchronization.\\\"\\nHowever, audio quality is a critical aspect of music generation tasks. Even if semantic alignment is prioritized, the paper does not sufficiently explain or analyze this component, as evidenced by the unclear explanation of the SIM metric in the paper and in Response to Reviewer cqcQ (Part 1/N). The implementation details of SIM need to be clarified further.\\n\\nReferences:\\n\\n[1] Zhuo L, Wang Z, Wang B, et al. Video background music generation: Dataset, method and evaluation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 15637-15647.\\n\\n[2] Su K, Li J Y, Huang Q, et al. V2Meow: Meowing to the Visual Beat via Video-to-Music Generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(5): 4952-4960.\\n\\n[3] Lin Y B, Tian Y, Yang L, et al. VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos[J]. arXiv preprint arXiv:2409.07450, 2024.\\n\\n[4] Copet J, Kreuk F, Gat I, et al. 
Simple and controllable music generation[J]. Advances in Neural Information Processing Systems, 2024, 36.\\n\\n[5] Liu H, Yuan Y, Liu X, et al. Audioldm 2: Learning holistic audio generation with self-supervised pretraining[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.\\n\\n[6] Chen K, Wu Y, Liu H, et al. Musicldm: Enhancing novelty in text-to-music generation using beat-synchronous mixup strategies[C]//ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024: 1206-1210.\\n\\n[7] Evans Z, Parker J D, Carr C J, et al. Stable audio open[J]. arXiv preprint arXiv:2407.14358, 2024.\"}", "{\"comment\": \"(1) The repeated use of phrases like \\\"we believe\\\" and \\\"might lead to\\\" undermines the strength of the paper's claims. While cautious language is necessary for academic writing to ensure rigor, a top-conference paper should base its conclusions on rigorous experiments and detailed analyses rather than speculative or tentative language. The authors need to provide stronger evidence to support their claims.\\n\\n(2) I understand the motivation behind \\u201cmimicking natural sounds with musical instruments to achieve more expressive audiovisual effects.\\u201d However, the authors claim to \\u201caim to tackle these long-standing challenges of video-to-music generation,\\u201d explicitly including \\u201cIntegration of foley and sound effects\\u201d as one of their focuses. 
Despite this emphasis, there is no evidence of substantial effort toward this goal in the form of experiments or analyses.\"}", "{\"title\": \"Response to Reviewer yVkt (Part 1/N)\", \"comment\": \"Thank you for your valuable comments, and we would like to make some clarifications, which we hope will address your concerns.\\n\\n### **Novelty and Contribution**\\n\\n> 1.Novelty and Contribution: The paper presents its main contributions as a visual adaptor and a contrastive training scheme, but visual adaptor techniques and contrastive learning have already been used in video-to-music generation tasks [1, 2] and are commonly employed in multi-modal learning [3, 4]. The design of the visual adaptor lacks unique innovation, primarily involving a selection of common aggregation and pooling methods, which appears more as an ablation study to find the best setting. Overall, the proposed method lacks novelty, and the results in Table 2 indicate that the proposed method does not outperform the baseline across all metrics.\\n\\nWe disagree with reviewer yVkt's opinion that our method lacks innovation.\\n\\n1. Foremost, prior to our work, there were no studies that simultaneously focused on general video-to-music generation with semantic alignment and rhythmic synchronization. We identified and addressed this issue, achieving significantly superior results. Therefore, we are tackling a new task.\\n2. Indeed, we adopted two kinds of widely used methods: visual adaptors and contrastive learning. However, our visual adaptor is designed to compress high-frame-rate video features and provide a non-autoregressive generator with features about video semantics and synchronization. For contrastive learning, we designed two novel types of negative samples for the frame-level contrastive learning scheme, and as far as we know, none of the methods mentioned by the reviewer yVkt [1, 2, 3] uses a frame-level contrastive learning scheme or introduces new negative samples. \\n3. 
The reviewer yVkt claims that our method's failure to outperform M$^2$UGen in terms of IS detracts from its novelty, according to Table 2. However, M$^2$UGen, from the perspective of music generation, is a strong baseline, because it incorporates a strong music generator, MusicGen. It is challenging for our method to completely surpass a model that has ten times our parameter count and four times our training data volume, especially considering that our focus is not solely on music generation itself. The ultimate quality of music generation is not the focus of this paper; it only needs to be satisfactory. The emphasis of this paper is on the semantic alignment and rhythmic synchronization.\"}", "{\"title\": \"Response to Reviewer yVkt (Part 5/N)\", \"comment\": \"**[About finetuning M$^2$UGen]**\\n\\n> 4.3 The M^2Ugen method shows comparable or superior results in terms of audio quality (Table 2). Fine-tuning this method on the dataset used in this paper could provide additional insight into its performance.\\n\\nIt is worth mentioning that the music-visual alignment and the sound quality are somewhat of a trade-off. A similar phenomenon can be seen in Figure 4, where a higher CFG scale enhances the alignment while diminishing sound quality in terms of IS. Nevertheless, we managed to finetune M$^2$UGen following the steps described in their original paper: 1) we slice the videos beforehand, and use MU-LLaMA to generate captions for audios; 2) we use MPT-7B to generate answers for training the LLaMA 2 model; 3) we finetune the LLaMA 2 model using the LoRA and the adaptor technique, where we have to crop the video frames to fit the ViViT encoder; 4) we finetune the MusicGen medium model with the generated captions until convergence. Preliminary experimental results are listed below, from which it can be observed that the performance of M$^2$UGen did not change significantly after finetuning, and even slightly decreased. 
This is because their design lacks considerations for audio-visual temporal alignment, and finetuning on a smaller dataset may cause overfitting in large models. In fact, using language models as a bridge results in the loss of a substantial amount of effective visual information.\\n\\n| Methods | FAD | KL | IS | FD | BCS | BHS | SIM |\\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n| M$^2$UGen | 5.12|3.83|1.65|32.14|75.21 |25.14|1.41 |\\n| M$^2$UGen (finetuned) | 5.23|3.96|1.57|33.82|75.42 |25.05|1.38|\\n| ours | 4.28|3.52|1.63|28.15|104.17|49.23|19.18|\\n\\n### **Questions**\\n\\n**[About local and global features]**\\n\\n> 1.Claim on Previous V2M Methods (lines 39-41): The authors claim that \\\"Previous V2M methods focus on global features,\\\" presenting this as a limitation of past approaches. However, this appears inconsistent with prior work, as several existing methods focus on local clip features for training. For instance, V2Meow [1] and VMAS [2] emphasize local clip-based features, while VidMuse [3] captures both local and global features through long-short-term modeling. The authors should clarify and provide evidence to support their assertion about the emphasis on global features in previous V2M approaches.\\n\\nWe agree with reviewer yVkt that the statement \\\"previous V2M methods focus on global features\\\" is inaccurate. The mentioned methods [2, 7, 10] indeed incorporate frame-level visual representations. However, VidMuse [7] mainly focuses on semantic alignment, while V2Meow [10] mainly focuses on rhythmic synchronization. In fact, V2Meow does incorporate frame-level semantic features (CLIP) to generate music, but their evaluation mainly focuses on global semantic relevance, not fine-grained semantic alignment. As for VMAS [2], they released their paper on September 11, which essentially means our work was conducted concurrently. 
Also, from their demo samples, it can be observed that the music does not undergo observable emotional changes with the video; most videos maintain a single emotional style, and there is no clear rhythmic synchronization. \\n\\n**[About beat synchronization metrics]**\\n\\n> 2.Choice of Beat Synchronization Metrics and Exclusion of Dance Video Music Generation for Comparison: The authors select Beats Coverage Score (BCS) and Beats Hit Score (BHS) as metrics to evaluate beat synchronization, following the approach in [4] (line 346), which specifically targets music generation for dance videos. However, the authors then claim in line 364 that \\\"D2M-GAN are not considered for comparison because their scope of application differs from ours.\\\" If dance-related videos are outside MuVi\\u2019s intended scope, it is unclear why dance-specific metrics are being applied for evaluation. This raises a need for clarification.\\n\\nWe disagree with reviewer yVkt's opinion that BCS and BHS are dance-specific metrics, and that using these metrics implies comparing with dance2music methods. We believe that as technology advances, people will gradually uncover the essence of problems, rather than just focusing on their surface. The main focus of our work differs from that of dance2music methods. In fact, the model inputs are majorly different, as they require specific human motion encoding or body keypoints. However, it must be acknowledged that these two tasks share similarities in evaluation, as they both focus on the rhythmic synchronization of the generated music. Therefore, rather than saying this metric is dance-specific, it is more accurate to say it is rhythm-specific, and we borrow it to measure rhythmic synchronization.\"}", "{\"title\": \"Replying to Official Comment by Reviewer yVkt (1/2)\", \"comment\": \"**[About novelty and contribution]**\\n\\n1. 
Reviewer yVkt mentions three works, VBMG [1], V2Meow [2], and VMAS [3], that \\\"have already addressed general video-to-music generation focusing on both semantic alignment and rhythmic synchronization\\\". It is worth emphasizing that the \\\"alignment\\\" in our context is interpreted as local or fine-grained alignment (lines 39-41). That is, if the mood or the rhythm of the scene changes, the musical features must respond immediately. However, VBMG [1] only generates music with a constant rhythm, and both VBMG and V2Meow [2] only focus on global semantic alignment (they incorporate fine-grained semantic features as input, but only evaluate regarding global contents). VMAS [3] is a concurrent work, so we do not categorize it as 'previous' work.\\n2. M$^2$UGen, from the perspective of music generation, is a strong baseline, because the music generator it involves, MusicGen, has ten times our parameter count and four times our training data volume.\\n3. While achieving fine-grained semantic alignment and rhythmic synchronization, the proposed method reaches a level of sound quality that is competitive with methods incorporating text-to-music generation models. The explanation of the SIM metric is discussed in the following comment section.\\n\\n**[About the SIM metric]**\\n\\nThe SIM metric is a reference-free metric derived from the contrastively pre-trained encoders. Specifically, the SIM value is the average of frame-level cosine similarity values, which can handle varying-length music-visual pairs. For the music track, an AudioMAE encoder encodes the audio in a fine-grained way and a learnable linear layer transforms the encoding into an audio feature that has the same length as the compressed video feature. The implementation details of the encoders are listed in Appendix C. Given the audio and visual features both in the shape of (N, C), we compute the element-wise cosine similarity values and compute the average to be the SIM value. 
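For concreteness, this averaging step can be sketched in a few lines (a minimal numpy illustration of the computation described above; the function and variable names are ours, not part of the released implementation):

```python
import numpy as np

def sim_score(audio_feat: np.ndarray, visual_feat: np.ndarray) -> float:
    """Average frame-level cosine similarity between two (N, C) feature sequences."""
    # L2-normalize each of the N frame vectors along the channel axis
    a = audio_feat / np.linalg.norm(audio_feat, axis=1, keepdims=True)
    v = visual_feat / np.linalg.norm(visual_feat, axis=1, keepdims=True)
    frame_cos = (a * v).sum(axis=1)  # cosine similarity per frame, shape (N,)
    return float(frame_cos.mean())   # average over frames gives the SIM value
```

Here both inputs are assumed to already be temporally aligned and length-matched, as produced by the linear layer mentioned above.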
Therefore, the SIM value pays attention to local similarity. \\n\\n**[About integration of foley and sound effects]**\\n\\nThe intent of this paragraph was to distinguish the mimicking of foley sounds in the generated music from the logic of traditional foley sound generation. This raises a very important issue: the objects in the video are not necessarily the sources of the sounds being generated. This renders some traditional video-sound or video-music datasets ineffective, as they are constructed based on the pairing of the sound-producing object and the sound produced. This also explains why we do not use traditional datasets or models/metrics developed based on these datasets (such as ImageBind). We will revise the manuscript to make this section more precise.\\n\\n**[About MuVi(beta) and baseline comparison]**\\n\\n1. We create a simple and intuitive baseline with the (CLIP-ViT + attention, without contrastive pre-training) setting, which is also supported by reviewer yVkt (\\\"A simple baseline could have been constructed by combining an existing video understanding model with a music generation model, similar to the approach in [VMAS, VidMuse]\\\"). \\n2. As for the two baselines reviewer yVkt mentioned: VMAS is a concurrent work and has not released its code; VidMuse did not release its code until October 14th. We have made every effort to conduct additional experiments and listed the results in the original rebuttal comments (the \\\"[Comparison with other baselines]\\\" section). We list the results here again:\\n\\n|Methods|FAD|KL|IS|FD|BCS | BHS | SIM |\\n|:-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n|VidMuse|8.13|4.88|1.50|43.82|81.35 |36.12|3.30 |\\n|ours|4.28|3.52|1.63|28.15|104.17|49.23|19.18|\\n\\n3. Reviewer yVkt mentions four existing works in video-to-music generation, M$^2$UGen [4], V2Meow [2], VidMuse [5], and VMAS [3], and notes that the paper \\\"includes only one comparative method apart from additional rebuttal experiments\\\". 
However, V2Meow has not released its code, and even their [demo page](https://tinyurl.com/v2meow) is inaccessible. VidMuse did not release its code until October 14th, and our initial replication also produced unsatisfactory results. VMAS is essentially a concurrent work, because they released the paper on September 11 UTC. \\n\\n**References**:\\n\\n[1] Zhuo L, Wang Z, Wang B, et al. Video background music generation: Dataset, method and evaluation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 15637-15647.\\n\\n[2] Su K, Li J Y, Huang Q, et al. V2Meow: Meowing to the Visual Beat via Video-to-Music Generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(5): 4952-4960.\\n\\n[3] Lin Y B, Tian Y, Yang L, et al. VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos[J]. arXiv preprint arXiv:2409.07450, 2024.\\n\\n[4] Liu S, Hussain A S, Sun C, et al. M$^2$UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models[J]. arXiv preprint arXiv:2311.11255, 2023.\"}
VidMuse and VMAS, on the other hand, compared with five additional methods in their respective papers.\"}", "{\"title\": \"Replying to Official Comment by Reviewer yVkt for Additional Experimental Results (2/2)\", \"comment\": \"**[Additional Benchmark]**\\n\\nEvaluation results on 25s dancing test set (AIST++) from LORIS benchmark:\\n\\n| Methods | FAD | KL | IS | FD | BCS | BHS | SIM | BCS' | CSD | BHS' | HSD | F1 | MOS-Q | MOS-A | ImageBind AV score |\\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n| GT | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 0.0527 |\\n| LORIS | - | - | - | - | - | - | - | 98.6 | 6.1 | 90.8 | 13.9 | 94.5 | - | - | - |\\n| CMT | 8.19| 4.81| 1.12| 73.26| 90.85 | 39.38| 3.25 | 96.9 | 6.3 | 46.0 | 18.4 | 62.4 | 3.36 | 3.41 | 0.0508 |\\n| VidMuse | 5.63| 2.12| 1.19| 34.01| 90.07 | 41.14| 5.12 | 95.7 | 12.8 | 97.0 | 6.6 | 96.4 | 3.58 | 3.54 | 0.0601 |\\n| M2UGen | 5.80| 4.56| 1.45| 40.42| 65.76 | 24.44| 3.85 | 94.6 | 3.7 | 91.3 | 17.5 | 93.6 | 3.81 | 2.86 | 0.0552 |\\n| ours | 5.56| 2.06| 1.77| 33.78| 125.72| 46.23| 14.25| 95.9 | 13.0 | 59.5 | 26.4 | 73.4 | 3.85 | 4.03 | 0.0513 |\\n\\nSince the evaluation is based on the LORIS benchmark, we also borrow the corresponding metrics (BCS', CSD, BHS', HSD, F1) to make a fair comparison. Because the checkpoints of LORIS are inaccessible, we only copy the results from their original paper. It is worth mentioning that the metrics from LORIS are computed second-wise, while ours (BCS, BHS) are computed based on a 100ms tolerance. **This makes LORIS metrics significantly loose for rhythmic synchronization, namely, models can perform very well or even cheat on these metrics easily. This was also our initial reason for not using this metric.** \\n\\nIt is worth mentioning that our model has never seen any specially collected dancing videos, not to mention the whole training split of LORIS dataset. 
Therefore, our model is tested in an out-of-domain fashion.\\n\\nAnother important thing is that the ImageBind AV score is pretty low even for ground truth (GT) samples. This just confirms our previous point that this metric is inappropriate in this task. To address this issue, we conducted additional subjective evaluations and also introduced metrics specifically for semantic alignment and rhythmic synchronization. The results demonstrate the superior performance of the proposed method.\\n\\nEvaluation results on 25s figure skating test set from LORIS benchmark:\\n\\n| Methods | FAD | KL | IS | FD | BCS | BHS | SIM | BCS' | CSD | BHS' | HSD | F1 | MOS-Q | MOS-A | ImageBind AV score |\\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n| GT | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 0.0578 |\\n| LORIS | - | - | - | - | - | - | - | 52.2 | 18.5 | 57.0 | 19.8 | 54.5 | - | - | - |\\n| CMT |16.76| 3.16| 1.19| 83.24| 82.03 | 38.64| 3.59 | 39.3 | 28.5 | 75.1 | 27.6 | 51.6 | 3.25 | 3.36 | 0.0521 |\\n| VidMuse |12.62| 2.85| 1.29| 71.13| 80.92 | 39.84| 4.03 | 53.6 | 22.5 | 91.5 | 14.6 | 67.6 | 3.52 | 3.45 | 0.0487 |\\n| M2UGen |14.12| 2.77| 1.25| 69.90| 82.62 | 39.12| 3.52 | 65.3 | 17.6 | 95.9 | 9.4 | 78.4 | 3.77 | 3.12 | 0.0616 |\\n| ours |14.44| 2.58| 1.44| 64.02| 102.60| 50.67| 15.18| 65.7 | 25.8 | 64.9 | 26.8 | 60.0 | 3.79 | 3.93 | 0.0619 |\\n\\nIt is worth noting that **the audio quality of ground-truth tracks of the figure skating dataset is extremely poor**, compared to other musical datasets. The audio in these videos is very noisy, mixed with a lot of noise and even human voices. Moreover, these videos do not exhibit any obvious semantic transitions. Therefore, **we believe this benchmark cannot demonstrate the real performance of our method**, but we still conducted the evaluation as requested by reviewer yVkt. 
From the results, we can observe a very clear variance, and the high FAD indicates that the distribution of audio features learned by the models are quite different from that of this dataset. This indirectly suggests that the dataset is lacking in terms of audio quality.\"}", "{\"title\": \"Looking Forward to Further Feedback\", \"comment\": \"Dear Reviewer JLkN,\\n\\nThank you again for your great efforts and valuable comments.\\n\\nWe have tried to address the main concerns you raised in the review and made huge efforts such as additional experiments. As the end of the rebuttal phase is approaching, we are looking forward to hearing your feedback regarding our answers. We are always happy to have a further discussion and answer more questions (if any).\\n\\nThanks in advance,\\n\\nSubmission10758 Authors\"}", "{\"metareview\": \"This paper, MuVi, proposes a new video-to-music generation framework focusing on two primary goals: semantic alignment (music content changing in pair with the video\\u2019s mood or scene) and rhythmic synchronization (musical beats matching a video\\u2019s pacing).\\n\\n## Reviewers\\u2019 Feedback ##\\n\\n**Positive Aspects:** \\n\\nReviewer cqcQ noted that video-to-music remains a relatively new area, agreed that the discussions on semantic synchronization were sufficient in the rebuttal, and was satisfied with the additional baseline comparison. Reviewer yVkt acknowledged that some of the concerns have been resolved including writing, semantic alignment metric, and additional experimental comparison and benchmarks. Reviewer JLkN noted that most concerns regarding method clarity and experimental comparisons had been addressed.\\n\\n**Unresolved Issues:**\\n\\n1. Novelty: Reviewer yVkt found the authors\\u2019 arguments about being the first to tackle local or fine-grained alignment unconvincing, pointing out that there are several relevant methods\\n\\n2. 
Generalizability: Reviewer yVkt and Reviewer JLkN noted that, despite extra benchmarks in the rebuttal, the results cannot convincingly demonstrate the generalizability of the proposed approach.\\n\\n3. Metrics & Comparison: There were inconsistencies between MuVi\\u2019s definitions of BCS/BHS and those in referenced works. The reviewer yVkt felt this may mislead readers about how results compare with other studies. They also noted that the demo page lacked updated comparative results.\\n\\n4. Overclaim in In-context learning section: Reviewer cqcQ suggested that the paper\\u2019s in-context learning part might not rise to the level of a distinct contribution.\\n\\n## Recommendation ##\\n\\nAfter rebuttal and discussion, the reviewers had mixed opinions (scores of 8, 6, and 3). The reviewer who gave an 8 highlighted the paper\\u2019s writing and recognized that video-to-music generation remains a relatively new field, noting certain differences from existing methods. I appreciate the efforts of the authors to provide extensive responses and for the reviewers to engage in discussions. However, after rebuttal and discussion, the other two reviewers still had concerns\\u2014specifically about overlaps with prior local alignment approaches and insufficient demonstration for broader generalization. Despite one positive review, I recommend rejection due to insufficient novelty and limited applicability.\\n\\nI encourage the authors to address these concerns and consider resubmitting to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal and discussion phase highlighted several critical concerns: overlap with existing approaches, limited generalization, and potential overclaims. While one reviewer remained positive, highlighting the clear writing and the paper's contribution to a relatively new task, the other two remained unconvinced. 
They felt the core ideas overlapped with prior frameworks (Reviewer yVkt) and that generalizability was not adequately demonstrated (Reviewers yVkt and JLkN). Despite the authors' responses, including additional comparisons and clarifications, these core concerns remain. Although video-to-music generation is an interesting and challenging task, I concur with the reviewers' concerns and recommend rejection.\"}", "{\"comment\": \"(1) The authors use a (CLIP-ViT + attention, without contrastive pre-training) setting as a baseline. This is not only one of the authors' own settings but also a deliberately weak configuration. Using such a baseline weakens the persuasiveness of their comparative evaluation.\\n\\n(2) While the authors state that \\\"making any comparison before that date unfair,\\\" the paper includes only one comparative method apart from additional rebuttal experiments. This is insufficient, as many existing works in video-to-music generation, such as M\\u00b2UGen[1], V2Meow[2], VidMuse[3], and VMAS[4], already provide multiple baselines for reference. Even if direct comparisons are deemed unfair due to timing, the authors could have drawn insights or evaluations from these established baselines to strengthen the validity of their proposed approach.\\n\\n[1] Liu S, Hussain A S, Sun C, et al. M\\u00b2UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models[J]. arXiv preprint arXiv:2311.11255, 2023.\\n\\n[2] Su K, Li J Y, Huang Q, et al. V2Meow: Meowing to the Visual Beat via Video-to-Music Generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(5): 4952-4960.\\n\\n[3] Tian Z, Liu Z, Yuan R, et al. Vidmuse: A simple video-to-music generation framework with long-short-term modeling[J]. arXiv preprint arXiv:2406.04321, 2024.\\n\\n[4] Lin Y B, Tian Y, Yang L, et al. VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos[J].
arXiv preprint arXiv:2409.07450, 2024.\"}", "{\"title\": \"Response to Reviewer JLkN (Part 1/N)\", \"comment\": \"Thanks for your valuable feedback, and we hope our response fully resolves your concerns.\\n\\n**[About integration of foley and sound effects]**\\n\\n> In the Introduction, the \\\"Integration of foley and sound effects\\\" paragraph seems unnecessary. The paper tackles neither foley sound generation nor sound effects. If we listen to the demo samples, it's more like rhythmic synchronization + bpm modulation, not foley or sound effects.\\n\\nWe apologize for any possible misunderstanding caused by this description. The intent of this paragraph was to distinguish the mimicking of foley sounds in the generated music from the logic of traditional foley sound generation. Indeed, our method was not specifically designed for foley sound generation, but many audiovisual artworks employ the technique of mimicking natural sounds with musical instruments to achieve more expressive audiovisual effects. Our method will also possess this capability after training. These foley sound effects, simulated by musical instruments, are actually a special form of music generation, as stated in lines 63-66. Many cases can also be found in the demo samples (for example, at the beginning of [this sample](https://muvi-v2m.github.io/data/result/video/tom_and_jerry_01[90to110](2).mp4), when Jerry hits Tom in the back of the head with a revolver, this sound is simulated by a set of percussion instruments and a brief string section). Nevertheless, Reviewer JLkN's perspective, which suggests that this does not actually fit the definition of foley sound generation and is essentially just a musical technique, also has merit. Therefore, we will refine this part to place more emphasis on musical and instrumental techniques, rather than foley sound generation.
\\n\\n**[About 33.7K music tracks]**\\n\\n> In Section 4.1, the authors noted that they used MTG-Jamendo Dataset + 33.7K music tracks from the internet. I think the reason why they have used more 33.7K music tracks from the internet should be explained more in detail.\\n\\nThank you for pointing this out, it would be clearer if we provide a discussion on our choice. We pre-trained the music generator to first establish a simple yet decent baseline, as we do not want the generator itself to become a bottleneck or critical point for this task. In fact, the model was initially trained using the 33.7K tracks from the internet, but using only these data was not enough to fully meet the baseline standard. It was after this that we noticed the MTG-Jamendo dataset and directly incorporated it into our training set, achieving a reasonably good baseline. Therefore, this was a choice based on experience. To measure the performance of an unconditional generator, we follow previous works [1] and evaluate the methods on the MusicCaps benchmark. Here are preliminary experimental results:\\n\\n|Methods|FAD|KL|\\n|:-:|:-:|:-:|\\n|Mousai|7.5|1.59|\\n| MusicLM | 4.0 | - |\\n| Noise2Music | 2.1 | - |\\n| MusicGen (1.5B) | 5.0 | 1.31 |\\n| MuVi (uncond. 33.7K) | 8.5 | 2.34 | \\n| MuVi (uncond. 50K) | 7.8 | 2.05 |\\n| MuVi (uncond. 83.7K) | 7.3 | 1.91 |\\n\\nWe compare the unconditional generator trained with different sets of training data (33.7K from the internet, 50K from the MTG-Jamendo dataset, and the combination), and some baselines (the results are copied from their original papers). For an unconditional generator, it reaches reasonable performance with the combination of the two sets.\"}", "{\"title\": \"Response to Reviewer yVkt (Part 6/N)\", \"comment\": \"**[About MuVi(beta)]**\\n\\n> 3.Choice of MuVi(beta) Setting for Comparison: The paper claims \\\"use CLIP-ViT(base-patch16) and the attention pooling adaptor as the visual encoder\\\" for MuVi(beta) (lines 366-367). 
However, Table 1 shows that the VideoMAE V2 with a Softmax adaptor yields better results for this setting. It is unclear why a suboptimal setting was selected for MuVi(beta), as this choice could impact the fairness and interpretability of the comparisons. An explanation from the authors on the rationale for this choice would provide more clarity.\\n\\nAs mentioned in the \\\"[Comparison with other baselines]\\\" section, we constructed MuVi(beta) to create a simple and trivial baseline, as requested in Weakness 4.1 of Reviewer yVkt's comments, since we do not have many baselines to compare with. MuVi(beta) is not only constructed with CLIP + attention pooling; it is also trained without contrastive pre-training, resulting in a very simple baseline. Reviewer yVkt suggests that we chose this model for comparison because it performs poorly, which is a case of putting the cart before the horse. We chose the CLIP + attention combination as the adaptor strategy because it is actually very competitive compared with the VideoMAE V2 + Softmax combination. Also, using CLIP as the visual encoder is a common choice among other works, meeting the requirements of a simple baseline.\\n\\n---\\n\\nWe hope our clarifications address your concerns, and we are looking forward to your re-assessment of our work. We also welcome further discussion with you. Thank you again for your efforts.\\n\\n**References**\\n\\n[1] Liu S, Hussain A S, Sun C, et al. M2UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models[J]. arXiv preprint arXiv:2311.11255, 2023.\\n\\n[2] Lin Y B, Tian Y, Yang L, et al. VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos[J]. arXiv preprint arXiv:2409.07450, 2024.\\n\\n[3] Zhang R, Han J, Liu C, et al. Llama-adapter: Efficient fine-tuning of language models with zero-init attention[J]. arXiv preprint arXiv:2303.16199, 2023.\\n\\n[4] Radford A, Kim J W, Hallacy C, et al.
Learning transferable visual models from natural language supervision[C]//International conference on machine learning. PMLR, 2021: 8748-8763.\\n\\n[5] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.\\n\\n[6] Hou Z, Sun F, Chen Y K, et al. Milan: Masked image pretraining on language assisted representation[J]. arXiv preprint arXiv:2208.06049, 2022.\\n\\n[7] Tian Z, Liu Z, Yuan R, et al. VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling[J]. arXiv preprint arXiv:2406.04321, 2024.\\n\\n[8] Li S, Qin Y, Zheng M, et al. Diff-BGM: A Diffusion Model for Video Background Music Generation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 27348-27357.\\n\\n[9] Zhang C, Hua Y. Dance2Music-Diffusion: leveraging latent diffusion models for music generation from dance videos[J]. EURASIP Journal on Audio, Speech, and Music Processing, 2024, 2024(1): 48.\\n\\n[10] Su K, Li J Y, Huang Q, et al. V2Meow: Meowing to the Visual Beat via Video-to-Music Generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(5): 4952-4960.\"}", "{\"comment\": \"The fine-tuned M\\u00b2UGen model performs worse on key metrics like FAD, KL, and FD, which is counterintuitive.\\nThe authors' explanation for this performance drop (e.g., overfitting or lack of temporal alignment) is unconvincing. A deeper analysis is needed to justify these results.\"}", "{\"title\": \"Replying to Official Comment by Reviewer yVkt (2/2)\", \"comment\": \"**[About finetuning M$^2$UGen]**\\n\\nThe generation process of M$^2$UGen can be dissected as follows: the LLaMA 2 decoder summarizes the main theme of the visual features extracted by the ViViT model and generates a description of the music to be generated.
The MusicGen generator then follows the instruction to generate music. Therefore, in our task, what actually has a direct impact on the generation quality is the finetuning step of MusicGen. Specifically, if we only finetune the LLaMA 2 model and keep the MusicGen model unchanged, the objective results remain essentially unchanged when averaged over multiple runs. In addition, FAD, KL, and IS also indicate the latent distribution distance between the generation set and the source set. A worse value also implies that the generated samples have relatively little similarity to the dataset. \\n\\n**[About generalizability]**\\n\\nThe main focus of this paper is exploring how to generate music tracks with fine-grained semantic alignment and rhythmic synchronization; its generalizability is not the primary focus. Even so, we will make every effort to evaluate the performance on the LORIS [6] benchmark. The reasons we chose LORIS rather than the others are: SymMV [7] is for symbolic music generation; V2M [5] has not released its test dataset; AIST++ [8] is included in the LORIS benchmark. However, as the rebuttal deadline approaches, please understand if we are unable to complete this evaluation before the deadline.\\n\\n**References**:\\n\\n[1] Zhuo L, Wang Z, Wang B, et al. Video background music generation: Dataset, method and evaluation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 15637-15647.\\n\\n[2] Su K, Li J Y, Huang Q, et al. V2Meow: Meowing to the Visual Beat via Video-to-Music Generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(5): 4952-4960.\\n\\n[3] Lin Y B, Tian Y, Yang L, et al. VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos[J]. arXiv preprint arXiv:2409.07450, 2024.\\n\\n[4] Liu S, Hussain A S, Sun C, et al. M$^2$UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models[J].
arXiv preprint arXiv:2311.11255, 2023.\\n\\n[5] Tian Z, Liu Z, Yuan R, et al. Vidmuse: A simple video-to-music generation framework with long-short-term modeling[J]. arXiv preprint arXiv:2406.04321, 2024.\\n\\n[6] Yu J, Wang Y, Chen X, et al. Long-term rhythmic video soundtracker[C]//International Conference on Machine Learning. PMLR, 2023: 40339-40353.\\n\\n[7] Zhuo L, Wang Z, Wang B, et al. Video background music generation: Dataset, method and evaluation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 15637-15647.\\n\\n[8] Li R, Yang S, Ross D A, et al. AI choreographer: Music conditioned 3D dance generation with AIST++[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 13401-13412.\"}", "{\"title\": \"Additional Concerns\", \"comment\": \"I agree with Reviewer JLkN\\u2019s remark that \\\"the collected video mostly includes Disney, Tom and Jerry, and Silent Films. I think this fact should also be described in the paper well since the proposed method is valid only on these kind of video contents for now.\\\"\\n\\nMoreover, there are potential concerns about data leakage, raising doubts about the model\\u2019s generalizability and its tendency to overfit on the authors\\u2019 dataset:\\n\\n(1) On the demo page, the generated music in almost all samples predominantly features strings, brass instruments, piano, and percussion. This closely matches the training data categories mentioned by the authors and does not reflect the diversity of the MTG-Jamendo Dataset, which includes 95 genres and 41 instruments. 
The authors also claim that the MTG-Jamendo Dataset is used to train the music generator, but the lack of diversity in the generated music suggests possible overfitting to specific subsets of the training data.\\n\\n(2) For video samples from categories similar to those in the training dataset, such as [sample_1](https://muvi-v2m.github.io/data/result/video/tom_and_jerry_01%5B90to110%5D(2).mp4), [sample_2](https://muvi-v2m.github.io/data/result/video/tom_and_jerry_01[270to290](0).mp4), the model demonstrates excellent rhythmic synchronization. However, for videos from other categories, such as [sample_3](https://muvi-v2m.github.io/data/result/video/game_cg[80to100](6).mp4), [sample_4](https://muvi-v2m.github.io/data/result/video/game_cg[540to560](1).mp4), the performance deteriorates significantly, particularly in terms of synchronization and musical alignment.\\n\\nThe authors claim that \\\"we believe this method has a certain degree of general applicability.\\\" However, this claim is not adequately supported. To validate this assertion, evaluating the model's performance on established and diverse benchmarks, such as LORIS [1], SymMV [2], V2M [3], AIST++ [4], or BGM909 [5], would help comprehensively assess its generalizability and robustness across different types of content.\\n\\n\\n[1] Yu J, Wang Y, Chen X, et al. Long-term rhythmic video soundtracker[C]//International Conference on Machine Learning. PMLR, 2023: 40339-40353.\\n\\n[2] Zhuo L, Wang Z, Wang B, et al. Video background music generation: Dataset, method and evaluation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 15637-15647.\\n\\n[3] Tian Z, Liu Z, Yuan R, et al. VidMuse: A simple video-to-music generation framework with long-short-term modeling[J]. arXiv preprint arXiv:2406.04321, 2024.\\n\\n[4] Li R, Yang S, Ross D A, et al. 
AI choreographer: Music conditioned 3D dance generation with AIST++[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 13401-13412.\\n\\n[5] Li S, Qin Y, Zheng M, et al. Diff-BGM: A Diffusion Model for Video Background Music Generation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 27348-27357.\"}", "{\"title\": \"I appreciate your efforts to improve the paper quality.\", \"comment\": \"Thanks for your rebuttal.\\n\\nI agree that your rebuttal resolved most of the issues I raised. Assuming you will integrate these revisions into the next version of your paper, I am happy to raise the score to 8.\\n\\nHowever, I maintain my opinion about the in-context learning section. After reading the rebuttal, I understand why this paper includes it, but it still feels like over-claiming. My suggestion is that you can discuss it, but not count it as a contribution of this paper. \\n\\nAgain, good rebuttal. Thanks for your work. Good luck.\"}", "{\"summary\": \"This paper proposes a novel video-to-music generation method that improves semantic similarity and rhythmic consistency by enhancing the visual encoder and adopting a new contrastive learning pretraining approach. The authors also conducted a series of ablation studies to examine the specific impact of different modules on the model\\u2019s performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The writing in this paper is clear, and the motivation aligns well with intuition. Overall, the experiments are reasonably thorough and adequately conducted.\\n\\nThe assumptions about video-to-music generation are sensible. The authors address key challenges of the task, such as temporal semantic consistency in videos, rhythm alignment, and distinguishing between sound effects and music.
Therefore, the proposed improvements seem well-suited for tackling these issues.\\n\\nThe evaluation metrics are also well-chosen, covering metrics that sufficiently measure the model\\u2019s performance across various aspects. The experimental results are convincing.\", \"weaknesses\": \"The paper includes an additional section on the in-context learning (ICL) capability of music generation models, which I feel deviates from the main theme of the proposed task. From the limited experiments presented, this section does not add to the paper\\u2019s persuasiveness, nor does it convincingly demonstrate actual ICL capabilities. I suggest removing this contribution from the paper.\\n\\nThe discussion on semantic synchronisation is lacking. The paper initially introduces an example: \\u201cthe style, melody, and emotions of the music will evolve in harmony with the video content,\\u201d and Figure 2 also touches on this aspect. The authors attempt to measure semantic synchronisation using the SIM metric and imply that Section 4.2 will discuss this metric in detail. However, I found no mention of this metric in Section 4.2. Thus, I believe the discussion on semantic synchronisation is relatively insufficient. I recommend further analysis of the model\\u2019s performance in semantic synchronisation, perhaps by including additional case studies.\\n\\nTraining data concerns for the music generation model. I am curious about one particular issue: since the Jamendo dataset does not include semantic variations, requiring it to generate music that aligns with video semantic changes is effectively an out-of-distribution (OOD) problem. How do the authors plan to address this point?\\n\\nChoice of baseline. I fully understand the difficulty in comparing to baseline models due to the lack of open-source availability. In this situation, using M2UGen as a baseline is acceptable. 
However, I would like to note that M2UGen is a weak baseline and performs poorly on the video-to-music (V2M) task. While not a strict requirement, I noticed that VidMuse has released its code. Considering that this baseline is also prominently discussed in the paper, I encourage the authors to include a comparison with it.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yVkt (Part 3/N)\", \"comment\": \"**[About ambiguous phrases]**\\n\\n> 2.3 Ambiguous phrases like \\u201cwe believe\\u201d (lines 222, 483) and \\u201cmight lead to\\u201d (lines 77, 483) appear multiple times in the paper. Clear support or reasoning should be provided for these assertions.\\n\\nWe use tentative or cautious language because this is a scientific academic paper. Extreme or absolute statements would deny any other possibilities or future discoveries, which is something we do not wish to promote. Here is some further discussion of the passages Reviewer yVkt points out.\\n\\n1. **Line 222**. The \\\"considers global features\\\" part does not need further verification, because the Softmax operation guarantees that the weights at all spatial positions are positive. For the \\\"selectively captures local information\\\" part, we calculated the average standard deviation of the attention weights of each frame at inference time: average standard deviation = 0.0131. For reference, the standard deviation of a 14x14 matrix where only one element equals 1 and all other elements equal 0 is 0.0714. Meanwhile, the expected standard deviation of a normally distributed matrix after applying a Softmax is 0.0064. Therefore, the adaptor compresses local information selectively. In addition, intuitive changes in attention can also be observed in the demo samples.\\n2. **Line 484**.
This is a possible interpretation of ours for the decline in generalization ability after dropping contrastive learning, as we found the model tends to memorize the entire training set directly. When we compute the CLAP similarity on the training set, we find that the similarity of the method without contrastive learning (0.36) is much higher than that with contrastive learning (0.31).\\n3. **Line 77**. This is also a possible interpretation for the decline in generalization ability after dropping certain negative samples. The quantitative analysis can be found in Table 5 and Section 4.4.\\n\\n### **Presentation and Writing**\\n\\n**[About integration of foley and sound effects]**\\n\\n> 3.1 The introduction (line 63) highlights tackling \\u201cIntegration of foley and sound effects,\\u201d yet no further details or experiments addressing this topic are provided in the rest of the paper.\\n\\nThis paragraph aimed to clarify the difference between the imitation of foley sounds in the generated music and the traditional approach to creating foley sounds. Many audiovisual artworks employ the technique of mimicking natural sounds with musical instruments to achieve more expressive audiovisual effects, so our method will also possess this capability after training. As mentioned in lines 63-66, these foley effects, reproduced through musical instruments, represent a special form of music generation. Many similar cases can also be found in the demo samples (for example, at the beginning of [this sample](https://muvi-v2m.github.io/data/result/video/tom_and_jerry_01[90to110](2).mp4), when Jerry hits Tom in the back of the head with a revolver, this sound is simulated by a set of percussion instruments and a brief string section). Nevertheless, the reviewer's perspective, which suggests that this does not actually fit the definition of foley sound generation and is essentially just a musical technique, also has merit.
Therefore, we will refine this part to place more emphasis on musical and instrumental techniques, rather than foley sound generation.\"}", "{\"summary\": \"The paper proposes a method for video-to-music generation. Few previous works utilize the concept of (video)sequence-to-(music)sequence generation. Most previous works tackled video-to-music generation as a media-content-to-sequence task, so global video features have been used for generating music. Previous works that used the sequence-to-sequence concept mainly studied the dance-to-music generation task. Therefore, in this paper, the authors tried to generate music using frame-level video information so that each generated music frame is synchronized with the video frames. To do this, they mainly suggested two techniques: a visual adaptor module and contrastive video-music pre-training. The visual adaptor aggregates frame-level video features to match the music frames, and contrastive video-music pre-training aims to synchronize music beats with video events while preserving overall mood/style coherence. Both strategies were verified to be effective through the ablation studies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed visual adaptor and pre-training technique with the two negative samplings seem effective for better modeling the synchronization between video and music.\", \"weaknesses\": \"In the Introduction, the \\\"Integration of foley and sound effects\\\" paragraph seems unnecessary. The paper tackles neither foley sound generation nor sound effects. If we listen to the demo samples, it's more like rhythmic synchronization + bpm modulation, not foley or sound effects.\", \"questions\": \"In Section 4.1, the authors noted that they used the MTG-Jamendo Dataset + 33.7K music tracks from the internet.
I think the reason they used an additional 33.7K music tracks from the internet should be explained in more detail.\\n\\nAlso, looking at the Appendix, the collected videos mostly include Disney, Tom and Jerry, and silent films. I think this fact should also be clearly described in the paper, since the proposed method is currently valid only for these kinds of video content. (The authors mentioned that for videos containing vocal singing, they excluded the vocal part through a source separation technique; however, if there is narration or actors' voices, the proposed technique will not be valid unless the speech parts are also removed.)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yVkt (Part 2/N)\", \"comment\": \"### **Lack of Justification and Explanation**\\n\\n**[About visual adaptors]**\\n\\n> 2.1 The adaptor design section lacks a clear justification. For instance, why were these three adaptor methods chosen, instead of exploring alternative multi-modal adaptors [1, 3]? Why is CLS set as the query instead of the key-value pair?\\n\\n1. **Why not use the adaptor from M$^2$UGen [1]**: To achieve semantic alignment and rhythmic synchronization, we need to compress high-frame-rate video features to fit the simple non-autoregressive generator. The adaptor in M$^2$UGen does not have a compression effect, and it only samples 32 frames uniformly from all videos. This kind of adaptor is more suitable for adapting language models. \\n2. **Why not use the adaptor from LLaMA-Adaptor [3]**: The adaptor from LLaMA-Adaptor does not actually involve compressing video features, so we still need to design the specific compression strategy ourselves.
Also, LLaMA-Adaptor is designed to finetune large language models in a memory-efficient way and to inject multimodal conditions into these large language models, which requires the generator to be an autoregressive model with strong comprehension capabilities, such as LLaMA. On the one hand, finetuning our model does not exert significant memory pressure; on the other hand, this method of prompt concatenation struggles to provide direct alignment information. Our experiments show that using adapting prompt concatenation like this for conditional generation is far less effective in terms of temporal alignment than the simple method of channel-wise fusion (that is, we compress the visual feature to the same shape as the audio feature, then perform element-wise addition), because the latter provides explicit position guidance. To make a comparison, we uniformly sample 32 frames from the sliced videos to create fixed-length conditions, where the topmost 10 layers are concatenated with these 32 prompts. It is worth mentioning that for an ODE/SDE-based generator, this conditioning strategy is overly complex and far less intuitive than channel-wise fusion (or even cross-attention). Here are some preliminary experimental results, where it can be observed that the adapting prompt concatenation technique is outperformed by channel-wise fusion.\\n\\n| Methods | FAD | KL | IS | FD | BCS | BHS | SIM |\\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n| LLaMA-Adaptor | 5.35 | 3.81 | 1.47 | 39.03 | 91.36 | 43.24 | 13.15 |\\n| ours | 4.28 | 3.52 | 1.63 | 28.15 | 104.17 | 49.23 | 19.18 |\\n\\n3. **Why is CLS set as the query**: As stated in Section 3.1, we borrow the idea of the semantic-aware masking strategy from [6] to establish a semantic-aware attention pooling strategy, where the CLS token is used as the query. Besides, if the CLS token were used as key/value, the compression operation would be meaningless.
In that case, the output of the adaptor would still have the same shape as the input feature. \\n\\n**[About metrics]**\\n\\n> 2.2 The paper introduces various metrics for evaluating model performance, but lacks explanations for each metric. For instance, in lines 446-449: \\u201cresulting in lower sound quality, diversity, and poorer synchronization,\\u201d it is unclear which metrics specifically measure sound quality, diversity, or synchronization. \\n\\n1. FAD measures the similarity between the generation set and the source data set (using intermediate features of VGGish). A lower FAD indicates plausible generation or higher audio quality. \\n2. KL measures the distance between the distributions of two sets at a paired-sample level. A lower KL indicates higher audio quality. However, if the generation lacks diversity and this influences the distribution, KL might increase. \\n3. IS is computed with Inception networks to measure both quality and diversity.\\n4. FD is similar to FAD but uses PANNs instead of VGGish.\\n5. BCS and BHS are used to measure rhythmic synchronization.\\n6. SIM is used to measure semantic alignment.\\n7. MOS-Q is used to measure audio quality.\\n8.
MOS-A is used to measure fine-grained alignment and synchronization.\\n\\n**[About channel-wise fusion]**\\n\\n> Additionally, the statement \\u201cthe channel-wise fusion of the visual conditioning still aids in synchronization\\u201d lacks experimental evidence to substantiate this claim.\\n\\nThe evidence that the channel-wise fusion of the visual conditioning also aids in synchronization can be found in the comment section \\\"[About visual adaptors]\\\".\"}", "{\"comment\": \"[About generalizability]\\n\\nThe authors now claim that \\\"exploring its generalizability is not the primary focus.\\\" However, earlier in response to Reviewer JLkN\\u2019s comment, they stated, \\\"we believe this method has a certain degree of general applicability.\\\" If the authors still wish to claim this, the claim needs to be substantiated. Additionally, the concern regarding potential data leakage should be addressed and clarified by the authors.\\n\\n[About the benchmark]\\n\\nV2M has not been released, but both SymMV and BGM909, despite being proposed for symbolic music generation tasks, contain audio modalities and corresponding ground truth, which can be used for evaluation. The authors could choose one of these for comparison, or alternatively, they could select AIST++ or LORIS.\\n\\nAnother point worth mentioning is that all materials should have been completed and submitted by the submission deadline. However, as I previously mentioned to the Area Chair, the authors continued to modify their GitHub repository after the submission deadline, even during the review period, as evidenced by https://github.com/MuVi-V2M/MuVi-V2m.github.io/commits/main. This could be viewed as unfair and provides the authors with an advantage.\\n\\nThat being said, if the Area Chair agrees, I suggest that the authors include the new results, including new baselines and new benchmarks, as references.
If the authors are unable to provide these results before the rebuttal deadline, I recommend they run the demo pages for baselines like V2Meow, VidMuse, and VMAS and present these results as a reference instead.\"}", "{\"title\": \"Replying to Official Comment by Reviewer yVkt for Additional Experimental Results (1/2)\", \"comment\": \"We apologize for the delayed response, as replicating and comparing baseline methods takes a considerable amount of time, especially since some baseline methods require a significant amount of time for inference (e.g., VidMuse needs 14 min to generate a 25s audio clip, while MuVi only needs 6s). We still have some experiments that are not completed, but we will do our best to finish them and provide partial results first. Please understand. We will also include the additional results in the revised manuscript.\\n\\n**[ImageBind Evaluation]**\\n\\nWe have done the ImageBind AV score evaluation as requested by Reviewer yVkt, and the results are listed below.\\n\\n| Methods | ImageBind AV score |\\n| :-: | :-: |\\n| VidMuse | 0.0527 |\\n| M2UGen | 0.0513 |\\n| ours | 0.0542 |\\n\\nIt is worth mentioning that, because the audio-visual binding of ImageBind is trained on the AudioSet dataset, which is constructed based on the pairing of a sound-producing object and the sound it produces (that is, a traditional video-to-audio dataset), the ImageBind AV score is invalid and irrelevant for our task (we have already elaborated on this in the \\\"Integration of foley and sound effects\\\" paragraph in the original paper). Although AudioSet contains over 1M music samples, most of them are performance videos, i.e., videos that record the sound of certain musical instruments. This essentially categorizes them as video-to-audio type data. \\n\\nOur task does not require such stringent relationships: a video of a car does not imply the sound of its engine, nor does a video of a violin necessarily imply the sound of the violin.
Therefore, using this metric for our task is unreasonable. Just because other works (VidMuse and M$^2$UGen, mentioned by reviewer yVkt) have used this metric does not mean it is reasonable. **This was our initial reason for not using this metric.** However, we have included the results here, which demonstrate that our method still outperforms the baselines.\"}", "{\"summary\": \"This paper proposes MuVi, a new method for generating music that aligns with video content, focusing on both semantic alignment and rhythmic synchronization. MuVi's design includes a \\\"visual adaptor\\\" that extracts relevant visual features from videos, which helps guide music generation to match the mood and rhythm of the video. To improve synchronization between visual events and musical beats, the authors use a pre-training technique that contrasts synchronized and unsynchronized video-music pairs, helping the model learn rhythmic alignment.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces a new method for generating music that aligns with video content, focusing on both semantic alignment and rhythmic synchronization within a generative video-music Diffusion Transformer framework.\\n2. The model employs a joint encoder-decoder architecture that integrates a contrastive pre-training scheme for improved synchronization. The inclusion of a \\\"visual adaptor\\\" enhances the model\\u2019s ability to compress and process high-frame-rate visual inputs, capturing video cues for music generation.\\n3. The paper is well-organized, presenting MUVI's methodology alongside a series of experiments. 
The framework demonstrates superior performance over the baseline on the test dataset across various evaluation metrics, showcasing its effectiveness in video-to-music generation.\", \"weaknesses\": \"1.Novelty and Contribution: The paper presents its main contributions as a visual adaptor and a contrastive training scheme, but visual adaptor techniques and contrastive learning have already been used in video-to-music generation tasks [1, 2] and are commonly employed in multi-modal learning [3, 4]. The design of the visual adaptor lacks unique innovation, primarily involving a selection of common aggregation and pooling methods, which appears more as an ablation study to find the best setting. Overall, the proposed method lacks novelty, and the results in Table 2 indicate that the proposed method does not outperform the baseline across all metrics.\\n\\n2.Lack of Justification and Explanation: Another weakness is the lack of clear justification and explanation across different sections, from design choices to metric selection.\\n\\n2.1 The adaptor design section lacks a clear justification. For instance, why were these three adaptor methods chosen, instead of exploring alternative multi-modal adaptors [1, 3]? Why is CLS set as the query instead of the key-value pair?\\n\\n2.2 The paper introduces various metrics for evaluating model performance, but lacks explanations for each metric. For instance, in lines 446-449: \\u201cresulting in lower sound quality, diversity, and poorer synchronization,\\u201d it is unclear which metrics specifically measure sound quality, diversity, or synchronization. Additionally, the statement \\u201cthe channel-wise fusion of the visual conditioning still aids in synchronization\\u201d lacks experimental evidence to substantiate this claim.\\n\\n2.3 Ambiguous phrases like \\u201cwe believe\\u201d (lines 222, 483) and \\u201cmight lead to\\u201d (lines 77, 483) appear multiple times in the paper. 
Clear support or reasoning should be provided for these assertions.\\n\\n3. Presentation and Writing: There are some presentation and writing issues within the paper.\\n\\n3.1 The introduction (line 63) highlights tackling \\u201cIntegration of foley and sound \\teffects,\\u201d yet no further details or experiments addressing this topic are provided in the rest of the paper.\\n\\n3.2 The temporal shift method introduced in Section 3.2 is motivated as a significant contribution, but it lacks a clear explanation. Additionally, some symbols, like \\\"m,\\\" are redefined multiple times, \\\"C'\\\" is not defined, which may cause confusion for readers.\\n\\n\\n4. Experimental Comparison: A main weakness of the paper is lack of the experimental comparisons, which include only one baseline method.\\n\\n4.1 A simple baseline could have been constructed by combining an existing video understanding model with a music generation model, similar to the approach in [2, 6].\\n\\n4.2 The experiments omit comparisons with several relevant state-of-the-art methods, such as Diff-BGM [5], VidMuse [6], and Dance2Music-Diffusion [7].\\n\\n4.3 The M^2Ugen method shows comparable or superior results in terms of audio quality (Table 2). Fine-tuning this method on the dataset used in this paper could provide additional insight into its performance.\", \"references\": \"[1] Liu S, Hussain A S, Sun C, et al. M$^{2}$UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models[J]. arXiv preprint arXiv:2311.11255, 2023.\\n\\n[2] Lin Y B, Tian Y, Yang L, et al. VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos[J]. arXiv preprint arXiv:2409.07450, 2024.\\n\\n[3] Zhang R, Han J, Liu C, et al. Llama-adapter: Efficient fine-tuning of language models with zero-init attention[J]. arXiv preprint arXiv:2303.16199, 2023.\\n\\n[4] Radford A, Kim J W, Hallacy C, et al. 
Learning transferable visual models from natural language supervision[C]//International conference on machine learning. PMLR, 2021: 8748-8763.\\n\\n[5] Li S, Qin Y, Zheng M, et al. Diff-BGM: A Diffusion Model for Video Background Music Generation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 27348-27357.\\n\\n[6] Tian Z, Liu Z, Yuan R, et al. VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling[J]. arXiv preprint arXiv:2406.04321, 2024.\\n\\n[7] Zhang C, Hua Y. Dance2Music-Diffusion: leveraging latent diffusion models for music generation from dance videos[J]. EURASIP Journal on Audio, Speech, and Music Processing, 2024, 2024(1): 48.\", \"questions\": \"In addition to the weaknesses, here are some points that raise further confusion or seem inconsistent in the paper:\\n\\n1.Claim on Previous V2M Methods (lines 39-41): The authors claim that \\\"Previous V2M methods focus on global features,\\\" presenting this as a limitation of past approaches. However, this appears inconsistent with prior work, as several existing methods focus on local clip features for training. For instance, V2Meow [1] and VMAS [2] emphasize local clip-based features, while VidMuse [3] captures both local and global features through long-short-term modeling. The authors should clarify and provide evidence to support their assertion about the emphasis on global features in previous V2M approaches.\\n\\n2.Choice of Beat Synchronization Metrics and Exclusion of Dance Video Music Generation for Comparison: The authors select Beats Coverage Score (BCS) and Beats Hit Score (BHS) as metrics to evaluate beat synchronization, following the approach in [4] (line 346), which specifically targets music generation for dance videos. 
However, the authors then claim in line 364 that \\\"D2M-GAN are not considered for comparison because their scope of application differs from ours.\\\" If dance-related videos are outside MuVi\\u2019s intended scope, it is unclear why dance-specific metrics are being applied for evaluation. This raises a need for clarification.\\n\\n3.Choice of MuVi(beta) Setting for Comparison: The paper claims \\\"use CLIP-ViT(base-patch16) and the attention pooling adaptor as the visual encoder\\\" for MuVi(beta) (lines 366-367). However, Table 1 shows that the VideoMAE V2 with a Softmax adaptor yields better results for this setting. It is unclear why a suboptimal setting was selected for MuVi(beta), as this choice could impact the fairness and interpretability of the comparisons. An explanation from the authors on the rationale for this choice would provide more clarity.\\n\\n[1] Su K, Li J Y, Huang Q, et al. V2Meow: Meowing to the Visual Beat via Video-to-Music Generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(5): 4952-4960.\\n\\n[2] Lin Y B, Tian Y, Yang L, et al. VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos[J]. arXiv preprint arXiv:2409.07450, 2024.\\n\\n[3] Tian Z, Liu Z, Yuan R, et al. VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling[J]. arXiv preprint arXiv:2406.04321, 2024.\\n\\n[4] Zhu Y, Olszewski K, Wu Y, et al. Quantized gan for complex music generation from dance videos[C]//European Conference on Computer Vision. 
Cham: Springer Nature Switzerland, 2022: 182-199.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer yVkt (Part 4/N)\", \"comment\": \"**[About temporal shift]**\\n\\n> 3.2 The temporal shift method introduced in Section 3.2 is motivated as a significant contribution, but it lacks a clear explanation. Additionally, some symbols, like \\\"m,\\\" are redefined multiple times, \\\"C'\\\" is not defined, which may cause confusion for readers.\\n\\nThe symbol \\\"m\\\" in lines 267-268 is improperly used, as it should be used to indicate any features from the music source in other texts. Therefore, we will refine the text and change the \\\"m\\\" here to \\\"b\\\" to make a distinction. As for \\\"C\\\", we don't see any symbol \\\"C\\\" in the temporal shift section. We hope that reviewer yVkt can clarify this issue clearly. \\n\\nIn addition, reviewer yVkt mentioned that the temporal shift section in Section 3.2 lacks a clear explanation, but did not specify which parts were difficult to understand or potentially confusing, except the improper use of symbols. Therefore, we can only provide a general explanation once again. To create a negative pair, we temporally shift the audio track to create asynchrony. Specifically, the shift offset is restricted by the minimum BPM of the music track. Because the music data we used inherently has unstable rhythm, the BPM is not constant through time. We use a dynamic beat tracking algorithm mentioned in the text to obtain a time-varying BPM sequence for each track, and select the minimum value to restrict temporal shift (if we select the maximum value, during the shift operation, there is still a chance that a whole beat of lower BPM is shifted).
If the minimal beat cycle is $n$ frames, we only shift $kn+b$ frames in both directions, avoiding shifting whole beats. Also, we skip the half-beat areas to avoid backbeat synchronization, so the range of $b$ is $ {\\u23080.1n\\u2309, .., \\u230a0.4n\\u230b, \\u23080.6n\\u2309, ..., \\u230a0.9n\\u230b}$. \\n\\n### **Experimental Comparison**\\n\\n**[Comparison with other baselines]**\\n\\n> 4.1 A simple baseline could have been constructed by combining an existing video understanding model with a music generation model, similar to the approach in [2, 6].\\n> 4.2 The experiments omit comparisons with several relevant state-of-the-art methods, such as Diff-BGM [5], VidMuse [6], and Dance2Music-Diffusion [7].\\n\\nReviewer yVkt mentioned that \\\"a simple baseline could have been constructed by combining an existing video understanding model with a music generation model\\\", and yes, that is why we construct MuVi(beta), an existing visual model (CLIP-ViT + attention, without contrastive pre-training) and a simple flow-matching-based generator. However, reviewer yVkt raised concerns during the \\\"Question\\\" section 3, stating that this baseline is simple and suboptimal and therefore unfair. We find this point to be contradictory. We will discuss more about this issue in the corresponding \\\"Question\\\" section.\\n\\nReviewer yVkt mentioned that we omitted comparisons with other SOTA methods [2, 7, 8, 9]. Therefore, we clarify this issue and provide additional experiments as requested by reviewer yVkt here.\\n\\n1. VMAS [2] has still not released their code, and they released their paper on September 11, which essentially means our work was conducted concurrently. In addition, our own replication of their method produced unsatisfactory results. Consequently, we abandoned this unfair comparison.\\n2. The situation with VidMuse [7] is similar. They only released their code on October 14th, making any comparison before that date unfair. 
Nevertheless, we managed to conduct a comparison with VidMuse during the rebuttal phase, using the released checkpoints. The results are shown below, where we can see that VidMuse is outperformed by our method. \\n3. Diff-BGM [8] generates symbolic music, which requires a symbolic waveform synthesizer to transform the generated music score to audio. In fact, the whole set of metrics is different between Diff-BGM and acoustic generators like ours. Therefore, we don't understand why reviewer yVkt requires us to compare with them.\\n4. Dance2Music-Diffusion [9], like other dance2music methods, aims to generate music tracks for dancing movements. It incorporates special designs for human pose and movement understanding, which are not designed for general videos, resulting in an unfair comparison. Nevertheless, we still made every effort to test their model and provided this comparison in response to reviewer yVkt's request. The results are listed below, where we can see that Dance2Music-Diffusion is outperformed by our method.\\n\\n| Methods | FAD | KL | IS | FD | BCS | BHS | SIM |\\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n| VidMuse | 8.13|4.88|1.50|43.82|81.35 |36.12|3.30 |\\n| Dance2Music-Diffusion | 9.42|4.69|1.39|46.11|84.45 |35.91|2.39 |\\n| ours | 4.28|3.52|1.63|28.15|104.17|49.23|19.18|\"}
Dzh0hQPpuf
Student-Informed Teacher Training
[ "Nico Messikommer", "Jiaxu Xing", "Elie Aljalbout", "Davide Scaramuzza" ]
[ "Imitation learning with a privileged teacher has proven effective for learning complex control behaviors from high-dimensional inputs, such as images. In this framework, a teacher is trained with privileged task information, while a student tries to predict the actions of the teacher with more limited observations, e.g., in a robot navigation task, the teacher might have access to distances to nearby obstacles, while the student only receives visual observations of the scene. However, privileged imitation learning faces a key challenge: the student might be unable to imitate the teacher's behavior due to partial observability. This problem arises because the teacher is trained without considering if the student is capable of imitating the learned behavior. To address this teacher-student asymmetry, we propose a framework for joint training of the teacher and student policies, encouraging the teacher to learn behaviors that can be imitated by the student despite the latter's limited access to information and its partial observability. Based on the performance bound in imitation learning, we add (i) the approximated action difference between teacher and student as a penalty term to the reward function of the teacher, and (ii) a supervised teacher-student alignment step. We motivate our method with a maze navigation task and demonstrate its effectiveness on complex vision-based quadrotor flight and manipulation tasks.
[ "Reinforcement Learning", "Imitation Learning", "Robotics" ]
Accept (Spotlight)
https://openreview.net/pdf?id=Dzh0hQPpuf
https://openreview.net/forum?id=Dzh0hQPpuf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zSyABEaKF2", "zO1G1yyowe", "vU1BjqbGdc", "tpkaZpRgVf", "sF4VA7viP0", "qNukEZhoei", "q4X4Ztp9Is", "lvdj2eariN", "jWQbuq97ia", "iOwlOjGXZ1", "i2PZ6IoNoP", "gN4meFKH92", "f65SNBmAn5", "eb708io9S2", "e6YXMFOBuk", "cPRxJe5b6Y", "c0VDp7JWzW", "ZOHtXZRvbB", "WcdHR1p41Y", "RghYvL3lCu", "RT3vOylQMS", "Oh7PIxNyNw", "KRg5CsOck0", "JVUTgEIHHn", "GPhlT4PaET", "EastyxJqdq", "EPzybtJpPS", "BMmc1ZNX86", "AgT11PAZ4N", "8B2aibltuI", "3jUZwoVCrn", "14HSYAkgdw" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "comment", "official_review" ], "note_created": [ 1732553827264, 1732674028878, 1733311148877, 1733146345441, 1732553661946, 1732891988823, 1732555398007, 1733136386115, 1737523628238, 1732891929733, 1732554287731, 1732969237404, 1733146426090, 1731128852669, 1730656507463, 1732552928743, 1732892009359, 1732554442585, 1732552800456, 1732723252464, 1732795725998, 1733146255734, 1732912236738, 1734764926170, 1732733272148, 1732809234013, 1732554803833, 1742196100250, 1730536810445, 1732550591940, 1740422665812, 1730693332095 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_2qRa" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4246/Reviewer_Xbsv" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_mh2n" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_2qRa" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_mh2n" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_2qRa" ], [ "ICLR.cc/2025/Conference/Submission4246/Area_Chair_55eR" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_2qRa" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "~Nico_Messikommer1" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_Xbsv" ], [ "ICLR.cc/2025/Conference/Submission4246/Authors" ], [ "~Philip_Bachman1" ], [ "ICLR.cc/2025/Conference/Submission4246/Reviewer_xapm" ] ], "structured_content_str": [ "{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q1: How does the proposed method theoretically compare to existing approaches in terms of convergence guarantees and sample complexity?**\\n\\nWe agree that a stronger theoretical analysis of our method might be beneficial, but we believe it would be beyond the scope of this work. Nevertheless, our experiments on two complex robotics tasks demonstrate the generality of the framework and its ability to improve student performance in challenging Imitation Learning settings. 
We leave a deeper theoretical investigation as an avenue for future work.\\n\\n---\\n\\n**Q2: What is the sensitivity of the method to the choice of the KL-divergence penalty weight, and how can this hyperparameter be effectively tuned?**\\n\\nThank you for suggesting this interesting ablation. We conducted experiments for the vision-based manipulation task to evaluate the impact of different weightings for the KL divergence term (0.1, 0.075, 0.05, 0.025). The results show that the vision-based student consistently outperforms the baseline across all tested values, with the highest performance achieved at a weighting factor of 0.025. Overall, our method demonstrated robustness to this hyperparameter, consistently achieving higher success rates than the baseline.\\n\\n---\\n\\n**Q3: Can the authors provide more detailed ablation studies to demonstrate the individual contributions of the reward penalty and the KL-divergence supervision?**\\n\\nThank you for this valuable suggestion. We performed this ablation study on the vision-based student in the manipulation task. Namely, we exclude the reward penalty while keeping the KL divergence in the teacher updates, the success rate drops from 0.88 to 0.74. This performance drop is even larger (from 0.88 to 0.47) when the KL divergence in the teacher update is removed while keeping the reward penalty. These results suggest that feature alignment between the teacher and student, driven by the KL divergence, has a greater impact on the imitability of the teacher's behavior. 
The highest success rate is achieved when both the KL divergence and the reward penalty are used to shape the behavior of the teacher.\\n\\n---\\n\\n**Q4: How well does the method scale to more complex environments and tasks, and what are the potential limitations in terms of computational overhead and sample efficiency?**\\n\\nWe believe that the tasks of vision-based obstacle avoidance with a quadrotor and vision-based manipulation offer complex environments that are highly relevant to real-world applications. The manipulation task requires the policy to control a robot with complex kinematics and high degrees of freedom, and to manage contact interactions, whereas the quadrotor task involves a mobile robot performing agile movements to avoid obstacles, which necessitates handling highly non-linear dynamics.\\nOur proposed method can readily be applied across both tasks without major modifications. Therefore, we do not anticipate any additional computational overhead.\\n\\n---\\n\\n**Q5: Can the authors discuss the potential extensions of the proposed framework to other imitation learning algorithms and settings, such as inverse reinforcement learning or multi-agent imitation learning?**\\n\\nOur proposed framework focuses on adapting the teacher training to the capabilities of the student, which sets it apart from most existing imitation learning methods that primarily train the student without modifying the teacher. Consequently, our approach can be combined with many existing imitation learning algorithms to better align the student with the teacher. This is especially relevant for IL focusing more on representation learning. \\n\\nExtending our framework to multi-agent imitation learning is also feasible as long as access to the teacher policy is available during training. 
In such scenarios, both teachers and students can be trained simultaneously without requiring significant changes to the training pipeline.\\n\\nIn contrast, applying our approach to the inverse reinforcement learning setting is not straightforward. Since IRL assumes the policy generating the demonstrations is unknown, our framework relying on adapting the teacher cannot be directly applied in such settings.\"}", "{\"title\": \"Thanks for the update, still have remaining questions.\", \"comment\": \"Hello,\\nI thank the authors for their responses, which will be helpful for my final decision. I still have some questions on differences between proposed method and actual implementation.\\n\\nIt seems like the objective prescribes a KL divergence term to keep the teacher close to the student. But in section 4.2 Rollout phase, the teacher is trained on an additional penalty term based on the action difference between teacher and proxy student. Shouldn't the KL divergence term be enough? Why add the additional penalty term, and why is it helpful over just using KL? \\n\\nNext, there are some details on shared networks and updates to particular parts of the network that I'm not clear on. It seems like the student and teacher action decoders are shared. The action decoding layers are only updated on the policy gradient computed with the task reward. What is going on here? Are the encoders getting gradients from the full objective? \\n\\nIt would be great if the authors can write out the exact computation graph somewhere, like what networks are getting which gradients, etc. The current architecture figure in the paper isn't too intuitive, especially the uni-directional arrows for the loss / KL don't clearly show me what is the prediction and what is the target.\"}", "{\"title\": \"Final Post-Rebuttal Response\", \"comment\": \"Once again, we would like to thank all reviewers and area chairs for their constructive feedback and discussions. 
We greatly value the reviewers' thoughtful feedback on clarity, related work, and additional experiments. Based on this helpful feedback and the ensuing discussions, we have significantly improved the manuscript. This improvement is reflected in the scores raised to positive levels by three out of four reviewers. We kindly hope the fourth reviewer may find our responses and enhancements sufficient to reconsider their score as we believe to have adequately addressed their concerns.\", \"the_improvements_to_the_manuscript_are_summarized_below\": \"- **Ablations**: We conducted ablations to evaluate the impact of different components of our proposed method. These results are now included in the updated manuscript. The results show that our method is robust to hyperparameter choices and that all proposed components are beneficial for significantly improving performance.\\n\\n- **Related Work**: We have incorporated all the missing references suggested by the reviewers and expanded the discussion to better contextualize our contributions.\\n\\n- **Baselines**: To more effectively highlight the advantages and performance improvements of our method, we have added the results of three baseline methods (HLRL, COSIL, DWBC). Given the complexity of the tasks, these comparisons further demonstrate the effectiveness of our approach.\\n\\n- **Technical Clarity**: We enhanced the clarity of the methodology and experiments by restructuring sections, improving descriptions, and providing additional technical details. For the rebuttal phase, we opted for short and concise changes to the manuscript to ease the iteration with the reviewers. We will make sure to further improve the clarity and coherence of the text for the final version.\\n\\nWe would like to highlight that the additional experiments were conducted with due diligence. 
For each baseline method, we performed a parameter search and reported the mean and variance of the performance of five (vision-based manipulation) and three (vision-based quadrotor flight) seeds. This guarantees a fair comparison. The final performance of most baselines with a fixed teacher converges to a similar performance (significantly lower than our approach), confirming the benefit of changing the teacher during training.\\n\\n---\\n\\n**Final Updated Experimental Results**\\n\\n\\n| **Methods** | **Success Rate (Manipulation)** | **Success Rate (Quadrotor)** |\\n|---------------------|----------------------------|----------------------------|\\n| BC | 0.16 \\u00b1 0.15 | 0.05 \\u00b1 0.04 |\\n| DAgger | 0.34 \\u00b1 0.31 | 0.08 \\u00b1 0.03 |\\n| **HLRL** | 0.61 \\u00b1 0.22 | 0.31 \\u00b1 0.11 |\\n| **DWBC** | 0.63 \\u00b1 0.18 | 0.35 \\u00b1 0.07 |\\n| **COSIL** | 0.56 \\u00b1 0.21 | 0.30 \\u00b1 0.07 |\\n| w/o Align (Ours) | 0.61 \\u00b1 0.18 | 0.38 \\u00b1 0.11 |\\n| w Align (Ours) | **0.88 \\u00b1 0.07** | **0.46 \\u00b1 0.04** |\\n#### **Updated baseline experiments.**\\n\\n---\\n\\n\\n| **Configuration** | **Success Rate** |\\n|-------------------------------|-------------------------|\\n| w Reward / wo Loss | 0.47 \\u00b1 0.37 |\\n| wo Reward / w Loss | 0.74 \\u00b1 0.08 |\\n| wo shared decoder | 0.62 \\u00b1 0.27 |\\n| \\u03bb = 0.1 | 0.81 \\u00b1 0.20 |\\n| \\u03bb = 0.075 | 0.77 \\u00b1 0.18 |\\n| (Default) \\u03bb = 0.05 | 0.88 \\u00b1 0.07 |\\n| \\u03bb = 0.025 | 0.95 \\u00b1 0.03 |\\n#### **Ablation Experiments.**\"}
If our rebuttal has adequately addressed your concerns, we would greatly appreciate it if you could consider adjusting your score accordingly. Thank you once again for your time and for providing such valuable feedback.\"}", "{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"**W1: Limited theoretical analysis: The paper lacks a comprehensive theoretical analysis comparing the proposed method to existing approaches.**\\n\\nWe do agree that such a theoretical analysis would provide additional justification for our approach. Unfortunately, this analysis is beyond the scope of this work. Nevertheless, we believe our experiments on two complex robotics tasks demonstrate the generality of the framework and its ability to improve student performance in challenging Imitation Learning settings. We leave a deeper theoretical investigation as an avenue for future work.\\n\\n---\\n\\n**W2: Insufficient ablation studies: The individual contributions of the reward penalty and the KL-divergence supervision are not clearly distinguished through ablation experiments, making it difficult to assess the necessity of each component.**\\n\\nBased on the helpful feedback, we conducted additional ablation studies on the manipulation task. These experiments evaluate the contributions of both proposed components. 
The results show that each component independently enhances performance while also demonstrating low sensitivity to parameter variations.\\n\\n| **Configuration** | **Success Rate** |\\n|-------------------------------|-------------------------|\\n| w Reward / wo Loss | 0.47 \\u00b1 0.37 |\\n| wo Reward / w Loss | 0.74 \\u00b1 0.08 |\\n| wo shared decoder | 0.62 \\u00b1 0.27 |\\n| \\u03bb = 0.1 | 0.81 \\u00b1 0.20 |\\n| \\u03bb = 0.075 | 0.77 \\u00b1 0.18 |\\n| (Default) \\u03bb = 0.05 | 0.88 \\u00b1 0.07 |\\n| \\u03bb = 0.025 | 0.95 \\u00b1 0.03 |\\n#### **Ablation Experiments.**\\n\\n---\\n\\n**W3: Limited experimental evaluation**\\n\\nWe have included a comparison with \\u201cDeep Whole-Body Control\\u201d [1] and \\u201cReal-World Humanoid Locomotion with Reinforcement Learning\\u201d [2] (currently only for vision-based manipulation) in the revised manuscript. The results clearly show that our proposed method outperforms both baselines. Additionally, we have expanded the experimental section to provide further insights into the results and the observed improvements.\\n\\n| **Methods** | **Success Rate (Manipulation)** | **Success Rate (Quadrotor)** |\\n|---------------------|----------------------------|----------------------------|\\n| BC | 0.16 \\u00b1 0.15 | 0.05 \\u00b1 0.04 |\\n| DAgger | 0.34 \\u00b1 0.31 | 0.08 \\u00b1 0.03 |\\n| **HLRL** | 0.61 \\u00b1 0.22 | tbd |\\n| **DWBC** | 0.63 \\u00b1 0.18 | 0.35 \\u00b1 0.07 |\\n| w/o Align (Ours) | 0.61 \\u00b1 0.18 | 0.38 \\u00b1 0.11 |\\n| w Align (Ours) | **0.88 \\u00b1 0.07** | **0.46 \\u00b1 0.04** |\\n#### **Updated baseline experiments.**\\n\\n[1] Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion, CoRL 2022\\n\\n[2] Real-World Humanoid Locomotion with Reinforcement Learning, Science Robotics 2024\\n\\n---\\n\\n**W4: Clarity of certain technical details: Some aspects of the paper, such as the definition of the proxy student network and the 
justification for the equality of covariance matrices in equations (9) and (10), require further clarification**\\n\\nWe clarified the definition of the proxy student as well as the equality of the covariance matrices in equations (9) and (10).\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Thanks again for serving as a reviewer, we really appreciate your comments. Your feedback has strongly improved our paper, and we believe that we have addressed all of your concerns as we now have included 3 more baselines, 3 more ablations and substantially improved the writing. The end of the discussion period is rapidly approaching, and we would really appreciate it if you could check our response and let us know whether your concerns are well addressed. If not or in case you have any further concerns we would be more than happy to work with you on improving the paper.\"}", "{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q1: Why does the \\\"w/o align\\\" baseline perform so well compared to BC and DAgger, particularly in the quadrotor environment? It seems this baseline would produce behaviors similar to that of BC and DAgger.**\\n\\nThe main difference between our method without alignment and the imitation learning baselines is that we used a shared action decoder between the teacher and the student policies. Instead of aligning only the actions, we also align the embeddings of the action decoders. Similar to the results shown in [1][2], we believe that aligning both the actions and the observation representations will help the policy perform better than aligning only the actions.\\n\\nWe would also like to emphasize that for the task of quadrotor obstacle avoidance, due to the evaluation period being rather short, the success rate information does not sufficiently capture the policy behavior. Hence, we have now introduced two new evaluation metrics describing how \\\"environment aware\\\" the resulting policies are. 
For example, the vision-based student policy will need to always look in the direction it flies towards to ensure it does not collide. This behavior is not learnable without changing the teacher policy's behavior. We observed that our approach still outperforms the baseline approaches, and the baseline without alignment performs similarly to the DAgger and BC baselines.\n\n| **Methods** | **Velocity Angle [\u00b0] \u2193** | **Num. Obstacle in View \u2191** |\n|---------------------|--------------------------|-----------------------------|\n| BC | 75.5 | 1.92 |\n| DAgger | 78.6 | 2.42 |\n| DWBC | 46.7 | 2.33 |\n| w/o Align (Ours) | 63.2 | 2.61 |\n| w Align (Ours) | **32.2** | **3.51** |\n#### **Perception-Aware Experiments.**\n\n[1] RMA: Rapid Motor Adaptation for Legged Robots, RSS 2021\n\n[2] In-Hand Object Rotation via Rapid Motor Adaptation, CoRL 2022\n\n---\n\n**Q2: Does the proposed method require more interactions than the baselines? Were baselines evaluated with equivalent budgets of privileged and high-dimensional samples?**\n\nWe are grateful for raising this concern. All the tested methods are evaluated with the exact same number of privileged and high-dimensional samples. The different teachers for the imitation learning method are taken from the w/o alignment experiments, which train the teacher purely based on the RL reward. The baselines are then trained with the same number of rendered images as used in our approach.\nWe have added this clarification to the revised manuscript.\n\n**Q3: What is the performance without using a shared action decoder?**\n\nWe performed ablation experiments in the vision-based manipulation task. Without the shared action decoder, the performance of the vision-based student drops from 0.88 to 0.62 success rate. This confirms that the low-level task information can be shared between privileged teacher and student. 
Thus, the learned student policy benefits by also leveraging the teacher experiences for the shared action decoder.\n\n| **Configuration** | **Success Rate** |\n|-------------------------------|-------------------------|\n| w Reward / wo Loss | 0.47 \u00b1 0.37 |\n| wo Reward / w Loss | 0.74 \u00b1 0.08 |\n| wo shared decoder | 0.62 \u00b1 0.27 |\n| \u03bb = 0.1 | 0.81 \u00b1 0.20 |\n| \u03bb = 0.075 | 0.77 \u00b1 0.18 |\n| (Default) \u03bb = 0.05 | 0.88 \u00b1 0.07 |\n| \u03bb = 0.025 | 0.95 \u00b1 0.03 |\n#### **Ablation Experiments.**\n\n**Q4: How sensitive is the method to the weight of the KL divergence penalty term?**\n\nThank you for suggesting this ablation. We conducted experiments for the vision-based manipulation task to evaluate the impact of different weightings for the KL divergence term (0.1, 0.075, 0.05, 0.025). The results show that the vision-based student consistently outperforms the baseline across all tested values, with the highest performance achieved at a weighting factor of 0.025. Overall, our method demonstrated robustness to this hyperparameter, consistently achieving higher success rates than the baseline.\n\n---\n\n**Q5: Is it possible to have videos of the experiments in simulation?**\n\nWe have included a video showcasing the performance for the two complex tasks: vision-based quadrotor flight and vision-based manipulation.\n\n---\n\n**Q6: Out of curiosity, have you experimented with using forward vs reverse KL divergence as the penalty term?**\n\nThat is an interesting question. We have not yet conducted such experiments due to time constraints. 
But we can include the result of such an experiment in the case of acceptance for the final version.\"}", "{\"comment\": [\"Thank you for your response, the additional experiments, and the accompanying videos.\", \"The paper has seen improvements, and my primary concern has been reasonably addressed: the method is now better validated and contextualized.\", \"I am inclined to raise my score, acknowledging the authors' efforts.\", \"That said, there are still weaknesses:\", \"While the clarity has improved, the paper remains somewhat hard to follow, despite the underlying method being conceptually simple in my opinion.\", \"Watching the videos, the experimental results are a bit underwhelming. The observation asymmetry appears artificial: it is created in the manipulation environment via a weird camera angle, and in the quadrotor environment, it could arguably be addressed by adding a heading-related reward term.\", \"Introducing a more practical robotic environment, where achieving perception-aware behaviors presents a genuine challenge, would make the paper truly compelling.\", \"I encourage the authors to continue improving their paper, as it is still borderline in my opinion.\", \"Below, I have included a few suggestions to improve the writing, mostly centered on the method sections (these are subjective, and I defer to the authors' judgment in considering them).\", \"---\", \"From my point of view, the main contributions of the paper are the KL-regularization of the teacher and the introduction of a proxy student. These should be prominently highlighted and explained clearly. The remaining elements, while relevant, are secondary and could be streamlined. Simplifying and condensing the explanation of the method overall would be beneficial.\", \"KL-regularized RL is a well-established technique, ex: [8-16] and many more. Including a discussion would be appropriate.\", \"The derivations in Eq. 
3\\u20138 are straightforward and might not be necessary in the main paper. Summarizing them into a single equation and moving the detailed steps to the appendix would be more concise. Additionally, these derivations seem familiar and may already exist in the literature, potentially in one of [8-16].\", \"Including more references is better in general, and I feel the initial submission had relatively few for this type of paper.\", \"Figure 1 is difficult to understand.\", \"The backpropagation graph is still unclear.\", \"In Section 4, adding more equations that explicitly detail the training losses for each network would be helpful.\", \"In particular, the integration and training of the shared action network is still unclear.\", \"Algorithm 1 is quite helpful. It could be refined and moved to the main paper.\", \"The explanations and the derivation of the KL divergence between two Gaussians (Eq. 9,10) are trivial and would be better suited to the appendix.\", \"Having three separate \\\"method\\\" sections (Sec. 2,3,4) looks slightly unusual to me. 
Combining these into one or two sections might improve readability, flow and conciseness.\", \"Please also add implementation details for the baselines, for example in the appendix.\"], \"references\": [\"[8] Optimal control as a graphical model inference problem, 2012\", \"[9] Reinforcement learning and control as probabilistic inference: Tutorial and review, 2018\", \"[10] Information asymmetry in kl-regularized RL, 2019\", \"[11] Exploiting hierarchy for learning and transfer in kl-regularized rl, 2019\", \"[12] Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning, 2020\", \"[13] Accelerating online reinforcement learning with offline datasets, 2020\", \"[14] Accelerating reinforcement learning with learned skill priors, 2020\", \"[15] Training language models to follow instructions with human feedback, 2022\", \"[16] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, 2018\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Thanks again for serving as a reviewer, we really appreciate your comments. Your feedback has strongly improved our paper, and we believe that we have addressed all of your concerns as we now have included 3 more baselines, 3 more ablations and substantially improved the writing. The end of the discussion period is rapidly approaching, and we would really appreciate it if you could check our response and let us know whether your concerns are well addressed. If not or in case you have any further concerns we would be more than happy to work with you on improving the paper.\"}", "{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"**W1: The paper\\u2019s positioning in the related work as the only one considering information asymmetry in student-teacher framework is incorrect. 
There are several recent works that have looked at this problem [1-5], in fact the imitability cost like the one proposed is explored in [3].**\\n\\nWe sincerely appreciate the list of key related works, which has helped us better contextualize our contributions. In our revised manuscript, we have included a detailed discussion to position our work in relation to them. Specifically, most of the referenced works [2, 3, 4, 5] assume a fixed teacher, with the exception of [1], which proposes an adaptive DAgger inside a POMDP formulation using a belief state. Similarly to our formulation, [3] employs the KL divergence between the teacher and student actions as a divergence metric. However, in [3], this divergence metric is used exclusively to train the student by a weighted combination of imitating expert actions and optimizing for RL rewards.\\n\\n---\\n\\n**W2: As a consequence of the above, I think the paper misses key baselines that actually tackle problems under similar assumptions of asymmetry**\\n\\nThank you for raising this point. Based on comments from the other reviewers, we have now included additional baselines that follow similar assumptions. We have included a comparison with \\u201cDeep Whole-Body Control\\u201d [1], which adjusts the teacher encoder by enforcing a shared feature space. Additionally, we also compare against the proposed \\u201cReal-World Humanoid Locomotion with Reinforcement Learning\\u201d method [2] (currently only for vision-based manipulation). 
The results clearly show that our proposed method outperforms them.\\n\\n| **Methods** | **Success Rate (Manipulation)** | **Success Rate (Quadrotor)** |\\n|---------------------|----------------------------|----------------------------|\\n| BC | 0.16 \\u00b1 0.15 | 0.05 \\u00b1 0.04 |\\n| DAgger | 0.34 \\u00b1 0.31 | 0.08 \\u00b1 0.03 |\\n| **HLRL** | 0.61 \\u00b1 0.22 | tbd |\\n| **DWBC** | 0.63 \\u00b1 0.18 | 0.35 \\u00b1 0.07 |\\n| w/o Align (Ours) | 0.61 \\u00b1 0.18 | 0.38 \\u00b1 0.11 |\\n| w Align (Ours) | **0.88 \\u00b1 0.07** | **0.46 \\u00b1 0.04** |\\n#### **Updated baseline experiments.**\\n\\n[1] Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion, CoRL 2022\\n\\n[2] Real-World Humanoid Locomotion with Reinforcement Learning, Science Robotics 2024\\n\\n---\\n\\n**W3: Additionally, the introduced approach involves several hyperparameters \\u2013 the balancing term for \\u201cimitability\\u201d cost, the update style (frequency/batch sizes) of the alignment phase.**\\n\\nWe conducted ablation experiments for the vision-based manipulation task to evaluate the impact of different weightings for the KL divergence term (0.1, 0.075, 0.05, 0.025). The results show that the vision-based student consistently outperforms the baseline across all tested values, with the highest performance achieved at a weighting factor of 0.025. 
Overall, our method demonstrated robustness to this hyperparameter, consistently achieving higher success rates than the baseline.\n\n| **Configuration** | **Success Rate** |\n|-------------------------------|-------------------------|\n| w Reward / wo Loss | 0.47 \u00b1 0.37 |\n| wo Reward / w Loss | 0.74 \u00b1 0.08 |\n| wo shared decoder | 0.62 \u00b1 0.27 |\n| \u03bb = 0.1 | 0.81 \u00b1 0.20 |\n| \u03bb = 0.075 | 0.77 \u00b1 0.18 |\n| (Default) \u03bb = 0.05 | 0.88 \u00b1 0.07 |\n| \u03bb = 0.025 | 0.95 \u00b1 0.03 |\n#### **Ablation Experiments.**\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We are happy to hear that we could clarify the KL divergence question and improve the manuscript based on the helpful discussions. We are also thankful for the continued help to improve the clarity of the manuscript, especially spotting an error in the text. We will correct and clarify this point in the next version of the manuscript.\n\n---\n\n**Line 211: action decoder is just getting the task reward (i.e. no KL penalty?)**\n\nWe are grateful for pointing out this mistake in the sentence in Line 211. The shared action decoder is trained with the policy gradient (task reward + KL penalty) and the KL-Div gradient (computed based on the KL divergence between proxy and teacher actions). Thus, it is trained with the same gradients as the teacher encoder. We will correct this point in the next version of the manuscript.\n\n---\n\n**Line 788 and 794: these lines suggest you're computing policy gradient wrt task reward + penalty term. And is the encoder getting gradients from task reward + penalty term + KL term, and the action decoder getting gradients from just the task reward? Why do we need to be so selective with the gradients, is it to prevent collapse due to the shared action decoder?**\n\nThat is correct; the policy gradient is computed based on the task reward and the penalty term, see also Eq. 8. 
The teacher encoder is trained with the same gradients as the shared task decoder, i.e., with the policy gradient (task reward + KL penalty) and the KL-Div gradient. As correctly stated, during the alignment phase, the shared task decoder is frozen to avoid the collapse of the feature space between the different encoders (teacher, student, proxy student). The collapse can happen since one perfect alignment between the three encoders is achieved by predicting a constant output, which harms the task performance.\n\nThe different networks are trained based on the following gradients:\n- **Shared action decoder**: policy gradient (task reward + KL penalty), KL-Divergence gradient.\n- **Teacher encoder**: policy gradient (task reward + KL penalty), KL-Divergence gradient.\n- **Student encoder**: L1-Loss between student and frozen teacher network activations.\n- **Proxy student encoder**: L1-Loss between proxy student and frozen student network activations.\"}", "{\"comment\": \"Thank you for the detailed response. I appreciate the changes in the related work section to appropriately contextualize the paper's contribution and clarifications to my concerns. I am now in agreement with the novelty of the contribution and have raised my score in response to the new experiments comparing the proposed approach to more closely related works in the area. However, I feel that the paper in its current state is a little rushed and would encourage the authors to consider improving the presentation \u2013 especially providing more implementation details for the baselines.\"}", "{\"summary\": \"The authors propose to improve imitation learning between a privileged expert policy that sees more information and a partially observed student policy. They propose to train the teacher policy so that it maximizes return while staying close to the student policy distribution, so that the teacher and student trajectory distributions are more aligned. 
The authors show this improves performance in several simulated robotic setups.\n\n---\nPost rebuttal - the authors have done a decent job addressing my concerns on experimentation and improved the presentation.\n\nUpdated my score to 6.\n\nI do agree with reviewer mh2n that this submission still feels a bit rushed, with a lot of important results (comparison to competitive baselines and ablations) coming in through the rebuttal period. I tend to be a bit more wary of results that come in through rebuttal, given the limited time and resources.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Simple approach to handling asymmetry in teacher student policy distillation, where teacher is trained to minimize divergence between teacher and student. This will constrain the teacher to states where the student can explore.\", \"Interesting experimental findings.\", \"The experiments show that teacher policies trained in this manner are better for teaching student policies.\", \"A teacher with this constraint gets higher returns than a teacher without, because learning to act with fewer inputs leads to more robustness and generalization behavior. This would be really interesting to explore further, with more experiments or analysis.\"], \"weaknesses\": \"## Experimental section has several problems\n- Experimental section is lacking, sparse in task selection and choices of baseline, and seems a bit contrived. 
Several ways to fix this:\\n\\t- More standardized benchmarks, using envs from prior work like [1, 2]\\n\\t- better baselines and analysis (see below)\\n\\t- Sim2real of the drone / manipulator results would round it out\\n\\n- choice of baselines is lacking, really should compare with prior work in handling asymmetric RL / IL problems ( see references below)\\n- Baseline details are completely missing, even if BC and Dagger are widely known, their training details need to be written down.\\n- little-to-no ablations, i.e. why the shared feature encoder? Wouldn't we expect that the teacher representation be very different than the student representation in very partially observed settings? \\n\\n- The authors need to put more work in the experimental section, which is verbose and unorganized. Having two giant paragraphs per experiment, one for setup, and one for results, is lazy and hard for readers to parse. Would like to see more figures, analysis of the teacher and student behavior, especially in the drone and manipulator case. IMO, the color maze experiment, which is a toy experiment, does not require that much text and space dedicated to explaining it. It could even be moved to appendix.\\n- Another way to improve the experimental section, is to provide some real world robot results. Because this is ICLR (more focused on learning methods), I would say this isn't a strict requirement, and I would appreciate more rigorous comparisons in the previous experiments. But real robot results would definitely round out the experimental section.\\n\\n## Another weakness - having to train a teacher policy from scratch\\n- One drawback of this paper is the need to train a teacher policy with their specialized objective. This makes the method harder to use in practice since this method requires training a teacher policy from scratch using RL and many, many samples in a fast simulator (10^8 for maze, 100M for manipulation). 
Getting a good simulator and doing sim2real is not always feasible.\\n\\t- In contrast, other methods like ADVISOR, COSIL, etc. (see references below) assume that the teacher policy is given, and do not require teacher training. In robotics, where reasonable teacher policies can be obtained through scripting and pre-existing controllers, it seems that approaches that do not modify the teacher are easier to use. \\n\\nSome discussion here would be appreciated.\\n\\n\\n## Minor: missing connections to prior work on regularizing privileged teachers and representations\\n- Regularizing the privileged policy / representation so that it doesn't stray too far away from the student has been explored in the past, see [6,7,8]\\n\\n1. Leveraging Fully Observable Policies for Learning under Partial Observability\\n2. Privileged Sensing Scaffolds Reinforcement Learning\\n3. Bridging the Imitation Gap by Adaptive Insubordination\\n4. TGRL: An Algorithm for Teacher Guided Reinforcement Learning\\n5. Impossibly Good Experts and How to Follow Them \\n6. Bridging the Sim-to-Real Gap from the Information Bottleneck Perspective\\n7. Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion \\n8. Designing Skill-Compatible AI: Methodologies and Frameworks in Chess\", \"questions\": \"See weaknesses above. I am eager to see the paper be improved.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes to tackle the problem of information asymmetry in the privileged training and distillation paradigm by proposing to adapt the teacher policy\\u2019s behavior to account for the student\\u2019s partial observability by adding a reward term to encourage imitability of actions. 
They additionally introduce a proxy student network that approximates how the student behaves conditioned on the teacher\\u2019s privileged observations to alleviate the need of generating potentially high-dimensional student observations for optimizing the teacher policy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation of the problem and proposed approach is presented intuitively (but lacks some clarity see clarification questions Q1-2).\", \"weaknesses\": \"* The paper\\u2019s positioning in the related work as the only one considering information asymmetry in student-teacher framework is incorrect. There are several recent works that have looked at this problem [1-5], in fact the imitability cost like the one proposed is explored in [3].\\n\\n* As a consequence of the above, I think the paper misses key baselines that actually tackle problems under similar assumptions of asymmetry (the considered BC and DAgger baselines are bound to fail here) and therefore does not establish novelty.\\n\\n* Additionally, the introduced approach involves several hyperparameters \\u2013 the balancing term for \\u201cimitability\\u201d cost, the update style (frequency/batch sizes) of the alignment phase. The paper does not describe the implications of these choices of the algorithm and in-reality these can be considerably hard to tune for each task and can raise concerns of applicability and reproducibility of the approach.\\n\\n_References_:\\n\\n[1] Warrington, Andrew, et al. \\\"Robust asymmetric learning in POMDPs.\\\" International Conference on Machine Learning. PMLR, 2021.\\n\\n[2] Weihs, Luca, et al. \\\"Bridging the imitation gap by adaptive insubordination.\\\" Advances in Neural Information Processing Systems 34 (2021): 19134-19146.\\n\\n[3] Nguyen, Hai, et al. 
\\\"Leveraging fully observable policies for learning under partial observability.\\\" arXiv preprint arXiv:2211.01991 (2022).\\n\\n[4] Walsman, Aaron, et al. \\\"Impossibly good experts and how to follow them.\\\" The Eleventh International Conference on Learning Representations. 2022.\\n\\n[5] Shenfeld, Idan, et al. \\\"TGRL: An algorithm for teacher guided reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2023.\", \"questions\": \"(Q1) The description of the alignment phase (Sec 4.2) can benefit from more clarity: What subset of experiences are good enough to train the proxy student \\u2013 how much divergence between student/student-proxy is tolerable? If this has to be a large subset of the data then there is no benefit from using a proxy student, one might as well use all the observations to update both the teacher and student. I think clarity of the approach will improve with a pseudocode description of all the phases with input requirements in the appendix.\\n\\n(Q2) In the experiments, the baselines aren\\u2019t described clearly \\u2013 what does w/o alignment mean? Does it suggest removal of proxy student network in (a) no imitability loss for teacher (so standard distillation setup) (b) imitability loss with actual student network. If (a), then why is the method \\u201cw/o align (ours)\\u201d in the result tables? Also, can the authors explain why the teacher returns w/o alignment are lower than with alignment in Figure 4, intuition suggests that it should be higher?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**W7: One drawback of this paper is the need to train a teacher policy with their specialized objective. 
This makes the method harder to use in practice since this method requires training a teacher policy from scratch using RL and many, many samples in a fast simulator (10^8 for maze, 100M for manipulation). Getting a good simulator and doing sim2real is not always feasible.**\\n\\nIndeed, training a teacher in our framework requires more training samples since the teacher policy must adapt to the evolving capabilities of the student while still accomplishing the task. However, the computational and time bottleneck lies in the student training, particularly in rendering high-dimensional images. Therefore, the cheap teacher interactions using low-dimensional observations can help to reduce the actual bottleneck of training a high-dimensional student. \\nAs correctly noted, our method depends on training the teacher with RL. However, privileged teacher training using RL is a common practice nowadays in the domain of robotics [1][2][3][4], which leads to robust policy performance in both simulation and the real world. \\nWe have added the assumption of joint teacher and student training to the revised manuscript.\\n\\n[1] Learning high-speed flight in the wild, Science Robotics 2021\\n\\n[2] Learning robust perceptive locomotion for quadrupedal robots in the wild, Science Robotics 2022\\n\\n[3] Extreme Robot Parkour, ICRA 2024\\n\\n[4] Humanoid Parkour Learning, CoRL 2024\\n\\n[5] Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion, CoRL2022\\n\\n[6] Real-World Humanoid Locomotion with Reinforcement Learning, Science Robotics 2024\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Thanks again for serving as a reviewer, we really appreciate your comments. Your feedback has strongly improved our paper, and we believe that we have addressed all of your concerns as we now have included 3 more baselines, 3 more ablations and substantially improved the writing. 
The end of the discussion period is rapidly approaching, and we would really appreciate it if you could check our response and let us know whether your concerns are well addressed. If not or in case you have any further concerns we would be more than happy to work with you on improving the paper.\"}", "{\"title\": \"Official Comment by Authors (2/2)\", \"comment\": \"**Q1: The description of the alignment phase (Sec 4.2) can benefit from more clarity: What subset of experiences are good enough to train the proxy student \\u2013 how much divergence between student/student-proxy is tolerable? If this has to be a large subset of the data then there is no benefit from using a proxy student, one might as well use all the observations to update both the teacher and student. I think clarity of the approach will improve with a pseudocode description of all the phases with input requirements in the appendix.**\\n\\nWe observed that our approach is generally robust to divergences between the student and proxy student. However, it is challenging to deterministically control or quantify this divergence, which limits our ability to provide a detailed quantitative analysis.\\nRegarding sample efficiency, the subset of interactions required for the student is significantly smaller than the total interactions collected by the teacher. For instance, in the vision-based manipulation task, the teacher utilizes 98,304,000 interactions, whereas the student only requires 3,072,000 interactions. This represents a substantial 32-fold reduction in sample requirements.\\nWe are thankful for the recommendation to add a pseudocode description to the appendix, which we have included in Section A.2 in the appendix.\\n\\n---\\n\\n**Q2: In the experiments, the baselines aren\\u2019t described clearly \\u2013 what does w/o alignment mean? 
Does it suggest removal of proxy student network in (a) no imitability loss for teacher (so standard distillation setup) (b) imitability loss with actual student network. If (a), then why is the method \u201cw/o align (ours)\u201d in the result tables? Also, can the authors explain why the teacher returns w/o alignment are lower than with alignment in Figure 4, intuition suggests that it should be higher?**\n\nWe appreciate the helpful feedback and have addressed these concerns by adding a dedicated subsection to introduce the baselines. In the \u201cw/o alignment\u201d setting, we exclude the KL-Divergence from both the reward and the policy update phase while keeping the paired L1-Loss on shared action decoder features. As a result, the student does not affect the teacher, and the proxy student is not needed. This configuration represents a standard distillation setup, leveraging identical network alignments to specifically evaluate the contributions of our proposed KL-Divergence terms in the reward and policy update.\n\nOne plausible explanation for the increased teacher return observed in Figure 4 is that perception-aware behavior enhances the robustness of the teacher. With our framework, the teacher adopts safer behaviors, such as maintaining a greater distance from obstacles to account for the student's limitations. This can be observed in Figure 3 by the small distance between the teacher \u201cwithout alignment\u201d and the last obstacle. 
By reducing risk, the teacher achieves more consistent long-term returns.\"}", "{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"**W1: Experimental section is lacking, sparse in task selection and choices of baseline, and seems a bit contrived.**\n\nWe have introduced additional baselines, conducted a more detailed analysis of the impact of each proposed component, and provided further insights in the experimental section.\n\n---\n\n**W2: Choice of baselines is lacking.**\n\nWe ran additional experiments with 2 new baselines, namely \u201cDeep Whole-Body Control\u201d [5] and \u201cReal-World Humanoid Locomotion with Reinforcement Learning\u201d [6] (currently only for the vision-based manipulation task). Our results demonstrate that our proposed method significantly outperforms these baselines.\n\n| **Methods** | **Success Rate (Manipulation)** | **Success Rate (Quadrotor)** |\n|---------------------|----------------------------|----------------------------|\n| BC | 0.16 \u00b1 0.15 | 0.05 \u00b1 0.04 |\n| DAgger | 0.34 \u00b1 0.31 | 0.08 \u00b1 0.03 |\n| **HLRL** | 0.61 \u00b1 0.22 | tbd |\n| **DWBC** | 0.63 \u00b1 0.18 | 0.35 \u00b1 0.07 |\n| w/o Align (Ours) | 0.61 \u00b1 0.18 | 0.38 \u00b1 0.11 |\n| w Align (Ours) | **0.88 \u00b1 0.07** | **0.46 \u00b1 0.04** |\n#### **Updated baseline experiments.**\n---\n\n**W3: Baseline details are completely missing.**\n\nThank you for pointing that out. We added a new subsection to the experiments section to introduce the tested baselines, ensuring greater clarity and completeness.\n\n---\n\n**W4: little-to-no ablations, i.e. why the shared feature encoder? Wouldn't we expect that the teacher representation be very different than the student representation in very partially observed settings.**\n\nWe ran additional experiments to perform all requested ablations. 
The results show that the shared action decoder is a critical component, increasing the success rate from 0.62 to 0.88. We argue that the teacher encoder learns high-level task representations rather than perception-specific features. With our KL-Divergence loss in the teacher policy updates, we ensure that the teacher encoder extracts task-relevant features that can be learned by the student.\\n\\n| **Configuration** | **Success Rate** |\\n|-------------------------------|-------------------------|\\n| w Reward / wo Loss | 0.47 \\u00b1 0.37 |\\n| wo Reward / w Loss | 0.74 \\u00b1 0.08 |\\n| wo shared decoder | 0.62 \\u00b1 0.27 |\\n| \\u03bb = 0.1 | 0.81 \\u00b1 0.20 |\\n| \\u03bb = 0.075 | 0.77 \\u00b1 0.18 |\\n| (Default) \\u03bb = 0.05 | 0.88 \\u00b1 0.07 |\\n| \\u03bb = 0.025 | 0.95 \\u00b1 0.03 |\\n#### **Ablation Experiments.**\\n\\n---\\n\\n**W5: The authors need to put more work in the experimental section**\\n\\nThank you for pointing that out. We restructured the experimental section to improve readability and organization. Implementation details have been moved to the appendix, while a dedicated subsection introduces the tested baselines. Additionally, the section now includes ablation studies.\\n\\n---\\n\\n**W6: Another way to improve the experimental section, is to provide some real world robot results. Because this is ICLR (more focused on learning methods), I would say this isn't a strict requirement**\\n\\nBoth tasks are highly complex robotics challenges that require sophisticated hardware and experimental setups for real-world testing. However, the simulation frameworks we employ have been shown to successfully transfer to real-world scenarios [1][2][3].\\n\\n[1] Champion-level Drone Racing using Deep Reinforcement Learning, Nature, 2023\\n\\n[2] Reaching the Limit in Autonomous Racing: Optimal Control vs. 
Reinforcement Learning, Science Robotics, 2023\\n\\n[3] On the role of the action space in robot manipulation learning and sim-to-real transfer, Robotics and Automation Letters, 2024\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We are glad that our responses have been helpful and appreciate the opportunity to provide further clarification.\\n\\n---\\n\\n**It seems like the objective prescribes a KL divergence term to keep the teacher close to the student. But in section 4.2 Rollout phase, the teacher is trained on an additional penalty term based on the action difference between teacher and proxy student. Shouldn't the KL divergence term be enough? Why add the additional penalty term, and why is it helpful over just using KL?**\\n\\nThe KL-term inside the reward term can be interpreted as a reward encouraging the teacher policy to visit states where the student and teacher are aligned and avoid states with a large misalignment. Thus, it affects the exploration of the teacher policy. In contrast, the KL-term in the objective/loss aligns the teacher action to the student action and can be interpreted as an alignment on the learned representation space. The positive effect of both terms is also confirmed by the ablation experiments in Table 2.\\n\\n---\\n\\n**Next, there are some details on shared networks and updates to particular parts of the network that I'm not clear on. It seems like the student and teacher action decoders are shared. The action decoding layers are only updated on the policy gradient computed with the task reward. What is going on here? Are the encoders getting gradients from the full objective?**\\n\\nAs correctly noted, the teacher, student, and proxy student share the same action decoder, which is updated only with the policy gradient computed from the task reward. With the same policy gradient, the teacher encoder is updated. The student and proxy student are updated in the alignment phase. 
Importantly, during alignment, the teacher encoder remains fixed and is not updated. In the alignment phase, the following encoders are updated:\\n- Student Encoder: Updated by aligning the student to the teacher while stopping the gradient flow to the teacher.\\n- Proxy Student Encoder: Updated by aligning the proxy student to the image-based student, with the gradient flow stopped at the student.\\n\\n---\\n\\n**It would be great if the authors can write out the exact computation graph somewhere, like what networks are getting which gradients, etc. The current architecture figure in the paper isn't too intuitive, especially the uni-directional arrows for the loss / KL don't clearly show me what is the prediction and what is the target.**\\n\\nBased on the feedback, we have introduced the gradient flow with different colors representing the different losses in Figure 1a). We have also introduced some clarification in the text to better highlight different updates. Additionally, we added a pseudocode algorithm in the appendix that illustrates better which network is updated in which phase. We remain open to further suggestions and sincerely thank the reviewer for their valuable input!\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We are grateful for the thoughtful clarification questions and discussion. In response, we have further refined the manuscript, placing greater emphasis on clearly distinguishing the different KL-Divergence terms to enhance understanding.\\n\\n---\\n\\n**Task reward = just the environmental reward, or does it also include the KL penalty term?**\\n\\nThat is correct, the task reward is just the environmental reward defined by the design of the task. It does not include the KL penalty term.\\n\\n---\\n\\n**It's still not clear to me why you need to include both the KL penalty term in the rollout phase, and the KL divergence term in the loss. Aren't they somewhat redundant, i.e. 
if you take the policy gradient of the KL penalty term, it will give you a similar / same gradient as doing the KL divergence term between student and teacher policy distributions?**\\n\\nThe two terms do seem similar at first glance, but they serve different purposes, namely, representation learning, and exploration. In theory, if you take the policy gradient of the KL penalty term, you would obtain the following term:\\n\\n$\\\\int \\\\nabla_\\\\theta \\\\log p_{\\\\theta}(\\\\tau) [\\\\sum_{s_t\\\\in \\\\tau}\\\\gamma^t D_{KL}(\\\\pi_T(\\\\cdot|s_t),\\\\pi_S(\\\\cdot|s_t))].$\\n\\nWhile the KL divergence term in the loss is the expectation of the gradient of the KL divergence term:\\n\\n$\\\\int p_{\\\\theta}(\\\\tau) \\\\nabla_\\\\theta D_{\\\\theta}(\\\\tau) d\\\\tau$\\n\\nThese two terms are quite different. Also, in practice, adding the KL-term in the reward function provides a gradient signal that includes information about the long-term effect of taking certain actions on the divergence between student and teacher in future states. Hence the KL-divergence in the reward serves to modify the exploration behavior of the policy in a way that discourages discrepancy between student and teacher. In contrast, the KL-Divergence in the loss only measures the difference between student and teacher actions for the samples in the current batch (without explicitly looking at the effect on future states). Hence, this term only serves for learning representations that are similar between student and teacher and does not affect the exploration.\\n\\n---\\n\\n**My point of confusion is that the equation 8 just has one KL divergence term, which aims to make the student and teacher state marginals close. But now there are two KL terms in the implementation. Is one of the two a heuristic? 
If so, it would be nice to state clearly which term directly motivates the method, and which one is a heuristic.**\\n\\nThe two terms arise from the theoretical objective of minimizing the upper bound of the difference between teacher and student performance. Specifically, they emerge because of the product rule while taking the derivative with respect to the teacher weights, see Eq. 7. This leads to two KL divergence terms in Eq. 8, which are represented with D_theta in the Policy Gradient and KL-Div gradient. This theoretical derivation is further supported by the increased performance achieved while using both terms in the algorithm formulation, as reported in Table 2.\"}", "{\"title\": \"The discussion period ends today\", \"comment\": \"Dear reviewer xapm,\\n\\nWith the end of the discussion period approaching (today), we have not yet received a response from you regarding our rebuttal. We would really appreciate it if you could review our response and revised manuscript at your earliest convenience. If our rebuttal has adequately addressed your concerns, we would greatly appreciate it if you could consider adjusting your score accordingly. Thank you once again for your time and for providing such valuable feedback.\"}", "{\"title\": \"Followup\", \"comment\": \"Thank you for handling the KL divergence questions, I understand it fully now. I am still unclear on what gradients the shared action decoder is getting, versus the encoders.\\n\\nLine 211: action decoder is just getting the task reward (i.e. no KL penalty?)\\n\\nLine 788 and 794: these lines suggest you're computing policy gradient wrt task reward + penalty term. And is the encoder getting gradients from task reward + penalty term + KL term, and the action decoder getting gradients from just the task reward? 
Why do we need to be so selective with the gradients, is it to prevent collapse due to the shared action decoder?\\n\\nPlease clarify for me, what the action decoder is getting trained on, and what the encoders are getting trained on, with respect to all the terms in the objective.\"}", "{\"metareview\": \"This paper presents a teacher-student training framework where the teacher learns to generate data that is easier for the student to mimic. In the asymmetric teacher-student setting, the teacher has privileged sensory information while the student is limited and therefore may not be able to mimic the optimal behavior of the teacher. The proposed method leverages action discrepancy between the student and the teacher as a penalty in training the teacher with reinforcement learning.\\n\\nReviewers agree this paper studies an interesting and important problem and the proposed method is novel. The experimental results are intriguing, especially in the case where the robot learned to generate camera view-aware policy. However, many baselines and ablation results were added during the rebuttal phase and the reviewers expressed concerns about the readiness of the presentation of the paper. The authors should incorporate additional feedback from the reviewers.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about the lack of sufficient comparison with baselines and ablation of design choices. The authors ran additional experiments during the rebuttal phase and updated the manuscript accordingly to address such concerns. Reviewers raised scores but expressed concerns that the results are rushed.\"}", "{\"title\": \"Clarifications continued\", \"comment\": \"Thanks, this is helpful. Remaining questions:\\n\\nTask reward = just the environmental reward, or does it also include the KL penalty term? \\n\\nIt's still not clear to me why you need to include both the KL penalty term in the rollout phase, and the KL divergence term in the loss. 
Aren't they somewhat redundant, i.e. if you take the policy gradient of the KL penalty term, it will give you a similar / same gradient as doing the KL divergence term between student and teacher policy distributions? \\n\\nMy point of confusion is that the equation 8 just has one KL divergence term, which aims to make the student and teacher state marginals close. But now there are two KL terms in the implementation. Is one of the two a heuristic? If so, it would be nice to state clearly which term directly motivates the method, and which one is a heuristic.\\n\\nOverall, I think I get most of the algorithm now, but I would encourage the authors to continue working on the presentation of the algorithm, i.e. see if people outside the project can easily understand the algorithm from just reading the paper and figures.\"}", "{\"title\": \"Updated PDF Response\", \"comment\": \"Based on the helpful feedback and discussions, we have extended the last submission to improve further the clarity and provide more details. In addition to the already reported ablation experiments, we have now included in total three baseline approaches for the vision-based manipulation task and two baseline approaches for the vision-based quadrotor task. 
We will add the third baseline to the quadrotor task for the next version of the paper.\\n\\n\\n| **Methods** | **Success Rate (Manipulation)** | **Success Rate (Quadrotor)** |\\n|---------------------|----------------------------|----------------------------|\\n| BC | 0.16 \\u00b1 0.15 | 0.05 \\u00b1 0.04 |\\n| DAgger | 0.34 \\u00b1 0.31 | 0.08 \\u00b1 0.03 |\\n| **HLRL** | 0.61 \\u00b1 0.22 | 0.31 \\u00b1 0.11 |\\n| **DWBC** | 0.63 \\u00b1 0.18 | 0.35 \\u00b1 0.07 |\\n| **COSIL** | 0.56 \\u00b1 0.21 | tbd |\\n| w/o Align (Ours) | 0.61 \\u00b1 0.18 | 0.38 \\u00b1 0.11 |\\n| w Align (Ours) | **0.88 \\u00b1 0.07** | **0.46 \\u00b1 0.04** |\\n#### **Updated baseline experiments.**\"}", "{\"title\": \"Official Comment by Authors (1/2)\", \"comment\": \"**W1: Missing important comparison and discussion: Several prior works focus on training the student with RL to achieve behaviors distinct from the teacher. While some of these are referenced in the paper, there is no thorough discussion of this class of methods, and the authors only compare their approach to BC and DAgger, which were not specifically designed to handle observation asymmetry.**\\n\\nWe sincerely appreciate the list of key related works, which has helped us better contextualize our contributions. In our revised manuscript, we have included a detailed discussion to position our work in relation to them. Specifically, most of the referenced works [1-7] assume a fixed teacher. This is in contrast to our proposed method, which adapts the teacher to the capabilities of the student.\\n\\nWe have included a comparison with \\u201cDeep Whole-Body Control\\u201d [C1], which adjusts the teacher encoder by enforcing a shared feature space. Additionally, we also compare against the proposed \\u201cReal-World Humanoid Locomotion with Reinforcement Learning\\u201d [C2] method (currently only for vision-based manipulation). The results clearly show that our proposed method outperforms them. 
Additionally, we have expanded the experimental section to provide further insights into the results and the observed improvements.\\n\\n| **Methods** | **Success Rate (Manipulation)** | **Success Rate (Quadrotor)** |\\n|---------------------|----------------------------|----------------------------|\\n| BC | 0.16 \\u00b1 0.15 | 0.05 \\u00b1 0.04 |\\n| DAgger | 0.34 \\u00b1 0.31 | 0.08 \\u00b1 0.03 |\\n| **HLRL** | 0.61 \\u00b1 0.22 | tbd |\\n| **DWBC** | 0.63 \\u00b1 0.18 | 0.35 \\u00b1 0.07 |\\n| w/o Align (Ours) | 0.61 \\u00b1 0.18 | 0.38 \\u00b1 0.11 |\\n| w Align (Ours) | **0.88 \\u00b1 0.07** | **0.46 \\u00b1 0.04** |\\n#### **Updated baseline experiments.**\\n\\n\\n[C1] Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion, CoRL 2022\\n\\n[C2] Real-World Humanoid Locomotion with Reinforcement Learning, Science Robotics 2024\\n\\n---\\n\\n**W2: Clarity and methodological details: Certain aspects of the approach, especially in Section 4, lack clarity...A potential drawback of this approach is that alternating training for both teacher and student before teacher training is complete might demand more high-dimensional samples than BC or DAgger**\\n\\nWe enhanced the clarity of the methodology and experiments by restructuring sections, improving descriptions, and providing additional technical details. \\n\\nIndeed, training a teacher in our framework requires more training samples since the teacher policy must adapt to the evolving capabilities of the student while still accomplishing the task. However, the computational and time bottleneck lies in the student training, particularly in rendering high-dimensional images. 
Therefore, the additional (cheap) teacher interactions using low-dimensional observations can help to reduce the actual bottleneck of training a high-dimensional student.\"}", "{\"title\": \"Response: Prior work on student guided teacher/student training\", \"comment\": \"Dear Philip Bachman,\\nMany thanks for pointing us to your work. We have included your work in the paper.\"}", "{\"summary\": \"To address the asymmetry in observation spaces during teacher-student training in reinforcement learning, this paper introduces a KL divergence penalty term between the teacher policy and a proxy for the student policy. This penalty is added to the teacher's RL loss, encouraging the teacher to adapt its behavior based on the student's observation space. To facilitate efficient penalty computation during teacher training, the authors employ a proxy student with the same privileged observation space as the teacher. The student learns by imitating the teacher from its limited observation space, while the proxy student imitates the student using privileged observations. They validate their approach in simulation across a toy grid navigation task, a vision-based quadrotor obstacle avoidance environment, and a vision-based manipulation environment, demonstrating the emergence of perception-aware behaviors.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Impressive qualitative results: The emergence of perception-aware behaviors is impressive, and the authors effectively demonstrate that their approach produces this outcome across different environments, including a quadrotor flight task and a vision-based manipulation task.\", \"Interesting approach: The teacher-student observation asymmetry is a relevant problem to address in robotics. 
The idea of penalizing the teacher for behaviors that cannot be reproduced from the student\\u2019s observation space is both simple and sound.\"], \"weaknesses\": [\"Missing important comparison and discussion: Several prior works focus on training the student with RL to achieve behaviors distinct from the teacher [1, 2, 3, 4, 5, 6, 7] (and likely more). While some of these are referenced in the paper, there is no thorough discussion of this class of methods, and the authors only compare their approach to BC and DAgger, which were not specifically designed to handle observation asymmetry. Since many of these prior works are motivated by teacher-student observation asymmetry, it is crucial that the authors include a proper discussion of this line of work and compare their approach to some of them.\", \"Clarity and methodological details: Certain aspects of the approach, especially in Section 4, lack clarity. The method involves alternating between data collection, RL policy updates with the proposed penalty, and policy alignment, but critical details are reserved for the appendix, such as the ratio of privileged to high-dimensional observation samples or the number of updates performed in each phase for each component. A potential drawback of this approach is that alternating training for both teacher and student before teacher training is complete might demand more high-dimensional samples than BC or DAgger, which train the teacher once and then the student. It is unclear how comparisons to baselines were made, particularly regarding environment interactions. Does the proposed method require more interactions than the baselines? Were baselines evaluated with equivalent budgets of privileged and high-dimensional samples? Additionally, compared to the baselines, the approach introduces a proxy student, a more involved training protocol and a KL penalty with associated hyperparameters, making it challenging to assess the algorithm's overall complexity. 
The \\u201cw/o Align\\u201d baseline is also not clearly explained, nor is it clear why its behavior differs from BC and DAgger. Finally, the shared action decoder is introduced without sufficient motivation; the authors could clarify this design choice, for instance through ablation studies.\"], \"references\": [\"[1] TGRL: An Algorithm for Teacher Guided Reinforcement Learning, ICML2023\", \"[2] Leveraging Fully Observable Policies for Learning under Partial Observability, CoRL2022\", \"[3] SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience, CoRL2024\", \"[4] Bootstrapping reinforcement learning with imitation for vision-based agile flight, CoRL2024\", \"[5] Privileged Sensing Scaffolds Reinforcement Learning. ICLR2024\", \"[6] Real-World Humanoid Locomotion with Reinforcement Learning, 2023\", \"[7] Bridging the Imitation Gap by Adaptive Insubordination, NeurIPS2021\"], \"questions\": [\"Why does the \\\"w/o align\\\" baseline perform so well compared to BC and DAgger, particularly in the quadrotor environment? It seems this baseline would produce behaviors similar to that of BC and DAgger.\", \"Does the proposed method require more interactions than the baselines? 
Were baselines evaluated with equivalent budgets of privileged and high-dimensional samples?\", \"What is the performance without using a shared action decoder?\", \"How sensitive is the method to the weight of the KL divergence penalty term?\", \"Is it possible to have videos of the experiments in simulation?\", \"Out of curiosity, have you experimented with using forward vs reverse KL divergence as the penalty term?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": [\"We sincerely thank the reviewers for their valuable feedback regarding clarity, missing related work, and requests for additional ablation and baseline experiments. In response, we have made significant improvements to the manuscript (highlighted in blue), as summarized below:\", \"**Ablations**: We conducted additional experiments for the vision-based manipulation task to evaluate the impact of the proposed components. These results are now included in the updated manuscript.\", \"**Related Work**: We have incorporated all the missing references suggested by the reviewers and expanded the discussion to better contextualize our contributions.\", \"**Baselines**: To more effectively highlight the advantages and performance improvements of our method, we have added results for baseline methods. Given the complexity of the tasks, these comparisons further demonstrate the effectiveness of our approach.\", \"**Technical Clarity**: We enhanced the clarity of the methodology and experiments by restructuring sections, improving descriptions, and providing additional technical details.\", \"**Improved Stability**: We observed simulation instabilities for the manipulation task when the Franka arm touches the top of the drawer (particularly with our proposed method). 
To address this, we have slightly lowered the position of the Franka robot, leading to more consistent behavior, as reported in the updated Table 2.\", \"We look forward to further engaging discussions and appreciate the opportunity to refine our work based on the reviewers' constructive feedback.\"]}", "{\"title\": \"Prior work on student guided teacher/student training\", \"comment\": \"I actually proposed student guided teacher/student training back in 2015 in a paper called: \\\"Data Generation as Sequential Decision Making\\\". See Section 2.2 on page 2 of https://arxiv.org/abs/1506.03504. I called it Generalized Guided Policy Search, since it's a generalization of Guided Policy Search.\\n\\nOne significant difference between your work and mine is that my experiments were in a simpler setting where the student's observations were a subset of the teacher's observations, so there were no concerns about cost of producing the student's observations. Approximating the student's behavior with an additional policy that predicts the student's actions while conditioning on the teacher's observations is a nice trick for making this setup more effective/efficient in contexts where the student's observations are costly.\"}", "{\"summary\": \"The paper addresses the important problem of information asymmetry between teacher and student policies in imitation learning. The authors propose a novel joint training framework that encourages the teacher to learn behaviors that can be more easily imitated by the student, thereby mitigating the negative impact of the information gap. The core idea is to incorporate the performance bound of imitation learning into the teacher's objective function, resulting in two key modifications: a KL-divergence-based penalty term in the teacher's reward function and a KL-divergence-based supervisory signal for updating the teacher's network parameters. 
The effectiveness of the proposed method is validated through experiments on three diverse tasks: maze navigation, vision-based quadrotor obstacle avoidance, and vision-based drawer opening with a robotic arm.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents a novel approach to tackling the teacher-student information asymmetry problem by incorporating the imitation learning performance bound into the teacher's objective function, leading to a creative combination of ideas from imitation learning theory and practical algorithm design.\\n2. The proposed method is well-motivated and grounded in the theoretical foundations of imitation learning, with clear explanations for the design of the KL-divergence-based penalty and supervisory terms.\\n3. The method is evaluated on three diverse tasks, demonstrating its applicability to both discrete and continuous control domains. The results show improvements over baseline imitation learning approaches.\\n4. The paper is generally well-written and easy to follow, with a clear problem statement, detailed method description, and effective use of figures and tables.\", \"weaknesses\": \"1. Limited theoretical analysis: The paper lacks a comprehensive theoretical analysis comparing the proposed method to existing approaches. A more rigorous theoretical justification for the superiority of the method would strengthen the contributions.\\n2. Insufficient ablation studies: The individual contributions of the reward penalty and the KL-divergence supervision are not clearly distinguished through ablation experiments, making it difficult to assess the necessity of each component.\\n3. Limited experimental evaluation: While the method is evaluated on three tasks, a more extensive experimental evaluation on a wider range of environments and benchmarks would provide stronger evidence for the generalizability and effectiveness of the approach.\\n4. 
Clarity of certain technical details: Some aspects of the paper, such as the definition of the proxy student network and the justification for the equality of covariance matrices in equations (9) and (10), require further clarification.\", \"questions\": \"1. How does the proposed method theoretically compare to existing approaches in terms of convergence guarantees and sample complexity?\\n2. What is the sensitivity of the method to the choice of the KL-divergence penalty weight, and how can this hyperparameter be effectively tuned?\\n3. Can the authors provide more detailed ablation studies to demonstrate the individual contributions of the reward penalty and the KL-divergence supervision?\\n4. How well does the method scale to more complex environments and tasks, and what are the potential limitations in terms of computational overhead and sample efficiency?\\n5. Can the authors discuss the potential extensions of the proposed framework to other imitation learning algorithms and settings, such as inverse reinforcement learning or multi-agent imitation learning?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DzbUL4AJPP
Boosting Methods for Interval-censored Data with Regression and Classification
[ "Yuan Bian", "Grace Yi", "Wenqing He" ]
Boosting has garnered significant interest across both machine learning and statistical communities. Traditional boosting algorithms, designed for fully observed random samples, often struggle with real-world problems, particularly with interval-censored data. This type of data is common in survival analysis and time-to-event studies where exact event times are unobserved but fall within known intervals. Effective handling of such data is crucial in fields like medical research, reliability engineering, and social sciences. In this work, we introduce novel nonparametric boosting methods for regression and classification tasks with interval-censored data. Our approaches leverage censoring unbiased transformations to adjust loss functions and impute transformed responses while maintaining model accuracy. Implemented via functional gradient descent, these methods ensure scalability and adaptability. We rigorously establish their theoretical properties, including optimality and mean squared error trade-offs. Our proposed methods not only offer a robust framework for enhancing predictive accuracy in domains where interval-censored data are common but also complement existing work, expanding the applicability of existing boosting techniques. Empirical studies demonstrate robust performance across various finite-sample scenarios, highlighting the practical utility of our approaches.
[ "Boosting", "Functional gradient descent", "Interval-censored data", "Minimax error rate", "Nonparametric classification", "Nonparametric regression", "Smoothing spline" ]
Accept (Poster)
https://openreview.net/pdf?id=DzbUL4AJPP
https://openreview.net/forum?id=DzbUL4AJPP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zZZOnsn6fB", "z1C5Wv7gE7", "qdQNpTnkyM", "pZT56QZ2Uf", "oFbkj8TvmO", "lMBuQ5fL7I", "jhuye4jQRm", "jSZWjfZSQT", "ieMFFGRYuw", "bVIf4rkxtH", "bPAXrDEGPh", "abLhmG3GQS", "aKremC8TfV", "ZbNe4nHzmq", "XnEpGEBFYJ", "XDNpgMYMCN", "TF9YrD8mNF", "QTxmentT4y", "QLiLls3pds", "OtopS15A32", "NyD8HhQ3BG", "NZ4xnUPfwi", "NVAewYfL9f", "NKXOwrlqWq", "KtHhP4jQNt", "JJUyPdu4Zg", "D7CMcQ7nCE", "9a6QOV1O7s", "8swwUDV5MK", "7YuyDHF1sY", "5fJBSruOoD" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1730388346607, 1732514510211, 1732705803214, 1732601880335, 1732514393871, 1732761750750, 1732673673600, 1732511667422, 1732710730427, 1732737222719, 1730435612076, 1730371679997, 1730662188630, 1732562133636, 1732718598732, 1732511219923, 1732661070189, 1732512271353, 1732514314079, 1732515139194, 1732530741352, 1732572829703, 1734430483918, 1732661359629, 1732600953245, 1730499569986, 1732515176978, 1732737141057, 1737523936273, 1732541525049, 1732513345099 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_VgwR" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_4h4i" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_VgwR" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_uzby" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_Npps" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_4h4i" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_uzby" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_Npps" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_mmSc" ], [ "ICLR.cc/2025/Conference/Submission8841/Area_Chair_6XsQ" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_mmSc" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8841/Reviewer_VgwR" ], [ "ICLR.cc/2025/Conference/Submission8841/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The boosting method is widely used in machine learning field, but it is not obvious to extend it to interval-censored data, that is, the outcome value is not specifically given but its interval is given. To apply the boosting to such datasets, the paper applies the method called the \\\"censoring unbiased transformation\\\" (CUT) to the loss function. 
As described in Proposition 1, the application of CUT does not change the expected loss between the interval-censored data and the true (uncensored) data, if we know the conditional survivor function $S$ (in reality we need to estimate it from the dataset). With this conversion of the problem, they showed that we can apply L2boost, an existing method of boosting for regression datasets (it can be extended to binary classifications).\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper found that, by CUT, existing L2boost algorithm can be applied to interval-censored datasets.\", \"There is a constraint that we need to estimate the joint distribution of $X$ and $Y$ without knowing $Y$ itself but knowing only intervals. They showed that estimating the distribution by ICRF experimentally worked well.\", \"Theoretical convergence rates and lower bounds of MSE (for regressions) and misclassifications (for classifications) are presented.\"], \"weaknesses\": [\"The procedure itself looks somewhat simple; first we apply CUT and then L2boost. If there is a difficulty or interesting results of these combinations, please emphasize.\", \"Perhaps, the combination of boosting and interval-censored data is the novelty? If so, please emphasize the discussion on the novelty (e.g., limitations in existing methods).\"], \"questions\": [\"Key questions\", \"It uses ICRF to estimate the conditional survivor function, but perhaps can we just impute interval-censored outcomes by ICRF (instead of using (Bian et al., 2024a))?\", \"Section 1.1: The paper presented several tree-based interval-censored regression methods, but is it difficult to extend these methods to boosting? If so, why?\", \"Section 1.2: It states that L2Boost-Impute is a proposed method, but the procedure appears only in Section 5, and the imputation method is just a reference. 
What is the novelty?\", \"Section 5, Figure 1: Why the methods \\\"O\\\" (oracle) and \\\"R\\\" (reference) are compared in these plots? It is true that these results uses unknown information in reality and they can produce better results, but it looks that these results are not discussed in the paper.\", \"Minor questions and suggestions\", \"Some overlaps of letters found, so please consider replacing either of them with another letter.\", \"$L$: overlapped between the \\\"loss function\\\" and the \\\"left of the interval\\\"\", \"$S$: overlapped between the \\\"conditional survivor function\\\" and the \\\"smoother matrix\\\"\", \"Section 3.1, line 182: Should $L = l, R = r, l < Y \\\\leq r$ be $L = l, R = r, L < Y \\\\leq R$, as far as reading the succeeding equation? (Does it mean that the distribution of $Y$ depends on the observation of the random variables $L$ and $R$ but independent of the distributions of $L$ and $R$?)\", \"Section 3.1, line 187: It looks that the randomness of $M$ is not used anywhere else. How does the randomness work?\", \"Section 3.2, line 238: It states that we need to estimate $S(y|X_i)$, but the procedure has not been described at this point. Please consider referring Section 3.3 as the section that describes how to estimate it.\", \"Section 5: The \\\"sample-based maximum absolute error\\\" is abbreviated as \\\"S-MAE\\\", but please consider another abbreviation, since \\\"MAE\\\" also stands for **mean** absolute error. 
(It is confusing since \\\"S-MSE\\\" represents the \\\"sample-based **mean** squared error\\\".)\", \"Section 5, line 471: adpots --> adopts\", \"Section 5, line 510: boxlplots --> boxplots\", \"Section 5, Figure 1(b): lines for \\\"CUT\\\" are not visible since they look overlapped with other lines, so please change line style or transparency so that the lines appear.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer VgwR (3/3)\", \"comment\": \"**Minor questions and suggestions:**\\n- **Some overlaps of letters found, so please consider replacing either of them with another letter: $L$ overlapped between the loss function and the left of the interval; $S$ overlapped between the conditional survivor function and the smoother matrix.**\\n\\n**Our Response**: Thank you for this careful comment. In preparing a revised manuscript, we will use different notation to differentiate them for greater clarity. \\n- **Section 3.1, line 182: Should $L=l, R=r, l < Y \\\\le r$ be $L=l, R=r, L < Y \\\\le R$ as far as reading the succeeding equation? (Does it mean that the distribution of $Y$ depends on the observation of the random variables $L$ and $R$ but independent of the distributions of $L$ and $R$?)**\\n \\n**Our Response**:\\nThank you very much for insightfully pointing out this error. Our intended meaning was as you perceived, but it was not precisely presented. We have now corrected it.\\n- **Section 3.1, line 187: It looks that the randomness of $M$ is not used anywhere else. How does the randomness work?**\\n\\n**Our Response**: Thank you for this insightful question. While the randomness of $M$ may not be directly used in specific calculations, it serves a crucial role in our framework by illustrating that our methods are designed to accommodate settings where the number of observations is not fixed but can vary with uncertainty. 
This flexibility is important in practical applications, where real-world data may not have a rigid, predefined number of observations.\\n\\nIn real applications with a single dataset, $M$ is realized as the number of observations, but treating $M$ as a random variable highlights our methods\\u2019 robustness across scenarios with differing numbers of observations.\\nTo clarify, we have updated the text to better explain $M$.\\n- **Section 3.2, line 238: It states that we need to estimate $S(y|X_i)$, but the procedure has not been described at this point. Please consider referring Section 3.3 as the section that describes how to estimate it.**\\n\\n**Our Response**: Thank you for this careful suggestion. In this revision, we have addressed this point by referring to Section 3.3.\\n- **Section 5: The sample-based maximum absolute error is abbreviated as S-MAE, but please consider another abbreviation, since MAE also stands for {\\\\bf mean} absolute error. (It is confusing since S-MSE represents the sample-based mean squared error.)**\\n\\n**Our Response**: Thank you for your comments on the abbreviations S-MAE and S-MSE. To avoid potential confusion with MAE (mean absolute error) and MSE (mean squared error), we have now revised the abbreviations. Specifically, S-MAE has been replaced with SMaxAE, and S-MSE has been replaced with SMSqE.\\n- **Section 5, line 471: adpots -> adopts**\\n\\n**Our Response**: Thank you for pointing out this typo. We have fixed it for the revision.\\n- **Section 5, line 510: boxlplots -> boxplots**\\n\\n**Our Response**: Thank you for pointing out this typo. We have fixed it for the revision.\\n- **Section 5, Figure 1(b): lines for CUT are not visible since they look overlapped with other lines, so please change line style or transparency so that the lines appear.**\\n\\n**Our Response**: Thank you for this suggestion. 
In response, we have updated Figure 1(b) and the related figures by modifying the line and point styles to ensure the CUT lines are clearly visible and no longer overlap with other lines.\"}", "{\"comment\": \"I would like to thank authors for their response, addressing my concerns and adding additional work. I am happy with the manuscript. Releasing code would be definitely great. I will keep my score as it is.\"}", "{\"title\": \"thanks\", \"comment\": \"Dear Reviewer Npps:\\n\\nThank you very much for your prompt response to our rebuttal. We deeply appreciate your time and insightful comments, which have greatly helped improve the presentation of our work.\"}", "{\"title\": \"Responses to Reviewer VgwR (2/3)\", \"comment\": \"**Questions**:\\n\\n**Key questions**\\n- **It uses ICRF to estimate the conditional survivor function, but perhaps can we just impute interval-censored outcomes by ICRF (instead of using (Bian et al., 2024a))?**\\n\\n**Our Response**: Thank you for your question. ICRF was designed to estimate the conditional survival function (Cho et al., 2022), but it does not directly predict survival time or status, which is the primary focus of our paper.\\n While it is possible to use ICRF to impute interval-censored outcomes, we chose to incorporate ICRF as part of our $L_2$Boost-CUT and $L_2$Boost-IMP methods. These two-step approaches enable us to leverage the strengths of both ICRF for imputing the interval-censored data and $L_2$Boost for subsequent predictions. For greater clarity, in this revision we have included the discussions in Sections 3.2 and 3.3.\\n- **Section 1.1: The paper presented several tree-based interval-censored regression methods, but is it difficult to extend these methods to boosting? If so, why?**\\n\\n**Our Response**: Thank you for the comments. Yao et al. (2021) introduced a survival forest method utilizing the conditional inference framework, while Cho et al. (2022) proposed a recursive forests method. 
These methods are specifically designed for estimating survival functions for interval-censored data and cannot directly predict survival time or status. In contrast, our proposed boosting method builds upon survival function estimation as a foundational step, where Yao et al. (2021) and Cho et al. (2022)'s methods can be incorporated as a basic component of our framework. \\n\\nYang et al. (2024) adopted a similar approach by constructing an observed loss function and developing tree algorithms for interval-censored data. However, their focus differs from ours as they emphasize tree-based methods, while we concentrate on boosting combined with smoothing splines. Moreover, our work offers comprehensive theoretical results for boosting methods, whereas Yang et al. (2024) provided only the implementation procedure without theoretical justification.\\n\\nExtending tree-based methods to boosting may present a challenge due to the iterative refinement of model parameters in boosting, where the learning rate and loss function need to be carefully optimized. In our proposed $L_2$Boost-CUT and $L_2$Boost-IMP methods, we leverage the unique properties of the $L_2$ loss, enabling a simpler boosting algorithm with the optimal learning rate inherently incorporated into the optimization process for $\\\\hat{h}$. While tree-based methods, such as those proposed by Yao et al. (2021) and Cho et al. (2022), can be adapted procedurally to a boosting framework with a loss function other than the $L_2$ loss, establishing theoretical guarantees remains challenging and uncertain.\\n\\nTo address your comments, in this revision, we have included additional discussions on this aspect in Appendix E.3. As a side note, in this revision we have run additional experiments using the method by Yao et al. (2021), as suggested by another reviewer. The results, reported in Figure G.4 of Appendix G.2, show that the results from the Yao et al. 
(2021) method are in good agreement with those produced from our proposed methods. However, the SKDT (Kendall's $\\tau$) values from this method appear slightly more variable than those from our methods.\\n- **Section 1.2: It states that L2Boost-Impute is a proposed method, but the procedure appears only in Section 5, and the imputation method is just a reference. What is the novelty?**\\n\\n**Our Response**: Thank you for raising this careful comment, which we have now carefully addressed. In our initial version, we did not include comparison details due to space limitations. In this revision, we have added more information to compare the two algorithms, explaining their differences and similarities in Section 3.2 for greater clarity.\\n\\n- **Section 5, Figure 1: Why the methods O (oracle) and R (reference) are compared in these plots? It is true that these results uses unknown information in reality and they can produce better results, but it looks that these results are not discussed in the paper.**\\n \\n**Our Response**:\\nAs you correctly pointed out, the O (oracle) and R (reference) methods rely on information unavailable in real-world scenarios and thus cannot be applied in actual data analysis. The inclusion of these methods in Figure 1 serves to provide benchmarks for our proposed methods, illustrating the upper bounds of performance for a realistic method. Including these benchmarks allows us to assess how our methods perform relative to those designed for ideal conditions with full data availability.\"}", "{\"comment\": \"Dear reviewers:\\n\\nThank you all for your continued feedback on our rebuttal and for your helpful suggestions. In addition to providing our further responses to your additional feedback individually, we have revised the manuscript once again to incorporate your comments for greater clarity. 
The changes made in response to your initial feedback are marked in red, while the new revisions are highlighted in blue for ease of your reference.\\n\\nWe deeply appreciate your time and valuable expertise, which have significantly improved the presentation of our work. We believe this revised version represents a considerable improvement over the initial draft for greater clarity.\\n\\nThank you once again for your support.\\n\\nWith warm regards\"}", "{\"comment\": \"Thank you for your thoughtful feedback on our rebuttal. We would like to further address your comments and clarify the broader contributions of our work.\\n\\n# **Performance Compared to Existing Methods:**\\n\\nRegarding your observation, \\\"It would be the best if the performance outperformed existing methods, but it looks not true as far as reading the experimental results (although certain improvements are seen)\\\", we agree that this is an important point. These observations reflect the unique challenges posed by interval-censored data.\\n\\nTo the best of our knowledge, our methods are the first to handle **predictions for various features** related to survival processes with **interval-censored data**. As such, there are no existing methods available for **direct, fair comparisons**. The only meaningful baselines are the N, O, and R methods described in Section 5, as their differences can be clearly delineated and compared. While we have included additional experiments incorporating the YAO and COX methods for baseline comparisons in the revised manuscript, these methods do not share comparable characteristics for fair evaluation.\\n\\nTheoretically, our Proposition 1 demonstrates that the censoring unbiased transformation (CUT)-based $L_{CUT}$\\u200b loss function has the same risk as the original $L_2$\\u200b loss. However, determining the transformed outcomes in (7) adds complexity due to the need for survival function estimation. 
Moreover, the absence of direct measurements of survival times introduces substantial variability, as evidenced by the noticeable differences between our methods and the R method, which assumes access to true responses. Nonetheless, our methods show significant improvement over the naive method, which incorporates boosting but fails to properly account for the effects of interval-censoring.\\n\\n# **Benefit of Boosting for Interval-Censored Data:**\\n\\nIn survival analysis, particularly with interval-censored data, survival times are often only partially observed, creating significant analytical challenges. Boosting offers advantages for addressing these challenges:\\n\\n(1) **Information Aggregation:**\\n\\nBoosting combines weak learners into a strong ensemble, effectively aggregating the limited and incomplete information that can be extracted from interval-censored datasets. This characteristic is particularly valuable for survival data, where individual methods often struggle to extract sufficient signal from severely incomplete data.\\n\\n(2) **Enhanced Information Extraction:**\\n\\nOur framework leverages the boosting technique in conjunction with censoring-specific strategies such as the censoring unbiased transformation (CUT) and imputation (IMP). These methods integrate into the boosting process to improve the ability to utilize the partial information inherent in interval-censored data.\\n \\n# **Insights and Broader Contributions:**\\n\\nWe appreciate your suggestion to emphasize the broader implications and insights of our work. The principles behind our methods could be adaptable to other machine learning frameworks, such as deep learning or ensemble methods, broadening their applicability beyond boosting. This can be an interesting project for future exploration. 
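As a purely illustrative companion to the boosting discussion above: once interval-censored outcomes have been replaced by pseudo-responses (via a CUT-style transformation) or imputed values, a generic L2 boosting loop applies unchanged. The sketch below uses a depth-1 regression stump with fixed shrinkage; the function name, the stump learner, and all parameters are placeholders, not the paper's smoothing-spline base learner or stopping rule:

```python
import numpy as np

def l2boost(X, y, n_iter=100, nu=0.1):
    """Generic L2 boosting: repeatedly fit a weak learner to residuals.

    y is assumed to already contain pseudo-responses (CUT-transformed or
    imputed); the stump base learner stands in for any weak learner.
    """
    pred = np.full(len(y), y.mean())
    for _ in range(n_iter):
        resid = y - pred
        best = None
        for j in range(X.shape[1]):  # search over features and thresholds
            for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue
                a, b = resid[left].mean(), resid[~left].mean()
                sse = float(np.sum((resid - np.where(left, a, b)) ** 2))
                if best is None or sse < best[0]:
                    best = (sse, left, a, b)
        _, left, a, b = best  # stump with the smallest residual SSE
        pred += nu * np.where(left, a, b)  # shrunken functional-gradient step
    return pred

# Toy check that boosting reduces training error (illustrative data):
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)
pred = l2boost(X, y)
```

The shrinkage factor nu plays the role of the learning rate asked about by Reviewer uzby; as the rebuttal notes, with the L2 loss the optimal step size is absorbed into the least-squares fit of each weak learner.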
Furthermore, while Theorem 1 demonstrates that the $L_2$Boost-CUT and $L_2$Boost-IMP algorithms consistently outperform unboosted weak learners in terms of MSE, this result is established under the assumption of weak base learners (as per Condition (C4)). Inspired by your comments, it would be valuable to quantify the extent of improvement offered by boosting over unboosted learners and to investigate how this improvement depends on the form of weak learners, particularly in the context of interval-censored data.\\nIn preparing the final version of our manuscript, we plan to add these aspects for possible future research.\\n\\n# **Conclusion:**\\n\\nIn summary, our work demonstrates the practical value of employing boosting in survival analysis with interval-censored data by effectively addressing the unique challenges posed by incomplete information. The methodology we propose not only improves information extraction in this domain but also offers theoretical insights and paves the way for future extensions to other learning paradigms.\\n\\n\\nThank you once again for your constructive feedback and thoughtful suggestions, which we deeply appreciate. They have been invaluable in improving the presentation and expanding the scope of our manuscript. Your precious time and valuable insights are deeply appreciated.\"}", "{\"title\": \"Responses to Reviewer 4h4i\", \"comment\": \"**Strengths**:\\n - **The paper is mathematically well written. The notation is consistent and precise. Definitions and proofs are provided in the Appendix.**\\n - **The theoretical results are novel, clearly structured and formulated. The implications of each theoretical result are well discussed and framed within the literature context if relevant.**\\n - **The proposed algorithms are experimentally tested on the synthetic dataset under various scenarios. The method is further applied to real-life dataset.**\\n \\n**Our Response**: Thank you for the positive feedback. 
We are glad you recognize the novelty of the theoretical results and the mathematical rigor of the paper. We also want to thank you for dedicating your time and expertise to review our paper and provide constructive comments and suggestions. We appreciate your careful review, and we have carefully addressed all comments thoroughly in our revision.\\n\\n**Weaknesses**:\\n1. **The underlying part of the model is estimating survival function. It would be interesting to include the experiment with different survival function estimators to assess how sensitive the method is to the potential biases of the estimator of the survival times as this part of the model is not properly covered by presented theory.**\\n\\n2. **It is not clear how sensitive the proposed method is to the noise in the underlying data.**\\n\\n3. **Authors did not provide much insights into how well the algorithm scales to the larger datasets.**\\n\\n**Our Response**: In this revision, we have carefully addressed each of your comments to improve upon the weakness of our initial submission. Specifically, in response to your suggestion about survivor function estimation, we have now conducted additional experiments and reported the results in Figure G.5, with detailed comments provided in Appendix G.3. Regarding your comments on the sensitivity of our methods to noise level and data size, we have thoroughly addressed them. Details are provided in the following bullet points, corresponding to your individual questions.\\n\\n**Questions**:\\n1. **What are the scaling capabilities of the models to the large datasets? (provided datasets are of order of magnitude $10^3$? It would be good to see experiments for varying sizes of the dataset.**\\n\\n**Our Response**: Thank you for raising this important question. To address your concern, in this revision we have conducted additional experiments using a larger dataset with $n=10^3$ to evaluate the scalability of our methods. 
Results for these experiments have now been summarized in Figure G.1 in Appendix G.1.\\n\\n2. **The imputation algorithm and the CUT boosting in the presented experiments (especially Figure 3, E.3) seems to produce similar results. Can you provide more insights into why that is?**\\n\\n**Our Response**: Thank you for raising this interesting point, which we have now carefully addressed. In our initial version, we did not include comparison details due to space limitations. In this revision, we have added more information to compare the two algorithms, explaining their differences and similarities. Specifically, we have inserted the following material in Section 3.2 for greater clarity.\\n\\n3. **It would be great to see performance for other real-life datasets with already known benchmarks to demonstrate that the presented method performs well against other well explored survival dataset problems.**\\n\\n**Our Response**: Thank you for this suggestion. We agree that benchmarking the proposed methods on multiple survival datasets would demonstrate the broader applicability of our approaches.\\nIn response to your suggestion, we have included another dataset -- the Bangkok HIV data -- in this revision and analyzed it using the proposed methods. The results are now presented alongside the initial analysis of the Signal Tandmobiel dataset in Figure 3 in Section 5 (Here, results of implementing the Cox model are also included in response to other reviewers' suggestion).\\n\\n4. **For the synthetic dataset, e.g. scenario 1, how does the performance changes based on the amount of the noise in the data, so when you change the variance in the error term, e.g. instead of only 0.25 variance for normal, when you increase the noise for to 0.5, 1, 1.5?**\\n\\n**Our Response**: Thank you for the suggestion. 
In this revision, we have incorporated your feedback by conducting additional experiments to extend the initial analysis with $\\sigma=0.25$ to include $\\sigma=0.5$, $\\sigma=1$, and $\\sigma=1.5$. The results are summarized in Figure G.4, together with some comments presented in Appendix G.2.\"}", "{\"summary\": \"The manuscript introduces a framework that extends boosting techniques to effectively handle interval-censored data. The authors offer a thorough theoretical analysis, examining mean squared error (MSE), variance, and bias, building on foundational results from B\\u00fchlmann & Yu (2003). 
The proposed methods are further substantiated through experiments conducted on both synthetic and real-world datasets, demonstrating the framework\\u2019s applicability and effectiveness.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The manuscript is clearly written and accessible, making the methodology easy to follow.\", \"The authors conduct a rigorous theoretical analysis of their proposed methods, assessing mean squared error (MSE), variance, and bias to establish a solid foundation for their approach.\"], \"weaknesses\": [\"In Proposition 2, using a distinct notation for the smoother matrix would help prevent confusion with the survival function $S$.\", \"Adding experiments for the Cox proportional hazards model would strengthen the manuscript's applicability and support its conclusions.\", \"Additional ensemble methods exist for interval-censored data, such as:\", \"Yao, W., Frydman, H., and Simonoff, J.S., 2021. An ensemble method for interval-censored time-to-event data. Biostatistics, 22(1), pp.198-213.\", \"Including comparisons with such methods would enhance the evaluation of the proposed approach.\", \"In both synthetic and real data experiments, the proposed CUT method and the imputation approach (Bian et al. 2024a) yield comparable results, with the imputation method occasionally performing better (e.g., in real data). Could the authors further elaborate on the unique advantages of their proposed method?\", \"In Appendix A, the authors outline several conditions primarily related to the smoother matrix $S$. Providing relevant examples of $S$ that meet these conditions would improve clarity.\", \"All theoretical results assume consistent estimation of the survival function. 
How does the convergence rate of the survival function estimator impact the final estimator?\"], \"questions\": [\"In page 3 line 108, should the survival probability be P(Y>s)?\", \"In page 3 Equation (5), what is the update compared to Equation (3)?\", \"In traditional boosting methods, a learning rate is typically included. Is there a similar parameter in Algorithm 1?\", \"In Condition (C1), what does $Q_i^2$ mean? Does it denote $Q_i'Q_i$ as $Q_i$ is an eigenvector?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript extends the boosting framework to regression and classification tasks with interval-censored data. The framework is particularly applicable in survival analysis, due to the prevalence of interval-censored data. The approach combines the L2Boost method with the censoring unbiased transformation (CUT) approach to form L2Boost-CUT, which is implemented via functional gradient descent. The manuscript also introduces an impute method based on L2Boost. L2Boost-CUT is applied to both synthetic and real data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is generally very well written and easy to follow. The authors excellently introduce the problem and previous methods. The manuscript also clearly motivates the problem. Specific strengths are:\\n\\n1. The manuscript contains detailed and extensive theoretical results, providing strong theoretical guarantees. \\n2. The theoretical results are described very well. Each theorem and proposition is described in terms of its broader importance and the purpose for it being used. This is very important in theoretical papers, to allow the reader to follow without getting bogged down by extensive theory.\\n3. The manuscript tackles an interesting problem and the contribution is important. 
The method is applicable to many important settings.\", \"weaknesses\": \"My main concerns are regarding the empirical results. I'm not convinced by both the scope and outcome of the results. Specifically:\\n\\n1. Both the synthetic and real data studies are very limited in scope. For an approach that is so widely useful, it is a shame that it has not been applied to more settings to showcase its usefulness. It would be insightful to find and present scenarios showing the extreme ends of performance (very good and very bad performance) of the approach and discuss why this is.\\n2. The results from the studies are not overly convincing. The method performs the same as imputation and standard boosting (which already exists) and does not provide a large benefit over the naive approach in the synthetic study. I have asked a clarifying question around this in the Questions, so this weakness can be addressed. \\n3. I think there should be more discussion on the assumptions and limitations of the method. This would help to understand under which scenarios we might expect the method to not do well.\\n4. There is no discussion on computational cost or complexity. This could either be a theoretical complexity analysis or even just simple timing tests. Either would provide useful insights for practitioners who wish to decide on whether to apply the method to their problem.\\n5. There is no reproducibility. It would be helpful to have reproducible code, even for just a basic example. This would also partly help alleviate weakness 4, as I could run the code myself and observe the execution time. \\n\\nI would like to see more evidence that the approach is beneficial in practice. I think the paper could do with some rearrangement: some of the theoretical results moved to the appendix to make space for further empirical results.\", \"questions\": \"1. How easily could the framework be extended to other boosting approaches (such as XG)?\\n2. 
Relating to the weakness point above: what's the benefit of L2Boost-CUT if it provides the same results as imputation + standard boosting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents boosting algorithms for regressors and classifiers for the censored data. The censoring naturally introduces a bias into the estimator as it collapses the mass of the tails of the underlying distribution on the censoring boundary. In particular, the main contribution of the paper is in presenting the novel boosting framework with censoring unbiased transformation which focuses on modifying the loss function. Authors also present a model for imputing the missing data in censoring context. Authors investigate the theoretical properties of the algorithms and conclude that incorporating the spline estimators into the base learner results in optimal MSE rates for the predictor.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is mathematically well written. The notation is consistent and precise. Definitions and proofs are provided in the Appendix.\", \"The theoretical results are novel, clearly structured and formulated. The implications of each theoretical result are well discussed and framed within the literature context if relevant.\", \"The proposed algorithms are experimentally tested on the synthetic dataset under various scenarios. The method is further applied to real-life dataset.\"], \"weaknesses\": \"1) The underlying part of the model is estimating survival function. 
It would be interesting to include experiments with different survival function estimators to assess how sensitive the method is to the potential biases of the estimator of the survival times, as this part of the model is not properly covered by the presented theory.\\n2) It is not clear how sensitive the proposed method is to the noise in the underlying data.\\n3) The authors did not provide much insight into how well the algorithm scales to larger datasets.\", \"questions\": \"1) What are the scaling capabilities of the models to large datasets? (The provided datasets are of order of magnitude $10^3$.) It would be good to see experiments for varying sizes of the dataset.\\n2) The imputation algorithm and the CUT boosting in the presented experiments (especially Figure 3, E.3) seem to produce similar results. Can you provide more insights into why that is?\\n3) It would be great to see performance for other real-life datasets with already known benchmarks to demonstrate that the presented method performs well against other well-explored survival dataset problems. \\n4) For the synthetic dataset, e.g. scenario 1, how does the performance change based on the amount of noise in the data, i.e., when you change the variance of the error term $\\epsilon_i$: instead of a variance of only 0.25 for the normal error, what happens when you increase the variance to 0.5, 1.0, or 1.5?
Could the authors clarify why this discrepancy exists?\\n\\nFurthermore, in Section 3.2, the authors mention:\\n- \\\"These two algorithms are derived from distinct perspectives in addressing interval-censored data, and they may be expected to yield different results.\\\"\\n- \\\"As a result, L2Boost-CUT and L2Boost-IMP differ mainly in the stopping criterion, suggesting that they may often yield similar results.\\\"\\n\\nIf L2Boost-IMP is indeed the imputation method proposed in Bian et al. (2024a), this raises questions about the novelty of the proposed methods. Specifically, if the differences between the two methods lie only in the stopping criterion, and the results are often similar or marginally different, it becomes unclear what new insights or advantages the proposed method offers compared to the existing approach. As such, I see no compelling reason to revise my current score.\"}", "{\"comment\": \"Dear Reviewer VgwR:\\n\\nMany thanks for your prompt response. We value your comments, suggestions, and feedback, which greatly help us sharpen the presentation of our work. With kind regards.\"}", "{\"title\": \"Consolidated Rebuttal to All Reviewers\", \"comment\": \"Dear reviewers,\\n\\nThank you for dedicating your valuable time and effort to reviewing our paper. We appreciate your insightful feedback and constructive comments, which have greatly helped us enhance the presentation of our work. We are grateful for your recognition of the contributions and strengths of our paper, as summarized below:\\n- The manuscript tackles an **interesting problem and the contribution is important**. The method is applicable to many important settings.\\n- **The theoretical results are novel, clearly structured and formulated**. 
The implications of each theoretical result are **well discussed and framed** within the literature context if relevant.\\n- The article provides a **thorough theoretical analysis** of the proposed algorithm, which is really **important to ground further applicative work using this method in practice**.\\n- The paper is **mathematically well written**. The notation is consistent and precise. The paper is easy to follow. The authors **excellently introduce** the problem and previous methods. The manuscript also clearly motivates the problem.\\n- **The theoretical results are described very well**. Each theorem and proposition is described in terms of its broader importance and the purpose for it being used. **This is very important in theoretical papers**, to allow the reader to follow without getting bogged down by extensive theory.\\n- The proposed algorithms are **experimentally tested** on the synthetic dataset under various scenarios. The method is further applied to a real-life dataset.\\n\\nRegarding the weaknesses of our article, we have carefully reviewed each of your queries, concerns, and comments. In preparing the revised\\nversion, we have thoroughly addressed each comment to improve the clarity and presentation of our paper. Here, we highlight the key changes made in this revision. Detailed explanations for the changes we have made to each specific comment are provided in the responses to each reviewer.\\n- Responses to the query about the **similarity of the imputation algorithm and the CUT boosting** (from **Reviewers 4h4i, uzby, VgwR, Npps**):\\n\\nIn our initial version, we did not include comparison details due to space limitations. 
In this revision, we have added detailed descriptions to compare the two algorithms, explaining their differences and similarities in Section 3.2 for greater clarity.\\n- Responses to the **clarification of the proposed methods and ICRF** (from **Reviewers mmSc, uzby, VgwR**):\\n\\nOur method builds on survival function estimation, with ICRF utilized as a component of our framework. Therefore, our work is not intended to improve ICRF but to leverage it within a broader algorithm to achieve new predictive objectives. As a result, the goal of our paper differs from that of ICRF, and direct numerical comparisons between the two would not be meaningful. For clarity, we have included some discussions at the end of Section 3.3 in this revision.\\n- Responses to the suggestion of **including additional baselines**, like the Cox model and the method by Yao et al. (2021) (from **Reviewers mmSc, uzby**):\\n \\nWe have included additional comparison results in the revised manuscript. Specifically, for data analysis, we now present results using the Cox model in Figure 3 of Section 5. Results from experiments involving the Cox model and the method by Yao et al. (2021) are now shown in Figure G.4 in Appendix G.2.\\n- Responses to the comments on **sensitivity** of the proposed methods (from **Reviewer 4h4i**) and the use of an **additional metric** to summarize experiment results (from **Reviewer mmSc**):\\n \\nTo address these comments, we have now conducted additional experiments and reported the results using the additional metric, Kendall's $\\tau$ (denoted SKDT in the manuscript), along with the initial metrics. 
Please see its definition in Appendix F.1, and the results reported in Figure 1 of Section 5 and Figure G.4 in Appendix G.2, together with the related comments in those places.\\n- Responses to the comments on **extensions and limitations** of our methods (from **Reviewers VgwR, Npps**):\\n\\nIn response to these comments, we have included possible extensions in Appendix E.3 and revised Section 6 to discuss the limitations of our methods.\\n\\nFinally, we have carefully revised the manuscript to make the presentation more concise and informative. Key messages are presented in the main text, while lengthy technical details, discussions, and additional experiments are provided in the appendices with clear headings for each topic. We have also proofread the paper to correct spelling errors. For your convenience, we have marked the major revisions in red. \\n\\nThank you for your constructive comments and suggestions, which have greatly enhanced our manuscript.\"}", "{\"comment\": \"We appreciate your thoughtful feedback and for taking the time to review our rebuttal. We are grateful for your increased assessment of our work. Below, we would like to further address your remaining concerns in more detail.\\n\\n1. **Regarding the Contribution and Performance Dependence on Imputation:**\\n\\nWe would like to address the perception of our contributions, namely that \\\"it seems that most of the performance is due to the use of another algorithm to impute the `missing' data, and then the application of rather standard machine learning tools.\\\"\\n\\nWhile we agree that the proposed $L_2$Boost-IMP employs **imputation** as a first step, the novelty of our approach lies in its tailored design for the **unique challenges posed by interval-censored data**. Unlike standard imputation techniques, our approach ensures that the imputed values are consistent with the underlying data distribution. 
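To make the distribution-consistent imputation idea concrete, below is a minimal sketch of imputing an interval-censored response by its conditional mean under an estimated survival function. This is an illustration only, not the paper's exact transformation in (7): the discrete grid, the function name `impute_interval`, and the midpoint fallback are our own simplifying assumptions.

```python
# Illustrative sketch: impute an interval-censored response known only to lie
# in (l, r] by its conditional mean under an estimated step survival function.
# surv[j] = P(Y > grid[j]); the mass at grid[j] is surv[j-1] - surv[j].

def impute_interval(l, r, grid, surv):
    """Conditional mean E[Y | l < Y <= r] under the estimated distribution."""
    num, denom, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(grid, surv):
        mass = prev_s - s              # estimated probability mass at t
        if l < t <= r:
            num += t * mass
            denom += mass
        prev_s = s
    if denom == 0.0:                   # no estimated mass in (l, r]: fall back
        return 0.5 * (l + r)           # to the interval midpoint
    return num / denom

# Example with a uniform estimated survival curve on {1, ..., 10}:
grid = list(range(1, 11))
surv = [1.0 - j / 10 for j in range(1, 11)]   # equal mass 0.1 at each point
y_imp = impute_interval(2, 5, grid, surv)     # mass at t = 3, 4, 5
```

Unlike a naive midpoint rule, the imputed value shifts toward wherever the estimated distribution places its mass within the observed interval, which is the sense in which the imputation stays consistent with the underlying distribution.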
Furthermore, the integration of these imputed values with machine learning tools provides a robust framework for handling interval-censored data. This combination goes beyond a simple `impute-and-apply' approach.\\n\\nHandling imputation for interval-censored responses is significantly more complex than standard imputation for missing responses. To draw an analogy, this extension mirrors the difference between calculus and arithmetic: while calculus builds upon the foundational elements of arithmetic, it represents a substantial conceptual and methodological leap. Similarly, our approach is not a straightforward application of standard imputation and machine learning tools but rather a novel framework designed to address the intricacies of interval-censored data.\\n\\nTo further clarify, we highlight the differences between our $L_2$Boost-IMP and the usual procedure for imputation of missing responses:\\n\\n (a) **Complexity in Representing Missingness:**\\n\\nIn the context of missing responses, a single missing data indicator, say $\\Delta_i$ ($\\Delta_i= 1$ if response $Y_i$ is observed, and 0 otherwise), suffices to reflect the missingness status for each subject $i$. However, to accommodate interval-censored responses, our $L_2$Boost-IMP method requires a more complex representation. Instead of a scalar missing data indicator\\n$\\Delta_i$, a sequence of censoring indicators $\\Delta_{i,j}$ is needed for each subject $i$, corresponding to different intervals indexed by $j$. \\n\\n(b) **Complexity in Constructing Pseudo-Outcomes:**\\n\\nIn the context of missing responses, an imputation model based on the conditional distribution $f(y_i|X_i, \\Delta_i=0)$ is often\\nused to determine an imputed value for a missing response. However, our $L_2$Boost-IMP method requires a more complex approach. 
Determining the pseudo-outcome, as outlined in (7), involves estimating the survivor function, which significantly increases the complexity of implementing the $L_2$Boost-IMP method compared to standard imputation procedures.\"}", "{\"title\": \"Responses to Reviewer mmSc\", \"comment\": \"**Strengths**:\\n\\n**The article is clearly written and provides a thorough theoretical analysis of the proposed algorithm, which is really important to ground further applicative work using this method in practice.**\\n\\n**Our Response**: Thank you for your positive feedback and for recognizing the importance and thoroughness of our work. We also want to thank you for dedicating your time and expertise to review our paper and provide constructive comments and suggestions. We appreciate your careful review, and we will address all comments thoroughly in our revision.\\n\\n**Weaknesses**:\\n\\n**There are several limitations of this work.**\\n- **my main concern is that the paper does not clarify the empirical improvement compared to the use of ICRF. It is slightly different indeed to use boosted trees compared to random forests (bagging), but the article fails to show a clear gain in performance, both on simulated and real data.**\\n\\n**Our Response**: Thank you for raising this concern. We would like to clarify the distinction of our work in relation to ICRF. ICRF (Cho et al., 2022) is a tree-based, nonparametric method specifically designed for estimating survival functions for interval-censored data, which differs from our objective. We focus on developing boosting methods for regression and classification tasks with interval-censored data. Specifically, we aim to construct a predictive model $f(\\\\cdot)$ that well predicts a transformed target variable $g(Y)$, where $g(\\\\cdot)$ can take various forms to address different tasks. 
These tasks include predicting the survival time, the survival status at specific time points, or any functional outcome of survival time of interest.\\n\\nOur method builds upon survival function estimation as a basic step, for which ICRF is utilized as a component of our framework. Therefore, our work is not designed to improve ICRF but rather to leverage it within a broader algorithm to achieve new predictive objectives. Consequently, the goal of our paper is distinct from ICRF\\u2019s, and direct numerical comparisons between the two would not be meaningful. For greater clarity, in this revision we have included some discussions at the end of Section 3.3 to address your comment, as well as a related comment from another reviewer.\\n\\n- **there should be more baselines, like Cox models, even if they are designed for right-censored data rather than interval data. There are very few baselines included in the benchmark presented here. It's unclear whether the I method is similar to ICRF or a novel method. If not, ICRF should be included as a baseline to demonstrate the contributions of this work clearly.**\\n\\n\\n**Our Response**: Thank you for your feedback. Regarding your comment on ICRF, as explained in the response to the previous bullet point, the I method (now referred to as the L2Boost-IMP method, or IMP for short) is not similar to ICRF. Instead, ICRF constitutes a component of the I method and is used specifically for estimating the survivor function. \\n\\nIn response to your suggestion regarding the use of the Cox model, we have included additional results derived from it in the revised manuscript. Specifically, we now present results for both the data analysis and experimental settings involving the Cox model. 
They now appear in Figure 3 in Section 5, as well as Figure G.4 in Appendix G.2.\\n\\n- **I don\\u2019t think there is any mention that the code to use L2Boost-CUT or L2Boost-Impute will be made available, is it?**\\n\\n**Our Response**: Thank you for your comment. We plan to make the code publicly available after acceptance and will post it on GitHub at that time. In this revision, we have added a note after Proposition 1 in Section 3.2 to inform readers of this intention.\\n\\n**Questions:**\\n- **are there any additional performance metrics that could be used, like the c-index (or a variation of it)?**\\n\\n**Our Response**: Thank you for your thoughtful suggestion regarding additional performance metrics. To address this, we have incorporated Kendall\\u2019s $\\tau$ to evaluate the concordance between the estimator and its target. Kendall\\u2019s $\\tau$ quantifies the ordinal association between two variables by comparing concordant and discordant pairs. This metric captures both the consistency of the predicted ranking with the true ranking and the extent of any discrepancies between them. Alongside the metrics considered in the initial version, we now present experimental results using Kendall\\u2019s $\\tau$ in Figure 1, as well as Figures G.1, G.2, G.4, and G.5 in Appendix G. Additionally, we have included the discussions in Appendix F.1, Section 5, and Appendix G.1.\"}", "{\"title\": \"Responses to Reviewer VgwR (1/3)\", \"comment\": \"**Strengths**:\\n- **The paper found that, by CUT, the existing L2Boost algorithm can be applied to interval-censored datasets.**\\n- **There is a constraint that we need to estimate the joint distribution of $X$ and $Y$ without knowing $Y$ itself but knowing only intervals. 
They showed that estimating the distribution by ICRF experimentally worked well.**\\n- **Theoretical convergence rates and lower bounds of MSE (for regressions) and misclassifications (for classifications) are presented.**\\n\\n**Our Response**: Thank you for dedicating your time and expertise to review our paper and provide constructive comments and suggestions. We appreciate your careful review, and we will address all comments thoroughly in our revision.\\n\\n**Weaknesses**:\\n- **The procedure itself looks somewhat simple; first we apply CUT and then L2boost. If there is a difficulty or interesting results of these combinations, please emphasize.**\\n- **Perhaps, the combination of boosting and interval-censored data is the novelty? If so, please emphasize the discussion on the novelty (e.g., limitations in existing methods).**\\n \\n**Our Response**: Thank you for your comments and suggestions, which have helped us revise the paper and more clearly articulate the contributions of our work.\\nThe key innovation of our work lies in presenting a framework that extends boosting methods to handle interval-censored data \\u2014 a crucial yet underexplored problem in machine learning. Standard boosting algorithms are not directly applicable in this context due to the challenges posed by interval censoring. Our approach addresses this gap by leveraging the censoring-unbiased transformation (CUT) to construct unbiased loss functions tailored for interval-censored data.\\n\\nMotivated by your comment and feedback from other reviewers, we have revised the paper to include extensions of our framework to other loss functions that can be used to handle interval-censored data. 
We have also highlighted the uniqueness of the $L_2$ loss function employed in our current theoretical development and emphasized that establishing similar theoretical results for these extensions is not straightforward.\\n\\nIn response to your suggestion, we have included a discussion on possible extensions in Appendix E.3 as well as the limitations of our methods in Section 6.\"}", "{\"title\": \"Responses to Reviewer Npps (1/2)\", \"comment\": \"**Strengths**:\\n\\n**The paper is generally very well written and easy to follow. The authors excellently introduce the problem and previous methods. The manuscript also clearly motivates the problem. Specific strengths are:**\\n\\n1. **The manuscript contains detailed and extensive theoretical results, providing strong theoretical guarantees.**\\n\\n2. **The theoretical results are described very well. Each theorem and proposition is described in terms of its broader importance and the purpose for it being used. This is very important in theoretical papers, to allow the reader to follow without getting bogged down by extensive theory.**\\n\\n3. **The manuscript tackles an interesting problem and the contribution is important. The method is applicable to many important settings.**\\n\\n**Our Response**: Thank you for your positive feedback. We are delighted that you found the manuscript addresses an interesting problem and the contribution is important. We also want to thank you for dedicating your time and expertise to review our paper and provide constructive comments and suggestions. We appreciate your careful review, and we will address all comments thoroughly in our revision.\\n\\n**Weaknesses**:\\n\\n**My main concerns are regarding the empirical results. I'm not convinced by both the scope and outcome of the results. Specifically:**\\n\\n1. **Both the synthetic and real data studies are very limited in scope. 
For an approach that is so widely useful, it is a shame that it has not been applied to more settings to showcase its usefulness. It would be insightful to find and present scenarios showing the extreme ends of performance (very good and very bad performance) of the approach and discuss why this is.**\\n\\n**Our Response**: Thank you for your comments and suggestions. In this revised version, we have carefully addressed your concerns by conducting additional experiments and data analyses to explore diverse settings.\\n\\nFirst, we have now included an additional real-world dataset to further validate the practical utility of our methods. The analysis results are now presented in Figure 3 and discussed at the end of Section 5.\\n\\nSecond, we have now expanded the synthetic experiments to various scenarios in Appendix G, including varying levels of noise, sample sizes, and data generation models, to provide a more comprehensive understanding of the performance of our proposed methods.\\n\\nFurthermore, as ICRF constitutes a component of our proposed methods (clarified at the end of Section 3.3), we have now conducted further experiments to evaluate how different implementations of ICRF influence the performance of our proposed methods. Results and discussions are now included in Appendix G.3. \\n\\n2. **The results from the studies are not overly convincing. The method performs the same as imputation and standard boosting (which already exists) and does not provide a large benefit over the naive approach in the synthetic study. I have asked a clarifying question about this in the Questions, so this weakness can be addressed.**\\n\\n**Our Response**: Thank you for your comments, which we have now carefully addressed. In our initial version, we did not include comparison details due to space limitations. 
In this revision, we have added more information to compare the two algorithms, explaining their differences and similarities, in Section 3.2 for greater clarity.\\n\\nFurthermore, in this revised manuscript, we have conducted additional experiments; details can be found in Appendix G. The results demonstrate the practical utility of our proposed methods.\\n\\n3. **I think there should be more discussion on the assumptions and limitations of the method. This would help to understand under which scenarios we might expect the method to not do well.**\\n\\n**Our Response**: To address your comments, we have now included the discussion in Section 6.\\n\\n4. **There is no discussion on computational cost or complexity. This could either be a theoretical complexity analysis or even just simple timing tests. Either would provide useful insights for practitioners who wish to decide on whether to apply the method to their problem.**\\n\\n**Our Response**: Thank you for highlighting the importance of discussing computational complexity. In response, we have now included the discussion in Appendix E.2 in this revision. In addition, we have re-run the experiments and added computational timing comparisons for better insights into finite-sample implementation. These results are now reported in Table F.1 of Appendix F.3.\"}", "{\"comment\": \"I appreciate the effort and time taken to respond to my review. I have considered your responses carefully and will leave my score as it is.\"}", "{\"comment\": \"The authors provided interesting answers and explained some parts of the paper that were still slightly unclear to me. I am still not entirely convinced by the contribution, as it seems that most of the performance is due to the use of another algorithm to impute the \\\"missing\\\" data, and then the application of rather standard machine learning tools. 
However, my initial grade does not reflect the quality of the paper, so I increase my grade to 5.\\nI regret that the code was not submitted in an anonymous way so that it could be reviewed with the paper. One of the main interests of the article is its contribution to defining good practices when using censored data, which cannot be done with bad code or no code.\"}", "{\"metareview\": \"This paper has been borderline in the review process, with a narrow range of ratings but most reviewers judging its content to be good. In addition to taking into account the rebuttal process, I read the paper.\\n\\nRegarding the rebuttal process, I personally appreciated the attention to detail in the authors' reply to uzby on the differences with the work of Bian et al. (2024a). As seen from the updated draft, the authors have made very substantial updates to their draft during the rebuttal process, which is very appreciated. I appreciate the additional work on noise handling, in which the authors have also added more baselines, and more metrics in separate experiments. The particular care given overall to the replies to all specific comments made by reviewers stands out.\\n\\nNow, regarding the paper's content, I found it both interesting, since the problem is indeed nontrivial for boosting, and well treated, in a work with a good balance between theory and experiments. I only regret that the paper does not dig into boosting properties as first designed in Valiant's model, but this is a detail. I trust the authors will make their code available. 
I appreciated the attention to detail given in the theory section and the extensive experiments with many baselines used.\\n\\nIn the end, taking into account the authors' care in their responses during rebuttal and the numerous updates to the draft, I consider this paper worthy of presentation.\", \"additional_comments_on_reviewer_discussion\": \"The authors have done an excellent job of replying to each reviewer, tackling many different subjects / properties and being very specific on each of them, which made reading / judging / comparing very easy.\"}", "{\"comment\": \"2. **Our Contributions Go Beyond Introducing the $L_2$Boost-IMP Method:**\\n\\nOur work presents a comprehensive framework that significantly advances the applicability of boosting methods to interval-censored data -- a critical yet underexplored challenge in machine learning. Key contributions include:\\n \\n (a) **Two Complementary Methods:**\\n\\nIn addition to introducing the $L_2$Boost-IMP method, we also propose the $L_2$Boost-CUT method, which approaches the problem from a different perspective. This method focuses on **adjusting the loss function** so its expectation recovers that of the original $L_2$ loss $L$, whereas the $L_2$Boost-IMP method **preserves** the functional form of the original loss $L$ but **replaces** its first argument with the transformed response $\\tilde Y_1({\\cal O}_i)$ in (7).\\n \\n (i) **Difference Between the Two Methods:**\\n \\n The CUT-based loss function $L_{\\rm CUT}({\\cal O}_i, f(X_i))$ in (9) modifies the $L_2$ loss, denoted $L(u,v)$, to ensure Proposition 1 holds. Because this adjusted loss function is **quadratic** in the difference between its first and second arguments, it is not identical to the $L_2$ loss with its first argument imputed by the transformed response $\\tilde Y_1({\\cal O}_i)$ in (7), as used in the $L_2$Boost-IMP method. That is,\\n\\n $$ L_{\\rm CUT}({\\cal O}_i, f(X_i)) \\ne L(\\tilde Y_1({\\cal O}_i),f(X_i)). 
$$\\n\\n (ii) **Connection Between the Two Methods:**\\n\\n Despite this difference, we identify a unique connection between the two proposed methods. Due to the linear derivative of the $L_2$ loss with respect to its first argument, the increment terms in both $L_2$Boost-CUT and $L_2$Boost-IMP are closely related. This connection reflects the underlying coherence between the two approaches, despite their distinct formulations.\\n \\n (b) **Theoretical Justifications:**\\n\\n Our work goes beyond merely providing boosting procedures to handle interval-censored data; it also offers rigorous theoretical justifications, including evaluations of mean squared error (MSE), variance, and bias. These analyses significantly enrich the literature on boosting methods, particularly for data with complex censoring structures.\\n\\n (c) **Possible Extensions:**\\n\\n For general loss functions, particularly nonlinear ones, constructing an adjusted loss function like $L_{\\\\rm CUT}$ in (9) to ensure Proposition 1 holds is challenging and, in many cases, impractical. However, extending the idea of $L_2$Boost-IMP -- using the original loss function with imputed values determined by the transformed response in (7) -- provides a straightforward implementation framework, as suggested in Appendix E.3. 
\\n\\n(d) **Summary:**\\n\\nOur approaches tackle the unique challenges of interval-censored data from complementary angles:\\n\\n (i) The $L_2$Boost-CUT method introduces a novel framework by **adjusting the loss** function, while the $L_2$Boost-IMP method **leverages imputation** with transformed responses to retain the loss's original form.\\n\\n (ii) Both methods ensure that imputed values align with the complex **interval-censoring** structure of the data.\\n\\n (iii) Our **theoretical contributions and practical methodologies** collectively extend the frontiers of machine learning for censored data, providing a robust framework for broader application across various domains.\\n \\n3. **Regarding the Submission of Code:**\\n\\nWe share your view on the importance of code availability in ensuring transparency and reproducibility. Due to the constraints of the anonymous review process, we were unable to include our code at the time of submission. However, we are fully committed to making our code publicly available upon acceptance, ensuring it meets high-quality standards and provides clear implementation details.\\n\\nWe hope these clarifications address your remaining concerns and highlight the novelty and practical value of our contributions. We will further incorporate these clarifications when preparing the final version of the manuscript. Thank you once again for your thoughtful comments and constructive feedback, which have greatly improved the presentation and clarity of our work. Your time and expertise are deeply appreciated.\"}", "{\"comment\": \"Thank you for providing more detailed comments. Here, we further clarify and address the concerns you raised.\\n\\n**1. The differences between Bian et al. (2024a) and our L2Boost-IMP (IMP)**\\n\\nWe would like to clarify that our L2Boost-IMP method is fundamentally different from the approach proposed by Bian et al. 
(2024a) in the Introduction is included solely to provide context about boosting methods in diverse applications.\\n\\n Both the L2Boost-IMP method and Bian et al.'s (2024a) approach share a common use of *imputation* to create complete datasets for applying algorithms designed for full data. However, key differences between the methods pose distinct challenges in establishing their respective theoretical guarantees, as highlighted below.\\n\\n**(a) different contexts:**\\n\\nThe method proposed by Bian et al. (2024a) was developed for missing responses, where a simple missing data indicator $\\Delta_i$ ($\\Delta_i= 1$ if response $Y_i$ is observed, and 0 otherwise) suffices to reflect the missingness status for each subject $i$. In contrast, our L2Boost-IMP (IMP) method is tailored for interval-censored responses, which require a more complex representation. Instead of a scalar missing data indicator $\\Delta_i$, a sequence of censoring indicators $\\Delta_{i,j}$ is needed for each subject $i$, corresponding to different intervals indexed by $j$. \\n\\n**(b) different constructions of pseudo-outcomes:**\\n\\nBian et al. (2024a) used the Buckley-James formulation (Buckley and James 1979) by defining a pseudo-outcome as\\n$$ Y_i^* \\triangleq \\Delta_i Y_i+(1-\\Delta_i) E(Y_i|X_i, \\Delta_i=0). $$\\nHowever, our approach constructs the pseudo-outcome differently, as shown in (7), which incorporates the survivor function.\\n\\n**(c) different implementations:**\\n\\nIn determining the pseudo-outcome $Y_i^*$, Bian et al. (2024a) approximated the conditional expectation $E(Y_i|X_i, \\Delta_i=0)$ using the Monte-Carlo method. They generated a sequence of variates\\n $y_i^{(k)}$ from the conditional distribution $f(y_i|X_i, \\Delta_i=0)$ for $k =1, \\ldots, K$, where $K$ is user-specified, and computed\\nthe average $(1/K)\\sum_{k=1}^K y_i^{(k)}$. 
In contrast,\\ndetermining the pseudo-outcome in our approach, as outlined in (7), requires estimating the survivor function, making the implementation of our $L_2$Boost-IMP (IMP) method considerably more complex than that of Bian et al. (2024a).\\n\\n**(d) different properties of imputed outcome/loss function:**\\n\\nThe pseudo-outcomes in Bian et al. (2024a) satisfy the properties\\n$$ E(Y^*_i|X_i)=E(Y_i|X_i), \\\\ \\\\ \\\\mbox{and thus}, \\\\ \\\\ E(Y^*_i)=E(Y_i). $$\\n\\nHowever, pseudo-outcomes in our approach lack this property due to the complexity introduced by the interval-censoring structure of the data.\\n\\n**2. Differences between our proposed two methods: L2Boost-CUT (CUT) and L2Boost-IMP (IMP)**\\n\\nThe CUT-based loss function $L_{\\\\rm CUT}({\\\\cal O}_i, f(X_i))$ in (9) modifies the $L_2$ loss, denoted $L(u,v)$, to ensure Proposition 1 holds. However, this adjusted loss function is **quadratic** in the difference between its first and second arguments, meaning it is not identical to the $L_2$ loss with its first argument imputed by the transformed response\\n $\\\\tilde Y_1({\\\\cal O}_i)$ in (7). \\n\\nSpecifically, $$ L_{\\\\rm CUT}({\\\\cal O}_i, f(X_i)) \\\\ne L(\\\\tilde Y_1({\\\\cal O}_i),f(X_i)). $$\\n\\nThis distinction underpins our earlier statement that \\\"These two algorithms are derived from distinct perspectives in addressing interval-censored data, and they may be expected to yield different results.\\\"\\n The $L_2$Boost-CUT method focuses on **adjusting the loss function** so its expectation recovers that of the original $L_2$ loss $L$, whereas the $L_2$Boost-IMP method **preserves** the functional form of the original loss $L$ but **replaces** its first argument with the transformed response $\\\\tilde Y_1({\\\\cal O}_i)$ in (7).\\n\\n Despite this difference, since the $L_2$ loss has a derivative linear in its first argument, the increment terms in both $L_2$Boost-CUT and $L_2$Boost-IMP are closely related. 
However, if the loss function were $L_q$ with $q\\ge 3$, the development of\\n$L_q$Boost-CUT and $L_q$Boost-IMP methods would differ more substantially. Moreover, as noted in Appendix E.3, establishing theoretical guarantees for such cases is challenging.\\n\\nFor a general loss function, particularly when it is nonlinear, constructing an adjusted loss function like $L_{\\\\rm CUT}$ in (9) to ensure Proposition 1 holds is challenging, making it difficult to implement. In contrast, using the original loss function with imputed values determined by the transformed response in (7) is a straightforward implementation, though establishing theoretical results for both cases remains difficult.\\n\\nThank you for your thoughtful question on this aspect. We will incorporate these clarifications in the next revision.\"}", "{\"summary\": \"In this work the authors propose a new algorithm based on boosting for interval-censored data. They provide an extensive theoretical analysis and an empirical comparison of both simulated and real data, for which their proposed methods have a performance similar to that of the state of the art.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The article is clearly written and provides a thorough theoretical analysis of the proposed algorithm, which is really important to ground further applicative work using this method in practice.\", \"weaknesses\": [\"There are several limitations of this work.\", \"my main concern is that the paper does not clarify the empirical improvement compared to the use of ICRF. It is slightly different indeed to use boosted trees compared to random forests (bagging), but the article fails to show a clear gain in performance, both on simulated and real data\", \"there should be more baselines, like Cox models, even if they are designed for right-censored data rather than interval data. There are very few baselines included in the benchmark presented here. 
It's unclear whether the I method is similar to ICRF or a novel method. If not, ICRF should be included as a baseline to demonstrate the contributions of this work clearly.\", \"I don't think there is any mention that the code to use L2Boost-CUT or L2Boost-Impute will be made available, is it?\"], \"questions\": \"see limitations\\n- are there any additional performance metrics that could be used? like the c-index (or a variation of it)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer Npps (2/2)\", \"comment\": \"5. **There is no reproducibility. It would be helpful to have reproducible code, even for just a basic example. This would also partly help alleviate weakness 4, as I could run the code myself and observe the execution time.**\\n\\n**Our Response**: Thank you for highlighting the importance of reproducibility and suggesting the inclusion of reproducible code. We plan to make the code publicly available on GitHub upon the paper's acceptance. To address this, we have added a note after Proposition 1 in Section 3.2 to inform readers of our intention.\\n\\n6. **I would like to see more evidence that the approach is beneficial in practice. I think the paper could do with some rearrangement: some of the theoretical results moved to the appendix to make space for further empirical results.**\\n\\n**Our Response**: Thank you for the constructive suggestion. In this revised version, we have carefully organized the material to ensure it is both informative and comprehensive.\\n\\n**Questions**: \\n\\n**How easily could the framework be extended to other boosting approaches (such as XG)? Relating to the weakness point above: what's the benefit of L2Boost-CUT if it provides the same results as imputation + standard boosting?**\\n\\n**Our Response**: Thank you for your thoughtful comments and questions. 
In this revision, we have carefully addressed them. First, we respond to your comment on the extensibility to other boosting approaches. While our focus in this paper is on $L_2$Boost with the CUT-based loss function, the general principle of incorporating interval-censored data into boosting frameworks is not limited to $L_2$Boost, and the proposed framework can be adapted to other boosting methods. Depending on the form of loss functions, some extensions are straightforward with theoretical guarantees readily established by modifying our theoretical derivations. For other loss functions, while the implementation procedures can be readily formed by modifying Algorithm 1, developing sound theoretical guarantees is not straightforward, which warrants in-depth exploration. Specifically, in this revision we have included the details in Appendix E.3.\\n\\nRegarding the comparison between $L_2$Boost-CUT and $L_2$Boost-Impute, we have now made this clearer with the revisions after Proposition 1 in Section 3.2.\"}", "{\"comment\": \"Dear Reviewer 4h4i:\\n\\nMany thanks for your feedback on our rebuttal. We share your view on the importance of code availability. We have now uploaded our raw code files as supplementary materials, which include four files: readme.md (provides information about the other three files), fun.R (contains helper functions), cut.R (implements the CUT algorithm), and imp.R (implements the IMP algorithm used in the experiments in Section 5). A user-friendly version of the code, with self-contained explanations and better documentation of implementation details, will be made available on GitHub upon acceptance of the paper.\\n\\nWe value your comments, suggestions, and feedback, which greatly help us sharpen the presentation of our work. 
\\n\\nWith kind regards.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the rigorous responses and updates.\\nHowever, the **benefit of employing boosting** in the proposed problem setup is still unclear to me, although the method itself is now clear. \\nIt would be best if the performance outperformed existing methods, but that does not appear to be the case from the experimental results (although certain improvements are seen). However, it is good if there are good insights through the proposed method, for example, \\\"this is an interesting result in the field of boosting\\\" or \\\"this methodology can be extended to learning methods other than boosting\\\". What do the authors think?\"}", "{\"title\": \"Responses to Reviewer uzby\", \"comment\": \"**Strengths**:\\n- **The manuscript is clearly written and accessible, making the methodology easy to follow.**\\n- **The authors conduct a rigorous theoretical analysis of their proposed methods, assessing mean squared error (MSE), variance, and bias to establish a solid foundation for their approach.**\\n\\n**Our Response**: Thank you for your encouraging feedback. We are pleased that you found the manuscript clear and accessible, and we appreciate your recognition of the rigorous theoretical analysis. We also want to thank you for dedicating your time and expertise to review our paper and provide constructive comments and suggestions. We appreciate your careful review, and we have addressed all comments thoroughly in our revision.\\n\\n**Weaknesses**:\\n- **In Proposition 2, using a distinct notation for the smoother matrix would help prevent confusion with the survival function $S$**\\n\\n**Our Response**: Thank you for this suggestion. In the revision, we have now incorporated a different symbol, $\\\\Psi$, for improved clarity. 
\\n\\n- **Adding experiments for the Cox proportional hazards model would strengthen the manuscript's applicability and support its conclusions.**\\n\\n**Our Response**: In response to your suggestion, we have included results derived from the Cox model in both data analyses and experiments. These results are now presented in Figure 3 (Section 5) and Figure G.4 (Appendix G.2), along with corresponding comments in the respective sections.\\n\\n- **Additional ensemble methods exist for interval-censored data, such as: Yao, W., Frydman, H., and Simonoff, J.S., 2021. An ensemble method for interval-censored time-to-event data. Biostatistics, 22(1), pp.198-213. Including comparisons with such methods would enhance the evaluation of the proposed approach.**\\n\\n**Our Response**: Thank you for this suggestion, which has now been incorporated in this revision. Specifically, additional experimental results, reported in Figure G.4 in Appendix G.2, show that the results from the Yao et al. (2021) method are in good agreement with those produced from our proposed methods. However, the SKDT (Kendall's $\\\\tau$) values from this method appear slightly more variable than those from our methods.\\n\\n- **In both synthetic and real data experiments, the proposed CUT method and the imputation approach (Bian et al. 2024a) yield comparable results, with the imputation method occasionally performing better (e.g., in real data). Could the authors further elaborate on the unique advantages of their proposed method?**\\n\\n**Our Response**: Thank you for raising this interesting point, which we have now carefully addressed. In our initial version, we did not include comparison details due to space limitations. In this revision, we have added more information to compare the two algorithms, explaining their differences and similarities in Section 3.2 for greater clarity.\\n\\n- **In Appendix A, the authors outline several conditions primarily related to the smoother matrix $S$. 
Providing relevant examples of $S$ that meet these conditions would improve clarity.**\\n\\n**Our Response**: Thank you for the suggestion. In this revision, we have now inserted illustrative examples at the end of Appendix A.\\n\\n- **All theoretical results assume consistent estimation of the survival function. How does the convergence rate of the survival function estimator impact the final estimator?**\\n\\n**Our Response**: Thank you for raising this important concern. We agree that the convergence rate of the survival function estimator can affect the speed at which the estimator approaches the true value. A slower convergence rate may increase variability in the final estimator, potentially resulting in less reliable predictions, particularly in finite sample settings. To make this point clear, in this revision we have now included discussions in Section 3.3.\\n\\n**Questions**:\\n- **In page 3 line 108, should the survival probability be $P(Y>s)$?**\\n \\n**Our Response**: Thank you. We have now fixed this.\\n- **In page 3 Equation (5), what is the update compared to Equation (3)?**\\n\\n**Our Response**: For greater clarity, we have revised the presentation of Equation (5) by combining it with the follow-up description originally in Equation (6).\\n- **In traditional boosting methods, a learning rate is typically included. Is there a similar parameter in Algorithm 1?**\\n\\n**Our Response**: Thank you for this insightful question. To make this point clear, in this revision we have now included discussions in Appendix E.1.\\n- **In Condition (C1), what does $Q_i^2$ mean? Does it denote $Q_i^\\\\prime Q_i$, as $Q_i$ is an eigenvector?**\\n\\n**Our Response**: Thank you very much for pointing out this error. You are correct; it was meant to be $Q_i^\\\\top Q_i$. We have now corrected it.\"}" ] }
Dzamphz35c
Ultra-Low Accumulation Precision Inference with Block Floating Point Arithmetic
[ "Jun He", "Xin Ju", "Mei Wen", "Yasong Cao", "Zhongdi Luo", "Jianchao Yang", "Jingkui Yang", "Gang Li", "Jian Cheng" ]
Block Floating Point (BFP) quantization offers a hardware-efficient numerical range trade-off. Previous studies have quantized weights and activations to an extremely low precision using the BFP arithmetic. However, as the precision of weights and activations is reduced, we have identified that accumulation becomes a hardware bottleneck in the BFP MAC. Nevertheless, existing attempts to decrease the precision of accumulation in matrix multiplication have generally preserved model performance through training with a pre-selected, fixed accumulation precision. Nonetheless, selecting an unduly low precision leads to notable performance degradation, and these studies lack an effective approach to establish the lower precision limit, potentially incurring considerable training costs. Hence, we propose a statistical method to analyze the impact of reduced accumulation precision on the inference of deep learning applications. Due to the presence of fixed-point accumulation and floating-point accumulation in BFP matrix multiplication, we have formulated a set of equations to relate the data range of fixed-point multiply-accumulate operations and the effects of floating-point swamping to the parameters of BFP quantization, the length of accumulation, model weights, and the minimum number of bits required for accumulation, thereby determining the appropriate accumulation precision. Applied to MMLU Llama2-7B, SQuAD-v1.1 BERT-Large and BERT-Base and CIFAR-10 ResNet-50, our precision settings yield performance close to the FP32 baseline. Meanwhile, further precision reduction degrades performance, indicating our approach’s proximity to precision limits. Guided by our equations, the hardware exhibited a 13.7\%-28.7\% enhancement in area and power efficiency over high-precision accumulation under identical quantization configuration, and it demonstrated a $10.3\times$ area reduction and an $11.0\times$ power reduction compared to traditional BFP16 implementations.
[ "accumulation precision; block floating-point quantization; MAC; deep learning" ]
Reject
https://openreview.net/pdf?id=Dzamphz35c
https://openreview.net/forum?id=Dzamphz35c
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z5QaCP93GW", "yExORRGogq", "wHkeEdFORf", "vT2TrXuvYn", "vO6wNM1r9c", "smTEJ2zClv", "rnx1kIzmur", "rAvOcYERQS", "pNNd8a3tDu", "nQSbpt2yS7", "k5fbdWLCAc", "idbiuWziI2", "g646CwDelw", "cXLXpDWW7k", "YqVY6zLftr", "Wtk1XmyOAg", "V5Cn801NN4", "UuV8gbXPp4", "U1OFIY5EnG", "T2P51Gkrjm", "OYwzHa1hur", "N1PbnVe2vk", "Mp8KXZVNKM", "Mcjc0L4EFa", "LvLmSy3qPL", "LPNssMz04K", "IukkO47BC7", "I7vbyNUkFr", "CicVmbgRJ7", "CK1ipnnUSp", "7jjhumZMF2", "6jizEyjzk0", "4bjgUJ7WBX", "4TpmHNqxlu", "0sbIGJtu6R" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment" ], "note_created": [ 1732529520970, 1732704676003, 1731823249685, 1732513212366, 1732704544254, 1731662452925, 1732591397461, 1731822978591, 1731662441890, 1732161241567, 1733188311205, 1733059656840, 1732272231845, 1731662407520, 1732161256931, 1732208251660, 1731822772733, 1732237018364, 1731662389921, 1730580130635, 1732522993340, 1731590500195, 1731581615776, 1730640692351, 1731576095255, 1730532922672, 1731822388648, 1737523383847, 1732537083252, 1731499551636, 1731474960757, 1732161226582, 1734612004857, 1730124753120, 1732161267015 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission189/Reviewer_cr9Y" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_cr9Y" ], [ 
"ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_nn8r" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_wP2n" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_wP2n" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_Z6LM" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_cr9Y" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_nn8r" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ], [ "ICLR.cc/2025/Conference/Submission189/Area_Chair_3F45" ], [ "ICLR.cc/2025/Conference/Submission189/Reviewer_wP2n" ], [ "ICLR.cc/2025/Conference/Submission189/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I do not intend to claim that the proposed method lacks merit, but the way the authors have presented the method is not sufficiently clear to convince me of its novelty and advantages.\", \"key_concerns\": \"1.\\tQuality of Writing and Data Representation:\\n\\nThe 
paper's explanations and data representations lack the clarity needed to effectively convey the novelty and contributions of the proposed method. \\n\\n2.\\tAdvancements Over Conventional Statistical Approaches:\\n\\nWhile the proposed analysis appears rigorous, it primarily relies on the statistics of inputs and weights, with no clear evidence demonstrating a direct correlation with model accuracy. Additionally, the method does not appear to be clearly distinct from partial sum precision lowering techniques based on na\\u00efve scanning or basic statistical measures such as standard deviation [1, 2]. Please note that partial sum lowering is a well-established concept in in-memory computing, where partial sum precision plays a critical role in efficiency. To establish the superiority of the proposed approach, the paper should explicitly demonstrate its advantages over na\\u00efve approaches. For instance, the paper could compare the correlation between the proposed statistical measures and model accuracy with those of conventional statistical measures, illustrating how the proposed analysis provides better guidance for precision reduction.\\n\\n3.\\tUncertainty About the Contributions to Bit Precision Reduction:\\n\\nWhile reducing bit precision as much as possible is important, it is unclear whether the extreme bit reductions achieved in this work are solely attributable to the proposed method. For example, the segmented approach appears to be a novel and meaningful contribution, as it improves precision reduction compared to the non-segmented approach. However, attributing the absolute reduction solely to the proposed method seems speculative without stronger evidence or analysis.\\n\\n4.\\tMissing Clarifications on Proposed Methods:\\n\\nCertain aspects of the proposed methods lack sufficient detail. For example, the description of segmented inter-block accumulation is unclear. 
The paper states that inter-block accumulation involves summing FP-converted partial sums using an FP accumulator. However, what does \\\"segmented inter-block accumulation\\\" mean in this context? Does it imply that segments are accumulated using integers, or does it refer to an accumulation order where sums are computed within segments first and then aggregated across segments? Clarifying these details is essential for a complete understanding of the method.\\n\\n[1] Lee, Juhyoung, et al. \\\"ECIM: exponent computing in memory for an energy-efficient heterogeneous floating-point DNN training processor.\\\" IEEE Micro 42.1 (2021): 99-107.\\n\\n[2] Sun, Hanbo, et al. \\\"An energy-efficient quantized and regularized training framework for processing-in-memory accelerators.\\\" 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2020.\"}", "{\"comment\": \"Thank you for your review and valuable suggestions. Wishing you a joyful life.\"}", "{\"comment\": \"We have added the results of the training task in the latest submitted paper. The newly added content is in Appendix E, please review it.\"}", "{\"comment\": \"Although the authors' reply and the revisions to the paper have partially improved its quality, there are still too many unresolved issues to make the proposed solution compelling. Therefore, I will retain my original score.\", \"key_issues\": \"1.\\tThe explanations and definitions of several terms used in the paper remain unclear and insufficiently detailed. This lack of clarity makes it difficult to fully understand the methodology and its implications.\\n\\n2.\\tI do not see the value or novelty of the proposed intra-block partial sum analysis and FnRR-based analysis. 
Both approaches utilize statistical properties of the layers, but the paper does not explain how these analyses offer any significant advantage over simpler and widely adopted methods such as min/max or 3-sigma-based truncation.\\n\\n3.\\tThe paper lacks a robust theoretical foundation to demonstrate how the proposed approach preserves accuracy. Despite this, it claims that the inter-block accumulation precision can be reduced to a bitwidth of 2\\u20133 for BFP8 Seg (Table 3). This reduction seems overly aggressive and raises concerns about whether the baseline FP32 precision used in the comparisons is unnecessarily high, potentially skewing the evaluation.\\n\\n4.\\tFigure 5 still lacks proper line descriptions, making it difficult to interpret the data presented.\"}", "{\"comment\": \"Have I solved your question? Looking forward to your reply.\"}", "{\"comment\": \"In the latest submitted version, I have modified some expressions in the introduction to better convey our main idea. I have mainly modified lines 60 to 80. Please kindly review and correct me.\"}", "{\"title\": \"Thank you for the answers.\", \"comment\": \"I have read the response from the authors and other reviewers. I would like to keep the score.\"}", "{\"comment\": \"According to your suggestion, we applied our method to the image classification training task of ResNet18 on CIFAR-10. And we have added the results of the training task in the latest submitted paper. The newly added content is in Appendix E, please review it. According to our experimental results, in the training process, the data accuracy requirements of forward accumulation are the highest. In other words, according to our theory, the cumulative bit width selected for the forward process can also be applied to the reverse and gradient solutions.\"}", "{\"comment\": \"In the latest submitted version, I have modified some expressions in the introduction to better convey our main idea. I have mainly modified lines 60 to 80. 
Please kindly review and correct me.\"}", "{\"comment\": \"May I ask whether I have solved your question? Looking forward to your reply.\"}", "{\"comment\": \"Thank you for your review and valuable suggestions. Wishing you a joyful life.\"}", "{\"comment\": \"Thank you for your responses to my concerns. As the discussion period is coming to an end, I have reviewed the revised PDF and your responses to other reviewers once again.\\nRegardless of the contribution, the overall explanations in the paper make it very difficult for readers to follow. It is hard to enumerate all the issues. For example, the definition of a term in line 650 is incorrect. While the your response addressed this, Section 5.5 of the paper does not mention the process or tools related to power and area calculations. Besides, more explanations should exist for Figure 1. \\n\\nThanks for your contribution again. However, considering the current state of the paper, I will reflect this in my rating.\"}", "{\"comment\": \"We have included the experimental results regarding the reduction of accumulation precision following quantization via stochastic rounding in Appendix F for your reference. We look forward to your response.\"}", "{\"comment\": \"In the latest submitted version, I have modified some expressions in the introduction to better convey our main idea. I have mainly modified lines 60 to 80. Please kindly review and correct me.\"}", "{\"comment\": \"May I ask whether I have solved your question? Looking forward to your reply.\"}", "{\"title\": \"Answer for the first rebuttal\", \"comment\": \"Thanks for your answer.\\n\\nBut there are still several concerns.\\n\\nFirstly, 3 $\\\\sigma$ principle was assumed to determine the accuracy. Is it always valid in diverse models? \\n\\nSecondly, in your assumption, the unbiased data were assumed. Is this assumption also valid in any cases?\\n\\nAlso, the revised manuscript has many typos. 
For example, lines 329, 333, and 334 have non-italic terms: n1, n2, n3.\\nIn Figure 4, what is the meaning of \"f(n) rapidly approaches 1, whereas above it, f(n) increases swiftly\"?\\nIn line 276, I think that Theorem 1 is for FnRR in Eq. (7). Because this theorem only has equations, more explanation would be helpful.\\n\\nConsidering the unsolved concerns, I will keep my rating at this time.\"}", "{\"comment\": \"We have added the results of the training task in the latest submitted paper. The newly added content is in Appendix E, please review it. According to our experimental results, in the training process, the data accuracy requirements of forward accumulation are the highest. As you said, the cumulative bit width in the training task could be lower, and our experimental results coincide with your insight. This is interesting and worth exploring further. In other words, according to our theory, the cumulative bit width selected for the forward process can also be applied to the backward pass and gradient computation.\"}", "{\"comment\": \"Thank you for your reply.\\n\\nFor the first question, we use the $3\\\\sigma$ principle to judge the data range of fixed-point accumulation, according to which the precision of fixed-point accumulation is selected. When selecting the bit width of fixed-point accumulation, we select the data range upward (that is, toward the wider bit width), so the actual range of supported data is greater than $3\\\\sigma$. This is sufficient to ensure that the fixed-point accumulation bit width is neither overflowed nor wasted. Our experiments also prove that the fixed-point accumulation bit widths selected for various models will not overflow, that is, they will not cause a change in model accuracy.\\n\\nAs for the second question, as we mentioned in the previous answer, the nearest rounding quantization method is unbiased for the mean value of data distributed symmetrically about 0. 
When this distribution condition cannot be satisfied, we can also adopt the stochastic rounding method for quantization, and the unbiasedness of stochastic rounding has been proved in other work [1]. Whether the variance is unbiased or not does not affect the derivation of the formula (because if quantization brings a certain deviation $\\\\Delta\\\\sigma^2$, we can change the expression of $\\\\sigma^2$ in the formula from $KVar[I \\\\cdot W]$ to $K(Var[I \\\\cdot W]+\\\\Delta\\\\sigma^2)$), nor does it affect the final prediction result (for reasons analyzed in Section 4.3).\\n\\nAs for the third question, we would like to thank you for pointing out our typos, which we have corrected. For Figure 4, this sentence means that in the part below the dashed line, when $f(n)$ is less than 1000, its value approaches 1 rapidly as $n$ decreases, that is, $n(1-FnRR)$ approaches 0 rapidly, that is, $FnRR$ approaches 1 in this corresponding interval. In the part above the dashed line, when $f(n)$ is greater than 1000, its value increases rapidly as $n$ increases, that is, $FnRR$ moves away from 1 in this corresponding interval. According to our analysis in Section 4.2, we need to let FnRR approach 1 to maintain model accuracy, so we choose 1000 as the threshold for judgment. In addition, the key of Theorem 1 lies in the calculation formula of $FnRR$; because the corresponding derivation process is too long to be written in the main text, we have shown it in Appendix B.\\n\\n[1] Is Integer Arithmetic Enough for Deep Learning Training? NeurIPS, 2022\"}", "{\"comment\": \"In the latest submitted version, I have modified some expressions in the introduction to better convey our main idea. I have mainly modified lines 60 to 80. 
Please kindly review and correct me.\"}", "{\"summary\": \"Block Floating Point (BFP) quantization is introduced to improve the hardware efficiency in deep learning, but its accumulation logic becomes the hardware bottleneck especially for low-bit BFP quantization. This work studies the effect of reduced accumulation precision in BFP quantization and proposes a statistical method to determine the appropriate accumulation precision. Experiments on Llama2-7B, BERT and ResNet-50 show that the proposed approach can save 13%-28% area and reduce 13%-25% power while maintaining model performance close to the FP32 baseline.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This work presents a theoretical framework for analyzing the effect of accumulation precision in quantized GEMM, especially taking both data statistics and floating-point swamping into consideration. This provides a solid foundation for further research on quantization and its hardware design.\", \"This work validates the proposed approach across different models and demonstrates the actual hardware benefits including area and power savings with a complete synthesized design.\"], \"weaknesses\": [\"The proposed method relates the accumulation precision to the data range of the actual workload, and thus predicts different accumulation precisions for different models. However, in the real world, it is more common to run different models on the same hardware, and thus it seems there is no need to specialize accumulation precision settings in hardware. Furthermore, if the hardware is to be used for model training, the accumulation should also be able to handle the data range of model training, which is much larger than that of inference. 
Therefore, it is doubtful whether the proposed method is practical in real-world hardware design scenarios.\"], \"questions\": \"My questions are listed in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"As for the first question, I am not sure which definition is not clearly expressed. Could you please give an example to illustrate it, since I have supplemented the definitions you mentioned before in the comments and the paper?\\n\\nFor the second question, the analysis of the partial sum within the block is to determine a more suitable fixed-point accumulation bit width for the BFP MAC. The $3\\sigma$ principle is the analysis method we adopt; compared with using the maximum and minimum values, it finds a more suitable bit width, reducing the redundancy of the data range and thereby simplifying the hardware. The FnRR-based analysis measures the impact of reducing the floating-point accumulation mantissa precision on the data accuracy, so as to find a suitable floating-point accumulation mantissa bit width, which also simplifies the hardware. I have detailed how to build this connection in lines 60-80 of the paper, Section 4.2, and Appendix B. Using the FnRR-based analysis, we can predict the appropriate floating-point mantissa bit width in advance, and a hardware design based on this prediction has smaller area and power consumption, which is also reflected in our experiments.\\n\\nFor the third question, the purpose of our method is to present a theorem to guide the selection of the appropriate accumulation bit width, because we find that high-precision accumulation is wasteful in the case of extremely low quantization precision. Through some statistical theorems and analysis, we derive a set of formulas for predicting the accumulation bit width. 
Our subsequent experiments also prove that the bit width predicted by our formula can maintain the model performance. We do not understand why you consider the bit-width precision reduction too aggressive. Both the theorem we deduced and the experimental results prove that in the BFP8(Seg) quantization configuration with a block size of 128, only 2 bits are indeed needed to maintain performance. In the conventional design, FP32 is used for accumulation, but we found that FP32 is in fact not needed, and accumulation with an appropriately low precision can also maintain the model performance; is this not precisely the significance of our work?\\n\\nFor the fourth question, we analyzed Figure 5 in both Section 5.3 and Section 5.4. From Figure 5 and Table 3, we can see that the floating-point summation mantissa precision we predict is close to the minimum precision that keeps the model performance from decreasing, thus demonstrating the feasibility of our theory.\"}
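The 3-sigma sizing idea for the in-block partial sum described in the replies above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact Section 4.1 derivation: it sizes a signed fixed-point accumulator to cover the mean plus three standard deviations of a length-K sum of i.i.d. product terms.

```python
import math

def bits_3sigma(k, mean_prod, var_prod):
    """Signed fixed-point bits sized so the accumulator covers
    |k*mean| + 3*sqrt(k*var) -- the 3-sigma range of a sum of k i.i.d.
    product terms (E[sum] = k*mean, Var[sum] = k*var). Illustrative only."""
    bound = abs(k * mean_prod) + 3.0 * math.sqrt(k * var_prod)
    return math.ceil(math.log2(bound + 1.0)) + 1  # +1 for the sign bit

# For zero-mean data the 3-sigma range grows like sqrt(k), so quadrupling k
# costs only one extra bit, versus two bits for the worst-case bound -- the
# source of the redundancy reduction discussed above.
```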
In addition, according to our analysis of the importance of the variables in Section 4.3, the variance of the data has little effect on the FnRR, so for convenience of discussion we make this assumption.\\n\\nAs for the second question, we are very sorry for the poor reading experience. We have revised our paper and unified the symbols. The word sigma (line 316) and the symbol $\\sigma$ in the paper have the same meaning, which we have now unified. Also, we redrew Figure 5, adding labels for the x and y axes. The scores in Figure 5 represent model performance.\\n\\nAs for the third question, as you said, our method may be applied to training. We are running experiments on training tasks and will submit the results as soon as we collect them.\\n\\nFor the fourth problem, we use professional logic synthesis tools and the open-source 7-nm process library (ASAP7) for synthesis.\"}
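The floating-point swamping that FnRR quantifies in the discussion above can be reproduced with a toy accumulator. The model below simply truncates the mantissa after each addition; it is a crude illustration, not the paper's FnRR definition:

```python
import math

def fp_add(a, b, mantissa_bits):
    """Add two non-negative floats, then truncate the result to
    `mantissa_bits` bits of mantissa (a crude low-precision accumulator)."""
    s = a + b
    if s == 0.0:
        return 0.0
    m, e = math.frexp(s)               # s = m * 2**e with 0.5 <= m < 1
    scale = float(1 << mantissa_bits)
    return math.floor(m * scale) / scale * 2.0 ** e

# With a 4-bit mantissa, a running sum of ones stalls at 2**4 = 16: once the
# addend falls below the accumulator's resolution, every further +1 is swamped.
total = 0.0
for _ in range(100):
    total = fp_add(total, 1.0, 4)
```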
In addition, I am also conducting experiments on training tasks, and I will submit the results as soon as I collect them.\\n\\nFor Figure 5, which I have revised in the newly submitted paper and which I mentioned in the original on lines 474 to 476, the dashed line in the figure represents the baseline. In addition, the block size is also marked in the legend. For example, in the legend on the left of Figure 5, the red line indicates that K is 128; that is, under the quantization configuration with block size 128, it shows the average accuracy obtained on the MMLU benchmark for each accumulation precision.\"}
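The worst-case range of the intra-block fixed-point partial sums debated in this thread can also be sanity-checked by brute force. The sketch below compares the exact worst case for a sum of K signed products against the closed-form bit count (roughly log2(K) + A_width + W_width) that appears later in the discussion; it is illustrative, and does not reproduce the paper's own derivation:

```python
import math

def exact_bits(k, a_w, w_w):
    """Smallest signed bit width holding any sum of k products of a signed
    a_w-bit value and a signed w_w-bit value (worst case over inputs)."""
    lo_a, hi_a = -(1 << (a_w - 1)), (1 << (a_w - 1)) - 1
    lo_w, hi_w = -(1 << (w_w - 1)), (1 << (w_w - 1)) - 1
    prods = [x * y for x in (lo_a, hi_a) for y in (lo_w, hi_w)]
    hi, lo = k * max(prods), k * min(prods)
    bits = 1
    while lo < -(1 << (bits - 1)) or hi > (1 << (bits - 1)) - 1:
        bits += 1
    return bits

def formula_bits(k, a_w, w_w):
    """ceil(log2 K + log2(2^(A+W-2) + 1)) + 1, the closed form discussed."""
    return math.ceil(math.log2(k) + math.log2((1 << (a_w + w_w - 2)) + 1)) + 1
```

For a BFP8 MAC with block size 16, both give 20 bits, matching the baseline fixed-point width quoted elsewhere in this thread.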
The paper does not describe the hardware architecture considered, nor does it specify the bitwidth of the accumulators in both the proposed approach and the baseline.\\n3.\\tAlthough the paper provides an analytical approach to analyze the distribution of partial sums in Sections 4.1 and 4.2, there is no clear connection between this analysis and the optimization of accumulator bitwidth. Based on the results in Table 3 and Figure 5, and the fact that the impact of accumulator bitwidth varies across networks, the optimization of bitwidth appears to be empirical rather than directly derived from the analysis in Sections 4.1 and 4.2.\", \"questions\": \"1.\\tI suggest that the authors provide details on the low-precision floating-point formats used in this study.\\n2.\\tI recommend that the authors include more detailed information about the hardware implementation. For example, please provide a block diagram of the hardware and specify the bitwidth used for each component.\\n3.\\tIn Equation (2), you mention that the range of partial sums depends on $2^{A_{width}}$ and $2^{W_{width}}$. However, it\\u2019s unclear whether the bitwidth refers to the exponent or mantissa, and it doesn\\u2019t specify whether it pertains to inter-block or intra-block partial sums, or the final accumulation results of the layer. If it refers to intra-block partial sums (as BFP only handles integer terms within the block), I believe the maximum bitwidth of the partial sum should be $log(k) + A_{width} + W_{width}$. Please clarify how you derived the term in Equation (2).\\n4.\\tIn Section 4.1, what is the difference between $I_e$/$W_e$ and I/W? 
These terms are not clearly defined, making it difficult to follow the equations in this section.\\n5.\\tPlease clearly label the x and y axes in Figure 5 for better interpretation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your affirmation and careful review of my paper.\\n\\nFirst, as you mentioned, in the real world we need to run multiple models on one piece of hardware. But this is not inconsistent with reducing the accumulation bit width. As our experiments show, our method can well predict the minimum accumulation precision required once the quantization configuration is determined. In Section 4.3, we analyze that the accumulation length n is the decisive factor in selecting the accumulation mantissa precision. Therefore, using our formula from another perspective, we can calculate the maximum supported accumulation length by fixing the quantization configuration and the accumulation mantissa precision.\\n\\nFor example, we use our formula to calculate that in a BFP quantization configuration with a block size of 128, a floating-point summation of length 56 can be supported with a mantissa precision of 4, corresponding to a matrix K dimension of up to 7168; with a summation precision of 5, a floating-point summation of length 130 can be supported, corresponding to a matrix K dimension of up to 16640; and with a summation precision of 6, a floating-point summation of length 340 can be supported, corresponding to a matrix K dimension of up to 43520. 
That is to say, in the case of only considering inference tasks, a cumulative mantissa precision of 6 can support most applications.\\n\\nSecond, for the training scenario you mentioned, first of all, on some edge devices we usually only need to deploy the model for inference tasks. Therefore, there are scenarios where our method can be used.\\n\\nThird, in response to your point that the data range may be wider during training, this does not conflict with reducing the accumulation precision. Shortening the floating-point mantissa width cuts off the lower bits, whose absolute magnitude is much smaller than that represented by the higher bits. For example, if the floating-point mantissa $(1.11001111)_{2}$ is truncated by the lower 4 bits, the missing data is only $0.05859375$, which is about $3.23\\%$ of the original value. Therefore, I think it is also possible to reduce the accumulation precision during training. At the same time, I am also running training experiments. After collecting the experimental results, I will submit them as soon as possible.\"}", "{\"summary\": \"The paper aims to reduce the accumulator precision of the low-bit hardware matmul units where the accumulator becomes the hardware bottleneck. Based on the assumption that the inputs follow a Laplace distribution, the paper analyzes the mean and variance for the block floating-point format and proposes to use the mean plus 3 standard deviations as the approximation of the largest magnitude that the accumulator should support, thus trimming down the accumulator precision. 
Experiments on ResNet 50, Bert-large, and llama2-7b show that the precision prediction fits well with the actual minimal bits needed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper clearly shows the target of improvement and the accumulator bottleneck in hardware in Figure 1, although a more detailed description of the setup and the source of the numbers shown in Figure 1(b) would be appreciated. The paper also clearly described its strategy, which is to use three standard deviations to estimate the largest output magnitude of the accumulator.\", \"weaknesses\": \"The main concern is on the experiments. Lines 385 to 388 indicate that the evaluation mixes inference-only evaluation and training. It is very likely that the accumulator precision required in those two cases is very different. The accumulator precision for training can potentially be lower than the inference-only approximation because overflow can serve as clipping, and the model can still recover some quality through training. These two setups need to be separated and ablated.\\n\\nIn addition, Figure 5 is important as it shows how close the theoretical prediction matches the lower bound of the bitwidth needed for the accumulator in practice. However, it is unclear in Figure 5 what the floating-point baseline is (dashed lines?). It is also unclear what the block size is. These are critical for assessing the experimental results.\", \"questions\": \"The questions are in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We have added the results of the training task in the latest submitted paper. The newly added content is in Appendix E; please review it. According to our experimental results, in the training process, the data accuracy requirements of forward accumulation are the highest. 
Therefore, designing the real hardware according to the forward-accumulation bit width determined by our theory is sufficient to meet the bit width requirements of the backward pass and gradient computation. Hence, our theory is also suitable for real hardware design scenarios.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"First of all, thank you for your detailed reply.\\n\\nAs for the second point, because I am not familiar with the relevant research on in-memory computing, I did not compare against it before. I have read the two papers you pointed out. ECIM [1] experimentally found a suitable cumulative bit width of 21, measured on a $(1024\\\\times1024)\\\\times(1024\\\\times1024)$ matrix multiplication initialized with the ResNet-18 training distribution. It is also mentioned that although increasing the accumulation length makes the method more profitable, a larger accumulation length also introduces a larger calculation error, and it does not discuss the boundary of the accumulation length that the 21-bit width can support. Therefore, in real hardware design, if we need to plug each model's data into the calculation and then enumerate to choose the appropriate cumulative bit width, it will undoubtedly generate a huge amount of work. Our method uses statistics to predict the corresponding cumulative bit width in advance, which can save the above work. This is the point where our method is superior to ECIM, and the advantage of this prediction boundary is also explained in the abstract and Introduction, e.g., lines 15-19. 
In the second paper [2], the $3\\sigma$ principle is also used to analyze the data range, but it analyzes the distribution range of the quantized data; that is, it does not focus on the accumulation part, which differs from our focus.\\n\\nFor the third point, the partial-sum analysis method and the FnRR-based floating-point accumulation precision analysis method that I propose do not merely aim to further reduce the accumulation bit width: we found that under extremely low quantization precision the accumulation bit width can also be reduced, but how to determine the boundary of this reduction is still a challenge. That is why we propose these two methods, namely, to determine the boundary to which accumulation precision can be reduced. The proposed segmented approach is based on our analysis results in Section 4.3, where we found that the accumulation length is the decisive factor affecting the accumulation accuracy. Therefore, in order to further reduce the accumulation precision, we proposed the method of segmented accumulation, that is, changing the order of accumulation. As you said in point 4, we accumulate within segments first, and then sum across segments. We also extend our theory to the case of segmented accumulation in Section 4.4.\\n\\nFor the fourth point, the segmented accumulation occurs in the floating-point accumulation, i.e., after the fixed-point part has been completed and converted to floating point. And as I mentioned above, segmented accumulation actually changes the order of accumulation.\\n\\n[1] Lee, Juhyoung, et al. \\\"ECIM: exponent computing in memory for an energy-efficient heterogeneous floating-point DNN training processor.\\\" IEEE Micro 42.1 (2021): 99-107.\\n\\n[2] Sun, Hanbo, et al. \\\"An energy-efficient quantized and regularized training framework for processing-in-memory accelerators.\\\" 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC). 
IEEE, 2020.\"}", "{\"comment\": \"Thank you for your recognition of my reply, and also thank you for your careful review and patient communication, which has greatly helped me improve my work.\\n\\nAs for the second point, I have modified the titles in the original Figure 5. Originally, I wanted to use INT4 and INT8 to represent the data types corresponding to the BFP data elements, but I am very sorry for the confusion. So I changed the titles to BFP8 and BFP4 accordingly.\\n\\nAs for the third point, Flexpoint and FAST, which you mentioned, did not use 8 bits as the shared exponent bit width, but in the latest related work such as Microscaling [1], 8 bits are generally used as the shared exponent bit width, and as I also mentioned in lines 207 to 208 of the paper, we allocated enough bit width to the shared exponent to simplify the discussion. As you suggested, I now state this point directly in lines 207 to 208. Referring to the bit width of FP32, we choose 8 bits as the bit width of the shared exponent after comprehensive consideration.\\n\\nAs for the fourth point, thanks for your suggestion; I have revised it in line 218 in the newly submitted version.\\n\\nAs for the fifth point, I realize that there is a clerical error; the correct formula should be 1. I have modified it in the newly submitted version, thank you very much for the reminder!\\n\\nAs for the sixth point, I have described what a segment is in lines 327 to 332. As for how to select segments, our method does not restrict it, but in our experiments we chose $\\lfloor \\sqrt{n}\\rfloor$ as the segment length, as stated in line 481. For instance, if we have 100 numbers going into the floating-point sum, then the summation length $n$ is 100, and with $\\lfloor \\sqrt{n}\\rfloor$ as the segment length, we get 10 floating-point sum segments of length 10. 
Since both the segmented and unsegmented accumulations are performed in floating point, this is a floating-point accumulator.\\n\\nAs for the seventh point, I do not understand which picture you refer to in Figure 19(a). With respect to the inter-block accumulation accuracy, it refers to the accumulation accuracy of the FP ADDER in Figure 1(a).\\n\\n[1] With Shared Microexponents, A Little Shifting Goes a Long Way, ISCA, 2023.\"}", "{\"comment\": \"For question 1, the BFP data format mentioned in the paper is a block floating-point format, which differs from conventional floating-point data in that all elements in a block share one exponent instead of each element having its own exponent. Generally, the shared exponent of BFP is 8 bits, and the bit width of each element is determined by the specific quantization configuration. BFP4 refers to elements with a bit width of 4 bits, and BFP8 refers to elements with a bit width of 8 bits. For example, BFP8 data with a block size of 16 means that the 16 data elements belonging to a block share one 8-bit exponent, and each of these 16 elements is stored in 8 bits.\\n\\nFor question 2, the hardware architecture of the BFP MAC is shown in Figure 1(a). Following your suggestion, I redrew Figure 1(a) and annotated each component with its bit width. In the baseline, we use FP32 as the floating-point accumulation precision, and use the calculation formula you mentioned in question 3 to determine the fixed-point accumulation bit width; for example, we set the fixed-point bit width to $(\\log_{2}16+8+8)$ for a BFP8 MAC with block size 16.\\n\\nFor question 3, my formula and yours mean the same thing: my formula determines the representable range of the partial sum, while your formula gives the bit width calculated from that range. There are no exponents because we are dealing with fixed-point numbers, and moreover signed fixed-point numbers. 
Therefore, for a signed fixed-point number with a bit width of b, the representation range is $[-2^{b-1},2^{b-1}-1]$. In order to cover all partial sums, $\\lceil \\log_{2}K+\\log_{2}(2^{A_{width}+W_{width}-2}+1)\\rceil+1$ bits are needed, which is consistent with your formula.\\n\\nFor question 4, I explained in line 214 of the paper that W and I refer to the elements corresponding to the weights and inputs quantized in BFP format, that is, the part of the data excluding the shared exponent. Note that W and I are not the original, unquantized weights and inputs.\\n\\nFor question 5, I have added the X-axis and Y-axis labels in the newly submitted PDF.\\n\\nFor weakness 3, the bit widths in Table 3 are derived from the theory in Sections 4.1, 4.2 and 4.5. From 4.1, we can calculate the data distribution range through the $3\\sigma$ principle, as mentioned in lines 232 to 236, to obtain the corresponding fixed-point accumulation bit widths. From 4.2 and 4.5, we can obtain the floating-point mantissa bit widths through Equation 9 and the $f(n)$ threshold of 1000.\"}", "{\"comment\": \"May I ask whether I have solved your question? Looking forward to your reply.\"}", "{\"metareview\": \"This paper presents a statistical method to predict the boundaries of accumulation precision in deep learning inference using Block Floating-Point (BFP) arithmetic. The proposed approach aims to optimize hardware design by predicting the required accumulation precision, with a set of equations relating BFP quantization parameters to fixed-point multiply-accumulate operations and floating-point swamping effects. The method is validated on various models, demonstrating improvements in area and power efficiency while maintaining performance close to the FP32 baseline.\\n\\nOne of the primary concerns raised by the reviewers is the lack of clarity in the paper\\u2019s presentation. 
Another concern is the generalizability of the method, as some reviewers felt that it is tailored to specific models and may not scale well to other architectures or hardware configurations. Additionally, the contribution of the proposed method to bit precision reduction is unclear, and it is uncertain whether the extreme reductions in precision are primarily due to the new approach or if other factors, such as the segmented approach, play a significant role.\\n\\nReviewers had mixed opinions about the novelty and practicality of the proposed methods. This paper received an average score of 3.75, which is below the competitive threshold for this year\\u2019s submissions. Given the balance of strengths and weaknesses, the final recommendation is to reject this submission in its current form. The paper holds potential, but it would benefit from further revisions, including clearer explanations, a broader applicability study, and more detailed hardware implementation information.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, several key points were raised by the reviewers that provided further insight into the strengths and weaknesses of the paper.\", \"clarity_of_methodology\": \"One of the main concerns raised by Reviewer was the lack of clarity in terms and definitions, which impeded the understanding of the proposed method. The authors attempted to address this by providing additional explanations and clarifying certain terms. However, Reviewer remained skeptical, suggesting that the clarifications were still insufficient to fully resolve the ambiguity. Given that clarity is crucial for understanding the novelty and impact of the methodology, this concern weighed heavily in the final decision. 
Despite the authors\\u2019 efforts, the paper still lacked the necessary precision in presenting the core ideas.\", \"generalizability_and_applicability\": \"Reviewer raised concerns about the generalizability of the proposed method, especially its applicability beyond the specific models evaluated in the paper. The authors defended their approach, arguing that the methodology could be adapted to other models, though no additional experimental evidence was provided to substantiate this claim. While this point was partially addressed, the lack of broader validation meant that doubts about the method's scalability persisted. This concern was weighed as significant in the final recommendation, as the paper\\u2019s applicability to a wider range of models and hardware configurations remains uncertain.\", \"precision_reduction_and_contribution\": \"Reviewer questioned the contribution of the proposed method to bit precision reduction, suggesting that factors beyond the new approach might be influencing the results. The authors did not provide a clear explanation of whether the precision reductions were primarily due to their method or other aspects of the approach, leaving this question unresolved. This lack of clarity, along with the inability to distinguish the novel contributions, was another major point of concern. Given that this aspect was not sufficiently addressed in the rebuttal, it was weighed heavily in the final decision.\", \"hardware_implementation_and_real_world_applicability\": \"Reviewer pointed out the absence of detailed hardware implementation information and questioned the real-world applicability of the proposed method. The authors offered some additional insights into their hardware setup, but they did not provide enough concrete details regarding the synthesis environment or practical constraints. 
This concern remained unresolved in the rebuttal and was critical in the final assessment, as real-world validation is necessary to support the theoretical claims made in the paper.\\n\\nIn summary, while the authors made an effort to clarify certain aspects of their methodology and address reviewer concerns, several key issues remained unresolved. The paper still suffers from significant clarity issues, a lack of generalizability, and insufficient hardware implementation details. As a result, despite the authors' responses, the points raised by the reviewers were not adequately addressed, leading to a final recommendation of rejection.\"}", "{\"summary\": \"In block floating point quantization, this paper proposes a statistical method to analyze the impact of reduced accumulation\\nprecision on the inference of deep learning applications,\\nwhich formulates a set of equations to relate the data range of fixed-point\\nmultiply-accumulate operations and the effects of floating-point swamping.\\nThe experimental results show that improved hardware efficiency can be achieved, \\ndemonstrating significant area reduction and power reduction.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This paper shows a contribution of arithmetic for low-cost inference.\", \"weaknesses\": \"Although this paper shows a contribution of arithmetic for low-cost inference,\", \"i_have_several_serious_concerns_as\": \"1. The main idea is not explicitly shown. I think that the metric FnRR denotes the ratio of floating point swamping. \\nThe main idea should be highlighted in the introduction and Figure 1, where the main idea or metric is not illustrated.\\nIn the proof of Eq. (7), the normalized values based on assumption 1 in B. Proof of Theorem 1 are considered.\\nFor the distributions of weights and activations, assumption 1 is questionable. If there are any references for that, it could be better.\\n\\n2. 
It is hard to understand this paper. Many italic and non-italic terms are mixed. For example, italic n and non-italic n are used without discrimination.\\nAre sigma (line 316) and the symbol sigma the same? Besides, many other typos appear.\\nIn Figure 5, do the terms on the Y axis mean the accuracy on the datasets? What is the meaning of the scores? \\n\\n3. The metric is only applied to inference. I think that this method can be evaluated on training workloads as well. \\nI think that for image classification models, the proposed idea can be applicable to model training. (ResNet 18 on CIFAR10 or ResNet50 on ImageNet-1K)\\nBesides, I think that ResNet50 on CIFAR10 is not a suitable workload for the floating-point format. \\n\\n4. In the hardware implementation, what are the environments for hardware synthesis? \\n\\nIn conclusion, from the point of view of arithmetic, the explanation should be more polished. \\nBesides, the effects of the proposed metric should be analyzed in other cases.\", \"questions\": \"Please, see the above weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"May I ask whether I have solved your question? Looking forward to your reply.\"}" ] }
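The segmented accumulation described in the author responses above (segment length floor(sqrt(n)): sum within segments, then across segments) is a reordering of the additions; in exact arithmetic the result is unchanged, while each accumulator only ever sees a summation length of about sqrt(n). A sketch, assuming Python's exact integer arithmetic:

```python
import math

def segmented_sum(xs):
    """Sum xs in segments of length floor(sqrt(n)), then sum the
    per-segment partial results (the accumulation reordering above)."""
    seg = max(1, math.isqrt(len(xs)))
    partials = [sum(xs[i:i + seg]) for i in range(0, len(xs), seg)]
    return sum(partials)

# 100 inputs -> segment length 10 -> 10 partial sums of 10 terms each,
# matching the worked example in the authors' reply.
```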
DzKdjWe59v
Hint Marginalization for Improved Reasoning in Large Language Models
[ "Soumyasundar Pal", "Didier Chételat", "Yingxue Zhang", "Mark Coates" ]
Large Language Models (LLMs) have exhibited an impressive capability to perform reasoning tasks, especially if they are encouraged to generate a sequence of intermediate steps. Reasoning performance can be improved by suitably combining multiple LLM responses, generated either in parallel in a single query, or via sequential interactions with LLMs throughout the reasoning process. Existing strategies for combination, such as self-consistency and progressive-hint-prompting, make inefficient use of the LLM responses. We present Hint Marginalization, a novel and principled algorithmic framework to enhance the reasoning capabilities of LLMs. Our approach can be viewed as an iterative sampling strategy for forming a Monte Carlo approximation of an underlying distribution of answers, with the goal of identifying the mode, i.e., the most likely answer. Empirical evaluation on several benchmark datasets for arithmetic reasoning demonstrates the superiority of the proposed approach.
[ "reasoning", "large language models" ]
Reject
https://openreview.net/pdf?id=DzKdjWe59v
https://openreview.net/forum?id=DzKdjWe59v
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ymQMbbpKnS", "ylhDwaEmyy", "wgSXdQrJIb", "uirVTE2ntB", "uX7gEG6Cez", "pYxjOm0azN", "oyUBVcUL0A", "obewA3ZR8l", "n3zqT4BlKL", "iZ0yGdQDmU", "flPdy24C5s", "bNI3MLAVZK", "apk0CKe56H", "aOXnVR8f0C", "ZyHseHsLeD", "TAs3KDZR7v", "T8R5Lc5j5t", "SIdIEGW0S5", "OloMWvT9Mi", "MP6xSlLfbd", "LeTNx20k7p", "KxSCOhNpPR", "JD7gz51hld", "IzembTaMr0", "HLNm8Yr1Bw", "E3zH1cRTEz", "5mmh9YsCZD", "5ZsAW1P0jA", "2r0OZZt3qj", "0tET3BXQxz" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732410886576, 1732747792853, 1732746046631, 1732413920959, 1732410790009, 1732741209378, 1732412124702, 1732748049586, 1730662212133, 1732741447630, 1732407119820, 1732409372172, 1734622664706, 1730700421447, 1737524213725, 1732410098200, 1732409546765, 1730056008635, 1732744336095, 1732748541352, 1732747126890, 1732741770412, 1730816619164, 1732413416319, 1733018872100, 1732752309242, 1732411406442, 1732407806451, 1732413747711, 1733018046706 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12766/Reviewer_5edW" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Area_Chair_5oqG" ], [ "ICLR.cc/2025/Conference/Submission12766/Reviewer_bgqX" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Reviewer_mTdx" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Reviewer_v83X" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ], [ "ICLR.cc/2025/Conference/Submission12766/Authors" ] ], "structured_content_str": [ "{\"title\": \"Cont'd\", \"comment\": \"> # Q2. What exactly is PHP+HM? 
I didn't see PHP clearly described, and it was unclear to me how you were using PHP within HM.\\n\\nAs explained in Section 4.3,\\nPHP+HM refers to a variant of our method, where the initial answer distribution is approximated using several PHP-provided answers (i.e., PHP+SC).\\nIn other words, the only difference between CoT+HM and PHP+HM algorithms is in their initializations (using CoT+SC for CoT+HM and PHP+SC for PHP+HM).\\nThe subsequent HM iterations are carried out in exactly the same manner for both of these algorithms.\\n\\nIn Section 5, we discuss PHP (Zheng et al., 2023), which refers to a sequential refinement based prompting method, where the answers generated by the LLM in previous rounds are used as hints in the prompt to assist the LLM in reasoning. \\nThe algorithm is terminated when the same answer is repeated in two consecutive rounds. Although PHP uses a hint strategy, it focuses on sequentially refining the prompt. The key difference in our approach is that we refine the distribution over the responses and use hints to guide this refinement.\\n\\nPHP+SC refers to running several non-interacting PHP chains independently, collecting the terminal answers from each of them, and conducting a majority vote among those answers to determine the solution.\"}", "{\"title\": \"Cont'd\", \"comment\": \"**p-value from Wilcoxon signed rank test between the probabilities of true answers from distributions $p_3(y|x)$ and $p_1(y|x)$ for the 'difficult' questions (for the entire dataset)**\\n\\n| **LLM** | **AddSub** | **MultiArith** | **SingleEQ** | **SVAMP** | **GSM8K** | **AQuA** |\\n|----------------------|----------------------------|---------------------------|--------------------------|--------------------------|--------------------------|--------------------------|\\n| **GPT-3.5 Turbo** | 0.0291 (0.1172) | 0.0006 ($1.3 \\\\times 10^{-5}$) | 0.0012 ($8.6 \\\\times 10^{-5}$) | 0.0132 ($1.4 \\\\times 10^{-5}$) | $9.2 \\\\times 10^{-18}$ ($4.3 \\\\times 
10^{-22}$) | 0.0001 ($1.6 \\\\times 10^{-8}$) |\\n| **GPT-4 Turbo** | 0.2868 (0.2258) | 0.0104 ($2.3 \\\\times 10^{-6}$) | 0.0002 ($6.2 \\\\times 10^{-7}$) | $4.8 \\\\times 10^{-8}$ ($1.7 \\\\times 10^{-13}$) | $2.2 \\\\times 10^{-31}$ ($1.5 \\\\times 10^{-41}$) | 0.0065 (0.0042) |\\n| **GPT-4o-mini** | 0.0038 (0.0024) | 0.8413 (0.0243) | 0.0317 (0.0255) | 0.5898 (0.3028) | $4.5 \\\\times 10^{-12}$ ($5.2 \\\\times 10^{-12}$) | $2.1 \\\\times 10^{-5}$ ($8.5 \\\\times 10^{-6}$) |\\n\\n**Percentage of 'difficult' questions (percentage of questions in the entire dataset), so that $p_3(y|x) \\\\geqslant p_1(y|x)$ is satisfied (in other words, HM does not decrease the probability of the true answer)**\\n\\n| **LLM** | **AddSub** | **MultiArith** | **SingleEQ** | **SVAMP** | **GSM8K** | **AQuA** |\\n|----------------------|------------------|-------------------|-------------------|-------------------|-------------------|-------------------|\\n| **GPT-3.5 Turbo** | 79.4 (92.7) | 85.2 (97.3) | 86.0 (97.2) | 63.5 (83.8) | 70.8 (81.4) | 64.7 (74.8) |\\n| **GPT-4 Turbo** | 76.2 (95.7) | 96.3 (99.7) | 87.7 (98.0) | 89.5 (96.9) | 85.7 (93.3) | 79.1 (86.6) |\\n| **GPT-4o-mini** | 85.7 (97.2) | 96.3 (99.7) | 82.5 (97.0) | 81.1 (93.9) | 83.8 (92.7) | 75.8 (83.9) |\\n\\n\\n> # A priori, I find the prompting strategy you adopt (from the PHP work) somewhat strange, especially given that the few-shot rationales don't appear to reference the hints. I understand (and appreciate) your arguments for why this style of hinting could help in quantitative domains (maybe somehow attending to the hint makes nearby answers more likely), but still, I find that explanation far from obvious and in need of empirical validation.\\n\\nPlease refer to the answer above for the detailed empirical validation of how PHP-style prompting helps in instantiating our HM framework. 
\\n\\nWhile we agree that, to some extent, the design of the PHP prompt is not completely intuitive, and the lack of extensive analysis and ablation is a valid criticism of (Zheng et al., 2023)'s work, we argue that those criticisms should not overshadow the novel methodological contributions made in our work, for the following reasons.\\n\\nFirst, empirical results in both (Zheng et al., 2023) and our submitted work clearly show that **PHP outperforms CoT**, which **provides evidence in favor of the usefulness of hinting.** \\n\\nSecond, the utility of our design of the HM framework is motivated by a principled criterion that **'probability of the correct answer increases via hint-marginalization'** if and only if **'the in-flow probability to the correct answer is more than the out-flow probability from the correct answer'**. Our empirical analysis (Figures 3-5, please refer to the detailed response above) clearly shows that there is strong empirical evidence in support of that phenomenon for the hinting prompt across all datasets and LLMs considered in our experiments. **Thus, the utilization of the PHP-style prompt as a component in our work is well justified.** \\n\\nWe agree with the reviewer that a deeper investigation into why the hinting approach is beneficial would be a valuable contribution, but we do not think it is essential to fully understand why a mechanism works, provided there is convincing evidence that it does work. 
\\n\\n**Continued on the next Official Comment**\"}", "{\"title\": \"Cont'd\", \"comment\": \"**Mathematical Framework:**\\n\\nLet us reuse the notations from the main paper, so $x$ denotes a question and $y$ is its 'correct' answer.\\n\\nBelow, we rewrite the definitions (as written in **lines 139-140 in Section 3.1 of our paper**) of the in-flow of probability to the correct answer and the out-flow of probability from the correct answer for one round of hinting for completeness:\\n\\\\begin{align}\\n\\\\textrm{in-flow-prob.} (x, y) &= \\\\sum_{y' \\\\neq y} p_{1}(\\\\tilde{y}{=}y'|x) p(\\\\tilde{y}{=}y|x, \\\\mathrm{Hint}(y'))\\\\, \\\\tag{1}\\n\\\\end{align}\\n\\\\begin{align}\\n\\\\textrm{out-flow-prob.}(x, y) &= p_{1}(\\\\tilde{y}{=}y|x) \\\\sum_{y' \\\\neq y} p(\\\\tilde{y}{=}y'|x, \\\\mathrm{Hint}(y))\\\\, \\\\tag{2}\\n\\\\end{align}\\n\\\\begin{align}\\n\\\\textrm{in-flow-prob.} (x, y)&> \\\\textrm{out-flow-prob.} (x, y) \\n\\\\end{align}\\n\\n$\\\\implies$ \\n\\\\begin{align}\\n\\\\sum_{y' \\\\neq y} p_{1}(\\\\tilde{y}{=}y'|x) p(\\\\tilde{y}{=}y|x, \\\\mathrm{Hint}(y')) > p_{1}(\\\\tilde{y}{=}y|x) \\\\sum_{y' \\\\neq y} p(\\\\tilde{y}{=}y'|x, \\\\mathrm{Hint}(y)) \\\\tag{3}\\n\\\\end{align}\\n\\nIn other words, $\\\\textrm{in-flow-prob.} (x, y)$ denotes the joint probability of the event that for a question $x$, the initial answer was incorrect and after one round of hinting, it was corrected.\\nSimilarly, $\\\\textrm{out-flow-prob.} (x, y)$ is the joint probability of the event that the initial answer was correct and after one round of hinting, it switched to an incorrect answer.\\n\\nIn the HM framework, we compute \\n\\\\begin{align}\\np_{2}(\\\\tilde{y}{=}y|x) = p_{1}(\\\\tilde{y}{=}y|x) p(\\\\tilde{y}{=}y|x, \\\\mathrm{Hint}(y)) + \\\\sum_{y' \\\\neq y} p_{1}(\\\\tilde{y}{=}y'|x) p(\\\\tilde{y}{=}y|x, \\\\mathrm{Hint}(y'))\\\\. 
\\\\tag{4}\\n\\\\end{align}\\nNote that the second term on the right-hand side is the in-flow probability to the correct answer via hinting.\\n\\nOne can also write:\\n\\\\begin{align}\\np_{1}(\\\\tilde{y}{=}y|x) &= p_{1}(\\\\tilde{y}{=}y|x) \\\\times 1\\\\,\\n\\\\end{align}\\n\\\\begin{align}\\n&= p_{1}(\\\\tilde{y}{=}y|x) \\\\times \\\\bigg[ p(\\\\tilde{y}{=}y|x, \\\\mathrm{Hint}(y)) + \\\\sum_{y' \\\\neq y} p(\\\\tilde{y}{=}y'|x, \\\\mathrm{Hint}(y))\\\\bigg]\\\\,\\n\\\\end{align}\\n\\\\begin{align}\\n&= p_{1}(\\\\tilde{y}{=}y|x) p(\\\\tilde{y}{=}y|x, \\\\mathrm{Hint}(y)) + \\\\sum_{y' \\\\neq y} p_{1}(\\\\tilde{y}{=}y|x) p(\\\\tilde{y}{=}y'|x, \\\\mathrm{Hint}(y))\\\\, \\\\tag{5}\\n\\\\end{align}\\nNote that the second term is the out-flow probability from the correct answer via hinting.\\nCombining equations (4) and (5), we see that we also have:\\n\\\\begin{align}\\np_{2}(\\\\tilde{y}{=}y|x)>p_{1}(\\\\tilde{y}{=}y|x) \\\\implies \\\\textrm{in-flow-prob.} (x, y)&> \\\\textrm{out-flow-prob.} (x, y)\\\\,.\\n\\\\end{align}\\nThus, the implication goes both ways, and $p_{2}(\\\\tilde{y}{=}y|x) > p_{1}(\\\\tilde{y}{=}y|x)$ is satisfied **if and only if** the in-flow probability to the correct answer exceeds the out-flow probability from the correct answer.\\nIn retrospect, we did not stress the **'only if'** part of this condition.\\n\\nNote that this is true for subsequent rounds of hinting as well. In other words, $p_{3}(\\\\tilde{y}{=}y|x) > p_{2}(\\\\tilde{y}{=}y|x)$ holds **if and only if** the condition in eq. (3) is satisfied with every $p_1(\\\\cdot|x)$ replaced by $p_2(\\\\cdot|x)$. 
In our experiments, $p_{1}(\\\\tilde{y}|x)$ is estimated by sampling multiple CoTs in parallel without any hinting.\\nThus, CoT+SC declares the estimated mode of $p_{1}(\\\\tilde{y}|x)$ as the final answer.\\nThe proposed CoT+HM algorithm is initialized with the same $p_{1}(\\\\tilde{y}|x)$.\\n\\n**Experimental Procedure:**\\n\\nIn each of the six arithmetic datasets, the majority of the questions are \\u2018easy\\u2019 and all of the CoT+SC, PHP+SC, and\\nCoT+HM methods assign a very high probability to the correct answers for them. \\nIn order to bring out the\\ndifferences among these algorithms, we only focus on the \\u2018difficult\\u2019 questions. We define \\u2018easy\\u2019 and \\u2018difficult\\u2019 questions in these benchmarks as follows. \\nIf a question is solved correctly by all algorithms in Table 1, we categorize it as \\u2018easy\\u2019.\\nA question that\\nis not \\u2018easy\\u2019 is termed \\u2018difficult\\u2019. \\nThus, the accuracies of all algorithms are 100\\\\% on the \\u2018easy\\u2019 questions, and removing them from the dataset does not affect the ranking of different algorithms (in Table 1 and Table 9, different algorithms have the same rankings in terms of their accuracies).\\n\\nFor each of the \\u2018difficult\\u2019 questions, we independently rank CoT+SC, PHP+SC, and CoT+HM in terms of the probability they assign to the correct answer.\\nThe algorithm having the lowest (best) rank (i.e., rank 1) for a \\u2018difficult\\u2019 question assigns the highest probability to the \\u2018correct\\u2019 answer (note that this does not necessarily mean that the corresponding algorithm's output is correct).\\nSimilarly, the algorithm that assigns the lowest probability to the \\u2018correct\\u2019 answer is ranked the worst (i.e., 3).\\n\\n**Continued in the next Official Comment**\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # Q2. a) Is there any intuition for why hints regarding proximity to other numbers are helpful to the LLM in arithmetic tasks? 
For example, referring to the first prompt example in 8.2, why would knowledge of the answer being near 4, 7 be helpful in reasoning? There is no causal way to arrive at the answer from this knowledge and the output of the prompt does not seem to take this into account.\\n\\nFirst, we would like to note that we do not propose the hint prompt in this work. Rather, it has been adapted to the HM framework from PHP (Zheng et al., 2023) and we do not claim any optimality of its design.\\n\\nHowever, since we do make use of the hint mechanism, we can provide some clarification of the intuition behind the mechanism. \\nAs Zheng et al. (2023) note, hinting allows humans to check their answers and improve upon their previous solution to a given problem. We conjecture that in selecting its arithmetic answer, the LLM will assign attention to the hint and, in particular, its understanding of the phrase *\\\"close to x\\\"* will provide additional bias towards selecting a number that is closer to the suggested hint. In this way, the presence of the hint in the prompt nudges the LLM to consider the hint both as it selects the steps in the rationale and when it answers the question.\\n\\nEmpirically, we observe that there is a significantly greater chance of selecting the same answer as the provided hint. For example, as specified in Appendix 8.3, for the GSM8K dataset, the probability of obtaining an incorrect answer conditioned on providing a correct hint is 0.0179. By contrast, we see that the best-performing procedure has an error rate of 0.054. This provides evidence that the insertion of the hint is affecting the answer (in a positive way), even if it is not immediately discernible (as the reviewer correctly points out, \\\"There is no causal way to arrive at the answer from this knowledge and the output of the prompt does not seem to take this into account.\\\") in the formation of the rationale for the in-context examples (e.g. 
in Table 3 in Appendix 8.2).\\n\\nWe argue that investigation of how the LLM is using the hints requires the development of a deeper theoretical understanding of LLMs' few-shot learning capabilities. This is an open question in LLM research at present, but is not the main contribution or focus of this paper. \\n\\nAdditional support for the benefit of hinting is presented by Fu et al. (2024). In their work, the LLM is encouraged via in-context examples to prepare a hint before solving the problem. The developed hints are more general than those we employ in our work, but the performance improvement in reasoning is indicative of the potential value of a hint in directing an LLM towards a good solution. Further evidence is provided by Agrawal et al. (2024). In their work, a hint is generated using a weaker LLM. This is observed to yield a performance improvement across multiple math reasoning datasets.\\n\\nWe understand that the paper did not sufficiently explain the intuition and value of hinting, and we have modified the paper to include a summary of this discussion in Appendix 8.8, citing these two recent works in support. \\n\\n### References:\\n\\n- Fu, Jinlan, et al. \\\"Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize Encoded Knowledge.\\\" *arXiv preprint arXiv:2402.14310* (2024).\\n- Agrawal, Vansh, et al. \\\"Give me a hint: Can LLMs take a hint to solve math problems?\\\" *arXiv preprint arXiv:2410.05915* (2024).\\n\\n> # Q2. b) I understand that the hints condition the LLM, but it is unclear to me why such conditioning would be helpful in general.\\n\\nPlease refer to the discussion above for the intuition of using hints. Empirically, our results in Table 1 (or the results in Table 2 in PHP (Zheng et al., 2023)) show that PHP consistently outperforms CoT, which provides evidence in support of the usefulness of hinting. 
\\n\\nOur analysis in Appendix 8.3 shows that:\\n- (a) using the correct answer as a hint, the LLM generates the same answer with a very high probability; and\\n- (b) even with an incorrect hint in the prompt, the LLMs are at least somewhat likely to generate the correct answer in the next interaction.\\n\\nMoreover, from Table 9 of our revised paper, we observe that PHP still outperforms CoT in other Big-Bench tasks such as *\\\"Date Understanding\\\"* and *\\\"Object Tracking\\\"*, demonstrating its utility beyond the arithmetic tasks.\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # W2. b) For example, do the various critique-based prompting strategies satisfy this property? Does \\\"hinting\\\" as you do actually satisfy this property on a wide range of examples? Why?\\n\\nAs discussed above, investigation of whether critique-based prompting strategies satisfy the assumptions in Section 3.1 falls outside the scope of this work. Our analyses (Figures 2-6, Appendix 8.3) provide strong empirical evidence that hinting satisfies the desired properties for the LLMs and datasets considered in our work. \\n\\nWith the additional datasets that are now included in our experimental results, following valuable suggestions by the reviewers, we show that the hint-based approach offers benefits for:\\n- Arithmetic reasoning,\\n- More general mathematical reasoning including geometry and algebra (MATH dataset),\\n- Date understanding and object tracking (Big Bench).\\n\\nWe consider that this demonstrates the applicability of *\\\"hinting\\\"* to a broad range of reasoning tasks. \\nWe do concur with the reviewer that the investigation of alternative refinement approaches is highly desirable and an exciting avenue to explore.\\n\\n> # W3. I don't have a sense of the difficulty of the tasks for GPT-4-caliber language models--GPT-4 seems to score in the mid-to-high 90s. What sorts of problems does this technique really help to solve? 
On harder reasoning benchmarks where GPT-4 still does very poorly (e.g., Chollet's Abstraction and Reasoning Challenge) does this technique actually help? If not, how do the authors view the limitations of this technique?\\n\\nWe thank the reviewer for this question. We agree that some of the datasets are relatively easy, but we still observe consistent improvement over self-consistency (a strong baseline) in most cases. The same benchmarks are considered in the papers that have presented the relevant baselines. \\n\\nOur proposed hint marginalization strategy solves problems where self-consistency fails to establish the correct mode (please refer to Figure 2) and sampling more CoTs is not helpful. If we restrict ourselves to the *\\\"difficult\\\"* questions during the performance assessment (eliminating the easy questions that are answered correctly by all LLMs and all methods; please refer to Table 10), then the improvement is more substantial. \\n\\nThe lack of particularly challenging datasets is a valid criticism of our work, also raised by other reviewers. We have now included results for the MATH dataset (please refer to Table 8 of our revised paper), which is a much more challenging mathematical reasoning dataset. For several sub-disciplines (Geometry, Intermediate algebra, Pre-calculus), the state-of-the-art performance (without using extreme computation and a very long inference time) is in the range of 50-65 percent, suggesting that LLMs still find these problems very difficult to solve. The proposed HM approach leads to a performance improvement in 5 out of 7 settings.\\n\\nWe agree with the reviewer that applying the method more broadly to other reasoning domains is a worthwhile and very interesting research direction. 
With a view to satisfying this request, we now provide results for *\\\"Date Understanding\\\"* and *\\\"Object Tracking\\\"*, which are problem sets involving quantitative (but not strictly mathematical or arithmetic) reasoning.\\n\\nExtending beyond this (outside quantitative problems) would require careful prompt engineering to generalize to other reasoning domains. This direction is very interesting but is not the main focus of our current work. \\nNote that the ARC dataset is not amenable to CoT-style prompting and none of the baseline algorithms considered in our work is capable of addressing such problems in their current form.\\n\\n> # Q1. Some results are starred in your table but I could not find any description of what stars indicate. Can you clarify?\\n\\nWe thank the reviewer for bringing this to our attention. We apologize for not explaining the asterisks in Section 4.4.\\nFor each dataset and each LLM, we conduct a Wilcoxon\\nsigned rank test between the top two algorithms and mark the best result with *, if the difference is statistically significant at the 5\\\\% level. Although we mentioned the Wilcoxon test in Section 4.4, we did not explain the asterisk. We have now edited the caption of Table 1 to clarify the use of asterisks.\"}", "{\"title\": \"Thanks for reading the rebuttal\", \"comment\": \"Thank you very much for acknowledging our rebuttal and appreciating our work. Your positive viewpoint on our work is truly important and encouraging to us.\"}", "{\"title\": \"Response to Reviewer mTdx\", \"comment\": \"We thank the reviewer for acknowledging the generality and 'robust foundation' of our work. Below, we address your concerns.\\n\\n> # W1. a) For the presentation as a general method, the experimental evaluation is too narrow. Only arithmetic reasoning benchmarks are tested, which is problematic in two ways. 
For one, within the category of math benchmarks the most challenging benchmarks (such as MATH) are left out.\\n\\nThe lack of particularly challenging datasets is a valid criticism of our work, also raised by other reviewers. We have now included results for the MATH dataset (please refer to Table 8 of our revised paper), which is a much more challenging mathematical reasoning dataset. For several sub-disciplines (Geometry, Intermediate algebra, Pre-calculus), the state-of-the-art performance (without using extreme computation and a very long inference time) is in the range of 50-65 percent, suggesting that LLMs still find these problems very difficult to solve. The proposed HM approach leads to a performance improvement in 5 out of 7 settings. \\n\\n> # W1. b) For the chosen benchmarks, state-of-the-art models already perform very well (see e.g., [1]). The reference to Patel et al. 2021 that motivates their choice does not seem timely here either. Employing such extensive prompting regimes for mild improvements in weak models is not a strong motivation. [1] Se\\u00dfler, Kathrin, et al. \\\"Benchmarking Large Language Models for Math Reasoning Tasks.\\\" *arXiv preprint arXiv:2408.10839* (2024).\\n\\nWe agree that some of the arithmetic datasets are relatively easy for GPT, but we still observe consistent improvement over self-consistency in most cases. The same benchmarks are considered in the papers proposing the relevant baselines. Our proposed hint marginalization strategy solves problems where self-consistency fails to establish the correct mode (please refer to Figure 2) and sampling more CoTs is not helpful. If we restrict ourselves to the 'difficult' questions during the performance assessment (eliminating the easy questions that are answered correctly by all LLMs and all methods; please refer to Table 10 of our revised paper), then the improvement is more substantial. \\n\\n> # W1. 
c) It would be important to see how the method performs also on more challenging tasks where there is actually headroom to see improvements over current models. \\n\\nWe agree with the reviewer that applying the method more broadly to other diverse and challenging reasoning domains is a worthwhile and very interesting research direction. With a view to partially satisfying this request, we now provide results for \\\"Date Understanding\\\" and \\\"Object Tracking\\\", which are problem sets involving quantitative (but not strictly mathematical or arithmetic) reasoning. \\n\\nWe observe an improvement over the baselines for both of these tasks. The baseline performance is still relatively strong for these datasets, but we note that hint marginalization reduces the average error rate by more than 10 percent for both tasks compared to the next best baseline. The datasets are often padded with many very easy questions that are answered without difficulty by all methods and LLMs. The performance on the more challenging subset of questions, where some (or all) LLMs make errors, is more interesting to analyze. As highlighted by Figure 3 in the paper, our proposed algorithm achieves more noticeable benefits on these subsets of challenging questions. This is also true for the Date Understanding and Object Tracking datasets.\\n\\n> # W2. a) Second, a limitation to arithmetic benchmarks seems arbitrary when presenting a general method. Are there limitations that make the application of HM problematic in those other settings?\\n\\nThis is a valid point. The suggested inclusion of the Math dataset, as well as the analysis of the Date Understanding and Object Tracking datasets, addresses this limitation. 
\\nOur results on the Math dataset (please refer to Table 8 of our revised paper) and the big-bench reasoning tasks such as Date Understanding and Object Tracking (please refer to Table 9 of our revised paper) demonstrate that our framework is advantageous beyond arithmetic reasoning. The Math dataset probes the capabilities of the approach for more general mathematical reasoning (geometry and algebra, for example), and the Date Understanding and Object Tracking tasks probe other types of numerical reasoning. \\n\\nIn terms of limitations, extending the HM framework beyond quantitative reasoning problems to other reasoning domains would require careful prompt engineering to design effective hinting strategies for those domains. If there is no quantitative answer, there is also the challenge of how to define a distribution and aggregate over different responses. This direction is very interesting but is beyond the scope of our current work.\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # Which brings me to Appendix 8.3--thank you for pointing me to this experiment. However, I am unsure how best to understand the results. (This is an average across the whole dataset? Did you query GPT-4 Turbo more than once per prompt, or only once per prompt?)\\n\\nIn Appendix 8.3, we have written \\\"As an example, using GPT-4 Turbo on the entire GSM8K dataset, the empirical frequency of obtaining an incorrect answer conditioned on an immediate correct hint is 0.0179. \\nThis suggests that assuming $\\\\gamma$ to be very small is justified.\\nOn the other hand, the empirical frequency of obtaining a correct answer conditioned on a previous incorrect hint is 0.3159, which supports the assumption of having a strictly non-zero value for $\\\\delta$.\\\"\\n\\n(1) These results are obtained by considering an average across the whole dataset. 
We have amended the text in the appendix to make this clear.\\n\\n(2) We queried the LLM more than once per prompt, since these results are obtained by analyzing PHP+SC. \\n\\nThe original purpose of Appendix 8.3 was to convey the simple intuition for Section 3.1, that on average, LLMs tend to repeat the correct answer with a high probability if it is provided as a hint, and they also possess some ability of self-correction if an incorrect hint is provided.\\n\\nHowever, in light of the reviewer's questions, we realize that reporting a dataset-level summary statistic only provides some indirect evidence in support of the success of the hinting mechanism.\\n\\nOn the other hand, we already presented a more thorough, direct, and systematic study of the PHP prompt's refinement capability in the paper (as explained in detail in the response to the previous question above, with results in Figures 3-5). \\nWe have also added quantitative results from the same experiment to the modified Appendix 8.9, which concretely show support for our usage of the hint prompt in the HM framework.\\n\\n**Continued in the next Official Comment**\"}", "{\"summary\": \"Hinting has proved itself as a viable approach to improve the reasoning capabilities of an LLM. A common approach is to incorporate a potential answer into the prompt, e.g., by adding \\\"the solution might be close to X\\\" at the end. The submission proposes a simple yet principled approach to leverage the initial answers of an LLM as hints, defining an iterative refinement of the answer distribution. 
More precisely, given an initial query $q$, one defines probability distributions $p_n(x\\mid q)$ recursively by\\n\\\\begin{align*}\\np_0(x\\mid q) &= p_{\\text{LLM}}(x\\mid q) \\\\\\\\\\\\\\\\\\np_{n+1}(x\\mid q) &= \\\\int p_n(x_0\\mid q)p_{\\\\text{LLM}}(x\\mid q, \\\\mathrm{HINT}(x_0)) d x_0\\n\\\\end{align*}\\nThe main motivation behind this definition is the empirical observation that, in many cases, across the possible hints the flow of probability to the correct answer will be larger than the flow to the incorrect ones. As an intuition, one can say that giving a correct answer as a hint will often make the correct answer much more likely, while an incorrect answer as a hint will often be ignored by the model. \\n\\nThe authors conduct extensive experiments both with multiple datasets and multiple SOTA LLMs and compare hint marginalization against other reasoning frameworks such as self-consistency, chain of thought, and progressive-hint prompting. Throughout the experiments, hint marginalization consistently shows that it is able to outperform previous methods. \\n\\nOverall, I think this is a good submission. The proposed idea might be simple but it is also sound and effective. 
Hence, I can recommend the paper for acceptance.\\n*Update*: After the discussion, I decided to downgrade the rating due to the rather weak motivation of the method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Independent of underlying task.\", \"Can be combined with other advanced prompting strategies.\", \"Defines a sound stochastic process as a basis for combining answers from multiple hints.\", \"Provides a simple sampling-based algorithm to estimate marginal probabilities iteratively.\", \"Takes great care to ensure fair evaluation in the experiments.\", \"Impressive experimental results.\"], \"weaknesses\": [\"Justification of the method stems only from intuition and limited empirical evidence.\"], \"questions\": [\"Is there any clustering of equivalent answers (e.g., 0.5, 1/2, \\\\frac{1}{2}, ...) during sampling?\", \"In the methodology section you define the output of the LLM to consist of the answer $y$ and an additional rationale $z$. Is $z$ used in any way in your procedure?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Feedback on Rebuttal Responses\", \"comment\": \"Dear Reviewer mTdx,\\n\\nAs the discussion period nears its end, we hope that we have effectively addressed and resolved your concerns.\\n\\nYour feedback on our rebuttal responses would be greatly appreciated. We are more than happy to provide further clarification on any remaining issues.\\n\\nThank you for your time and consideration.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We would like to sincerely thank the reviewers for their thoughtful and constructive feedback on our paper. Their comments have helped significantly in improving the quality of our work. We deeply appreciate the time and effort that each reviewer invested in thoroughly reviewing our manuscript. 
The detailed suggestions and observations were invaluable, and we believe that the revisions made in response to their comments have strengthened the paper considerably.\\n\\nAs per the reviewers' suggestions, we have added a) results using Llama-3 (Table 7), b) results on Math dataset (Table 8), c) results on tasks beyond arithmetic reasoning (Table 10), and d) discussion on the intuition of using hints (Appendix 8.8) in the revised version of the paper.\\n\\nWe would like to share these new results with all reviewers. Below, we respond to each reviewer individually.\"}", "{\"title\": \"Response to Reviewer bgqX\", \"comment\": \"We thank the reviewer for acknowledging that the paper is 'clearly written' and 'easy to follow'. Below, we address your concerns.\\n\\n> # W1. a) I don't really work in this area, and am unfamiliar with ICLR's norms for papers that essentially present a new prompting technique with benchmark results.\\n\\nWe would like to stress that our paper **does not** fall into the category of 'papers that essentially present a new prompting technique'.\\nIn this work, we **do not propose any novel prompt engineering** technique. \\nInstead, the HM framework is a **novel iterative hint-based refinement strategy for reasoning with LLMs**, where the main novelty lies in its **capability of maintaining and updating a distribution over answers** for improved reasoning.\\n\\nIn **Section 3.4** of the paper, we discuss that applicability of the proposed HM algorithm is **agnostic to the choice of prompts** and HM can **readily incorporate any advanced prompting techniques**, since those methods combined with SC can be used for\\ninitializing $p_1(\\\\tilde{y}|x)$ for subsequent iterations of Hint Marginalization. 
\\n**Our contribution is thus orthogonal to prompting approaches.**\\n\\nOur paper perhaps did not make it sufficiently clear, but in light of the reviewer's comments, we will clarify that **HM is not a prompt design method** in the introduction of the paper. \\n\\n> # W1. b) But I believe the paper does not currently contribute a significant advance to scientific knowledge in this area.\\n\\n\\nOur contribution (written in detail towards the end of the Introduction Section of the paper) is to propose a novel, probabilistic, simple, computationally efficient, principled, generally applicable, and effective 'iterative refinement strategy' for LLM's reasoning.\\n\\n**HM** is a **novel** and **probabilistic** framework, since to the best of our knowledge, this is the first work which considers sequential refinement of the **distribution** of LLMs' answers instead of refining **one** answer.\\n\\n**HM** is remarkably **simple** to implement since the Monte Carlo approximations required for updating the distribution of answers (Eqs. 3 and 6 in the paper) involve straightforward arithmetic calculations only.\\n\\n**HM** is **computationally efficient** since the runtime of one HM iteration is essentially the same as one LLM call. Implementing Eqs. 3 and 6 contributes negligibly to the runtime, and the LLM calls within each HM iteration using different hints can be carried out in parallel, so that one round of refinement has close to the same latency as that of a single LLM call.\\n\\n**HM** is **principled** since it formalizes how marginalizing over hints iteratively should make the mode of the inference distribution more likely to be the correct answer under some mild assumptions.\\n\\n**HM** is **generally applicable** since it is agnostic to the choice of prompts and can readily incorporate any advanced prompting techniques. 
We experimentally demonstrate how other prompting techniques such as PHP can be combined successfully with our method.\\n\\n**HM** is **effective** since the results in Table 1 in the paper show that out of 18 experimental scenarios (3 LLMs, six datasets), we observe a statistically significant increase in accuracy in 14. Justifying our intuition, further analyses in Figures 3-5 show that **CoT+HM has a higher probability of the correct answer compared to its competitors more often** across multiple datasets and LLMs.\\n\\nTherefore, we believe that this work is an **important first step towards developing and understanding better probabilistic inference techniques for LLMs** in the era of OpenAI o1 (which is closed source but is presumed to sample multiple responses and refine them for its final answer to encourage 'slow thinking').\"}", "{\"metareview\": \"This paper generated a lot of discussion. The general opinion was that the paper identified an important problem of combining multiple responses from LLMs to improve their reasoning ability. The proposed approach was also considered principled.\\n\\nWhile the broad ideas were well articulated, the reviewers were quite concerned with what they perceive as ad-hoc evaluation. It is not clear why the improvement shrunk compared to benchmarks, and while a small experiment was added for discrete arithmetic, no methodology was provided, analysis of ranks and p-values were provided for only some subsets of benchmarks and Llama was evaluated on a subset of benchmarks with no rank histograms.\\n\\nIn the end, there was consensus that the paper needs one more round of edits before acceptance. If this were a journal, I would have suggested major revisions and then reviewed it, but the changes do not appear minor.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers were actually quite responsive and engaged with the authors.
There was some resentment among the reviewers with the tone of the discussion from the authors. I would request the authors to kindly be a bit more polite with the responses if possible.\"}", "{\"summary\": \"This paper proposes a new protocol for iteratively prompting an LLM to solve reasoning problems.\\n\\nThe protocol is based on the hypothesis that, when the prompt to the LLM contains a candidate answer as a \\\"hint\\\" (e.g., \\\"The answer is probably close to $y$\\\"), the LLM will: (1) output $y$ with high probability if it is in fact the correct answer, and (2) still have some chance of outputting the correct answer even if the hint $y$ is not correct. If this hypothesis holds, then we can iteratively concentrate probability on the correct answer by repeatedly prompting the LLM with its previous answer as a \\\"hint.\\\" More formally, we can estimate a sequence of distributions $p_r(y \\\\mid x) = \\\\sum_{y' \\\\in \\\\mathcal{Y}} p_{r-1}(y' \\\\mid x) \\\\cdot p_{LM}(y \\\\mid \\\\text{question}=x, \\\\text{hint}=y')$, where each $p_r$ concentrates more probability on the correct answer than $p_{r-1}$. The paper's proposed method is a particular Monte Carlo scheme for estimating $p_r$, which works by iteratively estimating $p_0, p_1, p_2,$ and so on, with a different sampling budget at each step. The steps of the algorithm must be performed in sequence, but within each step, approximation can be done in parallel.\\n\\nThe paper compares the proposed method to other prompting protocols on six benchmarks with three OpenAI LLMs, and shows that it achieves slightly higher accuracy overall.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clearly written and was easy to follow.\\n\\n2. The paper clearly states the reasons that the technique might be expected to work.\\n\\n3. 
The empirical evidence does seem to show that the proposed technique delivers some performance gains on well-studied benchmarks.\", \"weaknesses\": \"1. I don't really work in this area, and am unfamiliar with ICLR's norms for papers that essentially present a new prompting technique with benchmark results. But I believe the paper does not currently contribute a significant advance to scientific knowledge in this area. For example:\\n\\n- After reading the paper, I still have very little intuition for why this sort of \\\"hinting\\\" (\\\"The answer is close to X\\\") is supposed to be beneficial. In the example few-shot prompts, the rationales do not appear to use the hint at all. In practice, do LLM rationales use the hints? How? There is no real empirical analysis beyond the overall \\\"our strategy does better\\\" benchmark results. \\n\\n- The theory of the paper is based on the availability of some iterative-refinement strategy that has high \\\"in-flow\\\" of probability to the correct answer and low \\\"out-flow\\\" of probability away from the correct answer. But there is no systematic study of what sorts of iterative refinement strategies have this property. For example, do the various critique-based prompting strategies satisfy this property? Does \\\"hinting\\\" as you do actually satisfy this property on a wide range of examples? Why?\\n\\n2. I don't have a sense of the difficulty of the tasks for GPT-4-caliber language models--GPT-4 seems to score in the mid-to-high 90s. What sorts of problems does this technique really help to solve? On harder reasoning benchmarks where GPT-4 still does very poorly (e.g., Chollet's Abstraction and Reasoning Challenge) does this technique actually help? If not, how do the authors view the limitations of this technique?\", \"questions\": \"1. Some results are starred in your table but I could not find any description of what stars indicate. Can you clarify?\\n\\n2. What exactly is PHP+HM? 
I didn't see PHP clearly described, and it was unclear to me how you were using PHP within HM.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # W1. e) There is no real empirical analysis beyond the overall ``our strategy does better\\\" benchmark results.\\n\\nWe strongly disagree with the assessment that we do not include any empirical analysis beyond the overall *\\\"our strategy does better\\\"*.\\n\\nIn addition to presenting the main experimental results on standard benchmarks in Table 1, we indeed present a simple intuition behind hint marginalization, other quantitative analyses, illustrations, and a case study to highlight the comparison between the proposed HM and relevant sampling-based baseline algorithms.\\n\\nThe assumptions in Section 3.1 are formed by analyzing the PHP results. An example for GSM8K is provided in Appendix 8.3, which confirms that:\\n- (a) using the correct answer as the hint, the LLM generates the same answer with a very high probability; and\\n- (b) even with an incorrect hint in the prompt, the LLMs are at least somewhat likely to generate the correct answer in the next interaction.\\n\\nSimilar results are obtained for other datasets and LLMs examined in our experiments. We will provide all such results in a table in the revised version and add a reference to Appendix 8.3 in Section 3.1 for a clearer presentation.\\n\\nWe illustrate the steps of Algorithm 1 in Figure 1 for an easier exposition of HM via an example. 
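To complement the figure, the iteration can also be written down in a few lines. The sketch below is a toy Monte Carlo rendering of this loop, not our implementation: `sample_answer` is a stand-in for one LLM call, and for simplicity the per-round sample budget is fixed, whereas Algorithm 1 allows the budget to vary across rounds.

```python
import random
from collections import Counter

def hint_marginalization(sample_answer, query, num_rounds=2, num_samples=40, seed=0):
    """Toy Monte Carlo sketch of the HM iteration.

    `sample_answer(query, hint)` stands in for one LLM call; `hint=None`
    means an unhinted (plain CoT) call.  Each round draws hints from the
    current answer distribution and re-estimates the distribution from the
    hinted calls, approximating
        p_{n+1}(y | q) = sum_{y'} p_n(y' | q) * p_LLM(y | q, HINT(y')).
    """
    rng = random.Random(seed)
    # Round 0: self-consistency estimate of p_0(. | q) from unhinted calls.
    counts = Counter(sample_answer(query, None) for _ in range(num_samples))
    dist = {ans: c / num_samples for ans, c in counts.items()}
    for _ in range(num_rounds):
        answers = list(dist)
        hints = rng.choices(answers, weights=[dist[a] for a in answers], k=num_samples)
        # The hinted calls within a round are independent of each other.
        counts = Counter(sample_answer(query, h) for h in hints)
        dist = {ans: c / num_samples for ans, c in counts.items()}
    return dist  # the final prediction is the mode of this distribution
```

Since the hinted calls within each round are mutually independent, they can be issued in parallel, so one round of refinement has roughly the latency of a single LLM call.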
In addition, Figure 2 shows a case study of how HM corrects an erroneous answer of CoT+SC by increasing the probability of the *\\\"correct\\\"* answer iteratively.\\n\\nFigures 3-5 show that CoT+HM has a higher probability of the correct answer compared to its competitors (including CoT+SC) for most of the *\\\"difficult\\\"* questions across all datasets and LLMs used in our experiments. Since CoT+HM is initialized with CoT+SC, these results provide direct empirical evidence that marginalizing over hints indeed increases the probability of the *\\\"correct\\\"* answer. This justifies our intuitions and demonstrates the efficacy of the proposed HM framework beyond improved accuracies.\\n\\n> # W2. a) The theory of the paper is based on the availability of some iterative-refinement strategy that has high \\\"in-flow\\\" of probability to the correct answer and low \\\"out-flow\\\" of probability away from the correct answer. But there is no systematic study of what sorts of iterative refinement strategies have this property.\\n\\nOur objective in this work is to present a general method and demonstrate its effectiveness via specific instantiations of the framework.\\n\\nThe motivation for choosing the hinting prompt for sequential refinement of the answer distribution stems from:\\n- (a) the simplicity; and\\n- (b) the effectiveness of the PHP-style prompting.\\n\\nAnalyzing whether other iterative refinement strategies have high *\\\"in-flow\\\"* of probability to the correct answer and low *\\\"out-flow\\\"* of probability away from the correct answer is certainly very interesting and would lead to further generalizations of our approach.
\\n As a concrete example, consider self-refine (Madaan et al., 2023), which uses extensive prompt engineering via Python codes and explicitly introduces some errors and corrected versions in the feedback prompt. \\n Incorporating the LLM-generated code as a hint in such a setting would require extensive experimentation with potentially different prompt engineering techniques. This would lead to a very high experimental computational cost because of the complicated nature of such prompts and the need to generate multiple samples for estimating the conditional probabilities.\\n \\n2. Our experimental results in Table 1 already show that the proposed HM approaches outperform the iterative refinement methods using the hint-based methodology. We do not believe it is essential to find other refinement approaches that satisfy the *\\\"in-flow\\\"* and *\\\"out-flow\\\"* assumption or to characterize the types of refinement strategies that achieve this.\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # W1. c) For example: After reading the paper, I still have very little intuition for why this sort of \\\"hinting\\\" (\\\"The answer is close to X\\\") is supposed to be beneficial.\\n\\nWe thank the reviewer for drawing our attention to this point. \\n\\nFirst, we would like to note that we do not propose the hint prompt in this work. Rather, it has been adapted to the HM framework from PHP (Zheng et al., 2023).\\n\\nHowever, since we do make use of the hint mechanism, we can provide some clarification of the intuition behind the mechanism. As Zheng et al. 
(2023) note, hinting allows humans to check their answers and improve upon their previous solution to a given problem.\\n\\nWe conjecture that in selecting its arithmetic answer, the LLM will assign attention to the hint and in particular, its understanding of the phrase *\\\"close to x\\\"* will provide additional bias towards selecting a number that is closer to the suggested hint.\\n\\nAdditional support for the benefit of hinting is presented by Fu et al. (2024). In their work, the LLM is encouraged via in-context examples to prepare a hint before solving the problem. The developed hints are more general than those we employ in our work, but the performance improvement in reasoning is indicative of the potential value of a hint in directing an LLM towards a good solution. Further evidence is provided by Agrawal et al. (2024). In their work, a hint is generated using a weaker LLM. This is observed to yield a performance improvement over multiple maths reasoning datasets.\\n\\nWe understand that the paper did not sufficiently explain the intuition and value of hinting and we have modified the paper to include a summary of this discussion in Appendix 8.8, citing these two recent works in support.\\n\\n### References:\\n\\n- Fu, Jinlan, et al. \\\"Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize Encoded Knowledge.\\\" *arXiv preprint arXiv:2402.14310* (2024).\\n- Agrawal, Vansh, et al. \\\"Give me a hint: Can LLMs take a hint to solve math problems?\\\" *arXiv preprint arXiv:2410.05915* (2024).\\n\\n> # W1. d) In the example few-shot prompts, the rationales do not appear to use the hint at all. In practice, do LLM rationales use the hints? 
How?\\n\\nWe reiterate that we do not propose the hinting prompt in this work and do not claim any optimality of its design.\\nHaving said that, as the reviewer notes correctly, the rationales do not appear to use the hint explicitly in the example few-shot prompts.\\n\\nDespite this, the improved accuracy of PHP in comparison to CoT (see Table 1 of our paper and/or Table 2 in the PHP paper) provides strong empirical evidence in support of the usefulness of hint-prompting.\\n\\nWe argue that investigation of how the LLM is using the hints requires the development of a deeper theoretical understanding of LLMs' few-shot learning capabilities. This is an open question in LLM research at present, but is not the main contribution or focus of this paper.\\n\\nHowever, we conjecture that the presence of the hint in the prompt nudges the LLM to consider the hint both as it selects the steps in the rationale and when it answers the question. Empirically, we observe that there is a significantly greater chance of selecting the same answer as the provided hint. For example, as specified in Appendix 8.3, for the GSM8K dataset, the probability of obtaining an incorrect answer conditioned on providing a correct hint is 0.0179. By contrast, we see that the best performing procedure has an error rate of 0.054. This provides evidence that the insertion of the hint is affecting the answer (in a positive way), even if it is not immediately discernible in the formation of the rationale.\"}", "{\"summary\": \"The paper presents Hint Marginalization (HM), an iterative prompting framework for refining LLM answers on certain kinds of tasks. The paper introduces the method of Hint Marginalization in a general algorithmic way.
Intuitively the method can be understood as first sampling an output distribution from multiple attempts of the first prompt, which is then iteratively refined by sampling answer hints from this initial distribution, which are then provided as additional information in the next round of prompts. This is then supplemented with an experimental evaluation on a number of standard arithmetic reasoning benchmarks for GPT-3.5 Turbo, GPT-4 Turbo and GPT-4o mini.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The empirical evaluation does show some improvements, most importantly in comparison to self-consistency methods.\", \"The paper proposes a general model for refining LLM answers. Although the type of task seems to implicitly be limited to tasks that have \\\"atomic\\\" answers (e.g., numbers).\", \"Following the initial distribution of answers in sampling hints is an interesting idea and framing the problem as iteratively refining distributions is a robust foundation for the technique.\"], \"weaknesses\": [\"For the presentation as a general method, the experimental evaluation is too narrow. Only arithmetic reasoning benchmarks are tested, which is problematic in two ways.\", \"For one, within the category of math benchmarks the most challenging benchmarks (such as MATH) are left out. For the chosen benchmarks, state-of-the-art models already perform very well (see e.g., [1]). The reference to Patel et al. 2021 that motivates their choice does not seem timely here either. Employing such extensive prompting regimes for mild improvements in weak models is not a strong motivation. It would be important to see how the method performs also on more challenging tasks where there is actually headroom to see improvements over current models.\", \"Second, a limitation to arithmetic benchmarks seems arbitrary when presenting a general method.
Are there limitations that make the application of HM problematic in those other settings? In particular it seems that the method is limited to reasoning tasks that output a singular (discrete) answer. In more complex settings it is unclear how the hint distribution can be reasonably formed.\", \"There are no experiments with state-of-the-art models such as GPT-4o or any models not by OpenAI. It is unclear to what degree the observed improvements for weaker/old models translate in any meaningful way also to new models or to alternative architectures. Already in the provided experimental data (Table 1) we see that the improvement gains from HM diminish significantly with GPT-4 Turbo. Strikingly, on AQuA self-consistency becomes stronger than HM, whereas the situation was flipped for the weaker GPT-3.5.\", \"[1] Se\\u00dfler, Kathrin, et al. \\\"Benchmarking Large Language Models for Math Reasoning Tasks.\\\" arXiv preprint arXiv:2408.10839 (2024).\"], \"questions\": [\"From the algorithm and the mathematical exposition it is unclear to me if HM can work for continuous distributions. Could you please elaborate on whether there is any consideration for this setting?\", \"Is there any intuition for why hints regarding proximity to other numbers are helpful to the LLM in arithmetic tasks? For example referring to the first prompt example in 8.2, why would knowledge of the answer being near 4, 7 be helpful in reasoning? There is no causal way to arrive at the answer from this knowledge and the output of the prompt does not seem to take this into account. I understand that the hints condition the LLM, but it is unclear to me why such conditioning would be helpful in general.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the follow-up questions\", \"comment\": \"We thank the reviewer for reading our rebuttal and for raising interesting follow-up questions.
Below, we address your concerns.\\n\\n> # I have taken a look at the referenced papers on hinting (Fu et al. and Agrawal et al.). Perhaps I am misreading but they seem to propose a rather different type of \\\"hinting\\\" than in your work. Crucially, their hinting protocols are not techniques for conditioning a language model on a previously generated answer. As such, I do not believe they are relevant to your paper, because those hinting strategies could not be used to refine a distribution into a more concentrated distribution.\\n\\nWe agree that those techniques cannot directly be used to refine a distribution \\ninto a more concentrated distribution. As written in our rebuttal, our intention was to show support for the general idea of hinting aiding in LLMs' reasoning, not to suggest that these were alternative refinement strategies in the HM framework.\\n\\n> # As for Zheng et al. 2023, it appears their paper was rejected from TMLR, with multiple reviewers raising as a concern the lack of motivation or ablation testing to understand whether \\\"hinting\\\" really has the claimed properties, e.g.:\\n\\n> # \\\"Although PHP has improved model performance, its results do not explicitly present any reasoning paths to understand how it utilize these hints.\\\"\\n\\n> # \\\"The prompt looks odd! I don't have much a priori intuition that prompting with not necessarily correct hints would improve the results so significantly, which is why I suspect a lurking alternate explanation.\\\"\\n\\nAlthough it is true that their paper was rejected from **TMLR**, the authors did submit a version to the **AI4MATH** workshop at **ICML 2024**, which was accepted. You can read the peer review [here](https://openreview.net/forum?id=UkFEs3ciz8).\\n\\nReading the reviews from **TMLR** (which can be read [here](https://openreview.net/forum?id=5HsBuYYx4i)), one gathers that none of the reviewers dispute the fact that the suggested prompt works better than **CoT**. 
On the other hand, they were unhappy that the authors had not provided more investigations into why or how the **LLMs** benefited from this strategy.\\n\\nYour concern falls under the same category. While we consider that a thorough investigation to explain exactly how the **LLM** is impacted by the hint is beyond the scope of our work (and would essentially constitute another paper), we believe that we have provided a reasonable conjecture. More importantly, in alignment with your suggestion, what is critical for our purposes is demonstrating that the proposed hint satisfies the **in-flow** versus **out-flow** assumptions that support our proposed method.\\n\\nBelow, we provide a detailed response to support the use of the **PHP** prompt in our work.\\n\\n> # To reiterate my key concern, I understand the logic of your paper to be:\\n\\n> # (1) Suppose we have a way to condition a language model on its previous answer, in such a way that in-flow of probability to the correct answer is greater than out-flow of probability from the correct answer.\\n\\n\\n> # (2) Then we can iteratively refine the LM's answer distribution to concentrate more and more mass on the correct answer.\\n\\n\\n> # To my knowledge, no one has convincingly demonstrated point (1) in the literature, so it would fall to your paper to defend the existence of such a prompting method.\\n\\nWe thank the reviewer for bringing up this point, which allows us to discuss this issue in detail.\\nWe agree that no one has convincingly demonstrated point (1) in the literature, so we do indeed need to demonstrate via empirical analysis that PHP prompting (Zheng et al., 2023) satisfies this property for justifying its use in the proposed HM framework.\\n\\nPerhaps we did not sufficiently emphasize this aspect in our presentation or in the response, but we did conduct a thorough empirical investigation of this issue in our initial submission (see lines 419-428 in our paper and Figures 3-5) and our results 
clearly support that using PHP-style prompting indeed satisfies this property.\\nIn light of the reviewer's comment, we will stress this point in the introduction as a valuable contribution of our work, and we will amend the discussion of Figures 3-5 to emphasize how these results support the claim. \\n \\nBelow, we briefly recap our mathematical framework, explain our experimental procedure for this investigation (also written in lines 418-427 in Section 4.4 of our paper), and reiterate and explain our key results for enhanced clarity.\\n\\n**Continued in the next Official Comment**\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # The numbers you report seem consistent with your assumption, but also with other possibilities, e.g. that a hinted LM has some probability $q$ of sticking with the hint, and probability $1-q$ of ignoring the hint and answering from its unhinted distribution. Because GPT-4 Turbo already does very well on the dataset, this will look like \\\"high probability of correct given correct hint, some probability of correct given incorrect hint.\\\" But on a dataset where the initial performance was worse, this behavior might no longer satisfy your assumptions.\\n\\nWe thank the reviewer for this insightful question. In light of your analysis, we realize that the provided results in Section 8.3 do not act as sufficient support for the 'in-flow vs out-flow' assumption. 
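As a quick numerical sanity check of this hypothetical scenario (using an arbitrary toy answer distribution, not data from our experiments): if a hinted LM sticks with the hint with probability $q$ and otherwise answers from its unhinted distribution $p_1$, then one marginalization step returns $p_1$ unchanged for every $q$, so such a model could not produce the gains we observe. The few lines below verify this numerically.

```python
# Toy check (arbitrary numbers): under the mixture
#   p(y1 | x, Hint(y2)) = q * 1{y1 == y2} + (1 - q) * p1(y1 | x),
# one marginalization step p2(y) = sum_{y'} p(y | x, Hint(y')) * p1(y' | x)
# leaves the answer distribution unchanged for every q in [0, 1].
p1 = {"a": 0.5, "b": 0.3, "c": 0.2}

def marginalize_once(p1, q):
    return {
        y: sum((q * (y == y2) + (1 - q) * p1[y]) * p1[y2] for y2 in p1)
        for y in p1
    }

for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    p2 = marginalize_once(p1, q)
    assert all(abs(p2[y] - p1[y]) < 1e-12 for y in p1)
```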
\\n\\nAs mentioned in the previous response, the original purpose of Appendix 8.3 was to provide some evidence in support of our assumptions in Section 3.1.\\n\\nPlease refer to the **previous response and the updated Appendix 8.9** for a detailed discussion of this issue.\\n\\nHowever, **while the proposed hypothetical scenario could explain the numbers presented in Appendix 8.3, it cannot explain the observations in Figures 3-5**, as explained below.\\n\\nLet us first consider two extreme scenarios: a) the hint-conditioned LLM outputs the hint as its answer with certainty, i.e., $p(y_1|x, \\\\mathrm{Hint}(y_2)) = \\\\delta(y_1-y_2)$ for all $y_1$ and $y_2$ ($\\\\delta(\\\\cdot)$ denotes the Kronecker delta function), and b) the hint-conditioned LLM ignores the hint completely and answers from its unhinted distribution $p(y_1|x, \\\\mathrm{Hint}(y_2)) = p_1(y_1|x)$ (note that $p_1(\\\\cdot|x)$ is the initial (unhinted) distribution, estimated using CoT+SC) for all $y_1$ and $y_2$.\\n\\nIn case a),\\n\\\\begin{align}\\np_{2}(\\\\tilde{y}|x) &= \\\\int p(\\\\tilde{y}|x, \\\\mathrm{Hint}(y')) p_{1}(y'|x)dy' = \\\\int \\\\delta(\\\\tilde{y}-y') p_{1}(y'|x)dy' = p_{1}(\\\\tilde{y}|x)\\n\\\\end{align}\\n\\nIn case b),\\n\\\\begin{align}\\np_{2}(\\\\tilde{y}|x) &= \\\\int p(\\\\tilde{y}|x, \\\\mathrm{Hint}(y')) p_{1}(y'|x)dy' = \\\\int p_1(\\\\tilde{y}|x) p_{1}(y'|x)dy' = p_1(\\\\tilde{y}|x) \\\\int p_{1}(y'|x)dy' = p_1(\\\\tilde{y}|x)\\n\\\\end{align}\\n\\nSo, in both cases a) and b), HM would keep the answer distribution unaltered.\\nIntuitively, if the LLM outputs the hint as its answer w.p.
1, then both the in-flow and out-flow probabilities are zero.\\nOn the other hand, if it ignores the hint completely, the in-flow and out-flow probabilities are equal.\\n\\nThe reviewer's hypothetical scenario is a mixture of those two extreme conditional distributions, i.e., ``a hinted LM has some probability $q$\\n of sticking with the hint, and $1-q$ probability \\n of ignoring the hint and answering from its unhinted distribution''.\\n In this case, we have $p(y_1|x, \\\\mathrm{Hint}(y_2)) = q \\\\delta(y_1-y_2) + (1-q) p_1(y_1|x)$.\\nFrom the analysis above, we see that for any value of $q \\\\in [0,1]$, this would again result in $p_{2}(\\\\tilde{y}|x) =p_{1}(\\\\tilde{y}|x)$ for all $\\\\tilde{y}$.\\n\\n**So, if the reviewer's hypothetical scenario is indeed true, then HM would not increase the probability of the correct answer (and hence would not increase accuracy), irrespective of the value of the initial accuracy obtained by estimating mode of the unhinted distribution $p_1$.**\\n\\nHowever, our results show that a) **CoT+HM outperforms CoT+SC** (which outputs the mode of $p_1$ as its answer) in Table 1, and b) more often **CoT+HM has higher probability of the correct answer than that of CoT+SC** across different datasets and LLMs (Figures 3-5).\\n\\nThose observations **support our intuitions about the usefulness of hinting**. Other results in our paper (e.g., the **comparison between CoT and PHP in Table 1**) also provide **strong empirical evidence** that the **PHP-style prompt is indeed taking the hint into account in a positive way**.\\n\\n> # Thank you for your various other clarifications. I apologize for the language \\\"essentially presenting a new prompting technique\\\" -- it's true you are not really presenting a prompting technique, more an inference-time iterative prompting protocol. I do think the idea makes sense, but still have reservations about \\\"hinting.\\\" I note Reviewer mTdx had similar concerns. 
I can see arguments for accepting this paper but on balance I am still somewhat dissatisfied. I am fine conceding to other reviewers with stronger opinions or more background in this area, though, if they are convinced by the results.\\n\\nThank you for acknowledging that we \\\"are not really presenting a prompting technique\\\" and that the idea \\\"makes sense\\\".\\n\\n**We very much appreciate the interaction and the careful thought you have given to our work; it has prompted us to consider some important aspects more rigorously.**\\n\\nPlease let us know if you still have any outstanding concerns, so that we can make further attempts at addressing them.\"}", "{\"title\": \"Cont'd\", \"comment\": \"**Our result:**\\n\\nNext, we count for each algorithm how many times it obtains rank 1, 2, and 3 on these 'difficult' questions and plot the stacked histograms of these ranks for all six datasets using the three GPT models in Figures 3-5.\\n\\nWe observe that the proposed CoT+HM achieves the lowest rank based on the probability of the correct answer across the 'difficult' questions for all datasets and all LLMs more often, outperforming both CoT+SC and PHP+SC.
The height of the blue bar in the CoT+HM column, which counts how many times CoT+HM has higher probability of the correct answer compared to either of the other two algorithms, is the largest for all datasets and LLMs in Figures 3-5.\\n\\nIn other words, this provides **direct empirical evidence** that CoT+HM has **higher probability of the correct answer** compared to its competitors (including CoT+SC, which uses $p_1(\\\\cdot|x)$ for inference) for most of these 'difficult' questions across six datasets and three LLMs (i.e., 18 different cases).\\n\\nRemember that for each question, obtaining a **higher probability on the right answer** from (one or more rounds of) CoT+HM than that of CoT+SC is possible **if and only if** the **\\\"in-flow of probability to the correct answer is greater than out-flow of probability from the correct answer\\\"**. \\n\\nWe would like to stress that this experiment is not intended to serve as another \\\"CoT+HM'' versus \\\"CoT+SC'' competition, of the form \\\"our proposed method works better''. The experiment evaluates the probability assigned to the correct answer, and this may not be the maximum, so it does not directly reflect the accuracy of a method. Rather, the value that \\\"CoT+HM'' assigns to the correct answer $y$ is a direct empirical approximation of $p_3(\\\\tilde{y}=y|x)$, and the value that \\\"CoT+SC'' assigns to the correct answer $y$ is a direct empirical approximation of $p_1(\\\\tilde{y}=y|x)$. When these approximations are formed using 40 chains-of-thought, they form a sufficiently accurate approximation of the underlying probabilities, such that when we perform a rank comparison over 6 datasets and 3 LLMs, the probability of observing such a consistent difference through chance is very small. 
Moreover, one could argue that in order for the proposed HM strategy to work, we actually need to observe the in-flow $>$ out-flow condition for the empirical probabilities.\\n\\n**Statistical Significance:**\\n\\nIn order to demonstrate the statistical significance of our result, we have now conducted a Wilcoxon signed rank test between $p_3(y|x)$ (i.e., the estimated probability of the `correct' answer obtained from the proposed CoT+HM) and $p_1(y|x)$ (i.e., the probability of the 'correct' answer, at the initialization of CoT+HM, estimated from CoT+SC using 40 samples), and report the p-values in Table 11 in Appendix 8.9 in the revised version of the paper (also shown here).\\nWe observe that except for 5 out of 36 cases (6 datasets, 3 LLMs, and 2 different partitions of the datasets), the difference between $p_3(y|x)$ and $p_1(y|x)$ is statistically significant at the 5\\\\% level, providing strong empirical support in favor of the capability of the HM iterations in increasing the probability of the true answers.\\n\\nIn addition, we also calculate the percentage of difficult questions for which $p_3(y|x) \\\\geqslant p_1(y|x)$ is satisfied and report the results in Table 12 in Appendix 8.9 in the revised version of the paper (also shown here).\\nWe observe that in each case, for the majority of the questions, HM iterations do not decrease the probability of the true answer.\\n\\nTo comply with the reviewer's suggestion, we have included these results in Appendix 8.9 to demonstrate the empirical capability of the hint prompt in refining the probability distribution of answers. 
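As an illustrative aside, the paired comparison described above can be sketched in a few lines. The per-question probabilities below are synthetic stand-ins (in our experiments each probability is estimated from 40 sampled chains-of-thought), and `scipy` is assumed to be available; this is a sketch of the testing procedure, not our actual evaluation code.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_questions = 200

# Synthetic per-question probabilities of the correct answer.
p1_correct = rng.uniform(0.2, 0.8, n_questions)  # CoT+SC (initial distribution p1)
# CoT+HM (distribution p3 after HM rounds), here simulated with a small upward shift.
p3_correct = np.clip(p1_correct + rng.normal(0.05, 0.10, n_questions), 0.0, 1.0)

# One-sided Wilcoxon signed rank test: does HM increase the probability
# of the correct answer? (cf. Table 11)
stat, p_value = wilcoxon(p3_correct, p1_correct, alternative="greater")

# Fraction of questions where HM does not decrease that probability (cf. Table 12).
frac_no_decrease = np.mean(p3_correct >= p1_correct)
print(p_value, frac_no_decrease)
```

With a consistent positive shift in the paired differences, the one-sided p-value is small, mirroring the pattern reported in Table 11.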
\\n\\n**Other comments:**\\n\\nNote that we **do not claim** that **in-flow of probability to the correct answer is greater than out-flow of probability from the correct answer** for **all questions in all datasets**.\\n\\nSince on the 'easy' questions as defined above, both CoT+SC and CoT+HM have 100\\\\% accuracy, in order to obtain an improved performance using CoT+HM, we only need a subset, consisting of 'difficult' questions, where the property is satisfied more often, and this increase in probability of the 'correct' answer with each round of HM results in a correction of the final answer.\\n\\n\\n**Summary:**\\n\\n1) Our results in Figures 3-5 substantiate that we indeed observe the increase in probability of the correct answer more often by applying HM using the hint prompt of Zheng et al., 2023.\\n\\n2) As per the reviewer's suggestion, we provide additional details to clarify the results in Figures 3-5 here and modify Appendix 8.9 to include some quantitative results (Table 11 and 12) to justify the use of hint-prompt in our work. \\n\\n\\n**Tables 11 and 12 in Appendix 8.9 are copied in the next Official Comment for completeness.**\"}", "{\"title\": \"Please let us know if you still have any outstanding concerns\", \"comment\": \"Dear reviewer v83X,\\n\\nThank you for your review and the acknowledgment of our response.\\n\\nAs the discussion period nears its end, we hope that we have effectively addressed and resolved your concerns.\\nPlease let us know if you still have any outstanding concerns, so that we can make further attempts in addressing them. \\n\\nIn response to your comments, we have performed additional experiments using two Llama family models and showed the application of HM beyond arithmetic tasks (Math dataset and two big-bench tasks). 
\\nIn both cases, we obtained accuracy improvement using the proposed HM framework.\\n\\nWe believe that these new experiments address your core concerns and therefore would like to request a reconsideration of your rating of our paper.\"}", "{\"summary\": \"The paper presents a new method called hint marginalisation to improve the reasoning capability of a large language model. The basic idea is to generate multiple responses (possibly in parallel) from the same model or from multiple models and combine in order to steer the model towards the most likely response. Therefore, the proposed approach can be viewed as an iterative sampling scheme to produce a Monte-Carlo approximation of a probability distribution over the responses where the mode of the distribution corresponds to the most likely response. The empirical evaluation is carried out on several reasoning tasks using OpenAI GPT models. The results demonstrate conclusively that the proposed hint marginalisation scheme improves the reasoning capabilities of the models considered.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Improving the reasoning capabilities of existing language models is definitely a topic of considerable interest in the AI community. The proposed approach seems to address this issue in a principled manner.\", \"The quality of the presentation is overall quite good and therefore the paper is relatively easy to follow even by readers outside this research area. Most of the technical details presented in the paper are discussed in a relatively clear manner. The examples provided throughout the paper help to get a better understanding of the proposed scheme.\"], \"weaknesses\": [\"Most modern LLMs, especially those from the OpenAI GPT family do fairly well on arithmetic reasoning problems and therefore the improvements shown in Table 1 are marginal (typically less than 1%). 
Perhaps considering other reasoning tasks would better highlight the benefits of the proposed approach.\", \"I was surprised to see that only the GPT models were considered in the experimental evaluation. They already demonstrated strong reasoning capabilities. Therefore, maybe the proposed approach would be more appropriate for weaker models.\"], \"questions\": [\"Can you comment on applying the hint marginalisation scheme to open models like the Llama family? And if you already experimented with the open models, what kind of results did you obtain?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # W2. b) In particular it seems that the method is limited to reasoning tasks that output a singular (discrete) answer. In more complex settings it is unclear how the hint distribution can be reasonably formed.\\n\\nOur approach is designed for reasoning tasks where there is one *\\\"correct\\\"* answer and the metric is task accuracy (whether the algorithm's answer matches the *\\\"correct\\\"* answer). We note that this problem setting does encompass a wide range of reasoning tasks across various domains, e.g., arithmetic (correct answer is a number), mathematical (correct answer is, for example, an algebraic expression), logical (correct answer is a boolean variable), and multiple-choice questions (with a predefined number of options). The reviewer is correct in observing that this is a limitation of our method, but it is a relatively broad limitation, and still leaves our approach applicable to many reasoning tasks.\\n\\nWe agree that HM (as well as any baseline algorithms considered in this work) is not suitable in its current form for nuanced open-ended question answering. (Please refer to the detailed response below for the discussion of *\\\"continuous distributions\\\"*).\\n\\n> # W3. 
a) There no experiments with state-of-the-art models such as GPT-4o or any models not by OpenAI.\\n\\nUnfortunately, GPT-4o is prohibitively expensive (USD 10.00 / 1M output tokens). We consider that it is sufficient to conduct experiments with multiple LLMs. Aside from this, the documented performance of GPT-4o is not significantly better than GPT-4 Turbo or GPT-4o-mini. We agree that it is important to extend analysis beyond the GPT family, and we now include results for two Llama models (please refer to Table 7 in our revised paper). \\n\\n> # W3. b) It is unclear to what degree the observed improvements for weaker/old models translate in any meaningful way also to new models or to alternative architectures.\\n\\nFor the benchmark experiments in our paper, we use GPT-4o-mini, which was released on July 18, 2024, and is OpenAI's *\\\"most cost-efficient small model that\\u2019s smarter and cheaper than GPT-3.5 Turbo\\\"* ([source](https://openai.com/api/pricing/)). \\nThis model was thus released very recently. Based on the performance of GPT-4o-mini in Table 1 and its release date, it cannot be viewed as one of the *\\\"weaker/old models\\\"*. Its performance is close to that of GPT-4o.\\n\\nHowever, we acknowledge that it is important to conduct experiments with other architectures. Hence, we now include experimental results for two Llama-3 variants (please refer to Table 7 in our revised paper).\\n\\n> # W3. c) Already in the provided experimental data (Table 1) we see that the improvement gains from HM diminish significantly with GPT-4 Turbo. 
Strikingly, on AQuA self-consistency becomes stronger than HM, whereas the situation was flipped for the weaker GPT-3.5.\\n\\nWe agree with the reviewer that some of the arithmetic datasets are relatively easy for GPT-4-Turbo, but we still observe that CoT+HM provides an accuracy improvement over self-consistency in 15 out of 18 cases in Table 1, which strongly supports the general usefulness of the proposed HM approach.\\n\\nMoreover, the new experiments on Math (please refer to Table 8 of our revised paper) and other big-bench tasks (please refer to Table 9 of our revised paper) show the general usefulness of HM beyond these benchmarks.\"}", "{\"title\": \"Cont'd\", \"comment\": \"- Figures 3-5 show that, for all datasets and all LLMs, the proposed CoT+HM (which uses $p_3(\\\\cdot|x)$ for inference) **most often achieves the lowest rank based on the probability of the correct answer across the 'difficult' questions**, outperforming both CoT+SC (which uses $p_1(\\\\cdot|x)$ for inference) and PHP+SC. \\n\\n- The **height of the blue bar** in the **CoT+HM column**, which **counts how many times CoT+HM has a higher probability of the correct answer compared to either of the other two algorithms**, is **the largest for all datasets and LLMs** in **Figures 3-5**.\\n\\n- Remember that for each question, obtaining a **higher probability on the right answer** from (one or more rounds of) CoT+HM than that of CoT+SC is possible **if and only if** the **\\\"in-flow of probability to the correct answer is greater than the out-flow of probability from the correct answer\\\"**.
\\n\\n- In order to demonstrate the **statistical significance** of our result, we have now conducted a **Wilcoxon signed rank test** between $p_3(y|x)$ (i.e., the estimated probability of the 'correct' answer obtained from the proposed CoT+HM) and $p_1(y|x)$ (i.e., the probability of the 'correct' answer, at the initialization of CoT+HM, estimated from CoT+SC using 40 samples), and report the **p-values in Table 11 in Appendix 8.9** in the revised version of the paper (**also shown here**).\\n\\n**p-value from Wilcoxon signed rank test between the probabilities of true answers from distributions $p_3(y|x)$ and $p_1(y|x)$ for the 'difficult' questions (for the entire dataset)**\\n\\n| **LLM** | **AddSub** | **MultiArith** | **SingleEQ** | **SVAMP** | **GSM8K** | **AQuA** |\\n|----------------------|----------------------------|---------------------------|--------------------------|--------------------------|--------------------------|--------------------------|\\n| **GPT-3.5 Turbo** | 0.0291 (0.1172) | 0.0006 ($1.3 \\\\times 10^{-5}$) | 0.0012 ($8.6 \\\\times 10^{-5}$) | 0.0132 ($1.4 \\\\times 10^{-5}$) | $9.2 \\\\times 10^{-18}$ ($4.3 \\\\times 10^{-22}$) | 0.0001 ($1.6 \\\\times 10^{-8}$) |\\n| **GPT-4 Turbo** | 0.2868 (0.2258) | 0.0104 ($2.3 \\\\times 10^{-6}$) | 0.0002 ($6.2 \\\\times 10^{-7}$) | $4.8 \\\\times 10^{-8}$ ($1.7 \\\\times 10^{-13}$) | $2.2 \\\\times 10^{-31}$ ($1.5 \\\\times 10^{-41}$) | 0.0065 (0.0042) |\\n| **GPT-4o-mini** | 0.0038 (0.0024) | 0.8413 (0.0243) | 0.0317 (0.0255) | 0.5898 (0.3028) | $4.5 \\\\times 10^{-12}$ ($5.2 \\\\times 10^{-12}$) | $2.1 \\\\times 10^{-5}$ ($8.5 \\\\times 10^{-6}$) |\\n\\n- We observe that except for 5 out of 36 cases (6 datasets, 3 LLMs, and 2 different partitions of the datasets), the **difference between $p_3(y|x)$ and $p_1(y|x)$ is statistically significant at the 5\\\\% level**, providing **strong empirical support** in favor of the **capability of the HM iterations in increasing the probability of the 
'correct' answers**.\\n\\n- In addition, we also **calculate the percentage of difficult questions for which $p_3(y|x) \\\\geqslant p_1(y|x)$ is satisfied** and report the results in **Table 12 in Appendix 8.9** in the revised version of the paper (**also shown here**).\\n\\n**Percentage of 'difficult' questions (percentage of questions in the entire dataset), so that $p_3(y|x) \\\\geqslant p_1(y|x)$ is satisfied (in other words, HM does not decrease the probability of the true answer)**\\n\\n| **LLM** | **AddSub** | **MultiArith** | **SingleEQ** | **SVAMP** | **GSM8K** | **AQuA** |\\n|----------------------|------------------|-------------------|-------------------|-------------------|-------------------|-------------------|\\n| **GPT-3.5 Turbo** | 79.4 (92.7) | 85.2 (97.3) | 86.0 (97.2) | 63.5 (83.8) | 70.8 (81.4) | 64.7 (74.8) |\\n| **GPT-4 Turbo** | 76.2 (95.7) | 96.3 (99.7) | 87.7 (98.0) | 89.5 (96.9) | 85.7 (93.3) | 79.1 (86.6) |\\n| **GPT-4o-mini** | 85.7 (97.2) | 96.3 (99.7) | 82.5 (97.0) | 81.1 (93.9) | 83.8 (92.7) | 75.8 (83.9) |\\n\\n- We observe that **in each case**, for the **majority of the questions, HM iterations do not decrease the probability of the 'correct' answer**.\\n\\nIn summary, these results provide **strong and direct empirical evidence** that **hinting** is indeed an **effective strategy for refinement of the answer distribution**, as proposed in our HM framework.\"}", "{\"title\": \"Summary of new experimental results obtained during the rebuttal period\", \"comment\": \"For the sake of the reviewers' convenience, we provide a **brief summary of the new experimental results we obtained during the rebuttal period.**\\n\\n**Results using Llama:**\\n\\n From the results in Table 7 in our revised paper, we observe that for a strongly capable Llama-3-70b-instruct model, both CoT+HM and PHP+HM perform well and outperform CoT+SC. 
The results for a strong Llama model thus align with those for the stronger GPT models.\\n\\nWhen using Llama-3-8b-instruct, the PHP+HM algorithm achieves the best accuracy in 4 out of 6 datasets and performs comparably in the remaining two datasets. For the weaker model, it is important to have a better initial distribution to refine, and PHP achieves this better than CoT. \\n\\n**Results on Math dataset:**\\n\\nMean and standard error of accuracy (in \\\\%) of reasoning on the Math dataset using GPT-4o-mini. The **highest** accuracy among all competing algorithms is marked in **bold** and the _second-best_ accuracy in those cases is marked in _italic_.\\n\\n| **Algorithm** | **Algebra** | **Counting and Probability** | **Geometry** | **Intermediate Algebra** | **Number Theory** | **Prealgebra** | **Precalculus** |\\n|-------------------|--------------|------------------------------|---------------|--------------------------|-------------------|----------------|-----------------|\\n| **CoT** | 88.5\\u00b10.9 | 73.4\\u00b12.0 | 55.1\\u00b12.3 | 51.5\\u00b11.6 | 76.3\\u00b11.8 | 86.9\\u00b11.1 | 49.1\\u00b12.1 |\\n| **PHP** | 90.2\\u00b10.9 | 75.3\\u00b12.0 | 55.9\\u00b12.3 | 52.3\\u00b11.7 | 78.1\\u00b11.8 | 87.6\\u00b11.1 | 51.1\\u00b12.1 |\\n| **CoT+SC** | 93.9\\u00b10.7 | **82.9\\u00b11.7** | *64.7\\u00b12.2* | 58.1\\u00b11.7 | *83.5\\u00b11.6* | **91.2\\u00b11.0** | 51.3\\u00b12.1 |\\n| **CoT+HM** | *94.1\\u00b10.7* | *81.0\\u00b11.8* | 64.1\\u00b12.2 | *58.3\\u00b11.7* | 82.0\\u00b11.7 | **91.2\\u00b11.0** | *51.5\\u00b12.1* |\\n| **PHP+HM** | **94.8\\u00b10.6** | 80.6\\u00b11.8 | **65.3\\u00b12.2** | **58.9\\u00b11.6** | **85.4\\u00b11.5** | *90.7\\u00b11.0* | **52.0\\u00b12.1** |\\n\\nWe observe that **the HM approach leads to a performance improvement in 5 out of 7 settings.** \\n\\n**Other tasks:**\\n\\nMean and standard error of accuracy (in \\\\%) of reasoning for Date Understanding and Object Tracking tasks using GPT-4o-mini.
The **highest** accuracy among all competing algorithms is marked in **bold** and the _second-best_ accuracy in those cases is marked in _italic_.\\n\\n| **Algorithm** | **Date Understanding** | **Object Tracking** |\\n|-------------------|------------------------|---------------------|\\n| **CoT** | 91.9\\u00b11.4 | 96.4\\u00b10.7 |\\n| **PHP** | 93.5\\u00b11.3 | *97.7\\u00b10.5* |\\n| **CoT+SC** | *93.8\\u00b11.3* | 96.7\\u00b10.7 |\\n| **CoT+HM** | **94.6\\u00b11.2** | **98.0\\u00b10.5** |\\n\\nWe provide results for \\\"Date Understanding\\\" and \\\"Object Tracking\\\", which are problem sets involving quantitative (but not strictly mathematical or arithmetic) reasoning. We observe that PHP still outperforms CoT, demonstrating the utility of hinting beyond the arithmetic tasks. The proposed CoT+HM offers an improvement in accuracy for both of these datasets by reducing the average error rate by more than 10 percent compared to the next best baseline.\"}", "{\"title\": \"Response to Reviewer 5edW\", \"comment\": \"We thank the reviewer for acknowledging the merit of our work. Below, we address your concerns.\\n\\n> # W1. Justification of the methods stems only from intuition and a few empirical evidences.\\n\\nIn addition to presenting the main experimental results on standard benchmarks in Table 1, we present a simple intuition behind hint marginalization, other quantitative analyses, illustrations, and a case study to highlight the comparison between the proposed HM and relevant sampling-based baseline algorithms.\\n\\nThe assumptions in Section 3.1 are formed by analyzing the PHP results.
An example for GSM8K is provided in Appendix 8.3, which confirms that:\\n- (a) using the correct answer as the hint, the LLM generates the same answer with a very high probability; and\\n- (b) even with an incorrect hint in the prompt, the LLMs are at least somewhat likely to generate the correct answer in the next interaction.\\n\\nSimilar results are obtained for other datasets and LLMs examined in our experiments. \\nWe will provide all such results in a table in the revised version and add a reference to Appendix 8.3 in Section 3.1 for a clearer presentation.\\n\\nWe illustrate the steps of Algorithm 1 in Figure 1 for an easier exposition of HM via an example. In addition, Figure 2 shows a case study of how HM corrects an erroneous answer of CoT+SC by increasing the probability of the *\\\"correct\\\"* answer iteratively.\\n\\nFigures 3-5 show that CoT+HM has a higher probability of the correct answer compared to its competitors (including CoT+SC) for most of the *\\\"difficult\\\"* questions across all datasets and LLMs used in our experiments. Since CoT+HM is initialized with CoT+SC, these results provide direct empirical evidence that marginalizing over hints indeed increases the probability of the *\\\"correct\\\"* answer. This justifies our intuitions and demonstrates the efficacy of the proposed HM framework beyond improved accuracies.\\n\\n> # Q1. Is there any clustering of equivalent answers (e.g. 0.5, 1/2, $\\\\frac{1}{2}$, ...) during sampling?\\n\\nThe answer extraction and cleansing of answers from sampled CoTs for all algorithms is carried out by following the same steps laid out by Kojima et al. (2022). This involves careful regular expression based parsing of the CoTs and subsequent conversion of each sampled answer from string to float format with (possible) round-off. 
This allows us to sum the probabilities of the same answer expressed in different formats (as in the example provided by the reviewer), and reduces the number of LLM calls in subsequent iterations of hint marginalization.\\n\\n> # Q2. In the methodology section you define the output of the LLM to consist of the answer $y$ and an additional rationale $z$. Is $z$ used in any way in your procedure?\\n\\nWe do not use $z$ explicitly in our procedure and this is a very interesting suggestion and worth further study. Currently we group responses purely based on the final answer, and we form hints without using $z$. There is likely to be valuable information in the produced $z$ that can be exploited (this is supported by the recent reported success of **process-based** training as opposed to **outcome-based**).\\n\\nPrevious experimental work has suggested that it is important to encourage the LLM to generate $z$, because otherwise we often do not observe a diversity of reasoning with multiple different candidate answers.\\nWang et al. (2023) show that sampling multiple answers for a question and performing a majority vote improves performance only if the LLM is encouraged to generate diverse reasoning paths (e.g., by using few-shot CoT prompting (Wei et al., 2022)) .\\n\\nWe could explicitly use the rationales if we had access to a verifier that is capable of scoring the 'correctness' of the rationales.\\nFor example, the verification scores of the rationales corresponding to the mode of the distribution of the answers after each round could be utilized to design a stopping criterion for HM. This would be advantageous in allocating computational budget dynamically across tasks with varying difficulty levels.\"}", "{\"title\": \"Response to Reviewer v83X\", \"comment\": \"We thank the reviewer for acknowledging the principled nature and clear presentation of our work. Below, we address your concerns.\\n\\n> # W1. 
Most modern LLMs, especially those from the OpenAI GPT family do fairly well on arithmetic reasoning problems and therefore the improvements shown in Table 1 are marginal (typically less than 1\\\\%). Perhaps considering other reasoning tasks would better highlight the benefits of the proposed approach.\\n\\nWe thank the reviewer for this suggestion.\\nWe agree that some of the arithmetic datasets are relatively easy for GPT, but we still observe consistent improvement over a strong baseline (self-consistency) in 15 out of 18 cases in Table 1. \\nThe same benchmarks are considered in the papers that presented the relevant baselines.\\nOur proposed hint marginalization strategy solves problems where self-consistency fails to establish the correct mode (please refer to Figure 2 in our paper) and sampling more CoTs is not helpful. If we restrict ourselves to the 'difficult' questions during the performance assessment (eliminating the easy questions that are answered correctly by all LLMs and all methods), then the improvement is more substantial (please refer to Table 10). \\n\\nThe lack of particularly challenging datasets is a valid criticism of our work, also raised by other reviewers. \\nWe have now included results for the MATH dataset (please refer to Table 8), which is a much more challenging mathematical reasoning dataset. For several sub-disciplines (Geometry, Intermediate algebra, Pre-calculus), the state-of-the-art performance (without using extreme computation and a very long inference time) is in the range of 50-65 percent, suggesting that LLMs still find these problems very difficult to solve. The proposed HM approach leads to a performance improvement in 5 out of 7 settings. 
\\n\\nWe agree with the reviewer that applying the method more broadly to\\nother reasoning domains is a worthwhile and very interesting research direction.\\nWith a view to satisfying this request, we now provide results for \\\"Date Understanding\\\" and \\\"Object Tracking\\\", which are problems sets involving quantitative (but not strictly mathematical or arithmetic) reasoning (please refer to Table 9 for results). \\n\\nExtending beyond this (outside quantitative problems) would require careful prompt engineering to generalize hinting to other reasoning domains. This direction is very interesting but is not the main focus of our current work.\\n\\n> # W2. I was surprised to see that only the GPT models were considered in the experimental evaluation. They already demonstrated strong reasoning capabilities. Therefore, maybe the proposed approach would be more appropriate for weaker models.\\n\\nThis is a good suggestion. We agree that it is important to extend analysis beyond the GPT family. We now include results for two Llama models (please refer to Table 7 for results).\\n\\n> # Q1. Can you comment on applying the hint marginalisation scheme to open models like the Llama family? And if you already experimented with the open models, what kind of results did you obtain?\", \"we_have_now_conducted_experiments_with_two_llama_family_llms\": \"the weaker Llama-3-8b-instruct and the very capable Llama-3-70b-instruct.\\nThe results are presented in Table 7. 
\\nIn order to reduce the API cost of the experiments, we restrict running the more expensive 70B model to only the three most difficult benchmarks.\\n\\nFrom the results in Table 7 of our revised paper, we observe that using Llama-3-8b-instruct, the relative advantage of PHP over CoT is diminished in comparison to the GPT models.\\nThis suggests that weaker LLMs, such as Llama-3-8b-instruct, which often have relatively poor instruction following capability, cannot utilize the hint effectively for solving the reasoning task, highlighting the inadequacy of sophisticated prompting for weaker LLMs.\\n\\nIn this setting, the effect of the quality of approximation of the initial distribution of HM becomes important for obtaining a good reasoning accuracy and PHP+HM outperforms CoT+HM in most cases. \\nExcept for GSM-8K, PHP+HM either outperforms CoT+SC or obtains comparable performance on all other datasets.\\n\\nOn the contrary, for a strongly capable Llama-3-70b-instruct model, both CoT+HM and PHP+HM perform well.\"}", "{\"title\": \"Cont'd\", \"comment\": \"> # Q1. a) From the algorithm and the mathematical exposition it is unclear to me if HM can work for continuous distributions.\\n\\nOur approach is designed for reasoning tasks where there is one 'correct' answer and the metric is task accuracy (whether the algorithm's answer matches the `correct' answer). The most relevant baseline algorithms we consider, e.g. CoT+SC and PHP, are also suitable only in the same setting.\\n\\nChen et al., 2023 points out that self-consistency can only be applied to tasks where the final answer is a number, a True/False boolean variable, or an option (a)/(b)/(c) from a multiple-choice set. Without considerable modification, self-consistency cannot handle tasks that involve free-form generation, such as code generation, translation, summarization, and open-ended, descriptive question answering. 
**Thus, it is not a limitation of the proposed Hint Marginalization arising from our design, but is common to all relevant baselines.**\\n\\nHowever for those tasks with a single 'correct' answer, we (and self-consistency as well) do not make any explicit assumptions regarding the distribution of answers (i.e., discrete or continuous) and do not assume any a priori knowledge of the support.\\nOur iterative sampling and marginalization procedure provides a valid Monte Carlo approximation of the sequence of distributions of answers, defined in Eq. 1 of the paper irrespective of whether $p_r(y|x)$ is continuous or discrete. \\n\\nNote that, depending on the nature of the target answer and the evaluation protocol in a task, one can introduce further approximations for applying HM. For example, if for a question, we have the prior knowledge that the answer is an integer, and the LLM outputs a float, we could use a round-off after each round of HM. If the evaluation protocol only requires a match of the algorithm's answer to the 'correct' answer up to two decimal points, we should perform a two decimal points round off for all answers and hints for all LLM sampled answers. Alternatively, one could instruct the LLM explicitly to provide integer answers/ answers up to two decimal points. If the answer is an option between 'yes/no', then a careful answer extraction and parsing would allow us to group different versions of the same answer (e.g. 'yes', 'Yes', 'YEAH', 'Certainly' etc.) into the same category and sum their probabilities. We could also ask the LLM via another call to group all answers in the two distinct categories after each round. \\n\\nThis aspect is **not unique to our HM approach; for evaluation of relevant baselines such as self-consistency, the same consideration is required**. 
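As a concrete illustration of the grouping discussed above, the following minimal sketch maps sampled answers in different surface forms to a canonical key before their probabilities are summed. The parsing rules and helper names here are simplified placeholders for illustration only, not the exact procedure of Kojima et al. (2022) that we follow in our experiments.

```python
import re
from collections import defaultdict

def canonicalize(raw_answer: str, decimals: int = 2):
    """Extract the final numeric answer from a sampled chain-of-thought and
    round it, so that equivalent forms ('0.5', '1/2', '.50') share one key."""
    text = raw_answer.replace(",", "")
    # Handle simple fractions like '1/2' before falling back to plain numbers.
    frac = re.search(r"(-?\d+)\s*/\s*(\d+)", text)
    if frac:
        return round(int(frac.group(1)) / int(frac.group(2)), decimals)
    nums = re.findall(r"-?\d*\.?\d+", text)
    return round(float(nums[-1]), decimals) if nums else None

def answer_distribution(samples):
    """Empirical answer distribution from sampled CoTs, with equivalent
    answers grouped before probabilities are summed."""
    counts = defaultdict(int)
    for s in samples:
        key = canonicalize(s)
        if key is not None:
            counts[key] += 1
    total = sum(counts.values())
    return {k: c / total for k, c in counts.items()}

dist = answer_distribution(
    ["The answer is 0.5", "answer: 1/2", "So the result is .50", "I get 3"]
)
print(dist)  # → {0.5: 0.75, 3.0: 0.25}
```

Grouping the three equivalent forms of one half into a single key both makes the empirical distribution well defined and reduces the number of distinct hints (and hence LLM calls) needed in subsequent HM iterations.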
Often, the dataset is grouped together with the code for answer parsing and evaluation, provided by the dataset curators to ensure a fair evaluation.\\n\\nChen, Xinyun, et al., \\\"Universal Self-Consistency for Large Language Model Generation\\\" arxiv preprint arXiv:2311.17311 (2023).\\n\\n> # Q1. b) Could you please elaborate on whether there is any consideration for this setting?\\n\\nAs discussed above, it is not entirely clear whether the reviewer refers to free-form language generation tasks as 'continuous distribution'. If this is the case, our method will require adjustment, similar to the modifications proposed by Chen et al., 2023 in adapting self-consistency to such tasks. For example, one could use a similar prompt to their 'Universal Self Consistency prompt' to score different generations, and use those scores to form the conditional probabilities $p(\\\\tilde{y}|x, \\\\textit{Hint}(y'))$.\\n\\nIf the reviewer is instead referring to tasks where the answer is real-valued (and hence there is a continuous distribution over candidate answers), then our method does work as is in such settings (Please see the discussion above).\"}", "{\"title\": \"Response to Reviewer mTdx\", \"comment\": \"We thank the reviewer for reading our rebuttal.\\n\\n> # Experimental performance\\n\\nFor completeness, we copy Table 8 from the paper to make discussion easier.\\n\\nMean and standard error of accuracy (in \\\\%) of reasoning on the Math dataset using GPT-4o-mini. 
The **highest** accuracy among all competing algorithms is marked in **bold** and the _second-best_ accuracy in those cases is marked in _italic_.\\n\\n| **Algorithm** | **Algebra** | **Counting and Probability** | **Geometry** | **Intermediate Algebra** | **Number Theory** | **Prealgebra** | **Precalculus** |\\n|-------------------|--------------|------------------------------|---------------|--------------------------|-------------------|----------------|-----------------|\\n| **CoT** | 88.5\\u00b10.9 | 73.4\\u00b12.0 | 55.1\\u00b12.3 | 51.5\\u00b11.6 | 76.3\\u00b11.8 | 86.9\\u00b11.1 | 49.1\\u00b12.1 |\\n| **PHP** | 90.2\\u00b10.9 | 75.3\\u00b12.0 | 55.9\\u00b12.3 | 52.3\\u00b11.7 | 78.1\\u00b11.8 | 87.6\\u00b11.1 | 51.1\\u00b12.1 |\\n| **CoT+SC** | 93.9\\u00b10.7 | **82.9\\u00b11.7** | *64.7\\u00b12.2* | 58.1\\u00b11.7 | *83.5\\u00b11.6* | **91.2\\u00b11.0** | 51.3\\u00b12.1 |\\n| **CoT+HM** | *94.1\\u00b10.7* | *81.0\\u00b11.8* | 64.1\\u00b12.2 | *58.3\\u00b11.7* | 82.0\\u00b11.7 | **91.2\\u00b11.0** | *51.5\\u00b12.1* |\\n| **PHP+HM** | **94.8\\u00b10.6** | 80.6\\u00b11.8 | **65.3\\u00b12.2** | **58.9\\u00b11.6** | **85.4\\u00b11.5** | *90.7\\u00b11.0* | **52.0\\u00b12.1** |\\n\\nWe apologize that our phrasing in the response and the revised paper was unclear. We intended to refer to the performance of PHP+HM, not the grouped performance of the HM-based techniques. As can be seen in the table above, **the proposed PHP+HM does obtain the best accuracy in 5 out of 7 sub-categories.**\\n\\nRegarding the reported results for Llama-3-70b-instruct, over the three more challenging arithmetic reasoning datasets, **CoT+HM achieves a performance improvement** over the best baseline in 2 out of 3 cases (equal in the third case), with an **average performance improvement of 0.8\\\\%**. PHP+HM **outperforms in all three cases**, with an **average accuracy improvement of 0.4\\\\%**. 
\\n\\nThe **original primary criticism of the review** was that **\\\"the experimental evaluation is too narrow\\\"**. In response to this, **we included results for Math (identified by the reviewer as a more challenging dataset), for two open models (Llama variants), and two non-arithmetic tasks (Date Understanding and Object Tracking)**. \\n\\nNow **it appears that the main criticism has changed from the experimental evaluation being too narrow to the observed performance improvement not being large enough**.\\n\\nThe experiments now **encompass 5 LLMs and 9 datasets**. Taking into account the 7 different subcategories of questions in the Math dataset, **we investigate 36 experimental scenarios**. Of these, the **proposed PHP+HM method outperforms all baselines in 26 cases**. Compared to the **best baseline method**, there is **almost no additional computational overhead introduced by the proposed method**. Although the improvements are not dramatic, they are **observed consistently across multiple datasets and LLMs**. Using the **Math dataset** as a **challenging example recommended by the reviewer**, the proposed method either **(i) achieves a >0.5\\\\% improvement in 5 out of 7 subcategories for almost no additional computation; or (ii) achieves a 3-10\\\\% improvement compared to less computationally demanding baselines.**\\n\\nGiven that **the paper introduces a novel, principled method**, we consider that **this level of relatively consistent outperformance is more than satisfactory for a research paper**. 
While we respect the reviewer's opinion, **there seems to be too much focus on the sole criterion of \\\"does the proposed method improve by more than $x$ percent.\\\"** \\n\\n> # Concerns about hinting\\n\\nOur arguments in favor of using hinting (and in particular the utilization of the PHP (Zheng et al., 2023)-style prompt) in our proposed HM framework can be summarized as follows:\\n\\n- In Section 3.1, **we show mathematically** that in the proposed HM framework, if the **'in-flow'** of probability to the 'correct' answer **exceeds** the **'out-flow'** of probability from the 'correct' answer, then the **probability of the correct answer increases** with each HM iteration. Note that this implication goes both ways (**'if and only if'**).\\n\\n- Thus, if there is any refinement strategy, which **satisfies this 'in-flow' vs 'out-flow' criterion**, then it **becomes a suitable candidate** to be incorporated in the **proposed HM framework**.\\n\\n- We conduct **detailed analysis of the obtained results** (illustrations in **Figures 3-5**, empirical results in **Tables 11-12**) to demonstrate that there is **strong empirical evidence** that **hinting satisfies this criterion**.\\n\\n**Continued in the next Official Comment**\"}" ] }
DzGe40glxs
Interpreting Emergent Planning in Model-Free Reinforcement Learning
[ "Thomas Bush", "Stephen Chung", "Usman Anwar", "Adrià Garriga-Alonso", "David Krueger" ]
We present the first mechanistic evidence that model-free reinforcement learning agents can learn to plan. This is achieved by applying a methodology based on concept-based interpretability to a model-free agent in Sokoban -- a commonly used benchmark for studying planning. Specifically, we demonstrate that DRC, a generic model-free agent introduced by [Guez et al. (2019)](https://arxiv.org/abs/1901.03559), uses learned concept representations to internally formulate plans that both predict the long-term effects of actions on the environment and influence action selection. Our methodology involves: (1) probing for planning-relevant concepts, (2) investigating plan formation within the agent's representations, and (3) verifying that discovered plans (in the agent's representations) have a causal effect on the agent's behavior through interventions. We also show that the emergence of these plans coincides with the emergence of a planning-like property: the ability to benefit from additional test-time compute. Finally, we perform a qualitative analysis of the planning algorithm learned by the agent and discover a strong resemblance to parallelized bidirectional search. Our findings advance understanding of the internal mechanisms underlying planning behavior in agents, which is important given the recent trend of emergent planning and reasoning capabilities in LLMs through RL.
[ "reinforcement learning", "interpretability", "planning", "probes", "model-free", "mechanistic interpretability", "sokoban" ]
Accept (Oral)
https://openreview.net/pdf?id=DzGe40glxs
https://openreview.net/forum?id=DzGe40glxs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tGraE8wANO", "tFKhLcVnIK", "ksmrWajm3w", "ka0WJvyDYK", "jnC1KsZJUA", "iMHfi5Drw6", "gwZktYBI3z", "fY0UsCWNdi", "f0KWj8FOPt", "e0qwGG5bMD", "crfZaG7BWm", "bVpdSour5J", "TnLuBQmIPE", "QpM86Z70Wm", "QZskmtht5F", "QV6fzRgqlQ", "QBUbCZ4YJf", "NYaLy65rdI", "NHwuVXiWmZ", "Mq6UN8oomL", "LE0hgZMYOF", "L4tE4hD7LG", "KyHULil4sQ", "Jyaz1NBI2f", "GX0ULh4lrh", "EvxdgubwuR", "EfAo1kzRBe", "D7tVDix0B0", "2vXSjaf9C8", "2E6lhBLKqw", "1XB3NIjZWL" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732265770857, 1732267013436, 1732439093183, 1732267799766, 1732268221141, 1732265737730, 1732490111573, 1734742798415, 1732268075480, 1732616703565, 1732489809764, 1732267691425, 1732905902311, 1732906997337, 1730390279783, 1732266611570, 1732487579126, 1732267570686, 1737524114724, 1730629220320, 1732267146924, 1732822754250, 1733214495842, 1732266844059, 1732267392357, 1732490572308, 1733089254841, 1730666131924, 1730719267576, 1732489603111, 1732906908384 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_LpHt" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11267/Area_Chair_2uWC" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_SDx9" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_SDx9" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_LpHt" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_CqaJ" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_E9uN" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_E9uN" ], [ "ICLR.cc/2025/Conference/Submission11267/Reviewer_CqaJ" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ], [ "ICLR.cc/2025/Conference/Submission11267/Authors" ] ], "structured_content_str": [ "{\"title\": \"Global Comment 2\", \"comment\": [\"**Appendix**\", \"We have made several additions and organisational changes to the Appendix \\\\- the length of the appendix has grown from 18 pages to 51 pages. A summary of the changes to the Appendix is as follows.\", \"Appendix now contains a Table of Contents to help with browsing of the appendix.\", \"Revisions/additions to existing appendices\", \"\\\\[Appendix A\\\\] Rather than containing disconnected examples, Appendix A.1 now provides examples of the agent\\u2019s plan at all layers on the same levels. 
Appendices A.2.1-A.2.5 now contain additional examples of types of planning.\", \"\\\\[Appendix B\\\\] Appendix B.1 now contains additional examples of the agent\\u2019s plan after interventions. Appendix B.2 now contains results when intervening using an intervention strength parameter, and in the absence of a \\u201cshort-route\\u201d intervention.\", \"New Appendices\", \"\\\\[Appendix A\\\\] We have added Appendices A.2.6-A.2.9 (in which we provide examples of the agent forming plans in OOD scenarios) and Appendix A.2.10 (in which we discuss links with the relevant literature). We have added Appendices A.3.1 (which explores test-time plan improvement at all layers) and A.3.2 (which provides evidence of compute being used for search).\", \"\\\\[Appendix B\\\\] We have added Appendix B.3 in which we intervene to steer the agent to act optimally when it otherwise wouldn\\u2019t.\", \"\\\\[Appendix C\\\\] A new appendix in which we provide results regarding the emergence of concept representations (C.1) and plan refinement capabilities (C.2) during training, and investigate the correlation between planning-like behaviour and concept representations (C.3) and plan refinement capabilities (C.4).\", \"\\\\[Appendix D\\\\] We have added Appendices in which we provide additional class-specific metrics for the probes detailed in the main paper (D.2), consider 5x5 and 7x7 probes (D.3) and apply global probes to show that the agent does not linearly represent which action it will take in specific future time steps (D.5) .\", \"\\\\[Appendix F\\\\] We have added Appendices in which we link our characterisation of planning to definitions of planning (F.1), and in which we show that the agent we study exhibits behavioural evidence of planning (F.5).\", \"**Next Steps**\", \"We are currently working on the following experiments\", \"Interpreting a DRC agent trained to play Mini Pacman. 
We expect to have results regarding this agent ready before the end of the rebuttal period.\", \"Interpreting Sokoban-playing DRC agents of different sizes. We expect to have results regarding this agent ready before the end of the rebuttal period.\", \"Interpreting a Sokoban-playing ResNet agent. Training is taking a long time so we are uncertain if we will be able to provide results before the end of the rebuttal period.\", \"We commit to releasing the code to reproduce our results in the camera-ready version.\"]}", "{\"comment\": \"Thank you for your thoughtful review. We are glad that you find our paper interesting and have found your comments very helpful for improving the paper.\\n\\n**New Results and Revisions to the Submission**: Firstly, we would like to direct the reviewer\\u2019s attention towards the [global comment](https://openreview.net/forum?id=DzGe40glxs&noteId=iMHfi5Drw6) which summarises the major changes we have made to the submission, including the addition of several new results.\\n\\n**Generalizability to other algorithms and environments**: As the reviewer has noted, a single affirmative result is sufficient for answering whether model-free RL agents can plan or not. However, we agree that results in additional settings could help improve the robustness of our findings. We are currently investigating a DRC agent\\u2019s planning capabilities on Mini Pacman (a grid-based environment with non-local transition dynamics), and expect to be able to add results in this regard by the end of the rebuttal period. \\n\\nWe also believe that our interpretability approach can generalise to other convolutional architectures. As such, we are also looking to give some preliminary results regarding ResNet architecture on Sokoban. 
However, a relatively large ResNet is required to get good performance on Sokoban (as shown in the original DRC paper [1]) which we have found is very time-consuming to train (training is estimated to require over 10 days on an A100 GPU). Hence, it may be challenging to include the results in time for the rebuttal period, though we will try our best.\\n\\n**Details on how representation for intervention is computed**: In Section 2.4, we now explain that probes learn a vector for each concept class:\\n\\n\\u201cAs a linear classifier, a linear probe will compute a logit $l_k= w^T_kg$ for each class $k$ by projecting the associated activations $g \\\\in \\\\mathbb{R}^d$ along a class-specific vector $w_k \\\\in \\\\mathbb{R}^d$\\u201d\\n\\nWe have reworded the first paragraph of Section 6.1 to make our interventions clearer: \\n\\n\\u201cRecall that a 1x1 probe projects activations along a vector $w_k \\\\in \\\\mathbb{R}^{32}$ to compute a logit for class $k$ of some multi-class concept $C$. We thus encourage the agent to represent square $(x,y)$ as class $k$ for concept $C$ by adding $w_k$ to position $(x,y)$ of the agent's cell state $g_{x,y}$: $g_{x,y}$ \\u2190 $g_{x,y} + w_k$.\\u201d\\n\\n**Inclination Towards Positive Answer That DRC Agent Can Plan**: At a high level, we find the most plausible explanation for the phenomenon we study to be that the agent is engaging in planning. This is because alternative explanations seem less capable of simultaneously accounting for both (1) the behavioural evidence of planning presented in the original DRC paper [1], and subsequent work [2] and (2) the internal evidence of planning that we provide.\\n\\nTo show that, consistent with the original DRC paper, the agent we study exhibits behavioural evidence of planning, we have added Appendix E.5 in which we show the agent solves additional levels when given extra compute. 
Section 5 has been amended to clarify that the behavioural evidence of planning supports the conclusions we draw: \\u201cWhen considered alongside the agent's planning-like behaviour, the evidence in this section indicates the agent uses the concepts we study to perform search-based planning\\u201d\\n\\nTo provide additional support for the claim that the representations we uncover are linked to an internal planning mechanism, we have also added Appendices A.2.6-A.2.9 in which we investigate how these representations relate to capabilities commonly associated with planning: adapting and generalising to OOD scenarios. For instance, Appendix A.2.6 shows how the agent appears to be capable of generalising and forming plans in terms of these representations in levels with more boxes and targets than it saw during training. We believe alternative explanations are less able to explain the agent\\u2019s ability to form plans in OOD scenarios.\"}", "{\"title\": \"Reply to rebuttal\", \"comment\": \"I think that the authors for the extremely thoughtful and detailed replies. I have upgraded my score accordingly.\"}", "{\"comment\": \"**Stronger and Weaker Interventions** We have added results in Appendix B.2 and B.3. In Appendix B.2, we investigate alternate interventions in Agent-Shortcut and Box-Shortcut levels. We detail experiments in which we:\\n- Scale the intervention vector by an \\u201cintervention strength\\u201d. We find that too low or too high of an intervention strength reduces success rates.\\n- Intervene upon between 0 and 3 squares (as opposed to only 1 square) as part of the \\u201cdirectional\\u201d intervention. We find that intervening on additional squares is helpful for low intervention strengths but not for high strengths. \\n- Intervene on none of the squares on the short route. 
We find that we can still sometimes successfully steer the agent in Box-Shortcut levels but not Agent-Shortcut levels.\\nWe have also added Appendix B.3 in which we perform interventions in a new set of levels in which we intervene to make the agent act optimally when it otherwise wouldn\\u2019t. \\n\\n**Computational Cost of Methodology** Our probes have few parameters and so are inexpensive to train. It took ~30 minutes on an RTX3090 to train probes on the main training dataset from the paper (>100k transitions). However, we have trained 1x1 probes to a moderate degree of accuracy (i.e. macro F1 scores that are 0.02-0.1 lower than in the paper) on ~3000 transitions in less than a minute on a RTX3090. \\n\\n**Could We Apply Our Methodology During Training?** Yes. We could collect transitions and labelled Sokoban boards with a FIFO buffer during training. We could then continuously train probes on the FIFO buffer. \\n\\n**How Do Concepts Emerge?** We have added two new Appendices:\\n- Appendix C.1, in which we plot the macro F1 achieved when training 1x1 probes to predict the concepts over the first 50 million transitions of training. Appendix C.1 provides evidence that these concepts emerge early in training.\\n- Appendix C.2, in which we plot, for the checkpoints of the agent taken over the first 50 million transitions of training, the increase in macro F1 when probing the agent before and after the agent is given 15 extra internal ticks of computation prior to acting. We show that the agent\\u2019s ability to iteratively refine the plans it uses these concepts to form emerges early in training.\\n\\n**Reproducibility** We will release the code to reproduce our results in the camera-ready version. The code was not uploaded earlier because we are still conducting new experiments (e.g., those in the appendix), and the codebase is undergoing rapid changes.\\n\\nWe again thank you for your detailed review. 
We would be happy to receive additional comments you have that could aid in improving the paper even more.\\n\\n[1] [Guez et al. (2019) An Investigation of Model-Free Planning](https://arxiv.org/abs/1901.03559)\"}", "{\"comment\": \"[1] [Belinkov (2022) Probing Classifiers: Promises, Shortcomings, and Advances](https://direct.mit.edu/coli/article/48/1/207/107571/Probing-Classifiers-Promises-Shortcomings-and)\\n\\n[2] [Guez et al. (2019) An Investigation of Model-Free Planning](https://arxiv.org/abs/1901.03559)\\n\\n[3] [Shoham & Elidan (2022) Solving Sokoban with forward-backward reinforcement learning](https://arxiv.org/abs/2105.01904)\"}", "{\"title\": \"Global Comment 1\", \"comment\": \"We are grateful to all reviewers for the insightful reviews. We have attempted to address the specific issues each reviewer has raised in individual comments. In this global comment, we will summarise the main changes and additions made to the paper.\\n\\n**Important New Results** \\nWe have added several new results in the appendix that we give details of later in this comment. Some key results that we would like to highlight are:\\n\\n* In Appendices A.2.6 to A.2.9, we have added examples of agent planning in OOD levels. Specifically: \\n * Appendix A.2.6 (Figure 18\\\\) shows examples of the agent forming plans in OOD levels where the agent observes a Sokoban board in which it is not itself present. \\n * Appendix A.2.7 (Figure 19\\\\) shows examples of the agent planning in OOD levels with 5 boxes and targets, and with 6 boxes and targets. The agent was trained with 4 boxes and 4 targets. \\n * In Appendices A.2.8 and A.2.9 (Figures 20 and 21\\\\) we respectively add or remove walls in the environment *during* an episode, and show that the agent updates its (internal) plan in response to the changes in the environment. \\n* In Appendix B.2 (Figures 28-31), we provide ablations for intervention results: \\n (a) varying the number of squares intervened upon. 
\\n (b) varying the values of intervention strength parameter $\\\\\\\\alpha$. \\n (c) performing directional intervention without performing short-route intervention. \\n* In Appendix B.3 (Figures 32-34), we perform interventions on a new set of levels. These new levels are constructed to test whether we can intervene to steer the agent to act optimally when it otherwise would not. \\n* In Appendix D.3 (Figure 40), we give results for larger probes of sizes 5x5 and 7x7. The performance differential between these much larger probes and our 1x1 probes remains small, validating the hypothesis that agent\\u2019s representations are localised. \\n\\n**Main Text**\", \"we_have_made_the_following_major_changes_to_the_main_text\": [\"\\\\[**Updated Results/Figures**\\\\]\", \"Figure 6 now shows that the agent\\u2019s plans iteratively improve when the agent is forced to remain stationary for the first 5 steps of episodes. Previously, we showed this was the case when the agent performed actions over the first 5 steps of episodes. This removes the potential confounding effect that the improvement in F1 could be due to the concepts getting easier to predict across ticks. Figure 6 also now only shows results for the final layer for consistency with the rest of section 5\\\\. 
Results for other layers have been moved to Appendix C.3.\", \"Figure 7 has been split into two figures now: Figures 7 and 8 to help improve clarity and to have a style consistent with other figures.\", \"Figure 9 (previously Figure 8\\\\) has been amended to have a style consistent with other figures.\", \"\\\\[**Clarity**\\\\] We have improved our explanations regarding (1) what multi-class and square-level concepts are (Sections 2.4 and 3.2), (2) how linear probes predict classes (Section 2.4), and (3) how we perform interventions (Section 6.1).\", \"\\\\[**References To Appendices**\\\\] We have added detailed references to new and existing appendices.\", \"These changes have been highlighted in blue in the revised paper attached to this submission. To generate space for these changes, sentences have been reworded for greater brevity.\", \"(continued in \\\"Global Comment 2)\"]}", "{\"metareview\": \"The paper uses concept-based interpretability to investigate if a model-free RL algorithm plans over a set of concepts that are implicitly encoded in the learnt internal representation of the agent. The results suggest that the algorithm is performing planning in the Sokoban environment.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed responses to the concerns raised by the reviewers. 
There was a consensus among the reviewers that the paper should be accepted.\"}", "{\"comment\": \"Thank you for your appreciation of our work and thoughtful feedback which has been incredibly useful. We are pleased you find our application of mechanistic interpretability to RL planning to be convincing, and to yield informative insights.\\n\\n**New Results and Revisions to the Submission**: Firstly, we would like to direct the reviewer\\u2019s attention towards the [global comment](https://openreview.net/forum?id=DzGe40glxs&noteId=iMHfi5Drw6) which summarises the major changes we have made to the submission, including the addition of several new results.\\n\\n**Probing different layers**: We have amended Appendix A.1 to include illustrations of the agent\\u2019s plan at all layers in the same example levels, and included relevant discussion of these examples. \\n\\n**Results on Other Domains**: To further improve the robustness of our findings, we are currently investigating a DRC agent\\u2019s planning capabilities on Mini Pacman. We expect to be able to add some results in this regard by the end of the rebuttal period. Although Mini Pacman is still grid-based, it differs from Sokoban in its non-local transition dynamics and the fact that the environment contains other (hard-coded) agents that the RL agent has to account for in its planning.\\n\\n**Background on probing**: We have added a sentence to the bottom of the paragraph in which we introduce linear probes to clarify how linear probes operate:\\n\\n \\u201cAs a linear classifier, a linear probe will compute a logit $l_k= w^T_kg$ for each class $k$ by projecting the associated activations $g \\\\in \\\\mathbb{R}^d$ along a class-specific vector $w_k \\\\in \\\\mathbb{R}^d$.\\u201d \\n\\nWe have also added a reference to Belinkov (2022) [1] for interested readers. We hope that these changes are satisfactory to you. 
We are sorry that we cannot add more detailed background due to the space limitations.\\n\\n**Lack of Theoretical Explanation** The primary goal of this paper is to empirically ascertain whether a model-free agent could indeed learn to internally plan. Theoretical explanation of this phenomenon is orthogonal to our work and would be a very exciting avenue for future work. \\n\\n**Other Model-Free Agents Might Implement Other Planning Algorithms Than Bidirectional Search**: It is likely that different designs of model-free RL agents might plan differently. We are currently working on interpreting a ResNet agent similar to that studied by Guez et al. (2019) [2]. Given the training time of this agent, we believe it is unlikely we will be able to provide the results of applying our methodology to this agent during the rebuttal period. We apologise for this, but expect that we will have a more confident answer to your question once we have successfully interpreted it. \\n\\n**Why Parallelised Bi-Directional Search Emerges**: We expect that parallelised bidirectional planning emerges because it allows for especially rapid plan formation at the start of an episode. This is important as it reduces the likelihood of the agent making early mistakes that make levels unsolvable. We discuss this more in a new appendix, Appendix A.2.10. Further evidence of bidirectional search being especially useful in Sokoban can be seen in the fact that one of the most capable handcrafted (i.e. not relying on deep RL) Sokoban agents uses a method that is similar to bidirectional planning [3]. \\n\\n**Effect of model size on quality of (internal) world model** This is a very interesting question. 
We are currently investigating DRC agents of different sizes and hope to be able to provide results before the end of the rebuttal period.\\n\\n**Suitability of DRC for Sokoban** DRC agents are indeed highly suitable for Sokoban, and the prior positive evidence that DRC excels at Sokoban informed our choice of interpreting this agent. \\n\\n**Performing Many Ticks to Create a Plan and Then Acting Blindly** We do not have a definitive answer at hand to this question. We are fairly sure that for the DRC agents we are interpreting, this is not possible. There are at least two reasons for this: \\n- The agent\\u2019s plans frequently contain transient, minor errors at individual layers. We show examples of this in Appendix A.1. \\n- We believe an \\u2018empty\\u2019 observation would be too OOD for the agent to handle over a large number of timesteps. For example, in Appendix A.2.6 we show instances of the agent planning based on an observation in which the agent itself is not present. While the agent\\u2019s plan can be decoded and observed to be improving over initial timesteps, if run long enough, it can sometimes result in the breakdown of the agent\\u2019s representations (and plan). We think this suggests that any capacity to form plans \\u201cblind\\u201d is not robust enough to perfectly guide blind action across many episodes.\\nHowever, we believe it may be possible to train an agent to act in such a way, given the DRC agent does seem to be capable of creating a plan upfront.\\n\\nWe again thank you for your thoughtful comments. We welcome further discussion that would help us improve the paper.\"}", "{\"comment\": \"This reviewer appreciates the authors' very detailed response and is impressed by the new results provided.\", \"a_suggestion_regarding_the_visualization_of_probing_at_different_layers\": \"show the same transition for different layers side-by-side. 
the current, separated visuals make it hard to see the difference between the layers.\\n\\nWhile this work is highly detailed and impressive, I hesitate to update the paper rating to full marks until the authors ground this phenomenon in some theoretical framework and/or demonstrate it in a more complex, non-grid environment.\"}", "{\"title\": \"Request for Response on Authors' Response\", \"comment\": \"Respected reviewer, we have given a detailed response to your comments (and in the global response). In particular, as detailed in the new [top-level comment](https://openreview.net/forum?id=DzGe40glxs&noteId=QBUbCZ4YJf), we have added additional results analyzing different agents (including a generic ConvLSTM agent) and on an additional environment that could be of high interest to you.\\n\\nAs the discussion period will end in 2 days, we would greatly appreciate it if you could review our response and let us know if you have any further questions. If we have successfully addressed your concerns, we request that you please revise your score accordingly.\"}", "{\"comment\": \"**Statistical Significance of Probing Results** The table below shows, for each layer, the average macro F1 and the p-value for the difference in means (between 1x1 and 3x3 probes). All differences are statistically significant at the 1% significance level. 
This is consistent with the fact that these concepts are easier to predict for a 3x3 probe than a 1x1 probe (as evidenced by the large increase in baseline performance when moving from the 1x1 to 3x3 baseline).\\n\\n| **Probe** | **Layer** | **1x1 macro F1** | **3x3 macro F1** | **p-value (1x1 vs 3x3)** |\\n|-----------|-----------|------------------|------------------|--------------------------|\\n| AS | 1 | 0.8024 | 0.8847 | <0.0001 |\\n| AS | 2 | 0.8560 | 0.8889 | <0.0001 |\\n| AS | 3 | 0.8516 | 0.8957 | <0.0001 |\\n\\nAll probes trained on the agent\\u2019s cell state activations achieve macro F1 scores that are statistically significantly different from the macro F1 scores achieved by the respective baseline probe at the 1% significance level (all with p-values <0.0001). For both 1x1 and 3x3 probes, the macro F1 scores achieved at each layer are different by a statistically significant margin (at the 1% significance level) from the macro F1 scores achieved at other layers.\\n\\n**Small Error Bars In Figure 4** This is a consequence of 1x1 and 3x3 probes having a minimal number of parameters (160 and 1440 respectively) and being trained on a large dataset. Specifically, the datasets consist of >100k transitions, each with 64 labelled grid squares. As such, this dataset contains >6400k labelled examples.\\n\\n**What explains \\u201calmost correct plans\\u201d?** In cases of almost correct plans, the agent often (1) represents a plan without the relevant mistakes at an alternate layer (though sometimes with different mistakes) or (2) fixes the mistakes at a later time step. Appendix A.1 has been augmented to now include examples of the agent\\u2019s plan at each layer in the same level that demonstrate point (1). 
We hypothesise that \\u201calmost correct plans\\u201d are best viewed as intermediate steps of the agent\\u2019s internal plan formation process.\\n\\n**Additional Analysis of Correlation Between Concepts and Compute Benefit** We have added two new relevant sections to the Appendix. The new sections are:\\n- Appendix C.3, in which we show that the emergence during training of agent\\u2019s concept representations at all layers (i.e. not just at the final layer as previously shown) is correlated with additional compute benefit.\\n- Appendix C.4, in which we show that the emergence during training of the agent\\u2019s ability to iteratively refine its plan when given additional compute (i.e. the amount by which the agent\\u2019s plan becomes more correct when given 15 additional internal ticks of compute) is correlated with additional compute benefit.\\n\\n**Choice of Concepts** We chose these concepts as they seemed natural for planning in a grid-based environment with localised transition dynamics. We discuss alternate square-level concepts in Appendix D.4. We decided to study these concepts specifically since (1) subsequent Sokoban states differ only in agent and box locations, and (2) boxes move off of squares (captured by Box Push Direction) when the agent moves onto squares (captured by Agent Approach Direction).\\n\\n**Negative Results With Alternate Concepts** We have added Appendix D.5 which briefly investigates if the agent plans by directly representing the actions it plans to take in N time steps. Appendix D.5 shows that, even when using \\u201cglobal\\u201d linear probes that receive as input the entirety of the agent\\u2019s cell state, we cannot accurately predict the actions the agent will take in N time steps. This is despite \\u201cglobal\\u201d linear probes having many more parameters (i.e. 
64x more than 1x1 probes) than the probes we use to predict square-level concepts.\"}", "{\"title\": \"Request for Response on Authors' Response\", \"comment\": \"Dear Respected Reviewer,\\n\\nThank you again for your detailed and insightful review. As the extended discussion period ends in two days, we would greatly appreciate it if you could review our response and let us know if you have any further questions. If we have successfully addressed your concerns, we request that you please revise your score accordingly.\"}", "{\"title\": \"Conveying Thanks To The Reviewer\", \"comment\": \"We deeply appreciate your kind comments, and your engagement throughout the discussion period. We shall, as you suggest, add the discussion regarding \\\"Application to other model-free architectures\\\" as an Appendix.\"}", "{\"summary\": \"The authors conduct a series of experiments that mechanistically interpret learned neural network weights of reinforcement learning (RL) agents using deep repeated convLSTM (DRC) to determine whether they are internally planning on an implicitly learned model. The agent network is probed for specific, predetermined concepts in the Sokoban domain. The results evidence that these agents do learn spatially local concepts and reason about them in a manner resembling parallelized bi-directional search.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"This is a novel and interesting use of mechanistic interpretability.\", \"Provides solid empirical evidence that model-free RL agents can implicitly learn to plan under certain conditions.\", \"Performed experiments are highly relevant and results are well analyzed. 
All points made about the results are clear, and it does not seem like any outstanding phenomena were overlooked.
- The discovery that this is reminiscent of bi-directional search is powerful, and may have implications for the future of model-based RL.
- Visualizations of plans are easily interpretable and highly informative.

**Weaknesses:**
- Does not dive into the effect of probing different layers, even though such results are displayed.
- The only tested domain is Sokoban. Analyzing a second domain that is fundamentally different from grid domains is highly recommended to show that this is not a domain-specific phenomenon.
- Does not provide any background on probing, even though this is a central part of the experimentation.
- No theoretical explanation as to why planning behavior emerges.

**Questions:**
- Is it possible that algorithms other than DRC that also exhibit emergent planning will be more similar to other planning algorithms (as opposed to bi-directional search)?
- Can you explain or conjecture why parallelized bi-directional search is the algorithm that emerges in this case?
- What about the size of the model and the agent's capacity to learn the world model? Is there a tradeoff between the model size and the quality of the concepts that are learned? It seems like DRC is a perfect fit for problems like Sokoban with spatially local properties.
- Is it possible to perform many computational ticks to arrive at a final plan and then act blindly according to that plan? Would this yield high accuracy? What is the horizon to which such an agent can plan, and does this also have anything to do with the model size?

**Flag For Ethics Review:** No ethics review needed. **Rating:** 8 **Confidence:** 4 **Code Of Conduct:** Yes

---

**Comment:** Thank you for your detailed and encouraging review.
We are glad that you found our approach interesting and have found your comments very helpful.

**New Results and Revisions to the Submission**: Firstly, we would like to direct the reviewer's attention towards the [global comment](https://openreview.net/forum?id=DzGe40glxs&noteId=iMHfi5Drw6), which summarises the major changes we have made to the submission, including the addition of several new results.

**Repeating the analysis over other environments/architectures**: We are currently investigating a DRC agent's planning capabilities on Mini Pacman, and expect to add results in this regard by the end of the rebuttal period. We believe Mini Pacman to be an interesting environment to study as it lacks Sokoban's spatially-localised transition dynamics.

We also believe that our interpretability approach can generalise to other convolutional architectures. As such, we are also looking to give some preliminary results regarding a ResNet architecture on Sokoban. However, a relatively large ResNet is required to get good performance on Sokoban (as shown in the original DRC paper [1]), which we have found is very time-consuming to train (training is estimated to require over 10 days on an A100 GPU). Hence, it may be challenging to include the results in time for the rebuttal period, though we will try our best.

**Planning Helps Agent in Generalization and Adaptation**: We have added additional results in Appendices A.2.6-A.2.9 that investigate the link between the internal planning mechanism we uncover and the agent's capacity for generalisation and adaptation. Some highlights of these results are:
- Appendix A.2.6 shows examples of the agent forming plans in OOD levels where the agent observes a Sokoban board in which it is not itself present. These results also indicate that the learned planning algorithm within DRC is not egocentric.
- Appendix A.2.7 shows examples of the agent planning in OOD levels with 5 boxes and targets, and with 6 boxes and targets. Guez et al. (2019) [1] show that, despite being trained solely on levels with 4 boxes and targets, DRC agents can generalise to solve such levels. Our examples show that the agent's internal planning mechanism successfully produces plans in these levels, suggesting that the planning mechanism we uncovered helps the agent generalise.
- In Appendices A.2.8 and A.2.9, we show examples of the agent adapting its plan to changes in the environment unlike anything seen during training. Specifically, in these experiments we respectively add or remove walls in the environment during an episode, and show that the agent updates its (internal) plan in response to the changes in the environment.

These findings shed light on the usefulness of the representations we study by suggesting a potential link between the agent's apparent planning mechanism and the agent's capacity for generalisation and adaptability. We have added a detailed reference to these appendices in Section 5 so that readers will be made aware of the relationship between the internal planning mechanism and the agent's generalisation and adaptation capabilities.

**Additional Metrics**: We have added class-specific precision, recall and F1 tables in Appendix D.2.

**Details on how representation for intervention is computed**: When computing the logit for some class k of concept C, a 1x1 probe will project the 32-dimensional vector of cell state activations along a learned 32-dimensional weight vector $w_k$.
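For concreteness, this projection can be sketched in a few lines of Python. This is an illustrative sketch of ours, not code from the paper; the class count, weights, and activations below are hypothetical.

```python
D = 32  # channels in the agent's cell state, matching the 1x1 probes above

def probe_logits(W, g):
    """Per-class logits for one grid square: l_k = w_k . g for each class k.

    W is a list of learned class weight vectors w_k; g is the square's
    D-dimensional cell-state activation vector.
    """
    return [sum(wi * gi for wi, gi in zip(w_k, g)) for w_k in W]

def predicted_class(W, g):
    """The probe's predicted concept class for the square: argmax of logits."""
    logits = probe_logits(W, g)
    return max(range(len(logits)), key=logits.__getitem__)
```

For a multi-class square-level concept, `W` would hold one learned vector per class, and `predicted_class` gives the per-square label.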
We have added a sentence at the end of Section 2.4 to make this clear:

"As a linear classifier, a linear probe will compute a logit $l_k = w^T_k g$ for each class $k$ by projecting the associated activations $g \in \mathbb{R}^d$ along a class-specific vector $w_k \in \mathbb{R}^d$."

We have reworded the first paragraph of Section 6.1 to make our interventions clearer:

"Recall that a 1x1 probe projects activations along a vector $w_k \in \mathbb{R}^{32}$ to compute a logit for class $k$ of some multi-class concept $C$. We thus encourage the agent to represent square $(x,y)$ as class $k$ for concept $C$ by adding $w_k$ to position $(x,y)$ of the agent's cell state $g_{x,y}$: $g_{x,y} \leftarrow g_{x,y} + w_k$."

---

**Additional Results Regarding (1) Mini PacMan and (2) Alternate DRC Agents That Enhance the Generalisability of Our Paper**

Since posting our initial comments, we have added the following preliminary results (which we will continue to work on in preparation for the camera-ready version of the paper) that we believe enhance the generalisability of our findings:

- **Additional DRC Agents** We have now added Appendix G (Figures 46-54), in which we provide preliminary evidence suggesting that DRC agents of different sizes also engage in planning. Specifically, we show that these alternate agents represent planning-relevant concepts, that the plans these agents form iteratively improve when given extra compute, and [UPDATE] that these plans can be intervened upon to steer the agent. *Of particular interest, we provide evidence indicating that a DRC agent that performs only a single tick per step also engages in planning. As it only performs a single tick per step, this agent is a generic ConvLSTM agent.* This suggests that the internal planning we uncover is not merely a consequence of the special structure that the DRC agent possesses, but could be a broader phenomenon. At the same time, this also **shows that our proposed approach is general** and not specific to DRC.
- **Additional Environment** We have also now added Appendix H (Figures 55-57), in which we provide our preliminary results regarding interpreting a DRC agent trained in an alternate environment: Mini PacMan. We find evidence indicative of this agent representing planning-relevant concepts and using them for planning, albeit in a different way than in Sokoban. This indicates that internal planning is not limited to environments whose transition dynamics are fully spatially-localised.

---

**Comment:** Thank you for your insightful comments, which we have found to be of great help in refining and improving the paper. We are pleased that you appreciate our mechanistic approach to investigating the limits of model-free training.

**New Results and Revisions to the Submission**: Firstly, we would like to direct the reviewer's attention towards the [global comment](https://openreview.net/forum?id=DzGe40glxs&noteId=iMHfi5Drw6), which summarises the major changes we have made to the submission, including the addition of several new results.

**Improvements in Discussion of "Concepts"**: We have revised the paper to try and ensure that it is clear to a reader what a concept is without reading the Appendix.
- We have amended the first paragraph in Section 2.4 (in which the notion of a concept is introduced) to make clear that multi-class concepts are mappings from inputs/parts of inputs to classes.
- We have added a sentence to the end of the first paragraph in Section 3.1 (in which square-level concepts are introduced) to make it clear that square-level concepts assign classes to individual squares.

**Importance of The Appendix** We agree that many parts of the Appendix strongly augment the paper. However, given space constraints and the nature of the paper, we sadly are unable to include these sections of the Appendix in the main paper. We have amended the main text to make clearer references to relevant sections of the Appendix:
- Section 5 now contains detailed references to sections of the Appendix in which we provide further examples of each type of plan formation.
- Section 6.1 now provides detailed references to the intervention experiments described in the Appendix.

**Further Results To Broaden Scope** In our paper, we aimed to answer the question of whether model-free RL agents can plan or not. As has been noted by reviewer E9uN, an affirmative result in a single agent-environment pair is sufficient to achieve this. However, we agree that the applicability of our paper would be improved by applying our methodology to alternate agents and environments. To help further improve the robustness of our findings, we are currently investigating a DRC agent's planning capabilities on Mini Pacman. We expect to be able to add some results in this regard by the end of the rebuttal period. We believe Mini Pacman to be an interesting environment to study as, unlike Sokoban, its transition dynamics are not entirely spatially-localised.

We also believe that our interpretability approach can generalise to other architectures. As such, we are also looking to give some preliminary results regarding a ResNet agent trained on Sokoban. However, a relatively large ResNet is required to get good performance on Sokoban (as shown in the original DRC paper [1]), which we have found is very time-consuming to train (training is estimated to require over 10 days on an A100 GPU). Hence, it may be challenging to include the results in time for the rebuttal period, though we will try our best.

**Statistical Significance of Intervention Results** The table below shows the mean success rates and the p-values from a t-test for the difference in means (between success rates for trained and random probes) being statistically significant. All interventions using trained probes are statistically significantly more successful than the respective interventions with random probes (at the 1% significance level).

| Probe | Trained (%) | Random (%) |
|---|---|---|
| AS, Layer 1 | 94.6 (p = 0.0031) | 33.7 |
| AS, Layer 2 | 90.1 (p = 0.0064) | 29.8 |
| AS, Layer 3 | 98.8 (p = 0.0030) | 27.8 |
| BS, Layer 1 | 56.2 (p = 0.0042) | 31.5 |
| BS, Layer 2 | 72.7 (p = 0.0068) | 30.9 |
| BS, Layer 3 | 80.6 (p < 0.0001) | 4.1 |

---

**Paper Decision:** Accept (Oral)

---

**Review**

**Summary:** This paper provides the first mechanistic (non-behavioural) evidence that model-free reinforcement learning agents can learn to plan. The authors do this by studying a Deep Repeated ConvLSTM (DRC) agent playing Sokoban. While previous work showed that DRC agents exhibit planning-like behaviors, this paper demonstrates that they may actually perform internal planning.

There are three main steps in the methodology: firstly, they use linear probes to probe for planning-relevant concepts in the agent's representations.
They then look at how plans form within these representations, and finally examine the causal role of this planning by intervening on the agent's behaviour.

Using this methodology, they claim that the DRC agent has an internal representation of planning concepts and can form plans through an algorithm resembling a parallel bidirectional search, planning forwards and backwards. It then evaluates and adapts its plans. There is further evidence from the fact that the agent develops planning capabilities that correlate with improved performance when given extra "thinking time".

All of this is done within the Sokoban environment.

**Soundness:** 2, **Presentation:** 3, **Contribution:** 3

**Strengths:** The paper's main strength is its rigorous approach to demonstrating mechanistic evidence of planning in model-free agents. The probing experiments are well-motivated and well-designed. The authors then build on this foundation through interventional experiments that demonstrate these representations causally influence the agent's behavior.

The authors also show that the emergence of these planning capabilities correlates with improved performance when given extra computation time, connecting their mechanistic findings to previously observed behavioral results. The ablation studies also validate their methodological choices and demonstrate the robustness of the findings.

**Weaknesses:** The lack of detail in the main part of the paper on what a concept means (which is left to the appendix) makes this important point hard to follow. If reading the appendix is necessary to understand the paper, then that particular detail should not be in the appendix. The same goes for most sections of the appendix, which should not be seen as appendices but as parts of the paper necessary to understand it as a whole.

One significant methodological weakness is the lack of statistical rigor in the empirical evaluation. The authors run only 5 random seeds for their experiments and perform no statistical significance testing, making it difficult to assess the reliability of their results. For example, when comparing performance between different probe types or intervention strategies, it is unclear whether the observed differences are statistically meaningful. The paper would be significantly strengthened by proper statistical analysis, including hypothesis tests, confidence intervals, and effect size calculations. The error bars in Figure 4 are also surprisingly small and regular. However, given that no code is provided, it is impossible to know how reliable these results are.

Another major limitation is the narrow scope of the investigation. The paper focuses exclusively on a specific architecture (DRC) in a single environment (Sokoban), which means that it is impossible to know how well these methods or results generalise. The DRC architecture is somewhat atypical, with its multiple computational ticks per timestep, and it is unclear whether more conventional architectures could learn similar planning capabilities. It does not seem all that surprising that an architecture of this type might develop an intrinsic world-model.

Additionally, while Sokoban is a well-established planning benchmark, it has a very specific structure that may make it particularly amenable to the type of planning discovered. The authors do not discuss how their findings might extend to other environments or architectures. While the causal aspects of the paper strengthen their arguments, it would be particularly interesting to see negative results where planning is not found, using the same techniques.

**Questions:**
- Have you tested whether more conventional architectures (without multiple computational ticks) can learn similar planning capabilities?
- Can you provide statistical significance tests for your key comparisons, particularly where differences appear small relative to standard deviations?
- In cases where the agent forms "almost correct" plans, what prevents it from finding the optimal solution? Is this a systematic failure mode?
- Can you provide more analysis of the correlation between concept representation and additional compute benefit?
- What led you to choose the specific concepts (Agent Approach Direction and Box Push Direction) to probe for? Did you investigate other potential planning-relevant concepts?
- What happens with stronger or weaker interventions in the causal aspects of the experiments?
- How computationally expensive is your probing methodology? Would it be feasible to apply this analysis in real-time during training?
- Can you look at how the planning concepts emerge through the training process?
- Will code be made available to reproduce the results shown here?

**Flag For Ethics Review:** No ethics review needed. **Rating:** 8 **Confidence:** 4 **Code Of Conduct:** Yes

---

**Comment:** **Softening of language and Inclusion of Additional Examples**:
> In some parts, the paper seems to over-interpret the meaning of a small number of empirical observations. For example, the statements in lines 373-376 or 425-427 are not sharp implications as stated in the text.
Despite contending that the most likely hypothesis is that the agent is indeed planning, we also believe it is possible that alternative hypotheses we have not thought of could explain both the observed behaviour and the internal mechanisms we uncover. We have adjusted the language throughout the paper to better acknowledge this. A discussion of the alternative hypotheses put forward by the reviewer is provided at the end of our comment.

We have addressed the issue you raise regarding lines 373-376 by making our language more reflective of the level of evidence we provide. We now say "Figures 1 (A)-(B) show examples in which the agent appears to...". We have also added additional examples of the agent forming plans in a manner suggestive of search in Appendices A.2.1-A.2.5.

Regarding the issue you raise about lines 425-427, our intention was to say that a failure in our intervention experiments would falsify the hypothesis that the agent planned using these concepts. We have also, in Appendix B.1, added additional examples of the agent forming an alternate plan in response to interventions, as well as further intervention results to support the conclusion that the interventions influence the agent's behaviour in a manner consistent with the hypothesis that the agent uses the concepts for planning:
- In Appendix B.2, we show that our interventions are largely robust to (1) scaling the probe vectors before we add them to the agent's cell state and (2) increasing the number of squares intervened upon as part of the "Directional" intervention.
- In Appendix B.3, we perform intervention experiments on an alternate set of levels in which we intervene to steer the agent to act optimally when it otherwise wouldn't.
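One reason scaled additive interventions behave predictably: adding a scaled probe vector alpha * w_k shifts the class-k probe logit by exactly alpha * ||w_k||^2 while leaving the components of the activations orthogonal to w_k untouched. The sketch below illustrates this with hypothetical values; it is our own illustration, not the authors' code.

```python
D = 32  # cell-state channels at each grid square

def class_logit(w_k, g):
    """Class-k probe logit for activations g: the dot product w_k . g."""
    return sum(wi * gi for wi, gi in zip(w_k, g))

def intervene(cell_state, x, y, w_k, alpha=1.0):
    """Encourage class k at square (x, y): g_xy <- g_xy + alpha * w_k."""
    g = cell_state[(x, y)]
    cell_state[(x, y)] = [gi + alpha * wi for gi, wi in zip(g, w_k)]

# Toy check of the logit shift, starting from zero activations at square (2, 3)
w_k = [0.1] * D
state = {(2, 3): [0.0] * D}
before = class_logit(w_k, state[(2, 3)])
intervene(state, 2, 3, w_k, alpha=2.0)
after = class_logit(w_k, state[(2, 3)])
# after - before equals alpha * ||w_k||^2 = 2.0 * (32 * 0.1**2) = 0.64
# (up to floating-point error)
```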
**Alternative Hypothesis 1**:
> For example, an LSTM that is trained for N iterative steps is likely to provide more accurate answers after N iterative steps at test time. Moreover, advancing throughout the episodes makes the concepts gradually easier to predict. Therefore, Figure 6 apparently represents the expected behavior of the network and it is not necessarily an instance of "plan formation", as suggested.

We thank you for bringing this alternate explanation to our attention.

With respect to the first point you raise, we note that the recurrent network is not trained to predict the consequences of future actions. Instead, it is trained to output a return-maximising action and a value estimate. As such, we do not believe that there is an a priori reason that it should be able to better predict the consequences of future actions when given extra time.

We have revised the relevant section of our paper to address the second point you raise. Figure 6 now shows that the agent's internal plan becomes iteratively more accurate when it is forced to perform 15 additional internal ticks of computation prior to acting (e.g. when the agent is forced to remain stationary for 5 steps prior to acting). This revised experimental setting removes the confounder that the increase in F1 might be due to the concepts becoming easier to predict as the agent acts in the environment.

---

**Comment:** I thank the authors for their very comprehensive response; I believe all of my questions have been addressed.

As a final note, I deeply appreciated the authors' example above regarding "Application to other model-free architectures". I think the Breakout example, as well as the method's assumptions (e.g. spatially localized concepts), can be added to the appendix and referenced in the methods, e.g. Section 3.1.
They were helpful for me to gain an appreciation for how one could apply the method elsewhere if someone is interested in building on this work.

All in all, I think this is a very strong work with an excellent degree of scientific rigour. I have increased my score accordingly.

---

**Conveying Thanks To The Reviewer**

We greatly appreciate your thoughtful comments. The points you have raised have been of great help in improving the paper.

---

**Comment:** **Reason for adding (instead of replacing) intervention vector**: Adding vectors is standard practice when seeking to steer agent behaviour with learned vectors. An example of a paper that adds linear probe vectors to a model during a forward pass is Nanda et al. (2023) [2]. The rationale for this practice is to minimise the extent to which information is overwritten.

**Application to other model-free architectures**: We believe that, at a high level, our methodology is general. Applying our method to a generic model-free agent would involve three steps. We illustrate these using the example of a model-free agent trained on Breakout:

- In the first step, we hypothesise concepts the agent could plan with, and then probe for these concepts. For instance, the Breakout agent might plan using concepts corresponding to which bricks it plans to remove over the next 10 hits of the ball.
- In the second step, we would inspect the manner in which the agent's concept representations develop at test time. For instance, we might investigate whether the Breakout agent's representations of the above concepts developed in a way that corresponded to iteratively constructing a planned hole to drill through the wall from the bottom to the top.
- In the final step, we would investigate whether we could use the vectors from the linear probes to intervene to steer the agent in the expected way. For instance, we could intervene on the Breakout agent to force it to drill a hole at a specific location of the wall.

However, some implementation details of our methodology are specific to our experimental setting. For instance, the assumption of spatially-localised concept representations may hold in some cases (e.g. CNN-based Atari agents) but is unlikely to hold for all agents (e.g. MLP-based MuJoCo agents). In cases where it doesn't hold, we would have to probe all of the agent's activations at a specific layer rather than using spatially-localised probes as in the paper.

> Spelling: L202 behavior should be capitalized

This has been fixed.

We again thank you for your helpful comments and would welcome further comments that could help us improve our paper.

[1] [Guez et al. (2019) An Investigation of Model-Free Planning](https://arxiv.org/abs/1901.03559)

[2] [Nanda et al. (2023) Emergent Linear Representations in World Models of Self-Supervised Sequence Models](https://aclanthology.org/2023.blackboxnlp-1.2.pdf)

---

**Comment:** **Alternative Hypothesis 2**:
> The policy network contains some convolutional layers. Thus, in my understanding, the fact that the activations in the policy network linearly correlate with the future movements of the agent can also be explained by the fact that the CNN can learn to encode the spatial gradients of the value function with respect to actions and neighboring states. If true, this hypothesis can be an alternative explanation for the presence of the concepts in the policy network.

We interpret this alternative hypothesis as suggesting that the agent learns to estimate and store either (1) the entire value function v(agent location), where the box locations are fixed, or (2) a value function v(agent location, box locations) that accounts for box locations.
Under this hypothesis, the agent could estimate the current value by directly looking up its current location (and potentially the locations of boxes). The agent could then use the values of neighbouring locations to determine the optimal next action.

We think this is an interesting hypothesis. However, we are unconvinced by it for the following reasons.
- First, we do not think v(agent location) is sufficient to form plans of the type we show the agent does. This is because v(agent location) only enables planning up to the point of pushing the first box (as it assumes box locations are static). An agent that planned using such a value map would be unable, at the start of episodes, to form plans to push all boxes to targets, as our probes show the agent does.
- Similarly, we do not think the agent's planning mechanism can be explained by the agent having learned v(agent location, box locations). For one, the dimension of this value map is prohibitively large (with four boxes and 64 locations, the dimensionality of this value map would be of order 64^5). Additionally, the hypothesis that the agent's plans are a consequence of it representing v(agent location, box locations) is inconsistent with evidence found in the original DRC paper [1] that the agent can generalise to levels with a different number of boxes. It is also inconsistent with a new appendix, Appendix A.2.7, in which we show that the agent can generalise to form plans in levels with different numbers of boxes and targets than seen during training.

That said, we acknowledge the possibility that the agent partially learns these value maps and employs them as heuristics to guide the formation of plans. Indeed, we suspect that searching for evidence of partial value maps within the agent's cell state could be a good foundation for future work that aims to reverse-engineer the planning algorithm the agent uses. However, we believe these value maps alone are insufficient to account for the full scope of the agent's planning behaviour.

We would be excited to discuss this hypothesis or alternative hypotheses further.

We again thank you for your insightful feedback. We would welcome further comments you have that could aid in improving the work.

[1] [Guez et al. (2019) An Investigation of Model-Free Planning](https://arxiv.org/abs/1901.03559)

[2] [Garriga-Alonso et al. (2024) Planning behavior in a recurrent neural network that plays Sokoban](https://openreview.net/forum?id=T9sB3S2hok)

---

**Request for Response on Authors' Response**

Respected reviewer, thank you for your initial positive review; we highly appreciate it. As promised in our initial comments, we have now included preliminary results analyzing additional agents (including a standard ConvLSTM agent) and an additional environment (Mini Pacman). This is detailed in our new top-level comment.

As the discussion period will end in 2 days, we would greatly appreciate it if you could review our response and let us know if you have any further questions.

---

**Comment:** I thank the authors for their specific comment, focused on the points I raised. I also apologize for my late reply.

I now regard Weakness 1 as being fully addressed.

Regarding Weakness 2, the new Figure 6 gives further evidence to support the authors' claim. I do not believe that this excludes alternative explanations for the planning-like behaviours, but further analysis can be deferred to later studies.

Given the above, I have increased my score.

---

**Review**

**Summary:** This paper investigates the internal behavior of a deep RL algorithm, namely DRC by Guez et al. (2019).
The purpose of this work is to verify whether this model-free RL agent is capable of planning, even if it does not rely on an explicit model of the environment. This hypothesis is tested in a discrete environment called Sokoban. The paper contains three analyses: testing for important concepts in the neural network activations with linear probes, investigating how these concepts evolve during internal RNN iterations and during episode steps, and observing whether the policy can be influenced by providing a bias to these activations using the concepts above.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper addresses an interesting topic for the RL and planning communities. Understanding whether some form of planning-like behavior is present in model-free RL would greatly help in merging concepts that are now discussed separately and it can help in creating more capable hybrid agents.\\n\\nThe paper adopts an interesting technique that I did not previously encountered in the RL literature. They apply linear probes to the activations of the policy network with specific concepts that are symbolic representations of the future behavior of the policy. The definition of behavior-dependent concepts is an interesting concept that provides useful insights about the policy. The authors also confirm that forcing the internal representation related to the desired concepts can influence the policy in the expected way.\\n\\nThe paper is well written and it is easy to follow.\", \"weaknesses\": \"1. This paper studies one algorithm, DRC, in only one class of environments, Sokoban. This strongly limits the applicability of some of the insights, which cannot be immediately applied to different algorithms and environments. 
For example, although the idea of behavior-dependent concepts is interesting, the proposed concepts are strongly dependent on Sokoban (there is one feature for each cardinal direction and for each direction in which the box can be pushed by the agent). Nevertheless, considering that the intended purpose of the paper is to understand whether any model-free RL agent is capable of a planning-like behavior, a positive answer can be given even by looking at a single agent and environment class.\\n\\n2. The paper often tends to be inclined towards a positive answer that DRC agents can plan, and omits to discuss alternative explanations for the observed behaviors. For example, an LSTM that is trained for N iterative steps is likely to provide more accurate answers after N iterative steps at test time. Moreover, advancing throughout the episodes makes the concepts gradually easier to predict. Therefore, Figure 6 apparently represents the expected behavior of the network and it is not necessarily an instance of \\\"plan formation\\\", as suggested. Similarly, the fact that the arrows evolve as in Figure 1 does not necessarily imply that the agent is evaluating then refining a hypothetical policy. In some parts, the paper seems to over-interpret the meaning of a small number of empirical observations. For example, the statements in lines 373-376 or 425-427 are not sharp implications as stated in the text. Question (1) is also related to this point.\", \"questions\": \"(1) The policy network contains some convolutional layers. Thus, in my understanding, the fact that the activations in the policy network linearly correlate with the future movements of the agent can also be explained by the fact that the CNN can learn to encode the spatial gradients of the value function with respect to actions and neighboring states. 
If true, this hypothesis can be an alternative explanation for the presence of the concepts in the policy network.\\n\\n(2) How is an intervention performed in practice? A quick description is given in line 462, but I believe a more precise description is required.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates whether planning -- a typically \\u201cmodel based\\u201d ability -- can emerge within the internal representation of \\u201cmodel free\\u201d agents. Specifically, the authors apply concept-based interpretability methods to identify planning-relevant concepts, whether they emerge, if they support planning, and if they can be intervened on to change the behavior of the agent. This is done for a \\u201cdeep repeated ConvLSTM\\u201d (DRC) agent architecture trained on Sokoban tasks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"**[Originality]**\\nThe investigation of emergent planning using a concept-based interpretability approach is original and interesting.\\n\\n**[Quality]**\\nThe paper is of high scientific quality; the experiments are rigorous and hypothesis-driven, and the conclusion (in the affirmative) is well backed up.\\n\\n**[Clarity]**\\nThe paper is clearly written with relevant information adequately provided. \\n\\n**[Significance]**\\nProvides a greater understanding about what is possible with model-free training alone.\", \"weaknesses\": \"The setting the authors investigate is ultimately restricted to a single agent architecture (DRC) trained using a single type of model-free RL algorithm (IMPALA) in a single environment type (Sokoban). The authors are candid about this limitation. 
Nevertheless, repeating the analysis over more environments could make the general results more convincing, and investigating other architectures and/or learning rules would make the insights more generally applicable (e.g. could it be that some algorithms give rise to planning while others do not?).\\n\\nFurther, the paper does not address the *consequence* of planning, i.e. why is planning useful at all? For instance, [Guez 2019] investigates *data efficiency* and *generalization* as signs for planning. [Wan 2022] shows that even model-based approaches do not necessarily exhibit adaptability to local change. Thus, while it is nice to see a concept-based notion of planning, investigating whether this *leads* to things such as data efficiency, generalization and adaptability can shed greater light on the usefulness of having these representations at all. \\n\\n\\n[Guez 2019] Guez, Arthur, et al. \\\"An investigation of model-free planning.\\\" International Conference on Machine Learning. PMLR, 2019.\\n\\n[Wan 2022] Wan, Yi, et al. \\\"Towards evaluating adaptivity of model-based reinforcement learning methods.\\\" International Conference on Machine Learning. PMLR, 2022.\", \"questions\": \"1. Figure 4 demonstrates the quality of the probes. However, for completeness, would it be possible to provide additional metrics such as precision, recall, class confusion matrices, etc.? The reason is that the results in this work depend heavily on the probes, and therefore it would be good to be fully transparent about the behavior of the probes and its implications for the results.\\n\\n2. L447: \\u201cwe [intervene] by adding the representation of NEVER to cell state position on the short path\\u201d. Could the authors describe in more detail how the representation is computed for interventions (ideally as equations / pseudocode)? 
\\n - To my understanding, the authors train linear probes $f: \\\\mathbb{R}^d \\\\rightarrow \\\\mathbb{R}^{|C|}$, with $d$ being the internal DRC representation dimension, and outputting a $|C|$-dimensional logit over the number of concepts (left, right, etc.). The idea here is to add a $d$-dimensional vector back into the internal DRC representation to change the agent\\u2019s behaviour, but since we are mapping from $|C|$ dimensions to $d$ where $|C| << d$ (many $d$ can map onto the same prediction with high probability), how is this mapping done? Would one have to optimize for a $d$ that maximizes each class probability? \\n - Also, why add instead of replace? What would happen if you replace?\\n\\n3. Can the authors briefly discuss if / how the method can be applied to other model-free methods that do not have the same architecture as DRC? Is it applicable to all architectures trained with model-free objectives?\\n\\n4. Spelling: L202 behavior should be capitalized\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Response on Authors' Response\", \"comment\": \"Respected reviewer, we have given a detailed response to your comments below (and in the global response). In particular, as detailed in the new [top-level comment](https://openreview.net/forum?id=DzGe40glxs&noteId=QBUbCZ4YJf), we have added additional results analyzing different agents (including a generic ConvLSTM agent) and on an additional environment that could be of high interest to you.\\n\\nAs the discussion period will end in 2 days, we would greatly appreciate it if you could review our response and let us know if you have any further questions. If we have successfully addressed your concerns, we request that you please revise your score accordingly.\"}", "{\"title\": \"Conveying Thanks To The Reviewer\", \"comment\": \"Thank you for your kind comments. 
We deeply appreciate your engagement throughout the discussion period, and shall incorporate your suggestion regarding the visualisation of plans across layers.\"}" ] }
DyyLUUVXJ5
Adaptive Caching for Faster Video Generation with Diffusion Transformers
[ "Kumara Kahatapitiya", "Haozhe Liu", "Sen He", "Ding Liu", "Menglin Jia", "Chenyang Zhang", "Michael S Ryoo", "Tian Xie" ]
Generating temporally-consistent high-fidelity videos can be computationally expensive, especially over longer temporal spans. More-recent Diffusion Transformers (DiTs)--- despite making significant headway in this context--- have only heightened such challenges as they rely on larger models and heavier attention mechanisms, resulting in slower inference speeds. In this paper, we introduce a $\textit{training-free}$ method to accelerate video DiTs, termed Adaptive Caching ($\textit{AdaCache}$), which is motivated by the fact that $\textit{``not all videos are created equal''}$: meaning, some videos require fewer denoising steps to attain a reasonable quality than others. Building on this, we not only cache computations through the diffusion process, but also devise a caching schedule tailored to each video generation, maximizing the quality-latency trade-off. We further introduce a Motion Regularization ($\textit{MoReg}$) scheme to utilize video information within AdaCache, essentially controlling the compute allocation based on motion content. Altogether, our plug-and-play contributions grant significant inference speedups (e.g. up to 4.7x on Open-Sora 720p - 2s video generation) without sacrificing the generation quality, across multiple video DiT baselines. Our code will be made publicly-available.
[ "Diffusion Transformers", "Caching", "Content-adaptive Generation" ]
Reject
https://openreview.net/pdf?id=DyyLUUVXJ5
https://openreview.net/forum?id=DyyLUUVXJ5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zNxWboR0VX", "yUXUclppLK", "yCA46D8ktd", "y4F2rn2PfZ", "xbM0bQhxmP", "xD5btAXVZs", "vAIAsyfPh0", "v2PZCHWDhq", "uxEDSIR8iw", "sfs7pr7W9H", "sGYkTq4Hba", "jnhPmsV0Tm", "idAT5ahthJ", "elIapX3sPe", "dYnvaHa5hI", "d5Y2Srsju5", "cZjmsDYhXi", "aNWG3uM7L4", "aFrdQygiM3", "a2UPAKmZDN", "VPXItowCMX", "TdKM6j7Rrb", "TBy3Adfrwz", "RQ5jtTbfLV", "PWOUmhMl4w", "OSHf8Q2tYY", "N3tqDa9r9R", "K3aQqSOay3", "H2dh9awLDu", "FYO6KrCulY", "FXKfFBXJEJ", "EmOYRrcG0E", "EQyEvNuDU0", "DUGesYc8xL", "D4TMrzHSvc", "ClnVO88Kda", "CGpBL8xRpY", "BfuqExBoCT", "BORDuhRRU1", "BAAp8GbR23", "9tK753wlnH", "8FLu2OzZie", "7rLiFmOAX2", "7qH5oaMs8g", "5YNWVySWns", "4xyT9YfAzO", "4MZsYqwz0Q", "2jPclomylx", "2NiMnakzJK", "1NMnhpEX9U" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732854198869, 1732853721179, 1730048851577, 1729915267999, 1733162456566, 1733162515429, 1733172420176, 1733119188966, 1733229135812, 1732814159791, 1733108797295, 1732657701891, 1732634432692, 1733200511533, 1732658483280, 1733119150981, 1732686861454, 
1733207044116, 1734250188210, 1732291486452, 1732857713772, 1732847477689, 1732672641678, 1732289510219, 1733196278770, 1732291368745, 1732803251491, 1732925268536, 1733174472290, 1730645347009, 1732292644083, 1732673195722, 1733002418018, 1732292419426, 1732683571004, 1730477911705, 1733175131685, 1732551093444, 1733110343969, 1733289552982, 1737523685947, 1732289712934, 1732551128867, 1732290311613, 1732855877712, 1732918867588, 1732289623364, 1732289415104, 1732848027293, 1732814010372 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_5tUr" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_g4jp" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_5tUr" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_g4jp" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_5tUr" ], [ "ICLR.cc/2025/Conference/Submission5127/Area_Chair_E4UD" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_KT6w" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_vGd3" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_vGd3" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_vGd3" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_g4jp" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_KT6w" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_g4jp" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_vGd3" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ], [ "ICLR.cc/2025/Conference/Submission5127/Reviewer_g4jp" ], [ "ICLR.cc/2025/Conference/Submission5127/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks to reviewer KT6w\", \"comment\": \"We are happy that we were able to resolve all the concerns of reviewer KT6w. We thank the reviewer again for the time/effort spent reviewing our paper, and the positive rating.\"}", "{\"title\": \"Follow-up response to reviewer g4jp\", \"comment\": \"**F1-Q1: Disparity between the speedups in Open-Sora (Table 2b, Table 1) and CogVideoX experiments.**\\n\\nWe really appreciate the engaged discussions from the reviewer g4jp, and we are happy to provide further clarifications.\\n\\nWe sincerely apologize for the confusion; let us clarify here. 
The speedups that we observe depend on factors such as the size of generated videos (*e.g.* spatial resolution, number of frames), the denoising schedule, and the underlying DiT model architecture (+ scale).\\n\\nFor instance, in Table 2b, we see a 4.7x speedup for **720p - 2s (51-frame)** video generations with a baseline denoising schedule of **100** steps. This is the standard generation setup followed by Open-Sora baseline contributors. However, in Table 1, we follow the same experimental setup introduced in PAB [arXiv 2024] to ensure a fair comparison. In this setting, for the Open-Sora baseline, we generate **480p - 2s (51-frame)** videos with a baseline denoising schedule of **30** steps (showing 2.24x speedup). We already clarify the details of each setting in the corresponding table captions, and will better highlight how these will affect the speedups in the final version of the paper.\\n\\nIn the new experiments with CogVideoX-2B, we follow the corresponding original setup of generating **480p - 6s (49-frame)** videos with a baseline denoising schedule of **50** steps (showing 1.65x speedup). We will include all such details when we report these results in the paper.\\n\\nPlease let us know if further clarifications are required.\"}", "{\"summary\": \"The paper introduces Adaptive Caching (AdaCache), a plug-and-play, training-free method to accelerate video generation using diffusion transformers. The authors note that \\\"not all videos are created equal\\\", meaning some videos don\\u2019t need as many processing steps to reach high quality. Based on this, they propose an adaptive caching strategy that reduces the computation of denoising steps based on the rate-of-change. The authors also introduce a Motion Regularization (MoReg) scheme to adjust the caching schedule, which can allocate computation based on video motion content for improving the quality-latency trade-off. 
Experimental results demonstrate that AdaCache significantly speeds up existing video diffusion models.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"a. The approach presented is straightforward, and the method section is generally clear and easy to follow.\\nb. The motivation for the work is reasonable and interesting, i.e., \\\"not all videos are created equal\\\".\\nc. AdaCache provides a training-free acceleration method that can be applied to existing video diffusion models, achieving significant speedups without additional model training.\", \"weaknesses\": \"1. Lines 285-287 mention that using unique caching schedules for each layer makes the generations unstable, but it\\u2019s unclear why this is the case. It would help if the authors provided an explanation.\\n2. Equation 5 introduces a codebook for the caching rate, but it\\u2019s not clear what this codebook is or how it\\u2019s created. The authors should add more details to clarify this part of the method.\\n3. While Table 1 shows AdaCache outperforming PAB, the qualitative comparison in Fig. 7 shows a different result. AdaCache seems to lose more visual detail, especially in fine details. This raises concerns about its practical quality compared to PAB.\\n4. In Table 1, AdaCache achieves better VBench results than the baseline. The authors should explain why the accelerated videos have better results, especially since the visual quality in Fig. 7 is noticeably worse than the baseline. \\n5. In Table 1, the SSIM of AdaCache-slow on Line 346 appears unusually high.\\n\\nI\\u2019m concerned about the fairness of the experiments. On the OpenSora model, PAB results are based on the text-to-video task, while AdaCache is tested on the image-to-video task. I keep my original rating.\", \"questions\": \"1. The primary concern with this paper is the practical effectiveness of AdaCache. 
More qualitative comparisons are needed to robustly demonstrate AdaCache\\u2019s effectiveness in preserving visual quality, especially for detailed generations.\\n2. The authors need to provide more methodological details, such as the construction and role of the codebook in caching rates.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents AdaCache, which accelerates video generation by caching residual computations and devising an adaptive caching schedule without requiring re-training. It also introduces MoReg to optimize computation based on motion. This paper demonstrates the effectiveness of AdaCache across various open-source models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. A novel adaptive algorithm is designed to control the number of steps for reusing cached features.\", \"weaknesses\": \"1. A typo needs to be corrected:\\n 1. Line 346: SSIM of AdaCache-slow should be 0.7910?\\n2. The methodology section lacks clarity in certain areas. For example, in Line 276, a pre-defined codebook is mentioned, and the reviewer wants to know how this codebook was pre-defined. Was it manually set? If so, what criteria or method were used to set it? Is there any further detailed explanation regarding it?\\n3. The qualitative comparison provided by the authors is insufficient and does not adequately demonstrate the superiority of AdaCache. Although AdaCache achieves a higher acceleration speedup, as shown in Figure 7, its results on Open-Sora-Plan (e.g., Rows 1, 2, 6, 7) and Latte (e.g., Rows 2, 4, 6, 7) are significantly worse than those of PAB and the Baseline. 
Based on the visual quality presented in Figure 7, the reviewer expresses concerns about the stability of AdaCache in terms of visual quality.\\n\\n---\\n\\n\\nAfter reviewing the authors' latest response to Reviewer 5tUr's comments, the reviewer has decided to further lower the score. The primary concerns stem from:\\n\\nThe reliability of quantitative comparison data\\n\\n(i) In the latest responses (3) and (4), the authors stated: \\\"We follow the original inference settings suggested by the original contributors\\u2014in OpenSora, this is image- and text-conditioned generation.\\\" This implies that in Table 1, the experimental setup for OpenSora involves image- and text-conditioned generation. However, as far as the reviewer knows, PAB does not use reference images as image conditions by default in OpenSora. Yet, the performance metrics listed for PAB in Table 1\\u2014including fidelity metrics such as PSNR and SSIM, as well as reference-free metrics like VBench\\u2014are entirely identical to those reported in the PAB paper. The reviewer is uncertain whether these data were taken directly from the PAB paper. If so, the comparison would be unfair since PAB defaults to operating without image conditioning.\\n\\n(ii) The sources of the Delta-DiT and TGATE data in Table 1 are unclear. Additionally, the FLOPS and visual quality results are unusual, showing almost no speedup.\\n\\nInsufficient clarity in the experimental details presented in the paper, which may lead to misunderstandings\\n\\n(i) The authors conducted experiments on three models: Open-Sora, Open-Sora-Plan, and Latte. Open-Sora-Plan and Latte are text-conditioned, while Open-Sora uses a text- and image-conditioned setup. The reviewer believes that such a unique configuration should be explicitly clarified in the paper.\\n\\n(ii) In the rebuttal, the authors mentioned that for OpenSora, reducing the timesteps from 100 to 30 leads to a dramatic change in the speedup factor, from 4.5x to 2.24x. 
This significant variation should be emphasized and analyzed in the paper. However, the reviewer could not find any related discussion in the manuscript.\\n\\n---\\n**The final update**\\n\\nThe paper initially received a rating of 6 because the reviewer, while expressing concerns about the reliability and reproducibility of the methodology as well as the rigor of the experimental section, appreciated the motivation behind the work, encapsulated in the statement: \\\"not all videos are created equal.\\\" Based on this, the reviewer assigned a rating of 6.\\n\\nThe reasons for the two rating reductions:\\n\\n(1) During the rebuttal period, the authors mentioned that modifying the setup from 100 timesteps to 30 timesteps led to a significant change in speedup, decreasing from 4.5x to 2.24x. This detail, however, was neither thoroughly discussed nor emphasized in the paper. It is evident that **the influence of this variable is much greater than that of resolution and video size, which were included in the ablation study**.\\nThe absence of detailed discussion and emphasis on such experimental findings and conclusions not only risks confusing readers but also raises questions about the reliability and reproducibility of AdaCache. As a result, the reviewer lowered the rating for the first time.\\n\\n(2) The lack of rigor in experimental comparisons\\n\\n(i) The authors mentioned that the use of image conditions in OpenSora was motivated by an issue raised in the original repository, which stated that \\\"the lack of a reference image leads to inconsistencies in video quality,\\\" i.e., a decline in motion consistency and visual quality. This implies that **the introduction of image conditions improves visual quality**.\\n\\n(ii) The authors further claimed that AdaCache **directly utilized data from PAB because they reproduced PAB and obtained similar quantitative numbers** (with negligible changes). 
However, to the reviewer\\u2019s knowledge, PAB does not employ image conditions. **If the authors introduced image conditions to PAB for a fair comparison but still achieved similar VBench metrics, this appears highly counterintuitive.** If this is the case, what is the rationale behind AdaCache incorporating reference images? It should be noted that synthesizing reference images incurs significant computational overhead.\\n\\n(iii) Given that a significant portion of the paper, including the experiments in the ablation study, was conducted on OpenSora, the inclusion of image conditions cannot be overlooked. The introduction of image conditions not only affects visual quality but may also impact the L1 distance between features, thereby influencing the caching rate. **This aspect should have been emphasized in the experimental section, yet it is not mentioned even once throughout the paper.**\\n\\n(iv) To ensure fair quantitative comparisons, AdaCache should follow PAB's experimental setup, exclude reference images, and retest metrics such as VBench, SSIM, and PSNR. \\n\\nDue to concerns about the rigor of the experiments, the reviewer downgraded the rating once again.\", \"questions\": \"1. The caching rate for the steps following step t is determined based on the rate of feature change between steps t and t+k. The reviewer wonders whether this metric is reasonable. For instance, as shown in Fig. 2, during the early and late stages of sampling, the L1 curve exhibits rapid changes, characterized by a large derivative. Would relying on the differences at earlier time steps to determine the subsequent caching rate introduce errors?\\n2. Could the authors provide more visual quality comparison results? For example, visualizations of the sampling process under different configurations (fast, slow) and how different video content leads to varying caching schedules, to more intuitively demonstrate the mechanism and effectiveness of the designed caching schedule.\\n3. 
PAB demonstrates impressive results in multi-GPU parallel processing. Can the authors' method leverage similar techniques (e.g., DSP) to scale to multi-GPU parallel inference? What would the efficiency be like?\\n4. The reviewer wants to know the source of the Delta-DIT performance in Table 1 and why there is almost no acceleration.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up\", \"comment\": \"We thank the reviewer g4jp again for their continued enagement in the discussions. Please let us know if the concerns have been addressed. Since the rebuttal period is ending soon, we would really appreciate it if our additional experiments and clarifications can be considered in the final rating.\\n\\nThanks so much!\"}", "{\"title\": \"Follow-up\", \"comment\": \"We thank the reviewer 5tUr again for their continued enagement in the discussions. Please let us know if the concerns have been addressed. Since the rebuttal period is ending soon, we would really appreciate it if our additional experiments and clarifications can be considered in the final rating.\\n\\nThanks so much!\"}", "{\"title\": \"Follow-up\", \"comment\": \"We thank the reviewer vGd3 again for their continued enagement in the discussions. Please let us know if the concerns have been addressed. Since the rebuttal period is ending soon, we would really appreciate it if our additional experiments and clarifications can be considered in the final rating.\\n\\nThanks so much!\"}", "{\"title\": \"Follow-up response (2) to reviewer vGd3 [2/2]\", \"comment\": \"**F-Q2: Generalization of the Codebook Tuned with Fewer Video Prompts**\\n> How do you define \\\"a fair spread of calibration prompts\\\"? This is crucial for the method\\u2019s applicability.\\n\\nWe thank the reviewer for raising this question. 
As we discuss in our motivation (Section 3), we observe that the complexity of video generations can be characterized across spatial and temporal variations in generated content. In terms of the spatial axis, we select prompts that result in either homogeneous textures or high-frequency textures. In terms of the temporal axis, we select prompts that result in either small or large motion content. By including these 4 types of videos in our calibration set, we construct a fair spread in terms of video generation complexity (which, in turn, results in a range of optimal compute requirements). We will include a complete and separate discussion about the calibration set in the supplementary, and release the corresponding prompts with our codebase.\\n> In the provided table, how were the subsets of 32 and 100 videos selected?\\n\\nHere, the set of 32 videos comes from the standard Open-Sora gallery prompts (given [here](https://hpcaitech.github.io/Open-Sora/)) and the set of 100 videos comes from publicly-available Sora prompts (given [here](https://promptsora.com/)). In contrast, the set of 900+ videos corresponds to standard VBench prompts (given [here](https://github.com/Vchitect/VBench/blob/master/prompts/all_dimension.txt)). We will clarify these subsets when reporting these numbers in the final version of the paper.\\n\\n\\nWe thank the reviewer for the time and effort spent on these discussions. We hope that we were able to address the reviewer's concerns. We also hope that our additional experiments and clarifications will be kindly considered in the final rating.\"}", "{\"title\": \"Follow-up response to reviewer 5tUr\", \"comment\": \"We thank the reviewer for the continued engagement in discussions. 
Let us answer the reviewer's questions below.\\n\\n> In the publicly available code, during the 100-step inference of OpenSora in the OpenSora gallery, when image condition is not used, the inference efficiency (Speedup) and visual fidelity (SSIM, PSNR; not VBench) did not meet expectations. (1) Could the authors clarify whether this discrepancy is as expected? If so, what is the source of this difference (aside from the performance differences of the model itself in T2V and I2V tasks)?\\n\\nThis should not be the expected behavior. AdaCache preserves a better quality-latency trade-off compared to other inference acceleration pipelines, regardless of whether image-conditioned or not (as also validated by our experiments on different model variants in Table 1). \\n\\nWe are unsure which configuration the reviewer experimented with. However, we want to highlight our Fig. 5, where we show that in extreme cases (AdaCache-fast), reference-based metrics (*e.g.* SSIM, PSNR) are expected to drop. Still, we show much better trade-offs compared to PAB. Moreover, our qualitative results validate that the perceived visual quality is preserved even in such extreme cases, revealing the limitations of such reference-based metrics. \\n\\n> (2) PAB works well without the image condition (the default configuration of PAB does not include the image condition). Could AdaCache achieve stable and consistent performance without image condition? If so, why is an initial image needed before inference? \\n\\nIn all our experimental settings, we compare with PAB, and show that AdaCache consistently achieves much better quality-latency trade-offs (please also see our [anonymous-webpage](https://anonymous-adacache.github.io/) for video results). This is regardless of being image-conditioned or not. 
We also want to highlight that a direct comparison of visual quality should be made at similar speedups (as shown in the webpage above), where AdaCache achieves superior performance.\\n\\nAs mentioned previously, we stay faithful to the original inference setting of each video-DiT baseline. In Open-Sora, this corresponds to both image- and text-conditioned generation (as mentioned by its original contributors in their Gradio demo and GitHub issues [issue-1](https://github.com/hpcaitech/Open-Sora/issues/504) and [issue-2](https://github.com/hpcaitech/Open-Sora/issues/550)). In Open-Sora-Plan and Latte, this corresponds to just text-conditioned generation. By experimenting on both these settings, we validate that AdaCache generalizes and achieves a stable performance in both settings.\\n\\n> (3) In Table 1, are the OpenSora results based on both image and text conditions? Why is the SSIM of OpenSora significantly higher than that of Open-Sora-Plan and Latte? \\n\\nWe follow the original inference settings suggested by the original contributors\\u2014 in OpenSora, this is image- and text-conditioned generation.\\n\\nThe original quality metrics depend on how good each baseline is. In our experiments, we observe that Open-Sora gives much better generations than other baselines, which results in better quality metrics. This behavior is also observed in the results reported in the PAB paper (and other concurrent work such as FasterCache).\\n\\n> (4) In the ablation study, are the OpenSora results based on both image and text conditions?\\n\\nAs also mentioned above, we follow the original inference settings suggested by the original contributors\\u2014 in OpenSora, this is image- and text-conditioned generation.\\n\\n**We thank the reviewer for the time and effort spent on these discussions. 
We hope that we were able to address all the concerns, and the reviewer will kindly consider this fact in the final rating.**\"}", "{\"title\": \"Follow-up\", \"comment\": \"We believe we have now addressed all of the reviewer's concerns, but we would be very happy to engage in further discussion and provide more clarifications if needed.\"}", "{\"title\": \"Official Comment by Reviewer 5tUr\", \"comment\": \"Thanks for the response!\", \"i_have_some_concerns\": \"In the publicly available code, the authors utilized a pre-inferred initial image as the reference image input for OpenSora. However, under the experimental setup without using a reference image, the 100-step inference on OpenSora did not achieve the expected performance in terms of acceleration and visual quality. Could the authors clarify the cause of this discrepancy?\"}", "{\"title\": \"Follow-up: AdaCache on an image-DiT baseline\", \"comment\": \"**W2c, W2d: AdaCache performance with Image-DiTs (e.g. DiT-XL/2) and comparison with FORA [arXiv, July 2024].**\\n\\nFollowing the reviewer\\u2019s suggestion, in this rebuttal, we implement AdaCache (w/o Motion Regularization) on top of an image generation baseline: DiT-XL/2, and compare with the concurrent work FORA [arXiv, July 2024]. Conceptually, FORA is different from AdaCache, as it is a caching mechanism proposed purely for accelerating image-DiTs (not extended to video generation), and is not adaptive w.r.t. the input. In the table below, we observe that AdaCache shows a better/comparable performance with FORA on all quantitative metrics. Please see the trade-off curve in [anonymous-fig-6](https://drive.google.com/file/d/17SXQmYtw7ufPdrwJfIoI8vjA-MOjQKU-/view?usp=share_link), which shows how AdaCache outperforms FORA at the same latency. 
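To make this conceptual difference concrete, the two caching styles can be contrasted in a short sketch (hypothetical illustration only; this is neither FORA's released code nor our implementation, and `metric_fn` stands in for a content-dependent cache-metric):

```python
def fora_compute_steps(num_steps, threshold):
    # FORA-style: recompute features at a fixed interval, regardless of content
    return [s for s in range(1, num_steps + 1) if (s - 1) % threshold == 0]

def adaptive_compute_steps(num_steps, metric_fn, codebook):
    # AdaCache-style: the content-dependent metric at each compute-step
    # decides how many subsequent steps may reuse the cached features.
    # `codebook` maps metric thresholds to cache-rates; the 1.00 entry
    # acts as a catch-all for large metric values.
    steps, step = [], 1
    while step <= num_steps:
        steps.append(step)
        metric = metric_fn(step)  # stand-in for the normalized cache-metric
        rate = next(r for thr, r in sorted(codebook.items()) if metric < thr)
        step += rate
    return steps
```

For example, with a fixed threshold of 3, the FORA-style schedule recomputes at steps 1, 4, 7, ..., while the adaptive variant recomputes more often whenever the metric spikes.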
This shows that AdaCache (originally proposed for accelerating video generation) can also generalize to image generation pipelines.\\n\\n| Method \\t| FID $\\\\downarrow$ \\t| sFID $\\\\downarrow$ \\t| IS $\\\\uparrow$ \\t| Precision $\\\\uparrow$ \\t| Recall $\\\\uparrow$ | Latency (s) $\\\\downarrow$ \\t| Speedup $\\\\uparrow$ \\t|\\n|----------|----------|----------|----------|----------|----------|----------|----------|\\n| DIT-XL/2 \\t| 2.30\\t| 4.56\\t| 276.56\\t| 0.83\\t\\t| 0.58\\t| **16.15** \\t| **1.00x**\\t|\\t\\n| + FORA (Thres=3)\\t| 2.82\\t| 6.04\\t| 253.96\\t| 0.80\\t\\t| 0.58\\t| **6.68**\\t| **2.42x**\\t|\\n| + FORA (Thres=5)\\t| 4.97\\t| 9.15\\t| 222.97| 0.76\\t\\t| 0.59\\t| **4.80**\\t| **3.36x**\\t|\\n| + AdaCache\\t| **3.27**\\t| **7.19**\\t| **243.21**| **0.79**\\t\\t| 0.59\\t| **5.98**\\t| **2.70x**\\t|\\n\\n*all new numbers are in bold.*\"}", "{\"comment\": [\"The performance of AdaCache appears to be highly dependent on the codebook. It seems that the parameters of the codebook are manually configured. Is there any reliable and convincing guidance for determining an appropriate codebook? This might affect the usability and reproducibility of AdaCache.\", \"The authors repeatedly mention that the codebook, calibrated using only 16 data samples, achieves good generalization. This is somewhat counterintuitive, as the paper demonstrates that different videos exhibit varying feature evolution processes. Are 16 samples sufficient to capture and cover all possible variation processes? What criteria should be used to select the calibration data?\", \"The multi-GPU results appear somewhat unusual. According to the results reported by PAB, on OpenSora, four GPUs achieved over 6x speedup, and eight GPUs achieved over 10x speedup. However, the results you reported differ somewhat (e.g., only 3.96x speedup with four GPUs). Could you clarify this discrepancy?\", \"The results for Delta-DiT and T-GATE in Table 1 are not missing. 
The reviewer is inquiring about the source of the unusual performance data.\", \"Based on the scheduling process provided for the four samples, it appears that the caching frequency in the early stages of sampling is hardly restricted, and caching begins at a very early step. Is this correct? It is well-known that the early steps in the sampling process of diffusion models have a significant impact on the final results, where even minor perturbations can lead to substantial changes in the outcome. For instance, during PAB\\u2019s broadcasting process, the early-stage feature sharing is deliberately avoided. **In this context, how is the performance of fidelity metrics (e.g., SSIM, PSNR) ensured to remain at a high level (better than PAB)?**\"]}", "{\"title\": \"Follow-up (2)\", \"comment\": \"Dear Reviewer g4jp,\\n\\nWe thank you again for your initial feedback and continued engagement in discussions. Please let us know if your concerns have been addressed. Since the rebuttal period is ending soon, we would really appreciate it if our additional experiments and clarifications can be considered in the final rating.\\n\\nThanks so much!\"}", "{\"title\": \"Follow-up: AdaCache on an image-DiT baseline\", \"comment\": \"**W2: AdaCache performance with Image-DiTs (e.g. DiT-XL/2).**\\n\\nFollowing the reviewer\\u2019s suggestion, in this rebuttal, we implement AdaCache (w/o Motion Regularization) on top of an image generation baseline: DiT-XL/2. In the table below, we observe that AdaCache gives reasonable speedups. As expected, the acceleration is smaller than that with a video-DiT in similar settings, which rely on heavier operators (*e.g.* spatial-temporal attention) within the baseline. 
This shows that AdaCache (originally proposed for accelerating video generation) can also generalize to image generation pipelines.\\n\\n| Method \\t| FID $\\\\downarrow$ \\t| sFID $\\\\downarrow$ \\t| IS $\\\\uparrow$ \\t| Precision $\\\\uparrow$ \\t| Recall $\\\\uparrow$ | Latency (s) $\\\\downarrow$ \\t| Speedup $\\\\uparrow$ \\t|\\n|----------|----------|----------|----------|----------|----------|----------|----------|\\n| DIT-XL/2 \\t| 2.30\\t| 4.56\\t| 276.56\\t| 0.83\\t\\t| 0.58\\t| **16.15** \\t| **1.00x**\\t|\\t\\n| + AdaCache\\t| **3.27**\\t| **7.19**\\t| **243.21**| **0.79**\\t\\t| 0.59\\t| **5.98**\\t| **2.70x**\\t|\\n\\n*all new numbers are in bold.*\"}", "{\"title\": \"Follow-up response (2) to reviewer vGd3 [1/2]\", \"comment\": \"We really appreciate the engaged discussions from the reviewer vGd3, and we are happy to provide further clarifications.\\n\\n\\n**F-Q1: Selection of Codebook Hyperparameters in New Settings**\\n> The step \\\"Observe the distribution of cache-metric values across the denoising process\\\" is unclear. The L1 distance between subsequent representations can vary across architectures. Please provide a more generalized definition for selecting lower and upper bounds.\\n\\nWe sincerely apologize for the lack of clarity here. By \\u201cobserving the distribution\\u201d, we mean visualizing the histograms of cache-metric across the denoising process (as shown in Fig. 2-right). These histograms (when averaged across the calibration set) provide the information such as lower- and upper-bound of the cache-metric. When adapting to a new setting (*e.g.* DiT architecture, or denoising schedule), we simply rely on such histograms to set our range of thresholds in the codebook. We will include the visualization script (and a script that outputs the range) with the release of our codebase, outlining clear steps for adapting the range of cache-metrics to newer settings.\\n\\n\\n> How should users define the number of basis cache-rates? 
What is its impact, and can you recommend values for this parameter?\\n\\nIn our experiments, we decide the number of basis cache-rates heuristically: we find that having 2-6 basis cache-rates works well in practice, depending on the *granularity of caching* we need. \\n\\nFor instance, when caching to an extreme (*e.g.* AdaCache-fast, where we cache up to 12 steps at a time in a 100-step schedule), there is a higher chance of getting severe artifacts in the accelerated model. Hence, having more fine-grained thresholds (*i.e.,* a higher number of basis cache-rates) helps us control the number of cached steps and avoid such artifacts. In contrast, in settings with minimal acceleration (*e.g.* AdaCache-slow, where we cache only up to 2 steps), we can have a smaller number of basis cache-rates.\\n\\nOur recommendation is to decide the number of cache-rates based on the required acceleration. With a higher acceleration, having fine-grained basis cache-rates helps better preserve the quality. The required acceleration can be decided based on the quality-latency constraints of the user application, and AdaCache has the flexibility to support such varying configurations. We will discuss this guideline in the final version of the paper.\\n\\n\\n> Basis cache-rates involve numerous hyperparameters, making optimization complex. 
Could you provide detailed instructions to simplify this process?\\n\\nWe agree with the reviewer\\u2019s concern; let us provide our high-level approach for defining basis cache-rates.\\n\\n(1) Decide the required level of acceleration (*e.g.* fast, slow or mid) based on the quality-latency requirements of the user application.\\n\\n(2) Set the largest basis cache-rate based on the above, and select 2-6 rates to split the range of basis cache-rates depending on the required caching granularity (*e.g.* finer granularity is better with higher acceleration).\\n\\n(3) Run the accelerated model on the calibration set, and evaluate the quality-latency metrics (*e.g.* VBench quality, and wall-clock time).\\n\\n(4) Adjust the basis cache-rates heuristically, and iterate steps (2)-(3) above.\\n \\n*e.g.* (a) if the required acceleration is not met, increase the largest basis cache-rate,\\n\\n*e.g.* (b) if the required quality-level is not met, increase the caching granularity\\u2014*i.e.,* the number of basis cache-rates\\u2014 or decrease the largest basis cache-rate.\\n\\nThis iterative process of hyperparameter tuning is relatively fast, as we rely on a small calibration set and an accelerated inference pipeline. We can even parallelize this process as a common grid search. We will detail these steps in the final version of the paper.\"}", "{\"title\": \"Follow-up response (2) to reviewer g4jp\", \"comment\": \"We really appreciate the engaged discussions from the reviewer, and we are happy to provide further clarifications.\\n\\n**F2-Q1: Which features are shared among which-steps?**\\n\\nWe apologize for any confusion here. We believe the reviewer's understanding is correct. In the above example, we note the *'compute-steps'* to be `[1, 2, 5, 8, 11, ...]`. This means AdaCache reuses the *residual features* computed in step `2` through steps `3,4`, whereas new residual features are computed in step `5` to be reused through subsequent steps `6,7`. 
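For illustration, this compute-step schedule can be sketched as follows (hypothetical helper names, not our released implementation; the stand-in functions only mimic the data flow):

```python
def compute_residual(latent, step):
    # stand-in for the full transformer-block (attention/MLP) residual compute
    return 0.1 * latent + step

def update_latent(latent, residual, step):
    # stand-in for one denoising update; runs at EVERY step, using either a
    # freshly-computed or a cached residual
    return latent - 0.01 * residual

def run_denoising(latent, num_steps, compute_steps):
    # `compute_steps` is the adaptive schedule, e.g. {1, 2, 5, 8, 11, ...};
    # step 1 must be a compute-step so that a residual exists to cache
    cached_residual, num_recomputes = None, 0
    for step in range(1, num_steps + 1):
        if step in compute_steps:
            cached_residual = compute_residual(latent, step)
            num_recomputes += 1
        latent = update_latent(latent, cached_residual, step)
    return latent, num_recomputes
```

For example, `run_denoising(1.0, 10, {1, 2, 5, 8})` performs only 4 full residual computations instead of 10, while still updating the denoised representation 10 times.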
We wish to highlight that only the residual computations (as shown in Fig 4-right) are reused, whereas the iteratively-denoised representation gets updated in every step (either based on recomputed or cached+reused residual features).\\n\\nWe understand the reviewer's concern about diverging from the original baseline generation. This behavior is noticeable in some qualitative examples that we provide (*e.g.* in Fig 7 - bottom row, Fig 6 - middle row), yet it does not significantly affect the quantitative numbers (in Table 1). We further encourage the reviewer to visit our [anonymous-webpage](https://anonymous-adacache.github.io/), which includes many video results--- including diverging cases (*e.g.* bottom of the webpage, middle column). We hope we provide sufficient evidence to address this concern.\\n\\n**F2-Q2: Clarification on reference-based metrics.**\\n\\nWe sincerely apologize for the confusion here. We introduce some metrics (*e.g.* PSNR, SSIM, LPIPS) as *'reference-based'*, as they are computed relative to a baseline--- in this case, relative to the corresponding DiT baseline w/o any acceleration. These metrics are computed the same way for all acceleration methods that we report (*e.g.* Delta-DiT, T-GATE, PAB), where we do nothing different for AdaCache. \\n\\nIn terms of our experimental setup, we follow the exact same settings as the original baselines (w/ the only exception being the introduction of the proposed caching schedule), in all our video generation pipelines--- Open-Sora, Open-Sora-Plan and Latte. In other words, we do not introduce any unfair advantage in AdaCache to preserve the generation quality when optimizing the latency. 
We hope this clarifies the reviewer's concern.\\n\\nPlease let us know if further information is needed.\"}", "{\"title\": \"Official Comment by Reviewer 5tUr\", \"comment\": \"Thank you for the response.\\n\\nIn the publicly available code, during the 100-step inference of OpenSora in the OpenSora gallery, when image condition is not used, the inference efficiency (Speedup) and visual fidelity (SSIM, PSNR; not VBench) did not meet expectations. (1) Could the authors clarify whether this discrepancy is as expected? If so, what is the source of this difference (aside from the performance differences of the model itself in T2V and I2V tasks)? (2) PAB works well without the image condition (the default configuration of PAB does not include the image condition). Could AdaCache achieve stable and consistent performance without image condition? If so, why is an initial image needed before inference? (3) In Table 1, are the OpenSora results based on both image and text conditions? Why is the SSIM of OpenSora significantly higher than that of Open-Sora-Plan and Latte? (4) In the ablation study, are the OpenSora results based on both image and text conditions?\"}", "{\"metareview\": \"The paper introduces a novel adaptive algorithm but has several weaknesses. The choice of MSE as a metric is questioned, and the reviewer asks whether alternative metrics, like cosine similarity, would be more suitable. The compatibility with large T2I models like FLUX is unclear. The methodology section lacks clarity, especially regarding the pre-defined codebook. The qualitative comparison is insufficient, with AdaCache showing worse performance on certain datasets, raising concerns about stability. The reliability of the quantitative comparison is questionable, particularly regarding the experimental setup and data sources in Table 1. 
The experimental details, such as conditions for Open-Sora-Plan and Latte, should be clarified, and the speedup factor variation needs further discussion.\\n\\nThe review scores are mixed, but the detailed negative feedback from the reviewers highlights significant issues with the paper. After reviewing the paper and rebuttal, the Area Chair also concluded that the paper should be rejected.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers appreciate the motivation behind the work but expressing concerns about the reliability and rigor of the methodology and experiments. The remained concerns are significant: 1) The significant impact of changing the timesteps from 100 to 30 (speedup dropping from 4.5x to 2.24x) was not adequately discussed or emphasized in the paper, raising concerns about the reliability and reproducibility of AdaCache. 2) The experimental comparisons lacked rigor.\"}", "{\"title\": \"Response to reviewer 5tUr [2/2]\", \"comment\": \"**W3: Disparity between AdaCache vs. PAB comparisons in Table 1 and Fig 7.**\\n\\nWe understand this perfectly-valid concern from the reviewer, let us clarify this confusion below.\\n\\nFirst, we want to point out that in Fig. 7, we compare AdaCache-fast (w/ MoReg) and PAB-fast configurations. In Table 1, if we consider these two configurations, we see that the quality metrics are not that different (*i.e.,* comparable), whereas AdaCache has much better speedups. AdaCache-slow is the variant that gives much better quality metrics, while still being faster than PAB-fast. Therefore, the quantitative numbers are consistent with the observations in Fig 7.\\n\\nHowever, we wish to highlight that a direct quality comparison based on Fig 7 is unfair, as AdaCache optimizes its latency to an extreme where the quality is expected to have a small drop. Yet, looking at Fig 5 we see that AdaCache performance is more-stable across a range of latencies, compared to PAB. 
A more reasonable setting would be to compare the quality at a similar latency, which we show in this [anonymous-fig-2](https://drive.google.com/file/d/1e30h_6N7K_QDcOHLRV0zCtNYqfnhlzuA/view?usp=share_link). Here, we include the variants AdaCache (2.61x) vs. PAB (1.66x) for 720p - 2s generations, instead of the more-extreme variant AdaCache (4.49x) vs. PAB (1.26x) that we previously presented in Fig 7, making a fairer comparison. We see that AdaCache shows a much better performance, while still being faster.\\n\\nWe will include this discussion and the figure for direct comparison in the final version of the paper. Also, with this rebuttal, we include an [anonymous-webpage](https://anonymous-adacache.github.io/), which we encourage reviewers to view. It includes many video comparisons, and provides a better view on baseline comparisons and ablations (*e.g.* how temporal consistency varies).\\n\\n\\n**W4: Why AdaCache achieves better quantitative numbers than the baseline in Table 1, while there are noticeable artifacts in Fig 7?**\\n\\nWe understand this valid concern. First, we would like to highlight that the results better than the baseline in Table 1 are achieved by AdaCache-slow, whereas the visualizations that we present in Fig 7 are with AdaCache-fast (w/ MoReg)\\u2014 that optimizes latency to an extreme, where the quality is expected to show a small drop. Please see this [anonymous-fig-2](https://drive.google.com/file/d/1e30h_6N7K_QDcOHLRV0zCtNYqfnhlzuA/view?usp=share_link) for a fair comparison between AdaCache-slow and the baseline for 720p - 2s generations. We will better clarify this in the paper.\\n\\nIn the case of AdaCache-slow, we hypothesize such better quantitative numbers are due to two potential reasons: (1) by reusing representations across multiple steps, the denoising process gets smoothed-out, resulting in fewer sharp changes in noise predictions\\u2014 which we have observed to be an issue with the Open-Sora baseline. 
(2) the quantitative metrics do not perfectly align with the perceived visual quality\\u2014 which has been observed in many prior works (and is the reason why such works also evaluate models based on human preferences).\\n\\nTo alleviate the above issue (2), in this rebuttal, we also conduct a user preference study to measure the quality of AdaCache generations, comparing AdaCache against prior work and the baseline. Here, we collect a total of 1800 responses from 36 different users in the form of A/B preference tests. The results of this study are given in this [anonymous-fig-3](https://drive.google.com/file/d/1G7L5-KHTk3Yf76cGYldHERawmgN5L49l/view?usp=share_link). Between AdaCache and PAB, we see a clear win for our method (70%), while being extremely similar to the baseline a large fraction of the time (41%). Among AdaCache variants, users find these to often be tied (60%) in terms of perceived quality, yet still show a preference for the motion-regularized variant (25% vs. 14%). This study validates the effectiveness of Adaptive Caching, and shows that it is indistinguishable from the baseline in many examples. We will include this study and discussion in the final version of the paper.\\n\\n\\n**W5: Typo in Table 1**\\n\\nWe sincerely apologize for this typo, and thank the reviewer for pointing it out. The SSIM value of AdaCache-slow in Open-Sora-Plan should be 0.7910 (instead of 79.10), consistent with the other SSIM values. We will correct this in the final version of the paper.\"}", "{\"title\": \"Follow-up response (2) to reviewer g4jp\", \"comment\": \"We thank the reviewer for the engaged discussion. Let us clarify further.\\n\\nIn the comment above, we were discussing all the different factors affecting the speedup (even as relatively-small changes). As shown in Table 2b, the spatial resolution and number of frames do incur a relatively small variation, giving a stable performance (4.5x in the 480p - 2s setting, 4.4x in the 480p - 4s setting and 4.7x in the 720p - 2s setting). 
\\n\\nYet, the original denoising schedule have a more impact on the speedup. For instance, in a 100-step schedule, the rate-of-change between subsequent features is smooth/small. Hence, AdaCache can afford to reuse representations for a longer period (giving higher speedups). In contrast, in a 30-step schedule, the rate-of-change is relatively-larger, and if the same representations are re-used for a longer period, it incurs considerable quality degradations. Therefore, we decide our AdaCache setting such that we get the best quality-latency trade-off in each setting.\\n\\nWe also highlight that in all settings, we outperform prior similar acceleration methods--- both in-terms of quality and latency, across multiple benchmarks and baseline DiTs--- validating the generalizability of AdaCache.\\n\\nWe kindly ask the reviewer to reconsider the decision to lower their rating in light of this discussion. We have tried our best to answer all reviewer concerns during the rebuttal period with significant efforts (including additional experimental settings and evaluations). Please give us an opportunity to to provide further clarifications as needed.\"}", "{\"title\": \"To authors\", \"comment\": \"Your responses have solved my concern. I keep my original score.\"}", "{\"title\": \"Follow-up response to reviewer g4jp [1/2]\", \"comment\": \"We thank the reviewer for the engaged discussion, and allowing us resolve any further confusions. Please see our responses below.\\n\\n**F-Q2: Does the codebook calibrated on fewer video prompts generalize?**\\n\\nWe observe this to be true in our experiments across multiple video-DiT baselines. In fact, in the table below, we show that the behavior of both quality metrics and speedups across different AdaCache variants is consistent in 32-video, 100-video and 900-video (standard VBench) benchmarks, validating that our hyperparameters within the codebook generalize well. 
Let us further clarify the reasoning for this.\\n\\nAs mentioned before in our responses, the codebook consists of two sets of hyperparameters: (1) **basis cache-rates**, and (2) **cache-metric thresholds**.\\n\\nAmong these, basis cache-rates can be set easily, depending on the speedup required by the user. We show that by simply changing the basis cache-rates (*w/o needing to tune the cache-metric thresholds*), AdaCache can achieve different speedups: fast, mid and slow (as given in Table 2e)\\u2014 Here, the cache-rates of AdaCache-mid (`8-6-4-2-1`) in row-2 of Table 2e correspond to the codebook `{0.03: 8, 0.05: 6, 0.07: 4, 0.09: 2, 0.11: 1, 1.00: 1}`, with *the same threshold values* as AdaCache-fast and -slow. This shows that the proposed method supports a range of speedups w/o any further hyperparameter tuning.\\n\\nThe cache-metric thresholds are the values we calibrate based on a small set of video prompts. We select these prompts randomly, and based on the distribution of metric values (*i.e.,* L1 distance between subsequent representations) across the denoising process, we select a reasonable range (*e.g.* 0.03 - 0.11 for 100-step Open-Sora) and split it uniformly into the number of basis cache-rates we want to have. Since the cache-metric is **normalized**, the thresholds generalize well to unseen prompts. However, we agree that outliers could exist, yet on average, we achieve a reasonable quality-latency trade-off as validated by our experiments.\\n\\nSuch a generalization based on a small calibration set does not contradict our motivation that each video generation is unique (or, each video shows a unique variation in feature similarity\\u2014 which also corresponds to the cache-metric). However, as validated in Fig 2-right, even though the distribution changes, the range of values (in the y-axis) stays more-or-less the same. 
Meaning, the set of thresholds we calibrated can stay the same across different video generations, yet which threshold gets activated will vary depending on each video (based on the cache-metric). We will better clarify this in the supplementary.\\n\\n| Method | 32 videos || 100 videos || 900+ videos ||\\n|----------|----------|----------|----------|----------|----------|----------|\\n| \\t\\t\\t\\t| VBench| Latency (on A6000) | VBench | Latency (on A6000) | VBench | Latency (on A100) |\\n| Open-Sora\\t\\t\\t| **84.09** | **86.57** | **82.97** | **86.35** | 79.22 | 54.02 |\\n| + AdaCache-fast\\t\\t| **83.42** | **37.06 (2.34x)** | **82.21** | **37.22 (2.32x)** | 79.39 | 24.16 (2.24x) |\\n| + AdaCache-fast (w/ MoReg)\\t| **83.42** | **39.56 (2.19x)** | **82.32** | **39.65 (2.18x)** | 79.48 | 25.71 (2.10x)|\\n| + AdaCache-slow\\t\\t| **83.93** | **57.33 (1.51x)** | **82.89** | **58.51 (1.48x)** | 79.66 | 37.01 (1.46x)|\\n\\n*all new numbers are in bold.*\\n\\n\\n**F-Q1: Guidance on deciding a codebook when adapting to a new setting (for usability and reproducibility).**\\n\\nAs we discussed above, the basis cache-rates within our codebook are user-defined, and can be changed easily without any re-calibration depending on the required quality-latency trade-off. When setting the metric thresholds, we follow the simple strategy of (1) selecting a small calibration set of random video generation prompts, (2) observing the range of change in feature similarity, and (3) uniformly splitting such range into the set of basis cache-rates. A more-complex strategy (*e.g.* carefully sampling the calibration set, non-uniform splitting of range) may give better trade-offs with some extra effort. However, in our experiments, we observe that even a simpler strategy generalizes well to a large number of videos\\u2014 thanks to (a) the consistent range of feature similarities between subsequent steps in a given DiT backbone, and (b) the *normalized* cache-metric. 
We will discuss these details in the supplementary. We also release our codebase with the paper to ensure the reproducibility and easier adoption to newer settings.\"}", "{\"title\": \"Response to reviewer vGd3 [1/3]\", \"comment\": \"**W1a: More details about caching-schedule hyperparameters.**\\n\\nWe understand the reviewer\\u2019s concern and sincerely apologize for the lack of details. In AdaCache, once we compute the distance metric between subsequent representations ($c^l_t$), we select the next caching rate ($\\\\tau^l_t$) based on a *pre-defined codebook of basis cache-rates*. Here, a *\\u2018cache-rate\\u2019* is defined as the number of subsequent steps during which, a previously-computed representation is re-used (*i.e.,* a higher cache-rate gives more compute savings). Simply put, a higher distance metric will sample a lower cache-rate from the codebook, resulting in more-frequent re-computations.\\n\\nThe codebook is basically a collection of cache-rates that is specific to a denoising schedule (i.e., #steps), coupled with distance metric ($c_t$) thresholds for selection. Both basis cache-rates and thresholds are hyperparameters. Here, optimal thresholds may need to be tuned per video-DiT baseline, whereas the cache-rates can be adjusted depending on the required speedup (*e.g.* AdaCache-fast, AdaCache-slow). We tune these hyperparameters (`codebook = {threshold-1: cache-rate-1, \\u2026}`) based on empirical observations on a small calibration set (with just 16 video prompts), and observe that they generalize well (*e.g.* on larger benchmarks such as VBench w/ 900+ prompts). 
This is thanks to the **normalized** cache-metric that we use for deciding the caching schedule (irrespective of the video prompt), relative to which we calibrate the threshold values.\\n\\nFor instance, on the Open-Sora baseline, we use the codebook `{0.03: 12, 0.05: 10, 0.07: 8, 0.09: 6, 0.11: 4, 1.00: 3}` in a 100-step denoising schedule, and the codebook `{0.08: 6, 0.16: 5, 0.24: 4, 0.32: 3, 0.40: 2, 1.00: 1}` for AdaCache-fast in a 30-step schedule. For AdaCache-slow in a 30-step schedule, we decrease the basis cache-rates (w/o having to change the thresholds), and use the codebook `{0.08: 3, 0.16: 2, 0.24: 1, 1.00: 1}`. A specific cache-rate is selected if the distance metric is smaller than the corresponding threshold (and larger than any previous thresholds). We also ablate various codebooks (*e.g.* fast, mid, slow in Table 2e). We will include this discussion in the final version of the paper.\\n\\n\\n\\n**W1b: Why per-layer caching-schedules can be unstable?**\\n\\nWe thank the reviewer for requesting further evidence on this observation, which we believe will be useful to the reader. By design, AdaCache can have unique caching schedules *per-layer* (and, per each residual computation). However, we observe that this makes the generations unstable in the current DiT architectures that we tested. \\n\\nOne possible hypothesis for this observation is the incompatibility between the cached and newly-computed features. Having unique caching schedules *per-layer* forces the model to use both these features within **different DiT layers of the same denoising step**. As such features (cached vs. recomputed) are not perfectly aligned, their compatibility becomes an issue, which results in an unstable performance. \\n\\nIn contrast, if we have a common caching schedule for all the layers, the features used in each step (either cached or recomputed) would all correspond to **the same denoising step**, and would all be perfectly compatible with each other. 
We include qualitative samples in this rebuttal to visualize this observation (see [anonymous-fig-1](https://drive.google.com/file/d/1a_VzVZh82hFE-5ZN1HIA6vrIA7WoHiX_/view?usp=share_link)). We see that the per-layer cache-schedule shows more artifacts/degradations in the generated videos, compared to a more-stable common cache-schedule. \\n\\nHowever, we note that this observation may not generalize to all video generation models (*e.g.* some architectures may be stable enough to take advantage of unique levels of redundancy in different layers), so we keep formulation/design of AdaCache more-generic in the method section. We will include this discussion in the supplementary.\"}", "{\"title\": \"Follow-up (2)\", \"comment\": \"Dear Reviewer 5tUr,\\n\\nWe thank you again for your initial feedback and continued engagement in discussions. Please let us know if your concerns have been addressed. Since the rebuttal period is ending soon, we would really appreciate it if our additional experiments and clarifications can be considered in the final rating.\\n\\nThanks so much!\"}", "{\"title\": \"Response to reviewer 5tUr [1/2]\", \"comment\": \"**W1: Why per-layer caching-schedules can be unstable?**\\n\\nWe thank the reviewer for requesting further evidence on this observation, which we believe will be useful to the reader. By design, AdaCache can have unique caching schedules *per-layer* (and, per each residual computation). However, we observe that it will make the generations unstable in the current DiT architectures that we tested. \\n\\nOne possible hypothesis for this observation is the incompatibility between the cached and newly-computed features. Having unique caching schedules *per-layer*, forces the model to use both these features within **different DiT layers of the same denoising step**. As such features (cached vs. recomputed) are not perfectly-aligned, their compatibility becomes an issue which results in an unstable performance. 
\\n\\nIn contrast, if we have a common caching schedule for all the layers, the features used in each step (either cached or recomputed) would all correspond to **a specific same denoising step**, that are all perfectly-compatible with each other. We include qualitative samples in this rebuttal to visualize this observation (see [anonymous-fig-1](https://drive.google.com/file/d/1a_VzVZh82hFE-5ZN1HIA6vrIA7WoHiX_/view?usp=share_link)). We see that the per-layer cache-schedule shows more artifacts/degradations in the generated videos, compared to a more-stable common cache-schedule. \\n\\nHowever, we note that this observation may not generalize to all video generation models (*e.g.* some architectures may be stable enough to take advantage of unique levels of redundancy in different layers), so we keep formulation/design of AdaCache more-generic in the method section. We will include this discussion in the supplementary.\\n\\n\\n**W2: More details about caching-schedule hyperparameters.**\\n\\nWe understand the reviewer\\u2019s concern and sincerely apologize for the lack of details. In AdaCache, once we compute the distance metric between subsequent representations ($c^l_t$), we select the next caching rate ($\\\\tau^l_t$) based on a *pre-defined codebook of basis cache-rates*. Here, a *\\u2018cache-rate\\u2019* is defined as the number of subsequent steps during which, a previously-computed representation is re-used (*i.e.,* a higher cache-rate gives more compute savings). Simply put, a higher distance metric will sample a lower cache-rate from the codebook, resulting in more-frequent re-computations.\\n\\nThe codebook is basically a collection of cache-rates that is specific to a denoising schedule (i.e., #steps), coupled with distance metric ($c_t$) thresholds for selection. Both basis cache-rates and thresholds are hyperparameters. 
Here, optimal thresholds may need to be tuned per video-DiT baseline, whereas the cache-rates can be adjusted depending on the required speedup (*e.g.* AdaCache-fast, AdaCache-slow). We tune these hyperparameters (`codebook = {threshold-1: cache-rate-1, \\u2026}`) based on empirical observations on a small calibration set (with just 16 video prompts), and observe that they generalize well (*e.g.* on larger benchmarks such as VBench w/ 900+ prompts). This is thanks to the **normalized** cache-metric that we use for deciding the caching schedule (irrespective of the video prompt), relative to which we calibrate the threshold values.\\n\\nFor instance, on Open-Sora baseline, we use the codebook `{0.03: 12, 0.05: 10, 0.07: 8, 0.09: 6, 0.11: 4, 1.00: 3}` in a 100-step denoising schedule, and the codebook `{0.08: 6, 0.16: 5, 0.24: 4, 0.32: 3, 0.40: 2, 1.00: 1}` for AdaCache-fast in a 30-step schedule. For AdaCache-slow in a 30-step schedule, we decrease the basis cache-rates (w/o having to change the thresholds), and use the codebook `{0.08: 3, 0.16: 2, 0.24: 1, 1.00: 1}`. A specific cache-rate is selected if the distance metric is smaller than the corresponding threshold (and larger than any previous thresholds). We also ablate various codebooks (*e.g.* fast, mid, slow in Table 2e). We will include this discussion in the final version of the paper.\"}", "{\"title\": \"Follow-up (2)\", \"comment\": \"We thank the reviewer 5tUr again for the initial feedback, which will definitely improve the quality and clarity of this paper. We believe we have addressed all of the reviewer's concerns, but we would be very happy to engage in further discussion and provide more clarifications if needed.
Please let us know if the concerns have been addressed.\\n\\nThanks so much!\"}", "{\"title\": \"Follow-up response to reviewer vGd3\", \"comment\": \"We really appreciate the engaged discussions from the reviewer vGd3, and we are happy to provide further clarifications.\\n\\n**F-Q1: How to select the codebook hyperparameters when adapting to a new setting?**\\n\\nOur codebook consists of two sets of hyperparameters: (a) **cache-metric thresholds**, and (b) **basis cache-rates**. We follow the steps below when tuning these:\\n\\n(1) Select a small calibration set of random video generation prompts. We select 16 prompts and visually validate the varying levels of complexity in the corresponding video generations (*e.g.* high- and low-frequency textures, fast and slow moving content)\\u2014 We use the same set of prompts across all our experimental settings, observing that these generalize. We will highlight our calibration set in the paper for easier adaptability.\\n\\n(2) Observe the distribution of cache-metric values across the denoising process (*i.e.,* L1 distance between subsequent representations), and identify the lower- and upper-bounds\\u2014 We identify `[0.03, 0.11]` to be this range for the 100-step Open-Sora baseline.\\n\\n(3) Split the above range uniformly into the number of *basis cache-rates* we want to have\\u2014 We use the *cache-metric thresholds* `{0.03, 0.05, 0.07, 0.09, 0.11}` for the 100-step Open-Sora baseline.\\n\\n(4) Finally, set the *basis cache-rates* depending on the required quality-latency trade-off\\u2014 These values are user-defined and can be adjusted at inference without needing to tune anything else. We use *basis cache-rates* `12-10-8-6-4-3` for AdaCache-fast with the 100-step Open-Sora baseline (*i.e.,* the codebook will be `{0.03: 12, 0.05: 10, 0.07: 8, 0.09: 6, 0.11: 4, 1.00: 3}`).
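To make this lookup concrete, here is a minimal Python sketch (purely illustrative; `select_cache_rate` is a hypothetical helper name, not from the released codebase) of the threshold-based codebook selection described above:

```python
# Illustrative sketch (hypothetical helper, not the released AdaCache code):
# pick the next cache-rate from a codebook mapping cache-metric thresholds
# to basis cache-rates. A smaller metric (less change between subsequent
# representations) selects a larger cache-rate, i.e. longer reuse.

def select_cache_rate(metric, codebook):
    """Return the cache-rate of the first threshold that exceeds the metric."""
    for threshold in sorted(codebook):
        if metric < threshold:
            return codebook[threshold]
    # Metrics at/above the last threshold fall back to the smallest rate.
    return codebook[max(codebook)]

# Codebook for AdaCache-fast on the 100-step Open-Sora schedule (from above).
codebook = {0.03: 12, 0.05: 10, 0.07: 8, 0.09: 6, 0.11: 4, 1.00: 3}

print(select_cache_rate(0.02, codebook))  # little change -> reuse for 12 steps
print(select_cache_rate(0.10, codebook))  # larger change -> reuse for only 4 steps
```

A smaller metric value falls under an earlier threshold and thus maps to a larger cache-rate, matching the rule that less change between representations permits longer reuse.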
For AdaCache-mid in Table 2e, we use basis rates `8-6-4-2-1-1` with the same thresholds (*i.e.,* the codebook will be `{0.03: 8, 0.05: 6, 0.07: 4, 0.09: 2, 0.11: 1, 1.00: 1}`). Using smaller basis rates will yield a better quality, but also a smaller speedup. These basis rates can be adjusted based on the number of denoising steps, and validated by inspecting the change in quality of the generated videos (either visually or quantitatively).\\n\\nWe use the same process (w/ the same prompts) across Open-Sora, Open-Sora-Plan, Latte and CogVideoX baselines, achieving consistently-better quality-latency trade-offs compared to prior training-free DiT acceleration methods. We will better highlight these steps in the paper. We also release our codebase with the paper to ensure the reproducibility and easier adoption to newer settings.\\n\\n\\n**F-Q2: On the generalization of the codebook tuned with fewer video prompts.**\\n\\nWe observe that our codebook generalizes across multiple benchmarks. In the table below, we show that both the quality metrics and speedups in different AdaCache variants behave consistently across 32-video, 100-video and 900-video (standard VBench) benchmarks--- all using the codebook tuned with the same 16 prompts. 
Let us further clarify the reasoning for this.\\n\\n| Method | 32 videos || 100 videos || 900+ videos ||\\n|----------|----------|----------|----------|----------|----------|----------|\\n| \\t\\t\\t\\t| VBench | Latency (on A6000) | VBench | Latency (on A6000) | VBench | Latency (on A100) |\\n| Open-Sora\\t\\t\\t| 84.09 | 86.57 | 82.97 | 86.35 | 79.22 | 54.02 |\\n| + AdaCache-fast\\t\\t| 83.42 | 37.06 (2.34x) | 82.21 | 37.22 (2.32x) | 79.39 | 24.16 (2.24x) |\\n| + AdaCache-fast (w/ MoReg)\\t| 83.42 | 39.56 (2.19x) | 82.32 | 39.65 (2.18x) | 79.48 | 25.71 (2.10x)|\\n| + AdaCache-slow\\t\\t| 83.93 | 57.33 (1.51x) | 82.89 | 58.51 (1.48x) | 79.66 | 37.01 (1.46x)|\\n\\n\\nFirst, we ensure a fair spread of calibration prompts by visually validating the corresponding video generations to have varying levels of complexity (*e.g.* high- or low-frequency textures, fast and slow moving content). Secondly, by making our cache-metric a **normalized** one, we make sure that the thresholds that we tune generalize well to unseen prompts (for a given DiT model, and denoising schedule). \\n\\nThis is not counterintuitive to our motivation that each video generation is unique. Even though the range of normalized metric values stays the same across different video generations, the underlying distribution of values is still **unique for each video** (*e.g.* how the metric changes in different stages of denoising)\\u2014 as also seen in Fig 2-right. Meaning, the thresholds we calibrated can stay the same across different video generations, yet which one gets activated at a given step will vary depending on each video. We will better clarify this in the supplementary.\\n\\nThat being said, we agree with the reviewer that outliers could exist, and better speedups may be squeezed with a more-complex codebook selection.
Yet on-average, we achieve a quality-latency trade-off that consistently outperforms prior acceleration methods as validated by our experiments.\\n\\nPlease let us know if further clarifications are required, as we are happy to engage in further discussions. We thank the reviewer again for the time and effort spent on these discussions.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your efforts during the discussion period. I found your idea of adaptive caching innovative and the results interesting, leading me to raise my rating to 6.\\n\\nHowever, the manuscript needs some revisions. Specifically, your explanations clarified the choice of hyperparameters, which are crucial for reproducibility and should be included in the revised paper. Additionally, the complexity of these hyperparameters may pose a challenge for first-time users of AdaCache, suggesting a promising direction for future research. I also recommend incorporating the experimental results shared during the rebuttal period into the manuscript.\\n\\nBest regards, \\nReviewer vGd3\"}", "{\"summary\": \"The paper proposes training-free Diffusion Transformer (DiT) acceleration named AdaCache. The method is motivated by the fact that different videos require different amounts of computation. AdaCache decides whether to skip or recompute the cache during the cache step defined by the schedule, using an introduced rate-of-change metric $c_t$, which is basically L1 distance between residual block features in current and previous cache schedule timesteps. Authors further augment rate-of-change metric c_t with a Motion Regularization (MoReg) metric to add information about the motion content of the generated video.\\n\\nOverall, the AdaCache idea has good potential. However, the current version of the manuscript needs significant revision to be accepted for this conference. 
Please refer to the Weaknesses and Questions sections for more details.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Novelty: The idea of adaptive caching seems novel in the field of diffusion model caching.\\n2. Motivation: The paper provides a clear motivation for the AdaCache method.\\n3. Clearness: The method is simple and easy to understand.\", \"weaknesses\": \"1. Method section requires clarifications:\\n\\na. The paper lacks information about the selection of rate-of-change schedule hyperparameters.\\n\\nb. Lines 286-287 state that the authors observe that unique caching schedules for each layer will make the generations unstable. This important observation requires further explanation and clarification.\\n\\n2. Experiment results require better presentation:\\n\\na. There are concerns regarding the reported speedup and latency. Given that AdaCache is not a deterministic method and inference time for different videos varies, it is incorrect to report just the mean inference time and speedup for all videos. Standard deviation (std) values should be included.\\n\\nb. Ablation studies are performed on only 32 videos. Considering that AdaCache is not deterministic, the results may have low statistical significance.\\n\\nc. AdaCache without MoReg is a general method that could apply to image diffusion transformers. A comparison with prior works on DiT architectures for image generation, such as PIXART-\\u03b1 for T2I generation and DiT-XL for class-conditional image generation on ImageNet, is suggested.\\n\\nd. The paper missed a comparison with another DiT caching method like FORA [1].\\n\\n3. The paper lacks a Limitations section. It would be interesting to see if there are videos on which AdaCache performs worse than its deterministic competitors.\", \"questions\": \"1. My main concerns regarding this paper are related to experiment results presentation and clarification of method hyperparameters:\\n\\na.
Since the method is not deterministic, the results should include std values for inference time and speedup (See Weakness 2a). Moreover, apart from adding std values, I recommend conducting ablation studies on more than 32 videos (See Weakness 2b).\\n\\nb. AdaCache without MoReg is not tied to video generation; I recommend including image generation DiT caching (See Weakness 2c).\\n\\nc. I suggest the authors provide information about the rate-of-change schedule hyperparameter selection, as it is a crucial part of the proposed method (See Weakness 1a).\\n\\n2. An additional group of questions is not as important as the main one, but can also help to improve the quality of the manuscript:\\n\\na. The statement regarding unique caching schedules for each layer needs clarification (See Weakness 1b). \\n\\nb. The paper would greatly benefit from a limitations analysis of the method (See Weakness 3).\\n\\nc. The authors may include FORA [1] in comparisons.\\n\\nd. It is interesting to see how AdaCache performs on MMDiT models such as CogVideoX [2] and SD-3 [3].\\n\\ne. In line 196, it would be beneficial to explain which features were used for visualization.\\n\\n\\n[1] Selvaraju, P., Ding, T., Chen, T., Zharkov, I., & Liang, L. (2024). Fora: Fast-forward caching in diffusion transformer acceleration. arXiv preprint arXiv:2407.01425.\\n\\n[2] Yang, Z., Teng, J., Zheng, W., Ding, M., Huang, S., Xu, J., ... & Tang, J. (2024). Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072.\\n\\n[3] Esser, P., Kulal, S., Blattmann, A., Entezari, R., M\\u00fcller, J., Saini, H., ... & Rombach, R. (2024, March). Scaling rectified flow transformers for high-resolution image synthesis.
In Forty-first International Conference on Machine Learning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer g4jp [2/2]\", \"comment\": \"**Q1: Is it reasonable to estimate the next cache-rate, based on previous features?**\\n\\nThis is a very good question, and we thank the reviewer for raising it. We agree that relying on the rate-of-change computed based on past features, to adjust the subsequent cache schedule can introduce some errors (as the metric is lagged behind). However, this error is minimal. When observing the change between adjacent features as shown in Fig 2-right, we see that it varies smoothly, w/o any abrupt changes for the most part of the denoising schedule. Therefore, relying on immediate-past features is not a bad idea. \\n\\nRegardless, if we want to avoid such errors, the solution that we can think of is having a dry-run. Meaning, we have to estimate the metrics in the first run, and then in the next run, adjust the caching-rate at each step based on an **up-to-date metric** (as we cannot rely on future features to estimate an up-to-date metric on-the-fly). However, such an approach would defeat the purpose of accelerating inference. We will discuss this limitation in the supplementary.\\n\\n\\n**Q2: Concrete examples of cache-schedules in different video generations.**\\n\\nWe thank the reviewer for requesting this visualization, as it would provide better context to the reader. In this [anonymous-fig-4](https://drive.google.com/file/d/1g3bUI-g3tui4XvBSqwuW9x1iUoZfPB0U/view?usp=share_link), we show a few different video generations with their corresponding computational steps (within a 100-step baseline schedule), when accelerated with AdaCache-fast (w/ MoReg). \\n\\nHere, the first two videos have a smaller motion content, whereas the last two have higher motion.
When observing the total number of compute steps, we see that it varies proportionally to the motion content (*i.e.,* more motion \\u2192 more compute steps). In terms of where the computations happen across the diffusion axis (*i.e.,* step-id), we see that every schedule is unique, supporting the underlying motivation of AdaCache. Here are the cache-schedules of these specific examples.\\n \\nSample 1 (living-room): `15` compute-steps @ step ids `[1, 2, 5, 8, 11, 14, 22, 32, 44, 56, 68, 80, 92, 99, 100]`\\n\\nSample 2 (ancient-ruin): `14` compute-steps @ step ids `[1, 2, 5, 8, 11, 17, 27, 39, 51, 63, 75, 87, 99, 100]`\\n\\nSample 3 (moving-train): `18` compute-steps @ step ids `[1, 2, 5, 8, 11, 14, 22, 30, 38, 46, 54, 62, 72, 82, 90, 98, 99, 100]`\\n\\nSample 4 (underwater): `16` compute-steps @ step ids `[1, 2, 5, 8, 11, 17, 27, 35, 45, 55, 65, 75, 85, 95, 99, 100]`\\n\\nWe will include and discuss these observations in the final version of the paper. \\n\\n\\n**Q3: AdaCache performance on multi-gpu settings.**\\n\\nWe thank the reviewer for raising this interesting comparison. The benefit of AdaCache is especially relevant in multi-gpu settings, as it not only reduces computational costs, but also avoids some of the *gpu communication overheads*. In this rebuttal, we evaluate the acceleration on multiple gpus and report it in this [anonymous-fig-5](https://drive.google.com/file/d/1s7ECqxZ2NgZJuBcJCERbpUOVQxRgBnMg/view?usp=share_link). Here, we rely on Dynamic Sequence Parallelism (DSP), and compare Open-Sora and Open-Sora-Plan baselines, with either PAB or AdaCache for acceleration. Here, AdaCache consistently outperforms PAB with better inference speeds across all settings. We will include these results in the final version of the paper.\\n\\n\\n**Q4: Clarification on missing latency values (Delta-DiT, T-GATE) in Table 1.**\\n\\nWe sincerely apologize for the confusion here.
It is not the case that Delta-DiT and T-GATE have no acceleration, but rather the latency values could not be replicated for these baselines in our settings (at least at the time of submission). To be more specific, Delta-DiT has no publicly-available codebase, which prevents us from replicating and measuring its latency. However, in this rebuttal, we provide new latency measurements for T-GATE as it provides an open-source implementation: it costs 49.11s (1.10x speedup) w/ Open-Sora, 113.75s (1.14x speedup) w/ Open-Sora-Plan, and w/ Latte 29.23s (1.11x speedup). We will update these results in Table 1.\"}", "{\"title\": \"Follow-up response to reviewer g4jp [2/2]\", \"comment\": \"**F-Q3: Disparity between multi-gpu results in [anonymous-fig-5] vs. those reported in PAB.**\\n\\nWe sincerely apologize for the confusion here. The latency measurements (and, speedups) vary based on the video-generation settings (*e.g.* number of denoising steps, spatial resolution, number of frames) and the Hardware settings (*e.g.* type of GPU, memory availability, disk read/write speeds). It is important to run all baselines in the same setting (and, the same GPU node if possible) to make a fair comparison.\\n\\nFor instance, in PAB [arXiv 2024], the authors run their multi-gpu experiments on 8xH100s, for generating 480p - 8s (204-frame) videos with Open-Sora using a 30-step schedule\\u2014 showing a 10.5x speedup with 8 gpus. In accordance with our resource availability, we run our multi-gpu experiments on 8xA100s, for generating 480p - 2s (51-frame) videos with Open-Sora using a 30-step schedule\\u2014 here, the same PAB baseline (based on the official codebase released by authors) shows a 5.24x speedup as given in [anonymous-fig-5](https://drive.google.com/file/d/1s7ECqxZ2NgZJuBcJCERbpUOVQxRgBnMg/view?usp=share_link). 
As long as we run AdaCache in the same setting, we ensure a fair comparison.\\n\\nUnder such a fair comparison, we see that AdaCache consistently outperforms PAB in all gpu configurations. We will better clarify the experimental setting when we report these numbers in the final version of the paper.\\n\\n\\n**F-Q4: Clarification on the performance of Delta-DiT / T-GATE in Table 1.**\\n\\nWe sincerely apologize for our confusion about the reviewer\\u2019s original concern. Let us provide further information here. In Table 1, we reuse the baseline numbers reported by PAB [arXiv 2024], except for the latency measurements (and, speedups) which need to be replicated in our setting for a fair comparison\\u2014 all the other metrics (VBench, PSNR, SSIM, LPIPS, FLOPs) are hardware independent and can be reused. \\n\\nThe authors of PAB replicate Delta-DiT (from scratch) and T-GATE (from the official codebase), and adapt them to video generation, as detailed in the appendix of their paper. We believe that the limited speedups of these baselines are due to the fact that they are originally proposed as image-DiT acceleration methods, which do not fully exploit video information. Meaning, better speedups cannot be achieved with these baselines without sacrificing the temporal consistency of generations. We verify that the numbers reported by PAB authors are correct, based on our T-GATE and PAB replications (both from the respective official codebases)\\u2014 except for a reasonable variance in latency measurements which is expected. We will clarify where the numbers in Table 1 originate from, in the final version of the paper.\\n\\n\\n**F-Q5: Why AdaCache does not have a hard boundary to avoid caching in early/late denoising steps?**\\n\\nIn AdaCache, we do not have a hard boundary to avoid caching (either in very early/late denoising steps).
As the reviewer stated, changes to the early denoising steps can change the convergence\\u2014 and similarly, changes to the late denoising steps can introduce unwanted artifacts. However, since our method is adaptive, we do not have to avoid caching in these two ends explicitly, but rather it is handled implicitly (as a soft boundary).\\n\\nFor instance, in the above sample 3, the schedule is `[1, 2, 5, 8, 11, 14, 22, 30, 38, 46, 54, 62, 72, 82, 90, 98, 99, 100]`. In early steps (`[1, 2, 5, 8]`) the average cache-rate is 2.33. In late steps (`[90, 98, 99, 100]`), the average cache-rate is 3.33. In contrast, in middle steps (`[11, 14, 22, 30, 38, 46, 54, 62, 72, 82]`) the average cache-rate is 7.89. This shows that AdaCache will have a soft-boundary\\u2014 that is also dependent on each video. We empirically observe that having such small cache-rates in early/late steps does not harm the convergence significantly, as is also evident in our qualitative results.\\n\\nIn PAB, the authors also observe the same behavior in feature similarity as in our Fig 2-right\\u2014 early/late layers have higher change, whereas middle layers have lower change. Based on this, PAB introduce hard boundaries to avoid caching in early and late layers, and use a constant caching rate in the mid layers (as it is not adaptive)\\u2014 which is a hand-designed schedule. In contrast, AdaCache provides more-flexibility (having adaptive soft-boundaries decided by the cache-metric), allowing us to optimize the latency even further, while preserving a better quality. We will better clarify the difference in the final version of the paper.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your prompt response to my questions. However, some points of your explanation need more formal definitions and detailed explanations:\\n\\n**F-Q1: Selection of Codebook Hyperparameters in New Settings**\\n\\n1. The step \\\"Observe the distribution of cache-metric values across the denoising process\\\" is unclear.
The L1 distance between subsequent representations can vary across architectures. Please provide a more generalized definition for selecting lower and upper bounds.\\n\\n2. How should users define the number of basis cache-rates? What is its impact, and can you recommend values for this parameter?\\n\\n3. Basis cache-rates involve numerous hyperparameters, making optimization complex. Could you provide detailed instructions to simplify this process?\\n\\n**F-Q2: Generalization of the Codebook Tuned with Fewer Video Prompts**\\n\\n1. How do you define \\\"a fair spread of calibration prompts\\\"? This is crucial for the method\\u2019s applicability.\\n\\n2. In the provided table, how were the subsets of 32 and 100 videos selected?\\n\\n\\nBest regards, \\nReviewer vGd3\"}", "{\"title\": \"Response to reviewer g4jp [1/2]\", \"comment\": \"**W1: Typo in Table 1**\\n\\nWe sincerely apologize for this typo, and thank the reviewer for pointing it out. This SSIM value of AdaCache-slow in Open-Sora-Plan should be 0.7910 (instead of 79.10), consistent with other SSIM values. We will correct this in the final version of the paper.\\n\\n\\n**W2: More details about caching-schedule hyperparameters.**\\n\\nWe understand the reviewer\\u2019s concern and sincerely apologize for the lack of details. In AdaCache, once we compute the distance metric between subsequent representations ($c^l_t$), we select the next caching rate ($\\\\tau^l_t$) based on a *pre-defined codebook of basis cache-rates*. Here, a *\\u2018cache-rate\\u2019* is defined as the number of subsequent steps during which, a previously-computed representation is re-used (*i.e.,* a higher cache-rate gives more compute savings). 
Simply put, a higher distance metric will sample a lower cache-rate from the codebook, resulting in more-frequent re-computations.\\n\\nThe codebook is basically a collection of cache-rates that is specific to a denoising schedule (i.e., #steps), coupled with distance metric ($c_t$) thresholds for selection. Both basis cache-rates and thresholds are hyperparameters. Here, optimal thresholds may need to be tuned per video-DiT baseline, whereas the cache-rates can be adjusted depending on the required speedup (*e.g.* AdaCache-fast, AdaCache-slow). We tune these hyperparameters (`codebook = {threshold-1: cache-rate-1, \\u2026}`) based on empirical observations on a small calibration set (with just 16 video prompts), and observe that they generalize well (*e.g.* on larger benchmarks such as VBench w/ 900+ prompts). This is thanks to the **normalized** cache-metric that we use for deciding the caching schedule (irrespective of the video prompt), relative to which we calibrate the threshold values.\\n\\nFor instance, on Open-Sora baseline, we use the codebook `{0.03: 12, 0.05: 10, 0.07: 8, 0.09: 6, 0.11: 4, 1.00: 3}` in a 100-step denoising schedule, and the codebook `{0.08: 6, 0.16: 5, 0.24: 4, 0.32: 3, 0.40: 2, 1.00: 1}` for AdaCache-fast in a 30-step schedule. For AdaCache-slow in a 30-step schedule, we decrease the basis cache-rates (w/o having to change the thresholds), and use the codebook `{0.08: 3, 0.16: 2, 0.24: 1, 1.00: 1}`. A specific cache-rate is selected if the distance metric is smaller than the corresponding threshold (and larger than any previous thresholds). We also ablate various codebooks (*e.g.* fast, mid, slow in Table 2e). We will include this discussion in the final version of the paper.\\n\\n\\n**W3: Disparity between AdaCache vs. PAB comparisons in Table 1 and Fig 7.**\\n\\nWe understand this perfectly-valid concern from the reviewer; let us clarify this confusion below.\\n\\nFirst, we want to point out that in Fig.
7, we compare AdaCache-fast (w/ MoReg) and PAB-fast configurations. In Table 1, if we consider these two configurations, we see that the quality metrics are not that different (*i.e.,* comparable), whereas AdaCache has much better speedups. AdaCache-slow is the variant that gives much better quality metrics, while still being faster than PAB-fast. Therefore, the quantitative numbers are consistent with the observations in Fig 7.\\n\\nHowever, we wish to highlight that a direct quality comparison based on Fig 7 is unfair, as AdaCache optimizes its latency to an extreme where the quality is expected to have a small drop. Yet, looking at Fig 5 we see that AdaCache performance is more-stable across a range of latencies, compared to PAB. A more reasonable setting would be to compare the quality at a similar latency, which we show in this [anonymous-fig-2](https://drive.google.com/file/d/1e30h_6N7K_QDcOHLRV0zCtNYqfnhlzuA/view?usp=share_link). Here, we include variants AdaCache (2.61x) vs. PAB (1.66x) for 720p - 2s generations, instead of a more-extreme variant AdaCache (4.49x) vs. PAB (1.26x) that we previously presented in Fig 7, making a fairer comparison. We see that AdaCache shows a much better performance, still being faster.\\n\\nWe will include this discussion and the figure for direct comparison in the final version of the paper. Also, with this rebuttal, we include an [anonymous-webpage](https://anonymous-adacache.github.io/), which we encourage reviewers to view. It includes many video comparisons, and provides a better view on baseline comparisons and ablations (*e.g.* how temporal consistency varies).\"}", "{\"comment\": \"If the reviewer has correctly understood, AdaCache uses shared features at steps 3, 4, and 6, 7. Based on the reviewer\\u2019s experience, feature sharing in the early stages of sampling could often result in significant changes in the final synthesized content, which might be difficult to mitigate even with a lower caching rate.
The paper mentions \\u201creference-based\\u201d metrics such as PSNR and SSIM. Could the authors clarify whether any additional reference frames were introduced during inference to ensure high fidelity metrics in Open-Sora?\"}", "{\"summary\": \"The paper introduces a training-free method called Adaptive Caching (AdaCache) to accelerate video Diffusion Transformers (DiTs). AdaCache is based on the idea that \\\"not all videos are created equal,\\\" meaning some videos require fewer denoising steps to achieve reasonable quality. It caches computations through the diffusion process and devises a caching schedule tailored to each video generation to maximize the quality-latency trade-off. Additionally, the paper introduces a Motion Regularization (MoReg) scheme to utilize video information within AdaCache, essentially controlling compute allocation based on motion content. These plug-and-play contributions significantly speed up inference (e.g., up to 4.7\\u00d7 faster on Open-Sora 720p - 2s video generation) without compromising generation quality across multiple video DiT baselines. The code for this method will be made publicly available.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Adaptive Caching achieves very good performance even compared with the recent PAB paper. I very much appreciate it.\\n\\n2. This approach requires no training and can seamlessly be integrated into a baseline video DiT at inference, as a plug-and-play component.\\n\\n3. Motion Regularization (MoReg) to allocate computations based on the motion content in the video being generated seems to be very reasonable.\", \"weaknesses\": \"1. Regarding the choice of metric, why was the Mean Squared Error (MSE) selected directly? Can the MSE metric truly reflect the actual reduction in features between adjacent steps?
Are there alternative metrics that might be more suitable, or can you provide comparisons with other metrics such as the cosine similarity metric or others?\\n\\n2. Secondly, I'm interested in knowing if the proposed method is compatible with large Text-to-Image (T2I) base models, like FLUX. If it is, what would be the expected impact on the performance metrics?\\n\\n3. Although the above questions exist, I think this is a really valuable paper\", \"questions\": \"On the whole, I consider this paper to be well-executed. However, I'm intrigued by the possibility of identifying a metric that could more accurately assess the redundancy in the system. Furthermore, I'm curious about the potential of integrating layer-wise broadcasting dynamically with distillation techniques to enhance real-time video generation capabilities.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks to reviewer vGd3\", \"comment\": \"We are happy that we were able to resolve the concerns of reviewer vGd3. We thank the reviewer again for the time and effort spent reviewing our paper and engaging in lengthy discussions, allowing us to provide much-needed clarifications. These discussions will improve the quality of our paper greatly; we will include them in the final version of the paper. Finally, we appreciate the reviewer's positive rating.\\n\\nKind Regards!\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer g4jp,\\n\\nThank you again for your constructive feedback and time/effort reviewing our paper. Since the rebuttal period is ending soon, please let us know if our responses have addressed your concerns.
We are happy to engage in further discussion to provide more clarifications if needed.\\n\\nKind Regards!\"}", "{\"title\": \"Follow-up response to reviewer 5tUr\", \"comment\": \"We thank the reviewer for the engaged discussion, and for allowing us to resolve any further confusions.\\n\\nWe sincerely apologize for the confusion here. In our AdaCache experiments, we follow the baseline video-DiT inference setup exactly (except for the caching-related changes), in all Open-Sora, Open-Sora-Plan, and Latte pipelines. Among these, Open-Sora is both text- and image-conditioned, as suggested by the original contributors in their GitHub issues (please see [issue-1](https://github.com/hpcaitech/Open-Sora/issues/504) and [issue-2](https://github.com/hpcaitech/Open-Sora/issues/550)) and GradIO demo (please see [instructions-to-run-locally](https://github.com/hpcaitech/Open-Sora/tree/main/gradio) as the public demo is currently offline). In contrast, in Open-Sora-Plan and Latte, the video generations are only text-conditioned. By being faithful to each setting, we show that AdaCache can generalize to both these settings. We release our reference implementation publicly with this paper, including the detailed steps to replicate the reported results, validating our contributions based on the Open-Sora baseline. We will continue to update our codebase to support other video-DiT baselines that we experimented with.\\n\\nWe thank the reviewer for the time and effort spent on these discussions. We hope that we were able to address all the concerns, and the reviewer will kindly consider this fact in the final rating.\"}", "{\"title\": \"Final response to reviewer g4jp\", \"comment\": \"We thank the reviewer for the engaged discussion. Yet, we find it really unfortunate that the reviewer decides to reduce the rating even further (6 -> 5 -> 3), even after the rebuttal period has ended, not allowing the authors to engage in further discussions. 
We have tried our best to accommodate all the reviewer requests including extensive experimentation and detailed clarifications throughout the two-week discussion period, spending a significant effort.\\n\\nLet us provide our final responses to the reviewer below.\\n\\n\\n**Q 1(i): The reliability of quantitative comparisons.**\\n\\nIn our main results (Table 1), we first run the PAB baseline on the VBench benchmark and verify that we can replicate the same quantitative numbers (w/ a negligible change)--- as we find only a negligible change, we report the same numbers as in the PAB paper for the quality metrics, and recompute all the latency measurements on our hardware (as latency changes depending on hardware). Only then do we adopt the same baseline settings to run AdaCache. We believe we make a fair and reliable comparison across all the baselines.\\n\\n**Q 1(ii): Delta-DiT and T-GATE numbers in Table 1 are unusual.**\\n\\nAs mentioned in our previous responses, we adopt the same numbers for these two image-DiT baselines as reported in the PAB paper. We do not re-implement them ourselves. We direct the reviewer to appendix A.3 in the PAB paper for a detailed description of the re-implementation settings. We believe that the limited speedups of these baselines are due to the fact that they are proposed as image-DiT acceleration methods\\u2014 which do not fully exploit video information to be competitive in video-DiT acceleration.\\n\\n**Q 2(i): Clear discussion on T2V and TI2V settings.**\\n\\nWe experiment across these two families of models (T2V and TI2V) to show the generalization of AdaCache. In the rebuttal, we provide evidence of further generalization (w/ T2I and multi-modal T2V settings). 
We will include this discussion on different families of models in the final version of the paper.\\n\\n**Q 2(ii) Analysis of different speedups at different configurations (*e.g.* 4.5x at 100-steps, 2.24x at 30-steps in Open-Sora)**\\n\\nThroughout the paper, when we report speedups, we are always upfront about the corresponding video generation configurations\\u2014 we mention the spatial resolution, frame count and the number of denoising steps. We believe that without these details, the acceleration measurements are not grounded. By experimenting on different configurations, we provide the reader a holistic view of how much speedup to expect.\\n\\nAs discussed in our previous responses, the speedup of AdaCache depends on different settings such as (1) number of denoising steps, (2) resolution/frame-count in the generation, and (3) the architecture of the underlying DiT. We believe the above-mentioned variation is not unusual for an adaptive, training-free acceleration method that optimizes the quality-latency trade-off. We will include this discussion in the final version of the paper.\\n\\n**Based on all our responses during this two-week discussion period, we kindly request the reviewer to reconsider their rating. We believe we have addressed most of the reviewer concerns, and the reject rating (6 -> 3) is unreasonable.**\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to reviewer vGd3 [3/3]\", \"comment\": \"**W2c: AdaCache performance with Image-DiTs (e.g. DiT-XL/2 or PixArt-alpha).**\\n\\nWe agree that this comparison based on image-DiTs is useful to evaluate how AdaCache generalizes to image generation pipelines. We are currently experimenting with the DiT-XL/2 baseline, and will report the results in the subsequent comments within the coming days. 
We really appreciate the patience of the reviewer as the experiments are being finalized.\\n\\n\\n\\n**W2d: Missing citation/comparison with image-DiT acceleration method: FORA.**\\n\\nWe thank the reviewer for bringing this important related work to our attention. First, we kindly note that FORA (released on arXiv in July 2024) is considered a \\u201cnon-penalized concurrent work\\u201d as per the ICLR 2025 submission policy. That being said, we still believe this would be a valuable discussion to include when reporting the performance of AdaCache on image-DiT baselines (as we are currently experimenting with DiT-XL/2). Conceptually FORA is different from AdaCache, as it is a caching mechanism proposed purely for image-DiTs, and is not adaptive w.r.t. the input. \\n\\nWe will add a quantitative comparison with FORA in the subsequent comments within the coming days. We really appreciate the patience of the reviewer as the experiments are being finalized.\\n\\n\\n**Q1d: AdaCache performance with Multi-modal DiTs (e.g. CogVideoX).**\\n\\nWe agree that this comparison based on multi-modal DiTs is useful to evaluate how AdaCache generalizes to various DiT pipelines. We are currently experimenting with CogVideoX baseline, and will report the results in the subsequent comments within the coming days. We really appreciate the patience of the reviewer as the experiments are being finalized.\\n\\n\\n**W3: Discussion on the limitations of AdaCache (e.g. worse performing settings compared to prior work).**\\n\\nThis is an important discussion which we will include in the final version of the paper.\\n\\nFirst, we note that AdaCache usually outperforms the quality of other inference optimization methods at a comparable speedup, as validated by the quantitative (*e.g.* Fig 5) and many qualitative results that are already in the paper. 
In this rebuttal, we also include an [anonymous-webpage](https://anonymous-adacache.github.io/), which we encourage reviewers to view as it includes many video comparisons to support this claim (better-viewed on Chrome browser). However, when we further reduce the latency\\u2014 considerably beyond that of the prior work\\u2014 we start seeing some artifacts and loss of fine-grained detail (*e.g.* as visible in some examples in Fig 7). Yet, we highlight that a fair comparison should be ideally made at comparable speedups (see [anonymous-fig-2](https://drive.google.com/file/d/1e30h_6N7K_QDcOHLRV0zCtNYqfnhlzuA/view?usp=share_link))\\n\\nIn addition, we observe a few limitations of the current AdaCache implementation: \\n\\n(1) As we do not rely on any re-training (or, finetuning) of the baseline model (which gives considerable compute savings and data acquisition costs), any limitations that are present in the corresponding baseline may transfer to the AdaCache variant of the same model. It is important that we raise caution about this to the user.\\n\\n(2) In the current setup, the hyperparameters related to the caching schedule (*e.g.* basis cache-rates, cache metric thresholds) are set based on heuristics and empirical validation on a small set of video prompts. Although these generalize well as we observe in our experiments, they may require some tuning when adopting to different baseline models or denoising schedules.\\n\\n(3) Finally, as our computational graph is adaptive, it may be less-suited in custom hardware architectures that rely on fixed (*i.e.,* static) computational graphs for running model inference (*e.g.* custom chips for on-device inference). AdaCache variant with a fixed caching schedule (tuned with a pre-defined calibration dataset) will work better in such scenarios.\\n\\n**Q1e: Which features used for visualization in Fig 2 (right)?**\\n\\nWe sincerely apologize for the lack of details about the features that we use in Fig 2-right (and L196). 
Here, we consider residual computations corresponding to *spatial-temporal attention* within an Open-Sora baseline for 720p - 2s video generations. We select a pre-defined layer (here, the middle-layer of the DiT), sample the features, and compute L1-distance between corresponding features of subsequent diffusion steps (aggregated over all axes) to come up with a scalar representation of feature change. Based on how this metric varies, we can get an idea of how redundant these computations are during different stages of denoising. We will better clarify and include these details in the final version of the paper.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Dear Reviewer 5tUr,\\n\\nThank you again for your constructive feedback and time/effort reviewing our paper. Since the rebuttal period is ending soon, please let us know if our responses have addressed your concerns. We are happy to engage in further discussion to provide more clarifications if needed.\\n\\nKind Regards!\"}", "{\"title\": \"Response to reviewer KT6w [1/1]\", \"comment\": \"**W1: Is L1/L2 distance the right choice to compute the cache-metric?**\\n\\nWe understand this valid concern. In AdaCache, we want to measure the rate-of-change in computed features (*i.e.,* residual connections in STA, CA, MLP layers) across the diffusion steps, so that we can make a decision on when to cache-and-reuse/recompute features. If the change is small\\u2014 meaning the features are highly-redundant\\u2014 then we can reuse previously cached features in subsequent steps.\\n\\nTo measure the difference between such features, we need to rely on a distance metric that (1) is fast to compute, and (2) measures an absolute distance which directly corresponds to the given input (*i.e.,* we can not rely on distribution-based distances such as KL-divergence). Among fast and direct distance measures (*e.g.* L1, L2, Cosine-distance), we see that L1/L2 give an absolute measure which aligns better with the actual change. 
In contrast, Cosine-distance computes a normalized-distance, which is not a reasonable estimate of change. For instance, if the features differ only by a scale, the cosine distance will be zero. However, here we wish to have a non-zero value as the features have actually changed. We ablate these different metrics in Table 2c of the original paper (and discussed in L471-476), which verifies the better performance of absolute distance metrics such as L1/L2. Among these, L1 provides a better quality-latency trade-off, and hence, we adopt it by default. We will include this extended discussion in the supplementary.\\n\\n\\n**W2: AdaCache performance with Image-DiTs (e.g. DiT-XL/2 or PixArt-alpha).**\\n\\nWe agree that this comparison based on image-DiTs is useful to evaluate how AdaCache generalizes to image generation pipelines. We believe AdaCache will still provide reasonable speedups, but we expect the acceleration to be smaller than that of a video-DiT, which relies on heavier operators (*e.g.* spatial-temporal attention) within the baseline. \\n\\nTo validate this, we are currently experimenting with the DiT-XL/2 baseline, and will report the results in the subsequent comments within the coming days. We really appreciate the patience of the reviewer as the experiments are being finalized.\"}", "{\"comment\": \"The authors\\u2019 clarification regarding the impact of video resolution and number of frames on the speedup factor is not convincing. For example, in Table 2(b) of the main text, consistent and stable speedup factors are observed across different resolutions and frame rates. The significant speedup variation caused solely by different timestep settings raises concerns about the reliability and stability of AdaCache. As a result, the reviewer has decided to lower the rating.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for conducting extensive experiments and providing clarifications to my questions. 
I still have several points that need clarification to form a detailed opinion about your paper:\\n\\n1. It is still unclear to me how to select a codebook of basis cache rates for new diffusion architectures. Could you please provide an algorithm or detailed steps on how to achieve this?\\n\\n2. The generalization from small to large numbers of videos remains ambiguous. You mentioned using a \\\"small calibration set (with just 16 video prompts)\\\" to select codebook parameters, which seems to contradict your paper's motivation that \\u201cnot all videos are created equal.\\u201d If the calibration set mainly includes complex videos, the selected codebook might not be aggressive enough for simpler videos that could be cached more efficiently. How do you ensure the calibration set is representative?\\n\\nLooking forward to your clarifications.\\n\\nBest regards, \\nReviewer vGd3\"}", "{\"title\": \"Response to reviewer vGd3 [2/3]\", \"comment\": \"**W2a: As AdaCache is not deterministic, report standard deviation of latency measurements.**\\n\\nWe thank the reviewer for raising this valid concern. In AdaCache, the variation in latency is small. Hence, following the standard practice as in other adaptive methods (*e.g.* AdaDiff, Object-Centric Diffusion, LazyDiffusion, Block Caching), we initially reported the average latency on a standard benchmark. However, we agree that the standard deviation (*std*) would provide useful information to the reader. Therefore, in this rebuttal, we include *std* numbers for AdaCache with the Open-Sora baseline. We will include them in our results tables in the final version of the paper. 
\\n\\n| Method | Latency (s) |\\n|----------|----------|\\n| Open-Sora\\t| 54.02\\t|\\n| + AdaCache-fast\\t| 24.16 $\\\\pm$ **1.54** |\\n| + AdaCache-fast (w/ MoReg)\\t| 25.71 $\\\\pm$ **1.08**\\t|\\n| + AdaCache-slow\\t| 37.01 $\\\\pm$ **1.30**\\t|\\n\\n*all new numbers are in bold.*\\n\\n \\n**W2b: Do ablations on 32 videos generalize?**\\n\\nWe understand the reviewer\\u2019s point of view. In this paper we consider standard benchmark video prompts in all evaluations (VBench prompts for 900+ videos in Table 1 and Open-Sora-Gallery prompts for 32 videos in Table 2), keeping all the experiments reproducible. We decide to have a smaller benchmark for ablations, to keep the overall computational cost tractable as we evaluate a wide range of design decisions. However, we find that the observations usually generalize across both benchmarks. \\n\\nTo further strengthen this claim, we provide additional results on a new set of prompts (corresponding to 100 videos), comparing 480p-2s video generations. We include both mean and standard deviation values for AdaCache variants as their latency is dependent on each video generation. Here, we make two observations: \\n\\n(1) Even though the absolute performance (VBench) metrics vary between benchmarks\\u2014 which is expected as the set of prompts are different for each setting\\u2014 the overall change between different model variants stays consistent: AdaCache-slow performs better than AdaCache-fast, and MoReg helps improve the performance.\\n\\n(2) The standard deviation in latency measurements is small, in all benchmarks. This shows that the speedups that we report in the ablation table generalize to the larger benchmarks such as VBench (900+ videos).\\n\\n**Note:** we have to rely on A6000 GPUs for newly-reported latency measurements as we no longer have access to the original A100 GPUs, yet the speedups remain consistent. 
\\n\\nWe will include these results and the discussion in the final version of the paper.\\n\\n| Method | 32 videos || 100 videos || 900+ videos ||\\n|----------|----------|----------|----------|----------|----------|----------|\\n| | VBench | Latency (on A6000) | VBench | Latency (on A6000) | VBench | Latency (on A100) |\\n| Open-Sora\\t| **84.09** | **86.57** | **82.97** | **86.35** | 79.22 | 54.02 |\\n| + AdaCache-fast\\t| **83.42** | **37.06 $\\\\pm$ 0.89** | **82.21** | **37.22 $\\\\pm$ 0.70** | 79.39 | 24.16 **$\\\\pm$ 1.54** |\\n| + AdaCache-fast (w/ MoReg)\\t| **83.42** | **39.56 $\\\\pm$ 0.94** | **82.32** | **39.65 $\\\\pm$ 1.16** | 79.48 | 25.71 **$\\\\pm$ 1.08** |\\n| + AdaCache-slow\\t| **83.93** | **57.33 $\\\\pm$ 1.53** | **82.89** | **58.51 $\\\\pm$ 1.61** | 79.66 | 37.01 **$\\\\pm$ 1.30** |\\n\\n*all new numbers are in bold.*\"}", "{\"title\": \"General comment\", \"comment\": \"We thank all the reviewers for their constructive feedback and appreciate their time/effort reviewing our paper. In this rebuttal, we provide clarifications with evidence to answer reviewer concerns, as individual responses to each reviewer. Please let us know if further clarifications are needed during the rebuttal period.\"}", "{\"comment\": \"On OpenSora, with the same prompt, AdaCache-fast achieves a 4.7\\u00d7 speedup. Why does it only achieve 1.65\\u00d7 on CogVideoX? Could the authors clarify the reasons for this significant discrepancy?\\n\\nFurthermore, AdaCache achieves a speedup of 4.7\\u00d7 on prompts from the OpenSora gallery, yet only 2.24\\u00d7 on prompts from VBench. What accounts for such a significant discrepancy in the speedup achieved by the same model?\"}", "{\"title\": \"Follow-up: AdaCache on a multi-modal DiT baseline\", \"comment\": \"**Q1d: AdaCache performance with Multi-modal DiTs (e.g. 
CogVideoX).**\\n\\nFollowing the reviewer\\u2019s suggestion, in this rebuttal, we implement AdaCache on top of a multi-modal diffusion transformer for video generation: CogVideoX, and compare with the concurrent work FasterCache [arXiv, Oct 2024]\\u2014 a training-free inference acceleration method that is not content adaptive. Here, we generate 480p - 6s videos following the baseline, and evaluate on prompts from the Open-Sora gallery. In the table below, we observe that AdaCache shows a better quality-latency trade-off compared to FasterCache, and validate that it can work with multi-modal DiTs.\\n\\n| Method | VBench | Latency (s) | Speedup |\\n|----------|----------|----------|----------|\\n| CogVideoX-2B\\t\\t\\t\\t| **82.20**\\t|**152.70**\\t\\t| **1.00x**\\t|\\n| + FasterCache [arXiv, Oct 2024]\\t| **82.13**\\t|**102.32**\\t\\t| **1.49x**\\t|\\n| + AdaCache-fast\\t\\t\\t| **82.00**\\t|**92.51**\\t\\t| **1.65x**\\t|\\n| + AdaCache-slow\\t\\t\\t| **82.46**\\t|**102.47**\\t\\t| **1.49x**\\t|\\n\\n*all new numbers are in bold.*\"}" ] }
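The cache-metric mechanism debated throughout the record above (an L1 change measure over residual features across denoising steps, mapped through thresholds to basis cache rates) can be sketched in a few lines. This is a toy illustration of the idea only, not the authors' implementation; the function names, threshold buckets, and rate values are all illustrative assumptions.

```python
def l1_change(prev, curr):
    """Mean absolute (L1) difference between two flattened feature vectors,
    used as a scalar estimate of how much a residual feature changed
    between consecutive denoising steps."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(prev)

def caching_schedule(step_features, thresholds, basis_rates):
    """Pick a cache rate per step: a small change means the cached feature
    can be reused aggressively; a large change forces recomputation.

    thresholds  : ascending change thresholds delimiting the buckets
    basis_rates : len(thresholds)+1 cache rates (steps to reuse the cache)
    """
    schedule = []
    for t in range(1, len(step_features)):
        change = l1_change(step_features[t - 1], step_features[t])
        # first bucket whose threshold exceeds the observed change
        bucket = next((i for i, th in enumerate(thresholds) if change < th),
                      len(thresholds))
        schedule.append(basis_rates[bucket])
    return schedule
```

With a near-constant feature trajectory the schedule selects the most aggressive reuse rate, while a large jump between steps falls into the last bucket and triggers recomputation.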
Dyo2tS5A8b
What do we learn from inverting CLIP models?
[ "Hamid Kazemi", "Atoosa Chegini", "Jonas Geiping", "Soheil Feizi", "Tom Goldstein" ]
We employ an inversion-based approach to examine CLIP models. Our examination reveals that inverting CLIP models results in the generation of images that exhibit semantic alignment with the specified target prompts. We leverage these inverted images to gain insights into various aspects of CLIP models, such as their ability to blend concepts and inclusion of gender biases. We notably observe instances of NSFW (Not Safe For Work) images during model inversion. This phenomenon occurs even for semantically innocuous prompts, like `a beautiful landscape,' as well as for prompts involving the names of celebrities.
[ "CLIP; NSFW; Interpretability; Gender Bias" ]
https://openreview.net/pdf?id=Dyo2tS5A8b
https://openreview.net/forum?id=Dyo2tS5A8b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tAkVAzj62v", "r7gvHDARw9", "Moyw80HQmO", "44p8YOJYw1", "2L841yKhMs" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732777197325, 1730745002001, 1730708640937, 1730103151710, 1730112226518 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission571/Authors" ], [ "ICLR.cc/2025/Conference/Submission571/Reviewer_5AfH" ], [ "ICLR.cc/2025/Conference/Submission571/Reviewer_Ute6" ], [ "ICLR.cc/2025/Conference/Submission571/Reviewer_c6Ws" ], [ "ICLR.cc/2025/Conference/Submission571/Reviewer_xhP6" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper investigates CLIP models using image inversion techniques to analyze their learned representations and biases. Their analysis reveals three findings: (a) CLIP models can effectively blend different concepts in inverted images, (b) models contain embedded NSFW associations that emerge even from innocent prompts, and (c) strong gender biases exist, particularly in occupational and status-related concepts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Strengths:\\n- The analysis shows that CLIP models trained on more data are more amenable to image inversion (better quality of inverted images\", \"weaknesses\": [\"Weaknesses:\", \"No novelty in methodology or findings. The problem of image inversion has been studied in the context of discriminative (deepdream and papers cited in the submission) and generative models (e.g., https://arxiv.org/abs/2405.15012, https://dl.acm.org/doi/abs/10.1145/3372297.3417270). The problem of identifying CLIP biases has also been studied in the past (e.g., https://arxiv.org/abs/2311.05746, https://ojs.aaai.org/index.php/AIES/article/view/31657 and references therein).\", \"Fairly indirect approach to study model biases. 
The image inversion approach proposed in this paper requires a second human-in-the-loop inspection of inverted images to identify biases. This can be tedious, error-prone, and not scalable. Also, it is unclear if there is a principled way to quantify the extent to which a given model is biased w.r.t. a given concept, which is essential for making progress in this direction of forgetting / unlearning undesirable concepts.\", \"This paper specifically focuses on CLIP models, even though the image inversion approach can be applied to more recent (and better performing) image-text models. Studying biases across models (e.g., do inverted images w.r.t. CLIP models transfer to other models) would be interesting.\"], \"questions\": [\"Chris Olah's work on feature visualization (https://distill.pub/2017/feature-visualization/) shows that applying image inversion on Fourier-transformed images can yield better results; it would be interesting to see if the results are different if this Fourier-based regularization technique is used.\", \"Is there a way to leverage the proposed image inversion approach to mitigate identified biases in CLIP models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents an insight into embeddings learnt by CLIP via the lens of model inversion. Since the data used for training CLIP is proprietary, the authors essentially leverage model inversion to generate images whose embeddings align closely with the embeddings of certain text prompts. 
An analysis leads to the following conclusions on both safety and model capabilities: a) Seemingly normal prompts elicit NSFW images unexpectedly, specifically for many female celebrities, b) there exists prevalent gender bias based on providing prompts on factors like profession, etc., and c) the images represent blending in the semantic space as expressed by concept blending in the prompt, implying the model\\u2019s understanding of semantic relationships (as opposed to more visual, pixel level features). The paper is accompanied by appropriate visualizations and statistics where apt to support the presented claims.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This work presents an interesting analysis on interpreting embedding-based image features from the widely popular CLIP model (where the information about training data is proprietary). It flags important drawbacks elicited via model inversion which point out potential flaws in the training data, an important issue given that CLIP image embeddings are widely used. Further, the authors have aptly presented the different implications in an organized and coherent fashion, making the paper easy to follow.\", \"weaknesses\": \"While the work presents an interesting analysis, it is unclear how these insights can be concretely leveraged to improve image generation pipelines as of today. Can we inform any of the following? a) Modelling strategies and making models more robust to such potentially bad data points? Any kind of safety finetuning? b) data curation strategies if any? Further, some modelling choices (e.g. choice of transformations) are not well motivated. These questions have been outlined in the next section.\", \"questions\": \"I would love to hear from the authors regarding the following questions:\\n1. From the above section on weaknesses, how can the research community today concretely leverage these insights to inform data, modelling or evaluations? 
For example, how do any of these insights translate into useful modelling pointers for generative diffusion models? Do any of the learnings from CLIP-like models transfer to the generative context?\\n2. Tangentially, this work also demonstrates the ability to inspect training data ~ is it accurate to say that this can imply training data leakage (and inspection via model inversion) and thus there could be evidence of memorization? If yes, how can we ensure downstream models are robust to such inversion analysis?\\n3. How can we improve text rendering based on the analysis presented? What is the intuition behind the choice of regularization being correlated with the text rendering quality? \\n4. On the statement of \\\"Similarly, in CLIP inversion, if an image aligns with a given prompt, its augmentations must align with that prompt as well. \\\" - why was a different set of transformations chosen? What is the motivation? It is surprising that this is a big discriminator of image quality.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper employs model inversion on CLIP models to visualize images that induce high similarity with a variety of text prompts. 
By studying the generated images, the paper concludes that:\\n1) CLIP can blend multiple semantic concepts,\\n2) the prompt of (primarily female) celebrities is close to sexually explicit words and can produce sexually explicit images,\\n3) the inverted images reflect gender biases learned from data,\\n4) CLIP models trained on a larger dataset produce higher quality inversions, and\\n5) CLIP inversions include written text in the images.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Applying model inversion to CLIP models is a suitable way to obtain insights into its proprietary and unavailable training data.\", \"The paper reveals that the CLIP training data was not cleaned from potentially harmful content.\", \"It is interesting to see that CLIP model inversion can produce (somewhat) coherent objects and text\"], \"weaknesses\": \"- While CLIP model inversion is interesting for the sake of scientific curiosity, the paper does not discuss any practical implications on downstream tasks, such as retrieval, classification, segmentation, text-to-image modeling. Sec. 7 (l. 470) even states that \\\"these behaviors do not have to be represented in other operational modes\\\".)\\n- The paper makes rather strong claims which are mostly supported by few qualitative examples. Furthermore, the experimental analysis is not thorough enough. The following elaborates on each contribution.\\n 1) Very few images show blending of concepts, and it is not clear if such combined concepts existed in the original training data (e.g., fictional art). If the study was performed on CLIP models trained on open data, it is relatively easy to verify through retrieval if the given concept combination is novel (or at least to provide stronger support).\\n 2)\\n 1) For the prompt analysis, a word pool of the 10,000 most common English words and 1,913 additional unsafe words are used. 
Such overrepresentation of unsafe words can skew and bias the results and their interpretation. For example, it is likely that the unsafe words are much closer to each other in the embedding space than most of the top 10,000 words which have greater variety. Hence, if one unsafe word is retrieved, multiple similar words will follow, potentially exaggerating the qualitative results of Tab. 2. Extending the most common word list naturally would be a better way to incorporate more words, especially because the top 10k words already include unsafe words.\\n 2) Top-k closest word lists (Tab. 1, 2) can be misleading because they do not reveal the similarity of the words. If the retrieval word list does not contain relevant words, similarity can be low and decrease quickly. The closest words in Tab. 2 contain stop-words, characters, or other non-words such as d, j, to, ok, lol, mm, yea, yo, ha, um, ia, da, si, which suggests that the surrounding words have rather low similarity with the celebrity names.\\n 3) Tab. 4 suggests strong gender biases. However, model inversion is not a faithful generative process but an optimization process. If CLIP was gender biased 55% to 45%, i.e., slightly higher similarity for a prompt when depicting one gender over the other, an optimization process will always seek to generate that one gender. As a result, model inversion likely exacerbates model biases. This could explain the very one-sided values in Tab. 4, which do not allow conclusions to be drawn about the severity of the CLIP model's biases.\\n 4) \\\\+ 5. The findings are not particularly surprising and not quantitatively supported, e.g. through prompt similarity or OCR. As stated in line 434, it is well known that CLIP recognizes text in images.\\n- While the paper presents some insights from model inversion, it does not try to explain them. E.g., could a confounding latent concept such as \\\"(female) celebrity\\\" cause these findings? 
An ablation could test for regular male and female names to support this hypothesis. Similarly, more general concepts/words such as \\\"woman\\\" or \\\"man\\\" could be tested.\\n- The paper does not try to make a connection to real images. Inverted images are out-of-distribution for CLIP. Are the image-text similarities also out-of-distribution, e.g., much higher than what real images typically achieve? Do the conclusions change if the inversion process is restricted/regularized to similarity magnitudes of real images?\", \"minor\": [\"L. 107: The abbreviation 'TV' is used without prior introduction.\", \"L. 146: Missing citation\", \"L. 181: Equation should be numbered; x and p are not introduced\", \"Sec. 5 seems to better fit as Sec. 4.5\", \"Tab. 4 extends beyond page boundaries and there is not enough whitespace between caption and main text\", \"While the text mentions ViT-B16 (OpenAI?) is used unless otherwise specified, it would be clearer if the captions of each image specify the model\"], \"questions\": [\"Please address the points raised in the Weaknesses section. Here are additional relevant questions:\", \"How do the findings affect typical downstream tasks of CLIP (embeddings)?\", \"The generated images of Fig. 4 seem to be most surprising because the prompts are harmless. Is there a reasonable explanation for this behavior? The paper does not include a quantitative evaluation such as the ratio of flagged images by the stable diffusion safety checker, e.g., when generating 100 images of each of the three prompts, how many are flagged?\", \"Please explain the hypothesis of line 320 that the prompts of Fig. 4 are near to NSFW language when Fig. 1 (top) only shows safe words as close in the embedding space.\", \"What is the similarity value of each word in Tab. 1 (top) and in Tab. 2? Do the unsafe words have a high similarity with celebrity names or other harmless words?\", \"Regarding Sec. 
4.3, what if the prompt is \\"a successful male or female student in university\\"? Is the bias still one-sided?\", \"In this study cosine similarity is optimized in isolation. In practice, CLIP is often used contrastively (the way it was trained), e.g., in zero-shot classification. What happens if the Softmax of the cosine similarity is optimized, e.g., for the three prompts in Fig. 4, or for the different celebrities?\"], \"verdict\": \"The paper applies model inversion to CLIP models, yielding interesting insights about their training data, as well as some expected results regarding model behavior. However, it is unclear how these findings would translate to downstream applications of CLIP. Unfortunately, the experimental evaluation falls short of ICLR publication standards and does not adequately support the claims made. To strengthen the study, quantifying the frequency or severity of each insight wherever possible and seeking explanations for the observed behaviors could provide a more robust analysis. This could involve inspecting open CLIP models where both model and data are available.\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": [\"I did not take ethical concerns into account for the main review and score. These are the potential ethical concerns:\", \"While the paper includes a warning at the beginning of the paper, it could do a better job at protecting the reader who does not want to be exposed to potentially harmful content. For instance, the blurring in Fig. 4 and 5 does not sufficiently censor sexually explicit content. Instead, one solution could involve proper censorship in the main paper (including a brief, objective textual description), while linking to the current figures in the supplementary with a warning. The same applies to the text in Tab.
1 and 2.\", \"The study includes the names of real people (celebrities) for which sexually explicit images (blurred) are generated by the inversion method and unsafe words are retrieved by the CLIP model.\"], \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper examines the biases present in the CLIP model and proposes using model inversion to generate content that is assigned to a chosen class label with high confidence. The authors claim to demonstrate that CLIP can blend concepts, produce NSFW images from seemingly harmless prompts, and exhibit inherent gender bias. They also show that more training data can lead to better model inversions and that textual components are present within the inverted images, which they link to TV regularization not being used in the loss function.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper presents a solid analysis of CLIP models through a novel approach using model inversion. The paper is well written and the motivation is clear. The problematic studied in the paper is timely and will be of good use to the community. To my knowledge, the idea is indeed novel.\\n\\nSpecifically, the authors clearly demonstrate that CLIP models possess the capability to blend concepts, akin to generative models like DALLE and IMAGEN. Their study reveals associations between seemingly harmless prompts and NSFW content, particularly concerning female celebrities, highlighting the necessity for better data curation. Additionally, the paper uncovers gender biases related to professions and roles. Furthermore, the investigation into textual components within inverted images, especially the effects of TV regularization, enriches the understanding of model behavior, as well as unwanted artefacts that might arise from usage of CLIP models. 
Overall, the paper offers good contributions to the understanding of CLIP models, emphasizing both their capabilities and areas for improvement.\", \"weaknesses\": \"Overall, the paper appears to be poorly formatted, giving the impression of being rushed without proper attention to formatting guidelines. For example, Table 4 is misaligned and requires reformatting, and there are 10 unnecessary empty lines between Figure 2 and the text. The same issue occurs with Table 6 in the appendix. This is disappointing, as the text within the paper is well-written and addresses an important topic.\\n\\nAdditionally, while the experiments in Table 4 seem convincing, I believe additional experiments would be beneficial, such as examining potential racial biases (for example, you can do the same as in Table 4 but for black/white people). The paper generally lacks comprehensive experimental analyses - for example, for \\\"a beautiful landscape\\\", \\\"the map of the african continent\\\", or \\\"a scientist conducting groundbreaking research\\\", how many times did you see NSFW images? Do all of them not pass through the safety checker because the residuals (as marked with red squares in your images) are always the same? Is there anything else that appears in these images (for the same prompts) that is flagged as NSFW? The findings are very interesting and leave the reader wanting more, but even after reading through the appendix, the paper falls short. Also, the authors should provide details regarding the model with which they perform classification (this model is vaguely mentioned in line 392).\\n\\nFinally, regarding the fourth point which you raise (line 106), I am not sure what contribution it brings to the paper. Although it is visible that scale improves the quality of the inversions, why is this not obvious?
Furthermore, since the paper does not improve the techniques for inversions, what does this insight contribute to the paper, in particular in examining whether CLIP contains biases or not? In my opinion it would be better to remove this point from the paper as it does not really relate to the problem that you are trying to investigate, but I would love to hear why you think it is important if you decide to keep it.\\n\\nSince there are no space constraints and an additional page is available, I would be inclined to raise my score if the authors reformat the paper, improve its overall structure, and provide further experiments similar to those in Table 4.\", \"other_minor_details\": \"\", \"line_107\": \"The abbreviation \\\"TV\\\" is used without introduction.\", \"line_112\": \"Please clarify what \\\"activating a target class maximally\\\" means.\", \"line_146\": \"Reference error.\", \"line_150\": \"Change \\\"delves\\\" to \\\"delve.\\\"\", \"line_183\": \"Typo (\\\"which\\\").\", \"tables_1_and_2\": \"More words should have an asterisk, e.g., \\\"sh*thead,\\\" and the filtering should be stronger.\", \"line_321\": \"Reference the word lists by providing links (e.g., English words, Naughty Obscene and Otherwise bad words).\\nLines 395 and 396 overlap.\", \"line_448\": \"Typo with a double period.\", \"questions\": \"Does using random affine transformations instead of ColorShift affect the conclusions? I believe an analysis like this would be beneficial as well. You mention in line 198 that it has a significant impact on the quality of images, but it is not clear whether the underlying conclusions change and whether the generated images semantically change or remain the same.\\n\\nWhat classification model is used for experiments in Table 4? 
Could you provide details about the architecture, training data and some performance metrics of the model used please?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DydCqKa6AH
Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels
[ "Zhizheng Liu", "Joe Lin", "Wayne Wu", "Bolei Zhou" ]
Understanding and modeling pedestrian movements in the real world is crucial for applications like motion forecasting and scene simulation. Many factors influence pedestrian movements, such as scene context, individual characteristics, and goals, which are often ignored by the existing human generation methods. Web videos contain natural pedestrian behavior and rich motion context, but annotating them with pre-trained predictors leads to noisy labels. In this work, we propose learning diverse pedestrian movements from web videos. We first curate a large-scale dataset called CityWalkers that captures diverse real-world pedestrian movements in urban scenes. Then, based on CityWalkers, we propose a generative model called PedGen for diverse pedestrian movement generation. PedGen introduces automatic label filtering to remove the low-quality labels and a mask embedding to train with partial labels. It also contains a novel context encoder that lifts the 2D scene context to 3D and can incorporate various context factors in generating realistic pedestrian movements in urban scenes. Experiments show that PedGen outperforms existing baseline methods for pedestrian movement generation by learning from noisy labels and incorporating the context factors. In addition, PedGen achieves zero-shot generalization in both real-world and simulated environments. The code, model, and data are available at https://genforce.github.io/PedGen/.
[ "Pedestrian Movement Analysis", "Human Motion Dataset", "Human Motion Generation" ]
Accept (Poster)
https://openreview.net/pdf?id=DydCqKa6AH
https://openreview.net/forum?id=DydCqKa6AH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uakr6EVHH7", "uBbXYJO7aH", "rEfQGZuU2A", "qyFKYc9BdC", "mjLjRppPt8", "lrE2XINO8t", "kOW4nYSYWJ", "gT56JPCyRS", "btsoPFCfU6", "aTuDHFD4Or", "XJZAJkoMmn", "VNmGaiAvrX", "VIc0Chp4tW", "T70oO55Gt5", "SwU5KPQdLl", "RmrjLddPC4", "HBx8mzg5YR", "FFPAi1VLRN", "F6r6jNsOEz", "9ASsFaYwqc", "7huSF7htGG", "6fRRwUADGD", "6NnET5wxRm", "1H12fyDNaZ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732728754133, 1732520959695, 1732068731134, 1729069937897, 1730015165921, 1732066758612, 1737523475021, 1730607271170, 1732068213215, 1732573240874, 1732564031972, 1732067766969, 1732068977756, 1732069203197, 1734404696652, 1732641800417, 1732586019888, 1732559930731, 1732416552807, 1729684317934, 1732559865229, 1732572494022, 1732069312805, 1732728980675 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_yQtW" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_XLS3" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_M4G7" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_EYrc" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_EYrc" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Area_Chair_Tqr9" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_EYrc" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_yQtW" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Reviewer_M4G7" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ], [ "ICLR.cc/2025/Conference/Submission1930/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer EYrc,\\n\\nThanks again for your insightful comments! We believe we have addressed your comments and are keen to participate in the discussion with you.\\n\\nWe would appreciate it if you would let us know whether our responses sufficiently address your questions and whether you need to see additional visualizations, as the last day to update the materials is Nov. 27th. Thank you once again!\\n\\nBest regards, \\nAuthors of Submission ID 1930\"}", "{\"comment\": \"Apologies for the confusion. I did watch the supplementary videos and noticed that the authors included scenarios set in realistic environments with other pedestrians and vehicles present. These videos are quite misleading and raise questions about how the authors account for these potentially dynamic environments. Could the authors elaborate on how these videos were generated or provide more details on how such dynamics are addressed?\\n\\nAdditionally, while the approach to \\\"populate empty urban spaces in simulation by generating realistic and diverse pedestrian movements\\\" is certainly valid, I would argue that this setup is somewhat limited, particularly since the environment has already been factored in. 
More importantly, regarding the claim that \\\"While the model can generate millions of plausible results, it learns the posterior distribution of which movements are more likely given the current context,\\\" I have questions about the definition of \\\"current context.\\\" From the dataset videos, it appears the \\\"current context\\\" includes both static elements (e.g., the environment) and dynamic elements (e.g., surrounding or accompanying pedestrians). How do the authors ensure the model learns appropriately from a \\\"context\\\" that integrates both static and dynamic factors, while aiming to make predictions under static scenarios alone? In other words, how do the authors decouple the static and dynamic components in the learning process to achieve this separation effectively?\"}", "{\"comment\": \"Thank you for your thorough and insightful comments. We sincerely appreciate the supportive feedback that \\u201cthe concept of integrating contextual information into pedestrian generation is sound,\\\" \\\"the ablation studies examining each factor are thorough and well-executed\\u201d, and \\\"the visual examples provided in both the main paper and supplementary materials are satisfactory.\\\" We address your questions below.\\n\\n**Q1**. Unclear Task Definition: Section 4.1 outlines the overall task definition, but it seems that only the 3D location of pedestrians and the 2D image of the scene in the first frame (t_1) are provided, while all other elements are predicted. Is this interpretation correct? If so, what is actually learned from this setup, considering there could be millions of plausible results? Or do we learn the bias in the training set? Is this a reasonable setup?\\n\\n**A1**. Yes, the model is only given the starting 3D location of the pedestrian and the context factors in the first frame. As a generative model, we aim to capture the real-world pedestrian movement ***distribution*** conditioned on the context factors. 
We believe such a setting is reasonable as we can match the real-world distribution by leveraging large-scale training data and diffusion models so the generated movements are natural and diverse. While the model can generate millions of plausible results, it learns the posterior distribution of which movements are more likely given the current context. Zero-shot generation experiments on the Waymo dataset and the Carla test set in Tab.1 further validate our model\\u2019s strong generalization ability, where it improves mADE by ***0.13*** on Waymo and collision rate by ***0.5%*** on CARLA, in comparison to the best-performing baselines. Thus, we can leverage the learned real-world distribution to populate more realistic pedestrian movements in simulation environments instead of fixed animations.\\n\\n\\n**Q2**. Clarification of Contributions: The overall contributions should be clarified. Lines 97-101 describe the contributions in a confusing manner. For instance, line 98 states, \\u201c1) A new task of context-aware pedestrian movement generation from web videos with unique challenges in dealing with label noise and modeling various motion contexts.\\u201d This seems contradictory since the dataset with noisy labels is presented as a contribution in \\u201cA new large-scale real-world pedestrian movement dataset, CityWalkers, with pseudo-labels of diverse pedestrian movements and motion contexts\\u201d (l. 99). I have significant doubts about the dataset's quality and question whether a noisy dataset can genuinely be considered a contribution.\\n\\n**A2**. The two points of our contribution are not contradictory but complementary. Our CityWalkers dataset is a large-scale dataset that has the most diverse pedestrian movements and motion contexts compared to existing human motion datasets. Our PedGen model aims to learn from the diverse labels and harness the inherent noise from large-scale data by leveraging the partial labels and filtering out the noisy labels. 
Ablation studies in Tab.3a further demonstrate that training with CityWalkers with noisy labels significantly improves performance compared to training on the smaller-scale SLOPER4D dataset with ground truth labels, where training with CityWalkers achieves a mADE of ***1.09*** using the human context compared to a mADE of ***3.82*** from training with SLOPER4D. This could show the utility of CityWalkers in capturing large-scale real-world pedestrian movements, despite the inherent label noise.\"}", "{\"summary\": \"This paper studies a new task of generating context-aware pedestrian movements by learning from web videos with noisy labels. Different from previous human motion generation work that focuses on indoor human motion generation, the proposed method can generate outdoor human movements conditioned on scene information and person identity information. To achieve this goal, this paper also introduces a new dataset named CityWalkers, which contains outdoor urban pedestrian movements from web videos. Experiments on several benchmarks demonstrate the effectiveness of the proposed method.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The contribution of this paper is comprehensive because it contains both a dataset and a method, which can provide a reference for future research in this area.\\n2. The focus of this paper is interesting; generating human motion in outdoor environments is not well studied by previous work, and this paper presents a solution to this task, which may be useful for autonomous driving and related fields.\\n3. This paper is well-written and easy to understand.\", \"weaknesses\": \"Although this paper clarifies their task as motion generation, the proposed method and evaluation metrics are more like motion prediction, i.e., outputting person movements by accepting current condition inputs. The evaluation metrics also do not involve the diversity evaluation of the generated human motion. 
Therefore, I have doubts about the task type of this paper; maybe motion prediction is more accurate.\", \"questions\": \"In weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a method for learning pedestrian movements from web videos by using pre-trained predictors to generate pseudo-labels through off-the-shelf 4D human motion estimation models, despite the inherent noise in the labels. To refine these noisy labels, they introduce the PedGen model, which filters out noise and incorporates conditional inputs that may influence pedestrian behavior, thereby lifting the 2D scene into a 3D representation. The authors intend to provide open access to both the dataset and the model.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper focuses on using noisy labels to learn pedestrian motion, a novel approach with the potential to benefit various research areas.\\n2. The authors contribute a dataset accompanied by a label generation and filtering strategy, addressing the challenge of noise in automated labeling pipelines.\\n3. The results demonstrate performance improvements over baselines, and comprehensive ablation studies are conducted to validate the approach.\", \"weaknesses\": \"1. Although the automated labeling and filtering pipeline is essential, it is a fairly common approach, limiting the novelty of this contribution.\\n2. The baselines used in the comparison experiments appear weak, with only three included, potentially limiting the robustness of the results.\\n3. While using the goal as a conditioning factor is crucial and enhances pedestrian movement prediction, some conditions are often not visible or are difficult to capture in practical applications, such as autonomous driving. 
This raises concerns about the real-world applicability of the proposed setting and whether alternative solutions might address this limitation.\", \"questions\": \"1. How does the proposed method handle scenarios where conditional inputs are unavailable or unreliable, as might be the case in applications like autonomous driving?\\n2. Could the authors elaborate on why only three baselines were chosen for comparison, and whether additional baselines might provide a more comprehensive evaluation?\\n3. Could you clarify the specific novel aspects of the automated labeling and filtering pipeline? Additionally, is there potential for further innovation in this pipeline to enhance its originality, or were there particular design constraints that influenced its current implementation?\\n\\nI am open to reconsidering my final rating if the authors address the concerns raised.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Response\", \"comment\": \"We appreciate all reviewers for the insightful and helpful feedback. We would like to first emphasize our problem setting and the motivation behind it. Our paper aims to address the new task of pedestrian movement generation and **apply it to the simulation environment to synthesize realistic pedestrian animations**. Unlike existing works that tackle long-term pedestrian trajectory prediction, pedestrian movement generation focuses more on learning realistic body movements for the SMPL meshes from real-world videos. Therefore, in Sec.2, we have defined our task as \\u201c***pedestrians continuously make short-term movement decisions on their route to respond to their immediate environment***\\u201d in reference to the literature on pedestrian behavior analysis (Feng et al., 2021). 
To support the application of PedGen to a simulator, we have made two key assumptions to our problem setting.\\n\\nFirst, we assume the global trajectory can be given by another module so we can focus on generating realistic pedestrian body movements. For example, we can use a path planner like A* to generate a plausible path in the simulator. Our model can run sequentially to these modules and use their output goal points as the goal context. Its key novelty is that it can transfer natural and diverse real-world movements to simulation with more degrees of freedom instead of relying on fixed animations in existing simulators (Shan et al., 2023). Second, we only consider the static scene context at the current frame to support populating an empty environment in a simulator with no history information. In our experiments in Tab. 2b, we have shown the effectiveness and importance of incorporating each context factor compared to using no context. Despite PedGen only having one static scene context, the result in Tab. 2b already shows that it can provide PedGen ***6.7%*** and ***8.5%*** improvement in aADE and aFDE compared to the one without it. With a simple modification of stacking history point clouds, our scene context encoder can also encode multi-frame context information.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper introduces a context-aware generative model for realistic pedestrian movement prediction. It leverages a conditional diffusion framework that uses 3D point clouds to capture spatial scene context. PedGen offers a solution suitable for applications in autonomous systems, crowd simulation, and urban planning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Context is an important factor in pedestrian trajectory prediction, which poses a challenge due to the difficulty of identifying and measuring it while predicting future paths. 
This paper proposes a valuable idea, supported by clear visualizations and thorough ablation studies.\", \"weaknesses\": \"1- I find Fig. 3 confusing. Based on the context of the paper, it appears that only one timestep is observed, and the rest are predicted. However, Fig. 3 suggests that the model is fed with the timesteps from t=1 to t=T.\\n\\n2- The learnable mask *m* needs to be explained more in the paper. How is this mask learned?\", \"questions\": \"1- Since pedestrian path generation is done in static settings, the social attributes of pedestrians are not taken into account. Are other pedestrians considered as objects when calculating the collision measure?\\n\\n2- How is the collision rate affected by the ablations?\\n\\n3- The training of masks *m* in Fig. 3 is unclear. How are these masks trained, and which part of the loss function guides this training?\\n\\n4- Looking at Table 3.b, it appears that the goal has a significant effect on error reduction. However, in practice, the goal of a pedestrian is generally unknown when predicting the path. Why is the goal handled as context in this work? Shouldn\\u2019t the model predict the goal as part of the path prediction process? I am also curious to see visualizations with an ablated version where the goal is not provided as input.\\n\\n5- mADE is mentioned to be measured across 50 movements. Does this mean that 50 possible paths were generated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful and constructive feedback. We address your questions below.\\n\\n**Q1**: How does the proposed method handle scenarios where conditional inputs are unavailable or unreliable, as might be the case in applications like autonomous driving?\\n\\n**A1**: To handle unavailable inputs, we have trained PedGen with different combinations of the context factors in Sec. 5.2. 
The results validate the effectiveness of adding context in each setting. To handle unreliable inputs, we have conducted additional experiments on each context factor to study its sensitivity to input noises. We add Gaussian noises with a standard deviation of 0.5 on the scene point cloud, the SMPL beta vector, and the goal position. The results are shown below:\\n| Context Factor | w/o noise | | | |w/noise ($\\\\sigma$=0.5) | | | |\\n|----------------|-----------|------|------|------|------------------------|------|------|------|\\n| | mADE | aADE | mFDE | aFDE | mADE | aADE | mFDE | aFDE |\\n| No | 1.13 | 4.08 | 1.61 | 7.56 | - | - | - | - |\\n| Scene | 1.11 | 3.75 | 1.55 | 6.92 | 1.69 | 4.02 | 2.74 | 7.41 |\\n| Human | 1.09 | 3.24 | 1.61 | 5.95 | 1.83 | 3.50 | 3.08 | 6.22 |\\n| Goal | 0.60 | 1.09 | 0.47 | 1.00 | 1.61 | 2.33 | 2.55 | 2.79 |\\n\\n\\nWe can see that all factors suffer from input noises, with the goal having the most degraded performance, where mADE increases by 168% and aADE increases by 114%. The scene and the human factors have increased less in terms of the average metrics (aADE increases by 7% for scene context and increases by 8% for human context) and still performs better than the baseline without context. However, they are less robust in terms of the min metrics (mADE increases by 52% for scene context and increases by 68% for human context) as it is hard to predict the exact future movement of the dataset with noisy inputs. We will add this experiment to our updated paper.\\n\\n\\n**Q2**. Could the authors elaborate on why only three baselines were chosen for comparison, and whether additional baselines might provide a more comprehensive evaluation?\\n\\n**A2**: As context-aware pedestrian movement generation is a new task, there is no prior work that can directly support this task. 
Hence, we choose state-of-the-art methods from the tasks closest to our problem setting for comparison: MDM is one of the best works for action/text-conditioned human motion generation. HumanMAC is one of the most competitive methods for human motion prediction without context. TRUMANS is the state-of-the-art for indoor human-scene interaction synthesis. Please feel free to suggest other methods we are unaware of so we can compare them.\\n\\n**Q3**: Could you clarify the specific novel aspects of the automated labeling and filtering pipeline? Additionally, is there potential for further innovation in this pipeline to enhance its originality, or were there particular design constraints that influenced its current implementation?\\n\\n**A3**: We agree that our automated labeling and filtering method is common for other tasks. However, our novelty comes from **our problem setting of learning from noisy labels of web videos for diverse pedestrian movements**. This is the first time it has been attempted. In fact, existing approaches in human motion generation all assume the labels are perfect without noise. Moreover, the proposed automated label filtering is inspired by techniques addressing unsupervised anomaly detection, and we adapt them to reduce the noise level in the labels with an iterative procedure. Our experiment results in Tab.2a have shown the effectiveness of our adaptation to the new task, which reduces aADE from ***4.45*** to ***4.32*** by filtering out the anomaly labels and further reduces to ***4.08*** by adding the partial labels. For further innovation, we will identify specific parts of the motion label that have high noise, such as specific timestamps or body joints, instead of filtering out the whole motion in our future work.\"}", "{\"comment\": \"Thanks for the comments. We will address your further questions.\\n\\n**Q1.** I see Figure 8 in Appendix with two rows (No context and Scene only). 
Which of these figures are the result of Goal ablation?\\n\\n**A1.** Sorry for the confusion. We have updated Fig. 8 to add visualizations with both the scene and the goal context for a better comparison. The figure shows that the model using both the scene and the goal context can reach the goal precisely, while the model can still generate plausible pedestrian movements when only using the scene context (especially for the sitting and standing poses in Fig. 8b and Fig. 8d). The model would perform poorly when no context factor is provided. The qualitative results further demonstrate the effectiveness of the scene context when the goal is not provided as input.\\n\\n**Q2.** Why is the collision rate (and its values after ablations) only calculated in CARLA and not in the two other datasets?\\n\\n**A2.** We only calculate the collision rate in CARLA as it is a simulator that has ground truth scene geometry and a collision checker. On the contrary, it is challenging to compute the collision rate in real-world datasets due to a lack of ground truth labels for the scene geometry, like the ground height and meshes of the obstacles. Therefore, we only evaluate the ADEs and MDEs on these datasets by comparing with the ground truth pedestrian movement.\"}", "{\"comment\": \"Thank you for responding to questions.\\n\\n-I see Figure 8 in Appendix with two rows (No context and Scene only). Which of these figures are the result of Goal ablation?\\n\\n-Why is the collision rate (and its values after ablations) only calculated in CARLA and not in the two other datasets?\"}", "{\"comment\": \"Thank you for the constructive feedback. We appreciate your positive comments that this work \\\"proposes a valuable idea, supported by clear visualizations and thorough ablation studies.\\\" We address your questions below.\\n\\n**Q1**: Since pedestrian path generation is done in static settings, the social attributes of pedestrians are not taken into account. 
Are other pedestrians considered as objects when calculating the collision measure?\\n\\n**A1**: As mentioned in the global response, our model serves as the first method for the new task of pedestrian movement generation for synthesizing realistic pedestrian movements in simulation, and we use the static scene context to better support populating empty environments. To evaluate generation in simulation, we only experimented on static scenes without dynamic objects in the CARLA simulator to measure the collision rate. The experiment shows that our model already implicitly learns the social attributes of pedestrians. As shown in Tab.1, PedGen improves mADE from ***1.31*** to ***1.13*** on CityWalkers and from ***3.03*** to ***2.90*** on Waymo in real-world scenarios by conditioning on the scene context, which contains the other pedestrians' point cloud. We could further iteratively generate future movements based on the context at the latest time step so the model can adaptively update its predicted movements according to the behaviors of other pedestrians. \\n\\n\\n**Q2**: How is the collision rate affected by the ablations?\\n\\n**A2**: An additional ablation experiment in the CARLA test set shows that adding context factors reduces the collision rate and improves the physical plausibility. See the table below: \\n| Context Factor | Collision Rate % | Foot Floating Rate %|\\n|----------------|-----|-----|\\n| No | 2.1 | 5.2 |\\n| Scene | 1.6 | 2.6 |\\n| Human | 1.9 | 0.7 |\\n| Scene+Human | 1.5 | 0.3 |\\n| Goal | 0.0 | 0.0 |\\n\\n\\nThe scene context can contribute to both a lower collision rate of ***1.6%*** and a lower foot floating rate of ***2.6%***, while the human context is more useful in reducing the foot floating rate to only ***0.7%***. Adding both the scene and the human context can further improve the physical plausibility of the generated movements. Using the goal context is the most crucial factor and can reduce the failure rate to 0. 
We will add this experiment to our updated paper.\\n\\n**Q3**: The training of masks *m* in Fig. 3 is unclear. How are these masks trained, and which part of the loss function guides this training?\\n\\n**A3**: To train the partial labels with masking embeddings $m$, we first define a label mask $\\\\boldsymbol{M}$, where $\\\\boldsymbol{M}_t==1$ indicates the label at timestep $t$ is missing. Then we add the mask embedding to the original noisy sample as $\\\\boldsymbol{x}^k = \\\\boldsymbol{x}^k(1-\\\\boldsymbol{M}) + \\\\boldsymbol{m}\\\\cdot \\\\boldsymbol{M}$ to replace the missing timesteps with the mask embedding $\\\\boldsymbol{m}$ and feed to the network to output the denoised prediction $\\\\hat{\\\\boldsymbol{x}}$ similar to Sec.4.2. We then update the loss with the masked predictions and masked ground truth as $L = L(\\\\boldsymbol{x}(\\\\boldsymbol{M}), \\\\hat{\\\\boldsymbol{x}}(\\\\boldsymbol{M}))$ so it only operates on the labels at the available timesteps. All losses will guide the training with the masked outputs. We will update the corresponding section in our paper to make the training of masks $\\\\boldsymbol{m}$ clearer.\\n\\n**Q4**: Looking at Table 3.b, it appears that the goal has a significant effect on error reduction. However, in practice, the goal of a pedestrian is generally unknown when predicting the path. Why is the goal handled as context in this work? Shouldn\\u2019t the model predict the goal as part of the path prediction process? I am also curious to see visualizations with an ablated version where the goal is not provided as input.\\n\\n**A4**: As mentioned in the global response, our work focuses on generating realistic and detailed pedestrian local movements as SMPL body meshes and synthesizing pedestrian animations in simulation. Therefore, we assume the global path is given by a path planner. 
We will add visualizations to compare the model generation without the goal context to further demonstrate the effectiveness of the other context factors.\\n\\n**Q5**: mADE is mentioned to be measured across 50 movements. Does this mean that 50 possible paths were generated?\\n\\n**A5**: Yes, we generate 50 possible movements, and mADE measures the minimum ADE (average displacement error) among the 50 movements compared to the ground truth.\"}", "{\"comment\": \"**Q3**. Need for More Explanations About Results: Based on Table 2(b), incorporating scene information yields only minor improvements. The priority order appears to be Goal, Human, and Scene, which raises questions about the usefulness and necessity of including Scene in the overall model. The authors should provide an explanation for this observation.\\n\\t\\n**A3**: It is necessary to include the scene context in the PedGen model, as the experiment results show all three context factors can lead to improvements on top of each other, and incorporating all context factors leads to the best performance. Also, the performance gain from incorporating the scene context is already significant. As shown in Tab.2, filtering out the noisy labels and adding the partial labels improves mFDE from ***1.64*** to ***1.61***, while using the scene context further improves mFDE to ***1.55***. It is worth noting that we use motion prediction metrics ADE and FDE in our real-world experiments by comparing the error between the generated movements and the ground truth. The results cannot truly showcase the effectiveness of the scene context, as the ground truth is only one of the plausible results, and the scene context is more useful in eliminating bad predictions that collide with other objects rather than making the predictions match exactly to the ground truth. 
To better showcase the effectiveness of the scene context in other metrics, we have conducted additional ablations of the context factors in the CARLA simulator and evaluated the performance using collision rate and foot floating rate. The results are shown below:\\n\\n| Context Factor | Collision Rate % | Foot Floating Rate % |\\n|----------------|------------------|----------------------|\\n| No | 2.1 | 5.2 |\\n| Scene | 1.6 | 2.6 |\\n| Human | 1.9 | 0.7 |\\n| Scene+Human | 1.5 | 0.3 |\\n| Goal | 0.0 | 0.0 |\\n\\nWe can see that the scene context is more helpful than the human context in reducing the collision rate (***0.5%*** improvement compared to ***0.2%***), while the goal remains the most critical context factor. We will add this experiment to our updated paper.\\n\\n**Q4**. Visual Results for Complex Scenes: More visual examples are needed to illustrate how pedestrians navigate around obstacles, such as chairs, trees, or cars, to reach their targets. Without these examples, the scene context seems to have limited utility, as indicated by the table.\\n\\n**A4**: We will include more visual results in the supplementary to demonstrate the effectiveness of the scene context in obstacle avoidance. \\n\\n\\n**Q5**. Given that the ground truth (GT) is primarily generated using existing methods, how do the authors ensure consistency across these various methods? For example, is the generated depth map aligned with the generated 4D human pose? If there is a discrepancy, what potential drawbacks arise from this mismatch, and how can the proposed method address them?\\n\\n**A5**: We have mentioned in supp. Sec. C the potential inconsistencies between the depth map label and the 4D human pose label and outline our ways to address this issue. 
To summarize, we multiply the depth map label by a factor $\\\\gamma$, which equals the ratio between the depth from the SMPL root translation of the first frame and the depth of the human root\\u2019s projection in the 2D depth map label to align the starting position of the motion and its surrounding scene context. As shown in the bottom left examples of Fig.2, the scene and movement labels fit well after alignment.\\n\\n**Q6**. How do the authors convert the depth labels into a 3D point cloud without knowing the camera parameters (l.310)? Am I missing any assumptions here?\\n\\n**A6**. As stated in supp. Sec. C, we estimate the camera intrinsics by setting the focal length to be the diagonal pixel length of the image and the optical center to be the center of the image. While such estimation may lead to additional errors, our experiments in Tab. 2b show that learning from the noisy scene context labels can still benefit pedestrian movement generation compared to using no context, reducing aADE and aFDE by ***6.7%*** and ***8.5%***, respectively.\"}", "{\"comment\": \"**Q7**. It appears that only static objects and elements are considered in the scene context, as there is no explicit modeling of dynamics in the context encoder. Do the authors intentionally exclude dynamics during scene context modeling, or are these dynamics treated as static objects? Is it valid to assume that other road participants, such as pedestrians, are not significant in the modeling process for pedestrian motion generation? I find this assumption questionable, especially since collisions are used as evaluation metrics in this study.\\n\\n**A7**: As discussed in our global response, the key motivation of our model is to **populate empty urban spaces in simulation by generating realistic and diverse pedestrian movements**, as shown in the supp. video. 
Therefore, we only consider the static environment so we can start generation from an empty static scene without pedestrians, though it is possible to extend our model to incorporate historical information. In addition, as our focus is more on generating detailed human body movements as SMPL meshes than the global trajectory, we assume a path planner like A* already models the scene dynamics when predicting the global path. We could further iteratively generate future movements based on the context at the latest time step so the model can adaptively update its predicted movements according to the behaviors of other pedestrians. In training, when extracting the scene context in urban environments, the context often includes road participants like other pedestrians and local obstacles such as benches and fences, so the model has already learned some collision avoidance capabilities implicitly (as shown in the ablation table above, adding the scene context reduces collision rate from ***2.1%*** to ***1.6%***).\"}", "{\"metareview\": \"This paper introduces a context-aware generative model for realistic pedestrian movement prediction. This paper proposes a valuable idea, supported by clear visualizations and thorough ablation studies. The results demonstrate performance improvements over baselines, and comprehensive ablation studies are conducted to validate the approach. The major concerns of reviewers include implementation details, methodological reasonableness, unclear task definitions, and clarification of contributions. The author's response has addressed most of these issues. So the final vote is acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The authors improved the readability of the paper during the rebuttal, including the addition of method descriptions, additional experiments, and visualized results.\"}", "{\"comment\": \"Thanks for the suggestion. We have updated Figure 8 in the appendix, and now the first timestep pose is white. 
Note that the initial pose in the first timestep is also generated instead of observed, as our model only requires the initial starting point of the movement. Therefore, the generated initial pose is also influenced by the input context factors, so there is a significant shift in the initial pose between using the context and not using the context. Generating the initial pose makes our model flexible and facilitates populating an empty simulation environment.\"}", "{\"comment\": \"Thank you for updating Figure 8 in the appendix.\\n\\n- Could you please include the first timestep movement (the observed timestep) with a slightly different color? It seems like figure 8 only includes the generated movements. I am trying to understand why there is a significant shift in the direction and pose between each of these generated movements.\"}", "{\"comment\": \"Thanks for the comments. We appreciate you find our key motivation to populate empty urban spaces in simulation \\u201cis certainly valid.\\u201d As for our experiment in real-world environments, we use all the context factors, including the ground truth goal points from the dataset. We believe the dynamic environment is already considered and addressed when obtaining these goal points, and the main focus of PedGen is to generate local body movements instead of planning the global trajectory.\\n\\nFor your second question about the definition of the \\u201ccurrent context,\\u201d note that our context factors include not only the static scene but also the goal point and the SMPL body shape parameter. Our main model should include all three context factors, and the model variant that only considers the scene context is only used to ablate the effectiveness of each context factor. 
We believe the dynamic components are already modeled in the goal context factor from another module, such as A* path planner in simulation or a motion prediction model in real-world applications, and our model focuses on using the local static scene context and the SMPL body shape to generate more plausible local body movements while reaching the goal. For example, in Fig. 8 of the supplementary, incorporating only the static scene context can help generate a sitting pose when there is a bench at the starting point and a walking upward movement when there is a slope ahead. In these cases, the static components are also critical in determining the detailed local poses other than the goal points.\"}", "{\"title\": \"Paper Update\", \"comment\": \"Thanks again for all reviewer's suggestions on our work.\\nWe have updated our submission. Here's the summary of the modifications to our paper to address some of the questions.\\n1. We have added more formulas to the training with the mask embedding $\\\\boldsymbol{m}$ to make it clearer in Sec. 4.3, as suggested by Reviewer EYrc Q3.\\n2. We have conducted additional experiments to show how each context factor would affect the collision rate and the foot floating rate in the CARLA simulator in Tab. 2b, as asked by Reviewer EYrc Q2.\\n3. We have added additional visualizations to better show the utility of the scene context compared to not using the context in Fig. 8 in the appendix, as asked by Reviewer EYrc Q4 and Reviewer yQtW Q4.\"}", "{\"summary\": \"In this work, the authors propose to learn pedestrian movements from web videos. To this end, they curate a large-scale dataset called CityWalkers that captures real-world pedestrian movements in urban scenes from YouTube, with some extra efforts to generate pseudo-GT labels and remove the low-quality labels. 
Given this dataset, the authors propose a generative model where a context encoder is introduced to incorporate various context factors, including goal, human, and scene, in generating realistic pedestrian movements in urban scenes.\\n\\nExperiments show that the proposed model outperforms existing baseline methods for pedestrian movement generation by learning from extra data and incorporating the context factors.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The concept of integrating contextual information into pedestrian generation is sound and has already demonstrated effectiveness in related trajectory generation tasks.\\n2. The ablation studies examining each factor are thorough and well-executed.\\n3. The visual examples provided in both the main paper and supplementary materials are satisfactory.\", \"weaknesses\": \"1. Unclear Task Definition: Section 4.1 outlines the overall task definition, but it seems that only the 3D location of pedestrians and the 2D image of the scene in the first frame (t_1) are provided, while all other elements are predicted. Is this interpretation correct? If so, what is actually learned from this setup, considering there could be millions of plausible results? Or do we learn the bias in the training set? Is this a reasonable setup?\\n2. Clarification of Contributions: The overall contributions should be clarified. Lines 97-101 describe the contributions in a confusing manner. For instance, line 98 states, \\u201c1) A new task of context-aware pedestrian movement generation from web videos with unique challenges in dealing with label noise and modeling various motion contexts.\\u201d This seems contradictory since the dataset with noisy labels is presented as a contribution in \\u201cA new large-scale real-world pedestrian movement dataset, CityWalkers, with pseudo-labels of diverse pedestrian movements and motion contexts\\u201d (l. 99). 
I have significant doubts about the dataset's quality and question whether a noisy dataset can genuinely be considered a contribution.\\n3. Need for More Explanations About Results: Based on Table 2(b), incorporating scene information yields only minor improvements. The priority order appears to be Goal, Human, and Scene, which raises questions about the usefulness and necessity of including Scene in the overall model. The authors should provide an explanation for this observation.\\n4. Visual Results for Complex Scenes: More visual examples are needed to illustrate how pedestrians navigate around obstacles, such as chairs, trees, or cars, to reach their targets. Without these examples, the scene context seems to have limited utility, as indicated by the table.\", \"questions\": \"Please address my concerns in weakness as well as in below.\\n\\n1. Given that the ground truth (GT) is primarily generated using existing methods, how do the authors ensure consistency across these various methods? For example, is the generated depth map aligned with the generated 4D human pose? If there is a discrepancy, what potential drawbacks arise from this mismatch, and how can the proposed method address them?\\n2. How do the authors convert the depth labels into a 3D point cloud without knowing the camera parameters (l.310)? Am I missing any assumptions here?\\n3. It appears that only static objects and elements are considered in the scene context, as there is no explicit modeling of dynamics in the context encoder. Do the authors intentionally exclude dynamics during scene context modeling, or are these dynamics treated as static objects? Is it valid to assume that other road participants, such as pedestrians, are not significant in the modeling process for pedestrian motion generation? 
I find this assumption questionable, especially since collisions are used as evaluation metrics in this study.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Legal compliance (e.g., GDPR, copyright, terms of use)', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"The authors address potential ethical issues in both the main paper and supplementary materials.\\n\\nAs I am not an ethics reviewer, I would like to highlight these concerns and recommend that they be reviewed by experts in the field.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the further discussion. We will clarify the first point. We believe that modeling the real-world pedestrian movement distribution $Q(X)$ is complicated, and many more factors can influence the posterior distribution of pedestrian movement in addition to the dynamic scene context, such as the weather, social norms, group dynamics, and a person\\u2019s mood. As the first paper addresses the task of pedestrian movement generation, it is challenging to incorporate and validate all the possible context factors and learn the exact real-world distribution $Q(X) = P(X|Y_1, Y_2, ..., Y_N)$. We believe deciding which context factor $Y_i$ should be included in the model depends on its application. For example, in trajectory prediction in autonomous driving, $Y_i$ can include the dynamics of other agents, but it does not contain personal characteristics as it does not require predicting the local body movements. However, as the main motivation of PedGen is to populate empty urban spaces in simulation with detailed body movements, we identify the three most fundamental factors for our application to be $Y_1$=static scene context, $Y_2$=personal characteristics, and $Y_3$=goal points. 
These three factors are already sufficient to support our main task, and hence, we only aim to learn the distribution $P(X|Y_1=(\\\\mathrm{scene}), Y_2=(\\\\mathrm{human}), Y_3=(\\\\mathrm{goal}))$. As for the scene context, our goal is not to show the real-world distribution $Q(X)$ is equivalent to $P(X|Y_1=(\\\\mathrm{scene}))$ but to prove the effectiveness of each factor by showing $Q(X)$ is closer to $P(X|Y_1=(\\\\mathrm{scene}))$ than $P(X)$. As CityWalkers is a large-scale dataset, the learned posterior distribution $P(X|Y_1=\\\\mathrm{scene})$ has strong generalization ability. As shown in Tab. 2b, incorporating the scene context can reduce aADE from 4.08 to 3.75 on the validation set of CityWalkers with novel scenes unseen during training and reduce the collision rate from 2.1\\\\% to 1.6\\\\% in zero-shot deployment on the CARLA test set. These results show that the learned distribution is generalizable and does not overfit the training data.\"}", "{\"title\": \"Concerns addressed\", \"comment\": \"Thank you for the detailed and informative feedback. I appreciate the authors' efforts to address the concerns raised in my initial review.\\n\\nI recognize that this is a novel task with limited baselines available for direct comparison. The task of learning from noisy web video labels is fascinating, and the additional results provided demonstrate robustness to noisy inputs. Furthermore, the design shows adaptability in handling unavailable and varied inputs, which strengthens its practical applicability. Based on the authors\\u2019 thorough rebuttal and the new insights provided, I will increase my final rating accordingly.\"}", "{\"comment\": \"Thank you for your valuable and helpful feedback. 
We sincerely appreciate the supportive comments that \\\"The contribution of this paper is comprehensive,\\\" \\u201cThe focus of this paper is interesting,\\u201d and \\u201cThis paper is well-written and easy to understand.\\u201d We address your question below.\\n\\n**Q1**. Although this paper clarifies their task as motion generation, the proposed method and evaluation metrics are more like motion prediction, i.e., outputting person movements by accepting current condition inputs. The evaluation metrics also do not involve the diversity evaluation of the generated human motion. Therefore I have doubts about the task type of this paper; maybe motion prediction is more accurate.\\n\\n**A1**. In our setting, defining our task as pedestrian movement generation is more relevant to the key motivation of PedGen, which is populating empty urban spaces in simulation by generating realistic and diverse pedestrian movements (shown in the supp. video). We also evaluate the context awareness and the physical plausibility of the generated movements in simulated environments on the CARLA test set for this application. Another application is to predict future pedestrian movements in the real world, and hence we use motion prediction metrics like ADE and FDE. For diversity evaluation, we use the average pairwise distance (APD) between all predictions and evaluate the result on each context factor in the table below:\\n| Context Factor | APD |\\n|----------------|------|\\n| No | 38.6 |\\n| Scene | 29.8 |\\n| Human | 21.7 |\\n| Goal | 10.2 |\\n\\n\\nThe results show that the model with no context achieves the best diversity of 38.6 APD, whereas adding the goal context reduces APD to only 10.2. This is expected as adding context factors reduces diversity by eliminating the implausible results. 
Therefore, we do not add APD as our metric, since comparing the generation diversity of models with different context factors is meaningless in our setting.\"}", "{\"comment\": \"Dear Reviewer XLS3,\\n\\nThanks again for your supportive feedback! We would appreciate it if you would let us know whether our responses sufficiently address your questions and whether you need to see additional visualizations, as the last day to update the materials is Nov. 27th. Thank you once again!\\n\\nBest regards, Authors of Submission ID 1930\"}" ] }
DxT3e2f1jc
Video-Infinity: Distributed Long Video Generation
[ "Zhenxiong Tan", "Xingyi Yang", "Songhua Liu", "Xinchao Wang" ]
Diffusion models have recently achieved remarkable results for video generation. Despite the encouraging performances, the generated videos are typically constrained to a small number of frames, resulting in clips lasting merely a few seconds. The primary challenges in producing longer videos include the substantial memory requirements and the extended processing time required on a single GPU. A straightforward solution would be to split the workload across multiple GPUs, which, however, leads to two issues: (1) ensuring all GPUs communicate effectively to share timing and context information, and (2) modifying existing video diffusion models, which are usually trained on short sequences, to create longer videos without additional training. To tackle these, in this paper we introduce Video-Infinity, a distributed inference pipeline that enables parallel processing across multiple GPUs for long-form video generation. Specifically, we propose two coherent mechanisms: Clip parallelism and Dual-scope attention. Clip parallelism optimizes the gathering and sharing of context information across GPUs which minimizes communication overhead, while Dual-scope attention modulates the temporal self-attention to balance local and global contexts efficiently across the devices. Together, the two mechanisms join forces to distribute the workload and enable the fast generation of long videos. Under an 8 x Nvidia 6000 Ada GPU (48G) setup, our method generates videos up to 2,300 frames in approximately 5 minutes.
[ "diffusion model", "video generation" ]
https://openreview.net/pdf?id=DxT3e2f1jc
https://openreview.net/forum?id=DxT3e2f1jc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pxbdDtpbBp", "aqyFmtYYVv", "ZaQU1OFbVk", "PCehTVbUJt", "NWobWaWEt6" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730526534726, 1730528365674, 1730707883201, 1732603422233, 1730623701760 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6150/Reviewer_4vVA" ], [ "ICLR.cc/2025/Conference/Submission6150/Reviewer_x3cC" ], [ "ICLR.cc/2025/Conference/Submission6150/Reviewer_pci8" ], [ "ICLR.cc/2025/Conference/Submission6150/Authors" ], [ "ICLR.cc/2025/Conference/Submission6150/Reviewer_tAHD" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents Video-Infinity, a novel framework for generating long-form videos using distributed diffusion models across multiple GPUs. This approach aims to reduce the inference time and resource demands typically associated with long video generation. The paper proposes two methods: Clip parallelism and Dual-scope attention, which optimize inter-GPU communication and temporal attention across frames, respectively. The methodology enables the generation of videos up to 2,300 frames in just 5 minutes, faster than existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The integration of Clip parallelism and Dual-scope attention is a novel approach that effectively addresses the scalability and efficiency challenges in video generation.\\n\\n2. The paper demonstrates the ability to generate longer videos much faster than current methods, achieving substantial reductions in generation time.\\n\\n3. Experiments are conducted to validate the performance, showcasing significant improvements over other methods in terms of speed and video length capabilities.\", \"weaknesses\": \"1. The method of synchronizing context across GPUs, crucial for maintaining temporal coherence, is not discussed in detail.\\n\\n2. 
While the framework improves efficiency, there is not much discussion on how these gains impact the qualitative aspects of the videos, such as resolution and realism, particularly under complex scene dynamics.\", \"questions\": \"1. How does the synchronization latency affect the continuity and quality of the video, especially in dynamic scenes? Are there mechanisms in place to mitigate negative impacts if synchronization is delayed?\\n\\n2. How does the proposed model perform with high-motion sequences or videos requiring rapid scene changes, and what are the limitations of the current approach in handling such dynamics? \\n\\n3. The paper focuses on generating videos from a limited set of prompts and scenarios. How well does the approach generalize to a wider variety of video content or more complex scenes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Video-Infinity, a distributed inference framework designed to enable efficient generation of long videos using diffusion models. It tries to address a challenge in video generation: the resource-intensive nature of long-form content, which often restricts video length and quality due to memory and computation limits. The proposed approach leverages two main innovations: Clip parallelism, which optimizes the distribution and synchronization of context information across multiple GPUs, and Dual-scope attention, which balances local and global temporal self-attention to maintain semantic coherence without requiring additional model training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The empirical results are persuasive, with Video-Infinity achieving a 10x improvement over comparable methods like FIFO-Diffusion and being significantly faster than alternatives like Streaming T2V.\\n\\n2. 
The paper is well-organized, clearly outlining the technical details, methodology, and communication strategies.\", \"weaknesses\": \"1. It looks like this work adopts the idea from DistriFusion [1]. While the authors claim to tackle a more challenging problem, the dimensionality of frames, from a technical standpoint, is actually much simpler to manage compared to the problems addressed in DistriFusion.\\n\\n2. How does this method impact frame-to-frame continuity? I noticed that many of the generated videos in the Supplementary Material exhibit noticeable continuity issues. The authors do not seem to have adequately addressed this problem. Additionally, many other generated long videos can only display repetitive motions and clips.\\n\\n3. The evaluation lacks comprehensiveness, as the authors have only demonstrated their method on a single model, VideoCrafter2. It remains unclear whether the approach is effective across a broader range of model architectures. For instance, how well does this method generalize to new architectures like DiT? Additionally, what is the performance impact on these models? \\n\\n4. It's more of an engineering work; the novelty of its contribution is not sufficient. \\n\\nI'm sure it needs dedicated effort to apply this method to every new model architecture.\\n\\n[1] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models, CVPR'24\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes Video-Infinity, a distributed inference pipeline designed for long-form video generation using diffusion models. The framework leverages two main mechanisms: Clip parallelism, which distributes video segments across multiple GPUs to improve processing efficiency, and Dual-scope attention, which balances local and global temporal contexts across devices. 
Together, these components enable Video-Infinity to generate lengthy, coherent videos with reduced memory overhead. On an 8 \\u00d7 Nvidia 6000 Ada GPU setup, the framework can produce videos up to 2,300 frames in approximately 5 minutes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work brings incremental novelty by adapting distributed parallelism specifically for long-form video generation. It introduces a dual-scope attention mechanism to balance local and global temporal interactions, ensuring coherence across extended sequences. The clip parallelism approach further enables efficient processing of video clips across GPUs, effectively handling the unique scalability and memory demands of video data. These adaptations, including optimizations for temporal continuity, showcase Video-Infinity\\u2019s tailored application of distributed inference to the distinct challenges of generating coherent long videos.\\n\\n2. The speed-up performance is great. The proposed Clip parallelism and Dual-scope attention mechanisms optimize inter-device communication and memory requirements, leading to faster processing times and scalability for generating extended video sequences. It could reduce the inference time by up to 52%.\", \"weaknesses\": \"1. Performance. In Table 2 under the 64-frame setting, although the proposed work achieved the highest overall score, it did not show clearly better results than the other baselines.\\n\\n2. Results on longer context. This work claims the capability to generate longer video clips, yet it only shows results for a maximum of 192 frames in Table 2. Since it emphasizes the long-video generation ability, I would suggest putting more quantitative results on longer videos.\\n\\n3. Results on memory usage comparison. 
This work lacks a comparison of memory overhead reduction to demonstrate the efficiency of the method.\", \"questions\": \"Has \\u201cVideo-Infinity\\u201d been tested on different types of video content, such as fast-moving scenes or varying lighting conditions, which might challenge the coherence of frame transitions? How robust is the model across these diverse scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents Video-Infinity, a distributed inference pipeline for long-video generation using diffusion models. It introduces two techniques for the main challenges in long-video generation. Clip parallelism divides a long clip generation task into several short clips to address the high GPU memory usage. Dual-scope attention gathers local and global context for temporal self-attention to generate a consistent long video. It compares with FreeNoise, StreamingT2V, and OpenSora 1.1V.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n1. It is a training-free inference pipeline while extending the baseline model generation capacity.\\n2. ***Dual-scope Attention*** provides a new view of gathering the global and local context for high-fidelity long video generation. The generation results are impressive. It might provide insight into the training scheme or new architecture design.\", \"weaknesses\": \"1. The novelty of **Clip Parallelism** is limited. The paper merely migrates DistriFusion[1] to the video diffusion model, where DistriFusion splits a large image into patches while this paper splits a long video into short clips. 
The distributed modules are similar to the sparse operations in DistriFusion[1], except for extending the sparse 2D convolution to the 1D/3D temporal convolution with different padding schemes. Also, the *GroupNorm* modification is similar. Moreover, DistriFusion[1] further introduced *Corrected asynchronous GroupNorm*, which is more efficient than the paper's implementation since the asynchronous communication can be pipelined into the computation.\\n1. The paper didn't compare the video quality with FIFO-Diffusion[2], which also focused on long-video generation. It is difficult to demonstrate the proposed method's advantage over the SOTA work.\\n1. In the comparison of efficiency, comparing Open-Sora v1.1 and the proposed method is unfair because they use different model architectures (Spatial-Temporal DiT vs. VideoCrafterV2).\\n1. There are several typos (e.g., GPU index in Fig.1, captions in Fig.3).\\n\\n[1] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models. Muyang Li, Tianle Cai, et al. CVPR 2024\\n\\n[2] FIFO-Diffusion: Generating Infinite Videos from Text without Training. Jihwan Kim and Junoh Kang and Jinyoung Choi and Bohyung Han, NeurIPS 2024\", \"questions\": \"1. The paper focuses on the video diffusion model with the temporal self-attention layer. However, DiT-based architectures are popular in current work, and they tend to use the 3D attention layer instead of the temporal self-attention layer (e.g., CogVideoX-5B[3]).\\nHow does the *Dual-scope Attention* perform on the 3D attention layer?\\n2. How is the video quality compared with FIFO-Diffusion[2]?\\n3. Would it be possible to extend Video-Infinity to OpenSora for the efficiency comparison?\\n\\n\\n[3] CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer. Zhuoyi Yang, Jiayan Teng, et al, arxiv 2408.06072. 
2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
DwqoBkj2Mw
The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD
[ "Milad Nasr", "Thomas Steinke", "Borja Balle", "Christopher A. Choquette-Choo", "Arun Ganesh", "Matthew Jagielski", "Jamie Hayes", "Abhradeep Guha Thakurta", "Adam Smith", "Andreas Terzis" ]
We propose a simple heuristic privacy analysis of noisy clipped stochastic gradient descent (DP-SGD) in the setting where only the last iterate is released and the intermediate iterates remain hidden. Namely, our heuristic assumes a linear structure for the model. We show experimentally that our heuristic is predictive of the outcome of privacy auditing applied to various training procedures. Thus it can be used prior to training as a rough estimate of the final privacy leakage. We also probe the limitations of our heuristic by providing some artificial counterexamples where it underestimates the privacy leakage. The standard composition-based privacy analysis of DP-SGD effectively assumes that the adversary has access to all intermediate iterates, which is often unrealistic. However, this analysis remains the state of the art in practice. While our heuristic does not replace a rigorous privacy analysis, it illustrates the large gap between the best theoretical upper bounds and the privacy auditing lower bounds and sets a target for further work to improve the theoretical privacy analyses.
[ "differential privacy", "heuristics", "privacy auditing" ]
Accept (Poster)
https://openreview.net/pdf?id=DwqoBkj2Mw
https://openreview.net/forum?id=DwqoBkj2Mw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yEBMI8BdV5", "tdKSpJGz0s", "fT8bS1kega", "aU4erQsBIp", "XWROSf2nA4", "T4oGmaTmyT", "MdbUQyubBb", "D9MBSzOIm4", "6YiC7Lqi8B", "6Byz35l6vW", "4lakEFS9dY" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1733168194679, 1730729229127, 1732324080644, 1730471418320, 1737524145886, 1732589791796, 1730428137354, 1730386484476, 1734741195034, 1732560488546, 1732319263395 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11788/Authors" ], [ "ICLR.cc/2025/Conference/Submission11788/Reviewer_PBks" ], [ "ICLR.cc/2025/Conference/Submission11788/Authors" ], [ "ICLR.cc/2025/Conference/Submission11788/Reviewer_SieV" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11788/Authors" ], [ "ICLR.cc/2025/Conference/Submission11788/Reviewer_YKem" ], [ "ICLR.cc/2025/Conference/Submission11788/Reviewer_MfJb" ], [ "ICLR.cc/2025/Conference/Submission11788/Area_Chair_gbqb" ], [ "ICLR.cc/2025/Conference/Submission11788/Authors" ], [ "ICLR.cc/2025/Conference/Submission11788/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We again thank the reviewers for their valuable feedback. We have responded to all of the reviewers and we hope that this has clarified their questions about the submission. If there are any further questions, we are happy to answer them.\\n\\nWe wish to reiterate that the high-level point of our work is to offer a novel perspective on the privacy of DP-SGD. Our approach doesn't fit the usual paradigms of provable theoretical upper bounds or empirical auditing lower bounds on the privacy leakage. 
That makes it hard to evaluate our contribution, but we believe that novel approaches are needed since the existing approaches seem unable to fully shed light on the privacy properties of DP-SGD.\"}", "{\"summary\": \"The paper introduces a heuristic privacy analysis for DP-SGD when only the final model is released, and intermediate updates are hidden. This heuristic assumes linear loss functions and, when the assumption holds, provides a more accurate estimate of privacy leakage than standard composition-based analyses, which often overestimate privacy loss by assuming adversaries have access to all training iterates. The authors experimentally demonstrate that their heuristic closely predicts the outcomes of privacy auditing tools, serving as a practical upper bound on privacy leakage in deep learning settings (where auditing is usually expensive). They also discuss counterexamples where the heuristic underestimates privacy leakage, highlighting its limitations. By bridging the gap between theoretical upper bounds and empirical lower bounds from privacy auditing, the heuristic sets a more realistic target for both theoretical improvements and practical attacks. 
It offers a computationally efficient way to estimate privacy leakage, aiding in tasks like hyperparameter selection before training without the overhead of extensive privacy audits.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a new heuristic privacy analysis for DP-SGD when only the final model is released.\", \"The heuristic allows practitioners to estimate privacy leakage before training, aiding in hyperparameter selection without the computational cost and complexity of running privacy audits.\", \"The heuristic can serve as a benchmark for future improvements in both theoretical privacy analyses and practical attack methods, encouraging the development of stronger privacy attacks against ML models\", \"The authors provide a good amount of examples and intuition when the heuristic does not hold\"], \"weaknesses\": [\"The reliance on linear loss functions is a simplification. This mismatch may limit the applicability of the heuristic to real-world models as a valid upper bound. (see the questions)\", \"The paper could position itself within the existing body of work on privacy auditing, particularly recent methods that perform effective audits under similar threat models. A deeper comparison would clarify the novelty and contribution of this work. (see the questions)\", \"While formalising the heuristic is helpful, it can be seen as a natural extension of existing observations that independent gradients are optimal for tight auditing when only the last iterate is revealed. The paper may not fundamentally add new knowledge to privacy auditing beyond this formalisation. (see the questions)\"], \"questions\": \"* How does this work relate to other recent results in privacy auditing that perform effective audits under similar threat models? 
Specifically, could the authors elaborate on the novelty of their approach compared to methods presented in papers [1], [2], and [3], which make assumptions about the adversary and not the loss itself?\\n* Could the authors comment on the strength of the linearity assumption compared to other assumptions or heuristics used in privacy auditing? For instance, in [4], an auditing procedure for one-shot auditing is designed under the assumption that the adversary can insert a known random gradient, which can easily be extended to the black-box setting. How does the linearity assumption compare?\\n* Are there practical scenarios or specific types of models where the linearity assumption approximately holds? \\n* Theoretical work also addresses this threat model for the non-convex case (see [5]). Is there any connection or similarity between the works?\\n\\n[1]: https://arxiv.org/pdf/2405.14106\\n[2]: https://arxiv.org/pdf/2405.14457\\n[3]: https://arxiv.org/pdf/2407.06496\\n[4]: https://arxiv.org/pdf/2302.03098\\n[5]: https://arxiv.org/pdf/2305.09903\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their time. We respond to their comments/questions below.\\n\\n> First, I am not aware of any real-world loss function, even in the case of linear regression, that yields or can be framed to yield a constant gradient independent of model weight. \\n\\nA constant gradient independent of model weights is precisely what a linear loss function is.\\n\\nHowever, we wish to emphasize that we are **not** assuming that realistic losses are linear. That assumption would obviously be false. 
As we stated in the paper, \\\"assuming linearity is unnatural from an optimization perspective, as there is no minimizer.\\\" That is to say there is an obvious reason why linear losses never arise in practice -- the optimization procedure would not converge (unless we add a regularizer or a projection step).\\n\\nOur thesis is that, in terms of privacy, linear losses are close to the worst case for realistic losses.\\n\\nFor full batch gradient descent that claim is provably true (see Appendix B). Our observation is that it also seems to be close to true even for minibatch gradient descent, at least as far as existing privacy auditing methods are concerned.\\n\\n> The absence of results demonstrating, at a minimum, the conditions under which the model in Theorem 1 can serve as a provable upper bound for the estimate makes it challenging to evaluate and fully understand this auditing method.\\n\\nIt would be great if we could give provable general-purpose upper bounds on privacy loss. But that would be a different paper.\\nOur counterexamples section shows why it is difficult to translate our heuristic into a provable general-purpose upper bound. \\nWe hope that future work advances in this direction. And our paper helps set a target for such improved upper bounds.\\n\\n> Additionally, can the authors elaborate on their statement about the justification for focusing on linear losses which stems from the observation that existing auditing techniques achieve the highest epsilon values? The observation itself makes sense as if every time is worst case, the privacy loss seems to be the maximal. 
But I do not get why this supports the reduction to the linear function.\\n\\nIf we could identify the true worst case pair of inputs, then we would simply analyze those and provide a general-purpose upper bound on the privacy loss for all inputs.\\nWhile linear losses are not the true worst case (as evidenced by our counterexamples section), they seem to be close to the worst case for realistic losses. (Of course, \\\"close\\\" and \\\"realistic\\\" are open to interpretation.) Thus we propose analyzing linear losses as a heuristic.\\n\\n> Can the author comment or empirically get the truly last-iterate in some practical tasks (it is ok to just run two or three iterations) and then compare it with the estimate from the heuristic auditing proposed?\\n\\nUnfortunately, we do not understand this question.\\n\\n> Can the author explain how to generalize the analysis to capture the practical random $v_i$ scenario?\\n\\nIn general, all we need is a bound on the total length of the gradients, i.e., $\\\\| \\\\sum_t v_i^t \\\\|$ where $v_i^t$ denotes the gradient of the $i$-th example at step $t$.\\n\\nThe difficulty in generalizing our analysis is not that the canary gradients may be random. \\nWhat makes it difficult is that the canary gradients and the gradients of the other examples (and the regularizer) will not be independent in general, since they all interact with the model weights. Intuitively, the influence of the canary can be amplified.\"}", "{\"summary\": \"This paper tackles a fundamental problem in DP-SGD: the privacy analysis of the last iterate. It is well known that DP-SGD (in the centralized case) makes an artificial assumption that all intermediate iterates are published, which can be observed by the adversary, to ease the privacy analysis through composition. New heuristic auditing methods are presented to approximate the leakage from the last iterate. 
Disadvantages and failure cases of the proposed methods are discussed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well motivated and organized. Interesting examples are presented with good intuitions.\", \"weaknesses\": \"My primary concern is with the meaningfulness of the main result in Theorem 1.\\n\\nIn the proof, the authors appear to use $v_i$ to represent the $i$-th per-sample gradient, assuming $v_i$ is constant. First, I am not aware of any real-world loss function, even in the case of linear regression, that yields or can be framed to yield a constant gradient independent of model weight. Second, in practice, when $v_i$ is treated as a random variable, it is important to note that (a) the variable $v_i$ at different iterations is correlated with $v_j$, and (b) the distribution of $v_i$ changes when it is derived from a pair of adjacent datasets. Thus, when $v_i$ is random, the proof no longer holds, as the divergence cannot simply be reduced to that between the two Gaussian mixtures derived.\\n\\nThis simplification also leads to non-monotonicity, as discussed by the authors. Although they propose taking the worst-case estimate across all parameter selections, this approach feels somewhat ad hoc. The absence of results demonstrating, at a minimum, the conditions under which the model in Theorem 1 can serve as a provable upper bound for the estimate makes it challenging to evaluate and fully understand this auditing method.\\n\\nAdditionally, can the authors elaborate on their statement about the justification for focusing on linear losses which stems from the observation that existing auditing techniques achieve the highest epsilon values? The observation itself makes sense as if every time is worst case, the privacy loss seems to be the maximal. 
But I do not get why this supports the reduction to the linear function.\\n\\nMoreover, I believe a ground truth for the last-iterate privacy loss is missing.\", \"questions\": \"In general, I think there are some interesting results in the paper but more work seems to be needed.\\n\\n1. Can the author comment or empirically get the truly last-iterate in some practical tasks (it is ok to just run two or three iterations) and then compare it with the estimate from the heuristic auditing proposed? \\n\\n2. Can the author explain how to generalize the analysis to capture the practical random $v_i$ scenario?\", \"minors\": \"1. There is overlap in the figures in Fig.1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank the reviewer for their input. We respond to the questions below.\\n\\n> 1. Could the authors comment on why the proposed heuristics give a realistic privacy risk estimate, given that the linear loss function is not commonly used? That is, whether and how often would the proposed heuristics overestimate the last-iterate privacy loss.\\n\\nLinear loss functions are not used in practice, but our thesis is that linear losses are close to the worst case in terms of privacy for realistic losses in the hidden state/last iterate setting. Thus it makes sense to study them, as we do.\\n\\nThe justification for this thesis is twofold: First, in the special case of full batch gradient descent it is provably true (see Appendix B). Second, as observed in previous papers on privacy auditing -- and replicated in our experimental results -- the strongest privacy auditing results are achieved by making the loss behave more like a linear loss.\\n\\nOur method is a heuristic -- it may overestimate the true privacy loss and it may underestimate it. 
We spent a great deal of effort investigating when and how this might occur.\\n\\nIn practice, one does not know the true privacy loss. There are provable upper bounds and empirical privacy auditing lower bounds; the truth could be anywhere in between. Our heuristic offers a novel perspective. It gives a third number that is neither an upper nor a lower bound -- it should be in between and it gives some possible explanations for the gap. We believe this has value even if it's not the perfect answer people are looking for.\\n\\n> 2. Besides the small sampling rate and the pathological counterexamples (as discussed in 4.3), are there other regimes where the proposed heuristic is significantly smaller than the DP upper bound? This is to understand the usefulness of the proposed heuristics.\\n\\nDP-SGD with a small sampling rate is the setting of practical interest. In practice, datasets are large, batches are small, and training epochs are few, which all correspond to a small sampling rate. Thus we focus on this setting.\\n\\n> 3. Minor questions regarding clarity:\\n> * Line 297 - we expect this looseness to be the result of statistic effects -- could the authors provide error bars in Figure 3 to validate this hypothesis?\\n\\nFor privacy auditing we aim to give a lower bound on the true privacy loss (with 95% confidence). Thus we are effectively reporting the lower end of the confidence interval already.\\n\\nThe reason we believe there is a big gap between the auditing result and our heuristic for $T=1$ step is that with a low sampling rate ($q=0.01$) we will only see the canary in 1% of runs. There's just not enough data to perform a high-confidence attack. We will clarify this sentence.\\n\\n> * Figure 3 and 4: why is the standard epsilon constant under increasing the number of steps?\\n\\nWe adjust the noise scale to keep this constant. So with more steps there is correspondingly more noise per step.\\n\\n> * What is the exact definition of a heuristic privacy estimate? 
It seems to be DP's upper bound over a family of loss functions, rather than all loss functions.\\n\\nThere is no exact definition. This is why we use the term \\\"heuristic\\\".\"}", "{\"summary\": \"This paper derives an exact privacy analysis of DP-SGD for linear models (referred to as the heuristic) when only the last iterate is released. The authors also compare existing empirical privacy auditing methods with the exact heuristic across different regimes, including image classification tasks and language models.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a novel approach (named heuristic) to assess the performance of existing empirical evaluations of DP-SGD's privacy guarantees. While empirical evaluation provides a lower bound on the privacy budget of DP mechanisms, it may underestimate the true privacy loss. Comparing it with the heuristic bound for linear models gauges its tightness: if the empirical audit bound is loose even in this simple case, it is unreasonable to expect it to hold tightly for more complicated non-linear (or even non-convex) models.\", \"weaknesses\": \"The main weakness of this paper is its limited theoretical contribution. In my view, the heuristic bound (Theorem 1) offers a simplified analysis of existing convergence privacy analysis for (strongly) convex loss functions (cf., the literature referenced in the introduction of this paper, which uses both RDP and $f$-DP). Additionally, the linear model is relatively simple, as the last-iterate output has a closed-form representation. Therefore, I propose the following questions or modifications:\\n\\n1. $\\\\textbf{Linear Probing as a Benchmarking Method:}$ Linear probing is commonly used for benchmarking when privately fine-tuning a foundation model. Therefore, the proposed heuristic for linear models could be beneficial when fine-tuning the last layer (linear probing) of a foundation model. 
Besides the well-known paper by De et al., the following recent studies may provide strong support for the utility of linear models in private fine-tuning, potentially strengthening the story for using heuristics for linear models.\\n\\nDifferentially Private Image Classification by Learning Priors from Random Processes. Tang et al., NeurIPS'23.\\n\\nNeural Collapse Meets Differential Privacy: Curious Behaviors of NoisyGD with Near-perfect Representation Learning. Wang et al., ICML'24.\\n\\n2. $\\\\textbf{Extension to Convex Loss Functions:}$ Could this heuristic bound be extended to convex loss functions? I understand that the existing convergence bounds for DP-SGD involve complicated constants that are challenging to specify in practice, but a potential comparison between empirical auditing bounds and the theoretical upper bound under convex loss would be more convincing than in the linear case.\\n\\n3. $\\\\textbf{Comparison with Privacy Auditing Bound using privacy profiles:}$ It appears that the comparison with the privacy auditing bound is based on a specific value of $(\\\\epsilon, \\\\delta)$. What about examining the entire $(\\\\epsilon, \\\\delta(\\\\epsilon))$-curve (or equivalently, the ROC curve or the type I/II error trade-off curve)? My question arises because, even if the privacy auditing bound might be (nearly) tight for specific values of $(\\\\epsilon, \\\\delta)$, it may not remain tight across the entire curve.\", \"questions\": \"See the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper computes heuristic differential privacy parameters $\\\\varepsilon$ and $\\\\delta$ for releasing the last iterate of the DP-SGD algorithm. The proposed heuristic relies on computing the DP upper bound under a linear loss function, which is always smaller than the DP upper bound under worst-case loss functions. 
Numerical experiments show interesting scenarios where the proposed heuristics yield estimates that are significantly smaller than the worst-case DP upper bound. Experiments on image and language datasets confirm that the heuristic estimates lie between the DP upper bound and the lower bounds obtained via privacy auditing. Finally, the authors provide examples where the proposed heuristics underestimate the privacy loss.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"A systematic investigation of using heuristic privacy estimates to study the last-iterate privacy loss of the DP-SGD algorithm, including derivations, numerical experiments, comparison with auditing experiments, and analysis of counterexamples where the proposed heuristics underestimate the privacy loss.\", \"Numerical experiments show that when the sampling rate is small, the proposed heuristic yields an estimate that is significantly smaller than the standard DP upper bound.\", \"Auditing experiments illustrated that, interestingly, black-box gradient-space attacks fail to give tight auditing lower bounds for the last iterate of the DP-SGD algorithm under natural datasets, in which case the heuristic estimates and the input-space attack give more reliable estimates for the privacy loss.\"], \"weaknesses\": [\"The message of the paper is not fully clear -- why is such a heuristic privacy estimate realistic and useful? Specifically, (1) linear loss is not a commonly used loss function; and (2) the heuristic estimates appear to be roughly the same as the DP upper bound in certain auditing experiments (Figure 2). See questions 1 and 2 for more details.\", \"Several terms and plots in the paper are not explained in detail and the claims require more clarification. See question 3 for more details.\"], \"questions\": \"1. 
Could the authors comment on why the proposed heuristics give a realistic privacy risk estimate, given that the linear loss function is not commonly used? That is, whether and how often would the proposed heuristics overestimate the last-iterate privacy loss.\\n\\n2. Besides the small sampling rate and the pathological counterexamples (as discussed in 4.3), are there other regimes where the proposed heuristic is significantly smaller than the DP upper bound? This is to understand the usefulness of the proposed heuristics.\\n\\n3. Minor questions regarding clarity:\\n - Line 297 - `we expect this looseness to be the result of statistic effects` -- could the authors provide error bars in Figure 3 to validate this hypothesis?\\n - Figure 3 and 4: why is the standard epsilon constant under increasing the number of steps?\\n - What is the exact definition of a heuristic privacy estimate? It seems to be DP's upper bound over a family of loss functions, rather than all loss functions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper investigates the privacy of DP-SGD when only the last iterate is provided (and intermediate model states are hidden). It shows a heuristic analysis for linear functions which is found to be predictive of the privacy obtained via auditing on various training procedures. The heuristic is evaluated on image and language datasets and the paper shows that it consistently upper bounds the privacy loss obtained via auditing. Though the proposed technique is a heuristic and is not rigorous, I think it provides an interesting avenue for future research. 
There are clearly gaps between the current theoretical upper bounds in DP and results obtained via auditing, and the work could be a step towards rigorously closing them.\\n\\nI also suggest that the authors incorporate the recent contemporaneous work into the related work in the revision of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised several questions, including regarding the relation to prior work and why the heuristic is meaningful. The comments seem to be addressed by the authors' responses.\"}", "{\"comment\": \"We thank the reviewer for their time and comments. We respond to the comments below.\\n\\n> The main weakness of this paper is its limited theoretical contribution.\\n\\nThe purpose of our paper is to introduce a novel viewpoint (heuristic analysis) to accompany the existing lines of work on provable differential privacy guarantees and privacy auditing. We acknowledge that the technical novelty is limited (although we believe that our empirical evaluations & counterexamples are significant technical contributions). But our hope is that the conceptual contribution is of interest to the ICLR audience.\\n\\n> 1. **Linear Probing as a Benchmarking Method:** Linear probing is commonly used for benchmarking when privately fine-tuning a foundation model. Therefore, the proposed heuristic for linear models could be beneficial when fine-tuning the last layer (linear probing) of a foundation model. Besides the well-known paper by De et al., the following recent studies may provide strong support for the utility of linear models in private fine-tuning, potentially strengthening the story for using heuristics for linear models.\\n\\nWe are not sure we understand this suggestion. Fine-tuning the last layer of a foundation model involves training a linear model, but the loss function is still nonlinear due to the softmax and cross entropy loss.\\n\\n> 2. 
**Extension to Convex Loss Functions:** Could this heuristic bound be extended to convex loss functions? I understand that the existing convergence bounds for DP-SGD involve complicated constants that are challenging to specify in practice, but a potential comparison between empirical auditing bounds and the theoretical upper bound under convex loss would be more convincing than in the linear case.\\n\\nOur heuristic analysis heavily relies on the simple structure of DP-SGD for linear loss functions. \\nIn our counterexamples section we generalized our analysis to quadratic loss functions/regularizers, which we think is a good proxy for general convex losses. This was already nontrivial.\\nThus it seems difficult to generalize to arbitrary convex losses.\\n\\nUnfortunately, it is difficult to extract directly comparable bounds from the related work on convex loss functions. These works are theoretical in nature; in particular, many have unspecified constants and the results often only apply to certain asymptotic regimes. Their bounds also depend on parameters like the strong convexity, smoothness, and the diameter of the parameter space. And it's not clear how to specify these for a fair comparison.\\n\\n> 3. **Comparison with Privacy Auditing Bound using privacy profiles:** It appears that the comparison with the privacy auditing bound is based on a specific value of $(\\\\epsilon,\\\\delta)$. What about examining the entire $(\\\\epsilon,\\\\delta(\\\\epsilon))$-curve (or equivalently, the ROC curve or the type I/II error trade-off curve)? My question arises because, even if the privacy auditing bound might be (nearly) tight for specific values of $(\\\\epsilon,\\\\delta)$, it may not remain tight across the entire curve.\\n\\nThanks. This is a good suggestion. We focused on $\\\\varepsilon$ for a fixed $\\\\delta$ because this is standard in the literature. 
\\n\\nIt's worth noting that state-of-the-art auditing methods already incorporate \\\"curve fitting\\\" for precisely this reason. I.e., the auditing lower bounds are not tight for small $\\\\delta$ for reasons of statistical uncertainty -- it's hard to accurately estimate small probabilities. Thus SOTA methods compute lower bounds for moderate/large $\\\\delta$ and extrapolate these numbers to small $\\\\delta$. \\n\\nOne of the benefits of our heuristic is that we don't need to worry about statistical uncertainty. (Instead we need to worry about whether the heuristic is \\\"good\\\".)\"}", "{\"comment\": \"We thank the reviewer for their comments and we respond to their questions below.\\n\\n> How does this work relate to other recent results in privacy auditing that perform effective audits under similar threat models? Specifically, could the authors elaborate on the novelty of their approach compared to methods presented in papers 1, 2, and 3, which make assumptions about the adversary and not the loss itself?\\n\\nWe thank the reviewer for bringing these recent papers to our attention. (These appeared online after we wrote the literature review, but before the ICLR deadline, so we will update our paper accordingly.)\\n\\nThe cited papers all try to improve privacy auditing in the hidden state model, by changing the power of the adversary.\\n[Annamalai et al. [1]](https://arxiv.org/abs/2405.14106) allow the adversary to choose the initial model weights.\\n[Cebere et al. [2]](https://arxiv.org/abs/2405.14457) allow the adversary to insert arbitrary gradient canaries. \\n[Annalamai [3]](https://arxiv.org/abs/2407.06496) allows the adversary to choose the loss function.\\n\\nThe third paper [3] is closely related to our counterexamples. 
Specifically, it constructs a clever loss function that keeps track of the likelihood ratio during the training process, which makes it easy to perform a powerful attack using only the last iterate.\\n\\nThe other two papers [1,2] both support the intuition behind our heuristic in that they both construct settings that behave like linear losses. The first paper [1] chooses the model parameters such that the gradients of the non-canary examples are approximately zero. The second paper [2] inserts constant gradients -- which corresponds to linear losses.\\n\\n> Could the authors comment on the strength of the linearity assumption compared to other assumptions or heuristics used in privacy auditing? For instance, in 4, an auditing procedure for one-shot auditing is designed under the assumption that the adversary can insert a known random gradient, which can easily be extended to the black-box setting. How does the linearity assumption compare?\\n\\n[Andrew et al. [4]](https://arxiv.org/abs/2302.03098) insert a random constant gradient. Constant gradients correspond to linear losses. So this paper also supports the intuition that linear losses are the right thing to look at.\\n\\n> Are there practical scenarios or specific types of models where the linearity assumption approximately holds?\\n\\nWe wish to clarify that we are *not* assuming that realistic losses are approximately linear. That assumption is clearly not true.\\nOur thesis is that linear losses are close to the worst case in terms of privacy for realistic losses.\\n\\nBy the same token, the aforementioned papers [1,2,3,4] consider unrealistic adversaries. Our observation is that most [1,2,4] of these unrealistic adversaries could achieve the same results with linear losses; that doesn't imply that linear losses are realistic. Rather, that implies that linear losses capture a lot of the power of privacy auditing. \\n\\n> Theoretical work also addresses this threat model for the non-convex case (see 5). 
Is there any connection or similarity between the works?\\n\\nThe work of [Asoodeh & Diaz [5]](https://arxiv.org/abs/2305.09903) is very interesting and also considers the last iterate setting, but the contributions seem tangential to our work. Specifically they rely on the probabilistic contraction of the Markov kernels under hockey-stick divergence. Linear losses do not exhibit this property because we assume an unbounded parameter domain.\"}" ] }
DwiwOcK1B7
Two Sparse Matrices are Better than One: Sparsifying Neural Networks with Double Sparse Factorization
[ "Vladimír Boža", "Vladimír Macko" ]
Neural networks are often challenging to work with due to their large size and complexity. To address this, various methods aim to reduce model size by sparsifying or decomposing weight matrices, such as magnitude pruning and low-rank or block-diagonal factorization. In this work, we present Double Sparse Factorization (DSF), where we factorize each weight matrix into two sparse matrices. Although solving this problem exactly is computationally infeasible, we propose an efficient heuristic based on alternating minimization via ADMM that achieves state-of-the-art results, enabling unprecedented sparsification of neural networks. For instance, in a one-shot pruning setting, our method can reduce the size of the LLaMA2-13B model by 50% while maintaining better performance than the dense LLaMA2-7B model. We also compare favorably with Optimal Brain Compression, the state-of-the-art layer-wise pruning approach for convolutional neural networks. Furthermore, accuracy improvements of our method persist even after further model fine-tuning. Code available at: https://github.com/usamec/double_sparse
[ "sparse factorization", "pruning" ]
Accept (Poster)
https://openreview.net/pdf?id=DwiwOcK1B7
https://openreview.net/forum?id=DwiwOcK1B7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ykQlKNPqyv", "yYkvIIFgKe", "qByriQad2U", "okhaGCHOfR", "jVaidfdjoA", "hT1i54iZqw", "gSPtN8Vfaq", "ePWvY6AnNx", "dxGRpM4Hvm", "byODw1thWJ", "ZpFStWfAfO", "W2fykTIpml", "Qvy1j1Vlql", "HVr5RyolZk", "EMiAnqHFyw", "E2cRl9utZs", "BlfSjDs9GV", "9AcoeNJpjr", "73yfRZtbgc", "5MzLYkhbg9" ], "note_type": [ "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729848360260, 1732778966751, 1732273057912, 1734731640157, 1731757095424, 1730721040974, 1731668189436, 1732904245740, 1732895862131, 1732539104286, 1731668465903, 1732254191435, 1731757700884, 1737523577704, 1730567889493, 1733157596920, 1732730595406, 1732895483963, 1732261166944, 1731758849513 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3466/Reviewer_aP2Z" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Reviewer_aP2Z" ], [ "ICLR.cc/2025/Conference/Submission3466/Area_Chair_DNQ8" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Reviewer_VFC3" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Reviewer_aP2Z" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3466/Reviewer_qugK" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Reviewer_qugK" ], [ 
"ICLR.cc/2025/Conference/Submission3466/Reviewer_qugK" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ], [ "ICLR.cc/2025/Conference/Submission3466/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes Double Sparse Factorization, a method that, instead of pruning the original weight matrix, factorizes it into the product of two matrices (similar to e.g. low-rank decomposition), which together satisfy the same sparsity constraint. To solve this problem, they use the ADMM method. The paper claims to improve upon existing pruning and layer-wise pruning approaches, and they back their claims with experiments on state-of-the-art language models and medium-sized vision models. In addition, they show that the superiority of their methods seems to prevail after retraining the pruned models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea is interesting, to the best of my knowledge relatively novel, and the experiments are quite convincing. Most of the paper is fairly easy to follow and the reader is not left with many questions. I appreciate that the authors provide results before and after retraining the pruned models, as this is often not done in other papers. The proposed method is interesting, however there are open questions that I will discuss below.\", \"weaknesses\": [\"I have several concerns regarding the soundness, clarity, and contribution of this work, which I detail below. I hope these remarks are helpful for improving the paper and am open to discussing my evaluation.\", \"### Clarity\", \"While I think that the idea proposed in this paper might be promising, I sometimes had a hard time following the paper. I think the structure as well as details could be improved.\", \"Section 3.1 would greatly benefit from a more detailed explanation of the ADMM method. How are Z and U initialized? 
I understand that it is not your job to explain ADMM in detail, but I think that nevertheless the paper would greatly benefit from more detailed remarks - at least in the appendix. Since this method is not standard (at least in the pruning literature and to my knowledge), I think it would be helpful to make this more clear.\", \"In two sentences (Line 149, 150) you basically explain how you find the sparsity mask. Why do you precondition? How exactly is the cubic schedule (I presume Zhu & Gupta?) implemented, over how many iterations, with which interval between the increases? I am trying my best to infer this from somewhere, but it is nowhere to be found? Either I am missing something or the paper is lacking a crucial part, namely how the sparsity mask is found.\", \"In Line 258, you state that you are using the Wanda saliency map, I think it would be good to give the mathematical formulation to that, especially how you \\\"scale one of the factors back\\\".\", \"### Soundness\", \"Lines 37-39: If you replace the dense weight matrix with a product of two sparse matrices, will your model not be much slower at inference than when replacing with just a sparse matrix? For Low-rank decomposition, you at least get two linear layers which are much smaller dense matrices, but in your case, you basically have two sparse matrices. While you argue in Line 162 that the total number of multiplications is equal, this is far from realizable on the existing hardware. In practice, you incur a non-trivial overhead. I would like to hear the authors' thoughts on this.\", \"Line 50: \\\"our method is the first layer-wise pruning method in which the larger pruned model is better than the dense smaller model\\\" - Are you sure this is true? I feel like already the original SparseGPT paper gets fairly close and there have been a variety of improvements since then, e.g. using non-uniform layer-wise sparsity. 
Maybe this claim can or should be made more precise.\", \"### Experimental Validation\", \"Missing ablations: The paper is fixing a lot of hyperparameters and making claims without ablations. That includes e.g. the selection of sparsity distribution between the matrices (Line 209) or the initialization for A and B (Lines 248-250), among others. Such ablations should be added to justify the choice of parameters.\", \"Table 1: Why are you not comparing to SparseGPT, am I missing something? In my experience, SparseGPT is a very strong baseline. Also, why do you omit Wanda for 30% density? Is Wanda using a \\\"finalization\\\" step as well, i.e., are you reconstructing the remaining weights after pruning? You get that more or less for free if you pass the calibration data through anyway.\", \"Section 5.4: I find the choice of hyperparameters for the retraining/fine-tuning quite arbitrary. You use a stepped schedule for most of the pretraining, then use a stepped learning rate schedule for retraining as well (70 total) epochs. [1] shows that if you properly choose the initial learning rate of a linear schedule, you can recover the accuracy drop of magnitude pruning in very few iterations. I am not sure if these results would withstand scrutiny. It would be good to use best practices here, i.e., for the convolutional networks you can definitely use a linear/cosine schedule for pretraining, and then choose the initial learning rate for linear-schedule-retraining adaptively, as in [1]. This will give much more realistic results.\", \"### Minor Remarks\", \"Line 131: I presume it should be \\\"**the** layer-wise pruning problem\\\".\", \"In general, you do not seem to use the glossary package and define your DSF-acronym over and over again. That is a bit contrary to the purpose of an abbreviation. Also, you sometimes use DSF, and sometimes DFS (as in Double Factorization Sparse), see e.g. 
Line 315 or the caption in Line 686 where this happens in the same sentence.\", \"#### References\", \"[1] Zimmer, M., Spiegel, C., & Pokutta, S. (2021). How I Learned to Stop Worrying and Love Retraining. _arXiv preprint arXiv:2111.00843_. https://arxiv.org/abs/2111.00843\"], \"questions\": [\"In Line 465 you state that your method does not support gradual pruning with fine-tuning between pruning steps, could you elaborate why? I am not sure what I am missing here.\", \"In Line 196, you first \\\"look into the projection problem\\\". I am not quite sure I understand correctly how that is not the entire problem? A proper solution to that is what you are looking for, isn't it?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarifications\", \"comment\": \"Thank you, for the response, here is a quick clarification.\\n\\n>Could you share the dense runtimes as well.\\n\\nThey were actually in Table 6, but hidden in parentheses under batch size (not the best idea on our part). We put dense runtimes into a separate column to make benchmarks more readable. We also double-checked the dense runtime and found that Deepsparse is slightly faster than just torch.compile with max-autotune (and thus every dense number is from Deepsparse).\\n\\nIn all cases, even at 50% density, DSF is faster than dense runtime (e.g., for 4096x4096 matrices with 64 batch size, we have dense runtime of 872 ms, simple sparsity runtime of 470 ms, and DSF runtime of 533 ms). 
\\n\\n> Could you repeat the deepsparse benchmarks with additional cores as well?\\n\\nAdded Table 7 with this.\\n\\n> Could you share deepsparse benchmarks with alternating 16/25% sparsity...\\n\\nWe tested this and it is not different from the results with even distribution.\\nFor square matrices with size 4096x4096 with 50% total density, 16% density in the first factor and 34% in the second we get the following:\\n\\n| Batch size | Dense runtime | Single sparsity | DSF |\\n|------------|---------------|-----------------|-----|\\n| 64 | 162 | 101 | 122 |\\n| 256 | 551 | 301 | 354 |\\n\\nFor rectangular matrices with size 4096x11008 (found in Llama-2-7B MLP block), 50% total density, 25% density in the smaller factor and 40.7% density in the larger factor we get the following runtimes:\\n\\n| Batch size | Dense runtime | Single sparsity | DSF |\\n|------------|---------------|-----------------|-----|\\n| 64 | 426 | 251 | 276 |\\n| 256 | 1474 | 792 | 929 |\\n\\n> limited practical relevance\\n\\nWe found that in almost all scenarios we tested so far (even outside ones mentioned in the paper), DSF is better than regular sparsity and can have surprisingly good results. \\n\\nLet us share a **very preliminary** test we ran a couple of days ago (we are definitely not putting this into the current paper). We aggressively pruned Llama3-8B to ~16% density, with the goal that the final result would have 2 bits per parameter (8 bits would be used for nonzeros and the mask would be compressed with something similar to the compression in https://proceedings.mlsys.org/paper_files/paper/2024/file/c74b624843218d9b6713fcf299d6d5e4-Paper-Conference.pdf) . 
\\nWe then fine-tuned models for half of a day and measured perplexity.\\n\\nWhen we compare with results from the PV-tuning paper (https://arxiv.org/abs/2405.14852 table 2), we would find the following:\\n\\n| Method | Perplexity |\\n|-----|-----|\\n| Dense | 5.54 |\\n| QUIP | 76.95 | \\n| Regular sparsity | 16.3 |\\n| DB-LLM | 12.77 |\\n| DSF | 10.3 | \\n| PV-tuning | 6.99 |\\n\\nAs you can see, DSF can be almost as good as the best quantization method (and is better than many good quantization methods that use fine-tuning like DB-LLM) while offering the benefits of relatively easy fine-tuning. Also keep in mind, this was just a first very quick experiment with setup like this one.\"}", "{\"comment\": \"Thanks again for the answer. I am far from convinced that this reparametrization as a product of two sparse matrices will be relevant in the future, but since the authors put a lot of effort into improving the presentation as well as explaining the efficiency and storage issues, I will increase my score to 5, borderline reject.\\n\\nIn case of acceptance, I highly recommend to further improve the readability of the paper and to not leave out any discussion regarding the practical applicability of this setting, independent of how you solve it.\"}", "{\"metareview\": \"The authors propose a method for sparsifying neural network parameter matrices by reparameterizing them as the product of two sparse matrices. This is accomplished via a heuristic that seeks to minimize the error of the factorized approximation relative to the original weights subject to hard (i.e., L0) sparsity constraints on the factorized matrices via ADMM. While the idea is conceptually rather simple, the reviewers are largely in agreement that the empirical performance of the method is convincing. 
Reviewers raised concerns that the increased overhead of performing two sparse matrix multiplications could be detrimental, but the authors note that their primary goal is to reduce the memory footprint of the model and not necessarily improve inference speed.\\n\\nWhile one reviewer still notes potential issues with clarity in the manuscript, this would appear to be something that can be addressed in a final revision and I believe this work is of sufficient interest and quality to be accepted.\", \"additional_comments_on_reviewer_discussion\": \"The authors were largely responsive to the initial reviews, which led to two reviewers raising their initial scores. One reviewer still has concerns regarding the presentation of some topics of the work, which the authors should seek to address when preparing a final version of the manuscript.\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you for your thoughtful review.\", \"here_are_the_responses_to_your_concerns\": \">The SVD comparison is unfair in my opinion. SVD is more suited for low-rank compression and it may not enforce sparsity. Using the sparsity ratio as the main criterion may not be ideal. Why not use FLOPs? As FLOPS directly relates to inference speed as opposed to sparsity ratio. I would suggest that the authors include a comparison based on FLOPs in addition to the sparsity ratio. This would provide a more comprehensive evaluation of computational efficiency across different compression methods, including SVD and sparse factorization approaches.\\n\\nYes, SVD comparison is kind of unfair; that's why we compare with it only in the section about the comparison of matrix approximation methods. \\n\\nWe are not using FLOPs since FLOPs are in a linear relationship with sparsity in the case of single-layer processing and also when using uniform sparsity over all layers (as is in the case of LLMs). 
\\nFLOPs do not have a direct relationship with sparsity in the case of nonuniform sparsity in vision models (different layers process different numbers of elements). That's why we use FLOPs in the OBC section.\\n\\nSVD has the obvious benefit of doing just two dense matrix multiplications and thus not having time overhead associated with sparse matrix multiplication. But, our primary baseline is regular pruning, which already has sparse matrix multiplication. We added section 4.4 to discuss the computational concern of DSF and argue that DSF has similar overheads as regular pruning.\\n\\n> ADMM optimization may be compute-intensive. Not much discussion about it unless I missed something. Could you provide an asymptotic time complexity analysis and/or empirical running time comparison of the ADMM? You may also discuss the trade-offs between computational cost and compression quality, as it would give readers a clearer understanding of practical applicability of the proposed method.\\n\\nThe original ADMM paper (https://openreview.net/forum?id=1hcpXd9Jir) shows that ADMM is better than solving this problem via gradient descent and is definitely better than $n$ independent linear regressions. It also shows that it is as fast as SparseGPT.\\n\\nAt the end of section 5.1, we mention the pruning time. We added one more experiment to the appendix, showing the relationship between pruning time and model quality.\\n\\n> Could you discuss how this is related to sparse coding?\\n\\nWe added this to the related work section.\\n\\nWe hope that these answers clarify your concerns.\"}", "{\"summary\": \"The paper proposes Double Sparse Factorization (DSF) of the weight matrices to prune them effectively. They formulate it as an alternating optimization and optimize using ADMM. 
The experiments show a clear benefit of the proposed method on Llama for a language task and resnet on image classification.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea is nice, the problem formulation is neat, and using ADMM for optimization is elegant.\\n2. On pruning LLAMA, the method shows clear benefit over the compared methods. Image classification experiments are marginally better than previous methods.\", \"weaknesses\": \"1. The SVD comparison is unfair in my opinion. SVD is more suited for low-rank compression and it may not enforce sparsity. Using the sparsity ratio as the main criterion may not be ideal. Why not use FLOPs? As FLOPS directly relates to inference speed as opposed to sparsity ratio. I would suggest that the authors include a comparison based on FLOPs in addition to the sparsity ratio. This would provide a more comprehensive evaluation of computational efficiency across different compression methods, including SVD and sparse factorization approaches.\\n2. ADMM optimization may be compute-intensive. Not much discussion about it unless I missed something. Could you provide an asymptotic time complexity analysis and/or empirical running time comparison of the ADMM? You may also discuss the trade-offs between computational cost and compression quality, as it would give readers a clearer understanding of practical applicability of the proposed method.\", \"questions\": \"1. Could you discuss how this is related to sparse coding?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author's response\", \"comment\": \"Thank you very much for a helpful and insightful review.\\n\\nHere are the responses to your concerns; we split them over multiple comments.\\n\\n>Section 3.1 would greatly benefit from a more detailed explanation of the ADMM method. How are Z and U initialized? 
I understand that it is not your job to explain ADMM in detail, but I think that nevertheless the paper would greatly benefit from more detailed remarks - at least in the appendix. Since this method is not standard (at least in the pruning literature and to my knowledge), I think it would be helpful to make this more clear.\\n\\n>In two sentences (Line 149, 150) you basically explain how you find the sparsity mask. Why do you precondition? How exactly is the cubic schedule (I presume Zhu & Gupta?) implemented, over how many iterations, with which interval between the increases? I am trying my best to infer this from somewhere, but it is nowhere to be found? Either I am missing something or the paper is lacking a crucial part, namely how the sparsity mask is found.\\n\\nWe agree that the ADMM method for pruning is underexplored and not well known.\\nWe expanded section 3.1 to provide a quick overview of ADMM method used by \\\"Fast and Effective Weight Update for Pruned Large Language Models\\\" (https://openreview.net/forum?id=1hcpXd9Jir). Mainly, we talk about what ADMM does, how layer-wise pruning maps onto ADMM, and how the pruning mask is found.\\nWe decided not to put more details into an appendix because we believe that if the reader needs even more information, reading the original paper by Boza would be much more beneficial than reading the appendix in this paper.\\n\\n> In Line 258, you state that you are using the Wanda saliency map, I think it would be good to give the mathematical formulation to that, especially how you \\\"scale one of the factors back\\\".\\n\\nWe added this to section 4.3.\\n\\n> Lines 37-39: If you replace the dense weight matrix with a product of two sparse matrices, will your model not be much slower at inference than when replacing with just a sparse matrix? For Low-rank decomposition, you at least get two linear layers which are much smaller dense matrices, but in your case, you basically have two sparse matrices. 
While you argue in Line 162 that the total number of multiplications is equal, this is far from realizable on the existing hardware. In practice, you incur a non-trivial overhead. I would like to hear the authors' thoughts on this.\\n\\nWe added section 4.4 to discuss the DSF method's computational concerns. Part of our argument regarding speed can be summarized as follows:\\n\\na) In many cases (e.g., local LLM inference), the main concern is fitting the best model into a small memory footprint. In this case, a slight decrease in inference speed might be tolerable. \\n\\nb) There are cases (again, for example, local single batch LLM inference) where the main computational bottleneck is transferring weights from memory to the local cache. This is explored in Flash-LLM (https://arxiv.org/pdf/2309.10285). Memory transfer is similar for DSF and for single sparsity.\\n\\nc) Doing two sparse multiplications is not much slower than doing one sparse multiplication with the same total number of nonzeros. We tested this using DeepSparse and found that DSF is ~10-20% slower than ordinary sparse multiplication (this is in line with other literature), but still faster than dense multiplication.\\n\\n> Line 50: \\\"our method is the first layer-wise pruning method in which the larger pruned model is better than the dense smaller model\\\" - Are you sure this is true? I feel like already the original SparseGPT paper gets fairly close and there have been a variety of improvements since then, e.g. using non-uniform layer-wise sparsity. Maybe this claim can or should be made more precise.\\n\\nSparseGPT was not better in terms of perplexity (5.63 for pruned Llama2-13B, 5.12 for dense Llama2-7B), but is better on zero-shot benchmarks. 
Outlier weighted sparsity (https://arxiv.org/pdf/2310.05175) might be better, but they do not report such results.\\n\\\"Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models\\\" (https://openreview.net/forum?id=rgtrYVC9n4) reports better results in zero-shot benchmarks but does not report perplexity.\\n\\nThus, we are adjusting our claim to uniform layer-wise pruning and perplexity measure. \\n\\n> Missing ablations: The paper is fixing a lot of hyperparameters and making claims without ablations. That includes e.g. the selection of sparsity distribution between the matrices (Line 209) or the initialization for A and B (Lines 248-250), among others. Such ablations should be added to justify the choice of parameters.\\n\\nWe added even more ablations to the appendix.\"}", "{\"title\": \"One more note\", \"comment\": \"One more note:\\n\\nWe have moved the whole section about layer-wise error comparison (comparing DSF with magnitude pruning, SVD, ...) to the appendix.\\n\\n\\nIs there anything more we can do to clarify your concerns?\"}", "{\"comment\": \"Thank you very much for raising the score and for the very good feedback.\"}", "{\"title\": \"Additional results for LLM finetuning (distillation)\", \"comment\": \"We finished limited finetuning (distillation) experiments (run for 2 days on 4xA100).\\nWe distilled Llama2-7B and compared the fine-tuning of regularly pruned model via ADMM with 50% density and the DSF pruned model with 45% density (after accounting for masks, these models have similar storage sizes).\\nResults are summarized in the appendix, and here we put the table for the reference:\\n\\n| Model | Pruning type | PPL w/o finetuning | Zero-shot w/o finetune | PPL w/ finetuning | Zero-shot w/ finetuning |\\n|-----------|--------------|--------------------|------------------------|-------------------|-------------------------|\\n| Llama2-7B | Dense | 5.12 | 59.71 | - | - |\\n| Llama2-7B | ADMM 50% | 6.33 | 56.64 | 5.61 | 
58.00 |\\n| Llama2-7B | DSF 45% | 5.78 | 57.03 | 5.35 | 59.00 |\\n\\nAs we can see, DSF retains its advantage even after finetuning.\"}", "{\"title\": \"Author response part 2\", \"comment\": \"> Table 1: Why are you not comparing to SparseGPT, am I missing something? In my experience, SparseGPT is a very strong baseline. Also, why do you omit Wanda for 30% density? Is Wanda using a \\\"finalization\\\" step as well, i.e., are you reconstructing the remaining weights after pruning? You get that more or less for free if you pass the calibration data through anyway.\\n\\nADMM by Boza is better than SparseGPT, so we feel that including it in Table 1 does not bring any value.\\nWe added SparseGPT results to Table 1. We omitted 30% density with Wanda, because it has bad results. But since this raised questions, we put it back. Wanda algorithm does not use finalization, it just selects weight to prune. But Wanda with finalization is just the ADMM-1 algorithm in the ADMM pruning paper and is generally worse than ADMM with pruning done gradually.\\n\\n> Section 5.4: I find the choice of hyperparameters for the retraining/fine-tuning quite arbitrary. You use a stepped schedule for most of the pretraining, then use a stepped learning rate schedule for retraining as well (70 total) epochs. [1] shows that if you properly choose the initial learning rate of a linear schedule, you can recover the accuracy drop of magnitude pruning in very few iterations. I am not sure if these results would withstand scrutiny. It would be good to use best practices here, i.e., for the convolutional networks you can definitely use a linear/cosine schedule for pretraining, and then choose the initial learning rate for linear-schedule-retraining adaptively, as in [1]. 
This will give much more realistic results.\\n\\nFirst of all, in the Imagenet experiment, we already used a linear schedule for fine-tuning (we also compared it to cyclical pruning to ensure that our baseline is high-quality).\\n\\nBut you are completely right about CIFAR. \\nHere, we switched to the linear learning rate schedule for finetuning, which leads to ~0.5% gains in almost all cases (and DSF is still better). Thank you for pointing this out. We keep the pretraining schedule as is since it produces high-quality dense results (slightly better than reported in the original resnet paper).\\n\\n>Line 131: I presume it should be \\\"the layer-wise pruning problem\\\".\\n>In general, you do not seem to use the glossary package and define your DSF-acronym over and over again. ...\\n\\nThank you for pointing all of this out. DFS is just a typo. We also removed unnecessary DSF definitions.\\n\\n> In Line 465 you state that your method does not support gradual pruning with fine-tuning between pruning steps, could you elaborate why? I am not sure what I am missing here.\\n\\nFirst of all, we changed the wording from \\\"does not support\\\" to \\\"it is unclear how to do.\\\" Here is our reasoning why it is unclear:\\nImagine that your target density is 25%. In regular pruning, you can first prune to 50% density, then finetune network, and then prune again to 25% density. This works because you can easily prune the already-pruned matrix. But if you apply DSF, how should you apply DSF again with a lower target density? You could multiply the factors to get a dense matrix and factorize that, but is it the correct way?\\n\\n\\n> In Line 196, you first \\\"look into the projection problem\\\". I am not quite sure I understand correctly how that is not the entire problem? A proper solution to that is what you are looking for, isn't it?\\n\\nThe original layer-wise pruning problem is min ||XW - XAB||^2. 
The projection problem is min ||X - AB||^2, thus a simplified version.\\n\\n\\nThank you again for your very good suggestions. We hope our response clarifies your concerns. If that\\u2019s the case, we would greatly appreciate it if you would consider raising your score.\"}", "{\"comment\": \"Thank you for your response. I appreciate the revision of the paper, which makes a lot of things more accessible.\\n\\n> We added this to section 4.3.\\n\\nSo are you here using the Hadamard product between a real value (the norm) and a matrix, or am I reading this in a wrong way?\\n\\n> We added section 4.4 to discuss the DSF method's computational concerns.\\n\\nI greatly appreciate the revisions, they should have been included in the original manuscript, since this setting is not as common. In my personal experience, executing two layers in a row (even with much smaller and dense matrices), leads to significant slowdowns. Also, up to 20% slowdown in your measurements honestly seems like a lot, given the perplexity improvement is not that dramatic. But I agree that this is up to the choice of the practitioner. I am also not entirely sure that the storage requirements are actually the same, given that starting from an $n \\\\times m$ matrix, you end up with an $n \\\\times n$ and an $n \\\\times m$ matrix, i.e., you increase from $nm$ to $n^2+nm$ parameters, despite enforcing the same overall sparsity. I might be mistaken here, but e.g. for CSR format, the row pointer can also depend on the number of rows, which would clearly yield some overhead here. \\n\\n> ADMM by Boza is better than SparseGPT, so we feel that including it in Table 1 does not bring any value. We added SparseGPT results to Table 1. We omitted 30% density with Wanda, because it has bad results. But since this raised questions, we put it back. Wanda algorithm does not use finalization, it just selects weight to prune. 
But Wanda with finalization is just ADMM-1 algorithm in ADMM pruning paper and is generally worse than ADMM with pruning done gradually.\\n\\nThank you for adding SparseGPT. I do not think that referring to another paper (which you apparently base on) and stating that ADMM is better, not requiring you to compare to SparseGPT, is really an option. The same holds for Wanda, how is the reader supposed to know that Wanda is ADMM-1 from the ADMM paper? Especially since you state that e.g. SparseGPT takes roughly half the time of your algorithm, and that time could be used to reconstruct the weights given the found sparsity mask of SparseGPT. In my experience, this improves even SparseGPT and would yield a more realistic comparison. It would be good to compare the methods on equal terms then. Anyways, thanks for clarification.\\n\\n> But you are completely right about CIFAR. Here, we switched to the linear learning rate schedule for finetuning, which leads to ~0.5% gains in almost all cases (and DSF is still better). Thank you for pointing this out.\\n\\nThat is more clear now, thanks.\\n\\n> First of all, we changed the wording from \\\"does not support\\\" to \\\"it is unclear how to do.\\\" Here is our reasoning why it is unclear: Imagine that your target density is 25%. In regular pruning, you can first prune to 50% density, then finetune network, and then prune again to 25% density. This works because you can easily prune the already-pruned matrix. But if you apply DSF, how should you apply DSF again with a lower target density? You could multiply the factors to get a dense matrix and factorize that, but is it the correct way?\\n\\nI see. \\n\\n> The original layer-wise pruning problem is min ||XW - XAB||^2. The projection problem is min ||X - AB||^2, thus a simplified version.\\n\\nMaybe I am missing something again, but I assume that then the projection problem should involve W and not X? 
But thanks for clarifying, I was not entirely sure whether I understood that distinction correctly.\\n\\nI thank the authors for their detailed answer and especially for revising the PDF accordingly. A major concern was the presentation of the work and I think that this has been properly addressed, as now many open gaps have been filled. Still, I am not entirely convinced that the setting of having two sparse matrices instead of a single one is interesting and relevant, especially given the fact that we are dealing with unstructured sparsity here and a) the improvements over existing algorithms are somewhat marginal and b) the costs of having two sparse matrices increase over having a single one. I will consider changing my score after the discussion phase.\"}", "{\"comment\": \"Thank you very much for a helpful and insightful review.\\n\\nHere are the responses to your concerns; we split them over multiple comments.\\n\\n> I have some significant concerns regarding the practical applicability of the proposed method.\\n\\nWe added section 4.4 to discuss DSF's computational concerns. We mainly argue that it is comparable to regular pruning, and it also gets much better results. \\nMain arguments are summarized below as responses to your detailed questions.\\n\\n> Fine-tuning (FT) / training memory overhead: During FT, the proposed method requires ~37% more memory to store the intermediate activations of X@A@B compared with X@W. Activations can account for a significant portion of the overall memory footprint during training and this should be acknowledged in the paper.\\n\\nThis is a very good point and completely true. One can mitigate the impact of this by using gradient checkpointing (we almost always use it when finetuning any LLM). 
We add simple experiments to the appendix, showing that DSF can be finetuned with inputs of similar size as regular pruning.\\n\\n> Mask overheads: Assuming a bit-mask compression strategy and no shared masks between A factors, we find a similar 37% increased overhead compared to single layer sparsity. With shared A factors this overhead drops to ~1%, assuming the mask is shared across all 36 decoder blocks. From this perspective, I find the fixed-mask variant of DSF to be the most practically interesting.\\n\\nYes, masking overhead is ~37%. However, we also need to store the actual weight so that the overall model size increase is not that dramatic (e.g., it increases from 7.3 to 7.7 GiB for 50% pruned Llama2-7B). We discuss this in section 4.4 and also add a graph that compares actual model size with model perplexity to the experiments section. It shows that DSF is much better than regular layer-wise pruning for any storage size.\\n\\n> Indices instead of bitmasks: In the introduction, the authors suggest using indices to store the locations of non-zero elements. However, given that we require uint16 indices to represent all positions in this weight tensor, this would only be practical at sparsities >= 15/16 compared to bit-masking. Given that this is currently an unobtainable level of sparsity for LLMs and roughly the limit at which we are able to find performant CNNs I find the suggestion to use indices to store non-zero locations poorly motivated.\\n\\nYes, you are completely right. We removed this passage from the introduction.\\n\\n> Latency and Throughput: This is the most challenging dimension to estimate. Although the FLOPs analysis suggests similar performance to OBC, this may be misleading considering the additional matmul operations required in the low-rank decomposition and subsequent increase in overall memory bandwidth required to store and load intermediate activations between subsequent matmul kernel calls. 
I would be more convinced of the practical application for DSF if the authors include a discussion on runtime latency. This could be supported by preliminary benchmarking using Neural Magic\\u2019s DeepSparse Engine which would offer some empirical evidence of improved runtime properties.\\n\\n> Does DSF provide latency/throughput benefits over dense or typical sparse networks (single layer sparsity) when using DeepSparse Engine?\\n\\nYes, the runtime is a main concern with any sparsified neural networks. We added a discussion about this to the paper. Main summary:\\n\\na) In many cases (e.g., local LLM inference), the main concern is fitting the best model into a small memory footprint. In this case, a slight decrease in inference speed might be tolerable.\\n\\nb) There are cases (again, for example, local single batch LLM inference) where the main computational bottleneck is transferring weights from memory to the local cache. This is explored in Flash-LLM (https://arxiv.org/pdf/2309.10285). Memory transfer is similar for DSF and single sparsity.\\n\\nc) Doing two sparse multiplications is not much slower than doing one sparse multiplication with the same total number of nonzeros. We tested this using DeepSparse and found that DSF is ~10-20% slower than ordinary sparse multiplication (this is in line with other literature), but still faster than dense multiplication.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This work introduces double sparse factorization (DSF) which combines matrix decomposition with pruning to yield compressed neural networks with better generalization performance than the baselines used for comparison. 
The authors demonstrate that their proposed algorithm, an extension of the alternating direction method of multipliers (ADMM) algorithm, is capable of achieving competitive results when compressing pretrained LLMs and CNNs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"While pruning of factors obtained from matrix decomposition is not a novel contribution per se (Le Magoarou & Gribonval, 2016), its application to pretrained model compression is novel as far as I know. In any case, this work clearly distinguishes itself from prior art by focusing on the model compression task, particularly in the context of LLMs.\", \"The paper is well written.\", \"The empirical results outperform strong, SOTA baselines in a variety of contexts for LLMs and CNNs.\", \"The authors take care to consider some of the practical concerns of their method, such as masking overhead. The demonstration of the generalization of DSF with a shared fixed A factor mask is particularly compelling.\", \"Compression and efficient inference is of particular importance as model sizes continue to grow and scale. As such, this work addresses a timely and important topic.\"], \"weaknesses\": [\"Overall, I am leaning towards accept. However, I have some significant concerns regarding the practical applicability of the proposed method. Fundamentally, we require compressed models that offer advantages in one or more of the following dimensions: memory overhead, latency, and/or throughput. For each of these dimensions, we can consider both training and inference. For the following discussion, let\\u2019s consider an intermediate fully-connected layer from a decoder block in a LLaMa 2-7B @ 50% sparsity. This layer\\u2019s weight tensor is of shape (11008, 4096).\", \"Fine-tuning (FT) / training memory overhead: During FT, the proposed method requires ~37% more memory to store the intermediate activations of X@A@B compared with X@W. 
Activations can account for a significant portion of the overall memory footprint during training and this should be acknowledged in the paper.\", \"Mask overheads: Assuming a bit-mask compression strategy and no shared masks between A factors, we find a similar 37% increased overhead compared to single layer sparsity. With shared A factors this overhead drops to ~1%, assuming the mask is shared across all 36 decoder blocks. From this perspective, I find the fixed-mask variant of DSF to be the most practically interesting.\", \"Indices instead of bitmasks: In the introduction, the authors suggest using indices to store the locations of non-zero elements. However, given that we require uint16 indices to represent all positions in this weight tensor, this would only be practical at sparsities >= 15/16 compared to bit-masking. Given that this is currently an unobtainable level of sparsity for LLMs and roughly the limit at which we are able to find performant CNNs I find the suggestion to use indices to store non-zero locations poorly motivated.\", \"Latency and Throughput: This is the most challenging dimension to estimate. Although the FLOPs analysis suggests similar performance to OBC, this may be misleading considering the additional matmul operations required in the low-rank decomposition and subsequent increase in overall memory bandwidth required to store and load intermediate activations between subsequent matmul kernel calls. I would be more convinced of the practical application for DSF if the authors include a discussion on runtime latency. This could be supported by preliminary benchmarking using Neural Magic\\u2019s DeepSparse Engine which would offer some empirical evidence of improved runtime properties.\", \"2:4 support: It\\u2019s unclear if the proposed method can support 2:4 sparsity as this would require a fixed sparsity level of 50% for both factors. 
The authors found that a smaller level of sparsity (~33%) yields the best performance but this prohibits using 50% sparsity in both factors as required for 2:4.\", \"Hyperparameter sensitivity: There are a number of specific sparsity values used in the experimental method (16% sparsity, 25% sparsity, etc.). How sensitive is DSF to these values? If DSF is applied to a new model family, is it required to perform a hyperparameter search to find the optimal sparsity level for the smaller factor? How were these sparsity levels found? Could the authors add the results of their hyperparameter sweep for these values, assuming this was how the values were determined.\", \"Reliance on PPL: The authors claim that their method \\u201cis the first layer-wise pruning method in which the larger pruned model is better than the dense smaller model.\\u201d. I believe this claim requires more evidence to support, namely, downstream evaluation for the compressed LLMs on real-world tasks. I would be more willing to support this claim with empirical results from the pruned models on OpenLLM Leaderboard v1 or similar. Relying on perplexity alone has been shown to be misleading for compressed models [1].\", \"LLM fine-tuning: The fine-tuning results section would benefit from expanding its scope to include fine-tuning of the compressed LLMs. I would also be interested to see what the memory overhead looks like for DSF when naive masked sparsity is used.\", \"Based on the above it appears that the memory overhead with a fixed A factor mask is comparable to regular pruning. However, it appears likely that the latency for DSF will be worse than models pruned with other techniques (Wanda, etc.). It is unclear whether DSF can support 2:4 sparsity whereas other methods such as Wanda do support this format (albeit with high loss). If tuning is required per model to set the smaller factor sparsity, this may result in DSF being much more expensive to use on new models. 
I am willing to accept this paper as the generalization results are good and motivate future work exploring this direction. However, given that DSF is fundamentally motivated by network compression, a more holistic discussion of the above points would convince me to raise my score and, in my opinion, raise the impact of this work.\", \"[1] A. Jaiswal, Z. Gan, X. Du, B. Zhang, Z. Wang, and Y. Yang, \\u201cCompressing LLMs: The Truth is Rarely Pure and Never Simple,\\u201d Oct. 02, 2023, arXiv: arXiv:2310.01382. doi: 10.48550/arXiv.2310.01382.\"], \"questions\": [\"Specifically which LLaMa model is used for reporting the results in Table 1? The authors refer to both LLaMa 1 and 2 in their experimental setup.\", \"Does DSF provide latency/throughput benefits over dense or typical sparse networks (single layer sparsity) when using DeepSparse Engine?\", \"Can DSF be extended to 2:4 sparsity? What is the trade-off with generalization performance?\", \"How do the pruned LLMs compare when evaluated on OpenLLM v1 leaderboard evaluation tasks?\", \"Missing results for Wanda at 70% sparsity: Why were these not included in Table 1?\", \"What is the memory overhead when fine-tuning DSF LLMs in a naive way (i.e., with masked paramters intead of compressed representations)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global response and summary\", \"comment\": \"We sincerely thank all reviewers for their constructive and thoughtful feedback.\\nReviewers agreed that our Double Sparse Factorization (DSF) produces strong SOTA results for model pruning.\\nThey also praised the novelty of the idea, paper clarity, and ADMM usage.\\n\\nThe reviewers' main concern was the computation overhead of the DSF. 
We added whole section 4.4 to discuss this in deeper detail and summarize the main points as follows:\\n\\n* **Memory storage overhead.** Storing two masks requires more space than storing one mask (2x more for square matrices, less for rectangular ones). However, for small-medium sparsities (50-75%), this cost is significantly smaller than the cost of storing nonzero values. \\nWe also added Figure 3 to compare the total model storage size and perplexity for various sparsities. DSF is still better than the best layer-wise pruning method.\\n\\n* **Compute time overhead.** Doing two sparse multiplications instead of one (with the same total number of nonzeros) incurs non-trivial overhead. We measured this on the CPU and saw a 10-20% increase in running time. However, on the DeepSparse engine, even DSF with 50% of nonzeros was still faster than a dense model.\"}", "{\"title\": \"Deepsparse benchmark clarifications\", \"comment\": \"Thanks for the detailed rebuttal.\\n\\n> c) Doing two sparse multiplications is not much slower than doing one sparse multiplication with the same total number of nonzeros. We tested this using DeepSparse and found that DSF is ~10-20% slower than ordinary sparse multiplication (this is inline with other literature), but still faster than dense multiplication.\\n\\nIn Table 6 I see the comparison with the single-sparse matrix. Could you share the dense runtimes as well (ideally wrapped with deep sparse engine or torch.compile(mode='max-autotune')? Some of the features of deepsparse engine will also benefit the dense model so it's important to compare these settings explicitly.\", \"from_table_6_caption\": \"> Each time, we compare the runtime of 48 layers with simple sparsity or DSF with an equal number of nonzeros (i.e., running 96 layers with half the density)\\n\\nThis setting is somewhat different than what DSF proposes. 
Could you share deepsparse benchmarks with alternating 16/25% sparsity and the remainder non-zeros in even-numbered factors allocated based on global sparsity target as discussed in this paragraph:\\n\\n> When factorizing square matrices (mainly in self-attention), we set the sparsity of one sparse factor to 16%. When factorizing rectangular matrices, the smaller factor will have 25% sparsity. The number of nonzeros in the other factor is just the target number of nonzeros minus the number of nonzeros in the first factor.\\n\\nCould you repeat the deepsparse benchmarks with additional cores as well? It would be beneficial to see the multi-threaded runtimes too. \\n\\nThe downstream evals are great to see and much appreciated as is the formal discussion of computational considerations in the main body of the paper. \\n\\nI remain concerned that this work has limited practical relevance due the increased mask overhead and latency compared to single-sparse networks. If the authors can response to the above requests and if the comparisions with dense appear to show at least modest latency benefits I would be willing to improve my score.\"}", "{\"comment\": \"Thanks for these further clarifications. In my opinion, you have presented sufficient preliminary evidence that this approach could lead to practical benefits. As such, I have increased my score to 8.\"}", "{\"title\": \"Quick replies\", \"comment\": \"Thank you for the replies, here is a quick clarification.\\n\\n> So are you here using the Hadamard product between a real value (the norm) and a matrix, or am I reading this in a wrong way?\\n\\nYes, this was a slight abuse of notation, we changed to explicit element calculation in the paper. 
We calculate the norm of each feature and multiply the corresponding row in the matrix W (assuming your linear layer calculates $XW$, where $X$ is input and $W$ is the weight matrix).\\n\\n> I am also not entirely sure that the storage requirements are actually the same, given that starting from an matrix, you end up with an and an matrix, i.e., you increase from to parameters, despite enforcing the same overall sparsity. I\\n\\nWe have the same number of nonzeros, so same storage requirement for them. Storage requirements for sparsity masks will be higher, but masks are a smaller part of the storage costs. We also have Figure 7 in the Appendix, which shows that DSF models are much better even when we consider overall storage size (not just the number of nonzeros). \\n\\n> I might be mistaken here, but e.g. for CSR format, the row pointer can also depend on the number of rows, which would clearly yield some overhead here. \\n\\nYes, there will be a small overhead. For example, if you have a matrix size 4096*4096 and 95% sparsity, you need 838860 numbers for nonzeros and 838860 numbers for column indices and 4096 numbers for row pointers. DSF would need an additional 4096 number for row pointers in the second matrix, which is less than 0.3% of overall storage costs.\\n\\n> that time could be used to reconstruct the weights given the found sparsity mask of SparseGPT. In my experience, this improves even SparseGPT and would yield a more realistic comparison.\\n\\nSparseGPT does not have an explicit parameter that trades solution quality and solution time. There is a block size (the lower the blocksize, higher the solution time), but that does not lead to an increase in solution quality in our experience (and for example for Llama-2-7B and 50% sparsity changing block size from 128 to 64 lead to much worse solution, perplexity went from 6.52 to 6.99). \\n\\n> Maybe I am missing something again, but I assume that then the projection problem should involve W and not X? 
\\n\\nSorry, this is our spelling error in that comment.\", \"it_should_read\": \"\\\"The projection problem is min ||**W** - AB||^2, thus a simplified version.\\\"\\nPaper has this everywhere correctly, this was a mistake only in this comment.\"}", "{\"title\": \"Authors' response part 2\", \"comment\": \"> 2:4 support: It\\u2019s unclear if the proposed method can support 2:4 sparsity as this would require a fixed sparsity level of 50% for both factors. The authors found that a smaller level of sparsity (~33%) yields the best performance but this prohibits using 50% sparsity in both factors as required for 2:4.\\n\\n> Can DSF be extended to 2:4 sparsity? What is the trade-off with generalization performance?\", \"dsf_cannot_do_2\": \"4 sparsity right now. Also, if both factors are in 2:4 sparsity and DSF runs over a square matrix, then you have the same number of nonzeros as the original matrix, which does not bring any gains. What might be viable with 2:4 are two things:\\n\\na) Having one factor in 2:4 format and the second one as very sparse (95+% of sparsity). \\n\\nb) Having block sparsity with blocks having 2:4 sparsity.\\n\\nBut both options require significant research and tuning, and we leave them for future work right now.\\n\\n> Hyperparameter sensitivity: There are a number of specific sparsity values used in the experimental method (16% sparsity, 25% sparsity, etc.). How sensitive is DSF to these values? If DSF is applied to a new model family, is it required to perform a hyperparameter search to find the optimal sparsity level for the smaller factor? How were these sparsity levels found? Could the authors add the results of their hyperparameter sweep for these values, assuming this was how the values were determined.\\n\\nWe found that having both factors with an equal number of nonzeros works quite well but can be slightly tuned. 
We added more ablations to the appendix (we determined hyperparameters by taking a couple of layers and observing reconstruction error). \\n\\n> Reliance on PPL: The authors claim that their method \\u201cis the first layer-wise pruning method in which the larger pruned model is better than the dense smaller model.\\u201d. I believe this claim requires more evidence to support, namely, downstream evaluation for the compressed LLMs on real-world tasks. I would be more willing to support this claim with empirical results from the pruned models on OpenLLM Leaderboard v1 or similar. Relying on perplexity alone has been shown to be misleading for compressed models [1].\\n\\n> How do the pruned LLMs compare when evaluated on OpenLLM v1 leaderboard evaluation tasks?\\n\\nWe are adjusting our claim to uniform layer-wise pruning and perplexity measure. \\nWe also added measurements on seven zero-shot evaluations used in other pruning papers before (arc-easy, arc-challenge, winogrande, and hellaswag are also parts of OpenLLM v1).\\n\\n> LLM fine-tuning: The fine-tuning results section would benefit from expanding its scope to include fine-tuning of the compressed LLMs.\\n\\nA decent full-parameter finetuning run takes a lot of time (one also needs to do a decent hyperparameter sweep, at least for the learning rate). We are unsure whether we can make this happen during the discussion period. \\n\\n> I would also be interested to see what the memory overhead looks like for DSF when naive masked sparsity is used.\\n\\nUsing masked sparsity naively would result in many problems (mainly ~40% more space needed for weight storage). We tested a different approach, where we stored weight in compressed format and unpacked on the fly during the forward pass. If the finetuning batch is large enough, this incurs a negligible time overhead. We explore this more in the appendix A.4. \\n\\n> Specifically which LLaMa model is used for reporting the results in Table 1? 
The authors refer to both LLaMa 1 and 2 in their experimental setup.\\n\\nWe have Llama1-7B (denoted as 1-7B) and Llama2-7/13/70B (denoted as 2-7B, 2-13B, 2-70B respectively). \\n\\n> Missing results for Wanda at 70% sparsity: Why were these not included in Table 1?\\n\\nWe omitted 30% density with Wanda because it had bad results. But since this raised questions, we put it back.\\n\\nThank you again for your very good suggestions. We hope our response clarifies your concerns. If that\\u2019s the case, we would greatly appreciate it if you would consider raising your score.\"}" ] }
DvU9ijSn1v
Mosaic-IT: Free Compositional Data Augmentation Improves Instruction Tuning
[ "Ming Li", "Pei Chen", "Chenguang Wang", "Hongyu Zhao", "Yijun Liang", "YuPeng Hou", "Fuxiao Liu", "Tianyi Zhou" ]
Finetuning large language models with a variety of instruction-response pairs has enhanced their capability to understand and follow instructions. Current instruction tuning primarily relies on teacher models or human intervention to generate and refine the instructions and responses for training, which are costly, non-sustainable, and may lack diversity. In this paper, we introduce Mosaic Instruction Tuning (Mosaic-IT), a human/model-free compositional data augmentation method that can efficiently create rich and diverse augmentations from existing instruction tuning data to enhance the LLMs. Mosaic-IT randomly concatenates multiple instruction data into one and trains the model to produce the corresponding responses with predefined higher-level meta-instructions to strengthen its multi-step instruction-following and format-following skills. Our extensive evaluations demonstrate a superior performance and training efficiency of Mosaic-IT, which achieves consistent performance improvements over various benchmarks and an $80\%$ reduction in training costs compared with original instruction tuning. Our codes and data are available at https://anonymous.4open.science/r/mosaic-955B.
[ "Large Language Model", "Instruction Tuning", "Supervised Finetuning", "Data Augmentaion" ]
Reject
https://openreview.net/pdf?id=DvU9ijSn1v
https://openreview.net/forum?id=DvU9ijSn1v
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zoQ7PQ7H8T", "yCDGEFXtXA", "tTTPhzuK49", "roGAsCOuPA", "rOMPWDwKqN", "qJUF52393k", "mWJbQNRoIh", "m5bOi7aW3u", "lSszWINwH6", "jqmpc45WrD", "if9zYq7A3o", "i57pNoOIH9", "hYemrOzb0g", "gUqyEmtdQb", "et7UZzM4AL", "dyK3yH0XZO", "d3QcW5Evpc", "YvcYvwxvGu", "Ycw2mdjHbF", "UEicxZGx6x", "T8cp5frEuY", "RGBb8ImvJe", "QJlSbTLp7b", "PeMBTOKsXO", "NPnOGKlKfz", "Lit5N7ytcW", "Jhhd9krxmz", "H7NOJxwyOv", "9NEc3Tu86K", "5mjXUsLZ91", "441JfxTTpf", "381Pvptv8S" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732648769993, 1732667495480, 1732555695891, 1733282142313, 1737524246087, 1732555793541, 1732420254361, 1733186238824, 1730706179639, 1732420084374, 1732420124029, 1732921195879, 1732420227222, 1733064104131, 1730678849031, 1732555722509, 1732555759913, 1733205637720, 1732420199845, 1732742204054, 1732684232322, 1732921788737, 1733205484379, 1732420296372, 1733205573854, 1729922876996, 1730647858063, 1732420012800, 1732921702460, 1733282077482, 1733014862229, 1734880586753 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13227/Reviewer_w54h" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Reviewer_w54h" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Reviewer_mjBt" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Reviewer_mjBt" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Reviewer_7WNG" ], [ "ICLR.cc/2025/Conference/Submission13227/Reviewer_cPom" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Authors" ], [ "ICLR.cc/2025/Conference/Submission13227/Reviewer_cPom" ], [ "ICLR.cc/2025/Conference/Submission13227/Area_Chair_2VLd" ] ], "structured_content_str": [ "{\"title\": \"Response to authors\", \"comment\": \"Hi,\\n\\nI thank the authors for the clarifications, and it addresses some of my concerns. Please include them in the future versions of the paper. I have increased my rating.\"}", "{\"comment\": \"Thank you for your reply! 
We will surely include all the discussions in the later version.\"}", "{\"title\": \"A kind reminder\", \"comment\": \"Dear Reviewer w54h,\\n\\nAs we are approaching the deadline of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Should there be any lingering points that require further attention, please rest assured that we are enthusiastic about the opportunity to provide comprehensive responses to any subsequent queries or comments you may have.\\n\\nYour constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"A kind summary\", \"comment\": \"Dear Reviewer cPom,\\n\\nAs the discussion is about to end soon, we have prepared a concise summary of our responses to your last comment:\\n\\n**Q1: Why random concatenation is optimal.** \\n\\nWe did not claim that random concatenation is optimal. We chose it because it is entirely cost-free, requiring no extra prior knowledge or semantic grouping of data. The non-trivial improvement from such a straightforward application of the proposed augmentations is important in demonstrating the effectiveness of Mosaic-IT. \\n\\n**Q2: Structured or semantically grouped approaches.**\\n\\nWe implemented the semantic grouping approach; experimental results and conclusions are provided. Concatenating semantically similar samples to train LLMs can help generate condensed responses of comparable quality to pure-random concatenation, as reflected by the results on Alpaca Eval 2 (LC). \\n\\nWe hope this summary helps you check whether and how we addressed the concerns you raised in your review and discussion. Based on the new updates, we would like to kindly ask you to consider raising the current rating to reflect our new results and improvement. 
\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"A kind reminder\", \"comment\": \"Dear Reviewer 7WNG,\\n\\nAs we are approaching the deadline of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Should there be any lingering points that require further attention, please rest assured that we are enthusiastic about the opportunity to provide comprehensive responses to any subsequent queries or comments you may have.\\n\\nYour constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer #3(cPom)\", \"comment\": \"**Weakness**:\\n\\n>Q1: The paper lacks a theoretical basis for why random concatenation should improve instruction-following abilities; structured or semantically grouped concatenations could offer further insights.\\n\\nPlease kindly refer to the Q1 of the General response. \\n\\n>Q2: Randomly concatenated instructions may introduce noise, potentially impacting training stability. An analysis of this effect on model perplexity would strengthen the work.\\n\\nThank you for your insightful suggestions! \\n\\nWe examined the training stability by checking the training curves of loss on different LLMs. Similar to the curves in Figure 4, the starting loss of our method is typically higher compared to the baseline training curves due to the difficulty of the compositional data. This indicates the weakness of existing LLMs in following these compositional instructions, possibly due to the noises or the interference among multiple instructions. 
However, during the training phase, the loss declines consistently and stably, which indicates that the LLM makes progress in generating each response selectively or in a predefined order according to the corresponding instruction and meta-instruction, without being affected by the interference or noise from other instructions. More discussion will be included in our later version.\"}", "{\"title\": \"Further experiments and responses to Reviewer cPom\", \"comment\": \"Thanks for your follow-up response! We would like to address your remaining concern about whether random concatenation is the best strategy for our proposed compositional augmentations.\\n\\n>Q1: It does not fully address why random concatenation is the optimal strategy.\\n\\nWe did **not claim that random concatenation is the optimal strategy**. We chose it because it demonstrates that, **as the first attempt at model-free compositional augmentation for LLMs, the most straightforward random concatenation already brings significant improvement**. Compared with more sophisticated concatenation strategies, random concatenation stands out as it is **completely cost-free** and does not require any prior knowledge or semantic understanding of the concatenated samples. This extreme efficiency represents one of our greatest contributions. We discussed this in the Limitation section of our manuscript. We will extend this discussion in our future version. \\n\\n>Q2: To strengthen your argument, it would be helpful to include empirical comparisons between random concatenation and structured or semantically grouped approaches in the future revision.\\n\\nWe agree that including other concatenation methods could further strengthen our argument. So we followed your suggestion by conducting further experiments with a semantic grouping approach on the Alpaca-GPT4 dataset to finetune Mistral. Due to the limited time, we cannot fully extend the experiments to more datasets and models. 
But we will add them in our future versions. \\n\\n**Semantic Grouping:**\\nWe utilized \\u201csentence-transformers/all-mpnet-base-v2\\u201d to obtain the semantic embedding for each sample in the dataset, and then we applied the K-means algorithm to group these data samples into multiple clusters. To ensure enough samples per cluster, we set K=52 as the dataset contains 52k samples. Given the clusters, each concatenated sample is composed of multiple samples randomly drawn from the same cluster. We keep using the same training hyperparameters as before. In the table below, we report the performance on 2 evaluation metrics: pair-wise comparison and Alpaca Eval. \\n\\n| Method | Alpaca Eval 2 (LC) | Alpaca Eval 2 | Pair-wise Compare (with non-mosaic) | Pair-wise Compare (with pure-random) |\\n|---|---|---|---|---|\\n| Pure-random Concatenation | 5.00 | 7.81 | 1.349 | 1.000 |\\n| Concatenation with Semantic Groups | 7.80 | 6.51 | 1.275 | 0.936 |\\n\\nWe can draw insights from the above experiments:\\n1. The **semantic concatenation can still outperform the non-mosaic baseline** by a large margin, indicating the effectiveness and potential of our Mosaic-IT augmentations and tasks. \\n2. The semantic concatenation method has a slightly lower performance than the pure-random concatenation method on pair-wise comparison and Alpaca Eval 2 scores. However, it achieves a much higher Alpaca Eval 2 (LC) score. This result suggests that **the response quality of the model trained with semantic concatenation is on par with pure-random but the response length is shorter and more condensed**. \\n3. We found the semantic grouping leads to clusters with highly different average lengths of samples: The largest average length is 316.7 tokens while the smallest is 31.4 tokens. This discrepancy makes the lengths of Mosaic-IT concatenated samples more diverse, resulting in a better trade-off between quality and length of the responses. 
\\n\\n**Structure-based Grouping:**\\nFollowing your suggestion, we also tried grouping based on samples\\u2019 structures. Following common practice, we extracted the verb-noun pair of each instruction and tried to group the data samples by the verb-noun pair as it indicates the similarity of their targeted tasks. However, instructions in the modern instruction-tuning dataset are so diverse that most of the verb-noun pairs only appear a few times, thus it\\u2019s hard to group and conduct intra-cluster concatenation. We believe more structure-based grouping strategies can be investigated in the future. \\n\\nMore experiments will be conducted and included in the future version of our paper. The main novel discovery we highlighted is that the **cost-free random concatenation already brings non-trivial improvement**. However, we agree that the random selection strategy may **not** be optimal, and more sophisticated strategies should be further investigated.\"}", "{\"summary\": \"The paper argues that acquiring instruction-tuning data from a teacher model or humans is resource-intensive. In addition, it suggests that the complexity of single instruction can be limited for many instances which limits the instruction-following capabilities. To address this, the authors propose Mosaic-IT, a data augmentation strategy where the model is trained to follow multiple instructions via a meta instruction. Specifically, the paper considers multiple mosaic strategies including primary, maskout, permute, and format. Finally, the paper shows good improvements across models, instruction-tuning datasets and evaluation methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper proposes an interesting way to stack multiple instructions to teach more complex instruction-following capabilities to them. 
It is encouraging that the authors consider many ways in which the instruction-response data can be stacked.\", \"The paper performs a diverse set of experiments across many base language models, instruction-tuning datasets, and evaluation methods.\", \"The paper performs several ablation studies to understand the usefulness of different experimental components. The paper further analyzes the usefulness of the method using the smoothness of the learning dynamics.\"], \"weaknesses\": [\"Motivation: how much of instruction tuning data acquisition is a bottleneck? There are several papers that show that a small number of instruction tuning data is enough to enable strong instruction-following capabilities. With the rise of powerful small language models (e.g., 4o-mini, Gemini-Flash, Haiku), getting a lot of instruction tuning data is not a bottleneck in terms of resources. In addition, I do not understand the connection between instruction tuning and Dense and Aligned Captions paper from the VL literature. The authors should rethink the motivation in the introduction. It is unclear whether this strategy scales with data i.e., having more Mosaic-IT data beneficial or not.\", \"The absolute performance on Alpaca2-LC seems too low. According to the original leaderboard [1], the AlpacaEval LC performance of Alpaca 7B (w/ LLama-1) is 5.9%, and Vicuna is 6.3%. However, the paper indicates that the baseline performance with much stronger base models (Mistral and LLaMA-3-8B) and datasets (Alpaca-GPT4, Wizard-70K, Vicuna, Magpie) is quite low. This makes me wonder if the models have been instruction tuned properly or not.\", \"Table 2 suggests that baseline methods have better 2-round MT-Bench scores than Mosaic-IT. Shouldn\\u2019t the second round MT-Bench scores improve with Mosaic-IT augmentation? 
Mosaic-IT shares similarity with multi-turn chats in the sense that both require answering multiple instructions in the given context.\", \"[1] https://tatsu-lab.github.io/alpaca_eval/\"], \"questions\": \"Mentioned in the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer #1(w54h) (Part1)\", \"comment\": \">Q1.1: Motivation: how much of instruction tuning data acquisition is a bottleneck? There are several papers that show that a small number of instruction tuning data is enough to enable strong instruction-following capabilities.\\n\\nThis is still an open problem in the community. There are papers supporting that \\u201ca small number of instruction tuning data is enough for instruction-following capabilities\\u201d, including LIMA [1], Alpagasus [2], Cherry LLM [3], etc. However, there is also a very recent paper [4] mentioning data filtering is not effective for large-scale data. We will include this discussion in our later version. \\n\\nHowever, the discussion of this problem is out of the main scope of our paper, as we do not filter or discard any given data. **Our motivation and method are orthogonal to data filtering and data synthetic methods.** We aim to achieve cost-free data augmentations to further exploit existing instruction datasets without data filtering. The main difference and advantage of our method is that our data augmentation does not require any other models and is cost-free. \\n\\n>Q1.2 With the rise of powerful small language models (e.g., 4o-mini, Gemini-Flash, Haiku), getting a lot of instruction tuning data is not a bottleneck in terms of resources.\\n\\nPowerful small language models can reduce the generation cost of instruction tuning data more easily, however:\\n\\n1. Our method avoids the entire cost of data synthesis by any language models or neural networks. 
In contrast, small language models (SLMs) still produce concerning carbon footprints. Their lower cost per sample may come with a price of more trials. For example, to achieve superalignment (SLM-generated data improves LLMs), strategies like best-of-n are often needed but significantly increase the cost. \\n2. Our method aims to **exploit the potential of existing data** so the quality of augmented data is 100% guaranteed. In contrast, the newly generated data by SLMs still require careful and complex quality checks in practice to reduce the risk of degrading the original LLM (i.e., negative finetuning). \\n3. **Mosaic-IT is orthogonal and complementary to existing data filtering and data synthesis approaches.** It can be applied together with other methods to further improve LLMs. \\n4. As a model-free method, Mosaic-IT can avoid **the potential violation of licenses** limiting the usage of existing LLMs. \\n\\n>Q1.3: I do not understand the connection between instruction tuning and Dense and Aligned Captions paper from the VL literature. The authors should rethink the motivation in the introduction.\\n\\nThank you for your insightful suggestion! We used the idea of \\u201cdense alignment\\u201d from the \\u201cDense and Aligned Captions\\u201d paper to motivate the importance and challenges of generating multiple responses aligning with the corresponding instructions in the input according to the meta-instruction in Mosaic-IT. In the cited paper, learning with dense captions helps the model to achieve the capability of aligning each part of the image with corresponding descriptive captions. This is similar to the dense alignment between multiple instructions and responses in the compositional data by Mosaic-IT. We will modify the motivation to make it clearer. 
\\n\\n\\n>Q1.4: It is unclear whether this strategy scales with data i.e., having more Mosaic-IT data beneficial or not.\\n\\nOur experiments include datasets with various data sizes including 50k, 70k, 300k, and 1M (results shown in Q2). Training with Mosaic-IT data achieves consistently better performances across these datasets.\"}", "{\"title\": \"Response to Reviewer #1(w54h) (Part2)\", \"comment\": \">Q2: The absolute performance on Alpaca2-LC seems too low.\\n\\nThank you for reading our paper in such detail! \\n\\nFor finetuning on Vicuna 1M data, we randomly selected 300K samples for training due to our computation budget. Considering the diverse data quality of Vicuna 1M (which contains conversations in other languages and some dummy conversations like \\u201cHello!\\u201d \\u201cHello!\\u201d), we think the main cause of the low performance is the random selection. Thus, we further finetuned Llama-3-8B with all the 1M data to see how the performance goes; the results, shown below, are much better:\\n\\n| Model + Dataset | Alpaca Eval 2 (LC) | Alpaca Eval 2 |\\n|---|---|---|\\n| Llama-3-8B + Vicuna 1M | 8.6% | 8.4% |\\n| Llama-3-8B + Vicuna 1M + Mosaic | 9.7% | 9.1% |\\n\\nFor finetuning on Magpie data, we also used the filtered 300K data. All the training settings are kept the same as reported in the paper, except for the maximum sequence length, which we set to 4096 compared with their 8192 due to our computation budget. 
We further finetuned Llama-3-8B with the 8192 length to see how the performance goes; the results, shown below, are slightly better:\\n\\n| Model + Dataset | Alpaca Eval 2 (LC) | Alpaca Eval 2 |\\n|---|---|---|\\n| Llama-3-8B + Magpie 300K | 18.4% | 20.8% |\\n| Llama-3-8B + Magpie 300K + Mosaic | 20.5% | 22.6% |\\n\\nDespite the variance between the absolute performance of models finetuned by us and the reported performances, the comparison between the baseline and our method is conducted under exactly the same setting to ensure fairness, which we believe is solid for verifying the effectiveness of our method. \\n\\n>Q3: Table 2 suggests that baseline methods have better 2-round MT-Bench scores than Mosaic-IT. Shouldn\\u2019t the second round MT-Bench scores improve with Mosaic-IT augmentation?\\n\\n1. Mosaic-IT does not necessarily improve the second-round dialogue of LLMs if the instruction data are single-round conversations. Our method composites several instructions into one, but it is still under the setting of single-round conversations. \\n2. The second-round performances are affected by both the instruction data and the LLMs to be trained. As shown in Table 3, when more advanced data and LLMs are used, the second-round performances can be further improved. \\n\\nMore discussion will be included in the later version. \\n\\n\\n[1] LIMA: Less Is More for Alignment. (NeurIPS\\u201923)\\n[2] AlpaGasus: Training a Better Alpaca with Fewer Data. (ICLR\\u201924)\\n[3] From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning. (NAACL\\u201924)\\n[4] Rethinking Data Selection at Scale: Random Selection is Almost All You Need.\"}", "{\"title\": \"Follow-up Response to Reviewer #2(mjBt)\", \"comment\": \"We sincerely appreciate your time and effort in evaluating our manuscript and providing valuable feedback! In the following, we will respond to your latest concerns.\\n\\n>Q1: My remaining concern is the evaluation. 
I don't think Alpaca Eval 2, MT-Bench, and Huggingface Open LLM Leaderboard are a suitable test bed to measure the capability of instruction-following, such as \\\"Respond in reverse of the original order\\\", etc. \\n\\nThough our evaluations in the paper focus on the widely used instruction-following benchmarks, **we agree with you on the importance of verifying our trained models\\u2019 capability to follow the meta-instructions in Mosaic-IT**. To this end, we create a test set of compositional instructions from WizardLM test sets using Mosaic-IT. For simplicity, we name this new test setting the Mosaic task, which evaluates LLMs\\u2019 capability to follow multiple instructions with additional diverse constraints (meta-instructions). \\n\\n----\\n**Here is an example**: \\n\\nRespond to each of the following instructions in reverse of the original order.\\n\\n[Ins1]\\n\\n[Ins2]\\n\\n[Ins3]\\n\\n----\\n\\nWe use the success rate (%) to evaluate the performance of models on the Mosaic task. A response is successful if it follows the meta-instruction and no instruction is ignored (unless the meta-instruction masks it). In the table below, we report the success rate (%) of LLMs following three meta-instruction strategies, i.e., Format / Permute / Maskout, on compositional augmentations of different numbers of instructions (i.e., 3, 5, 7 instructions). We report the success rates of GPT4o, two base models, and their Mosaic-IT finetuned versions. 
\\n\\n| Model | 3 Instructions | 5 Instructions | 7 Instructions |\\n|------------------|------------------|-------------------------------|-------------------------------|\\n| GPT4o | 59.17 / 55.05 / 41.46 | 56.88 / 51.38 / 26.13 | 29.82 / 37.16 / 24.27 |\\n| | | | |\\n| Mistral + Alpaca-GPT4 (baseline) | 20.18 / 3.67 / 3.25 | 10.09 / 2.75 / 5.41 | 7.34 / 0.92 / 0.97 |\\n| Mistral + Alpaca-GPT4 (mosaic) | 98.32 / 66.51 / 69.11 | 95.87 / 60.55 / 67.57 | 97.25 / 64.68 / 66.02 |\\n| | | | |\\n| Llama3 + Magpie (baseline) | 16.06 / 8.26 / 7.32 | 9.63 / 1.38 / 5.41 | 5.50 / 2.75 / 3.88 |\\n| Llama3 + Magpie (mosaic) | 97.71 / 79.82 / 84.55 | 94.95 / 72.94 / 77.48 | 76.61 / 61.01 / 85.44 |\\n\\nThe results expose the weaknesses of existing LLMs on Mosaic-IT tasks and show that training on Mosaic-IT augmentations can significantly improve performance. Specifically,\\n \\n1. **Existing LLMs, even GPT4o, cannot perfectly follow multiple instructions with diverse constraints**, not to mention other open-source models like Llama3 finetuned on datasets such as Magpie. These results further demonstrate the difficulty and complexity of Mosaic-IT tasks for existing LLMs, indicating the novelty of our method. \\n2. **The compositional reasoning capability required by Mosaic-IT tasks cannot be covered by the capabilities of base LLMs and existing instruction-tuning datasets**. For example, the success rates of Mistral + Alpaca-GPT4 (baseline) and Llama3 + Magpie (baseline) are similar, although Llama3 + Magpie has relatively better general instruction-following capabilities among them. \\n3. **Our method can bridge the significant gap and enhance LLMs\\u2019 capability to follow multiple instructions with diverse constraints**. 
Moreover, our data augmentation is cost-free and does not take any effort from humans or models.\\n\\nDue to the limited time and space, we will include the full details of this experiment and evaluations with a more detailed analysis in our next version. \\n\\n>Q2: Why Figure 4 is presented in the paper? or it might better to remove it from the paper? \\n\\nFigure 4 is to discuss the potential memorization issue indicated by the shape of loss curves. It has been discussed in the community [1,2] that the losses for instruction tuning drop suddenly after each epoch (stair-like loss curves). This is probably caused by LLMs\\u2019 memorization of the training data, as the same data will be seen multiple times without any changes in training. This may hurt the generalization. In contrast, **our compositional augmentation creates diverse data that take different combinations of different original samples, so LLMs are always trained on different data which mitigates the memorization issue**. We do not intend to claim our method as the only one that can handle the memorization problem. Instead, we claim this is another merit of our method in addition to the improved performance and efficiency. We will clarify it better in our manuscript. \\n\\n[1] https://github.com/tatsu-lab/stanford_alpaca/issues/236 \\n\\n[2] https://github.com/huggingface/transformers/issues/18730\"}", "{\"title\": \"Response to Reviewer #2(mjBt) (Part2)\", \"comment\": \">Q4.1: In Table 4 (a) (ablation of Mosaic-IT), you tried Format, Permute, Maskout, and Permute/Maskedout. Why didn\\u2019t you try all the combinations?\\n\\nWe evaluated the mix of Permute/Maskout strategies by applying them to different samples, as illustrated in line 381. We did not apply all strategies together to one sample in order to avoid potential ambiguities of the meta-instructions and misinterpretations of LLMs. 
\\n\\nFor example, for a Mosaic-IT instruction \\u201cIns1, Ins2, Ins3, Ins4, Ins5\\u201d, if we combine Permute and Maskout together, the meta-instruction can be \\u201cRespond to the instructions with the order of [3,2,1,5,4]. Ignore the 2nd and 3rd instructions.\\u201d Then there might be a misunderstanding on which instructions to ignore (the 2nd and 3rd of the original order or the permuted order?) \\n\\nMore discussions will be included in our paper. \\n\\n\\n>Q4.2 Also, your best performance came from Maskout, but the adopted variant for Table 1 seems Permute/Maskedout. Why didn\\u2019t you use Maskout only?\\n\\nTraining LLMs to follow instructions in a predefined order is more challenging than ignoring some instructions. However, this skill might be more important than maskout as it is common to instruct LLMs to execute codes/instructions in a predefined order in practical applications.\"}", "{\"comment\": \"Dear Reviewer mjBt,\\n\\nWe greatly appreciate your following-up discussion and glad to see that the original misunderstanding has been resolved. To address your further concerns on the evaluation, we provided new evaluation results above demonstrating the significant improvement achieved by our method on the meta-instruction following of multiple rules. Would you please check the results and let us know if you have any remaining concerns?\\n\\nThanks!\\n\\nAuthors\"}", "{\"summary\": \"This paper studies instruction-tuning methods in LLMs by augmenting training data with three different templates, Format, Permute, and Maskout strategies. These techniques may reduce the over-fitting or memorization. 
The proposed method, Mosaic-IT, achieves consistent performance improvements over various benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"[S1] The experimental results seem to be solid, demonstrating consistent improvement against a no-augmentation baseline.\"], \"weaknesses\": [\"[W1] The techniques to prevent over-fitting and memorization by preparing various formatted templates and order randomization are well-studied and widely known approaches; see the pioneering work on Instruction Tuning (Wei et al., 2021, Flan-T5 paper). From that time, the input/output pairs for instruction tuning are not always fixed and are dynamically randomized. Considering this literature, I think this paper is a kind of re-invention of those techniques, and the technical novelty and contribution seem to be limited.\", \"[W2] Figure 3 is unclear to me. Could you clarify what is a \\u201cmixture count\\u201d? While \\u201cFix\\u201d strategy is adopted, the number of \\u201cmixture count\\u201d seems to be distributed among 1-10 (not fixed?). Why do you use Uniform as a default despite it not having the best performance?\", \"[W3] In Figure 4, we can see that Mosaic-IT accelerates its training, but the performance at the convergence seems to be the same or even worse than the baselines, which is contradictory to your main results that improve the performance against baselines. Could you clarify the relationship between the convergence performance and the logic of performance improvement?\", \"[W4] In Table 4 (a) (ablation of Mosaic-IT), you tried Format, Permute, Maskout, and Permute/Maskedout. Why didn\\u2019t you try all the combinations?\", \"Also, your best performance came from Maskout, but the adopted variant for Table 1 seems Permute/Maskedout. Why didn\\u2019t you use Maskout only?\", \"**Reference**\", \"Wei et al., 2021. Finetuned Language Models Are Zero-Shot Learners. 
https://arxiv.org/abs/2109.01652\"], \"questions\": \"See the weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A kind reminder\", \"comment\": \"Dear Reviewer mjBt,\\n\\nAs we are approaching the deadline of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Should there be any lingering points that require further attention, please rest assured that we are enthusiastic about the opportunity to provide comprehensive responses to any subsequent queries or comments you may have.\\n\\nYour constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"A kind reminder\", \"comment\": \"Dear Reviewer cPom,\\n\\nAs we are approaching the deadline of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Should there be any lingering points that require further attention, please rest assured that we are enthusiastic about the opportunity to provide comprehensive responses to any subsequent queries or comments you may have.\\n\\nYour constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"A further kind reminder\", \"comment\": \"Dear Reviewer 7WNG,\\n\\nSince the discussion period is about to end soon, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. \\n\\nYour constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. 
Thank you for your time and consideration. If our response addresses your concerns, we sincerely hope you can consider raising the ratings. Thank you so much!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer #2(mjBt) (Part1)\", \"comment\": \"**Weakness:**\\n\\n>Q1: The techniques to prevent over-fitting and memorization by preparing various formatted templates and order randomization are well-studied and widely known approaches; see the pioneering work on Instruction Tuning (Wei et al., 2021, Flan-T5 paper). \\n\\nThis paper is indeed an excellent pioneering work on Instruction Tuning, which greatly influenced us. However, with all due respect, we **disagree** with your viewpoints on our paper and the Flan-T5 paper. \\n\\n1. There is a **misunderstanding of the novelty and contribution** of our work. Our main contribution is **a cost-free compositional data augmentation method**, which concatenates existing instruction-tuning samples into complex ones. We did not utilize different templates for training. To the best of our knowledge, we are the first to propose this type of method for improving the instruction-following capability of LLMs; **please let us know if you find any reference applying similar ideas to LLMs.** \\n2. **The Flan-T5 work does not utilize or mention any idea of concatenating multiple instructions into one to make it more complex**, which is our main contribution and motivation. \\n3. **The instruction tuning setting of modern LLMs in this paper is different from the one used in the Flan-T5 era**. Flan-T5 needs to cover various tasks and curate a lot of samples per task, thus requiring different formats of templates to prevent overfitting. However, in the modern instruction tuning setting, every instruction is regarded as a different task and does not require many samples per task, so the diverse templates are no longer required. Our work is based on the modern LLM instruction tuning setting. 
\\n\\nPlease kindly let us know if you have any feedback. \\n\\n>Q2.1: Figure 3 is unclear to me. Could you clarify what is a \\u201cmixture count\\u201d?\\n\\nThe mixture count is **the number of original samples/instructions to be composited in Mosaic-IT augmentations.** For example, in Figure 2, the mixture counts for the examples are all 3. We will clarify this further in the paper. \\n\\n>Q2.2: While \\u201cFix\\u201d strategy is adopted, the number of \\u201cmixture count\\u201d seems to be distributed among 1-10 (not fixed?). \\n\\nWhen \\u201cFix\\u201d is applied, we fix the number of samples-to-be-composited to 10. However, some composited samples might exceed the maximum token limit for the instruction tuning. In this case, we decrease the number of samples to reduce the length below the limit. Otherwise, these augmentations would need to be abandoned and some original samples would be excluded from the training. We mentioned this process in line 249; we will further revise the narrative to make it clearer. \\n\\n>Q2.3: Why do you use Uniform as a default despite it not having the best performance?\\n\\n1. Uniform distribution is not the best, but it does not require search or tuning of the distribution and it already brings non-trivial improvement to the LLM. Trying more advanced distributions is optional but requires extra cost. We present the performance of other distributions to show the potential additional improvement that can be achieved by further tuning of distributions. \\n2. Uniform distribution by default keeps the method simple and the ablation studies clearer to understand, e.g., the ablation for the max number of instructions in Table 4 and the time reduction in Table 7. \\n\\n>Q3: In Figure 4, we can see that Mosaic-IT accelerates training, but the performance at convergence seems to be the same as or even worse than the baselines, which contradicts your main results showing improvement over the baselines.
Could you clarify the relationship between the convergence performance and the logic of performance improvement.\\\"\\n\\nThank you for your comment, but we **disagree** on using **losses (especially training losses) to evaluate the performance of instruction tuning**. \\n\\n1. The evaluation of the instruction-following capability of LLMs is an open challenge in the community. Perplexity-based loss is not a reliable metric for evaluating the capability achieved by instruction tuning. \\n2. The data in the baseline method are entirely different from ours: our data is the concatenation of the baseline\\u2019s data, which is much more challenging for LLMs to learn. Thus their loss values are not comparable.\"}", "{\"comment\": \"Thank you authors for the detailed response and additional experiments. Also, I am really sorry for the late reply due to a personal matter.\\n\\n\\nI realized that I had misunderstood the main focus of this paper. This paper aims to improve the capability of instruction-following under multiple rules, rather than the general capability of LLMs. I agree with the effectiveness of the compositional combination of instructions to improve instruction-following performance, and based on it I raised the score.\\n\\nMy remaining concern is the evaluation. I don't think Alpaca Eval 2, MT-Bench, and the Huggingface Open LLM Leaderboard are a suitable test bed to measure the capability of following instructions such as \\\"Respond in reverse of the original order\\\", \\\"Ignore the longest one/several task(s) according to the word count.\\\", \\\"Enclose each reply with [START] and [END].\\\", etc. While this method trains LLMs with those instructions, the capability of following such instructions is not directly measured.
I think the strong experimental performance in the relevant benchmarks does not appropriately support the claim on instruction-following capability.\\n\\nIn addition, for the Figure 4, the author said,\\n\\n```\\nThank you for your comment, but we disagree on using losses (especially training losses) to evaluate the performance of instruction tuning.\\n\\nThe evaluation of instruction-following capability of LLMs is an open challenge in the community. Perplexity-based loss is not a reliable metric for evaluating the capability achieved by instruction tuning.\\n\\nThe data in the baseline method are entirely different from ours: our data is the concatenation of the baseline\\u2019s data, which is much more challenging for LLMs to learn. Thus their loss values are not comparable.\\n```\\n\\nIf this is true, why is Figure 4 presented in the paper? Or might it be better to remove it from the paper? The faster loss decrease is not related to the performance improvement. I think because we cannot compare the loss, the statement on the training efficiency does not make sense. The y-axis should be the performance on the instruction-following tasks strictly evaluated with whether the given instruction is satisfied or not.\"}", "{\"comment\": \"Thanks for the prompt response! We are still confused about how to apply image augmentations MixMatch or MixUp for representation learning to text data for autoregressive language modeling (in particular, finetuning of modern LLMs such as Llama, which is our focused problem). To the best of our knowledge, whether and how these methods can be applied to LLM finetuning is still unknown as both models and the training objectives are very different. We are eager to hear from you how these methods can be leveraged for the LLM finetuning.
Thanks!\"}", "{\"title\": \"A further kind reminder\", \"comment\": \"Dear Reviewer 7WNG,\\n\\nAs we are approaching the deadline of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Please kindly let us know if you have any further concerns.\\n\\nYour constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration. If our response addresses your concerns, we sincerely hope you can consider raising the ratings. Thank you so much!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"A kind summary of our comments for Reviewer mjBt\", \"comment\": \"Dear Reviewer mjBt,\\n\\nThank you for your thoughtful comments! As the discussion is about to end soon, we have prepared a concise summary of our responses to your last comment: \\n\\n**Q1: For the remaining evaluation concern.** \\n\\nWe designed a new test set to be used for directly evaluating our models\\u2019 capability to follow multiple instructions with additional diverse constraints. Experimental results and conclusions are provided, showing our method's advantages. \\n\\n**Q2: For the presence of Figure 4.**\\n\\nWe explained the reason why this figure is included. \\n\\nWe hope this summary can help you check whether and how we addressed your concerns. Based on the new updates, we sincerely inquire whether you would consider increasing the current rating to reflect the latest improvement to the paper.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer #4(7WNG)\", \"comment\": \"**Weakness:**\\n\\n>Q1: The core problem of the proposed method is the lack of a detailed explanation of the core reasons for the proposed method. It does not provide justifiable and experimental explanations for the effectiveness of the proposed method.
The primary motivation of the proposed method should be further cleared here.\\n\\nPlease kindly refer to the Q1 of the General response.\\n\\n>Q2: From the experiments, it seems that the proposed method does not improve the multi-turn data for MT-Bench. Would there be any explanations for this?\\n\\n1. Mosaic-IT does not necessarily improve the second-round dialogue of LLMs if the instruction data are single-round conversations. Our method composites several instructions into one but it is still under the setting of single round conversations. \\n2. The second-round performances are affected by both the instruction data and the LLMs to be trained. As shown in Table 3, when more advanced data and LLMs are used, the second-round performances can be further improved. \\n\\nMore discussion will be included in the later version. \\n\\n>Q3.1: Many instruction-following methods and literatures focus on data augmentation. There are no comparisons with those baselines. \\n\\nMost existing instruction-following methods and literature focus on data augmentation **utilizing other LLMs to generate new data samples**, which is a data synthesis process relying on other LLMs. However, **our method is a model-free method that aims to exploit the potential of existing data**, which is largely different from existing methods. To our best knowledge, our method is the first of this kind. Please let us know if you find any works of this kind. In more detail: \\n\\n1. Our method avoids the entire cost of data synthesis by any language models or neural networks, which still produce carbon footprints. \\n2. Our method aims to **exploit the potential of existing data** so the quality of augmented data is 100% guaranteed. In contrast, the newly generated data still require careful and complex quality checks in practice to reduce the risk of degrading the original LLM (i.e., negative finetuning). \\n3. **Mosaic-IT is orthogonal and complementary to existing data synthesis approaches**. 
It can be applied together with other methods to further improve LLMs. \\n4. As a model-free method, Mosaic-IT can avoid **the potential violation of licenses** limiting the usage of existing LLMs. \\n\\n\\n\\n>Q3.2: In addition, how would mask methods be different from the other dropout, etc. methods?\\n\\nThere might be some misunderstanding of the maskout strategy, which is applied to the training data to create data augmentations with different outputs. This is different from dropout-typed approaches that are applied to neurons or parameters of a model, whose goal is to perturb and regularize the neural network training. But it does not change the output in the training data. \\n\\nBy maskout in Mosaic-IT, we instruct LLMs to ignore some of the instructions and when generating the responses and train LLMs to follow such meta-instructions. We will consider changing the name of this strategy to avoid any misunderstanding.\"}", "{\"title\": \"A kind summary of our comments for Reviewer cPom\", \"comment\": \"Dear Reviewer cPom,\\n\\nSince the discussion period is about to end soon, we have prepared a concise summary of our responses to your last comment, focusing on the new updates from our side: \\n\\n**Q1: Why random concatenation is the optimal.** \\n\\nWe did not claim that random concatenation is optimal. We chose it because it is entirely cost-free without requiring any extra prior knowledge or semantic grouping of data. The non-trivial improvement of such a straightforward application of the proposed augmentations is important in demonstrating the effectiveness of Mosaic-IT. \\n\\n**Q2: Structured or semantically grouped approaches.**\\n\\nWe implemented the semantic grouping approach. Experimental results and conclusions are provided. Concatenating semantically similar samples to train LLMs can help generate condensed responses with comparable quality, when compared with pure-random concatenation, as reflected by the results on Alpaca Eval 2 (LC). 
\\n\\nWe hope this summary can help you check whether and how we addressed the concerns you raised in your review and discussion. Based on the new updates, we would like to kindly ask you to consider raising the current rating to reflect our new results and improvement. \\n\\nSincerely,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes Mosaic-IT, a data augmentation method for instruction following. It proposes primary and advanced mosaic strategies. It also includes format, permute, and maskout.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper deploys the method into several evaluation benchmarks and different model structures. It also includes several analyses to study the method's aspects.\", \"weaknesses\": \"1.\\tThe core problem of the proposed method is the lack of a detailed explanation of the core reasons for the proposed method. It does not provide justifiable and experimental explanations for the effectiveness of the proposed method. The primary motivation of the proposed method should be further clarified here.\\n2.\\tFrom the experiments, it seems that the proposed method does not improve the multi-turn data for MT-Bench. Would there be any explanations for this?\\n3.\\tMany instruction-following methods and works in the literature focus on data augmentation. There are no comparisons with those baselines. In addition, how would mask methods be different from the other dropout, etc. methods?\", \"questions\": \"1. Would there be any justifiable and experimental explanations for demonstrating the method's effectiveness?\\n\\n2. Would it be possible to show more experiments on multi-turn benchmarks?\\n\\n3.
For the comparison baselines, would it be possible to add more baselines related to data augmentation for instruction tuning?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a data augmentation method, Mosaic-IT, for instruction-tuning large language models (LLMs) without human or model dependency. Unlike traditional approaches that rely on human intervention or teacher models to generate instruction-response pairs, the proposed method works by combining existing instructions into composite multi-instruction samples. They propose four ways to do the composition - primary, format, permute and maskout. By doing so, the paper shows that LLMs trained with this method develop a higher level of instruction-following capacity and format adherence. The proposed method, which reduces training time by approximately 80%, holds promise as a scalable solution for instruction tuning without extensive resource requirements.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-structured, progressing logically from the motivation behind Mosaic-IT to the methodology, followed by experiments and results. Each section builds on the last, making the paper easy to follow and understand.\\n2. The figures do a great job of clearly summarizing the idea. \\n3. The experiments are comprehensive for the scope the paper sets up - they have explored different datasets and model families and explored different sampling procedures for the composition\", \"weaknesses\": \"1. The paper lacks a theoretical basis for why random concatenation should improve instruction-following abilities; structured or semantically grouped concatenations could offer further insights.\\n2. Randomly concatenated instructions may introduce noise, potentially impacting training stability.
An analysis of this effect on model perplexity would strengthen the work.\", \"questions\": \"Please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely appreciate the time and effort the reviewers had taken to evaluate our manuscript and provide valuable feedback. In the following, we will respond to the major concerns.\\n\\n>Q1: Why our method works?\\n\\n1. \\n**Mosaic-IT trains LLMs to follow meta-instructions for compositional reasoning.** \\n\\nPrevious methods train LLMs to produce a response for a single instruction or query. Instead, our method produces compositional data augmentations to **train LLMs to generate multiple responses for multiple instructions in diverse forms** (e.g., order, mask, format) specified by different meta-instructions. It also enforces LLMs to partition the input context correctly and manage the interference and dependencies among multiple instructions. These are critical to developing and improving the compositional reasoning capabilities of LLMs, which have not been covered by mainstream instruction-tuning frameworks. \\n\\n2. \\n**Mosaic-IT creates more challenging and complex instructions to further improve LLMs\\u2019 instruction-following capabilities.** \\n\\nMosaic-IT\\u2019s composition of multiple instructions and the diverse meta-instructions create more challenging and complex instruction-tuning data for LLMs. Moreover, since we do not rely on data synthesis using LLMs but solely apply some rules to existing data, the correctness and quality of the augmented data are guaranteed. As shown in Section 5.1, even powerful LLMs like GPT4 can not follow concatenated instructions. **It has been widely accepted that such challenging and complex instructions improve LLMs\\u2019 instruction-following capability [1-8]. 
Mosaic-IT follows this intuition by making the instruction more challenging and complex** in order to improve LLMs. Different from previous methods relying on humans or stronger teacher LLMs to create the challenging samples, Mosaic-IT does not require any humans/models to create the augmentations. \\n\\nTo quantitatively evaluate the difficulty and complexity of instruction-tuning data, [2] proposes a ChatGPT-based method (Number of InsTag), while [5] proposes a perplexity-based Instruction-Following Difficulty (IFD) score. We compute these two metrics on the Alpaca and WizardLM70k datasets to verify the effectiveness of our method in improving the difficulty/complexity: \\n\\n**Number of InsTag [2]:** \\nThe number of InsTag is used to measure the complexity of the instructions. A larger value of the Number of InsTag indicates the intentions of the instruction are complex and benefit the LLM instruction tuning process. For the experiments below, we prompt GPT4o with the exact prompt provided in [2] to generate the InsTags. \\n\\nAverage InsTag (Alpaca): 2.62\\n\\nAverage InsTag (Alpaca-Mosaic): 9.75\\n\\nAverage InsTag (WizardLM): 4.20\\n\\nAverage InsTag (WizardLM-Mosaic): 10.93\\n\\nMosaic-IT largely increases the average number of InsTag, indicating a large increase in instruction intention complexity, further leading to better performance. \\n\\n**IFD score [5]:** \\nThe IFD score is a perplexity-based metric used to evaluate the instruction-following difficulty of a given instruction-response pair. A higher IFD score indicates that it is hard for the current model to build a connection between the instruction and the corresponding response, so it can be used to select training data beneficial for LLM instruction tuning. For the experiments below, we utilized the IFD score computed on GPT2.
\\n\\nAverage IFD (Alpaca): 0.60\\n\\nAverage IFD (Alpaca-Mosaic): 0.76\\n\\nAverage IFD (WizardLM): 0.67\\n\\nAverage IFD (WizardLM-Mosaic): 0.79\\n\\nMosaic-IT increases IFD scores, indicating an increase in the instruction-following difficulty, which leads to an improvement in performance. \\n\\n[1] LIMA: Less Is More for Alignment. (NeurIPS\\u201923)\\n\\n[2] #InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models. (ICLR\\u201924)\\n\\n[3] WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions. (ICLR\\u201924)\\n\\n[4]What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning. (ICLR\\u201924)\\n\\n[5] From quantity to quality: Boosting llm performance with self-guided data selection for instruction tuning. (NAACL\\u201924)\\n\\n[6] Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. (ACL\\u201924)\\n\\n[7] Selective reflection-tuning: Student-selected data recycling for llm instruction-tuning. (ACL\\u201924)\\n\\n[8] Instruction Fusion: Advancing Prompt Evolution through Hybridization. (ACL\\u201924)\\n\\n>Q2: Novelty and contribution\\n\\nTo the best of our knowledge, Mosaic-IT is the **first cost-free compositional data augmentation for instruction tuning of LLMs**. It reduces the training cost and simultaneously improves the performance. In contrast, most existing data-enhancement methods for instruction tuning rely on human supervision or additional LLMs to generate new data.\"}", "{\"title\": \"A kind reminder\", \"comment\": \"Dear Reviewer cPom,\\n\\nAs we are approaching the deadline of the discussion period, we would like to cordially inquire about the extent to which we have successfully addressed the concerns outlined in your review. Please kindly let us know if you have any further concerns. 
\\n\\nYour constructive input remains invaluable to us, and we appreciate your dedication to enhancing the quality of our manuscript. Thank you for your time and consideration. If our response addresses your concerns, we sincerely hope you can consider raising the ratings. Thank you so much!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"A kind summary\", \"comment\": \"Dear Reviewer mjBt,\\n\\nAs the discussion is about to end soon, we have prepared a concise summary of our responses to your last comment: \\n\\n**Q1: For the remaining evaluation concern.** \\n\\nWe designed a new test set to be used for directly evaluating our models\\u2019 capability to follow multiple instructions with additional diverse constraints. Experimental results and conclusions are provided, showing our method's advantages. \\n\\n**Q2: For the presence of Figure 4.**\\n\\nWe explained the reason why this figure is included.\\n\\nWe hope this summary can help you check whether and how we addressed your concerns. Based on the new updates, we sincerely inquire whether you would consider increasing the current rating to reflect the latest improvement to the paper.\\n\\nSincerely,\\n\\nAuthors\"}
That said, my current rating reflects my overall assessment of the work, and I believe the paper\\u2019s contributions are valuable despite these areas for improvement.\"}", "{\"metareview\": \"The paper introduces a data augmentation method for instruction-tuning, the most common post pretraining stage that makes LLMs most useful. The authors propose Mosaic-IT, which can create diverse augmentations from samples of existing instruction tuning datasets. The method works by randomly concatenating multiple instructions into one and requiring the LLM to follow meta-instructions.\\nThe method overcomes the issue of requiring a strong(er) teacher model to rewrite instruction datasets and shows reductions in training time. The idea is also clear and simple and motivated from computer vision's mosaic augmentation strategy for object detection. \\nHowever, the paper in its current form has severe weaknesses: the mosaic strategies are somewhat arbitrary, the training time reduction in instruction tuning is typically not essential, as this step is very short (~13h) on 4 GPUs and similarly whether the current issue of IT data is indeed the quantity (which if it were the case, would motivate Mosaic IT). While the AC disagrees with w54c that the ubiquity of industrial (private) IT datasets counteracts the value of the proposed paper, the point does stand. Moreover, specifying the augmentation format to single-turn conversation does seem to lead to a decrease in performances in 2-round MT Bench.\", \"additional_comments_on_reviewer_discussion\": \"The message to the AC has been considered. While the authors provided rebuttals, the reviewers did engage in some discussion (mjBt, cPom). Yet the points raised remain: the authors further agree that 2-round MT-Bench results are affected, for some LLMs, without much discussion. Similarly the points of mosaicing strategies being arbitrary and lack of analysis of why random concatenation works best remain. 
Combined with the point of the lack of a clear motivation regarding the need for Mosaic IT in current IT setups, this paper is just below the high bar of acceptance for ICLR and the AC recommends rejection.\"}" ] }
DuyuAHBk1t
AIR: Zero-shot Generative Model Adaptation with Iterative Refinement
[ "Guimeng Liu", "Milad Abdollahzadeh", "Ngai-man Cheung" ]
Zero-shot generative model adaptation (ZSGM) aims to adapt a pre-trained generator to a target domain using only text guidance and without any samples from the target domain. Central to recent ZSGM approaches is the *directional loss*, which uses the text guidance in the form of aligning the image offset with the text offset in the embedding space of a vision-language model like CLIP. This is similar to the analogical reasoning in NLP where the offset between one pair of words is used to identify a missing element in another pair by aligning the offset between these two pairs. However, a major limitation of existing ZSGM methods is that the learning objective assumes the complete alignment between image offset and text offset in the CLIP embedding space. **Our work** makes two main contributions. Inspired by the offset misalignment studies in NLP, as our first contribution, we perform an empirical study to analyze the misalignment between text offset and image offset in CLIP embedding space for various large publicly available datasets. Our important finding is that offset misalignment in CLIP embedding space is correlated with concept distance, *i.e.*, close concepts have less offset misalignment. To address the limitations of the current approaches, as our second contribution, we propose Adaptation with Iterative Refinement (AIR), which mitigates the offset misalignment issue in directional loss by iteratively selecting anchor points closer to the target domain. Extensive experimental results show that the proposed AIR approach achieves SOTA performance across various adaptation setups.
[ "Zero-shot Generative Model Adaptation", "Transfer Learning", "Prompt Learning", "Multi-modal Representation Space" ]
https://openreview.net/pdf?id=DuyuAHBk1t
https://openreview.net/forum?id=DuyuAHBk1t
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qYkR5zPTHy", "gAhSRlqZT8", "btcECakZin", "PgqT9mR973", "CqD1LOKoNc" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730721370395, 1730684362497, 1731656076004, 1730450573259, 1730626872801 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4750/Reviewer_HwPH" ], [ "ICLR.cc/2025/Conference/Submission4750/Reviewer_iT2P" ], [ "ICLR.cc/2025/Conference/Submission4750/Authors" ], [ "ICLR.cc/2025/Conference/Submission4750/Reviewer_pPcL" ], [ "ICLR.cc/2025/Conference/Submission4750/Reviewer_eU1s" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces an innovative Adaptation with Iterative Refinement (AIR) method for addressing offset misalignment in CLIP embedding space within Zero-Shot Generative Modeling (ZSGM). Through a detailed empirical study, the authors analyze the offset misalignment between image and text offsets in CLIP embedding space, demonstrating that this misalignment intensifies with greater concept distance, yet is less impactful between closer domains. To counter this, the AIR method iteratively samples anchor points during adaptation, utilizing a novel prompt learning strategy to describe these anchor points without predefined textual descriptions. The proposed approach effectively mitigates offset misalignment, resulting in good performance in ZSGM for diffusion models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a novel approach to Zero-shot Generative Model Adaptation (ZSGM) by addressing the critical issue of offset misalignment between image and text representations in the CLIP embedding space, showcasing originality in its formulation and methodology.\", \"weaknesses\": \"1.\\tThe references are somewhat disorganized and have formatting issues; for example, many citations should be formatted as (Smith et al., 2023) rather than Smith et al. (2023). 
Additionally, there is a lack of coherent context when citing references.\\n2.\\tThe writing of this paper could benefit from some improvement, as it contains several spelling errors (e.g., \\\"Adaptatoin\\\" instead of \\\"Adaptation\\\") and some grammatical inconsistencies.\\n3.\\tIn the Related Work section, this paper assumes that many methods default to the alignment of image and text offsets in CLIP space, which seems to warrant further consideration. For instance, some works, such as SVL, have already discussed T2I consistency. Furthermore, some studies have also addressed zero-shot content consistent T2I, such as Tewel, Yoad, et al. \\\"Training-free consistent text-to-image generation.\\\" \\n4. Lacking some more convincing qualitative and quantitative experiments (e.g., Figure 4, Figure 5), as well as a comparison of the diversity of entities.\", \"questions\": \"1.\\tIn Figure 4 (left), there is no significant difference in qualitative results between AIR, NADA, and IPL. Could this indicate that, in single-descriptor, same-category adaptation, existing methods do not exhibit a significant T2I offset? Otherwise, please provide some more convincing qualitative experiments to support this.\\n2.\\tIn Table 3, please explain why aligning offsets significantly improves generation diversity without causing the generated content from the model to become too similar, resulting in a decrease in diversity.\\n3.\\tCompared to animals and humans, the qualitative and quantitative experiments in the paper seem to lack content-consistent generation for some objects and scenes, as seen in other works. Please provide more diverse experimental examples and results to enhance the paper's persuasiveness.\\n4.\\tThe paper seems to lack a qualitative ablation study. 
Please provide some specific experimental results to supplement it.\\n5.\\tIf the text description in a sentence (rather than a specific prompt) is inherently ambiguous, is the method of aligning offsets presented in this paper still useful?\\n6.\\tRegarding the results in Figure 5, for example, the eyes in the photo \\u2192 cartoon transformation seem to move, and a similar issue appears in the dog example. This problem also seems to be present in Figure 6 of the supplementary materials. Please provide more compelling results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In the ZSGM field, previous work has simply aligned the image offset with text offset using directional loss. However, experiments show that these offsets are not merely aligned but often misaligned. Based on this finding, the authors propose Adaptation with Iterative Refinement (AIR) to alleviate this issue by iteratively selecting anchor points closer to the target domain. The anchor points are selected during adaption, coupled with a new prompt learning approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well-written and clearly introduces its motivation and research methods. First, it highlights the limitation of previous methods, which simply aligned image offsets with text offsets, and verifies this limitation through experiments. Next, it conducts experiments to validate the hypothesis that addressing these misalignments can lead to improved performance. Finally, based on this analysis, the paper presents its proposed research method.\\n\\n2. The paper conduct an analysis of the offset misalignment, and then the first to reveal the misalignment is larger for distance concepts and less for close concepts.\", \"weaknesses\": \"1. 
Table 1 and Table 2 present the results of the GAN model and the diffusion model, respectively. However, the evaluation metrics, comparison methods, and adaptations used for the two models are not consistent.\\n\\n2. Most of the experiments conducted involve adaptation between two concepts with similar images.\\n\\n3. Line 153 states, \\\"Previous works assume that for two different concepts, \\u03b1 and \\u03b2.\\\" However, Algorithm 1 and Algorithm 2 use two learning rates, also denoted as \\u03b1 and \\u03b2. This could lead to confusion.\", \"questions\": \"1. Why are Table 1 and Table 2 not consistent? Please explain the differences in the evaluation metrics, comparison methods, and adaptations used for the GAN model and the diffusion model.\\n\\n2. How does it perform on adaptation when there are significant differences between the source and target images? For example, in the NADA[1] experiment: Dog -> The Joker, Dog -> Nicolas Cage.\\n\\n3. Note that Algorithm 1 and Algorithm 2 have two learning rates, \\u03b1 and \\u03b2. Are these two learning rates consistent? How was the learning rate of 0.002 chosen in Supp. A.4?\\n\\n[1] Rinon Gal, Or Patashnik, Haggai Maron, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Stylegan-nada: Clip-guided domain adaptation of image generators. ACM Transactions on Graphics (TOG), 41(4):1\\u201313, 2022.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The authors discuss the limitations and ethical issues that we are concerned about in Supp. I and J.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents a novel approach called AIR to address the zero-shot generative model adaptation (ZSGM) problem. 
First, it performs an empirical study to analyze the misalignment between text offsets and image offsets in CLIP. Second, it proposes AIR to address this issue.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper conducts an empirical study on a large public dataset to analyze offset misalignment in the CLIP embedding space, finding that misalignment increases as the concepts become more distant.\\n\\n2. Figures 2 and 3 vividly present the misalignment in the CLIP space and illustrate the impact of offset misalignment.\", \"weaknesses\": \"1. There is a concern that the paper lacks theoretical proof or experimental evidence that, after limited iterations of adaptation, the adapted generator is already closer to the target domain than the pre-trained generator.\\n\\n2. There is no sensitivity study conducted for the parameters t_int and t_thresh. Since these parameters play a critical role in introducing adaptive loss and updating the anchor points, their impact should be analyzed.\\n\\n3. This paper lacks comparative experiments involving ITI-GEN, as the design of its learning prompt is based on ITI-GEN (Lines 353-357).\", \"questions\": \"Could you conduct an experiment or provide a theoretical proof for the statement: 'After a limited number of adaptation iterations using directional loss, the encoded concept in the adapted generator is already closer to the target domain than the encoded concept in the source generator'? This is an important assumption underlying your method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores zero-shot adaptation for generative models, particularly diffusion models. It empirically examines the offset misalignment issue in previous methods and the impact of this issue on generative model adaptation. 
The paper proposes an iterative refinement method to mitigate the effects of offset misalignment.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is well-organized and clearly presented.\\n2. The study on offset misalignment is novel to me.\\n3. The iterative refinement solution for misalignment is interesting and reasonable.\", \"weaknesses\": \"I did not find any remarkable flaws in this paper. However, I have one question: in the study in Section 3.1, the concept distance is measured between different classes, whereas in the impact study in Section 3.2, the concept distance is constructed using different hand-crafted prompts. Why are these setups misaligned?\", \"questions\": \"Please refer to the Weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
Duuerhutvq
Controlled LLM Decoding via Discrete Auto-regressive Biasing
[ "Patrick Pynadath", "Ruqi Zhang" ]
Controlled text generation allows for enforcing user-defined constraints on large language model outputs, an increasingly important field as LLMs become more prevalent in everyday life. One common approach uses energy-based decoding, which defines a target distribution through an energy function that combines multiple constraints into a weighted average. However, these methods often struggle to balance fluency with constraint satisfaction, even with extensive tuning of the energy function's coefficients. In this paper, we identify that this suboptimal balance arises from sampling in continuous space rather than the natural discrete space of text tokens. To address this, we propose \emph{Discrete Auto-regressive Biasing}, a controlled decoding algorithm that leverages gradients while operating entirely in the discrete text domain. Specifically, we introduce a new formulation for controlled text generation by defining a joint distribution over the generated sequence and an auxiliary bias sequence. To efficiently sample from this joint distribution, we propose a Langevin-within-Gibbs sampling algorithm using gradient-based discrete MCMC. Our method significantly improves constraint satisfaction while maintaining comparable or better fluency, all with lower computational costs. We demonstrate the advantages of our controlled decoding method on sentiment control, language detoxification, and keyword-guided generation.
[ "LLMs", "controlled decoding", "MCMC" ]
Accept (Poster)
https://openreview.net/pdf?id=Duuerhutvq
https://openreview.net/forum?id=Duuerhutvq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jEKj6K3O0e", "g0dJ1IyWN9", "dPjfw5SrC9", "aWp2ZHuWrZ", "PhGOcNQSBV", "N6OWo67Pf2", "Li9xNGxGY3", "Kp126bLqng", "I6p7p12IHv", "HnD91hf5nz", "E8WcvLSukp", "DUGDekP3pI", "6ITB09eFW7", "5Zxc6GjyNG", "3u4vUWXKI9", "1eNjFTgVlE", "0XNs7aSiW0", "0ISx34lna4" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732171994140, 1732170257464, 1734904890079, 1732555803408, 1732546676228, 1732170153415, 1732171470928, 1732170136048, 1732170289948, 1730718284087, 1730676650398, 1737523505082, 1732172330958, 1730773649830, 1732171717377, 1730699238405, 1732170209801, 1732172420975 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Area_Chair_XR11" ], [ "ICLR.cc/2025/Conference/Submission2459/Reviewer_w6Rd" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Reviewer_w6Rd" ], [ "ICLR.cc/2025/Conference/Submission2459/Reviewer_RyTz" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Reviewer_3EGw" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Reviewer_HCm1" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ], [ "ICLR.cc/2025/Conference/Submission2459/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to 
Saturated Benchmarks, Notational Errors\", \"comment\": \"Thank you for your helpful comments. We would like to address some of your concerns to clarify the technical details and mathematical formulation.\\n\\n**Saturated benchmarks**\\n\\nWe are unclear about your statement regarding benchmarks being \\\"saturated.\\\" Do you mean there are too many methods addressing the same task? We believe the tasks we selected are appropriate as they allow for a clear comparison of the difference in performance between controlled text generation algorithms. As shown in Table 2, these tasks are useful for understanding where prior methods fall short and where our method excels. Furthermore, the use of similar tasks in prior works demonstrates that our chosen benchmarks are reasonable [1, 2, 5, 6]. \\n\\nAdditionally, regarding your claim that prompting can solve these tasks, we are not aware of any papers that demonstrate prompting is sufficient for the tasks we consider. In fact, [1] does compare to Prompt-T5, a pre-trained LM intended to solve tasks with prompts. They demonstrate that LM-Steer outperforms Prompt-T5, which shows that prompting is not enough to match controlled text generation methods. We choose not to compare to Prompt-T5 as [1] already establishes the limitations of prompt-based methods. \\n\\nFinally, model jailbreaking is a separate research direction from the core focus of this paper. None of the methods we compare against include jailbreaking as a task [1, 2, 5, 6]. We consider it an application of our proposed method that could be investigated for future directions. \\n\\n[1] Han et al. Word Embeddings Are Steers for Language Models. ACL 2024.\\n\\n[2] Liu et al. BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases. ACL 2023. \\n\\n[3] Sitdikov et al. Classifiers are Better Experts for Controllable Text Generation. Workshop on Transfer Learning for NLP, 2022. \\n\\n[4] Dekoninck et al. 
Controlled Text Generation via Language Model Arithmetic. ICLR 2024.\\n\\n[5] Kumar et al. Gradient-based Constrained Sampling from Language Models. ACL 2022.\\n\\n[6] Qin et al. COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics. NeurIPS 2022.\\n\\n**Notational errors**\\n\\nThank you for the feedback. While equation 6 is correct, we understand that it can be confusing as it is not very direct. We will change it to \\u201c$P(B|X)$ is defined to be $\\\\frac{\\\\exp(f(B | X))}{Z_B}$\\u201d. The equation that immediately follows should be $P(B | X, Y)$ as you have pointed out. We occasionally drop the conditioning on X as all terms should be understood as conditioned on prompt X. We have revised the notations in the paper to make them clearer.\"}", "{\"title\": \"Response to B.1.1: Biased Auto-regressive generation $P(Y | X, B)$\", \"comment\": \"Our computation of $P(Y | X, B)$ does not involve concatenating the input $X$ with the bias sequence $B$. As we explain in equation 10 in Sec 4.2, we use $B$ as a biasing sequence that is added to the unnormalized logits the language model outputs for each position. Specifically, during the auto-regressive generation, after the model produces the unnormalized logits $\\\\tilde{y_i}$ for position $i$, we add $\\\\tilde{b_i}$ to obtain a new logit vector. We then apply greedy decoding to this logit vector to obtain the token for this position. However, it is possible to apply other forms of sampling \\u2014 we choose greedy decoding since this is what previous work [1, 2] uses and we want to ensure that comparisons between our method and theirs are as fair as possible.\\n\\n[1] Liu et al. BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases. ACL 2023. \\n\\n[2] Qin et al. COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics. 
NeurIPS 2022.\"}", "{\"metareview\": \"This paper proposes a method for controllable text generation by applying Discrete Langevin Proposal to sample in the discrete space while being able to leverage gradients of the energy function. Experiments on sentiment, toxicity, and lexical control benchmarks demonstrate the effectiveness of the proposed approach.\", \"strengths\": \"1. The method outperforms baselines.\", \"weaknesses\": \"1. A reviewer pointed out that more modern benchmarks, such as jailbreaking language models, should be considered, whereas the benchmarks considered in this work are already saturated to some extent.\\n2. The method is slow due to the autoregressive sampling inside a for loop.\\n\\nOverall, despite the weaknesses above, reviewers seem to like this paper, and I'm recommending acceptance, although I wouldn't mind if the paper gets rejected.\", \"additional_comments_on_reviewer_discussion\": \"Besides what's mentioned above, a reviewer pointed out that benchmarks considered in this work were already saturated to some extent. Authors clarified that these benchmarks are still good for comparison to baselines. However, I think the reviewer's point does provide constructive feedback --- evaluating this method on more challenging tasks such as jailbreaking a language model would further strengthen this paper.\"}", "{\"title\": \"Thank you for your responses.\", \"comment\": \"Thank you for your responses and for the paper update. Based on these improvements, I am leaning towards acceptance.\"}", "{\"title\": \"Reminder for Discussion Period\", \"comment\": \"We would like to thank the reviewers for their valuable feedback on our submission. Given that the last day for the discussion period is approaching, we wanted to provide a reminder in case there are any follow up questions. We are more than happy to address any lingering doubts or concerns.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your supportive review. 
We were wondering if there were any aspects of our paper that prevented you from giving it a higher score. We would be more than happy to address any questions or concerns you may have regarding our paper.\"}", "{\"title\": \"B.2: Hidden details\", \"comment\": \"**Sequence Length:**\\n\\nWe do not emphasize that our algorithm requires a specified sequence length as this is a common requirement within the literature \\u2014 MuCOLA [1], COLD [2], and BOLT [3] all require the specification of sequence length. \\n\\nFurthermore, it is possible to enable our algorithm to produce sequences of varying lengths by padding / truncating the bias sequence during generation. We did not focus on enabling dynamic sequence length as this is not a core contribution of our work. \\n\\n**Discrete Langevin Proposal**\\n\\nWe agree with your advice and have included the following discussion regarding our application of Discrete Langevin Proposal (DLP) in the Appendix in our revision. Specifically, to enable the use of large step sizes in the proposal, we adopt the globally balanced version of the DLP proposal [1, 2]:\\n\\n$\\n\\\\text{Categorical} \\\\left( \\\\underset{j \\\\in |V|}{\\\\text{softmax}} \\\\left( \\\\nabla f(\\\\hat{B} | X)_i (\\\\text{Onehot}_j - \\\\hat{b}_i)\\\\right) \\\\right)\\n$\\n\\nHere, $\\\\text{Onehot}_j$ represents the one-hot vector for the $j$th token in the vocabulary $V$, and $\\\\hat{B} = \\\\{\\\\hat{b}_1,\\\\hat{b}_2, \\u2026 \\\\hat{b}_n \\\\}$ represents the original one-hot vector sequence of length $n$. To obtain a distribution over the vocabulary $|V|$, we must compute the inner term for every token $j \\\\in V$.\\n\\nHere, we note that $(\\\\text{Onehot}_j - \\\\hat{b}_i)$ corresponds to the distance between the original token at position $i$ and every token $j$ in the vocabulary $V$. Given the discrete nature of the tokens, we choose to use hamming distance to represent this term. 
For a token $j$, the hamming distance to the original token in position $i$ is 0 if the $j$th coordinate $\\hat{b}\\_{ij} = 1$, as they are the same token; and 1 if the $j$th coordinate is 0. Thus we can represent the hamming distance between token $j$ and the current token as $1 - \\hat{b}\\_{ij}$. Below we include the proposal distribution we sample from to obtain the new token for position $i$. \\n\\n$\\nb\\u2019_i \\\\sim \\\\text{categorical}\\\\left(\\\\underset{j \\\\in |V|}{\\\\text{softmax}} \\\\left( \\\\frac{1}{\\\\tau} (\\\\nabla f(\\\\hat{B} | X))\\\\_{ij} (1 - \\\\hat{b}\\\\_{ij}) \\\\right) \\\\right)\\n$\\n\\nHere, $b\\u2019_i$ is the token we sample from the categorical distribution over the vocabulary $V$ on the right-hand side. We have incorporated this discussion into the appendix, following your advice. \\n\\n**Gradient Computation**\\n\\nOnce we represent the sequence of tokens $B$ as a sequence of one-hot vectors $\\hat{B}$, we are able to compute the gradient of the constraint function with respect to $\\hat{B}$ using automatic differentiation software, such as Torch Autograd.\\n\\n**Use of Greedy Decoding**\\n\\nWe define the biased auto-regressive generation used in our algorithm in terms of an argmax as shown in equation 10, which should convey that this specific component of our algorithm is deterministic. \\n\\nAdditionally, it should be noted that while we choose greedy decoding, our method is compatible with other decoding approaches that are conventionally used, such as top-k, top-p, and standard Gibbs. We choose to use greedy decoding since this was used by prior work [2, 3] and it places emphasis on the novel aspects of our framework.\\n\\n[1] Kumar et al. Gradient-based Constrained Sampling from Language Models. ACL 2022.\\n\\n[2] Qin et al. COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics. NeurIPS 2022.\\n\\n[3] Liu et al. BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases. 
ACL 2023.\\n\\n[4] Zhang et al. A Langevin-like Sampler for Discrete Distributions. ICML 2022.\\n\\n[5] Pynadath et al. Gradient-based Discrete Sampling with Automatic Cyclical Scheduling. NeurIPS 2024.\"}", "{\"title\": \"Summary Response\", \"comment\": \"We would like to thank the reviewers for their constructive responses. We have provided the revised version of our submission, with modifications highlighted in blue. Below we summarize the primary modifications:\\n\\n1. The related works section is revised to more thoroughly discuss the field of controlled text generation. \\n2. The equations in Section 4 are revised to be more notationally consistent and informative.\\n3. Section 4 is revised to include additional motivation for our formulation of the target distribution as a joint distribution over response sequence $Y$ and bias sequence $B$.\\n4. Appendix A is introduced to discuss the derivation of our proposal in equation 7 from Discrete Langevin Proposal [1] in more detail. \\n\\nWe hope that these modifications improve the clarity of our submission. \\n\\nWe would like to briefly highlight the main contributions of our work. Within the field of inference-time controlled text generation (CTG), gradient-based methods offer a flexible and efficient way to enforce constraints on language models. However, these methods typically suffer from a steep tradeoff between fluency and constraint satisfaction. In this paper, we demonstrate that this is a result of the disconnect between the commonly used gradient-based continuous sampling methods and the discrete space of language. To address this, we introduce DAB, a method that leverages gradient information to perform discrete sampling. Our algorithm captures the best aspects of previous state-of-the-art CTG algorithms while enabling a superior balance between fluency and constraint satisfaction, which we demonstrate by beating strong baselines on a range of CTG tasks. 
Finally, our algorithm is able to achieve these remarkable results while exhibiting superior stability and speed compared to prior methods. Given the novelty of our method as well as the performance benefits, we believe this is a valuable addition to the field of controlled text generation. \\n\\n[1] Zhang et al. A Langevin-like Sampler for Discrete Distributions. ICML 2022.\"}", "{\"title\": \"Response to B.1.2: Definition of $P(Y | X)$\", \"comment\": \"Defining $P(Y | X) \\\\propto P^{LM} (Y | X) \\\\exp f(Y)$ is equivalent to the energy function formulation used in previous works (lines 155-157) with $\\\\lambda_1 = \\\\lambda_2 = 1$. As you noted, DLP can be used to directly sample from $P(Y | X)$. However, this approach does not support auto-regressive generation and significantly compromises fluency. Non-autoregressive approaches usually suffer from a lack of fluency, as demonstrated in our experimental results in Table 2 and previous works [1]. Additionally, methods like RLHF or PPO require extra data and fine-tuning, making them unsuitable as inference-time approaches.\\n\\nThe primary motivation behind our framework is the observation that fluency is best satisfied through auto-regressive generation, and gradient-based sampling efficiently finds responses that satisfy constraints. By framing the problem as a joint distribution of $Y$ and $B$, we enable the use of both methods \\u2014 we use autoregressive generation to obtain $Y$, ensuring fluent generations; and we apply Discrete Langevin Proposal (DLP) to sample $B$, ensuring constraint satisfaction. Furthermore, all of this is accomplished without the need to fine-tune the underlying LM. We will add the above explanation to the revision.\\n\\n\\n[1] Liu et al. BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases. 
ACL 2023.\"}", "{\"summary\": \"The paper proposes an approach to controlled text generation --- DAB (Discrete Auto-regressive Biasing) --- that exploits the DLP (Discrete Langevin Proposal) technique from (Zhang et al. 2022) for efficiently sampling inside a discrete space while still being able to exploit gradients of an energy function over this space. The DLP technique is not used directly over the output sequence, but over an auxiliary \\\"bias sequence\\\" that is coordinated with the output sequence through a Gibbs-Sampling-like alternation. The experiments show competitive results with a few baselines in terms of efficiency as well as balance between constraints and fluency of the obtained results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The main strength of the paper lies in its innovative application of the DLP technique to the general problem of controlled text generation (Zhang et al. did touch on text generation but only in a very limited way).\\n\\nThe paper is also creative in the way it uses an auxiliary bias sequence to steer the generation of the actual output sequence.\\n\\nThe experiments demonstrate certain advantages in terms of control (i.e. 
constraint satisfaction) and fluency over a number of baselines, in particular some based on gradient techniques over continuous relaxations of EBMs over discrete spaces.\", \"weaknesses\": \"The main weaknesses of the paper lie on two dimensions: (A) omission of significant related work, (B) lack of discussion/clarity about certain key aspects and modelling decisions in the paper.\\n\\n(A) The Related Work section 2.1, devoted to Language Models as EBMs, totally ignores a substantial line of work specifically devoted to *discrete* sampling from EBMs, either (i) with focus on training autoregressive approximations to these EBMs (exemplified by [1] and a number of more recent publications at ML conferences (see references in [3])), or (ii) with focus on decoding-time techniques [2]. This line of work, like the present paper, is concerned with discrete (as opposed to continuous) sampling, and is not limited to encoder-based architectures.\\n\\n[1] Khalifa et al. A distributional approach to controlled text generation. ICLR 2021.\\n\\n[2] Eikema et al. An approximate sampler for energy-based models with divergence diagnostics. TMLR 2022.\\n\\n[3] Kruszewski et al. disco: a toolkit for Distributional Control of Generative Models. ACL 2023.\\n\\n\\n(B.1) Concerning the core Equation (5). First, in the expression $P^{LM}(Y|X,B)$, what you seem to mean is that you concatenate the input $X$ with the bias sequence $B$ and then apply the LM on this new input, but it would be worth discussing this assumption. Second, and more importantly, while it is not clear at this point in the paper, it seems that in Algorithm 1 you actually need to compute $f(Y)$, and not only $f(B)$. Then a pretty obvious question for the reader is, why not simply define $P(Y|X) \\\\propto P^{LM}(Y|X) \\\\exp(f(Y))$? That would be much more direct than what the paper does, would directly define $P(Y|X)$ as an EBM, and then presumably the DLP technique could be directly applied to this EBM. 
It is not clear to me why the authors do not consider and discuss this possibility. \\n(Of course, several other techniques for sampling from this EBM would be possible, including those mentioned in (A) and techniques related to RLHF/PPO where $f(Y)$ might be seen as a reward.)\\n\\n(B.2) There are several other points in the paper that are kept implicit and would need more discussion. To give a few examples: \\n- The fact that the length $n$ of the response $Y$ needs to be specified in advance, which appears to be a limitation of the approach, is kept implicit.\\n- The important DLP-based equation (7) should be described in a more self-contained way (perhaps using the Appendix), in particular the way the gradient is actually computed.\\n- The fact that step 6 in Algorithm 1 is actually deterministic should be mentioned, as this detracts from standard Gibbs-sampling practice.\", \"questions\": [\"Questions/suggestions (in addition to those implicit in the previous section):\", \"Lines 238-243 seem problematic, as they introduce a notation $P(Y|B)$ that is not conditioned on $X$. Are they correct and/or needed later?\", \"In Lines 494-496, you mention that a good metric for the keyword-guided generation should consider the meaning similarity of the produced sentence to the constraining keywords, not the actual presence of the keywords. I was not fully convinced by this remark, and was wondering whether considering the actual presence of the keywords should be seen as a more important metric. More generally, I wondered whether it would be worth reporting, among the metrics, the value $f(Y)$ itself as this seems to be the main driver of the approach.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents an algorithm for constrained autoregressive decoding where the constraint is defined by a distribution function. 
This is a common approach using energy-based models.\\n\\nEven though BOLT does propose a similar algorithm, the main difference in this paper is that instead of sampling the bias in the continuous domain, the bias token sampling in this paper happens in discrete space. Authors claim, and substantiate with experiments, that this produces sequences that not only follow the constraint better, but also follow the LLM policy model better, leading to more fluent outputs.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The authors propose DAB - Discrete Autoregressive Biasing, a modification of BOLT where both the output sequence Y and the biasing B are always sampled in discrete space. This allows the model to always remain in the discrete space, leading to not just lower computational cost compared to other energy-based decoding methods, but also improved generation with respect to fluency and constraint satisfaction.\", \"as_part_of_dab\": \"1. A joint distribution over the output Y and bias tokens B is proposed.\\n2. Similar to Gibbs sampling, the proposed sampling algorithm (Langevin within Gibbs) alternates between sampling better Y and B while at the same time using Langevin for predicting the distribution to sample the tokens from.\\n3. Uses MCMC to sample the bias tokens, and then adds bias from those while sampling the response token.\\n\\nAblations are performed on both hard and soft constraints to show the effectiveness of the model.\", \"weaknesses\": [\"Even though the method is fast compared to other energy-based models, the method is still slow because of the autoregressive sampling happening inside the step for loop, as shown in Algorithm 1, lines 3-7.\", \"Results on Sentiment are mixed with decreased Fluency compared to BOLT.\"], \"questions\": \"In equation 8, the bias value b_{i,j} corresponding to any bias token b_{i} is calculated as the L2 distance between their corresponding embeddings. 
LLM loss functions use inner product between the embeddings as the distance metric while calculating logits. Any reason to use L2, and were other options tried?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Clarification of Method Section\", \"comment\": \"**Discrete Langevin Proposal function**\\n\\nThe $(1 - \\\\hat{b}\\\\_{ij})$ term corresponds to the hamming distance between tokens. We include this term as a result of the definition of Discrete Langevin Proposal (DLP) introduced by [1]. For your convenience, we will explain how we obtain the proposal function below. \\n\\nTo enable the use of large step sizes in the proposal, we adopt the globally balanced version of the DLP proposal [1, 2]:\\n\\n$\\n\\\\text{Categorical} \\\\left( \\\\underset{j \\\\in |V|}{\\\\text{softmax}} \\\\left( \\\\nabla f(\\\\hat{B} | X)_i (\\\\text{Onehot}_j - \\\\hat{b}_i)\\\\right) \\\\right)\\n$\\n\\nHere, $\\\\text{Onehot}_j$ represents the one-hot vector for the $j$th token in the vocabulary $V$, and $\\\\hat{B} = \\\\{\\\\hat{b}_1,\\\\hat{b}_2, \\u2026 \\\\hat{b}_n \\\\}$ represents the original one-hot vector sequence of length $n$. In order to obtain a distribution over the vocabulary $|V|$, we must compute the inner term for every token $j \\\\in V$. \\n\\nWe note that $(\\\\text{Onehot}_j - \\\\hat{b}_i)$ corresponds to the distance between the original token at position $i$ and every token $j$ in the vocabulary $V$. Given the discrete nature of the tokens, we choose to use hamming distance to represent this term. For a token $j$, the hamming distance to the original token in position $i$ is 0 if the $j$th coordinate $\\\\hat{b}\\\\_{ij} = 1$, as they are the same token; and 1 if the $j$th coordinate is 0. 
Thus we can represent the hamming distance between token $j$ and the current token as $1 - \\hat{b}\\_{ij}$. Below we include the proposal distribution we sample from to obtain the new token for position $i$. \\n\\n$\\nb\\u2019_i \\\\sim \\\\text{categorical} \\\\left(\\\\underset{j \\\\in |V|}{\\\\text{softmax}} \\\\left( \\\\frac{1}{\\\\tau} (\\\\nabla f(\\\\hat{B} | X))\\\\_{ij} (1 - \\\\hat{b}\\\\_{ij}) \\\\right) \\\\right)\\n$\\n\\nHere, $b\\u2019_i$ is the token we sample from the categorical distribution over $V$ on the right-hand side. \\n\\n**Use of greedy decoding**\\n\\nEquation 10 is an argmax to represent greedy decoding, which is a commonly used decoding technique for auto-regressive generation. While other sampling methods are compatible with our algorithm, we choose greedy decoding as previous inference-time controlled generation algorithms [3, 4] also use greedy decoding. This helps emphasize the novel aspects of our framework and ensures a fair comparison with previous works.\\n\\n\\n**Initialization of $B$**\\n\\nTo answer your question regarding why we initialize $B$ to $Y$ when sampling from $P(B | X, Y)$, we provide our reasoning in section 4.2 under the section titled \\u201cSampling from $P(B | X, Y)$\\u201d. Specifically, our goal is to sample from the distribution: \\n\\n$\\nP(B | X, Y) \\\\propto P^{LM}(Y | X, B) \\\\exp(f(B | X))\\n$\\n\\nAs we discuss in the mentioned section, sampling from this distribution would require computing $P(Y | X, B)$ for all possible values of $B$, which is intractable. In order to obtain a more feasible calculation, we note that $P^{LM} (Y | X, B)$ will be high when the bias $B$ aligns with the original response $Y$ due to the nature of auto-regressive generation. Thus we approximate this distribution by initializing $B = Y$, which will ensure a relatively high value for $P^{LM} (Y | X, B)$. 
By initializing the bias term into a region with high values of $P(Y | X, B)$, all that remains is to determine which samples within this region enable better constraint satisfaction. If we initialize $B=0$, then sampling must simultaneously find $B$ that results in high $P(Y | X, B)$ and high $f(B | X)$, which is a more difficult task. \\n\\n[1]. Zhang et al. A Langevin-like Proposal for Discrete Spaces. ICML 2022.\\n\\n[2]. Pynadath et al. Gradient-based Discrete Sampling via Automatic Cyclical Scheduling. NeurIPS 2024. \\n\\n[3]. Liu et al. BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases. ACL 2023.\\n\\n[4]. Qin et al. COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics. NeurIPS 2022. \\n\\nWe appreciate the time taken to point out the notational errors in our work and areas of confusion. We were wondering if there were any other flaws that lead to your score of a 3 \\u2014 are there any serious concerns you have with the method itself? Were there any concerns with the claims made in our paper and whether we provide sufficient evidence to validate them? We would greatly appreciate your input in helping us improve our work.\"}", "{\"summary\": \"This paper introduces DAB (Discrete Auto-regressive Biasing), an algorithm for controlled text generation with large language models (LLMs). Previous methods often use energy-based decoding in continuous space, which struggles to balance fluency and constraint satisfaction. DAB addresses this by operating entirely in the discrete space of text tokens.\\n\\nDAB samples from the joint distribution of generated sequence Y and an auxiliary bias sequence B and alternate between biased auto-regressive generation and discrete gradient-based sampling. Specifically, given a generated text Y, the gradient-based discrete sampling is used to maximise constraint satisfaction. Then B is fixed, and biased auto-regressive generation is used to sample Y. 
A penalization is applied based on sampled tokens' distance from B in embedding space.\\n\\nExperiments show DAB outperforms baselines such as BOLT and LM-Steer on sentiment-controlled generation, language detoxification, and keyword-guided generation.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper identifies previous work's deficiency in balancing fluency and constraint satisfaction and proposed a method that maximises both (according to several benchmark numbers).\", \"DAB seems to be more stable and robust than other continuous methods.\"], \"weaknesses\": \"Nothing major as I can see.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Questions\", \"comment\": \"**Q1: Notation**\\n\\nThank you for pointing this out. The correct term should be $P(Y | X, B)$. We occasionally drop the conditioning on $X$ as all terms should be understood as conditioned on prompt $X$ \\u2014 we have revised the notation in the paper to make it clearer.\\n\\n\\n**Q2.1: Keyword Inclusion**\\n\\nIn the mentioned paragraph, we never claim that the semantic similarity to the keyword is more important than the keyword itself. We specifically state \\u201cThe ideal metric goal for this task should only assign good scores to text where keywords are used in a meaningful way.\\u201d Inclusion of the keyword is necessary, but not sufficient \\u2014 not only should the keywords be included, but it should be included in a meaningful way. Inclusion rate alone is inadequate, as it only captures the inclusion of the keyword, failing to assess whether the inclusion is semantically coherent. Therefore, we use the BERT score to evaluate whether the keywords are included in a meaningful way. 
\\n\\n**Q2.2: Values of Constraint Function $f(Y)$** \\n\\nFor the sentiment and detoxification tasks, we include the scores assigned to the samples from the internal classifier, or the classifier used to guide the generation process. This directly corresponds to the constraint value for these tasks. \\n\\nFollowing prior work [1, 2, 3], we do not include the constraint value for the keyword task as it is more difficult to interpret than the success rate and other metrics. The constraint function computes a score based on the probability each logit vector places on the keyword tokens. This value is maximized when all the sequence positions are the keyword token, which is clearly undesirable. As high values and low values can indicate undesirable behavior, this specific constraint function is difficult to interpret. In contrast, the inclusion rate and BERT score are easily interpretable as higher values indicate strictly more desirable behavior. \\n\\n[1] Liu et al. BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases. ACL 2023. \\n\\n[2] Kumar et al. Gradient-based Constrained Sampling from Language Models. ACL 2022.\\n\\n[3] Qin et al. COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics. NeurIPS 2022. \\n\\n[4] Liu et al. Don\\u2019t Take It Literally: An Edit-Invariant Sequence Loss for Text Generation. ACL 2022.\\n\\nWe would like to thank you for your helpful review, and we hope that our response clears up any concerns. If there are any remaining areas of confusion or reasons for concern, we are happy to answer follow-up questions and engage in further discussion.\"}", "{\"summary\": \"This paper proposes a new controlled text generation approach called DAB. DAB happens at decoding time, and it consists of two alternating steps: updating the bias term based on the gradient, and updating the response sequence conditioned on the bias term. 
This approach is designed to address the trade-off between fluency and control satisfaction. They experiment on sentiment, toxicity, and lexical control benchmarks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"good summary of prior work, it nicely summarizes the field of controllable text generation from AR to NAR, with or without gradient guidance, etc.\", \"the method makes intuitive sense, but the details of the methods seem very unclear.\"], \"weaknesses\": [\"Controllable text generation is important, but the benchmarks this paper tested have been mostly saturated by prior approaches, so further testing on these benchmarks can no longer demonstrate the goodness of the newly proposed approach. Furthermore, prompting could solve these problems, so can this approach solve even harder problems, like model jailbreaking via such MCMC type of approach?\", \"I think the math is not very solid in the paper. equation 6 seems wrong. It's a circular definition. The equation (not numbered) immediately after 6 is also strange, should it be P(B | Y, X)? There are also unclear notations in the method section.\", \"The method section is very badly written. I don't understand many technical details in the method section. Why eqn 7 has the (1-bij) term? Why is eqn 10 argmax for a sampling distribution? If B is the bias term, intuitively initializing B at 0 seems more reasonable than initializing B at Y?\"], \"questions\": \"see the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to A: Related Works\", \"comment\": \"Thank you for providing [1], [2], and [3]. 
While we did discuss works that fine-tune the model very briefly, we were unaware of these specific works and will include them in the revision, as they add more breadth to our discussion on controlled text generation methods.\\n\\nWe want to emphasize that while the suggested papers are related, they are not closely aligned with the focus of our work. As you pointed out in your review, [1], [3] focus on methods that fine-tune auto-regressive models to align with some defined EBM. Our work is primarily concerned with inference-time decoding algorithms that do not require the model to be fine-tuned.\\n\\nWe would also like to point out that the algorithm presented in [2] requires some proposal function that can be used to generate samples. Its core contribution is an accept / reject algorithm that is agnostic to the proposal function. This is different from our algorithm, which presents a novel decoding algorithm that directly generates new samples. Thus we compare primarily to other works that introduce decoding methods that generate samples, which is not the focus of [2]. Nevertheless, we will make sure to include all three works in our related work section as they provide valuable insight into alternative approaches to this problem. \\n\\n[1] Khalifa et al. A distributional approach to controlled text generation. ICLR 2021.\\n\\n[2] Eikema et al. An approximate sampler for energy-based models with divergence diagnostics. TMLR 2022.\\n\\n[3] Kruszewski et al. disco: a toolkit for Distributional Control of Generative Models. ACL 2023.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your supportive and thoughtful review. We respond to your points below:\\n\\n**Weakness: Auto-regressive sampling**\\n\\nThis is correct \\u2014 the main deficiency of our method is the fact that each step requires the autoregressive generation of the sequence. Prior work also suffers from this issue [1]. 
Improving efficiency by reducing the number of auto-regressive calls is an interesting future direction.\\n\\n**Weakness: Mixed results on Sentiment Task**\\n\\nWe would like to point out that while our method underperforms BOLT slightly in the context of fluency, our method greatly outperforms BOLT in terms of sentiment control while almost matching BOLT\\u2019s fluency performance. While our method has the second-best fluency metrics, BOLT lags behind other baselines for most of the sentiment-specific metrics. Additionally, as shown in the qualitative examples we include in Table 6 of the Appendix, DAB generations are not noticeably less fluent than BOLT generations. Finally, DAB outperforms BOLT in terms of fluency on the keyword generation task, as shown in Table 2. We believe that this demonstrates that our method is able to achieve a superior balance between control and fluency. \\n\\n**Question: Alternative Calculation of Bias Vector**\\n\\nWe chose to use the $l_2$ distance as this created a much more peaked distribution towards the desired token. When we tried using the inner product, it did not enable sufficient control as the bias vector did not bias strongly enough towards the sampled tokens. We will add this result to the appendix. \\n \\n[1] Liu et al. BOLT: Fast Energy-based Controlled Text Generation with Tunable Biases. ACL 2023.\"}" ] }
DumcCxxzka
RNAinformer: Generative RNA Design with Tertiary Interactions
[ "Sharat Patil", "Frederic Runge", "Jörg K.H. Franke", "Frank Hutter" ]
The function of an RNA molecule depends on its structure and a strong structure-to-function relationship is already achieved on the secondary structure level of RNA. Therefore, the secondary structure based design of RNAs is one of the major challenges in computational biology. A common approach of RNA design is inverse RNA folding. However, existing RNA design approaches cannot invert all folding algorithms because they cannot represent all types of base interactions. In this work, we propose RNAinformer, a novel generative transformer based approach to the inverse RNA folding problem. Leveraging axial-attention, we directly model the secondary structure input represented as an adjacency matrix in a 2D latent space, which allows us to invert all existing secondary structure prediction algorithms. Consequently, RNAinformer is the first model capable of designing RNAs from secondary structures with all base interactions, including non-canonical base pairs and tertiary interactions like pseudoknots and base multiplets. We demonstrate RNAinformer’s state-of-the-art performance across different RNA design benchmarks and showcase its novelty by inverting different RNA secondary structure prediction algorithms.
[ "RNA", "RNA Design", "RNA Inverse Folding", "Transformers", "Generative Design", "Axial Attention", "pseudoknots", "multiplets" ]
Reject
https://openreview.net/pdf?id=DumcCxxzka
https://openreview.net/forum?id=DumcCxxzka
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zLY8KR8vMm", "zBqqYV2NQ7", "xy6n2LjxAe", "v8ZjY4uSCK", "lQ8KxeoeNB", "h1Zc2oytoQ", "dKQDJpSWmS", "cVs3w4u4I5", "Z8oamNRgoZ", "PChozhITIw", "M6LC9KO0lq", "IPkRTAFBXc", "C71xTBWnCG", "9QNhpJsh1f", "8gz7NBETG8", "8UpyNrXMSD", "7OISklE02r", "6FOgU7mYF4", "1ArBX8fAj0", "0cStThFZ4N", "0IasCSgVtC" ], "note_type": [ "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732503851888, 1737523633999, 1732503705841, 1735797727278, 1732504086177, 1732504138782, 1730680119507, 1732503611527, 1731124284802, 1732753812166, 1732502319774, 1732677143339, 1732705744444, 1732503388300, 1732502543482, 1732502977125, 1730579637187, 1730569257621, 1731704448500, 1730669616196, 1732503147366 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Area_Chair_Pbd2" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Reviewer_DDzf" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Reviewer_c4ig" ], [ "ICLR.cc/2025/Conference/Submission4342/Reviewer_Lp2j" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Reviewer_VnQq" ], [ "ICLR.cc/2025/Conference/Submission4342/Reviewer_WYDo" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4342/Reviewer_WYDo" ], [ "ICLR.cc/2025/Conference/Submission4342/Reviewer_VnQq" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ], [ "ICLR.cc/2025/Conference/Submission4342/Reviewer_Lp2j" ], [ "ICLR.cc/2025/Conference/Submission4342/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author response to Reviewer WYDo\", \"comment\": \"Dear Reviewer WYDo,\\n\\nWe thank you for the valuable feedback. Specifically, we acknowledge the reviewers\\u2019 comments that our work is well-executed. We will address the individual questions in the following.\\n\\n>While the hyperparameters are described, there is little discussion of how they were obtained; how much sweeping did the authors perform?\\n\\nDue to the number of experimental settings, we manually tuned the model using our validation sets to obtain a good common model setting across our evaluations. We mainly tuned the learning rate for our batch size as we found it to be the most important hyperparameter. The training stability and validation performance depended on a fast initial lowering of the training loss.\\n\\n>Do the authors give any consideration to non-canonical/modified nucleotides?\\n\\nWe thank the reviewer for this valuable comment but have to admit that we do not include modified nucleotides. \\nHowever, we are also not aware of any secondary structure prediction algorithm that is capable of handling modified nucleotides.\\n\\n>Could the authors share some reasoning why the model trained on synthetic data performs better than the finetuned model?\\n\\nWe think that the reason is the limited availability of data containing all kinds of base pairs. \\n\\nWhile there exists a lot of data from comparative sequence analysis and predicted secondary structures in the public domain, this data typically lacks base multiplets and often also pseudoknots. 
The only reliable source of data with all kinds of base pairs is the PDB, where RNA only data describes a very small fraction of all 3D structures.\\nDuring training on known RNAs, we thus rarely visit sequences with multiplets (or always the same when oversampling multiplet data). With synthetic data, on the other hand, we can generate a lot more samples that contain all kinds of base pairs. Given that our folding algorithm is strong enough (we use RNAformer which appears to be the state-of-the-art folding algorithm), it seems that we can learn these kinds of features much better.\\n\\n>The pairwise hamming distance for diversity is designed to account for base pair flips?\\n\\nNo, the pairwise hamming distance metric doesn't take into consideration base pair flips. A generated sequence that contains a base pair flip is considered a different sequence. However, we do not observe a tendency of RNAinformer to only introduce base pair flips.\\n\\nWe thank the reviewer again for the valuable feedback. If there are any further questions we are happy to answer them. Otherwise, we would appreciate it if the reviewer would consider updating our score.\\n\\nWith kind regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author response to Reviewer Lp2j continued\", \"comment\": \">For the experiments on RNA design with pseudoknots authors only compare their method to antaRNA which is a relatively old method (2015) while many newer methods with superior results have been published since then.\\n\\nWe agree with the reviewer that antaRNA is not new. However, it is still one of the main methods used for pseudoknotted RNA design. 
That being said, we also tried to evaluate aRNAque on the dataset, but the runtimes were unexpectedly high, and because aRNAque doesn't work with level 3 pseudoknots, which antaRNA can handle, we skipped further evaluations of aRNAque.\\n\\n>The length of the supported sequences is mentioned to be capped at 200 in the experiments. That is relatively low to what has been done in recently proposed methods (e.g., 900 in SAMFEO and 500 nucleotides in RDESIGN -- see [1] above). This is an important shortcoming because it is well-known that with the increased length of RNA the problem becomes more difficult. Authors have mentioned this in their Limitations section, but they mention that it is enough for current benchmarks while there are other benchmarks that contain RNAs with more nucleotides and they are simply not used in the presented experiments.\\n\\nWe agree with the reviewer that this is a major limitation of RNAinformer that we also clearly state in the limitations section in the concluding remarks. We also agree that structure prediction for longer sequences is typically more challenging and in turn also poses challenges to design algorithms.\\nHowever, most of the experimentally validated sequences depicted in the PDB are relatively short. For instance, the most commonly used PDB data sets used for the evaluation of deep learning approaches (TS1, TS2, TS3, and TS-Hard; which we also use here) have a maximum sequence length of 189nt. Similarly, from the originally collected data from PDB (collected from RNAsolo which essentially is a PDB mirror) in the RDesign publication ([1]), 87% are shorter than 100nt. The limitation to 200nt thus is unfortunate, but we still think that it captures most of the experimentally validated samples in the available benchmarks. 
\\n\\n>While authors propose a new pipeline for designing their synthetic datasets, it would have been better to use the datasets used by prior methods (e.g., Eterna100 or the one curated in [1]) to show the performance of the presented method in comparison to the baseline methods, and then mention the motivation behind using a new pipeline for generating a dataset.\\n\\nWe do not fully agree with the reviewer here. While we agree that it is always interesting to see comparisons to existing methods, RNAinformer is the only available method for secondary structure based design that can tackle real-world structures from PDB. Furthermore, we think that using synthetic data is highly desirable, particularly in the biological domain, and that our results show that we can transfer well from synthetic to real world examples. We think that this result could also be of interest to the community. The correct preparation of the synthetic data is one of the key steps to reliably assess the performance on the real data and to avoid data leakage issues. We, therefore, carefully prepared our synthetic pipelines to make them a valuable starting point for future research.\\n\\nWe thank the reviewer again for the valuable feedback. We hope that we adequately addressed all questions and concerns. However, if there are still any questions left, we are happy to answer them! If we have answered your questions satisfactorily, we would appreciate it if you would consider increasing our score.\\n\\nWith kind regards,\\n\\nThe Authors\"}", "{\"metareview\": \"The paper gives a generative transformer model for the inverse RNA folding problem (designing RNAs from secondary structures). The problem that the paper studies is undoubtedly important, and the pipeline for synthetic data generation that the authors have developed can have other uses. 
However, because the paper is applying an established methodology rather than developing a new one, the bar for experimental evaluation is high, and the paper falls short by this measure. Specifically, the paper misses some important comparisons with other relevant prior efforts -- in particular, generation from RNA tertiary structures as well as efforts on protein-folding. The concerns are substantial enough that I must recommend rejection this time around. I encourage the authors to incorporate the feedback in the reviews and submit to a different deadline.\", \"additional_comments_on_reviewer_discussion\": \"There was significant discussion between the authors and the reviewers during the rebuttal period. In the end, the authors were unable to convince the reviewers.\"}", "{\"title\": \"Author response to Reviewer VnQq\", \"comment\": \"Dear Reviewer VnQq,\\n\\nWe thank you for your valuable feedback and for pointing out the novel aspects of our work and the comprehensive analysis. We will address your concerns and questions in detail in the following.\\n\\n>The related work in this paper is insufficient. It overlooks a class of RNA inverse folding methods based on tertiary structure, such as RDesign [1] and RiboDiffusion [2], which directly use the RNA tertiary structure backbone as model input and implicitly model structural interactions like pseudoknots. These methods are highly relevant to the topic of the paper.\\n\\nWe thank the reviewer for this useful comment. We have added a section on 3D RNA design algorithms to the Related Work section in the revised manuscript.\\n\\n>The benchmarking in this paper has several flaws: (1) The folding-back algorithm is overly simplistic and lacks diversity. Tertiary structure prediction methods should be included to assess whether the designed sequences meet expected tertiary interactions. Additionally, the errors caused by the folding-back algorithm are not adequately explained. 
AND Can the designed sequence be folded using the tertiary structure prediction model?\\n\\nWe thank the reviewer for this suggestion. We folded the predictions for the PDB test sets using AlphaFold 3 (AF3). The results are shown in Tables 22 and 23. We find that RNAinformer can improve the design for orphan RNAs, where there is no MSA available for the ground truth data.\\n\\n>The model's performance under novel structures, i.e., samples with significant differences from the training set, is not analyzed. AND Can the model design performance on relatively novel structures be tested on natural RNA of CASP15?\\n\\nWe again would like to thank the reviewer for the useful suggestion. We add an analysis of predictions on the CASP15 RNA only data to the revised version of our manuscript. The results are shown in Table 24. We use these results to assess the performance of RNAInformer with and without finetuning on known RNAs from PDB. Surprisingly, we observe that the RNAinformer trained on synthetic data only achieves better results compared to the version that was finetuned on real-world data.\\n\\n>There are relatively few baseline methods. For adjacency matrices, graph neural network-based methods and ResNet-based methods are commonly used to represent RNA data and could be easily adapted to the current task. AND Can some GNN-based baseline methods be added?\\n\\nWe add a GNN baseline based on the structTransformer provided by [1]. We run the baseline with the same batch size and number of steps as the RNAinformer. The results are shown in Table 16.\\n\\n>Although it is reasonable to use synthetic data for proof of concept, using it as the core test set is far from the actual application scenario and may introduce more unexpected errors.\\n\\nWe agree with the reviewer that evaluation on synthetic data alone is not the way to go when training on synthetic data. However, we disagree with the reviewer that we evaluate on the synthetic data only. 
We run experiments on four test sets of experimentally validated structures from the PDB. These sets were initially provided by [2] and [3] and are commonly used for evaluations of deep learning methods in the field of RNA secondary structure prediction. \\n\\nFurthermore, to assess the level of overfitting to the folding engine that was used during the generation of synthetic data, we fold all generated sequences with three more folding algorithms from the literature. The results are shown in Table 21 of the revised version of our manuscript (Table 19 in our initial submission). We find that RNAinformer does not overfit the folding engine used during synthetic data generation but that it generates sequences that improve the F1 score compared to folding the ground truth sequence across most of the datasets and with different folding engines. We call this \\u201cImproved foldability\\u201d in the manuscript. We further extend this analysis to include AlphaFold 3 as a 3D folding algorithm and observe similar results for orphan RNAs, but not for RNAs where MSA is available for the ground truth data. The respective results are shown in Tables 22 and 23 of the revised manuscript.\\n\\nFinally, we would like to emphasize that we also evaluate the RNAinformer on a realistic task of riboswitch design in Section 5.3.\"}", "{\"title\": \"Author response to Reviewer VnQq continued\", \"comment\": \">There is a lack of comparison on public benchmarks, such as Eterna100, which is crucial to demonstrate the basic capabilities of the model. AND Can benchmarking be done on some public test sets, such as Eterna100 and Openknot?\\n\\nWe agree with the reviewer that public benchmarks could provide valuable insights about strengths and weaknesses of a new algorithm. However, as also stated by the reviewer in the initial Review, the main focus of our work is the design of RNAs for all sorts of base pairs. This includes non-canonical base pairs, pseudoknots and base multiplets. 
The Eterna100 benchmark and the OpenKnot benchmark both do only provide nested structures or pseudoknotted structures, but not base multiplets. We, therefore, use existing and commonly used public benchmarks from the RNA secondary structure prediction literature instead of creating a new benchmark dataset for RNA design. For our evaluations on nested structures, we already use an additional, more practically relevant benchmark of theophylline riboswitch design. Furthermore, evaluations on the Eterna100 benchmark typically involve 24 hour runs on each of the benchmark tasks, refining the predictions with every iteration. In contrast, we only generate 20 samples with RNAinformer for all experiments on nested structures.\\n\\nWe again thank the reviewer for the useful feedback. We hope that we addressed all questions and concerns but we are happy to answer any further questions if necessary. If there are no further concerns, we would like to kindly ask the reviewer to reconsider our score.\\n\\nWith kind regards,\\n\\nThe Authors\"}", "{\"summary\": \"### A method for 2d structure based RNA inverse folding enabling - for the first time - arbitrary interaction types (e.g. pseudo-knots, non-canonical base pairs, ...)\\n\\nThis paper introduces a secondary structure based RNA inverse folding model (RNAinformer) that is capable of designing RNA sequences from secondary structures with arbitrary interaction types (e.g. non-canonical base pairs, pseudo knots, base multiplets) that were not representable in previous 2D based inverse folding methods. This improvement of being able to represent these arbitrary interaction types is achieved by working with the more expressive adjacency matrix representation instead of dot-bracket representations of the secondary structure.\\n\\nThe RNAinformer model is based on an auto-regressive encoder-decoder transformer. 
The secondary structure (in the form of an adjacency matrix) is encoded via axial attention (similar to the RNAformer structure prediction model) and finally pooled from a 2d to a 1d vector that is passed to the decoder for decoding into an RNA sequence. RNAinformer also supports constrained generation based on masked sequences, which are embedded into a 2d representation by the encoder if provided, or desired GC content (linearly embedded and added to the embedding).\\n\\nThe authors also make an interesting, strong claim that training on synthetic data only improves performance over training with experimental data, that -- if true -- would be of significant interest to the community.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well written, clear and has a nice flow.\\n\\n2. The provided codebase looks well structured & documented upon a first spot check. \\n\\n3. I'm a big fan of your spider plots showing a variety of metrics of interest (valid sequences, diversity, solved ,...)\\n\\n4. The performance of the model and its enhanced design capabilities (GC content, masking, leveraging structure beyond what can be represented in simple dot-brackets) are promising and of interest to the community. If the authors can demonstrate that these also hold when using non-synthetic training data or further support their claim on the synthetic training data being superior to currently available experimental training data this would be of much interest to the community. (c.f. also weaknesses).\", \"weaknesses\": \"1. A link to the 3D inverse folding literature is currently missing: In the related work section I would have expected to see a mention of the (deep learning based) inverse folding efforts based on 3D structure (e.g. Rosetta, gRNADe, etc.). How does secondary structure-based inverse folding perform compared to 3D based inverse folding, in cases where 3D structure is available? 
This may represent a route of further testing and strengthening the hypothesis that the essential information for RNA structure-function relations is encoded in 2D connectivity patterns (base pairs, pseudo-knots, base multiplets, ...).\\n\\n2. I am somewhat concerned by the purely synthetic data based training strategy, since synthetic data are created with the same method that is used for evaluation. The authors argue that this allows side-stepping the data gap in gold standard secondary structure data, which is limited in the PDB. However, ultimately any secondary structure prediction method will have been trained on some level of structural data, partially exhibiting the biases and limitations of the datasets that the authors discussed. In addition to those, the models might also have certain model-specific biases that the inverse folding model may then pick up. Since the evaluation is also done by the same prediction model (e.g. RNAfold in 5.1) this risks reinforcing those model-specific biases. While the authors provide an experiment in the appendix to address this, I believe this point deserves further discussion and should be featured more center stage (at the very least including table 20, and possibly a test also for some of the other tasks specifically). This point is quite interesting and -- if true and well supported -- could be of significant interest to the community beyond the method in this paper alone.\", \"some_analyses_i_would_like_to_see_in_this_regard\": [\"a comparison of training on synthetic data from model 1, but an evaluation with another, independent structure prediction model 2.\", \"an analysis of the sequence recovery (an imperfect metric, I know, but still somewhat informative to exclude cases of the inverse folding model overfitting on quirks of the structure prediction model). If you were to go all out, an evolutionary inspired recovery may be even more informative (c.f. 
https://openreview.net/forum?id=y5L8W0KRUX&referrer=%5Bthe%20profile%20of%20Chengyue%20Gong%5D(%2Fprofile%3Fid%3D~Chengyue_Gong1) in the protein world\"], \"questions\": [\"In Table 1, it'd be helpful if the authors could highlight what % means (here % solved), as well as the n=... that was used for this estimate and the topk from which the answer was picked. From the description, it is not quite clear to me how those % were obtained.\", \"Were all benchmarked methods re-trained on the same synthetic data? (for each of the tasks)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response to Reviewer Lp2j continued\", \"comment\": \"> That being said, even for most other prior methods, we can still tune the diversity. For example you can change the temperature value in the Boltzmann sampling part of SAMFEO to get a less skewed distribution that leads to a higher diversity. However, it is important to have a skewed enough distribution to make sure we have a high recall among a small number of candidates.\\n\\nWe think the RNAinformer shows strong performance in terms of solved tasks and high diversity. While diversity might be increased artificially, at least for RNAinformer there is no need to do so. In contrast, we think that our model shows strong performance across all tasks, while ensuring high diversity of the generated candidates.\\n\\n>The method the authors use for measuring the diversity of the valid sequences is computing the mean of pairwise Hamming distances between the sequences. This may lead to an overestimation of the diversity because if two RNA sequences have just an indel (unpaired nucleotide) compared to each other, it considers all the subsequent bases as different. 
I think in this case something similar to the Needleman-Wunsch algorithm would be a better measure of dissimilarity.\\n\\nFor our calculations of diversity, we only consider sequences of the same length that were generated for a given target (generation is length limited). Therefore, aligning sequences e.g. using Needleman-Wunsch is not necessary (there are no indels).\\n\\n>It is also not very clear why diversity is an important factor here because unlike image generation and text generation, for RNA design it is more important to get the correct design among the few samples. For example, when looking for a specific RNA to bind to the snoRNA of a new virus to inhibit its activity, why should one care about the diversity of the proposed solutions rather than just having the top few solutions that are more likely to work. It is more important to have the expected RNA among the top few generated solutions to also decrease the costs of downstream in-vivo experiments.\\n\\nWe think that in silico RNA design methods should support experimentalists during the search for promising candidates that can then be subsequently analyzed in the wet-lab. It is, therefore, important to provide a large list of diverse and promising candidates for screening.\\nThis is in line with recent developments and publications in the RNA design community [6,7]. To be maximally useful, an RNA design algorithm should be able to generate as many valid and diverse sequences as possible to explore the potential space of solutions and reduce human effort.\\n\\n[6] Hammer, S., G\\u00fcnzel, C., M\\u00f6rl, M., & Findei\\u00df, S. (2019). Evolving methods for rational de novo design of functional RNA molecules. Methods, 161, 54-63.\\n\\n[7] Runge, F., Franke, J., Fertmann, D., Backofen, R., & Hutter, F. (2024). Partial RNA design. 
Bioinformatics, 40(Supplement_1), i437-i445.\\n\\n>Some results such as the one presented in Figure 4 might portray the amount of non-canonical base-pairs (NC) as a measure that has to be maximized. However, although these non-canonical base-pairs are present in folded RNAs, this amount has to be the correct amount. If these values are maximized by the model but do not lead to valid sequences, then why should it be considered a good thing?\", \"we_think_this_part_requires_clarification\": \"We are not aiming at maximizing the number of non-canonical base pairs but report the number of valid sequences that contain non-canonical base pairs. The reason is that existing secondary structure based RNA design methods cannot design RNAs that contain non-canonical base pairs at all \\u2013 we show that the designs of RNAinformer contain non-canonical base pairs in many cases, similar to the ground truth data.\\n\\n>For experimentally verified 3D structures, which can be considered the gold standard set for evaluation, authors only report the results of their method and do not compare it to any other baseline. From Table 17, the ratio of the valid sequences seems to be very low and it would be important to see if prior methods do not outperform RNAinformer on this dataset.\\n\\nWe agree with the reviewer that we do not compare to any other method on the experimentally validated structures. The reason is that RNAinformer is the first secondary structure based RNA design algorithm that is capable of designing RNAs for secondary structures that contain all kinds of base pairs. However, we implement a GNN baseline and add the results to Table 16 in the Appendix.\\n\\nWe also agree with the reviewer that the ratio of valid sequences is relatively low. 
However, this is a pioneering work that \\u2013 for the first time \\u2013 enables the design of RNAs with all kinds of base interactions and we show that we can solve some of the tasks that contain pseudoknots, multiplets and non-canonical base pairs. This is novel in the field of RNA design.\"}", "{\"summary\": \"This paper proposes RNAinformer, a novel generative transformer based approach to the inverse RNA folding problem. Leveraging axial-attention, they model the secondary structure input represented as an adjacency matrix in a 2D latent space, which allows us to invert all existing secondary structure prediction algorithms. The authors claim that RNAinformer is the first model capable of designing RNAs from secondary structures with all base interactions, including non-canonical base pairs and tertiary interactions like pseudoknots and base multiplets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper studies RNA inverse folding (from secondary structures), which is an important problem in biology\", \"weaknesses\": [\"The methodology is a standard transformer and lacks innovation\"], \"questions\": [\"Have you tried finetuning RNAinformer using the data from PDB? You said secondary structures derived from PDB are the golden standard but the dataset size is small. That's why you used synthetic data to pre-train. I wonder if it can improve your model performance with additional finetuning.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I would like to thank the authors for their responses! The problem that the authors have targeted in their work is indeed an important problem; however, I think since the introduced model does not have a novel component it should at least present a comprehensive comparison to other baselines proving its effectiveness. 
Although the introduced model has additional capabilities (e.g., including pseudo-knots), I still expect to see it outperform other methods in well-known benchmark datasets that have been used by prior work, even though they are designed for simpler evaluations, because they can be considered subproblems (e.g., finding sequences for a desired structure without tertiary interactions, i.e. one that doesn't contain a pseudoknot).\", \"i_have_two_other_minor_follow_up_comments_regarding_the_provided_responses\": [\"Regarding the capability of other models such as SAMFEO for generating more diverse sequences, they mentioned that \\\"While diversity might be increased artificially, at least for RNAinformer there is no need to do so.\\\" However, changing the temperature for the sampling from any generative model is an inherent hyperparameter. There is nothing artificial about it. The same hyper-parameters and strategies are used for other generative models such as transformers used by the authors and can be tuned for a desired level of diversity.\", \"Regarding the added results for the GNN baseline, the results seem to be the same as random. Did the authors explore different hyperparameters and settings to generate the best possible results?\", \"Still, the major concern I raised at the beginning of this comment prevents me from raising my score; however, I would remain open to discussion with other reviewers and AC and would not challenge it if they decide to accept the paper.\"]}", "{\"title\": \"Changes to the manuscript\", \"comment\": [\"We thank all reviewers for their patience and we apologize for the late response. We have uploaded a revised version of our manuscript. The key changes we would like to highlight are the following:\", \"We add a section about 3D RNA design approaches to the related work in response to the comments of reviewers DDzf and VnQq.\", \"We implement a GNN baseline in response to the comments of reviewers Lp2j and VnQq. 
The results are shown in Table 16.\", \"We evaluate RNAinformer on the RNA only data from the CASP15 blind competition in response to the comment of reviewer VnQq. We use these evaluations to assess the influence of finetuning RNAinformer with experimentally validated samples from PDB as requested by reviewer c4ig. The results are shown in Table 20.\", \"Evaluations with AlphaFold 3 for 3D structure predictions\", \"We evaluate the generated sequences using AlphaFold 3 in response to the comment of reviewer VnQq. We analyzed the result in terms of improved foldability as shown in Table 22. We found that, while we cannot improve the general foldability of the sequences, this is mainly due to a lack of MSA for the designed sequences. Therefore, we show the difference in TM score for examples where there was MSA available for the ground truth sequence and for orphan RNAs, highlighting that RNAinformer designs improve the foldability of orphan RNAs across nearly all test sets.\", \"We also evaluate the predictions of RNAinformer on the CASP15 data with AlphaFold 3. The results are shown in Table 24 and indicate that the RNAinformer model finetuned on known samples seems to achieve worse performance compared to the version trained only on synthetic data.\", \"We deepen the analysis of training on synthetic data by folding all predictions of the RNAinformer version trained on known RNAs with different folding algorithms as requested by reviewer DDzf. The results are shown in Table 21.\", \"We extend the comparison of training on synthetic data, synthetic training with finetuning on experimentally validated structures from PDB, and training on known RNAs with finetuning on PDB samples as requested by reviewer c4ig. The results are shown in Table 19.\", \"Any issues with rendering of Figures and text above the page limit in the current intermediate version of our manuscript will be resolved in later versions.\", \"We also post individual responses to all reviewers. 
We are looking forward to a fruitful discussion.\", \"With kind regards,\", \"The authors\"]}", "{\"comment\": \"Thanks for the author's response. I will increase the score to 5. The main reasons preventing the score from improving further are the relatively limited innovations in the model, the lack of adequate baseline comparisons on new tasks, and the absence of convincing generation case studies.\"}", "{\"title\": \"Rebuttal acknowledged\", \"comment\": \"I thank the authors for addressing my questions.\\n\\nI think my original score is appropriate and I will therefore maintain it. Justification: the next scoring option available to me as a reviewer would be an 8 and that would require greater technical novelty for me to deem it appropriate.\"}", "{\"title\": \"Author response to Reviewer Lp2j\", \"comment\": \"Dear Reviewer Lp2j,\\n\\nWe thank you for the valuable feedback and for highlighting the usefulness of our data generating pipelines for future work and the importance of the problem in general.\\nWe will address your questions and concerns in detail in the following.\\n\\n>There are many recent methods for solving the problem of inverse RNA design which have not been used for comparisons in this paper. The comparisons are limited to only a few of the prior works (mostly only 2). Some of these prior methods have similarly used conditional autoregressive formulation similar to this work. The paper in [1] targets an even more general problem by inverse RNA tertiary structure which solves the RNA inverse problem as a subproblem. Without thorough comparisons it is hard to ensure the merits of the newly proposed approach.\\n\\nWe thank the reviewer for sharing these references with us. We agree that there are methods available that we did not consider for evaluation. 
Regarding the three works suggested by the reviewer, we did not include these methods for different reasons.\\n\\nRegarding [1]: While RDesign appears to be a promising approach, the algorithm tackles a different problem than RNAinformer, RNA design based on 3D structure information. This approach poses different challenges on the algorithm compared to pure secondary structure based design. Furthermore, the algorithm requires additional 3D structure information for the design of RNAs which is often not available in real-world scenarios. Finally, secondary structures designed by RDesign are limited to canonical base interactions in dot-bracket notation. We, therefore, exclude RDesign from our evaluations.\\n\\nRegarding [2]: The described algorithm designs RNAs for nested structures in dot-bracket notation only. Since we are mainly interested in designing RNAs for all possible base interactions based on matrix representations, we exclude it from our evaluations.\\n\\nRegarding [3]: We tried to run aRNAque on our datasets. However, due to the long runtime of each aRNAque evaluation and because aRNAque doesn't work with level 3 pseudoknots, which antaRNA can manage and which are also in our test set, we did not include aRNAque in our evaluations.\\n\\n>There is also a similarity between the protein inverse folding problem and RNA inverse folding and since the former has been studied more, most models can still be applicable to the RNAs (instead of 20 amino acids we will have 4 nucleotides). Therefore, works in RNA inverse folding still use these methods as baselines. For example the following works are used as baselines for RNA inverse problems as well.\\n\\nWe add a GNN baseline based on the structTransformer provided by [1]. We run the baseline with the same batch size and number of steps as the RNAinformer. 
The results are shown in Table 16 of the revised version of our manuscript.\\n\\n>For many of the comparisons, such as the ones on nested structures in which the proposed method achieves significantly worse results than the baseline methods (SAMFEO), the authors justify the presented results by highlighting the diversity of their generated outcomes. However, the first goal is to derive valid sequences. If none of the sequences for a task is valid, what would be the benefit of diversity?\\n\\nWe agree with the reviewer that we do not achieve SOTA performance for this explicit experiment (while still achieving second best performance, solving 91% of the tasks while the third best competitor achieves only 77%). We think that we also clearly state this in the text as mentioned by the reviewer. However, we disagree that this is the case for many comparisons; in fact, this is the only experiment where RNAinformer is outperformed by any of the reported (specialized) baselines.\\n\\nRegarding diversity, we would like to clarify that it is calculated for all the designs that solve a given structure (valid sequences). We clearly observe that the generated solutions of RNAinformer are more diverse than those of SAMFEO and think that this is a very important result for an auto-regressive modeling approach trained on sequence recovery while only generating 20 samples per task.\\n\\nOverall, we think that reporting cases where our method fails (or does not achieve SOTA results) is important and could lead to a better understanding of the strengths and weaknesses and thereby to better methods in the future.\"}", "{\"title\": \"Author response to Reviewer c4ig\", \"comment\": \"Dear Reviewer c4ig,\\n\\nWe thank you for your valuable feedback and for pointing out the importance of our approach for the field of biology. 
We will address your concerns and questions in detail in the following.\\n\\n>The methodology is a standard transformer and lacks innovation\\n\\nWe agree with the reviewer that our approach employs rather standard deep learning techniques including axial attention and auto-regressive generation. However, we would like to emphasize that the major novelty of our approach lies in the application of these methods to a long-standing problem of computational biology. In this regard, the usage of axial-attention to process an RNA structure represented as an adjacency matrix is novel, offering multiple advantages over existing secondary structure-based RNA design algorithms as discussed in the Introduction of our initial submission. Most importantly, the right combination of these \\u2018standard\\u2019 techniques enables RNA design for nucleotide interactions that were previously intractable with other secondary structure-based RNA design approaches.\\n\\n>Have you tried finetuning RNAinformer using the data from PDB? You said secondary structures derived from PDB are the golden standard but the dataset size is small. That's why you used synthetic data to pre-train. I wonder if it can improve your model performance with additional finetuning.\\n\\nWe thank the reviewer for this helpful comment. \\nWe pre-trained an RNAinformer model on synthetic data and data from known RNAs, and finetuned both models on the PDB data. \\nThe results are shown in Table 19 in Appendix E.4 in the revised version of our manuscript.\\n\\nWe observe that finetuning indeed improved performance on the different PDB test sets. To further investigate this, we also evaluated the models on RNA only data from the CASP15 competition (results shown in Tables 20 and 24 in Appendix E.4). Here, we observe that the finetuned model performs slightly worse than the model trained only on synthetic data. 
We conclude that finetuning can be beneficial in specific cases, but that training on synthetic data alone seems to generalize better.\\n\\nWe again thank the reviewer for the useful feedback. We hope that we addressed all questions and concerns but would be happy to answer further questions if necessary.\\nIf there are no further questions, we would like to kindly ask the reviewer to consider increasing our score.\\n\\nWith kind regards,\\n\\nThe Authors\"}", "{\"title\": \"Author response to Reviewer DDzf\", \"comment\": \"Dear Reviewer DDzf,\\n\\nWe thank you for your valuable feedback and for acknowledging the style of writing and the quality of our codebase. We also appreciate the comment on our spider plots and the assessment of our work as relevant for the community. In the following, we address your questions and concerns in detail.\\n\\n>A link to the 3D inverse folding literature is currently missing: In the related work section I would have expected to see a mention of the (deep learning based) inverse folding efforts based on 3D structure (e.g. Rosetta, gRNAde, etc.). How does secondary structure-based inverse folding perform compared to 3D based inverse folding, in cases where 3D structure is available? This may represent a route of further testing and strengthening the hypothesis that the essential information for RNA structure-function relations is encoded in 2D connectivity patterns (base pairs, pseudo-knots, base multiplets, ...).\\n\\nWe thank the reviewer for this helpful comment. We add a discussion about 3D RNA design methods in the Related Work section of the revised version of our manuscript.\\nWe also agree with the reviewer that comparing the performance of 3D methods with 2D design approaches could lead to interesting insights. 
We will try to obtain predictions for our test data using dedicated 3D RNA design approaches in the future.\\n\\n>I am somewhat concerned by the purely synthetic data based training strategy, since synthetic data are created with the same method that is used for evaluation. However, ultimately any secondary structure prediction method will have been trained on some level of structural data, partially exhibiting the biases and limitations of the datasets that the authors discussed. In addition to those, the models might also have certain model-specific biases that the inverse folding model may then pick up. Since the evaluation is also done by the same prediction model (e.g. RNAfold in 5.1) this risks reinforcing those model-specific biases.\\n\\nGenerally, we agree with the reviewer that RNA design is tightly connected to RNA folding as we always require a folding oracle for the evaluation of a given design as long as there is no lab-in-the-loop approach involved. In this regard, different design methods use different folding oracles (most of them use RNAfold) and we would like to emphasize that one of our major contributions is that our approach allows us to employ any RNA secondary structure prediction algorithms. We also agree with the reviewer that this procedure bears the risk of overfitting the folding algorithm, as recently shown in a benchmarking paper for learning-based approaches for RNA design [1]. Therefore, we analyze the predictions of RNAinformer using multiple state-of-the-art deep learning based folding algorithms (SPOT-RNA, MXFold2, UFold). 
The results are shown in Table 19 of our initial submission (or Table 21 of our revised manuscript) and indicate that RNAinformer does not overfit the RNAformer model, but seems to design RNAs with improved foldability for nearly every folding engine compared to the original PDB sequences across the test sets.\\n\\nTo further investigate this, we use AlphaFold 3 to predict 3D structures for the generated sequences and analyze these in terms of TM Score. Our results (Tables 22 and 23 in the revised manuscript) indicate that RNAinformer predictions improve the predictions in a fair comparison on orphan RNAs but due to a lack in MSA for the generated sequences, show strongly decreased performance on sequences where there is MSA available for the ground truth data.\\n\\n[1] Koodli, R. V., Rudolfs, B., Wayment-Steele, H. K., Eterna Structure Designers, & Das, R. (2021). Redesigning the EteRNA100 for the Vienna 2 folding engine. BioRxiv, 2021-08.\\n\\n>While the authors provide an experiment in the appendix to address this, I believe this point deserves further discussion and should be featured more central stage (at the very least including table 20, and possibly a test also for some of the other tasks specifically). This point is quite interesting and -- if true and well supported -- could be of significant interest to the community beyond the method in this paper alone.\\n\\nWe agree with the reviewer that these results are very interesting to the community and we will highlight them more clearly in the main body. For now, however, we updated the discussion in Section 5.4 to account for our new results.\\n\\n>Some analyses I would like to see in this regard: a comparison of training on synthetic data from model 1, but an evaluation with another, independent structure prediction model 2.\\n\\nAs mentioned earlier, we already provide a similar experiment with our initial submission. 
Furthermore, we now include evaluations of 3D structures using AlphaFold 3 predictions (Tables 22 and 23) as explained above.\"}", "{\"summary\": \"The authors present RNAInformer, an RNA secondary structure inverse folding method. The authors identify that there has been strong recent progress in RNA secondary structure prediction, and use this as the motivation for the work. The authors additionally identify that the vocabulary of base-base interactions in RNA secondary structure is much larger than what the community typically models and account for this in their method through an axial attention mechanism\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Overall I think the work is quite carefully considered: from the problem statement to the dataset setup and splits, and the evaluation. The empirical results are strong, across the different design tasks and the authors demonstrate the ability to incorporate additional property constraints (GC content) convincingly. The paper is laid out clearly and well-written.\", \"weaknesses\": \"The predominant weakness in this work is the modest technical novelty in the proposed architecture. 
This reviewer is of the opinion that sound and well-executed applied work (such as this) has a place in venues such as ICLR but the judgment on this rests with the AC.\", \"questions\": [\"While the hyperparameters are described, there is little discussion of how they were obtained; how much sweeping did the authors perform?\", \"Do the authors give any consideration to non-canonical/modified nucleotides?\", \"Could the authors share some reasoning why the model trained on synthetic data performs better than the finetuned model?\", \"The pairwise hamming distance for diversity is designed to account for base pair flips?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript explores a more general RNA inverse folding problem, beyond the canonical base-pairing constraint. The authors propose a Transformer-based learning model, leveraging the adjacency matrix to capture more intricate nucleotide tertiary interactions and map them to sequences. Comprehensive experiments are conducted to validate the effectiveness of this model.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper addresses the limitations of traditional dot-bracket secondary structure inverse folding by exploring adjacent-matrix based methods to represent more complex structural motifs. This approach aligns more closely with the practical requirements of functional RNA design.\\n2.\\tIn the absence of prior research on this topic, this paper constructs extensive training data and diverse test tasks, demonstrating the capacities of the proposed model in multiple aspects.\\n3.\\tThe overall writing structure of this paper is clear and logically organized, making it easy to follow.\", \"weaknesses\": \"1.\\tThe related work in this paper is insufficient. 
It overlooks a class of RNA inverse folding methods based on tertiary structure, such as RDesign [1] and RiboDiffusion [2], which directly use the RNA tertiary structure backbone as model input and implicitly model structural interactions like pseudoknots. These methods are highly relevant to the topic of the paper.\\n\\n2.\\tThe benchmarking in this paper has several flaws:\\n (1)\\tThe folding-back algorithm is overly simplistic and lacks diversity. Tertiary structure prediction methods should be included to assess whether the designed sequences meet expected tertiary interactions. Additionally, the errors caused by the folding-back algorithm are not adequately explained.\\n (2)\\tThe model's performance under novel structures, i.e., samples with significant differences from the training set, is not analyzed.\\n(3)\\tThere are relatively few baseline methods. For adjacency matrices, graph neural network-based methods and ResNet-based methods are commonly used to represent RNA data and could be easily adapted to the current task.\\n(4)\\tAlthough it is reasonable to use synthetic data for proof of concept, using it as the core test set is far from the actual application scenario and may introduce more unexpected errors.\\n(5)\\tThere is a lack of comparison on public benchmarks, such as Eterna100, which is crucial to demonstrate the basic capabilities of the model.\\n\\n3.\\tThe paper lacks case study results to demonstrate the correctness, rationality, and novelty of the designed sequences.\\n\\n4.\\tFor the machine learning community, the method in this paper adopts a well-studied Transformer structure and classical loss function, making it difficult to claim a technical contribution. For the biology community, the paper lacks rigorous and comprehensive evaluation to demonstrate its advantages and reliability. 
Overall, this manuscript does not yet meet the standard for publication.\\n\\n[1] RDesign: Hierarchical Data-efficient Representation Learning for Tertiary Structure-based RNA Design. The Twelfth International Conference on Learning Representations. 2024.\\n\\n[2] RiboDiffusion: tertiary structure-based RNA inverse folding with generative diffusion models. Bioinformatics 40. Supplement_1 (2024): i347-i356.\", \"questions\": \"Please refer to the Weakness section for more details.\\n1. Can some GNN-based baseline methods be added?\\n2. Can benchmarking be done on some public test sets, such as Eterna100 and Openknot?\\n3. Can the model design performance on relatively novel structures be tested on natural RNA of CASP15?\\n4. Can the designed sequence be folded using the tertiary structure prediction model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Initial author response\", \"comment\": \"We thank all reviewers for their useful comments and valuable feedback.\\n\\nTo reduce the overhead for the reviewers, we will prepare individual responses for each review in the next few days.\\n\\nWe are looking forward to fruitful discussions and an interesting rebuttal period.\\n\\nBest regards,\\n\\nThe authors\"}", "{\"summary\": \"The authors propose RNAinformer, a new method for solving the problem of inverse RNA design. This problem considers finding the nucleotide sequence of RNAs that will result in the desired structure once they fold. Authors approach this problem in a more general setting by considering pseudoknots and base multiplets. They train a conditional transformer model to generate RNA sequences conditioned on the desired GC content and possibly additional constraints. 
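As an aside, the GC-content constraint mentioned in this summary is easy to state concretely. The following is a hypothetical illustration (not the authors' code) of how a designed sequence can be checked against a desired GC content; the tolerance value is an assumption for illustration only:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C nucleotides in an RNA sequence."""
    return sum(1 for nt in seq.upper() if nt in "GC") / len(seq)

def meets_gc_target(seq: str, target: float, tol: float = 0.01) -> bool:
    """Check whether a designed sequence satisfies a desired GC content
    within a tolerance (the default tolerance is a made-up example value)."""
    return abs(gc_content(seq) - target) <= tol

# Example: a 10-nt sequence with 4 G/C nucleotides has GC content 0.4.
seq = "GGAUCCAAUU"
```

In a conditional model of this kind, the target value would be supplied as a conditioning input at generation time, and a check like the one above can verify whether generated candidates actually satisfy the constraint.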
They also introduce a new pipeline for generating synthetic data that can be used for generative models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Authors have invested time to design a pipeline for generating synthetic data that can become useful for designing learning-based methods since the experimentally verified sequences are more costly to generate.\\n\\nAuthors consider the problem in the general form where different types of base pairing (e.g., non-canonical base-pairs and pseudoknots) exist, and also allow additional constraints (e.g., GC content) to be specified. The problem is highly important and good solutions can be very impactful.\", \"weaknesses\": \"There are many recent methods for solving the problem of inverse RNA design which have not been used for comparisons in this paper. The comparisons are limited to only a few of the prior works (mostly only 2). Some of these prior methods have similarly used conditional autoregressive formulation similar to this work. The paper in [1] targets an even more general problem by inverse RNA tertiary structure which solves the RNA inverse problem as a subproblem. Without thorough comparisons it is hard to ensure the merits of the newly proposed approach.\\n\\n\\n1. Tan, C., Zhang, Y., Gao, Z., Hu, B., Li, S., Liu, Z., & Li, S. Z. (2024). RDesign: Hierarchical Data-efficient Representation Learning for Tertiary Structure-based RNA Design. In The Twelfth International Conference on Learning Representations.\\n\\n2. Rubio-Largo, \\u00c1., Lozano-Garc\\u00eda, N., Granado-Criado, J. M., & Vega-Rodr\\u00edguez, M. A. (2023). Solving the RNA inverse folding problem through target structure decomposition and Multiobjective Evolutionary Computation. Applied Soft Computing, 110779.\\n\\n3. Merleau, N. S., & Smerlak, M. (2022). aRNAque: an evolutionary algorithm for inverse pseudoknotted RNA folding inspired by L\\u00e9vy flights. 
BMC bioinformatics, 23(1), 335.\\n\\n\\nThere is also a similarity between the protein inverse folding problem and RNA inverse folding and since the former has been studied more, most models can still be applicable to the RNAs (instead of 20 amino acids we will have 4 nucleotides). Therefore, works in RNA inverse folding still use these methods as baselines. For example, the following works are used as baselines for RNA inverse problems as well.\\n\\n4. Ingraham, J., Garg, V., Barzilay, R., & Jaakkola, T. (2019). Generative models for graph-based protein design. Advances in neural information processing systems, 32.\\n\\n5. Gao, Z., Tan, C., & Li, S. Z. (2023) PiFold: Toward effective and efficient protein inverse folding. In The Eleventh International Conference on Learning Representations.\\n\\n\\nFor many of the comparisons, such as the ones on nested structures in which the proposed method achieves significantly worse results than the baseline methods (SAMFEO), the authors justify the presented results by highlighting the diversity of their generated outcomes. However, the first goal is to derive valid sequences. If none of the sequences for a task is valid, what would be the benefit of diversity?\\n\\nThat being said, even for most other prior methods, we can still tune the diversity. For example, you can change the temperature value in the Boltzmann sampling part of SAMFEO to get a less skewed distribution that leads to a higher diversity. However, it is important to have a skewed enough distribution to make sure we have a high recall among a small number of candidates.\\n\\nThe method authors use for measuring the diversity of the valid sequences is computing the mean of pairwise Hamming distances between the sequences. This may lead to an overestimation of the diversity because if two RNA sequences have just an indel (unpaired nucleotide) compared to each other, it considers all the subsequent bases as different. 
I think in this case something similar to the Needleman-Wunsch algorithm would be a better measure of dissimilarity.\\n\\nIt is also not very clear why diversity is an important factor here because unlike image generation and text generation, for RNA design it is more important to get the correct design among the few samples. For example, when looking for a specific RNA to bind to the snoRNA of a new virus to inhibit its activity, why should one care about the diversity of the proposed solutions rather than just having the top few solutions that are more likely to work. It is more important to have the expected RNA among the top few generated solutions to also decrease the costs of downstream in-vivo experiments.\\n\\nSome results such as the one presented in Figure 4 might portray the amount of non-canonical base-pairs (NC) as a measure that has to be maximized. However, although these non-canonical base-pairs are present in folded RNAs, this amount has to be the correct amount. If these values are maximized by the model but does not lead to valid sequences then why should it be considered as a good thing?\\n\\nFor experimentally verified 3D structures, which can be considered the gold standard set for evaluation, authors only report the results of their method and do not compare it to any other baseline. From table 17, the ratio of the valid sequences seems to be very low and it would be important to see if prior methods do not outperform RNAinformer on this dataset.\\n\\nFor the experiments on RNA design with pseudoknots authors only compare their method to antaRNA which is a relatively old method (2015) while many newer methods with superior results have been published since then.\\n\\nThe length of the supported sequences is mentioned to be capped at 200 in the experiments. That is relatively low to what has been done in recently proposed methods (e.g., 900 in SAMFEO and 500 nucleotides in RDESIGN -- see [1] above). 
This is an important shortcoming because it is well-known that with the increased length of RNA the problem becomes more difficult. Authors have mentioned this in their Limitations section, but they mention that it is enough for current benchmarks while there are other benchmarks that contain RNAs with more nucleotides and they are simply not used in the presented experiments.\\n\\nWhile authors propose a new pipeline for designing their synthetic datasets, it would have been better to use the datasets used by prior methods (e.g., Eterna100 or the one curated in [1]) to show the performance of the presented method in comparison to the baseline methods, and then mention the motivation behind using a new pipeline for generating a dataset.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response to Reviewer DDzf continued\", \"comment\": \">an analysis of the sequence recovery (an imperfect metric, I know, but still somewhat informative to exclude cases of the inverse folding model overfitting on quirks of the structure prediction model). If you were to go all out, an evolutionary inspired recovery may be even more informative (c.f. https://openreview.net/forum?id=y5L8W0KRUX&referrer=%5Bthe%20profile%20of%20Chengyue%20Gong%5D(%2Fprofile%3Fid%3D~Chengyue_Gong1) in the protein world\\n\\nWe thank the reviewer for this useful comment and the interesting reference. We have added the sequence recovery metric to the revised version of our manuscript.\\n\\n>In Table 1, it'd be helpful if the authors could highlight what % means (here % solved), as well as the n=... that was used for this estimate and the topk from which the answer was picked. From the description, it is not quite clear to me how those % were obtained.\\n\\nWe thank the reviewer for pointing out this issue. 
We updated the table in the revised version of the manuscript to avoid confusion.\\n\\n>Were all benchmarked methods re-trained on the same synthetic data? (for each of the tasks)\\n\\nFor our experiments, only the LEARNA family of algorithms (LEARNA, Meta-LEARNA, Meta-LEARNA-Adapt, libLEARNA) are learning based approaches. These, however, are automated reinforcement learning approaches that do not directly provide training pipelines, but rather, these methods employ large scale joint architecture and hyperparameter search to directly evolve a trained architecture with corresponding hyperparameters. Running these pipelines requires substantial computational resources and we did not rerun this optimization procedure.\\n\\nWe would like to thank the reviewer again for the valuable feedback. We hope that we addressed all the questions and concerns satisfactorily and would appreciate it if the reviewer would consider increasing our score.\\n\\nWith kind regards,\\n\\nThe Authors\"}" ] }
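On the diversity-measurement point raised in the review above (mean pairwise Hamming distance overestimating dissimilarity when two sequences differ by a single indel), the effect can be illustrated with a small sketch. This is an illustrative example only, not code from the paper or the review; a plain Levenshtein edit distance stands in for a full Needleman-Wunsch alignment with biologically motivated gap penalties.

```python
def hamming(a: str, b: str) -> int:
    """Position-wise mismatches; only defined for equal-length sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) via dynamic
    programming, a simple stand-in for a Needleman-Wunsch alignment score."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

# Two RNA sequences that differ only by a single-base shift (one indel):
s1 = "GACUGACUGA"
s2 = "GGACUGACUG"  # "G" inserted at the front, last base dropped

print(hamming(s1, s2))      # 9 -- almost every position counts as different
print(levenshtein(s1, s2))  # 2 -- one insertion plus one deletion
```

A mean pairwise distance built on the alignment-based measure would report these two designs as near-duplicates, whereas the Hamming-based measure inflates their apparent diversity, which is the overestimation the reviewer describes.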
DugT77rRhW
Unposed Sparse Views Room Layout Reconstruction in the Age of Pretrain Model
[ "Yaxuan Huang", "Xili Dai", "Jianan Wang", "Xianbiao Qi", "Yixing Yuan", "Xiangyu Yue" ]
Room layout estimation from multiple-perspective images is poorly investigated due to the complexities that emerge from multi-view geometry, which requires multi-step solutions such as camera intrinsic and extrinsic estimation, image matching, and triangulation. However, in 3D reconstruction, the advancement of recent 3D foundation models such as DUSt3R has shifted the paradigm from the traditional multi-step structure-from-motion process to an end-to-end single-step approach. To this end, we introduce Plane-DUSt3R, a novel method for multi-view room layout estimation leveraging the 3D foundation model DUSt3R. Plane-DUSt3R incorporates the DUSt3R framework and fine-tunes on a room layout dataset (Structure3D) with a modified objective to estimate structural planes. By generating uniform and parsimonious results, Plane-DUSt3R enables room layout estimation with only a single post-processing step and 2D detection results. Unlike previous methods that rely on single-perspective or panorama images, Plane-DUSt3R extends the setting to handle multiple-perspective images. Moreover, it offers a streamlined, end-to-end solution that simplifies the process and reduces error accumulation. Experimental results demonstrate that Plane-DUSt3R not only outperforms state-of-the-art methods on the synthetic dataset but also proves robust and effective on in-the-wild data with different image styles such as cartoons. Our code is available at: https://github.com/justacar/Plane-DUSt3R
[ "layout reconstruction", "holistic 3D representation", "large 3D model." ]
Accept (Poster)
https://openreview.net/pdf?id=DugT77rRhW
https://openreview.net/forum?id=DugT77rRhW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ytZrnu6Qbn", "yenTCbTAcy", "vMxcySkHY1", "tSE3q7is7f", "rRJMlRgUm5", "oKalkTPdg6", "o0YYJZ1CiC", "nVIE67GphU", "kzzTcuUdK2", "jMDLfSkeNr", "hedCqOih6J", "fW0gCrANSk", "cSHXmsbiB9", "Z7cb1kAXQ5", "U0oRop5wNu", "Skr8AnoQbH", "RXLIqyu2DD", "Qu9UChadKN", "Ozkhhux1ih", "OEo86NQmW4", "MiS8qfwNFS", "MXPoOevepl", "DmLUBZVFsj", "C5FDja3jSU", "BDCpeJBjMK", "9GGM9neF1C", "8AOEarK5DO", "7axQZHceCY", "4JzIllUkui" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732176573705, 1732592617436, 1732970778880, 1730018632946, 1732948214399, 1732178054448, 1733141535582, 1732178172296, 1733149342496, 1733066839061, 1732693021633, 1733145737668, 1732177487685, 1732958564306, 1732947818230, 1732501657548, 1737523772088, 1732501863151, 1732694612131, 1732502291102, 1732178568650, 1731155617522, 1732948239100, 1733149236282, 1734609066395, 1730708510779, 1732702787390, 1730674396048, 1732551620088 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_6JGC" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_UrzG" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_UrzG" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_UrzG" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_6JGC" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Authors" ], [ "ICLR.cc/2025/Conference/Submission6479/Area_Chair_xk66" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_H8Ec" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_6JGC" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_oARR" ], [ "ICLR.cc/2025/Conference/Submission6479/Reviewer_H8Ec" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewers,\\n\\nWe appreciate your valuable feedback and your recognition of our work\\u2019s novelty and performance. We noticed that there are common concerns about only training and evaluating on one synthetic dataset, and a desire to see quantitative results on some real-world data. We have carefully considered your suggestion and conducted additional experiments on the CAD-Estate dataset. Our choice of CAD-Estate is motivated by several reasons: 1. Data Quality and Scale: CAD-Estate provides room layouts from 2246 videos, offering a larger and more diverse evaluation base compared to alternatives like ScanNet layout [1] (293 views). 2. 
Multi-view Completeness: 360-panorama datasets primarily offer rotational variations, while CAD-Estate provides multiple views with both rotational and translational differences for each scene. 3. Practical Considerations. CAD-Estate is already well-established and has a much more similar annotation style to Structured3d. We believe the evaluation results conducted on this dataset are representative.\\n\\nFor evaluation settings, despite CAD-Estate's similar annotation style, some notable differences exist. Our method and Structured3D operate under the assumption of a single floor, single ceiling, and multiple walls configuration. CAD-Estate, however, presents more complex scenarios, including multiple ceiling levels (particularly in attic rooms) and interconnected rooms through open doorways (whereas Structured3D treats doorways as complete walls). To ensure a fair comparison, we carefully selected a subset of CAD-Estate data that aligns with Structured3D's annotation style. Our final evaluation dataset consists of 100 scenes containing 469 images, with each scene containing 2 to 10 images. \\n\\nWe report performance using both 2D metrics (IoU and pixel error) and 3D metrics (precision and recall). While CAD-Estate's label classes include [\\\"<ignore>\\\", \\\"wall\\\", \\\"floor\\\", \\\"ceiling\\\", \\\"slanted\\\"], we only focus on wall, floor, and ceiling classes. We utilize the dataset's provided intrinsic parameters for reprojection during the evaluation. Results are reported for both \\\"Noncuboid + GT pose\\\" and \\\"Plane-DUSt3R (metric)\\\". 
Please refer to our supplementary materials for qualitative results.\\n\\n| Method | re-IoU (%)\\u2191 | re-PE (%)\\u2193 | 3D-Precision (%)\\u2191 | 3D-Recall (%)\\u2191 |\\n|--------|----------|----------|---------|---------|\\n| Noncuboid + GT pose on CAD-Estate | 55.99 | 20.33 | 15.59 | **30.28** |\\n| Ours (metric) on CAD-Estate | **63.14** | **15.15** | **22.58** | 26.55 |\\n\\n\\nOur method achieves superior performance in re-IoU (63.14% vs 55.99%) and re-PE (15.15% vs 20.33%), along with higher 3D precision (22.58% vs 15.59%) at 15 degrees and 0.2m thresholds. The relatively low precision scores can be attributed to our model's bias from training on Structured3D, where most adjacent walls are predominantly orthogonal and ceilings are typically horizontal. In contrast, CAD-Estate contains more diverse architectural features, including non-orthogonal walls and slanted ceilings. While the baseline method shows higher 3D recall, this is likely due to its tendency to generate duplicated planes and its utilization of ground-truth pose information. Additionally, some performance discrepancies may be influenced by annotation inconsistencies in CAD-Estate's 3D triangle meshes.\\n\\n[1] https://github.com/vevenom/ScanNet-Layout\"}", "{\"comment\": \"Thank you for your positive feedback. Your insightful and constructive comments really help us improve the quality of our work.\"}", "{\"comment\": \"I'd like to comment that CAD-Estate dataset is part of the RealEstate10k dataset. CAD-Estate is the name of structural elements and object annotations on the RE10k dataset.\"}", "{\"summary\": \"This work proposes Plane-DUSt3R to reconstruct room layout from sparse views with unknown poses. DUSt3R is retrained to have amodal perception, seeing through occluders to predict floor, wall, and ceiling pointmaps. The unknown camera parameters can then be estimated from the predicted point map. 
The outcome from Plane-DUSt3R is then integrated with other predictions from a single-view layout model (e.g., plane instance masks), which is then formulated as a minimum cut problem to produce the final room layout.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This work introduces the state-of-the-art geometry prediction model, DUSt3R, into the classical layout estimation task, solving an underexplored aspect with sparse views and unknown poses.\", \"The problem formulation in Sec.3.1 is clear and fluent, which unifies previous room layout estimators in a single pipeline and helps readers understand how the proposed method augments the existing pipeline.\", \"A new benchmark with some reasonable baselines is constructed for this task.\"], \"weaknesses\": [\"In Table2, the performance by the proposed method on Co3Dv2 and RealEstate10K is missing. It's reasonable that DUSt3R and MASt3R perform worse as they weren't trained on the synthetic Structured3D dataset. What about the generalized performance of the proposed Plane-DUSt3R on the other real-world datasets?\", \"It's a pity that the proposed Plane-DUSt3R is only trained and mainly evaluated on the synthetic Structured3D dataset. There are many existing resources with multiview and room layout annotation in the 360 panorama domain, e.g., ZInD[1] has multiview layout on unfurnished rooms, MP3DLayout[2] has single-view layout annotation but the source Matterport3D also offers nearby views. Gathering more data by projecting these 360 panoramas to perspective views could make this work much stronger.\", \"[1] Zillow Indoor Dataset: Annotated Floor Plans With 360\\u00ba Panoramas and 3D Room Layouts\", \"[2] Manhattan Room Layout Reconstruction from a Single 360\\u00b0 image: A Comparative Study of State-of-the-art Methods\", \"The qualitative comparison in Fig.6 and Fig.8 is unfair to the baseline. In the fourth column, the baseline \\\"Noncuboid+MASt3R\\\" is presented without texture. 
We can then easily spot one big issue that some of the walls are duplicated and misaligned. However, in the final column, the layout wireframe from the proposed method is hiding by the rgb point cloud, making it difficult to judge if the similar issue happend to the proposed method as well.\", \"Too few qualitative results. On the in-domain Structured3D dataset, only two are provided in Fig.6. The additional three in supp's Fig.9 also has two duplicated scenes from Fig.6. In application of room tour and room demo, the final visual outcome is what we really care about while the improvement on number is sometimes hard to interpret how it affect the final visual. In addition, some more visualization, espeically on the failure cases, may can provide some hint for future work to further improve.\"], \"questions\": \"Seems that Sec3 only covers camera poses. Is the camera intrinsic also estimated as in DUSt3R or assumed to be known in this work?\\n\\nHow the accuracy vary with the number of input views? Is the performance scalable with more input views?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your time and effort. Since the rebuttal period has reaching to the end, could you take a look at our response and reconsider your score for our submission.\\n\\nBest regards\\n\\nAuthors\"}", "{\"comment\": \"Thank you for your comment. We appreciate that you found our work well-written and useful for analyzing the formulation of the multi-view layout reconstruction problem. We respond to your questions about limitations on dataset, architecture choice, and evaluation metrics below.\\n\\n> Q1 Limitation on not enough dataset.\", \"a\": \"We acknowledge the error in our paper regarding the angle threshold. 
The correct threshold we used for calculating the metric is 10 degrees instead of 15, while the translation threshold remains at 0.15m. Table 1 presents our results under various threshold settings.\", \"table1\": \"| Threshold (Rotation & Translation) | Precision (%)\\u2191 | Recall (%)\\u2191 |\\n|-----------|---------------|-------------|\\n| 5\\u00b0 / 0.1m | 34.11 | 31.66 |\\n| 10\\u00b0 / 0.15m | 52.63 | 48.37 |\\n| 15\\u00b0 / 0.2m | 64.64 | 59.53 |\\n| 30\\u00b0 / 0.4m | 82.75 | 76.13 |\"}
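The precision/recall thresholds in the table above pair an angular bound on the relative rotation with a Euclidean bound on translation. A minimal sketch of how such a check is typically computed (the standard geodesic rotation distance; this is an illustration under those standard definitions, not the authors' evaluation code):

```python
import math

def rotation_angle_deg(R1, R2):
    """Geodesic angle between two 3x3 rotation matrices (as nested lists).

    Uses trace(R1^T R2) = sum_ij R1[i][j] * R2[i][j] and
    angle = arccos((trace - 1) / 2).
    """
    trace = sum(R1[i][j] * R2[i][j] for i in range(3) for j in range(3))
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp for numerical safety
    return math.degrees(math.acos(c))

def translation_error(t1, t2):
    """Euclidean distance between two camera centers (in meters)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))

def pose_correct(R_pred, t_pred, R_gt, t_gt, rot_deg=10.0, trans_m=0.15):
    """True if a predicted pose falls within both thresholds (e.g. 10 deg / 0.15 m)."""
    return (rotation_angle_deg(R_pred, R_gt) <= rot_deg
            and translation_error(t_pred, t_gt) <= trans_m)

R_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
Rz30 = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

print(round(rotation_angle_deg(R_id, Rz30), 3))  # 30.0
print(pose_correct(Rz30, (0.1, 0.0, 0.0), R_id, (0.0, 0.0, 0.0)))  # False at 10 deg / 0.15 m
```

Precision/recall then follow by counting, over matched predicted/ground-truth pose pairs, the fraction passing `pose_correct` at each threshold row of the table.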
Since the rebuttal period is reaching its end, could you take a look at our response and reconsider your score for our submission.\\n\\nBest regards\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewers, thank you for your valuable comments.\\n\\n> Q1. Regarding Dataset Selection:\\n\\nWe appreciate reviewer 6JGC's clarification about CAD-Estate's relationship with RealEstate10K. While RealEstate10K is indeed a valuable real-world dataset, it lacks ground-truth structural information. This is why we utilize CAD-Estate, which provides the necessary structural annotations on RealEstate10K scenes.\\n\\nRegarding Co3Dv2, it's a dataset for single-object reconstruction and camera pose estimation, so it's not suitable for our task of room-level layout reconstruction. The dataset primarily focuses on single objects captured from different viewpoints rather than indoor scenes with structural elements. We initially included these results to demonstrate the pose estimation capabilities of DUSt3R. As reviewer H8Ec also raised the same concern about Table 2 in our paper, we could remove the comparison on the Co3Dv2 dataset in Table 2 in the camera-ready version.\\n\\n> Q4. Performance Across Different View Numbers:\\n\\nOur previous presentation might have been misleading as it included rooms with varying numbers of available views. And the most complicated cases usually fall in the 5-view setting. We have analyzed the distribution of views across rooms in Structured3D:\\n\\nTable 1\\n| Input View Number | Room Number |\\n|----------------|----------------|\\n| 1 | 102 |\\n| 2 | 171 |\\n| 3 | 274 |\\n| 4 | 385 |\\n| 5 | 720 |\\n\\nFor a fair comparison, we now report results only on rooms that have all 5 views available in Table 2, allowing us to evaluate the same rooms across different view settings. This approach eliminates potential bias from room complexity variations. The results show a general improvement trend as the number of views increases. 
\\n\\nTable 2\\n| Input View Number | IoU (%)\\u2191 | PE (%)\\u2193 | EE\\u2193 |RMSE\\u2193|Precision (%)\\u2191|Recall (%)\\u2191|\\n|----------------|----------------|----------------|----------------|----------------|----------------|----------------|\\n| 2 | 75.02 | 8.72 | 8.70 | 0.4148 | 53.19 | 42.60 |\\n| 3 | 75.29 | 8.53 | 8.56 | 0.3596 | 54.43 | 47.97 |\\n| 4 | 75.55 | 8.39 | **8.55** | 0.3463 | 54.91 | 49.44 |\\n| 5 | **75.57** | **8.35** | 8.59 | **0.3422** | **55.02** | **49.59** |\"}", "{\"comment\": \"Dear reviewer,\\n\\n\\nWe thank you for the valuable time and effort in reviewing our paper.\\n\\nIn the rebuttal, we have added the evaluation on the real-world dataset CAD-Estate in Appendix E and more qualitative results in Appendix D. Moreover, we have performed additional comparative experiments with different input views, and also explained the implementation detail about Intrinsic settings. \\n\\nWe hope you might find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarification in responses and the revised paper. Thank you for your time!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"comment\": \"**Q1**. I agree. Co3Dv2 results on Table 2 is very confusing with all the baselines numbers but without the one by the proposed method. I also suggest removing them. Regarding the RealEstate10K in Table2, the proposed method should be able to report the mmA@30? It's totally fine to me if the number do not outperform the others given the additional ability to recover room layout. It's good for future work to know if there is still any aspects can still be further improved. If for some reasons the proposed method can not be evaluated on RealEstate10K, I also suggest remove the RealEstate10K column to prevent confusion.\\n\\n**Q4**. Thanks for the new evaluations. 
The results make sense to me now.\\n\\nI will increase my rating and lower my confidence.\"}", "{\"comment\": \"We really appreciate your positive evaluation score of 10, which acknowledges the contributions of our work.\\n\\nWe have conducted additional evaluations on the CAD-Estate dataset, with details, please refer to the supplementary materials in our revised version, and the overall response to all reviewers.\\n\\nWhile we would prefer to use vector graphics throughout, OpenReview's file size limitations constrained us. But we have replaced the key architectural figures with higher resolution.\"}", "{\"comment\": \"Sorry for late reply. I appreciate authors effort in preparing the additional experiments and paper revision. Some concerns remain.\\n\\n**Q1**. Thanks for the evaluation on additional CAD-estate dataset. My question is: why the proposed method can not be evaluated on Co3Dv2 and RealEstate10K datasets. Please remind me if I miss some discussions or explanations. I think the comparisons on Co3Dv2 and RealEstate10K is more important as they are the real-world datasets with flourished baselines, which helps us better assess the performance comparing to the state of the art.\\n\\n**Q4**. More discussion for the experimental results with different number of input views is needed. We observe consistent improvement of EE and RMSE with more input views but the other four metrics are not. It seems that the IoU, PE, and 3D recall drops a lot from 4 input views to 5 input views. The author should provide more discussion about these results.\"}", "{\"comment\": \"Thank you again for your strong acknowledgment of our method and for maintaining the \\\"strong accept\\\" rating. Thank you for your comments, we have now updated our overall response and included 3D evaluation metrics in the final result. 
Your constructive feedback has helped us a lot.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your detailed review, which will help improve our paper a lot! We have carefully addressed each of your comments and concerns with additional experiments and detailed explanations. As the discussion period is nearing its end, we would greatly appreciate it if you could review our responses and consider raising your score if you find that we have adequately addressed your concerns.\\n\\nThank you for your time.\\n\\nSincerely,\\n\\nAuthors\"}
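Several responses in this thread report 2D layout metrics, IoU and pixel error (PE), on reprojected plane segmentations. As a point of reference, these metrics are conventionally computed per pixel over label masks; the sketch below uses tiny nested-list masks and illustrates the standard definitions only, not the authors' evaluation script.

```python
def pixel_error(gt, pred):
    """Fraction of pixels whose predicted plane label differs from ground truth."""
    total = wrong = 0
    for row_gt, row_pred in zip(gt, pred):
        for a, b in zip(row_gt, row_pred):
            total += 1
            wrong += (a != b)
    return wrong / total

def mean_iou(gt, pred):
    """Mean intersection-over-union across the plane labels present in GT or prediction."""
    labels = {v for row in gt for v in row} | {v for row in pred for v in row}
    ious = []
    for lab in labels:
        inter = union = 0
        for row_gt, row_pred in zip(gt, pred):
            for a, b in zip(row_gt, row_pred):
                in_gt, in_pred = (a == lab), (b == lab)
                inter += (in_gt and in_pred)
                union += (in_gt or in_pred)
        ious.append(inter / union if union else 0.0)
    return sum(ious) / len(ious)

# Toy 2x2 "layout" masks: labels 1 and 2 stand for two structural planes.
gt   = [[1, 1], [2, 2]]
pred = [[1, 2], [2, 2]]
print(pixel_error(gt, pred))          # 0.25
print(round(mean_iou(gt, pred), 4))   # 0.5833
```

In the tables above, higher IoU and lower PE are better; the "re-" prefix in the CAD-Estate results denotes that predictions are first reprojected into the image using the dataset's intrinsics.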
We believe our work not only advances the baseline but also opens up new research directions in room layout estimation that will facilitate future research in this field.\\n\\nWe hope you find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarifications in the responses and the revised paper. Thank you for your time!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your detailed review of our paper. We have carefully addressed your feedback regarding the contribution and technical aspects of our work. As the discussion deadline is approaching, we would greatly appreciate it if you could review our response and reconsider your score if you find that we have adequately addressed your concerns.\\n\\nThank you for your time.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for your comment. We appreciate that you found our work well-written and useful for analyzing the formulation of the multi-view layout reconstruction problem. We respond to your concern about the limited dataset, qualitative results, and technical details below.\\n\\n> Q1 Limitation on not enough dataset. \\n\\nWe have conducted additional evaluations on the CAD-Estate dataset; for details, please refer to the supplementary materials in our revised version and the overall response to all reviewers.\\n\\n> Q2 Qualitative results\\n\\nWe have added more qualitative results in our revised version and inserted a column to visualize the result without texture in Fig. 9 of the supplementary material. We also present several failure cases in Fig. 10.\\n\\n> Q3 Intrinsic. \\n\\nThe intrinsics are assumed unknown and can be estimated by DUSt3R. However, during the evaluation, we used the known intrinsic data for reprojecting 3D information into 2D data.\\n\\n> Q4 Performance on different input views.\\n\\nThe impact of varying input views on performance is presented in Table 1. 
We align the depth based on the ground-truth relative pose and the predicted pose for multi-view cases but use a predefined scale for single-view cases. So we can observe reduced performance for the single-view setting. \\nRegarding scalability, our answer is yes. We can handle more input views. This is also demonstrated in the additional CAD-Estate dataset evaluation, where scenes contain 2-10 views. Since NonCuboid runs in parallel on each input image, the only bottleneck when using more images (e.g., more than 20) is that DUSt3R may go out of memory in the global alignment step. But this problem has been solved in its follow-up work MASt3R. \\n\\nTable 1\\n| Input View Number | IoU (%)\\u2191 | PE (%)\\u2193 | EE\\u2193 | RMSE\\u2193 | Precision (%)\\u2191 | Recall (%)\\u2191 |\\n|------------------|-----------|----------|------|--------|----------------|-------------|\\n| 1 | 68.53 | 12.10 | 27.66 | 1.6430 | 15.78 | 14.62 |\\n| 2 | 78.81 | 7.18 | 12.88 | 0.3584 | **55.16** | **52.76** |\\n| 3 | **78.92** | 7.09 | 10.33 | 0.5450 | 51.80 | 47.75 |\\n| 4 | 78.78 | **6.98** | 9.56 | 0.4207 | 55.20 | 52.09 |\\n| 5 | 75.57 | 8.35 | **8.59** | **0.3422** | 55.02 | 49.59 |\"}", "{\"summary\": \"The paper proposes an extension of DUSt3R to room layout reconstruction by retraining and using additional processing steps. Results demonstrate superior performance.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The method is sound with great results outperforming other methods.\", \"This is the first method for unposed sparse view room layout reconstruction, especially when the views are not overlapping.\", \"The whole idea is interesting, especially to use structural plane depth map and metric scale for DUSt3R.\", \"The evaluation is great, especially the design of baselines.\", \"The paper is well-written and was a pleasure to read.\"], \"weaknesses\": [\"Evaluation on more datasets would be more interesting, e.g. 
the CAD-Estate dataset with structural element annotations.\"], \"questions\": [\"Figures seem to have low resolution since they are most likely represented as jpg/png images. I recommend using vector graphics instead for better visualisations.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your time and effort. Since the rebuttal period is reaching its end, could you please take a look at our response and reconsider your score for our submission?\\n\\nBest regards\\n\\nAuthors\"}", "{\"comment\": \"Thank you for your positive and constructive feedback! We will remove the Co3Dv2 baseline results from Table 2 as suggested to avoid confusion. We will also run experiments on RealEstate10K to evaluate pose estimation (mmA@30) in our final version.\"}", "{\"metareview\": \"This paper introduces a pipeline for multi-view room layout estimation, leveraging the 3D foundation model DUSt3R. Experiments on both synthetic and real-world datasets demonstrate its improved performance and generalization.\", \"main_strengths_of_this_paper_are_as_below\": [\"The paper presents a new pipeline for multi-view room layout estimation, simplifying traditionally complex multi-view geometry into a single-step solution using the 3D foundation model DUSt3R.\", \"Experiments on synthetic and real-world datasets validate its improved performance and generalization.\", \"The paper is well-organized and offers comprehensive experimental results, making it accessible and providing valuable insights.\", \"The need for more experiments was raised by the reviewers. Also, the parts that differ from previous methods need more detailed descriptions.
Please revise the paper according to the discussions before submitting the final version.\"], \"additional_comments_on_reviewer_discussion\": [\"Reviewer 6JGC suggested evaluating the method on the CAD-Estate dataset, which includes 3D plane annotations and meshes. The authors responded by conducting additional evaluations on the CAD-Estate dataset and including 3D evaluation metrics in the revised version. They also replaced key architectural figures with higher-resolution versions.\", \"Reviewer H8Ec raised concerns about the method being evaluated only on synthetic data, suggesting the use of real-world datasets like ScanNet for comparison. The authors responded by adding real-world data evaluations and revising Table-2 with 3D precision/recall metrics at multiple thresholds. Additionally, Reviewer H8Ec questioned the necessity of Plane-DUSt3R, proposing that regular DUSt3R with off-the-shelf semantic segmentation and NonCuboids + f3 post-processing might be sufficient. The authors justified their method by highlighting the specific improvements and innovations of Plane-DUSt3R in the revised version.\", \"Reviewer oARR considered the main contribution of the paper to be the post-processing step for extracting layout planes, considering it technically incremental. The authors responded by clarifying that their contribution is the first systematic study of multi-view room layout estimation, introducing new baselines, and demonstrating robust performance across synthetic and real-world datasets.\", \"Reviewer UrzG raised concerns about dataset selection, qualitative results, and performance across varying input views. The authors added evaluations on the CAD-Estate dataset, clarified dataset limitations, provided more qualitative results (including failure cases), and discussed performance variations with input views.
They agreed to revise Table 2 for clarity.\"]}", "{\"summary\": \"In this work, the authors propose a sparse-view layout reconstruction pipeline that combines existing single-view layout methods such as [1] with the sparse-view 3D reconstruction method DUSt3R [2] to generate more accurate 3D layouts from sparse images.\\n\\nThe authors retrained [1] to more accurately detect 2D primitives. They also retrain DUSt3R to only predict layout-plane pointmaps, essentially ignoring foreground furniture. They call this new model Plane-DUSt3R. The authors train their Plane-DUSt3R method on the synthetic multi-view Structure3D dataset and compare the accuracy of multiple components of their pipeline on this synthetic dataset. \\n\\nFrom the 2D primitives and 3D plane pointmaps, the authors design a post-processing pipeline that converts them into 3D plane equations and their relationships, resulting in full 3D layout representations. \\n\\n\\n[1] Yang, Cheng, et al. \\\"Learning to reconstruct 3d non-cuboid room layout from a single rgb image.\\\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2022.\\n\\n[2] Wang, Shuzhe, et al. \\\"Dust3r: Geometric 3d vision made easy.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed sparse multi-view layout method is the first of its kind. The authors make use of existing methods and retrain them to make them work for their task.\\n\\nThe overall pipeline is shown to work better than naively combining existing methods such as NonCuboid + MASt3R / NonCuboid + GT Pose. \\n\\nThe overall paper is well written and easy to follow. There are many components in the pipeline, but the authors explain them in an understandable manner.\", \"weaknesses\": \"The big limitation of the method is that it is only shown to work on a synthetic dataset.
While real multi-view layout datasets are scarce, there exist many multi-view 3D datasets (such as ScanNet, ARKitScenes, etc.). These datasets have ground-truth depth maps and camera poses that can be used for pseudo ground-truth generation, and the proposed method can be compared against other layout reconstruction methods on such datasets. The authors do provide qualitative results on some real-world data, but it is not enough to assess the accuracy of the method on real data. If the authors can show the effectiveness of the method on real data, this method can be promising.\\n\\nSecond, I am not convinced as to why Plane-DUSt3R is needed. If the authors take regular DUSt3R and use off-the-shelf semantic segmentation to segment walls, floors, and ceilings and use only those points to generate the layout (by combining this with the NonCuboids + f3 post-processing proposed in the paper), that might also be enough. Retraining the method only on a synthetic dataset such as Structure3D might reduce its accuracy on real-world datasets.\", \"questions\": \"In Table-2, the other datasets' results are not useful since this method does not have numbers for those. I would suggest the authors keep results only for Structure3D.\\n\\n3D precision and recall should be reported at different thresholds. I think the 15deg threshold is pretty high. Maybe the authors should show results at 5deg, 15deg, and 30deg angular thresholds, and similar translation thresholds. This will give a more comprehensive view of the method's performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
I believe there are also ground truth meshes for all elements, e.g. walls, floor, ceiling. Therefore, it should be possible to also have some 3D evaluation metrics.\"}", "{\"summary\": \"This paper introduces Plane-DUSt3R, a method that leverages the DUSt3R framework to estimate structural planes from multi-view images by finetuning on a room layout dataset (Structure3D). This approach allows for room layout estimation with just a single post-processing step and 2D detection results, handling multiple-perspective images. Experimental results show that Plane-DUSt3R outperforms existing methods on synthetic datasets and is robust across various real-world image styles, including cartoons.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The major strength of this paper is that it is the first to address 3D room layout estimation from multiple images by using large reconstruction models (i.e., DUSt3R).\\n2. To support using DUSt3R, this paper proposed Plane-DUSt3R to estimate structural planes from multi-view images while requiring only a single post-processing step.\\n3. Plane-DUSt3R achieves the best performance compared with the baselines.\", \"weaknesses\": \"1. I think the major credit for the performance boost should go to the DUSt3R architecture and its pretrained weights, as it is a strong prior model that gives stable and faithful 3D point and camera pose outputs for Plane-DUSt3R.\\n\\n2. The key novelty of this paper is only the post-processing step of extracting layout planes, which is, technically speaking, quite incremental.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the rebuttal!\", \"comment\": \"I thank the authors for providing a rebuttal that answers my concerns. The paper is a good addition to the DUSt3R family of methods. I will raise my score to 6.\"}" ] }
DtFCIfvAFc
Gaussian-Det: Learning Closed-Surface Gaussians for 3D Object Detection
[ "Hongru Yan", "Yu Zheng", "Yueqi Duan" ]
Skins wrapping around our bodies, leathers covering over the sofa, sheet metal coating the car – it suggests that objects are enclosed by a series of continuous surfaces, which provides us with informative geometry prior for objectness deduction. In this paper, we propose Gaussian-Det which leverages Gaussian Splatting as surface representation for multi-view based 3D object detection. Unlike existing monocular or NeRF-based methods which depict the objects via discrete positional data, Gaussian-Det models the objects in a continuous manner by formulating the input Gaussians as feature descriptors on a mass of partial surfaces. Furthermore, to address the numerous outliers inherently introduced by Gaussian splatting, we accordingly devise a Closure Inferring Module (CIM) for the comprehensive surface-based objectness deduction. CIM firstly estimates the probabilistic feature residuals for partial surfaces given the underdetermined nature of Gaussian Splatting, which are then coalesced into a holistic representation on the overall surface closure of the object proposal. In this way, the surface information Gaussian-Det exploits serves as the prior on the quality and reliability of objectness and the information basis of proposal refinement. Experiments on both synthetic and real-world datasets demonstrate that Gaussian-Det outperforms various existing approaches, in terms of both average precision and recall.
[ "3D Gaussian Splatting", "3D Object Detection", "Surface Closure" ]
Accept (Poster)
https://openreview.net/pdf?id=DtFCIfvAFc
https://openreview.net/forum?id=DtFCIfvAFc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "oXj3Lmi5Ze", "l3PrF3j3BL", "kKNCYylkZ1", "fhfv128wdN", "eylfgIWNck", "bbgWVKRoLc", "b18dkPNsot", "aSqGoggVmh", "RLpaEFO1rK", "Ouu0sQKshJ", "N1Gb2lia9k", "M8Shn8UOBm", "Lo6JxoPpgj", "HHBuNnYJb8", "6FzNmuzHe2", "4wp2jvztIt", "10qQDnw0Ul" ], "note_type": [ "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732587163326, 1730744035014, 1737523655439, 1732258575363, 1731203275584, 1732258674734, 1732258513438, 1732778175434, 1732744226668, 1734747897543, 1733118645065, 1730659369141, 1732590238807, 1732258817448, 1732258379861, 1730408588086, 1732258303100 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4682/Reviewer_kAE9" ], [ "ICLR.cc/2025/Conference/Submission4682/Reviewer_XhiG" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Reviewer_s6A7" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Reviewer_nYGE" ], [ "ICLR.cc/2025/Conference/Submission4682/Area_Chair_zw22" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Reviewer_nYGE" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ], [ "ICLR.cc/2025/Conference/Submission4682/Reviewer_kAE9" ], [ "ICLR.cc/2025/Conference/Submission4682/Authors" ] ], "structured_content_str": [ "{\"title\": \"RE: Response\", \"comment\": \"Thanks for providing answers to my concerns. Most of them are addressed. 
However, if the manuscript eventually gets accepted, please include the clarifications and explanations to make the paper clearer.\"}", "{\"summary\": \"In this paper, a novel method for multi-view 3D object detection is presented, exploiting Gaussian splatting to obtain a continuous scene representation. Then, to handle outliers in this representation, a closure inferring module is included that learns a probabilistic feature residual for partial surfaces and coalesces them into a holistic representation for closure measurement. Experimental evaluation is reported on both synthetic and real datasets, including an ablation study and some comparisons with respect to competing approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper addresses a complex and important problem.\\n\\nThe authors include most of the details needed to understand their proposal, with the technical section being clear enough and well written. \\n\\nGaussian-Det is simple and effective. To this end, the overall framework is divided into the construction of surfaces based on Gaussian representations, an object proposal initialization, as well as partial surface feature inference and holistic surface closure coalescence. \\n\\nBoth synthetic and real experiments are provided in the paper. The results are competitive with respect to several competing approaches.\\n\\nThe ablation study helps the reader.\", \"weaknesses\": [\"Camera poses are inferred by exploiting a type of ground truth (point clouds). In my opinion, this is clearly a strong limitation of the paper.\", \"Lack of challenging cases. In my opinion, some qualitative instances could be included in the paper, showing the effectiveness of the method in complex cases, especially in comparison with other approaches.\"], \"questions\": \"To be honest, I do not have many issues with the current submission.\\n\\nIn Figure 6, maybe the authors could use a different color per object.
\\n\\nHow could a noisy estimate of the cameras affect overall performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response\", \"comment\": \"We truly appreciate your valuable comments. In the following, we provide responses to the concerns.\\n\\n**Q1: Qualitative comparisons with ablated components.**\", \"a\": \"Ideally, a set of closed surfaces can fully enclose a volume. However, when there exists obvious discontinuity in this enclosure due to incompleteness or the introduction of outlier surfaces, the set of surfaces does not fully enclose a volume and is regarded as open. Such enclosure is quantitatively measured by the flux value in Theorem 1. The more closed a set of surfaces is, the closer the $|\\\\Phi|$ value computed on them is to zero.\\n\\nWe agree that Figure 3 in our original manuscript is crucial for motivating our method and any confusion should be avoided. In the revised manuscript, we have moved it ahead as Figure 2 and clarified that whether surfaces are open or closed is measured by the $|\\\\Phi|$ value. In the framework figure (Figure 3) of the revised manuscript, we have also added the necessary information that $|\\\\Phi|$ measures how well the surfaces enclose a volume, which is used to discriminate the degree of surface closure. \\n\\n[1] Kerbl, Bernhard, et al. \\\"3D Gaussian Splatting for Real-Time Radiance Field Rendering.\\\" ACM Transactions on Graphics 42.4 (2023): 1-14.\"}", "{\"summary\": \"This paper proposes a 3D object detection method based on Gaussian splatting. With Gaussian splatting as input, the proposed deep model extracts the 3D object bounding boxes.
Experiments show the proposed method gives higher recall and precision than the competing methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The proposed method takes advantage of the Gaussian splatting representation for 3D object detection. Gaussian splatting is a more compact representation than point clouds.\", \"weaknesses\": [\"The paper combines several methods: (1) the official implementation of Gaussian splatting and (2) PointNet++. There is no new network structure developed. The novelty is low.\", \"The overall network is not explicitly defined in the paper. Please add this, otherwise it is impossible to check the details.\", \"The writing of the paper is quite unclear, especially Section 3.3. Theorem 1 is directly from an existing theorem. There is no need to include it here. It is also unclear where this theorem is used in the paper. The equations are sloppy. Notations are a mess. The features use two notations, F and f. F^{cand} is not defined. f^{part} is also never defined anywhere in the paper. The equations from (6), (7), (11) need motivation and clarification.\", \"The experimental setting is not clearly defined. The authors used a dataset originally for 3D object detection in NeRF. It is not clear how the point cloud based methods, which need a point cloud input, are used in the experiment comparison.\", \"The qualitative examples of the proposed method show the quality of the 3D boxes is sometimes a lot worse than that of other competing methods. The boxes are often bigger and quite off. The competing methods also show many overlapping bounding boxes.
No non-max suppression seems to be applied, which may cause the lower numbers from the competing methods.\"], \"questions\": \"See the questions in the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (Part I)\", \"comment\": \"We truly appreciate your valuable comments. In the following, we provide responses to the concerns.\\n\\n**Q1: The goal of using the backbone in the framework figure.**\", \"a\": [\"Thank you for your insightful suggestion. Theorem 1 mathematically defines our measurement of surface closure. We strongly agree that it is crucial to conceptually introduce such a measurement, and to properly convey how the measurement is leveraged by the CIM module in Gaussian-Det. In the revised manuscript, we have re-organized the content in Section 3.3. Specifically, we have made the following refinements to better fit Theorem 1 into the elaboration of the CIM module conceptually, mathematically and practically:\", \"**Before Theorem 1:** We have moved the illustrative example of the surface closure prior ahead, thus bringing up earlier the concept that the surface closure is utilized as a measurement of the quality of the predicted objectness.\", \"**Before Theorem 1:** We have highlighted that CIM leverages a key geometric property that the partial surfaces corresponding to an object can approximate a closed surface (L259-L262 in the revised manuscript), thereby naturally educing the details of Theorem 1 that mathematically describes this property.\", \"**After Theorem 1:** In the following narration, we have noted how to utilize such a property as a measurement of the objectness quality in practice.\", \"The proof of Theorem 1 has been moved to the Appendix for brevity.\"]}", "{\"title\": \"Response\", \"comment\": \"We truly appreciate your valuable comments. 
In the following, we provide responses to the concerns.\\n\\n**Q1: Camera poses exploit a type of ground truth (point cloud) to be inferred. In my opinion, this is clearly a strong limitation of the paper. How could a noisy estimate of the cameras affect overall performance?**\", \"a\": \"Thank you for your valuable suggestion. In the original submission (Figure 9 in Appendix), we have included several qualitative instances under challenging scenarios including occlusions and jointedness. In Figure 13 of the revised manuscript, we have included more qualitative results on occlusions, where the compared methods predict inaccurate sizes or locations. Moreover, to more comprehensively demonstrate the effectiveness of the proposed method, we have also used a unique color for each bounding box in Figure 4 of the revised manuscript.\\n\\nFurthermore, we have experimented on the challenging open-vocabulary 3D instance segmentation, where the incorporation of the prior of surface closure substantially enhances the segmentation performance of the Gaussian-Grouping baseline [2] (see Table 6 in the revised manuscript). The rendering results of qualitative examples containing multiple object instances are shown in Figure 7 of the revised manuscript. It can be seen that incorporating the prior of surface closure leads to fewer noise-like erroneous predictions. \\n\\n[1] Hu, Benran, et al. \\\"Nerf-rpn: A general framework for object detection in nerfs.\\\" CVPR. 2023.\\n\\n[2] Ye, Mingqiao, et al. \\\"Gaussian grouping: Segment and edit anything in 3d scenes.\\\" ECCV, 2024.\"}", "{\"comment\": \"We sincerely appreciate your valuable comments and questions, which have greatly contributed to improving the manuscript. We are also grateful for the increase in your score. We have initiated experiments on 2DGS following your suggestion and plan to incorporate any meaningful findings in future revisions.\"}", "{\"comment\": \"Thank you for your reply. 
The reply addressed most of my concerns and I will raise my score. Besides, I suggest the authors add an experiment using 2DGS as the underlying representation, as it shows better surface quality.\"}", "{\"metareview\": \"The proposed method, Gaussian-Det, uses Gaussian Splatting (3DGS) to represent the surfaces for multiview 3D object detection. Gaussian-Det formulates the input Gaussians as feature descriptors on partial surfaces. Outliers derived from 3DGS are reduced by the Closure Inferring Module, which estimates the probabilistic feature residuals for partial surfaces and combines them to produce 3D object proposals.\\n\\nThe paper received mixed reviews, with scores of 3, 5, 6, and 6. After the rebuttal, one reviewer increased the rating, resulting in final scores of 3, 6, 6, and 6. It should be noted that the reviewer who gave the score of 3 did not provide any further responses during the rebuttal and reviewer discussion phases, even though the authors adequately addressed the reviewer's concerns. Additionally, some comments from this reviewer contained apparent errors, which suggests a potentially low-quality review. Consequently, the area chair decided to disregard this review when making the final decision. Given that the other three reviewers provided positive feedback and consider the paper to be above the acceptance threshold, the area chair recommends accepting the paper.\", \"additional_comments_on_reviewer_discussion\": \"In addition to the improvements included in the revision, the authors are encouraged to present the experimental results of using 2DGS as the underlying representation for its better surface modeling quality.\"}", "{\"title\": \"General Response\", \"comment\": [\"We genuinely thank all the reviewers for their constructive feedback and suggestions, which have helped us improve the quality and clarity of our work. 
Based on the reviewers' comments, we have revised our manuscript as follows:\", \"Added experiments on 3D instance segmentation in Table 6, Figure 7 of Section 4 and Table 11 of Appendix E.2 based on the comments by reviewer **s6A7**, **XhiG**, **kAE9**.\", \"Improved the clarity of the motivating illustration in Figure 2 based on the comments by reviewer **nYGE**.\", \"Improved the clarity of the framework figure along with the caption in Figure 3 based on the comments by reviewer **s6A7**, **nYGE**, **kAE9**.\", \"Improved the clarity and coherence of the Closure Inferring Module (CIM) in Section 3.3 based on the comments by reviewer **s6A7**, **kAE9**.\", \"Improved and enriched the qualitative illustrations in Figure 4, Figure 12 and Figure 13 based on the comments by reviewer **XhiG**, **nYGE**.\", \"Added ablation experiments on the quality of the Gaussian representation in Table 5 of Section 4 and Table 9 of Appendix C.3 based on the comments by reviewer **XhiG**, **nYGE**.\", \"Added verification of the quality of the 3D-FRONT dataset in Appendix A.1 based on the comments by reviewer **kAE9**.\", \"Added technical details of the Gaussian-Det network and the point cloud based methods we compared with in Appendix B.1 based on the comments by reviewer **s6A7**.\", \"We respond to each reviewer below to address the concerns. Please take a look and let us know if further clarification or discussion is needed. Also, we will ensure that all discussions, tables and illustrations in the current revision are included in future versions.\"]}", "{\"summary\": \"This paper introduces a novel 3D object detection method, Gaussian-Det, which uses 3DGS to represent objects as continuous surfaces rather than the discrete points of NeRF-based methods. The method uses Gaussians as surface descriptors and introduces the Closure Inferring Module (CIM), which solves the outlier problem inherent in the 3DGS method by considering surface closure as a prior for objectness.
The CIM operates in two phases: firstly, the probabilistic feature residuals are estimated for some of the surfaces, and then they are merged to form a holistic representation that measures the overall surface closure. The method is evaluated on synthetic (3D-FRONT) and real-world (ScanNet) datasets, and the results show a significant improvement over previous SOTA methods. The main innovations of the method are its surface-based representation and the use of surface closure as a geometric prior to improve the quality of detection.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The proposed method shows SOTA performance compared with previous NeRF-based methods.\", \"Using 3DGS is faster than NeRF.\", \"The CIM module effectively handles outliers in Gaussian Splatting.\"], \"weaknesses\": [\"There is a lack of qualitative comparisons between different components, particularly regarding the use of Holistic Surface Coalescence and Residual Estimation. These comparisons are important for readers to better understand the effectiveness of each component.\", \"It would be interesting to analyze how the underlying Gaussian representation affects the results. For example, what is the impact of using the original 3DGS or a 2DGS representation?\", \"The overall writing is unclear and difficult to follow.\", \"L.208 should be \\\"From G^{cand}\\\"\", \"There are multiple M's in Fig. 2; are they the same?\", \"Figure 3 is confusing. It is unclear what \\u201copen surfaces\\u201d and \\u201cclosed surfaces\\u201d refer to. In Figure 3 (a) and (b), the primary visual difference appears to be the incompleteness of the Gaussians in (b). 
Please clarify the relationship between open and closed surfaces.\"], \"questions\": \"Improving readability and providing a visual comparison of the ablations would better present this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive feedback. We have incorporated all the suggested clarifications and explanations into the current version of the manuscript to enhance its clarity and comprehensiveness. We will also make sure that all these clarifications and explanations are included in the final version.\"}", "{\"title\": \"Response (Part II)\", \"comment\": \"**Q4: Verification on the quality of the 3D-FRONT dataset.**\", \"a\": \"During the discussion phase, we have further experimented on the task of open-vocabulary instance segmentation in 3D scenes, which detects the target objects in real-world 3D datasets at a much finer granularity. On the Gaussian-Grouping baseline [2], we have implemented the estimation of surface closure as a prior on the reliability of the object instance, thus supporting or suppressing the segmented results.\\n\\nFrom the quantitative results in Tables D&E (Tables 6&11 in the revised manuscript), we can see that leveraging the surface closure provides an informative prior on objectness deduction and largely enhances the performance of instance segmentation in 3D scenes. Moreover, it substantially speeds up the training period and reduces the GPU memory footprint after ruling out potential outliers and cutting down the total number of 3D Gaussians. The rendered results of qualitative examples containing multiple object instances are shown in Figure 7 of the revised manuscript. It can be seen that the incorporation of the prior of surface closure contributes to fewer noisy predictions. 
\\n\\nWe are sorry that during the discussion phase, we were provided with limited time and computational resources to perform 3D Gaussian Splatting on large-scale datasets for autonomous driving. This will be our future work, where we will investigate how Gaussian-Det performs on outdoor datasets. \\n\\n### Table D: Open-vocabulary 3D instance segmentation on the real-world LERF-Mask/figurines dataset\\n\\n\\n| Methods | mIoU | mBIoU |\\n|-------------------------------------|------|-------|\\n| DEVA (ICCV'2023) | 46.2 | 45.1 |\\n| LERF (ICCV'2023) | 33.5 | 30.6 |\\n| SA3D (NeurIPS'2023) | 24.9 | 23.8 |\\n| LangSplat (CVPR'2024) | 52.8 | 50.5 |\\n| Gaussian-Grouping (ECCV'2024) | 69.7 | 67.9 |\\n| **Gaussian-Grouping+Ours** | **76.5** | **73.3** |\\n\\n### Table E: Comparisons on model efficiency in the task of open-vocabulary 3D instance segmentation. \\n\\n| Methods | Training Time | GPU Memory |\\n|-------------------------------------|---------------|------------|\\n| Gaussian Grouping (ECCV'2024) | 1.33h | 35.8GB |\\n| **Gaussian-Grouping+Ours** | **0.65h** | **12.8GB** |\\n\\n[1] Hu, Benran, et al. \\\"Nerf-rpn: A general framework for object detection in nerfs.\\\" CVPR. 2023.\\n\\n[2] Ye, Mingqiao, et al. \\\"Gaussian grouping: Segment and edit anything in 3d scenes.\\\" ECCV, 2024.\"}", "{\"title\": \"Response (Part II)\", \"comment\": \"**Q5: Motivation and clarification of Eqn. (6)(7)(10) in the revised manuscript.**\", \"a\": \"**Quality of the 3D boxes**: Thank you for pointing this out. In the revised manuscript, we have removed the first row with lower quality in Figure 5, and used a unique color for each bounding box for intuitive presentation.\\n\\n**NMS**: We clarify that in the qualitative results, we have faithfully used the official implementations of FCAF3D and NeRF-RPN, both using NMS as post-processing. We also clarify that Non-Maximum Suppression (NMS) only removes boxes that overlap beyond a pre-set threshold. 
Therefore, the overlapping observed in the competing methods is due to this threshold, which may have caused your concern.\\n\\n[1] Antoine, Gu\\u00e9don, et al. \\\"Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering.\\\" CVPR, 2024. \\n\\n[2] Kingma, Diederik P. \\\"Auto-encoding variational bayes.\\\" ICLR, 2013.\"}", "{\"summary\": \"The paper presents Gaussian-Det, a method to detect 3D objects via Gaussian Splatting. The proposed method leverages Gaussian Splatting as surface representation for multi-view based 3D object detection. Gaussian-Det proposes Closure Inferring Module (CIM) to deal with the outliers present using Gaussian-Splatting technologies. The paper presents experiments on both synthetic and real 3D-object detection datasets where Gaussian-Det mostly outperforms previous methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Interesting solution using Gaussian-Splatting technologies for 3D object detection.\", \"Proposing Closure Inferring Module (CIM) to deal with the noisy nature of Gaussian-Splatting representations.\"], \"weaknesses\": [\"Clarity of the paper is not good and needs improvement (my most concerning weakness). First, overall the high-level idea is not clearly explained. Although, Fig. 2 tries to do this, it falls short. Figure 2 lacks a clear illustration of the goal using the Backbone, and most importantly, fails to explain clearly how the CIM module (in my opinion the main contribution). Second, given the lack of clarity, it is hard to judge the novelty and quality of the solution. For example, I don't find the description of the Theorem if the high-level idea is not explained clearly. In my opinion it is just using space that could've been used to add Figures to explain CIM in a better way.\", \"Lack of details in the experiments. 
First, in line 320-321, the narrative says that the scenes are manually downsized into 159 usable rooms. However, it is not clear if the data was verified to ensure high quality. This is important since this data is used to measure quality.\", \"Insufficient experiments. In my opinion, showing results only on 2 datasets is not that convincing. Given that 3D object detection has been around for some time, it would've been more informative to know the performance of this proposed method on more real-scene benchmarks/datasets (e.g., Waymo Open datasets, Objectron, etc.). I think that these datasets could've revealed the practical benefit of the proposed approach.\"], \"questions\": \"See Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response (Part I)\", \"comment\": \"We truly appreciate your valuable comments. In the following, we provide responses to the concerns.\\n\\n**Q1: Novelty of combining existing Gaussian Splatting and PointNet++.**\", \"a\": \"In the original manuscript, $F^{cand}$ is initially defined as the feature of candidate Gaussians (see L207 in the original manuscript, L217 in the revised manuscript). $\\\\mathbf{f}^{part}_k$ is defined as the partial-aware features of each proposal (see L211-L213 in the original manuscript, L220-L223 in the revised manuscript).\"}
DtATVd5NLc
Finetuning Weather Foundation Models to Develop Climate Model Parameterizations
[ "Aman Gupta", "Sujit Roy", "Johannes Schmude", "Vishal Gaur", "Wei Ji Leong", "Manil Maskey", "Rahul Ramachandran", "Aditi Sheshadri" ]
Climate prediction models parameterize a range of atmospheric-oceanic processes like clouds, turbulence, and gravity waves. These physical parameterizations are a leading source of uncertainty and strongly influence future projections of global temperature rise. We present a fresh approach to developing parameterizations for coarse-climate models by leveraging pre-trained AI foundation models (FMs) for weather and climate. A pre-trained encoder and decoder from a 2.3 billion parameter FM (NASA and IBM's Prithvi WxC) --- which contains a latent probabilistic representation of atmospheric evolution --- is fine-tuned to create a data-driven predictor of atmospheric gravity waves (GWs). Current climate models are not fine enough to resolve GWs. We create an ML-based parameterization that learns GW fluxes from high-resolution ``GW resolving" climate models to represent them in "GW missing" coarse-climate models. The fluxes predicted by our fine-tuned model are comprehensively evaluated using a set of three tests. Comparison with a baseline (Attention U-Net) reveals the superior predictive performance of the fine-tuned model throughout the atmosphere. The model outperforms the baseline even in regions excluded from the FM pre-training. This is quantified using the Hellinger distance which is 0.11 for the baseline and 0.06, i.e., roughly half, for the fine-tuned model. FMs are largely unexplored in climate science. Our findings emphasize their versatility and reusability to accomplish a range of weather- and climate-related downstream applications, especially in a low-data regime. These FMs can be further leveraged to create new parameterizations for other earth-system processes.
[ "Atmospheric Dynamics", "Parameterizations", "Climate Modelling", "Foundation Model", "ERA5", "Finetuning", "Machine Learning" ]
Reject
https://openreview.net/pdf?id=DtATVd5NLc
https://openreview.net/forum?id=DtATVd5NLc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zjIsx9lPfK", "xez19ZG0N1", "wwjBReq20a", "v1DAcKJz1L", "rbQd7TvRyh", "k47N5RrlXj", "ia1HxRpmW5", "frIFXsNDyl", "XNY6TuWVGW", "TzqWafPYr5", "Okod5EVvLn", "MwBnYLVCgW", "M50tnU4SEo", "GiwBA8rKzD", "F9C6FiFWdR", "DVrK3WXupo", "DPX7HZ5EKL", "BkngHUHdm1", "AQApNu1jmJ" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733196395394, 1730619869051, 1732329640786, 1730676803982, 1730650575163, 1732161024819, 1740183091331, 1732078625430, 1732486947819, 1732084465976, 1732161929022, 1732162952598, 1737523475111, 1732160753352, 1732089423686, 1734446086343, 1732336094682, 1730373497283, 1732336639213 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1934/Reviewer_XXRg" ], [ "ICLR.cc/2025/Conference/Submission1934/Reviewer_XXRg" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Reviewer_5F5c" ], [ "ICLR.cc/2025/Conference/Submission1934/Reviewer_ripD" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Reviewer_5F5c" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Reviewer_ripD" ], [ "ICLR.cc/2025/Conference/Submission1934/Area_Chair_AHqc" ], [ "ICLR.cc/2025/Conference/Submission1934/Authors" ], [ "ICLR.cc/2025/Conference/Submission1934/Reviewer_DYbv" ], [ 
"ICLR.cc/2025/Conference/Submission1934/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your detailed responses. Your clarification regarding the \\\"weaknesses,\\\" especially the explanation of the \\\"experiments,\\\" has addressed my concerns about the completeness of the work. However, I remain uncertain about the paper\\u2019s contribution to interdisciplinary research, particularly with regard to its innovative aspects. While this is certainly a strong application of machine learning to climate modeling and forecasting, from a research paper\\u2019s perspective, it is essential to emphasize the novelty or irreplaceability of the method.\\n\\nThe reason ClimSim received recognition at NeurIPS 2023 is due to the importance of quality datasets in advancing machine learning, much like the role ImageNet has played in computer vision. While excellent algorithmic applications can have a similar impact, this paper does not sufficiently highlight the unique contributions it makes to the field, community, or society at large. Although I understand the authors' frustrations, I believe that, as an interdisciplinary piece, the paper should more clearly address the varying expectations across fields.\\n\\nIn particular, for a submission to a leading machine learning conference, the paper should be structured around the interests of that community. For instance, it would be beneficial to clearly explain the reasons behind the \\\"data scarcity\\\" issue (thank you for addressing this in your response), and to discuss the dataset split\\u2014whether using only 2% of the data for testing may affect generalizability.\\n\\nBased on these points, I will not change my score, unfortunately. 
I would recommend considering submission to the \\\"Applied Data Science Track\\\" at a top-tier machine learning conference or to a high-impact journal in the climate science field.\"}", "{\"summary\": \"This paper presents a data-driven parameterization scheme for gravity waves, aiming to achieve gravity wave parameterization in coarse climate models by fine-tuning weather foundation models. The work utilizes the newly proposed Prithvi WxC as a pre-trained model and leverages higher-resolution ERA5 data to achieve this goal. Their main contribution is both reducing the costs of training data-driven parameterization models from scratch and improving their generalization capabilities through fine-tuning foundational models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Significance: This work introduces a fine-tuning algorithm for weather foundation models into parameterization, achieving a more lightweight and better generalization performance data-driven parameterization scheme.\\n2. The manuscript is well-written and comprehensible, adhering to ICLR formatting guidelines, with no discernible errors. \\n3. The performance of the proposed algorithm is demonstrated through three different evaluation methods.\", \"weaknesses\": \"1.**Innovativeness:** This work showcases commendable application performance on specific tasks; however, its contribution to the broader machine learning community may be viewed as somewhat limited. Additionally, the fine-tuning scheme is discussed in detail in Section 3.2 of the cited paper that presents the pre-trained foundational model used here, where it is described as a typical downstream task. Given the context of a high-level conference, the innovativeness of this work might be considered subtle.\\n\\n2. 
**Experimental Setup:** It seems that neither the methods relying on equations nor the mixed probability methods mentioned in the 'Related Work' section are included in the baselines; rather, the comparison is made with Attention U-Net. Furthermore, the input variables for Attention U-Net are one-quarter fewer than those utilized in this work, resulting in a reduction of input information. Could you clarify why it is not possible to fully input the data of [488, 64, 128]? \\n\\n3. **Experiments:** In the daily average GW momentum flux experiments (Figures 5 and 10), notable differences are observed in the distribution of small-scale features between the deep learning model and ERA5. It would be beneficial to include the MERRA-2 distribution to clarify that the data-driven parameterization scheme successfully learns GW information from the ERA5 dataset, rather than simply inheriting features from a pre-trained model.\", \"questions\": \"1. In the \\\"Contributions\\\" section, the authors suggest that this method could potentially be extended to cloud parameterization and precipitation forecasting (line 105). However, these topics are not explored within this work. Given their distinct nature compared to gravity wave parameterization, and considering that the ERA5 dataset may lack reliable data for these variables, could the authors provide additional experiments or insights to substantiate this claim?\\n\\n2. In lines 201 and 202, the authors state, \\\"This corresponds to roughly 35k training samples, which pretty much classifies as 'data-scarce'.\\\" This definition of sample size seems to differ from what is typically observed in other areas of the machine learning community. Could the authors provide context on the typical dataset sizes used for similar climate modeling tasks, and explain why 35k samples is considered data-scarce in this domain compared to other machine learning applications?\\n\\n3. 
Could the authors clarify how the fine-tuning and test datasets are partitioned for the experiments? Specifically, please provide the exact split ratios for training, validation, and testing, and indicate whether the data was split randomly or using another method, such as temporal splits for time series data. This would enhance readers' understanding of the experimental setup.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Certainly relevant for ICLR\", \"comment\": \"We thank the reviewer for their detailed comments, and for acknowledging that our work has the potential to be useful to both the ML & the climate science community. However, we do disagree on the degree to which this study could be useful to the broader ML community. Below we explain why.\\n\\n1. **Solving climate science use cases can benefit other ML problems too:** An increasing number of AI weather forecasting models and foundation models (FMs) have been developed over the past 3-4 years. The release of FourCastNet [1] inspired both the weather forecasting and machine learning community alike to develop more stable weather forecasting models. Likewise, the release of ClimaX [2] proof-of-concept inspired the development of more versatile FMs like Aurora [3], AtmoRep [4], and Prithvi [5], all of which involved heavy collaboration between the weather forecasting and the AI community. **Yet, the application of these FMs to climate prediction tasks has been severely limited**. Either we wait for these models to be stable over multi-year timescales or we find novel ways to bypass these limitations and use AI to improve climate models (not weather models) now. 
Literally none of the downstream applications for FMs focus on reducing climate uncertainty \u2013 because it typically involves analysis and rollout on longer timescales over which these AI models are not stable, and significant rapid advancements will be needed before they could do so. Our analysis establishes that this should not discourage the use of these models for climate prediction and climate modeling. This is best accomplished by introducing new climate science use cases more accessible to ML experts and by developing new probabilistic metrics potentially useful in broader ML. **Therefore, the focus of this study is not just to highlight the prowess of FMs for downstream applications (which has been amply done before), but also to pioneer a new research direction to use AI to advance climate modeling and climate prediction**, at the same time aligning with societal needs. **Presenting this at ICLR and sharing this with the ML community is the best way to inspire more research in this direction.** \\n\\n2. **Submitted to \\u201cApplication to physical sciences\\u201d area:** we totally appreciate that the idea of using a finetuned model to perform weather-related downstream tasks has been put forth by past studies. However, this is the first study to leverage weather-focused FMs to create a viable downstream task for climate model development, and the state-of-the-art Attention UNet baseline (state-of-the-art for gravity wave analysis at least) has been used to create a new but similar finetuning model architecture. By doing so, we are confident we are redefining the limits of what problems (domain and timescale) weather FMs can be used to address, potentially motivating more climate-focused applications in the future, also creating room for development of better model architectures. In the same spirit, we have submitted our paper to the \\u201capplication to physical sciences\\u201d sub-area of the conference.\\n\\n3.
**New probabilistic metrics:** The tail Hellinger metric introduced in the study can be used more broadly across different problems, especially in studies which focus on simulating distributions and their tails, like in extreme event analysis and modeling of intermittent systems.\\n\\n4. **Relevant audience:** submitting our paper to ICLR therefore ensures that we get the optimal audience whose focus is both on identifying novel avenues to apply ML to advance long-term climate analysis through development of new models and use new metrics to quantify performance of machine learning models to predict intermittent nonlinear dynamics.\", \"references\": \"[1] Pathak, Jaideep, et al. \\\"Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators.\\\" arXiv preprint arXiv:2202.11214 (2022).\\n\\n[2] Nguyen, T., Brandstetter, J., Kapoor, A., Gupta, J. K., & Grover, A. (2023). ClimaX: A foundation model for weather and climate. arXiv preprint arXiv:2301.10343.\\n\\n[3] Bodnar, Cristian, et al. \\\"Aurora: A foundation model of the atmosphere.\\\" arXiv preprint arXiv:2405.13063 (2024).\\n\\n[4] Lessig, Christian, et al. \\\"AtmoRep: A stochastic model of atmosphere dynamics using large scale representation learning.\\\" arXiv preprint arXiv:2308.13280 (2023).\\n\\n[5] Schmude, Johannes, et al. \\\"Prithvi WxC: Foundation Model for Weather and Climate.\\\" arXiv preprint arXiv:2409.13598 (2024).\"}", "{\"summary\": \"This paper explores the use of AI foundation models (FMs) to improve climate model parameterizations, specifically focusing on gravity wave (GW) effects. 
The authors leverage a pre-trained weather foundation model (Prithvi WxC) by fine-tuning its encoder and decoder components to predict gravity wave momentum fluxes that are typically unresolved in coarse-resolution climate models.\\n\\n\\nThe work demonstrates how a foundation model pre-trained on MERRA-2 reanalysis data can be fine-tuned using ERA5 data to create parameterizations that capture gravity wave physics. The authors develop a model that predicts GW momentum fluxes given background atmospheric conditions, comparing their fine-tuned approach against a baseline Attention U-Net model trained from scratch. They evaluate the models using three tests: predicting global flux distributions, region-specific flux spectra across different atmospheric heights, and temporal evolution of fluxes at known gravity wave hotspots.\\n\\n\\nThe results show that the fine-tuned foundation model approach outperforms the baseline, particularly in predicting stratospheric gravity wave behavior - even across pressure levels where the original foundation model was not pre-trained. The authors employ the tail-Hellinger distance to specifically evaluate how well the models capture extreme events in the flux distributions.\\n\\n\\nThe paper positions this work as a proof-of-concept for using foundation models to develop improved climate model parameterizations more broadly, suggesting that similar approaches could be applied to other unresolved processes like clouds and precipitation. The authors argue that this approach offers advantages in terms of training efficiency, generalization capability, and physical consistency, while acknowledging current limitations and areas for future work.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper puts forth a fine-tuning framework and a rigorous evaluation methodology in applying foundation models to climate science parameterizations. 
The work's primary strengths lie in its comprehensive experimental design and clear presentation of results. The authors provide a thorough evaluation framework through three increasingly stringent tests - from global distributions to temporal evolution at specific hotspots - which could serve as a valuable template for assessing other climate model parameterizations.\\n\\n\\nFrom a technical perspective, the paper employs the tail-Hellinger distance metric for evaluating extreme events in flux distributions, showing careful consideration of evaluation methodology appropriate for climate science applications. The empirical results demonstrate that their fine-tuned approach outperforms a strong baseline (Attention U-Net), particularly in an interesting case where the model generalizes well to stratospheric gravity wave behavior even in regions where the original foundation model wasn't pre-trained.\\n\\n\\nThe paper is very clear and accessible through a well-structured presentation. The authors have done a good job at bridging machine learning and climate science concepts, making the work comprehensible to both communities. The figures are particularly well-designed, with Figure 1 effectively illustrating the gravity wave prediction task and Figures 5-8 systematically presenting the evaluation results across different atmospheric conditions.\\n\\n\\nWhile the work doesn't advance machine learning methodology significantly, it provides a well-executed case study demonstrating how foundation models can be leveraged for climate model parameterizations. The successful demonstration with limited fine-tuning data (four years of ERA5) suggests a practical pathway for developing similar parameterizations, though these contributions are more relevant to climate science than machine learning. 
The clear presentation and thorough validation make the work's climate science implications accessible to a broad audience, even if the core innovations lie primarily in the application domain rather than methodological advancement.\\n\\n\\nTo summarize, I acknowledge the paper's strong execution and clarity while being more explicit about its primary contributions being to climate science rather than machine learning methodology.\", \"weaknesses\": \"**Technical Innovation:**\\nThe paper's primary limitation lies in its modest contribution to machine learning methodology, which is a crucial consideration for ICLR. While the work presents a compelling application of foundation models to climate science, it essentially applies established fine-tuning techniques to a new domain without introducing significant methodological innovations. The approach largely follows the pre-training/fine-tuning paradigm that has been well-documented in recent literature, including applications in weather and climate modeling [1,2].\\n\\n**Connection to prior works:**\\nSimilar applications of foundation models in Earth system science have already been demonstrated by works such as ClimaX [1] and Aurora [2], which showed that weather-trained models can be effectively fine-tuned for various downstream tasks, including data-scarce scenarios. The current paper's findings, while valuable for climate science, align with these expected outcomes and don't present surprising methodological insights for the machine learning community.\\n\\n**Limited contributions to ML:**\\nFrom a technical perspective, the paper primarily describes a straightforward application of fine-tuning the Prithvi WxC model to predict gravity wave momentum fluxes. I do acknowledge the engineering effort required to carry out this study, but from a machine learning standpoint I see limited contributions. 
While the authors have conducted thorough experiments, these contributions are more relevant to climate science evaluation than advancing machine learning methodology. The neural network architecture modifications described in Section 2.4 are relatively standard adaptations rather than novel technical contributions.\\n\\n**Venue fit:**\\nThe paper's strengths - particularly its thorough evaluation of gravity wave predictions and implications for improving climate model parameterizations - would be better suited for climate science venues where domain experts could properly evaluate the scientific implications. Journals such as Journal of Advances in Modeling Earth Systems (JAMES) or Geophysical Research Letters (GRL) would provide a more appropriate audience and review process for assessing the work's primary contributions to climate modeling.\\n\\n**Recommendation for improvement:**\\nWhile the paper effectively demonstrates the potential of machine learning in climate science, its core innovations lie in the application domain rather than in advancing machine learning methodology. The work would benefit from either substantial enhancement of its machine learning contributions for ICLR or redirection to a venue better aligned with its primary contributions to climate science.\\n\\n**Impact considerations:**\\nThis assessment isn't meant to diminish the paper's value but rather to highlight that its strengths may be better appreciated and more impactful in a different academic venue. The thorough experimental validation and careful consideration of climate science implications would likely generate more meaningful discussion and follow-up work in the climate modeling community.\\n\\n[1] Nguyen, T., Brandstetter, J., Kapoor, A., Gupta, J. K., & Grover, A. (2023). ClimaX: A foundation model for weather and climate. arXiv preprint arXiv:2301.10343.\\n\\n[2] Bodnar, C., Bruinsma, W. P., Lucic, A., Stanley, M., Brandstetter, J., Garvan, P., ... & Perdikaris, P. (2024). 
Aurora: A foundation model of the atmosphere. arXiv preprint arXiv:2405.13063.\", \"questions\": \"1. Could the authors elaborate on why they chose to freeze both the encoder and decoder during fine-tuning? Would allowing some layers to be trainable, particularly in the decoder, potentially improve performance? At 2.3 billion parameters, the model size is not too prohibitive for full-parameter fine-tuning.\\n\\n2. The daily flux predictions show difficulty with small flux values (Figure 5b). Could the authors discuss potential approaches to address this limitation?\\n\\n3. The tail-Hellinger distance is an interesting metric - could the authors provide more intuition about how to interpret different values, particularly negative ones?\\n\\n4. Have the authors considered evaluating the model's performance during extreme weather events or seasonal transitions where gravity wave behavior might be particularly challenging to predict?\\n\\n5. Could the authors provide more details about the data preparation, particularly how the coarse-graining from 25km to 280km resolution was implemented?\\n\\n6. What was the rationale behind choosing 4 convolutional blocks before and after the frozen encoder-decoder? How sensitive is the model performance to this architectural choice?\\n\\n7. The authors mention plans to couple their scheme to a coarse-climate model. Could they elaborate on the technical challenges they anticipate in this integration? As climate models require long-time integration of the underlying PDEs do you foresee any stability issues when an ML parametrization is used?\\n\\n8. The authors chose Prithvi WxC as their foundation model, but recent work has shown Aurora achieving state-of-the-art performance in weather prediction, especially at high resolutions (see [2] above). Could the authors discuss why they selected Prithvi over Aurora, and whether they expect their findings would generalize or potentially improve when using Aurora as the base model? 
This discussion would be particularly relevant given Aurora's demonstrated superior accuracy in weather forecasting and its potential advantages for learning atmospheric dynamics. It would also help readers understand whether the choice of Prithvi was primarily due to practical considerations (like availability or computational constraints) or if there were specific architectural features that made it more suitable for this particular application.\\n\\n9. No code was provided as part of the submission. Sharing it would help to further assess the technical correctness and reproducibility of the results. Are the authors planning to open source their framework?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns to report.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces the fine-tuning of weather/climate foundation models to the task of atmospheric gravity waves.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The presentation of the results is interesting and thought-provoking. Very cool to see the difference in the modeling between the baseline and the fine-tuned model.\"], \"weaknesses\": [\"The novelty / content of the paper is somewhat limited. For example, ClimaX -- an ICML paper -- introduced a full end-to-end pretraining and finetuning pipeline over multiple datasets and multiple weather and climate finetuning tasks. A possible extension for this paper would be to use different foundation model backbones - any of the large weather models are interesting here.\", \"The Prithvi WxC model is very low resolution compared to other models, shows hardly any ablations, and for none of the tasks they consider do they beat SOTA baselines. The current SOTA in weather and climate modeling has moved way beyond the 0.625x0.5 resolution. This of course is not the fault of the authors.
Yet, for example, the Aurora foundation model (left out in the paper) is trained on many atmospheric datasets at much finer resolution. It would have been nice to put a comparison between these two models.\", \"The downstream task is super low resolution - which a priori is ok, but not too impressive.\", \"At least one more baseline would be needed to really gauge the results.\", \"I would advise the authors to consider using other foundation models too. And please stop referring to the Prithvi WxC model as a SOTA model.\"], \"questions\": \"-\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comment on code availability\", \"comment\": \"We should note that the code is already available on GitHub and model weights are available on Hugging Face, but were not shared to ensure anonymity. A link to both will definitely be added in the final manuscript.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We are withdrawing this paper as it is now submitted elsewhere. Thanks for considering the submission.\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Seems like the reviewer might be mixing two very different things here. We clarify.\", \"comment\": [\"We thank the reviewer for putting in valuable time to review our manuscript. It seems like they are mixing two very different things here but that is ok -- we try to answer them below:\", \"**The goal of this study** is to demonstrate *ONE WAY* in which foundation models (FMs) can be relevant to the domain of AI for climate science modeling (not weather forecasting). ClimaX and Aurora (and even Prithvi) otherwise only address downstream tasks relevant over weekly to monthly timescales (which is weather, not climate!). The study *DOES NOT CLAIM* that Prithvi is the only FM fit to do so. That is simply not the point of this study.
And we think the reviewer also understands this, which is why they suggest \u201cpossible extensions\u201d.\", \"**ClimaX has many loose ends: ClimaX \u2013 an ICML workshop paper** \u2013 was trained on a very coarse dataset of 5 degrees. Even for rollout forecasting, the authors used convolution models instead of time series models (UNet and ResNet). For some surprising reason, ClimaX's authors did not compare their results with (or acknowledge) FourCastNet, which was the SOTA at that time. However, in this paper we are not creating a comparison between models anyway, rather creating a case for a downstream application for climate model (and not weather model) improvement. Given that this is the first study (to our knowledge) to apply FMs to address a physical problem which affects climate (and not weather) models over yearly and multi-year timescales, we strongly disagree that this paper does not have any ML novelty. Prithvi is higher resolution compared to ClimaX and is trained on the MERRA-2 dataset. We are not presenting Prithvi WxC here, and are merely using it, so we can\u2019t compare its performance for all the tasks. So, we think this concern raised by the reviewer might belong better with their paper, not ours.\", \"**Aurora code was unavailable before August:** Aurora's stable code was released on August 22. We had already finished our analysis and started preparing our paper by then. So, it does not make much sense to then include a half-cooked analysis in our paper just for the sake of it when it is not needed. Our conclusion stands on its own. This is also consistent with ICLR policy regarding use of fresh work for comparison. But we appreciate that the reviewer might not be aware of this. Given this timing, it's not practical for us to run a completely new model within 2 weeks of the submission due date and write down the results.\", \"WP #3\", \"**This is not correct** - it is *NOT* a \"super low resolution\" task as you think.
The manuscript clearly mentions that the fluxes are \u201cconservatively coarsegrained\u201d and not merely interpolated. This means that the coarsegrained data contains all the information of the 25 km grid, then projected onto a 300 km grid. Conservatively coarsegraining fluxes is the scientifically accurate way to represent them, as the fluxes are only defined in terms of wave averages. The physically accurate way to design a flux prediction experiment is to coarsegrain them to a 100-300 km grid (as that is the scale of the longest gravity waves). This consideration is based on past atmospheric wave dynamics literature (Polichtchouk et al. (2022) for instance; https://doi.org/10.1002/qj.4202). Now, since you mention ClimaX, we should mention that their downstream tasks are actually quite low-resolution because they simply coarsen the input-output pairs to a coarser grid by discarding the finer-scale details. Thus, the task presented here provides a robust test to assess \u201csmall-scale\u201d predictions. We appreciate that this subtle detail might not be apparent to a reviewer who may be approaching this problem from a pure data-driven perspective.\", \"WP #4\", \"**This comment is a bit ambiguous too**, and we strongly disagree with the reviewer. We have used the most recent SOTA baseline to compare our results for the gravity wave downstream task. We searched extensively to find the most appropriate baselines for the gravity wave flux task but found only one study (Gupta et al. (2024); ICML 2024; https://arxiv.org/pdf/2406.14775). That study establishes the Attention UNet - a SOTA model for dense prediction - as a baseline. 
We were also inspired by the fact that if ClimaX can use a basic UNet to compare their forecasting results without any temporal component whatsoever, we can definitely use an improved version of their choice to serve as a baseline here.\", \"Lastly, we are sorry that the reviewer was put off by our reference to Prithvi as a SOTA model, even though there is no reason to be. We decided to use Prithvi for our application because it was developed by a team of both ML and climate/weather domain experts. We are not sure why the reviewer's comment sounds so hostile.\"]}", "{\"comment\": \"Thank you for your detailed responses. While I appreciate the thorough clarification of technical points and acknowledge the potential impact of this work for climate science, I maintain my position regarding the paper's fit for ICLR for several reasons:\\n\\n1. Although ICLR has a track for physical science applications, submissions are still expected to advance machine learning methodology or demonstrate novel ML techniques. You mention ClimSim's recognition at NeurIPS 2023, but it's important to note that this was in the datasets track, where it was recognized for providing a novel and comprehensive dataset to foster progress in the field. It was not accepted as an original research article advancing ML methodology, which is what your current submission aims to be.\\n\\n2. While the integration of foundation models with climate science is important work, the primary contributions appear to be: (a) Application of existing fine-tuning methods to a new domain; (b) Domain-specific evaluation metrics (the tail-Hellinger distance, which is not an entirely novel metric, but its use here is novel); (c) Empirical results on gravity wave modeling.\\nThese contributions, while valuable, would be better suited for venues focused on climate modeling, where the domain impact can be properly evaluated by experts in climate science.\\n\\n3. 
The clarifications provided in your rebuttal are very helpful but are not reflected in the current manuscript or appendix. A revised manuscript incorporating these points would be necessary to reconsider the evaluation.\\n\\nGiven these considerations, I maintain my original score while acknowledging the potential impact of this work in its intended application domain.\"}", "{\"title\": \"We clarify why the comments do not classify as weaknesses\", \"comment\": \"Thank you so much for your thoughtful comments. We clarify why the concerns do not really classify as weaknesses.\\n\\n# Weaknesses:\\n\\n1. **Innovativeness:** The use of AI to empower climate modeling (not weather forecasting) has been severely limited. Our work opens avenues for the broader ML community to develop AI models to advance climate modeling and climate science (which has a different set of challenges than weather forecasting). We also introduce the tail-Hellinger distance, a novel metric focusing on the accuracy of the predicted tails\u2014crucial for capturing rare but impactful events in any domain where ML is applied to learn distributions. Our work bridges ML advancements and high-impact applications in climate science (not weather), promoting interdisciplinary research and deployment of FMs in other scientific fields, and provides a pathway for ML experts to foray into climate science. It is worthwhile mentioning ClimSim, which won the best paper award at NeurIPS 2023. ClimSim did not introduce any new model architecture, but provided a dataset/pathway to allow ML experts to work with ideas in climate modeling. We have appropriately submitted our work to the \u201cApplications to physical sciences\u201d area of the conference.\\n\\n\\n2. **Experimental Setup: We disagree here**. The two methods have been used in different contexts. Equation discovery has been used for momentum closures in the ocean (where analytic forms are not known) and the probabilistic 
models focus on combining low-fidelity and multi-fidelity datasets for precipitation. **Both setups also require different datasets than ours**. Moreover, in our case, the analytic forms are already known but not resolved. We searched the literature and found only one study which has looked at (resolved) gravity wave fluxes (Gupta et al. (2024); https://arxiv.org/pdf/2406.14775). Subsequently, here we used their Attention UNet baseline. Using different inputs for the two models **does not create any discrepancies** because (as also mentioned in the manuscript) the potential temperature (\ud835\udec9) is simply a product of temperature (T) and pressure (p): \ud835\udec9 = T*(p)$^{constant}$. So, both models have the same physical info, in different dimensions, and these models are robust enough to be unaffected by these differences.\\n\\n\\n3. **Experiments:** Thanks for the comment, but **this is definitely not the case (and does not qualify as a weakness)**. MERRA-2 has (a) a factor-of-two coarser grid (0.5 deg x 0.625 deg as opposed to ERA5\u2019s 0.25 deg) and (b) finite-volume numerics. As a result, MERRA-2 does not provide a competent GW field (Li et al. 2023, https://doi.org/10.1002/qj.4605). Also, including (weak) GW fluxes from MERRA-2 won't be informative because the small-value prediction affects the UNet baseline and the finetuning model alike. Otherwise, the UNet baseline, which is pretraining-agnostic, would have shown better skill in predicting small values. If anything, we see the opposite, i.e., pretraining on one dataset and finetuning on the other leads to improvements wrt the baseline.\\n\\n\\n--- \\n# Questions:\\n1. A similar strategy can be applied to develop other params, e.g., clouds. We do not suggest using ERA5 to develop these since it under-resolves convection. 
One can use a mix of high-resolution climate model output and satellite irradiances, and provide task-specific input data like specific humidity, saturation pressure, precipitation fluxes, latent heat fluxes, etc., from other high-res datasets to develop these ML params. The baseline models would also vary. For GWs, we only found one existing baseline \u2014 the Attention UNet (Gupta et al.) \u2014 to predict the small-scale fluxes, and so we used that to inform our finetuning. Similarly, in the future, SOTA benchmarks for cloud params & precip. (and other processes) could be used to compare the performance of finetuning models. Alternatively, in case of a lack of a baseline, encoder-decoder pairs from other FMs can promote effective intercomparison of param. architectures.\\n\\n2. Data scarcity. **Past ML param. studies:** Wang et al. (2022)(https://doi.org/10.1029/2022MS002984) use ~4.5 mil samples as a training+validation set. Espinosa et al. (2022)(https://doi.org/10.1029/2022GL098174) use ~11 mil samples. Similarly, Zanna and Bolton use 10 yrs of high-res training data. **Temporal coverage:** we (purposely) used only 4 yrs of data for training. This could lead to: (1) underrepresentation of tropical convection and hence gravity waves: processes like the El Ni\u00f1o Southern Oscillation have a typical period of 2-7 yrs but are not fully represented. Similarly, the quasi-biennial oscillation (QBO) in the tropical stratosphere has a period of 28 months. Since we sample our data over 2010, 2012, 2014, 2015, the training data does not cover both phases of the QBO. \\n\\n3. Of 48 months, 47 months training + 1 month (May 2015) validation. We also trained the baseline on 3 yrs and tested it on 1 yr and found similar results & losses. Training data was randomly shuffled.\"}", "{\"title\": \"Response to Significance\", \"comment\": \"Thank you for your questions. There clearly seems to be some confusion and we clarify it below. 
We will also add these clarifications to the final revised manuscript.\\n\\n1. We completely disagree here because our choice to coarsegrain the fluxes is based on robust scientific principles, and **utmost care was ensured while creating the model training data**. We arrived at the decision only after consulting multiple domain experts on gravity waves. We should clarify that the momentum fluxes used to train the models are conservatively coarsegrained (and not simply interpolated) using Python's xESMF library, so they preserve all the information from the 25 km fine grid by averaging along the longest-resolved wavelengths. Even if we were using a 1 km climate model output, the correct way to define the fluxes would be to coarsegrain them onto a coarser grid appropriately chosen according to the longest resolved wavelength; otherwise, we would have to deal with ringing effects associated with wave phases. **Simply put, even though the final grid seems coarse, it preserves all the knowledge from the high-resolution dataset and contains all the relevant information for climate models**. Predicting wave phases instead would lead to high errors in ML-based predictions and render them fruitless for climate science applications. Since climate models are typically coarse, the coarse resolution selected here (100-300 km) is exactly the optimal fit. Too fine a resolution would mean that we are predicting wave phases - which would be a poorly defined problem. In this 'coarsegrained' form, the fluxes could be used optimally by climate models, as the traditional parameterizations too use a similar approach to compute momentum fluxes.\\n\\n2. Yes, efforts are underway to test the online performance of this scheme. We understand the importance of online testing and have clearly acknowledged in the conclusion section that we are working on it. 
Since climate models are complex Fortran codes, coupling the torch model to a climate model is a challenging technical problem - especially if we want the coupled ML model to provide the optimal speed (which we do). We had to overcome some key logistical problems to build a team that can effectively help us with this, and we are now making quick progress. Moreover, most climate model parameterizations are tested offline first. **So, this does not reduce the significance of this work at all, as rigorous offline testing (the three proposed tests) is key to ensuring stable online performance later**. Also, we appreciate that this may not have been clear from the text, but we are not \u201cforecasting climate\u201d here, but rather representing missing physics in climate models at any given instance by learning from high-resolution reanalysis data. Otherwise, this line of thought would seem to suggest that most downstream tasks proposed in foundation models like ClimaX, Aurora, or Prithvi have no value because they are offline; this is clearly not the case. While our downstream tasks are not online yet, their rigorous offline testing has inherent value to ensure optimal online performance, and the analysis presented here comprises more than half of the entire problem.\"}", "{\"title\": \"Response to Questions\", \"comment\": \"Thanks for the three questions. We clarify the concerns below.\\n\\n1. **Resolution:** ideally, a spatial resolution of 500 m to 1 km and a temporal resolution of 5-10 minutes should be sufficient to resolve most gravity waves (GWs) in the atmosphere. However, datasets that meet both criteria are quite limited, as (i) models provide high resolution in space but not in time due to memory factors. The state-of-the-art global 1 km climate model output provides atmospheric data for 8 months but only every 3 hours, and the whole dataset takes more than 3 petabytes of space. 
On the other hand, (ii) satellites & balloons provide high-frequency data but have poor spatial resolution. Thus, ERA5 currently provides the best dataset/trade-off between spatial (25 km) and temporal (hourly) resolution. We will make sure to elaborate more on this in the revised manuscript.\\n\\n2. **Frozen encoder-decoder:** Finetuning can be done in several ways: we can freeze all layers, some layers, or none, and add a couple of trainable layers at the beginning and end of the model. Here is our thought process: if we load the full model for finetuning, we would have to adopt the FSDP approach so that model layers can be finetuned, which would need a minimum of 4 A100 40 GB GPUs for fine-tuning. However, freezing the model helped us to load it on a single GPU. This ensures wider applicability for a broader community of ML scientists and climate scientists who may have limited compute resources to create such parameterizations. Additionally, making all the layers trainable in the decoders would not have offered improvements, as the model was pretrained on 14 vertical levels from MERRA-2 and finetuned on 122 vertical levels from ERA5. Considering that both datasets have different assimilation strategies from raw observations, the model would need to learn the relationship between MERRA-2 and ERA5, which might differ by parametrization/handling of different PDEs (assimilation objectives, finite-volume vs. pseudospectral numerical methods). So, we froze the model and added convolutions at the beginning, which learn local features from the data before hitting the embedding space. One can call this \u201clocal information injection\u201d. So the model weights are changing, but those of the surrounding convolutions, not of the base model. Thanks for the suggestion, though, and for letting us know that this was not clear at first from the text. We will improve the caption and, if needed, move the figure to the Appendix in the final manuscript.\\n\\n3. 
**Learning closed-form expressions:** Good question! The underlying closed forms are approximately known from linear wave theory. But low-resolution climate models cannot meaningfully resolve these terms because the bulk of the contribution in these terms comes from spatial scales of 100 km and shorter, and these scales are not resolved in a typical climate (not weather) model. So, we extract these quantities from high-res observations (which represent physics) to prepare training data and then use the trained ML model to represent these missing physics/unresolved terms in low-res climate models, boosting their physics representation. So, in short, we know what physics climate models are missing and why - so we resort to high-resolution climate data/observations to learn the physics using AI and analytical forms - and then we couple these ML models back to the low-res climate models to represent the missing physics.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to comments on summary and weaknesses\", \"comment\": \"We thank you for your thoughtful comments. Most points raised are not actually weaknesses, and we explain why:\\n\\n1. **Choice of Foundation Model**: Aurora\u2019s stable code was publicly released on August 22, 2024. By then we had already completed our analysis and started writing the manuscript, so it made little sense to go back and redo the analysis just because there is a new foundation model in town. Also, it seems like Aurora pretty much focuses on weather forecasting and air quality forecasting as the only downstream tasks, so science-wise there's not much difference between Aurora and Prithvi. AtmoRep does not discuss any downstream tasks whatsoever in their non-peer-reviewed paper. If the authors themselves have not discussed many downstream tasks in their paper, it makes us a bit skeptical about using the model for our analysis as well. 
ClimaX is too coarse in resolution and trained on potentially biased CMIP6 model output to serve as a good choice for climate science applications. Thus, we find Prithvi WxC to be more balanced in terms of both pre-training and fine-tuning applications. Despite not using the other models, we reiterate that intercomparing the foundation models is simply not the focus of this study. That said, the goal is also not to suggest that Prithvi is the optimal foundation model for the task of climate model parameterization development. The goal is to open the field of climate science/modeling to the broader AI community by devising novel use cases in the domain and providing a robust roadmap or proof-of-concept. In principle, any well-designed FM could be used for this ML parameterization development task. We just demonstrate it using Prithvi since it was developed by a mix of both ML and climate-science domain experts and has a well-defined encoder-decoder architecture.\\n2. **Features and data selection**: we should clarify that using different input variables for the two models does not create any discrepancies whatsoever because (as also mentioned in the manuscript) the potential temperature (\ud835\udec9) used as input for one model is simply a product of temperature (T) and pressure (p): \ud835\udec9 = T*(p)$^{constant}$. Thus, both models contain the same amount of physical information, just in different dimensions, and these models are robust enough to not be affected by these differences. We like your point, though, and we would be happy to add the loss curves for model training using different feature combinations in the revised manuscript.\\n3. **Choice of baseline**: we used an Attention UNet as a baseline because it is a state-of-the-art method for dense prediction. Even previous papers like ClimaX compared their forecasting performance to a UNet and ResNet (though not time series). 
While we are not comparing our study to theirs, we have certainly drawn inspiration from their approach. Also, since the Attention UNet has been well accepted by the ML community, we are confident it can serve as a very effective baseline for our instantaneous mapping task. The fine-tuned model is definitely using 2.3B parameters but, to note, we have frozen the weights of the model so these parameters are not updated; the only trainable parameters are the convolutions around Prithvi WxC. Sure, we can definitely go ahead and train a transformer for it, but we also want to acknowledge and build on the accepted published baselines for this task (the Gupta et al. (2024) ICML paper, https://arxiv.org/pdf/2406.14775).\\n4. Sure, we would be happy to share it in the revised version.\\n5. **Log-scaling**: Please note the log scaling of the y-axis. The dotted lines are indeed the 2.5th and 97.5th percentiles (see linear plot here: https://tinyurl.com/iclr25).\\n6. **Hellinger distance**: unfortunately, we could not find a super-relevant study which could act as a reference here to interpret the exact values of the Hellinger distance. However, our decision to use 0.05 is explained and supported by Monte Carlo Gaussian sampling conducted over thousands of Gaussian samples, where we found that a Hellinger of 0.05 for standard Gaussians is explained by a roughly equal, i.e., ~60% perturbation in the mean or a ~60% perturbation of the std. dev. This also makes interpreting results easier in terms of increased spread or increased shift in the mean, i.e., if the shapes are similar, a Hellinger distance above 0.05 can be interpreted as being large due to changes in the standard deviation or due to changes in the mean. We would be happy to elaborate further in the revised manuscript, supported with appropriate figures. 
Since the Hellinger distance ranges from 0 (identical) to 1 (disjoint), a distance of 0.05 definitely lies on the lower side, indicating the distributions are indeed strongly similar with minimal divergence. When compared to other measures (e.g., KL divergence, Wasserstein distance, etc.), 0.05 often aligns with very small discrepancies. In high-precision domains, a Hellinger distance of 0.05 might be considered excellent.\"}", "{\"title\": \"Thanks for the clarification.\", \"comment\": \"Those answers helped to understand the motivation behind the paper, and I agree with the comments. My point about ClimaX was to point out that there are papers which present many downstream tasks, but I understand that your specific downstream task is hard and unique and thus might require a study by itself. I am not raising the score, since I still think the paper needs more meat - I know this is unsatisfying.\"}", "{\"metareview\": \"The paper proposes using foundation models to improve climate model predictions, specifically focusing on modeling gravity wave effects via fine-tuning. Almost all the reviewers agree that the technical contribution from an ML perspective is minimal. I agree with the assessment of the reviewers that ICLR is perhaps not the best venue for this work.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers are inclined towards rejecting the paper, their primary concern being venue fit. I agree with the reviewers that the ML contribution of the work is minimal and that the paper will be received better at an alternate venue.\"}", "{\"title\": \"Response to questions\", \"comment\": \"Thank you for the thoughtful questions. We provide the response below. 
We provide detailed answers to questions 2 and 3, on daily flux predictions and the tail-Hellinger distance, separately as a follow-up comment:\\n\\n1) If we unfreeze and load the full model for finetuning, we would have to adopt the FSDP approach so that model layers can be finetuned, which would need a minimum of 4 A100 40 GB GPUs for fine-tuning. However, freezing the model helped us to load it on a single GPU. Additionally, making some layers trainable in the decoders would not have offered improvements, as the model was pretrained on 14 vertical levels from MERRA-2 and finetuned on 122 vertical levels from ERA5. Considering that both datasets have different assimilation strategies from raw observations, the model would need to learn the relationship between MERRA-2 and ERA5, which might differ by parametrization of different PDEs. So, we froze the model and added convolutions at the beginning, which learn local features from the data before hitting the embedding space. One can call this \u201clocal information injection\u201d. Later on, we use the transformation to reach back to the gravity wave fluxes. Considering the change in data resolution (space, time, and domain), we decided to use convolutions to help the model adapt better to the domain. \\n\\n4. Yes, we plan to test the model performance during (a) seasonal transitions in the southern hemisphere stratosphere, i.e., final warmings, and around extreme events in the northern hemisphere stratosphere, a.k.a. sudden warmings. Getting the gravity wave forcing correct around these features will serve as a strict test of model performance. However, to ensure statistical confidence, this would be best accomplished using online testing. Thus, once our scheme is coupled to the climate model, we plan to test on well-known atmospheric features like final warming dates, sudden warming frequencies, and the tropical QBO period.\\n\\n5. Yes, absolutely. 
The conservative coarsegraining (not the same as linear interpolation) was achieved using the first-order conservative regridding function provided by the xESMF Python library. The fluxes were first computed and stored on a 25 km grid, then coarsegrained to a T42 (~300 km) Gaussian grid. We tested different coarsegraining methods in xESMF and found little difference. The 300 km resolution was purposely selected to ensure consistency, as we are currently coupling the ML model to a global climate model with a 300 km grid resolution. \\n\\n6. Since the Attention UNet baseline also uses 4 downsampling layers (excluding the bottleneck), we selected 4 conv. blocks to maintain similarity between the baseline and the finetuning design. Otherwise, it would be tempting to surmise that performance gains are due to different model depths. We would be happy to provide ablation results in the revised manuscript. \\n\\n7. Yes, efforts are underway to couple this scheme to a climate model. As mentioned in the paper \u2013 good offline performance does not always equate to good online performance. This is due to **nonlinear feedbacks** between the ML scheme and the climate model. Due to such feedbacks, small errors can often grow exponentially, leading the model to produce nonsensical results. Moreover, online performance will also present a rigorous test of the ML scheme, as a long-term simulation will test the **generalizability** of the model to new inputs and a new model climatology. One technical challenge in particular is **speed**. Coupling the torch model with a Fortran code and invoking it every model physics step is particularly challenging, as the communication leads to a substantial slowdown of the climate model. We are working towards finding a solution. However, if the scheme is evaluated strictly, as we have attempted to do in this study, it generates sufficient confidence that the scheme can perform plausibly during online tests as well. \\n\\n8. 
We completed our analysis by the last week of August. Aurora\u2019s code was made public on Aug 22, so it made little practical sense to \u2018switch\u2019 to Aurora. Also, Prithvi was made by a mixed team of both climate/weather scientists and ML scientists, so we stuck to Prithvi. Regardless, since the focus of our study is to open avenues for the application of ML for climate science modeling (not weather science), and since our study is agnostic of the foundation model used, FM choice is not a significant issue. \\n\\n9. The code is already available on GitHub but we didn\u2019t share it to ensure anonymity. It will be made available in the final version.\"}", "{\"summary\": \"A recent foundation model for weather and climate (Prithvi WxC) is fine-tuned to predict gravity waves. Those predictions are supposed to parameterize subscale processes of coarse climate models and thus improve climate projections. The fine-tuned model outperforms a single-task U-Net and generalizes well in regions that are outside Prithvi WxC's training range.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"_Originality:_ The proposed manuscript touches on relevant topics for society and applies state-of-the-art foundation models. It is great to see that the fine-tuned foundation model generates accurate predictions even outside the regime of the foundation model's training data.\\n\\n_Clarity:_ The manuscript is mostly clear and well organized. Supplementary material and code for replicating the results are not provided, though.\", \"weaknesses\": \"_Quality_\\nThe quality of the manuscript lacks in several aspects. Mostly, the comparison of the foundation model vs the baseline seems unfair.\\n1. Unclear for what reason the authors choose Prithvi WxC as the foundation model and not ClimaX or AtmoRep. 
Comparing these three foundation models can be considered more fair than comparing the fine-tuned FM against conventional one-task approaches, like Attention U-Net, etc.\\n2. In the same vein, given that the baseline and fine-tuning models are trained on different sets of input variables (lines 203 through 210), it is unclear how to differentiate between the model quality and the data selection.\\n3. The baseline is a convolutional architecture, whereas the fine-tuned model is a transformer, which questions the role of the pretraining vs. the model architecture. It would be great to see how a transformer would compare as a baseline model. Similarly, the number of parameters of the U-Net (35M) is not comparable to that of the fine-tuned model (2.3B).\\n4. A plot showing the convergence of the baseline vs the fine-tuning model would be informative to better capture their behavior (line 271).\\n5. In Figure 5, the dotted lines are described to indicate the 2.5th and 97.5th percentiles. The data distributions, however, hardly confirm this. There appears to be substantially more than 2.5 percent of the data left and right of the blue dashed vertical lines.\\n6. How is the argument in line 325 substantiated, that a Hellinger distance of 0.05 or less is considered pretty good? Is there some literature or data that suggests this decision?\\n\\n_Significance_\\nIt is hard to assess whether the proposed method is relevant for climate forecasting, mostly as the parametrizations are not tested in numerical climate models.\\n1. Output fluxes are on a fairly coarse resolution, and I'm concerned that the coarse gravity wave predictions are of limited value for a numerical climate model.\\n2. The model is introduced to provide parametrizations for climate models; however, the study does not test those parametrizations in climate models. 
As detailed in the limitations section, the actual verification of the parametrization is a key contribution that I consider substantial for assessing the relevance of the proposed method.\\n\\n\\n_Minor comments_\\n- lines 37-38: Add details about why SOTA future climate projections are highly uncertain\\n- line 289: Typo in \\\"evolutionlution\\\"\", \"questions\": \"1. What spatial and temporal resolution is required to resolve gravity waves? (see abstract and line 60) Please add details to the manuscript.\\n2. In the caption of Figure 2, what does it mean that encoder and decoder blocks are frozen and used for fine-tuning? This reads as conflicting, since frozen weights cannot be fine-tuned. Also, this figure does not seem to convey much information for the model setup at hand. I have difficulties extracting details about data or architecture. EDIT: In Section 2.4 this is outlined clearly; I suggest to remove Figure 2.\\n3. What do the models learn effectively? It seems like the models are trained to approximate Equations (1)-(3) and I do not understand why this is done with deep learning models instead of using these equations directly.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to questions - part 2\", \"comment\": [\"Here we provide detailed answers to your questions 2 and 3.\", \"2. Yes, this is an important issue and is reminiscent of the issues common to AI weather forecasting models like FourCastNet, GraphCast, etc., where the models tend to learn the large-scale features better but struggle to accurately represent the small-scale fronts, filaments, atmospheric rivers, etc. Moreover, the high correlation coefficient of the instantaneous fluxes in Figure 7 shows that most of these small values might be appearing outside of the selected hotspots. 
We propose a couple of solutions to tackle this issue:\", \"A more physics-informed loss function, e.g., regularizing the predicted distribution towards the true distribution\", \"A latitude-weighted loss function \u2013 since the correlation coefficient is weaker in the tropics, a latitude-weighted loss function can reduce the errors in the tropics\", \"A different scaling for fluxes - there is a strong tendency for small values to be treated as noise. While the cube-root scaling of fluxes works very well in yielding a plausible climatology, it may make it difficult to learn the small-scale values, as the cube root tends to push small values away from zero and large values towards one.\", \"3. This is a good question, thanks! The traditional Hellinger distance ranges from 0 to 1, 0 indicating that the two distributions are identical over the whole sample space and 1 indicating that the two distributions are completely disjoint. The tail-Hellinger distance envisioned in this study, however, has a different range. We provided some context on how to interpret values of the tail-Hellinger metric in the manuscript but would be happy to elaborate more in the revised version.\", \"Essentially, tail-Hellinger zooms in on the tails, assuming that the bulk of the distributions are similar (if not identical). With \u00bd as a fixed constant, a positive increment to tail-Hellinger comes from the second term, which is simply the integral of the tail for the predicted distribution. If the bulk is identical for both $p$ and $q$, then the second term (integral of $p$) should be equal to 0.5 as well. If subsequently the tails are disjoint, then the tail-Hellinger will be equal to 1. In this case, the interpretation will be that the bulk of the distributions are identical, yet the tails are totally different/disjoint.\", \"Any negative contribution that makes the tail-Hellinger negative comes from the cross-term between $p$ (predicted) and $q$ (truth). 
If the tails of the two distributions are disjoint, then the contribution from the cross term will be zero and the tail-Hellinger will be nudged toward more positive values by the second term (scaled integral of $p$). On the other hand, a negative value would imply a fatter tail of the true distribution than the predicted distribution, because $\\sqrt{p(x)}\\sqrt{q(x)} \\geq \\sqrt{p(x)}\\sqrt{p(x)}$ over the tails, so the cross term dominates.\", \"So, if the tail-Hellinger distance is (say) -0.25, this means the model is underestimating the tails, i.e. $\\sqrt{q(x)} > \\sqrt{p(x)}$. If the bulks are identical, this would mean that the range of $q$\\u2019s tails is narrower than the range of $p$\\u2019s tails, but since $q(x)$ > $p(x)$ $\\forall x$, the cross-integral $\\int \\sqrt{q(x)}\\sqrt{p(x)} dx$ > 0.5 + $\\int \\sqrt{p(x)}\\sqrt{p(x)} dx$, leading to negative values. The more the prediction underestimates the tails, the more negative the tail-Hellinger distance. The fatter the tail of $q$ relative to $p$, the more negative the tail-Hellinger. However, the tail could be fatter on either one side or both, and the tail-Hellinger would not be informative in suggesting which (a limitation).\", \"It is worth mentioning that negative values of tail-Hellinger are more likely for small values of $\\epsilon$, as that increases the probability that the bulks of the distributions are not identical.\", \"Of course, this is the net sum of the tail product over both tails of the distributions \\u2013 sort of like a two-sided Student's t-test. A more nuanced metric could be defined to focus on just the left or the right tail. Another possibility is not to assume similar bulks, but instead to assume systematic biases in the mean and account for such biases when computing the distance between distributions. Since it was not directly relevant for this study, we did not discuss these ideas in the Appendix.\"]}" ] }
DsW4boRh8H
GFNet: Homography Estimation via Grid Flow Regression
[ "Kaining Zhang", "Jiayi Ma", "Paolo Favaro" ]
Current deep homography estimation methods are constrained to processing image pairs with limited resolution due to restrictions in network architecture and computational capacity. For larger images, downsampling is often necessary, which can significantly degrade estimation accuracy. To address this limitation, we propose GFNet, a Grid Flow regression Network that consistently delivers high-accuracy homography estimates across varying image resolutions. Unlike previous methods that directly regress the parameters of the global homography between two views, GFNet directly estimates flow over a coarse grid and then uses the resulting correspondences to compute the homography. This approach not only supports high-resolution processing but also preserves the high accuracy of dense matching while significantly reducing the computational load typically associated with such frameworks, thanks to the use of coarse grid flow. We demonstrate the effectiveness of GFNet through a wide range of experiments on multiple datasets, including the common scene MSCOCO, multimodal datasets VIS-IR and GoogleMap, and the dynamic scene VIRAT. Specifically, on GoogleMap, GFNet achieves an improvement of +9.9\% in auc@3 while reducing MACs by $\sim$47\% compared to the SOTA dense matching method. Additionally, it shows a 1.7$\times$ improvement in auc@3 over the SOTA deep homography method.
[ "homography estimation", "multimodal", "image matching" ]
https://openreview.net/pdf?id=DsW4boRh8H
https://openreview.net/forum?id=DsW4boRh8H
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z9T1nvuYjY", "pODJjxUv3N", "FRRJ3csEdL", "1hT5j8rLWV" ], "note_type": [ "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730180804680, 1730389447466, 1730381465299, 1731603019700 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4703/Reviewer_T19m" ], [ "ICLR.cc/2025/Conference/Submission4703/Reviewer_SDxP" ], [ "ICLR.cc/2025/Conference/Submission4703/Reviewer_Jqqw" ], [ "ICLR.cc/2025/Conference/Submission4703/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This work argues that current methods are constrained to limited resolution, which hinders the accuracy of homography estimation, and thus proposes a framework for estimating homography that can utilize large-resolution information. To achieve this, the authors first compute the flow fields and then solve for the homography. As shown in the experiments, GFNet achieves multiple SOTA results on common, multimodal, and dynamic datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well written and easy to follow.\\n2. This work is a fairly good system paper.\\n3. The authors provide the reproduction code and weights in the supplementary material.\", \"weaknesses\": \"Related Work:\\n\\n1. Several deep homography methods have been proposed, for example, [1-3]. [1] estimates homography across resolution and modality; [2] generates a supervised homography dataset that simulates the real-world distribution; [3] directly represents the homography via an 8-rank flow field, eliminating the need for post-solving from flow fields.\", \"method_part\": \"1. The technical contribution (GRID FLOW REGRESSION) includes initially estimating the flow, then solving for homography, and learning the flow motion between fixed grids. The former seems to have been applied in previous methods [2]. 
I would like the authors to discuss the advantages compared to it; the latter, in my view, is better regarded as a useful engineering trick rather than a major technical contribution.\n\n2. Leveraging priors in foundation models (DINO or Stable Diffusion) to produce features and estimate geometric transformations has also been seen in previous works, such as [4]. These works demonstrate that features from foundation models are very helpful for multi-modality tasks, such as semantic matching.\n\n3. The proposed dataset generation method may not meet the realism criteria (the realism of frame content and inter-frame motion; please refer to [3] for more details), which is crucial for ensuring performance and generalizability. For example, the proposed method cannot simulate parallax changes or human walking.", "experiments": "1. I recommend that the authors conduct experiments, at least zero-shot inference, on recent deep homography datasets [2], which represent general real-world scenes, including parallax changes, dynamic foregrounds, and adverse conditions.\n\n[1] CrossHomo: Cross-Modality and Cross-Resolution Homography Estimation. TPAMI 2024\n\n[2] Supervised Homography Learning with Realistic Dataset Generation. ICCV 2023\n\n[3] Unsupervised Global and Local Homography Estimation with Motion Basis Learning. TPAMI 2023\n\n[4] Emergent Correspondence from Image Diffusion. NeurIPS 2023", "questions": "Motivation:\n1. Theoretically, traditional methods can solve a homography with just 4 correspondences. While additional correspondences can enhance accuracy through an overdetermined solution, this is not essential compared to identifying 4 key correspondences. 
\nMy question, therefore, is how this approach compares to methods that focus on identifying key correspondences, and why increasing resolution is important despite the theoretical sufficiency of four correspondences?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "3", "confidence": "5", "code_of_conduct": "Yes"}", "{\"summary\": \"This work proposes a grid flow regression network that consistently delivers high-accuracy homography estimates across varying image resolutions. In particular, the proposed GFNet directly predicts the flow over a coarse grid and then uses the resulting correspondences to obtain the homography. Experimental results demonstrate the effectiveness of the proposed method across various datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This work serves as the first attempt to address the limitation of the fixed input resolution in the homography estimation problem. The motivation of using a grid flow-like representation is clear and sound.\", \"The paper is easy to read and the structure is well organized.\", \"The experiments are comprehensive and the results are convincing. The proposed method outperforms previous methods across various datasets, showing promising performance and robustness.\", \"The code and pre-trained models have been uploaded, which significantly helps the reviewers to understand the details and mechanism of the proposed framework.\"], \"weaknesses\": [\"However, I still have some concerns as follows and tend to raise my rating if the authors can properly address them.\", \"The proposed grid flow representation is similar to the classical 'mesh flow' (MeshFlow: Minimum Latency Online Video Stabilization) used in the video stabilization task. 
Their differences and limitations should be clarified in the context of homography estimation.\", \"The pixel-wise flow can flexibly adapt to different input resolutions, but it might be redundant for describing the homography matrix, which typically has only 8 DoF. Do we really need such a dense flow in a homography estimation network? More discussion should be presented when applying the flow and grid representations.\", \"This work introduces an iterative flow regression approach to prevent suboptimal multi-scale flow optimization in challenging scenes. However, this iterative or progressive regression method has also been explored in previous flow estimation and image warping works. For example, \\\"RAFT: Recurrent All-Pairs Field Transforms for Optical Flow\\\", \\\"MOWA: Multiple-in-One Image Warping Model\\\", \\\"Semi-supervised coupled thin-plate spline model for rotation correction and beyond\\\", etc. The authors should highlight their unique contributions and provide some discussion compared with the above works.\", \"Some recent homography estimation works are also missing from this work. For example, \\\"DMHomo: Learning Homography with Diffusion Models\\\", \\\"Supervised Homography Learning with Realistic Dataset Generation\\\", \\\"Depth-aware multi-grid deep homography estimation with contextual correlation\\\", etc.\", \"Is the SOTA performance of this work mainly derived from the powerful DINOv2 backbone? Did the authors try other backbone networks such as the classical ResNet?\", \"When performing the ablation study, an important experiment would be using the grid or 4-pt representation (widely used in previous homography estimation works) to compare against the proposed grid flow representation in different resolution settings, including their specific estimation accuracy, complexity, and efficiency.\"], \"questions\": \"Can the proposed method be applied to the related downstream vision tasks? 
Some discussion and future work could be provided.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "5", "code_of_conduct": "Yes"}", "{\"summary\": \"This paper proposes a Grid Flow regression Network called GFNet. Compared with previous methods for homography estimation, GFNet supports varying resolutions and significantly reduces the computational load by using grid flow regression. When evaluated on several challenging datasets, including multimodal and dynamic scenes, GFNet achieves state-of-the-art performance with varying resolution and lower computation load.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper provides comprehensive comparisons across various algorithms on multiple datasets, demonstrating that the proposed method achieves superior performance over existing approaches.\\n\\n2. By incorporating composite homography in data generation, this approach enhances the network's ability to generalize across both forward and reverse input sequences, reducing overfitting and improving robustness under varied conditions.\", \"weaknesses\": \"1. While the paper mentions that iterative grid flow regression reduces the computation load of global correlation at each scale, the improvement appears limited. The time complexity remains in the same order as pixel-based regression. To clarify the extent of the efficiency gain, it would be useful to see a detailed breakdown of computation time across different components or empirical runtime comparisons on various hardware configurations.\\n\\n2. I couldn\\u2019t find any clear ablation study examining the impact of using grid flow regression on both accuracy and computational load. A comparison between the proposed grid-based approach and a pixel-based version, with other components held constant, could provide valuable insights.\\n\\n3. 
While the grid flow regression aims to reduce the computational load, the addition of DINOv2 seems to offset this benefit, potentially increasing the overall computation. To better understand the computational trade-offs, it would be helpful to provide a detailed analysis, including a breakdown of computational costs for each component and how they balance out in the overall architecture.\n\n4. To better clarify the novel contributions of GFNet beyond the use of grid flow and global correlation computation, it would be useful to provide a more detailed comparison between GFNet's iterative structure and that of MCNet.\n\n5. The experimental results are not convincing enough, as many compared methods report their accuracy in MACE (in pixels), such as RHWF, MCNet, and PRISE, but this paper doesn't compare them. The results in Table 2 in this paper aren't consistent with the ones reported in the RHWF, MCNet, and PRISE papers, as their accuracy is very high on MSCOCO and GoogleMap. Including MACE comparisons and addressing the discrepancies with the results reported in the RHWF, MCNet, and PRISE papers could enhance the clarity and robustness of the evaluation.", "questions": "Is there an ablation study that validates the effectiveness of the 'augment with dynamic occlusions' component?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "3", "confidence": "5", "code_of_conduct": "Yes"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
DsMxVELk3K
TextEconomizer: Enhancing Lossy Text Compression with Denoising Autoencoder and Entropy Coding
[ "Mahbub E Sobhani", "Anika Tasnim Rodela", "Chowdhury Mofizur Rahman", "Swakkhar Shatabda" ]
Lossy text compression reduces data size while preserving core meaning, making it ideal for summarization, automated analysis, and digital archives where exact fidelity is less critical. While extensively used in image compression, text compression techniques, such as integrating entropy coding with autoencoder latent representations in Seq2Seq text generation, have been underexplored. A key challenge is incorporating lossless entropy coding into denoising autoencoders to improve storage efficiency while maintaining high-quality outputs, even with noisy text. Prior studies have mainly focused on near-lossless token generation with little attention to space efficiency. In this paper, we present a denoising autoencoder with a rectified latent representation that compresses variable-sized inputs into a fixed-size latent space without prior knowledge of dataset dimensions. By leveraging entropy coding, our model achieves state-of-the-art compression ratios alongside competitive text quality, as measured by diverse metrics. Its parameter count is approximately 196 times smaller than that of comparable models. Additionally, it achieves a compression ratio of 67× while maintaining high BLEU and ROUGE scores. This significantly outperforms existing transformer-based models in memory efficiency, marking a breakthrough in balancing lossless compression with optimal space efficiency.
[ "Text Compression", "Denoising AutoEncoder", "Lossy Text", "Entropy Coding", "Latent Space" ]
https://openreview.net/pdf?id=DsMxVELk3K
https://openreview.net/forum?id=DsMxVELk3K
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yQ9uSB2K48", "lytKrwoalu", "lq7acY90SO", "kJK0zVAocO", "i6P3VDEp3V", "eMh6LfAuhb", "XtAqrrthBb", "RSTy0tWzwe", "OYzNBFg5Ke", "Kib38MUbYW", "CvhqR6G4iG", "CnCBzQkc5m", "1zbFNSRz1x" ], "note_type": [ "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment" ], "note_created": [ 1732029088723, 1730687276916, 1731051354224, 1730641927970, 1732555440915, 1732641512500, 1732030858784, 1732035277330, 1732008101521, 1731988404513, 1737604665424, 1730914882293, 1733073070672 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11419/Authors" ], [ "ICLR.cc/2025/Conference/Submission11419/Reviewer_z5S6" ], [ "ICLR.cc/2025/Conference/Submission11419/Reviewer_KeMW" ], [ "ICLR.cc/2025/Conference/Submission11419/Reviewer_k4jx" ], [ "ICLR.cc/2025/Conference/Submission11419/Reviewer_XGgy" ], [ "ICLR.cc/2025/Conference/Submission11419/Authors" ], [ "ICLR.cc/2025/Conference/Submission11419/Authors" ], [ "ICLR.cc/2025/Conference/Submission11419/Authors" ], [ "ICLR.cc/2025/Conference/Submission11419/Authors" ], [ "ICLR.cc/2025/Conference/Submission11419/Authors" ], [ "ICLR.cc/2025/Conference/Submission11419/Authors" ], [ "ICLR.cc/2025/Conference/Submission11419/Reviewer_XGgy" ], [ "ICLR.cc/2025/Conference/Submission11419/Authors" ] ], "structured_content_str": [ "{\"title\": \"Responses on TextEconomizer's Design and Efficiency\", \"comment\": \"In NUGGET the authors chose BART ref[1], a transformer-based sequence-to-sequence model as the foundational architecture for their experiments on top of 602M parameters checkpoint ref[2]. TextEconomizer takes a more lightweight approach, using a simpler bi-directional GRU-based architecture with just ~67M parameters. 
Despite this, TextEconomizer achieves a strong balance, with only a 7.06% drop in BLEU score while maintaining a promising BERT Score. Since we're working on lossy text compression, semantic similarity (captured by BERT Score) is equally important, as BLEU's strict word-by-word matching might not fully reflect text quality in this context. We'll make sure to highlight this trade-off more clearly.\n\nref[1]: Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Annual Meeting of the Association for Computational Linguistics (ACL), 2020.\n\nref[2]: Tang, Y., Tran, C., Li, X., Chen, P.-J., Goyal, N., Chaudhary, V., Gu, J., and Fan, A. Multilingual Translation with Extensible Multilingual Pretraining and Finetuning, 2020.\n\n\nReply [Q1]: The Transformer produces a tensor with the shape [max_length (can vary in each batch), batch_size, hidden_dim] at the encoder end before passing the contextualized tensor to the decoder for each batch. In contrast, TextEconomizer passes a tensor with the shape [batch_size, hidden_dim] to the decoder. This means that while processing each batch, TextEconomizer can generate high-quality contextualized tensors while eliminating [max_length * 4] bytes. Consequently, storing our model's contextualized tensors requires less memory than the Transformer during training, and also when the bottleneck tensors are saved explicitly. Additionally, entropy coding saves even more memory when the tensors are stored explicitly.\n\nReply [Q2]: No, the latent Z was not quantized before being encoded with LZMA.\n\nReply [Q3]: Thank you for sharing your intuition. With our TextEconomizer, we have measured the bits-per-character (bpc) as follows: 0.8988 for WMT19, 0.3350 for WMT14, 0.7016 for PwC, and 0.8670 for the BookCorpus dataset. 
If you don't mind, could you briefly elaborate on incorporating the entropy model with TextEconomizer? That information would be helpful for our quick experiments.\"}", "{\"summary\": \"The paper argues that while lossy compression is widely used in image processing, its application in text compression has been less explored. Their proposed model, TextEconomizer, compresses variable-sized text inputs into a fixed-size latent space via an autoencoder-style framework and achieves relatively good compression ratios with competitive text quality, as measured by BLEU, BERT Score, ROUGE scores, and PPL. The model is parameter-efficient, and it significantly outperforms existing transformer-based models in memory efficiency with the use of entropy coding on latent representations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"High Compression Ratio: TextEconomizer achieves a remarkable 67\\u00d7 compression ratio (lossy) while maintaining relatively high BLEU and ROUGE scores, indicating that it is effective at compressing text without significant loss of meaning.\", \"memory_efficiency\": \"The model demonstrates significant memory efficiency with additional entropy coding on latent representations.\", \"parameter_efficiency\": \"TextEconomizer shows that high performance can be achieved with a parameter count significantly smaller than comparable models.\", \"weaknesses\": \"(1) Using autoencoders and entropy coding for lossy compression is not a new idea, especially for visual signal compression such as image compression [Ref1] and video compression [Ref2]. When referring to lossy image compression methods with variational autoencoders, the authors should include these representative related works in this paper (incomplete literature review).\\n\\n[Ref1] Variational Image Compression with a Scale Hyperprior. Ball\\u00e9 et al., ICLR 2018.\\n\\n[Ref2] DVC: An End-to-end Deep Video Compression Framework. 
Lu et al., CVPR 2019.\n\n(2) When applying lossy compression to text, it is obvious that text corpora get much higher compression ratios compared with lossless compression methods, but at the cost of text reconstruction precision. Different from visual signals like images, text is a data modality with high information density. Therefore, if the authors would like to compress text in a lossy way, they should convince others that \" Lossy text compression reduces data size while preserving core meaning, making ideal for some tasks\". In other words, some experiments should be performed on tasks like summarization, automated analysis, and digital archives, to ensure that lossy text compression is still useful for these tasks.\n\n(3) Different from lossless compression, lossy compression should usually be measured at different compression ratios and different distortion levels, more like a compression ratio\u2013distortion curve. Maybe the authors can adjust the latent dimension to investigate different compression ratios.\n\n(4) Most of the literature referred to in this paper is not correctly cited. Many references are arXiv versions. In addition, the well-known paper \"Attention is All you Need\" is mistakenly cited as a paper published in 2023 (it was published in NeurIPS 2017).", "questions": "As I am not very familiar with this area, I hope the abovementioned weakness points could get some feedback from the authors.", "flag_for_ethics_review": "['No ethics review needed.']", "details_of_ethics_concerns": "No Ethics Concerns", "rating": "3", "confidence": "2", "code_of_conduct": "Yes"}", "{\"summary\": \"The paper proposes a DAE method for the English text autoencoding task, constructed from a bidirectional gated recurrent unit (GRU) and combined with entropy coding to optimize the compression effect.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"S1. 
The memory compression ratios achieved on text are impressive.", "weaknesses": "W1. [Clarity & Writing - 1] The paper's writing could be improved to clearly outline its contributions in the introduction and abstract. Additionally, the text becomes overly verbose and includes several claims in the method section without direct references. Proper citations are needed for every claim that refers to existing literature. For instance, there is a missing reference or explanation for the \"Lempel\u2013Ziv\u2013Markov compressor.\"\n\nW2. [Clarity & Writing - 2] The paper's structure should be clearer and better organized. For example, Section 3 dedicates substantial space to details about the dataset (e.g., number of words), which is less relevant to the content of Section 4. Furthermore, the dataset attributes are repeated in Section 5.1. Additionally, Section 5 appears to be a continuation of the experimental results, and it should not be clearly separated from Section 6. Additionally, the tables should be properly positioned and the fonts should be consistent.\n\nW3. [Experiments] There are a lot of unfair comparisons and overclaims in the paper. For example, the Transformer shown in Table 1 outperforms the proposed method (97.33 vs 95.75, 99.46 vs 99.28) in terms of BLEU and BERT Score. Considering that the number of parameters of the Transformer can be adjusted by reducing the number of layers or the dimension of embeddings, it would be fairer to choose a Transformer structure with the same number of parameters as the method in this paper.", "questions": "1. Teacher-forcing is mentioned in Figure 1, but is not described in the method section. Can you describe it briefly?", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "3", "confidence": "5", "code_of_conduct": "Yes"}", "{\"summary\": [\"This paper presents TextEconomizer, a lossy text compression method that combines a denoising autoencoder with entropy coding. 
The main claimed contributions are:\", \"A text noise process for training robust denoising\", \"A monolingual autoencoder architecture using fixed-size latent representations\", \"Integration of entropy coding for improved compression ratios\", \"Evaluation showing 67x compression while maintaining quality metrics\", \"Analysis of training corpus size effects on performance\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The reported performance is good.\"], \"weaknesses\": [\"**Lack of technical contribution**: The reviewer didn't see any technical contribution of the proposed method.\", \"**Lack of analysis and inspiration**: The authors didn't provide any principles, analysis, theory, or even intuitive explanations for their proposed methods. Readers are unable to understand why the proposed method outperforms others.\", \"**Duplicate claims of contribution**: Despite the claimed 5 contributions, most of them are duplicate and meaningless.\", \"**Bad presentation**: The proposed method is not presented well in the context of the paper. The authors prioritized massive details of datasets over task formulation and methodology, which makes it difficult for readers to understand the technical contribution of the paper. **LZMA** shows up in Figure 1 without any prior definition. Redundancy in this paper somehow validates the importance of text representation compression.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response. Regarding Q2 and Q3, they go hand in hand, and perhaps that is causing me some confusion. Given your reply to Q1, my understanding is that for each text, the Z variable is a vector of dimension hidden_dim. Is Z floating-point? If so, how is LZMA applied to this vector? 
Eq (3) says tanh() is applied, so the values of Z are continuous between -1 and 1. As a lossless compressor, LZ can only be applied to discrete inputs, which is why I don't understand how it can be applied without quantizing Z first.\n\nIn the neural compression literature, the latent variable is typically quantized, and then a likelihood model is fitted over the quantized symbols, which are discrete. During training, the likelihood model is optimized to minimize the entropy of the quantized symbols. This helps facilitate a joint rate-distortion trade-off during training. Regarding Q3, it seems that to get the best performance, one should take a similar approach here. But given that Z is not quantized, I have some significant confusion as to how LZ (or other entropy coding) is applied, and how the bitrates in the paper are reported.\"}", "{\"comment\": \"Thanks for your valuable comment.\\n\\nWe compressed the latent space by first converting the floating-point latent vector \\\\( Z \\\\) into its binary representation. To achieve this, we detached \\\\( Z \\\\) from the computation graph and moved it to the CPU to ensure compatibility with NumPy, which we used to serialize each element of \\\\( Z \\\\) into its raw byte format. Then we applied the LZMA compressor to generate the compressed representation of \\\\( Z \\\\), and we computed the compressed memory consumption at this stage. After this calculation, we used the LZMA decompressor to reconstruct the original binary representation and subsequently converted it back into the original latent vector \\\\( Z \\\\) with its original shape, data type, and gradient properties preserved. This reverse process ensures the recoverability of the latent space before it is fed to the decoder. 
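The serialization-and-compression round trip described above can be sketched as follows (a minimal illustration with made-up shapes and variable names, not the released code):

```python
import lzma

import numpy as np

# Stand-in for the latent vector Z produced by the encoder; in practice this
# would come from Z.detach().cpu().numpy(). Shape and dtype are illustrative.
z = np.random.default_rng(0).standard_normal((1, 512)).astype(np.float32)

# Serialize the raw float bytes and entropy-code them with LZMA.
compressed = lzma.compress(z.tobytes())
compressed_bytes = len(compressed)

# Lossless recovery: decompress, then restore dtype and shape before decoding.
z_restored = np.frombuffer(lzma.decompress(compressed), dtype=np.float32).reshape(z.shape)

assert np.array_equal(z, z_restored)  # bit-exact round trip
ratio = z.nbytes / compressed_bytes   # analogous to the reported ratio r
```

Note that random float32 bytes such as these have little byte-level redundancy, so LZMA may not shrink them; real latents compress only to the extent that their byte stream is redundant.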
We calculated the ratio, $r$, by dividing the *input memory consumption* by the *compressed memory consumption*.\n\nWhen calculating the bits per character (bpc), we used the formula: \n\n`bpc = (test_loss / ln(2)) / avg_num_of_chars_per_token`\n\n*ln(2)* is used to convert the test loss from nats (natural log base) to bits (log base 2). This aligns the loss with bits per character, reflecting the average information content per character; it is further normalized by the average number of characters per token.\n\n`Average Characters per Token = Total Number of Characters in Text / Total Number of Tokens (WordPiece)`\n\nWe hope this explanation addresses your concerns. If you have any other questions, we would be happy to answer them.\"}", "{\"title\": \"Noise Robustness and Lower Complexity\", \"comment\": \"Thank you for your valuable feedback. 
TextEconomizer is well-suited for auto-encoding tasks due to its bidirectional encoder, which captures contextual information from both past and future tokens, and its attention mechanism, which enables the decoder to focus on the most relevant parts of the input sequence. The model effectively encodes the input into a compact latent representation, leveraging GRU layers to manage sequential dependencies. During decoding, the attention mechanism ensures that the output is aligned with the input, preserving its structural integrity while reconstructing the sequence. This combination allows the model to excel in reconstructing input sequences with high fidelity, even when faced with noise or variability in the data. As we have annotated all the datasets with rigorous noise injection, the model learned to convert real-world noise into properly denoised text, achieving strong memory efficiency with negligible performance trade-offs.\n\nWe have observed significant advancements in image compression using variational autoencoders. In contrast, our approach to text compression uniquely incorporates real-world noise, demonstrating competitive results across all relevant metrics while maintaining lower architectural complexity. This is particularly noteworthy as transformer-based networks are more complex and require large amounts of data.\"}", "{\"title\": \"Additional Experimental Results.\", \"comment\": \"We\\u2019ve run experiments where 20% and 50% of tokens were passed to the decoder in a Vaswani-style Transformer. The token selection was done using a softmax layer based on a probability distribution. Additionally, we modified the architecture by replacing sinusoidal positional embeddings with Rotary Positional Embeddings, ReLU with SwiGLU, and LayerNorm with RMSNorm (we\\u2019ve been calling this version LLaMAFormer in the tables below) and report the results below. 
We also tested the 20% and 50% token selection on this LLaMAFormer architecture. All these experiments were run on the PWC (full) and WMT19 (600K) datasets, and the results have been illustrated in the table below.\\n\\n| Model Name | Dataset | # of Tokens | BLEU Score | BERT Score | ROUGE-L | PPL |\\n|-------------|---------|-----------|------------|------------|---------|-------|\\n| Transformer | PwC | 20% | 95.6077 | 0.9911 | 0.9835 | 4.846 |\\n| LLaMAFormer | PwC | 20% | 92.6727 | 0.9855 | 0.9649 | 4.896 |\\n| Transformer | WMT19 | 20% | 90.9721 | 0.9822 | 0.9597 | 6.01 |\\n| LLaMAFormer | WMT19 | 20% | 93.1267 | 0.9868 | 0.9712 | 5.036 |\\n\\n| Model Name | Dataset | # of Tokens | BLEU Score | BERT Score | ROUGE-L | PPL |\\n|-------------|---------|-----------|------------|------------|---------|-------|\\n| Transformer | PwC | 50% | 96.9844 | 0.9941 | 0.9926 | 4.34 |\\n| LLaMAFormer | PwC | 50% | 95.9452 | 0.9922 | 0.9861 | 4.215 |\\n| Transformer | WMT19 | 50% | 93.1854 | 0.9867 | 0.9747 | 5.41 |\\n| LLaMAFormer | WMT19 | 50% | 93.6126 | 0.9878 | 0.9744 | 4.879 |\\n\\n| Model Name | Dataset | # of Tokens | BLEU Score | BERT Score | ROUGE-L | PPL |\\n|-------------|---------|-----------|------------|------------|---------|-------|\\n| LLaMAFormer | PwC | 100% | 97.1551 | 0.9943 | 0.9941 | 4.185 |\\n| LLaMAFormer | WMT19 | 100% | 93.9515 | 0.9883 | 0.9764 | 4.822 |\"}", "{\"title\": \"Clarification on Teacher-forcing and Experimental Results with Parameter Matching in Transformer\", \"comment\": \"Reply [Q1]: In our model, teacher forcing refers to the process where, during training, the target sequence tokens are directly used 50% of the time as input to the decoder at each time step instead of the decoder\\u2019s own predicted output, and 50% of the time the decoder receives its own predicted output as input. 
Incorporating this approach helps guide the decoder more effectively towards generating accurate outputs and stabilizes the training process.\", \"reply\": \"W3. [Experiments] Thank you for your insightful suggestion regarding running experiments with transformer parameters the same as TextEconomizer's. We conducted experiments using a Vaswani-style Transformer by reducing the feed-forward layer dimensions and the number of encoder-decoder layers to 5 instead of 6. Interestingly, we observed that the results remained identical on the PwC dataset. I have noted the results below:\\n\\n| Model | # of Parameters | PPL | BLEU | ROUGE-L | BERT SCORE |\\n|------------|---------------|------|-------|----------|------------|\\n| Transformer | ~67M | 4.2 | 97.43 | 0.9954 | 0.9948 |\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes an autoencoder for text data, with the goal of lossy text compression. The goal is to preserve text semantics while minimizing the transmitted rate. The method, called TextEconomizer, consumes noisy text, transmits a fixed-size latent vector, and reconstructs the text, using cross-entropy loss with partial teacher-forcing. Experiments compare TextEconomizer to several baselines, primarily lossless text compressors, and some text language models.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Lossy text compression seems to be a relatively novel area of research, so the novelty of such a work is good.\"], \"weaknesses\": [\"I found the overall presentation of the work to be a bit confusing. Section 3 seems to include a lot of data creation details, which appears to explain the noise-adding process, but a lot of the details I felt should have been placed in the experimental setup or appendix.\", \"Many of the baselines mentioned in the related work are not compared to. 
The only relevant baseline used in section 6 that actually does lossy text compression appears to be NUGGET, although I may be mistaken. Everything else appears to be a language model (such as T5) or lossless text compressor (Huang et al., 2023). It is difficult to judge the efficacy of TextEconomizer without a comparison to the lossy text compressors in the related work.\", \"Furthermore, the baselines section of 5.2 does not include all the baseline comparisons actually used in section 6.\", \"In Table 1, it is hard to say that TextEconomizer is superior to NUGGET (the only other lossy text compressor). NUGGET has a lower memory compression ratio but superior BLEU.\", \"In addition, NUGGET is missing from Table 3. It would be helpful to have a qualitative comparison for NUGGET.\", \"In my opinion, it may also be useful to have a metric such as Levenshtein score, in order to measure the similarity between texts in text space, rather than just BERTScore, which compares in the embedding space. This comparison would help support the qualitative results in Table 3.\", \"An ablation study is missing. I think this is important because the texts shown seem to be relatively short. Since the latent variable is fixed-size, it is possible the performance may suffer if the input text lengths are longer. It would be helpful to know how the model performance changes based on (i) the fixed-size latent variable size and (ii) the input text length. In addition, the ablation study could support other design choices, such as the noise-adding process.\"], \"questions\": [\"In addition to the questions in the weaknesses section, I have the following questions:\", \"Why is the fixed-size latent space necessary? I think this was motivated in the introduction, but I found the explanation there to be confusing. Presumably, a variable-size latent space could still be entropy coded or LZ-coded.\", \"Moreover, is the latent Z quantized before being encoded with LZ? 
If not, this does not seem possible, as lossless compression needs a discrete input.\", \"Why not additionally use an entropy model to minimize the entropy of the latent z? This would further reduce the rate, and jointly minimize the rate and cross-entropy. It can also support a variable-size latent space.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate your response, which has helped us further improve our work. Your time and contribution mean a lot to us. At the same time, if our explanation has addressed your concerns, we kindly hope that you would consider increasing the score or confidence of our work. If you still have any questions regarding our work, please feel free to contact us, and we will respond as soon as possible.\\n\\nThanks again for your assistance here.\"}" ] }
DsIOUoZkVk
The "Law" of the Unconscious Contrastive Learner: Probabilistic Alignment of Unpaired Modalities
[ "Yongwei Che", "Benjamin Eysenbach" ]
While internet-scale data often come in pairs (e.g., audio+image, image+text), we often want to perform inferences over modalities unseen together in the training data (e.g., audio+text). Prior work has addressed this issue by learning multiple contrastive embedding spaces between existing modality pairs, implicitly hoping that unseen modality pairs will end up being aligned. This theoretical paper proves that this hope is well founded, under certain assumptions. Starting with the proper Bayesian approach of integrating out intermediate modalities, we show that directly comparing the representations of data from unpaired modalities can recover the same likelihood ratio. Our analysis builds on prior work on the geometry and probabilistic interpretation of contrastive representations, showing how these representations can answer many of the same inferences as probabilistic graphical models. Our analysis suggests two new ways of using contrastive representations: in settings with pre-trained contrastive models, and for handling language ambiguity in reinforcement learning. Our numerical experiments study the importance of our assumptions and demonstrate these new applications.
[ "theory", "contrastive learning", "probabilistic graphical models", "multi-modal learning", "reinforcement learning" ]
Accept (Poster)
https://openreview.net/pdf?id=DsIOUoZkVk
https://openreview.net/forum?id=DsIOUoZkVk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w9XpxmXb8A", "u57iHFCZOd", "tpvpnw6SsS", "nqi6RvHbz7", "ljaL0i5YcQ", "e2PFm9zFKG", "cHLpGODcab", "afg4nS9aUx", "Z5MtN0XhQe", "Xjarh4HsPH", "XZ0a66lMRA", "Tr6lP7MsfK", "P13SYyp07a", "OHQx3Wyasq", "MekF2hKmJD", "LoPynk0N0q", "JFZXDOyTB5", "FIXDfOwE6I", "FDeN92p8AL", "7AF3t1v2Vb", "6H33uo0a7U", "0z7V2MYPLE", "0BKHyGKUI3" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1733017851213, 1732555246159, 1737523931209, 1732620172900, 1732287412591, 1732595702275, 1732287673272, 1732287401643, 1730607146725, 1732552998138, 1732554584467, 1732554527809, 1732554478175, 1730671387350, 1732287251525, 1732552989181, 1732513854196, 1732992818762, 1734625763480, 1732553004640, 1730683579011, 1730686733084, 1733018303143 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Reviewer_rpJk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Reviewer_RYrX" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Reviewer_RYrX" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Area_Chair_SRzm" ], [ "ICLR.cc/2025/Conference/Submission8771/Area_Chair_SRzm" ], [ "ICLR.cc/2025/Conference/Submission8771/Area_Chair_SRzm" ], [ "ICLR.cc/2025/Conference/Submission8771/Reviewer_Xnbz" ], [ 
"ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Reviewer_i8rh" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Area_Chair_SRzm" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ], [ "ICLR.cc/2025/Conference/Submission8771/Reviewer_i8rh" ], [ "ICLR.cc/2025/Conference/Submission8771/Reviewer_rpJk" ], [ "ICLR.cc/2025/Conference/Submission8771/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author Response - Rebuttals Ending Soon\", \"comment\": \"Dear Reviewer,\\n\\nWe have worked hard to incorporate the additional feedback by running the requested experiments and revising the paper (See Appendix C.2). We have also included additional audio-text retrieval experiments on the AudioCaps dataset [1]. We would really appreciate if you could confirm whether these changes address the concerns about the paper.\\n\\nKind regards,\\n\\nThe Authors\\n\\n-------------------\\n\\n[1] Additional AudioCaps Experiment Details:\\n\\nThe experiments were run on the AudioCaps test set using the same setup as Appendix C.2.1. We measured Recall@1 accuracy on sets of 25 samples, with our Monte Carlo algorithm computing LogSumExp over sampled image frames from videos in the AudioCaps training set.\", \"results_for_audio_text_retrieval\": \"Dataset | Baseline | LogSumExp | Direct (ImageBind) | CLAP\\n----------|----------|--------------|-------------------|-------------\\nAudioSet | 0.040 | 0.294\\u00b10.035 | 0.291\\u00b10.020 | 0.497\\u00b10.040\\nAudioCaps | 0.040 | 0.468\\u00b10.018 | 0.568\\u00b10.017 | 0.795\\u00b10.016\\n\\nThe substantial improvement of LogSumExp over the baseline provides strong validation of our theoretical framework. These results further demonstrate that our method generalizes well to standard audio-text retrieval benchmarks beyond AudioSet. 
As the revision period has ended, we will include the new AudioCaps experiments in the camera-ready version of the paper.\"}", "{\"comment\": \"Thanks for the response, which is helpful, particularly in clarifying Assumption 3 and its role in the work. It would be interesting to see if similar observations can be found in additional experiments after swapping modalities. Since I have not seen such experiments yet, I will maintain my position for now. That said, given the authors' commitment to addressing the feedback in the next version, I believe the updated work has the potential for acceptance, even if not here.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your clear feedback. We have now added the requested experiments in Appendix C.2, where we test our framework using audio as an intermediate modality for vision-language alignment and vision as an intermediate modality for audio-language alignment. The results strongly support our theoretical analysis, showing that LogSumExp closely matches (within one percentage point) direct evaluation accuracy regardless of the intermediate modality type.\\n\\nIn addition to these modality-swapping experiments, we have already revised the paper and added new experiments to address the three weaknesses noted in your original review. We believe these revisions have strengthened the paper. Do these changes fully address your original concerns? If not, we would be happy to run additional experiments or further revise the paper.\\n\\nKind regards,\\n\\nThe Authors\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\n\\nThank you for your detailed review and your positive and constructive feedback. To address the concern about the Monte Carlo approximation, we have run a new experiment (Fig. 
5) showing that this gap shrinks towards zero as we increase the number of samples; note that our theoretical results only say that the methods should be the same in the limit of infinite samples (i.e., computing the exact expectation). To incorporate the other suggestions, we have made substantial revisions to the paper organization and added new results on another real-world model (ImageBind, in addition to the real-world experiments already included in the initial submission). Together with the discussion below, **do these new experiments and revisions fully address the reviewer's concerns?** If not, we'd be happy to run additional experiments and make further revisions to the paper.\\n\\n> additional real-world experiments\\n\\nWe have added an experiment with ImageBind (see revised Fig. 5), which provides an additional modality pairing example (Aligning Audio and Text through the shared Image modality).\\n\\n> Gap in the Monte Carlo approximation method and the main framework\\n\\nThe 12% gap in Fig. 4 is caused by using a small number of Monte Carlo samples in the original paper. When we increase the number of samples from 600 \\u2192 500,000, we observe that this gap shrinks to almost zero. This is in line with our theoretical results, which say that these methods should be equivalent if the expectation is computed exactly. We have added a new Fig. 5 to show how this gap shrinks to zero as the number of samples is increased.\\n\\n> Assumption 3 is noted as not strictly necessary, introducing ambiguity in the experiments\\n\\nWe have revised Sec. 5 to clarify that Assumption 3 is necessary for our direct comparison method (prior work) but not for the Monte Carlo approach (our approach). 
Our experiments validate this distinction: direct evaluation and Monte Carlo perform well when uniformity holds (Section 6.2.2 and Figure 5), while the Monte Carlo algorithm excels on highly non-uniform reinforcement learning data (Section 6.3).\\n\\nOne important contribution of our paper is to highlight that a heuristic that is already commonly used in practice (\\\"direct comparison\\\") implicitly relies on Assumption 3. By highlighting this assumption, our paper (1) provides the first (to the best of our knowledge) proof of why this heuristic often works well in practice, (2) explains why this heuristic can fail in some settings, and (3) provides a new method (Monte Carlo) that continues to work when this assumption is violated (such as language-conditioned reinforcement learning).\\n\\n> The experimental setup in Section 6 lacks standard organization\\n\\nWe have revised the organization of Section 6.3 to clarify the connection between our theoretical framework and the RL experiments, while providing more rigorous empirical validation. We have also clarified the use of evaluation metrics in Section 6 for the synthetic data experiments. \\n\\n> Lemma 2 formatting issue\\n\\nWe have made revisions to Lemma 2 and its proof.\\n\\n> swap the modalities within the same setup. \\n\\nThanks for this suggestion! We have started working on this by running experiments using audio and image as the intermediate modality. We've been prioritizing the other requested experiments and haven't finished this experiment yet. We will include the final results in the camera-ready version of the paper.\\n\\nKind regards,\\n\\nThe Authors\"}
I updated my score accordingly.\\n\\nIn general I am still leaning negative, and the main concern is still on the applicability to real-world settings. The updated results are still on the same real-world dataset AudioSet (which is noisy itself), while the others remain on synthetic data. I feel more solid experiments on extra real-world and standard benchmark datasets, like ImageNet-1K (for image-text retrieval) or AudioCaps/Clotho (for audio-text retrieval) would be helpful to better validate the proposed method.\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\n\\nThanks for the detailed review and constructive feedback. To incorporate the reviewer's feedback and answer important questions, we have run two additional experiments. The first new experiment shows that the previously observed Monte Carlo performance drop is primarily a function of sample size, and the performance drop rapidly diminishes as we increase the number of Monte Carlo samples (see Fig. 5). The second new experiment shows that our theoretical results also apply to an additional real-world model (ImageBind) (see Fig. 5). **Together with the further revisions discussed below, do these new experiments fully address the reviewer's concerns?** We look forward to continuing the discussion, and would be happy to make additional revisions or run additional experiments.\\n\\n> Why does performance drop on LanguageBind\\n\\nOur new experiments demonstrate this gap is primarily due to limited Monte Carlo (MC) sampling. In Figure 5, we show:\\n- With 1000 MC samples: 12% performance gap\\n- With 100,000 MC samples: <3% gap\\n- With 500,000 MC samples: <1% gap, matching direct computation.\\n\\nThis scaling analysis was validated across different models (LanguageBind, ImageBind), demonstrating consistent convergence behavior.\\n\\n> real-world data\\n\\nTo study this question, we have run additional experiments with more models on real-world data (LanguageBind and ImageBind). 
The new results, found in Fig. 5, validate our Monte Carlo algorithm as well as Assumptions 1 and 2.\\n\\n> are modalities A (e.g., audio) and C (e.g., image) really independent conditioned on B (language)\\n\\nWe agree that, in practice, this assumption may be violated (see Sec. 5). We have included an ablation experiment studying what happens when this assumption is violated in Appendix C.3. We believe that our theoretical results are important because they are the first (to the best of our knowledge) theoretical characterization of why the direct comparison method should work; this result is important because (1) it helps explain that this prior approach is not a heuristic, but (2) that it implicitly makes this assumption and so may fail in certain settings (see, e.g., Appendix C.3).\\n\\n> Section 6.3 is difficult to understand from the main text alone\\n\\nWe have revised the exposition in Section 6.3 to improve clarity and better integrate its connection with the supporting materials in the Appendix. Does this sufficiently address the reviewer\\u2019s concerns? If not, we would be happy to further revise the paper.\\n\\nWe would like to thank the reviewer again for the feedback! We believe the additional experiments and clarifications have provided additional support for the theoretical framework. Please let us know if we have addressed the concerns.\\n\\nKind Regards,\\n\\nThe Authors\"}
Together with the discussion and further revisions described below, **do these new experiments and revisions fully address the reviewer's concerns?** If not, we would be happy to run additional experiments or further revise the paper.\\n\\n> Detailed Explanation of why LogSumExp can help language ambiguity\\n\\nWe agree that the benefits in handling language ambiguity stem from our probabilistic framework. We have added a step-by-step analysis in Appendix D.5 to demonstrate these benefits.\\n\\n> The proposed assumptions seem to less contribute to the real-data application\\n\\nTo study this question, we have run additional experiments with new models on real-world data (ImageBind and LanguageBind). The new results, found in Fig. 5, validate our Monte Carlo algorithm as well as Assumptions 1 and 2.\\n\\nWe agree that theoretical results make certain assumptions, but our empirical results (see Fig. 4) have already shown the importance of these results in real-data settings where the assumptions may be violated. One contribution of our paper is to highlight that a heuristic that is already commonly used in practice (\\\"direct comparison\\\") implicitly relies on a certain assumption (Assumption 3). By highlighting this assumption, our paper (1) provides the first (to the best of our knowledge) proof of why this heuristic often works well in practice, (2) explains why this heuristic can fail in some settings, and (3) provides a new method (Monte Carlo) that continues to work when this assumption is violated (such as language-conditioned reinforcement learning).\\n\\nThanks for raising these important questions about our theoretical and experimental results. The additional experiments and visualizations have strengthened our paper. 
Please let us know if we have addressed the concerns.\\n\\nKind Regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper tries to provide a theoretical justification for the scenario of using a third modality to connect two modalities which are not explicitly trained together during contrastive learning. A typical use case is to conduct image-audio retrieval tasks through CLIP and CLAP with language as the intermediate.\\n\\nThe authors prove that such a heuristic is theoretically grounded, by making a conditional independence assumption (of the two modalities to be connected), along with two ideas published in previous work.\\n\\nTwo numerical experiments with synthetic and real-world datasets are employed to validate the effectiveness of the proof.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"- It is desired to have some theoretical justification for the common use case or practice of connecting two modalities that are not explicitly trained together during contrastive learning. There is published work, but it contains little theoretical justification.\", \"weaknesses\": \"- The method derived from the proof, LogSumExp, does not seem to work well for LanguageBind, and it leads to a performance drop (Figure 4).\\n\\n- Overall the proposed proof seems mostly effective on synthetic datasets. I am thus concerned about how strong the assumptions are, especially Assumption 1 -- are modalities A (e.g., audio) and C (e.g., image) really independent conditioned on B (language)?\\n\\n- Writing needs to be improved; the Sec. 6.3 experiment is difficult to understand with the main text alone.\", \"questions\": \"Line 457. LanguageBind is claimed to assume the proposed Law implicitly. However, when applying the law explicitly, why does the performance actually drop? 
I was expecting no impact on LanguageBind if the Law is implicitly implemented already.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\n\\nWe have worked hard to incorporate the review feedback by running new experiments and revising the paper. We'd really appreciate it if you could confirm whether these changes address the concerns about the paper. If we have misunderstood any concerns, we'd like to learn that now so we can further revise the paper or run additional experiments.\\n\\nThank you!\\n\\nThe Authors\"}
They then show that retrieval accuracy remains high (indicating successful alignment) when Assumptions (i) and (ii), which involve Monte Carlo approximations, are satisfied, using an intermediate modality like language.\\n\\nFinally, in a real-world application, the authors apply contrastive representations from pre-trained contrastive models within a probabilistic framework to manage language ambiguity in reinforcement learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposes theoretical assumptions to guarantee the universally agreed alignment for multiple modalities.\\n2. The experiments on synthetic data verify the correctness of the assumptions. \\n3. The real-application experiment shows the promise of the assumptions in reinforcement learning for handling language ambiguity.\", \"weaknesses\": \"It is not clear why the proposed framework is better in language ambiguity situations; is it because the framework is based on a probabilistic framework, i.e., the uncertainty?\\n\\nThe proposed assumptions seem to contribute less to the real-data application, i.e., language ambiguity in a reinforcement learning framework, which limits its practical contribution.\", \"questions\": \"See weakness above.\\n\\n1. Are there any practical implications based on the assumptions?\\n2. Detailed explanations of why the framework can help with language ambiguity; is it possible to apply this to other frameworks? What is the motivation for taking different multi-modality data into consideration in the RL settings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\nThank you for your detailed review and constructive feedback. We especially appreciate the detailed review of the formal proofs in the paper. 
We have also incorporated the reviewer feedback into the revised paper (see below).\\n\\n> Intuition for Equation 4.2\\n\\nEquation 4.2 follows from the triangle inequality \\u2013 note that all vectors are unit length. We have added this to the paper.\\n\\n> Why the von-Mises-Fisher distribution in Lemma 2?\\n\\nThe reviewer is correct that we use the vMF distribution in Lemma 2 to match Assumption 3. Based on prior work, we assume our intermediate embedding distributions are uniformly distributed over the unit hypersphere. The von-Mises-Fisher distribution with parameter $\\\\kappa = 0$ captures this uniform distribution.\\n\\n> Monte Carlo computation and time efficiency\\n\\nWhile Monte Carlo methods can be computationally expensive, in our case they only incur a large one-time cost. The majority of computation time comes from generating embeddings for the intermediate representation (running inference for one image through ImageBind takes **23 milliseconds**). However, this needs to be done only once and can be cached for all future computations. Subsequently, the time complexity becomes linear in the number of Monte Carlo samples with a small constant factor (for an embedding dimension of 512, a single dot product takes **16 microseconds**). \\n\\n> Section 6.1 does not suffice as a modality\\n\\nWe agree that Section 6.1 does not meet the conventional definition of a modality. We merely use this as a didactic example to illustrate our theoretical framework, before progressing to complex, real-world modalities in subsequent experiments in Sec. 6.2.\\n\\n> Typos\\n\\nWe have fixed these in the revised paper.\\n\\nWe would like to thank the reviewer again for their detailed comments regarding this work. 
Please let us know if there are any additional questions or concerns.\\n\\nKind Regards,\\n\\nThe Authors\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\n\\nWe have worked hard to incorporate the review feedback by running new experiments and revising the paper. We'd really appreciate it if you could confirm whether these changes address the concerns about the paper. If we have misunderstood any concerns, we'd like to learn that now so we can further revise the paper or run additional experiments.\\n\\nThank you!\\n\\nThe Authors\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for your response, and it answers my questions :). As my score right now is pretty high, I will maintain my score.\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\n\\nFollowing the suggestion, we have conducted new audio-text retrieval experiments on the AudioCaps dataset. In addition, we have previously added Appendix C.2 that tests both audio-text and image-text retrieval on AudioSet by performing LogSumExp over different intermediate modalities (image and audio respectively). These new experiments further validate our method's real-world applicability across different modality combinations.\\n\\nThe new AudioCaps experiments are run on its test set using the same experimental setup as Appendix C.2.1. 
Our evaluation measures Recall@1 accuracy on sets of 25 samples, with our Monte Carlo algorithm computing LogSumExp over sampled image frames from videos in the AudioCaps training set.\", \"below_are_the_audio_text_retrieval_results_on_audioset_and_audiocaps\": \"| Dataset | Baseline | LogSumExp | Direct (ImageBind) | CLAP |\\n|---------|----------|-----------|-------------------|------|\\n| AudioSet | 0.040 | 0.294\\u00b10.035 | 0.291\\u00b10.020 | 0.497\\u00b10.040 |\\n| AudioCaps | 0.040 | 0.468\\u00b10.018 | 0.568\\u00b10.017 | 0.795\\u00b10.016 |\\n\\nThe substantial improvement of LogSumExp over the baseline provides strong validation of our theoretical framework. These results further demonstrate that our method generalizes well to standard audio-text retrieval benchmarks beyond AudioSet. Do these new experiments fully address the reviewer's concerns?\\n\\nAs the revision period has ended, we will include the new AudioCaps experiments in the camera-ready version of the paper.\"}", "{\"metareview\": \"This paper presents a theoretical framework for understanding when multimodal knowledge alignment occurs. They prove three assumptions are key to multimodal learning, all involving approximations to the distribution of the learned representations.\", \"pros\": [\"The multimodal framework is topical and their approaches are principled in theory.\", \"Their synthetic experiments confirm that when some assumptions are violated, models fail to align across modalities.\", \"They include several more natural multimodal datasets in the experiments.\", \"Their framework explains how ambiguity in language can be better handled through explicitly contrastive learning.\"], \"cons\": [\"Skeptical of the connection between the theoretical and real-world setting. 
More realistic experiments were not finished during rebuttals, and the last semi-realistic experiments are not strong on their own.\", \"Most experiments are very artificial or use noisy datasets.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewer rpJk did not update their score because the new results were not yet added to the paper, but their requested experiments were run. Xnbz never replied to the rebuttal (main objection is lack of real-world application of assumptions). The last response from authors, post-revisions, included more semi-realistic experiments; although the modalities added are limited, these seem satisfactory to me. (No reviewers responded to the last experiments as they were added late, but I believe they expand the realistic experiment basis sufficiently.)\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear Reviewer,\\n\\nWe have worked hard to incorporate the review feedback by running new experiments and revising the paper. We'd really appreciate it if you could confirm whether these changes address the concerns about the paper. If we have misunderstood any concerns, we'd like to learn that now so we can further revise the paper or run additional experiments.\\n\\nThank you!\\n\\nThe Authors\"}", "{\"summary\": \"This paper proves, under certain assumptions, the \\u201chope\\u201d that unseen modality pairs will be aligned when the embedding spaces of models for existing modality pairs are trained contrastively. Using a Bayesian approach, the paper shows that directly comparing the representations of data from unpaired modalities can recover the same likelihood ratio. The analysis shows that contrastive representations can answer many of the same inferences as probabilistic graphical models. The results of the paper suggest that contrastive representations can be used in settings with pre-trained contrastive models, and for handling language ambiguity. 
Experiments are done to verify the theoretical results over synthetic datasets and a more realistic setting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The three assumptions are clearly stated, and proofs as well as empirical tests for the assumptions are provided. The derivation of the conclusion that the relationships between the $\\\\phi$ functions for both normalized and unnormalized representations closely resemble their critic functions is sound.\", \"Empirical evaluations using the Monte Carlo approximation validate some of the assumptions, and also raise the possibility of using only intermediate-modality data to align all the modalities\\u2019 representations. Experiments are done in both a synthetic setup (Sections 6.1, 6.3) and on real-world data (Section 6.2).\"], \"weaknesses\": \"Section 6.1\\u2019s setup might be a bit artificial. Since modalities A and C are only projections of B, it does not quite satisfy the general definition of \\u201cmodality\\u201d that most of us agree on.\", \"questions\": [\"In the equation from Section 4.2, is $\\\\phi(A)^T\\\\phi(B) + \\\\phi(B)^T\\\\phi(C) \\\\geq \\\\phi(A)^T\\\\phi(C)$ because the trained pairs are almost guaranteed to be closer than the unseen pair?\", \"Missing subscript $C$ for $\\\\phi_C(C)$ in line 247?\", \"Apologies for not being familiar enough with the theory of the work, but why is von-Mises-Fisher the distribution in Lemma 2? Is it just because of the normalization constant, or to match Assumption 3?\", \"The citation format in Section 6.2.1 should be changed. E.g., AudioSet (Gemmeke et al., 2017).\", \"Monte Carlo approximation can be expensive. 
What is the cost in computation and time efficiency?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"A common belief in contrastive multimodal learning is that the embedding spaces of seen modality pairs (A&B and B&C) naturally align with those of unseen modality pairs (A&C). This paper introduces the \\\"Law of the Unconscious Contrastive Learner\\\", showing that under specific assumptions about the geometric and probabilistic properties of contrastive embeddings, it is possible to establish relationships between unseen modality pairs. The law relies on three key assumptions: the first two allow for evaluating the connection between two unpaired modalities (A&C) via a Bayesian approach that integrates over an intermediate modality (B), while the third enables the use of the intermediate representation marginal distribution to derive a closed-form solution. The law is then formalized into a practical algorithm using Monte Carlo approximations. It is validated through experiments on synthetic and real datasets, including CLIP and CLAP. Results show the alignment of unpaired modalities, though some assumptions may not be strictly necessary in practice.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The idea of aligning representations of the same concept or sense, regardless of modality, is valid and taps into a widely-held belief. Although this is commonly accepted, it has not been well grounded in theory. This paper makes strides by validating this belief under certain assumptions. It provides a foundation that may guide further theoretical development in contrastive embedding alignments.\\n2. 
Although I did not check it line-by-line, the proposed theory assumes probabilistic contrastive learning, allowing connections between unpaired modalities (A&C) to be established across a shared intermediate modality (B) using Bayes' rule. Its algorithm using Monte Carlo approximations is backed by empirical evidence, which is interesting compared to the Oracle methods, where modality pairs (A&C) can be seen.\\n3. The paper is well-presented, especially the theoretical sections in the first half. This clarity makes the framework and assumptions easy to follow. Results shown in Figure 2 are helpful for a good understanding.\", \"weaknesses\": \"1. The experimental setup in Section 6 lacks standard organization, making it difficult to immediately grasp the task, input, output, and evaluation metrics. A clearer presentation of each experiment would improve readability and understanding.\\n2. There appears to be a gap between the Monte Carlo approximation method and the main theoretical framework. For example, Assumption 3 is noted as not strictly necessary, introducing ambiguity in the experiments and leaving some uncertainty about the necessity of all three proposed assumptions given the results.\\n3. Of the three main experiments, both the first (6.1) and third (6.3) are conducted on in-house datasets, and only the second experiment applies the algorithm to an external dataset (AudioSet) by comparing CLIP and CLAP, but its scope, datasets, model types, and modality pairings are limited.\", \"questions\": \"An easy way to strengthen the experimental findings could be to swap the modalities within the same setup. 
Theoretically, there is no strict assignment of text, image, or audio to roles A, B, or C, so these modalities could be switched to test the robustness of results.\\n\\n(There appears to be a formatting issue in the proof of Lemma 2 on page 5.)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response - Rebuttals Ending Soon\", \"comment\": \"Dear Reviewer,\\n\\nWe have worked hard to incorporate the review feedback by running new experiments and revising the paper. As we haven't yet received your response to our rebuttal and the discussion period is coming to an end, we would really appreciate if you could confirm whether our changes address your concerns about the paper.\\n\\nKind Regards,\\n\\nThe Authors\"}" ] }
DrNN5qx66Z
TVBench: Redesigning Video-Language Evaluation
[ "Daniel Cores", "Michael Dorkenwald", "Manuel Mucientes", "Cees G. M. Snoek", "Yuki M Asano" ]
Large language models have demonstrated impressive performance when integrated with vision models, even enabling video understanding. However, evaluating these video models presents its own unique challenges, for which several benchmarks have been proposed. In this paper, we show that the most widely used video-language benchmarks can be solved without requiring much temporal reasoning. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative. As a solution, we propose TVBench, a novel open-source video multiple-choice question-answering benchmark, and demonstrate through extensive evaluations that it requires a high level of temporal understanding. Surprisingly, we find that most recent state-of-the-art video-language models perform similarly to random chance on TVBench, with only a few models, such as Qwen2-VL and Tarsier, clearly surpassing this baseline.
[ "Video-Language evaluation", "Video-Language benchmark" ]
Reject
https://openreview.net/pdf?id=DrNN5qx66Z
https://openreview.net/forum?id=DrNN5qx66Z
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xuLjwnv0jf", "wevkWwItbb", "lep6Ier12x", "gshTU2IYcz", "gSKD44FUXh", "ZLCc1xGpEu", "YGAzffxB0o", "VvvyxsppSC", "V73MUR46yY", "T7nLHeABUX", "QTrgiKMI2H", "PcYO2OFHJ6", "OqDeFoJTob", "Lbop2xF7qD", "KwhVuRcQu8", "JBbF7kehlg", "E3THvNoUT5", "DpkRdxRhl1", "CcDhGvxxC6", "C0BPaugkT5", "B6qVAoJIL5", "Azzpj8twHu", "Av0bTApzg8", "73SI5vKcAR", "6fFprReKfZ", "2Od5QdBOWD" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_review" ], "note_created": [ 1733183045015, 1732550560385, 1732227737677, 1732907730461, 1730526397167, 1732550590905, 1732855162477, 1732227953027, 1732227648234, 1732227978239, 1732227501675, 1732227622567, 1730588609719, 1732227756202, 1733196537402, 1734353763466, 1732227523530, 1732227991344, 1732546057596, 1732227783305, 1730700263228, 1732730437600, 1732550533603, 1733316782533, 1737524128667, 1730682000181 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11521/Reviewer_TPmi" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Reviewer_QEG7" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Reviewer_TPmi" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Reviewer_A5py" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Area_Chair_La1x" ], [ "ICLR.cc/2025/Conference/Submission11521/Area_Chair_La1x" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Reviewer_TPmi" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Submission11521/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11521/Reviewer_sqwF" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the response!\\n\\nI feel like most benchmarks have both easy and hard examples, and the technology innovations are in tackling the remaining hard examples after the easy ones are solved by existing methods. In this sense, I do not consider that MVBench fails to meet the criteria of a robust temporal benchmark. From your experiments, cleaned-up version of MVBench being close to random chance suggests that the model (Gemini) doesn't yet have a robust temporal understanding capability, and this is the characteristic of a strong benchmark.\\n\\nFor this reason, I would suggest to reconsider the claims on MVBench.\"}", "{\"title\": \"Feedback on Rebuttal\", \"comment\": \"Dear Reviewer A5py,\\n\\nThank you again for the time and effort spent on your thorough review of our paper. Since the author-reviewer discussion deadline is fast approaching, we kindly ask for feedback on our responses. 
We would be happy to discuss more if there are still some open questions.\\n\\nBest Regards, \\nAuthors\"}", "{\"title\": \"Rebuttal (1/2)\", \"comment\": \"Thank you for the time and effort spent in reviewing our paper. In the following, we reply to all the concerns raised in the review.\\n\\n**W1 \\u2013 Discussion on TVBench and future evaluation benchmarks:** Thank you for highlighting this. We believe that new benchmarks must be created every few years to keep pace with the rapid advancements in AI. However, our benchmark, TVBench, is far from being solved. Even the best-performing method, Tarsier-34B, achieves only ~20% above the random baseline, while many recent models, such as mPLUG-Owl3, GPT-4o, VideoGPT+, and PPLaVA, perform close to random chance, with less than a 9% improvement.\\nIn TVBench, we source videos for each task from various existing datasets (see Section 5.2), including Perception Test, CLEVRER, STAR, MoVQA, Charades-STA, NTU RGB+D, FunQA, and CSV.\\nTVBench thus includes a diverse range of scenarios, featuring both first- and third-person perspectives, indoor and outdoor environments, and real and synthetic data. It comprises 2,654 QA pairs across 10 different tasks, ensuring robust coverage of various temporal challenges. This diversity is essential for creating a benchmark that truly assesses temporal reasoning capabilities. 
For video QA examples of TVBench, please refer to Appendix A.4.\\n\\n**W2 \\u2013 Details on Task Selection:** Our benchmark, TVBench, draws inspiration from existing datasets [1, 2] to cover different skill areas\\u2014Memory, Abstraction, Physics, and Semantics\\u2014through 10 selected temporally challenging tasks: repetition counting (Action Count); properties of moving objects (Object Shuffle, Object Count, Moving Direction); temporal localization (Action Localization, Unexpected Action); sequential ordering (Action Sequence, Scene Transition, Egocentric Sequence); and distinguishing between similar actions (Action Antonyms). We finalized these 10 tasks by verifying that they are free from spatial and textual biases, unlike previous benchmarks.\\n\\n--- continued in next comment ---\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response and for acknowledging TVBench as a strong benchmark.\\n\\n\\n**Re. MVBench claims weakened:** We respectfully disagree with this. The performance drop across different settings, such as image or shuffled inputs, remains significant. The possibility of any model (e.g., Gemini 1.5 Pro) solving multiple tasks (Table 1) equally well using only a single random frame rather than the full video is very concerning. Even more strikingly, Gemini achieves nearly 50% accuracy on the entire MVBench dataset with just a single random frame as input\\u201421% above the random chance baseline. Moreover, when frames are shuffled, Gemini\\u2019s performance drops only slightly from 60.5% to 56.8%, clearly showing that temporal aspects of the videos can be disregarded. These findings demonstrate that MVBench cannot reliably assess the temporal understanding of video-language models. Therefore, it does not meet the criteria for a robust temporal benchmark that our community should adopt.\\n\\n\\n**Re. 
Cleaning MVBench?:** \\nAs suggested, we exclude samples that can be correctly answered using text-only, single-image, and shuffled-video inputs for Gemini 1.5 Pro. This leaves only 978 samples (24.7% of MVBench). On this subset, Gemini 1.5 Pro, which has proven to be a strong model on TVBench, achieves a performance of 27.3%, equivalent to random chance. This indicates that all samples correctly answered by Gemini 1.5 Pro are influenced by spatial or textual biases. The remainder fails as a temporal benchmark since performance aligns with random chance, further emphasizing the need for a more robust benchmark like TVBench.\\n\\n\\nThank you once again for your valuable feedback. We hope our response effectively addresses your remarks, and we remain open to further discussion if needed.\"}", "{\"summary\": \"This paper analyzes issues in the currently most popular video-language benchmark (MVBench), and proposes a new benchmark that alleviates those issues. Specifically, the authors provide solid evidence showing that MVBench is less temporal and less visual, and that the simple solution of switching to an open-ended evaluation protocol can't address the problem. The authors then manually design question types and create questions by combining and filtering questions from existing video-language benchmarks. The authors show that the resulting dataset does not suffer from these issues.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper works on an important problem of video-language model evaluation, and identifies issues in a widely used benchmark with solid evidence. I believe the impact of the paper will be high.\", \"I like the analysis and experiments the authors provided for MVBench; the numbers in Table 1 and Table 2 are convincing, and the examples in Figures 2-4 are illustrative. 
The presentation is well structured and convincing.\", \"Evaluation on the new benchmark in Table 4 shows that text- and image-only baselines are close to random, and that shuffling or reversing the frames drops the performance for all models. This supports that the proposed benchmark emphasizes temporal information. Table 4 contains results for a wide range of models.\"], \"weaknesses\": [\"While this paper convincingly shows the proposed benchmark is better than MVBench, the authors did not provide discussion/evidence on whether it is the final video-language evaluation benchmark. For example, the benchmark might be too short / in too limited domains / person centric (I am not raising concerns on these particular issues). Some statistics about the datasets are needed.\", \"The 10 tasks picked in Table 5 look a bit arbitrary / artificial to me. It will be helpful if the authors provide more rationale for why these tasks are picked.\", \"It is unclear to me how the question-answers are created. Are they all from existing datasets, or did the authors hire raters to filter / verify them?\"], \"questions\": \"Overall this paper works on an important problem and provides a valid solution (a new benchmark). The analyses are backed up by solid experiments, and I believe this paper will have a positive impact on the community. My concerns are mostly on discussions and clarity, and I expect the authors to address them in the rebuttal. My current rating is a weak accept, and I am happy to raise my rating if my concerns are addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Feedback on Rebuttal\", \"comment\": \"Dear Reviewer QEG7,\\n\\nThank you again for the time and effort spent on your thorough review of our paper. Since the author-reviewer discussion deadline is fast approaching, we kindly ask for feedback on our responses. 
We would be happy to discuss more if there are still some open questions.\\n\\nBest Regards, \\n\\nAuthors\"}", "{\"comment\": \"I'd like to thank the authors for the detailed rebuttal. Most of my concerns have been addressed, especially (a) human baseline and (b) using the same model consistently across all the ablations. Looking at the updated results, I think the claims on the drawbacks of MVBench have weakened (based on performance difference between image and video models, drop in performance due to shuffling etc.) and I'd like to request the authors to update the text accordingly. Nevertheless, just based on updated Table 4, TVBench looks like a strong benchmark.\\n\\nRE Cleaning MVBench?\\nSince Table 4 already has the set of examples in MVBench that can be answered using text-only, image-only etc., can the authors filter MVBench and provide the number of remaining examples, and the performance on the remaining examples, of say Gemini 1.5 pro? This will clearly demonstrate the marginal impact of this work.\"}", "{\"title\": \"Rebuttal (1/3)\", \"comment\": \"Thank you for the time and effort spent in reviewing our paper. In the following, we reply to all the concerns raised in the review.\\n\\n**W1 - Human baseline:**\\nThank you for pointing it out. We used 14 annotators to establish a human baseline for TVBench. Each annotator labeled around 30 videos. The overall accuracy of the human baseline for our benchmark is 95%. In our updated Appendix we have also computed the statistics for the mean error for this task: 4.5% using finite population correction, thus verifying the quality and solvability of our benchmark.\\n\\n**W2 & W3 & Q1 Model consistency:** Thank you for your suggestion. 
We updated Tables 1, 2, and 4 using the same models (GPT-4o, Gemini 1.5 Pro, and Tarsier 34B) across all modalities enabling a direct comparison.\\n\\nFor convenience, we attach the updated tables here: \\n\\n*Table 1: Examining the spatial bias of MVBench using the same models*\\n\\n|| **Input** | **Fine-grained Action** | **Scene Transition** | **Fine-grained Pose** | **Episodic Reasoning** | **Average** |\\n|----------------|---------|---------------------|------------------|-------------------|--------------------|---------|\\n| Random | -- | 25.0| 25.0 | 25.0 | 20.0 | 23.8|\\n| |||||\\n| Gemini 1.5 Pro |image| 47.0| 78.0 | 46.5 | 56.5 | 57.0|\\n| GPT-4o | image| 49.0| 84.0 | 53.0 | 65.0 | 62.8|\\n| Tarsier 34B| image | 48.5| 67.0 | 22.5 | 46.0 | 46.0|\\n| |||||\\n| Gemini 1.5 Pro | video| 50.0| 93.3 | 58.5 | 66.8 | 67.2|\\n| GPT-4o |video | 51.0| 83.5 | 65.5 | 63.0 | 65.8|\\n| Tarsier 34B| video | 48.5| 89.5 | 64.5 | 54.5| 64.3|\\n| |||||\\n| Gemini 1.5 Pro | video shuffle| 49.5| 90.0 | 54.5 | 63.0 | 64.3|\\n| GPT-4o| video shuffle | 52.0| 84.5 | 69.0 | 64.5 | 67.5|\\n| Tarsier 34B| video shuffle| 51.0| 89.0 | 56.5| 51.5 | 62.0|\\n\\n*Table 2: Examining the textual bias of MVBench using the same models*\\n\\n|| **Input**| **Action Count** | **Unexpected Action** | **Action Antonym** | **Episodic Reasoning** | **Average** |\\n|-|-|-|-|-|-|-|\\n| Random| -- | 33.3| 25.0 | 33.3 | 20.0| 27.9|\\n| |||||\\n| Gemini 1.5 Pro | text-only | 49.0 | 68.0 | 85.5 | 49.0| 62.3|\\n| GPT-4o | text-only | 44.0 | 69.5 | 57.5 | 51.5| 55.6|\\n| Tarsier 34B|text-only| 37.0 | 39.5 | 66.0 | 44.0| 46.6|\\n| |||||\\n| Gemini 1.5 Pro | video | 41.2 | 82.4 | 64.5 | 66.8 | 63.7|\\n| GPT-4o| video | 43.5 | 75.5 | 72.5 | 63.0 | 63.6|\\n| Tarsier 34B| video | 46.5 | 72.0 | 97.0 | 54.5 | 67.4|\\n\\n*Table 4: Benchmark overview with the same model.*\\nFor a full table with all models and all TVBench tasks please see the updated PDF. We omitted GPT-4o for shuffle and reverse for cost reduction. 
\\n\\n| **Model** | **Input**| **MVBench Average (%)** | **TVBench Average (%)** |\\n|--|-|---|-|\\n| Random| \\u2013| 27.3 |33.3|\\n| || | |\\n| GPT-4o |Text-only | 34.8| 33.8|\\n| Gemini 1.5 Pro | Text-only | 38.2| 33.6|\\n| Tarsier 34B |Text-only| 35.7| 34.4|\\n| || | |\\n| GPT-4o | Image | 47.8| 35.8|\\n| Gemini 1.5 Pro | Image| 48.5| 36.3|\\n| Tarsier 34B | Image | 45.1| 35.0|\\n| || ||\\n| Gemini 1.5 Pro | Video Reverse | 53.1| 27.0|\\n| Tarsier 34B | Video Reverse | 67.7| 27.2|\\n| || | |\\n| Gemini 1.5 Pro | Video Shuffle | 56.8| 36.1|\\n| Tarsier 34B | Video Shuffle | 61.2| 38.0|\\n| || | |\\n| GPT-4o |Video | 49.1| 39.1|\\n| Gemini 1.5 Pro | Video| 60.5| 46.5|\\n| Tarsier 34B |Video| 67.6| 53.8|\\n\\n### With that, our message becomes even clearer: \\n*Spatial Bias:* \\n\\n- On MVBench, models that receive only a random frame as input demonstrate strong performance across all four tasks, surpassing the random baseline (Table 1). Notably, GPT-4o achieves the highest average performance of 62.8% across these tasks, nearly matching its video-based performance of 65.8%. This issue extends beyond these four tasks, as GPT-4o attains an average accuracy of 47.8% across all 20 MVBench tasks, which is 20.5% higher than the random baseline of 27.3% (Table 4). This indicates that a significant portion of the benchmark is influenced by spatial bias. Similarly, for Gemini 1.5 Pro and Tarsier.\\n - In contrast, on TVBench, GPT-4o with a random frame performs close to random chance, achieving only a 1.7% improvement over the baseline. This verifies that TVBench is a strong new benchmark that cannot be solved with a single random frame.\\n- When videos in MVBench are shuffled or reversed, the performance of top models like Tarsier-34B and Gemini 1.5 Pro remains largely unchanged. Specifically, Tarsier-34B maintains performance levels of 61.2% and 67.7%, which are 33.9% and 40.4% above the random baseline, respectively (Table 4). 
This consistency indicates that the order of frames does not matter for MVBench.\\n - In contrast, shuffling and reversing videos in TVBench result in substantial performance declines to 38.0% and 27.2%, which are only 4.7% better and even 6.1% worse than the random baseline, respectively. These results demonstrate that frame order is crucial for TVBench, as reversing the sequence of frames leads to incorrect answers.\\n\\n--- continued ---\"}", "{\"title\": \"Rebuttal (2/2)\", \"comment\": \"### Clarification of Figure 1\\nFigure 1 presents a summary of our paper\\u2019s main results. We have updated this figure by incorporating additional models (see W2) and clarified the right side by showcasing only the top-performing video model, Tarsier 34B, for comparisons between MVBench and TVBench under video, shuffle, and reverse settings.\\n\\n*Right side:* Examines how performance changes when videos are shuffled or reversed on MVBench versus TVBench. On MVBench, altering the frame order results in only minor performance variations, indicating that frame order is not critical for solving MVBench tasks. In contrast, TVBench performance significantly drops when videos are shuffled and drops even further below random chance when videos are reversed. This demonstrates that TVBench requires strong temporal understanding to be solved.\\n\\n*Left side:* TVBench serves as a strong temporal benchmark, with only a few frontier models surpassing the random chance baseline. In contrast, MVBench exhibits a continuous performance progression across all models. Notably, models that rely solely on a single random frame achieve nearly 50% accuracy, while those based only on text (question) reach approximately 40%. 
These findings raise concerns about what MVBench truly measures, as strong performance can be achieved without genuine temporal understanding.\\n\\n**W2 \\u2013 Adding More Video Models to the Benchmark:** We agree with the reviewer that TVBench can benefit from incorporating additional models. Therefore, we have included the suggested models such as GPT-4o with videos, VideoLLaVA, mPLUG-Owl3, VideoLLaMA2 7B, VideoLLaMA2.1 7B, VideoLLaMA2 72B, Qwen2-VL 7B, and Qwen2-VL 72B with more models, e.g. PandaGPT and LLaVA-Next, in the coming days. To ensure the quality of our benchmark, we have also included a human baseline. For more details, see Appendix A.2.2.\\n| **Model** | **MVBench (%)** | **TVBench (%)** |\\n|----------------------|-------------------------|-------------------------|\\n|Random | 27.3 | 33.3|\\n| VideoLLaVA | 42.5 | 33.8 |\\n| VideoChat2 | 51.0 | 33.0 |\\n| ST-LLM | 54.9 | 35.3 |\\n| GPT-4o | 49.1 | 39.1 |\\n| PLLaVA-7B | 46.6 | 34.2 |\\n| PLLaVA-13B | 50.1 | 35.5 |\\n| PLLaVA-34B | 58.1 | 41.9 |\\n| mPLUG-Owl3 | 54.5 | 41.4 |\\n| VideoLLaMA2 7B | 54.6 | 41.0 |\\n| VideoLLaMA2.1 7B | 57.3 | 41.4 |\\n| VideoLLaMA2 72B | 62.0 | 47.5 |\\n| VideoGPT+ | 58.7 | 41.5 |\\n| Gemini 1.5 Pro | 60.5 | 46.5 |\\n| Qwen2-VL 7B| 67.0 | 43.6 |\\n| Qwen2-VL 72B | 73.6 | 52.5 |\\n| Tarsier-7B | 62.6 | 45.8 |\\n| Tarsier 34B| 67.6 | 53.8 |\\n|Human Baseline |-- | 94.8 |\\n\\n**W3 \\u2013 Presentation:** \\nThank you for the detailed feedback on our paper. In response to your suggestions, we have made the following updates to our revised PDF:\\n- Minor Adjustments: Added the axis to Fig. 1 and corrected the reference to Table 1 to improve clarity and accuracy.\\n- Reducing MVBench examples: We removed one of the three examples of MVBench in Fig. 
2\\n- Human Baseline: Included a human baseline to verify the quality and solvability of our benchmark.\\n- Benchmark Examples: Added examples of TVBench for all tasks in Appendix A.3 to give more insights.\\n- Model Consistency: Ensured consistency by including the same models across all modalities in Tables 1, 2, and 4, facilitating direct comparisons.\\n- Benchmark Creation Details: Clarified the strategies for creating the benchmark in Section 5.1, including the addition of Figure 7 to visually represent our process.\\n- Extended MCQA Analysis: Expanded our analysis beyond MVBench by also evaluating NextQA, as detailed in Appendix A.3, which exhibits the same problems of spatial and textual biases.\\n\\n*Importance of Section 4 (Open-Ended VQA):* Currently, video-language models are evaluated using two methods: multiple-choice and open-ended VQA. After examining the widely used MVBench benchmark for multiple-choice tasks\\u2014and extending our analysis to NextQA in Appendix A.3\\u2014we found that spatial and textual biases also exist in open-ended VQA. Additionally, relying on closed-source proprietary LLMs for evaluation introduces unreliability. This discovery motivated us to focus on designing a multiple-choice VQA benchmark rather than a new open-ended VQA benchmark. As appreciated by Reviewer TPmi, we believe that highlighting these issues in open-ended VQA is beneficial for the community.\\n\\n\\nThank you again for reviewing our paper and providing constructive feedback. We hope that our responses have addressed your concerns and that you will consider increasing your score.\"}", "{\"title\": \"Rebuttal (2/3)\", \"comment\": \"*Textual Bias:*\\n\\n- MVBench exhibits a strong textual bias, as models using only text (question) achieve competitive results compared to those with video input across the four tasks in Table 2. For instance, Gemini 1.5 Pro (text-only) achieves an average performance of 62.3%, nearly matching its video-based performance of 63.7%. 
This issue goes beyond the four tasks, as Gemini Pro 1.5 (text-only) achieves an average performance across all 20 tasks of 38.2%, which is 10.9% higher than the random chance baseline of 27.3%, see Table 4.\\n - On TVBench, models with only text, such as Gemini 1.5 Pro, only improve 0.3% above the random chance baseline, verifying that questions cannot be answered without the corresponding videos. \\n\\nThe new tables can be found in the updated PDF. We further improved our benchmark by evaluating the newest video language models such as Qwen2-VL, Video-LLaMA2, Video-LLaMA2.1, Video-LLaVA, mPLUG-Owl3 on our benchmark.\\n\\n\\n**L1 Standard template QA:**\\nIn the era of modern LLMs, providing unnecessary context in questions or answer candidates can lead to mis-evaluations. We observed this in MVBench, where LLMs were used to create answer candidates, some of which are non-sensical. Hiring annotators to propose answer candidates is however expensive. Therefore, we decided to adopt standard templates specifically designed for each of the 10 tasks to eliminate any spatial or textual bias. With this approach, we created a temporally challenging benchmark in which the most recent methods achieve only 53.7%, representing an improvement of ~20% over the random chance baseline.\\n\\n\\n\\n**Q2 Reminder of MVBench:**\\n\\n### MVBench is not yet saturated, why TVBench? \\nThank you for pointing this out. While MVBench is not saturated, it is unclear what it truly measures, as it exhibits both strong spatial and textual biases. For instance, in Fig. 3, example 5, the Episodic Reasoning task requires extensive world knowledge to answer the question about the TV show but lacks any temporal component. Similarly, the Fine-grained Action Recognition task in Fig. 2 only requires image understanding of the bathtub, with no need for temporality. 
This issue extends beyond individual examples, as image-based models achieve nearly 50% accuracy on MVBench.\\nIn summary, MVBench assesses various aspects of multi-modal understanding but fails to specifically measure temporality. This limitation motivated us to develop TVBench, explicitly designed to evaluate temporal understanding, as confirmed through experiments such as frame shuffling and by analyzing image-only and text-only accuracies.\\n### Cleaning MVBench for a larger benchmark?\\nThe problems with MVBench are more structural than simply cleaning bad examples. To address this, we analyzed MVBench and selected tasks suitable for a temporal benchmark\\u2014those that are unsaturated and temporally challenging\\u2014but required new templates for generating question-answer pairs to mitigate spatial and textual biases. We also sourced videos from original datasets like Perception Test, CLEVRER, and STAR, designing new task-specific templates based on the original annotations. Below are the tasks we retained from MVBench but redesigned:\\n- Object Shuffle (OS): This task effectively evaluates temporal understanding. QA pairs are sourced from the Perception Test and supplemented with additional pairs to ensure balanced answer distributions, avoiding correlations that might bias models.\\n- Action Count (AC): Similar to OS, this task evaluates temporal reasoning but suffers from imbalanced answers (e.g., one answer appears 45% of the time). We balance the dataset by adding more QA pairs from the original source.\\n- Action Localization (AL): QA pairs from MVBench are reused but filtered to remove correlations between question verbs (e.g., \\\"open\\\") and answers (e.g., \\\"At the beginning of the video\\\"). The set is rebalanced to minimize textual bias.\\n- Moving Direction (MD): Additional QA pairs are sourced from CLEVRER, excluding stationary object answers to ensure temporal reasoning is required. 
MVBench's ChatGPT-based QA generation strategy is avoided.\\n- Scene Transition (SC): Videos are reused, and QA pairs are rephrased to include only challenging candidates. Options are restricted to scenes that appear in the video, reducing spatial bias.\\n- Action Sequence (AS): Videos are sourced from STAR, as in MVBench, but answer options are limited to actions that actually occur in the video to address spatial bias.\\n- Unexpected Action (UA): Videos from FunQA are used with a new QA generation strategy. Instead of describing actions, the task requires localizing unexpected actions, avoiding biases from ChatGPT-generated QA pairs.\\n\\n--continued--\"}", "{\"title\": \"Rebuttal (1/2)\", \"comment\": \"Thank you for the time and effort spent in reviewing our paper. In the following, we respond to your comments and explain how they have further strengthened our submission.\\n\\n**W1 & Q3 - Problems also present in other benchmarks?**: In Section 3, we focused on MVBench as it is widely used for multiple-choice video question-answering. In Section 4, we broadened our analysis to three other benchmarks (MSVD-QA, MSRVTT-QA, ActivityNet-QA) for open-ended video QA. We found that these benchmarks exhibit issues similar to MVBench's (e.g., shuffled and unshuffled video perform similarly well on ActivityNet-QA; see Table 3). Moreover, we show that these evaluations can be unreliable due to their reliance on closed-API LLMs. 
Additionally, we have now analyzed the NextQA [1] multiple-choice benchmark (see Appendix A.3), which shows the same pattern of strong textual and spatial biases: \\n| Model | Input| NextQA (%) |\\n|-------|----------------|--------|\\n|Random | -- | 20.0 |\\n|Tarsier 34B | text-only | 47.6 |\\n|Tarsier 34B | image| 71.3 |\\n|Tarsier 34B | video shuffle | 78.5 |\\n|Tarsier 34B | video reverse | 77.6 |\\n|Tarsier 34B | video| 79.0 |\\n\\nAs shown, Tarsier 34B achieves an accuracy of 71.3% using only a single image, nearly matching its 79.0% accuracy with full video input, indicating a strong spatial bias, similar to what we observed with MVBench. Also, shuffling or reversing video frames does not impact performance, similarly to MVBench, demonstrating temporal frame consistency is not needed.\\n\\n**W2 & Q2 \\u2013 TVBench Examples:** Thanks for pointing this out. In Appendix A.4, we add Fig. 7-13 providing 45 examples from our benchmark. In addition, we also provide more examples of the spatial and textual bias in MVBench in Fig. 14-25.\\n\\n**W3 \\u2013 Task Design:** Thank you for highlighting this point. It is possible that selecting the correct two frames is sufficient to address the Scene Transition task. However, the model must accurately identify and interpret these frames, including their order, making it a temporal challenge. To verify this, we report the performance of the leading method, Tarsier 34B, on TVBench as follows: \\n\\n| Setting | TVBench (%) |\\n|-|---------|\\n| Random| 33.3%|\\n| Single Random Frame | 35.0%|\\n| Two Random Frames | 36.8%|\\n| Video | 53.8%|\\n\\nWe find that TVBench cannot be effectively addressed using two random frames. Although scene transition might appear to be a straightforward task, many recent models struggle with it even when given the entire video, performing nearly at chance levels\\u2014for instance, GPT-4o achieves 39.1%, which is only 8.8% better than random guessing. 
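The input-ablation settings used throughout this rebuttal (single random frame, two random frames, full video, shuffled video, reversed video) can be sketched as below. This is an illustrative sketch only: the function name and the frame-sampling logic are assumptions, not the actual evaluation harness.

```python
import random

def ablation_frame_indices(num_frames, setting, seed=0):
    """Frame indices fed to a model under each input-ablation setting.

    Illustrative sketch (an assumption, not the paper's harness) of the
    probes discussed above: single random frame, two random frames, and
    the full, shuffled, or reversed video.
    """
    rng = random.Random(seed)
    frames = list(range(num_frames))
    if setting == "single_frame":
        return [rng.choice(frames)]
    if setting == "two_frames":
        # Two distinct random frames, kept in temporal order.
        return sorted(rng.sample(frames, 2))
    if setting == "video":
        return frames
    if setting == "video_shuffle":
        rng.shuffle(frames)
        return frames
    if setting == "video_reverse":
        return list(reversed(frames))
    raise ValueError(f"unknown setting: {setting}")
```

A temporally sound benchmark should show a large gap between "video" and the other settings, and a drop (ideally below random chance) under "video_reverse".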
Additionally, we evaluate the average performance on MVBench using two random frames with Tarsier 34B:\\n\\n| Setting | MVBench (%) |\\n|---|--------------|\\n| Random | 27.3% |\\n| Single Random Frame | 45.1% |\\n| Two Random Frames | 56.5% |\\n| Video | 67.6% |\\n\\nThese results on MVBench indicate that using two random frames significantly improves performance by 11.4% compared to a single frame, approaching the video-level performance of 67.6%. In contrast, on TVBench, the performance with two random frames remains close to random chance. This further underscores the necessity of a challenging temporal benchmark like TVBench.\\n\\n--- continued in next comment ---\"}", "{\"title\": \"Rebuttal (1/2)\", \"comment\": \"Thank you for the time and effort spent in reviewing our paper. In the following, we reply to all the concerns raised in the review.\\n\\n**W1 \\u2013 Experiment consistency:** Thank you for addressing this. Below, we respond to your concerns in detail:\\n\\n### Model consistency: \\nTables 1 and 2 examine the spatial and textual biases of MVBench. 
We have updated these tables in response to your feedback to include the same models across all settings, enabling direct comparisons.\\n\\n*Table 1: Examining the spatial bias of MVBench using the same models*\\n\\n|| **Input** | **Fine-grained Action** | **Scene Transition** | **Fine-grained Pose** | **Episodic Reasoning** | **Average** |\\n|----------------|---------|---------------------|------------------|-------------------|--------------------|---------|\\n| Random | -- | 25.0| 25.0 | 25.0 | 20.0 | 23.8|\\n| |||||\\n| Gemini 1.5 Pro | | 47.0| 78.0 | 46.5 | 56.5 | 57.0|\\n| GPT-4o | image| 49.0| 84.0 | 53.0 | 65.0 | 62.8|\\n| Tarsier 34B| | 48.5| 67.0 | 22.5 | 46.0 | 46.0|\\n| |||||\\n| Gemini 1.5 Pro | | 50.0| 93.3 | 58.5 | 66.8 | 67.2|\\n| GPT-4o |video | 51.0| 83.5 | 65.5 | 63.0 | 65.8|\\n| Tarsier 34B| | 48.5| 89.5 | 64.5 | 54.5 | 64.3|\\n| |||||\\n| Gemini 1.5 Pro | | 49.5| 90.0 | 54.5 | 63.0 | 64.3|\\n| GPT-4o | video shuffle | 52.0 | 84.5 | 69.0 | 64.5 | 67.5|\\n| Tarsier 34B| | 51.0| 89.0 | 56.5 | 51.5 | 62.0|\\n\\nWith the updated table, the message becomes clearer. Models that receive only a random frame as input demonstrate strong performance across all four tasks, surpassing the random baseline. Notably, GPT-4o achieves the highest average performance of 62.8% across these tasks, nearly matching its video-based performance of 65.8%. Overall, GPT-4o gets an average accuracy of 47.8% across all 20 MVBench tasks, which is 20.5% higher than the random baseline of 27.3% (see Table 4). This suggests that a significant portion of the benchmark is influenced by spatial bias. Additionally, shuffling the videos has minimal impact on the performance of all video-language models, with an average difference of only 2.3%, indicating that frame order is not crucial for solving these tasks. This problem goes beyond the four tasks analyzed here. 
As shown in Table 4, Gemini 1.5 Pro and Tarsier achieve average accuracies of 60.5% and 67.6% across all 20 MVBench tasks, respectively. Shuffling video frames results in performance drops of merely 3.8% and 6.4%, respectively, highlighting that spatial bias affects not only the tasks discussed in this table but the entire dataset.\\nAdditionally, we verify the agreement between the correct responses of Tarsier 34B across modalities: 91.0% between image and video inputs, and 93.9% between video and shuffled video. This confirms that current models heavily rely on spatial biases to solve MVBench.\\n\\n*Table 2: Examining the textual bias of MVBench using the same models*\\n\\n|| **Input**| **Action Count** | **Unexpected Action** | **Action Antonym** | **Episodic Reasoning** | **Average** |\\n|----------------|------------|--------------|-------------------|----------------|--------------------|---------|\\n| Random | -- | 33.3 | 25.0 | 33.3 | 20.0 | 27.9|\\n| |||||\\n| Gemini 1.5 Pro | | 49.0 | 68.0 | 85.5 | 49.0 | 62.3|\\n| GPT-4o | text-only | 44.0 | 69.5 | 57.5 | 51.5 | 55.6|\\n| Tarsier 34B| | 37.0 | 39.5 | 66.0 | 44.0 | 46.6|\\n| |||||\\n| Gemini 1.5 Pro | | 41.2 | 82.4 | 64.5 | 66.8 | 63.7|\\n| GPT-4o | video | 43.5 | 75.5 | 72.5 | 63.0 | 63.6|\\n| Tarsier 34B| | 46.5 | 72.0 | 97.0 | 54.5 | 67.4|\\n\\nWith the updated table, our findings show that text-only models can effectively eliminate incompatible candidates, significantly outperforming the random baseline. Notably, models using only text achieve results comparable to video-language models across these four tasks. For example, Gemini 1.5 Pro attains an average performance of 62.3% with text-only input, versus 63.7% when using videos. This trend extends beyond the four tasks, as Gemini 1.5 Pro reaches an average performance of 38.2% across all 20 tasks, which is 10.9% higher than the random chance baseline of 27.3%. 
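The cross-modality agreement figures mentioned above (e.g., 91.0% between image and video inputs for Tarsier 34B) can be sketched as follows. The exact definition used in the rebuttal is not restated here, so the metric below, agreement on per-question correctness between two input settings, is an assumption.

```python
def agreement_rate(correct_a, correct_b, all_question_ids):
    """Per-question agreement between two input settings of one model.

    One plausible definition (an assumption): the fraction of questions
    that are either correct under both settings or incorrect under both.
    `correct_a` / `correct_b` are the sets of question ids answered
    correctly under each setting.
    """
    a, b = set(correct_a), set(correct_b)
    same = sum((q in a) == (q in b) for q in all_question_ids)
    return same / len(all_question_ids)

# Example: settings agree on q1 (both correct) and q4 (both wrong).
# agreement_rate({1, 2}, {1, 3}, [1, 2, 3, 4]) -> 0.5
```

A high agreement rate between, say, the image-only and video settings indicates the model succeeds and fails on the same questions regardless of temporal information.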
We have identified three key sources of this textual bias in the paper.\\nAdditionally, we verify the agreement between the correct responses of Tarsier 34B across modalities: 85.3% between text and video inputs. This confirms that current models like Tarsier heavily rely on textual biases to solve MVBench.\\n\\nFurthermore, we have expanded Table\\u202f4 to include Gemini\\u202f1.5 Pro and Tarsier 34B across all settings (text-only, image, video shuffle, video reverse, and video) for both MVBench and TVBench. Additionally, we examined the spatial and textual biases on another multiple-choice QA benchmark, Next-QA, confirming the same biases observed in MVBench. For more details, see Appendix A.3.\\n\\n\\n--- continued in next comment ---\"}", "{\"summary\": \"The paper investigates three issues of MVBench: 1) independence of video or video motion, 2) bias in the generated question-answer pairs, and 3) heavy reliance on world knowledge in questions. A significant part of the paper was written to prove and showcase these problems in the MVBench. A new benchmark called TVBench is proposed to mitigate these issues by redesigning the questions and available choices. The new benchmark attempts to prove that with no visual input or just image input, the models will perform like random guesses. Some strong video models also perform so even with full video inputs. The experiment also presents the results of inputting video frames in reverse order or shuffled order to prove that the benchmark questions requires understanding on the true video motion to be answered correctly.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses some critical issues with existing video understanding benchmarks, which is that the correct answers to many questions do not rely on information from the video or video motion. 
Thus, proposing a new benchmark to resolve these issues are well-motivated.\", \"The paper explains in detailed examples and some ablation studies to prove that these problems exist widely in MVBench.\", \"Based solely on the reverse & shuffle order experiment results in Table 4, it seems that TVBench indeed improves some questions' reliance on video inputs and the motion contained in those videos.\"], \"weaknesses\": [\"While I appreciate the authors' great efforts to prove their statements of the issues, I am confused by many details after reading through the paper and am not fully convinced by the quality of the new benchmark.\", \"Some of the results in the figures and tables, or the way they are presented, can be confusing. In Fig.1 left, the trend seems linear after the models achieve a certain level of performance (>50) on MVBench. In Fig. 1, right, MVBench shows a performance drop in VIdeoChat2 when the video is reversed. How do these results support the claim that MVBench does not measure temporal understanding? What is Table 1 trying to prove? I cannot compare the results of GPT-4o + image inputs with Gemini 1.5 Pro + video input to the conclusion that a single image is sufficient. You should at least fix other variables and leave the input as the one changing to prove that. Besides, even though the results are close, did you prove that the questions answered correctly are the same ones? It's a similar issue in Table 2 that I cannot understand how text-only rows could be compared to video input rows since they are using different models.\", \"Since it's a benchmark, it should attempt to document the performance of as many models as possible. A lot of video models are missing, such as Video-LLaVA, mPLUG-Owl, PandaGPT, ImageBind, Video-LLaMa and etc. In addition, the GPT-4 series can accept multiple images, which is essentially the same as video models with video inputs -- they all need to sample a certain number of frames as multiple image inputs. 
You can also concatenate multiple frames into one image and feed into the GPT-4 series. It doesn't make sense to me to only benchmark GPT-4o with a single frame input.\", \"Writing is a big issue in this paper. So many details make it hard to understand the paper without being confused.\", \"In Fig. 1, what is the unit of the axes?\", \"Table 1 is presented but never referred to in the text. If I understand correctly, some \\\"Tab. 2\\\" should refer to Table 1 instead. Please also choose between \\\"Tab 2\\\" and \\\"Tab. 2\\\" so that searching is convenient.\", \"I think the paper shows an excessive amount of bad examples from MVBench, which makes some of these figures unnecessary. While it's good to identify and prove the existence of these problems, more efforts should be spent convincing the readers that the \\\"proposed\\\" benchmark is high-quality and indeed resolves these issues.\", \"I understand that Sec. 4 is trying to show that open-ended qa and evaluation are not reliable, but how does that matter with the main point of this paper? Multiple-choice-based QA and open-ended QA are different settings used in different benchmarks or evaluations. It doesn't convince me that TVBench is high quality by showing the weaknesses of open-ended QA -- they are different settings.\", \"In line 430, *following the model provided in Tab. 5 for each task*, what is *model* in Table 5? There is no *model* column in Table 5 and the appendix is too short to provide enough context. How many templates are you using? Are the templates in Table 5 showing all you are using? How did you collect these templates? If you have hired annotators, how did you ensure the quality of these templates? These are all important details to be included in the paper to convince readers about the quality of TVBench.\", \"This is a minor point, but I don't favor using statistics of **Huggingface downloads** as some sort of evidence in the introduction (lines 37-38). 
Regardless of whether you are trying to use the number to support MVBench or question its reliability, it's better to appreciate **scientific merits** instead of **popularity metrics** in academic writing.\"], \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal (2/2)\", \"comment\": \"**W3 \\u2013 Clarification on Benchmark Creation:** Thank you for bringing this up. In TVBench, we carefully design QA templates for each task to ensure they cannot be solved using only text or a single random frame. Rather than hiring annotators, we derive these templates directly from the original dataset annotations. However, we intentionally avoid using the existing QA pairs from these datasets, as they are often unbalanced and prone to textual or spatial biases. To validate the quality of our benchmark, we conducted a human baseline study, achieving a performance of 95%. For more details, please refer to Appendix A.2.2.\\nWe have revised Section 5.1 to provide a clearer explanation of the two strategies used in creating TVBench, see below. Furthermore, Appendix A.2.1 offers detailed information about each task, including the templates, questions and answers, and video statistics.\\n\\n### Strategy 1: Define Temporally Hard Answer Candidates.\\n\\nTo address Problem 1, the temporal constraints in the question must be essential for determining the correct answer. 
This involves designing time-sensitive questions and selecting temporally challenging answer candidates.\\n- We select 10 temporally challenging tasks that require: Repetition counting (Action Count), Properties of moving objects (Object Shuffle, Object Count, Moving Direction), Temporal localization (Action Localization, Unexpected Action), Sequential ordering (Action Sequence, Scene Transition, Egocentric Sequence), distinguishing between similar actions (Action Antonyms).\\n- We define hard-answer candidates based on the original annotations to ensure realism and relevance, rather than relying on LLM-generated candidates that are often random and easily disregarded, as seen in MVBench. For example, in the Scene Transition task (see Figure 6), we design a QA template that provides candidates based on the two scenes occurring in the videos for this task, rather than implausible options like \\\"From work to the gym.\\\" Similarly, for the Action Sequence task, we include only two answer candidates corresponding to the actions that actually occurred in the video. More details for the remaining tasks can be found in Appendix A2.\\n\\n### Strategy 2: Define QA pairs that are not overly informative.\\n\\nContrary to LLM-based generation, we apply basic templates to mitigate the effect of text-biased QA pairs, addressing Problem 2. Please see Figure 7 in the updated PDF as a summary.\\n- We design QA pairs that are concise and not unnecessarily informative by applying task-specific templates. These templates ensure that the QA pairs lack sufficient information to determine the correct answer purely from text. An example of Unexpected Action is illustrated in Figure 2. QA pairs require the same level of understanding for the model to identify what is amusing in the video but without providing additional textual information. Unlike MVBench, the model cannot simply select the only plausible option containing a dog. 
We use the same candidate sets across tasks like Action Count, Object Count, Object Shuffle, Action Localization, Unexpected Action, and Moving Direction to ensure balanced datasets with an equal distribution of correct answers, keeping visual complexity while reducing textual bias. Appendix Table 3 provides an overview of all tasks, demonstrating that the QA templates are carefully crafted without unnecessary textual information.\\n- Solving the overreliance on world knowledge requires providing questions and candidates that contain only the necessary information, specifically removing factual information that the LLM can exploit. We remove tasks such as Episodic Reasoning, that are based on QA pairs about TV shows or movies. \\n\\nThank you again for your feedback. We hope that our answer addresses your concerns and that you will consider raising your score.\\n\\n[1] Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, Joshua B. Tenenbaum: CLEVRER: Collision Events for Video Representation and Reasoning. ICLR 2020\\n\\n[2] Gupta, Adri\\u00e0 Recasens, Larisa Markeeva, Dylan Banarse, Skanda Koppula, Joseph Heyward, Mateusz Malinowski, Yi Yang, Carl Doersch, Tatiana Matejovicova, Yury Sulsky, Antoine Miech, Alexandre Fr\\u00e9chette, Hanna Klimczak, Raphael Koster, Junlin Zhang, Stephanie Winkler, Yusuf Aytar, Simon Osindero, Dima Damen, Andrew Zisserman, Jo\\u00e3o Carreira: Perception Test: A Diagnostic Benchmark for Multimodal Video Models. NeurIPS 2023\"}", "{\"title\": \"Discussion due soon\", \"comment\": \"Dear all reviewers,\\n\\nOur reviewer-author discussion will end soon. For each of you, please check all the files and see if anything you'd like to discuss with authors.\\n\\nBest, Your AC\"}", "{\"metareview\": \"This paper proposes a video model evaluation by identifying three issues with solutions. It received mixed reviews. [TPmi, QEG7] are positive while the other two ([sqwF, A5py]) are negative. 
The raised issues concern the task configuration and benchmark construction details. In the rebuttal phase, the authors tried to address these issues by providing more experiments and specific explanations. Overall, the AC has checked the files and agrees with [sqwF, A5py] that the task setting is somewhat unclear (e.g., frame utilization) and that the benchmark's construction details need better explanation. The authors are encouraged to improve the current presentation and setting, and are welcome to submit to the next venue.\", \"additional_comments_on_reviewer_discussion\": \"[sqwF] raised concerns about the problem definition, the small amount of data, unclear task design, and the lack of benchmark creation details. The authors responded by analyzing other benchmarks, adding 45 examples, analyzing the two-frame configuration, and illustrating dataset details. These aspects, overall, fall within the design of existing works and do not show a clear contribution w.r.t. prior art. On the other hand, [A5py] points out many writing issues, which are only partially addressed by the authors, with concerns remaining.\"}", "{\"title\": \"Rebuttal (2/2)\", \"comment\": \"Thank you for highlighting this. Below, we outline the two primary strategies used to create our benchmark, which have now been incorporated into Section 5.1 of the paper. We base our QA templates on the original dataset annotations for each task and do not require annotators. Detailed information for each task, including templates, questions and answers, and video statistics, is provided in Appendix A.2.\\n\\n### Strategy 1: Define Temporally Hard Answer Candidates. \\nTo address Problem 1, the temporal constraints in the question must be essential for determining the correct answer. 
This involves designing time-sensitive questions and selecting temporally challenging answer candidates.\\n- We select 10 temporally challenging tasks that require: Repetition counting (Action Count), Properties of moving objects (Object Shuffle, Object Count, Moving Direction), Temporal localization (Action Localization, Unexpected Action), Sequential ordering (Action Sequence, Scene Transition, Egocentric Sequence), distinguishing between similar actions (Action Antonyms).\\n- We define hard-answer candidates based on the original annotations to ensure realism and relevance, rather than relying on LLM-generated candidates that are often random and easily disregarded, as seen in MVBench. For example, in the Scene Transition task (see Figure 6), we design a QA template that provides candidates based on the two scenes occurring in the videos for this task, rather than implausible options like \\\"From work to the gym.\\\" Similarly, for the Action Sequence task, we include only two answer candidates corresponding to the actions that actually occurred in the video. More details for the remaining tasks can be found in Appendix A2.\\n\\n### Strategy 2: Define QA pairs that are not overly informative.\\nContrary to LLM-based generation, we apply basic templates to mitigate the effect of text-biased QA pairs, addressing Problem 2. Please see Figure 7 in the updated PDF as a summary.\\n- We design QA pairs that are concise and not unnecessarily informative by applying task-specific templates. These templates ensure that the QA pairs lack sufficient information to determine the correct answer purely from text. An example of Unexpected Action is illustrated in Figure 2. QA pairs require the same level of understanding for the model to identify what is amusing in the video but without providing additional textual information. Unlike MVBench, the model cannot simply select the only plausible option containing a dog. 
We use the same candidate sets across tasks like Action Count, Object Count, Object Shuffle, Action Localization, Unexpected Action, and Moving Direction to ensure balanced datasets with an equal distribution of correct answers, keeping visual complexity while reducing textual bias. Appendix Table 3 provides an overview of all tasks, demonstrating that the QA templates are carefully crafted without unnecessary textual information.\\n- Solving the overreliance on world knowledge requires providing questions and candidates that contain only the necessary information, specifically removing factual information that the LLM can exploit. We remove tasks such as Episodic Reasoning, that are based on QA pairs about TV shows or movies. \\n\\nThank you for your effort in reviewing our paper and the feedback provided which we believe has strengthened our work. We are happy to further discuss and if these points are answered we ask that you consider increasing your score to reflect the revisions and clarifications.\\n\\n[1] NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions. Xiao, J., Shang, X., Yao, A., & Chua, T.-S. CVPR 2021.\"}", "{\"title\": \"Rebuttal (3/3)\", \"comment\": \"For a comprehensive temporal benchmark, we introduced the following tasks to expand the benchmark:\\n- Object Count (OC): Videos are sourced from CLEVRER, similar to the Moving Count tasks in MVBench. However, unlike MVBench, ignoring the temporal aspects of the question leads to incorrect answers in TVBench. Answers are also balanced to ensure fairness.\\n- Action Antonym (AA): While MVBench includes an Action Antonym task, we have completely reformulated it. Videos are sourced from a different dataset (NTU RGB-D instead of PAXION), and temporally opposed candidates (e.g., \\\"sitting down\\\" vs. \\\"standing up\\\") are generated, replacing textually opposed ones, as shown in examples 1 and 2 of Fig. 
3.\\n- Egocentric Sequence (ES): Videos are sourced from the CSV dataset (not utilized in MVBench), featuring detailed action sequences recorded from a first-person perspective. To evaluate temporal understanding, negative candidates are created by reordering the correct sequence of actions.\\n\\n**L2 Absence detection:**\\nThank you for pointing this out. Indeed, correctly classifying the presence or absence of an object requires analyzing all frames of a video. However, this can be achieved by treating each frame independently, as this task does not require temporal understanding of the video. State-of-the-art models like Tarsier-7B and Tarsier 34B effectively solve this task, achieving accuracies of 95.0% and 96.5% on the Object Existence (OE) task in MVBench, respectively. Thus, in order to track progress in challenging video understanding cases, we have chosen to not incorporate this task into our benchmark. \\n\\nThank you again for your feedback! We hope that our answer addresses your concerns about experimentation and that you will consider raising your score.\"}", "{\"title\": \"Feedback on rebuttal\", \"comment\": \"Dear Reviewer TPmi,\\n\\nThank you again for the time and effort spent on your thorough review of our paper. Since the author-reviewer discussion deadline is fast approaching, we kindly ask for feedback on our responses. We would be happy to discuss more if there are still some open questions.\\n\\nBest Regards,\\nAuthors\"}", "{\"title\": \"Global Response\", \"comment\": \"We thank the reviewers for dedicating their time and effort to reviewing our paper and for providing their thoughtful feedback. The positive reception of our paper addressing an important problem in video-language evaluation (TPmi, A5py, QEG7) that will have a positive and high impact on the community (QEG7) is highly encouraging. 
Additionally, we are pleased that the presentation was found to be clear and convincing (QEG7), effectively demonstrating the limitations of existing benchmarks like MVBench (TPmi, sqwF, A5py, QEG7). Our new benchmark TVBench is recognized as both difficult (TPmi, A5py) and temporally challenging (QEG7), revealing that most models fail at true temporal reasoning (sqwF). Lastly, we appreciate the acknowledgment of pointing out a significant and overlooked issue of open-ended evaluation (TPmi).\\n\\nBelow, we address each of the reviewers\\u2019 comments individually and look forward to engaging in a constructive discussion during the author-reviewer discussion period. \\nThank you once again.\"}", "{\"summary\": \"This paper introduces TVBench, a new benchmark for testing video understanding capability of multimodal models. Flaws in widely used existing benchmark (MVBench) are demonstrated, namely, spatial biases, textual biases and reliance on world knowledge. In addition, it's also shown that open-ended benchmarks can contain similar biases. TVBench is constructed from pre-defined templates in order to mitigate these biases and test temporal reasoning capabilities. Supporting experimental results demonstrate that state-of-the-art models struggle on this benchmark. Similarly, text-only or image-text foundation models struggle to beat random chance signifying the difficulty of this benchmark compared with existing benchmark.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written and easy to understand. It tackles an important area of video understanding, i.e. the lack of strong benchmarks that test temporal reasoning in videos. The presentation clearly analyzes drawbacks of existing benchmarks and proposes a new benchmark.\", \"The QA pairs don't use LLMs in the loop, and thus can avoid many hallucination related issues.\", \"The performance of SOTA models is very low (Table 4). 
This indicates the benchmark is indeed difficult.\", \"Clear contrast with MVBench is demonstrated, especially using text-only and image-only models. This justifies most of the claims in the paper.\", \"A significant, and often overlooked issue in open-ended evaluations is pointed out in Section 4. Using closed-source proprietary models whose back-ends may change arbitrarily to score open-ended responses and track our progress on video understanding can be misleading.\"], \"weaknesses\": [\"The main weakness of this work is around experimentation.\", \"Human baseline performance is not presented. This is important to judge the quality of the benchmark and the presented results.\", \"Different models are used in Table 2 to make the claim that MVBench has textual bias. Ideally, the same model (ideally the best model) needs to be presented with text-only and video as inputs to justify the claim.\", \"Similarly, in Table 4, different models are used to compare different biases (text, image, video) of the model.\"], \"further_limitations\": [\"Using standard template QA pairs may limit the range of video understanding being assessed.\", \"In Figure 2 and the associated text in the paper, it's presented as if detecting the absence of something is an easy task. However, by definition, one must watch the entire video to make sure what we're detecting is indeed absent.\"], \"questions\": [\"Can we use the same model and ablate text-only, image, video, shuffle, reverse, etc. in Table 4? Ideally Gemini 1.5 pro as it performs the best on this benchmark?\", \"MVBench is presented as not a great benchmark. However, its performance is also not saturated (best model achieves 67.7 in Table 4). Do the remaining QA pairs satisfy the criteria set in the paper? What is the size of the data? 
Can we remove the bad examples from MVBench and get a bigger and better dataset than TVBench?\"], \"edit\": \"Updated score based on the rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Extended Rebuttal\", \"comment\": \"Dear Reviewer A5py,\\n\\nAs per the reviewer\\u2019s suggestion and in continuation of our previous response, we have expanded our analysis by including additional video-language models. The updated list now features the latest models, including LLaVA-Video 7B, LLaVA-Video 72B, Aria, and IXC-2.5-7B. Below is the revised table, presenting the performance of all models on MVBench and TVBench:\\n\\n| **Model** | **MVBench (%)** | **TVBench (%)** |\\n|----------------------|-----------------|-----------------|\\n| Random | 27.3 | 33.3 |\\n| VideoLLaVA | 42.5 | 33.8 |\\n| VideoChat2 | 51.0 | 33.0 |\\n| ST-LLM | 54.9 | 35.3 |\\n| GPT-4o | 49.1 | 39.1 |\\n| PLLaVA-7B | 46.6 | 34.2 |\\n| PLLaVA-13B | 50.1 | 35.5 |\\n| PLLaVA-34B | 58.1 | 41.9 |\\n| mPLUG-Owl3 | 54.5 | 41.4 |\\n| VideoLLaMA2 7B | 54.6 | 41.0 |\\n| VideoLLaMA2.1 7B | 57.3 | 41.4 |\\n| VideoLLaMA2 72B | 62.0 | 47.5 |\\n| VideoGPT+ | 58.7 | 41.5 |\\n| Gemini 1.5 Pro | 60.5 | 46.5 |\\n| Qwen2-VL 7B | 67.0 | 43.6 |\\n| Qwen2-VL 72B | 73.6 | 52.5 |\\n| LLaVA-Video 7B | 58.6 | 45.2 |\\n| LLaVA-Video 72B | 64.1 | 49.6 |\\n| Aria | 69.7 | 50.5 |\\n| IXC-2.5-7B | 69.1 | 50.5 |\\n| Tarsier-7B | 62.6 | 45.8 |\\n| Tarsier 34B | 67.6 | 53.8 |\\n| Human Baseline | -- | 94.8 |\\n\\nWe kindly request any feedback on our responses and would be happy to address any remaining questions or concerns.\\nThank you for your time and consideration.\\n\\nBest regard, Authors\"}", "{\"title\": \"Feedback on rebuttal\", \"comment\": \"Dear Reviewer sqwF,\\n\\nThank you again for the time and effort spent on your thorough review of our paper. 
Since the author-reviewer discussion deadline is fast approaching, we kindly ask for feedback on our responses. We would be happy to discuss more if there are still some open questions.\\n\\nBest Regards, \\nAuthors\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comment.\\n\\nWe redesigned the tasks in TVBench from the ground up to ensure they are inherently temporally challenging, rather than relying on state-of-the-art methods to identify \\u201chard\\u201d samples in existing benchmarks. Consequently, some methods perform at random baseline levels, while only those with robust temporal reasoning, like Gemini, outperform this baseline. Additionally, reversing the frame order causes models like Gemini to perform below random levels, indicating that our benchmark requires temporal understanding. This highlights a limitation of the filtered MVBench subset, where performance reflects random chance rather than the method\\u2019s (e.g., Gemini\\u2019s) capabilities.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces TVBench, a new video-language benchmark that addresses critical flaws in existing benchmarks like MVBench. The authors identified three problems with current benchmarks such as:\\n- Single frames are enough\\n- Question text reveals answers.\\n- Common knowledge beats video.\\n\\nThe authors demonstrated those problems by showing that both text-only language models and single-frame vision models perform well on existing benchmarks. In contrast, when it comes to TVBench, most state-of-the-art video-language models perform close to random chance.\\n\\nThe benchmark consists of 10 temporal tasks across 2,654 question-answer pairs, ensuring models must understand the sequence and timing of video events to succeed. 
The authors validated their benchmark by showing that shuffling or reversing video frames significantly impacts performance, unlike previous benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"MVBench Analysis: Thorough and systematic identification of MVBench limitations with clear evidence.\", \"Validation Methods: Creative use of video shuffling/reversal to verify temporal understanding requirements.\", \"Benchmark difficulty: Evaluation showing most current models fail at true temporal reasoning.\"], \"weaknesses\": [\"Problem analysis: even though the authors did identify the problems with MVBench, the analysis of other benchmarks is quite limited. The paper states \\u201cWe conduct a comprehensive analysis of widely used video question-answering benchmarks\\u201d while focusing only on MVBench. Several datasets in the relative section can be analyzed similarly and it is still uncertain if all of those datasets also have those problems\", \"Small amount of dataset examples: there are 10 different tasks within the dataset, yet the paper shows only one example from the whole dataset.\", \"Task design: Authors state: \\u201cQuestions should not be answerable using spatial details from a single random frame or multiple frames e.g. after shuffling them.\\u201d However, even the only given example from TVBench about scenes in the movie can be solved using two frames. Additionally, tasks in TVBench like Scene transition, Action Antonym, and Moving Direction can be solved with only two frames instead of one. Image LLM evaluation with more frames would be important.\", \"Benchmark creation details: There are very few details on how the dataset was collected and annotated. In general, given details about the dataset creation are very vague. For example: \\\"Instead of including random, easy negative candidates, we define hard candidates that cannot be discarded without temporal information\\\". 
How do you generate the hard negative examples?\"], \"questions\": [\"How the dataset was annotated, how exactly did you come up with the wrong answers?\", \"Show more examples from the dataset. Several examples from each of the tasks. The examples can be illustrated similarly to how you did it in Figure 5\", \"Are those MVBench problems shown in other benchmarks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"no ethics concerns\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
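The shuffle/reverse probe that the reviews above credit to TVBench ("Creative use of video shuffling/reversal to verify temporal understanding requirements") can be sketched generically. Everything below is a hypothetical illustration — `order_sensitivity` and `eval_fn` are placeholder names, not part of any benchmark's actual tooling:

```python
import random

def order_sensitivity(eval_fn, videos, qa_pairs, seed=0):
    """Compare a model's QA accuracy on original, shuffled, and reversed frame
    orders. A genuinely temporal benchmark should show a clear accuracy drop
    under shuffling and, for direction-sensitive tasks, under reversal.
    `eval_fn(frames, qa) -> bool` is a stand-in for any video-QA model."""
    rng = random.Random(seed)

    def accuracy(transform):
        correct = sum(eval_fn(transform(list(v)), qa)
                      for v, qa in zip(videos, qa_pairs))
        return correct / len(videos)

    return {
        "original": accuracy(lambda f: f),
        "shuffled": accuracy(lambda f: rng.sample(f, len(f))),
        "reversed": accuracy(lambda f: f[::-1]),
    }
```

With a toy "model" that only answers correctly when frames arrive in order, the probe reports near-perfect original accuracy and chance-level shuffled/reversed accuracy — the signature of a task that actually requires temporal reasoning.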
DqU4AB4wRy
GUARANTEED USER FAIRNESS IN RECOMMENDATION
[ "Nitin Bisht", "Xiuwen Gong", "Guandong Xu" ]
Although recommender systems (RS) have been well-developed for various fields of applications, they suffer from the crisis of platform credibility with respect to RS confidence and fairness, which may drive users away from the platform and result in the failure of the platform’s long-term success. In recent years, a few works have tried to solve either the model confidence or fairness issue, while there is no statistical guarantee for these methods. It is therefore an urgent need to solve both issues with a unifying framework with statistical guarantee. In this paper, we propose a novel and reliable framework called Guaranteed User Fairness in Recommendation (GUFR) to dynamically generate prediction sets for users across various groups, which are guaranteed 1) to include the ground-truth items with user-predefined high confidence/probability (e.g., 90%); 2) to ensure user fairness across different groups; 3) to have the minimum average set size. We further design an efficient algorithm named Guaranteed User Fairness Algorithm (GUFA) to optimize the proposed method, and upper bounds of the risk and fairness metric are derived to help speed up the optimization process. Moreover, we provide rigorous theoretical analysis with respect to risk and fairness control as well as the minimum set size. Extensive experiments also validate the effectiveness of the proposed framework, which aligns with our theoretical analysis. The code is publicly available at https://anonymous.4open.science/r/GUFR-76EC.
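The coverage guarantee described in the abstract — prediction sets that contain the ground-truth item with user-predefined probability (e.g., 90%) while staying as small as possible — follows the general risk-controlling prediction set recipe. A minimal, self-contained sketch (all function names and the exact bound inversion are illustrative assumptions, not the paper's GUFA algorithm):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), via the exact sum."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def risk_upper_bound(emp_risk, n, delta):
    """Exact-tail upper confidence bound on the true risk: the smallest r with
    BinomCDF(n*emp_risk; n, r) <= delta, located by bisection."""
    k = round(emp_risk * n)
    lo, hi = emp_risk, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) <= delta:
            hi = mid
        else:
            lo = mid
    return hi

def calibrate_threshold(true_item_scores, alpha=0.1, delta=0.05, step=0.01):
    """Largest score threshold lam whose risk UCB stays <= alpha on the
    calibration users. A user is 'missed' when the true item's relevance score
    falls below lam, i.e. the set {items with score >= lam} would exclude it.
    A larger feasible lam means smaller prediction sets."""
    n = len(true_item_scores)
    lam = 1.0
    while lam > 0.0:
        emp_risk = sum(s < lam for s in true_item_scores) / n
        if risk_upper_bound(emp_risk, n, delta) <= alpha:
            return lam
        lam -= step  # relax the threshold to admit more items
    return 0.0
```

Calibrating on 200 users whose true items all score 0.9 yields a threshold just below 0.9: the empirical miss rate there is 0, and its exact binomial upper bound (~0.015 at delta = 0.05) sits comfortably under alpha = 0.1.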
[ "Recommendation Systems", "Fairness in RS", "Conformal Prediction" ]
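Guarantee (2), user fairness across groups, adds a second constraint on top of risk control: the per-group thresholds must also keep the gap in a utility metric between groups within a budget eta. An illustrative naive version (hit rate stands in for the metric M; the exhaustive double grid search is only a stand-in for GUFA, which the paper accelerates with derived upper bounds):

```python
def build_set(scores, lam):
    """Prediction set for one user: items whose relevance score clears lam."""
    return {item for item, s in scores.items() if s >= lam}

def group_stats(users, lam):
    """Empirical risk (miss rate of the true item) and hit rate for one group.
    Each user is a (scores_dict, true_item) pair; hit rate plays the role of
    the fairness metric M here."""
    hits = sum(true_item in build_set(scores, lam) for scores, true_item in users)
    hit_rate = hits / len(users)
    return 1.0 - hit_rate, hit_rate

def search_thresholds(g1, g2, alpha, eta, grid):
    """Largest per-group thresholds (hence smallest prediction sets) whose
    empirical risks stay within alpha and whose hit-rate gap stays within eta."""
    best = None
    for lam1 in grid:
        for lam2 in grid:
            r1, m1 = group_stats(g1, lam1)
            r2, m2 = group_stats(g2, lam2)
            if r1 <= alpha and r2 <= alpha and abs(m1 - m2) <= eta:
                if best is None or lam1 + lam2 > best[0] + best[1]:
                    best = (lam1, lam2)
    return best
```

On a toy example where group 1's true items score 0.8 and group 2's score 0.6, the search settles on different per-group thresholds (0.8 and 0.6), giving group 2 the larger prediction sets it needs to match group 1's hit rate — the intuition behind calibrating a separate lambda per group.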
https://openreview.net/pdf?id=DqU4AB4wRy
https://openreview.net/forum?id=DqU4AB4wRy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zMx6sa6IPF", "wyXGxaJXCz", "wFFx3CU7iC", "vdt3QViwBh", "uNLlxnNOTR", "tjrUF87d0F", "kENB8a5lqQ", "k7E53AyKt9", "jkfmjECCsv", "hotVcQpTen", "hg6xwy6ZLb", "hTnZwzooIa", "gJEi5IfUpB", "epjMrS99ZA", "dOZCO9tQtz", "cpgRTm0Bch", "buwSWCgVJs", "Xq740iGzsp", "Q9fRv4K1tU", "L0MOcGaQEe", "GUtte1GtIr", "EYLF1rNmCR", "E5RJRcxH4N" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732488748911, 1732594605275, 1732798165893, 1732695856971, 1730709471470, 1732449616180, 1729081716438, 1732458588231, 1737691348751, 1732413620332, 1732595519230, 1732516682422, 1732763502402, 1732445371239, 1730193123358, 1732412583449, 1732411324633, 1732439740526, 1730194239541, 1732413195316, 1732594707780, 1732698106624, 1732505737292 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_NF9U" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_p37S" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_feQt" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_NF9U" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_feQt" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_p37S" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1618/Reviewer_U6ND" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_U6ND" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_feQt" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Authors" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_p37S" ], [ "ICLR.cc/2025/Conference/Submission1618/Reviewer_NF9U" ] ], "structured_content_str": [ "{\"title\": \"Thanks to the Reviewers\", \"comment\": \"We sincerely thank the reviewer for their updated feedback and for raising the score. We appreciate the reviewers acknowledged the challenges we faced regarding code availability and are glad to have addressed your concerns adequately.\\n\\nYour constructive feedback has significantly strengthened our work, and we are committed to including the suggested comparisons in future research when implementation details become accessible.\\n\\nThank you once again for your time and thoughtful engagement.\\n\\nMany Thanks\"}", "{\"comment\": \"We thank the reviewer for their response and are glad we were able to alleviate few of their concerns. We appreciate their thoughtful and constructive feedback and appreciate the opportunity to address the concerns raised. Below, we attempt to answer the concerns:\\n\\n\\n### **Question** . ...the current setup feels arbitrary and unrealistic....\\n\\nWe appreciate the reviewer\\u2019s observations regarding the use of 0-1 loss for risk control and NDCG for fairness and we concede we might have missed to answer this in previous reply with enough clarity. This choice we made is intentional to address two distinct yet complementary goals a) Risk Control: Ensuring minimum reliability by guaranteeing that at least one relevant item is included in the prediction set for all users. 
b) Fairness: Measuring disparities in ranking quality across user groups using NDCG to ensure equity.\\n\\nThese metrics are designed to work together, with 0-1 loss providing reliability and NDCG capturing ranking equity between the user groups. Combining them allows for a more nuanced framework. While aligning the metrics (e.g., the same metric for both) could simplify the setup, it would overlook this balance, potentially compromising the framework's ability to address both goals effectively.\\n\\n---\\n\\n### **Question** ...making such assumptions implies that the proposed theory... settings of many real-world applications..\\n\\nWe sincerely appreciate the reviewer\\u2019s concern about the LOO assumption and its impact on the generality of our framework. To clarify, we employ the LOO assumption solely in our experimental evaluation, as we mentioned in the experimental setup, aligning with established practices in recommendation research [1][2]. However, we respectfully disagree with the reviewer that this assumption limits the theoretical contributions of our work. Our framework's guarantees are based on the statistical properties of the risk and fairness metrics, developed using theories of concentration inequalities and conformal prediction methods. 
These guarantees are probabilistic and are not dependent on the LOO setup, ensuring broader applicability.\\n\\nWhile the LOO setup might be simplistic, it has direct relevance in many real-world recommender system contexts, such as:\\n\\n- Sequential recommendations (e.g., predicting the next relevant item like a song or product),\\n- Cold-start scenarios (e.g., recommending the first item to a user),\\n- Online learning and adversarial robustness tasks,\\n- Anomaly detection in recommendation systems,\\n- Exploration-exploitation trade-off problem in recommendations etc.\\n\\nThese examples highlight the practicality of the LOO setup for multiple scenarios where our framework can be readily applied.\\n\\nWe acknowledge the reviewer\\u2019s suggestion to explore a more generalized framework for multi-item settings. Thanks to the flexible theoretical foundation of our approach, extending it to scenarios involving multiple relevant items can be achieved with minor modifications. This adaptability underscores the broader potential of our framework and demonstrates its capacity to handle diverse recommendation scenarios, which we aim to explore in greater detail in future work.\\n\\n---\\n\\n\\n\\n### 3. **Question** nDCG is equivalent to DCG...\\n\\nWe thank the reviewer for highlighting the equivalence of NDCG and DCG under the LOO setting, given that the iDCG is always 1. While this is true in our current evaluation setup, we chose NDCG for its recognition in recommendation research and its ability to generalize scenarios with multiple relevant items. This choice ensures consistency with established practices while preparing the framework for broader applications beyond the LOO setting.\\n\\n---\\n\\n### 4.**Question** ..Providing theoretical guarantees.. in my opinion, a minor contribution..\\n\\nWe appreciate the opportunity to clarify the theoretical contributions of our work. 
Most existing fairness approaches rely on heuristics or dataset-specific adjustments, which can limit their generalizability. In contrast, our framework introduces statistical guarantees for fairness and risk metrics through conformal prediction and concentration inequalities. These guarantees are designed to be model- and dataset-agnostic, enabling applicability across a wide range of recommendation settings. By providing a formal foundation for fairness-aware recommendation systems, our work aims to address existing gaps in the literature and contribute to ongoing efforts in developing reliable and equitable recommendation systems.\n\n---\n\nWe again thank the reviewer for the thoughtful feedback, which has greatly helped us explain our contributions.\n\n### References\n[1] Xu et al. (2023). Toward Equivalent Transformation of User Preferences in Cross Domain Recommendation\n\n[2] Han et al. (2023). In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems\"}", "{\"comment\": \"We sincerely thank the reviewer for their valuable time, insightful comments, and dedicated effort in reviewing our work.\n\nOur main contribution is to develop a risk-controlled framework with user-predefined confidence (e.g., 90% probability) via a calibration step, which can work on top of any recommendation model. Specifically, given a user model that generates relevance scores, we use these scores to compute standard metrics such as NDCG, and based on these metrics, we can optimize the model parameter (e.g., \u03bb) to align with stakeholder-defined thresholds to ensure both performance and fairness. Through rigorous theoretical analysis, we provide probabilistic guarantees on achieving the desired reliability, while our experimental results substantiate our claims that we achieve the desired reliability in metrics by creating prediction sets based on the calculated \u03bb. 
\\n\\nWe would like to clarify that the Leave-One-Out (LOO) setup mentioned by the reviewer is not inherently tied to our proposed framework. Instead, it is a data-splitting technique commonly employed in fairness literature [1][2][3], which is specified in line 820 of Appendix A.3.2. The framework has already formulated the problem setup by clarifying the only one true item in Section 3, and the theory is to demonstrate the validity of the proposed framework, which has no relationship with the concept of LOO, as the LOO is served as a practical technique choice in experiments for data splitting. \\n\\nAs our framework is a post-processing method, it can work on top of any recommendation model, including collaborative filtering methods. We embark on this fundamental research from the only one true item setting with rigorous theoretical guarantee and extensive experiments, and we believe it does make a big leap forward to ensuring guaranteed reliability and fairness in recommender systems. We are committed to addressing the scenarios involving multiple relevant items by relaxing monotonicity conditions in the next work. \\n\\nOnce again, we are grateful for the reviewer\\u2019s constructive feedback. We will follow the reviewer\\u2019s suggestions to add a discussion regarding the limit of the current setting and the vision of future extension, improving the clarity and readability of the manuscript. \\n\\n### References \\n[1] He, Xiangnan, et al. \\\"Neural collaborative filtering.\\\" Proceedings of the 26th international conference on world wide web. 2017. \\n[2] Han, Z., Chen, C., Zheng, X., Liu, W., Wang, J., Cheng, W., & Li, Y. (2023, October). In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems. In Proceedings of the 31st ACM International Conference on Multimedia (pp. 6190-6201). \\n[3] Chen, X., Zhang, Y., Tsang, I. W., Pan, Y., & Su, J. (2023). 
Toward equivalent transformation of user preferences in cross domain recommendation. ACM Transactions on Information Systems, 41(1), 1-31.\"}", "{\"comment\": \"Thanks for your response, and they address my concerns. I suggest the authors explicitly stating that the grouping is based on sensitive attributes in the problem formulation. I will raise my score to 6.\"}", "{\"summary\": \"The authors propose an algorithm to determine the ranking length for personalized recommendation while ensuring risk-control and user-oriented fairness.\\nThe proposed method is based on the framework of Risk-Controlling Prediction Sets (RCPS),\\nwhich allows for the straightforward analysis of statistical guarantees as provided in Sections 4 and 5.\\nThe authors also design a greedy algorithm to efficiently optimize score thresholds based on their theoretical analysis.\\nHowever, some of the notations/definitions in the current manuscript are confusing, and the theoretical claims are thus not convincing.\\nThe empirical evaluation also lacks baseline methods,\\nand I believe the reported results do not directly demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors propose the concept of RCPS and FCPS for recommender systems.\\n2. The reproducible code is available.\", \"weaknesses\": \"### (W1) The problem formulation is somehow unrealistic.\\nThe authors define the expected risk (Eq. (3)) based on the 0-1 loss for each user in Eq. (1),\\nwhich is similar to the hit rate measure.\\nTo my understanding, the 0-1 loss serves as a measure of user dissatisfaction, and so,\\nminimizing the expectation of the 0-1 loss allows us to guarantee each user's satisfaction and achieve risk-control. 
\\nNevertheless, they define the user fairness metric based on generalized recommendation measures for each user, which also serves as user satisfaction, rather than the hit rate measure.\\nIt is quite counterintuitive for me because the user merit functions in the risk and fairness metrics are inconsistent.\\nIn particular, since NDCG takes values between 0 and 1 and **not a function for a set** but an ordered set, the ranking position of a relevant item is essential in evaluating user satisfaction.\\nAlso, using NDCG as the user loss is not trivial based on the currently provided theoretical guarantee;\\nthe assumption of Bernoulli distributed losses in Theorem 1 is no longer valid.\\n\\n### (W2) The empirical evaluation is insufficient.\\nIn Section 6, the authors compare the different base models with the proposed method.\\nHowever, there is no comparison between the models with and without the proposed method,\\nand the current evaluation is not sufficient to directly show the effectiveness of the proposed method.\\nIt would be helpful to set some baseline methods.\", \"questions\": \"### (Q1) On the definition of 0-1 user loss for multiple relevant items.\\nPlease clarify the random variables to take expectation in Eq. 
(3).\\nIn recommender systems, we often observe multiple relevant items for each user.\\nSo, the current notation of $i_{true}$ is quite confusing.\\nHow can the 0-1 loss be defined for multiple relevant items for a single user?\\n\\n### (Q2) On the monotonicity of recommendation metrics.\\nIn line 177, the authors compute NDCG for the set output of $\\\\phi$.\\nHowever, NDCG is a ranking measure and cannot be computed for unordered sets; the hit rate measure is ok, on the other hand.\\nAlso note that NDCG includes a normalization factor (i.e., ideal DCG) that may increase with the ranking length.\\nThis implies that the monotonicity of $\\\\Delta F$ assumed in line 681 is not trivial.\\nAuthors can use DCG instead; it is additive and monotonic w.r.t. $\\\\lambda$.\\nStill, I think that the monotonicity of $\\\\Delta F$ is quite questionable;\\neven if $M(\\\\phi_\\\\lambda(u))$ is monotonic w.r.t. $\\\\lambda$ for all $u$,\\n$\\\\Delta F$ is generally not monotonic because of the absolute difference.\\nAny clarifications on this point would be helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I believe that guaranteeing the minimum average prediction set size is not practical in real-world recommendation scenarios. In reality, recommendation systems typically maintain prediction sets of the same size for all users. The goal of better recommendation outcomes is not to minimize the average size of prediction sets but to maximize the likelihood of including items that users may interact with within a fixed prediction set size. Additionally, it is crucial to prioritize and rank items with higher probabilities of user interaction. Therefore, I suggest that the authors do not claim to guarantee the minimum average prediction set size as a challenge that should be solved. It is difficult to realize in real life. 
I will maintain my score.\"}", "{\"summary\": \"The paper introduces a framework named GUFR, which aims to generate high-confidence recommendation sets satisfying fairness criteria. The authors provide statistical guarantees, and the results on two public datasets demonstrate the effectiveness of proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The authors provide theoretical analysis.\\n\\nS2. The experimental results validate that the proposed framework enables various baseline models to meet the defined fairness conditions and risk control.\", \"weaknesses\": \"W1. Why the fairness metric is defined as the difference in recommendation performance (e.g., NDCG) between two groups requires further discussion. For instance, if Group G1 and Group G2 consist of males and females, respectively, recommending gender-preferred items to each group might achieve similar recommendation performance, but such recommendations could be discriminatory based on gender, which is unfair. In my view, the balanced groups (not limited to just gender) seem to be an important underlying assumption, which is not discussed.\\n\\nW2. The objective function of minimizing the size of the recommendation set is puzzling. While a smaller set might help reduce uncertainty in recommendations, this conflicts with the objective of encouraging users to purchase or click as much as possible.\", \"questions\": \"Q1. Why does the constrained optimization problem defined by equation (7) exist a solution? It seems that requiring all users to satisfy the risk and fairness constraints does not necessarily guarantee the existence of a solution.\\n\\nQ2. Is the parameter $\\\\lambda$ (defined in line 144-145) a scalar on $\\\\mathbb{R}$? 
What is $\\\\Delta$ in the update formula for parameter $\\\\lambda$ in line 11 of Algorithm 1, and why can such update formula obtain the **optimal** $\\\\lambda$ that satisfies the required conditions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your prompt reply\", \"comment\": \"Thank you for the authors' prompt response. I understand the challenges associated with the lack of publicly available code. The authors have adequately addressed my concerns, and I would like to raise my score from 3 to 6.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful feedback and valuable suggestions. Below, we address each concern and question raised.\\n\\n---\\n\\n## Why is the fairness metric defined as the difference in recommendation performance (e.g., NDCG) between groups? Could this lead to discriminatory recommendations, particularly with gender-based preferences?\\n\\n### Answer: \\nWe understand the reviewer\\u2019s valid concern regarding fairness metrics based on group performance differences. However, achieving similar performance across groups is not inherently discriminatory if the recommendations reflect actual user preferences. For example, if data shows that male users predominantly prefer action novels while female users prefer romance novels, tailoring recommendations to these preferences enhances user satisfaction and reflects personalization, not discrimination. \\n\\nWe believe discrimination arises when recommendations enforce stereotypes, such as assuming all males prefer action novels, thereby excluding individuals with differing preferences. 
Our framework ensures fairness by tailoring recommendations based on individual preferences within groups, rather than generalizing or enforcing stereotypes. Therefore, our approach does not lead to such discriminatory recommendations.\\n\\n---\\n\\n## Why does the objective of minimizing recommendation set size conflict with encouraging user engagement (e.g., clicks or purchases)?\\n\\n### Answer: \\nWe acknowledge and agree with the reviewer that user engagement is a critical goal for recommender systems. However, concise recommendation sets can enhance engagement by reducing cognitive load on users as they are presented with only highly relevant items. This approach minimizes decision-making burden, improves user satisfaction, and fosters trust in the system. Additionally, from the platform\\u2019s perspective, compact sets optimize computational efficiency, making the system scalable for large-scale applications while maintaining user retention. \\n\\nAlthough larger sets may increase the volume of items displayed, they risk overwhelming users and reducing the quality of engagement (e.g., click or purchase of an item). Our method strikes a balance by ensuring high-confidence, targeted recommendations that support usability and improve engagement.\\n\\n---\\n\\n## Why does the constrained optimization problem in equation (7) guarantee a solution?\\n\\n### Answer: \\nWe thank the reviewer for raising this important question. The existence of a solution for Eq. (7) in our framework is supported by both theoretical and empirical evidence. Theoretically, we utilize Theorems 1 and 2 to provide upper-bound constraints for risk and fairness metrics, while Theorem 3 guarantees the feasibility of solutions through concentration inequalities. These theorems are detailed in Sections 4 and 5, with proofs in Appendix A2. 
\\n\\nAlgorithm 1 iteratively adjusts group-specific parameters ($\\\\lambda_{G1}$ and $\\\\lambda_{G2}$) to align empirical risk and fairness bounds with pre-defined thresholds, ensuring convergence. Experiments on diverse datasets demonstrate consistent solution feasibility, even with varying group characteristics, as discussed in Section 6 (highlighted in red).\\n\\n---\\n\\n## What is the parameter $\\\\lambda$ in line 144-145, and how does the update formula in Algorithm 1 ensure an optimal solution?\\n\\n### Answer: \\nThe parameter $\\\\lambda$ is a scalar in $[0,1]$, representing the confidence threshold for including relevant items in the prediction set. The update formula in Algorithm 1 iteratively adjusts $\\\\lambda$ to balance prediction set size, risk control, and fairness constraints. \\n\\n$\\\\Delta$, the step size used in the updates, is manually validated to ensure an appropriate balance between convergence speed and prediction set size. \\n\\nAs shown in Theorem 3, the updated $\\\\lambda$ satisfies the required fairness and risk conditions. We further utilize Theorem 4 to guarantee that the final prediction set size is minimized without violating risk and fairness guarantees. Together, these elements provide theoretical support for the parameter update mechanism in Algorithm 1.\\n\\n---\\n\\nWe sincerely appreciate the reviewer\\u2019s insights and comments. Thank you for your time and consideration.\"}", "{\"comment\": \"We thank the reviewer for their detailed feedback thoughtful response. Below, we try to address their concerns:\\n### **Question** The concept of \\u2018user fairness\\u2019 ...\\n### Answer\\nWe thank the reviewer for raising this point. Our fairness metric builds on well-known works in user-sided group fairness [1] and uses $\\\\Delta F(\\\\lambda_{G_1}, \\\\lambda_{G_2})$ to measure performance disparity (e.g., NDCG, Hit Rate) between sensitive groups, ensuring no group is systematically disadvantaged. 
This aligns with fairness goals like equalized odds. Since groups are defined by sensitive attributes, minimizing $\\Delta F$ ensures equitable treatment, reflecting fairness for these attributes. While operationalized as prediction stability, the objective of balancing outcomes is consistent with classical fairness notions.\n\n---\n\n### **Question**: Is it possible to explicitly write .... \n\n### Answer\nWe appreciate the reviewer\u2019s question on explicitly writing out the objective function. Below, we present the full formulation and reference the equations in the submission.\n\n### Objective Function\n\nThe optimization objective is to find the thresholds $\\hat\\lambda_{G_1}, \\hat\\lambda_{G_2}$ (Line 215) such that:\n\n$$\n\\hat\\lambda_{G_1}, \\hat\\lambda_{G_2} = \\sup ( \\{ \\lambda_{G_1}, \\lambda_{G_2} \\in [0, 1] \\mid R^+(\\lambda_G, \\delta) \\leq \\alpha, \\; \\Delta F^+(\\lambda_{G_1}, \\lambda_{G_2}, \\hat{\\delta}) \\leq \\eta \\} ), \n$$\", \"where\": \"1. $R^+(\\lambda_G, \\delta)$ is the upper bound of the risk metric (refer to the Risk Metric equation below).\n2. 
$\\\\(\\\\Delta F^+(\\\\lambda_{G_1}, \\\\lambda_{G_2}, \\\\hat{\\\\delta})\\\\)$ is the upper bound of fairness disparity (refer to the Fairness Metric equation below).\\n\\n### Risk Metric\\nThe group-wise risk metric (Line 247) ensures that the true item is not excluded from the prediction set with high probability:\\n$$\\nR^+(\\\\lambda_G, \\\\delta) = \\\\sup \\\\left[ \\\\hat{R}(\\\\lambda_G) : \\\\text{BinomCDF}(n\\\\hat{R}(\\\\lambda_G), n, \\\\alpha) \\\\leq \\\\delta \\\\right].\\n$$ \\n\\nThis equation guarantees that the group-level risk, $\\\\(\\\\hat{R}(\\\\lambda_G)\\\\),$ is statistically bounded with respect to the confidence parameter $\\\\(\\\\delta\\\\)$.\\n\\n\\n### Fairness Metric\\nThe fairness disparity metric (Line 260) is controlled using the following upper bound:\\n\\n$$\\n\\\\Delta F^+(\\\\lambda_{G_1}, \\\\lambda_{G_2}, \\\\hat{\\\\delta}) = \\\\Delta F(\\\\lambda_{G_1}, \\\\lambda_{G_2}) + \\\\sqrt{\\\\frac{2 \\\\sigma_F^2 \\\\log \\\\left(\\\\frac{2}{\\\\hat{\\\\delta}}\\\\right) + \\\\frac{2}{3} \\\\log \\\\left(\\\\frac{2}{\\\\hat{\\\\delta}}\\\\right)}{n_1 + n_2}},\\n$$\\n\\nwhere $\\\\(\\\\Delta F(\\\\lambda_{G_1}, \\\\lambda_{G_2})\\\\)$ represents the fairness disparity between the two groups.\\n\\n\\n\\n### Risk Definition\\nThe risk metric $\\\\(R(\\\\lambda_G)\\\\) $ (line 251) is defined explicitly as:\\n\\n$$\\nR(\\\\lambda_G) = \\\\frac{1}{|G|} \\\\sum_{u \\\\in G} L(i_{true}, \\\\phi_{\\\\lambda_G}(u)),\\n$$\\n\\nwhere $\\\\(L(i_{true}, \\\\phi_{\\\\lambda_G}(u))\\\\)$ is the loss incurred if the true item $\\\\(i_{true}\\\\)$ is not in the prediction set $\\\\(\\\\phi_{\\\\lambda_G}(u)\\\\)$.\\n\\n\\n### Fairness Disparity Definition\\nThe fairness disparity metric $\\\\(\\\\Delta F(\\\\lambda_{G_1}$, $\\\\lambda_{G_2})\\\\) $ (Line 161) is defined as:\\n\\n$$\\n\\\\Delta F(\\\\lambda_{G_1}, \\\\lambda_{G_2}) := \\\\left| \\\\frac{1}{|G_1|} \\\\sum_{u \\\\in G_1} M(\\\\phi_{\\\\lambda_{G_1}}(u)) - \\\\frac{1}{|G_2|} 
\\\\sum_{u \\\\in G_2} M(\\\\phi_{\\\\lambda_{G_2}}(u)) \\\\right|,\\n$$\\n\\nwhere $\\\\(M(\\\\phi_{\\\\lambda_G}(u))\\\\) $ represents the metric of interest (e.g., Hit Rate or NDCG) for 2 user groups.\\n\\n---\\n\\n### Addressing Convexity and Practical Feasibility\\n\\nWe sincerely appreciate the reviewer\\u2019s insightful comments regarding the convexity of the problem and the feasibility of finding an optimal solution. Below, we provide clarifications and additional details to address these concerns.\\n\\nExistence of a Solution\\n- Assumption 2 (Line 665) guarantees the existence of a feasible $\\\\lambda_{\\\\text{min}}$ that satisfies both the risk and fairness constraints. \\n- While $\\\\lambda_{\\\\text{min}}$ theoretically corresponds to including all items in the prediction set, such extremes are rarely required in practice. The algorithm converges well before reaching this boundary.\\n\\nConvexity and Monotonicity\\n- Due to the monotonic nature of both the objective function and the constraints (risk and fairness metrics), the optimization can be efficiently reduced to an iterative grid-based search. \\n- This structure allows the optimization to behave effectively as a convex problem within the bounded search space, ensuring that an optimal solution can always be found.\\n\\nPractical Feasibility\\n- Our algorithm systematically adjusts $\\\\lambda_{G_1}$ and $\\\\lambda_{G_2}$ in small, incremental steps ($\\\\Delta \\\\lambda$), leveraging monotonicity to ensure efficient convergence.\\n- Additionally, empirical results (Tables 1\\u20134) consistently demonstrate that the algorithm is capable of deriving prediction sets that satisfy both constraints across all evaluated datasets.\\n\\nWe are grateful to the reviewer for the opportunity to provide these explanations.\\n\\n[1] Li, Yunqi, et al. 
\\\"User-oriented fairness in recommendation.\\\" WWW' 2021\"}", "{\"comment\": \"Thank authors for your detailed responses.\\nI appreciate authors' responses that indeed clarify some of my concerns. However, I will maintain my original score.\\nI would like to share my comments on authors' responses.\\n\\nFirst, regarding (W1), what I intended to express is that I do not understand the reasoning behind using a user loss/risk metric that differs from the ranking metric. These two metrics clearly measure the same concept, and the current setup feels arbitrary and unrealistic.\\nAs for the theoretical analysis with non-Bernoulli losses, I agree with authors that it could be addressed using other probabilistic/concentration inequalities.\\n\\nThe most important point, however, is that the adoption of Leave-One-Out (LOO) for experimental evaluation and the theoretical analysis should be clearly stated in the discussion. Making such assumptions implies that the proposed theory and method hold only in scenarios where there is exactly one positive item per user. This diverges significantly from the settings of many real-world applications, severely narrowing the applicability of the method. Providing theoretical guarantees under such unrealistic assumptions is, in my opinion, a minor contribution.\\n\\nAs an aside, authors state, \\\"While alternative metrics like DCG could also be considered, NDCG aligns better with our framework's ranking quality and fairness goals.\\\"\\nHowever, under the LOO assumption, nDCG is equivalent to DCG because the ideal DCG is 1 for all $K=1,\\\\dots$. \\nI strongly recommend that authors first state the LOO assumption clearly and include the monotonicity of nDCG under LOO as a lemma.\"}", "{\"comment\": \"We sincerely thank the reviewer for their positive feedback and for raising the score. We greatly appreciate the reviewer's suggestion to refine our problem to ensure the grouping assumption is clearly stated upfront. 
We will include this change in the revised submission.\\n\\nOnce again, we are grateful for your constructive feedback, which has helped us improve the clarity and rigor of our paper.\\nMany thanks.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful response. We are pleased that we were able to address some of their concerns, and we address the remaining concerns below:\\n\\na) We greatly appreciate the reviewer recommending the works [1][2][3][4] in their initial review and [1][2] again in this review. These papers are foundational to user-oriented fairness research, and we have cited them accordingly to acknowledge their significant contributions to advancing the field.\\n\\nHowever, we faced practical constraints in incorporating these methods:\\n\\n- For three papers [1][2][3], publicly available code was not accessible. Despite proactively contacting the authors twelve days ago to request implementation details, we have not yet received a response. \\n- One paper [4] provided code, but its focus on conversational recommendation systems was not directly applicable to the objectives of our work.\\n\\nWe respect the importance of these methods and remain committed to incorporating them into our future experiments once the necessary details become available.\\n\\n---\\n\\nb) Given these constraints, we selected two accessible baselines to ensure a rigorous evaluation within the available timeframe:\\n\\n1. A pre-processing method from 2021 [5], which provided both recency and relevance compared to the baseline [7] from 2020 cited in the shared works. \\n2. A post-processing method from 2021 [6], which was directly cited as a baseline in the shared works.\\n\\nWhile these baselines were chosen out of necessity rather than preference, they allowed us to robustly evaluate our approach in the context of established fairness methods. 
We remain committed to expanding our comparisons to include the reviewer-suggested methods in future work once implementation details are accessible.\\n\\n---\\n\\nHowever, we emphasize that our work is the first to introduce a novel fairness framework leveraging conformal prediction and providing statistical guarantees for fairness and accuracy. This framework addresses critical gaps in heuristic-based methods, including those shared by the reviewer [1][2][3][4][6]. Our results highlight several strengths of our approach:\\n\\n- **Dataset-agnostic**: Effective across diverse datasets. \\n- **Model-agnostic**: Applicable to various recommendation models. \\n- **Group-agnostic and parameter-free**: Ensures fairness without requiring sensitive parameter tuning.\\n\\nBy addressing these challenges with robust statistical guarantees, our framework builds upon the foundation established by the reviewer\\u2019s cited works while pushing the boundaries of fairness in recommendation systems.\\n\\n---\\n\\n## Conclusion\\n\\nWe hope this response clarifies the rationale behind our baseline selection, proactive efforts to address the constraints, and the significant contributions of our work. We again thank the reviewer for their constructive feedback, which has been instrumental in refining our study, and we look forward to further advancing fairness research in collaboration with the broader community.\\n\\n---\\n\\n**References**\\n\\n1. Han Z, Chen C, Zheng X, et al. In-processing User-Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems. \\n2. Han Z, Chen C, Zheng X, et al. Hypergraph Convolutional Network for User-Oriented Fairness in Recommender Systems. \\n3. Han Z, Chen C, Zheng X, et al. Intra- and Inter-Group Optimal Transport for User-Oriented Fairness in Recommender Systems. \\n4. Liu Q, Feng X, Gu T, et al. FairCRS: Towards User-Oriented Fairness in Conversational Recommendation Systems. \\n5. 
Rashidul Islam et al., Debiasing Career Recommendations with Neural Fair Collaborative Filtering. \\n6. Li et al., User-Oriented Fairness in Recommendation. \\n7. Wen et al., Distributionally-Robust Recommendations for Improving Worst-Case User Experience.\"}", "{\"summary\": \"This paper addresses the critical issue of fairness in recommender systems, an area of increasing importance in machine learning and artificial intelligence. The authors propose an approach aimed at enhancing fairness in recommendations by introducing a framework that seeks to minimize the disparity in performance between advantaged and disadvantaged user groups. The paper provides a comprehensive overview of their methodology, including various modeling stages and experimental setups designed to evaluate the effectiveness of their proposed solution. The authors conduct experiments to prove the effectiveness of the proposed method.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Fairness is a crucial area of research in recommender systems, and the authors' focus on this topic is commendable.\\n2. The overall methodology presented in the paper is logical and coherent.\", \"weaknesses\": \"The authors demonstrate a lack of deep understanding and research in the field of fairness in recommender systems. Specific issues include the following:\\n\\n1. The authors assert that existing methods \\\"fail to construct a generalized fairness-based recommendation framework for different applications.\\\" However, the term \\\"generalized fairness method\\\" is not clearly defined. How does the proposed method differ from existing approaches? This statement lacks clarity and is not supported by references, theoretical backing, or experimental evidence.\\n\\n2. The authors use interaction data to distinguish between advantaged and disadvantaged users, aiming to reduce the performance gap between these two groups. 
To my knowledge, this aligns closely with user-oriented fairness [1], with the only distinction being the constraint of the minimum prediction set in this paper. However, the authors do not compare their approach with any existing user-oriented fairness methods [2,3,4,5], nor do they analyze the advantages of their method over current approaches. It is essential to compare against existing methods in the experiments, and the authors could add constraints on the top-k values to introduce an average set size constraint for current methods.\\n\\n3. While the paper aims to guarantee user fairness in recommendations, it only conducts experiments on user-oriented fairness without addressing attribute fairness, e.g., with gender or race as the sensitive attribute. Such experimental conclusions do not support the claim that the approach is generalized.\\n\\n4. The figures in the paper are too small and unclear, making it difficult to interpret the results effectively.\\n\\n[1] Li Y, Chen H, Fu Z, et al. User-oriented fairness in recommendation[C]//Proceedings of the web conference 2021. 2021: 624-632.\\\\\\n[2] Han Z, Chen C, Zheng X, et al. In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems[C]//Proceedings of the 31st ACM International Conference on Multimedia. 2023: 6190-6201.\\\\\\n[3] Liu Q, Feng X, Gu T, et al. FairCRS: Towards User-oriented Fairness in Conversational Recommendation Systems[C]//Proceedings of the 18th ACM Conference on Recommender Systems. 2024: 126-136.\\\\\\n[4] Han Z, Chen C, Zheng X, et al. Hypergraph Convolutional Network for User-Oriented Fairness in Recommender Systems[C]//Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2024: 903-913.\\\\\\n[5] Han Z, Chen C, Zheng X, et al. Intra-and Inter-group Optimal Transport for User-Oriented Fairness in Recommender Systems[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 
2024, 38(8): 8463-8471.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful and constructive feedback, which has been invaluable in improving our manuscript. Below, we address each concern.\\n\\n---\\n\\n## Question: The paper exhibits presentation issues, including typos (e.g., line 53: \\\"However, ,\\\"), missing punctuation after equations, and figures that are too small and unclear.\\n\\n### Answer: \\nWe thank the reviewer for highlighting these issues. We have corrected all typographical errors, including the one on line 53, and ensured that equations are properly punctuated. We also updated figures with larger font sizes for labels and legends for improved readability.\\n\\n---\\n\\n## Question: Why differentiate users based on interactions rather than sensitive attributes, such as race?\\n\\n### Answer: \\nWe used interaction-based grouping [1] as it leverages readily available user engagement data, making it practical and adaptable across recommendation scenarios.\\n\\nTo address the concern of the reviewer, we further conducted experiments on three more datasets ( Last.fm, MovieLens, and Book-Crossing) using three sensitive attributes: region, gender, and age. The results in **Section 6.3.1, Tables 2\\u20134**, demonstrate that GUFR achieves fairness and risk guarantees across diverse contexts.\\n\\nWhile race is an important attribute, we couldn\\u2019t find any well-known publicly available datasets, possibly due to privacy and ethical constraints. 
However, based on our results with other common sensitive attributes (region, gender, and age), we are confident that our system can effectively address biases related to race if suitable datasets become available.\\n\\n\\n---\\n\\n## Question: The paper does not compare the proposed method with existing user-oriented fairness methods.\\n\\n### Answer: \\nWe thank the reviewer for highlighting this concern. To address this, we compared our framework on performance and time efficiency with several in-processing [2] and post-processing [1] fairness methods (e.g., NFCF, MFCF, NeuMF-UFR, GMF-UFR). The results in **Section 6.3.1, Tables 1\\u20134, and Section 6.3.3, Table 5**, highlight that GUFR outperforms these baselines in fairness and risk guarantees while being computationally more efficient.\\n\\n---\\n\\n## Question: The authors only report the performance of the recommendation model with GUFR, without providing the original performance of the model.\\n\\n### Answer: \\nWe have updated our Tables 1\\u20134 in Section 6.3.1 with experiments comparing models with and without GUFR. The results highlight that GUFR consistently enhances fairness and ensures risk guarantees, while baseline models often fail in either of the two aspects. We have also uploaded relevant files and datasets in the codebase.\\n\\n---\\n\\n## Question: Why is it necessary to guarantee the minimum average prediction set size?\\n\\n### Answer: \\nWe thank the reviewer for raising this very important point. Minimum set size guarantees are essential for user retention and the long-term sustainability of recommendation systems. Specifically, smaller sets ensure concise, personalized recommendations that meet user expectations, enhancing their experience. 
Additionally, from the platform perspective, it optimizes resources by reducing computational overhead and latency, thereby ensuring its sustainability.\\n\\n---\\n\\nWe appreciate your thoughtful feedback, which has significantly improved the rigor and clarity of our work. Thank you for your time and consideration.\\n\\n---\\n\\n### References:\\n\\n1. Li et al. (2021, April). User-oriented fairness in recommendation. In Proceedings of the Web Conference 2021 (pp. 624-632). \\n2. Rashidul Islam et al., Debiasing Career Recommendations with Neural Fair Collaborative Filtering. (WWW '21).\"}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful and constructive feedback. Below, we address the main concerns raised.\\n\\n \\n\\n--- \\n\\n \\n\\n## Question: There seems to be inconsistency between the use of 0-1 loss... and the assumption of Bernoulli-distributed losses in Theorem 1... \\n\\n \\n\\n### Answer: \\n\\nWe thank the reviewer for raising this critical point. We utilize risk as a simple 0-1 loss to guarantee that each user group receives a minimally acceptable recommendation. This is critical in recommender systems, as this measure ensures no user group is entirely dissatisfied. In contrast, the goal of fairness is to provide equitable outcomes across user groups (e.g., advantaged vs. disadvantaged users). We agree with the reviewer that simply using the Hit Rate difference could simplify the framework and ensure consistency; however, the NDCG difference is essential to capture ranking fairness, ensuring that the quality of recommendations is consistent across groups. Therefore, we used performance disparity (in terms of Hit Rate Diff. and NDCG Diff.) as the fairness metric. Our choice of metrics is grounded in the existing literature on fairness in user-sided recommender systems [1].\\n\\n\\nWe wish to explain that Theorem 1 is used exclusively for risk guarantees associated with the 0-1 loss. 
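To make the contrast between the two metrics concrete, here is a minimal sketch of the 0-1 loss behind the risk guarantee and the leave-one-out NDCG behind the fairness disparity. The toy scores and function names are our own illustration, not the paper's code:

```python
import math

def zero_one_loss(scores, true_item, lam):
    # 1 if the true item is excluded from {i : score_i >= lam}, else 0
    return 0 if scores[true_item] >= lam else 1

def loo_ndcg(scores, true_item, lam):
    # Under leave-one-out there is a single relevant item, so ideal DCG = 1
    # and NDCG = 1/log2(rank + 1) when the true item makes the cut, else 0.
    kept = sorted((i for i, s in enumerate(scores) if s >= lam),
                  key=lambda i: -scores[i])
    if true_item not in kept:
        return 0.0
    rank = kept.index(true_item) + 1  # 1-based rank within the ordered set
    return 1.0 / math.log2(rank + 1)

scores = [0.9, 0.8, 0.75, 0.3]  # one toy user; item 1 is the true item

# The 0-1 loss only asks *whether* the true item is covered ...
assert zero_one_loss(scores, true_item=1, lam=0.5) == 0
# ... while NDCG also rewards *where* it lands (rank 2 here).
assert abs(loo_ndcg(scores, 1, 0.5) - 1.0 / math.log2(3)) < 1e-12

# As lam decreases, the set only grows, so LOO NDCG never decreases.
vals = [loo_ndcg(scores, 1, lam) for lam in (0.95, 0.85, 0.7, 0.5, 0.2)]
assert vals == sorted(vals)
```

The last assertion illustrates the monotonicity in the prediction-set size that the fairness analysis relies on under the leave-one-out setting.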
While Theorem 1 ensures the true item is included in the prediction set, providing coverage and reliability for all user groups, as the reviewer pointed out, it is not valid for metrics like NDCG. Therefore, for the NDCG loss, we rely on Theorem 2, which employs Bernstein inequalities. These inequalities provide concentration bounds for metrics with non-binary outcomes and bounded variances, like NDCG, unlike the Bernoulli assumption, which is restricted to binary losses. \\n\\n \\n\\n--- \\n\\n \\n\\n## Question: The empirical evaluation lacks comparisons between models with and without the proposed method.... \\n\\n \\n\\n### Answer: \\n\\nWe appreciate this important point, as it helped refine the empirical aspect of our work. To address this concern, we conducted additional experiments to compare the effectiveness of the proposed GUFR framework. Specifically: \\n\\n \\n\\n1. Base Models Comparison: We compared base models with and without GUFR, demonstrating that GUFR enhances fairness and ensures risk guarantees while maintaining comparable performance. \\n\\n2. Baseline Fairness Comparison: We evaluated GUFR against in-processing and post-processing fairness baselines. Results show GUFR consistently outperforms these baselines, which often fail to meet one or more fairness or risk thresholds. \\n\\n3. Computational Efficiency: We analyzed computational efficiency, where GUFR demonstrated faster runtime than fairness baselines, validating its time efficiency. \\n\\n \\n\\nThe updated results **(Tables 1\\u20135)** demonstrate GUFR\\u2019s effectiveness in ensuring fairness and reliability while being computationally efficient. All relevant code and data have been made available for reproducibility. \\n\\n \\n\\n--- \\n\\n \\n\\n## Question: How is the 0-1 loss defined when users have multiple relevant items? 
\\n\\n \\n\\n### Answer: \\n\\nOur framework currently adopts the Leave-One-Out (LOO) methodology, a standard in recommendation systems research [3][4], where each user has one true item in the test set. This setting simplifies evaluation and aligns with practical applications like music streaming or news recommendations. \\n\\n \\n\\nWhile our framework focuses on single-item scenarios, we acknowledge that extending it to handle multiple relevant items will further enhance the framework\\u2019s applicability to diverse contexts. \\n\\n \\n\\n--- \\n\\n \\n\\n## Question: NDCG needs ordered set .... Additionally, monotonicity of NDCG is not trivial..\\n\\n \\n\\n### Answer: \\n\\nWe thank the reviewer for pointing this out. In our implementation, user-item interaction scores are sorted before computing NDCG, ensuring consistency with its definition as a ranking measure. \\n\\n \\n\\nUnder the LOO setting, NDCG is non-decreasing in the size of the prediction set. Specifically, as $\\\\lambda$ decreases, more items are added to the prediction set, which increases the probability of including the relevant item, thereby improving NDCG. This satisfies the monotonicity condition required for the fairness metric ($\\\\Delta F$). While alternative metrics like DCG could also be considered, NDCG aligns better with our framework's ranking quality and fairness goals. \\n\\n \\n\\n--- \\n\\n \\n\\nWe sincerely appreciate the reviewer\\u2019s thoughtful feedback and hope our responses address the concerns raised. Thank you again for the opportunity to improve our work. \\n\\n \\n\\n--- \\n\\n \\n\\n### References \\n\\n \\n\\n1. Li, Yunqi, et al. User-oriented fairness in recommendation. \\n\\n2. Islam, et al. Debiasing Career Recommendations with Neural Fair Collaborative Filtering. \\n\\n3. Xu et al. (2023). Toward Equivalent Transformation of User Preferences in Cross Domain Recommendation \\n\\n4. Han et al. (2023). 
In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems\"}", "{\"title\": \"Thanks for the authors' rebuttal\", \"comment\": \"Thank you for the detailed responses. While some of my concerns have been addressed, I still find the lack of comparisons with state-of-the-art (SOTA) methods concerning. The authors chose to compare with [1, 2] from 2021 instead of more recent SOTA methods such as [3, 4], making the experimental results unreliable.\\n\\n[1] Rashidul Islam et al., Debiasing Career Recommendations with Neural Fair Collaborative Filtering.\\n[2] Li et al., User-oriented fairness in recommendation.\\n[3] Han et al., In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems.\\n[4] Han et al., Hypergraph Convolutional Network for User-Oriented Fairness in Recommender Systems.\"}", "{\"summary\": \"This paper addresses the important issue of fairness in recommender systems, proposing a novel framework aimed at improving fairness outcomes for users. The authors thoroughly analyze their proposed method, providing comprehensive proofs and assessments of its effectiveness. They conduct extensive hyperparameter experiments and make their source code publicly available, promoting transparency and reproducibility. Despite these contributions, the paper has several presentation and methodological shortcomings that require attention.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The overall organization of the presentation is smooth, making it easy to read and understand the authors' intentions.\\\\\\nS2. The authors provide a comprehensive proof and analysis of the proposed method.\\\\\\nS3. The paper includes extensive hyperparameter experiments and openly shares the source code.\\\\\", \"weaknesses\": \"W1. The paper exhibits numerous presentation detail issues, including:\\n1. 
There are noticeable typos, such as the one found in line 53: \\\"However, ,\\\".\\n2. Some equations lack punctuation following them.\\n3. The figures in the paper are of poor quality and too small to effectively convey information.\\n\\nW2. Additionally, the experimental section presents significant issues:\\n\\n1. Why do the authors differentiate users based on interactions rather than sensitive attributes, such as race?\\n2. The paper does not compare their approach with existing user-oriented fairness methods.\\n3. The authors only report the performance of the recommendation model combined with the GUFR method, without providing the original performance of the recommendation model. This omission makes it difficult to determine if GUFR actually enhances the original model's performance.\", \"questions\": \"My main concerns have already been proposed in the weaknesses section; I would like to pose an additional question:\\\\\\nQ1. Why is it necessary to guarantee the minimum average prediction set size? What significance does this have for recommender systems, and are there any related studies on this topic?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful and constructive feedback, which has been invaluable in improving our manuscript. Below, we address each concern.\\n\\n---\\n\\n## Question: The term \\\"generalized fairness method\\\" is not clearly defined. Additionally, how are the theoretical claims supported, and how do the experiments validate these claims?\\n\\n### Answer: \\nWe acknowledge the reviewer\\u2019s concern that the term \\\"generalized fairness method\\\" was insufficiently defined. 
To clarify, we revised this in the manuscript (highlighted in red) and specified that no existing statistically guaranteed fairness model ensures fairness and accuracy across datasets and groupings.\\n\\n**Theoretical support:** Existing fairness methods rely on heuristics, requiring dataset-specific tuning and assuming implicit data distributions. In contrast, our approach uses conformal prediction [1] and concentration inequalities [2] to provide statistical guarantees without distribution assumptions. We use Theorems 1 and 2 in Section 4 to establish upper bounds on risk and fairness, while Theorems 3 and 4 in Section 5 ensure we achieve confidence-controlled fairness and accuracy while minimizing prediction set sizes.\\n\\n**Experimental validation:** We expanded our experiments to include diverse datasets and sensitive attributes: AmazonOffice (e-commerce, grouped by interactions), MovieLens (movies, grouped by gender), Last.fm (music, grouped by region, total interactions, and popular item consumption), and Book-Crossing (books, grouped by age). The results **(Section 6.3.2)** validate that the GUFR framework is a generalized fairness method as it consistently achieves fairness and risk guarantees across diverse contexts.\\n\\n**Novelty:** To the best of our knowledge, our work is the first to use conformal prediction to ensure fairness and accuracy in recommendation systems, addressing limitations of heuristic approaches, such as scalability and lack of statistical guarantees. \\n\\n---\\n\\n## Question: The paper does not compare the proposed method with existing user-oriented fairness methods.\\n\\n### Answer: \\nWe thank the reviewer for raising this important concern. 
While GUFR is a post-processing method like user-oriented fairness [3], it introduces a dynamic prediction set mechanism with statistical guarantees for fairness and accuracy, distinguishing it from the heuristic-based reranking approach.\\n\\n**Comparisons:** To address this, we compared GUFR with additional in-processing [4] and post-processing [3] fairness baselines, fixing prediction set sizes for direct comparisons in Section 6.3.1. We further experimented with the time efficiency analysis in Section 6.3.3. The results show that GUFR outperforms these baselines in fairness and risk guarantees while being computationally more efficient. \\n\\nWe are grateful to the reviewer for sharing the papers [5,6,7]. We acknowledge their importance in the domain of fairness in RS and have cited them accordingly in our updated document. \\n\\n---\\n\\n## Question: The paper claims to address generalized fairness but only evaluates user-oriented fairness, not attribute fairness (e.g., gender, race).\\n\\n### Answer: \\nOn the reviewer\\u2019s suggestion, we have extended our evaluation to include attribute-based fairness using gender, age, and region as sensitive attributes (details are in Section 6.2). \\n\\nWhile race is an important attribute, we couldn\\u2019t find any well-known publicly available datasets, possibly due to privacy and ethical constraints. However, based on our results with other common sensitive attributes (region, gender, and age), we are confident that our system can effectively address biases related to race if suitable datasets become available.\\n\\n---\\n\\n## Question: Figures are too small and unclear, making results difficult to interpret.\\n\\n### Answer: \\nWe have updated figures with larger font sizes for legends and labels to improve clarity and readability. 
\\n\\n---\\n\\nWe sincerely thank the reviewer for their thoughtful feedback, which has significantly improved the rigor and presentation of our work.\\n\\n---\\n\\n### References\\n\\n1. Angelopoulos et al., A gentle introduction to conformal prediction and distribution-free uncertainty quantification. \\n2. Boucheron et al., Concentration Inequalities. Advanced Lectures on Machine Learning. \\n3. Li et al., User-oriented fairness in recommendation. \\n4. Rashidul Islam et al., Debiasing Career Recommendations with Neural Fair Collaborative Filtering. \\n5. Han et al., In-processing User Constrained Dominant Sets for User-Oriented Fairness in Recommender Systems. \\n6. Han et al., Hypergraph Convolutional Network for User-Oriented Fairness in Recommender Systems. \\n7. Han et al., Intra- and Inter-group Optimal Transport for User-Oriented Fairness in Recommender Systems.\"}", "{\"comment\": \"We sincerely thank the reviewer for their feedback and for highlighting practical considerations in real-world recommendation systems. Below, we address the concerns regarding the practicality of minimizing the average prediction set size:\\n\\n \\n\\n### 1. The role of Prediction Set Size Minimization: \\n\\nOur work does not advocate for minimizing the prediction set size as a standalone objective but rather as part of a balance between user fairness and utility. Our approach does not compete with traditional fixed-size top-k recommendations but complements them. By guaranteeing a minimum average prediction set size, we aim to control the uncertainty in recommendations and ensure that all users receive meaningful and fair predictions. This approach complements ranking-based metrics like NDCG, which prioritize and rank items by relevance. While this paper did not initially focus on the real-world implementation of the minimum set size guarantee, the theoretical foundation it provides is directly extendable. One recent work by Kweon et al. 
[1] explores Top-Personalized-K Recommendation, aligning closely with our vision. \\n\\n---\\n### 2. Relevance to Real-World Systems: \\n\\nReal-world recommendation systems typically use an arbitrary fixed size k for all users, determined heuristically or by trial and error. This may not represent the best choice, leading to: a) increased cognitive load for users, who may receive overly long lists of recommendations; b) resource inefficiencies for platforms, particularly when unnecessary items are recommended; and c) exacerbated performance and fairness issues, since fixed-size sets do not account for individual user needs or disparities in group fairness. Our approach addresses these concerns in the following ways: \\n\\na) Dynamic Determination of Optimal k: \\n\\nGiven a validation set, our framework identifies the minimum prediction set size for each user that satisfies predefined performance and fairness criteria with statistical guarantees (e.g., 95% confidence). To generalize across users, we can empirically employ an appropriate aggregation method (e.g., the mean) to compute a global k. This global k, obtained with the theoretical guarantees, can then be applied to unseen users, ensuring that fairness and performance guarantees hold across the system.\\n\\nFor example, consider an e-commerce platform like Amazon. Instead of heuristically fixing the prediction set size (e.g., k=10), our framework determines a k (e.g., k=7) that balances fairness and accuracy for all users. By applying this k system-wide, the platform ensures that recommendations are concise, personalized, and resource-efficient without sacrificing fairness or performance, increasing user satisfaction, retention, and the platform\\u2019s sustainability.\\n\\n \\n\\nb) Model Optimization and Fine-Tuning: \\n\\nThe calculated average k can serve as a benchmark for selecting the best-performing model among candidates. 
Alternatively, it can be used iteratively to fine-tune the underlying base recommendation model, enabling the system to achieve the desired levels of accuracy and fairness across users.\", \"for_example\": \"A video streaming platform like Netflix can use this approach to dynamically optimize k for new users, ensuring personalized recommendations that align with both user experience and platform resource constraints. Additionally, the feedback from k values can inform iterative improvements to their recommendation algorithms.\\n\\n \\n---\\n \\n\\n### 3. Revision in the paper: \\n\\nWe deeply appreciate the reviewer's point about the practicality of minimizing the average prediction set size. In response, we have revised the manuscript to show its applicability (Appendix A.5) as a tool for enhancing fairness and reducing uncertainty, compatible with fixed-size prediction sets and ranking-based metrics.\", \"reference\": \"[1] Kweon, Wonbin, et al. \\\"Top-Personalized-K Recommendation.\\\" Proceedings of the ACM on Web Conference 2024. 2024.\"}", "{\"comment\": \"Thank authors for your responses.\\n\\nChoosing ranking metrics such as nDCG is equivalent to making assumptions about the user/examination model. In my opinion, it is unnatural to define the minimum reliability for users independently of that user model. \\nThe current setup lacks a clear rationale apart from simplifying the theory and is not based on practical benefits.\\n\\nThe claim, \\\"These guarantees are probabilistic and are not dependent on the LOO setup, ensuring broader applicability,\\\" is false. This is because the monotonicity w.r.t. $\\\\lambda$ used throughout the proof (which is the same as the monotonicity w.r.t. ranking length) is only valid for some ranking metrics (e.g., nDCG) under the LOO setup.\\n\\nI agree that there can be cases where the LOO setup is appropriate for certain applications. 
However, the methodologies used in the experiments and the recommender methods being compared are those prevalent in the field of collaborative filtering, and in the task of collaborative filtering, the LOO setup is neither common nor practical.\\n\\nRegarding the claim, \\\"extending it to scenarios involving multiple relevant items can be achieved with minor modifications,\\\" I cannot agree with this. The proposed method assumes the monotonicity of a ranking metric with respect to ranking length for any rankings. Most such ranking metrics are unidimensional ones, such as recall, which do not imply precision. Therefore, I believe the proposed framework lacks generality.\\n\\n\\nI believe that theoretical contributions in this field are very important. However, providing guarantees by significantly altering the setup from a practical one to an unrealistic one diminishes the significance of that contribution. If technical difficulties require changing the setup to something unrealistic, I think it is important to acknowledge that change and engage in careful discussion about it.\"}", "{\"title\": \"The problem formulation is not appropriate, the learning algorithm requires more analysis, and I will maintain my score.\", \"comment\": \"Thanks for your rebuttal, but some questions require further discussion.\\n\\n**Regarding Problem Definition:**\", \"the_objectives_of_this_paper_can_be_summarized_in_two_aspects\": \"- To generate the minimal item prediction set for each user, ensuring it covers their true preferences with high confidence. \\n- At the same time, to guarantee comparable recommendation performance (e.g., purchase rate) between two known demographic groups. \\n\\nThe concept of \\u2018user fairness\\u2019 proposed in this paper differs from the commonly used concept of \\u2018fairness\\u2019 for protecting sensitive attributes. Instead, the fairness here is intended to ensure prediction **stability**.
Therefore, the use of the term \\\"fairness\\\" may not be entirely appropriate. \\n\\n**Regarding the Learning Algorithm:** \\n\\nIs it possible to explicitly write out the objective function of $\\\\lambda = (\\\\lambda_{G1}, \\\\lambda_{G2})$? Given the current definition of the 0-1 loss (Equation (1)), constraints such as NDCG or HR, and the objective of minimizing the size of the prediction set, all these constitute a non-convex optimization problem. Although the authors derive upper bounds and demonstrate the existence of an optimal value, **this does not imply the existence of an optimal solution**. Moreover, even if an optimal solution exists, **there is no guarantee that it can be practically found**. \\n\\nBased on these two weaknesses, I will maintain my score.\"}
DqPDavAEXt
RDAS: A Low Latency and High Throughput Raw Data Engine for Machine Learning Systems
[ "Weijian Li", "Han Liu" ]
In the era of large pretrained models, a key challenge in deep learning is the underutilization of fine-grained raw data, often replaced by information-lossy normalized data. To bridge this gap, we introduce the Raw Data Aggregation System for Machine Learning (RDAS). RDAS offers a seamless data interface, enabling machine learning systems to directly access unstructured, high-resolution raw event data with minimal latency. At the heart of RDAS lies the Message Book Model, an innovative data representation framework that underpins the system’s ability to handle event data at nanosecond precision. RDAS is structured around three conceptual layers: (i) the Message Layer, featuring dual message aggregators for sequential and random access, which compile raw messages into timestamp specific message book snapshots; (ii) the Feature Layer, which derives user-specified data features from the message book for any given moment; and (iii) the Verification Layer, tasked with real-time error monitoring and integrity assurance of the message book. A C++ implementation of these layers ensures RDAS’s exceptional performance. To validate its effectiveness, we applied RDAS in an Internet of Things (IoT) scenario, demonstrating significant performance enhancements over existing methods in terms of data throughput and latency. Our results underscore RDAS’s potential to revolutionize data processing in machine learning, offering a pathway to leverage the full spectrum of raw data’s granularity and richness.
[ "machine learning system", "data engine", "low latency", "high throughput" ]
https://openreview.net/pdf?id=DqPDavAEXt
https://openreview.net/forum?id=DqPDavAEXt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "q6ApNW3Cyx" ], "note_type": [ "comment" ], "note_created": [ 1729061065081 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"desk_reject_comments\": \"violating formating guideline\", \"title\": \"Submission Desk Rejected by Program Chairs\"}" ] }
Dq9VrVuLzV
SyntheOcc: Synthesize Geometric-Controlled Street View Images through 3D Semantic MPIs
[ "Leheng Li", "Weichao Qiu", "Yingjie CAI", "Xu Yan", "Qing LIAN", "Bingbing Liu", "Ying-Cong Chen" ]
The advancement of autonomous driving is increasingly reliant on high-quality annotated datasets, especially in the task of 3D occupancy prediction, where the occupancy labels require dense 3D annotation with significant human effort. In this paper, we propose SyntheOcc, which denotes a diffusion model that Synthesize photorealistic and geometric-controlled images by conditioning Occupancy labels in driving scenarios. This yields an unlimited amount of diverse, annotated, and controllable datasets for applications like training perception models and simulation. SyntheOcc addresses the critical challenge of how to efficiently encode 3D geometric information as conditional input to a 2D diffusion model. Our approach innovatively incorporates 3D semantic multi-plane images (MPIs) to provide comprehensive and spatially aligned 3D scene descriptions for conditioning. As a result, SyntheOcc can generate photorealistic multi-view images and videos that faithfully align with the given geometric labels (semantics in 3D voxel space). Extensive qualitative and quantitative evaluations of SyntheOcc on the nuScenes dataset prove its effectiveness in generating controllable occupancy datasets that serve as an effective data augmentation to perception models.
[ "Autonomous Driving", "Image Generation", "Data-centric AI", "3D Vision" ]
https://openreview.net/pdf?id=Dq9VrVuLzV
https://openreview.net/forum?id=Dq9VrVuLzV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jMW5z5Mf0z", "fGX6Rrxe6X", "SlhVoheLkI", "JBCHF87BHt", "AaJtLR9iKh", "3pB1AfdZX0", "0ByQe9UHdm" ], "note_type": [ "official_review", "official_review", "official_comment", "comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1730333249590, 1730781315593, 1731464682801, 1731551201852, 1731465358609, 1730907893445, 1730380601405 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission338/Reviewer_NHs7" ], [ "ICLR.cc/2025/Conference/Submission338/Reviewer_Ypbc" ], [ "ICLR.cc/2025/Conference/Submission338/Authors" ], [ "ICLR.cc/2025/Conference/Submission338/Authors" ], [ "ICLR.cc/2025/Conference/Submission338/Authors" ], [ "ICLR.cc/2025/Conference/Submission338/Reviewer_GBoG" ], [ "ICLR.cc/2025/Conference/Submission338/Reviewer_KTUH" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes SyntheOcc, a novel image generation framework that can synthesize photorealistic and geometric-controlled street view images by conditioning on 3D occupancy labels. The key innovation is the use of 3D semantic multi-plane images (MPIs) to efficiently encode 3D geometric information as conditional input to the 2D diffusion model. The extensive experiments demonstrate that the synthetic data generated by SyntheOcc can effectively augment perception models for 3D occupancy prediction tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposes an innovative approach by replacing ControlNet with multiplane images, enhancing image synchronization from occluded views.\\n2. The provided video effectively demonstrates and supports the proposed method.\\n3. The writing is clear and easy to follow.\\n4. The experiments are comprehensive, covering both quantitative and qualitative evaluations.\", \"weaknesses\": \"1. **Clarification on excluding ControlNet**: The paper should more thoroughly explain the decision to exclude ControlNet. 
The rationale for why ControlNet fails to meet 3D requirements remains unclear, since multiplane images could serve as conditions for ControlNet.\\n\\n2. **Incorporating KPM evaluation for consistency**: It would be beneficial for the paper to include KPM evaluations from Driving into the Future [1] to better assess temporal and multiview consistency.\\n\\n3. **Additional out-of-domain results**: Presenting more out-of-domain results, such as experiments with variations in camera intrinsic and extrinsic parameters, would demonstrate the generalization ability of the model.\\n\\n4. **Weak video quality**: Some objects in the provided video appear twisted or lack realistic representation. Additionally, the videos look a little unreal, though I cannot tell why.\\n\\n5. **World model integration**: Considering that driving scene generation works nowadays provide world model results capable of forecasting future layouts and generating future images based on actions, it would be valuable for the paper to explore integration with world models. For example, testing if the generation method can be adapted to synthesize occupancy predictions, as seen in recent work on occupancy-based world models [2], would showcase potential for further real-world applications.\\n\\n[1] Wang, Yuqi, et al. \\\"Driving into the future: Multiview visual forecasting and planning with world model for autonomous driving.\\\" CVPR 2024.\\n[2] Zheng, Wenzhao, et al. \\\"Occworld: Learning a 3d occupancy world model for autonomous driving.\\\" ECCV 2024.\", \"questions\": \"I am still wondering why the paper cannot use ControlNet, as multiplane images are still images and could potentially be used as input for ControlNet.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
The proposed framework uses 3D semantic occupancy grid as the conditions for camera image simulation, where multi-plane semantic images (MPIs) projected from 3D semantic occupancy grids have been used as conditional input to a 2D diffusion model. The effectiveness of SyntheOcc is demonstrated through improved performance in Real-to-sim evaluation and Sim-to-real data augmentation on the NuScenes dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"[S1: Quality] The paper has demonstrated extensive experimental results and showcased that the proposed method is superior under both real-to-sim evaluation and sim-to-real data augmentation, when compared against existing methods. The paper also includes ablation studies including the MPI encoder architecture and reweighing methods.\", \"[S2: Clarity] The proposed method is well described in detail with clear illustrations (e.g., Figure 1 and Figure 2).\"], \"weaknesses\": [\"[W1] The paper does not include several very relevant work in the literature review. Most of the baselines used in the paper come from publications within the past two years. The reviewer feels that two relevant papers on camera simulation is missing [NewRef1] and [NewRef2] from the literature review.\", \"[W2] This paper does not compare against an important baseline UniSim [Yang et al., CVPR 2023].\", \"While the proposed method is pure data-driven, is it possible to showcase the results on Pandaset used in the Unisim? It is unclear whether the proposed method is superior to UniSim as a camera image simulator or simply works well on Nuscenes dataset but does not generalize to other datasets (e.g., Pandaset, Waymo Open Dataset). The reviewer feels that such discussions and experimental comparisons are needed as a strong justification for acceptance.\", \"[W2.1] It is important to understand whether the proposed method is transferrable to other datasets with minimum fine-tuning or adaptation. 
For example, as shown in Figure 6 of the GeoSim paper [NewRef1], the same pipeline works for a different city in the Argoverse dataset.\", \"[W3] While the data augmentation experimental results are interesting (section 4.3, first two rows in Table 1), this paper does not comment on the role of synthetically generated data in nuScenes occupancy prediction.\", \"[W3.1] The semantic categories with significant improvements are bus, traffic cones, trailer, driving surface, other flat, and sidewalk. It is unclear how and why synthetically generated data help for such categories but not the other categories. Is it possible to provide convincing qualitative examples with explanations?\", \"[W3.2] Comment on the overall improvements to the driving system (e.g., behavior prediction and motion planning). How much does the long-tailed scene generation help the downstream tasks?\"], \"references\": [\"[NewRef1] GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving, Chen et al., In CVPR 2021.\", \"[NewRef2] Block-NeRF: Scalable Large Scene Neural View Synthesis, Tancik et al., In CVPR 2022.\"], \"questions\": \"Please address the questions raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
Consequently, we deem a comparative analysis with NeRF to be unwarranted.\\n\\nRegarding the aspect of data augmentation, we posit that the quantity of augmented data plays a beneficial role. Concurrently, the presence of a significant number of long-tail objects within the dataset poses challenges and exerts a certain impact on the learning capabilities of generative models. This, in turn, can adversely affect the generation outcomes as well as the efficacy of data augmentation techniques.\\n\\nIt is an excellent suggestion for exploring more downstream tasks! With respect to downstream tasks such as behavior prediction and motion planning, we intend to conduct experiments in the future to explore their applicability and efficacy.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"We wish to convey our sincere appreciation to the reviewer for the thorough and insightful feedback provided.\\n\\n**Comparison with Depth and Segmentation.** In accordance with the reviewer's suggestion, we have analyzed the differences in the discussion at line 375. We posit that the ControlNet integrated with depth information can be perceived as a degeneration of SytheOcc, which is simplified to a singular plane. We will provide FID evaluation further.\\n\\n**Comparative Analysis.** In compliance with the reviewer's recommendation, we have already incorporated a comparative visualization featuring MagicDrive within Figure 8. We respectfully direct the reviewer's attention to this figure for a detailed examination of the comparative analysis.\\n\\n**Fr\\u00e9chet Video Distance (FVD) Evaluation.** In response to the suggestion, we have already included an evaluation using the Fr\\u00e9chet Video Distance metric in our supplementary material. 
We kindly request that the reviewer refer to the appendix for the FVD results pertaining to our experiments.\"}", "{\"summary\": \"This paper presents SyntheOcc, a method for generating multi-camera images and videos of driving scenarios, using occupancy and text prompt as guiding inputs. The innovation of SyntheOcc lies in its proposed MPI encoder, which projects the raw occupancy of different depth ranges onto the camera plane, combining them into semantic multiplane images. These semantic multiplane images are then encoded as guidance for image generation. The paper provides a robust qualitative and quantitative comparison of generated images and videos. Additionally, it demonstrates the performance of perception models trained on the synthetic data and tested on real validation sets, as well as perception models trained on real data and tested on synthetic validation sets, to validate the proximity of SyntheOcc-generated images to the real domain.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. SyntheOcc has potential for generating rare long-tail data that could support downstream tasks in real-world scenarios.\\n2. The experiments are solid, offering extensive qualitative and quantitative comparisons. The key validation experiments, blending generated data with real data, are particularly convincing.\", \"weaknesses\": \"1. While the paper proposes the MPIs for encoding occupancy as guidance for image generation, the overall novelty remains unclear in several areas. The core functionalities\\u2014such as multi-camera image and video generation for driving scenes\\u2014are well-covered in prior works like MagicDrive [1], DriveDreamer [2], and Drive-WM [3], which demonstrate strong temporal consistency in video generation that appears more stable than the qualitative results in this paper. 
It would be helpful for the authors to clarify the novel aspects of SyntheOcc\\u2019s contributions by explicitly identifying any unique advantages or improvements over these methods. Furthermore, key capabilities like generating out-of-domain images and videos via text prompts, using occupancy as guiding inputs, editing by modifying occupancy elements, and simulating images with varying camera intrinsics are already present in WovoGen [4]. The authors could strengthen the paper by specifying which of these functionalities are advanced by SyntheOcc and justifying their relevance. Providing concrete comparisons to these works, either through experiments or discussion, could more clearly highlight SyntheOcc\\u2019s contributions and novelty.\\n2. The generated results of SyntheOcc show notable issues with road markings (evident in Figures 11, 12, and 13), which could be problematic for certain downstream tasks, such as planning. The paper lacks a systematic evaluation of these synthetic data\\u2019s impact on such tasks. Some previous works, like BEVGen [5] and MagicDrive [1], leverage HD maps as guidance, which could effectively resolve such issues. To strengthen the paper, the authors could provide a quantitative analysis comparing the quality of road markings in their generated images to ground truth or to outputs from other methods. Such an analysis would clarify the current limitations and help illustrate areas for potential improvement. Additionally, evaluating the impact of these road marking inconsistencies on specific downstream tasks, such as lane detection or path planning, could further demonstrate the practical implications of this issue and guide refinements to improve SyntheOcc\\u2019s usability for these applications.\\n3. I am not convinced that SyntheOcc effectively expands current datasets. Firstly, the paper mentions (Table 9) that excessive synthetic data can hinder perception model performance.
Some corner cases, which require manual adjustment, are the truly valuable data needed in datasets (such as those in the lower parts of Figures 1 and 7). Editing these corner cases still demands considerable manual intervention (e.g., placing barriers on roads, altering road structures, or positioning pedestrians atop vehicles).\", \"references\": \"[1] Ruiyuan Gao, Kai Chen, Enze Xie, Lanqing Hong, Zhenguo Li, Dit-Yan Yeung, and Qiang Xu. Magicdrive: Street view generation with diverse 3d geometry control. In ICLR, 2024.\\n[2] Xiaofeng Wang, Zheng Zhu, Guan Huang, Xinze Chen, and Jiwen Lu. Drivedreamer: Towards real-world-driven world models for autonomous driving. In arXiv, 2024.\\n[3] Yuqi Wang, Jiawei He, Lue Fan, Hongxin Li, Yuntao Chen, and Zhaoxiang Zhang. Driving into the future: Multiview visual forecasting and planning with world model for autonomous driving. In CVPR, 2024.\\n[4] Jiachen Lu, Ze Huang, Jiahui Zhang, Zeyu Yang, and Li Zhang. Wovogen: World volume-aware diffusion for controllable multi-camera driving scene generation. In ECCV, 2024.\\n[5] Alexander Swerdlow, Runsheng Xu, and Bolei Zhou. Street-view image generation from a bird\\u2019s-eye view layout. IEEE RAL, 2024.\", \"questions\": \"What specific perception model was used in the experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces SyntheOcc, a novel image generation framework enabling precise 3D geometric control for applications like 3D editing and dataset generation. By leveraging 3D semantic multiplane images (MPIs), the framework achieves finer geometry and semantic control, enhancing image quality and recognizability.
Experimental results show the effectiveness of synthetic data in augmenting 3D occupancy prediction tasks, indicating a significant advancement over existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. SyntheOcc offers finer and more precise 3D geometric control, allowing for intricate manipulation of object shapes and scene geometry, which is crucial for tasks like 3D editing and dataset generation.\\n2. Experimental results demonstrate that the synthetic data generated by SyntheOcc exhibit better recognizability, indicating a substantial advancement in image quality over existing methods.\\n3. The synthetic data produced by SyntheOcc prove to be highly effective for data augmentation in 3D occupancy prediction tasks, enhancing the performance and robustness of perception models in such applications.\", \"weaknesses\": \"1. The practicality of occupancy editing in 3D space should be addressed. It is crucial to automate or accelerate the editing process to make it feasible for practical applications requiring large amounts of data. The authors may report the time required to generate a new image through editing, and discuss possible solutions to scaling up data.\\n2. The paper uses occupancy as a condition due to its spatial information. It would be beneficial to discuss the fundamental differences between using occupancy and using depth & segmentation maps as conditions to control image generation [1]. Experimental comparisons between using depth & semantic maps versus occupancy as conditions could be conducted to evaluate metrics like FID and inference time.\\n3. The multi-view consistency appears unsatisfactory. In Figure 5 (b) (c), the color of the car in the first row changes in the second and third images. The authors could include a comparative analysis of the generation results from different models (e.g., MagicDrive) within the same scene.\\n4.
The concept of imbalance mentioned by the authors in Line 272 requires further clarification. It is essential for the authors to provide a detailed explanation of what this imbalance refers to and how it impacts their proposed framework.\\n5. The examples of editing provided in the paper mostly revolve around simple cars. In Figure 6, it would be beneficial to explore if the model can accurately move and position more complex, irregularly shaped vehicles or pedestrians to demonstrate the capability of the framework in generating new, diverse scenes.\\n6. While the paper discusses some designs related to temporal consistency, only qualitative results are presented. It would be valuable for the authors to report metrics like FVD compared to existing works such as DrivingDiffusion and Panacea to provide a more comprehensive evaluation of the proposed framework.\\n7. Missing references: [2-4]\\n\\n[1] UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild\\n\\n[2] Generalized Predictive Model for Autonomous Driving\\n\\n[3] Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability\\n\\n[4] SimGen: Simulator-conditioned Driving Scene Generation\", \"questions\": \"By conducting these expanded experimental comparisons, the authors can more comprehensively validate their claims and provide more compelling evidence for the effectiveness of the proposed SyntheOcc framework.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Dpqw0namg3
LAM Simulator: Advancing Large Action Model Training for Agent via Online Exploration and Feedback Simulation
[ "Thai Quoc Hoang", "Shirley Kokane", "Jianguo Zhang", "Tian Lan", "Zuxin Liu", "Ming Zhu", "Jake Grigsby", "Michael S Ryoo", "Shelby Heinecke", "Huan Wang", "Silvio Savarese", "Caiming Xiong", "Juan Carlos Niebles" ]
Large Action Models (LAMs) for AI agents have significant potential, but their development is often constrained by the reliance on supervised learning and manual data curation, which are both time-consuming and costly. To address these limitations, we present the LAM Simulator, a comprehensive framework designed for online exploration of agentic tasks with high-quality feedback. This framework includes a curated set of high-quality agentic tasks, a diverse collection of tools, and an interactive environment where agent models can call tools, receive execution responses, and obtain action feedback. Our findings indicate that the LAM Simulator significantly enhances model performance and effectively identifies and addresses potential issues. Specifically, our model, LAM-Sim-8x7B, demonstrates an 18.54\% improvement over its base LAM and significantly outperforms other state-of-the-art alternatives on ToolEval benchmark. Furthermore, we have demonstrated that LLMs lacking in agentic capability can greatly benefit from the implementation of LAM Simulator. Our experiments with a model trained on Mixtral-8x7B-Instruct-v0.1 have yielded a doubling to tripling of performance. Remarkably, the data construction process for training these models requires minimal human intervention, making the LAM Simulator a robust framework for accelerating the development of AI agents.
[ "LLMs Agent; Self-learning", "Reinforcement Learning; Data Generation" ]
Reject
https://openreview.net/pdf?id=Dpqw0namg3
https://openreview.net/forum?id=Dpqw0namg3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yXXiZEqr9U", "x8EeErecmK", "wcz5A9uqUM", "t15Ly9TwqR", "s0mSTNjpFq", "riuqz9ypb0", "m3Uzd4SPXw", "j9CvsLR5fN", "hOzYSQEkM1", "fGQgKXlWRY", "egqpHkulFn", "cYpU2jkFvK", "Pc2De0MLLN", "JzQG9j2TJg", "J1gGFzF9yF", "IrZXHrExGb", "GT3Rk7q5IC", "C7wn5gE1RH", "B18vsLl8T6", "8f5LfnVb3r", "3z7L4fCkIQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733215482539, 1733215330493, 1733133818506, 1732342858260, 1732233449419, 1732232625361, 1734625915477, 1732343098673, 1732343232695, 1732233340443, 1730676564634, 1732233521838, 1737524258364, 1730604958981, 1733215196569, 1732343007713, 1732233541936, 1732916211018, 1732348492865, 1729675840739, 1732648356898 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Reviewer_nYeX" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Area_Chair_B9Yi" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Reviewer_4LXX" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13418/Reviewer_nYeX" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Reviewer_4LXX" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ], [ "ICLR.cc/2025/Conference/Submission13418/Reviewer_8sh2" ], [ "ICLR.cc/2025/Conference/Submission13418/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your review!\", \"comment\": \"We sincerely thank you for reviewing our paper and increasing the score to 6! Your feedback means a great deal to us, and we remain dedicated to improving our work further.\\n\\nWith best regards,\\n\\nAuthors of submission 13418\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"Thank you for reviewing our paper and maintaining the score! We greatly appreciate your feedback and remain dedicated to improving our work.\\n\\nWith best regards,\\n\\nAuthors of submission 13418\"}", "{\"comment\": \"Sorry for the late reply, and thank you for your detailed responses and for addressing my concerns. I appreciate the additional clarifications and the revisions you\\u2019ve made to the manuscript. After reviewing the changes, I have decided to maintain my original rating.\"}", "{\"title\": \"Rebuttal by Authors (part 1)\", \"comment\": \"We deeply appreciate your valuable feedback and insightful questions regarding our paper. We have thoughtfully reviewed your concerns and are glad to provide our responses below.\\n\\n### **W1. Limited Comparative Analysis**\\nThank you for your comment about the comparison to related work. We acknowledge the importance of a comprehensive comparative analysis and understand that having one would offer a clearer perspective of our framework\\u2019s positioning relative to existing technologies.\\n\\nTo address this, we have revised our submission to include a more detailed comparison with other frameworks in Section 2.
Here is the attached comparison table for your quick overview.\\n\\n\\n| | Multi-turn | Open Action | Programmatic Evals | Automated Data Gen | Self-exploration |\\n|--------------------------|------------|------------------------|-----------------------|--------------------------|------------------|\\n| ToolBench | **\\u2713** | **\\u2713** | \\u2717 | **\\u2713** | \\u2717 |\\n| ToolTalk | **\\u2713** | \\u2717 | **\\u2713** | \\u2717 | \\u2717 |\\n| WebArena | **\\u2713** | \\u2717 | \\u2717 | \\u2717 | \\u2717 |\\n| APIGen | \\u2717 | **\\u2713** | \\u2717 | **\\u2713** | \\u2717 |\\n| **LAM Simulator (ours)** | **\\u2713** | **\\u2713** | **\\u2713** | **\\u2713** | **\\u2713** |\\n\\nThis table compares prior frameworks with our LAM Simulator. **Multi-turn** assesses support for multi-turn settings, **Open Action** assesses whether the agent\\u2019s action space is predefined or open, **Programmatic Evals** assesses whether ALL evaluators (both action and task) use a programmatic approach without relying on LLMs, **Automated Data Gen** assesses automated training data generation capabilities, and **Self-exploration** assesses whether models can self-improve through the framework without external models or human supervision.\\n\\nWe believe this comparison not only clarifies the positioning of our framework but also substantiates the LAM Simulator's contributions to the field. We hope this revision will satisfy the need for a clear contextual understanding of our work in relation to existing contributions.\\n\\n### **W2. Comparative Performance Analysis on Agent Self-Exploration**\\n\\nThank you for your feedback. We recognize your suggestion for a comparative analysis with other self-exploration methods.\\n\\nTo the best of our knowledge, there is little literature on self-exploration in the LLM agent space, apart from some very recent work. 
Applying these under a reasonable setting for a fair and complete comparison would, however, be challenging and require significant effort. Therefore, they are beyond the current scope of this paper and we leave them as an interesting future direction. However, we would like to emphasize that our work focuses on removing the reliance on human or stronger models to generate high-quality training data for agent development.\\n\\nWith the aid of the simulated framework that enables real-time interactions and our evaluators (both action and task), we are able to offer the capability for LLMs to self-explore and build a high-quality training dataset directly from a base model, which then significantly improves its own performance, as highlighted in Section 4 of our paper.\\n\\n### **W3. Insufficient Examples of Human-Crafted Tools**\\n\\nOnce again, thank you so much for bringing up this point. We have addressed this in our rebuttal revision by adding details about how we created the Tools collection in Appendix A.3.\\n\\nWe have also included examples of how we store documentation for each tool in our Tools collection in Appendix A.2.2 to give more detail on how the Agent accesses tool information.\"}", "{\"title\": \"Rebuttal by Authors (part 2)\", \"comment\": \"Below is the last part of our response.\\n\\n### **Q2. \\\"data\\\" and \\\"database\\\" meaning? Why is the \\\"data\\\" category such a high proportion?**\\nThe \\\"data\\\" category encompasses tasks that involve data retrieval, processing, and analysis. Some examples we have are: `Search for properties on Zillow at a given location`, `Get list of ios apps`, `Retrieve top hot posts of LeetCode Discuss Compensation`.\\n\\nThe \\u201cdatabase\\u201d category, in contrast, specifically relates to tasks that involve database interactions, primarily through queries. This involves retrieving, inserting, and managing data within a structured query language (SQL) framework or similar environments. 
Some examples we have are: `Executing a SQL query`, `Convert SQL to MongoDB`.\\n\\nThe \\\"data\\\" category represents such a significant proportion of tasks due to the extensive availability and utility of data-related tools provided by the RapidAPI marketplace.\\n\\n### **Q3. Why DPO for preference optimization instead of other options?**\\nOur paper highlights the LAM Simulator framework's ability to generate high-quality feedback during interactions between a user and an agent. This feedback provides valuable rewards, facilitating the refinement of subsequent learning algorithms.\\n\\nGiven our focus on the framework rather than individual algorithms, we adopt the most established training methodologies suitable for each type of base model:\\n\\n* For models like Large Action Models (Large Language Models with agentic capabilities), we chose Direct Preference Optimization (DPO) due to its popularity and relevance.\\n* For other Large Language Models with minimal agentic capabilities, Supervised Fine-Tuning (SFT) is utilized to introduce aspects of agency.\\n\\nAdditionally, we are exploring more nuanced algorithms for agency-specific tasks and plan to discuss these advancements in future publications.\\n\\n### **Q4. What is U. Tool, U. Cat etc in the header of table 1?**\\n\\nIn Tables 1, 2, 4, and 5, U stands for \\u201cUnseen\\u201d. These tables present benchmark results from the ToolEval datasets[2] for different combinations of seen and unseen categories and tools:\\n\\n1. U.Inst (Unseen Instruction & Seen Tools): this set contains tasks involving **unseen** instructions (or user command queries) with tools **seen** during the training phase\\n2. U.Tools (Unseen Tools & Seen Categories): this set contains tasks involving only **unseen** tools from a **seen** task category\\n3. U.Tools & U.Cat (Unseen Tools & Unseen Categories): this set contains tasks with **unseen** tools from an **unseen** task category\\n\\n### **Q5. 
Table 2 shows average errors, but these numbers are meaningless without knowing the number of times the model was evaluated. Moreover, there are no standard deviation metrics in any tables and specifically table 2, it is difficult to know if the results are really statistically significant or if there is some lucky prompting changes/randomness in the inference models or evaluation LLMs.**\\n\\nWhen running inferences, we used greedy sampling strategy to remove randomness in the decoding and give consistent output across different runs. We standardized the prompt templates as the official base models (xLAM[1] for LAM-Sim-7B and LAM-Sim-8x7B; Mixtral[3] for LAM-Sim-8x7B-Mixtral) to achieve best performance.\\n\\nFor evaluation, we used majority vote on 5 judgements of GPT-4-0125-preview. We utilized the official ToolEval\\u2019s prompt template on GPT-4-0125-preview to determine the validity of the generated responses. We instructed the evaluator model to produce 5 separate judgement for the model response of each test instance, after which a majority voting mechanism was applied to reach a final judgment about each response's pass/fail status. This strategy helps mitigate variations and bias that single predictions might introduce, increasing the reliability of our reported results.\\n\\nDue to the nature of greedy sampling, where repeating runs guarantee the same output responses, and because we did a majority vote out of 5 evaluations from GPT-4-0125-preview, we did not add the standard deviation metrics into the table. In response, we can add extra clarification about the test sets we used. Each of the 3 test sets contains 200 instances, providing a robust basis for the reported averages.\\n\\n### **References**\\n\\n[1] Zhang, J., Lan, T., Zhu, M., Liu, Z., Hoang, T., Kokane, S., Yao, W., Tan, J., Prabhakar, A., Chen, H., Liu, Z., Feng, Y., Awalgaonkar, T., Murthy, R., Hu, E., Chen, Z., Xu, R., Niebles, J. C., Heinecke, S., . . . Xiong, C. (2024, September 5). 
XLAM: a family of large action models to empower AI agent systems. arXiv.org. https://arxiv.org/abs/2409.03215\\n\\n[2] Qin, Y., Liang, S., Ye, Y., Zhu, K., Yan, L., Lu, Y., Lin, Y., Cong, X., Tang, X., Qian, B., Zhao, S., Hong, L., Tian, R., Xie, R., Zhou, J., Gerstein, M., Li, D., Liu, Z., & Sun, M. (2023, July 31). ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv.org. https://arxiv.org/abs/2307.16789\"}", "{\"title\": \"Rebuttal by Authors (part 1)\", \"comment\": \"Thank you very much for providing valuable feedback and posing excellent questions regarding our paper. We have thoughtfully reviewed your concerns and are eager to respond to them effectively.\\n\\n\\n### **W1. Complexity and diversity of agent tasks in the evaluation set**\\n\\nOnce again, thank you for your feedback. We fully acknowledge the importance of testing frameworks in varied and dynamic settings, as this not only enhances the generalizability of our findings but also better represents the complicated nature of real-world applications.\\n\\nWhile we strive to bridge the gap between academic evaluations and real-world applicability, we chose to utilize the ToolBench[1] benchmarking suite for several reasons that align with both the focus of our tools diversity and the supports for multi-turn tasks evaluation:\\n\\n1) API Diversity: ToolBench supports thousands of APIs from RapidAPI Hub marketplace, offering a wide array of real-world utilities spanning various domains. This extensive collection allows us to evaluate the framework's performance across a diverse set of tasks, measuring how well it adapts and responds to different challenges posed by these varying contexts.\\n\\n2) Complex Task Scenarios: The tasks within ToolBench[2] require multi-step processing and the coordinated use of multiple tools. 
Such tasks necessitate detailed planning and dynamic adjustment based on prior interactions, closely paralleling the complexity encountered in real-world agent operations.\\n\\nTo illustrate the nature of these tasks, consider this example from our benchmark test set:\\n```\\n{\\n \\\"query\\\": \\\"My company is hosting a rugby event and we need to provide pre-match form information to the participants. Can you fetch the pre-match form for a specific rugby match? We would also like to see the incidents that occurred during the match.\\\"\\n \\n \\\"available_tools\\\": ['leaguenextmatches_for_rugbyapi2', 'leaguemedia_for_rugbyapi2', 'categories_for_rugbyapi2', 'categorytournaments_for_rugbyapi2', 'leaguelogoimage_for_rugbyapi2', 'teammedia_for_rugbyapi2', 'matchincidents_for_rugbyapi2', 'match_for_rugbyapi2', 'prematchform_for_rugbyapi2', 'categoryschedules_for_rugbyapi2']\\n}\\n```\\n### **W2. Impact of feedback at different stages on learning outcomes**\\n\\nThank you for raising the question about the effectiveness of feedback at different stages on learning outcomes. We have conducted an experiment in Section 4.3.3 to gain insights into this, and we are happy to share some additional insights with you:\\n\\nStarting with the high-quality dataset (HQ-Data) that we generated using the Mixtral-8x7b-Instruct-v0.1 model, we constructed new datasets to understand the contributions of each evaluator as follows:\\n\\n1. To understand the impact of the Intermediate Action Evaluators, we created a Low-Quality Intermediate Dataset (LQ-Interim) by adding to our HQ-Data the actions that were rejected by the Intermediate Action Evaluators.\\n2. 
To understand the impact of the Final Task Evaluators, we created a Low-Quality Final Response Dataset (LQ-Final) by adding to our HQ-Data the actions rejected by our Final Task Evaluators.\\n\\nBy running instruction finetuning on Mixtral-8x7b-Instruct-v0.1 with the 3 datasets above, we can see the importance of our evaluators.\\nFrom the outcomes, it's evident that both types of feedback\\u2014intermediate and final\\u2014are vital.\\n\\n* Action feedback through the Intermediate Action Evaluators allows agents to adjust their behavior at each action, such as ensuring the response follows the correct format or uses the correct tools and arguments without hallucinating. This enhances tool-usage abilities. Without this correct action-level feedback, data generation through self-exploration is impossible, as the generated data can carry many different types of uncontrollable errors. In our experiment, the model trained with LQ-Interim shows no capability to solve agentic tasks, highlighting the importance of the action feedback.\\n* Task feedback through the Final Task Evaluators, on the other hand, allows agents to understand whether they solved the task correctly or whether their actions need to be revised. Without correct task-level feedback, the quality of data generated through self-exploration is hard to control. In our experiment, the model trained with LQ-Final has a 30% relative performance decline compared to the model trained with HQ-Data.\\n\\nHence, ensuring high-quality feedback at both the intermediate action level and the final task completion stage is essential for the optimal training of agents.\\n\\n### **References**\\n[1] Qin, Y., Liang, S., Ye, Y., Zhu, K., Yan, L., Lu, Y., Lin, Y., Cong, X., Tang, X., Qian, B., Zhao, S., Hong, L., Tian, R., Xie, R., Zhou, J., Gerstein, M., Li, D., Liu, Z., & Sun, M. (2023, July 31). ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv.org. 
https://arxiv.org/abs/2307.16789\"}", "{\"metareview\": \"The paper introduces the LAM Simulator, a framework for enhancing Large Action Model (LAM) training through online exploration and simulated feedback. The discussion points regarding this paper are readability, experimental rigor, and the adequacy of comparisons. Although the authors provided detailed responses during the rebuttal period and supplemented with a large number of baseline experiments, it seems they did not receive clear affirmation from the reviewers. In summary, AC leans towards rejecting this paper, hoping that the authors will provide more detailed and substantial experimental supplements to improve its overall quality.\", \"additional_comments_on_reviewer_discussion\": \"Despite the authors receiving positive scores (666), no one gave explicit support. Moreover, one of the reviewers gave a confidence score of only 2 for their score of 6, explicitly stating that this score was due to the authors' improvements in readability. The other two reviewers responded to the author's rebuttal and considered a score of 6 to be appropriate, but did not provide more explicit support.\"}", "{\"title\": \"Rebuttal by Authors (part 3)\", \"comment\": \"Below is our third part of the response.\\n\\n### **Q3. Limited Discussion on the Impact of Feedback Granularity**\\nThank you for raising the concerns about the effectiveness of feedback at different stages on the learning outcomes. 
We have conducted an experiment in Section 4.3.3 to gain insights into this, and we are happy to share some additional insights with you.\\n\\nStarting with the high-quality dataset (**HQ-Data**) that we generated using the Mixtral-8x7b-Instruct-v0.1 base model, we constructed new datasets to understand the contributions of each evaluator as follows:\\n\\n+ To understand the impact of the Intermediate Action Evaluators, we created a Low-Quality Intermediate Dataset (**LQ-Interim**) by adding to our HQ-Data the actions that were rejected by the Intermediate Action Evaluators.\\n\\n+ To understand the impact of the Final Task Evaluators, we created a Low-Quality Final Response Dataset (**LQ-Final**) by adding to our HQ-Data the actions rejected by our Final Task Evaluators.\\n\\nBy running instruction finetuning on Mixtral-8x7b-Instruct-v0.1 with the 3 datasets above, we can see the importance of our evaluators. From the outcomes, it's evident that both types of feedback\\u2014intermediate and final\\u2014are vital:\\n\\n+ Action feedback through the Intermediate Action Evaluators allows agents to adjust their behavior at each action, such as ensuring the response follows the correct format or uses the correct tools and arguments without hallucinating. This enhances tool-usage abilities. Without this correct action-level feedback, data generation through self-exploration is impossible, as the generated data can carry many different types of uncontrollable errors. In our experiment, the model trained with LQ-Interim shows no capability to solve agentic tasks, highlighting the importance of the action feedback.\\n+ Task feedback through the Final Task Evaluators, on the other hand, allows agents to understand whether they solved the task correctly or whether their actions need to be revised. Without correct task-level feedback, the quality of data generated through self-exploration is hard to control. 
In our experiment, the model trained with LQ-Final has a 30% relative performance decline compared to the model trained with HQ-Data.\\n\\nHence, ensuring high-quality feedback at both the intermediate action level and the final task completion stage is essential for the optimal training of agents.\"}", "{\"title\": \"Rebuttal by Authors (part 4)\", \"comment\": \"This is the last part of our response.\\n\\n### **Q4. What is the success rate of data generation?**\\n\\nWe designed the data generation strategy as a self-learning procedure, in which we let the Agent LLMs continuously solve given tasks and record the qualified data for training purposes. To show the effectiveness of our system on both top-performing and low-performance Agent LLMs, we display each base Agent's pass rate when running data generation.\\n\\n**Setup**:\\n- We used a **Content Dataset** of 400 entries: 166 entries for our human-crafted tasks and 234 entries for our automated tasks extracted from ToolBench.\\nThese data entries are ultimately fit into the corresponding Tasks' templates to create the final User Command. We have examples in Appendix A.2 to illustrate this process.\\n\\n- For each task, we let the agent continuously interact with the environment to solve the task; at each step, the agent creates 7 generations with sampling temperatures ranging from 0 to 1. 
Each generation is then fed into our evaluators to get a binary score: 0 for Failed or 1 for Passed.\\nWe then calculate the pass counts for Actions and Tasks as follows.\\n\\n**Pass rate illustration**:\\nAs mentioned above, we display the pass-rate results at both the Action and Task levels.\\n\\n**Note** that only our human-crafted tasks are equipped with a Task evaluator, so we only record the Task pass rate for tasks of this type.\\n\\n- **xLAM-7B-r[5] (base model for LAM-Sim-7B) Action and Task pass rate**\\n | Task type | Action | Task |\\n |---------------|--------|--------|\\n | Human-crafted | 0.9135 | 0.5783 |\\n | Extracted | 0.8116 | NaN |\\n\\n- **xLAM-8x7B-r[5] (base model for LAM-Sim-8x7B) Action and Task pass rate**\\n | Task type | Action | Task |\\n |---------------|--------|--------|\\n | Human-crafted | 0.9130 | 0.6325 |\\n | Extracted | 0.8374 | NaN |\\n\\n- **Mixtral-8x7B-Instruct-v0.1[6] (base model for LAM-Sim-8x7B-Mixtral) Action and Task pass rate**\\n | Task type | Action | Task |\\n |---------------|--------|--------|\\n | Human-crafted | 0.3960 | 0.2590 |\\n | Extracted | 0.3895 | NaN |\\n\\nIn this setup, we use the passed data to create training datasets. Models with lower success rates require more time to produce sufficiently large training datasets, but in return these training sets improve the models more significantly. The tables above also show that even high-quality Agent LLMs like the xLAM series still struggle to use tools correctly at each action and to complete tasks accurately. This highlights the strength of our framework and its task collection in identifying and fixing errors systematically through self-exploration, which is critical for reducing human intervention when developing AI Agents.\\n\\n### **Q5. 
What specific measures ensure the quality of generated data?**\\nIn this work, we selected as positive training data only the data points that passed all of our evaluators (Intermediate Action Evaluators and Final Task Evaluators). Since our evaluators are systematic and implemented to correctly capture the expected state and the agent's response state, we are able to ensure the quality of the responses.\\n\\nOnce again, thank you so much for all of your valuable feedback. We hope our responses have addressed your concerns and questions well!\"}", "{\"title\": \"Rebuttal by Authors (part 1)\", \"comment\": \"We sincerely thank you for sharing many great insights about our paper, as well as raising excellent questions. We have carefully considered your concerns and hope to address them here.\\n\\n### **W1. Figure 1 seems to suggest you can only pass or fail via the syntax verification engine. What happens after the request verification engine?**\\n\\nOur LAM Simulator framework focuses on providing high-quality feedback for each agent\\u2019s action, which can be divided into several layers:\\n\\nFirst, the agent\\u2019s response (containing a Tool call and its corresponding arguments) is sent through the **Syntax Verification Engine** to check that: 1) the response format is correct, 2) the Tool call is valid (i.e. provided to the agent), and 3) the corresponding Tool arguments are correctly used.\\n\\nIf the agent\\u2019s response fails here, the **Syntax Verification Engine** sends this information directly to the **Evaluation Engine** to assign the corresponding score given this failure. Otherwise, we send the agent\\u2019s Tool call to the **Request Execution Engine** to execute the request and then send the corresponding observation to the **Evaluation Engine** for evaluating the executed action.\\n\\nIn the **Evaluation Engine**, we have 2 different evaluators:\\n\\n1. 
**Intermediate Action Evaluator**: this evaluator assigns the score based on the syntactical feedback from the Syntax Verification Engine mentioned above. If the task gives extra requirements for each action, this evaluator also looks at the agent's action and the observation from the Request Execution Engine to assign the score.\\n2. **Final Task Evaluator**: this evaluator only triggers when the agent makes the final response to the given multi-turn task or when the number of interaction turns reaches a predefined limit. We compare the final result (state) between the gold label and the final response provided by the agent to give the corresponding score.\\n\\nThe scores from the Intermediate Action Evaluator and (optionally) the Final Task Evaluator are used to determine whether we want to use this generation output from the agent as a training data instance. In this work, we selected only the generation outputs that passed ALL evaluators as positive training data; the others can be selected as negative training data in the case of DPO training.\\n\\nAfter checking the syntax, executing the agent\\u2019s action, and assigning the corresponding scores, the current Agent\\u2019s action, the observation from the environment, and the scores from the evaluators are finally sent back to the Conversation Data Manager to either proceed with the next turn or finalize this task and move on to the next one.\\n\\n### **W2. Examples of user interactions, agent responses, user command templates, task evaluators, etc.**\\n\\nThank you for this very important feedback. We have added the corresponding examples to our rebuttal revision's Appendix A.2.\\n\\n### **Q1. About the \\\"astronomy\\\" tasks:**\\n**Clarification on Task Selection**:\\n\\nThe task domains in our model were chosen based on historical data concerning the most common inquiries received by our Agents. These domains include:\\n1. `Data`: Involves tasks such as data retrieval, processing, and analysis.\\n2. 
`Tools`: Focuses on utilizing tools to perform actions or solve tasks.\\n3. `Entertainment`: Covers tasks related to finding information on entertainment options, such as movies, or executing entertainment-related actions, like playing games.\\n4. `Sciences`: Originally included specialized topics about the sciences.\\n\\nCurrently, there is one task under sciences, `Finding relevant news about an astronomy problem`, which is why we named the category \\u201cAstronomy\\u201d. However, given that this task represents just 1 of 30 human-crafted tasks, it does not markedly shift our overall focus towards astronomy. Nonetheless, we agree that this would cause confusion, and we think a generic \\u201cSciences\\u201d category fits better.\\n\\nWe have updated our visualization of the Human-crafted Abstract Tasks in Figure 2(a) to avoid the confusion. Here is our detailed task breakdown:\\n\\n`data: 20 tasks \\u2014 66.7%, sciences: 1 task \\u2014 3.3%, entertainment: 4 tasks \\u2014 13.3%, tools: 5 tasks \\u2014 16.7%`.\\n\\nThank you again for your insightful feedback. This dialogue helps us make necessary adjustments and improve our manuscript. In future work, we are also continuously expanding our task collection to better represent wide areas of interest, reflecting the diversity and needs of our user base.\"}", "{\"summary\": \"The submission introduces a system that incorporates a simulator with a language model for more effective large action models. In response to a user query, the LAM Simulator system iteratively refines its response multiple times in a loop. 
The refinement leverages feedback from the simulator which has syntax verification and request execution tools to improve the final response.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Preference data generation for finetuning via DPO is neat to address the challenges of multi-turn conversations, it encourages more diverse data pairs\", \"Compared to past work this submission introduces many more domains the system can handle queries for\"], \"weaknesses\": [\"Many parts of the submission seem to lack explanation or additional clarification (perhaps to be put in the appendix) on many details. I ask some direct clarification questions in the questions section. It is also possible that my inexperience with research on these types of agents has led to me being unaware of some terms.\", \"Figure 1 seems to suggest you can only pass or fail via the syntax verification engine. What happens after the request verification engine?\", \"There's a lack of example/qualitative user interactions and agent responses (including the generated conversation data in the multi-turn setup). There also appear to be no examples of user command templates, task evaluators etc., (many elements of figure 1). It is quite hard to visualize what exactly is going on and how it might be better than related work.\"], \"questions\": [\"How come the human-crafted abstracted tasks has such a relatively high proportion of \\\"astronomy\\\" tasks. It feels like strangely very specific, how come there aren't other e.g. science topics? It is claimed in section 3.4 the human-crafted abstracted tasks are meticulously designed for the typical requests the LAM might receive from users. However this seems heavily audience dependent, perhaps I missed a sentence somewhere but is the audience mostly astronomers in this case?\", \"For task categories what does \\\"data\\\" and \\\"database\\\" mean exactly? 
Why is the \\\"data\\\" category such a high proportion?\", \"Why DPO for preference optimization instead of other options?\", \"What is U. Tool, U. Cat etc in the header of table 1?\", \"Table 2 shows average errors, but these numbers are meaningless without knowing the number of times the model was evaluated. Moreover, there are no standard deviation metrics in any tables and specifically table 2, it is difficult to know if the results are really statistically significant or if there is some lucky prompting changes/randomness in the inference models or evaluation LLMs.\", \"If all weaknesses and questions are addressed I am happy to raise the score into the accept range. My biggest concern is mostly around the lack of examples and explanations for a lot of terms and phrases.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors (part 2)\", \"comment\": \"Below are the second part of our response.\\n\\n### **W3. Reliance on pre-defined tasks & tools / Generalization to open-world or unstructured environment**\\nWhen evaluating the models went through our framework for data generation & training, we are using 3 ToolEval test sets from ToolBench[1], which are:\\n1) U.Inst (Unseen Instruction & Seen Tools): this set contains tasks involving **unseen** instruction (or user command query) with tools **seen** during training phase\\n2) U.Tools (Unseen Tools & Seen Categories): this set contains tasks involving only **unseen** tools from **seen** task category\\n3) U. Tools & U.Cat (Unseen Tools & Unseen Categories): this set contains tasks of **unseen** tools from **unseen** task category\\n\\nThe performance results from these tests have been significantly positive. Notably, our models experienced exceptional improvements in the \\\"U.Tools\\\" and \\\"U.Tools & U.Cat\\\" categories (refer to Tables 1, 2, 4, and 5). 
This highlights the robust out-of-domain performance and generalization capabilities of our LAM Simulator, evidencing its effectiveness in reducing tool-related errors and enhancing agent reliability across diverse tasks.\\n\\n\\n### **Q1. How does the LAM Simulator handle situations where the task requirements change dynamically, or where tools malfunction during execution? Would it be able to generalize to such cases?**\\n\\nThe core value of LAM Simulator's design is the concept of promoting the Agent's capability for self-exploration. This approach is fundamental in enabling the Agent to navigate through varying scenarios, rather than merely learning to solve a predefined set of tasks via fixed paths. Here\\u2019s how we\\u2019ve implemented this:\\n\\n1) **Intermediate Action Evaluator**: this evaluator checks for syntactical errors, including response structure, tool calls, tool arguments. If there are extra requirements for each action given by the task, this evaluator also looks at the agent's action and observation from the Request Execution Engine to assign the score.\\n2) **Final Task Evaluator**: this evaluator only triggers when the agent makes the final response to the given multi-turn task or when the number of interaction turns reaches a predefined limit. 
We compare the final result (state) between the gold label and the final response provided by the agent to give the corresponding score.\\n\\nBy separating the evaluation into these two stages, with the Intermediate Action Evaluator providing necessary checks during each step while solving the task and the Final Task Evaluator assessing the final answer, we allow the Agent the flexibility to explore various approaches to solving the task without being restrained to a predefined solution path.\\n\\nFurthermore, the environments integrated into the LAM Simulator are programmed for real-time tool execution, which not only simulates but actively engages the Agent in scenarios where tool malfunctions may occur. This setup is crucial as it necessitates that Agents adapt their strategies based on the current operational state of their tools\\u2014essentially training them to anticipate and rectify issues dynamically, fostering resilience and versatility.\\n\\nThis environment thereby supports Agents in developing strategies that are not only effective in familiar settings but are also robust and adaptable in facing new or unforeseen challenges. Through this design, our goal is to enable the Agent to generalize effectively to new tasks and handle unexpected situations in real time with greater competence.\\n\\nWe hope this explanation addresses your concerns, and we are open to further discussions to enhance our system's responsiveness to dynamic task requirements and tool functionalities. Thank you for your constructive feedback.\\n\\n### **References**\\n[1] Qin, Y., Liang, S., Ye, Y., Zhu, K., Yan, L., Lu, Y., Lin, Y., Cong, X., Tang, X., Qian, B., Zhao, S., Hong, L., Tian, R., Xie, R., Zhou, J., Gerstein, M., Li, D., Liu, Z., & Sun, M. (2023, July 31). ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv.org. 
https://arxiv.org/abs/2307.16789\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces the LAM Simulator, a framework for enhancing Large Action Model (LAM) training through online exploration and simulated feedback. By providing AI agents with curated tools, feedback, and interactive environments, the framework enables cost-effective, autonomous data generation with minimal human input. Experiments show significant performance improvements, particularly with the LAM-Sim-8x7B model, and error reduction through automated feedback.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\t**Innovative Framework:** The LAM Simulator introduces a novel method for automating LAM training through simulated feedback and task exploration.\\n2.\\t**Performance Improvements:** The framework achieves measurable improvements over baseline models, e.g. GPT-4o, showing its effectiveness in enhancing agent task performance.\\n3.\\tScalability: The reduction in human intervention for data generation makes this framework adaptable for large-scale agent training.\\n4.\\t**Comprehensive Evaluation:** The use of multiple evaluators provides continuous feedback, leading to improved agent performance and fewer errors.\", \"weaknesses\": \"1. **Limited Comparative Analysis:** While the paper briefly mentions related methods like ToolTalk, WebArena, and APIGen, it lacks a thorough comparison with these and other frameworks for automated data synthesis and agent training. Such a comparison would provide valuable context and clarity on the specific advantages or disadvantages of the LAM Simulator relative to existing approaches. The authors could strengthen the paper by providing empirical comparisons or, at minimum, a more detailed discussion of how the LAM Simulator diverges or improves upon these methods.\\n\\n2. 
**Comparative Performance Analysis on Agent Self-Exploration:** Although the paper demonstrates improvements over xLAM, it would benefit from a direct, quantitative comparison to other self-exploration-based methods. This could include metrics for data quality, agent adaptability, or overall effectiveness in reducing human intervention. Without such benchmarks, it\\u2019s challenging to assess how the LAM Simulator stacks up against state-of-the-art methods in self-guided agent learning.\\n\\n3. **Insufficient Examples of Human-Crafted Tools:** The paper describes a curated set of tools used for agent training but doesn\\u2019t provide concrete examples or details on these tools in the main text or the appendix. Including examples, especially in an appendix, would improve transparency around the toolset design and selection criteria. This addition would allow readers to better understand the diversity and complexity of tasks the agents are trained on, as well as any limitations or biases in the tool curation process.\", \"questions\": \"1. **Lack of Detail on Reward Calculation:** The paper briefly describes using \\u201caction rewards\\u201d to generate preference data for DPO but doesn\\u2019t clarify how these rewards are calculated. Are rewards based on a specific scoring metric, or are they derived from evaluator feedback? A clearer explanation of the reward components and their calculation method would clarify how preference scores are assigned to different actions or paths.\\n\\n2. **Potential Overfitting to ToolEval Benchmark:** With performance gains reported on ToolEval, there\\u2019s a possibility that the LAM Simulator is optimized specifically for this benchmark. Given ToolEval\\u2019s structured and predefined nature, the model might be learning patterns specific to ToolEval, which could limit generalizability. 
Testing the framework on multiple diverse benchmarks, or introducing a new, unstructured benchmark, would give a more robust picture of its true capabilities.\\n\\n3. **Limited Discussion on the Impact of Feedback Granularity:** The framework provides both action-wise and task-wise feedback, but the paper does not analyze the relative impact of these feedback types on agent learning. Understanding whether detailed, step-by-step feedback leads to more effective training compared to feedback provided only at task completion could help optimize the feedback strategy and reduce computation costs.\\n\\n4. **What is the success rate of data generation?:** Can the authors provide quantitative information on the success rate of data generation within the LAM Simulator? For instance, out of all attempted tasks or interactions, what percentage completes successfully without errors? This metric would give insight into the reliability and efficiency of the simulator.\\n\\n5. **What specific measures ensure the quality of generated data?:** The paper mentions filtering methods to ensure data quality, but could the authors elaborate on the full set of criteria or techniques used to evaluate data quality? Are there specific metrics, benchmarks, or thresholds that generated data must meet?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your feedback!\", \"comment\": \"Thank you so much for reviewing our paper and raising the score! We truly appreciate your support and are committed to further refining our work.\\n\\nWith best regards,\\n\\nAuthors of submission 13418\"}", "{\"title\": \"Rebuttal by Authors (part 2)\", \"comment\": \"Below is our second part of the response.\\n\\n### **Q1. Lack of Detail on Reward Calculation**\\nThank you so much for raising the question about how our rewards are calculated. 
We are more than happy to provide a more detailed explanation of the interaction flow between the Agent and our framework and how we assign rewards on the go:\\n\\nFor each Agent step in solving a given task, the Agent gives a response (including a Tool call and arguments) to the Environment. This first goes through the **Syntax Verification Engine** to check syntactical issues (format, valid tool call, valid args). Here, if anything fails, we share this information with the **Evaluation Engine** and the action receives the binary score of 0 (Failed). Otherwise, we send the Agent\\u2019s action to the **Request Execution Engine** to execute the request, which then sends the corresponding observation to the **Evaluation Engine** for evaluating the executed action.\\n\\nIn the Evaluation Engine, we have 2 different evaluators:\\n\\n1. **Intermediate Action Evaluator**: this evaluator assigns a binary score (0: Failed, 1: Passed) based on the syntactical feedback and any task-specific requirements.\\n2. **Final Task Evaluator**: this evaluator only triggers when the agent makes the final response to the given multi-turn task or when the number of interaction turns reaches a predefined limit. We compare the final result (state) between the gold label and the final response provided by the agent to give the corresponding binary score (0: Failed, 1: Passed).\\n\\nIn this work, we selected only the generation outputs that passed ALL evaluators for positive training data, and others can be selected for negative training data in case of DPO training.\\n\\nAfter syntax checking, executing actions, and scoring, the Agent's action and the Environment observations are sent to the Conversation Data Manager to either proceed to the next turn or finalize the current task and move to the next one.\\n\\n### **Q2.
Potential Overfitting to ToolEval Benchmark**\\nTo ensure a fair and unbiased assessment of the LAM Simulator, we have implemented a strict separation in our data handling and task generation processes. Specifically, the human-crafted tasks were developed independently of ToolBench through a separate pipeline that does not interact with or utilize data from ToolEval, thereby preventing any inadvertent exposure to the benchmark during training. Furthermore, for tasks extracted from ToolBench, we strictly avoid using tasks, tools, or domains that are part of the ToolEval benchmark. This rigorous approach guarantees that our training set remains distinct and unbiased, ensuring that the evaluation on ToolEval genuinely reflects the model's ability to generalize.\\n\\nThe results presented in our paper under Section 4 are run on different test sets from ToolEval, including:\\n1. U.Inst (Unseen Instruction & Seen Tools): this set contains tasks involving **unseen** instructions (or user command queries) with tools **seen** during the training phase\\n2. U.Tools (Unseen Tools & Seen Categories): this set contains tasks involving only **unseen** tools from a **seen** task category\\n3. U.Tools & U.Cat (Unseen Tools & Unseen Categories): this set contains tasks of **unseen** tools from an **unseen** task category\\n\\nThe results from these tests have been very positive, including out-of-domain testing. In particular, our models showed significant improvements in the \\\"U.Tools\\\" and \\\"U.Tools & U.Cat\\\" categories (see Tables 1, 2, 4, and 5).
This demonstrates that our LAM Simulator performs well even in new and different situations, effectively reducing errors related to tools and making the system more reliable for a variety of tasks.\\n\\nAdditionally, we have normalized the input by structuring the Agent's template to align with its base model's architecture, ensuring all tasks from any environment, whether native, from ToolBench, or newly introduced, adhere to a standard format. This template unification allows us to focus on the Agent's capability in general instead of optimizing towards any given set of tasks or environments.\"}", "{\"title\": \"Rebuttal by Authors (part 3)\", \"comment\": \"Below is the last part of our response.\\n\\n### **Q2. Have you considered applying the LAM Simulator to environments that involve real-time multi-agent interactions or more complex tool dependencies?**\\n\\n**About multi-agent interactions**: In this work, the simulation focuses on single-agent scenarios, due to our prioritization of refining single-agent interactions. However, the architecture of the LAM Simulator is designed to be scalable and allows for eventual extension to multi-agent dynamics. This will enable us to support real-time multi-agent interactions in future iterations of the simulator.\\n\\n**Complex Tool dependencies**: In developing the LAM Simulator, we acknowledged the complexities of tool dependencies across various environments. An engine was implemented to standardize interactions, acting as a mediator to format agent actions and environment observations appropriately. To integrate new environments with complex tool dependencies, we simply need to register the pre-processing and post-processing logic for each specific environment.\\n\\nWe appreciate your feedback as it reinforces the importance of these features and guides the future development of the LAM Simulator.
We are excited about the potential to expand into multi-agent settings and further enrich the simulation capabilities.\\n\\n### **Q3. Can you provide more detailed insights into how the feedback mechanisms evolve as the agent performs tasks over time? Are there diminishing returns to feedback as the agent improves?**\\n\\nFor any given set of tasks, as the agent's performance improves and it becomes more adept at handling these tasks, the effectiveness of feedback on this set of tasks diminishes. This means that while the agent initially gains significant insights from feedback, over time, its utility decreases as the task complexity remains static. To address this, our LAM Simulator is designed to easily extend to new environments and tasks. This adaptability allows us to introduce sophisticated tasks with corresponding evaluators, enabling agents to continually enhance their capabilities.\\n\\nAs future work, we are planning to support more complicated environments as well, and we hope to explore other potential directions such as enabling curriculum learning for training agent models, where task difficulty is continuously increased during training to improve the agent\\u2019s quality.\\n\\n### **Q4. How does the action verification mechanism scale with an increasing number of tools or tool parameters? Does it introduce any computational overhead that might limit real-time applicability?**\\n\\nThe verification mechanism only verifies one tool per step (the tool called by the agent), so regardless of the scale of available tools, we will not get blocked by the verifier. This results in no extra overhead to real-time applicability as we support more tools.\\n\\n### **References**\\n[1] Qin, Y., Liang, S., Ye, Y., Zhu, K., Yan, L., Lu, Y., Lin, Y., Cong, X., Tang, X., Qian, B., Zhao, S., Hong, L., Tian, R., Xie, R., Zhou, J., Gerstein, M., Li, D., Liu, Z., & Sun, M. (2023, July 31).
ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv.org. https://arxiv.org/abs/2307.16789\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thanks for the clarifications. I raised my score to a 6 to reflect that I now at least find the paper to be in a much more presentable format that is understandable to someone who is not in this exact subfield. I find it difficult to raise it any higher, however; my confidence of 2 should reflect that my review score should not be weighed very highly, as this topic is some distance apart from what I study and research.\\n\\nApologies for the late review!\"}", "{\"title\": \"Regarding the modifications in the revised rebuttal\", \"comment\": [\"We extend our sincere thanks to all reviewers for their insightful and detailed feedback. We have made the following key revisions to our manuscript based on all of the valuable comments we have received:\", \"Per Reviewer 4LXX's suggestions, we have enriched Appendix A.2 with examples of our core components, including: `Abstract Task, Tools Collection Documentation, Content Dataset, User Command, Input to Agent LLM, Agent Response,` and `Response from Environment`. Additionally, we have refined the visualization in Figure 2(a) to clarify the breakdown of categories.\", \"In response to Reviewer nYeX, we have added Table 0 to compare our framework with popular related frameworks for AI Agent development. We selected the number '0' to ensure clarity and consistency throughout our document during this rebuttal period, thus avoiding any potential confusion among reviewers regarding the indexing of previously existing tables.
Furthermore, we have expanded Appendix A.3 to provide additional details about the process of constructing the tools collection in our framework.\", \"We have also made some minor textual revisions to adhere to the page limit, while adding new content in response to reviewer suggestions.\", \"We believe these revisions have addressed the reviewers' concerns and have significantly enhanced both the quality and clarity of our manuscript. Once again, we appreciate all your thoughtful feedback.\"]}", "{\"summary\": \"This paper introduces the LAM Simulator, a framework designed to enhance the training and development of Large Action Models (LAMs) for AI agents. The LAM Simulator aims to mitigate the reliance on supervised learning and manual data curation by enabling online exploration of agentic tasks with real-time, high-quality feedback. The framework provides a diverse set of tasks, tools, and feedback mechanisms to help agents learn more effectively. The paper demonstrates significant performance improvements using the LAM Simulator, with models such as LAM-Sim-8x7B showing an 18.54% improvement over the base model on the ToolEval benchmark.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) One of the key strengths is the ability of the LAM Simulator to automate feedback and reduce the need for human intervention. This is crucial for scaling agentic models and training on more extensive datasets without the burden of manual curation.\\n2) The empirical results show significant performance improvements on the ToolEval benchmark. 
The LAM-Sim-8x7B model consistently outperforms other state-of-the-art alternatives, showcasing the potential of the proposed framework.\\n3) The LAM Simulator integrates action verification mechanisms (e.g., syntax verification, tool name validation), which significantly reduce errors related to tool usage, helping agents perform more reliably in tasks involving multiple tools.\", \"weaknesses\": \"1) The tasks used in the evaluation are relatively constrained and do not reflect the complexity of real-world agent tasks. The paper could have included experiments in more diverse environments, such as dynamic multi-agent systems or open-ended tasks, to demonstrate broader applicability.\\n2) While the framework\\u2019s feedback loop is central to its design, the paper does not sufficiently explore its effectiveness. It would have been useful to include ablation studies or case studies showing how feedback at different stages (e.g., intermediate actions versus final steps) influences learning outcomes.\\n3) The paper does not adequately address how well the LAM Simulator scales to more complex environments or to tasks that require real-time interactions with a broader set of tools. The reliance on predefined tasks and tools may limit its generalization to open-world or unstructured environments.\", \"questions\": \"1) How does the LAM Simulator handle situations where the task requirements change dynamically, or where tools malfunction during execution? Would it be able to generalize to such cases?\\n2) Have you considered applying the LAM Simulator to environments that involve real-time multi-agent interactions or more complex tool dependencies?\\n3) Can you provide more detailed insights into how the feedback mechanisms evolve as the agent performs tasks over time? Are there diminishing returns to feedback as the agent improves?\\n4) How does the action verification mechanism scale with an increasing number of tools or tool parameters? 
Does it introduce any computational overhead that might limit real-time applicability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Did our revision and discussions meet your expectation?\", \"comment\": \"Dear Reviewer 4LXX,\\n\\nWe want to express our deep gratitude for your detailed and extremely helpful review on our works, and we have diligently worked to address your comments and concerns.\\n\\nSince the discussion period is ending, we wish to hear back from you to see that if our responses resolved your concerns or any further comments you have on our work. We sincerely hope our responses meet your expectations and would be really grateful if you would consider our work as an important step towards improving autonomous language agents, especially in a self-exploration setting.\\n\\nOnce again, thank you so much for all of your invaluable feedback to our paper.\\n\\nWith best regards,\\nAuthors of submission 13418\"}" ] }
DpnY7VOktT
Can Model Randomization Offer Robustness Against Query-Based Black-Box Attacks?
[ "Quoc Viet Vo", "Bao Gia Doan", "Ehsan Abbasnejad", "Damith Ranasinghe" ]
Deep neural networks are misguided by simple-to-craft, imperceptible adversarial perturbations to inputs. Now, it is possible to craft such perturbations solely using model outputs and black-box attack algorithms. These algorithms compute adversarial examples by iteratively querying a model and inspecting responses. Attack success in near-information vacuums poses a significant challenge for developing mitigations. We investigate a new idea for a defense driven by a fundamental insight—to compute an adversarial example, attacks depend on the relationship between successive responses to queries to optimize a perturbation. Therefore, to obfuscate this relationship, we investigate randomly sampling a model from a set to generate a response to a query. Effectively, this model randomization violates the attacker's expectation that the unknown parameters of a model remain static between queries to extract information to guide the search toward an adversarial example. It is not immediately clear if model randomization can lead to sufficient obfuscation to confuse query-based black-box attacks or how such a method could be built. Our theoretical analysis proves model randomization always increases resilience to query-based black-box attacks. We demonstrate, with extensive empirical studies using 6 state-of-the-art attacks under all three perturbation objectives ($l_\infty, l_2, l_0$) and adaptive attacks, that our proposed method injects sufficient uncertainty through obfuscation to yield a highly effective defense.
[ "query-based black-box attacks", "model randomness", "diversity", "adversarial defense", "trustworthy machine learning", "safety and responsible AI" ]
Reject
https://openreview.net/pdf?id=DpnY7VOktT
https://openreview.net/forum?id=DpnY7VOktT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xFNcPPYEjd", "usfxd4Nblc", "t1CSmFTclv", "qZUSE0CNp8", "q5LlYBHEEq", "pytjHtx1C6", "n4Woq0Hpg7", "mp9m4nkWDc", "mSeQOvcfwi", "iOR4bxrr7x", "gXEVZj9kKn", "fg5IxbJ7NR", "fBttM8QJ4y", "cSmzTUanJM", "ZKY6LjgsEv", "Z5yvskjnTC", "Y3fjjtwJKA", "Y155ZnmYIh", "XcN9Bq2p9A", "RlkoDCCsD2", "QVE26ttnuw", "NwdIjPf1ld", "JHkbrfwkBZ", "I45Fv9byeH", "HylROjnyY6", "Hjqpg8wMas", "F3GzpCyVh0", "BJmIZtMAWt", "8TvKKdIMKr", "4MHjD45urt", "2tcmKanUQe", "2cMzm91ZdK", "2FfKU55kaC", "0l1Z8efybB", "0ZXsETTwOe" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment" ], "note_created": [ 1732694119446, 1732368012290, 1733227050378, 1732408494123, 1734406847847, 1730507942998, 1730716735975, 1733217118468, 1737523779572, 1730707374165, 1732398137516, 1733278326782, 1730445621725, 1732373131282, 1732406435430, 1732510147349, 1732401213240, 1732523711573, 1732452871738, 1730741702970, 1732524567615, 1732507060181, 1733217186630, 1730425306818, 1732427403856, 1733219191096, 1732372690232, 1732626678061, 1732367798749, 1732497760572, 1732697749026, 1732640890038, 1740298772476, 1732832658848, 1732987096179 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_YFcN" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6605/Area_Chair_5QzV" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_9act" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_YFcN" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_whE1" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_r45L" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_whE1" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_WLVn" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_r45L" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_Scpz" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_r45L" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_WLVn" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Authors" ], [ "ICLR.cc/2025/Conference/Submission6605/Reviewer_YFcN" ] ], "structured_content_str": [ "{\"comment\": \"**Derivation is fixed** *While I consider the mathematical derivation in your last reply to be correct...*\\n\\n*Response:*\\n\\nThank you for allowing us to answer your question and address the basis for lowering 
your evaluation of our work.\\n\\n---\\n\\nWe are happy to answer the new feedback and we appreciate the opportunity to engage with the Reviewer to address your concerns.\\n\\n**Q2** *I believe the primary concern with this paper lies in the significant computational and resource costs associated with training. Would it be possible for your method to take advantage of pre-trained models, such as those provided by the timm library for the ImageNet dataset?*\\n\\n*Response:*\\n\\n**Yes**, our method can indeed leverage pre-trained models, such as those available in the `timm` library or other publicly available repositories. In fact, we have done this with a ***pre-trained CLIP*** model from `https://github.com/mlfoundations/open_clip` to demonstrate practicability and a working defense for a model suitable for serious applications.\\n\\n***OpenCLIP*** is an open-source implementation of ***OpenAI's CLIP (Contrastive Language-Image Pre-training)***\\n\\n- We started with a ***pre-trained CLIP*** (to put CLIP into perspective, please note that ***ResNet50 has approximately 25 million parameters compared to the 114 million we deal with in CLIP***).\\n- We then trained an ensemble of 5 using the methods in [3].\\n - The model set achieves approximately 78% clean accuracy.\\n - During the inference phase, two out of five were randomly selected from the trained set.\\n - Two out of five have a clean test accuracy of approximately 77%\\n- We also train a single CLIP model for use with RND and RF\\n - The single model achieves 76.07% clean accuracy\\n - We add noise, so the clean accuracy drop values are as in RF (ICLR'24), approximately 1% and comparable to ours.\\n- Table 4 shows the significant reduction in overhead we achieve.
\\n- Table 5 shows that, compared to the current state-of-the-art, we outperform by a margin of up to 9.57%\\n\\n_Table 4:_ Trainable Parameters and Storage Consumption of a Single CLIP and a set of five CLIP models ***we trained*** to implement LoRA(DISCO).\\n| Models|***Single*** CLIP|The set of 5 CLIP models using LoRA|\\n| -------------------- | -----------|------|\\n| Trainable Parameters |114 M| 1.84 M (1.6% increase, ***0.32% per model***)|\\n| Storage Consumption |433 MB| 439 MB (1.35% increase, ***0.28% per model***)|\\n\\n_Table 5:_ $l_\\\\infty$ objective. Robustness ($\\\\uparrow)$ of different defense methods against SQUAREATTACK with the __ImageNet__ task with __CLIP__ model architecture [3] (For details on the experiment, please see response to Reviewer).\\n| Methods | 0.025 | 0.05 | 0.075 | 0.1 |\\n| ------- | ---------- | ---------- | ---------- | --------- |\\n| RND | 83.39% | 61.95% | 43.37% | 24.89% |\\n| RF | 86.45% | 65.1% | 51.14% | 35.83% |\\n| DISCO | __90.76%__ | __72.51%__ | __56.17%__ | __45.4%__ |\\n| DISCO Improvement (vs.
Next best)|4.31%|7.41%|5.03%|9.57%|\\n\\nWhat we are proposing are ***marginal cost increases*** to achieve significant improvements in robustness.\\n\\n- Effectively, we are saying a <1.6% increase in overhead can yield 4.31 to 9.57% better robustness on a large-scale network of practical significance\\n- Now, adding a model incurs **<0.32%** overhead in terms of trainable parameters or storage.\", \"we_really_hope_the_results_provides_the_assurances_sought_by_the_reviewer_that_our_methods_is\": [\"Robust and\", \"Practical\"], \"we_hope_the_reviewer_can_appreciate_the_question_we_posed_in_the_paper\": [\"CAN MODEL RANDOMIZATION OFFER ROBUSTNESS AGAINST QUERY-BASED BLACK-BOX ATTACKS?\", \"We believe the answer is now, yes, irrevocably.\", \"In addition to understanding what such a method can offer, and the theoretical analysis, we have now shown it can be of practical significance.\", \"One paper can't solve all problems, but we have certainly worked hard at making our theoretical work stick.\", \"We sincerely thank the Reviewer for all their efforts to help us improve our work and its presentation.\"], \"title\": \"Response to Further Questions (Yes we have now used a pre-trained model, cost overhead is <0.38% per addition of a model)\"}", "{\"comment\": \"***Thank you for all your feedback. We have answered all the questions. Please let us know if you need further clarifications***\\n\\n- What is the training time, inference time, and storage of Disco?\\n\\n*Response: Please refer to our answer for **Question 1** above in addressing the Weaknesses in the paper.*\\n\\n---\\n\\n- Is Disco still effective on high-resolution datasets such as Imagenet?\\n\\n*Response:* Yes it is.
Performs better than others.\\n\\n - Please refer to our answer for **Question 2** above in addressing the Weaknesses in the paper (Table 4).\\n - Please see our more detailed response, including cost overheads, to [___Reviewer r45L___] https://openreview.net/forum?id=DpnY7VOktT&noteId=xFNcPPYEjd (Table 4 & Table 5)\\n\\n---\\n\\n- What is the performance of Disco on other types of architectures such as transformers?\\n\\n*Response: Please refer to our answer for **Question 3** above in addressing the Weaknesses in the paper.*\\n\\n---\\n\\n***We appreciate your help in improving our work!*** Please let us know if you have any further question, we will reply to them promptly.\", \"title\": \"Answers to Questions\"}", "{\"comment\": \"Thank you for your clarification. The overhead in training and inference is indeed the main drawback of this work. However, considering the novelty of the approach, I am inclined to accept the paper.\\nI just want to have a small comment. The main question of the paper, \\\"*Can Model Randomization Offer Robustness Against Query-Based Black-Box Attacks?*\\\", I believe has already been answered in previous studies such as RND and RF. This paper proposes a more effective way to utilize that property in defending the model.\"}", "{\"title\": \"Questions to Authors: We report new results for P-BO Attack & Provide clarifications\", \"comment\": \"__Q3. The performance of defense mechanisms under recent advanced attack P-BO__\\n\\nThank you for the suggestion. We have been able to test with the attack recommended by the Reviewer. We report the new result in Table 4 below.\\n\\n- Ours (DISCO) is demonstrably more robust.\\n- We are in the process of updating our paper with this new result.\", \"table_4\": \"$l_\\\\infty$ objective. 
Robustness of different defense methods against P-BO attack with the __CIFAR-10__ task.\\n|Methods|0.02|0.04|0.06|0.08|0.1|\\n|---|---|---|---|---|---|\\n|RND|70.33%|31.47%|15.75%|7.23%|6.65%|\\n|RF|66.43%|28.04%|13.67%|8.34%| 6.21%|\\n|DISCO|__79.98%__|__47.94%__|__29.8%__|__18.08%__|__12.16%__|\\n\\n__Q4. Capture the misleading nature of the search direction__\\n\\nThank you for your constructive feedback. Let us explain further here:\\n\\n- In Section 3.3.2, lines 267-269, we use $\\\\frac{\\\\tilde{H}(x,u)}{\\\\hat{H}(x,u)}<0$ to represent the *mismatch* between the attack direction from a random subset of models versus that generated using the entire set of models (effectively a single model).\\n- This condition specifically reflects opposite directions as it relies on the *differing* sign of the search direction. Therefore, in line 269, $P(\\\\frac{\\\\tilde{H}(x,u)}{\\\\hat{H}(x,u)}<0)$ represents the probability of misleading an attack direction.\\n- This misleading direction is not limited to arbitrary discrepancies but explicitly incorporates the notion of opposing directions. To this end, we believe that our theoretical analysis in Section 3.3.2 on misleading a search direction is able to fully capture the misleading nature of the search direction described by the Reviewer.\\n\\nWe really appreciate the observation and will clarify this point further in the manuscript to ensure its practical implications are well-understood.\"}", "{\"metareview\": \"The reviewers were conflicted about accepting this paper, and even the positive reviewers (none of whom gave higher than 6) were skeptical that this defense could be practical, especially given the high computational costs. Moreover, while I appreciate the discussion in the rebuttal period, addressing all reviewer concerns requires a massive change to the original submission which may be best left for a future submission.
Therefore, I recommend rejection for this paper, but encourage the authors to keep improving their work.\", \"additional_comments_on_reviewer_discussion\": \"The authors engaged heavily with reviewers during the rebuttal period, including numerous new experiments. Nonetheless, reviewers were still lukewarm at the end of the period, and nobody chose to champion this paper.\"}", "{\"summary\": \"This paper explores whether model randomization can enhance robustness against query-based black-box attacks. Traditional defenses involve random noise injections, but this study proposes a novel defense: generating responses by sampling from an ensemble of diverse models. Theoretical analysis and extensive empirical tests across various attacks and perturbation metrics validate the efficacy of this strategy, achieving robustness with minimal performance compromise.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Good writing with clear motivation and insights.\\n2. The authors provide a strong theoretical foundation, with proofs demonstrating how randomization increases the difficulty of gradient estimation and search-based attacks.\\n3. By selecting models with a diversity-promoting training objective, the paper manages to keep clean accuracy relatively stable.\", \"weaknesses\": \"1. How does increasing model diversity impact clean accuracy, and is there a systematic way to balance these two metrics?\\n2. Disco tests VGG and ResNet architectures; I think it would be useful to know whether this approach is effective across a wider range of model types.\\n3. Although Disco's results on MNIST, CIFAR-10, and STL-10 are promising, these are relatively small datasets. It would strengthen the paper to see how this defense performs on more challenging datasets, like ImageNet.\\n4. 
In the paper, the authors mention that they use a larger number of models (40) for the MNIST task and a lower number of models (10) for the higher-resolution CIFAR-10 and STL-10 tasks. Have the authors tested the impact of different ensemble sizes, and is there an optimal balance between model count and robustness?\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes to defend against query-based adversarial attacks by randomization. Existing defenses use this principle by injecting random noise into the input, feature, or parameters of the model. On the other hand, this work suggests creating many diverse models and randomly ensembling their predictions to fool the attacker. Their method, named Disco, employs a Bayesian framework to learn a diverse set of models. To handle the accuracy degradation, this work proposes a novel asymmetric training objective that forces each model to perform well. The paper also provides a theoretical analysis showing that randomly ensembling significantly increases the number of queries for a successful attack. Experimental results demonstrate the effectiveness of Disco against score-based attacks, decision-based attacks, and adaptive attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper provides a novel approach for randomized defense against query-based attacks. 
Instead of fixing the model and injecting random noise, they suggest creating many diverse models and randomly selecting some of them to predict.\", \"The paper proposes a novel training objective to avoid an accuracy drop in each model.\", \"Theoretical analysis shows that randomly ensembling is effective against score-based attacks.\", \"Experimental results for a wide range of attacks demonstrate that Disco safeguards the model against query-based attacks.\"], \"weaknesses\": [\"The main disadvantage of this approach is that we need to store a set of models instead of one. Ensembling also requires querying many models during inference, which incurs significant computational costs. This problem is even more severe when we deploy large-scale models in practice. An analysis of the training time, storage, and inference time of Disco compared to other defenses such as RND and RF could be helpful.\", \"The experiments are conducted on low-resolution datasets only.\", \"The architecture of the target model in the experiments is also quite limited. There is only one model for each dataset and they are all convolutional networks.\", \"The paper does not explain why the proposed objective helps each individual model perform well. Can we instead randomly sample a subset of models when training?\"], \"questions\": [\"What is the training time, inference time, and storage of Disco?\", \"Is Disco still effective on high-resolution datasets such as ImageNet?\", \"What is the performance of Disco on other types of architectures such as transformers?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We addressed all the new questions. We clarify inference time. Answer additional question on training from scratch\", \"comment\": \"__Q1.1: Inference Time__\\n\\nThank you for pointing this out. 
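Before the detailed explanation that follows, here is a minimal sketch of a measurement loop that executes warm-up queries but excludes them from timing. This is our illustrative code, not the harness used for the reported numbers; any callable can stand in for the model.

```python
import time

def mean_latency(model, sample, warmup=10, repeats=1000):
    """Per-query latency with warm-up runs executed but not timed,
    so cache warming and JIT compilation no longer dilute the mean."""
    for _ in range(warmup):        # untimed warm-up queries
        model(sample)
    start = time.perf_counter()
    for _ in range(repeats):
        model(sample)
    return (time.perf_counter() - start) / repeats
```

The key design point is simply that the timer starts only after the warm-up iterations have completed.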
In our approach to measuring inference time, the measurement and calculation included the first few warm-up runs of a model. These warm-up runs are slower due to various factors like *cache warming* and *JIT compilation* [1]. Effectively, the previously reported inference times, including warm-up runs, significantly diluted the actual inference time, which should be on the *microsecond* scale. Additionally, while measuring the inference time, there were other workloads (multiple programs) running together on the same server used for producing results for the rebuttal. This seems to have affected the measurements.\\n\\nTherefore, we can kindly update the Reviewer with the new measurements below. We now use a 'quiet' machine and average just the inference time, where the model is in a state ready to receive inputs from an end-user via an API. \\n\\n_Table 1:_ Inference time (per query) of a single vs a set of models on different tasks.\\n| Datasets | Single | A subset of five out of 10 Models (DISCO) |\\n| -------- | ---------- | -------- |\\n| MNIST | ~0.7 us | ~3.8 us |\\n| CIFAR-10 | ~1.9 us | ~9.8 us |\\n| STL-10 | ~2.5 us | ~12.8 us |\\n\\n[1] https://medium.com/@MarkAiCode/mastering-pytorch-inference-time-measurement-22da0eaebab7\\n\\n---\\n\\n__Q1.2: In the response to reviewer whE1, you recommend leveraging LoRA to mitigate the number of trainable parameters and storage. However, it is only applicable in the fine-tuning setting. Do you have any suggestions for the scenario where we need to train a model from scratch?__\\n\\nYes, we do. But we just want to highlight a few things in case this is lost or forgotten as we now start to move away from the main thesis of our work: to *investigate* and *understand* a new idea both theoretically and empirically. Indeed, our work confirms model randomization ***does offer*** robustness against query-based black-box attacks. \\n\\nSo, let's now answer the question. 
First, we want to *thank you* for this valuable feedback and for engaging with us! It helps us consider and address a broader range of practical considerations to ensure the wider applicability of our framework, ***DISCO***.\\n\\nIn addition to supporting the common practice of using pre-trained models, there are strategies that can help mitigate the overhead associated with training models from scratch; one simple strategy builds upon our approach for reducing the costs when using pre-trained models. To elaborate: \\n\\n- One promising approach is to train a single model to obtain a good pre-trained model in the *first* stage.\\n- Then employ an ensemble of LoRA modules to fine-tune the good pre-trained model to achieve a diverse set of models, as we demonstrate with CLIP on ImageNet.\\n - The tuning-stage learning objective can still employ our Equations (10) and (11).\\n- The first stage can use a portion of the training dataset, or another dataset if a dataset of sufficient size to train a well-performing model from scratch is not available.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper proposes a defense mechanism called Disco against query-based black-box adversarial attacks by randomizing model responses. The core idea is that model randomization\\u2014drawing models from a pool of well-trained diverse models for each query\\u2014can obfuscate the consistent feedback adversarial attacks depend on. By breaking the static relationship between query responses, this method aims to degrade an attacker's ability to generate effective adversarial examples. 
The authors conduct extensive theoretical and empirical evaluations, examining attacks across multiple threat models and perturbation types, and find that Disco outperforms existing defenses against black-box attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed defense is straightforward and simple to implement, making it a practical solution for enhancing model robustness against adversarial attacks.\", \"Theoretical analysis thoroughly supports the effectiveness of model randomization in the proposed defense.\", \"The paper conducts an extensive evaluation across various threat models and perturbation types, providing a robust assessment of the defense's performance.\"], \"weaknesses\": [\"The approach requires substantial resources, as multiple models must be trained during the training phase. Additionally, switching between models during inference could increase response time and demand more server resources.\", \"Models trained on the same dataset may share similar adversarial boundaries, posing a risk that attackers could exploit if the decision boundaries are too alike across models in the pool.\"], \"questions\": \"1. Could you suggest some solutions for reducing the resource overhead in both the training and inference phases?\\n2. Could you explain how likely or unlikely it is that all models in the pool share similar adversarial boundaries?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We address all the weaknesses: The method is model-agnostic but we show generality to other architectures and results for overhead\", \"comment\": [\"***Thank you for your initial evaluation of our work and great feedback to improve our work!***\", \"__Q1. 
The generalizability of the approach's effectiveness across a broader spectrum of models and large data types__\", \"We agree and are happy to report new results to address this concern.\", \"As we discussed with **Reviewer WLVn**, we fine-tuned a CLIP model to build an ensemble of 5 particles (models). This network is very different from VGG and ResNet architectures.\", \"For comparison, we also fine-tuned a single CLIP model for RND and RF defenses, achieving a clean accuracy of 76.07% for the single model. Noise was selected to maintain a clean accuracy drop of approximately 1%, as in the prior work.\", \"For this experiment, we randomly selected 100 correctly classified images.\", \"The results in Table 1 below show that DISCO achieves higher robustness compared to RND and RF methods. Thus, DISCO is effective across different model types, scales well to large models and generalises to challenging datasets like **ImageNet**.\"], \"table_1\": \"$l_\\\\infty$ objective. Robustness ($\\\\uparrow$) of different defense methods against SQUAREATTACK on the __ImageNet__ task with the CLIP architecture [1]\\n| Methods | 0.025 | 0.05 | 0.075 | 0.1 |\\n| ------- | ---------- | ---------- | ---------- | --------- |\\n| RND | 83.39% | 61.95% | 43.37% | 24.89% |\\n| RF | 86.45% | 65.10% | 51.14% | 35.83% |\\n| DISCO | __90.76%__ | __72.51%__ | __56.17%__ | __45.40%__ |\\n\\n__Q2. An analysis of its computational overhead__\\n\\nWe recognize that storing and querying multiple models can lead to increased computational and storage demands.\\n\\nTo address this, we did two things.\\n\\n1. We conducted a detailed comparison of storage requirements and inference time across DISCO, RND, and RF, as presented in Tables 2 and 3.\\n - Notably, both RND and RF rely on a single model, resulting in identical storage requirements.\\n - In contrast, DISCO uses a set of 40 models for MNIST and 10 models for CIFAR-10/STL-10. 
For the results reported in Table 2, we processed 1,000 queries and calculated the average inference time to provide a fair and comprehensive evaluation.\\n2. We demonstrate that the overhead needed to achieve the high robustness levels can be dramatically minimized by using recent advances to reduce the cost of building a set of networks based on CLIP [1], such as LoRA [2] and the recent study [3].\\n - Please see our discussion with __Reviewer YFcN__ and the results reported in ***Table 4*** there. \\n \\n_Table 2:_ Storage consumption of models trained on different datasets: RND & RF vs DISCO.\\n| Datasets | RND & RF | DISCO |\\n| -------- | -------- | ----- |\\n| MNIST | 1.3 MB | 145 MB |\\n| CIFAR-10 | 57 MB | 1.7 GB |\\n| STL-10 | 43 MB | 1.3 GB |\", \"table_3\": \"Inference time (per query) of undefended vs defended (RND, RF vs DISCO) models on different datasets.\\n| Datasets | Undefended | RND | RF | DISCO |\\n| --- | --- | --- | --- | ----- |\\n| MNIST | 10.17 ms | 12.14 ms | 12.53 ms | 15.12 ms |\\n| CIFAR-10 | 10.56 ms | 12.61 ms | 12.92 ms | 20.62 ms |\\n| STL-10 | 11.26 ms | 13.12 ms | 13.48 ms | 24.85 ms |\\n\\n[1] Radford, Alec, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger and Ilya Sutskever. \\u201cLearning Transferable Visual Models From Natural Language Supervision.\\u201d ICML, 2021.\\n\\n[2] Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. ICLR, 2022.\\n\\n[3] Doan, Bao Gia, Afshar Shamsi, Xiao-Yu Guo, Arash Mohammadi, Hamid Alinejad-Rokny, Dino Sejdinovic, Damith Chinthana Ranasinghe and Ehsan Abbasnejad. \\u201cBayesian Low-Rank LeArning (Bella): A Practical Approach to Bayesian Neural Networks.\\u201d ArXiv (2024).\"}", "{\"title\": \"Thank you for your constructive feedback. We discuss adding noise to inputs (RND) or features (RF) vs. 
model randomization.\", \"comment\": \"We really appreciate the effort the Reviewer has spent engaging with us and the insightful discussions. We hope ICLR rewards and recognizes this.\\n\\nWe also appreciate the recognition of the novelty and contributions of our work. \\n\\nWe wanted to kindly highlight the distinctions between our approach and the prior methods RND and RF. We believe the key question raised in our paper has not been discussed and answered in previous studies, and we hope this clarification makes our paper stand out.\\n\\nAs we discussed in *Section 2*, the Randomized Noise Defense (RND) and Randomized Features (RF) methods explored randomization in input (adding noise to the input) and feature spaces (injecting random noise into computed features). So, it is arguable whether this constitutes a randomization of models, as the model, represented by the parameter $\\\\theta$ in $f(x,\\\\theta)$, is not changed or randomized. So RND and RF do not randomise models, as the model parameters are not altered or somehow different for each query. \\n\\nIn contrast, our method ***does not rely on noise at all***. We actually randomize the model, selecting a different function or different model parameter $\\\\theta$ for each query. This approach leverages model diversity and response confusion (by randomly selecting models, i.e., model randomization). We don't touch inputs, features, or outputs. Diversity complements the response confusion experienced by the attacker. \\n\\nImportantly, our approach and its implementation (with the learning objectives in Equations 10 & 11) allow the defender to enhance robustness whilst mitigating the performance trade-offs associated with noise-based defenses. \\n\\nSo these are our thoughts. Thank you so much for the discussion!\"}", "{\"summary\": \"This paper investigates model randomization as a defensive strategy against query-based black-box attacks on deep neural networks. 
The authors argue that these attacks exploit the correlation between successive model responses to queries. By randomizing the model responsible for each response, this correlation can be obscured, thereby making the attack more challenging. A theoretical framework is proposed in which models are sampled from a diverse pool to respond to queries, increasing uncertainty for the attacker and degrading the quality of information that can be extracted from query responses.\\n\\nTo encourage model diversity without compromising individual model performance, the authors introduce a novel learning objective. Extensive empirical evaluations demonstrate that the proposed method, named Disco, substantially enhances model robustness against advanced query-based black-box attacks across various perturbation norms ($\\\\ell_\\\\infty$, $\\\\ell_2$, $\\\\ell_0$) and adaptive attack strategies, while maintaining high accuracy on clean data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work proposes an interesting insight and analyzes its validity theoretically, then presents supporting evidence experimentally.\\nExperimental results show that the proposed method is highly effective, maintaining strong robustness without compromising accuracy.\", \"weaknesses\": \"1. The emphasis on specific architectures, such as Convolutional Neural Networks (CNNs), and small datasets, including MNIST, CIFAR-10, and STL-10, offers valuable insights. However, this focus may restrict the generalizability of the approach's effectiveness across a broader spectrum of models and large data types (such as the ImageNet dataset or the COCO dataset).\\n\\n2. While the proposed method demonstrates enhanced robustness, could the authors provide an analysis of its computational overhead? 
In particular, how do time and resource requirements measure up when comparing inference time and memory usage with existing techniques on a standardized hardware setup?\", \"questions\": \"1. The abstract mentions that state-of-the-art attacks were employed to evaluate the proposed method; however, current query-based black-box attacks that incorporate prior knowledge, such as PBO [1], have demonstrated significantly better performance than the methods discussed in the paper. It would be beneficial to consider these more advanced techniques in the evaluation, where the source code of P-BO can be found at https://github.com/machanic/P-BO.\\n\\n2. The mathematical notation in Section 3.3.1 is ambiguous. For clarity, vector $\\\\mathbf{u}$ should have the same dimension as $\\\\mathbf{x}$ to allow summation in Equations (4), (5), and (6). Similarly, $g(\\\\mathbf{x})$, as the estimated gradient, should align dimensionally with $\\\\mathbf{x}$. In Proposition 1, it is unclear how $g(\\\\mathbf{x})$ is bounded by the constants $a_i$ and $b_i$. If $a_i$ and $b_i$ are vectors, why would the lower bound for $n$ in the conclusion then be represented as a vector?\\n\\n3. While the theoretical analysis considers two aspects (GRADIENT ESTIMATION ATTACKS and GRADIENT-FREE ATTACKS), some uncertainties persist. For example, as noted in Section 3.3.2, $P(\\\\frac{\\\\tilde{H}(\\\\mathbf{x}, \\\\mathbf{u})}{\\\\hat{H}(\\\\mathbf{x},\\\\mathbf{u})}<0)$ represents the probability of misleading an attack direction. However, in practical applications, it may sometimes be necessary to consider the opposite direction due to potential inaccuracies in direction (as seen in methods like SignOPT [2]). Thus, relying solely on the sign may not fully capture the misleading nature of the search direction.\\n\\n[1] Cheng, S., Miao, Y., Dong, Y., Yang, X., Gao, X.-S., & Zhu, J. Efficient Black-box Adversarial Attacks via Bayesian Optimization Guided by a Function Prior. 
In *Proceedings of the 41st International Conference on Machine Learning (ICML)*, pp. 8163\\u20138183, 2024.\\n\\n[2] Minhao Cheng, Simranjit Singh, Pin-Yu Chen, Sijia Liu, and Cho-Jui Hsieh. Sign-OPT: a query-efficient hard-label adversarial attack. In *International Conference on Learning Representations*, pp. 1\\u201316, 2020.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"__Q3. An analysis of the impact of the size of the subset of models on the number of queries.__\", \"Thank you for your thoughtful question. Proposition 1 relates directly to the number of queries but doesn't explicitly link to the random selection method. The link follows naturally from Proposition 1, and we are sorry we didn't make this clear.\", \"Following your suggestion, we can provide an additional analysis of the trade-off between the selection of $N$ (the subset size) from $K$ models and the number of queries needed to achieve a low-error estimation.\", \"Intuitively, a larger subset size $N$ reduces the _number_ of combinations of model subsets. This results in a reduction in the number of random models presented to the attacker.\", \"In addition, a larger subset size $N$ also reduces the variance in the gradient estimates attempted by an attacker. This is because the averaged prediction from a larger set of models is more confident, and the variance, for example, in output scores between these large subsets is smaller.\", \"Consequently, averaging across larger subsets of models leads to more informative responses (better gradient estimates, for example) and fewer queries to obtain low-error estimations.\", \"In contrast, smaller $N$ values increase the uncertainty, which leads to increased variance in gradient estimation; in other words, the difference between the upper and lower bounds on the gradient\\u2019s value will be larger. 
Then, following ***Proposition 1***, this increases the cost of the attack, forcing the attacker to expend more queries to obtain a low-error estimation of a gradient.\", \"__Q4. Motivation for Equation (10) is to Encourage Parameter Diversity__\", \"We presented our motivation and justification in **Section 3.4.1**. We can clarify further here. To explain:\", \"Recall, our hypothesis (see Hypothesis 2) was \\\"_Randomly sampling functions or models from a set with very diverse parameters should increase diversity in outputs_\\\"\", \"The objective in (10) effectively pushes model parameters apart (in the Bayesian context, to better approximate a multi-modal posterior). This forces learning different and diverse representations, as shown in prior work [7], to yield, effectively, a set of functions with different parameters.\", \"Then we can expect sampled functions (models from such a set of models) to result in output diversity. Consequently, this leads to high output variance.\", \"Importantly, our diversity analysis results in ***Figure 3*** demonstrate and confirm that our motivation is justified.\"]}", "{\"title\": \"Answers to Questions\", \"comment\": \"- Computational complexity?\\n\\n*Response: Please refer to our answer for **Question 1** above in addressing the Weaknesses in the paper.*\\n\\n- Can the authors provide more details on the motivation and justification of equation (10) to train more diverse models?\\n\\n*Response: Please refer to our answer for **Question 4** above in addressing the Weaknesses in the paper.*\"}", "{\"comment\": \"Thanks for the clarification. It addresses my concerns. I would like to keep my score.\"}", "{\"title\": \"We provide a solution to reduce overhead. We explain how our learning objective (Equation 10) addresses similar adversarial boundaries\", \"comment\": \"***Thank you for all the valuable comments!***\\n\\n__Q1. 
Solutions to reduce the resource overhead in both the training and inference phases__\\n\\nTo reduce resource overhead in both the training and inference phases, we can:\\n\\n- Leverage LoRA as we have already demonstrated. Simply, we can incorporate LoRA to build an ensemble from a single pre-trained model and employ SVGD to push the parameters of LoRA apart. This can significantly reduce the need to train and store multiple large models while maintaining diversity. LoRA modifies only a small subset of parameters, which can make both training and inference more resource-efficient.\\n- We now report results from using LoRA for a CLIP model for the **ImageNet** task. Please see __Reviewer YFcN, Table 4__.\\n- Notably, one could also explore other efficient diversity-promoting schemes. To illustrate, we can dynamically generate diversity through parameter perturbation or layer-wise modularity, which could reduce storage and computational costs.\\n\\n__Q2. Share similar adversarial boundaries__\\n\\nThank you for the interesting question. Indeed, this is why we:\\n\\n1. Select a random model and\\n2. Build models that learn *different* representations (Objective in Equation 10)\\n\\nThis reduces the possibility of adversarial boundaries being the same. Effectively, during training, the objective in (10) pushes model parameters apart in the parameter space, fostering unique representations across models. This results in individual models with different decision boundaries.\\n\\nIndeed, ***if*** the models were not *diverse* (for example, if we did not use the objective in (10)), we could expect the models in the pool to possibly share adversarial boundaries.\"}", "{\"title\": \"Reviewer r45L, We just wanted to let you know, we have now finished responding to Question 2.\", \"comment\": \"***Thank you for your careful review. We really appreciate the opportunity to clarify this important point. 
We are sorry this reply is late; we were working through all of our new results first.***\\n\\nWe will go through the answer, step by step.\\n\\n1. __Dimension of $u$, $x$ and $g(x)$__: You are correct that the vector $u$ and the estimated gradient $g(x)$ should have the same dimension as $x$ to ensure proper summation in Equations (4), (5), and (6). We will revise the notation to explicitly state that $u \\\\in R^d$, $x\\\\in R^d$ and $g(x) \\\\in R^d$.\\n2. __The bound of g(x)__: As $g(x)$ is a vector, the bound of $g(x)$ can be defined as the bound for each element (each dimension) of $g(x)$. To eliminate ambiguity, we will formally define the bound of $g(x)$ as follows:\\n$$a_{i}^j \\\\leq g\\\\(x\\\\)\\\\_{i}^j\\\\leq b_{i}^j$$ \\n - where $i=1,..., n$, $n$ is the number of different gradient estimators $g(.)$, $j=1,..., d$ indexes each dimension, and $d$ is the number of elements (dimensions) of $g(x)_i$.\\n \\n To simplify the notation, we drop $i$ for gradient estimator $g(x)_i$ to have $g(x)$.\\n\\n 3. __The lower bound for n__: We are sorry for the lack of clarity here: we described the bound for each scalar element without explicitly saying so. 
To address the confusion, we provide an update for equation (7) as follows:\\n - We define $A^j$ as $|\\\\bar{g}(x)^j - \\\\hat{G}(x)^j|\\\\geq \\\\Delta$\\n - According to Hoeffding's inequality, and employing a union bound over all $d$ dimensions to bound the probability of deviation in any component, we have:\\n $$P(\\\\cup_{j=1}^{d}A^{j})\\\\leq \\\\sum_{j=1}^{d} P(A^j)=\\\\sum_{j=1}^{d}2\\\\exp{\\\\Big(-\\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}(a_{i}^j-b_{i}^j)^2}\\\\Big)}.$$\\n This term can further be upper bounded: since $\\\\exp(-x)$ is monotonically decreasing, we know for any $j$:\\n $$\\\\exp{\\\\Big(-\\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}(a_{i}^j-b_{i}^j)^2}\\\\Big)}\\\\leq \\\\exp{\\\\Big(-\\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}[\\\\max_j(b_{i}^j-a_{i}^j)]^2}\\\\Big)}$$\\n Therefore, we have:\\n $$P(\\\\cup_{j=1}^{d}A^{j}) \\\\leq \\\\sum_{j=1}^{d}2\\\\exp{\\\\Big(-\\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}(a_{i}^j-b_{i}^j)^2}\\\\Big)} \\\\leq 2d\\\\exp{\\\\Big(-\\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}[\\\\max_j(b_{i}^j-a_{i}^j)]^2}\\\\Big)}$$\\nTo achieve a low margin error $\\\\Delta$ with the desired confidence level $1 - \\\\delta$ and the bound given above, we set the right-hand side of the inequality smaller than $\\\\delta$ and solve for $n$ as follows:\\n\\n\\n\\n$$\\n\\\\begin{equation}\\n 2d\\\\exp{\\\\Big(-\\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}[\\\\max_j(b_{i}^j-a_{i}^j)]^2}\\\\Big)}\\\\leq\\\\delta\\n\\\\end{equation}\\n$$\\n$$\\n\\\\begin{equation}\\n-\\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}[\\\\max_j(b_{i}^j-a_{i}^j)]^2}\\\\leq \\\\log\\\\frac{\\\\delta}{2d}\\n\\\\end{equation}\\n$$\\n$$\\n\\\\begin{equation}\\n \\\\frac{2n^2\\\\Delta^2}{\\\\sum_{i=1}^{n}[\\\\max_j(b_{i}^j-a_{i}^j)]^2}\\\\geq \\\\log\\\\frac{2d}{\\\\delta}\\n\\\\end{equation}\\n$$\\n$$\\n\\\\begin{equation}\\n n^2\\\\geq 
\\\\frac{\\\\log\\\\frac{2d}{\\\\delta}\\\\sum_{i=1}^{n}[\\\\max_j(b_{i}^j-a_{i}^j)]^2}{2\\\\Delta^2}\\n\\\\end{equation}\\n$$\\n$$\\n\\\\begin{equation}\\n n\\\\geq \\\\sqrt{\\\\frac{\\\\log\\\\frac{2d}{\\\\delta}\\\\sum_{i=1}^{n}[\\\\max_j(b_{i}^j-a_{i}^j)]^2}{2\\\\Delta^2}}\\n\\\\end{equation}\\n$$\\n\\nWe hope that our response addressed the concerns of the reviewer. We're happy to promptly answer any additional questions.\"}", "{\"comment\": \"__Q1. Model Selection__\\n\\nAs we aim to design a defense mechanism that mitigates compromising performance for robustness, while training a set of models together (e.g. 10 models), we select the model set that obtains the best clean accuracy. To achieve this, we follow the standard training scheme and choose the model set at the epoch where the best clean accuracy on a test set is achieved. \\n\\n__Why was this approach chosen over using a validation set?__\\n\\nA validation set is used to tune hyperparameters (hyperparameter choices) and assess model performance during training. Datasets such as MNIST, CIFAR-10 and STL-10 define only two standard subsets: i) training; and ii) testing. You could split the training set into training and validation, but ultimately, benchmarks report the best performance (accuracy) on the standard testing set. In our study, we used these datasets to train models, and chose models based on their performance on the test set. This is a common practice [1, 2].\\n\\n[1] He, Kaiming, X. Zhang, Shaoqing Ren and Jian Sun. \\u201cDeep Residual Learning for Image Recognition.\\u201d IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016).\\n\\n[2] Zagoruyko, Sergey and Nikos Komodakis. \\u201cWide Residual Networks.\\u201d ArXiv abs/1605.07146 (2016).\\n\\n__Q2 and Q3. Effectiveness on high-resolution datasets and effectiveness across a wider range of models__\\n\\nThank you for this suggestion. 
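As a numeric companion to the Hoeffding-based lower bound on $n$ derived in our reply to Question 2 above: if we additionally assume every estimator shares a single per-dimension range width $w=\max_j(b_i^j-a_i^j)$, then the sum collapses to $n w^2$ and the implicit bound solves to the closed form $n \geq w^2 \log(2d/\delta)/(2\Delta^2)$. The sketch below is our simplification, not part of the paper's analysis:

```python
import math

def min_queries(d, delta, width, margin):
    """Closed-form lower bound on the number of averaged gradient
    estimates n, under the simplifying assumption that all estimators
    share one per-dimension range width (see the note above)."""
    return math.ceil(width ** 2 * math.log(2.0 * d / delta)
                     / (2.0 * margin ** 2))
```

For example, tightening the margin from 0.5 to 0.1 at fixed dimension and width raises the required number of estimates by roughly a factor of 25, which is the query-cost amplification the bound captures.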
We have conducted new experiments to address this concern.\\n\\nAs we discussed with __Reviewer WLVn__, to demonstrate the effectiveness of our defense mechanism on a high-resolution dataset (ImageNet) and its effectiveness across models (convolutional and transformer-based networks), we conducted an experiment.\\n- For our proposed defense, we fine-tuned a set of five CLIP models with LoRA on ImageNet (the model set achieves 78.05% clean accuracy). At test time, we randomly select two out of five models to make predictions. \\n- For RND and RF defenses, we fine-tune the CLIP model (achieving 76.07% clean accuracy). Then we set hyperparameters such that the clean accuracy drop of these two defenses is around 1%. \\n- We randomly selected 100 correctly classified images. The results in Table 1 below show that our approach works well on ImageNet and achieves better robustness than RND and RF methods. Thus, our defense is effective across different model types, scales well to large models and high-resolution datasets like **ImageNet**.\", \"table_1\": \"$l_\\\\infty$ objective. Robustness ($\\\\uparrow$) of different defense methods against SQUAREATTACK on the __ImageNet__ task with the CLIP model architecture.\\n| Methods | 0.025 | 0.05 | 0.075 | 0.1 |\\n| ------- | ---------- | ---------- | ---------- | --------- |\\n| RND | 83.39% | 61.95% | 43.37% | 24.89% |\\n| RF | 86.45% | 65.10% | 51.14% | 35.83% |\\n| DISCO | __90.76%__ | __72.51%__ | __56.17%__ | __45.40%__ |\\n\\nNevertheless, it is important to highlight:\\n\\n- First, while our initial experiments were conducted on low-resolution datasets to thoroughly analyze the core principles of DISCO, the proposed framework remains highly relevant and scalable to high-resolution datasets such as ImageNet. \\n- Second, the fundamental mechanisms underlying our defense, model diversity and model randomization, are not inherently tied to a specific model architecture. 
However, due to our limited resources, and since our main aim was to theoretically and empirically examine the effectiveness of our defense idea, we only chose one network architecture for each dataset. We have addressed this now.\\n \\n__Q4. Code release__\\n\\nWe want to strongly reaffirm our full commitment to reproducibility and transparency. \\n\\nA partial code release has been made to allow our defense to be evaluated during the paper submission and review. In the next phase, the complete code from our study will be released. It will include:\\n\\n- The implementation of the SVGD+ training method\\n- The model randomization procedure (it should be ready in the released code but will be updated)\\n- Code for custom loss functions.\", \"title\": \"We clarify model selection follows standard practice. Provide new results for ImageNet on CLIP and confirm full code release at our public repo\"}", "{\"summary\": \"The paper proposes a defense mechanism against query-based black-box attacks relying on model randomization. The proposed method aims to hinder the estimation of the gradients needed to compute adversarial examples by obfuscating the relationship between successive queries, randomly sampling a model from a set of models for each query. The paper includes a theoretical analysis showing how this strategy forces the attacker to increase the number of queries to obtain an accurate estimate of the gradient and how model diversity helps to enhance robustness against gradient-free attacks. 
The experimental evaluation shows that this method is more robust than other strategies relying on randomization for defending against query-based black-box attacks, and that the proposed approach to increase models\\u2019 diversity also helps to increase robustness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors propose a novel approach to generate output diversity for defending against query-based attacks relying on randomization. They propose a new strategy to enhance models\\u2019 diversity while minimizing the impact on the clean accuracy.\", \"The authors provide a nice theoretical analysis justifying why model randomization is effective against both gradient estimation and gradient-free attacks. In the first case, Proposition 1 shows that this approach forces the attacker to increase the number of queries to produce accurate gradient estimates and, for gradient-free attacks, Proposition 2 shows how increasing models\\u2019 diversity increases the probability of misleading an attack direction, enhancing the robustness to query-based attacks.\", \"Compared to other defenses that have a negative impact on the clean accuracy, the proposed defense aims to achieve both high robustness and clean accuracy. For this, the proposed strategies to select models\\u2019 subsets and increase diversity show good empirical results.\", \"The experimental evaluation includes a good representation of state-of-the-art attacks and defenses against query-based attacks. The results show that the proposed method improves robustness compared to other competing methods relying on randomization and that the mechanism proposed by the authors to increase diversity is important to achieve such a goal.\"], \"weaknesses\": [\"Compared to other randomization-based strategies, training and storing a diverse set of models can be computationally demanding, especially for large models and training datasets. 
In this sense, the computational complexity is not well discussed. It would be convenient to compare the complexity of this approach with the other competing randomization-based defenses mentioned in the paper, as well as to better discuss the trade-offs between robustness and computational burden.\", \"Following up on the previous point, the computational complexity of model sampling and diversity training can have important scalability and latency issues in some practical applications (e.g. real time) or in common scenarios of modern machine learning systems, where the models and the datasets are large. In contrast, for smaller models, other alternatives to query-based strategies, like transfer attacks, can be more appealing to attackers, which can limit the capacity of the proposed approach to defend against attacks. In this sense, I think that the authors should better position the paper and discuss the type of scenarios where such a defense can be useful and applicable.\", \"Although the theoretical analysis provides some good insights that support the benefits of the proposed method, the result in Proposition 1 is somewhat limited, as it relates the number of queries to achieve some quality in the gradient estimation as a function of the upper and lower bound for the gradient\\u2019s value. Thus, the result does not relate to the characteristics of the random model selection method. For instance, it would be interesting to analyze how the size of the subset K has an impact on the number of queries required to achieve an error estimation in the gradient lower than Delta.\", \"Although the empirical results endorse the use of equation (10) to train diverse models, the approach itself is not well motivated and justified. 
Perhaps the authors could provide a more detailed explanation about the motivation and justification.\"], \"questions\": [\"Computational complexity: Can the authors provide some insights about the computational complexity and the applicability of the proposed defense compared to other competing methods based on model randomization?\", \"Can the authors provide more details on the motivation and justification of equation (10) to train more diverse models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your valuable feedback Reviewer whE1! We will share an updated PDF.\", \"comment\": [\"We appreciate the Reviewer checking back on our responses.\", \"We are updating the PDF to reflect the discussion we have above.\"]}", "{\"comment\": \"Thank you for your response.\\n\\nUnfortunately, my concern regarding Question 2\\u2014the ambiguity in the mathematical notation in Section 3.3.1\\u2014remains unaddressed. This ambiguity suggests there may be a mistake in the theoretical analysis presented. As a result of this unresolved issue, I have decreased my score to 5.\"}", "{\"comment\": \"__Q4: Can we substitute the first term in (11) with a random subset of models?__\\n\\nThank you for this interesting question. Certainly food for thought. So let's investigate:\\n\\nFirst, let's recall what we have found: Equation (10), as a learning objective for an ensemble, is crucial for model diversity (please see our ***Hypothesis 2***, *line 198* in the updated paper). Then, Equation (11) addresses the two questions we posed about how to achieve diversity while needing to reduce the ***asymmetry*** in performance between models. Reducing the asymmetry is important, because we want any random combination of models to perform well. 
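To make the interplay of the two terms concrete, here is a minimal sketch of such a joint objective. This is illustrative NumPy code with made-up names, not our released SVGD+ implementation; the exact repulsive kernel and weighting follow Equations (10) and (11) in the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def joint_objective(all_logits, y, gamma=0.1):
    """Illustrative joint objective: a per-model sample loss (in the spirit of
    Equation (11)) plus a pairwise repulsive term (in the spirit of Equation (10)).
    `all_logits` holds one (batch, classes) array per model in the set."""
    n = len(all_logits)
    # Sample loss: every model is penalized individually, so *each* model
    # stays accurate and any random subset performs well at inference time.
    sample_loss = 0.0
    for z in all_logits:
        p = softmax(z)
        sample_loss += -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    sample_loss /= n
    # Repulsive term: an RBF-style kernel over model outputs that is largest
    # when two models agree, pushing the set toward diverse representations.
    repulsion = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            repulsion += np.exp(-np.mean((all_logits[i] - all_logits[j]) ** 2))
    return sample_loss + gamma * repulsion
```

The `gamma` weight here plays the role of the $\gamma$ parameter mentioned in our response on balancing diversity against clean accuracy.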
This mitigates the sacrifice in performance associated with devising means for achieving robustness. So, Equation (10) is for diversity, and Equation (11) promotes individual model performance.\\n\\n- As our ablation study results in ***Appendix D*** show, *without* Sample Loss, the individual particle (model) performance can be poor; more significantly, the performance we can expect from a model randomization method is also poor (please see ***Table 7***, ***Appendix D***).\\n- As our results in ***Figure 3*** show, without SVGD (see results for *Ensembles*, blue color bar), the diversity among the learned representations is low. This is the reason that SVGD+ (learning with Equations 10 & 11) leads to the best robustness results (please see ***Tables 13***, ***14***, ***15*** and ***16*** in ***Appendix I*** where we compare the *DISCO* method with the *Ensembles*, *DivDis*, and *DivReg* ensembling methods with other diversity objectives).\\n\\nThen, to be clear, in using LoRA, the idea is not just to optimize a set of models but to use it with Equations (10) and (11).\\n\\nNow, to answer the question, substituting the first term in equation (11) with a random subset of models is an interesting idea. However, for us:\\n\\n- It is not clear how a substitution can be optimal or equivalent in effectiveness, particularly in terms of achieving diversity and robustness, which are fundamental goals in our DISCO framework. The primary reason for using all of the models during training is to ensure that the ensemble learns a diverse set of *representations* enforced by Equation (10). If we were to train only a random subset of models at each step, the optimization process would lack cohesion, potentially leading to underutilized model capacity, less diversity, etc. Further, it would also remove the conditions under which the proof and derivation for Equation 10 are undertaken in [2] and generalized in [3]. 
So, we expect this will reduce the effectiveness of the model set in providing robustness against adversarial attacks.\\n\\n- It is also not clear how long it would take to converge to an effective solution. Sampling subsets of models during training could undermine the balance between diversity and accuracy, as some models are optimized more effectively than others. Consequently, this approach may converge more slowly to an effective solution and cost more training time.\\n\\n[2] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Neural Information Processing Systems, 2016.\\n\\n[3] Dilin Wang and Qiang Liu. Nonlinear Stein variational gradient descent for learning diversified mixture models. In Proceedings of the 36th International Conference on Machine Learning (ICML), 2019.\\n\\n*Please kindly reach out if you have any further questions; we stand ready to answer them promptly to help clarify any further issues.* Once again, we thank you for seeking these clarifications and for your kind input and feedback.\", \"title\": \"We answer the additional question: what if a random subset for training is used?\"}", "{\"summary\": \"The paper explores a novel defense mechanism to improve the robustness of deep learning models against query-based black-box adversarial attacks. The authors propose using model randomization as a defense. This obfuscates the relationship between successive responses, thereby hindering the adversary's optimization process for generating adversarial examples. The paper provides both theoretical analyses and empirical results showing that model randomization improves resilience across various attack types and perturbation objectives.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Well-written and Clear: The paper is clearly articulated, making complex concepts accessible to the reader. 
The structure of the paper allows for an easy understanding of the problem, solution, and results.\\n2. Theoretical Justification of Randomization: The paper provides a strong theoretical foundation, demonstrating how model randomization can enhance robustness against adversarial attacks.\\n3. Rigorous Experiments: The experiments are comprehensive and rigorously conducted, covering multiple types of adversarial attacks (score-based, decision-based) and perturbation objectives (l\\u221e, l2, l0). This enhances the credibility of the paper's claims regarding the method\\u2019s efficacy.\", \"weaknesses\": \"1. Over-Reliance on Test Accuracy: As noted in Appendix C (around line 1020), it appears that model selection is primarily based on test accuracy. (If I have misunderstood this aspect, I would appreciate any clarification. Addressing this concern would greatly encourage me to consider a more favorable evaluation of the paper.)\\n\\n2. Experiments Limited to Low-Resolution Data: The experiments are conducted on relatively low-resolution datasets like MNIST, CIFAR-10, and STL-10. This limits the generalizability of the results to more complex, high-resolution datasets, which are more common in real-world scenarios.\", \"questions\": \"1. Model Selection: Could the authors clarify the exact process used for model selection? If test accuracy was indeed a criterion, it would be helpful if the authors could explain why this approach was chosen over using a validation set. Addressing this concern would greatly encourage me to consider a more favorable evaluation of the paper.\\n\\n2. High-Resolution Data: Have the authors conducted experiments on high-resolution datasets (e.g., ImageNet)? It would be valuable to assess how the proposed defense mechanism performs on more complex, high-dimensional data commonly found in real-world applications.\\n\\n3. 
Dataset-Specific Models: The paper uses different models for each dataset, which raises concerns about whether the results are model-specific. Could the authors clarify the reasoning behind selecting different models for each dataset and provide results that demonstrate the method\\u2019s effectiveness across a wider range of models? This would help ensure the robustness of the method across various architectures.\\n\\n4. Incomplete Released Code: Although the paper mentions that the code is available on GitHub, the provided repository does not include the training code, which is crucial for reproducing the results. To ensure full reproducibility and transparency, it would be helpful if the authors could include specific components of the training process in their code release, such as the implementation of the SVGD+ method, the model randomization procedure, and any custom loss functions or optimizers used. Including these details would provide clearer guidance on what is needed for accurate replication of the results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We address the concerns about balancing diversity and accuracy, effectiveness across model types and datasets and the impact of different sizes (Appendix J)\", \"comment\": [\"***Thank you for the valuable comments. This has helped us improve our work***\", \"__Q1. A systematic way to balance model diversity and clean accuracy__\", \"Thank you for raising an important point. This is studied in depth in deep Bayesian neural networks.\", \"First, as expected, increasing model diversity alone will come at the expense of model performance.\", \"The SVGD method provides a systematic way to control this balance. A model trainer can determine the emphasis placed on diversity vs. 
model performance by selecting a suitable $\\\\gamma$ parameter for Equation (10) as we mentioned in Line 332.\", \"__Q2 and Q3. Effectiveness across different model types and datasets__\", \"We agree and are happy to report new results to address this concern.\", \"As we discussed with **Reviewer WLVn**, we fine-tuned a pre-trained CLIP model to build an ensemble of 5 models, which is different from VGG and ResNet architectures.\", \"Importantly, we addressed the computational demands of the approach by employing LoRA to build a set of five models from a single pre-trained CLIP model to achieve approximately 78% clean accuracy. During the inference phase, two out of five were randomly selected from the trained set.\", \"For comparison, we also fine-tuned a single CLIP model for RND and RF defenses, achieving a clean accuracy of 76.07% (noise was selected to maintain a clean accuracy drop of approximately 1% as in the prior works). For this experiment, we randomly selected 100 correctly classified images.\", \"The results in Table 1 show that DISCO achieves higher robustness compared to RND and RF methods. Thus, DISCO is effective across different model types, scales well to large models, and works well on challenging datasets like **ImageNet**.\"], \"table_1\": \"$l_\\\\infty$ objective. Robustness of different defense methods against SQUAREATTACK with the __ImageNet__ task.\\n| Methods | 0.025 | 0.05 | 0.075 | 0.1 |\\n| ------- | ---------- | ---------- | ---------- | --------- |\\n| RND | 83.39% | 61.95% | 43.37% | 24.89% |\\n| RF | 86.45% | 65.1% | 51.14% | 35.83% |\\n| DISCO | __90.76%__ | __72.51%__ | __56.17%__ | __45.4%__ |\\n\\n__Q4. The impact of different sizes of the model set__\\n\\nThank you for this insightful question. Indeed, we used MNIST to enable us to train large sets of models to investigate the question posed by the Reviewer. 
\\n- Our extensive results with MNIST show that, as long as we can maintain model diversity and performance, having a large pool of models to select from is more robust. To demonstrate, we have *extracted* the results in ***Appendix J*** (in the revised manuscript) and shown the impact of selecting 1 from 10, 20 and 40 models under a strong attack budget (harder to defend against). Please see the results in ***Table 2*** below. This demonstrates that having a large pool is beneficial. \\n- Recall that the robustness of a model is correlated with the number of queries needed to mount a successful attack. Then, from our theoretical analysis, robustness relies on the output score variance from our model randomisation approach.\\n - So, as we discussed with Reviewer ***WLVn*** in Q3, increasing the ensemble size leads to improved robustness due to larger variance in model outputs.\", \"table_2\": \"$l_2$ objective. The robustness of DISCO against SQUAREATTACK at the strong attack budget 4.0 when sampling one out of different sizes of model sets.\\n| Random Selection | Accuracy |\\n| ---------------- | -------- |\\n| 1 out of 10 | 80.0% |\\n| 1 out of 20 | 83.5% |\\n| 1 out of 40 | 88.2% |\"}", "{\"title\": \"Please let us know if our answers have provided the clarification you have sought?\", \"comment\": [\"We thank you for the opportunity to answer the important question you have raised.\", \"We stand ready to provide any further clarifications or results.\"]}", "{\"title\": \"We addressed all of the Weaknesses and Questions in your Feedback with New Results.\", \"comment\": \"***Thank you for the initial evaluation of our paper.***\\n\\n__Q1. Cost Analysis and Complexity Mitigation Strategy__\\n\\nIndeed, the Reviewer is right, there is *no free lunch*.\\n\\nWe achieve much better robustness compared to previous methods. But model randomization does lead to increasing the training and storage burden. 
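For context on where this overhead sits: the randomization step itself is computationally trivial at inference time; the burden lies in training and storing the model set. A minimal sketch of the per-query model selection follows (simplified, illustrative code with made-up names, not an excerpt of our released implementation):

```python
import random
import numpy as np

def randomized_predict(models, x, k=1, rng=random):
    """Sketch of model randomization at inference: for *every* incoming query,
    draw a fresh random subset of k models from the trained set and aggregate
    their outputs. Successive queries therefore hit different subsets, which
    obfuscates the attacker's view of the decision surface."""
    subset = rng.sample(models, k)                # fresh draw per query
    mean_logits = np.mean([m(x) for m in subset], axis=0)
    return int(np.argmax(mean_logits))            # predicted class
```

Only the selected subset is evaluated per query, so inference cost grows with the subset size `k` rather than the full set size; the storage cost for the full set is what the comparisons below quantify.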
RND and Random Feature use a single model, whereas we employ a set of $n$ models, so the number of parameters in our approach is $n \\\\times$ higher and the memory consumption is also larger.\\n\\nFortunately, this problem can be mitigated:\\n\\n- Recent work has begun tackling this problem for ensembling methods. For example, the study [1] shows how a pre-trained model can be trained with less than a ***1%*** increase in parameters and storage cost to build ensembles of diverse models. The authors use CLIP for ImageNet and VBL for language tasks.\\n- Importantly, this recent work builds upon research into efficient model tuning with low-rank adapters (LoRAs) [2]. Based on these studies, we'll add a section to discuss how to address the issue with training and storage.\\n\\nFollowing the recommendation from the reviewer, we report:\\n\\n**(1)** Cost comparisons for training a single model vs. a set of models (40 models for MNIST, ten models for CIFAR-10/STL-10) used in our experiments to show that achieving better robustness does come at some cost. 
However, that cost can be mitigated through methods such as [1].\", \"table_1\": \"Training time of models trained on different datasets between a single model and a set of models (DISCO).\\n|Datasets|Single Models|DISCO|\\n|---|---|---|\\n|MNIST|~0.5 hr|~12.5 hrs|\\n|CIFAR-10| ~1.5 hr|~72.0 hrs|\\n|STL-10| ~1.2 hr|~60.0 hrs|\", \"table_2\": \"Trainable Parameters of models trained on different datasets between a single model and a set of models (DISCO).\\n|Datasets|Single Models|DISCO|\\n|---|---|---|\\n|MNIST|0.312 M|12.5 M|\\n|CIFAR-10|14.73 M|147.3 M|\\n|STL-10|11.18 M|111.8 M|\\n\\n_Table 3:_ Storage Consumption of models trained on different datasets between a single model and a set of models (DISCO).\\n|Datasets|Single Models|DISCO|\\n|---|---|---|\\n|MNIST|1.19 MB|47.7 MB|\\n|CIFAR-10|56.18 MB|561.84 MB|\\n|STL-10|43.15 MB|426.55 MB|\\n\\n**(2)** __ImageNet with CLIP (at only 1% training cost)__\\n\\n- We added new results, following the method in [1], for ImageNet using a set of 5 models and selecting 2 out of 5 for model randomization.\\n- Now:\\n - The cost in training is to update just 1.84M parameters instead of 570M for 5 models (notably, each CLIP model updates 114M parameters). This is just 0.38% of the cost of training 5 models. So, less than ***1%*** (1.84M/5) of the parameters in a single CLIP model need to be updated to build a *single* model in an ensemble.\\n - The storage for a single CLIP model is 433 MB. Using the LoRA-based method, our 5 models consume just 439 MB, since parameters not updated during tuning do not need to be replicated. This is only a 1.38% increase in storage for 5 models compared to a single model. So, less than a ***0.3%*** increase in storage is required for each model in an ensemble.\\n- But we still benefit from the ***improved robustness*** to attacks.\", \"table_4\": \"$l_\\\\infty$ objective. 
Robustness ($\\\\uparrow$) of different defense methods against SQUAREATTACK with the __ImageNet__ task with __OpenCLIP__ architecture [3] (For experimental settings, kindly see the response to __Reviewer YFcN__).\\n|Methods|0.025|0.05|0.075|0.1|\\n|---|---|---|---|---|\\n|RND|83.39%|61.95%|43.37%| 24.89%|\\n|RF|86.45%|65.1%|51.14%| 35.83%|\\n|DISCO|__90.76%__|__72.51%__|__56.17%__|__45.4%__|\\n\\n**Q2. Application Dilemma - Better robustness or reduce the cost of training?**\\n\\n- Our work aims to theoretically investigate an alternative defense *idea* capable of better robustness.\\n- Recent research, as we reported, shows that we can mitigate the issues of computational complexity. So, trading off a small increase in training cost can achieve improved robustness.\\n- This is relevant for model service offerings in applications such as finance, healthcare, or defense, where the cost of adversarial failures is significant and robustness is paramount. \\n\\n**Transfer attacks?**\\n\\n- As we discussed in **Section 2**, transfer-based attacks\\u2019 success relies on the similarity between the surrogate and target models. [4, 5], & [6] pointed out that the success of transfer attacks is limited for diverse models. Therefore, if an adversary favors transfer attacks, our approach is well-suited to defending against these attacks. _Simply, the diverse models help reduce the similarity between the defended model and the surrogate model used by an adversary_.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your detailed response. While I consider the mathematical derivation in your last reply to be correct, I believe **the primary concern with this paper lies in the significant computational and resource costs associated with training**, as highlighted in comments of Reviewers WLVn and whE1. 
For example, using ImageNet as a case study, training an ensemble of multiple models would require extensive time and GPU resources, leading to increased carbon emissions and financial burden. Would it be possible for your method to take advantage of pre-trained models, such as those provided by the `timm` library for the ImageNet dataset?\"}", "{\"title\": \"We addressed all of the Weaknesses and Questions. We included new results on ImageNet with OpenCLIP\", \"comment\": \"__Q1. An analysis of the training time, storage, and inference time of DISCO.__\\n\\nIndeed, as we discussed with Reviewer **WLVn**, there is __no free lunch__.\\n- We report comprehensive comparisons of the training time, storage and inference time of DISCO, RND and RF in Tables 1, 2 and 3. We note that, since RND and RF use a single model, the training time and storage for both of them are the same, while DISCO uses 40 models for MNIST and 10 models for CIFAR-10/STL-10.\\n- For the results in Table 3, we ran 1000 queries and calculated the average inference time.\", \"table_1\": \"Training time of models trained on different datasets between RND, RF vs DISCO.\\n|Datasets|RND & RF|DISCO|\\n|---|---|---|\\n|MNIST|~0.5 hr|~12.5 hrs|\\n|CIFAR-10|~1.5 hr|~72.0 hrs|\\n|STL-10|~1.2 hr|~60.0 hrs|\", \"table_2\": \"Storage Consumption of models trained on different datasets between RND, RF vs DISCO.\\n|Datasets|RND & RF|DISCO|\\n|---|---|---|\\n|MNIST|1.3 MB|145 M|\\n|CIFAR-10|57 MB|1.7 G|\\n|STL-10|43 MB|1.3 G|\", \"table_3\": \"Inference time (per query) of undefended vs defended (RND, RF vs DISCO) models on different datasets.\\n|Datasets|Undefended|RND|RF|DISCO|\\n|---|---|---|---|---|\\n|MNIST|10.17 ms|12.14 ms|12.53 ms|15.12 ms|\\n|CIFAR-10|10.56 ms|12.61 ms|12.92 ms|20.62 ms|\\n|STL-10|11.26 ms|13.12 ms|13.48 ms|24.85 ms|\\n\\n***Notably, the results for DISCO inference times can easily be improved with parallelisation of the inference pipeline to nearly match RND and RF, which we have not 
done***.\\n\\n__Q2. Effectiveness on high-resolution datasets such as ImageNet__\\n\\nWe theoretically investigated a new method. So, we used relatively low-resolution datasets to be able to complete the significant number of experiments needed to thoroughly analyze the core principles of DISCO. \\n\\nBut our method ***is*** effective on high-resolution datasets. We can demonstrate this now with new results.\\n- Inspired by recent work to scale ensembling to large-scale models in [1], we fine-tuned a pre-trained model.\\n- We used the large-scale OpenCLIP [2] model with LoRA [3] on ImageNet to build a set of 5 models, achieving approximately 78% clean accuracy on the test set for the ensemble. We used a random selection of two out of five models in our method, achieving approximately 77% clean accuracy on the test set (1% drop).\\n- For RND and RF defenses, we fine-tune a single OpenCLIP model to achieve 76.07% clean accuracy and, for a fair comparison, choose hyperparameters such that the clean accuracy drop of these two defenses is also around 1%.\\n- In this experiment, due to limited time and computational effort to run attacks, we randomly selected 100 correctly classified images for attacks.\\n- The results in **Table 4** below show that our approach works well on ImageNet and achieves better robustness than RND and RF methods.\", \"table_4\": \"$l_\\\\infty$ objective. Robustness ($\\\\uparrow$) of different defense methods against SQUAREATTACK with the __ImageNet__ task with *OpenCLIP* model architecture.\\n|Methods|0.025|0.05|0.075|0.1|\\n|---|---|---|---|---|\\n| RND| 83.39%|61.95%|43.37%|24.89%|\\n| RF| 86.45%|65.1%|51.14%|35.83%|\\n| DISCO| __90.76%__ | __72.51%__ | __56.17%__ | __45.4%__ |\\n\\n__Q3. Effectiveness with different network architectures__\\n\\nWe understand and appreciate your concern. Let us explain why and then address it.\\n\\n- Our theoretical work is independent of model architectures. 
\\n- Then, when we empirically examined the effectiveness of our formulation to improve robustness against query-based black-box attacks, we prioritized using convolutional architectures, as they are ubiquitous and simple to work with.\\n- However, to address your concern, we have conducted comprehensive experiments on ***ImageNet*** with ***OpenCLIP*** based on recent work in [1].\\n - The results in Table 4 (above) show that our proposed method is more robust than RND and RF and works well with transformer architectures (a non-convolutional network).\\n - We hope the generality shown in the results addresses the Reviewer's concern.\\n\\n__Q4. Explanation on how the proposed objective helps each individual model perform well__\\n\\nThank you for asking. Let us explain why here, while we improve the explanation in the paper.\\n\\n- Minimizing the loss over an average of logits for a subset of models faces the same problem we tried to address, because it promotes strong ensemble performance and does not guarantee that each individual model will perform well.\\n- Individual model performance, as we mentioned in **Section 3.4.2**, is very important to ensure a minimal performance degradation for our defense, because we want any randomly selected model to be well-performing.\\n- To this end, the proposed objective, through the joint training process, promotes diversity among models and ensures *each* individual model maintains strong performance.\\n- Importantly, we empirically show in ***Appendix C, Table 7*** that the proposed objective helps obtain well-performing individual models while achieving diversity, concurrently.
ArXiv (2024)\\n\\n[2] Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. ICLR, 2022.\\n\\n[3] Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., & Sutskever, I. \\u201cLearning Transferable Visual Models From Natural Language Supervision.\\u201d ICML, 2021.\\n\\n[4] Suya, F., Suri, A., Zhang, T., Hong, J., Tian, Y., & Evans, D. SoK: Pitfalls in evaluating black-box attacks. SaTML, 2024.\\n\\n[5] Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., & Hsieh, C.-J. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. AISec, 2017.\\n\\n[6] Cheng, S., Miao, Y., Dong, Y., Yang, X., Gao, X., & Zhu, J. Efficient Black-box Adversarial Attacks via Bayesian Optimization Guided by a Function Prior. ICML, 2024.\"}", "{\"title\": \"We have answered all of the new questions (ImageNet on CLIP cost analysis shows cost is < 0.32% per model, results in Appendices F & G, new Section 4.5 in updated paper)\", \"comment\": \"**Q2** *better reflect these trade-offs*\\n\\nWe certainly appreciate the Reviewer's sentiment that the main body of the paper should make the cost clearer to the reader, so the performance gains are put in perspective. \\n- Certainly, in the paper we clearly mention the number of models we used in table captions and text.\\n- Then we report sampling from a fixed number in the main body due to the overwhelming nature of the results set for other strategies, and defer these to the ***Appendices F & G***.\\n\\nNow we will explicitly discuss the cost trade-off and add this to the main body of the paper as a new ***Section 4.5***. 
We absolutely want to allow the research community to benefit from our in-depth analysis and results, and be very upfront about the fact that there is no free lunch.\\n\\n- We will share the updated PDF with you to show this addition and the cost analysis recommended by the reviewer.\\n \\n**Q2** *The computational complexity has been an argument raised by most reviewers and I think that the authors could make a more compelling case for defending their approach.*\\n\\n*Response:*\\n\\nWe believe we have a very compelling case. \\n\\nWe start with a ***pre-trained CLIP*** model to demonstrate practicability and build a working defense for a model suitable for serious applications.\\n\\n- We started with a ***pre-trained CLIP*** (to put CLIP into perspective, please note that ***ResNet50 has approximately 25 million parameters compared to the 114 million we deal with in CLIP***).\\n- We then trained an ensemble of 5 using the methods in [3].\\n - The model set achieves approximately 78% clean accuracy.\\n - During the inference phase, two out of five were randomly selected from the trained set.\\n - Two out of five have a clean test accuracy of approximately 77%.\\n- We also train a single CLIP model for use with RND and RF.\\n - The single model achieves 76.07% clean accuracy.\\n - We add noise, so the clean accuracy drop values are as in RF (ICLR'24), approximately 1% and comparable to ours.\\n- Table 4 shows the significant reduction in overhead we achieve. 
\\n- Table 5 shows that, compared to the current state-of-the-art, we outperform with a margin of up to 9.57%.\\n\\n_Table 4:_ Trainable Parameters and Storage Consumption of a single CLIP and a set of five CLIP models ***we trained*** to implement LoRA (DISCO).\\n| Models|***Single*** CLIP|The set of 5 CLIP models using LoRA|\\n| -------------------- | -----------|------|\\n| Trainable Parameters |114 M| 1.84 M (1.6% increase, ***0.32% per model***)|\\n| Storage Consumption |433 MB| 439 MB (1.35% increase, ***0.28% per model***)|\\n\\n_Table 5:_ $l_\\\\infty$ objective. Robustness ($\\\\uparrow$) of different defense methods against SQUAREATTACK on the __ImageNet__ task with the __CLIP__ model architecture [3] (For details on the experiment, please see the response to the Reviewer).\\n| Methods | 0.025 | 0.05 | 0.075 | 0.1 |\\n| ------- | ---------- | ---------- | ---------- | --------- |\\n| RND | 83.39% | 61.95% | 43.37% | 24.89% |\\n| RF | 86.45% | 65.1% | 51.14% | 35.83% |\\n| DISCO | __90.76%__ | __72.51%__ | __56.17%__ | __45.4%__ |\\n| DISCO Improvement (vs.
Next best)|4.31%|7.41%|5.03%|9.57%|\\n\\nWhat we are proposing are ***marginal cost increases*** to achieve significant improvements in robustness.\\n\\n- Effectively, we are saying a <1.6% increase in overhead can yield 4.31 to 9.57% better robustness on a large-scale network of practical significance.\\n- Now adding a model incurs **<0.32%** overhead in terms of trainable parameters or storage.\", \"we_really_hope_the_results_provides_the_assurances_sought_by_the_reviewer_that_our_methods_is\": [\"Robust and\", \"Practical\"], \"we_hope_the_reviewer_can_appreciate_the_question_we_posed_in_the_paper\": [\"CAN MODEL RANDOMIZATION OFFER ROBUSTNESS AGAINST QUERY-BASED BLACK-BOX ATTACKS?\", \"We believe the answer is now, yes, irrevocably.\", \"In addition to understanding what such a method can offer, and the theoretical analysis, we have now shown it can be of practical significance.\", \"One paper can't solve all problems, but we have certainly worked hard at making our theoretical work stick.\", \"***Notably, using adversarial training to make a model like CLIP more robust would be an insurmountable cost. Ours is a relatively simple method we can perform even on an A6000 GPU.***\", \"We sincerely thank the Reviewer for all their efforts to help us improve our work and its presentation. Please let us know if there are any specific results or experiments the Reviewer would like to see. We stand ready to provide this for you.\"]}
While I appreciate the authors\\u2019 effort in providing some additional information about the overhead of the proposed approach, I believe that the experiments should better reflect these trade-offs to have a more comprehensive overview of when DISCO can be an appealing defense to use, regarding the model\\u2019s size, the training set, or the number of models to be trained. For this, I think the experiments must be revised and reconsidered, and I do not think that a minor revision of them will suffice. \\n\\nThe computational complexity has been an argument raised by most reviewers and I think that the authors could make a more compelling case for defending their approach. For instance, the directions pointed out in their response to my Q1 can be promising and worth exploring in more depth. It would be interesting to analyze whether LoRA can produce diverse models and reduce the computational complexity for DISCO. On the other hand, as mentioned in my review, I think that the authors can also do a better job of positioning the paper by, for example, analyzing in more depth applications or scenarios where DISCO can be an appealing defense and where the extra memory and training time can be justified for gaining robustness, compared to other defenses. \\n\\nI believe that, despite the limitations with the computational complexity, the paper has potential, but the changes that need to be addressed are not minor. I appreciate some of the extra results reported during the rebuttal; the analysis on ImageNet and the evaluation against the P-BO attack are interesting and show promising results. However, as mentioned before, I think that the experiments should be reconsidered to make a more compelling case that justifies the extra complexity and better reflects the trade-offs between accuracy, robustness, and complexity.
For these reasons I am keeping my score, but I really encourage the authors to keep working on this defense and improve the paper.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"We updated the paper (Added new results recommended by Reviewers, Provided a cost analysis, With new results for practicability Using OpenCLIP)\", \"comment\": [\"We just wanted to let you know that we have updated the paper.\", \"We edited and added content to reflect all of your comments (***New Appendices C, E, F, and G***)\", \"We have a comprehensive cost analysis (***Section 4.5***)\", \"We have addressed the key concern around the cost of implementing on a large network and high-res dataset and application relevance by using ***ImageNet*** on ***OpenCLIP*** (114 million parameters, open-source implementation of ***OpenAI's CLIP***).\", \"We have added a cost analysis for implementing our method with OpenCLIP (**<0.32%** storage and training cost increase per model)\", \"We have added new attack results for ***P-BO*** (a much stronger attack under a surrogate model-based setting).\", \"Compared to ***RF (ICLR'24)***, the current SOTA, our method is a *new* idea and is more robust, even on ImageNet - we set a **new benchmark result for a defense**.\"], \"we_also_want_to_kindly_highlight\": \"*In all our extensive experiments, including the new and strong P-BO attack, our **new method** sets a new benchmark compared to the baseline RF (ICLR'24) and RND (NeurIPS'21 & CVPR'22), with significant margins (up to 9.5% with ImageNet).*\\n\\n*Our new method, as carefully reviewed by all 6 Reviewers, is supported by our **theoretical analysis**.
Further, the method is practical to implement, even with a large-scale network like OpenCLIP and high-resolution datasets like ImageNet.*\\n\\n*Most importantly, in pursuing our insights into the method to confuse attackers, it was never immediately clear from the existing literature whether model randomization can lead to sufficient obfuscation to confuse query-based black-box attacks, or how best to build such a method.* Our work:\\n\\n- Shows model randomization can obfuscate the relations exploited by attackers in black-box settings.\\n- Our learning objectives (Equation 10 and the proposed Equation 11) provide an effective means to implement the new idea (in fact, Equations 10 and 11 outperform the other methods from the literature that we tried).\\n\\n\\nThank you very much for all the constructive discussions.\"}", "{\"comment\": \"Thank you for your comment. I believe including the discussion on CLIP and ImageNet would improve the paper. I still have some concerns, as follows:\\n\\n**Q1.1:** As mentioned in Sec. 4, the method randomly selects a subset of 5 models to make predictions. Thus, I'd expect the inference to be 5x that of the base model. How do you achieve ~1.5x inference time in the case of MNIST and ~2x inference times on CIFAR10 and STL-10?\\n\\n**Q1.2:** In the response to reviewer whE1, you recommend leveraging LoRA to mitigate the number of trainable parameters and storage. However, it is only applicable in the fine-tuning setting. Do you have any suggestions for the scenario where we need to train a model from scratch?\\n\\n**Q4:** If optimizing a subset of models has the same effect as ensembling, can we substitute the first term in (11) with a random subset of models, similar to the inference step, to reduce the resources for backpropagating every model? Since this is an extended discussion, I do not expect to see experimental results but just want to see the reason for not doing that from the beginning.\"}"
] }
DpOQwOzTc2
Combining Denoised Neural Network and Genetic Symbolic Regression for Memory Behavior Modeling via Dynamic Asynchronous Optimization
[ "Jianwen Sun", "Qirong Chen", "Yawei Luo", "Zhihai Hu", "Ruxia Liang", "Xiaoxuan Shen" ]
Memory behavior modeling is a critical topic in cognitive psychology and education. Traditional psychological approaches describe the dynamic properties of memory through memory equations derived from experimental data, but these models often lack accuracy and are frequently debated in terms of their form. In recent years, data-driven modeling methods have improved predictive accuracy but often suffer from poor interpretability, limiting their ability to provide deeper cognitive insights. While knowledge-informed neural network models have achieved significant success in fields such as physics, their application in behavior modeling remains limited. This paper proposes a Self-evolving Psychology-informed Neural Network (SPsyINN), which leverages classical memory equations as knowledge modules to constrain neural network training. To address challenges such as the difficulty in quantifying descriptors and the limited interpretability of classical memory equations, a genetic symbolic regression algorithm is introduced to conduct evolutionary searches for more optimal expressions based on classical memory equations, enabling the mutual progress of the knowledge module and the neural network module. Specifically, the proposed approach combines genetic symbolic regression and neural networks in a parallel training framework, with a dynamic joint optimization loss function ensuring effective knowledge alignment between the two modules. Then, for addressing the training efficiency differences arising from the distinct optimization methods and computational hardware requirements of genetic algorithms and neural networks, an asynchronous interaction mechanism mediated by proxy data is developed to facilitate effective communication between modules and improve optimization efficiency. Finally, a denoising module is integrated into the neural network to enhance robustness against data noise and improve generalization performance. 
Experimental results on four large-scale real-world memory behavior datasets demonstrate that SPsyINN outperforms state-of-the-art methods in predictive accuracy. Ablation studies further show that the proposed approach effectively achieves mutual progress between different modules, improving model predictive accuracy while uncovering more interpretable memory equations, highlighting the potential application value of SPsyINN in psychological research. Our code is released at: \href{https://anonymous.4open.science/r/SPsyINN-3F18}{https://anonymous.4open.science/r/SPsyINN-3F18}
[ "Memory behavior", "asynchronous optimization", "neural networks", "genetic symbolic regression" ]
Reject
https://openreview.net/pdf?id=DpOQwOzTc2
https://openreview.net/forum?id=DpOQwOzTc2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yOUD0N9aBl", "wfLvmtscA1", "uhyWHOt7ZR", "qpcTcelB3K", "phE6jDJshm", "oFdBIQATqk", "nbDcKQEd0b", "hWb8APuBzk", "gzzXLMR88N", "e6yoEfom5V", "YaVzUpzUBg", "YPkljcTE6r", "Wss2yniho5", "W8EKxb8F0u", "VtOtnNsXZJ", "Srl2dx8we2", "RvKFtmGkX4", "RJabqMDRNa", "R1aOzgL37R", "QLPqkY7DsZ", "PNkPNaS4Wh", "OtRGow8uh8", "LlMmSr17aZ", "Kq8vp1A1e5", "KQQtSRS7QX", "IoRfj6vOYF", "InbkEJf4pB", "EB2FLZZ21Y", "7ZVaQdRu3P", "6WOt4lcgo3", "5idPkJKx3V", "5XRYFz8Cpi", "55hLCbYH9f", "4pFXSlmHLo", "3208t3BJxE", "2vGuKm0uhe", "2uRTNv8wiI", "0RfQzRSJEV" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733060071924, 1732938484962, 1732633934053, 1732938470953, 1732532786628, 1732156593390, 1733060106827, 1732155596876, 1732696816087, 1733314134959, 1732156042999, 1732567420493, 1737523918473, 1732939165348, 1733312538957, 1732601105584, 1732155653000, 1730728197165, 1732155637411, 1730247757089, 1729826997716, 1732780186138, 1732789470919, 1733060012901, 1732293417658, 1730700299723, 1733060043061, 1732590483345, 1732532741623, 1732444720073, 1732532804710, 1732599124944, 1734596505427, 1732684187035, 1732646641753, 1732532821707, 1732723992856, 1732156602924 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_CdzH" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_gLam" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_qnLQ" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_tRhP" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_CdzH" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_qnLQ" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_CdzH" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_gLam" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_gLam" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_gLam" ], [ "ICLR.cc/2025/Conference/Submission8572/Area_Chair_shvv" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ "ICLR.cc/2025/Conference/Submission8572/Reviewer_tRhP" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8572/Reviewer_qnLQ" ], [ "ICLR.cc/2025/Conference/Submission8572/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer tRhP,\\n\\nThank you for your meticulous review and valuable feedback during the rebuttal phase. Your insights have been immensely beneficial and have greatly helped us refine our work. We would be deeply grateful if you could consider providing a higher rating for our submission.\\n\\nSincerely,\\n\\nAuthors of Paper 8572\"}", "{\"comment\": \"3. On referring to ANN architecture as TNN\\n\\nIn our paper, we use the term Temporal Neural Network (TNN) to refer collectively to neural network models designed for time-series data modeling (e.g., RNN, LSTM, GRU, Transformer). We acknowledge that this terminology, coined for convenience, may lack precision, and we deeply regret any confusion it may have caused.\\n\\nThere are existing analogous naming conventions, such as Spatio-temporal Neural Network[2] and Temporal Convolutional Neural Network[3]. The use of TNN in our paper aimed to generalize a series of neural network models for time-series data modeling, including RNN, LSTM, Transformer, etc. Specifically, in our method, we employ LSTM to capture dynamic memory states and further process these states with an MLP.\\n\\nOnce again, thank you for pointing out the potential confusion caused by this terminology!\\n\\nReferences\\n\\n[1] Makke N, Chawla S. Interpretable scientific discovery with symbolic regression: a review. Artificial Intelligence Review, 2024, 57(1): 2.\\n\\n[2] Ye J, Sun L, Du B, et al. Co-prediction of multiple transportation demands based on deep spatio-temporal neural network. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery \\\\& data mining. 2019: 305-313.\\n\\n[3] Pelletier C, Webb G I, Petitjean F. Temporal convolutional neural network for the classification of satellite image time series. 
Remote Sensing, 2019, 11(5): 523.\\n\\nWe hope the above responses address your concerns clearly. Should you have further questions, we would be delighted to discuss them further! Thank you again for your support and suggestions regarding our work!\"}", "{\"comment\": \"We sincerely apologize for the errors in Table 5, where some data were mixed up during formatting. Please note that the values in Table 1 are accurate. Regarding the inconsistency in equation performance that you highlighted, this was an oversight on our part due to using the wrong equation. We have corrected this issue in the latest revised version for your review. Additionally, we have updated the code repository to include a section on equation testing and uploaded the processed data directly. Our data were trained after applying a standardization procedure, and we have clarified this point in the revised manuscript\\u2014thank you for bringing this to our attention.\\nAs for your suggestion to provide a csv file for data extraction, we fully agree with its necessity. We will upload the file to the code repository within two days to further enhance the reproducibility of the tests. We also plan to further optimize the code structure of the entire project to make it more organized and user-friendly. \\n\\nThank you again for your valuable feedback.\"}", "{\"comment\": \"Thank you for your meticulous review and invaluable suggestions on our work! Below are our detailed responses to the issues and concerns you raised:\\n\\n1. 
On the understanding of the term \\\"asynchronous''\\n\\nWe sincerely appreciate your insightful comments regarding the usage and definition of \\\"asynchronous.'' Upon reflection, we realize that our description of this concept may have lacked clarity, potentially causing confusion about the relationship between our waiting strategy and the asynchronous concept.\\n\\nTypically, \\\"asynchronous'' refers to tasks or operations that can commence without waiting for preceding tasks to complete, characterized by non-blocking and concurrent behavior. In our study, the training of neural networks and symbolic regression models does indeed operate asynchronously and in parallel. However, under the waiting strategy, these two models temporarily pause after completing their respective training rounds to exchange the latest interaction data, aiming to enhance their collaborative effectiveness. After updating the data, both models resume the next iteration in parallel.\\n\\nFrom a synchronous perspective, this waiting strategy can be seen as the \\\\(n\\\\)-th epoch of the neural network waiting for the \\\\((n-1)\\\\)-th epoch of symbolic regression to finish before interaction occurs. Nevertheless, the \\\\(n\\\\)-th epoch of the neural network and the \\\\(n\\\\)-th epoch of symbolic regression execute in parallel. Hence, on a macro level, the training of the neural network does not completely halt or wait for the symbolic regression task to finish, distinguishing it significantly from traditional two-stage training methods.\\n\\nOur original design aimed to achieve fully asynchronous training (SPsyINN-C), where interaction data updates were implemented via local file reading mechanisms. 
However, in practice, the slower runtime of the symbolic regression algorithm introduced the following issues:\\n\\nFor the neural network, it hindered timely access to more accurate aligned knowledge.\\nFor symbolic regression, it failed to leverage better interaction data.\\n\\nThis imbalance fell short of achieving the desired efficient collaboration. Consequently, we designed three interaction strategies (continuous optimization, interval optimization, and waiting optimization) to explore how a more effective knowledge alignment mechanism could enhance the synergy between the neural network and symbolic regression. We hope this explanation provides a clearer articulation of our understanding of \\\"asynchronous'' and \\\"synchronous'' and the rationale behind designing these strategies.\\n\\n2. On uniformly naming symbolic regression as GSR\\n\\nSymbolic regression (SR) is a significant subfield of machine learning aimed at deriving symbolic mathematical expressions from data. Makke et al.[1] classify symbolic regression into five categories: Linear SR, Nonlinear SR, Expression-tree Search, Physics-inspired, and Mathematics-inspired. Genetic algorithm-based symbolic regression belongs to the Expression-tree Search category and is typically referred to as Genetic Programming Symbolic Regression (GPSR). In our paper, we abbreviate it as Genetic Symbolic Regression (GSR).\\n\\nThe GSR mentioned in our paper specifically refers to the symbolic regression module in our model, which is implemented using PySR. As PySR is a genetic algorithm-based symbolic regression tool, we label it as GSR. We recognize that our terminology might have caused confusion in some parts of the paper. Upon careful review, we did not find instances where all symbolic regression methods were equated to GSR. 
For example, line 330 states: \\\"Our GSR framework is flexible and supports various algorithms (e.g., TPSR (Shojaee et al., 2023), DGSR (Holt et al., 2023)) and libraries (e.g., Eureqa, PySR, and geppy3).''\\nThis was intended to emphasize the framework\\u2019s adaptability to different symbolic regression algorithms and tools rather than equating all symbolic regression to GSR.\\n\\nIn the future, we aim to integrate advanced symbolic regression and neural network methods for multi-domain rule discovery tasks while maintaining greater rigor in terminology. Thank you for highlighting this concern!\"}", "{\"comment\": \"Our revised manuscript has been uploaded to OpenReview for your review. If you have any additional feedback, we will make further refinements accordingly.\"}", "{\"comment\": \"Dear Reviewer CdzH,\\n\\nThank you for your recognition of our work and for your valuable suggestions, particularly regarding the methodological innovation and empirical evaluation quality. We are delighted that you share an interest in the integration of neural networks with symbolic regression and acknowledge its potential in the field of memory modeling. As you noted, this research could inspire advancements not only in psychology, cognitive science, and neuroscience but also in other domains. Based on your suggestions, we have devised an improvement plan to enhance the clarity and readability of the manuscript, ensuring it is accessible to a broader academic audience. We plan to upload the revised version within 2-3 days and hope you will review it again and share your invaluable feedback. \\n\\n### Improvement Plans for Identified Weaknesses: \\n\\n**1. Simplified formulas and illustrations** \\nTo enhance readability, we will simplify lengthy formulas (e.g., the definition of mean squared error), retaining only essential content. Descriptions of Equations (8) and (9) will be condensed to reduce space usage. 
Additionally, the equations in Figure 2 will be moved to the main text to better highlight the core concepts of the alignment algorithm. \\n\\n**2. Highlighting table content** \\nThe current discussion of results from Table 2, particularly those generated by symbolic regression, is insufficient. In the revised version, we will provide a more detailed analysis of these results, focusing on the potential implications of symbolic regression formulas for psychological and neuroscience research. \\n\\n**3. Additional details for model reproducibility** \\nWe will include more detailed information about the model in the revised manuscript, such as neural network hyperparameters, training schedules, and symbolic regression initialization settings. This will ensure other researchers can accurately reproduce our experiments. \\n\\n**4. Background and main text optimization** \\nWe recognize some overlap between the background and introduction sections. In the revised version, we will merge overlapping content, retain key background knowledge, and use the saved space to expand discussions on model details and experimental results. \\n\\n**5. Clarification of terms and abbreviations** \\nWe will clearly define all abbreviations upon their first appearance in the revised manuscript. Additionally, abbreviations not used repeatedly will be removed to avoid distracting readers. \\n\\n**6. Significance testing** \\nFor the experimental results in Table 3, we will add statistical significance testing data (e.g., standard errors or error ranges) to enhance the credibility of the results. \\n\\n**7. Enhanced referencing of Table 2** \\nIn the revised version, we will explicitly reference the equations and related discussions in Table 2, elaborating on their significance in memory modeling.\"}", "{\"comment\": \"Dear Reviewer CdzH,\\n\\nThank you for your meticulous review and valuable feedback during the rebuttal phase. 
Your insights have been immensely beneficial and have greatly helped us refine our work. \\n\\nSincerely,\\n\\nAuthors of Paper 8572\"}", "{\"comment\": \"Dear Reviewer qnLQ,\\n\\nThank you for recognizing the value of our work, particularly the innovative approach and potential of integrating neural networks with symbolic regression for memory modeling. Indeed, our research is greatly inspired by the PINN models from the field of physics, and we also believe our method has the potential for applications in other domains. We will mention this point in the revised manuscript. Additionally, we have included an analysis comparing the discovered memory equations with existing theories, highlighting our method\\u2019s potential in uncovering new psychological theories. \\n\\nWe sincerely apologize for the shortcomings in the presentation of the manuscript, as noted by you and other reviewers. We have carefully considered your suggestions and have revised the manuscript to improve its clarity and presentation. We plan to upload the revised manuscript within 2-3 days and hope you can review it again and provide us with further valuable feedback. We truly appreciate your insightful comments, which have been immensely helpful in refining our work. \\n\\n### Responses and Improvements to the Weaknesses:\\n\\n**1. Lengthy and redundant mathematical notations:** \\nWe acknowledge that the current version\\u2019s notations are overly complex, which may hinder the reader\\u2019s understanding. In the revised manuscript, we have simplified the notations and removed unnecessary redundancies. Furthermore, we have restructured the content to present mathematical symbols and formulas more clearly and concisely, reducing the cognitive load for readers. \\n\\n**2. Lack of clarity in baseline method descriptions:** \\nThank you for pointing out the inadequacies in the explanation of baseline methods. 
In the revised manuscript, we have enhanced the descriptions of baseline methods, including background details on the DKT-F and FIFKT models, and provided a thorough explanation of each variant in the ablation studies. We have also clarified the effects of combining different components, enabling readers to better understand the significance of various model configurations. \\n\\n**3. Enhancements to background methods and problem description:** \\nSince the paper involves multiple approaches, including symbolic regression, deep learning, memory modeling, and PINNs, we have reorganized the manuscript\\u2019s logical structure and provided more detailed explanations of these background methods in the revised version. These changes will help readers gain a more comprehensive understanding of the research context and core contributions. \\n\\n### Responses to Specific Questions:\\n\\n**1. Combination of asynchronous training and dynamic optimization:** \\nOur proposed method integrates asynchronous training and dynamic optimization. Asynchronous training refers to the joint training of the neural network and the symbolic regression model in an asynchronous manner, while dynamic optimization involves dynamically adjusting the neural network\\u2019s loss weights. For instance, when asynchronous training is enabled but dynamic optimization is disabled, the loss weights remain fixed hyperparameters and do not change dynamically with model performance. It should be clarified that dynamic optimization requires asynchronous training to be enabled. We will further clarify these experimental settings in the revised manuscript. \\n\\n**2. Performance of symbolic regression when trained alone:** \\nThe primary objective of our study is to design a neural network model, with symbolic regression introduced to aid the neural network in better modeling learners\\u2019 memory states. 
In the original version, we did not provide the standalone performance results of the symbolic regression model. In the revised manuscript, we have added these results and analyses to provide a more comprehensive comparison. \\n\\n**3. Selection of operator sets in symbolic regression:** \\nThank you for raising this question. Due to an oversight, we did not explicitly specify the operator set used in symbolic regression. In this study, we adopted operators from traditional memory equations, and the $log$ function mentioned in the manuscript refers to the natural logarithm ($ln$). In the revised manuscript, we have corrected and standardized the symbolic descriptions to ensure consistent presentation and avoid confusion. \\n\\nThank you once again for your valuable feedback. We will upload the revised manuscript to OpenReview within three days, and we look forward to your further review and suggestions.\"}", "{\"comment\": \"Dear Reviewer gLam,\\n\\nWe have uploaded the complete dataset in CSV format (dataset(csv)) to the code repository. These files include the corresponding $\\\\delta_{1:6}$ and $Recall$ values, with all data preprocessed for direct use in symbolic regression to validate reproducibility. Additionally, we have provided a simple implementation of function performance testing (Test for functions). The data files involved in this process are also preprocessed to facilitate easier execution.\", \"we_would_like_to_clarify_an_important_detail\": \"in the MaiMemo dataset, $\\\\delta_3$ is not available, and it was not used during model training. The features in the MaiMemo dataset consist only of $\\\\delta_1, \\\\delta_2, \\\\delta_4, \\\\delta_5, \\\\delta_6$. However, for the Duolingo, En2De, and En2Es datasets, all features $\\\\delta_{1:6}$ are included. This distinction has been explicitly stated in the submitted documentation.\\n\\nThank you for your thorough review and for helping us improve the clarity of our work. 
\\n\\nSincerely, \\nAuthors of Paper 8572\"}", "{\"comment\": \"As a supplementary note, when calculating directly using the CSV data, the performance metrics must be computed in the format specified in the code documentation. This is because the label data contains zeros, which cannot be used as a denominator. We still recommend using the simple performance testing document we provided, as it offers a more convenient way for you to conduct the tests.\\nWe have set a mask in the calculation of metrics because there are 0 labels in the label data, and these would be treated as denominators in the metric calculations, which can lead to errors. This approach is consistent with the calculation process in FIFKT and is a standard practice in modeling. All standard calculation methods in our work are consistent.\\nWe have provided examples of how to calculate the corresponding metrics in the CSV.\"}", "{\"comment\": \"Dear Reviewer tRhP,\\n\\nThank you for your recognition of our self-evolving psychological embedding neural network model and for taking the time to review our submission. We apologize for the shortcomings in the paper\\u2019s presentation. We have carefully considered your suggestions and made comprehensive revisions to improve its clarity and structure. We plan to upload the revised version within 2-3 days and hope you can review it again to provide further valuable feedback. Your comments have been invaluable in helping us refine our work, and we are deeply grateful. \\n\\n### Explanations and Responses to Weaknesses and Questions: \\n\\n**1. Incomplete description of the real-world task** \\nWe acknowledge that the original paper lacks clarity in describing the real-world task. In the revised version, we have clarified the problem setup with more detailed explanations and visual aids. Specifically, our study focuses on modeling learners\\u2019 memory states in a word memorization scenario. 
During the memory process, learners complete quiz questions through memory software. If a learner answers correctly, the word is marked as \u201cmastered\u201d (labeled as \u201ccorrect\u201d); if not, it is marked as \u201cnot mastered\u201d (labeled as \u201cincorrect\u201d). Quiz formats include multiple-choice, fill-in-the-blank, listening, and matching questions. We have provided a more intuitive description of this task in the revised manuscript. \n\n**2. What is the model\u2019s final output? Is it derived from the generated equations or the neural network?** \nThe final output of the model is the prediction results from the neural network. All comparative and ablation experiments are evaluated based on the neural network\u2019s output. The equations generated by symbolic regression are considered by-products of the model. In the equation comparison experiments, these equations are extracted and evaluated separately for their predictive performance. We have clarified the distinction between the two types of outputs in the revised manuscript to avoid confusion. \n\n**3. Confusing notation** \nWe understand that the complexity of the notation may hinder readers' comprehension. To address this, we have thoroughly optimized the notation system in the revised manuscript. For example, we use $m$ to represent a single time step and $1:m$ to represent multiple time steps, and we have adjusted the mapping space of words to make the notation more intuitive and concise. \n\n**4. Undefined abbreviations** \nThank you for pointing out the issue with undefined abbreviations. In the revised manuscript, we have resolved this by providing clear definitions for all abbreviations. \n\n**5. Source of $\\beta_t$ and similarity to diffusion models** \nThe noise scheduling parameter $\\beta_t$ indeed shares similarities with noise generation equations in diffusion models. 
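For readers unfamiliar with the connection, the standard DDPM forward process and perturbation kernel (a textbook result, restated here for convenience) are:

```latex
% Forward noising step in DDPM
q(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t;\, \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\right)
% Composing the steps with \alpha_t = 1-\beta_t and \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s
% yields the closed-form perturbation kernel
q(x_t \mid x_0) = \mathcal{N}\left(x_t;\, \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\right)
```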
In the revised version, we have explicitly acknowledged this connection and added relevant citations. Additionally, we have included a detailed proof in the appendix to demonstrate the consistency between our noise generation method and the DDPM perturbation kernel. \\n\\n**6. Lack of clear descriptions of alternative methods in the ablation study** \\nOur proposed method combines asynchronous training and dynamic optimization. Asynchronous training refers to the joint training of the neural network and symbolic regression model, while dynamic optimization adjusts the loss weights of the neural network dynamically. For instance, if asynchronous training is enabled without dynamic optimization, the model\\u2019s loss weights remain as fixed hyperparameters and do not change dynamically based on performance. It is important to note that dynamic optimization requires asynchronous training to be activated. \\n\\nIn the ablation study, the configurations are as follows: \\n- **Baseline model (DKT-F):** Does not include the denoising module or the symbolic regression module. \\n- **Asy (asynchronous training):** Combines the neural network and symbolic regression with a waiting strategy during training, but the loss weights are fixed hyperparameters. \\n- **DyOp (dynamic optimization):** Introduces dynamic loss weights that adjust based on model performance. If DyOp is not enabled, the loss weights remain fixed hyperparameters. \\n\\nIn the revised manuscript, we have provided detailed explanations and refined the analysis of the experimental results. \\n\\nThe revised version of our manuscript will be uploaded to OpenReview within three days. We sincerely hope you can review it then. If you have any additional suggestions, we would be delighted to incorporate them to further improve our work!\"}", "{\"comment\": \"Thank you for your effort.\\n\\nI've read the updated manuscript and the other reviews. 
My concerns have been addressed; most importantly, the clarity of the text has been much improved (clarity was also Reviewer qnLQ's concern). To the best of my understanding, the additional experiments provided by the Authors also address some of Reviewer gLam's concerns regarding the benchmarks (although I would like to further discuss it with them).\n\nIn light of these improvements and the Authors' responsiveness, I raise my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer gLam,\n\nThank you so much for taking the time to review our manuscript and providing such valuable feedback. Your comments have been immensely helpful in improving and refining our work. \n\nWe have made every effort to address the issues and suggestions you raised, and we hope the revised manuscript meets your expectations. On this basis, we would like to kindly inquire whether, if you find our improvements satisfactory, you might consider raising the overall score for our paper. \n\nWe greatly respect your professional judgment and would be happy to engage in further discussion if you have any additional questions or suggestions. \n\nOnce again, thank you for your support and invaluable input throughout this process! \n\nSincerely,\n\nAuthors of Paper 8572\"}", "{\"comment\": \"Dear Reviewer gLam,\n\n1. **Regarding the inconsistency between equation performance and annotations** \nWe believe there may have been some misunderstanding. We provided a simple implementation test for the performance of all equations (located in the code documentation under *Test for functions/Function_test.py*). We have carefully reviewed the composition of the CSV data and the performance of the equations, and we did not find any anomalies.\n\nWe recommend reviewing the settings for equations in *Test for functions/Function_test.py*. We have carefully validated its reproducibility. 
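To make the check easy to follow, here is a minimal, self-contained sketch of the same procedure in plain Python (not the actual *Function_test.py*, which should be consulted directly). It evaluates one of the reported equations and computes masked MAE/MAPE, where zero labels are excluded so they never appear as a denominator; the feature values and labels below are hypothetical, for illustration only:

```python
import math

def spsyinn_i_duolingo(delta1, delta6):
    # SPsyINN-I equation reported for the Duolingo dataset:
    # 0.92171514151 ** (0.2101281541 * delta1 + exp(delta6))
    return 0.92171514151 ** (0.2101281541 * delta1 + math.exp(delta6))

def masked_mae_mape(preds, labels):
    # Zero labels are masked out so they never act as a denominator
    # in the MAPE term (consistent with the FIFKT-style metric code).
    pairs = [(p, y) for p, y in zip(preds, labels) if y != 0]
    n = len(pairs)
    mae = sum(abs(p - y) for p, y in pairs) / n
    mape = 100.0 * sum(abs(p - y) / abs(y) for p, y in pairs) / n
    return mae, mape

# Hypothetical (delta1, delta6) feature pairs and recall labels.
deltas = [(0.5, -1.0), (2.0, -0.5), (1.0, 0.0)]
labels = [1.0, 0.8, 1.0]
preds = [spsyinn_i_duolingo(d1, d6) for d1, d6 in deltas]
mae, mape = masked_mae_mape(preds, labels)
```

The same pattern applies to the other equations: plug the CSV's delta columns into the expression, then compute the masked metrics.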
Below are examples of the performance for specific equations: \\n\\n**MaiMemo Dataset** \\n- **SPsyINN-C Function**: `0.30229160710443679**((delta1*delta5)**(delta4 + 0.14383100468725032))` \\n - Test: `MAE: tensor(0.2213)` | `MAPE: tensor(22.1332)` \\n- **SPsyINN-W Function**: `0.49258729183071737**((delta1 + 0.008248828395980482)**(delta4**0.6295170754361378))` \\n - Test: `MAE: tensor(0.2157)` | `MAPE: tensor(21.5667)` \\n- **SPsyINN-I Function**: `0.4733634187963817**((delta1**1.109281583119398)**(delta4**0.7567579728325409))` \\n - Test: `MAE: tensor(0.2251)` | `MAPE: tensor(22.5074)` \\n\\n**Duolingo Dataset** \\n- **SPsyINN-C Function**: `-(delta6 + delta2)*(delta1 - delta2) + 0.9135621262263904` \\n - Test: `MAE: tensor(0.1052)` | `MAPE: tensor(12.6823)` \\n- **SPsyINN-W Function**: `((delta5 + 0.0190478124994636)**(delta1-delta2))*0.9257066642765628**e**delta6` \\n - Test: `MAE: tensor(0.1041)` | `MAPE: tensor(12.5264)` \\n- **SPsyINN-I Function**: `0.92171514151 ** (0.2101281541 * delta1+ exp(delta6))` \\n - Test: `MAE: tensor(0.1017)` | `MAPE: tensor(12.3638)` \\n\\n2. **Regarding the inconsistency between the initial, revised, and final versions of equations** \\nOur GSR module uses the PySR method, which provides a candidate set of equations during the symbolic regression process. We retained data files from each training session and selected the final equations from these files. When updating the equations, we rigorously confirmed their reproducibility. We have supplemented the code documentation with information on the equation sets for SPsyINN-C, SPsyINN-W, and SPsyINN-I for your review. \\n\\n3. **Regarding the confusion about regression labels being 1** \\nIn the Duolingo dataset, this issue may arise from the dataset itself. Duolingo\\u2019s testing sessions allow learners only one incorrect response per word. Specifically, for a given word, testing stops only if the number of incorrect responses is less than one. 
This is explicitly reflected in the original dataset, and we observed this behavior. During model performance testing, we avoided equations and models that predict all values as 1. We have added examples of the original data to the code documentation to clarify this issue. \\n\\n4. **Regarding the validity of the R\\u00b2 score** \\nWe have also considered this issue. While the identified equations perform well on MAE and MAPE metrics, their performance on the R\\u00b2 metric is poor. We believe this is likely due to the nature of the dataset. As you mentioned, most label values are 1, making it challenging for the equations to achieve a good R\\u00b2 score. \\n\\nWe hope the above responses address your concerns. Once again, thank you for pointing out these issues. \\n\\nSincerely, \\nAuthors of Paper 8572\"}", "{\"title\": \"Official Review of Submission8572 by Reviewer gLam\", \"comment\": \"To clarify, the csv should have all the relevant transformations so that the reviewers can directly plug the values into the equations present in Table 5.\\n\\nCurrently, there is no documentation of the code, so I have done my best in reading through all the code and implementing the various data transformation, but I obtained the results as above. The provision of the csv is to the authors' benefit in helping the reviewers verify the reproducibility of the results. Also, if other reviewers can vouch for the reproducibility, I would appreciate if they could let me know if it is simply an error I made in processing the data/running the code.\"}", "{\"comment\": \"### Responses to Specific Questions:\\n\\n**1. How does SPsyINN perform on existing equation discovery benchmark datasets?** \\nOur primary research focus is on designing a model capable of effectively modeling learners\\u2019 memory states, rather than creating a general-purpose symbolic regression algorithm. 
Consequently, SPsyINN has not been evaluated on general equation discovery benchmarks (e.g., SRBench or SRSD). Nevertheless, we believe our proposed method holds potential for applications in other scenarios, and future research will explore its performance in broader domains. \\n\\n**2. How do advanced SR algorithms perform?** \\nIn response to your suggestion, we have included the performance of advanced symbolic regression methods such as TPSR, DSR, and SBP-GP in the memory modeling scenario. These results have been updated in Table 2 of the revised manuscript, providing a comprehensive comparison with SPsyINN to demonstrate the applicability of symbolic regression algorithms in memory modeling tasks. \\n\\n**3. Addition of error bars in experiments:** \\nWe have addressed this issue by including error bars (e.g., standard deviations or interquartile ranges) in the revised manuscript to highlight significant differences in the results. \\n\\n**4. Citations related to Table 2:** \\nFor the results in Table 2, we have added detailed discussions and included references to the theoretical equations involved in the revised manuscript, enabling readers to better understand the significance of the results. \\n\\nThe revised version will be uploaded to OpenReview within three days. We earnestly request your review at that time and thank you once again for your thorough review and invaluable suggestions!\"}", "{\"summary\": \"This article presents a hybrid symbolic neural network learning approach to model user interactions in memory-based learning tasks, specifically language learning. The approach uses interpretable models based on memory theory and integrates their optimization with a neural network training process. 
A comparison with other methods for knowledge modeling demonstrates that this hybrid approach has performance benefits, beyond the creation of interpretable results.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"The proposed methodology and the application domain are both interesting. The idea of training a neural network and a symbolic regression model simultaneously, and aligning them in their respective optimization processes, is worthwhile and potentially novel. It could be used in a number of applications where physical laws are known or, as is the case here, there are established equations that map the relationship that should be approximated by machine learning. The interpretability of the end result, combined with the performance of neural network training, motivates this approach. It would be interesting to see the method applied to the discovery of known physical laws, like in the following work:\n\nCranmer, Miles, et al. \"Discovering symbolic models from deep learning with inductive biases.\" Advances in neural information processing systems 33 (2020): 17429-17442.\n\nThe application of memory modeling is also interesting; I am not aware of the application of PINNs to such psychological modeling problems. Most of the knowledge-informed literature is on physics-informed, including in symbolic regression, but the methods can be applied to other domains where existing relationships have been expressed as equations, even if they are not physical laws. There is a clear application to economics here, and expanding the perspectives to include similar applications could be helpful.\", \"weaknesses\": \"The main weakness of this paper is in its presentation. This is a mix of methods and an application readers might not be familiar with, so everything, from deep learning to symbolic regression to memory theory, needs to be made clear to a reader. 
Even a reader who is an expert in some of those things might not know the others.\n\nFirst, the mathematical notation is highly verbose, with subscripts for almost all variables, even when certain information is redundant or clear. For example, all tasks map user data U to word sets W. The inclusion of this mapping for every variable is unnecessary and makes the loss equations, like Equations 8 and 9, very hard to parse. Some equations may not even be necessary, like the definition of MSE. Simplifying the notation and reducing redundancies in the math would greatly increase clarity.\n\nMore explanation of the baseline methods would also help. DKT-F and FIFKT aren't fully explained, nor is the way that symbolic regression is integrated into their methods. The one-sentence explanations in appendix B are neither sufficient nor sufficiently referenced in the main text. For example, the description \"DKT-F: An improved version of DKT that incorporates students\u2019 forgetting behaviors. (Piech et al., 2015)\" assumes that the reader understands that DKT refers to \"Deep Knowledge Tracing\" (it was not defined), and that the reader is familiar with Deep Knowledge Tracing's standard mechanisms, which are not described. In the Background, a short explanation of at least DKT could easily replace the sentence \"The superiority of deep learning techniques in knowledge tracing and cognitive modeling has been well-established (Abdelrahman et al., 2023),\" which doesn't give much information to the reader and is rather subjective.\n\nGreater clarity in the text on the background methods and the problem domain would really help. Acronyms are rarely defined before use, and some acronyms are defined never to be used again (e.g., NODS). 
So, I'm left with a number of questions despite a thorough reading of the paper, which is a shame because it is a very interesting method and application.\", \"questions\": \"In the ablation, what does it mean to have asynchronous training but not \\\"dynamic optimization\\\"? And is the opposite of that (having dynamic optimization but not asynchronous training) the same as the Waiting Optimization Strategy SPsyINN-W?\\n\\nHow well does symbolic regression alone do, and is this equivalent to the first line of Table 3? If the neural network is trained without symbolic regression, how does it do, and is that equivalent to the fourth line of Table 3? Or is the fourth line equivalent to training a neural network without the noise addition? If either of those are the case, stating them in the text would be really useful.\\n\\nIs the log in the function set a natural log? In table 8, the result of ACT-R is presented as a natural log, but the function set just says \\\"log\\\", which could be assumed to be log10. If it is log10, why not include ln if it is used in ACT-R's results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer gLam,\\n\\nWe sincerely appreciate your detailed review of our submission and your recognition of the method integrating neural networks with equation optimization. Your valuable feedback has provided crucial guidance for improving our manuscript. We apologize for the shortcomings in the presentation of the paper. We have carefully considered your suggestions and revised the manuscript to enhance its clarity and presentation. We plan to upload the revised version within 2-3 days and hope you can review it again and provide further insights. Your comments have been instrumental in improving our work, and we are deeply grateful. \\n\\n### Improvements and Explanations for Weaknesses:\\n\\n**1. 
Clarification of the paper\u2019s objectives:** \nOur primary goal is to construct a knowledge-driven neural network model suitable for memory behavior modeling. To achieve this, we incorporate classical memory theory equations to constrain the neural network\u2019s modeling process. However, existing memory theories are often debated and lack the explanatory precision seen in physical equations. Therefore, we initialize the neural network with classical memory theory equations and refine these equations using symbolic regression. This approach enables the neural network to absorb knowledge from classical memory theories while achieving collaborative optimization between the memory equations and the neural network model. \n\nIn this process, the memory equations derived from symbolic regression serve as proxy models to explain the neural network and offer potential new perspectives for psychological theories. Our primary focus is not on proposing a novel symbolic regression method but on leveraging existing symbolic regression algorithms to address key challenges in memory behavior modeling.\n\nWe chose genetic symbolic regression algorithms based on three considerations: \n1. Genetic algorithms are a classic and well-researched family of symbolic regression methods, offering numerous resources and references. \n2. They allow us to set initial populations, enabling classical memory theory equations to serve as initialization equations. \n3. Genetic algorithms can strictly control the depth of symbolic trees, ensuring that the derived equations remain interpretable. \n\nOur framework is versatile and compatible with various genetic symbolic regression algorithms. PySR was selected as the codebase for our experiments, and we will clarify this in the revised manuscript. Additionally, we have added experiments to demonstrate the generality of our approach. 
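As a rough illustration of how these considerations translate into configuration, a PySR setup of the kind we describe might look as follows (keyword names follow the PySR API; the specific values are illustrative assumptions, not our tuned hyperparameters):

```python
# Illustrative PySR-style settings; the values here are assumptions
# for exposition, not the hyperparameters used in the paper.
pysr_config = {
    # Operators drawn from traditional memory equations; "log" is the
    # natural logarithm (ln), as in ACT-R-style equations.
    "binary_operators": ["+", "-", "*", "^"],
    "unary_operators": ["exp", "log"],
    # Strictly bounding the symbolic tree keeps equations interpretable.
    "maxdepth": 6,
    "maxsize": 20,
}

# Hypothetical usage, with PySR installed:
# from pysr import PySRRegressor
# model = PySRRegressor(**pysr_config)
# model.fit(X_deltas, recall)  # delta features -> recall labels
```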
Theoretically, our method also has the potential to integrate more advanced symbolic regression algorithms, which we plan to explore in future research to further enhance its applicability. \n\n**2. Lack of comparisons with state-of-the-art SR algorithms:** \nTo address your suggestion, we have expanded the experimental section in the revised manuscript by including performance evaluations of advanced symbolic regression methods such as TPSR, DSR, and SBP-GP. These results have been incorporated into Table 2 to enrich the experimental comparisons. \n\n**3. Missing error bars in empirical results to assess significance:** \nRegarding the significance of empirical results, we have added error bars for all experimental results in the revised version and included statistical significance tests to enhance the credibility and completeness of the experimental findings. \n\n**4. Lack of details about the MLP architecture and adjustments:** \nWe have provided detailed information about the architecture of SPsyINN in the revised manuscript, including the MLP architecture, hyperparameter settings, and training schedules, to ensure clarity and precision in describing the model.\"}
I believe the idea of jointly optimizing a neural network and an analytical expression is interesting and seems to show promise. Therefore, I increase my score, though still with low confidence.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The problem is interesting, as far as I can tell (I am not an expert in this area).\n\nThe method seems novel and the proposed interplay between deep learning and symbolic regression is interesting.\n\nThe model seems successful, judging from reported results.\", \"weaknesses\": \"I am mostly concerned about clarity, as the paper often uses confusing notation and undefined acronyms. While the overall method is reasonably clear, it is not easy to understand the details or reported comparisons in performance. See Questions below.\", \"questions\": \"The actual task is not fully described. Fig 1 mentions answers being \"correct\" or \"incorrect\", but about what? What was the actual question being asked for each word?\n\nWhat is the final overall output of the model (i.e. the one used to generate results in the tables)? Is it the output computed from the generated equation, or the output of the neural network?\n\nThe notation is confusing and seems to vary. I'd recommend always using 1:m to denote multiple timesteps and m to denote a single time step (in the last sentence of Problem statement, apparently just 'm' is used to denote a whole sequence?)\n\nThere are many undefined acronyms. E.g., in l. 269, what does KT stand for? Where does this \"KT-based framework\" come from?\n\nL. 276, where do the Beta_t come from? The noise schedule equations look very much like the ones used in diffusion models, which should warrant some kind of citation!\n\nIn the results, particularly the ablations, the various alternative methods are not described. As a result it is not at all easy to understand what each alternative version represents. 
In particular: do you report results based on training a neural network alone (with or without denoising), and a symbolic regressor alone?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the Authors combine deep neural networks with genetic symbolic regression to model human memory in an efficient and interpretable manner. They proposed multiple ways to combine these two models, aiming at both compute efficiency and accuracy. The proposed model was tested on a panel of benchmarks where it showed an improved performance compared to a panel of baseline models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The originality of the model lies in the novel combination of existing techniques (deep networks, denoising, and symbolic regression) and its application to the new domain (memory). I would like to especially highlight the new alignment algorithms, proposed here to account for the CPU \\u2013 GPU interaction in the model, and domain priors that, first, kept the symbolic regression equations within the realm of memory model equations, and, second, accounted for the noise specific to memory-related data.\\n\\nThe significance of this model is in providing the equations (e.g. in Table 2) that are concise and at the same time have a high explanatory power for the memory-related data. Further analysis of such equations looks plausible and may be beneficial for the fields of psychology, cognitive science, and neuroscience.\\n\\nThe quality of this work is in the thorough empirical evaluation of the proposed model. 
While the model itself is rooted in literature and thus alone holds the potential of being useful for the task at hand, the evaluation on a series of datasets and the ablation study show that the model indeed improves performance and that all model components are necessary for such an improvement.\", \"weaknesses\": \"I would suggest working on the text a little bit more to enhance its clarity to make it more accessible to the broad ICLR community. While the work introduces a relatively straightforward idea \u2013 (i) merge an efficient deep learning model with an interpretable symbolic regression model and a denoising module; (ii) use auxiliary losses that would match the outputs of the three models; (iii) compute auxiliary losses on an intermittently-generated proxy dataset to smoothly synchronize CPU and GPU computations \u2013 the text itself is often unnecessarily complicated. For example, I\u2019d either simplify Figure 2, remove the equations from it, or move it down the text to serve as a summary. I\u2019d then simplify the equations and remove some of them because, while they introduce straightforward concepts \u2013 like the mean square error loss \u2013 they end up being pretty lengthy. This is best exemplified by Equations 8 and 9 which say: \u201ccompute the loss on a batch\u201d but somehow occupy nearly half a page. I would also draw attention to some of the results that are present in the paper but may be overlooked, e.g. the equations in Table 2. These results also could use further discussion. Besides, even though the code is provided, I couldn\u2019t find the Methods section that would describe the model in sufficient detail to reproduce it (e.g. the parameters of the neural network and the training schedule).\n\nMinor: The background section mostly repeats the introduction. 
I\\u2019d suggest shortening one of these sections and either expanding the other with the details of the models or using the vacated space for an additional discussion of the results.\\n\\n\\u20181 + 1 greater than 2\\u2019 effect -> synergy effect\", \"kt_based_framework\": \"KT is not defined in the main text\\n\\nMAE is not defined in the main text\", \"table_1\": \"second best models: clarify that those do not include SPsyINN models\\n\\nTable 2 is not referenced in the results.\", \"table_3\": \"provide the statistical significance test data (ideally with the false discovery rate correction)\\n\\nOverall, I found this work interesting and relevant, and I am happy to recommend it for acceptance at ICLR but I believe that the text needs to be further edited to enhance the accessibility of this work.\", \"questions\": \"The Authors state that the equations, that the model converges to, vary depending on the initial conditions and, it seems, on the model\\u2019s waiting strategies. Thus, which of the equations should neuroscientists / cognitive scientists use in their research as a result of this project? Are these equations similar or locally similar? Should they be distilled or approximated? How sensitive are these equations to the numerical coefficients? As one of the work\\u2019s stated goals is the interpretability of the results, it is important to know what results to use and to what degree to trust them. It would be great to hear the Authors\\u2019 thoughts on this topic. Separately, it would be highly interesting to see an analysis of the final equation once it\\u2019s established. How similar or dissimilar would it be to/from the existing models? What are the additional terms and what do we learn from them? Does it help us to ground the memory dynamics in neural circuits? 
An analysis like that has the potential to further increase the impact of this work.\n\n_____________________________\n\nPost-rebuttal: concerns mostly addressed (especially the ones regarding the clarity of the writing); raising my score to 8.\n\n_____________________________\n\nPost-discussion. We had a super lengthy and detailed discussion among the Reviewers where they encouraged me to check the reproducibility of the result. Sadly, they turned out to be right: (1) plugging the provided equations into the provided data reproduces the other Reviewer's numbers but not those in the paper; (2) MAE's denominator is not affected by zero labels; (3) zero labels cannot be excluded from a binary dataset.\n\nAs I mentioned before, I really like the paper but the other Reviewers are correct in pointing out that it's a serious issue. I hope that the Authors manage to revise their results towards consistency, stability, and reproducibility, hopefully also grounding them in cogsci-derived priors. Meanwhile, I sadly have to adjust my score to reflect the apparent reproducibility issue.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
Here, \\\"asynchronous\\\" means the tasks performed by the models are not strictly sequential, with differing optimization speeds and steps. \\n\\nSpecifically, the DNN predicts the learner's memory states, while the GSR focuses on identifying optimal equations to fit these states. These tasks are independent and executed concurrently, with integration achieved through Dynamic Asynchronous Optimization (DAO). To enhance collaboration between the models, we introduced *Alignment Loss* and an interaction strategy to ensure effective communication. While the interaction strategy includes a waiting mechanism that resembles synchronous optimization to some extent, it is distinct from strict sequential synchronization. \\n\\nIn Section 4.2, regarding the ablation experiments, SPsyINN (including DN, KA, and DW) and the variant with only KA and DW both adopt a waiting strategy by default. Experimental results (Table 1) demonstrate that adequate interaction significantly enhances performance, supporting this choice in the ablation experiments. \\n\\n**2. Priority of Figures** \\nThank you for pointing out the prioritization issue with our figures. We have accepted your suggestion by prioritizing Figure 2 and moving Figure 3 to Appendix C. Additionally, to enhance readability, we have replaced all figures in the manuscript with vector graphics and adjusted their dimensions to ensure clarity on standard screens. \\n\\n**3. Use of Common Terminology** \\n\\n*Symbolic Regression (SR):*\\nThe symbolic regression algorithm in our work is based on Genetic Symbolic Regression, specifically PySR. PySR allows for flexible initialization of equations. Our work narrows the search space to enable evolutionary improvement starting from high-performing initial equations. 
\\n\\n*Denoised Neural Network (DNN):*\\nWe apologize for the potential misunderstanding caused by the term \\\"Denoised Neural Network.\\\" In our work, this term refers to a neural network with added noise, not actual denoising functionality. The noise addition aims to enhance robustness when handling noisy data, which is also the motivation for introducing the noise loss $L_{\\\\tilde D}$. This term ensures consistency in the model's performance across noisy and noise-free data. \\n\\nWe recognize that diffusion models like DDPM are commonly used in image denoising. Recent studies have extended diffusion effects to time-series modeling [1][2], highlighting its relevance in this context. \\n\\n*Temporal Neural Networks (TNN):*\\nTNN encompasses various time-series modeling methods, including RNNs (e.g., GRU, LSTM), CNNs, attention mechanisms, and Transformers. These methods are widely applicable to different modeling tasks, and the choice depends on the framework design. \\n\\nIn our task, the goal is to extract memory states from learners' memory sequences and make predictions, and TNN is naturally well-suited for this purpose. Specifically, we designed a TNN that combines LSTM with MLP. During training, the MLP input includes both the LSTM's output and its hidden states, enabling efficient co-training from stateful RNNs to ANNs. \\n\\nWe hope our responses address your concerns and provide a clearer understanding of our work. If you have any further questions or suggestions, we would be happy to incorporate them into further revisions. Thank you again for your valuable time and insightful comments! \\n\\n**References:** \\n[1] Yang L, Zhang Z, Song Y, et al. Diffusion models: A comprehensive survey of methods and applications[J]. ACM Computing Surveys, 2023, 56(4): 1-39. \\n[2] Kuo M, Sarker S, Qian L, et al. Enhancing Deep Knowledge Tracing via Diffusion Models for Personalized Adaptive Learning[J]. arXiv preprint arXiv:2405.05134, 2024. 
\\n\\nSincerely, \\nAuthors of Paper 8572\"}", "{\"comment\": \"Dear authors, please provide your understanding of the word \\\"asynchronous\\\", as well as any published article which refers to symbolic regression as GSR or refers to all possible ANN architectures as TNN. I believe there are misunderstandings which persist.\"}", "{\"comment\": \"Dear Reviewer qnLQ,\\n\\nThank you for your meticulous review and valuable feedback during the rebuttal phase. Your insights have been immensely beneficial and have greatly helped us refine our work. We would be deeply grateful if you could consider providing a higher rating for our submission.\\n\\nSincerely,\\n\\nAuthors of Paper 8572\"}", "{\"comment\": \"Thank you for your detailed response. I'll stay tuned for the updated version.\"}", "{\"summary\": \"This paper proposes a new algorithm that jointly optimizes a neural network and an equation (that acts as an interpretable surrogate). The paper also explores update strategies of varying update rules. The approach proposed is then evaluated on real-world memory behavior datasets. The prediction performance of the neural network and discovered equation are reported separately.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The idea of jointly optimizing an equation with a neural network is novel among symbolic regression (SR) algorithms to the best of my knowledge.\\n1. Figure 2 gives a clear overview of the algorithm.\", \"weaknesses\": \"1. It is unclear whether the main aim of the paper is to discover memory equations or to propose a new SR-based algorithm. 
If the objective is to introduce both at the same time, then the paper is in an awkward position because it is not mentioned or made obvious in the paper why this specific task \\\"to discover memory equations\\\" requires the proposed method (e.g., the paper should explain why the joint optimization with a neural network method is particularly effective for discovering memory equations and not applicable to other domains such as Physics). If the method proposed is indeed not specifically tailored to \\\"discover memory equations\\\", then evaluation on other datasets would provide a stronger case for this paper.\\n\\n1. Missing comparisons: state-of-the-art SR algorithms that do not use joint optimization with a neural network should be included as comparisons in the paper (expand Table 2).\\n\\n1. Existing SR benchmark datasets such as SRBench and SRSD should be used to evaluate the proposed algorithm's equation discovery ability to improve the quality of experiments.\\n\\n1. Missing error bars for empirical results; it is impossible to tell whether the differences in performance are significant (apart from Table 1, which performs a t-test).\\n\\n1. Missing details on the selection of MLP architecture and tuning.\\n\\n1. In line 308, PySR was selected among SR algorithms but this choice was not justified. Several recent state-of-the-art SR algorithms (e.g., DSR [1], TPSR [2]) should be considered as well. Otherwise, the paper should justify why these methods were not considered.\\n\\n[1] Petersen, B. K., Larma, M. L., Mundhenk, T. N., Santiago, C. P., Kim, S. K., & Kim, J. T. (2020, October). Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In International Conference on Learning Representations.\\n\\n[2] Shojaee, P., Meidani, K., Barati Farimani, A., & Reddy, C. (2023). Transformer-based planning for symbolic regression. Advances in Neural Information Processing Systems, 36, 45907-45919.", "questions": "1. 
How does SPsyINN perform on existing equation discovery benchmark datasets like SRBench and SRSD?\\n\\n1. How do state-of-the-art SR algorithms perform in comparison to SPsyINN? The equations these state-of-the-art SR algorithms discover can be used to expand Table 2.\\n\\n1. Where are the error bars for all the empirical results (i.e., standard deviation or inter-quartile range)?\\n\\n1. Table 2 is given but never referenced. Can the paper include a description and discussion of the results in Table 2?\\n\\nSome of these questions may simply not be relevant because of the scope that the authors have set for the paper. If that is the case, I hope the justification for the limited scope can be addressed in the rebuttal.\\n\\n**After Author-Reviewer Discussions:**\\n\\n1.\\tThe results are still not reproducible. I obtain the MAE values of 0.168, 0.164, 0.161 for PsyINN-C-F, PsyINN-I-F, PsyINN-W-F respectively, which differ largely from the values in Table 5. The standard deviations they have provided are in the range of 0.0016 to 0.0008; there is no reason for the values I obtained to be so different from what they have reported.\\n\\n1.\\tThe initial version, the first revision and the final revision have 3 separate sets of equations. For example, for SPsyINN-I-F, MaiMemo, the discovered equation presented was different in all 3 versions. Given that these equations are intended for experts to analyze, I do not think the paper is in a ready state given its frequent unstable updates.\\n\\n1.\\tOn closer inspection of the dataset, the true regression label is mostly 1. I computed the MAE of a naive regressor that always predicts the value 1, and obtained the MAE value of 0.1038 on the test set. This beats all but one of the methods in the duolingo dataset (that is if we can even trust the results. 
Based on my own re-computation of the equations in Table 5, none of their discovered equations beat this).\\n\\n1.\\tMAE and MAPE are present, but because most of the values are 1, I also computed the R2 score, which is the most common evaluation metric for equation discovery papers. The R2 scores on duolingo were 0.00164, -0.00382 and 0.00774 for SPsyINN-C-F, SPsyINN-I-F, SPsyINN-W-F. These discovered equations will not be helpful to behavioural modelling.\\n\\n**I recommend that the other reviewers do their own independent check on the reproducibility of the paper. This can be done quickly, purely in Excel, just to check the equations, using 'test.csv' provided in the dataset (csv) file. After the many revisions, I do not trust using the evaluation code provided.**\\n\\nI have not even had the time to check whether the \\u201ccreation of these equations\\u201d is reproducible because of the constant revisions the paper has made to fix errors. I am considering downgrading my rating to \\\"Strong Reject\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer gLam,\\n\\nThank you for your meticulous review and valuable feedback during the rebuttal phase. Your insights have been immensely beneficial and have greatly helped us refine our work. We would be deeply grateful if you could consider providing a higher rating for our submission.\\n\\nSincerely,\\n\\nAuthors of Paper 8572\"}", "{\"comment\": \"Dear Reviewer CdzH,\\n\\nWe sincerely thank you for your thoughtful comments and valuable questions, which have greatly contributed to the improvement of our work. Your recognition of our detailed responses, including the additional experiments and the revised manuscript, is deeply encouraging to us. We are especially grateful for your positive evaluation of our revised work and for raising the score to a clear 8. 
\\n\\nOnce again, we deeply appreciate the time and effort you have invested in reviewing and helping us enhance our paper. \\n\\nSincerely, \\nAuthors of Paper 8572\"}", "{\"comment\": \"Our revised manuscript has been uploaded to OpenReview for your review. If you have any additional feedback, we will make further refinements accordingly.\"}", "{\"title\": \"Official Review of Submission8572 by Reviewer gLam\", \"comment\": \"The authors have clarified my concerns, and their changes should address all of my initial concerns. I am willing to increase my score, but am awaiting the upload of the revision to review the significance of their revised empirical results. I hope the authors can send a reply when the revision is uploaded so that I am notified, thanks.\"}", "{\"comment\": \"Our revised manuscript has been uploaded to OpenReview for your review. If you have any additional feedback, we will make further refinements accordingly.\\nAs a point of clarification, in the ablation study section, we have renamed the components and provided detailed explanations to enhance clarity.\"}", "{\"title\": \"Official Review of Submission8572 by Reviewer gLam\", \"comment\": \"The authors' revision has greatly improved clarity, so I will be increasing the presentation score to 3. The authors have also identified key SOTA SR methods for comparison, which improves the quality of the experiments.\\n\\nHowever, I do have some new issues with the results. To verify the results of the paper, I am testing the obtained equations from Table 5 on the dataset given in the link. However, I am unable to reproduce the MAE scores of Table 5 (which is linked to Table 1). First, there seems to be an error for the results of SBP-GP (MaiMemo) in Table 5, where the average MAE is .3988 but .2660 in Table 1. 
Second, on the duolingo dataset, using the data extracted from the train_loader and test_loader (and also normalizing with the max and min code variables: \\\"data = (data - min) / (max - min)\\\", and changing nan values to 0: \\\"data = torch.nan_to_num(data, 0)\\\"), I get the following results:\\n\\nDUOLINGO dataset \\nWickelgren Model, \\\"0.89*(1+0.0003*X[:,1])**-0.0003\\\", MAE (using the \\\"masked_mae\\\" function provided by the authors): 0.1238\\n\\nPySR Model, \\\"0.92 - X[:,0]*(X[:,4]+0.03)\\\", MAE: 0.0987\\n\\nDSR Model, \\\"np.cos(X[:,0]-X[:,3]+np.exp(X[:,0]*X[:,1]*X[:,2]*(-X[:,2]-X[:,3]-X[:,4])+X[:,4]))\\\", MAE: 0.4667\\n\\nSPsyINN-C-F Model, \\\"0.56*np.exp(-1.75*X[:,3]*X[:,0]*X[:,2])\\\", MAE: 0.4150\\n\\nSPsyINN-W-F Model, \\\"0.56*np.exp(X[:,0]*X[:,3]*(X[:,2]-X[:,0]))\\\", MAE: 0.4149\\n\\nCould the authors, for the sake of easing the process for the reviewers to test reproducibility, extract a csv file with just the 6 features that are used for prediction (e.g., $\\\\delta_1$, $\\\\delta_2$, $\\\\delta_3$, $\\\\delta_4$, $\\\\delta_5$, $\\\\delta_6$), along with the output, R?\\n\\nAlso, I have noticed in the code that these seem to be the normalized values based on the train set; for greater clarity, I think the authors should mention this in both Table 5 and Appendix A.2 for reproducibility. If $\\\\delta_1$, $\\\\delta_2$, $\\\\delta_3$, $\\\\delta_4$, $\\\\delta_5$, $\\\\delta_6$ refer to the unnormalized version instead, please correct me.\"}", "{\"metareview\": \"In my reading of this paper, the logic is as follows:\\n1. In the field of psychology, there are numerous proposed physical equations for human memory abilities. \\n2. It's not clear what the \\\"right\\\" equation is to fit empirical data, especially since certain variables in a given expression might also be outcomes of other expressions (e.g. $\\\\exp(x / S)$ might have $S = ...$ another equation).\\n3. 
In order to find interpretable equations to fit empirical data, SPsyINN is proposed, which basically consists of a joint training procedure, in which:\\n * An MLP must fit the empirical data\\n * A symbolic regression expression must fit the empirical data\\n * The MLP and the symbolic regression also must agree on most outputs.\\n4. Experiments were conducted on memory benchmarks, and shown to improve over other related baselines.\\n\\n## Main Weakness\\nIt's very unclear what the main contributions of the paper really are. I can think of two possible fronts, but each of them has its own drawbacks currently.\\n * Proposal of a new general symbolic regression method.\\n * It's not clear what the benefits of using the MLP really are. My interpretation is that it provides a form of smoothness regularization, to support regions of the input space where there was no ground truth data. But there's not a strong justification for this design in the first place.\\n * Application of deep learning methods to the specific field of memory equations.\\n * It's not clear how significant these results are, and whether these applications are important for presentation at a conference such as ICLR, whose audience is primarily in machine learning. At the moment, the paper does not do a good job of conveying the impact of said applications.\\n\\nDuring the rebuttal, the authors emphasize that the regression technique and the dataset are inherently tied together and that they aren't proposing a general regression method, but in any case, there isn't a strong contribution from either direction listed above.\\n\\nThis, combined with the reviewer discussion, leads to a clear rejection score.\", \"additional_comments_on_reviewer_discussion\": \"Being honest, this paper had the longest (and maybe _wildest_) discussion thread out of all papers in my batch. 
_Post-Rebuttal_, the reviewer scores were spread across the table (3,5,6,8), with many reviewers, in their own words, leaning towards rejection.\", \"the_main_issues_raised_were\": [\"What is the paper even contributing? As written also in my meta-review, the story of the paper is incredibly jumbled, especially combined with the authors' responses post-rebuttal. The paper doesn't do a good job of demonstrating (1) why their new symbolic regression method is good, or (2) why memory equations are an impactful application.\", \"Reproduction of results - Reviewer gLam has graciously spent their time trying to reproduce the results for Duolingo and Maimemo datasets, but could not. In fact, they have stated that the reproduced performance is even worse than a basic regressor which \\\"only outputs 1\\\", making the results untrustworthy.\", \"I strongly suggest that the authors resolve these core issues before resubmitting to another conference.\"]}", "{\"comment\": \"Dear Reviewer tRhP,\\n\\nThank you for pointing out the insufficient explanation of Deep Knowledge Tracing (DKT) in our manuscript. In the revised version, we have added a detailed description of DKT in the Background section to enhance the clarity of this concept. Additionally, we have carefully reviewed all abbreviations in the manuscript to ensure their clear and consistent usage. Regarding the citation formatting issues you mentioned, we have corrected them in the latest revision. We sincerely appreciate your thorough review and valuable suggestions, which have been instrumental in improving the quality of our paper.\\n\\nSincerely,\\n\\nAuthors of Paper 8572\"}", "{\"title\": \"What is DKT?\", \"comment\": \"In the baselines, the so-called \\\"DKT\\\" model is referenced multiple times, but it is never explained what DKT is (unless I missed it)! 
Please explain what \\\"DKT\\\" is.\\n\\nAlso please fix the typography of citations (there should be a space before the first bracket).\\n\\nConsidering the paper has been much improved and clarified, I will increase my score if these two corrections are applied.\"}", "{\"comment\": \"Our revised manuscript has been uploaded to OpenReview for your review. If you have any additional feedback, we will make further refinements accordingly.\"}", "{\"comment\": \"I thank the authors for the improvements made to the article. I do think that some presentation issues remain, but I have raised my score to a 6 to reflect the quality of the revised version. I\\u2019ll explain below some of the existing issues, which I didn\\u2019t take the time to raise individually in my initial review, but which continue my concern over clarity. My apologies for not having been more exhaustive in my first review, but I am heartened by the motivation of the authors to improve the article. An improved presentation is needed for a clear communication of the contribution of this article.\\n\\nPart of the issue in the presentation is prioritization. There is a fair bit of explanation given to the idea of asynchronous training, when the end result shows that synchronous training is preferable to asynchronous. It seems to me that the important part of 3.5 is knowledge alignment, as that impacts the results of the method more than the synchronicity of the training. DAO is presented as a part of the contribution, but in the end, if what works is to align the training epochs, the contribution of DAO is questionable compared to the overall contribution of the method. The ablation study of section 4.2, if I understand correctly, is entirely performed on the synchronous version of SPsyINN, for example. 
The contribution of SPsyINN without asynchronous training is interesting as it demonstrates the benefit of knowledge transfer - why insist on the asynchronous aspect, and why create a new acronym for it?\\n\\nA better focus would also help fix the priorities in the 10 page limit. Currently, the figures have been reduced to a very small size, making them almost unreadable on standard screens. Figure 2, for example, is more important than Figure 3, in my opinion, and if there is not the full place for both, then choosing one for the main text is far better than making both smaller.\\n\\nFurthermore, using established terms helps with presentation. I'm not familiar with any other works which refer consistently to symbolic regression (SR) as Genetic Symbolic Regression (GSR); SR is commonly used in the literature, whether it be deep symbolic regression or evolutionary. \\\"Denoised\\\" neural networks is also not a common term; the Denoising in DDPM refers to how the diffusion process denoises the image, for example. Here, prediction over memory data is denoised through a loss term - the neural network can be said to do a denoising operation, and the loss is appropriately called a denoising loss. The neural network, however, is not itself denoised, because the network doesn\\u2019t have noise. Finally, neural networks like LSTMs and Transformers can be applied to temporal data, but whether or not they are temporal themselves depends on if they are recurrent, hence why LSTMs are commonly referred to as an example of a Recurrent Neural Network, and not a Temporal Neural Network. For example, the authors state that \\u201cTNN can utilize flexible architectures such as LSTM (Hochreiter, 1997), Transformer (Vaswani, 2017), Mamba (Gu & Dao, 2023), or other specially designed model architectures,\\u201d but LSTMs and Transformers handle training and data representation differently due to the presence or absence of memory. 
How does the training change from a stateful RNN training to an ANN without state? If these seem like minor quibbles, I\\u2019ll point out that the title is currently \\u201cCombining Denoised Neural Network and Genetic Symbolic Regression for Memory Behavior Modeling via Dynamic Asynchronous Optimization\\u201d, and that I believe there are communication issues with \\u201cDenoised Neural Network,\\u201d \\u201cGenetic Symbolic Regression,\\u201d and \\u201cDynamic Asynchronous Optimization.\\u201d These are central terms in the article, but they are not sufficiently grounded.\\n\\nGiven that there has already been impressive improvements to the article over the review period, I would be open to further increasing my score. I realize that the paper revision deadline is approaching, but I\\u2019d appreciate responses from the authors up to the newly extended discussion period end date.\"}", "{\"comment\": \"### Detailed Responses to Questions:\\n\\n**1. Influence of initial conditions and waiting strategy** \\nAs you mentioned, the final equations produced by the model are indeed influenced by the initial conditions and waiting strategy. In our experiments, different initial conditions may lead to symbolic regression producing equations in varied forms. However, these equations consistently capture core memory dynamics, such as forgetting speed and spacing effects. In other words, while they may differ in local characteristics, their overall trends and applicability remain stable. \\n\\n**2. Which equations should neuroscientists/cognitive scientists use as the results of this study?** \\nOur method generates a series of candidate equations. When selecting the final equation, we recommend prioritizing fit and simplicity while considering experimental design characteristics and relevant theories in the specific application context. This flexibility allows neuroscientists and cognitive scientists to select the most suitable formula for their needs. \\n\\n**3. 
Are the equations similar or locally similar? Should they be distilled or approximated? How sensitive are they to numerical coefficients?** \\nOur observations indicate that the equations generated by SPsyINN often follow exponential patterns and align with the spacing effects in memory theory. Additionally, we identified new insights in the memory equations, such as the influence of historical memory performance and material difficulty, which enrich the theoretical framework of memory modeling. \\n\\nWe performed a sensitivity analysis on the numerical coefficients in the symbolic regression formulas. While noise can impact specific coefficient values, their influence on the overall trends of the formulas is minimal. The interpretability and predictive performance of these formulas remain stable across different datasets. In the revised version, we will include this analysis to better illustrate the robustness of symbolic regression. \\n\\n**4. Analyzing the final equations could be very interesting. How similar or different are they from existing models? What do the additional terms reveal? Can they help us model memory dynamics in neural circuits?** \\nIn the revised version, we will provide a detailed analysis of the final equations, exploring their similarities and differences with existing memory models. Specifically, we will discuss additional terms identified by symbolic regression, which reveal potential new mechanisms in memory dynamics, such as the impacts of historical memory states and material difficulty. \\n\\nWe believe these additional terms offer novel perspectives for studying memory dynamics in neural circuits. Through this analysis, we hope to further elevate the impact of this work. \\n\\nThe revised manuscript will be uploaded to OpenReview within three days. We sincerely hope you will review it again. If you have further comments or suggestions, we would be happy to make additional modifications and improvements!\"}" ] }
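As an illustration of the coefficient-sensitivity check described in point 3 above, a toy exponential forgetting curve can be perturbed and its qualitative trend inspected. The coefficients below are illustrative placeholders (echoing the exponential form discussed in this thread), not the paper's fitted values:

```python
import math

def recall(t, a=0.56, b=1.75):
    # toy forgetting curve R(t) = a * exp(-b * t); coefficients are
    # illustrative only, not the actual fitted SPsyINN values
    return a * math.exp(-b * t)

times = [0.0, 0.25, 0.5, 1.0]
base = [recall(t) for t in times]
perturbed = [recall(t, b=1.75 * 1.1) for t in times]  # +10% perturbation on b

# under the perturbation the curve still decreases monotonically, i.e. the
# qualitative trend (forgetting slows retention over time) is preserved
trend_preserved = all(x > y for x, y in zip(perturbed, perturbed[1:]))
```

This is the sense in which a coefficient change can shift specific values while leaving the overall trend of the formula intact.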
DpLFmc09pC
DEPfold: RNA Secondary Structure Prediction as Dependency Parsing.
[ "KE WANG", "Shay B Cohen" ]
RNA secondary structure prediction is critical for understanding RNA function but remains challenging due to complex structural elements like pseudoknots and limited training data. We introduce DEPfold, a novel deep learning approach that re-frames RNA secondary structure prediction as a dependency parsing problem. DEPfold presents three key innovations: (1) a biologically motivated transformation of RNA structures into labeled dependency trees, (2) a biaffine attention mechanism for joint prediction of base pairings and their types, and (3) an optimal tree decoding algorithm that enforces valid RNA structural constraints. Unlike traditional energy-based methods, DEPfold learns directly from annotated data and leverages pretrained language models to predict RNA structure. We evaluate DEPfold on both within-family and cross-family RNA datasets, demonstrating significant performance improvements over existing methods. DEPfold shows strong performance in cross-family generalization when trained on data augmented by traditional energy-based models, outperforming existing methods on the bpRNAnew dataset. This demonstrates DEPfold’s ability to effectively learn structural information beyond what traditional methods capture. Our approach bridges natural language processing (NLP) with RNA biology, providing a computationally efficient and adaptable tool for advancing RNA structure prediction and analysis. 
[ "RNA secondary structure prediction", "Dependency parsing", "Biaffine attention", "Pseudoknots", "Pretrained Model", "Deep learning" ]
Accept (Poster)
https://openreview.net/pdf?id=DpLFmc09pC
https://openreview.net/forum?id=DpLFmc09pC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wM1BNTurZX", "w22iYjfHQz", "vyERa7mMe9", "vO9OOqw1Wh", "uWWpsDwzuw", "q0wuLV8bFl", "pMEumEybVz", "jHwcWBLVy9", "eA3dnbEBHf", "XSk3YsPqDw", "OxR1ipdHwS", "Mh0Yvgw0zN", "DDWwNsLmO4", "6X9w00w20I", "5u1g4o2KVf", "33mCYZNKjG", "2hAYZHUE7Y" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733149762388, 1732756363375, 1730742273691, 1730644844906, 1730762029834, 1732806830090, 1737524198362, 1732806650177, 1732607520220, 1730713293452, 1735936080551, 1732588382335, 1732588311972, 1732589075419, 1732588732902, 1732806784448, 1732587545488 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12543/Reviewer_SsoD" ], [ "ICLR.cc/2025/Conference/Submission12543/Reviewer_XEMD" ], [ "ICLR.cc/2025/Conference/Submission12543/Reviewer_XEMD" ], [ "ICLR.cc/2025/Conference/Submission12543/Reviewer_vMAF" ], [ "ICLR.cc/2025/Conference/Submission12543/Reviewer_ubS6" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ], [ "ICLR.cc/2025/Conference/Submission12543/Reviewer_vMAF" ], [ "ICLR.cc/2025/Conference/Submission12543/Reviewer_SsoD" ], [ "ICLR.cc/2025/Conference/Submission12543/Area_Chair_JdMk" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ], [ "ICLR.cc/2025/Conference/Submission12543/Authors" ] ], "structured_content_str": [ "{\"title\": \"response\", \"comment\": \"I would like to thank the authors for the satisfying answer. 
I thus raised the score to a positive score.\"}", "{\"title\": \"Definitely improved\", \"comment\": \"Thank you for your responses to all my questions.\\n\\nThe paper is definitely improved. \\n\\nI'm still a bit unsure about some of the issues of comparison of different methods and the formal language class that is needed to handle pseudo-knots. I'm afraid that I'm only looking at the new version sort of quickly.\\n\\nI'm definitely raising my score to a 6. It could be that this paper is worth more?\"}", "{\"summary\": \"This paper introduces a new NLP dependency parsing-inspired algorithm for RNA secondary structure prediction, DEPfold. The paper's results suggest that DEPfold performs much better than other methods on within-family and cross-family RNA datasets, getting near perfect results on several datasets.\\n\\nAs a \\\"positionality statement\\\" for this review, I'll mention that I'm expert in NLP dependency parsing, and assume that I was chosen to review for that reason, but I have little understanding of the biology and am generally not familiar with the test sets and performance of other methods in the RNA structure prediction domain.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The approach to treating RNA secondary structure prediction via NLP dependency parsing, and largely using the biaffine dependency parsing algorithm of Dozat and Manning (2016) seems largely original.\", \"The results of the paper, as presented, are very strong. If this all checks out, the method provides a new much more accurate RNA secondary structure prediction and this would be quite significant\"], \"weaknesses\": [\"I feel that the presentation of the paper should have been clearer. It might have been better to move Fig. 1 to section 3 and to have referred to it when presenting the algorithm. The algorithm for conversion to dependency trees is presented somewhat informally. Something more precise would have been better. 
Among other things: (i) The algorithm is presented with reference to \\\"bracket-dot notation\\\" for RNA secondary structure (lines 126-136) but this is never rigorously defined. This notation may be well known in bioinformatics, but not to me. If this is the starting point of the algorithm, it would minimally be really useful to have shown the example sequence in Fig. 1 in bracket-dot notation. Indeed, it seems like the two subdiagrams in the top right of Figure 1 do not add much to understanding and at least one of them could have been deleted. Lines 165-187: While a projective dependency diagram is isomorphic to a certain kind of binary tree (a single-level X' CFG), a pure dependency grammar presentation would not normally evoke tree structure. Is it necessary? I would guess not and that the algorithm could be described directly in terms of creating dependency arcs. Things might even be clearer that way, given what is in Fig. 1. Tree completion (lines 210-215): AFAICS, g is never defined. Is it the root node, or the \\\"grandparent\\\" (parent of the head node)? Most importantly, I don't think the section 3.1 presentation of the algorithm even attempts to explain rigorously the treatment of pseudoknots. They're mentioned, as on line 223, but what are the details of their dependency grammar representation? Are they just treated as some kind of not fully labeled clump, as perhaps suggested by the notation introduced for P_i on line 159? I know vaguely that there is an older thread of work crossing from NLP to bioinformatics arguing that giving powerful enough grammars to describe pseudoknots requires more powerful \\\"mildly context sensitive\\\" grammars like tree-adjoining grammar, beyond the power of dependency grammars (e.g., https://academic.oup.com/bioinformatics/article/21/11/2611/294713 ). Is that still true? Is having a non-projective dependency tree grammar sufficient? This paper left me no wiser. 
Finally, I wondered whether the post-processing (line 291ff) couldn't have been incorporated into the decoding algorithm with some appropriate constraint.\", \"Results: I'm not really the person to judge the appropriateness and completeness of all the results presented, but to the extent that I tried to look other things up on the web for a few minutes, I seemed to be left with more questions than answers. The algorithm referred to in this paper as \\\"E2Efold\\\" is referred to by the authors of the cited reference as E2Efold-3D. There is actually a different algorithm by different authors called E2Efold that appeared at ICLR in 2020 (https://openreview.net/forum?id=S1eALyrYDH) and which itself used an algorithm \\\"close to\\\" biaffine dependency parsing. At least it is mentioned as related work. How does this paper relate to that paper? This paper is not in the references, and there is no explicit comparison in related work. But then the RNAStrAlign results in Table 2 seem to be the results of E2Efold not E2Efold-3D. They're identical to the ones in the E2Efold paper. Somehow this isn't giving me confidence.... I don't really know what are the best datasets or perceived best methods in this domain, but it then also seems like there are other recent methods claiming good results, such as RNAformer (https://icml-compbio.github.io/2023/papers/WCBICML2023_paper43.pdf) or DEBFold (https://pubs.acs.org/doi/10.1021/acs.jcim.4c00458) which aren't mentioned in the references here. Am I getting the best most up-to-date comparisons? It's not clear to me. 
The latest, best algorithm compared to is UFold from 2022, but there are clearly papers on this topic from 2023 and 2024, and it's not trivial for me to compare since there seem to be a lot of different datasets around, etc.\"], \"questions\": [\"This largely duplicates what I put in \\\"weaknesses\\\".\", \"Lines 165-187: While a projective dependency diagram is isomorphic to a certain kind of binary tree (a single-level X' CFG), a pure dependency grammar presentation would not normally evoke tree structure. Is it necessary? I would guess not and that the algorithm could be described directly in terms of creating dependency arcs. Things might even be clearer that way, given what is in Fig. 1.\", \"Tree completion (lines 210-215): AFAICS, g is never defined. Is it the root node, or the \\\"grandparent\\\" (parent of the head node)?\", \"Most importantly, I don't think the section 3.1 presentation of the algorithm even attempts to explain rigorously the treatment of pseudoknots. They're mentioned, as on line 223, but what are the details of their dependency grammar representation? Are they just treated as some kind of not fully labeled clump, as perhaps suggested by the notation introduced for P_i on line 159? I know vaguely that there is an older thread of work crossing from NLP to bioinformatics arguing that giving powerful enough grammars to describe pseudoknots requires more powerful \\\"mildly context sensitive\\\" grammars like tree-adjoining grammar, beyond the power of dependency grammars (e.g., https://academic.oup.com/bioinformatics/article/21/11/2611/294713 ). Is that still true? 
Is having a non-projective dependency tree grammar sufficient?\", \"Can the post-processing (line 291ff) be incorporated into the decoding algorithm with some appropriate constraint?\", \"Clarify the relation between this algorithm and E2Efold (from ICLR 2020)\", \"Clarify the relation and what you're citing as results between E2Efold and E2Efold-3D\", \"Clarify whether there are results from 2023 or 2024 that should be included as comparisons and how the methods and results of algorithms from these years relate to yours.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a novel method named DEPfold, which reframes the prediction of RNA secondary structures as a dependency parsing task.\\n\\nDuring training, given an RNA sequence, DEPfold constructs a dependency tree from the sequence using three types of arcs/labels: stem, pseudoknot, and connector. DEPfold then employs Biaffine parsers to learn these trees. \\nDuring inference, DEPfold first predicts the tree from the raw sequence and subsequently recovers the internal secondary structures through several post-processing steps.\\n\\nThe authors have experimented with various base models, including RNAfm and RoBERTa, and different learning strategies such as fine-tuning and freezing pretrained models. 
They found that DEPfold outperforms existing models across multiple datasets in the field of RNA structure prediction.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This is a novel work, which, to my knowledge, is the first to frame RNA structure prediction as dependency parsing.\\n\\nI am excited to see that the wisdom from traditional NLP parsing tasks can still play important roles in a wider range of structured prediction tasks, such as RNA secondary structures.\\nDEPfold is built on the biaffine parser, a widely known neural-based dependency parser, and demonstrates very strong performance by leveraging the power of pretrained masked language models like RoBERTa.\\n\\nI am very optimistic about further improving the performance of DEPfold by scaling it to larger model sizes and larger datasets.\\nAlso, beyond the scope of this work, I believe DEPfold opens the door for utilizing some other parsing methods like stack-pointer [1] networks or modeling RNA structures via context-free grammars [2].\\n\\n[1] Stack-Pointer Networks for Dependency Parsing \\n[2] Strongly Incremental Constituency Parsing with Graph Neural Networks\", \"weaknesses\": [\"I suggest that the authors elaborate further on the definitions of RNA structures. For instance, a small figure illustrating the differences between stem and pseudoknot would be helpful. As someone who is not an expert in RNA structure prediction, I find it challenging to fully understand the details of the paper without resorting to external resources like Wikipedia or search engines.\", \"Line 284, Decoding: The Matrix-Tree Theorem is used for calculating the normalization term of the probabilities of non-projective trees. 
For decoding, one should use O(N^2) MST algorithms to decode non-projective trees.\", \"Lines 126-129: It would be beneficial if the authors could provide examples to demonstrate the bracketing structures.\"], \"questions\": \"* > Pseudoknots, we find, are equivalent to adding a certain level of non-projectivity to the dependency parsing model.\\n\\nI can't see the connection between non-projectivity and pseudoknots. Can the authors give me more details?\\n\\n* There are some length limitations for RoBERTa that make it hard to extrapolate to longer sequences; how did you handle RNA sequences with thousands of nucleotides?\\n\\n* I'm very curious about the effectiveness of extending DEPfold to structured learning algorithms like TreeCRF, which has been proven very useful in the field of dependency parsing [1,2,3]\\n\\n* How about the speed of optimal tree decoding? Did you use any parallelized implementations of MST/Eisner in [torchstruct](https://github.com/harvardnlp/pytorch-struct) or [SuPar](https://github.com/yzhangcs/parser) for acceleration?\\n\\n[1] Neural Probabilistic Model for Non-projective MST Parsing \\n[2] Efficient Second-Order TreeCRF for Neural Dependency Parsing \\n[3] Headed-Span-Based Projective Dependency Parsing\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
This first involves mapping the RNA structure to a dependency tree (this can be challenging because of pseudo-knots that result in non-projectivity).\\n\\nThe authors then use a neural network biaffine parser (based on Dozat and Manning, 2016) for this task, which predicts the pairwise edges + labels. The authors then use either the Eisner algorithm (for projective structures) or Kirchhoff's Matrix-Tree Theorem (for non-projective structures) to find the optimal tree.\\n\\nExperiments show that the authors achieve strong performance on multiple datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"-The paper tackles an important problem.\\n\\n-The method is interesting and novel. \\n\\n-Empirical results are strong.\", \"weaknesses\": \"Overall I found the description of the RNA secondary structure and how to transform it into a dependency tree to be confusing and non-rigorous.\\n\\n-There is brief notation given in Section 2 and it states how a typical RNA secondary structure is composed of linear sequences (dots) and then different types of brackets. However, this is all quite vague and non-rigorous.\\n\\n It would be great to see an example of each type of label and how it is represented in this notation (e.g. stems, loops, pseudoknots etc.) I feel these concepts are not rigorously defined i.e. despite the importance of pseudo-knots I do not see a clear example/definition in the paper. \\n\\n-The description of 3.1 (transformation of RNA structures to dependency) would be clearer with a running example that shows what happens at each step. Moreover, I am confused about whether the transformation is a 1:1 mapping. Is it provably the case that for any RNA secondary structure it maps to a unique dependency tree and vice versa? 
Or is this a heuristic?\", \"questions\": \"I have many questions about the RNA secondary structure and the transformation to dependency trees as described above in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer:\\n\\nThank you very much for your positive comments!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the rebuttal phase approaches, we kindly ask you to confirm whether we have sufficiently addressed your comments or if there are any remaining concerns. More specifically, you mentioned you would be willing to increase your score if we follow up on more recent work, and we included new results (RFold) as mentioned in our rebuttal. We discussed other issues you mentioned there.\\n\\nThank you very much for your feedback!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for your hard work and clarifications, which addressed most of my concerns.\\nIn light of the comments from other reviewers, I maintain my positive score.\"}", "{\"summary\": \"The paper proposes to use deep dependency parsing techniques to predict RNA secondary structures. The paper shows how RNA structures can be mapped to dependency structures and vice versa. Thus, the problem of predicting RNA structures is cast as dependency parsing, a classical task in NLP. The paper uses the Dozat & Manning 2016 parsing method, in which nucleotides are represented by RNA-fm or Roberta contextual embeddings.\\n\\nThe paper shows experiments with popular datasets, RNAStrAlign, ArchiveII, and bpRNA-*. The proposed method substantially outperforms strong baselines (UFold, MXfold2, E2Efold). 
The paper also presents analyses showing a surprising result that Roberta contextual embeddings actually are effective for this problem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper showcases a very interesting application of NLP (more specifically, dependency parsing) to biology. The authors propose an effective way to translate the RNA structure prediction to dependency parsing, connecting the two challenges. The experiment results are evidence for the effectiveness of the proposed method.\", \"weaknesses\": \"The paper only mentions methods and work from 2022 backward, and only one published in 2024. However, I found several publications in 2023, such as [1,2,3]. I thus question whether the information in the paper is up to date.\\n\\n(I'm willing to raise my overall score if the authors can provide a comparison with the most up-to-date work in the literature). \\n\\n\\n1. Chen, CC., Chan, YM. REDfold: accurate RNA secondary structure prediction using residual encoder-decoder network. BMC Bioinformatics 24, 122 (2023). https://doi.org/10.1186/s12859-023-05238-8\\n2. Wang, W., Feng, C., Han, R. et al. trRosettaRNA: automated prediction of RNA 3D structure with transformer network. Nat Commun 14, 7266 (2023). https://doi.org/10.1038/s41467-023-42528-4\\n3. Tzu-Hsien Yang. DEBFold: Computational Identification of RNA Secondary Structures for Sequences across Structural Families Using Deep Learning. Journal of Chemical Information and Modeling 2024 64 (9), 3756-3766. DOI: 10.1021/acs.jcim.4c00458\", \"questions\": [\"Did the authors try \\\"undirected\\\" dependency structures? 
If yes, what is the performance?\", \"Because an RNA sequence can be very long (>1500), could the authors explain how to use Roberta effectively?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The authors propose a new method for predicting RNA secondary structure by reframing the task as a dependency parsing problem. Their approach, called DEPfold, involves three main steps: generating a dependency tree from the RNA structure, using an attention-based framework to predict the tree\\u2019s elements, and then decoding this predicted tree into RNA sequences under structural constraints. Tests show that DEPfold outperforms existing methods and generalizes well across both in-family and cross-family settings.\\n\\nDuring the review process, several reviewers found the initial description of the method unclear. In response, the authors significantly revised Section 3 and added a pseudocode example with a working illustration in Appendix A. These updates satisfied reviewers ubS6 and XEMD, who raised their scores to 6. Reviewers also noted that recent methods (from 2022 to 2024) were missing from the comparisons. To address this, the authors explained why techniques like RNAFormer and DEBFold were not directly comparable, and they incorporated Rfold as a baseline in Tables 2\\u20138 to benchmark DEPfold against newer work.\\n\\nThe authors also improved the paper\\u2019s readability, including improving the explanation of bracket-dot notation (lines 125\\u2013141), refining the definition of node g (line 119), and clarifying the discussion of pseudoknots in Section 3. Following these changes, all reviewers agreed that the paper should be accepted.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}
Regarding Comparisons with Results from 2023 or 2024:\\n\\nRegarding recent methods, we acknowledge that there have been new developments in the field in the last couple of years. The limitations of RNA datasets and inconsistencies among them make it challenging to establish a comprehensive and fair comparison for all recent models. However, based on your suggestion, we have reviewed the algorithms mentioned.\", \"rnaformer\": \"This work only provides inference code with pre-trained parameters, and it is not possible to retrain it on the same datasets used in our study for a fair comparison. The model is trained on a mixture of databases, whereas we train on one dataset at a time to evaluate specific capabilities of our model. In our attempt to test RNAformer on the bpRNA-TS0 dataset, it achieved an F1 score of 0.706, which is quite high. However, when tested on new families from the bpRNA-new dataset, its F1 score dropped significantly to 0.394, suggesting potential overfitting issues.\", \"debfold\": \"We noted that only a web-based prediction interface is available, and the source code is not provided, making retraining on our datasets impossible. Moreover, DEBFold is trained on the Rfam dataset, which is inconsistent with our datasets, making a direct comparison difficult. Given these limitations, it is not feasible to obtain a fair comparison with our model.\\n\\nIn response to your suggestion, we have added a comparison with a newer model, RFold[1], published at ICML 2024, which is a state-of-the-art model in this domain. Our model demonstrates superior performance compared to RFold under the same training and testing conditions. These results have been added to the revised paper, as shown in Tables 2-8.\\n\\nWe hope these revisions and additional explanations address your concerns and help resolve them effectively. 
Thank you again for your invaluable feedback.\\n\\n\\n[1] Cheng Tan, Zhangyang Gao, CAO Hanqun, Xingran Chen, Ge Wang, Lirong Wu, Jun Xia, Jiangbin Zheng, and Stan Z Li. Deciphering RNA secondary structure prediction: A probabilistic k-rook matching perspective. In Forty-first International Conference on Machine Learning.\"}", "{\"comment\": \"Dear reviewer:\\n\\nThank you for your detailed feedback. We appreciate the time and effort you took to review our work and provide such insightful comments. Below, we address each of your concerns in detail:\\n\\n1. Regarding Figure 1 and Bracket-Dot Notation: \\n\\nThank you for pointing this out. We have revised the description of RNA secondary structures in Section 2 to provide more precise definitions (see lines 125-141), including a more detailed explanation of pseudoknots, emphasizing their significance and structure to ensure readers have a clearer understanding. Specifically, we have redrawn Figure 1 to include a more comprehensive example, featuring stems, loops, and pseudoknots, with clear labels for each type of structure. We have also added the corresponding bracket-dot notation for these elements, making it easier to understand the relationship between the different structures and their representations. As you suggested, we have appropriately cited Figure 1 in Section 3.\\n\\n2. Regarding the Presentation of the Algorithm for Conversion to Dependency Trees:\\n\\nTo address this, we have added pseudocode in Appendix A that outlines each step of the transformation in detail. Moreover, we have provided a step-by-step example in Appendix B that visually demonstrates how the RNA structure is transformed into a dependency tree, making the process more intuitive for readers.\\n\\n3. Regarding the Necessity of Tree Representation: \\n\\nThe tree constraint enforces that much of the structure is projective overall, which is indeed the case for RNA structures, where pseudoknots are rarer (and denote non-projectivity). 
As such, the use of a tree adds further structural constraints that improve the inductive bias of the algorithm. Indeed, RNA structure prediction has been done in the past using CFGs, which denote similar tree nestedness constraints.\\n\\n4. Clarification of Node $g$ in Tree Completion: \\n\\nThank you for pointing this out. Node $g$ refers to the node that connects to the root node of the entire sequence's dependency tree. We have now clarified this definition in line 119 and revised the description in lines 119-219. Additionally, we have included pseudocode in Appendix A and an example in Appendix B to facilitate understanding.\\n\\n5. Regarding the Treatment of Pseudoknots in the Algorithm: \\n\\nIn our algorithm, pseudoknots and stems are treated as two independent sequences first, with each generating its own tree structure using the same method before being connected together. Please refer to the detailed example provided in Appendix B for a step-by-step illustration.\\n\\n6. Regarding the Power of Grammars for Describing Pseudoknots: \\n\\nThank you for the valuable reference! We have added it (see line 493). We now include more information about pseudoknots in Section 3, explaining that a pseudoknot is essentially a case with interlacing base pairs, which denotes a certain level of non-projectivity. Thus, there is a clear mapping between non-projectivity and pseudoknots.\\n\\n7. Regarding Post-Processing and Decoding Algorithm:\\n\\nIndeed, we are currently unable to incorporate these constraints directly into the decoding algorithm, but this is a promising idea and something we will explore in future work. We appreciate your suggestion.\\n\\n8. Clarification on the Relation to E2Efold-3D:\\n\\nWe sincerely appreciate your close reading and for pointing out the confusion regarding the reference to E2Efold. Indeed, this was an oversight on our part. We intended to cite the original E2Efold paper, not E2Efold-3D. 
We have now corrected the reference throughout the paper to accurately cite E2Efold and have checked all other citations to prevent similar mistakes. See line 571.\"}", "{\"comment\": \"Dear reviewer:\\n\\n\\nThank you for your detailed feedback. We appreciate the time you took to provide us with these valuable insights. Below, we address each of your concerns in detail:\\n\\n\\n1.Regarding the Definitions of RNA Structures and Examples for Bracketing Structures:\\n\\nThank you for pointing this out. We have revised the description of RNA secondary structures in Section 2 to provide more precise definitions (see lines 125-141), including a more detailed explanation of pseudoknots, emphasizing their significance and structure to ensure readers have a clearer understanding. Specifically, we have redrawn Figure 1 to include a more comprehensive example, featuring stems, loops, and pseudoknots, with clear labels for each type of structure. We have also added the corresponding bracket-dot notation for these elements, making it easier to understand the relationship between the different structures and their representations. \\n\\n\\n2. Regarding Decoding with the Matrix-Tree Theorem: \\n\\nThank you for this observation. We have corrected the wording in the paper (see line 292). In the case of inference with this algorithm, the Matrix-Tree Theorem is used to calculate the marginals over the different edges before running the maximum spanning tree algorithm. \\n\\n\\n3. Regarding Pseudoknots and Non-Projectivity:\\n\\n The lack of pseudoknots implies that the trees generated would be projective, i.e. there would be no crossing arcs in the dependency tree. This is because the tree can be described in that case using a phrase-structure-like tree. 
We now provide more information about pseudoknots and non-projectivity in Section 2, and you can see that a pseudoknot is essentially a case with interlacing base pairs connected, which exactly denotes a certain level of non-projectivity.\\n\\n\\n4. Regarding Handling Long RNA Sequences with RoBERTa:\\n\\nGiven that RNA sequences can be very long (over 1500 tokens), we handle them by splitting the sequences into overlapping subsequences based on RoBERTa's maximum length (e.g., 512 tokens) with a suitable stride (e.g., 256 tokens) to maintain continuity of contextual information. Each subsequence is encoded independently through RoBERTa, and the outputs are then concatenated and pooled appropriately. This approach respects the model's input length limitation while preserving the complete information of long sequences, enabling RoBERTa to effectively process ultra-long RNA sequences.\\n\\n\\n5. Regarding Exploring TreeCRF for DEPfold:\\n\\nYour suggestion to extend DEPfold to structured learning algorithms like TreeCRF is insightful. We have experimented with TreeCRF in preliminary studies, but we found that training was relatively slow, possibly due to computational constraints. However, we agree that this is a promising direction for future work, as TreeCRF has demonstrated effectiveness in dependency parsing. We have added a note about this in line 530, and also referenced the stack-pointer approach, which is an excellent suggestion. We appreciate your input on this matter.\\n\\n6. Regarding the Speed of Optimal Tree Decoding:\\n\\nRegarding the speed of optimal tree decoding, we did use parallelized implementations of MST/Eisner for acceleration. Specifically, our implementation leveraged SuPar, as noted in Appendix C, lines 1028-1029. This allowed us to achieve faster and more efficient tree decoding. We also commit to making our code publicly available to ensure transparency and reproducibility.\\n\\n\\nThank you again for your valuable feedback. 
Your comments have significantly improved the clarity and depth of our paper. We hope our revisions and explanations adequately address your concerns.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback and for bringing these recent publications to our attention.\\n\\n1. Regarding the lack of up-to-date comparisons, with recent publications from 2023 and 2024:\\n\\nThank you for highlighting this point. Indeed, several new deep learning methods have emerged recently. However, due to the limitations and inconsistencies of available RNA datasets, we aimed to provide a comprehensive and fair evaluation of the generalization capabilities of our model using well-established benchmarks.\\n\\nSpecifically, we have reviewed the three works you mentioned:\", \"redfold\": \"This work does not provide source code for training but only a packaged prediction program. Its training data includes an Rfam dataset that differs from our training datasets, making a direct comparison challenging. The framework used by REDfold is based on a U-net architecture similar to that of UFold, which we have already included in our comparisons. Therefore, we believe our comparison with UFold is representative, especially given that our approach differs fundamentally from these methods.\", \"trrosettarna\": \"This model is designed for RNA 3D structure prediction, which is a different task compared to our work focusing on RNA secondary structure prediction. Therefore, a direct comparison is not feasible.\", \"rnaformer\": \"This work only provides inference code with pre-trained parameters, and it is not possible to retrain it on the same datasets used in our study for a fair comparison. The model is trained on a mixture of databases, whereas we train on one dataset at a time to evaluate specific capabilities of our model. In our attempt to test RNAformer on the bpRNA-TS0 dataset, it achieved an F1 score of 0.706, which is quite high. 
However, when tested on new families from the bpRNA-new dataset, its F1 score dropped significantly to 0.394, suggesting potential overfitting issues.\\n\\n\\nIn response to your suggestion, we have added a comparison with a newer model, RFold[1], published at ICML 2024, which is a state-of-the-art model in this domain. Our model demonstrates superior performance compared to RFold under the same training and testing conditions. These results have been added to the revised paper, as shown in Tables 2-8.\\n\\nWe commit to making our source code and model weights publicly available, along with all comparison data, to ensure full transparency and reproducibility.\\n\\n2. Regarding undirected dependency structures:\\n\\nWe have only utilized directed dependency structures in our work. Thank you for this suggestion; we will consider exploring undirected structures as a direction for future research.\\n\\n3. Regarding handling long RNA sequences effectively with RoBERTa:\\n\\nGiven that RNA sequences can be very long (over 1500 tokens), we handle them by splitting the sequences into overlapping subsequences based on RoBERTa's maximum length (e.g., 512 tokens) with a suitable stride (e.g., 256 tokens) to maintain continuity of contextual information. Each subsequence is encoded independently through RoBERTa, and the outputs are then concatenated and pooled appropriately. This approach respects the model's input length limitation while preserving the complete information of long sequences, enabling RoBERTa to effectively process ultra-long RNA sequences.\\n\\n\\nThank you again for your constructive comments, which have significantly helped us improve our paper. We hope that our revisions and explanations address your concerns and demonstrate the up-to-date nature and robustness of our work.\\n\\n[1] Cheng Tan, Zhangyang Gao, CAO Hanqun, Xingran Chen, Ge Wang, Lirong Wu, Jun Xia, Jiangbin Zheng, and Stan Z Li. 
Deciphering RNA secondary structure prediction: A probabilistic k-rook matching perspective. In Forty-first International Conference on Machine Learning.\"}", "{\"comment\": \"Dear reviewer:\\n\\nThank you very much for your positive comments!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback. Based on your suggestions, we have made several adjustments to improve the clarity and rigor of our paper.\\n\\n1. Regarding the vague and non-rigorous description of RNA secondary structures in Section 2, including the need for examples and clearer definitions of pseudoknots:\\n\\nThank you for pointing this out. We have revised the description of RNA secondary structures in Section 2 to provide more precise definitions (see lines 125-141), including a more detailed explanation of pseudoknots, emphasizing their significance and structure to ensure readers have a clearer understanding. Specifically, we have redrawn Figure 1 to include a more comprehensive example, featuring stems, loops, and pseudoknots, with clear labels for each type of structure. We have also added the corresponding bracket-dot notation for these elements, making it easier to understand the relationship between the different structures and their representations. This should provide a clearer and more thorough introduction for readers without extensive background knowledge.\\n\\n2. Regarding the transformation of RNA structures to dependency trees in Section 3.1: \\n\\nWe agree that including a running example would significantly improve the clarity of the transformation process. To address this, we have added pseudocode in Appendix A that outlines each step of the transformation in detail. Moreover, we have provided a step-by-step example in Appendix B that visually demonstrates how the RNA structure is transformed into a dependency tree, making the process more intuitive for readers.\\n\\n3. 
Regarding the uniqueness of the mapping between RNA secondary structures and dependency trees: \\n\\nThank you for raising this important question. In our approach, we use a right-to-left tree generation and connection method, along with specific constraints that ensure a consistent mapping. Under these directional and rule-based constraints, the transformation from an RNA secondary structure to a dependency structure is indeed unique. Similarly, the reverse transformation from the dependency structure back to the RNA secondary structure is also unique. As shown in Figure 1 and Appendix B, this consistency is clearly demonstrated. Of course, these constraints are not the only possible approach\\u2014alternative directions, such as left-to-right, or other connection methods could also be employed.\\n\\nWe sincerely appreciate your constructive feedback, which has greatly helped us improve the clarity and rigor of our paper. We hope that our revisions and additional explanations effectively address your concerns.\"}" ] }
Dolm7rrrQd
Gone With the Bits: Revealing Racial Bias in Low-Rate Neural Compression for Facial Images
[ "Tian Qiu", "Arjun Nichani", "Rasta Tadayon", "Haewon Jeong" ]
Neural compression methods are gaining popularity due to their impressive rate-distortion performance and their ability to compress data to extremely small bitrates, below 0.1 bits per pixel (bpp). As deep learning architectures, these models are prone to bias during the training process, potentially leading to unfair outcomes for individuals in different groups. In this paper, we present a general, structured, scalable framework for evaluating bias in neural image compression models. Using this framework, we investigate racial bias in neural compression algorithms by analyzing 7 popular models and their variants. Through this investigation we first demonstrate that traditional distortion metrics are ineffective in capturing bias in neural compression models. Next, we highlight that racial bias is present in all neural compression models and can be captured by examining facial phenotype degradation in image reconstructions. Additionally, we reveal a task-dependent correlation between bias and model architecture. We then examine the relationship between bias and realism in the image reconstructions and demonstrate a trade-off across models. Finally, we show that utilizing a racially balanced training set can reduce bias but is not a sufficient bias mitigation strategy.
[ "Fairness", "Bias", "Neural Compression", "Phenotype Classification" ]
https://openreview.net/pdf?id=Dolm7rrrQd
https://openreview.net/forum?id=Dolm7rrrQd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uvpeokkeIG", "oN7JRV2iIc", "Zd3Wm2upvt", "WZ2eeBszB8", "WPMwPdANYc", "Uy6ZsW0oDZ", "SGG3lTjQnd", "RcRbjrTRJE", "MKUjDvXu2A", "Jb3Xf1GpXk", "8lXajHqRY2", "4oFezCB9EZ" ], "note_type": [ "official_comment", "official_review", "official_comment", "comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733164216703, 1730555748797, 1732388536296, 1737156502568, 1732529231674, 1729676456933, 1730124810627, 1732389550751, 1732389182320, 1731289775352, 1732389948466, 1732389408038 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8943/Authors" ], [ "ICLR.cc/2025/Conference/Submission8943/Reviewer_MyYt" ], [ "ICLR.cc/2025/Conference/Submission8943/Authors" ], [ "ICLR.cc/2025/Conference/Submission8943/Authors" ], [ "ICLR.cc/2025/Conference/Submission8943/Reviewer_VaCJ" ], [ "ICLR.cc/2025/Conference/Submission8943/Reviewer_VaCJ" ], [ "ICLR.cc/2025/Conference/Submission8943/Reviewer_cyKR" ], [ "ICLR.cc/2025/Conference/Submission8943/Authors" ], [ "ICLR.cc/2025/Conference/Submission8943/Authors" ], [ "ICLR.cc/2025/Conference/Submission8943/Reviewer_FHQM" ], [ "ICLR.cc/2025/Conference/Submission8943/Authors" ], [ "ICLR.cc/2025/Conference/Submission8943/Authors" ] ], "structured_content_str": [ "{\"title\": \"Comments to all reviewers\", \"comment\": \"We thank all the reviewers for providing valuable feedback on our paper. We appreciate the reviewers for recognizing the contributions of our paper, and for agreeing that we are revealing an important problem that should be considered in designing neural compression algorithms. Specifically, the reviewers describe our paper that reveals racial bias in neural compression as novel (FHQM), timely, and consider the raised issues very noteworthy and worth investigating (cyKR; VaCJ). 
The reviewers also recognize the extensive experiments (FHQM), and agree that the proposed evaluation framework is effective at capturing the bias (FHQM, MyYt).\\\\\\nThere are 3 common concerns that the reviewers raised, and we hope we have addressed these concerns in the comments. We\\u2019d appreciate any feedback to further strengthen our paper. \\n- The paper lacks bias mitigation approaches (FHQM W1, MyYt W1, VaCJ W1)\\n\\nThis paper is not intended as a solution-centered paper, but rather a benchmarking/evaluation paper to expose previously unexposed bias issues within neural compression models. By establishing a novel metric for evaluating bias in image compression and showing clear weaknesses across all existing neural compression techniques, this paper lays the groundwork for future advancements in this area.\\n- The paper lacks analysis of the fundamental cause of bias (MyYt W2, VaCJ W3)\\n\\nThrough this paper, we have decoupled the bias that comes from the dataset and the bias that is inherent to the model (loss function, training regime, architecture). Across our experiments, we reveal a general trend of bias that is consistent across multiple loss functions (MSE, LPIPS) and architectures (VAE variants, VAE with GAN, diffusion-based). From our initial data balancing experiments, we demonstrate that bias is present across models both in settings where the training data set is racially balanced and imbalanced. To more thoroughly highlight our point, we present new experiments with African-only training data (Appendix H). These experiments show that the bias is introduced by the model itself (loss function, training regime, architecture).\\n\\nBy highlighting bias across all models, we demonstrate the presence of bias in all neural image compression settings. These results suggest that there is no single component of neural compression that we can attribute the bias to and that this bias cannot be further isolated. 
As there are no simple ways to isolate and easily remove bias, we believe that this paper strongly motivates algorithmic methods for bias mitigation, an entirely new direction of research, which can be pursued in future follow-up works. Ultimately, we believe that this paper highlights a flaw with current neural image compression models and motivates a path of algorithmic mitigation of bias in these models.\\n- Justification of phenotype classifier-based bias analysis (MyYt W4, cyKR W1)\\n\\nWe have additionally run human studies to analyze how humans classify skin type in facial images. We asked users to label both uncompressed original images and distorted images decoded at the lowest bitrate of the GaussianMix-Attn model. From the human studies, we observe that humans\\u2019 ability to discern the original skin type for the African racial group drops by 32%, much higher than the reduction for other races, which is at most 14%. This reflects that humans perceive distorted African facial images as lighter, which is also consistent with both our initial observation that the African racial group suffers significantly from losing skin type information, and the classifier accuracy results. Besides the human study, we generated saliency maps to verify the validity of the phenotype classifiers (line 371). The saliency maps confirm that the classifiers are looking at relevant image features for classification.\\n\\nWe hope our additional experiments and responses address the concerns from the reviewers. We kindly ask reviewers to let us know whether our responses answer their questions, and would be happy to participate in further discussions. Thank you!\"}", "{\"summary\": \"This paper investigated the racial bias problem existing in learned image compression. The authors built a framework to systematically examine the extent to which racial bias occurs in compression. Based on the evaluation framework, they proposed a classification-accuracy-based loss function to better reveal the bias. 
The correlation between bias, model architecture and image realism has been measured. They also show that utilizing a racially balanced training set cannot fix the problem.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper has a clear problem definition and constructs a reasonable evaluation framework.\\n2.\\tExisting experiments have proved the existence of the problem from multiple angles to a certain extent\", \"weaknesses\": \"1. The paper seemingly shows few contributions to the compression community, since it merely raises the racial bias problem existing in learned image compression but provides no solution from the compression perspective. Similar evaluation schemes seem to be applicable to any field. Have you considered proposing compression-specific bias mitigation techniques?\\n2. It seems that the bias problem is mainly attributed to the dataset and optimization method. But the authors only focus on the data-related reasons and do not explore the impact of model optimization methods on this issue. It seems unconvincing to simply attribute the difference in model bias to the difference in model architecture. Have you considered analyzing how different loss functions or training regimes impact bias in compression models?\\n3. The authors did not provide bias analysis results for images decoded by traditional codecs like JPEG, HM and VTM. The optimization of traditional codecs is not affected by the distribution of the dataset and should not lead to bias. If this experiment can be provided, it will promote our understanding of this problem. Please estimate traditional codec results at equivalent bitrates using the same bias evaluation framework.\\n4. The author used the accuracy of the classification model to evaluate the loss of image attributes at low bitrates. 
However, the classification model was learned on undistorted images, and whether it can accurately classify features on distorted images is unverified. Additional experiments should be conducted in this regard to enhance the persuasiveness of the bias-related conclusion. For example, you could compare the classifier\\u2019s results with human evaluations on a subset of distorted images and report the accuracy.\", \"questions\": \"Please refer to the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the valuable feedback.\\n### W1: Lack of bias mitigation techniques\\nThis paper is not intended as a solution-centered paper, but rather a benchmarking/evaluation paper to expose previously unexposed bias issues within neural compression models. Benchmarking papers have been instrumental in driving research in responsible machine learning. As AI continues to progress rapidly, it is crucial to uncover and understand unforeseen challenges that arise. For instance, at ICLR last year, several benchmarking papers (e.g., [1\\u20133]) were accepted, highlighting emerging issues related to bias and privacy in large language models (LLMs). We believe that our benchmarking work provides valuable insights to the community and opens up an important new research topic on fair neural compression. By establishing a novel metric for evaluating bias in image compression and showing clear weaknesses across all existing neural compression techniques, this paper lays the groundwork for future advancements in this area.\\n### W2: Perceptual Metrics\\nThank you for mentioning perceptual metrics. We have added the LPIPS metric to capture the bias in neural compression. 
\\nAs shown in Figure 2, while LPIPS aligns more closely with human perception, as indicated by the higher curve for African images compared to others, it does not capture the difference in phenotype degradation across races. This indicates that LPIPS, like MSE and PSNR, is not a sufficient metric to capture bias in these settings. To address this limitation, we introduce the phenotype classifier task, specifically designed to detect and quantify bias that these traditional metrics overlook.\\n### References:\\n[1] Gupta, Shashank, et al. \\\"Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\\\\\\n[2] Bel\\u00e9m, Catarina G., et al. \\\"Are Models Biased on Text without Gender-related Language?.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\\\\\\n[3] Staab, Robin, et al. \\\"Beyond Memorization: Violating Privacy via Inference with Large Language Models.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to the rebuttal\", \"comment\": \"Regarding the bias phenomenon in LICs, I believe it is crucial to focus on understanding its underlying causes and identifying effective solutions. However, as pointed out by other reviewers, the paper does not sufficiently address these aspects. I hope the authors can investigate this issue more thoroughly in future work. I will maintain my score.\"}", "{\"summary\": \"This paper investigates racial bias in neural compression models for facial image compression, particularly at low bitrates. The authors demonstrate that traditional distortion metrics are insufficient for capturing racial bias, which manifests in noticeable degradation of facial features, especially for darker-skinned individuals. 
They examine the relationship between bias, model architecture, and visual realism, and show that while balancing the training dataset can help reduce bias, it does not fully eliminate it.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper investigates bias in neural compression models, bringing attention to an underexplored area of fairness in AI. The authors reveal clear biases, particularly in skin-type degradation.\\n\\n2. The raised issue is very noteworthy and worth investigating, as it holds certain value for the generalization and reliability of neural compression methods.\", \"weaknesses\": \"1. One major concern is that while the paper identifies the presence of bias, the essential reasons are not thoroughly explored, and the suggested mitigation strategies (like dataset balancing) are not shown to be completely effective.\\n\\n2. The experiments demonstrate that balancing the training dataset can help but does not fully mitigate the bias. If dataset balancing does not work well, the network architecture\\u2019s impact on bias could be critical and should be analyzed further.\\n\\n3. Exploring the fundamental causes of bias is critical. For instance, what would the bias level and visualized results be like if the network were trained and tested on an Africa-only dataset?\", \"questions\": \"1. Bias is defined as the maximum difference in loss (Eq. 3 and Eq. 6). How do you deal with the impact of extreme values on the results, and how well does this definition of bias reflect the overall dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents an analysis of racial bias in different neural compression algorithms. To measure this, the method uses face/phenotype classifiers and measures how much the classification decision is affected by the neural compression algorithm. 
This is similar to a rate-distortion measure where distortion is classification error instead of pixel error. The paper shows a clear bias in neural compression algorithms when trained on imbalanced datasets, which is only partially mitigated by using balanced data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The work here is interesting and timely. As neural compression continues to approach a usable state, but has yet to be deployed in any meaningful way, it is extremely important to start considering any potential bias in trained models or inherent to the algorithms. This way, the community can develop mitigations and ensure that those mitigations are used in future products. Additionally, I think the metrics used by the method to quantify bias are sensible and the finding that traditional distortion metrics, such as PSNR, do not accurately capture bias is a good result. This also makes sense since many compression techniques use pixel-error metrics as their objective.\\n\\nAlthough there is some overlap with prior work, as discussed in the paper, I think there is significant value in extending the analysis to neural compression.\", \"weaknesses\": \"While the classification-error metric presented in the paper is a good start, it may need some additional development to fully capture bias. As the authors point out: the metric is only as good as the classifier itself. If the classifier is not able to make reliable decisions, then the metric could miss bias or overly assign bias. I think this topic deserves more attention. Additionally, classifiers may be sensitive to different frequency degradations (which are common for compression); this may also explain different classification results at low bitrates.\\n\\nFinally, there is an entire class of compression algorithms based on Implicit Neural Representations (see SIREN [1] for one example) which train a neural compression model unique to each example. 
This kind of technique could help mitigate any bias, but such methods were not tested in the paper.\\n\\n1. https://arxiv.org/abs/2006.09661\", \"questions\": [\"How can we show better reliability of the classification metric?\", \"Could INR methods overcome potential bias in neural compression?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the valuable feedback.\\n### W1: Phenotype Classifier Evaluation\\nWe have additionally run human studies to analyze how humans classify skin type in facial images. We asked users to label both uncompressed original images and distorted images decoded at the lowest bitrate of the GaussianMix-Attn model. From the human studies, we observe that humans\\u2019 ability to discern the original skin type for the African racial group drops by 32%, much higher than the reduction for other races, which is at most 14%. This reflects that humans perceive distorted African facial images as lighter, which is also consistent with both our initial observation that the African racial group suffers significantly from losing skin type information, and the classifier accuracy results. \\\\\\nBesides the human study, we generated saliency maps to verify the validity of the phenotype classifiers (line 371). The saliency maps confirm that the classifiers are looking at relevant image features for classification. \\n### W2: Sensitivity of Classifiers to Frequency Degradations\\nWe studied the frequency domain degradation introduced by different neural compression models and edited the paper to include results in Appendix I. The figures plot the percentage of reduction in signal magnitude in the frequency domain. We can observe different overall patterns across neural compression models, but the patterns across races are consistent within each model. 
This suggests that the phenotype classifier is not leveraging any discrepancy in frequency distortion across races.\\n### W3: Implicit Neural Representations\\nWe thank the reviewer for mentioning INR as a potential neural compression method with no bias. INR models overfit a sample to a small network and transmit the network parameters as the bitstream. Even though this approach does not suffer from imbalanced dataset distribution, the existence of bias in INR models is not fully studied and is worth further exploration as a potential mitigation strategy. Due to the time limit, we are not able to do a full evaluation of INR models, but this is definitely an exciting future direction!\"}", "{\"title\": \"Reply to Reviewer MyYt - Part 1\", \"comment\": \"We sincerely thank the reviewer for the valuable feedback.\\n### W1: About lack of bias mitigation techniques\\nThis paper is not intended as a solution-centered paper, but rather a benchmarking/evaluation paper to expose previously unexposed bias issues within neural compression models. Benchmarking papers have been instrumental in driving research in responsible machine learning. As AI continues to progress rapidly, it is crucial to uncover and understand unforeseen challenges that arise. For instance, at ICLR last year, several benchmarking papers (e.g., [1\\u20133]) were accepted, highlighting emerging issues related to bias and privacy in large language models (LLMs). We believe that our benchmarking work provides valuable insights to the community and opens up an important new research topic on fair neural compression. 
By establishing a novel metric for evaluating bias in image compression and showing clear weaknesses across all existing neural compression techniques, this paper lays the groundwork for future advancements in this area (more details are provided in response to the next question).\\n### W2: Impact of loss functions and training regimes\\nWe thank the reviewer for suggesting that the source of bias may be attributable to loss functions or training regimes. Through this paper, we have decoupled the bias that comes from the dataset and the bias that is inherent to the model (loss function, training regime, architecture). Across our experiments, we reveal a general trend of bias that is consistent across multiple loss functions (MSE, LPIPS) and architectures (VAE variants, VAE with GAN, diffusion-based). From our initial data balancing experiments, we demonstrate that bias is present across models both in settings where the training data set is racially balanced and imbalanced. To more thoroughly highlight our point, we present new experiments with African-only training data (Appendix H). These experiments show that the bias is introduced by the model itself (loss function, training regime, architecture). We believe that this paper strongly motivates algorithmic methods for bias mitigation, which can be done in future works. While it is still unclear exactly which components of the models contribute to bias the most (a challenging problem), this is worthy of investigation and can be done in an extension of this work that explores algorithmic mitigation. Ultimately, we believe that this paper highlights a flaw with current neural image compression models and motivates a path of algorithmic mitigation of bias in these models.\\n### W3: Comparison against traditional codecs\\nWe thank the reviewer for mentioning traditional codecs. In this paper, we focus on low bitrate regimes that can be as low as 0.1 bpp. This is a bitrate at which JPEG cannot operate. 
In order to compare to JPEG, we have compressed images using the lowest quality levels (implemented with Pillow). The bitrates achieved by JPEG are greater than 1 bpp, which is significantly greater than the bitrates we evaluate neural compression models in. We conducted the same bias analysis experiments from Section 4.2 for the JPEG codec. We have updated the paper with Appendix J. From the figure, we can see that the JPEG codec experiences similar bias towards the African racial group. This finding resonates with the existing literature [4], in which the authors conclude that facial images of individuals with darker skin tones suffer from higher error rates in facial recognition tasks after JPEG compression. These results reinforce the presence of bias in JPEG compression settings as well as further justify the use of our phenotype classifier as an evaluation metric.\\n### References\\n[1] Gupta, Shashank, et al. \\\"Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\\\\\\n[2] Bel\\u00e9m, Catarina G., et al. \\\"Are Models Biased on Text without Gender-related Language?.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\\\\\\n[3] Staab, Robin, et al. \\\"Beyond Memorization: Violating Privacy via Inference with Large Language Models.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\\\\\\n[4] Yucer, Seyma, et al. \\\"Does lossy image compression affect racial bias within face recognition?.\\\" 2022 IEEE International Joint Conference on Biometrics (IJCB).\"}", "{\"summary\": \"This paper introduces a framework for assessing bias in neural image compression models, analyzing seven popular models and finding prevalent racial bias, manifested as unequal degradation of facial features. The study indicates that while using a racially balanced dataset helps mitigate bias, it is not a complete solution. 
\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Strength: (1) The topic of this paper is novel, and the racial bias of image compression on face datasets is studied. (2) The authors have conducted quite sufficient experiments around this argument to verify that this problem does exist\", \"weaknesses\": \"Weakness: (1) Although the author presents a novel topic, it seems that the author did not fully explore the way to solve the problem. Using a more balanced dataset seems to be one solution, but after discussion by the authors, this approach does not completely eliminate racial bias. So, how can this problem be better solved? The author needs to give further elaboration. In fact, this is the point I am most concerned about. (2) The authors used traditional metrics such as PSNR and SSIM in their experiments to reflect racial bias. However, these metrics differ significantly from human visual experience. I wonder if the authors explored more perceptual metrics, such as LPIPS or FID?\", \"questions\": \"Please refer to the weaknesses above. 
I wonder if the authors explored more perceptual metrics, such as LPIPS or FID?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the valuable feedback.\\n### W1: Lack of bias mitigation techniques\\nThis paper is not intended as a solution-centered paper, but rather a benchmarking/evaluation paper to expose previously unexposed bias issues within neural compression models. Benchmarking papers have been instrumental in driving research in responsible machine learning. As AI continues to progress rapidly, it is crucial to uncover and understand unforeseen challenges that arise. For instance, at ICLR last year, several benchmarking papers (e.g., [1\\u20133]) were accepted, highlighting emerging issues related to bias and privacy in large language models (LLMs). We believe that our benchmarking work provides valuable insights to the community and opens up an important new research topic on fair neural compression. By establishing a novel metric for evaluating bias in image compression and showing clear weaknesses across all existing neural compression techniques, this paper lays the groundwork for future advancements in this area (more details are provided in response to the next question).\\n### W2: Impact from network architecture is not fully explored\\nWe thank the reviewer for suggesting that the source of bias be attributable to the architecture. Through this paper, we have decoupled the bias that comes from the dataset and the bias that is inherent to the model (loss function, training regime, architecture). Across our experiments, we reveal a general trend of bias that is consistent across multiple loss functions (MSE, LPIPS) and architectures (VAE variants, VAE with GAN, diffusion-based). 
From our initial data balancing experiments, we demonstrate that bias is present across models both in settings where the training data set is racially balanced and imbalanced. To more thoroughly highlight our point, we present new experiments with African-only training data (Appendix H). These experiments show that the bias is introduced by the model itself (loss function, training regime, architecture). We believe that this paper strongly motivates algorithmic methods for bias mitigation, which can be done in future works. While it is still unclear exactly which components of the models contribute to bias the most (a challenging problem), this is worthy of investigation and can be done in an extension of this work that explores algorithmic mitigation. Ultimately, we believe that this paper highlights a flaw with current neural image compression models and motivates a path of algorithmic mitigation of bias in these models.\\n### W3: Fundamental cause of bias & training with African-only datasets\\nWe thank the reviewer for the suggestion of training with African-only images. We have conducted experiments training neural compression models with the African subset of the FaceARG dataset, which has provided us with additional valuable insights. We have updated the paper with Appendix H. As the figures show, training with African-only images helps reduce the bias in skin type in one model (GaussianMix-Attn), but not in other models. This result indicates that using an African-only dataset doesn\\u2019t completely remove the bias, and that the remaining bias is introduced by the model (loss function, training regime, architecture), which tends to generate lighter-colored images. \\\\\\nWe have also attached visualized results (Appendix Fig. H.2) to show examples of where training with African-only images helps or does not help reduce the bias in skin type. 
\\\\\\nThis additional experiment gave us more insight into this problem, so we sincerely thank the reviewer for the suggestion!\\n### References\\n[1] Gupta, Shashank, et al. \\\"Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\\\\\\n[2] Bel\\u00e9m, Catarina G., et al. \\\"Are Models Biased on Text without Gender-related Language?.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\\\\\\n[3] Staab, Robin, et al. \\\"Beyond Memorization: Violating Privacy via Inference with Large Language Models.\\\"\\u00a0*The Twelfth International Conference on Learning Representations*.\"}", "{\"title\": \"Reply to Reviewer MyYt - Part 2\", \"comment\": \"### W4: Further Justification of the Phenotype Classifier with human evaluations\\nWe have additionally run human studies to analyze how humans classify skin type in facial images. We asked users to label both uncompressed original images and distorted images decoded at the lowest bitrate of the GaussianMix-Attn model. From the human studies, we observe that humans\\u2019 ability to discern the original skin type for the African racial group drops by 32%, much higher than the reduction for other races, which is at most 14%. This reflects that humans perceive distorted African facial images as lighter, which is also consistent with both our initial observation that the African racial group suffers significantly from losing skin type information, and the classifier accuracy results.\"}" ] }
Dojny642Dy
IVCR-200K: A Large-Scale Benchmark for Interactive Video Corpus Retrieval
[ "Ning Han", "Yawen Zeng", "Shaohua Long", "Chengqing Li", "Sijie Yang", "Dun Tan", "Zemin Liu", "Jianfeng Dong", "Jingjing Chen" ]
In recent years, significant developments have been made in both video retrieval and video moment retrieval tasks, which respectively retrieve complete videos or moments for a given text query. These advancements have greatly improved user satisfaction during the search process. However, previous work has failed to establish meaningful "interaction" between the retrieval system and the user, and its one-way retrieval paradigm can no longer fully meet the personalization and dynamic needs of at least 80.8% of users. In this paper, we introduce a more realistic setting, the Interactive Video Corpus Retrieval task (IVCR), which enables multi-turn, conversational, realistic interactions between the user and the retrieval system. To facilitate research on this challenging task, we introduce IVCR-200K, a bilingual, multi-turn, conversational, high-quality dataset with abstract semantics that supports video retrieval and even moment retrieval. Furthermore, we propose a comprehensive framework based on multi-modal large language models (MLLMs) to support several user interaction modes with more explainable solutions. Our extensive experiments demonstrate the effectiveness of our dataset and framework.
[ "Interactive Video Corpus Retrieval Dataset; Cross-Modal Video Retrieval; Multi-Modal Large Language Model" ]
https://openreview.net/pdf?id=Dojny642Dy
https://openreview.net/forum?id=Dojny642Dy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "joPrWSFMpl", "SoeRvAusoS", "GAO61zvdeh", "CI4U7lrqTs" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730602531149, 1730571361821, 1731657891027, 1730555622198 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7258/Reviewer_HXD5" ], [ "ICLR.cc/2025/Conference/Submission7258/Reviewer_SPcM" ], [ "ICLR.cc/2025/Conference/Submission7258/Authors" ], [ "ICLR.cc/2025/Conference/Submission7258/Reviewer_KDhs" ] ], "structured_content_str": [ "{\"summary\": \"Considering the need for personalization and the dynamic requirements of many users, this paper points out that establishing \\\"interaction\\\" between the retrieval system and the user is meaningful. Specifically, it introduces the Interactive Video Corpus Retrieval task (IVCR), which facilitates multi-turn, conversational, and realistic interactions between users and the retrieval system. And then, a large-scale benchmark called IVCR-200K and a comprehensive framework based on multi-modal large language models (MLLMs) are proposed to enhance interaction between models and users.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"S1. The paper includes a questionnaire survey on user search behavior, highlighting users' preferences for interactive search functionalities. Moreover, it summarizes users\\u2019 intricate behavioral patterns that underscore the necessity of an interactive retrieval system.\\nS2. The proposal of the large-scale benchmark IVCR-200K for interactive video retrieval is a significant contribution, and the paper provides a thorough analysis of its high quality.\\nS3. The design of the InterLLaVA framework for interactive video retrieval serves as a valuable example for future work in the field.\\nS4. The writing is well-structured and easy to read, enhancing the paper's overall effectiveness.\", \"weaknesses\": \"W1. 
As indicated in Table 2, InterLLaVA's performance in video moment retrieval may not be state-of-the-art, revealing some weaknesses in the model.\\nW2. During the training of InterLLaVA, questions are divided stage by stage, which may hinder its ability to handle complex or \\\"jumping\\\" questions. This could explain the model's subpar performance in direct video moment retrieval.\", \"questions\": \"Q1. Can InterLLaVA effectively and directly handle complex or \\\"jumping\\\" questions that involve topic shifts or require nuanced reasoning?\\nQ2. Figures 4 and 5 indicate that the IVCR-200K dataset exhibits a long-tail distribution, which may affect the model\\u2019s learning in interactive video retrieval, particularly its performance on less common queries. Was this factor considered in the construction of IVCR-200K?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The focus of this paper is interactive video retrieval. The authors propose a new task formulation, \\u201cInteractive Video Corpus Retrieval (IVCR)\\u201d, which involves multi-turn interactions between a user and a retrieval system. To study this task, the authors curate videos and captions from 5 existing datasets (TVQA, LSMDC, ActivityNet, DiDeMo, MSR-VTT), and augment these using GPT-4 to construct a new dataset, IVCR-200K. Concretely, GPT-4 is used to re-write the captions/descriptions associated with the original videos, synthesize multi-turn dialogues and predict text-based query responses. The dataset captions are also translated into Chinese.\\n\\nIn order to tackle IVCR, the authors propose an interactive retrieval framework called InterLLaVA. This combines a frozen LLM with a fine-tuned video encoder to perform both video retrieval and video moment retrieval. 
InterLLaVA is compared with existing methods on IVCR-200K, where it is found to perform strongly for video retrieval, but less well for video moment retrieval.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Clarity: Overall, the paper was well-structured. Although I had difficulty understanding some of the specific claims made by the authors (see weaknesses below), I was able to follow the overall message of the work.\", \"significance\": \"I think interactive video retrieval is a relatively understudied topic with significant commercial potential (indeed, the growing scale of video hosting social media platforms underscores the importance of video search). Consequently, I think the paper focuses on a useful and impactful problem.\", \"originality\": \"This work explores a different interactive retrieval setting than has been considered in prior work. Specifically, as highlighted in Figure 1(c), the user can mix and match different kinds of queries within a single dialogue. I think this is an interesting and natural direction to explore.\", \"quality\": \"The authors do a good job of visualizing the contents of the dataset. The hierarchical visualization in Figure 9 of the supplementary material, in particular, conveys the content distribution effectively.\", \"weaknesses\": \"1. It was good that the authors used a survey to motivate the interactive retrieval task. However, I found it quite difficult to find any details of what the survey contained. I read the description given in Appendix A, which describes the scale of the survey but not the survey questions. Without a detailed description of survey questions and response statistics, it is quite difficult to assess the validity of claims made in the introduction. For example, in lines L048-L051, the authors say \\u201cour questionnaire indicate that interactive demands exhibit intricate behavioral patterns.\\u201d\\n\\n2.
To support the claim that \\u201cusers desire \\u2018multi-turn interaction\\u2019 with systems\\u201d (L044), the authors cite the number of rounds of interaction in the ShareGPT conversation dataset as being \\u201cremarkably high at 7.27.\\u201d (L048). However, my understanding is that ShareGPT primarily contains data from text-based chat dialogues, rather than multi-turn video retrieval. As such, it doesn\\u2019t seem like particularly strong evidence that users want multi-turn behavior in the video setting. A caveat here: unfortunately, the dataset has been taken offline so I\\u2019m basing my claim that ShareGPT is primarily text dialogues from my memory - please correct me if I\\u2019m mistaken. The authors could potentially address this by providing additional evidence that is more specific to the multi-turn video retrieval setting, or by presenting arguments for why text-based interactions are meaningful evidence for their claims. \\n\\n3. Table 1 contains a comparison with prior work. In the \\u201cReal interaction\\u201d column, only the proposed dataset (IVCR-200K) is ticked. However, if I understand correctly, the dialogue is mostly generated by GPT-4 (using the pipeline shown in Figure 3). I say mostly, because in L427, the authors write \\u201cwhile most dialogues consist of concatenated single-round exchanges, we also gather a limited number of multi-turn dialogues from real users.\\u201d I understand that there is a human expert review process, but I would not describe this data pipeline as \\u201cReal interaction\\u201d, given the heavy role played by GPT-4. The proposed pipeline design also makes it somewhat unsurprising that the average length of questions and answers in IVCR-200K is much longer than AVSD (as discussed in L269). (I would expect GPT-4-generated text to have this property.) It would be appreciated if the authors could clarify how their dataset meets their definition of \\\"Real interaction\\\"?\\n \\n4.
I found several parts of the paper quite difficult to follow. To give a few concrete examples:\\n(4.1) When defining the \\u201cinteractive\\u201d task in L099-L123, the third component of the definition is \\u201cReal interaction.\\u201d which is described as follows: \\u201cThe pioneers create simulated environments to generate interactive data (Ma & Ngo, 2022), but we emphasize that only truly understanding users can optimize a better search experience.\\u201d I\\u2019m not sure what \\u201ctruly understanding users\\u201d means here? Is this a technical claim or an aspiration? How does it relate to the property of \\u201cReal interaction.\\u201d If I should interpret it to mean \\u201cusing real user data\\u201d, would this not imply that real user data should be collected? (My understanding, pointed out in weakness 3, and reflecting Figure 3 of the paper, is that the multi-turn dialogues are mostly generated by GPT-4). \\n(4.2) In figure 1, it would be helpful to have a caption explain the takeaways. I understand that (a), (b) and (c) each illustrate a different task, but is there some significance to the red sad faces and happy green faces? Some possible interpretations I had were (i) The face is happy for (c) because this is the formulation proposed by the authors; (ii) The face is happy for (c) because this is what the survey suggested that users prefer; (iii) The face is sad for (a) and (b) because the retrieval result quality is lower as a consequence of failing to use the history of previous queries. If the authors could clarify this, it would be appreciated. \\n(4.3) In the caption for Table 2, it says \\u201cbold represents optimal performance.\\u201d However, every entry in the \\u201cInterLLaVA (Ours)\\u201d row is bolded, even when it is not the best performing. For instance, under the R@1 IoU=0.5 metric, this method lags behind several of the Moment Retrieval baselines. Are these baselines comparable? 
If so, shouldn't 2D-TAN be bold here?\", \"questions\": \"1. I wasn\\u2019t quite sure how to interpret the phrase L427 \\u201cwhile most dialogues consist of concatenated single-round exchanges, we also gather a limited number of multi-turn dialogues from real users.\\u201d Could the authors provide statistics on how many of the multi-turn dialogues are gathered from real users?\\n\\n2. In Table 2, InterLLaVA R@1 accuracy is reported, but R@10 accuracy is not. Was there a rationale for this?\\n\\n3. In L525, it says \\u201cIt also suggests that video retrieval itself is relatively less influenced by multi-turn context understanding.\\u201d I find this result surprising. Do the authors have intuitions for why the multi-turn setup in Table 4 harms performance on retrieval so dramatically?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a new dataset, benchmark and method for a new task called Interactive Video Corpus Retrieval. The dataset was curated from existing video datasets and re-framed into a multi-turn retrieval dataset using ChatGPT to connect multiple annotations together to simulate a multi-turn conversation. Based on these videos the authors proposed a benchmark to evaluate performance on video-retrieval and moment-retrieval tasks. For the multi-turn benchmark the authors simply use multiple turns to query the system (not very clear about how this is done) and measure the compounding errors across multiple turns. In terms of the method, the authors propose to solve the problem in a two-stage manner, where the first stage uses a standard retrieval system to rank the top-k videos that are later fed into a Multimodal LLM that re-ranks them and also regresses boundaries for the moment-retrieval task.
The method performs on-par with other baselines for the retrieval tasks but highly underperforms in the moment-retrieval task (this is using a single turn). When adding multi-turn, the method seems to benefit from it for moment-retrieval but it seems to be harmful for video-retrieval, although the reason behind this is unclear since the multi-turn setting is not very clear to me.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a method, benchmark and dataset for what seems to be a relevant task for users.\\nThe paper makes an effort to implement existing methods for the different tasks, creating a decent benchmarking of current technologies for video and moment retrieval in the new dataset.\", \"weaknesses\": \"The paper has several weaknesses that will need to be tackled before being accepted. Most of them are related to the clarity of the task and dataset, missing references, and clear technical flaws.\\n\\n1. The paper claims that 80.8% of the users would prefer interactive search functionality. This is a very strong claim, especially with the little information provided on how this number was found. The paper mentions a survey on 500 users, but did not provide the survey. Depending on the nature of the questions and the population, the results can be interpreted in different manners. I think the authors need to rephrase or clarify accurately the meaning of this study.\\n\\n2. The paper does not cite a few important works very related to the topic. First, the ActivityNet dataset was proposed by Caba et al. [A], not Krishna; the one referenced in the paper is ActivityNet Captions, which is an extension of the original ActivityNet. Please give proper attribution to the previous fundamental works in the field. The paper has lots of connections with [B], since the latter introduced the task of single video moment retrieval in a video corpus. However, there was no mention of this paper whatsoever.
\\n\\n3. The paper utilizes automated methods to create and refine dataset annotations. They claim that there is a refinement process afterwards but do not describe how this is performed. Additionally, the multi-turn setting is not explained clearly in the paper. What kinds of multi-turn scenarios are there? There are so many ways to interact with a system in a multi-turn manner. One could look for videos and refine the search based on the results, look for videos then look for moments in the video, look for moments and then refine the search, etc. The authors mentioned a few of these examples but did not explain how the benchmark was built to cover them. Without this context, it is really hard to understand the results presented in the multi-turn setting in table 4.\\n\\n4. The authors claim to be measuring BLEU-4 and GPT-4 score metrics on this task. However, I don't understand what kind of answers the authors are evaluating and what the ground-truth for these answers is. Is it simply the caption of the video? This requires clarification. \\n\\n\\n5. The provided baselines make sense for the video and moment retrieval tasks. However, the use of a Multimodal LLM for the fine-grained part of the task (video moment retrieval) seems to be not appropriate. First of all, the MLLM can only process a limited number of frames, which limits the resolution of the system to regress accurate time-stamps. This fact can be seen in table 2, where a method that was state-of-the-art in video-language grounding 4 years ago (2D-TAN) outperforms the proposed method by more than 40 absolute points.\\n\\n6. The training of the MLLM has one major technical flaw. The paper states that they use the MLLM for re-ranking the top retrieved videos given by the retrieval system. They do it by training the MLLM with three losses, one of them being a cross-entropy loss shown in equation 2.
This cross-entropy loss is flawed since it is applied to the video indices in each retrieved set, where the model is expected to identify the \\\"correct\\\" video among the top-k candidates. This setup treats each re-ranking instance as a unique classification problem over a dynamically changing set of \\\"classes\\\" (the retrieved videos), with each new top-k set effectively re-shuffling the class labels. Cross-entropy relies on fixed, stable classes to guide learning, and without this consistency, the model faces moving targets that prevent it from generalizing any meaningful re-ranking ability. A more suitable approach would involve using ranking-specific losses that do not depend on changing class labels but instead focus on optimizing the order of relevance among retrieved videos, such as pairwise or list-wise ranking losses. Given that there was no ablation of these losses, it's hard to diagnose how much this loss affects the performance of the system. However, if my understanding is correct, this loss is not the way to train a re-ranking system.\\n\\n7. The conclusions on table 4 are not generalizable to the benchmark; they only talk about the limitations of the current method on performing fine-grained localization. The paper could have proposed a two-stage method using a state-of-the-art retrieval system + a state-of-the-art moment retrieval system and evaluated it under the same setting as table 4, since this setting is the main selling point of the paper. However, the paper only evaluates the proposed system despite the weaknesses and limitations. \\n\\n\\n\\n[A] Caba Heilbron, F., Escorcia, V., Ghanem, B., & Carlos Niebles, J. (2015). ActivityNet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 961-970).\\n\\n\\n[B] Escorcia, Victor, et al.
\\\"Finding moments in video collections using natural language.\\\" arXiv preprint arXiv:1907.12763 (2019).\", \"questions\": [\"1. **Interactive Search Functionality**:\", \"How was the survey of 500 users conducted? Could the authors provide details on the survey\\u2019s methodology, questions, and population demographics?\", \"How are the findings interpreted given potential biases in the question design or population selection?\", \"Could the authors clarify or rephrase the claim to reflect the limitations of this survey?\", \"2. **Citations and Attribution**:\", \"Why was the original ActivityNet dataset attributed to Krishna instead of Caba Heilbron et al.? Could the authors adjust this citation to credit the foundational work correctly?\", \"Given the connection to the task introduced in Escorcia et al. (2019), could the authors discuss this paper\\u2019s relevance and why it was omitted from the literature review?\", \"3. **Dataset Annotations and Multi-turn Setting**:\", \"How is the refinement process (done by humans) for dataset annotations performed? What was the criteria to refine and the methodology used?\", \"Could the authors clarify what types of multi-turn interactions were modeled and how these interactions were benchmarked in the dataset?\", \"How does the benchmark ensure coverage of diverse multi-turn interaction scenarios, as mentioned in the examples? Authors mentioned that some real multi-turn interactions are included, how were they collected?\", \"4. **Evaluation Metrics (BLEU-4 and GPT-4 Scores)**:\", \"What exactly are the authors evaluating with BLEU-4 and GPT-4 scores, and what is considered the ground-truth answer?\", \"Are these evaluations solely based on video captions, or are there other elements influencing the ground-truth answers?\", \"5. 
**Appropriateness of MLLM for Fine-grained Video Moment Retrieval**:\", \"How do the authors address the limitations of using an MLLM that processes a limited number of frames, which may impact the system\\u2019s ability to regress accurate timestamps?\", \"Given that older video-language grounding methods (like 2D-TAN) perform better by over 40 absolute points, is the MLLM genuinely suited for this fine-grained task?\", \"6. **Technical Flaw in MLLM Training with Cross-entropy Loss**:\", \"How do the authors justify using cross-entropy loss to re-rank video indices, given that re-ranking each top-k set effectively reshuffles the classes and lacks stable targets?\", \"Could the authors consider ranking-specific losses (pairwise or listwise) that avoid dynamic class labels, or provide an ablation study to clarify the cross-entropy loss\\u2019s impact on performance?\", \"Why can the model not be evaluated with Recall @ 10 on table 2?\", \"7. **Generalizability of Conclusions on Fine-grained Localization in Table 4**:\", \"Would the authors consider implementing a two-stage method combining state-of-the-art retrieval and moment retrieval systems to validate their setting and offer a more comprehensive evaluation? Why was this not considered for table 4?\", \"Why were the limitations of the proposed system not benchmarked against alternatives, especially if the multi-turn setting is a main focus of the paper?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DoDNJdDntB
Flow Matching for Posterior Inference with Simulator Feedback
[ "Benjamin Holzschuh", "Nils Thuerey" ]
Flow-based generative modeling is a powerful tool for solving inverse problems in physical sciences that can be used for sampling and likelihood evaluation with much lower inference times than traditional methods. We propose to refine flows with additional control signals based on a simulator. Control signals can include gradients and a problem-specific cost function if the simulator is differentiable, or they can be fully learned from the simulator output. In our proposed method, we pretrain the flow network and include feedback from the simulator exclusively for finetuning, therefore requiring only a small amount of additional parameters and compute. We motivate our design choices on several benchmark problems for simulation-based inference and evaluate flow matching with simulator feedback against classical MCMC methods for modeling strong gravitational lens systems, a challenging inverse problem in astronomy. We demonstrate that including feedback from the simulator improves the accuracy by $53$%, making it competitive with traditional techniques while being up to 67x faster for inference. Upon acceptance, we will make our code publicly available.
[ "generative modeling", "simulation-based inference", "astronomy" ]
Reject
https://openreview.net/pdf?id=DoDNJdDntB
https://openreview.net/forum?id=DoDNJdDntB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zx5xUmRUvN", "vI4XoL5RDy", "pJsNop1QTW", "n1BvF23JU4", "hzSOppUv0z", "YckSgdrp41", "YagVKj3401", "XsX39112Nw", "Wku2d9Fv5C", "TcqYOF19vr", "Sy9OKWsaHs", "ObYXYmhTdO", "LqcXuP8ckD", "JUucHYxEAp", "CMPb4vqs8n", "977PUOFYT6", "85mq89p6rE", "0u5GJExTcI" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732896202801, 1733144824125, 1732606616763, 1732896299700, 1730693932655, 1730885923171, 1733168677579, 1734728283519, 1730731408133, 1733165372012, 1733144784802, 1730605104460, 1730738110991, 1737523448169, 1733217952657, 1732895928951, 1733209738350, 1732896101483 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1338/Authors" ], [ "ICLR.cc/2025/Conference/Submission1338/Authors" ], [ "ICLR.cc/2025/Conference/Submission1338/Area_Chair_pxQg" ], [ "ICLR.cc/2025/Conference/Submission1338/Authors" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_sNYF" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_tcVJ" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_B78P" ], [ "ICLR.cc/2025/Conference/Submission1338/Area_Chair_pxQg" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_B78P" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_sNYF" ], [ "ICLR.cc/2025/Conference/Submission1338/Authors" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_ZUi2" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_qrNo" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_qrNo" ], [ "ICLR.cc/2025/Conference/Submission1338/Authors" ], [ "ICLR.cc/2025/Conference/Submission1338/Reviewer_ZUi2" ], [ "ICLR.cc/2025/Conference/Submission1338/Authors" ] ], 
"structured_content_str": [ "{\"title\": \"Response qrNo\", \"comment\": \"Thank you for the detailed review and very thoughtful suggestions.\\n\\nWe have updated the manuscript based on your feedback and suggestions.\\n\\n- **Sequential NPE/NLE methods** We focused on non-sequential NPE methods in this paper; however, we agree with you that more baseline comparisons with sequential SBI methods are a good suggestion. If the 1-step estimates become better as paths become straighter (for example due to further improvements in the flow matching training, rectified flows, consistency models, better couplings, etc.), then the 1-step estimate becomes exact. Therefore inference via the ODE with simulator feedback can be seen as a sequential method, as predictions get refined in each step with simulator feedback. This sequential approach avoids many complications of other sequential SBI methods such as mismatches with proposal posteriors, truncations, or the need to retrain networks on new samples during inference. \\n- **More experiments** We agree that experiments on gravitational waveforms and spiking neurons are interesting, but we think that they are outside of the scope of this rebuttal. We want to stress again that while the dimensionality of the gravitational lensing problem is low for the parameters we want to infer, the dimensionality of the observation is high (a 160 x 160 image). Calling the simulator online is relatively cheap, but it is very sensitive to several parameters, which makes the problem difficult and a nice \\\"challenge\\\" for SBI.\\n- **Bad posterior compared to MCMC and more baselines**\\nWe have updated figures 15 and 16 which now also show the posteriors obtained without simulator feedback. As can be seen, including simulator feedback visibly and consistently reduces the bias of the posterior. We have also run an additional coverage statistic.
The evaluation shows that the flow matching (+simulator) posteriors have good coverage, albeit not perfect. They perform better than the MCMC-based baselines HMC and AIES for the given computational budget. HMC and AIES do not always give the same posterior, as they may fail to converge sometimes when modeling a large number of systems, which explains why they compare worse than flow matching on average, but can perform better for specific systems such as the one shown in figure 15. Additionally, we have included more baselines for posterior sampling with diffusion models as mentioned in the general response, which demonstrate that simulator feedback gives a good solution for a difficult problem and outperforms other state-of-the-art methods for conditional sampling from diffusion models.\"}", "{\"title\": \"Response sNYF\", \"comment\": [\"Thank you for the detailed review and very thoughtful suggestions.\", \"**Limited empirical evaluation**: We agree that including more experiments can increase the quality of the empirical evaluation. Given the limited number of pages, it is however difficult to introduce and analyse multiple challenging real-world experiments and it was outside the scope of this rebuttal.\", \"As mentioned in the general response, the gravitational lensing problem is not easy and increasing the dimensionality of the problem (for example by using a pixelated, high-dimensional representation of the source galaxy) also increases the degrees of freedom, which can actually make it easier for the predicted samples to faithfully reconstruct the observation. 
Therefore we think the dimensionality of the problem is at a sweet spot, where it is challenging, but can also be compared directly to MCMC approaches.\", \"We have added two additional baselines (LGD-MC and TDS, see general response) which can be used for general non-linear inverse problems with a diffusion model prior.\", \"We extended the related work to include the papers [5-6].\"], \"regarding_your_questions\": [\"*What are the challenges to scaling the approach to high-dimensional spaces?* We think that flow matching with our proposed simulator feedback can scale to high-dimensional spaces in the same way that diffusion models/flow matching do.\", \"*How sensitive is the method to the quality of the simulator?* If the simulator contains significant approximation errors, that can be a problem. We want to improve the predicted posterior from flow matching using feedback from the simulator. However, if the simulator is not accurate, then feedback from the simulator might not provide useful information to further improve the samples. In this case, the control network will learn to ignore simulator feedback and output the pretrained flow without any corrections.\", \"*Have you explored using multiple different types of control signals simultaneously? Could this provide complementary benefits?* We have not explored using multiple control signals simultaneously, but it is an interesting idea and a good experiment for future work. We believe that there can be benefits from using multiple control signals in the same way that multiple, possibly physics-informed losses can help in many problems.\", \"*Could the method be extended to handle multiple observations simultaneously in a more efficient way?* We have not done any experiments with multiple observations. It should be fairly straightforward to modify the control signals to account for multiple observations. The conditioning of the flow network for multiple observations can be more difficult.
We think that an approach similar to [7] might work for conditioning flows as well, but we have not done any tests.\", \"[7] Geffner, T., Papamakarios, G., & Mnih, A. (2023). Compositional score modeling for simulation-based inference. In International Conference on Machine Learning (pp. 11098-11116). PMLR.\"]}", "{\"comment\": \"Dear all,\\n\\nThe deadline for the authors-reviewers phase is approaching (December 2).\\n\\n@For reviewers, please read, acknowledge and possibly further discuss the authors' responses to your comments. While decisions do not need to be made at this stage, please make sure to reevaluate your score in light of the authors' responses and of the discussion.\\n\\n- You can increase your score if you feel that the authors have addressed your concerns and the paper is now stronger.\\n- You can decrease your score if you have new concerns that have not been addressed by the authors.\\n- You can keep your score if you feel that the authors have not addressed your concerns or that remaining concerns are critical.\\n\\nImportantly, you are not expected to update your score. Nevertheless, to reach fair and informed decisions, you should make sure that your score reflects the quality of the paper as you see it now. Your review (either positive or negative) should be based on factual arguments rather than opinions. In particular, if the authors have successfully answered most of your initial concerns, your score should reflect this, as it otherwise means that your initial score was not entirely grounded by the arguments you provided in your review. Ponder whether the paper makes valuable scientific contributions from which the ICLR community could benefit, over subjective preferences or unreasonable expectations.\\n\\n@For authors, please respond to remaining concerns and questions raised by the reviewers. Make sure to provide short and clear answers. If needed, you can also update the PDF of the paper to reflect changes in the text. 
Please note however that reviewers are not expected to re-review the paper, so your response should ideally be self-contained.\\n\\nThe AC.\"}", "{\"title\": \"Response B78P\", \"comment\": \"Thank you for the detailed review and very thoughtful suggestions.\\n\\n- **Additional baselines**: We have added comparisons with Twisted Diffusion Sampler TDS [1] and Loss-Guided Diffusion LGD-MC [2] in table 2 as recommended by reviewers sNYF and B78P. Both methods do not produce posterior samples with accurate reconstructions of the observation and face similar difficulties as Diffusion Posterior Sampling (DPS).\\n- **Coverage tests**: We have included the coverage test TARP [3] in figure 13, which was recommended by you. The evaluation shows that the flow matching (+simulator) posteriors have good coverage, albeit not perfect. They perform better than the MCMC-based baselines HMC and AIES for the given computational budget. \\n- We have updated the related work section, adding an extended discussion of Legin et al. (2023).\\n\\nWe have addressed the concerns regarding the flow matching results for the SBI tasks in table 1 and the posteriors/residuals of the gravitational lensing experiments in our global response. \\n\\n[1] Wu, L., Trippe, B., Naesseth, C., Blei, D., & Cunningham, J. P. (2023). Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36.\\n\\n[2] Song, J., Zhang, Q., Yin, H., Mardani, M., Liu, M. Y., Kautz, J., ... & Vahdat, A. (2023, July). Loss-guided diffusion models for plug-and-play controllable generation. In International Conference on Machine Learning (pp. 32483-32498). PMLR.\\n\\n[3] Lemos, P., Coogan, A., Hezaveh, Y., & Perreault-Levasseur, L. (2023, July). Sampling-based accuracy testing of posterior estimators for general inference. In International Conference on Machine Learning (pp. 19256-19273).
PMLR.\"}", "{\"summary\": \"This paper introduces a method to improve flow-based generative models for simulation-based inference by incorporating simulator feedback through control signals. The key idea is to refine a pretrained flow network with additional control signals based on simulator outputs, which can include gradients and problem-specific cost functions for differentiable simulators or learned signals from non-differentiable simulators. The authors demonstrate their method on several benchmark problems and show substantial improvements in accuracy (53%) while maintaining fast inference times (up to 67x faster than MCMC) on a challenging strong gravitational lensing application.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles an interesting problem of incorporating simulator feedback into generative models trained with flow matching. With the recent interest in flow matching for learning generative models, the problem of incorporating downstream rewards, e.g. simulator feedback, is a critical one. This paper presents one of the first efforts in that direction (based on my knowledge).\", \"The application to the lensing problem is quite interesting. Generative models are well suited to such scientific inverse problems, and this method could be a useful addition to the toolbox for such problems.\", \"The method supports feedback from black-box simulators as well as differentiable simulators where gradient feedback is available.\", \"The method also provides considerable speed-ups in the simulation over other approaches.\", \"The paper is also quite clearly written and easy to follow.\"], \"weaknesses\": \"* The empirical evaluation seems quite a bit limited. Specifically, the paper only considers a single application on strong gravitational lensing. The other synthetic tasks are quite small and it is unclear how general the method is.
Moreover, on the gravitational lensing experiment, the results are not convincing. The residuals indicate the samples are not capturing the posterior. Finally, the authors consider a relatively simple variant of the problem (23D). In this setting FM + simulator feedback just acts as a faster approximation to MCMC. Where FM + simulator feedback might have an advantage is in high-dimensional unstructured data (e.g. images in the case of lensing)\\n* Another shortcoming of the empirical evaluation is the relatively limited baselines. There are several approaches for unbiased inference with diffusion priors (like DPS) [1-4], so it would be good to add comparisons to some of these baselines.\\n* There is some missing discussion of related work about the guidance of flow matching models [5-6]. \\n* The code to reproduce experiments is not provided though there are sufficient details in the paper. \\n\\n[1] Wu, Z., Sun, Y., Chen, Y., Zhang, B., Yue, Y., & Bouman, K. L. (2024). Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors. arXiv preprint arXiv:2405.18782.\\n\\n[2] Wu, L., Trippe, B., Naesseth, C., Blei, D., & Cunningham, J. P. (2023). Practical and asymptotically exact conditional sampling in diffusion models. Advances in Neural Information Processing Systems, 36.\\n\\n[3] Dou, Z., & Song, Y. (2024). Diffusion posterior sampling for linear inverse problem solving: A filtering perspective. In The Twelfth International Conference on Learning Representations.\\n\\n[4] Chung, H., Lee, S., & Ye, J. C. (2023). Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems. arXiv preprint arXiv:2303.05754.\\n\\n[5] Nisonoff, H., Xiong, J., Allenspach, S., & Listgarten, J. (2024). Unlocking Guidance for Discrete State-Space Diffusion and Flow Models. arXiv preprint arXiv:2406.01572.\\n\\n[6] Zheng, Q., Le, M., Shaul, N., Lipman, Y., Grover, A., & Chen, R. T. (2023). Guided flows for generative modeling and decision making.
arXiv preprint arXiv:2311.13443.\", \"questions\": [\"In addition to the weaknesses above:\", \"What are the challenges to scaling the approach to high-dimensional spaces?\", \"How sensitive is the method to the quality of the simulator? What happens when the simulator contains significant approximations or errors?\", \"Have you explored using multiple different types of control signals simultaneously? Could this provide complementary benefits?\", \"Could the method be extended to handle multiple observations simultaneously in a more efficient way?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper studies the problem of modelling the posterior $p(\\\\theta\\\\mid x)$ in a generative model $\\\\theta\\\\to x$, where $p(x\\\\mid\\\\theta)$ is available as a simulator, but possibly without access to exact likelihoods. When the sampling of $\\\\theta$ given $x$ is modelled as a conditional (on $x$) neural ODE and this ODE is trained by flow matching objectives, it is proposed to place an inductive bias on the drift model: the output of simulator or its gradient, evaluated at an intermediate time point or its extrapolation to the target space, is encoded and given as an input to the drift model. Such a form of the drift model is hypothesised to improve the approximation of the target distribution by effectively guiding the drift to the modes of the posterior. Experiments are done on several low-dimensional simulation-based inference tasks, including the lens and source parameter estimation problem in strong gravitational lensing.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem studied is highly relevant as foundation models (including diffusion and flow-based models) become available in various scientific domains. 
It is important to develop inference methods for inverse problems that have low bias, good posterior coverage, and high efficiency of training and inference -- this paper attempts to solve these problems.\", \"The proposed algorithm plausibly attacks these challenges (even if it is not very well demonstrated by the experiments, see below) and should give an asymptotically correct solution to the posterior sampling problem.\", \"Interesting analysis of algorithm variants in Section 5.2.\"], \"weaknesses\": [\"Throughout the text, there are many inaccurate or somewhat sloppy statements and references that confuse a specialist in flow matching models (and would likely impede understanding by non-specialists as well). A list follows.\", \"Abstract: I would suggest revising it to explain the problem setting and approach at a higher level.\", \"First sentences: \\\"Flow-based generative modeling is a powerful tool for solving inverse problems in physical sciences\\\" -- this is a bold claim. The use of flow-based models for inverse problems is not yet well-established; in fact, this is what this paper aims to do.\", \"The next few sentences do not set up the problem well (it is not even explained that we are talking about continuous normalising flows).\", \"The results at the end do not make sense without context: what does \\\"improves the accuracy by 53%\\\" mean?\", \"Introduction:\", \"L046: \\\"normalising flows transform a noise distribution to the posterior distribution\\\" is true, but:\", \"It is a statement with low specificity (VAEs and GANs also transform noise to data).\", \"These models are first introduced in the setting of training from samples, which is not what we usually have in Bayesian inference (no ground truth samples from the posterior).\", \"Two of the three citations about diffusion models are actually about ODEs / flow matching. 
The two are of course connected, but I think it is unfair to call flow-based models an instance of \\\"success of diffusion models [...] specifying a corruption process\\\". For instance, flow ODEs can be learned that are not the probability flow ODEs of diffusion processes, including Liu et al.'s rectified flow (the first iteration is indeed a diffusion ODE, the later 'straightened' ones are not) and Tong et al.'s minibatch OT-based flow matching.\", \"One solution could be to explicitly state the connection between FM and diffusion in the cases where it exists (e.g., for Ornstein-Uhlenbeck noise, optimal drifts of ODE and SDE are both expressed in terms of the score, so learning one is tantamount to learning the other).\", \"In the paragraph starting L052, somehow we have jumped from a general distribution-matching setting (which is not how flow-based models were introduced -- they are trained from samples!) to a conditional posterior modelling setting. Please state the setting/assumptions (e.g., that we have conditional posterior samples from a simulator).\", \"Related work:\", \"\\\"Inverse problems with diffusion models\\\": This seems to be better named \\\"solving inverse problems under a diffusion model prior\\\". Although the works that do this with Monte Carlo (e.g., Cardoso et al, Dou et al,) are mentioned (you could also consider [Song et al.](https://proceedings.mlr.press/v202/song23k.html)), there is also stochastic optimisation (e.g., [Mardani et al.](https://arxiv.org/abs/2305.04391), [Graikos et al.](https://arxiv.org/abs/2206.09012)), or amortisation by RL methods (e.g., [Black et al.](https://arxiv.org/abs/2305.13301), [Fan et al.](https://arxiv.org/abs/2305.16381), [Venkatraman et al.](https://arxiv.org/abs/2405.20971)).\", \"\\\"Flow matching\\\": It is strange to see \\\"optimal transport paths (Lipman et al.)\\\" contrasted with \\\"independent couplings or rectified flows (Liu et al., Tong et al.)\\\". 
In fact, it is Rectified Flow and OT-CFM (Liu et al., Tong et al.) who consider **non-independent couplings** through rectification steps or OT couplings (thus actually approximating the dynamic OT), respectively, while Lipman et al.'s flow matching is equivalent to one using independent couplings and is OT only on the level of the conditional probability paths used for training.\", \"FM theory:\", \"Equation (1): $\\\\theta$ in subscript should be $\\\\phi$.\", \"L156: Because smoothness conditions are stated, they should be precise: Do you assume Bochner integrability? Continuous differentiability (how many times?) in both $\\\\theta$ and $t$? It should also be stated that $p_t(x)=p(t,x)$ and $p$ is a function $[0,1]\\\\times\\\\mathbb{R}^d\\\\to\\\\mathbb{R}$.\", \"L182 is hard to understand: what is meant by \\\"$q(z)=p_1(\\\\theta)$? It should be said that the conditioning variable $z$ is identified with the endpoint $\\\\theta_1$, etc.\", \"Controls for improved accuracy:\", \"LL201-203 do not make sense to me. The paragraph begins with conditioning of ODEs -- how is an old trick in diffusion \\\"for example\\\" w.r.t. such conditioning?\", \"NB. Equation (7) will be the *exact* $t=1$ endpoint of integration ($\\\\theta_1=\\\\hat{\\\\theta}_1$) if we have a perfectly fit OT or any model with straight integral curves (such as the converged ODE after many iterations of rectified flow)!\", \"LL225-227 and later at LL352-355: Once again, I am surprised by the repeated discussion of Liu et al. and Tong et al. yet the *omission of the actual algorithms they propose* (rectification and MBOT coupling), which both produce **straighter** paths than a vanilla FM (and hence inference in fewer steps).\"], \"experiment_results_are_not_convincing\": [\"Results are not consistently showing improvement (cf. 
Table 1) and error bars are not reported, making it impossible to assess significance.\", \"In Section 5, can you comment on computational efficiency in terms of wall time?\", \"Lensing:\", \"The evaluations do not seem to guarantee coverage ($\\chi^2$ is obviously not sufficient for this, and the SBC tests use projection only on a single parameter). Have you considered coverage tests such as those used in Legin et al., Section 3? Currently it is not demonstrated that the proposed method achieves more accurate posterior sampling.\"], \"questions\": \"Please see above.\\n\\nAs a simpler alternative to the encoder architecture, did you consider physics-inspired ways of providing the simulator information to the drift model? For example, simply expressing the drift as (learned vector field NN) + (learned scalar or diagonal NN) x (simulator gradient), as often done in work on diffusion models for sampling of Boltzmann distributions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
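A brief illustration of the review's NB above about Equation (7), written in generic flow-matching notation (the symbols $v_\phi$ and $\hat{\theta}_1$ are placeholders and not necessarily the paper's exact notation):

```latex
% 1-step endpoint extrapolation from an intermediate state \theta_t at time t:
\hat{\theta}_1 = \theta_t + (1 - t)\, v_\phi(\theta_t, t, x)
% If the learned integral curves are exactly straight (velocity constant along
% each trajectory, e.g. a fully rectified flow), then \hat{\theta}_1 = \theta_1
% holds exactly for every t, so the extrapolation incurs no error.
```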
I encourage the authors to address the reviewers' comments and to resubmit to a future conference.\", \"additional_comments_on_reviewer_discussion\": \"The author-reviewer discussion has been constructive and has led to a number of clarifications and improvements, with the addition of a few new results.\"}", "{\"summary\": \"The paper considers the problem of solving inverse problems, mostly in physical sciences. In particular, we are interested in posterior inference. This paper proposes to use the flow matching perspective refined with additional control signals coming from a simulator. Moreover, the authors consider various scenarios depending on the differentiability of the simulator. Finally, they show the empirical results on a few simulator-based inference problems and the gravitational lensing inverse problem.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper has some strengths overall, which I will outline below.\\n\\n\\n**Strengths:**\\n1. Introduction of the flow matching perspective for this set of problems, enabling a faster inference procedure.\\n2. Providing solutions for using either differentiable or non-differentiable simulators.\", \"weaknesses\": \"Despite its strengths, the paper has a few major and minor weaknesses.\\n\\n**Major weaknesses:**\\n\\n1. I\\u2019m really concerned about the presentation of the results and the abilities of this method, because of the lack of sufficient comparisons and metrics. \\n2. Regarding the gravitational lensing problem (presented as the main real-world task being tackled), they are operating in a relatively small space, making the problem easy and solvable. However, they didn\\u2019t include any coverage test (e.g., TARP [1], or any other), so it\\u2019s hard to say if the posteriors are good. Moreover, as far as I understand, the evaluation is only on their own simulations, the ones that the model was trained on. 
In particular, we don\\u2019t know if the model is robust to any OOD examples. Finally, the presented residuals (e.g., in Fig. 6) look bad.\", \"3. Regarding the part: \\u201e(\\u2026) however, previous methods are usually restricted to point estimates, use simple variational distributions or Bayesian Neural Networks (Schuldt et al., 2021; Legin et al., 2021; 2023; Poh et al., 2022) that are not well suited to represent more complicated high-dimensional data distributions.\\u201d - Legin et al. 2021; 2023 use a likelihood-free inference (or simulation-based inference) framework to get posteriors from simple feed-forward nn (not Bayesian).\", \"4. The authors proposed the intensive tests only on LV, which is a relatively simple problem and certainly not enough for a fair comparison. Moreover, the results in Tab. 1 don\\u2019t show any superiority of the proposed method - in particular, NSF is getting similar or better results.\\n\\n\\n**Minor weaknesses:**\\n\\n1. The authors should include comparisons with novel posterior sampling baseline methods other than DPS, e.g., LGD\\u2212MC [2].\\n2. In line 319, should be \\u201espline\\u201d \\n\\n\\n**References:**\\n\\n[1] Lemos, P., Coogan, A., Hezaveh, Y., & Perreault-Levasseur, L. (2023, July). Sampling-based accuracy testing of posterior estimators for general inference. In International Conference on Machine Learning (pp. 19256-19273). PMLR.\\n\\n[2] Song, J., Zhang, Q., Yin, H., Mardani, M., Liu, M. Y., Kautz, J., ... & Vahdat, A. (2023, July). Loss-guided diffusion models for plug-and-play controllable generation. In International Conference on Machine Learning (pp. 32483-32498). 
PMLR.\", \"questions\": \"I would like to see especially the experiments and responses to the issues mentioned as weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response and additional results in the rebuttal. I still believe that for a largely empirical paper, the experiments are limited (in terms of the scale as well as breadth) but the ideas presented are interesting for the community. I have raised my score to reflect this.\"}", "{\"title\": \"Response ZUi2\", \"comment\": \"Thank you for the detailed review, thoughtful suggestions and positive feedback.\\n\\n- We agree that including more comparisons with traditional NPE methods would strengthen the empirical results. We have included two additional methods for conditional sampling from diffusion models (LGD-MC and TDS, see global response), but will add more in a future version of the paper. Additionally, we will explore the pretraining/finetuning tradeoff more comprehensively for the gravitational lensing experiment and update figure 1 based on your suggestion.\", \"regarding_your_questions\": [\"*Is simulator feedback limited to flow matching?* The optimal transport coupling paths that we have used are nice, because they produce straighter paths, thus improving the accuracy of the 1-step estimate. In principle, this can be replaced by other, similar objectives/couplings. As for whether simulator feedback can be used for more general NPE methods, we think it depends on the specific method and is hard to answer in general. Flow matching/diffusion can be used to give 1-step estimates, based on which we get feedback from the simulator to correct the current trajectory. For other NPE approaches, this mechanism for simulator feedback would need to be rethought.\", \"*Improvements from multiple samples* Yes, that is a good idea. 
If we considered a batch of samples for comparison with the observation instead of a single sample, this could lead to better simulator control (especially for stochastic simulators). We didn't consider it in this work, since it makes the training and inference more complicated, but it is a great idea for follow-up work.\", \"*Comparison with other NPE methods* We will include more comparisons with NPE methods in an updated version of the paper.\", \"*How critical was the $t>0.8$ empirical threshold?* We found it to be important, as it made it much easier for the control network to learn how to correct the flow based on the control signal. For $t < 0.8$, the 1-step estimates were not accurate enough for feedback from the simulator to be helpful. In this case, it was very difficult for the control network to extract useful corrections for the trajectory from the control signal. The threshold is directly related to how straight the flow paths are, which depends on both the problem and the flow matching setup.\"]}
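To make the 1-step-estimate mechanism discussed in this response concrete, here is a minimal sketch. All callables (`velocity`, `simulate`, `control`) and the Euclidean residual are hypothetical placeholders standing in for the paper's components; the structure only illustrates "extrapolate to t=1, simulate, feed the misfit back into the drift", gated by the $t > 0.8$ threshold mentioned above.

```python
import numpy as np

def drift_with_feedback(theta_t, t, x_obs, velocity, simulate, control, t_min=0.8):
    """Hypothetical sketch of simulator feedback on a flow-matching drift.

    `velocity`, `simulate`, and `control` are placeholder callables, not the
    paper's actual implementation.
    """
    v = velocity(theta_t, t, x_obs)            # base flow-matching drift
    if t <= t_min:                             # below the threshold, the
        return v                               # 1-step estimate is too crude
    theta_hat = theta_t + (1.0 - t) * v        # 1-step endpoint extrapolation
    x_hat = simulate(theta_hat)                # run the simulator at the guess
    residual = x_hat - x_obs                   # control signal: misfit vs. obs
    return v + control(theta_t, t, residual)   # learned correction of the drift
```

In a trained model, `control` would itself be a small network conditioned on time and the misfit; here it is just a function argument so the control flow is visible.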
The overall motivation is to increase simulation efficiency when a stochastic simulator is used for simulation-based inference (as is usually the case).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Timely paper on an important problem in the field -- using information from the simulator to speed up simulation-based inference, in order to make it more sample-efficient and well-calibrated.\", \"Discussion of several control mechanisms, which could be applicable depending on the specific simulator and domain.\", \"Good discussion of underlying theory, both the flow matching aspect as well as simulation-based inference. Sound comparison to surrounding context literature, e.g. relevance to classifier-free guidance.\", \"Application to an important and challenging problem in cosmology -- strong lens modeling, and comparison with a baseline HMC approach.\"], \"weaknesses\": [\"In the strong lensing section, while a comparison with HMC is done, a comparison with \\\"traditional\\\" neural simulation-based inference approaches like NPE is lacking. This comparison would significantly strengthen the outcomes of this experiment.\", \"The pretraining/finetuning tradeoff is not comprehensively explored -- as far as I understand, a big advantage of the method is that one could just finetune using a smaller number of simulation calls. While this is mentioned briefly and tested for specific cases, a more comprehensive study of how the fraction of finetuning for a fixed simulator budget affects the outcome would make the results quite a bit stronger.\", \"The high-level presentation could be made slightly clearer -- e.g., in Fig. 1, marking that it is the posterior $p(\\theta\\mid x)$ that is being modeled/targeted, to immediately situate the reader in the problem setting.\"], \"questions\": [\"Is there something particular about the flow matching setup (e.g. 
linear trajectories) that makes simulator control applicable here, in contrast to a more traditional method like NPE? Does the method rely on assuming optimal transport coupling paths (eqs. 5-6), or is it more generally applicable?\", \"If I understand correctly, the goal of simulator control is to produce flow matching trajectories that better model the joint $(\\theta, x_0)$ space by giving an additional signal that the interpolated parameter-generated samples should produce simulations consistent with the original $x_0$. This reduces the effect of simulator stochasticity. Could this partly be recreated by generating multiple samples from the same parameter point partway through training? Similarly, could producing a batch of samples for comparison to the $x_0$ further improve the flow matching control signal?\", \"In the strong lensing section, why was the comparison primarily made to HMC, rather than e.g. NPE?\", \"How critical was the $t > 0.8$ empirical threshold choice for controls, and do the authors expect this to be problem-dependent or fairly universal?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces flow matching with simulator feedback for simulation-based inference, which extends offline flow matching for neural posterior estimation with an online phase that uses online simulations to improve the accuracy of the estimated posterior distribution. Indeed, the authors observe that perfectly learning the score function corresponding to the true posterior distribution is challenging and propose to access the simulator online and correct these imperfections at evaluation time. The method is supposedly much more efficient than alternative online methods such as MCMC while having the potential to provide as accurate results. 
The paper introduces two types of control signals, a gradient-based and a learning-based control signal, which are made for differentiable (and deterministic) and non-differentiable (and potentially stochastic) simulators respectively. The method is empirically tested on 4 common SBI benchmarking tasks and a \\\"strong gravitational lensing\\\" problem. Results highlight that simulator feedback helps improve the accuracy of the posterior distributions.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"*Novelty*: To the best of my knowledge, the idea of including simulator feedback in flow matching for posterior estimation is novel\", \"*Soundness*: The idea of using simulator feedback to correct for the imprecision of score matching may indeed be helpful in certain applications.\", \"*Presentation*: I found the tables and figures easy to read and informative, while also being visually appealing.\"], \"weaknesses\": \"While I must acknowledge certain positive aspects of the paper, I also have several concerns that motivate my negative recommendation, which I am listing below.\\n1. While I find the figures and tables quite enlightening, I find the presentation to be a weakness. There are multiple hand-wavy explanations and claims that I find a bit confusing. See below for concrete examples.\\n2. Empirical validation: I was surprised by the numbers from figures 3 and 4, which seem worse than existing alternatives. In particular, the paper only compares to offline methods while it is clear from [1] that SNLE and SNPE, the sequential alternatives to NLE/NPE, perform much better than the proposed method. It is also arguable whether these benchmarks are the most relevant ones, as they are quite low-dimensional in terms of observation dimensionality and are not fully representative of applications where SBI can shine. For instance, gravitational waveforms and spiking neurons are interesting benchmarks used in many SBI papers. 
I also find the results of Flow matching + posterior quite bad compared to MCMC methods in figures 14 and 15, where we can clearly see an issue with bias and also variance of the predicted posteriors. \\nOverall, the empirical validation seems insufficient to me and does not clearly demonstrate that the proposed method is of any real use. \\n3. Relevance: Sequential SBI methods, which call the simulator online, are often complicated to motivate as they require access to the simulator while also arguing that doing inference on this simulator without SBI is hard because evaluating the simulator is computationally demanding. While I agree this is not the case for all applications, I would expect the paper to clearly highlight and benchmark the method on use cases where simulating more samples offline to get a good amortised posterior is not enough and calling the simulator online does not take too much time. \\n\\nI am not confident these concerns can be addressed in the scope of the discussion (especially regarding results and presentation) but I am open to discussing with authors.\", \"hand_wavy_explanations_examples\": [\"l33-36: This seems an intricate way of explaining likelihood and posterior distributions, which are very standard mathematical objects most readers should already know about.\", \"l37-43: The presentation of SBI is again a bit weird in my opinion. It does not clearly say when and why SBI may be necessary and what it solves. It may be interpreted as if SBI was only about Bayesian inference for the uninformed reader where frequentist methods exist as well.\", \"l48: what do you mean by \\\"it became clear ... 
be specified a priori\\\"?\", \"l49-50: this is a vague and pretty strong statement that I would expect to be clarified and supported by references.\", \"l52-54: I do not understand what you mean by saying there is \\\"no feedback loop\\\".\", \"l74-78: you mix the high level description of the method with specific implementation decisions, which makes it hard to understand which aspects the reader should focus on.\", \"Section 3 was presented in a way that I did not find easy to digest.\", \"l205: \\\"a fundamental problem...\\\" why this is a fundamental problem is unclear to me.\", \"l254-255: What shall we conclude from that sentence?\", \"l277-282: It seems like a hacky solution\", \"l283: While I understand you are trying to emphasise that the algorithm still learns the \\\"true\\\" posterior distribution, this is stated in a hand-wavy way in my opinion.\", \"l345-351: This is quite confusing again. Why is there a dropout layer? It is quite complicated to grasp the details of all variants.\", \"5.4: This is again a very hand-wavy claim without actual theoretical or empirical support.\", \"[1]:http://proceedings.mlr.press/v130/lueckmann21a/lueckmann21a.pdf\"], \"questions\": \"I would be happy if authors could provide arguments against my concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I appreciate the time you have spent on the rebuttal and believe the idea presented in your paper is interesting.\\n\\nNevertheless, the paper requires major updates to demonstrate the value of the proposed approach. In particular, I encourage the authors to align the targeted applications, where their method would be valuable, with benchmarks that fall under that scope. 
I would encourage authors to demonstrate that their method is easily applied in such settings, saves compute, and performs on par with or better than existing alternatives.\\n\\nI have also checked other reviews and it seems that some of my concerns are shared by other reviewers.\"}", "{\"title\": \"Global Response\", \"comment\": \"We thank all reviewers for their constructive feedback and we have updated the manuscript. The main changes comprise the following:\\n\\n- **Updated text**: We have updated the text to clarify and improve some smaller issues mentioned by reviewers tcVJ, qrNO and B78P.\\n- **Additional baselines**: We have added comparisons with Twisted Diffusion Sampler TDS [1] and Loss-Guided Diffusion LGD-MC [2] in table 2 as recommended by reviewers sNYF and B78P. Neither method produces posterior samples with accurate reconstructions of the observation, and both face similar difficulties as Diffusion Posterior Sampling (DPS).\\n- **Coverage tests**: We have included the coverage test TARP [3] in figure 13, which was recommended by reviewer B78P. The evaluation shows that the flow matching (+simulator) posteriors have good coverage, albeit not perfect. They perform better than the MCMC-based baselines HMC and AIES for the given computational budget. \\n- **Updated plots**: We have updated the plots showing posteriors for the two systems in figures 15 and 16, which now include the posterior for flow matching without simulator feedback. Including simulator feedback visibly and consistently reduces the bias of the posterior. \\n\\nOverall, we appreciate the critical feedback on the experiments. We have identified two main concerns that we would like to address:\\n\\n- **Flow matching on par/not significantly improving baselines in toy SBI problems** The main contribution of this paper was not to show that flow matching/diffusion improves existing low-dimensional benchmark tasks. 
There are already several recent works that advocate for diffusion training in SBI, e.g. [4,5,6]. As discussed in the paper, flow matching has been evaluated for these exact SBI tasks and compared against other neural posterior estimation (NPE) methods in [4], where it also did not outperform all baselines in every scenario. Diffusion/flow matching provides a stable training algorithm, scales to large network architectures and also works well when parameters/observations are high-dimensional - which is not the case for most NPE methods. We included table 1 with baseline comparisons for the SBI toy tasks in section 5.1, since we do follow-up experiments with flow matching in sections 5.2 and 5.3. The main contribution of this paper is to introduce simulator feedback as an extension to flow matching for SBI, as we have identified that offline learning from data alone is not sufficient to obtain very accurate posteriors for many problems. \\n- **Gravitational lensing experiment too low-dimensional/possible bias in predicted posterior** The parameter space comprises 23 dimensions. While this is small in comparison to more high-dimensional data such as images, the marginal posterior distributions for the parameters can be very narrow, as the simulator is very sensitive to some of them. In addition to that, while the parameter space is relatively small, the observations, which are images, are high-dimensional.\\nTherefore this is a very difficult problem and some of the modeled systems still show visible residuals. We have added additional baselines as mentioned above and included an additional coverage test, which shows that flow matching has better coverage than the other MCMC-based methods with the given computational budget. \\n\\n[1] Wu, L., Trippe, B., Naesseth, C., Blei, D., & Cunningham, J. P. (2023). Practical and asymptotically exact conditional sampling in diffusion models. 
Advances in Neural Information Processing Systems, 36.\\n\\n[2] Song, J., Zhang, Q., Yin, H., Mardani, M., Liu, M. Y., Kautz, J., ... & Vahdat, A. (2023, July). Loss-guided diffusion models for plug-and-play controllable generation. In International Conference on Machine Learning (pp. 32483-32498). PMLR.\\n\\n[3] Lemos, P., Coogan, A., Hezaveh, Y., & Perreault-Levasseur, L. (2023, July). Sampling-based accuracy testing of posterior estimators for general inference. In International Conference on Machine Learning (pp. 19256-19273). PMLR.\\n\\n[4] Wildberger, J., Dax, M., Buchholz, S., Green, S., Macke, J. H., & Sch\\u00f6lkopf, B. (2024). Flow matching for scalable simulation-based inference. Advances in Neural Information Processing Systems, 36.\\n\\n[5] Sharrock, L., Simons, J., Liu, S., & Beaumont, M. (2024). Sequential neural score estimation: Likelihood-free inference with conditional score based diffusion models. In Proceedings of the 41st International Conference on Machine Learning, PMLR.\\n\\n[6] Gloeckler, M., Deistler, M., Weilbach, C. D., Wood, F., & Macke, J. H. (2024). All-in-one simulation-based inference. In Proceedings of the 41st International Conference on Machine Learning, PMLR.\"}", "{\"comment\": \"Thanks to the authors for the response -- I will keep my 6 score, especially in absence of comparison with NPE benchmarks.\"}", "{\"title\": \"Response tcVJ\", \"comment\": \"Thank you for the detailed review and very thoughtful suggestions.\\n\\nWe have updated the manuscript based on your feedback and clarified the points that you had listed.\\n\\n- **Results are not consistently showing improvements (cf. table 1)**\\nThe experiments in table 1 are all low-dimensional with strong baselines producing good posteriors given the available dataset sizes/simulator budget. 
So flow matching being comparable to the best performing models is a good outcome, as flow matching/diffusion training has the advantage of scalability to larger networks and higher-dimensional inputs. As mentioned in the general response, flow matching has been evaluated against other neural posterior estimation methods on these tasks already in [1]. Since we do follow-up experiments with modifications of flow matching in section 5, we have also included some baseline comparisons for the tasks that we had run ourselves.\", \"- **Computational efficiency in terms of wall time**\\nTraining and evaluating the test metrics on the flow matching network on the largest budget of $10^7$ simulations took ca. 52 minutes (200 epochs); however, the generation of the dataset took an additional estimated x minutes. The network on $10^5$ simulations took on average ca. 13 minutes for training and evaluation. Finetuning with the gradient-based control signals took an additional 2 hours, 2 minutes (early stopping after 83 epochs), while with the learning-based control signal, the run took ca. 1 hour 32 minutes (early stopping at 150 epochs).\", \"- **Coverage tests using TARP [1]**\\nWe have included an evaluation of the coverage in appendix C.3, which shows that flow matching both with and without simulator feedback achieves good - although not perfect - results. They perform better than the MCMC-based baselines HMC and AIES for the given computational budget. \\n- **Rectification of flows**\\nWe had cited the rectified flow paper by Liu et al. without discussing the rectification algorithm, as pointed out by you. It is a great idea to finetune with simulator feedback on the rectified flow, as paths should be straighter, thus producing better 1-step estimates.\\nWe have included experiments with the LV task in appendix B.1. We find that algorithm 1 from Liu et al. produces slightly worse results than the flow matching training following Lipman et al. 
When rectifying the flow in 2- and 3-Rectified Flow, the C2ST score gets worse. We think that the rectified flows perform worse, because of the conditioning on the observations and training on paired data in the reflow stage. When finetuning with simulator, the C2ST score for 2- and 3-Rectified Flows is better than for the finetuned 1-Rectified Flow, showing that with straighter paths, simulator feedback is better. However, because the velocity model is worse for 2- and 3-Rectified Flows, the final C2ST is not better than the one with the simulator feedback finetuning strategy used in section 5.3.\\n- **Using simple control networks**\\nYou asked the question, if it is possible to use a simpler control network in the form a learned scalar or diagonal neural network times the simulator gradient. We had initially used simpler control networks, like the ones you mentioned. However we found in the gravitational lensing experiments that both the time and the value of the cost function provide significant improvements and very simple models did not work very well. We haven't considered a diagonal neural network, but it could be an interesting future experiment.\\n\\n[1] Wildberger, J., Dax, M., Buchholz, S., Green, S., Macke, J. H., & Sch\\u00f6lkopf, B. (2024). Flow matching for scalable simulation-based inference. Advances in Neural Information Processing Systems, 36.\"}" ] }
DoB8DmrsSS
Diffusion Guided Adversarial State Perturbations in Reinforcement Learning
[ "Xiaolin Sun", "Feidi Liu", "Zhengming Ding", "Zizhan Zheng" ]
Reinforcement learning (RL) systems, while achieving remarkable success across various domains, are vulnerable to adversarial attacks. This is especially a concern in vision-based environments where minor manipulations of high-dimensional image inputs can easily mislead the agent's behavior. To this end, various defenses have been proposed recently, with state-of-the-art approaches achieving robust performance even under large state perturbations. Upon closer investigation, however, we found that the effectiveness of the current defenses is due to a fundamental weakness of the existing $l_p$-norm constrained attacks, which can barely alter the semantics of the input even under a relatively large perturbation budget. In this work, we propose SHIFT, a novel diffusion-based state perturbation attack to go beyond this limitation. Specifically, we train a history-conditioned diffusion model, enhanced with policy guidance and realism detection to generate perturbed states that are semantically different from the true states while remaining realistic and history-aligned to avoid detection. Evaluations show that our attack effectively breaks existing defenses, including the most sophisticated ones, and significantly lowers the agent's cumulative reward in various Atari games by more than 50\%. The results highlight the vulnerability of RL agents to semantics-aware adversarial perturbations, indicating the importance of developing more robust policies for safety-critical domains.
[ "Reinforcement Learning", "Adversarial Example", "Diffusion Model" ]
Reject
https://openreview.net/pdf?id=DoB8DmrsSS
https://openreview.net/forum?id=DoB8DmrsSS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zLKBcoYCJ9", "xVCb5AXSqt", "xTH78NNHRO", "wrekW8NFHH", "v51W5Rlhh2", "v3EbLt3WWI", "tzUu0d7Yd9", "sdg0DmqERy", "sY8TTKo5JN", "qFsmSMZUxi", "nUDj2pZjfi", "d6APBdK4M3", "ZVGQ8EwyjP", "YnQFWvj2tQ", "Sn1Lq9Srcb", "RXF2P2EVcv", "Jq2ooDfEDH", "FGGARtoQ9O", "ElDd565j9B", "CZU20RObhC", "BTmns0HtFY", "0Cs5qRrLjI" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732670050560, 1732173982596, 1732173769322, 1737524116784, 1733168909244, 1733109466358, 1730662420231, 1732556889875, 1732174720039, 1734888258465, 1733109521630, 1732174188132, 1732749068440, 1732173611217, 1730636203151, 1732174593881, 1733159842909, 1730677121916, 1730580084136, 1732174253866, 1732175074127, 1733109342718 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11313/Reviewer_L64s" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Reviewer_L64s" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Area_Chair_xj5j" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Reviewer_37tk" ], [ 
"ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Reviewer_37tk" ], [ "ICLR.cc/2025/Conference/Submission11313/Reviewer_eNaA" ], [ "ICLR.cc/2025/Conference/Submission11313/Reviewer_D5uT" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ], [ "ICLR.cc/2025/Conference/Submission11313/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to authors\", \"comment\": \"Thank you for your detailed response and the additional experiments. The supplementary experiments demonstrate that your proposed method has unique advantages. However, I still believe there are several areas in the manuscript that require further development. The additional experiments presented in the rebuttal should be incorporated into the main text to strengthen your statements and conclusions, as these are the basic standards of an ICLR paper.\\nI am raising my score to 5, and after discussing with other reviewers and the Area Chair, I will determine whether to further increase my score.\"}", "{\"title\": \"Author Responses\", \"comment\": \"Dear Reviewer eNaA:\\n\\nThank you for your valuable and insightful feedback on our work. We will address your questions and concerns point by point.\\n\\n**Q1: Motivation of changing the semantics.**\", \"a1\": \"As shown in recent work such as DP-DQN [3] and Diffusion History (inspired by [5]), current $l_p$ norm bounded attacks including PA-AD cannot bypass these diffusion-based defense methods in environments with raw-pixel inputs such as Atari games even with a relatively large perturbation budget (please refer to **Table 2-1** in the general response). 
We argue that the main reason is that these attacks are not able to change the essential semantics of the image input under a reasonable attack budget, so that a diffusion-based defense can purify the injected noise and achieve strong defense performance.\\n\\nThe existence of these strong diffusion-based defenses against $l_p$ norm bounded attacks motivates us to develop new attacks that can change the semantics of states to mislead those defense methods into choosing non-optimal actions even after purifying the perturbed states. Our attack still remains stealthy from both a static and a dynamic perspective by utilizing a history-conditioned diffusion model with realism guidance. The static stealthiness is demonstrated through the low reconstruction loss of the perturbed states generated by our method shown in Figure 3 a) in our paper. To better illustrate that our attacks are stealthy from a dynamic perspective, we have added an ablation study to compare the Wasserstein-1 distance between a perturbed state and the true state in the previous time step. As shown in the general response **Table 2-2**, our attack method has the lowest average Wasserstein distance among all the attacks. As argued in [4], the Wasserstein distance captures the cost of moving pixel mass and represents image manipulation more naturally than the $l_p$ distance. The result shows that even when the agent is aware of the true previous state $s_{t-1}$, the perturbed state $\\\\tilde{s}_t$ generated by our attack is stealthier than those generated by other attacks. \\n\\n**Q2: Recent defense baselines in [1] and [2].**\", \"a2\": \"We thank the reviewer for pointing us to these studies. We have included them in the related work section of the revised submission. However, after reading both papers carefully, we found that the defense methods proposed in [1] and [2] might not directly apply to our setting due to the following reasons. 
First, both studies conduct their experiments in MuJoCo environments where the state spaces are much smaller compared with Atari environments with image input. Further, these approaches are already computationally expensive (both take more than 20 hours) to train in MuJoCo environments. Thus, directly applying them to image domains can be computationally prohibitive, which points to an interesting research direction for further study. Second, the code for [1] is not publicly available at this time and the code for [2] only implements MuJoCo environments, so we cannot easily evaluate these two defense methods against our attacks in Atari environments.\\n\\nAlthough we were not able to implement the game-theoretic defense in [1] due to the lack of code and the expected high computational overhead in Atari environments, we would like to point out that the DP-DQN [3] defense currently considered in the paper also adopts a game-theoretic approach by identifying an approximate Stackelberg equilibrium. While the vanilla DP-DQN uses the PGD attack to simulate worst-case attacks, we have retrained DP-DQN by replacing the bounded PGD attack with our unbounded diffusion-guided attack. We trained this modified DP-DQN on the Pong environment for 1 million steps, but the reward remained at -21 (the worst case) throughout training. This gives evidence that even game-theoretic defenses might not be strong enough to defend against our attacks.\\n\\nWe sincerely hope our responses have addressed all your concerns. If so, we kindly hope that you consider increasing the rating of our paper. Please do not hesitate to add further comments if you have any additional questions or need further clarifications.\\n\\n[1] Liang et al., Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations. ICLR 2024\\n\\n[2] Liu et al., Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies. 
ICLR 2024\\n\\n[3] Xiaolin Sun and Zizhan Zheng, Belief-Enriched Pessimistic Q-Learning against Adversarial State Perturbations. ICLR 2024\\n\\n[4] Eric Wong, Frank R. Schmidt, and J. Zico Kolter. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations. ICML 2019. \\n\\n[5] Zhihe Yang and Yunjian Xu, DMBP: Diffusion model-based predictor for robust offline reinforcement learning against state observation perturbations. ICLR 2024\"}", "{\"title\": \"General Response(2)\", \"comment\": \"**Table 3 Temporally Coupled Attack on Atari**\\n| | **PGD(Temporal Coupled)** |\\n|:---:|:---:|\\n| **Pong** | **Reward** |\\n| **DQN** | -21(0) |\\n| **SA-DQN** | -21(0) |\\n| **DP-DQN** | 20(1.73) |\\n\\nIn response to Reviewers eNaA and L64s, we have implemented the temporally coupled attack proposed in [4] on top of PGD (as their code is not publicly available) and evaluated the PGD-based temporally coupled attack on Atari Pong with $\\\\epsilon = 15/255$ and $\\\\bar{\\\\epsilon} = 7.5/255$. This table shows that the temporally coupled attack can compromise SA-DQN but not diffusion-based DP-DQN defense even with a large perturbation budget, indicating the challenge of adapting this attack to Atari games with raw-pixel input. We conjecture that this is because the attack is still constrained by an $l_p$ norm bound, making it difficult to alter the essential semantics of image input. 
\\n\\n**Table 4-1 Pretrained Diffusion History defense against attacks in [1]**\\n| Attack | Defense | Pong | FreeWay |\\n|---|---|---|---|\\n| **B&C** | DQN | -21 (0.00) | 23 (0.00) |\\n| | SA-DQN | 11 (0.00) | 25 (0.00) |\\n| | Diffusion History | 20 (1.41) | 27.2 (0.68) |\\n| **Blur** | DQN | -21 (0.00) | 18 (0.00) |\\n| | SA-DQN | -20 (0.00) | 27 (0.00) |\\n| | Diffusion History | 20 (0.58) | 33.2 (0.37) |\\n| **Rotate 1** | DQN | -20 (0.00) | 26.6 (0.45) |\\n| | SA-DQN | -18 (0.00) | 21 (0.00) |\\n| | Diffusion History | 14.6 (2.68) | 27.6 (0.45) |\\n| **Shift (1,0)** | DQN | -21 (0.00) | 26 (0.00) |\\n| | SA-DQN | -21 (0.00) | 24 (0.00) |\\n| | Diffusion History | 17.8 (2.85) | 27.2 (0.37) |\\n\\n\\nIn response to Reviewer 37tk, we have implemented the four high-sensitivity direction attacks in [1] (as their code is not publicly available), and conducted new experiments to evaluate how these attacks perform under diffusion based defenses for Atari Pong and Freeway. This table shows that the Diffusion History defense with a diffusion model trained on clean data only is able to defeat the Brightness&Contrast and Blurred Observation attacks, as well as Rotation and Shifting attacks with small rotations and shifts, although it is ineffective against large rotations (>1 degree) and shifts used in [1]. \\n\\n**Table 4-2 Fine-tuned Diffusion History against attacks in [1]**\\n| Pong | Defense | Reward |\\n|---|---|---|\\n| **Rotate 3** | Diffusion History | 20 (0.71) |\\n| **Shift (2,1)** | Diffusion History | 18.8 (1.79) |\\n\\nWe further fine-tuned a diffusion model by randomly applying rotations and shifts to the game frames during training, where the rotation degree is randomly chosen between 0 and 3, and the shift magnitude is randomly chosen between (0,0) and (3,3). This table shows that the fine-tuned Diffusion History defense can successfully mitigate both Rotation and Shifting attacks, even under relatively large rotations and shifts considered in [1]. 
In contrast, the Diffusion History defense is ineffective against our policy-adaptive attack. \\n\\n**Table 4-3 Wasserstein Distances of attacks in [1] and ours**\\n| **Freeway** | **Wasserstein** |\\n|:---:|:---:|\\n| **B&C** | 0.036(0.004) |\\n| **Blur** | 0.006(0.003) |\\n| **Rotate 1** | 0.006(0.004) |\\n| **Shift (1,0)** | 0.07(0.001) |\\n| **Ours** | 0.001(0.0002) |\\n\\nThis table compares the average Wasserstein Distance between a perturbed state and the previous step\\u2019s true state across an episode, under the attack methods in [1] and our attack. The results show that our attack method has the lowest Wasserstein distance compared with the four attacks evaluated in [1], indicating that our attack is more stealthy. \\n\\n**Table 5 DDPM vs. EDM**\\n| **Pong** | **DDPM** | | | **EDM** | | |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | Reward | Manipulation Rate(%) | Deviation Rate(%) | Reward | Manipulation Rate(%) | Deviation Rate(%) |\\n| **DQN** | -20.6(0.5) | 76.6(1) | 83.6(1) | -20.7(0.5) | 87.1(1.9) | 89.6(1.7) |\\n| **Diffusion History** | 5.4(5.6) | 15.1(0.4) | 45.2(0.3) | 6.0(6.2) | 8.4(0.5) | 25.3(0.9) |\\n| **Running Time** | ~5 sec | | | ~0.2 sec | | |\\n\\nAs recommended by Reviewer L64s, we have compared DDPM and EDM in terms of effectiveness and efficiency. The results in the table show that EDM and DDPM exhibit similar attack performance. However, DDPM is significantly slower than EDM in terms of running time (the average time needed to generate a single perturbed state during testing), making DDPM incapable of generating real-time attacks during testing. This validates the selection of EDM as the diffusion model for constructing our attacks. \\n\\n[1] Ezgi Korkmaz. Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness. AAAI 2023.\\n\\n[2] Sun, Y., Zheng, R., Liang, Y., & Huang, F. Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. ICLR 2022.\\n\\n[3] Eric Wong, Frank R. 
Schmidt, and J. Zico Kolter. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations. ICML 2019. \\n \\n[4] Liang et al., Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations. ICLR 2024\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer 37tk:\\n\\nThanks for your feedback on our rebuttal and we will provide further clarifications on your concerns.\\n\\n**Q1**: [1]'s methods can change the essential semantics and decrease 90% of the policy's performance without accessing the victim's policy. \\n\\n**A1**: We acknowledge that rotations and perspective transformations in [1] may alter the absolute distances, such as between the Pong ball and the paddle. However, it is crucial to note that these changes do not affect the semantic meaning based on our **Definition 3**: after projecting perturbed states onto the true state set, the projected states remain close to the original states. This explains why diffusion-based defenses effectively counter the attacks in [1], as shown in Tables 4-1 and 4-2 in the general rebuttal, by recovering true states from perturbed ones. \\n\\nAdditionally, we note that large-scale transformations in [1] are not stealthy. For example, we provide images of 3-degree rotations in Appendix E, where the perturbed states are visually distinguishable from true states, violating static stealthiness. \\n\\n**Q2**: Authors currently still missing quite critical issues that are essential to adversarial machine learning. \\n\\n**A2**: We agree that the policy-independent and history-ignorant attack given in [1] is lightweight to implement and opens up an interesting direction. However, it also has fundamental limitations. First, it cannot bypass diffusion-based adversarial training and may generate unrealistic perturbations, as discussed above and shown in the revised paper. 
Second and more importantly, it largely ignores the sequential decision-making nature of RL. All the attacks implemented in [1] are applied to individual states in a myopic way. Thus, they either do not change the semantics essential to decision-making (e.g., when the same rotation operation is applied to all the states as in the paper) or cannot generate history-consistent perturbations (e.g., when different operations are applied in each state). Our approach aims to address these limitations. \\n\\n**Q3**: Our work is the first to claim an attack beyond $l_p$-norm. \\n\\n**A3**: We would like to clarify that we did not claim to be the first to propose attacks beyond $l_p$-norm in our paper. Furthermore, we have cited [1] in our revised manuscript and explicitly identified it as a beyond $l_p$-norm attack in the introduction, related work, and evaluation sections. We will further clarify this by incorporating the discussion above into the revision. \\n\\n **Q4**: The statement \\\"adversarial training is not effective against beyond $l_p$-norm attacks\\\" is proved in previous work. \\n\\n**A4**: We respectfully disagree with the reviewer on this. Before our work, it was unclear if there were any adversarial perturbation attacks, including those beyond $l_p$-norm constraints, that could bypass all existing robust RL approaches while remaining stealthy. As shown in Tables 4-1 and 4-2, a diffusion model trained with adversarial data generated by the attacks in [1] can successfully defend against these beyond $l_p$-norm attacks in [1]. This demonstrates that adversarial training can be effective against such attacks. \\n\\nFurthermore, we only claim that our proposed attack compromises existing robust RL methods. 
We do not rule out the possibility that novel adversarial training-based defenses could effectively counter our attack.\\n\\nWe hope our clarifications address your concerns, and we welcome any further feedback.\"}", "{\"title\": \"Did our responses address all your concerns?\", \"comment\": \"Dear Reviewer 37tk,\\n\\nThank you for your thoughtful feedback. As the rebuttal period approaches its conclusion, we hope to hear whether our responses address your concerns. Below is a brief summary of our rebuttal: \\n\\n1. **Key Contributions Beyond [1]**: While [1] focuses on high-sensitivity directions (e.g., brightness, contrast, rotation), these attacks fail against diffusion-based defenses as they do not alter decision-relevant semantics. Our method leverages a conditional diffusion model to modify essential semantics (e.g., Pong ball position) while maintaining static and dynamic stealthiness, as demonstrated by low reconstruction loss and minimal Wasserstein distance. \\n2. **References and Novelty**: We acknowledged [1] and [2] in the revised submission, highlighting their contributions and distinctions from our approach. Despite the lack of code, we implemented the four attacks in [1] and showed their ineffectiveness against diffusion-based defenses. \\n3. **Technical Details and Reproducibility**: We detailed our experimental setup, hyperparameters, and Atari environment preprocessing in the revised submission. Additionally, we included new ablation studies to emphasize the importance of static and dynamic stealthiness in our method. \\n4. **Roadrunner Results**: We extended our experiments to the Roadrunner environment, demonstrating that our attack compromises both SA-DQN and diffusion-based defenses, consistent with results from other environments. 
\\n\\nWe hope these clarifications address your concerns and welcome any further feedback or questions.\"}", "{\"summary\": \"This paper introduces SHIFT, a diffusion-based adversarial attack that targets RL agents in vision-based environments by creating realistic, history-aligned state perturbations that go beyond traditional lp-norm attacks. Unlike existing methods, SHIFT generates semantic changes that significantly impair the agent's performance, bypassing even the most advanced defenses. Results demonstrate the attack's efficacy, reducing cumulative rewards by over 50% in Atari games, underscoring the need for more robust defenses in RL.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This work proposes semantic-level RL attacks using conditional diffusion models that balance semantic changes, realism, and historical consistency. The insight is novel.\\n2. Identifies a fundamental weakness in lp-norm attacks - their inability to meaningfully alter state semantics despite large perturbation budgets.\\n3. Employs EDM to enhance generation efficiency, making the approach more feasible\", \"weaknesses\": \"1. Despite using EDM and weighting joint, the paper lacks any systematic analysis of attack efficiency and computational costs.\\n2. While reporting larger attack budgets, results are limited to PGD and MinBest baselines, missing broader comparative analysis.\\n3. Experiments are restricted to only three Atari environments, providing insufficient evidence for the method's generalizability.\\n4. Overall Soundness: While the core idea is interesting, the paper falls short in rigor - lacking ablation studies, methodology analysis, and comprehensive experiments. The current evaluation scope is not convincing enough to support the claims.\", \"questions\": \"1. 
The Manipulation Rate and Deviation Rate metrics appear exclusive to SHIFT's diffusion-based approach, raising questions about fair comparison with non-diffusion methods. The necessity of diffusion models needs stronger justification.\\n2. The paper lacks crucial comparison between DDPM and EDM in terms of both effectiveness and efficiency. This missing analysis weakens the justification for the chosen architecture.\\n3. Overlooks recent related work [1] about temporally-coupled perturbations\\n\\n[1]Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations, Liang et al, ICLR 2024\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewers:\\n\\nThank you once again for your insightful and helpful reviews. As the rebuttal phase is coming to a close, we are eager to know if our responses have satisfactorily addressed your concerns. If there is any additional information or clarification we can provide, please do not hesitate to let us know. Thank you so much for your time!\"}", "{\"title\": \"Author Responses(2)\", \"comment\": \"In conclusion, although both our attack method and [1] attempt to go beyond $l_p$ norm bounded attacks, the attack methods in [1] focus on policy independent attacks that are visually imperceptible and can compromise defenses like SA-DQN but these attacks cannot change the essential semantics of image observations and fail to fight against diffusion-based defenses. In contrast, our attack method used a conditional diffusion model to generate perturbed states that can change the essential semantics of the image data while remaining stealthy to bypass the strong defenses including diffusion-based defenses.\\n\\n**Q2: The submission substantially lacks appropriate references. 
The claimed contributions of the submission are misplaced and incorrect.**\", \"a2\": \"We thank the reviewer for the comment and for pointing out the missing related work. After careful review, we found that seven of the nine papers mentioned by the reviewer were included in our original submission. The remaining two papers [1] and [2] both focus on high-sensitivity direction attacks, where the approaches are significantly different from ours as discussed above. We have carefully revised our paper to acknowledge their contributions and highlight the novelty of our work. We have also included a discussion of these papers in the related work section of the revised submission. As the code for [1] and [2] is not publicly available, we were not able to include the detection method in [2] as a baseline, but we managed to implement the attack methods in [1] and showed that they cannot compromise diffusion-based defenses as discussed above.\\n\\n**Q3: Lack of technical and experiment details for reproducibility.**\", \"a3\": \"We discussed our experiment setting in Appendix D.6 in the original submission, where we reported the hyperparameters for training the conditional diffusion model and the guidance strength during testing. As we mentioned there, we used the default parameter settings from their original papers for the other defense methods. We also provided the source code together with our paper during submission. In response to the reviewer\\u2019s concern, we have added a section on pre-processing the Atari environments for better reproducibility. In the revision, we have provided a set of new ablation study results, as discussed in the general response. 
For example, we have provided a detailed comparison of our attack with the high-sensitivity direction attacks in [1] and highlighted the importance of considering both static and dynamic stealthiness in deep reinforcement learning with image input. We have also adapted the Wasserstein distance perturbation metric [3] originally proposed for adversarial examples in deep learning to the deep reinforcement learning context by considering the sequential decision-making nature of RL. We hope these efforts adequately address the reviewer's concern about the lack of technical details.\\n\\n**Q4: Missing Roadrunner results.**\", \"a4\": \"As suggested by the reviewer, we have added new experiment results on Roadrunner in **Table 1** in the general response.\\nWe have retrained the vanilla DQN model and the SA-DQN model because the pretrained vanilla DQN and SA-DQN models provided by the SA-MDP paper [4] do not work in the RoadRunner environment. The result shows that similar to other Atari environments we evaluated before, our attack is able to compromise SA-DQN and Diffusion History defenses in the Roadrunner environment.\\n\\n**Q5: Improper use of the word \\u201cpoisoning.\\u201d**\", \"a5\": \"We carefully reviewed the paper and found that we misused the word \\u201cpoisoning\\u201d once. We thank the reviewer for catching this and have corrected it in the revision.\\n\\nWe sincerely hope our responses and additional experiment results have addressed all your concerns. If so, we kindly hope that you consider increasing the rating of our paper. Please do not hesitate to add further comments if you have any additional questions or need further clarification.\\n\\n[1] Ezgi Korkmaz, Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness, AAAI 2023.\\n\\n[2] Ezgi Korkmaz and Jonah Brown-Cohen, Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions, ICML 2023.\\n\\n[3] Eric Wong, Frank R. Schmidt, and J. 
Zico Kolter, Wasserstein Adversarial Examples via Projected Sinkhorn Iterations, ICML 2019.\\n\\n[4] Huan Zhang et al., Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations, NeurIPS 2020.\"}", "{\"metareview\": \"The paper discusses a novel adversarial attack to RL agents, by creating realistic perturbations using diffusion models. On the positive side, this attack is novel and can generate perturbed states that are semantically different from the true states while remaining realistic to avoid detection. However, it is difficult to quantitatively judge the realism and stealthiness of the proposed attack, as these terms do not have a precise mathematical definition. The evaluation results are not surprising since it is expected that many existing defenses built on Lp norm perturbation are not robust against the proposed attack (which can have a large norm and actually change the semantics). The experiments are not comprehensive enough (lacking environments beyond 3 easy Atari games and ablation studies). Considering these factors, the current form of this paper cannot be accepted at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed responses during the discussion period, and the AC has checked them carefully. The initial version of the paper lacks discussion of a significant amount of related work. The paper was updated to include missing references that should have been discussed in the paper. However, the key weaknesses of the paper remain. Especially multiple reviewers (eNaA, D5uT) questioned this attack setting, and the AC shared the same concern. Also, although several new tables were provided as new results during the discussion period, the results are not comprehensive enough compared to most other published work in this field.\"}", "{\"title\": \"Did our responses address all your concerns?\", \"comment\": \"Dear Reviewer D5uT,\\n\\nThank you once again for your thoughtful feedback. 
As the rebuttal period approaches its conclusion, we hope to hear whether our responses address your concerns. Below is a summary of our rebuttal: \\n1. **Realism Metric**: We provided both original and perturbed trajectory GIFs in the supplementary material and introduced the Wasserstein-1 distance as an additional realism metric, demonstrating that our attack achieves both static and dynamic stealthiness. \\n2. **More Empirical Experiments**: We added new experiments, including Roadrunner evaluations, comparisons with PA-AD, and analysis of DDPM versus EDM diffusion methods, which reinforce our method's effectiveness. \\n3. **Non-Myopic Action Selection**: We discussed integrating PA-AD and non-myopic action selection methods with our approach, identifying challenges and potential future directions. \\n\\nWe hope these updates address your concerns and welcome your further feedback.\"}", "{\"title\": \"Author Responses (1)\", \"comment\": \"Dear Reviewer L64s:\\n\\nThank you for your insightful feedback on our works. We will address your concerns point by point in the following by providing additional results and clarifications. \\n\\n**Q1: Lack of computational costs analysis of our attack.**\", \"a1\": \"We are delighted to provide detailed computational costs of our attacks here. During the training stage, it takes around 1.5 hours to train both the conditional diffusion model and the autoencoder based realism detector. We remark that these two components can be trained in parallel. During the testing stage, our attack takes around 0.2 seconds to generate a perturbed state, making it feasible for real-time attacks.\\n\\n**Q2: Lack of comparison between DDPM and EDM diffusion architectures.**\", \"a2\": \"We have provided new results to compare DDPM and EDM in terms of attack efficiency and computational cost in the general response (**Table 5**). The results show that EDM and DDPM exhibit similar attack performance. 
However, DDPM is significantly slower than EDM in terms of running time (the average time needed to generate a single perturbed state during testing), making DDPM incapable of generating real-time attacks during testing. This validates the selection of EDM as the diffusion model architecture for constructing our attacks.\\n\\n**Q3: Our methods were only tested on three Atari environments.**\", \"a3\": \"We have included new evaluation results on the RoadRunner environment that is widely used in previous work. Please see **Table 1** in the general response, which shows that our attack obtains superb performance against SA-DQN and Diffusion History defenses.\\n\\n**Q4: Comparison with other attack baselines is insufficient.**\", \"a4\": \"We have included evaluation results for a new attack baseline, PA-AD [2], which is considered one of the strongest attacks in the literature. A complete comparison between PGD, MinBest, PA-AD and our attack methods is given in **Table 2-1** in the general response, which reports both the reward and the deviation rate of each method. We want to emphasize that while the manipulation rate does not apply to PGD, MinBest and PA-AD, the reward and the deviation rate (the fraction of the chosen actions under perturbed states differ from the actions under the true states across an episode) do apply to all the baselines. The results in **Table 2-1** show that our attack method achieves the best attack performance against both SA-DQN and DP-DQN in terms of both reward and deviation rate.\\n\\nFurthermore, we have added the Wasserstein-1 distance between a perturbed state and the true state in the previous time step as a new metric to measure the dynamic stealthiness of baseline attacks. As argued in [3], the Wasserstein distance captures the cost of moving pixel mass and represents image manipulation more naturally than the $l_p$ distance. 
We have reported the reconstruction loss of perturbed states and the Wasserstein distance between a perturbed state and the previous step\u2019s true state in **Table 2-2** in the general response. The former captures the static stealthiness while the latter captures the dynamic stealthiness, as we further elaborated in the general response. The results show that our attack achieves both the lowest reconstruction error and the lowest Wasserstein distance, indicating that our attack method achieves the best stealthiness from both the static and the dynamic perspectives. The superb attack performance and stealthiness of our method justify the use of the conditional diffusion model to generate attacks.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Dear Reviewer L64s:\\n\\nThank you for reviewing our rebuttal and for increasing your rating. In response to your valuable feedback, we have further revised our manuscript to include additional experiments conducted during the rebuttal period. The following changes have been made to the main text:\\n\\n1. The **RoadRunner** results have been added to **Table 1**.\\n2. **Figure 3** now includes results for PA-AD, temporally coupled, and high-sensitivity direction based attacks, along with the introduction of **Wasserstein distance** as an additional metric to measure stealthiness.\\n3. We have added the ablation study on **DDPM** and **EDM** diffusion architectures.\\n\\nWe believe that these revisions, along with the extra experiments and discussions, strengthen the paper's statements and conclusions.\\n\\nWe appreciate your continued feedback and hope that the revised manuscript addresses your concerns effectively.\"}", "{\"title\": \"General Response(1)\", \"comment\": \"We thank all the reviewers for their insightful feedback and constructive criticism regarding our insufficient experiment results.
In this general response, we provide more experiment results to address these concerns.\\n\\n**Table 1: RoadRunner Results under Our Attack**\\n| RoadRunner | Reward | Manipulation | Deviation |\\n|:---:|:---:|:---:|:---:|\\n| No Attack | 13500(0) | NA | NA |\\n| DQN | 0(0) | 52%(2%) | 70%(3%) |\\n| SA-DQN | 260(215.41) | 34%(2%) | 54%(1%) |\\n| Diffusion History | 1480(788.42) | 9%(2%) | 43%(2%) |\\n\\nWe have added new experiment results on Atari RoadRunner as suggested by Reviewers 37tk and L64s. We have retrained the vanilla DQN model and the SA-DQN model because the pretrained vanilla DQN and SA-DQN models provided by the SA-MDP paper do not work in the RoadRunner environment. Our attack obtains superb performance against SA-DQN and Diffusion History defenses. \\n\\n**Table 2-1: Attack Performance of Different Attack Methods**\\n| **Freeway** | PGD-1/255 | | PGD-3/255 | | PGD-15/255 | | Minbest-1/255 | | Minbest-3/255 | | Minbest-15/255 | | PA-AD-1/255 | | PA-AD-3/255 | | PA-AD-15/255 | | Ours | |\\n|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\\n| | Reward | Dev (%) | Reward | Dev (%) | Reward | Dev (%) | Reward | Dev (%) | Reward | Dev (%) | Reward | Dev (%) | Reward | Dev (%) | Reward | Dev (%) | Reward | Dev (%) | **Reward** | **Dev (%)** |\\n| **DQN** | 0 (0) | 86.2 (0.5) | 0 (0) | 100 (0) | 0 (0) | 100 (0) | 0 (0) | 100 (0) | 0 (0) | 100 (0) | 0 (0) | 100 (0) | 0 (0) | 100 (0) | 0 (0) | 100 (0) | 0 (0) | 100 (0) | **0.1 (0.3)** | **54 (1.4)** |\\n| **SA-DQN** | 30 (0) | 0 (0) | 30 (0) | 0 (0) | 20 (1.6) | 8 (10) | 30 (0) | 0 (0) | 29 (1.4) | 0.4 (0.3) | 20.8 (2.5) | 9.1 (1.1) | 30 (0) | 0 (0) | 30 (0) | 0 (0) | 20.5 (4.4) | 3 (1) | **17.3 (1.5)** | **33 (2)** |\\n| **DP-DQN** | 30 (0.9) | 3.5 (0.2) | 30 (0.9) | 4.5 (0.3) | 29 (1) | 3.2 (0.1) | 30.2 (1.3) | 3.7 (0.3) | 30.6 (1.4) | 4.1 (0.1) | 29.4 (1.2) | 7.3 (0.2) | 30.8 (1) | 6.5 (0.1) | 31.4 (0.8) | 7.3 (0.2) | 29 (1.1) | 10.3 (1) | **14.6 (1.5)** | **49 (1.9)** 
|\\n\\nIn response to Reviewer L64s, we have added PA-AD [2] as a new attack baseline, which is considered one of the strongest attacks in the literature. This table compares the performance of PGD, MinBest, PA-AD with budget {1/255, 3/255, 15/255}, and our attack method in Atari Freeway under DQN, SA-DQN and DP-DQN defenses. Standard deviations are reported in parentheses. The results show that our attack method achieves the best attack performance against both SA-DQN and DP-DQN in terms of both reward and deviation rate.\\n\\n**Table 2-2 Reconstruction Errors and Wasserstein Distances of Different Attack Methods**\\n| **Freeway** | **PGD** | **1/255** | **PGD** | **3/255** | **PGD** | **15/255** | **MinBest** | **1/255** | **MinBest** | **3/255** |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) |\\n| **DP-DQN** | 3.45 (0.3) | 3.1 (0.2) | 3.50 (0.3) | 7.4 (0.3) | 4.36 (0.29) | 31 (1) | 3.45 (0.3) | 3.7 (0.2) | 3.53 (0.3) | 9 (0.4) |\\n| | **MinBest** | **15/255** | **PA-AD** | **1/255** | **PA-AD** | **3/255** | **PA-AD** | **15/255** | **Ours** | |\\n| | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) | Recons. | Wass.($\\\\times 10^{-3}$) |\\n| **DP-DQN** | 5.35 (0.2) | 40 (1) | 3.47 (0.29) | 4.5 (0.2) | 3.60 (0.29) | 12 (0.1) | 6.06 (0.18) | 55 (0.2) | **1.02 (0.5)** | **1.1 (0.2)** |\\n\\nIn response to Reviewer D5uT, we added Wasserstein distance as a new perturbation metric beyond the reconstruction error considered in the original submission. 
This table shows (1) the average reconstruction error (computed by our autoencoder based realism detector) of a perturbed state generated by different attacks across an episode, and (2) the average Wasserstein-1 distance between a perturbed state and the true state in the previous time step across an episode, under different attacks. The Wasserstein distance was proposed in [3] as an alternative perturbation metric to $l_p$ distances, which measures the cost of moving pixel mass and can represent image manipulations more naturally than the $l_p$ distance. We argue that reconstruction error captures static stealthiness of state perturbation, while the Wasserstein distance to the previous state captures dynamic stealthiness. The result shows that our attack method achieves both lowest reconstruction error and lowest Wasserstein distance compared with other attacks.\"}", "{\"summary\": \"The submission claims to find that the effectiveness of the current defenses is due to a fundamental weakness of the existing $\\\\ell_p$-norm constrained attacks. Furthermore, the submission proposes a method to go beyond the $\\\\ell_p$-norm bounded adversarial attacks in deep reinforcement learning. The submission evaluates its proposed attacks in Atari games and argues that the proposed attack method of the submission lowers the cumulative rewards of the agent by 50%.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"AI safety and robustness is an important research area.\", \"weaknesses\": \"The major claimed contributions of the submission have been previously both mentioned and analyzed in previous work [1]. However, the submission does not refer to these studies, and furthermore, within the existing prior work the main claimed contributions of the submission are rather misplaced and inaccurate. The paper [1] already extensively studies and demonstrates that both deep reinforcement learning policies and current defenses, i.e. 
robust deep reinforcement learning, are not robust against semantically meaningful adversarial attacks and this study further reveals the need to have robustness beyond $\\ell_p$-norm bounded attacks.\n\nNot only has the necessity of considering beyond $\\ell_p$-norm bounded attacks already been discussed in previous work; furthermore, the approach proposed in this paper [1] achieves higher degradation of the policy performance without even having access to the training details, the policy network (i.e. black-box adversarial attacks), and further without even training any additional network to produce such adversarial examples.\n\nThe submission substantially lacks appropriate references, and further fails to position itself within the existing prior work and to clarify its main contributions relative to these studies. The claimed contributions of the submission are misplaced and incorrect. \n\n[1] Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness. AAAI Conference on Artificial Intelligence, AAAI 2023.\n\nFurthermore, the submission lacks the main technical details needed to interpret its experimental results. Not a single experimental detail is provided regarding deep reinforcement learning. These details are essential for reproducibility and further for interpreting and analyzing the experimental results provided in the submission. However, the submission does not provide any information on this. \n\nThe submission only tests its algorithm in 3 games from Atari. However, in adversarial deep reinforcement learning it is usually tested in more games [1,2,3,4].
In particular, RoadRunner is missing from the baseline comparison.\n\n[1] Robust deep reinforcement learning against adversarial perturbations on state observations, NeurIPS 2020.\n\n[2] Robust Deep Reinforcement Learning through Adversarial Loss, NeurIPS 2021.\n\n[3] Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness, AAAI 2023.\n\n[4] Detecting Adversarial Directions in Deep Reinforcement Learning to Make Robust Decisions, ICML 2023.\n\nThe submission also refers to main concepts in the adversarial machine learning literature with inaccurate wording. For instance, in the introduction the submission writes: \n\n*\u201cby poisoning its observation (Huang et al., 2017; Zhang et al., 2020a)\u201d*\n\nHowever, poisoning attacks in the adversarial machine learning literature refer to something else entirely, and these papers are not poisoning attacks. These papers are test-time attacks. Thus, it is misleading to use the word poisoning here. \n\nOne thing I find ineffective is that the submission refers to a long list of papers such as these [1,2,3,4,5]; however, it somehow still misses the prior work that substantially coincides with the main claimed contributions of the submission, and even further these prior studies already demonstrate the claimed contributions of this submission.\n\n[1] Kangjie Chen, Shangwei Guo, Tianwei Zhang, Xiaofei Xie, and Yang Liu. Stealing deep reinforcement learning models for fun and profit. ACM Asia Conference on Computer and Communications Security, 2021.\n\n[2] Mengdi Huai, Jianhui Sun, Renqin Cai, Liuyi Yao, and Aidong Zhang. Malicious attacks against deep reinforcement learning interpretations. ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020.\n\n[3] Yunhan Huang and Quanyan Zhu. Deceptive reinforcement learning under adversarial manipulations on cost signals.
Decision and Game Theory for Security (GameSec), 2019.\\n\\n[4] Zikang Xiong, Joe Eappen, He Zhu, and Suresh Jagannathan. Defending observation attacks in deep reinforcement learning via detection and denoising. Machine Learning and Knowledge Discovery in Databases: European Conference 2023.\\n\\n[5] Inaam Ilahi, Muhammad Usama, Junaid Qadir, Muhammad Umar Janjua, Ala I. Al-Fuqaha, Dinh Thai Hoang, and Dusit Niyato. Challenges and countermeasures for adversarial attacks on deep reinforcement learning. ArXiv 2020.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Responses(1)\", \"comment\": \"Dear Reviewer 37tk:\\n\\nThank you for your insightful feedback. We will address all your concerns point by point in the following.\\n\\n**Q1: The main claimed contributions have been mentioned and analyzed in [1].**\", \"a1\": \"We would like to thank the reviewer for pointing us to this important study [1]. We have updated the introduction section of our paper to acknowledge the contribution of [1] and also included a detailed comparison with [1] in the related work section and the evaluation section. 
However, after carefully comparing our work with [1] and conducting additional experiments (as the code in [1] is not publicly available, we have tried to reproduce some of their results in our environment), we found that although both [1] and our work consider attacks beyond the $l_p$ norm constraint, the two attack methods are significantly different in multiple aspects and our paper has made substantial new contributions beyond what is already considered in [1].\n\nFirst, our attack is able to change the **essential semantics that matter for decision making, making it much harder to defend.** [1] shows that by following high-sensitivity directions, including changing brightness and contrast, image blurring, image rotation and image shifting, it is possible to generate perturbations that are visually imperceptible and semantically different from the original state. These attack methods reveal the brittleness of robust RL methods such as SA-DQN, but they **mainly target changes in visually significant but non-essential semantics.** For example, the relative distance between the pong ball and the pad will remain the same after brightness and contrast changes or image shifting in the Pong environment. Consequently, the perturbed images generated by these methods can potentially be denoised by a diffusion model. To confirm this, we have conducted new experiments, showing that (1) the Diffusion History defense with a diffusion model trained from clean data only is able to defend against B&C, blurring, and small scale rotation and shifting attacks (see **Table 4-1** in the general response), and (2) when the diffusion model is fine-tuned by randomly applying image rotations or shifting during training, the Diffusion History defense can mitigate large scale image rotations and shifting considered in their paper (see **Table 4-2** in the general response).
In contrast, our diffusion guided attack can **change the decision-relevant semantics of images**, such as moving the Pong ball to a different position without changing other elements in the Pong environment as shown in Figure 1 e) in the paper. This is the key reason why our attack can bypass strong diffusion based defense methods. \\n\\nSecond, **our attack is stealthy from both static and dynamic perspectives.** [1] claims that the perturbed states generated by their high-sensitivity direction based attacks are imperceptible by comparing the perturbed state $\\\\tilde{s_t}$ and the true state $s_t$. However, we found that this only holds for small perturbations. For example, the Rotation attack with degree 3 and Shifting attack (1,2) in the Pong environment considered in their paper can be easily detected by humans (see Figure 6 in Appendix E in our revised paper). Further, their metric for stealthiness is static and does not consider the sequential decision-making nature of RL. In contrast, our attack method aims to stay close to the set of true states $S^*$ to maintain **static stealthiness** (Definitions 1 and 2 in our paper) and align with the history to achieve **dynamic stealthiness** (Definitions 4 and 5). These are novel definitions for characterizing stealthiness in the RL context. The static stealthiness is demonstrated through the low reconstruction loss of the perturbed states generated by our method shown in Figure 3 a) in our paper. To better illustrate that our attacks are stealthy from a dynamic perspective, we have added an ablation study to compare the average Wasserstein-1 distance between a perturbed state and the previous step\\u2019s true state across a randomly sampled episode (see **Tables 2-2 and 4-3** in the general response). As argued in [3], the Wasserstein distance captures the cost of moving pixel mass and represents image manipulation more naturally than the $l_p$ distance. 
The results show that even when the agent is aware of the true previous state $s_{t-1}$, the perturbed state $\tilde{s}_t$ generated by our attack is more stealthy than other attacks including the attack methods in [1] (**Table 4-3** in the general response). \n\n(Continue in the next comment)\"}", "{\"comment\": \"I thank the authors for their response. However, there are several incorrect statements made in the authors' response that I must address.\n\n*\u201cAuthors: First, our attack is able to change the essential semantics that matter for decision making, making it much harder to defend. [1] shows that by following high-sensitivity directions, including changing brightness and contrast, image blurring, image rotation and image shifting, it is possible to generate perturbations that are visually imperceptible and semantically different from the original state. These attack methods reveal the brittleness of robust RL methods such as SA-DQN, but they mainly target changes in visually significant but non-essential semantics. For example, the relative distance between the pong ball and the pad will remain the same after brightness and contrast changes or image shifting in the Pong environment.\u201d*\n\nThis is an incorrect statement. Several of the methods introduced in [1] indeed change the essential semantics of the environment. In particular, perspective transform and rotation indeed change the distance between the pong ball and the pad.\n\nThe authors are currently still missing quite critical issues that are essential to adversarial machine learning. The proposed methods in [1] decrease the policy performance of deep reinforcement learning policies by around 90% without even having access to the policy details, i.e. network, algorithm, training details or even the training environment. Thus, an adversarial attack that does not require any training or any access to the victim policy\u2019s private information is a more dangerous and powerful attack.
\\n\\nFurthermore, the submission still positions itself as the first paper that goes beyond $\\\\ell_p$-norm attacks. This is also incorrect. Furthermore, there are currently studies that achieve 90% damage on the policy performance with no additional training or having access to the history of the policy or any training details of the policy.\\n\\nThe submission still positions itself as a paper that shows adversarial training, i.e. robust deep reinforcement learning, does not work against beyond $\\\\ell_p$-norm bounded attacks. This is also incorrect. This is already known. Prior studies already demonstrated this. \\n\\nI will keep my score.\"}", "{\"summary\": \"The paper studies how to generate state perturbations for reinforcement learning, especially perturbation in an unconstrained way, instead of traditional L_p perturbations. The methods are based on diffusion models to generate states with different semantics. The experiments outperforms some existing baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper studies an interesting question of non-L_p attacks, which is largely neglected by existing literature.\\n2. The methods can scale to image-input domain\", \"weaknesses\": \"1. One concern/question is that the goal of the paper is not very concrete. The paper said existing methods cannot change the semantics of the image input while this paper can. However, it is not very clear why the attacker has the motivation to change the semantics? In other words, isn't being stealthy beneficial for the attacker?\\n2. Some newest/recent defense baselines to my best knowledge [1, 2] are not discussed or compared in experiments. 
These game-theory-based defense methods are by nature significantly different from the defense mechanisms discussed in the paper, and more importantly agnostic to the attacker model (which means one only needs to change the attacker model to non-L_p accordingly to extend the defense to non-L_p). Therefore, it will be important to evaluate how the attack performs under such defense strategies.\n\n[1] Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations, ICLR 2024\n[2] Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies, ICLR 2024\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel non-$\\\ell_p$ attack algorithm for image-observation reinforcement learning, based on a history-conditioned diffusion model. The generated attacks are semantically meaningful while misleading to the agent. Experiments show that the proposed SHIFT attack can significantly break existing robust RL algorithms that are mainly designed for $\\\ell_p$ attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper points out the limitation of the mostly-studied $\\\ell_p$ attack model for image-observation reinforcement learning environments.
By utilizing a denoising diffusion probabilistic model (DDPM), the paper achieves stealthy and realistic attacks by altering the semantic meaning of the original state.\", \"The paper clearly defines the concept of valid and realistic states and adopts an autoencoder to enhance the realism of the generated attacks.\", \"Comparisons with existing methods show that SHIFT can lower the reward of the agent while having low state reconstruction error.\", \"The paper also proposes a possible defense method against the new attack model.\"], \"weaknesses\": [\"Although the proposed attack uses methods such as autoencoder guidance to enhance the realism of the perturbed states, it is not guaranteed or bounded like $\\\ell_p$ attacks, making it hard to compare and evaluate the stealthiness of the perturbations. It is not clear to me whether the reconstruction error can effectively represent the realism of the state perturbation. It would be better if the authors could provide a gif or video showing the full perturbed trajectory.\", \"The experiments are not very informative. It is not surprising that RL agents learned via $\\\ell_p$ attack assumptions will break under the proposed attack. But more empirical study can be done to verify the effectiveness of the proposed design. For example, how does the varying attack strength influence the attack effects?\"], \"questions\": \"As the authors mentioned, the proposed method uses a myopic target action manipulation objective which can be sub-optimal. Is there a way to improve it? For example, how can it be combined with RL-based attack methods such as PA-AD?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Responses(2)\", \"comment\": \"**Q5: Lack of discussion on [1]**\", \"a5\": \"We thank the reviewer for pointing us to this important study and we have added it to the related work in the updated version of our paper.
However, after carefully reading the paper, we found that the temporally coupled attack and the game theoretic defense method proposed in [1] might not directly apply to our setting due to the following reasons.\\n\\nFirst, all the experiments in [1] are conducted in MuJoCo environments, where the state spaces are much smaller compared with Atari environments with image input. Further, their approaches are already computationally expensive (both take more than 20 hours) to train in MuJoCo environments. Thus, directly applying them to image domains can be computationally prohibitive, which points to an interesting research direction for further study. Second, the code for [1] is not publicly available at this time so we cannot easily evaluate their attacks and defenses as baselines in Atari environments. \\n\\nIn response to the reviewer\\u2019s concern, we have implemented a PGD version of the temporally coupled attack in Atari environments and tested it against SA-DQN and DP-DQN. The results are in the general response **Table 3**. The results show that the temporally coupled PGD attack with $\\\\epsilon = 15/255$ and $\\\\bar{\\\\epsilon} = 7.5/255$ could compromise SA-DQN but not diffusion based defense DP-DQN even with a large perturbation budget, which indicates the challenge of adapting this attack to Atari environments with raw-pixel input. We conjecture that this is because the attack is still constrained by an $l_p$ norm bound, making it difficult to alter the essential semantics of image input.\\n\\nWe sincerely hope our responses and additional experiment results have addressed all your concerns. If so, we kindly hope that you consider increasing the rating of our paper. Please do not hesitate to add further comments if you have any additional questions or need further clarifications.\\n\\n[1] Liang et al., Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations, ICLR 2024\\n\\n[2] Sun, Y., Zheng, R., Liang, Y., & Huang, F. 
Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. ICLR 2022.\\n\\n[3] Eric Wong, Frank R. Schmidt, and J. Zico Kolter. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations. ICML 2019.\"}", "{\"title\": \"Author Responses\", \"comment\": \"Dear Reviewer D5uT:\\n\\nThank you for your helpful and insightful feedback. We will address your concerns point by point in the following.\\n\\n**Q1: Can the reconstruction error effectively represent the realism of state perturbation?**\", \"a1\": \"Thank you for this important comment. As suggested by the reviewer, we have created two gifs, one showing the perturbed trajectory under our attacks constrained by the realism detection, and one showing the unperturbed trajectory for comparison and uploaded them in the updated supplementary material.\\n \\nWe have further explored the Wasserstein-1 distance between a perturbed state and the true state of the previous time step as another realism metric shown in **Table 2-2** in the general response. As argued in [1], the Wasserstein distance captures the cost of moving pixel mass and represents image manipulation more naturally than the $l_p$ distance. Thus a small Wasserstein distance from previous true states shows that even when the agent is aware of the true previous state $s_{t-1}$, the perturbed state $\\\\tilde{s}_t$ generated by our attack is more stealthy than other attacks. The results shown in **Table 2-2** state that our attack method achieves both the lowest Wasserstein distance and the lowest reconstruction error. 
This experiment further proves that our attack method can generate realistic attacks and achieve superb attack performance at the same time.\n\n**Q2: More empirical studies are needed, such as an ablation on varying attack strength.**\", \"a2\": \"We included the ablation study on varying guidance strength in Appendix E in the original submission, where we also included ablation studies on the effectiveness of the realism guidance and target action selection methods. We agree with the reviewer that more evaluation results would help prove our attack method\u2019s effectiveness and soundness. Thus, we have conducted new experiments including (1) an experiment for the Atari Roadrunner environment (see **Table 1** in the general response), (2) an experiment that compares different attack methods including the newly added PA-AD attack (see **Table 2** in the general response), and (3) an experiment that compares the DDPM and EDM diffusion methods (see **Table 5** in the general response). We hope these additional ablation studies can address your concern.\n\n**Q3: Use PA-AD or other non-myopic target action selection methods.**\", \"a3\": \"We would like to thank the reviewer for this insightful comment. Combining PA-AD or other non-myopic target action selection methods with our conditional diffusion based attack is indeed a promising direction. A simple strategy is to adopt a two-step approach, similar to what we did in our paper, where one first applies a non-myopic target action selection method to select target actions with some long-term attack objective in mind, and then uses our diffusion-based attack to approximate the goal. An important challenge, however, is that a diffusion-based approach cannot guarantee that the target actions are always chosen, both due to the randomness of the diffusion model and the requirement for maintaining realism and history alignment.
Therefore, to provide a performance guarantee, one needs to measure the expected success rate of a conditional diffusion model for a given set of target actions, which are then optimized toward some long-term goal that is achievable. We believe this requires a non-trivial extension of our approach and is an interesting future direction to explore.\\n\\nWe sincerely hope our responses and additional experiment results have addressed all your concerns. If so, we kindly hope that you consider increasing the rating of our paper. Please do not hesitate to add further comments if you have any additional questions or need further clarifications.\\n\\n[1] Eric Wong, Frank R. Schmidt, and J. Zico Kolter. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations. ICML 2019.\"}", "{\"title\": \"Did our responses address all your concerns?\", \"comment\": \"Dear Reviewer eNaA,\\n\\nThank you once again for your valuable feedback. As the rebuttal period comes to a close, we hope to hear from you regarding whether our responses have satisfactorily addressed your concerns. Below is a brief summary of our rebuttal: \\n1. **Motivation for Semantic Attacks**: $l_p$ norm-bounded attacks fail against diffusion-based defenses in raw-pixel environments like Atari. To address this, our attack changes state semantics while maintaining static and dynamic stealthiness, as demonstrated by low reconstruction loss (Fig. 3a) and minimal Wasserstein distance (Table 2-2). \\n2. **Recent Defense Baselines**: We appreciate the references to [1] and [2]. However, these methods are not directly applicable to Atari environments due to their computational cost and the lack of code. Instead, we tested a game-theoretic approach by retraining DP-DQN with our attack, which consistently performed poorly, underscoring the strength of our method. \\n\\nWe hope these clarifications resolve any remaining concerns, and we would greatly appreciate further comments or feedback if needed.\"}" ] }
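As a concrete illustration of the quantitative notions debated throughout this thread (the Wasserstein-1 perturbation distance of Wong et al. and the deviation rate of an attacked policy), here is a minimal Python sketch. It is not the authors' or reviewers' implementation: the metric in the papers transports 2-D pixel mass (computed via Sinkhorn-style iterations), whereas this sketch uses the closed-form 1-D histogram case as a cheap proxy, and `policy` is a hypothetical stand-in for a trained agent.

```python
import numpy as np


def wasserstein1_1d(p, q):
    """Wasserstein-1 distance between two 1-D histograms of equal mass.

    In 1-D, W1 has a closed form: the L1 distance between the two
    cumulative distributions (units: histogram bins of mass moved).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())


def frame_w1(frame_a, frame_b, bins=32):
    """Cheap proxy for the perturbation cost between two grayscale frames:
    W1 between their pixel-intensity histograms (a 1-D projection of the
    full 2-D pixel-mass transport problem used in the literature)."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0.0, 1.0))
    # Tiny offset keeps normalization well-defined for empty histograms.
    return wasserstein1_1d(ha + 1e-9, hb + 1e-9)


def deviation_rate(policy, true_states, perturbed_states):
    """Fraction of timesteps where the action chosen under the perturbed
    state differs from the action chosen under the true state."""
    diffs = [policy(s_tilde) != policy(s)
             for s, s_tilde in zip(true_states, perturbed_states)]
    return sum(diffs) / len(diffs)
```

Note the design choice: because the 1-D projection discards spatial layout, two frames with identical intensity histograms but rearranged objects score zero under `frame_w1`; capturing such rearrangements is exactly why the full 2-D transport formulation is used in the cited work.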
Do3whenqeY
Can Language Models Reason about Individualistic Human Values and Preferences?
[ "Liwei Jiang", "Taylor Sorensen", "Sydney Levine", "Yejin Choi" ]
Recent calls for pluralistic alignment emphasize that AI systems should address the diverse needs of all people. Yet, efforts in this space often require sorting people into fixed buckets of pre-specified diversity-defining dimensions (e.g., demographics, personalities, communication styles), risking smoothing out or even stereotyping the rich spectrum of individualistic variations. To achieve an authentic representation of diversity that respects individuality, we propose individualistic alignment. While individualistic alignment can take various forms, in this paper, we introduce IndieValueCatalog, a dataset transformed from the influential World Values Survey (WVS), to study language models (LMs) on the specific challenge of individualistic value reasoning. Specifically, given a sample of an individual’s value-expressing statements, models are tasked with predicting their value judgments in novel cases. With IndieValueCatalog, we reveal critical limitations in frontier LMs’ abilities to reason about individualistic human values, with accuracies ranging only between 55% and 65%. Moreover, our results highlight that a precise description of individualistic values cannot be approximated only via demographic information. We also identify a partiality of LMs in reasoning about global individualistic values, as measured by our proposed Value Inequity Index (σINEQUITY). Finally, we train a series of Individualistic Value Reasoners (IndieValueReasoner) using IndieValueCatalog to enhance models’ individualistic value reasoning capability, revealing new patterns and dynamics in global human values. We outline future research challenges and opportunities for advancing individualistic alignment.
[ "individualistic value alignment", "pluralistic value alignment", "human values", "AI safety", "individualistic value reasoning" ]
https://openreview.net/pdf?id=Do3whenqeY
https://openreview.net/forum?id=Do3whenqeY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "c8NCJHAsKt", "XJeHtYUnbN", "M1FrDSSRaa", "HbCQLSyBIR", "5kax5Htk31" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730539529131, 1730555589095, 1734285509806, 1730361784610, 1729817683383 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5185/Reviewer_wkhE" ], [ "ICLR.cc/2025/Conference/Submission5185/Reviewer_SviC" ], [ "ICLR.cc/2025/Conference/Submission5185/Authors" ], [ "ICLR.cc/2025/Conference/Submission5185/Reviewer_rvnN" ], [ "ICLR.cc/2025/Conference/Submission5185/Reviewer_4q73" ] ], "structured_content_str": [ "{\"summary\": \"This paper investigated the limitations of LLMs in reasoning about human values at an individual level. The main contributions include:\\n1. Presented a new dataset, INDIEVALUECATALOG, derived from the World Values Survey (WVS) that transforms unstructured survey questions into standardized natural language statements representing value preferences.\\n2. Examined LLMs' abilities to predict individuals' values based on a set of their value-expressing statements. \\n3. Trained LLMs with individualistic value statements to achieve proficient individual value reasoners.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The work tackles a crucial challenge in AI alignment: understanding human values at an individual level rather than relying on broad demographic categories. This bottom-up approach overcomes the limitations of traditional demographic-based models, enabling the development of AI systems that are both more equitable and better tailored to individual needs.\\n2. The paper's visualization is particularly effective. Figure 1 provides a clear illustration of the author's concept of individualistic value reasoning.\\n3. 
The paper conducted a thorough empirical evaluation demonstrating the proficiency of trained individual value reasoners, including the comparison between various state-of-the-art LLMs and statistical methods.\", \"weaknesses\": \"1. The methodology lacks novelty. The training of the individualistic value reasoner relies solely on fine-tuning approaches; the proposed metrics on LM proficiency and impartiality offer no novel contributions, and the overall methodological approach contains no significant innovations.\\n2. The paper's analysis lacks sufficient depth and fails to make substantial contributions to the field. While it identifies a key limitation - namely, frontier LLMs' deficiency in understanding and predicting individualistic human values - this observation, though intuitively correct, merely confirms what was already suspected. The paper does not extend beyond this basic insight to provide meaningful scholarly contributions.\\n3. The paper suffers from disjointed content and lacks coherent logical flow. Section 3 presents various findings about LLMs' accuracy in predicting individualistic values, but these emerge as scattered observations rather than systematic research. While these findings, such as identifying which demographic groups' values are more accurately predicted, are intuitively reasonable, they fail to coalesce into a comprehensive study of the field. The paper ultimately reads as a collection of disparate data analyses lacking meaningful synthesis or substantive theoretical contributions.\", \"questions\": \"The author's primary task involves predicting an individual's value judgments in novel situations based on a sample of their value-expressing statements. However, two critical questions emerge: First, are these provided value-expressing statements sufficient to capture a person's complete worldview and value system? 
Second, how can the authors differentiate between prediction inaccuracies caused by incomplete value statements versus those stemming from limitations in the LLM's reasoning capabilities?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a dataset from the World Values Survey designed to evaluate language models' (LMs) reasoning on individualistic values. Unlike pluralistic alignment approaches that generalize diversity through demographic categories, this dataset supports more nuanced, individual-focused alignment. With 93K participants\\u2019 value statements, the study highlights LMs' limitations in predicting individual preferences (accuracy 55-65%) and introduces the VALUE INEQUITY INDEX (\\u03c3INEQUITY) to assess model impartiality. Trained Individualistic Value Reasoners show slight accuracy improvements, providing new insights into global individual values.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed dataset is a significant addition, transforming unstructured WVS data into a structured, standardized resource for examining individualistic values. This dataset enables a more granular approach to evaluate the performance of human value reasoning on LLMs.\\n\\n2. The paper\\u2019s critique of pluralistic alignment's reliance on broad demographic categories is thought-provoking. By shifting the focus to individualistic alignment, the authors argue for AI systems that respect individual uniqueness, facilitating personalized AI development.\\n\\n3. The VALUE INEQUITY INDEX (\\u03c3INEQUITY) is a new metric for assessing the degree of impartiality in LMs' reasoning.\", \"weaknesses\": \"1. The reliance on WVS data, while innovative, may limit the applicability of results. 
Survey responses may not capture the full breadth of individual values, and the transformation of survey items into value-expressing statements could introduce biases or oversimplify complex beliefs.\\n2. The authors lack an analysis of the task's challenges and fail to sufficiently examine the reasons behind the poor performance of LLMs. Is the subpar performance primarily due to the complexity and contradictions in human preferences, or to the models' inadequate understanding of statements? What are the discrepancies between LLMs' CoT reasoning and users' actual preferences?\\n3. There are some minor errors in the paper. For example, in Figure 1, the correspondence between 1-10 and \\\"satisfied\\\" and \\\"dissatisfied\\\" on the left side seems to be reversed after data conversion. Is this an error in data processing, or is it only an issue with Figure 1?\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper addresses the limitations of previous pluralistic alignment approaches that pre-categorize individuals, highlighting the importance of \\\"individualistic alignment\\\". To achieve \\\"individualistic alignment\\\", the authors introduce the INDIEVALUECATALOG dataset based on the World Values Survey (WVS). Through experimental validation, they reveal the limited capability of current state-of-the-art LLMs in understanding individualistic human values, as measured by the Value Inequity Index (\\u03c3INEQUITY) proposed by the authors. 
Furthermore, the authors train a collection of Individualistic Value Reasoners (INDIEVALUEREASONER) models on INDIEVALUECATALOG, enhancing LLMs' capabilities in individualistic value reasoning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper focuses on individualistic alignment, which is an interesting and novel topic\", \"This paper utilizes World Values Survey (WVS) data and transforms it into a format suitable for LLM training, creating the INDIEVALUECATALOG dataset\", \"This paper proposes the VALUE INEQUITY INDEX (\\u03c3INEQUITY) to measure the fairness of model reasoning across different demographic groups, revealing the current limitations of SOTA LLMs in this aspect\", \"Through fine-tuning LLMs on INDIEVALUECATALOG, authors explore how to combine proposed dataset and LLMs to discover patterns in human values\"], \"weaknesses\": [\"Despite the interesting problem setting, the technical contributions of this paper appear limited for ICLR.\", \"There is insufficient discussion of the practical application value of \\\"individualistic alignment\\\"\", \"The paper lacks performance comparisons with related work\"], \"questions\": [\"While using human preference data for general value alignment has significantly improved LLMs' capabilities as assistants, does this paper's proposed \\\"individualistic alignment\\\" offer further performance improvements or broader application value?\", \"Are advanced pluralistic alignment methods technically comparable with the method proposed in this paper, and could performance comparisons with related work be conducted to highlight the relative effectiveness of the proposed method?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces the concept of \\\"individualistic alignment\\\" to capture human values of individuals. 
This paper presents the INDIEVALUECATALOG, a dataset derived from the World Values Survey, which includes standardized statements expressing individual preferences from a global sample of individuals. A novel metric, the Value Inequity Index (\\u03c3INEQUITY), is proposed to assess the impartiality of models across demographics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1) The study of prediction equity across demographic groups is interesting, and the result is insightful.\\n\\n(2) A new metric, the Value Inequity Index (\\u03c3INEQUITY), is proposed to measure how equitably models treat different demographics.\\n\\n(3) This paper tested multiple LLMs.\", \"weaknesses\": \"(1) The problem formalization with notations in Section 2.2 is unnecessarily complicated.\\n\\n(2) The focus on predicting individual values may be problematic. Even individuals with the same demographics can differ widely due to various factors. This means the data contains a lot of randomness and noise. This could be why the models struggled to perform well, even after fine-tuning on similar data. A group/demographic-level setting might be more reasonable.\\n\\n(3) Why the studied task is important and useful in real-world applications needs further explanation.\\n\\n(4) The technical innovation is limited. The core contribution, the INDIEVALUECATALOG dataset, is essentially a simple conversion of the World Values Survey. The authors need to explain more about the core novelty of this paper.\\n\\n(5) Data and code are not currently shared.\", \"questions\": \"Considering the inherent variability and potential noise and randomness in predicting individual values, could you elaborate on the real-world applications where this task would be particularly impactful? 
How do you envision these predictions being useful in practical scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
DnfPX10Etk
JOOCI: A FRAMEWORK FOR LEARNING COMPREHENSIVE SPEECH REPRESENTATIONS
[ "Hemant yadav", "Sunayana Sitaram", "Rajiv Ratn Shah" ]
Information in speech can be divided into two categories: what is being said (content) and how it is expressed (other). Current state-of-the-art (SOTA) techniques model speech at fixed segments, usually 10-25 ms, using a single embedding. Given the orthogonal nature of other and content information, attempting to optimize both within a single embedding results in suboptimal solutions. This approach divides the model's capacity, limiting its ability to build complex hierarchical features effectively. In this work, we present an end-to-end speech representation learning framework designed to jointly optimize the "other" and "content" information (JOOCI) in speech. By using separate learnable parameters, JOOCI addresses this optimization challenge by modeling other and content information independently. Our results show that JOOCI consistently outperforms other SOTA models of similar size (100 million parameters) and pre-training data used (960 hours) by a significant margin when evaluated on a range of speech downstream tasks in the SUPERB benchmark. Code and models are available at TBA.
[ "SSL", "Speech Representation Learning", "Joint Optimization" ]
Reject
https://openreview.net/pdf?id=DnfPX10Etk
https://openreview.net/forum?id=DnfPX10Etk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zg39YGXNxO", "z2eb5yhDTx", "yCzH2r8pbJ", "vBcBna9jZG", "sKab0ZdrFd", "qJkTrUjlTx", "oUSy0GAujC", "nd2Jziooi6", "kCUZBxNPmQ", "gw4piE5evl", "frGIoi4QXz", "f3UpIWXP72", "YLHljndHiR", "UdYRlUpXNX", "UG2Zoeq54l", "RxKN8f9jKZ", "Pj8f91UAws", "HCrx5mWaO8", "H9uFPtJIGD", "BR6JBl6koK", "AULKWZjPU9", "8Bti8TO4Cl", "7qvQTXlrCN", "0QNCyvW8C4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733210607164, 1733139287815, 1732731358515, 1732976211483, 1730674247059, 1737524289230, 1730670210599, 1732974775289, 1731604446914, 1732965571983, 1731574448739, 1732669140795, 1732773959189, 1732857986144, 1730595278261, 1733167691325, 1732530958696, 1731257522594, 1734530464046, 1732530772754, 1731519874097, 1731565263367, 1731492502050, 1732060832899 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_YxJB" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_YxJB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_beXE" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_YxJB" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_rAqx" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_rAqx" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_9Wd2" ], [ "ICLR.cc/2025/Conference/Submission13918/Area_Chair_5Lbn" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_9Wd2" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Authors" ], [ "ICLR.cc/2025/Conference/Submission13918/Reviewer_rAqx" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the additional information! There are some points that are yet to be revised to improve the clarity of the paper, and I look forward to seeing a future version of the paper. For now I am still happy with my score. I don't have time to respond to all of the points individually, but here are replies regarding a few of them:\\n\\n* I do not understand....\\n - This notion, of WavLM as a SOTA when evaluated on a range of tasks, is well accepted in the speech community...\\n\\nI agree that WavLM is a solid model family that does well on many tasks, and it is reasonable to compare to it. However, I am not sure what you mean by \\\"well accepted in the speech community\\\". \\\"SOTA\\\" implies that it has the best existing result by some metric. When I look at the SUPERB leaderboard, for example for the \\\"challenge hidden dev\\\" setting, I see that the best model differs across tasks. For many tasks it is WavLM Large, but for some tasks it is WavLM Base+, WavLM Base, or wav2vec 2.0 Large. If by \\\"SOTA\\\" you mean the best according to some overall metric like the SUPERB \\\"rank\\\" or \\\"score\\\", then you should say so and then compare your model using that metric. 
If instead you want to compare performance over several individual tasks (as the paper is currently doing), then you should find the best result for each task in the literature and compare to it.\\n\\nI believe the SUPERB leaderboard does not include all of the latest models, but rather only a set of models that were evaluated at a particular time. Finally, SUPERB has a particular way of evaluating models using a constrained prediction head, and it may be possible to obtain better results using a more complex prediction head, fine-tuning, etc. (see, for example, https://arxiv.org/abs/2306.00452). \\n\\nOverall, I think it's fine to compare to WavLM, but \\\"SOTA\\\" is not the correct description for it. \\n\\n\\n* I don't follow the senten...\\n - Please re-read from line no 82\\n\\nThanks for revising. I'm afraid I still don't know exactly what is the problem that is being pointed out with prior approaches: if they encode content information in some layers and \\\"other\\\" information in other layers, this means that both kinds of information are encoded *somewhere* and that the two kinds of information are (at least to some extent) disentangled.\\n\\n\\n* In Eq. 1, the index d is ...\\n - We are unable to understand the question and request further clarification.\\n\\nEq. 1 is L_CL = \\\\sum_d (MPL). My point is that (MPL) does not appear to depend on d, and in addition \\\"MPL\\\" is not defined but seems to be later replaced by L_MPL.\"}", "{\"title\": \"Clarification on the contribution of JOOCI to the speech research community.\", \"comment\": \"### The idea of developing a single end-to-end model to __JOINTLY__ learn comprehensive speech representations that can effectively handle various downstream tasks jointly is both essential and an important research direction in the speech community.\\n#### There have been models in the past that perform well on certain tasks but poorly on others. 
Specifically, models that excel in content tasks, such as phoneme recognition (PR), tend to perform poorly on speaker tasks, such as speaker identification (SID). \\n-----\\nIn the research community, WavLM stands out as a widely popular model designed to learn comprehensive speech representations by leveraging (i) masked predictive learning (MPL) to capture content information and (ii) data augmentation to encode other aspects of speech. WavLM demonstrated superior performance __JOINTLY__ on content and speaker tasks compared to HuBERT.\\n* The authors observed that WavLM achieved this by dividing its total modelling capacity/layers, with later layers learning content and earlier layers learning other information. \\n* A major flaw is that this ultimately prevents any model from fully leveraging all layers to build the complex, hierarchical representations characteristic of deep learning. \\n----\\nOur proposed JOOCI framework makes use of all the layers (depth) available to build hierarchical representations. This design choice resulted in remarkable performance improvements __JOINTLY__ on content and speaker tasks compared to WavLM, as shown in Table 1.\\n* JOOCI's impressive __JOINT__ performance on these orthogonal tasks has been prominently and __unanimously__ acknowledged by all the reviewers.\\n* Our discussion with reviewer __rAqx__ also shows that the gains are not solely due to data volume but also reflect JOOCI's ability to jointly optimize content and other information effectively. JOOCI achieves comparable performance with WavLM+, which is trained on 94,000 hours of data.\\n\\nFinally, we believe that JOOCI takes the speech community one step closer to developing a single end-to-end model capable of __JOINTLY__ learning comprehensive speech representations on various downstream tasks. 
\\n\\n-------\"}", "{\"title\": \"Response.\", \"comment\": [\"### We thank the reviewer for their comments and the opportunity to address their questions and observations.\", \"Would it be correc.....\", \"If it can be proved that the pseudo labels used accurately represent content information without including other information (e.g., speaker), then yes, the GRL module can be seen as attempting to remove only content information from the other encoder. The results show otherwise, since using it also degrades the speaker information, as shown in Table 5. We will update the writing to make this clearer for future readers.\", \"Regarding the use of the RDINO teacher, it does not explicitly ensure that the other encoder avoids specializing solely in speaker information. However, when the GRL module is applied, we observe that the other encoder balances its focus towards the ER task, though at the cost of reduced performance on the speaker-based tasks.\", \"Line 22: \\\"pre-training data u.....\", \"Thank you for bringing this important point to our attention. RDINO is used as a teacher to extract speaker embeddings; it is trained on 2.5k hours of data and is the reason for the improved performance on speaker-based tasks such as SID and ASV, as pointed out by the reviewers. Therefore, the claim of a comparison with WavLM (960 hours) alone might not be entirely valid.\", \"To address this concern, we kindly request the reviewer to refer to Table 1, specifically the last row comparing WavLM+, which is trained on 94,000 hours of data (compared to just 960 + 2500 hours). The comparable performance on SID (a little improvement) and ASV (a little degradation) demonstrates that the gains are not solely due to data volume but also reflect JOOCI's ability to jointly optimize content and other information effectively. 
We will revise the abstract to clarify this distinction to prevent any potential misunderstanding.\", \"To clarify, while ....\", \"Thank you for the clarification regarding ContentVec and SPIN. As pointed out by the reviewers, HuBERT/SPIN/ContentVec can be used as initialization for JOOCI's content encoder. But again, MS-HuBERT is overall a better choice when compared to the above three, looking at their PR and ASR task performance. The results for HuBERT/SPIN/ContentVec are shown in the SPIN paper (https://arxiv.org/pdf/2305.11072), Table 1. MS-HuBERT's performance on the PR and ASR tasks is 4.17 and 5.32 respectively, as shown in Table 5.\", \"Furthermore, SPIN can be applied on MS-HuBERT to further improve its content representation, since the phonetic content resides in the top layers, which is not the case for data2vec, as explained by the authors of the SPIN paper in Section 3.2. Therefore, we request the reviewer to see MS-HuBERT as more similar to HuBERT than to SPIN or ContentVec. That is why we made the observation that MS-HuBERT should be seen as an earlier stage of ContentVec or SPIN. On the other hand, when MS-HuBERT is used as initialization for JOOCI's content encoder, it undergoes continued pre-training using the same MMPL loss. It can simply be frozen, for which the results are shown in Table 5 (Keeping Shared and Content Encoder Frozen + NO Data Augmentation).\", \"To further illustrate thi....\", \"We would like to address a misunderstanding: MS-HuBERT is not initialized with HuBERT; instead, the 1st-iteration HuBERT is only used to get cluster ids (rather than starting from Fbanks) to save computational resources.\", \"--------------------------------------------------------------------------------------------------------------------------------------\", \"#### Lastly, we respectfully maintain that ContentVec and SPIN have different goals compared to JOOCI. 
Both focus solely on maximizing content information, while JOOCI aims to jointly optimize both content and non-content representations, aligning more closely with WavLM's objectives.\", \"#### That said, we understand that it might be valuable to some readers to discuss the similarities and differences between these methods in detail. We will include a discussion section in the Appendix of the revised version to elaborate on these points and provide the SUPERB benchmark numbers for ContentVec and SPIN for better clarity.\"]}", "{\"title\": \"Contd.\", \"comment\": [\"I do not understand....\", \"We will move this discussion to the appendix. We showed the results so that readers could have a broader view of JOOCI vs HuBERT with adapters. In no way do we suggest it is a replacement for adapters. On the other hand, adapters can also be applied to JOOCI. We have revised the writing and request the reviewer to please re-read from line no 288.\", \"How does Figure 2 show..\", \"MS-HuBERT was trained WITHOUT data augmentation and is used as an initialization for JOOCI. Later, the content encoder of JOOCI is fine-tuned WITH data augmentation. As shown in Figure 2, the later layers show an increase in CCA score. We hope this clarifies.\", \"For Figure 2, more information is ne.....\", \"We apologize to the reviewer. Somehow, we neglected to cite the paper. In the revised version we have cited it properly. To clarify, the analysis is exactly the same.\", \"The ablation study in Sec. 4.1 is ....\", \"We first study the effect of the DGRL module on the representations learned by the other encoder, while keeping the shared and content encoders frozen without using data augmentation.\", \"Results with and without DGRL are shown for this configuration. The DGRL module aligns the other encoder better with the claim of JOOCI, i.e., to jointly optimize OTHER and content information. 
\"Other\" does not mean just speaker information.\", \"We have tried to improve the writing and request the reviewer to please re-read Section 4.2. Please tell us what is still missing and we will add the information in the revised version.\", \"In Table 3, why are th...\", \"They are skipped because (i) they won\\u2019t provide any additional information, and (ii) limited compute was available. In the revised version, the last row of Table 5 shows the extent to which JOOCI-O-DGRL encodes both the content (PR) and other (SID) information. Close to 100% for the PR task, this means no meaningful content information is encoded in the other encoder.\", \"In Section 4.2, I have trouble followin....\", \"We request the reviewer to please re-read this section in the revised version for better clarity. It is Section 4.1, line no 353. The results are updated; we made a minor mistake in the earlier setup.\", \"Please ignore the numbers marked with asterisks for now. They show the performance when using multi-head attention.\", \"Same trend means: earlier layers having higher weight.\", \"Section 4.3 claims to \\\"prove that....\", \"We will tone down the claims and make it clear that it is an observed behavior of JOOCI, and is not pre-trained for.\", \"Using the later layers of the content encoder, which have high CCA scores, resulted in a higher ASV score compared to using all the layers. This shows that the interference of speaker information is minimal for these layers on the VC-a2a task. This led us to claim that the content encoder (later layers) is able to disentangle content information from other information (speaker).\", \"Similarly, the performance of the tasks which require content information also improved (Table 4) when using only the high-CCA-score layers.\", \"In Table 5, what is the d....\", \"In the revised version, it is Table 6. We have added the context.\", \"---------------------\", \"#### Thank you again. 
Looking forward to your valuable feedback.\"]}", "{\"summary\": \"The paper proposes a self-supervised speech representation model that combines two encoders, one intended to encode linguistic content and the other intended for \\\"other\\\" content like speaker and emotion information, trained with different losses. The idea is that, by training a single encoder with a single loss, previous approaches have trouble encoding these two types of information equally well. The various elements of the model are largely borrowed from previous work, but combined in a new way. The model is compared in terms of performance on 8 common tasks (from the SUPERB benchmark) to other commonly used models (HuBERT, WavLM), finding improved performance on 4 of the tasks. The paper also includes some ablation studies and analyses of several model components.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Addresses an important need to account for both linguistic and non-linguistic content in speech representation learning.\", \"Obtains impressive results on several tasks, including speech recognition and speaker identification.\"], \"weaknesses\": [\"Presentation of many details is unclear. For example, the definition of \\\"content\\\" and \\\"other\\\" is never clearly stated. Also, the model description is very brief, leaving many details to cited papers or the imagination (for example, is prosody ever/always/sometimes considered \\\"content\\\"?). Either the writing should be much more precise or the paper should include equations specifying all of the model components. See some other specific questions below.\", \"The key claimed contribution is that the model encodes both linguistic and non-linguistic information and that these are disentangled into the two encoders' representations. 
However, the results don't quite show this, since the results on tasks are mixed and the analyses don't really demonstrate disentanglement (again see questions below). Overall, I don't see the community starting to use this model as a replacement for other currently popular models.\", \"Some of the experiments do not, as far as I can tell, show the claimed findings (see details in \\\"Questions\\\" below).\", \"The writing is in general hard to follow at times, in part due to many grammatical errors.\"], \"questions\": [\"The paper states that WavLM is the \\\"previous SOTA method\\\". By what measure is WavLM SOTA? On what task(s)?\", \"I don't follow the sentences \\\"As a result, the model cannot fully leverage all layers ... within a single embedding.\\\" nor the following sentence \\\"The strategy of dividing the layers...\\\" Can you clarify what is meant there?\", \"The description of the split and append layer is a bit hard to follow.\", \"In Eq. 1, the index d is never used in the summand. Also, should \\\"MPL\\\" be \\\"L_MPL\\\"?\", \"In Eq. 3, what exactly are Student^PN and Teacher^RDINO?\", \"In Table 1, where are the results for FBANK and other competitor methods obtained from? Citations should be provided. I also suggest including MS-HuBERT since JOOCI is based on it, and ideally also data2vec which has good results on many SUPERB tasks (but please let me know if you think these would not be relevant for some reason).\", \"I don't quite follow the sentence \\\"We augment the data very lightly, so not to interfere with the content encoder a lot and divide its capacity.\\\"\", \"The description of the main results in Section 3.2 seems a bit misleading. The paper states that the \\\"results clearly indicate that JOOCI outperforms the current state-of-the-art (SOTA) models on the majority of tasks, except few ...\\\". 
However, in Table 1 JOOCI appears to outperform other models on exactly half the tasks, and it is never explained in what sense those models are SOTA (though they are clearly commonly used models).\", \"I do not understand the purpose of the comparison in Table 2, since JOOCI is not an alternative to adapters. Also, \\\"Houlsby\\\" and \\\"CHAPTER\\\" need to be defined.\", \"How does Figure 2 show the effect of data augmentation? Is there a pair of curves that differs only in the use of data augmentation?\", \"For Figure 2, more information is needed about the y-axis. How is CCA similarity defined? How are the word labels encoded and how many words are there? There has been prior work using CCA similarity for layer-wise analyses, e.g. Pasad et al., \\\"Comparative layer-wise analysis of self-supervised speech models,\\\" ICASSP 2023. Figure 2 seems similar to some of this prior work, and so it would also be helpful to state how your CCA-based analysis is the same or different, and whether your HuBERT results are similar to Pasad et al.'s.\", \"The ablation study in Sec. 4.1 is a bit confusing to me. It claims to separately show the effect of DGRL and data augmentation, but as far as I can tell these two variables are changed simultaneously in the experiments.\", \"In Table 3, why are the \\\"-\\\" results not included? If those could be included, they could help to show to what extent JOOCI-C and JOOCI-O specialize for linguistic vs. non-linguistic information.\", \"In Section 4.2, I have trouble following the first paragraph. What kind of information is considered \\\"higher-level\\\" in the \\\"other\\\" branch, and what is the \\\"same trend\\\" that is referred to here?\", \"Section 4.3 claims to \\\"prove that JOOCI is able to disentangle content and other information\\\", but I don't follow how the results show this. 
(Also, the word \\\"prove\\\" is too strong here, as in most descriptions of empirical findings.)\", \"In Table 5, what is the difference between the experiments in the last two lines (both labeled \\\"JOOCI (6-11)\\\"?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This submission introduces a framework for distinct representation learning of \\\"content\\\" and \\\"other\\\" properties in speech. The authors report improved performance on certain SUPERB tasks compared to other systems. Additionally, the submission includes comparisons with adapters, ablation studies on encoders, data augmentation, and learned representations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The research community is highly interested in the topic of speech representation learning.\", \"The proposed method's evaluation on certain SUPERB tasks yielded better results compared to the cited systems.\", \"The discussions and comparisons presented are technically sound.\"], \"weaknesses\": [\"Major issues:\", \"The model's effectiveness is unconvincing. 
The baselines cited are outdated and not state-of-the-art, and the model's performance on the semantic tasks is not better.\", \"The paper's discussion of different model architectures is shallow, limiting its contribution and making it difficult to draw general conclusions.\"], \"minor\": [\"Figure 1 could be simplified by removing the hyperparameters.\", \"The discussion of \\\"Data augmentation\\\" in Line 52 seems out of place, as the initial focus was on model architecture for speech representation learning.\"], \"questions\": [\"Can you offer insights into the relationship between SUPERB downstream task performance and model architecture designs?\", \"How do your results compare to recent speech representation work that has also been evaluated on SUPERB?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you again for helping to improve the paper.\", \"comment\": [\"### Weaknesses:\", \"Presentation of many det....\", \"We will add a subsection in section 2 in the final version, clearly explaining what is meant by the other and content information present in the input speech, accompanied by a diagram or table. This section will come before the components subsection (section 2.1) so that readers understand what the different encoders are doing.\", \"The key claimed contribution is that ...\", \"We have not claimed any active disentanglement; rather, it was an observation we made, given the empirical results of Table 5 and, to some extent, Table 4.\", \"When using JOOCI-O-DGRL, the performance on the PR (content task) is close to 100%. And when using JOOCI-C, the performance on the SID task is poor. JOOCI-C maximizes content information, while JOOCI-O-DGRL is trained to maximize other information (speaker information, using RDINO as the teacher).\", \"Note that the other encoder in JOOCI is not meant to encode speaker information alone. 
The DGRL module is used as a regularizer to avoid overfitting on the speaker-based tasks (and should not be seen as a disentanglement module intended to improve performance on the speaker tasks). The DGRL module resulted in improved performance on the emotion recognition task and a slight reduction on the speaker-based tasks.\", \"The writing is in general hard to...\", \"The reviewer is right, and we are thankful for all the comments. We have tried to revise the writing and hope to revise it again after collecting all the feedback during the rebuttal phase.\", \"---------------------------\", \"### Questions:\", \"The paper states tha...\", \"This notion of WavLM as SOTA when evaluated on a range of tasks is well accepted in the speech community. The reviewer can confirm the same from the official SUPERB benchmark website. For some reason the link is down (https://superbbenchmark.org/leaderboard); therefore, we request the reviewer to please use the backup link: https://superbbenchmark.github.io/#/leaderboard. The WavLM series of models are at the top.\", \"SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labelled data. The official paper is at: https://arxiv.org/pdf/2105.01051\", \"I don't follow the senten...\", \"Please re-read from line 82 in the revised version, though we recommend re-reading the introduction section starting at line 41.\", \"The description o...\", \"We have tried to show the working of the split-and-append layer in Figure 1. We will add a dedicated paragraph with the heading split-and-append layer in the components section (section 2.1) discussing it in detail.\", \"In Eq. 1, the index d is ...\", \"We are unable to understand the question and request further clarification.\", \"In Eq. 3, what exactly ....\", \"PN and RDINO are the student and teacher modules used. We have tried to clarify this in line 190. 
We will clarify further if the reviewer recommends.\", \"In Table 1, where are...\", \"We will add the citations as suggested in the final version.\", \"MS-HuBERT and data2vec are shown to maximize content information. The same is observed in their strong performance on the ASR and PR tasks. Both focus solely on maximizing content information, while JOOCI aims to jointly optimize both content and other information, aligning more closely with WavLM's objectives. WavLM used data augmentation to boost performance on the other tasks, such as speaker and emotion.\", \"When the shared and content encoders are frozen, the results for the content-based tasks are effectively the same as MS-HuBERT's. For the reader\\u2019s reference, the corresponding performance on the ASR and PR tasks is provided in Table 5. To ensure clarity, we will revise the table caption in the manuscript to explicitly highlight this equivalence.\", \"I don't quite follow the senten....\", \"We have tried to clarify this in the revised version and sincerely request the reviewer to please read the section starting at line 244. When fine-tuning the shared and content encoder with strong data augmentation, the content encoder would behave similarly to WavLM, i.e., fewer layers would be used to encode content information, since the earlier layers would try to remove the noise from the audio.\", \"The description of the....\", \"We would like to clarify that the difference in performance is minuscule compared to WavLM, and the reason could be the very large batch sizes used by WavLM during the evaluation. This is mentioned in the WavLM paper, Table IX (https://arxiv.org/pdf/2110.13900), and changing it is not recommended, as shared by the authors of SUPERB in one of their issues: https://github.com/s3prl/s3prl/issues/360#issuecomment-1155008924. Therefore, these small differences on the semantic tasks could just be noise.\", \"Furthermore, we request the reviewer to please check Table 4. 
These gains shrink further, and JOOCI can even surpass WavLM, when using the layers with higher content information.\"]}", "{\"title\": \"Clarifying JOOCI\\u2019s Position, Effectiveness, and Comparisons on the SUPERB Benchmark.\", \"comment\": [\"### We thank the reviewer for their comments and the opportunity to address their concerns.\", \"We kindly ask the reviewer to specify what they found unconvincing. The baseline (WavLM) we compare to is state-of-the-art on the SUPERB benchmark leaderboard: https://superbbenchmark.github.io/#/leaderboard. We would appreciate it if the reviewer could point to specific models or methods for additional comparison. For semantic tasks, JOOCI\\u2019s performance is very close to that of WavLM, and we note that WavLM\\u2019s use of large batch sizes likely contributes to these minor gains. Therefore, these small differences on the semantic tasks could just be noise. As shown in Table 6, by using only the layers with higher content information, a property of JOOCI's content encoder, we can close the gap even further on one semantic task (IC) and surpass WavLM on the other semantic task (SF). For the IC task, the difference is only 0.02.\", \"We also request that the reviewer share specific model architectures they believe would strengthen the discussion and comparisons, allowing us to draw more general conclusions in the paper.\", \"We will address the reviewer\\u2019s comment on Figure 1 in the revised version. Since WavLM (the previous SOTA on SUPERB) is trained with data augmentation, we apply data augmentation in JOOCI for a fair comparison, given the smaller, cleaner 960-hour LibriSpeech pre-training data used.\", \"We will add a section discussing model architectures and their relation to the SUPERB benchmark in the revised version, as suggested by the reviewer.\", \"WavLM is one of the most recent models aimed at learning representations across orthogonal tasks such as SID and ASR. 
Compared to WavLM, our method demonstrates significant improvements on both tasks. Data2Vec, another method, achieves better results on ASR but is pre-trained specifically to maximize content information. Additionally, on the PR task, JOOCI outperforms Data2Vec, showcasing the effectiveness of the content encoder initialization we used.\", \"We hope this addresses the reviewer\\u2019s concerns, and we appreciate any further feedback.\"]}", "{\"comment\": \"Thank you for the responses and revisions. I think the process has helped to clarify the contribution of the work. Besides the point about disentanglement, however, I cannot find responses to all of my questions/comments in the revised version or in the responses to reviewers. However, I may be missing something. If you could point out where in the revised version each of my questions is addressed, or respond here to the ones that aren't, I would be happy to consider revising my review. As of now I remain happy with my initial rating.\"}", "{\"title\": \"Rebuttal on JOOCI\\u2019s Framework and Comparisons to Existing Models. JOOCI is not doing any disentanglement.\", \"comment\": [\"### We thank the reviewer for their thoughtful comments and the opportunity to address their questions and observations.\", \"Firstly, we would like to clarify that JOOCI does not aim to remove other information from the content or employ a disentanglement module. Instead, JOOCI optimizes both content and other information jointly, using separate paths/encoders to maximize the representation quality without attempting a strict separation, as seen in WavLM. 
In fact, the concept of disentanglement is only mentioned in Section 4.3 of the paper (in the second-to-last section before the conclusion) as part of an ablation study on understanding learned representations.\", \"We are seeking further clarification regarding the statement: \\\"JOCCI relies on a pretrained method RDINO for training the other encoder, whereas baseline methods such as HuBERT do not.\\\" Many state-of-the-art methods, such as Vall-E[1], also use pretrained models (Encodec) as teachers. We would appreciate the reviewer\\u2019s insight on how this is considered a weakness in the context of our work.\", \"Regarding the comparison with ContentVec, SPIN, and Data2Vec, we believe these methods serve different purposes from JOOCI, focusing primarily on content representation without optimizing for both content and other information. Our content encoder is initialized from MS-HuBERT, which, as demonstrated on the SUPERB benchmark, outperforms ContentVec, SPIN, and Data2Vec in the phoneme recognition (PR) task. JOOCI, however, is designed to perform competitively on both PR and speaker identification (SID) tasks, demonstrating its ability to effectively handle both types of information. A direct comparison would be WavLM.\", \"ContentVec and SPIN are models that fine-tune other pre-trained models, whereas JOOCI focuses solely on the pre-training stage. 
Given this distinction, we are unsure how including these models would contribute to strengthening the claims of our paper.\", \"We would appreciate the reviewer\\u2019s perspective on how evaluating the impact of random versus MS-HuBERT initialization, as well as training the model with just the GRL loss and without the RDINO teacher, might help corroborate the effectiveness of JOOCI.\", \"We hope this addresses the reviewer's concerns, and we appreciate any further feedback that may help clarify or strengthen our presentation.\", \"[1] Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers\"]}", "{\"comment\": \"Would it be correct to say that the GRL module tries to remove the content information from the other encoder?\\n\\\"Its purpose is to ensure that the other encoder does not overly specialize in speaker information alone\\\" but the RDINO teacher is trained to generate speaker embeddings. How does using RDINO as a teacher for the other encoder ensure that the other encoder does not overly specialize in speaker information alone?\", \"line_22\": \"\\\"pre-training data used (960 hours) by a significant margin when evaluated on a range of speech downstream tasks in the SUPERB benchmark\\\" in the abstract implies JOCCI is better than other models trained on the same amount of pretraining data. This is not true, because JOCCI uses RDINO as a teacher, which is trained on additional data (2.5K hours).\\n\\n\\\"To clarify, while both ContentVec and SPIN rely on HuBERT initialization, JOOCI does not require such initialization. Instead, using MS-HuBERT for initialization in JOOCI is a choice made primarily to save computational resources.\\\" So both methods use pretrained models as initialization; what makes ContentVec and SPIN \\\"rely\\\" on HuBERT initialization while JOCCI doesn't rely on its initialization? Most models start from pretrained models nowadays for computational reasons. 
\\n\\n\\\"To further illustrate this, ContentVec and SPIN could also use JOOCI's content encoder for initialization instead of HuBERT, which highlights that JOOCI operates at an earlier stage than ContentVec and SPIN in the pre-training process.\\\" JOCCI can also use HuBERT/SPIN/ContentVec as initialization. ContentVec, SPIN and JOOCI are all pertraining methods that start from an SSL model and further finetune it, what do you mean by \\\" JOOCI operates at an earlier stage than ContentVec and SPIN in the pre-training process\\\"? I would argue the opposite, JOCCI needs MS-HuBERT which needs HuBERT as initialization whereas ContentVec, and SPIN require one less stage of pretraining to get to the final model.\"}", "{\"title\": \"Clarification on the presentation and writing.\", \"comment\": \"### We sincerely thank the reviewer for their detailed questions. They were very helpful.\\n\\n* Firstly, we would like to clarify that JOOCI does not aim to remove other information from the content or employ a disentanglement module. Instead, JOOCI jointly optimizes content and other information by using separate paths/encoders, focusing on maximizing representation quality rather than enforcing a strict separation, as seen in WavLM. \\n * We have discussed this in detail with reviewer \\\"rAqx\\\" for your kind reference. If there are still any concerns or points of dissatisfaction, please let us know, and we would be happy to address them further.\\n\\n* We have worked to improve the writing and presentation in the revised version, addressing a lot of questions raised by the reviewers to provide better clarity and structure overall. We will aim to still make it better if the reviewer still has questions and suggestions. 
\\n\\n-------\\n\\n#### Once again, thank you for your thorough feedback, which has been invaluable in refining the manuscript and enhancing its clarity and presentation for future readers.\"}", "{\"title\": \"Updated response on the weaknesses.\", \"comment\": [\"**Reviewer:** \\\"The baseline comparisons are limited. There have been other attempts to remove other information such as speaker information from the self-supervised representations such as contentvec and SPIN.\\\"\", \"**Response:** We hope our previous response makes it clear that JOOCI is not removing speaker information from the content encoder, but is jointly optimizing both the speaker (other) and content information [1]. This is the reason for its high performance on both speaker and content-based tasks, as shown in Table 1. JOOCI does not have to pick one type of information over the other. In ContentVec and SPIN, the authors choose content over other (speaker) information.\", \"**Reviewer:** \\\"Even the MS-HuBERT model used for initializing JOCCI is missing from Table 1.\\\"\", \"When the shared and content encoders are frozen, the results for the content-based tasks are effectively the same as MS-HuBERT's. For the reader\\u2019s reference, the corresponding performance on the ASR and PR tasks is provided in Table 5. To ensure clarity, we will revise the table caption in the manuscript to explicitly highlight this equivalence.\", \"**Reviewer:** \\\"JOCCI relies on a pretrained method RDINO for training the other encoder whereas baseline methods such as HuBERT do not.\\\"\", \"**Response:** We hope our previous response [2] clarifies that the gains, as shown in Table 1, are not solely due to data volume but also reflect JOOCI's ability to jointly optimize content and other information effectively.\", \"1. 
As we mentioned earlier, SPIN can be applied on MS-HuBERT to further improve its content representation, since the phonetic content resides in the top layers, which is not the case for data2vec, as explained by the authors of the SPIN paper in Section 3.2.\", \"2. To address this concern, we kindly request the reviewer to refer to Table 1, specifically the last row comparing WavLM+, which is trained on 94,000 hours of data (compared to just 960 + 2500 hours). The comparable performance on SID (a slight improvement) and ASV (a slight degradation) demonstrates that the gains are not solely due to data volume but also reflect JOOCI's ability to jointly optimize content and other information effectively. We will revise the abstract to clarify this distinction to prevent any potential misunderstanding.\", \"-----\", \"#### We hope this addresses the reviewer's concerns related to the weaknesses. Please let us know if you have any further concerns.\"]}", "{\"summary\": \"The paper proposes to disentangle the content (\\\"what is being said\\\") and other (\\u201chow it is expressed\\u201d) information present in the speech data. The paper proposes the JOCCI framework, which uses two submodules focused on maximizing the content and the other information. The content module is trained with a self-supervised objective, whereas the other module is optimized with a teacher-student objective. A regularization loss is added to minimize the information overlap in the two submodules.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The paper is well-written and addresses an important challenge of disentangling content and other information in the speech representation.\\n\\n2) The model performs well on the SUPERB benchmark and outperforms HuBERT and WavLM.\", \"weaknesses\": \"1) The baseline comparisons are limited. 
There have been other attempts to remove other information such as speaker information from the self-supervised representations such as contentvec[1] and SPIN[2]. Even the MS-HuBERT model used for initializing JOCCI is missing from Table 1.\\n\\n2) JOCCI relies on a pretrained method RDINO for training the other encoder whereas baseline methods such as HuBERT do not. \\n\\n[1] Contentvec: An improved self-supervised speech representation by disentangling speakers\\n\\n[2] Self-supervised Fine-tuning for Improved Content Representations by Speaker-invariant Clustering\", \"questions\": \"1) How does JOCCI compare to the ContentVec, SPIN, and Data2vec models?\\n\\n2) What is the impact of initialization on model performance, e.g., random vs. MS-HuBERT initialization?\\n\\n3) Can the model be trained with just the GRL loss and without the RDINO teacher?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"A gentle reminder.\"}", "{\"title\": \"Response (Contd).\", \"comment\": [\"These changes show how stable t....\", \"We would like to clarify that the impact of removing the GRL module on the other encoder is shown in Table 5. As demonstrated, the GRL module acts as a regularizer: its presence reduces performance on the SID task but improves performance on the ER task. The GRL module does not and cannot affect the learning of the content encoder because there is NO gradient flow, during backpropagation, from the other encoder to the content encoder; thus, the other encoder cannot influence the content encoder, and vice versa.\", \"We hope this explanation clarifies the role of the GRL module and its effects on model performance.\"]}", "{\"summary\": \"This paper proposes a method for speech representation learning, particularly for disentangling content information from non-content information. 
The paper reported strong experimental results on the SUPERB benchmark.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The proposed method is sound. The experimental results on a subset of the SUPERB benchmark are strong.\", \"weaknesses\": [\"The novelty is limited. The proposed method is very close to a number of existing works, e.g.:\", \"Chan et al., Content-Context Factorized Representations for Automated Speech Recognition, InterSpeech 2022.\", \"Zhao et al., CCSRD: Content-Centric Speech Representation Disentanglement Learning for End-to-End Speech Translation, EMNLP 2023.\", \"The main claim is flawed. The paper claims SOTA on SUPERB. However, it only reports experimental results on a subset of the tasks from SUPERB (7 out of 10).\", \"The writing needs improvements:\", \"Importantly, the name \\\"other encoder\\\" is a poor choice, which causes a lot of confusion when reading. Some simple choices such as \\\"non-content encoder\\\" would do a much better job.\", \"Secondly, many small claims are questionable throughout the paper. A few examples:\", \"Abstract: content and non-content information are orthogonal -- in the words from the paper, \\u201chow it is expressed\\u201d depends on \\u201cwhat is being said\\u201d\", \"Sec 2.2: \\\"Since JOOCI uses separate learnable parameters, the losses are summed directly without requiring additional hyperparameter tuning.\\\" -- The previous paragraph said the opposite: \\\"The GRL scale the gradients during backpropagation by a factor of 1/10, preventing interference with the other loss.\\\"\", \"Lacks details of the model. 
While referencing prior works is great, for completeness of the paper you should describe the details of your model clearly, so that readers understand your approach without having to jump to many other papers.\"], \"questions\": \"See Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes an approach to disentangle linguistic information from non-linguistic information by using an additional speaker model for supervision.\\n\\nI recommend a rejection because of the unsupported claims in the paper.\\n\\nAll reviewers raised concerns about unsupported claims. Whether the representations are properly disentangled is not well supported, not to mention that disentanglement by itself is not a very well-defined concept. Whether it is theoretically possible to disentangle linguistic and non-linguistic information is another inherent claim that is not supported.\\n\\nThe biggest argument is around the term \\\"state of the art.\\\" Most reviewers are not happy with the claims involving the state of the art. I personally don't mind using the term to refer to the current \\\"state.\\\" However, the paper is using the term to refer to the \\\"best.\\\" It becomes tricky to claim the best because there isn't a single metric in the evaluation. (It's also generally not useful to talk about the best in this context anyway.) It's much more convincing to claim victory over a strong baseline, rather than being the best. Besides, since the paper uses a speaker model, it might not even be fair to claim victory over those that don't.\\n\\nOverall, a lot of care is needed in the experiments and in the writing.\", \"additional_comments_on_reviewer_discussion\": \"The discussion was on point and healthy. However, the authors failed to address all the questions during the rebuttal. 
The main problem, as summarized in the metareview, was around the attribution of improvements.\"}", "{\"title\": \"Response.\", \"comment\": [\"### We thank the reviewer for their comments and the opportunity to address their questions and observations.\", \"Line 153 in the paper: The role of content decoder.......\", \"We would like to clarify that the GRL module serves primarily as a regularizer rather than a disentanglement module. Its purpose is to ensure that the other encoder does not overly specialize in speaker information alone. As demonstrated in Table 5, the GRL module results in a trade-off, slightly reducing SID performance while improving ER performance, highlighting its role as a balancing mechanism rather than a strict disentanglement tool.\", \"Additionally, it discourages content information in the other encoder, and not the other way around as proposed in ContentVec and SPIN. We would like to emphasize that JOOCI does not remove any information from the content encoder; it simply optimizes it using the MMPL loss, similar to MS-HuBERT or HuBERT.\", \"The paper claims that JOOCI is more data efficient than WavLM and HuBERT......\", \"We kindly ask the reviewer to clarify where in the paper we claim that JOOCI is more data-efficient than HuBERT or WavLM, as we would like to better understand the source of this impression. Rather, we focus on the framework's ability to jointly optimize content and other information, not on data efficiency as a key claim.\", \"Regarding the suggestion of using WavLM-large as a teacher and HuBERT-base as a student, we appreciate the perspective. However, such an approach would inherit the same limitations as the original HuBERT or WavLM frameworks, where the encoder\\u2019s capacity is split between the initial and later layers. In contrast, JOOCI adopts a fundamentally different framework by separately optimizing content and other representations. 
For instance, using the later layers of WavLM-large to train the content encoder and the initial layers to train the other encoder ensures that both types of information are optimized without dividing the encoder\\u2019s capacity within a single framework. This joint optimization is not possible with a single encoder, as in HuBERT or WavLM.\", \"JOOCI performs better than HuBERT/WavLM: yes; is it more data efficient: no.\", \"While it may be valid to argue that JOOCI benefits from leveraging RDINO as a pre-trained component, we request the reviewer to view JOOCI as a progression in the framework for learning comprehensive speech representations. Much like comparing GPT-3 to GPT-2 considers advancements in both scale and approach, JOOCI represents a next step in optimizing both content and other information jointly, which sets it apart from prior frameworks like HuBERT or WavLM.\", \"Since the MS-HuBERT model performance is not inc......\", \"When the shared and content encoders are frozen, the results for the content-based tasks are effectively the same as MS-HuBERT's. For the reader\\u2019s reference, the corresponding performance on the ASR and PR tasks is provided in Table 5. When using light data augmentation, we see small improvements on the other tasks and a small degradation on the content tasks. To ensure clarity, we will revise the table caption in the manuscript to explicitly highlight this equivalence.\", \"Again it does not matter if MS-HuBERT ou....\", \"We would like to emphasize that the goal of JOOCI is not to maximize performance on content-based or speaker-based tasks individually but to optimize both jointly, similar to the aim of WavLM in learning comprehensive speech representations. However, while WavLM achieves this by dividing the encoder's capacity across layers, JOOCI takes a different approach by employing separate paths for these orthogonal types of information. 
The results presented in the paper demonstrate that JOOCI is a more effective choice for achieving this balance, offering improved representation quality for both content and non-content tasks. Table 5 shows the comparison as explained earlier.\", \"ContentVec and SPIN initialize their.....\", \"To clarify, while both ContentVec and SPIN rely on HuBERT initialization, JOOCI does not require such initialization. Instead, using MS-HuBERT for initialization in JOOCI is a choice made primarily to save computational resources. To further illustrate this, ContentVec and SPIN could also use JOOCI's content encoder for initialization instead of HuBERT, which highlights that JOOCI operates at an earlier stage than ContentVec and SPIN in the pre-training process. Additionally, while ContentVec and SPIN explicitly aim to maximize performance on content-based tasks, JOOCI does not make this trade-off. JOOCI is designed to optimize both content and non-content information jointly, without having to prioritize one over the other.\"]}", "{\"comment\": \"Thank authors for the response.\\n\\nUnfortunately, I'm confused by the author response, stating that my summary misunderstood the intention of the paper. \\\"JOOCI is not designed as \\\"speech representation learning for disentangling content information from non-content information,\\\" \\\" -- isn't that the motivation stated in the abstract of the paper?\"}", "{\"title\": \"Response\", \"comment\": [\"### We thank the reviewer for their follow-up and appreciate the opportunity to clarify further.\", \"The motivation described in the abstract is focused on optimizing the joint representation of content and other information by leveraging two separate paths/embeddings within a self-supervised learning framework. This approach is not intended for explicit disentanglement but rather for learning orthogonal representations without imposing a separation of information. 
This is different from current methods, which encode other information in the earlier layers of the encoder, resulting in the model dividing its total capacity and limiting its ability to build complex hierarchical features effectively for other information (a characteristic of deep learning). Our results show that JOOCI is able to achieve competitive performance on both PR and SID tasks. This is different from previous methods, such as data2vec, which maximizes PR performance while lagging behind on SID, or WavLM, which finds a local minimum for PR and SID (dividing its total capacity). JOOCI achieves better performance jointly on PR and SID compared to earlier methods.\", \"If there are specific lines in the paper that may have implied a disentanglement motivation, we would be grateful if the reviewer could point them out. This feedback would help us address any potential ambiguities and refine our presentation accordingly in the future.\"]}", "{\"title\": \"Addressing Misunderstandings Regarding JOOCI\\u2019s Methodology and Novelty. JOOCI is not doing any disentanglement.\", \"comment\": [\"### We sincerely thank the reviewer for their comments and for the opportunity to clarify certain aspects of the JOOCI framework's design and novelty.\", \"We will clarify the main claim and clearly mention specific tasks instead of using the SUPERB benchmark term.\", \"Regarding the reviewer's remarks on JOOCI's novelty and its alleged similarity to previous work, we would appreciate further clarification. The reviewer has highlighted two specific papers, which both employ a disentanglement module and use supervised data during training in some way. However, JOOCI is fundamentally different: it does NOT include any disentanglement module, nor is it trained with supervised data during pre-training. 
This distinction contrasts with the reviewer\\u2019s assertion about JOOCI\\u2019s similarity to the referenced works.\", \"Additionally, we respectfully note that the reviewer's summary of JOOCI appears to contain a misunderstanding. Specifically, JOOCI is not designed as \\\"speech representation learning for disentangling content information from non-content information,\\\" as the summary suggests. In fact, the concept of disentanglement is only mentioned in Section 4.3 of the paper (in the second-to-last section before the conclusion) as part of an ablation study on understanding learned representations. This brief remark reflects an observed behavior in the model\\u2019s learned representations rather than a deliberate design goal or imposed structure during pre-training.\", \"We will add the details of the model as suggested. Lastly, as suggested, we will improve the writing of the paper. We have updated the rebuttal, which does address this issue to a major extent.\", \"We hope this clarification addresses the reviewer's concerns and would be grateful for any additional feedback they might provide on this matter.\"]}", "{\"comment\": \"1) \\\"Firstly, we would like to clarify that JOOCI does not aim to remove other information from the content...\\\"\", \"Line 153 in the paper\": \"\\\"The role of content decoder is to discourage the other encoder from learning features necessary for solving tasks that require content information\\\" so the GRL is used to minimize the content information in the other encoder explicitly. The paper content and your claim here about not removing information across the encoder seem contradictory to me.\n\n2) The paper claims that JOOCI is more data efficient than WavLM and HuBERT. By using a pretrained model, RDINO, trained on the VoxCeleb dataset, the JOOCI model effectively sees more data than HuBERT/WavLM. 
What if I train a model initialized with HuBERT and use WavLM-large as a teacher but only use Librispeech 960 hours to pretrain my model? It would be unfair to call it more data efficient than HuBERT trained from scratch. \nAnother reason is the simplicity of adapting JOOCI to a new dataset/language: JOOCI requires MS-HuBERT to initialize and then RDINO for the other encoder to be trained in the new language, whereas HuBERT/WavLM requires just one model to be trained. \n\nJOOCI performs better than HuBERT/WavLM: yes; is it more data efficient: no.\n\n3) \"Regarding the comparison with ContentVec, SPIN, and Data2Vec, we believe these methods serve different purposes from JOOCI, focusing primarily on content representation without optimizing for both content and other information. Our content encoder is initialized from MS-HuBERT, which, as demonstrated on the SUPERB benchmark, outperforms ContentVec, SPIN, and Data2Vec in the phoneme recognition (PR) task. JOOCI, however, is designed to perform competitively on both PR and speaker identification (SID) tasks, demonstrating its ability to effectively handle both types of information. A direct comparison would be WavLM.\"\n\nSince the MS-HuBERT model performance is not included, JOOCI may perform worse than MS-HuBERT on some of the PR tasks as it also focuses on the speaker tasks. Including MS-HuBERT/data2vec performance could help readers understand the trade-off between focusing on PR tasks vs both phone and speaker tasks. Why is MS-HuBERT not included? \n\nAgain, it does not matter if MS-HuBERT outperforms ContentVec, SPIN, and Data2Vec. The paper focuses on JOOCI and not on MS-HuBERT, so the comparisons should be with JOOCI, and the paper does not provide results supporting that JOOCI outperforms MS-HuBERT. \n\n4) \"ContentVec and SPIN are models that fine-tune other pre-trained models, whereas JOOCI focuses solely on the pre-training stage. 
Given this distinction, we are unsure how including these models would contribute to strengthening the claims of our paper.\"\", \"ContentVec and SPIN initialize their models with HuBERT and then finetune them with their SSL objective, similar to what is being done here\": \"MS-HuBERT for initialization and then an SSL objective that focuses on separating the content and other information. ContentVec/SPIN also focus on the pretraining stage. Could you please give more details on how these approaches differ and why ContentVec and SPIN do not focus on the pretraining stage?\n\n5) \"We would appreciate the reviewer\\u2019s perspective on how evaluating the impact of random versus MS-HuBERT initialization, as well as training the model with just the GRL loss and without the RDINO teacher, might help corroborate the effectiveness of JOOCI.\"\n\nThese changes show how stable the training objective is and how much each part contributes to the performance. For example, no GRL loss could be a good experiment to support your claim that JOOCI focuses on maximizing the information in each encoder and not on minimizing the information across encoders.\"}" ] }
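The gradient reversal layer (GRL) debated in the exchange above has a simple mechanic worth spelling out: it is the identity in the forward pass and negates (and scales) the gradient in the backward pass, so the encoder beneath it is discouraged from learning features the attached decoder can exploit. Below is a minimal plain-Python sketch of just that mechanic; it is illustrative, not the JOOCI implementation, and the scaling-factor name `lam` is an assumption.

```python
def grl_forward(x):
    # Forward pass: the GRL passes features through unchanged, so the
    # decoder above it sees exactly what the encoder produced.
    return x

def grl_backward(grad_output, lam=1.0):
    # Backward pass: the incoming gradient is negated and scaled before it
    # reaches the encoder, pushing the encoder *away* from features that
    # help the decoder (e.g., discouraging content features in the
    # "other" encoder).
    return -lam * grad_output

# One manual step: a gradient of 0.5 arriving from the decoder reaches the
# encoder as -1.0 when lam = 2.0.
assert grl_forward(0.7) == 0.7
assert grl_backward(0.5, lam=2.0) == -1.0
```

In an autograd framework this same behavior is typically implemented as a custom function whose backward method applies exactly this negation.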
DnBjhWLVU1
Recovering Plasticity of Neural Networks via Soft Weight Rescaling
[ "Seungwon Oh", "Sangyeon Park", "Isaac Han", "Kyung-Joong Kim" ]
Recent studies have shown that as training progresses, neural networks gradually lose their capacity to learn new information, a phenomenon known as plasticity loss. Unbounded weight growth is one of the main causes of plasticity loss; it also harms generalization capability and disrupts optimization dynamics. Re-initializing the network can be a solution, but it results in the loss of learned information, leading to performance drops. In this paper, we propose Soft Weight Rescaling (SWR), a novel approach that prevents unbounded weight growth without losing information. SWR recovers the plasticity of the network by simply scaling down the weights at each step of the learning process. We theoretically prove that SWR bounds weight magnitude and balances weight magnitude between layers. Our experiments show that SWR improves performance on warm-start learning, continual learning, and single-task learning setups on standard image classification benchmarks.
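The scaling step described in the abstract can be illustrated with a minimal pure-Python sketch. The interpolation rule in `soft_rescale` below is a hypothetical stand-in, not the paper's exact SWR update: it shrinks each layer by a factor that pulls the Frobenius norm back toward its value at initialization, with strength `lam`.

```python
import math

def frob_norm(w):
    """Frobenius norm of a weight matrix given as a list of rows."""
    return math.sqrt(sum(x * x for row in w for x in row))

def soft_rescale(w, w_init, lam=0.1):
    """Hypothetical soft rescaling: shrink the weights by a factor that
    interpolates between 1 (no change) and ||W_init|| / ||W|| with strength
    lam, so repeated application keeps the norm near its initial value."""
    c = (1 - lam) + lam * frob_norm(w_init) / frob_norm(w)
    return [[c * x for x in row] for row in w]

w_init = [[0.1, -0.2], [0.05, 0.15]]
w_grown = [[1.0, -2.0], [0.5, 1.5]]  # weights after many gradient steps
w_scaled = soft_rescale(w_grown, w_init, lam=0.1)

# The rescaled norm lies strictly between the initial and the grown norm,
# and the layer's output direction is preserved up to a per-layer scale.
assert frob_norm(w_init) < frob_norm(w_scaled) < frob_norm(w_grown)
```

In this toy rule the scale factor approaches 1 as the norm approaches its initial value, so repeated application keeps the magnitude bounded without zeroing out what the layer has learned, which is the kind of property the abstract claims for SWR.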
[ "loss of plasticity", "plasticity", "continual learning", "online learning" ]
Reject
https://openreview.net/pdf?id=DnBjhWLVU1
https://openreview.net/forum?id=DnBjhWLVU1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uKaLJQLgQk", "RJ9013qfD0", "OsDn3sWVAD", "LKJafxoZnE", "DxjKv7bRcL", "CuGsf94ImT" ], "note_type": [ "meta_review", "decision", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1734451578019, 1737524305111, 1730555672686, 1730713813677, 1730700931437, 1730089655857 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14266/Area_Chair_wqCH" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14266/Reviewer_QFvH" ], [ "ICLR.cc/2025/Conference/Submission14266/Reviewer_quxT" ], [ "ICLR.cc/2025/Conference/Submission14266/Reviewer_eg35" ], [ "ICLR.cc/2025/Conference/Submission14266/Reviewer_CTTf" ] ], "structured_content_str": [ "{\"metareview\": \"The paper addresses the issue of plasticity loss, also known as intransigence, in neural networks. The authors identify unbounded weight growth as a key contributor to this issue and introduce a novel regularization technique called Soft Weight Rescaling (SWR) to overcome it. SWR aims to limit weight magnitudes and ensure balance across different layers without compromising model performance. The authors offer a theoretical analysis to confirm that SWR effectively maintains bounded and balanced weights, which are desirable properties in neural networks. The empirical results validate the effectiveness of SWR in various scenarios, including warm-starting, continual learning, and generalization, demonstrating its ability to preserve both plasticity and stability.\\n\\n**Strengths:** All reviewers agreed that the paper addresses an interesting and timely problem. They praised the clear and methodical writing style and the method's simplicity. Moreover, the reviewers found the experimental results and analysis presented in the paper to support the claims and provide insightful observations effectively.\\n\\n**Weaknesses:** The paper was primarily criticized for its limited experimental setup, which restricts its broader impact. 
Notably, the datasets employed, such as CIFAR10, CIFAR100, MNIST, and TinyImageNet, are small-scale. Furthermore, the use of a VGG model in the experiments resulted in performance metrics\\u201472% on CIFAR10 and 40% on CIFAR100\\u2014that are significantly lower than current state-of-the-art results on these datasets. This discrepancy places the paper at a substantial disadvantage when compared to more recent literature. Additionally, the reviewers highlighted the absence of comparisons with related regularization methods, such as L2 regularization (weight decay) or more contemporary approaches that combine L2 and Layer Normalization (Lyle et al., 2024). Lastly, the paper lacks comprehensive ablation studies and a detailed sensitivity analysis of hyperparameters.\\n\\nAlthough the reviewers appreciated certain aspects of the paper, they unanimously agreed that the experimental setup was rudimentary and insufficient. Consequently, they concluded that the paper, in its current form, is not ready for publication. I enjoyed reading the paper and believe that the authors could significantly enhance their work based on the constructive feedback provided by the reviewers. Considering these factors, I recommend rejecting this paper. However, I recognize its potential and encourage the authors to continue refining their work.\", \"additional_comments_on_reviewer_discussion\": \"Unfortunately, the authors did not respond to the points raised by the reviewers. Given the consensus among reviewers on their feedback, there was little need for further discussion during the review period.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper focuses on the solution to recovering the plasticity of DNNs via weight regularization. The paper proposes a simple yet effective weight regularization method that prevents unbounded weight growth. 
The authors also provided theoretical and empirical insights into the technique, which support its generalization performance in different learning setups.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This work progressively establishes and justifies its framework, making this paper easy to follow.\", \"The results are promising; however, I have some concerns regarding the results, as discussed below.\"], \"weaknesses\": [\"One main drawback of the paper is its limited applicability. The authors made many assumptions (e.g., the network is affine, homogeneous with ReLU), which impede the contributions and the applicability of the paper in real-world scenarios.\", \"Some crucial statements are made without proper references. Furthermore, these statements conflict with the statements in various peer-reviewed and significant publications.\", \"The paper came up with many theorems and definitions without explaining the usage and necessity of these statements.\", \"Ablation tests according to Theorem 1 need to be conducted to verify the paper's significance.\", \"All in all, the aforementioned issues impede the contribution and significance of the paper's method. We ask the authors to consider these issues carefully. If the issues are addressed, the score can be modified.\", \"The experimental evaluations are not sufficient; the authors need to provide more experiments on large-scale datasets (ImageNet-1K, COCO, etc.) and across different model architectures (Vision Transformers, etc.).\", \"The hyper-parameter $\\\\lambda$ is proposed, but there are no experiments that consider the effect of $\\\\lambda$ on the boundedness of the weights before and after scaling.\", \"There should be a theoretical discussion about how to tighten the boundedness compared to other methods. 
For example, in Theorem 2, the authors show that $\\|W_t\\| \\neq B$, which is trivial and thus does not prove that the proposed method is better than others.\"], \"questions\": \"1. Can you discuss further the statement in L086: \\\"weight growth is inevitable in deep learning\\\"? We agree that a large weight norm impedes the model's generalization. However, this phenomenon usually occurs in the initial phase of training. It can be shown via empirical experiments [R1] or theoretically [R2, Theorem 1].\n2. Moreover, in the cited paper, the authors mentioned the phenomenon when the weight norm is large. However, the authors did not mention that the weight norm is high due to the progress of training (which is related to the plasticity effects of the pre-trained model, which is already trained). Please note that this statement is crucial to assess the paper's contribution and significance.\n\n[R1] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, Cho-Jui Hsieh, Large Batch Optimization for Deep Learning: Training BERT in 76 minutes, ICLR 2020.\n\n[R2] Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor, Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization, NIPS 2020.\n\n3. What is the necessity of the notation for the network layers defined in L130-136?\n4. In L156, $f_{\\\\theta^\\\\prime}(x) = k\\\\cdot f_{\\\\theta}(x)$ states that the model output is scaled proportionally; is this applicable to models with various types of activation functions, or only to linear activation functions?\n5. What is the meaning of Theorem 1? Why do we need to find many networks that are proportional to $f_{\\\\theta}$?\n6. In L216 - L218, can the authors discuss further the statement that the \\\"initial weight norm is small in most initialization\\\"? 
To be frank, this statement needs to be considered carefully (e.g., through ablation tests or empirical evaluations).\n7. In L245, why is $C$ set to 1? Does the performance differ if we set $C$ to different values? An ablation test over different initial values of $C$ should be made to verify the paper's method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper addresses the issue of plasticity loss in neural networks, where the capacity to learn new information diminishes over time due to unbounded weight growth. The authors propose a method called Soft Weight Rescaling (SWR), which mitigates this issue by scaling down the weights at each learning step, claiming to maintain the network's plasticity without losing previously learned information. Some experimental results, such as continual learning and single-task learning in image classification, demonstrate that SWR can enhance performance, outperforming existing weight regularization and re-initialization techniques.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper is easy to follow.\n2. I think the authors are focusing on an interesting topic, i.e. loss of plasticity, that is worth probing.\n3. The method proposed is simple and can be easily implemented in practice.\", \"weaknesses\": \"1. Unbounded weight growth is one of the main causes of plasticity loss, and the authors propose reducing weight magnitude through weight scaling. Reducing the weight magnitude could be a common implementation in training, where L2 is widely used. So I think the key here lies in comparing the proposed method to L2. However, after reviewing the text, I did not find a clear rationale for why we should choose the proposed method over L2. 
Could the authors provide specific cases that demonstrate the essence regarding how the proposed method targets improvements over L2 regularization?\\n\\n2. I notice that the authors define the rate of how much the model has changed from the initial state as the ratio between the Frobenius norm of the current weight matrix and that of the initial one. Could the author give more explanations regarding this metric? In my opinion, this metric may not well capture the extent of change in the model. For instance, applying weight regularization could significantly alter the weights, yet the model's performance may change only marginally. \\n\\n3. I have not found any theoretical insights regarding the claims made about magnitude boundedness and weight balance in the main text. However, I did locate some proofs in the appendix. Since these proofs appear to be one of the main contributions of the proposed work, I recommend that the authors reorganize the paper to better highlight this important content.\\n\\n4. I think the authors should improve the experiments presented in the paper. Firstly, the current training performance falls significantly below existing baselines, with VGG achieving only 0.72 on CIFAR-10 and below 0.4 on CIFAR-100, which is unacceptable. Secondly, the authors should broaden their experimental scope beyond VGG on CIFAR, MNIST, and TinyImage. It would be beneficial to include experiments relevant to current RL or NLP scenarios, especially where pre-trained models are commonly utilized. For now, I could barely sense the superiority of the proposed method.\\n\\n5. It would be helpful if the authors could release the code.\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I have not found any discussions about the limitations and potential negative societal impact. 
But in my opinion, this may not be a problem, since the work only focuses on the optimization in deep learning. Still, it is highly encouraged to add corresponding discussions.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce Soft Weight Rescaling (SWR), a novel weight regularization method that prevents unbounded weight growth to preserve information and maintain network plasticity. The theoretical analysis shows that SWR bounds weight magnitudes and balances them across layers without degrading model performance. Empirical evaluations, particularly with VGG-16, show that SWR improves generalization performance compared to other regularization methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is overall clearly written and the method is adequately described.\", \"The proposed method SWR is computationally more efficient than previously proposed methods.\", \"The experiment results and analysis provided in the paper are insightful.\"], \"weaknesses\": [\"The experimental results on smaller models are quite weak. For example, in warm-start and continual learning experiments, L2 (or S&P) seems to be better in most experiments (including the ones in the appendix). Even in Table 1, except for VGG, I wouldn't say the improvements are significantly higher since there's quite a bit of overlap with L2 in terms of standard deviations in MLP, and CNN cases. SWR only performs well on VGG which is not a very popular architecture even for vision-based experiments in this domain compared to ResNet. It would be interesting to see the comparison between SWR and baselines on bigger models. The assumptions of affine, conv layers in Theorem 1 are also strong and limit the applicability of SWR.\", \"I think the main novelty of the idea is limited and comes primarily from \\\"scaling the bias vectors according to a certain rule\\\". 
From Eq on line 220, one may assume that $W_l$ will attain a higher magnitude than $W_{init}$. As a result, $c_l \\\\approx 1 - \\\\lambda$, which implies that SWR would behave like a layer-wise version of S&P with weight_scale = $1 - \\\\lambda$ and no initial weights.\", \"Missing baselines: Lyle et al. 2024 recently also showed that the L2 + Layer norm generally outperforms the majority of the existing methods. Lee et al. 2024 have also shown that their method results in superior generalization performance on these benchmarks.\", \"Some grammatical/clarity related issues:\", \"Line 161: investigated the following theorem shows that\", \"Line 213: the change rate\"], \"questions\": \"There are some claims made in the paper that require evidence/clarification:\\n- While the proposed method is computationally more efficient, it is also true that the overhead cost of regularization methods like L2 is *not* significantly high as claimed in the paper unless higher-order computation is involved. In fact, L2 is quite common even in large-scale models. Some methods only involve computing scores based on the layer outputs which is not *very* expensive.\\nThe computational cost is only significant if there is higher-order computation involved.\\n- Line 382-386: We don't entirely lose previous knowledge in S&P. Rather, adding noise ultimately helps in better generalization. Even in the case of Lee et al. 2024 paper, they showed better generalization for a re-initialization method which is crucial.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Soft Weight Regularization (SWR), a regularization based algorithm for maintaining plasticity under the broad framework of continual learning. 
Unlike other regularization based approaches for addressing plasticity loss, such as L2 regularization, Shrink and Perturb, and L2 Init, SWR does not alter the network's predictions. The paper provides a theoretical analysis showing that SWR bounds weight magnitudes and maintains balanced weights between layers, two favourable properties of neural networks. Finally, the paper provides empirical evidence arguing the efficacy of SWR on a set of problems that test for plasticity and stability in settings of warm-starting, continual learning, and generalization.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The paper rightfully makes the point that unbounded weight magnitudes in continual learning settings is a more general issue in deep learning. This is a point that is not often made explicit in the continual learning literature.\", \"The proposed method of SWR is supported by theoretical analysis establishing that weight magnitudes are bounded and that weights between layers are relatively balanced, two properties that have been previously shown to be beneficial for generalization and continual learning settings. Many recent methods in the continual learning framework, despite their simplicity, have been introduced with little to no theoretical basis, therefore, this is a strength of this paper.\", \"The proposed method is evaluated on three types of problems: warm-starting, continual learning, and classic supervised learning evaluating generalization. SWR's performance is evaluated with respect to the generalization gap, plasticity in continual learning, and catastrophic forgetting in continual learning. This provides a broader evaluation than is typical in continual learning.\"], \"weaknesses\": [\"This paper could use more polish and could be reorganized to better state the contributions as well as their relative merits to existing work. 
Some concrete examples are as follows:\", \"The paragraph on line 039 is too specific for the introduction and the paper would be better served with a concise overview of the merits and draw backs of regularization based re-initialization based methods, and moving the existing paragraph as is to a related works section.\", \"The paragraph on line 066 is redundant given the preceding paragraph.\", \"It would be useful to give, at least a high level or rough, description of SWR in the introduction so that the reader has an understanding of how SWR differs from existing regularization based methods. As the paper is currently written, SWR is described by its merits: bounded weight magnitudes and balancing weights, and no actual description of the algorithm itself is provided, until the full algorithm is presented on page 5.\", \"It would be useful to introduce and define both catastrophic forgetting and plasticity in the introduction, rather than just the latter phenomenon, as the paper claims to evaluate SWR's ability to mitigate catastrophic forgetting.\", \"The motivating and illustrating experiment, Figure 1 and the paragraph that follows on line 197 are confusing and I cannot make out the experimental setup and the exact point that is being made. I would suggest explicitly describing the experimental setup and each algorithm that you are evaluating. How exactly are you scaling, and what is scaling with and without proportionality in this example? Does the pre-trained model include any scaling? What is the difference between fine-tuned after scaling and just the scaled model? When you train the fine-tuned model for another 50 epochs, are you fine tuning on the validation set or some new training set? What exactly is the scaling magnitude or scaling ratio in this experiment? 
Given that this is a motivating or illustrating example, it would be useful to be very precise in outlining the experimental setup.\", \"I would recommend moving your theorems on boundedness and balancedness to section 3 and commenting on the significance of these theorems rather than pointing the reader to the appendix.\", \"The set of competitor algorithms is limited. Specifically, for re-initialization based methods, the well-cited Continual Backprop (Dohare et al.) and ReDO (Sokar et al.) are missing from the experiments that evaluate plasticity loss. As for the experiment that evaluates catastrophic forgetting, regularization based methods for explicitly addressing this phenomenon, such as Elastic Weight Consolidation (Kirkpatrick et al.), are absent.\", \"To evaluate the efficacy of SWR for mitigating plasticity loss and catastrophic forgetting, a wider experimental study may be necessary. You could consider the benchmark problems of Permuted MNIST, Random Label MNIST and CIFAR, and Continual ImageNet, which are nicely described in (Kumar et al).\", \"The claim that SWR mitigates catastrophic forgetting requires more evidence than a single experiment, as noted in the previous point. SWR does not modify the network's outputs, unlike other regularization based methods, but this does not prove that SWR mitigates catastrophic forgetting. There is a series of regularization based methods, e.g. Elastic Weight Consolidation, that regularize networks towards weights (or equivalently representations) learned during earlier tasks, and in turn mitigate catastrophic forgetting. 
Therefore, the limited experiments and construction of SWR do not provide sufficient evidence that catastrophic forgetting is alleviated by SWR more efficiently than by other algorithms; thus, the claim that SWR maintains useful information while re-initialization based methods do not is not entirely accurate.\"], \"questions\": [\"How sensitive is SWR to its choice of hyperparameter(s)? It would be nice to see these results.\", \"Is there a reason why Theorem 2, Corollary 2.1, and Theorem 3 are not presented in the main body of the paper?\", \"Could you elaborate on why Shrink and Perturb experiences declining performance on the warm-starting experiments (CIFAR-10 and CIFAR-100), even though Ash and Adams introduce Shrink and Perturb and show that it is performant on these sorts of experiments?\", \"Can you restate Figure 1 and its experimental setup clearly, as described in the weaknesses section?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
Dn7Ay7rZcH
$\textbf{PLUM}$: Improving Code LMs Using On-Policy Preference Learning Powered by Automatic Test Cases
[ "Dylan Zhang", "Shizhe Diao", "Xueyan Zou", "Hao Peng" ]
Preference learning provides a promising solution to address the limitations of supervised fine-tuning (SFT) for code language models, where the model is not explicitly trained to differentiate between correct and incorrect code. Recent findings demonstrate that on-policy data is the key to successful preference learning, where the preference data is collected using the same policy LM being trained. Inspired by this, we propose PLUM, an on-policy $\textbf{P}$reference $\textbf{L}$earning framework A$\textbf{u}$gmented with test cases for code L$\textbf{M}$s. The framework operates in three key stages: (1) automatic generation of test cases from natural language instructions, (2) creation of preference data by evaluating candidate code solutions sampled from the policy, which can then be used to (3) train the policy LM. PLUM obviates the need to train reward models, allowing for large-scale on-policy and online preference data collection. PLUM is evaluated on both standard benchmarks (HumanEval, MBPP) and more challenging ones (LiveCodeBench), delivering substantial improvements over the original SFT'ed models and other execution-feedback-driven approaches. We show PLUM's benefits are consistent across various widely used code LMs, even when they have been well-trained with SFT. For example, PLUM increases pass rates by up to 4.8% on average on standard benchmarks and 11.8% on LiveCodeBench, demonstrating its effectiveness and generalizability. We also demonstrate the benefits of on-policy and online preference learning.
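The three-stage loop in the abstract can be made concrete with a short sketch of stage (2). Everything here is illustrative rather than the paper's code: the requirement that each candidate define a function named `solution`, and the `(args, expected)` test format, are assumptions; in PLUM the tests themselves come from stage (1) and the candidates are sampled from the policy LM.

```python
def build_preference_pairs(instruction, candidates, tests):
    """Stage (2) sketch: split sampled candidate programs into chosen vs.
    rejected by executing them against generated test cases. (In practice
    candidates should run in a sandbox -- exec on untrusted code is unsafe.)"""
    chosen, rejected = [], []
    for code in candidates:
        namespace = {}
        try:
            exec(code, namespace)                  # define the candidate
            fn = namespace["solution"]
            ok = all(fn(*args) == expected for args, expected in tests)
        except Exception:
            ok = False                             # crash counts as failure
        (chosen if ok else rejected).append(code)
    # Each (passing, failing) combination yields one preference example
    # for stage (3), e.g. DPO-style training of the policy LM.
    return [(instruction, c, r) for c in chosen for r in rejected]

tests = [((2, 3), 5), ((-1, 1), 0)]
candidates = [
    "def solution(a, b):\n    return a + b",   # passes both tests
    "def solution(a, b):\n    return a - b",   # fails the first test
]
pairs = build_preference_pairs("Add two integers.", candidates, tests)
assert len(pairs) == 1
```

Because correctness is decided by test execution alone, no reward model is needed, which is what makes large-scale on-policy (and online) data collection cheap in this setup.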
[ "Code Generation", "Preference Learning", "Test Case Generation" ]
https://openreview.net/pdf?id=Dn7Ay7rZcH
https://openreview.net/forum?id=Dn7Ay7rZcH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yOabYnZ0sm", "tpRwe7CGk2", "svxDgxUkr3", "sHMOyE2x54", "rxAntaHXtO", "pKXCdvEivC", "lec9auOiEj", "je1qEMyUwD", "jGyA0cfnSe", "ebgbDu2ZXD", "dK9E4w1dvW", "cdeCgntAhQ", "WvvVUlylRI", "TqB09WBLpl", "TPiduT7c6w", "NUqsXvVWK2", "JCRqbVbaIr", "HW2t5geAMJ", "HHemyMzd8l", "9zHMJYFZUk", "8UhMDeHR3d", "5rsXridNoD", "4xqQ2YX4K2", "4nfDGgC6OY", "4IjnNVi2ws", "2abO7s2prc", "2V5igrSwZa", "1bMeKxj5jq" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732993021335, 1731868673844, 1731868407739, 1733125674087, 1732785912004, 1732778466459, 1733215905492, 1734377969859, 1733295697114, 1732753345305, 1730719305250, 1730657502017, 1733215577262, 1732339737228, 1730279529650, 1732752217103, 1732781770880, 1732812670916, 1732480897013, 1733284886104, 1730696737285, 1731868489831, 1733163204190, 1732786007111, 1732339653661, 1732363124174, 1733026590019, 1732597564082 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Reviewer_AYWK" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9120/Reviewer_RrMw" ], [ "ICLR.cc/2025/Conference/Submission9120/Reviewer_6RyX" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Reviewer_AYWK" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Reviewer_RrMw" ], [ "ICLR.cc/2025/Conference/Submission9120/Reviewer_Du4Y" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Reviewer_Du4Y" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Reviewer_RrMw" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ], [ "ICLR.cc/2025/Conference/Submission9120/Authors" ] ], "structured_content_str": [ "{\"title\": \"Follow-up with reviewer Du4Y\", \"comment\": \"Dear Reviewer Du4Y,\\n\\nThank you for your time and effort in reviewing our submission. We greatly appreciate your detailed feedback, which helped me identify and address the concerns you raised.\\n\\nWe have carefully addressed the concerns and responded to your earlier questions in both the previous response and the global comment, supplemented by additional experimental results. 
We wanted to kindly follow up to ensure we\\u2019ve adequately clarified your points and to ask if there is anything further we could elaborate on or improve.\\n\\nThank you again for your invaluable feedback and for your contributions to the review process!\"}", "{\"comment\": \"Thank you for the insightful feedback, which has helped clarify key aspects of our work.\\n# Limitation to Python\\nWe appreciate the reviewer\\u2019s comment on the scope of our work, which currently focuses on the Python programming language. The proposed PLUM framework, however, is designed to be broadly applicable across programming languages. Our approach utilizes test cases generated based on natural language instructions and verifies code through test execution rather than relying on any language-specific syntax or semantics unique to Python. The focus on Python was primarily for the availability of well-established training datasets and ease of comparison and reproducibility, as Python is widely used in both code generation and evaluation benchmarks (e.g. HumanEval, LeetCode). We are confident that the methods can be adapted to multilingual settings by training models on diverse programming languages and modifying test generators to meet language-specific requirements.\\n# Dependency on GPT-4\\nThank you for this insightful feedback. Our use of GPT-4 as a generator model was based on its superior generation capabilities; however, our framework does not rely on any features specific to GPT-4. The model was chosen purely for practical reasons, and any other language model capable of generating syntactically correct and logically coherent code solutions would also be effective within our framework. Open-source models can replace GPT-4 for test case generation, allowing for scalable expansion while retaining the effectiveness of our approach. 
Our experiments verify that the robustness of our method stems from the structured preference learning pipeline rather than reliance on GPT-4\u2019s unique characteristics.\n# Clarification of Model Sizes\nWe appreciate the constructive feedback on the clarity of the paper. To clarify the reviewer\u2019s concern regarding model sizes, we aim to illustrate that our ablation studies leverage responses from models of varying sizes specifically to demonstrate the comparative benefits of on-policy data. We did not actually train models of multiple sizes, but rather utilized the outputs from both smaller and larger models as part of our on-policy versus off-policy comparisons. This setup is intended to evaluate and emphasize the effectiveness of our preference learning method across different model capacities without requiring separate training instances. \nSpecifically, the models we trained are all 7B-scale language models. CODEQWEN-1.5-CHAT refers to the 7B CodeQwen-1.5-Chat model, Magicoder-DS (-S-DS) refers to the Magicoder models trained from the DeepSeek-Coder-6.7B base model, Magicoder-CL (-S-CL) refers to the Magicoder models trained from the CodeLlama2-7B base model. The same applies to OCI (OpenCodeInterpreter). We will update the paper to make it clear! \n# Consistency of Test Cases and Reference Solutions\nWe appreciate the reviewer\u2019s inquiry into test case consistency with reference solutions. In our approach, consistency between test cases and reference solutions is crucial, but as highlighted in works on software testing, achieving perfect alignment between natural language specifications and test cases remains an open research question. We adopt a practical heuristic by assuming that consistency between generated solutions and passing test cases implies correctness.
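To make this heuristic concrete, here is a minimal Python sketch (the helper names `passes` and `label_candidates` are illustrative, not from the paper): generated tests are first validated against the reference solution, and a sampled candidate is labeled correct only if it passes every validated test.

```python
def passes(solution_fn, test_case):
    # A test case is an (args, expected) pair; a stand-in for sandboxed execution.
    args, expected = test_case
    try:
        return solution_fn(*args) == expected
    except Exception:
        return False

def label_candidates(reference_fn, candidates, generated_tests):
    # Keep only generated tests that are consistent with the reference solution.
    valid_tests = [t for t in generated_tests if passes(reference_fn, t)]
    chosen, rejected = [], []
    for cand in candidates:
        # Heuristic: passing all validated tests is taken to imply correctness.
        (chosen if all(passes(cand, t) for t in valid_tests) else rejected).append(cand)
    return chosen, rejected

# Toy example: an addition task.
reference = lambda a, b: a + b
tests = [((2, 3), 5), ((0, 0), 0), ((1, 1), 3)]  # the last test contradicts the reference
good = lambda a, b: a + b
bad = lambda a, b: a - b
chosen, rejected = label_candidates(reference, [good, bad], tests)
```

Under this sketch, the third test is filtered out for contradicting the reference, so a correct candidate is not unfairly rejected by a faulty generated test.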
Our experimental results demonstrate that this assumption holds well across our chosen datasets, which suggests that it is a reasonable approach under current constraints.\n\nWe hope that addressing these points will provide additional insights into the PLUM framework, potentially enhancing the reviewers' understanding and appreciation of our contributions and reflecting positively in their assessment.\"}", "{\"title\": \"Response to Reviewer RrMw (1 of 2)\", \"comment\": \"# Motivation\nWe appreciate your insightful comments on the motivations for preference learning within the coding domain. While preference learning has traditionally been applied in domains such as human alignment and detoxification, recent work has demonstrated its potential in more complex, outcome-driven tasks across areas like reasoning, mathematics, and coding, where achieving functional correctness is paramount (Dong et al., 2023; Pang et al., 2024; Yuan et al., 2024; Xiong et al., 2024; Lai et al., 2024; Xie et al., 2024; Weyssow et al., 2024).\n\nIn the context of PLUM, we focus on encoding functional correctness directly within the preference objective, which offers a new perspective. Our method leverages test-based signals to ensure correctness through a preference-guided approach, thus removing the need for complex reward model setups while addressing potential over-optimization and distributional shift issues. Additionally, our comparative results, including contrastive learning baselines, underscore PLUM's efficacy in reliably improving functional correctness, as it harnesses the advantages of preference learning to a degree not yet fully explored in the code generation domain. \n\n# Comparison to DeepSeek-Coder-V2 and CodeRL\n\nCodeRL and PLUM represent fundamentally different methodologies for improving code generation. CodeRL is an iterative code-refinement framework built upon reinforcement learning principles.
It evaluates and iteratively adjusts failed solutions with a trained critic network. This involves continuous refinement, leveraging test cases during inference to repeatedly improve solutions until they pass. \n\nIn contrast, PLUM operates as an on-policy preference learning framework designed to improve a model's intrinsic ability to generate correct code based on natural language prompts, bypassing the need for external critics or iterative refinement. By leveraging automatically generated test cases as a lightweight, consistent feedback mechanism for training, PLUM enables on-policy training without reliance on offline data, mitigating the distributional shift commonly seen in offline models.\n\nFinally, the works cited by the reviewer and within our paper neither diminish nor overlap with PLUM\u2019s unique contribution, as they are concurrent or fundamentally distinct. PLUM leverages on-policy preference learning with test cases to directly enhance model performance during training. Our framework is distinct in its simplicity and robustness, bypassing reward models and reinforcement feedback entirely. This approach, combined with automated test case generation for scalable, policy-consistent preference data, establishes PLUM as an original solution that fills an unaddressed gap in NL-to-code generation.\n\n# Clarifications on the SFT Baseline\n\nTo avoid possible confusion, we clarify that the baseline rows we refer to as \u201cBaseline\u201d in the result tables represent existing models that have been fine-tuned on their respective datasets, not models we further fine-tuned on other data. Therefore, no additional out-of-domain adaptation effects have been introduced. \nIn the rejection-sampling fine-tuning (RFT) baseline, we filtered out responses that did not pass initial test cases, leaving only correct responses.
The model is then SFTed on the same set of questions as PLUM paired with these positive responses. By comparing PLUM to RFT, we demonstrate that the performance gains of PLUM are not merely due to filtering out those \\u201cout-of-domain\\u201d questions, but through learning to distinguish between correct and incorrect responses through on-policy training.\\nWe would also like to emphasize that PLUM is designed as an enhancement layered on top of the supervised fine-tuning (SFT) stage. The SFT models used in our study were already extensively fine-tuned on large and diverse instruction datasets. PLUM adds a further improvement step, where the preference learning framework, augmented with test cases, fine-tunes the model to prioritize functionally correct responses more effectively.\\n\\n# Questions to Reviewers\\n\\nWe appreciate the reviewer\\u2019s suggestions and reference pointers. However, we would be grateful if the reviewer could provide the titles and/or links to those papers for further clarification, since all references seem to be missing. (Le et al. 2022, Gorinski et al. 2023, Liu et al. 2023, Miao et al. 2024, Zhang et al. 2024, Gee et al. 2024)\"}", "{\"comment\": \"Dear Reviewer RrMw,\\n\\nThank you again for the time and effort to review our paper and to initiate discussions that help clarify our contributions!\\n\\nWith 2 days left until the end of the discussion period, it would mean so much to us if you could take a look at the added discussion and materials so that you can re-evaluate our work with those clarifications and update your scores if you find appropriate. \\n\\nPlease let us know any remaining questions / concerns surrounding PLUM's contributions! We are also more than happy to address your further questions and concerns!\"}", "{\"comment\": \"Thanks for the effort. 
The response resolves my concern.\"}", "{\"title\": \"[Revision] Summary of Modifications In Revision\", \"comment\": \"We thank the reviewers for their constructive comments on improving this work. We have modified the draft to address the issues pointed out by the reviewers and included more experimental results to support the claims. The edited text is highlighted in blue in this version. We summarize the modifications below:\n## Major Updates\n### Ablation On Test Case Generator\nIn Appendix A.6, we experimented with PLUM across alternative test case generators (e.g., GPT-3.5-Turbo, Llama 3.1) to demonstrate its scalability and cost-efficiency, achieving consistent gains without compromising performance (Table 11). We hope this can help address the comments by **AYWK** and **Du4Y**. \n\n### Generalization To Stronger Models \nIn Appendix A.7, we validated the effectiveness of PLUM on stronger models (Qwen-2.5-Instruct/Coder-14B), showing improvements on challenging benchmarks like LeetCode and LiveCodeBench (Table 12). This indicates PLUM works on stronger models (larger and with more advanced post-training). We hope this could help answer Reviewer **AYWK**\u2019s question regarding PLUM\u2019s applicability to more powerful base models. \n\n---\n\n## Clarity Improvements\n**Figure 2 Caption:** Enhanced the clarity of the figure caption to better explain the content and context of the ablation study.
\n\n**Table 6 Caption:** Revised the caption for Table 6 to ensure it accurately conveys the table's purpose and findings.\n\n**Algorithm Typo Correction:** Fixed an error in the algorithm body; it now correctly states that \u201ceach solution $s_{i,k}$ passing all test cases is labeled as positive.\u201d \n\n**Improved Layout:** Addressed layout issues, including correcting the order of Tables 3 and 4 to align with the narrative flow.", "additional_related_works": "Included relevant references, such as Gee (2023), to provide a more comprehensive discussion of related work.\"}", "{\"comment\": \"Dear Reviewer AYWK,\n\nWe thank you again for your positive feedback and constructive comments for us to further improve our work!\n\nWe would like to see if you have any further questions or concerns regarding this work that we could clarify further!\n\nWe also hope the improvements we\u2019ve made will resonate positively with your evaluation. Your perspective plays a crucial role in shaping the paper\u2019s chances of acceptance, and we truly value your thoughtful assessment!\n\nThank you very much again for your efforts!\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We tried our best to answer the queries from all reviewers but we did not receive confirmation regarding whether our responses and contributions were fully acknowledged. Given this uncertainty, we believe it is best to withdraw at this time and continue refining our work.\n\nWe sincerely appreciate the valuable feedback and the time the reviewers and committee have dedicated to evaluating our submission.\"}", "{\"comment\": \"Dear Reviewer RrMw,\n\nWe sincerely appreciate your valuable feedback, which has helped us refine our work and better situate it within the broader research context.
We have made significant efforts to address your questions and have conducted additional baseline experiments that we believe directly respond to your concerns.\n\nWe hope these improvements bring our work closer to meeting your expectations. We would be deeply grateful if you could consider updating your rating to reflect these revisions and support our submission. We will include the additional discussions and results in our future revisions. \n\nThank you for your time and thoughtful consideration.\"}", "{\"comment\": \"### Language expansion\nWe agree that Python's ease of environment setup significantly contributes to its popularity in execution-based evaluation. Many recent papers like SelfCodeAlign (NeurIPS\u201924) [1] and the more recent RLEF from MetaAI [2] also focus solely on Python. \nThat said, setting up test environments, especially for function-level code, is also straightforward; one could build execution environments like MultiPL-E [3] did! \n\nFor example, if one wants to test Java functions, they could simply install the JDK and prepare the file as shown below:\n```\npublic class AddTest {\n public static int add(int a, int b) {\n return a + b;\n }\n\n public static void main(String[] args) {\n assert add(2, 3) == 5 : \"Test failed\";\n assert add(0, 0) == 0 : \"Test failed\";\n System.out.println(\"All tests passed\");\n }\n}\n```\nAnd run (with `-ea` so that assertions are enabled) \n```\n>>> javac AddTest.java\n>>> java -ea AddTest\n```\n\nTo get the results!\n\n**References**\n\n[1] Wei et al. 2024. SelfCodeAlign: Self-Alignment for Code Generation. https://arxiv.org/abs/2410.24198.\n\n[2] Gehring et al. 2024. RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning. https://arxiv.org/abs/2410.02089\n\n[3] Cassano et al. 2022.
MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural Code Generation. https://arxiv.org/abs/2208.08227\n\n**GitHub**: https://github.com/nuprl/MultiPL-E\n\n---\n\n### Reliance on GPT-4\n\nWe conducted additional experiments to demonstrate the effectiveness of our framework when using other language models (both open-source and proprietary) as test case generators. We kindly direct the reviewer to our global response for detailed results. \n\nWe experimented with various test case generators, including both proprietary models with much more affordable API access and **presumably less powerful than GPT-4** (GPT-3.5-Turbo, GPT4o-mini, Claude-3-Haiku), and **open-weight models** (Llama 3.1 70B and 405B).\n\nWe have observed consistent performance gains across the experiments. This proves that **PLUM does not rely on a single powerful model as its underlying test generator**, but is a robust, adaptable and scalable framework that works well under various conditions. In particular, it works under **more cost-and-time efficient set-ups**, making it scalable to large data volumes.
The results show consistent improvement of preference optimization over the baselines.", "soundness": "2", "presentation": "2", "contribution": "2", "strengths": ["The experiments demonstrate that the proposed PLUM methodology leads to consistent improvement over baselines across many models and datasets.", "The experiments are robust, covering many models and benchmarks."], "weaknesses": ["The novelty of the proposed work seems limited to the application of preference optimization over code. The on-policy data are obtained following previous work, and the benefits of on-policy vs. off-policy training and test-case-filtered data vs. execution-filtered data have already been demonstrated in previous work as well (i.e. Le et al. 2022, Gorinski et al. 2023, Liu et al. 2023).", "The motivation of the paper is somewhat unclear. Preference optimization is generally used to align LLM output to human values, e.g. reduce harmful and toxic expressions. In terms of code solutions that are strictly correct or incorrect (based on functional execution over unit tests), RL or contrastive learning over online signal seem more appropriate, and already explored in previous work. The paper should directly compare against those methods (e.g. CodeRL, DeepSeek-Coder-V2, etc.) instead of Reflexion.", "The presentation of the paper could be improved, with the tables being ordered consistently with how the corresponding experiments are introduced. Corresponding tables and figures should also be moved closer to where they are first referenced."], "questions": ["Preference optimization over code has been explored in previous work (Miao et al. 2024, Zhang et al. 2024, Gee et al. 2024). While this work can be considered concurrent, the paper would benefit from discussing it.", "The methodology seems to require a more powerful model (than the one being trained) to generate the unit tests.
Have the authors explored using the policy itself to generate the unit tests?", "Performing SFT over OSS-Instruct, Evol-Instruct-Code and ShareGPT may not necessarily translate to improvements in MBPP and HumanEval datasets. SFT may be forcing models to adapt to out-of-domain prompts resulting in lower performance (and an unfair baseline). The benefits observed in PLUM could be due to these datasets being filtered through on-policy consistency.", "Many tables are underexplained and potentially misleading (the best results are not always bolded, as in Table 2, 5 and 7). Please clearly indicate which baseline is used for each experiment. Figure 2 is particularly unclear, and (if this reviewer's interpretation is correct) in many cases PLUM introduces no benefit and in some, execution-based signal outperforms PLUM. Please confirm."], "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "4", "code_of_conduct": "Yes"}", "{\"summary\": \"The current research piece introduced a new approach for improving code generation skills on existing Code LLMs. To this end, a two-step process is introduced: i) given a set of coding problems, generate, via prompting, test cases for them, and validate their correctness with the reference solution; ii) with the resulting dataset, prompt the LLM to generate new solutions, check them with the generated test cases, and update the model via iterative preference optimization.\nThe idea is applied to several standard Code LLM models using DPO and KTO as preference optimization techniques and compared with other, non-PO approaches including prompting and alternative fine-tuning techniques.", "soundness": "3", "presentation": "3", "contribution": "3", "strengths": "The work is reasonable and clear. It uses established preference optimization techniques such as DPO and KTO for code synthesis improvement.
It relies on automatic generation of test cases, guaranteeing their correctness by checking them with the reference solution.\nExperiments show a clear gain across the board in all the models included in the experiments (which are the most popular ones) to avoid reporting any spurious case.", "weaknesses": "The work is solid.\nThe novelty aspect might be the only low point compared to the rest of the work. All the individual aspects of the technique: generating test cases, filtering them, using them as a corpus, iterative PO, are all steps previously used in the improvement of Code LLMs", "questions": "Some suggestions\n\n* Tables: \nTable 3 is shown after Table 4. Table numbers should follow their occurrence order. \nAlso, Table 2 seems too far from where it is referenced (shown on page 5, referenced at the bottom of page 7).\nLines 350 - 364 enclose Table 3 although it refers to Table 2.", "flag_for_ethics_review": "['No ethics review needed.']", "details_of_ethics_concerns": "--", "rating": "8", "confidence": "4", "code_of_conduct": "Yes"}", "{\"title\": \"RLTF Baseline Results\", \"comment\": \"As suggested by the reviewer, we applied RLTF on Magicoder-DS-6.7B and CodeQwen-1.5-7B-Chat, following their dataset choice (APPS). We appreciate the reviewer's patience, as RLTF (and the preceding RL works) were typically proposed before the prevalence of instruction-tuning for code-LMs!\n### Results and analysis", "the_result_is_presented_below": "| | HumanEval(+) | MBPP(+) | LeetCode |\n|----------------------|--------------|--------------|----------|\n| CodeQwen-1.5-7B-Chat | 83.5 (78.7) | 77.7 (67.2) | 33.9 |\n| + RLTF | 81.7 (76.2) &#8595; | 76.2 (64.4) &#8595; | 33.9 |\n| Magicoder-DS-6.7B | 66.5 (60.4) | 75.4 (61.9) | 19.4 |\n| + RLTF | 64.6 (59.1) &#8595; | 75.2 (61.9) &#8595; | 15.5 &#8595; |\n\nWe notice that RLTF in its original form might not be particularly effective at improving modern instruction-tuned code LMs.
\nIn addition to the restrictions brought by the training distributions, sparsity of reward could be another reason. Notably, very recent works like Dai (2024) noted the sparse reward signals inherent in RLTF and demonstrated its limitations in further improving instruction-tuned code language models through experiments. \n\n---\n\n### Re-emphasizing difference between PLUM and RLTF \nAgain, we emphasize the distinctions between PLUM and RLTF. RLTF focuses on adapting pre-trained code LMs to specific datasets (e.g., APPS) through **reward design**. In contrast, PLUM leverages on-policy preference learning to improve instruction-tuned code LMs by directly leveraging the execution feedback, offering a simpler, more scalable, and effective alternative without the need for RL.\n\nLet us know if you have further questions or concerns that we could take the last chance to clarify within the last day of the discussion period!\n\n[1] Dai et al. Process Supervision-Guided Policy Optimization for Code Generation. https://arxiv.org/pdf/2410.17621\"}", "{\"comment\": \"Dear Reviewer Du4Y,\n\nThank you for your valuable feedback on our submission! We have responded to your comments in our rebuttal and would appreciate any further clarifications or discussions to enhance our work!\n\nWe look forward to further discussions! \n\nThanks again for your time and efforts.\"}", "{\"summary\": \"The paper proposes a preference learning framework (PLUM) for automatically building code preference data, which uses LLMs with natural language instructions to incorporate test cases and the model's on-policy candidate solutions into the training process. On commonly used evaluation benchmarks: HumanEval(+) and MBPP(+), PLUM further pushes the performance of a wide range of instruction models on these coding benchmarks.", "soundness": "3", "presentation": "2", "contribution": "3", "strengths": "1.
Without RM, this method accurately captures preferences related to accuracy with test cases and effectively improves the code capability of the model.\n2. This method exhibits good generalization capabilities, as it can integrate with other preference methods to further enhance the performance of various code instruction models.", "weaknesses": "1. The paper uses GPT-4-1106 as the generator model to obtain test cases. Without a more powerful model, can this method still have a significant improvement? Can you conduct an ablation study using less powerful models (e.g. GPT-3.5) for test case generation to analyze how model capability affects the overall performance gains?\n2. There is no experimental verification for the enhancement capability of this method towards more powerful models, such as DeepSeek-Coder-V2-Instruct (21B AP / 236B TP). Can you add some experiments based on stronger models?", "questions": "1. The pseudocode is inconsistent with the description in the paper. The paper talks about \"Solutions passing all test cases are used as the chosen solutions, and those failing at least one the rejected solutions\". But in pseudocode line 14, it seems like passing just one test case makes it a positive instance.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "6", "confidence": "3", "code_of_conduct": "Yes"}", "{\"title\": \"[Additional Results] Ablation On Test Case Generators Demonstrates $\\textbf{PLUM}$'s Applicability Across Various Test Generators\", \"comment\": \"### Ablation On Test Case Generators Demonstrates $\\textbf{PLUM}$'s Applicability Across Various Test Generators\nWe thank Reviewers **AYWK** and **Du4Y** for their discussion on PLUM's applicability across diverse models as test case generators.
To address this, we conducted ablations with \\n- proprietary models with much more affordable API access and presumably less powerful than GPT-4 (GPT-3.5-Turbo, GPT4o-mini, Claude-3-Haiku). and \\n- Open-weight models (Llama 3.1 70B and 405B). \\n\\nConsistent performance gains were observed across the experiments.\\n\\nImportantly, the use of more cost-efficient test case generators does not compromise PLUM\\u2019s effectiveness. This demonstrates the scalability of our approach, enabling its practical application across a wide range of test generators and resource constraints.\\n\\nShown below is the result of ablations using CodeQwen-1.5-7B-Chat as the policy model to apply PLUM. We will update this result in our revised draft. \\n\\n**Remark:** Claude-3-Haiku from Anthropic and GPT4o-mini from OpenAI are currently the most affordable (per-token API cost) and fastest models available from their respective providers. \\n\\n| **Test Generator** | **Algo** | **MBPP** | **MBPP+** | **HE** | **HE+** | **Avg.** | **LeetCode** | **LiveCodeBench** |\\n|--------------------|:--------:|:--------:|:---------:|:------:|:-------:|:--------:|:------------:|:-----------------:|\\n| SFT-Baseline | - | 77.7 | 67.2 | 83.5 | 78.7 | 76.8 | 33.9 | 23.2 |\\n| GPT-4 | KTO | 81.0 | 69.0 | 86.0 | 81.1 | 79.3 | 35.2 | 25.8 |\\n| | DPO | 81.2 | 70.2 | 86.0 | 81.1 | 79.6 | 36.7 | 25.8 |\\n| Llama3.1-70B | KTO | 79.4 | 69.9 | 84.8 | 80 | 78.5 | 36.7 | 24.5 |\\n| | DPO | 79.4 | 66.2 | 84.1 | 79.3 | 77.3 | 36.1 | 24.5 |\\n| Llama3.1-405B | KTO | 79.4 | 67.7 | 84.1 | 79.9 | 77.8 | 36.1 | 23.8 |\\n| | DPO | 79.2 | 66.9 | 85.4 | 80.5 | 78.0 | 36.6 | 25.5 |\\n| GPT-3.5-Turbo | KTO | 80.2 | 67.9 | 84.8 | 79.9 | 78.2 | 36.1 | 24.0 |\\n| | DPO | 79.7 | 67.7 | 84.8 | 79.9 | 78.0 | 36.1 | 23.8 |\\n| GPT4o-mini | KTO | 81.2 | 69.2 | 85.4 | 80.5 | 79.1 | 36.7 | 24.0 |\\n| | DPO | 80.5 | 67.6 | 85.4 | 81.1 | 78.7 | 36.7 | 25.5 |\\n| Claude-3-Haiku | KTO | 79.7 | 67.2 | 85.4 | 81.7 | 78.5 | 36 | 24.0 
|\\n| | DPO | 79.9 | 67.2 | 86 | 81.7 | 78.7 | 36 | 23.5 |\"}", "{\"comment\": \"We sincerely thank the reviewer for recognizing our contributions and providing detailed feedback that helps refine our work. Below, we address your insightful comments and suggestions.\\n\\n### Ablation Shows PLUM Works For Various (Especially Less-Powerful) Test Generators\\n\\nWe conducted an ablation study using alternative test case generators to evaluate PLUM's scalability and cost-efficiency.\\nThe study included open-weight models like Llama-3.1-70B and cost-effective API-based models such as GPT-3.5-Turbo, along with the lowest-cost models from OpenAI (GPT4o-mini) and Anthropic (Claude3-Haiku). The results consistently show performance improvements, even with less powerful test case generators, demonstrating PLUM's robustness to generator variability and its applicability under different resource constraints.\\nFor detailed results, please refer to the global response and our revised draft.\\n\\n\\n### Proving PLUM\\u2019s Effectiveness on Stronger Models\", \"we_evaluated_plum_on_stronger_models\": \"Qwen-2.5-Instruct-14B and Qwen-2.5-Coder-14B. These models already exhibit strong performance on coding benchmarks, and have undergone more sophisticated alignment techniques.\\n\\nOur results show that applying PLUM further improves these models. This proves that PLUM can enhance the performance of stronger models and highlights its complementary effect when combined with other alignment techniques. The results have been also included in the revision. \\n\\n| **Model** | **Item** | **LeetCode** | **LiveCodeBench** |\\n|-----------------------|-----------|:------------:|:-----------------:|\\n| Qwen-2.5-Instruct-14B | Baseline | 55.0 | 46.0 |\\n| | PLUM-DPO | **58.3** | **47.0** |\\n| Qwen-2.5-Coder-14B | Baseline | 58.3 | 32.2 |\\n| | PLUM-DPO | **61.7** | **35.0** |\\n\\n### Pseudocode Typo\\nThanks for pointing out the mistake! 
Your understanding is correct that \\u201cSolutions passing all test cases are used as the chosen solutions, and those failing at least one are rejected solutions.\\u201d We have fixed this in the revision.\"}", "{\"comment\": \"I would like to thank the authors for providing their insights on the related work, I certainly appreciate their efforts so far. I think the paper would benefit from the inclusion of this discussion, as it places the presented work in the proper context. I agree that much of that work can be considered concurrent and thus should not diminish the paper's contribution (in my original review I suggested that these should be included in the paper for discussion, not comparison).\\n\\nHowever, the novelty of the approach is still impacted by the fact that non-concurrent related work has already established that code models can be improved by exploiting on-policy signal assigned a reward through unit tests (either a continuous reward for RL or a binary for PO). At this point it is also important to note that previous work has also established that LLMs can be used to automatically produce unit tests and that their inclusion (either directly in the input or as a filtering step) can improve performance. \\n\\nXiong et al 2024 \\\"The Program Testing Ability of Large Language Models for Code\\\"\\n\\n Li et al. 2023 \\\"Towards enhancing in-context learning for code generation.\\\"\\n\\nChen et al. 2023 \\\"CodeT: Code Generation with Generated Tests\\\"\\n\\nWith all that said, to this reviewer, it seems that all individual contributions of the paper (on-policy learning for code, automatic generation of unit tests, filtering of on-policy signal by unit tests) have already been proposed and supported by previous work. Can the authors please help me clarify what is unique to this method, beyond the combination of previously established on-policy methods and signal on code? 
If the authors claim that nothing is specifically unique, but the proposed variations on these methods perform better, then direct comparisons and ablations are needed.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks for addressing the points raised in weakness. Please find my feedback below.\\n## On language expansion: \\nWhile I appreciate your explanation, the process of generating test cases that meet language-specific requirements involves non-trivial efforts, including setting up appropriate executing environments. Without a clearer estimation of these efforts and their feasibility, the generalizability of your approach remains unclear to me.\\n## On reliance on GPT-4: \\nMy concerns about dependency on GPT-4 have not been fully addressed. As you stated, \\u201cOur use of GPT-4 as a generator model was based on its superior generation capabilities,\\u201d which indicates a significant reliance on this generator model's performance. Although you mention that \\u201cthe robustness of our method stems from the structured preference learning pipeline rather than reliance on GPT-4\\u2019s unique characteristics,\\u201d it would be helpful if you could elaborate on which specific experiments support this claim.\"}", "{\"comment\": \"Dear Reviewer Du4Y,\\n\\nThank you for your thoughtful feedback! We have worked hard to address your concerns and would greatly appreciate it if you could review our response before the rebuttal period ends!\\n If you feel our revisions and responses have resolved your points, we\\u2019d be truly grateful if you could consider updating your score.\\n\\nThank you for your time and support!\"}", "{\"summary\": \"The paper presents an on-policy preference learning framework augmented with test cases for code language models called PLUM. It enhances the code generation capabilities of language models by leveraging preference learning driven by automatically generated test cases. 
The preference data is curated by evaluating candidate code solutions sampled from the policy based on the test cases. PLUM is evaluated on several code generation tasks, including HumanEval, MBPP and LiveCodeBench, and shows improvements over SFT\\u2019ed models and other execution-feedback-driven methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed method uses on-policy preference learning, which aligns with the distribution of the base model itself, reducing the risk of distribution shift.\", \"The authors conduct comprehensive experiments with several models and compare their performance on multiple benchmarks.\", \"PLUM does not require extra training for the reward models, which simplifies its implementation.\"], \"weaknesses\": [\"This paper focuses on Python language only, during training and evaluation. It would be better if the authors could discuss how the methods could be applied to multilingual settings. Since different programming languages have different executing requirements, I assume it\\u2019s not very straightforward to apply the proposed framework directly to other languages.\", \"The preference learning relies on executing the test cases, which need to be generated by GPT-4, which is not scalable to generate large volumes of data, limiting the preference data size. Exploring other open-source LLMs might be more helpful in understanding the robustness of the proposed method, i.e. whether it heavily relies on the power of GPT-4.\", \"The model sizes used in the experiments are not clearly explained. There are several model families that have different sizes of instruct-model, such as DeepSeek-Coder-6.7B-Instruct and DeepSeek-Coder-33B-Instruct. 
Without such information, it is harder to understand the proposed method\\u2019s impacts on different model sizes.\"], \"questions\": \"When checking the consistency between the generated reference solution and the test cases, in line 179, how do you check whether the test cases accurately reflect the solution?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer RrMw (2 of 2)\", \"comment\": \"# Readability / Clarity\\nThank you for your suggestion to improve the clarity of the tables and figures. We will reorganize the tables to follow the order of introduction in the main text, ensuring each experiment and its results are intuitively located near its description. Furthermore, for Figure 2, we will include additional annotations to clarify that the purpose of this figure is to contrast non-executable signals against those tested with our on-policy approach. The result actually shows PLUM\\u2019s effectiveness in boosting functional accuracy by leveraging test cases, rather than depending solely on signals from compilation failures.\\n\\n# References:\\n[1] Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. URL: https://arxiv.org/abs/2403.09032\"}", "{\"comment\": \"Dear Reviewer Du4Y,\\n\\nThank you for your time and effort in reviewing our submission. We sincerely appreciate your detailed and thoughtful feedback, which has been valuable in helping us identify and address the concerns you raised!\\n\\nAs the discussion period is approaching its end, we would like to kindly check with you again whether our earlier response has addressed your concerns, and whether you have additional comments.\"}", "{\"comment\": \"Thank you for your prompt response! 
We would like to thank you again for your valuable comments, which have helped shape our work!\"}", "{\"title\": \"Follow-Up on Rebuttal for Clarifications and Discussion\", \"comment\": \"Dear Reviewer RrMw,\\n\\nThank you for your valuable feedback on our submission! We have attempted to respond to your comments in our rebuttal and would appreciate any further clarifications or discussions to enhance our work. \\n\\nWe look forward to further discussions! Thanks again for your time and efforts!\"}", "{\"comment\": \"Given the lack of clarity on the exact citations on my part, I appreciate the difficulty the authors had in addressing some of my concerns. I am clarifying these citations below; could you please clarify the novelty of your unit-test-derived signal in the context of this previous work and why these could not constitute baselines for your approach?\\n\\nLe et al. 2022 -> CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning\\n\\nGorinski et al. 2023 -> Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis\\n\\nLiu et al. 2023 -> RLTF: Reinforcement learning from unit test feedback\\n\\nMiao et al. 2024 -> Aligning CodeLLMs with Direct Preference Optimization\\n\\nZhang et al. 2024 -> CodeDPO: Aligning Code Models with Self Generated and Verified Source Code\\n\\nGee et al. 2024 -> Code-Optimise: Self-Generated Preference Data for Correctness and Efficiency\"}", "{\"comment\": \"We thank the reviewer for their discussions.\\n\\n> ...non-concurrent related work has already established that code models can be improved by exploiting on-policy signal assigned a reward through unit tests (either a continuous reward for RL or a binary for PO).\\n\\nWe kindly point out that PLUM innovatively leverages test-case-driven preference optimization for code. 
To the best of our knowledge, **no prior works have demonstrated that test-case signals can improve LMs\\u2019 functional correctness using a reward-model-free preference learning (xPO) technique**. As the same reviewer **RrMw** correctly pointed out in their earlier comments, _\\u2018preference optimization is (was) generally used to align LLM output to human values, e.g. reduce harmful and toxic expressions\\u2019_, it remained elusive before we proposed PLUM whether and how preference learning techniques can go beyond aligning to human preferences and improve code LMs\\u2019 functional correctness when using test case feedback. PLUM addresses this critical gap and proposes an effective solution, and therefore shall be considered novel and valuable. \\n\\nAdditionally, although _\\u2018RL or contrastive learning over online signal seem more appropriate\\u2019_ (quoted from the earlier review by **RrMw**), we demonstrated the strong performance of PLUM, which brings efficient, scalable and effective on-policy improvement for code LMs without the need for reward models or reward handcrafting. \\n\\n---\\n\\n> At this point it is also important to note that previous work has also established that LLMs can be used to automatically produce unit tests and that their inclusion (either directly in the input or as a filtering step) can improve performance.\\n\\nThese referenced works fundamentally differ from PLUM since they use test cases to rerank / refine solutions during inference time, rather than to establish training signals as PLUM does. Indeed, as we have acknowledged, our test case generation approach partially builds upon the insight that LLMs exhibit the ability to generate high-quality test cases based on their understanding of natural language instructions. However, we clarify that the fundamental difference in the use of these test cases distinguishes PLUM as a **training framework** from inference-time **solution-reranking techniques**. 
Therefore, our contribution of proposing a preference learning framework that can build better code LMs from test cases is novel and valuable.\\n\\nWe summarize the works mentioned by the reviewer below. \\n\\n**Xiong et al. 2024**: Evaluates and improves LLMs' ability to generate test cases for code, achieving pass rate improvements on HumanEval+.\\n\\n**Li et al. 2023**: Proposes AceCoder, an **in-context** learning method combining test-based analysis and retrieval, significantly enhancing code generation performance.\\n\\n**Chen et al. 2023**: Introduces CodeT, an **in-context** learning method that generates and uses test cases to improve code correctness, achieving state-of-the-art results on multiple benchmarks.\\n\\n---\\n\\n> all individual contributions of the paper (on-policy learning for code, automatic generation of unit tests, filtering of on-policy signal by unit tests) have already been proposed \\n\\nThe reviewer correctly notes that PLUM builds on prior insights as all research papers do, which we have acknowledged in the paper. But those works typically address a different problem from PLUM.\\n\\nPLUM uniquely integrates these components into a unified, scalable framework for on-policy preference optimization of code LMs. Unlike prior work, PLUM demonstrates:\\n- The effectiveness of on-policy preference optimization for code generation.\\n- The utility of LM-synthesized test cases in providing high-quality training signals.\\n- The critical importance of on-policy learning, supported by empirical evidence.\\n\\nOur contributions fundamentally differ from existing techniques, yielding novel insights and performance improvements that prior work neither achieves nor enables.\\n\\n---\\n\\n#### Request To Discuss Concurrent Works\\nWe appreciate the pointers from the reviewers. In fact, DeepSeek-Coder-V2 has been discussed in the paper as a concurrent work. 
It was infeasible for us to include works that were made public (on Arxiv) after the ICLR deadline (Miao 2024: Aligning CodeLLMs with Direct Preference Optimization, **Arxiv date: Oct 24, 2024**; Zhang 2024: CodeDPO: Aligning Code Models with Self Generated and Verified Source Code, **Arxiv date: Oct 8, 2024**) at the time of submission. That said, we are happy to discuss them in future revisions!\"}", "{\"comment\": \"## Concurrent Works\\nWe thank the reviewer for providing the pointers. Some of these works are concurrent to ours and should not negatively impact PLUM\\u2019s contribution:\\n\\n- Miao 2024: Aligning CodeLLMs with Direct Preference Optimization\\n https://arxiv.org/pdf/2406.12502\\n \\n **Arxiv date: Jun 18, 2024**. \\n\\n## Differences From The Rest Of Related Works\\nThe reviewer correctly pointed out that the use of test cases for code LMs has been previously explored. The key contribution of this work is a novel on-policy preference learning framework for code LMs, and we discuss its differences from previous works below (CodeRL has already been covered in our previous response). \\n### RLTF\\nRLTF is an extension of CodeRL. Its key is to use the enumeration of various kinds of errors and carefully handcrafted reward values for training the policy. \\nResult-wise, the handcrafted RL reward reduces syntax errors and other superficial issues. However, it fails to effectively ensure that the solutions align with the NL specification (Figure 2 of the RLTF paper). 
\\n\\nIn contrast, PLUM excludes syntax errors and other statically invalid programs, and demonstrates that focusing on run-time feedback from execution over test cases can significantly improve code LMs\\u2019 NL-to-code capabilities across benchmarks. Besides, instead of relying on reward models as in RLTF, PLUM learns directly from the execution results over test cases with algorithms like DPO and KTO.\\n### Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis\\n\\nThey labeled programs with a static-analysis-based test case annotation tool (EvoSuite [1]) to augment the MBPP training split. EvoSuite focuses on ensuring structural coverage\\u2014i.e., making every branch of the program runnable\\u2014rather than aligning with natural language specifications. The resulting synthetic dataset caused the model to perform worse than the pre-trained checkpoint (as shown in Table 1 of that paper). \\n\\nIn contrast, we developed a scalable method that leverages LLMs to produce high-quality test cases for programming instructions, enabling the collection of on-policy preference data. Leveraging the signal provided by execution against the test cases, our method directly optimizes the model\\u2019s functional correctness for NL-to-code tasks. Through extensive experiments, we clearly demonstrate the effectiveness of our approach in improving code LMs. \\n\\n### Additional Remarks: \\nEvoSuite is limited to strongly-typed languages such as Java, making it incompatible with dynamically-typed languages like Python. Additionally, the extremely low yield rate in Gorinski 2023\\u2014estimated at just 5.57%\\u2014renders it impractical for generating test cases. 
For instance, producing approximately 2,000 examples would require around 360 hours of computation time, assuming one second per test case generation.\\n## Summary\\nIn summary, PLUM, as a novel test-case-driven preference learning approach, provides fresh insights beyond existing works, showcasing the importance of on-policy preference data. Beyond that, its strong empirical performance further strengthens its value.\\n\\n[1] EvoSuite: automatic test suite generation for object-oriented software. https://dl.acm.org/doi/10.1145/2025113.2025179\"}" ] }
DmEHmZ89iB
Single Teacher, Multiple Perspectives: Teacher Knowledge Augmentation for Enhanced Knowledge Distillation
[ "Md Imtiaz Hossain", "Sharmen Akhter", "Choong Seon Hong", "Eui-Nam Huh" ]
Do diverse perspectives help students learn better? Multi-teacher knowledge distillation, which is a more effective technique than traditional single-teacher methods, supervises the student from different perspectives (i.e., teacher). While effective, multi-teacher, teacher ensemble, or teaching assistant-based approaches are computationally expensive and resource-intensive, as they require training multiple teacher networks. These concerns raise a question: can we supervise the student with diverse perspectives using only a single teacher? We, as the pioneer, demonstrate TeKAP, a novel teacher knowledge augmentation technique that generates multiple synthetic teacher knowledge by perturbing the knowledge of a single pretrained teacher i.e., Teacher Knowledge Augmentation via Perturbation, at both the feature and logit levels. These multiple augmented teachers simulate an ensemble of models together. The student model is trained on both the actual and augmented teacher knowledge, benefiting from the diversity of an ensemble without the need to train multiple teachers. TeKAP significantly reduces training time and computational resources, making it feasible for large-scale applications and easily manageable. Experimental results demonstrate that our proposed method helps existing state-of-the-art knowledge distillation techniques achieve better performance, highlighting its potential as a cost-effective alternative. The source code can be found in the supplementary.
[ "TeKAP", "Teacher Knowledge Augmentation", "Teacher Knowledge Perturbation", "Single Teacher Multiple Perspectives", "Synthetic Teacher", "Knowledge Distillation", "Ensemble Learning", "Knowledge Transfer" ]
Accept (Poster)
https://openreview.net/pdf?id=DmEHmZ89iB
https://openreview.net/forum?id=DmEHmZ89iB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wEMFXJ3Iop", "vGHQPvAiVP", "rcaw1XmjEs", "pRzFeCIKCB", "o3gvkkg8HM", "mr1HElOmpN", "mJfqsm4AU1", "jIgdF1LFGl", "iQMZgbPUue", "cUvMl1SLWu", "bHGEBcgG7X", "ZpLOcDDHFq", "X122sdgJfw", "V8B1tmuQoe", "TE7yqsTYsv", "SVYCSjUPiB", "S6rxjXcWMr", "RTPDppOAhf", "R59kTdBwXY", "Q1LhmXqtEY", "M9y3IgPlaj", "LMIAIAMdvi", "JOShFrOoNV", "IqMKoUbWAj", "CwPHvGsuKb", "9szhEzFIYE", "98JkVwGyjb", "8q3haSDK1I", "8jxlaNHLil", "7CVYIbqd79", "6Na2yan2ps", "5v0j9aKeGb", "1vLgzwlIIq", "0TZexonQK5" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732508090728, 1733212853288, 1732434755531, 1732866557166, 1733108555373, 1737523747864, 1732433338030, 1732795328385, 1732507521472, 1733213284038, 1734669306998, 1732797398742, 1733221671339, 1730545131710, 1732795360017, 1732431746312, 1732433164844, 1732432649367, 1732797373077, 1732434773674, 1732841742521, 1730476825339, 1733213174002, 1732893513287, 1730763236764, 1732599255187, 1732843400031, 1732532724784, 1731065331496, 1733216803468, 1733108879241, 1732797546710, 1733213005926, 1732795233344 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Area_Chair_aKQy" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_zdP5" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_FGoU" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_pUYy" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_NkEk" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_FGoU" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_zdP5" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_NkEk" ], [ "ICLR.cc/2025/Conference/Submission6163/Reviewer_zdP5" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ], [ "ICLR.cc/2025/Conference/Submission6163/Authors" ] ], "structured_content_str": [ "{\"title\": \"References\", \"comment\": \"References\\n\\n1. Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2022.\\n2. Jin, Ying, Jiaqi Wang, and Dahua Lin. 
\\\"Multi-level logit distillation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n3. Mirzadeh, Seyed Iman, et al. \\\"Improved knowledge distillation via teacher assistant.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.\\n4. Zhang, Hailin, Defang Chen, and Can Wang. \\\"Confidence-aware multi-teacher knowledge distillation.\\\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022.\\n5. Son, Wonchul, et al. \\\"Densely guided knowledge distillation using multiple teacher assistants.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\"}", "{\"comment\": \"Dear Reviewer FGoU,\\n\\nWe are delighted that our responses have satisfactorily addressed your questions. We sincerely appreciate your kind words and acknowledgement of our work and contributions. \\n\\nWe have incorporated additional results further and kindly request you to review them at your convenience. Thank you for your time and consideration.\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Response to Reviewer pUYy (1/3)\", \"comment\": \"We sincerely thank the reviewer for the constructive feedback. 
Below, we address the concerns and questions step-by-step.\\n\\n### **Additional Experiments:**\\n\\n| | Model | ResNet32x4-ResNet8x4 | WRN_40_2-WRN_40_1 |\\n|-----------------|---------------|------------|----------|\\n| **Teacher** | Accuracy | 79.42 | 75.61 |\\n| **Student** | Accuracy | 72.50 | 71.98 |\\n| **Single Teacher** | DKD [1] | 76.32 | 74.81 |\\n| | **DKD + TeKAP (Ours)** | **76.59** | **75.33**|\\n| | MLKD [2] | 77.08 | 75.35 |\\n| | **MLKD + TeKAP (Ours)** | **77.36** | **75.67**|\\n| **Multi-Teacher** | TAKD [3] | 73.93 | 73.83 |\\n| | **TAKD + TeKAP (Ours)** | **74.81** | **74.37**|\\n| | CA-MKD [4] | 75.90 | 74.56 |\\n| | **CA-MKD + TeKAP (Ours)** | **76.34** | **74.98**|\\n| | DGKD [5] | 75.31 | 74.23 |\\n| | **DGKD + TeKAP (Ours)** | **76.17** | **75.14**|\\n\\n\\n**Table-1:** The effects of TeKAP on the SOTA methods DKD [1], MLKD [2], TAKD [3], CA-MKD [4], and DGKD [5]. \\n\\n| #Original Teachers (T) | TeKAP (F+L) |\\n|-------------------------|--------------|\\n| 1 OriginT + 3 AugT | 75.98 |\\n| 2 OriginT + 3 AugT | 76.12 |\\n| 3 OriginT + 3 AugT | 76.31 |\\n\\n**Table-2:** Effect of multiple original teachers.\\n\\n\\n|Network | Augmentation Techniques | TeKAP (F+L) |\\n|-------------------------|--------------|--------------|\\n| **ResNet32x4-ResNet8x4** | Gaussian | 75.98 |\\n| | Uniform | 75.71 |\\n| **WRN_40_2-WRN_40_1** | Gaussian | 74.41 |\\n| | Uniform | 74.26 |\\n\\n**Table-3:** Effect of different noise techniques.\\n\\n### Responses\\n1. **Theoretical Depth of Perturbation Methods:** We appreciate this insightful comment. We agree that the theoretical depth needs to improve. Actually, Gaussian noise was chosen for its simplicity and general applicability across various domains. However, we also show the effect of uniform distribution noise in Table 3. We have used zero mean and 1 std. 
to produce random noise on every epoch and perform a weighted combination with the original teacher logits (noise weights with 0.1 and teacher weights with 0.9). We also added detailed guidelines on how to set the noise parameters for different configurations in the revised manuscript. \\n\\n\\n2. **Comparison with SOTA Methods:** We agree that the evaluation could benefit from a broader scope. We have added comparisons with additional multi-teacher distillation methods in Table 1.\\n\\n3. **Scalability and Computational Efficiency:** Training a teacher ResNet32x4 on CIFAR100 using KD takes approximately 16 seconds per epoch. As we run for 240 epochs, the total time taken is 240*16 seconds = 64 minutes using two NVIDIA GeForce RTX 3080 GPUs. For multi-teacher or ensemble learning, we need to train multiple teachers; assuming two teacher assistants of equal size, DGKD takes 64*2 = 128 minutes (approx). In our approach, TeKAP takes 18 seconds per epoch, i.e., only 72 minutes in total. We will add the complexity analysis to the supplementary.\\n\\n4. **Overclaimed Statements:** Thank you very much for this insightful comment. We will improve the literature review in the final version.\\n\\n5. **Gaussian Noise Parameters (Page 4):** We have used zero mean and 1 std. (standard Gaussian) to produce random noise on every epoch and perform a weighted combination with the original teacher logits (noise weights with 0.1 and teacher weights with 0.9). In future work, we will work on optimizing these hyperparameters.\\n\\n\\n6. **Incomplete Explanation of Terms (Page 5):** We have included the explanation in the revised manuscript.\\n\\n7. **Overfitting Risk with Static Noise Parameters:** We agree with this comment. Static noise can create inductive bias shifts or over-fitting. This is why we generate random noise at every epoch, which is considered dynamic noise and creates diversity and balance. However, we are experimenting with static noise. 
The results will be reported here soon.\\n\\n8. **Handling Class Imbalance:** We have included the results below in **Update: Response to Reviewer pUYy (4)**.\"}", "{\"title\": \"Update: Response to Reviewer pUYy (5):\", \"comment\": \"### Result D Static (Fixed) vs Dynamic Noise (We will add this response to the supplementary of the final version)\\n\\n**Results D: Additional Experiments**\\n\\n**Table D:** Evaluation of the comparative effects between static and dynamic noise. KD has been used as the baseline distillation approach. The experiment is conducted with three augmented teachers, using $\\\\sigma = 1$ and $\\\\lambda = 0.8$. Gaussian noise is used to generate the noise.\\n\\n| **Methods** | ResNet32x4-ResNet8x4 | WRN\\\\_40\\\\_2-WRN\\\\_16\\\\_2 | VGG13-VGG8 |\\n|---------------------------|----------------------|-----------------------|------------|\\n| **Baseline (KD)** | 73.33 | 74.92 | 72.98 |\\n| **+ TeKAP (Static-L)** | 73.74 | 74.66 | 73.29 |\\n| **+ TeKAP (Ours: Dynamic-L)** | 74.79 | 75.21 | 74.00 |\\n\\nThe results in Table D demonstrate the effectiveness of dynamic noise over static noise and the baseline Knowledge Distillation (KD) approach across three teacher-student pairs: ResNet32x4-ResNet8x4, WRN\\\\_40\\\\_2-WRN\\\\_16\\\\_2, and VGG13-VGG8 on the CIFAR100 dataset. Baseline KD provides solid performance, achieving 73.33\\\\%, 74.92\\\\%, and 72.98\\\\% accuracy, respectively. Incorporating static noise (TeKAP Static-L) shows minor improvements for ResNet32x4-ResNet8x4 and VGG13-VGG8, achieving 73.74\\\\% and 73.29\\\\%, but performs slightly worse (74.66\\\\%) for WRN\\\\_40\\\\_2-WRN\\\\_16\\\\_2, indicating its limited adaptability. Conversely, our proposed dynamic noise strategy (TeKAP Dynamic-L) consistently outperforms both static noise and baseline KD, achieving significant gains with accuracies of 74.79\\\\%, 75.21\\\\%, and 74.00\\\\%, respectively. 
This superiority stems from dynamic noise's adaptability, which enables robust generalization. These findings underscore the robustness and efficacy of dynamic noise in enhancing knowledge transfer during distillation, providing a compelling case for its application in improving student network performance across diverse architectures.\\n\\n\\n**We will add this response (an ablation study on the comparative effect of static vs. dynamic noise on TeKAP) to the supplementary.** \\n\\nThank you for your detailed and insightful comments. Your feedback has significantly contributed to improving the manuscript. We are happy to address any additional questions or concerns you may have.\"}", "{\"title\": \"Gentle Reminder with Gratitude\", \"comment\": \"Dear Reviewer,\\n\\nWe wish to convey our heartfelt gratitude and appreciation for your insightful and constructive feedback. As the discussion period is anticipated to conclude very soon, we kindly request you to share any additional questions or concerns you may have.\\nWe remain readily available and would be pleased to continue the dialogue to ensure that all matters are comprehensively addressed.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer (FGoU) (2/2)\", \"comment\": \"### The improvements we have made:\\n1. **(Reviewers: NkEk, pUYy, zdP5, FGoU): Additional comparison with state-of-the-art:** Added to the revised manuscript (Table 2, page 7)\\n\\n2. **(Reviewers: NkEk, pUYy, zdP5, FGoU) multi-teacher:** The results discussion for the recent SOTA multi-teacher approach is added to section 4.1, Table 2 (page 7) of the revised manuscript.\\n\\n3. **(Reviewers: NkEk): explanation of usage scenarios between the feature level and logit level:** Added in section 3.1, page 4 of the main manuscript. (Please find the changes highlighted in the supplementary)\\n\\n4. 
**(Reviewers: NkEk, pUYy) potential benefits of increasing the number of augmented teachers:** Updated Figure 6 (now Figure 5 of the main manuscript, Table 2 of this response). We have trained more teachers (up to 10) and provided the potential benefits of increasing the number of augmented teacher models in Table 1 of the supplementary.\\n\\n5. **(Reviewers: NkEk) Evaluation of TeKAP on ensemble learning:** Added to the supplementary: Table 2, Section B (Table 3 of the last response).\\n\\n6. **(Reviewer: pUYy): Theoretical Depth:** We have extended the theoretical analysis in the supplementary (Section K in detail) and added more theoretical discussion in the supplementary (section D).\\n\\n7. **(Reviewer: pUYy, FGoU, zdP5) effect for different Gaussian noise parameters:** We have used mean = 0 and variance = 1 as the default. Additionally, we added the effect for variance $\\\\sigma$ = [0.5, 1, 1.5] in the supplementary (Table 5, section E).\\n\\n8. **(Reviewer: pUYy) comparative computation complexity**: Added to section H of the supplementary.\\n\\n9. **(Reviewer: pUYy, FGoU) Description and explanation of every mathematical term on page 5**: We have carefully gone through and added the description and explanation of every mathematical term used in the paper. \\n\\n10. **(Reviewer: pUYy, FGoU) Experiments on class-imbalanced data:** Added to the supplementary Table 4, section D.\\n\\n11. **(Reviewer: pUYy, FGoU) fixed noise experiments**: Experiments are running; the results will be added to the final version, and we will also report them here before the deadline.\\n\\n12. **(Reviewer: pUYy, zdP5) how inter-class diversity works**: Discussion added in the supplementary section I.\\n\\n13. **(Reviewer: zdP5) effect for different values of $\\\\lambda$**: Added in the supplementary Table 3, Section C.\\n\\n14. **(Reviewer: zdP5) Meaning of $L_{cel}$:** We have added the meaning of $L_{cel}$ in line 209, page 5 of the main manuscript.\\n\\n15. 
**(Reviewer: zdP5) More experiments on TAKD with WRN-22-2 or WRN-16-2?**: Experiments are running. They will be added in the final version and reported here soon.\\n\\n16. **(Reviewer: FGoU) clarification of random distortion and inter-class relationships**: Added in the supplementary, Section I.\\n\\n\\nReferences\\n1. Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n2. Jin, Ying, Jiaqi Wang, and Dahua Lin. \\\"Multi-level logit distillation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n3. Mirzadeh, Seyed Iman, et al. \\\"Improved knowledge distillation via teacher assistant.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020.\\n4. Zhang, Hailin, Defang Chen, and Can Wang. \\\"Confidence-aware multi-teacher knowledge distillation.\\\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022.\\n5. Son, Wonchul, et al. \\\"Densely guided knowledge distillation using multiple teacher assistants.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\\n\\n\\nWe sincerely appreciate the time and effort you have devoted to reviewing our manuscript. Your suggestions have significantly enhanced its quality. We are happy to address any additional queries you may have.\"}", "{\"title\": \"Further Response to Reviewer zdP5 (2/3)\", \"comment\": \"8. **Question-8**: I did not find any configuration files in the supplementary. Do the authors mean the default settings in train_student.py?\\n\\n**Response-8**: The supplementary and revised manuscript are available now. **We have added details of the experimental setups in Section K of the supplementary document.** 
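Beyond the configuration defaults, the mechanism discussed throughout this thread (deriving several "augmented teachers" from a single teacher by noise mixing) can be sketched as follows. This is a minimal NumPy illustration, not the released code: the function name is ours, and the 0.9/0.1 weights and zero-mean, unit-variance Gaussian noise follow the defaults stated in these responses.

```python
import numpy as np

def make_augmented_teachers(teacher_logits, num_aug=3, noise_weight=0.1, rng=None):
    """Derive `num_aug` noisy 'teacher views' from a single teacher's logits.

    Each view mixes the original logits (weight 0.9) with fresh Gaussian
    noise of mean 0 and std 1 (weight 0.1), as described in the responses.
    """
    rng = rng or np.random.default_rng(0)
    return [
        (1.0 - noise_weight) * teacher_logits
        + noise_weight * rng.standard_normal(teacher_logits.shape)
        for _ in range(num_aug)
    ]

# A toy batch of teacher logits: 2 samples, 3 classes.
logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
views = make_augmented_teachers(logits, num_aug=3)

assert len(views) == 3
assert all(v.shape == logits.shape for v in views)
# The views differ from one another, giving the student diverse supervision.
assert not np.allclose(views[0], views[1])
```

The student would then distill from the original teacher plus these views; regenerating the noise at every epoch yields the dynamic variant compared against fixed noise above.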
\\n\\n**Note:** Please note that we will add more results in the supplementary: (1) TAKD with WRN-22-2 and WRN-16-2, and (2) the effect of TeKAP with fixed random noise. The experiments on the CIFAR100 dataset will be reported in this response soon (before the discussion period ends). As the revised-version submission window has ended, we will add these results to the final version and report them here.\\n\\n\\n9. **Question**: In Table 2 of the authors' reply to Reviewer NkEk, more perturbations seem to harm the student's performance. Can you explain why increasing perturbations destroys the teacher's knowledge pattern? Since the mean of the gradients is converging with the increasing number of perturbations, and based on the theoretical part of the paper, more perturbations should benefit performance.\\n\\n**Response:** We agree with this comment. We are very grateful to the reviewer for pointing out this very important issue. The previously reported results were mixed up with the class-imbalanced experiments. However, inspired by these comments, we carefully went through the experiments and evaluations again. **We have updated Table 2 (responses to Reviewer NkEk)**. The updated results are reported in Figure 5 and Section 4.9 of the revised manuscript.\\n\\n\\n| #Teachers | Ours | Baseline (KD) | Baseline (Rerun) |\\n|-------------------------|--------------|---------------|------------------|\\n| T + 1 AugT | 73.9 | 72.98 | 73.3 |\\n| T + 2 AugT | 73.43 | 72.98 | 73.3 |\\n| T + 3 AugT | 74.04 | 72.98 | 73.3 |\\n| T + 4 AugT | 73.98 | 72.98 | 73.3 |\\n| T + 5 AugT | 74.00 | 72.98 | 73.3 |\\n| T + 6 AugT | 73.53 | 72.98 | 73.3 |\\n| T + 7 AugT | 74.16 | 72.98 | 73.3 |\\n| T + 8 AugT | 74.33 | 72.98 | 73.3 |\\n| T + 9 AugT | 74.63 | 72.98 | 73.3 |\\n| T + 10 AugT | 75.11 | 72.98 | 73.3 |\", \"table_4\": \"Effect of the number of augmented teachers.\\n\\nTable 4 (Fig. 5 of the main manuscript) shows the effect of the number of augmented teachers. 
We use ResNet32x4-ResNet8x4 as the teacher-student setup on the CIFAR100 dataset to examine the effect of the hyper-parameters. From Table 4 (Fig. 5 of the main manuscript), we see that TeKAP is robust to the number of augmented teachers. For every number of augmented teachers, TeKAP achieves better accuracy than the baseline and DKD students. The best performance is achieved when the number of augmented teachers is $3$. We have used three ($3$) and one ($1$) augmented teachers along with the original teacher, respectively. During feature and logit distortion, the weights for the noise and the teacher output are $0.1$ and $0.9$, respectively.\\n\\n### Additional Ablation Study:\\n\\n**Results-A: Class Imbalance Dataset:**\\n\\n| Methods | ResNet32x4-ResNet8x4 | WRN_40_2-WRN_16_2 | VGG13-VGG8 |\\n|-------------------------|--------------|---------------|------------------|\\n| Baseline (KD) | 41.71 | 52.08 | 47.52 |\\n| + TeKAP (Ours) | 46.42 | 52.72 | 51.25 |\", \"table_5\": \"Significance of TeKAP on a class-imbalanced dataset. We have used the class distribution of the CIFAR100 dataset that is described in Table 6 (of the supplementary).\\n\\nSection D (of the supplementary)\\nThe results presented in Table 5 (Table 4 of the supplementary) highlight the effectiveness of TeKAP in addressing class imbalance in knowledge distillation tasks. TeKAP consistently improves the performance of all three teacher-student model pairs (ResNet32x4-ResNet8x4, WRN\\\\_40\\\\_2-WRN\\\\_16\\\\_2, and VGG13-VGG8) compared to the baseline Knowledge Distillation (KD) approach. Specifically, TeKAP boosts accuracy by 4.71\\\\% for ResNet32x4-ResNet8x4, 0.64\\\\% for WRN\\\\_40\\\\_2-WRN\\\\_16\\\\_2, and 3.73\\\\% for VGG13-VGG8. These results indicate that TeKAP is particularly effective in enhancing performance for models with lower baseline accuracy, though it also provides improvements for models with higher baseline accuracy. 
This suggests that TeKAP can effectively mitigate the effects of class imbalance, leading to improved generalization in knowledge distillation tasks.\"}", "{\"title\": \"References\", \"comment\": \"### References\\n1. Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2022.\\n2. Jin, Ying, Jiaqi Wang, and Dahua Lin. \\\"Multi-level logit distillation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n3. Mirzadeh, Seyed Iman, et al. \\\"Improved knowledge distillation via teacher assistant.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.\\n4. Zhang, Hailin, Defang Chen, and Can Wang. \\\"Confidence-aware multi-teacher knowledge distillation.\\\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022.\\n5. Son, Wonchul, et al. \\\"Densely guided knowledge distillation using multiple teacher assistants.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\"}", "{\"comment\": \"Dear Reviewer pUYy,\\n\\nThank you for your thoughtful comments and suggestions. We have added further results with detailed, point-by-point explanations for your review. We hope these enhancements meet your expectations, and if you feel they merit reconsidering your rating, we would be truly grateful. \\n\\nBest regards, \\nThe Authors\"}", "{\"metareview\": \"This paper introduces TeKAP, a novel knowledge distillation method that generates diverse teacher perspectives by perturbing the feature maps and logits of a single pretrained teacher model. It simulates the benefits of multi-teacher distillation while reducing computational cost, improving student model generalization, and demonstrating effectiveness on standard benchmarks.\", \"the_paper_has_received_mixed_scores\": \"three weak positives (6, 6, 6) and one negative (5). 
The reviewers highlight some of its strengths:\\n\\n(1). Efficiency and Simplicity: The method uses a single pretrained teacher to simulate multiple teacher perspectives through perturbation, effectively circumventing the high computational costs of traditional multi-teacher setups. \\n\\n(2). Seamless Integration with Existing KD Methods: The plug-and-play module integrates well with existing knowledge distillation methods, adding minimal computational overhead. \\n\\n(3). Wide Range of Applications: The proposed method demonstrates promising results in various aspects, such as model compression, adversarial robustness, transferability, and few-shot learning, indicating its broad applicability.\\n\\n(4). Clear Structure and Expression: The paper is well-structured, clear, and easy to follow, presenting intriguing perspectives.\\n\\nMeanwhile, the reviewers also pointed out some key weaknesses of the paper, such as the lack of comparisons with recent multi-teacher distillation approaches and other state-of-the-art single-teacher methods, which makes it difficult to highlight the relative strengths of the proposed method. The paper also has insufficient theoretical analysis of the perturbation methods and lacks sufficient implementation details. After the rebuttal phase, most of the weaknesses have been addressed.\", \"the_final_decision_is_acceptance_based_on_the_following_primary_reasons\": \"the method's efficiency and simplicity, and its seamless integration with existing KD methods to improve performance. Besides, most weaknesses have been addressed after the rebuttal phase. The authors are required to include some of the improvements mentioned in the rebuttal, such as necessary experimental results and image enhancements, in the final version. 
Meanwhile, the authors should consider the reviewers' suggestions to further improve the quality of the final version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The paper has received mixed scores: three weak positives (6, 6, 6) and one negative (5).\\n\\nReviewer NkEk praised the theoretical proof, experimental validation, and broad applicability of the method but suggested clarifying feature-level vs logit-level scenarios.\\n\\nReviewer pUYy highlighted the novelty of using a single pretrained teacher and the promising results but raised concerns about the lack of theoretical depth, missing comparisons with other methods, and no discussion on computational efficiency.\\n\\nReviewer zdP5 appreciated the efficiency of the method.\\n\\nReviewer FGoU valued the simplicity of the plug-and-play module but requested more details on the noise perturbation implementation.\\n\\nMost reviewers expressed concerns about the lack of comparisons with state-of-the-art methods and the limited validation of the proposed approach. Additionally, several reviewers highlighted the need for more detailed discussions on the theoretical aspects and method's implementation details, such as perturbation parameters and computational efficiency. After the rebuttal phase, most of the concerns have been addressed.\\n\\nAC believes that this paper indeed proposes a novel knowledge distillation method and provides a substantial amount of experimental data to demonstrate the effectiveness of their approach. Additionally, the authors have provided reproducible open-source code. Therefore, AC is inclined to accept the paper. Based on these considerations, the final decision is accept.\"}", "{\"title\": \"Further Additional Responses to Reviewer NkEk (2/2)\", \"comment\": \"11. **(Reviewer: pUYy, FGoU) fixed noise experiments**: Experiments are running and will be added to the final version and we will also report here with the deadline.\\n\\n12. **(Reviewer: pUYy. 
zdP5) how inter-class diversity works**: Discussion added in the supplementary, Section I.\\n\\n13. **(Reviewer: zdP5) effect for different values of $\\\\lambda$**: Added in the supplementary, Table 3, Section C.\\n\\n14. **(Reviewer: zdP5) Meaning of $L_{cel}$:** We have added the meaning of $L_{cel}$ in line 209, page 5 of the main manuscript.\\n\\n15. **(Reviewer: zdP5) More experiments on TAKD with WRN-22-2 or WRN-16-2?**: Experiments are running. They will be added in the final version and reported here soon.\\n\\n16. **(Reviewer: FGoU) clarification of random distortion and inter-class relationships**: Added in the supplementary, Section I.\\n\\nWe appreciate the effort you put into reviewing our manuscript. Your suggestions have been invaluable in refining and improving its quality. Thank you for your thoughtful comments, and we would be glad to address any further queries.\"}", "{\"comment\": \"**Dear Reviewer zdP5,**\\n\\nThank you very much for your reply. We appreciate the efforts you have devoted to reviewing our manuscript.\\n\\n**Q6. CAMs of TeKAP:** We generated the CAMs figure for TeKAP vs. the Teacher after the submission deadline of the revised manuscript. The updated figure is now ready; however, because it was completed after the revised-version submission deadline, we were unable to add it to the revised version or the supplementary materials. We will add the updated CAMs figure in the final version.\\n\\n**Q8-Concern 1: Did the authors use $\\alpha=0.8$, $\\beta=0.2$, and $\\lambda=1.0$ in Equation 5?** Yes, we used this combination only for TeKAP*(F+L). For all other cases, we used $\\beta=0.8$ and $\\lambda=1$ (if not mentioned). For the others, we have added experimental details.\\n\\n**Q8-Concern 2: From the provided code, it seems the number is set to 3. 
Could you clarify why, as increasing the number of AugTs generally leads to better performance?** Thank you for raising this concern. To keep computational complexity as low as possible, we used three AugTs in every comparison (unless specified otherwise). We also demonstrated that increasing the number of AugTs leads to better performance (Table 4 of the reply). In some experiments, we used a total of ten teachers to show the impact of different numbers of AugTs in TeKAP (Table 3 in the supplementary materials and Figure 5 in the main manuscript). However, for other experiments, only three AugTs were used. In our code, we have included implementations for all ten AugTs, with comments to indicate which ones are active. Researchers can simply remove the comments to run experiments with ten teachers.\\n\\n**Q8-Concern 3: There are two hyperparameters labeled as $\\alpha$ in both Equation 1 and Equation 5:** Thank you very much for noticing this very important issue. We will change the hyperparameter symbol in Equation 5 to $\\Psi$ in the final version and update all corresponding references.\\n\\n**Q9-Concern 1: For example, the performance of T + 10 AugTs improved from 71.4 to 75.11. Could you explain this update?** We appreciate this comment. There was **confusion with the class-imbalance experiments**. Basically, we ran experiments on a **class-imbalanced dataset** following a suggestion by reviewer pUYy. Later, we began running experiments on the effects of different numbers of AugTs. Unfortunately, we overlooked the class-imbalance dataloader in the data processing, which led to uncertain results (for instance, 71.4). Based on your suggestion in our first reply, we re-investigated the experiments and identified the issue with the class-imbalanced training set. After correcting the data processing, we reran the experiments and obtained more reliable results (for instance, 75.11). 
The previous results (for instance, 71.4) were uncertain and erroneous, while the later results were obtained after making the necessary revisions.\\n\\n**Q9-Concern 2:** In the first version, we had results for 1-6 AugTs. We added results for 7-10 AugTs during the rebuttal.\\n\\n**Q10. Unable to match the results for ResNet32x4\\u2013ResNet8x4 across Table 1, Table 4, Table 2 (Supplementary), and Table 4 (from the reply):** Thank you for pointing this out. This discrepancy is due to the use of different hyperparameters. We evaluate the effects of different hyperparameters to ensure a fair evaluation. In Table 4 (from the reply), we used TeKAP only at the logit level, i.e., TeKAP(L). However, in Table 1 and Table 2 of the supplementary materials (the T + 3 AugT setting), we reported the results of TeKAP(F+L), where F and L stand for feature- and logit-level distortion. For this reason, the results differ. Thank you again for this valuable concern. We have already presented the effect of different AugTs for logits only in the current version (Table 4 of the reply). Along with Table 4 (from the reply), we will also include TeKAP(F+L) in the supplementary materials of the final version of our paper.\\n\\nAgain, we sincerely appreciate your thoughtful feedback and suggestions. We deeply value your time and effort devoted to our paper.\"}", "{\"summary\": \"The paper proposes a new augmentation method to replace the ensemble approach for KD by adding noise to the features or logits of the teacher model. 
This increases the variability of predictions and reduces the generalization error.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed method is more efficient compared to other ensemble methods, and increasing the variability of the teacher's predictions is meaningful for knowledge distillation.\\n\\nThe paper is well-written and easy to follow.\", \"weaknesses\": \"1) The paper proposes an effective method to replace ensemble approaches; however, there is a lack of comparison to other ensemble methods (such as multi-augmentations) to demonstrate its effectiveness. Additionally, TAKD is not the SOTA method (for example, DGKD [1]) and there is a lack of experimental details for TAKD. It is not clear what teacher models are used for TAKD.\\n\\n2) The experiments in this paper are not sufficient, and the baselines are outdated. The proposed method only compares with vanilla KD (2015), TAKD (2020), and CRD (2019), and lacks comparisons with other new methods like DKD [2] and MLKD [3].\\n\\n\\n[1] Son, W.; Na, J.; Choi, J.; and Hwang, W. 2021. Densely guided knowledge distillation using multiple teacher assistants. In Proc. Int. Conf. on Computer Vision (ICCV)\\n\\n[2] Zhao, B.; Cui, Q.; Song, R.; Qiu, Y.; and Liang, J. 2022. Decoupled Knowledge Distillation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)\\n\\n[3] Jin, Y.; Wang, J.; and Lin, D. 2023. Multi-Level Logit Distillation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)\", \"questions\": \"1) What is \\\\mathcal{L}_{cel}\\u200b in Equation 5?\\n\\n2) In Equations 2 and 4, calculate the summation of the perturbation loss. Does \\\\lambda need to be adjusted according to the number of perturbations?\\n\\n3) What is the difference between the CAMs of TeKAP and the teacher in Figure 5? 
They look the same.\\n\\n4) There is a lack of experimental details; even the learning rate and the number of training epochs are not mentioned in the paper.\\n\\n5) For feature-level perturbation, which features are selected to add noise?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Response to Reviewer zdP5(3/3)\", \"comment\": \"**Results-B: Effect of TeKAP for different variance $\\\\sigma$ on the performance:**\\n\\n**Table 6**: Effect of TeKAP with different variance $\\\\sigma$. KD is used as the baseline distillation approach. We have used mean zero in all the cases.\\n\\n| Variance | $\\\\sigma = 0.5$ | $\\\\sigma = 1$ | $\\\\sigma = 1.5$ |\\n|---------------|----------------|--------------|----------------|\\n| Accuracy | 74.89 | 74.79 | 74.35 |\\n\\nSection E (of the supplementary)\\n\\nTable 6 (table 5 of the supplementary and section E) summarizes the impact of different variances ($\\\\sigma$) on the performance of TeKAP, using the CIFAR-100 dataset. The baseline distillation approach, Knowledge Distillation (KD), is used for comparison. As shown in the results, the accuracy of the model remains relatively stable across varying values of $\\\\sigma$. Specifically, when $\\\\sigma = 0.5$, the model achieves an accuracy of 74.89\\\\%, slightly higher than the accuracy at $\\\\sigma = 1$ (74.79\\\\%) and $\\\\sigma = 1.5$ (74.35\\\\%). These results suggest that, within the range of variances tested, increasing the noise variance does not significantly degrade performance. In fact, the accuracy only decreases marginally as the variance increases from 0.5 to 1.5, which indicates the robustness of TeKAP with respect to noise. This behavior suggests that TeKAP can maintain competitive performance even with varying levels of noise in the teacher models, highlighting its resilience to noise during distillation. 
The consistent results across different variances also support the idea that TeKAP is stable and less sensitive to slight perturbations in the teacher\\u2019s logits. This stability is critical for practical applications where noise may be present in the data or models.\\n\\n\\n### The improvements we have made:\\n1. **(Reviewers: NkEk, pUYy, zdP5, FGoU): Additional comparison with state-of-the-art:** Added to the revised manuscript (Table 2, page 7).\\n\\n2. **(Reviewers: NkEk, pUYy, zdP5, FGoU) multi-teacher:** The results discussion for the recent SOTA multi-teacher approach is added to Section 4.1, Table 2 (page 7) of the revised manuscript.\\n\\n3. **(Reviewers: NkEk): explanation of usage scenarios between the feature level and logit level:** Added in Section 3.1, page 4 of the main manuscript. (Please find the changes marked as highlights in the supplementary.)\\n\\n4. **(Reviewers: NkEk, pUYy) potential benefits of increasing the number of augmented teachers:** Updated Figure 6 (now Figure 5 of the main manuscript, Table 2 of this response). We have trained more teachers (up to 10) and provided the potential benefits of increasing the number of augmented teacher models in Table 1 of the supplementary.\\n\\n5. **(Reviewers: NkEk) Evaluation of TeKAP on ensemble learning:** Added to the supplementary: Table 2, Section B; Table 3 of the last response.\\n\\n6. **(Reviewer: pUYy): Theoretical Depth:** We have extended the theoretical analysis in the supplementary (Section K, in detail) and added more theoretical discussion in the supplementary (Section D).\\n\\n7. **(Reviewer: pUYy, FGoU, zdP5) effect for different Gaussian noise parameters:** We have used mean = 0 and variance = 1 as the default. Additionally, we added the effect for variance $\\\\sigma$ = [0.5, 1, 1.5] in the supplementary (Table 5, Section E).\\n\\n8. **(Reviewer: pUYy) comparative computation complexity**: Added to Section H of the supplementary.\\n\\n9. 
**(Reviewer: pUYy, FGoU) Description and explanation of every mathematical term on page 5**: We have carefully gone through the paper and added a description and explanation of every mathematical term used. \\n\\n10. **(Reviewer: pUYy, FGoU) Experiments on class-imbalanced data:** Added to the supplementary, Table 4, Section D.\\n\\n11. **(Reviewer: pUYy, FGoU) fixed noise experiments**: Experiments are running; the results will be added to the final version, and we will also report them here before the deadline.\\n\\n12. **(Reviewer: pUYy, zdP5) how inter-class diversity works**: Discussion added in the supplementary, Section I.\\n\\n13. **(Reviewer: zdP5) effect for different values of $\\\\lambda$**: Added in the supplementary, Table 3, Section C.\\n\\n14. **(Reviewer: zdP5) Meaning of $L_{cel}$:** We have added the meaning of $L_{cel}$ in line 209, page 5 of the main manuscript.\\n\\n15. **(Reviewer: zdP5) More experiments on TAKD with WRN-22-2 or WRN-16-2?**: Experiments are running. They will be added in the final version and reported here soon.\\n\\n16. **(Reviewer: FGoU) clarification of random distortion and inter-class relationships**: Added in the supplementary, Section I.\\n\\n\\nWe appreciate your valuable time and effort. These suggestions have helped improve the manuscript considerably. Again, thank you very much for your valuable and insightful comments. We would love to respond if there are any further queries.\"}", "{\"title\": \"Response to Reviewer NkEk\", \"comment\": \"We thank you for the valuable feedback and suggestions on our submission. 
We have addressed the comments and questions step-by-step below:\\n\\n### **Additional Experiments:**\\n\\n| | Model | ResNet32x4-ResNet8x4 | WRN_40_2-WRN_40_1 |\\n|-----------------|---------------|------------|----------|\\n| **Teacher** | Accuracy | 79.42 | 75.61 |\\n| **Student** | Accuracy | 72.50 | 71.98 |\\n| **Single Teacher** | DKD [1] | 76.32 | 74.81 |\\n| | **DKD + TeKAP (Ours)** | **76.59** | **75.33**|\\n| | MLKD [2] | 77.08 | 75.35 |\\n| | **MLKD + TeKAP (Ours)** | **77.36** | **75.67**|\\n| **Multi-Teacher** | TAKD [3] | 73.93 | 73.83 |\\n| | **TAKD + TeKAP (Ours)** | **74.81** | **74.37**|\\n| | CA-MKD [4] | 75.90 | 74.56 |\\n| | **CA-MKD + TeKAP (Ours)** | **76.34** | **74.98**|\\n| | DGKD [5] | 75.31 | 74.23 |\\n| | **DGKD + TeKAP (Ours)** | **76.17** | **75.14**|\\n\\n\\n**Table-1:** The effects of TeKAP on the SOTA methods DKD [1], MLKD [2], TAKD [3], CA-MKD [4], and DGKD [5]. \\n\\n\\n### **Responses**:\\n\\n1. **Weakness-1: More comparison with recent multi-teacher work:** We have included more comparisons with DKD, MLKD, TAKD, CA-MKD, and DGKD, evaluated on the CIFAR-100 dataset. TeKAP outperformed all approaches under every scenario.\\n\\n2. **Weakness-2: Insufficient explanation of the difference in usage scenarios between feature-level and logit-level:** Thanks for this insightful suggestion. Logit-level augmentation primarily diversifies the inter-class relationships, providing alternative supervisory signals that regularize the student network. Feature-level augmentation, on the other hand, introduces diversity in intermediate feature representations, exposing the student to a broader spectrum of variations (like dropout or data augmentation). Both augmentations target distinct aspects of teacher knowledge: logits focus on prediction diversity, while features address internal representation diversity. \\n\\n3. **Question-1: Inclusion of more distillation methods for a more convincing study:** We appreciate the suggestion. 
In our revised manuscript, we will add more comparisons with SOTA techniques, as shown in Table 1 (here).\\n\\n| #Teachers | Ours | Baseline (KD) | Baseline (Rerun) |\\n|-------------------------|--------------|---------------|------------------|\\n| T + 1 AugT | 73.9 | 72.98 | 73.3 |\\n| T + 2 AugT | 73.43 | 72.98 | 73.3 |\\n| T + 3 AugT | 74.04 | 72.98 | 73.3 |\\n| T + 4 AugT | 73.98 | 72.98 | 73.3 |\\n| T + 5 AugT | 74.00 | 72.98 | 73.3 |\\n| T + 6 AugT | 73.53 | 72.98 | 73.3 |\\n| T + 7 AugT | 74.16 | 72.98 | 73.3 |\\n| T + 8 AugT | 74.33 | 72.98 | 73.3 |\\n| T + 9 AugT | 74.63 | 72.98 | 73.3 |\\n| T + 10 AugT | 75.11 | 72.98 | 73.3 |\", \"table_2\": \"Effect of the number of augmented teachers.\\n\\n4. **Question-2: Potential benefits of increasing the number of teacher models:** In Figure 6, we show that TeKAP consistently benefits from additional synthetic teachers up to a certain threshold. Beyond that threshold, increasing the number of augmented teacher models will not further improve the performance, because too much noise destroys the teacher's knowledge pattern instead of regularizing it (similar to dropout or data augmentation). The same happens in ensemble learning, where excessive models may introduce noise. We have run additional experiments with up to 10 augmented teachers.\\n\\n| #Original Teachers (T) | TeKAP (Ours) |\\n|-------------------------|--------------|\\n| 1 OriginT + 3 AugT | 75.98 |\\n| 2 OriginT + 3 AugT | 76.12 |\\n| 3 OriginT + 3 AugT | 76.31 |\", \"table_3\": \"Effect of multiple original teachers.\\n\\n5. **Inspired By: More teachers with augmentation of every teacher:** We have run additional experiments where we use multiple teachers (2 and 3) of ResNet32x4 (trained using different seeds and learning rates). We have augmented each teacher with three noise sets. We observe performance improvements when increasing the number of teachers, as shown in Table 3.\\n\\nAgain, thank you very much for these insightful suggestions. 
These suggestions have helped improve the manuscript. We will add these responses to our revised manuscript accordingly.\"}", "{\"title\": \"Response to Reviewer FGoU (1/2)\", \"comment\": \"Thank you for your constructive feedback. We appreciate the insights from the reviewers. Below, we address each of the points step-by-step:\\n### **Additional Experiments:** \\n\\n| | Model | ResNet32x4-ResNet8x4 | WRN_40_2-WRN_40_1 |\\n|-----------------|---------------|------------|----------|\\n| **Teacher** | Accuracy | 79.42 | 75.61 |\\n| **Student** | Accuracy | 72.50 | 71.98 |\\n| **Single Teacher** | DKD [1] | 76.32 | 74.81 |\\n| | **DKD + TeKAP (Ours)** | **76.59** | **75.33**|\\n| | MLKD [2] | 77.08 | 75.35 |\\n| | **MLKD + TeKAP (Ours)** | **77.36** | **75.67**|\\n| **Multi-Teacher** | TAKD [3] | 73.93 | 73.83 |\\n| | **TAKD + TeKAP (Ours)** | **74.81** | **74.37**|\\n| | CA-MKD [4] | 75.90 | 74.56 |\\n| | **CA-MKD + TeKAP (Ours)** | **76.34** | **74.98**|\\n| | DGKD [5] | 75.31 | 74.23 |\\n| | **DGKD + TeKAP (Ours)** | **76.17** | **75.14**|\\n\\n\\n**Table-1:** The effects of TeKAP on the SOTA methods DKD [1], MLKD [2], TAKD [3], CA-MKD [4], and DGKD [5]. \\n\\n\\n| #Original Teachers (T) | TeKAP (Ours) |\\n|-------------------------|--------------|\\n| 1 OriginT + 3 AugT | 75.98 |\\n| 2 OriginT + 3 AugT | 76.12 |\\n| 3 OriginT + 3 AugT | 76.31 |\", \"table_2\": \"Effect of multiple original teachers.\\n\\n|Network | Augmentation Techniques | TeKAP (F+L) |\\n|-------------------------|--------------|--------------|\\n| **ResNet32x4-ResNet8x4** | Gaussian | 75.98 |\\n| | Uniform | 75.71 |\\n| **WRN_40_2-WRN_40_1** | Gaussian | 74.41 |\\n| | Uniform | 74.26 |\", \"table_3\": \"Effect of different noise techniques.\\n\\n### **Responses**:\\n1. **Validation of the Proposed Module:** We have conducted additional experiments comparing TeKAP with several state-of-the-art single-teacher and multi-teacher KD methods in Table 1. 
These results include advanced methods such as DKD [1], MLKD [2], TAKD [3], CA-MKD [4], and DGKD [5]. We have also added additional evaluations (ensembles, multi-teacher augmentations, and so on) in Tables 2 and 3.\\n\\n\\n2. **Details on Dynamic Noise Perturbation:** The random noise is regenerated at every epoch, which is what we refer to as dynamic noise perturbation.\\n\\n3. **Scale of random noise:** We have used zero-mean, unit-standard-deviation noise, generated at every epoch, and perform a weighted combination with the original teacher logits (noise weight 0.1 and teacher weight 0.9). We have also added detailed guidelines on how to set the noise parameters for different configurations.\\n\\n4. **Meaning of h in Eq 9:** h represents a function from the hypothesis class H, which is the set of functions under consideration. Each h maps inputs x_i (from the dataset) to real numbers, often representing predictions, scores, or decisions. This quantity is used to measure the capacity or complexity of H.\\n\\n5. **Clarification of Random Distortion and Inter-Class Relationships:** While the noise is random, it serves to introduce variability that prevents the student from overfitting to the teacher\\u2019s exact logits (i.e., single perspectives). If two classes are strongly correlated in the teacher logits, random distortions will not eliminate this correlation but may perturb its exact magnitude or direction, leading to diverse interpretations of the relationship. Imagine teaching a concept by showing slightly varied examples; this helps learners generalize the concept rather than memorize specific instances. 
Similar to dropout (which can be viewed as implicit network ensembling, because every random drop creates a different network structure), random feature distortion (whose slightly different outputs can be treated as coming from different networks) forces the model to adapt to a broader range of conditions. This diversity helps the student model avoid collapsing into a rigid interpretation of the teacher\\u2019s outputs. \\n\\n6. **Fig. 3(b) vs Fig. 3(c):** Thanks for pointing this out. In knowledge distillation, feature-level and logit-level distillation are usually treated as different categories. To express that our approach is applicable to both, we have drawn both the feature-level and logit-level figures. The figure shows that TeKAP is applicable at both the feature level (3(b)) and the logit level (3(c)).\\n\\n\\nWe hope these revisions address your concerns and improve the clarity of the paper. We believe that these additional experiments and clarifications strengthen our work. Thank you again for your valuable feedback.\"}", "{\"title\": \"Response to Reviewer zdP5\", \"comment\": \"Thank you for your valuable feedback. 
We appreciate your thoughtful comments and suggestions.\\n\\n### **Additional Experiments:**\\n\\n| | Model | ResNet32x4-ResNet8x4 | WRN_40_2-WRN_40_1 |\\n|-----------------|---------------|------------|----------|\\n| **Teacher** | Accuracy | 79.42 | 75.61 |\\n| **Student** | Accuracy | 72.50 | 71.98 |\\n| **Single Teacher** | DKD [1] | 76.32 | 74.81 |\\n| | **DKD + TeKAP (Ours)** | **76.59** | **75.33**|\\n| | MLKD [2] | 77.08 | 75.35 |\\n| | **MLKD + TeKAP (Ours)** | **77.36** | **75.67**|\\n| **Multi-Teacher** | TAKD [3] | 73.93 | 73.83 |\\n| | **TAKD + TeKAP (Ours)** | **74.81** | **74.37**|\\n| | CA-MKD [4] | 75.90 | 74.56 |\\n| | **CA-MKD + TeKAP (Ours)** | **76.34** | **74.98**|\\n| | DGKD [5] | 75.31 | 74.23 |\\n| | **DGKD + TeKAP (Ours)** | **76.17** | **75.14**|\\n\\n\\n**Table-1:** The effects of TeKAP on the SOTA methods DKD [1], MLD [2], TAKD [3], CA-MKD [4], and DGKD [5]. \\n\\n| #Original Teachers (T) | TeKAP (F+L) |\\n|-------------------------|--------------|\\n| 1 OriginT + 3 AugT | 75.98 |\\n| 2 OriginT + 3 AugT | 76.12 |\\n| 3 OriginT + 3 AugT | 76.31 |\", \"table_2\": \"Effect of multiple original teachers.\\n\\n\\n|Network | Augmentation Techniques | TeKAP (F+L) |\\n|-------------------------|--------------|--------------|\\n| **ResNet32x4-ResNet8x4** | Gaussian | 75.98 |\\n| | Uniform | 75.71 |\\n| **WRN_40_2-WRN_40_1** | Gaussian | 74.41 |\\n| | Uniform | 74.26 |\", \"table_3\": \"Effect of different noise techniques.\\n\\n1. **Comparison to Other Ensemble Methods:** We have added a new set of experiments comparing TeKAP against CA-MKD, and DGKD. We also perform an ensemble comparison in Table 2. When we employed three teachers on DKD[2] and employed our approach, TeKAP, we experienced that our TeKAP uplifted the performance of the ensemble version as shown in Table 2.\\n\\n\\n2. 
**Comparison with SOTA Methods:** We acknowledge your comment about the comparison with outdated baselines, such as TAKD, and the absence of newer methods like DKD and MLKD. In response, we have added more experiments to compare with existing SOTAs, as suggested by the reviewers. The updated results are shown in Table 1.\\n\\n\\n3. **Details on Teacher Models Used for TAKD:** Thank you for pointing out this important concern. We re-ran the official implementation of TAKD and used the corresponding teacher network with a smaller number of layers, where TeKAP (Ours) denotes (F+L), i.e., (KD+CRD). We will add the experimental details in the supplementary. In our additional experiment, we ran TAKD in our setup for ResNet32x4-ResNet8x4 and WRN_40_2-WRN_40_1 in Table 1. We used only two blocks for the assistant teacher of every corresponding teacher, whereas the original vanilla teacher has 3 blocks. \\n\\n4. **Explanation of $\\\\mathcal{L}_{cel}$ in Equation 5:** The term $\\\\mathcal{L}_{cel}$ in Equation 5 represents the cross-entropy loss for the student model during training. \\n\\n5. **Summation of Perturbation Loss in Equations 2 and 4:** In our work, we did not adjust $\\\\lambda$ according to the number of perturbations. Since we first sum the losses of all the perturbations and then apply $\\\\lambda$, the effects of the noisy and true labels are always scaled by $\\\\lambda$ and $(1-\\\\lambda)$, respectively. \\n\\n6. **CAMs of TeKAP vs. Teacher in Figure 5:** We sincerely appreciate your pointing out this important issue. We will update the figure in the revised manuscript.\\n\\n7. **Experimental Details (Learning Rate and Epochs):** We have updated the experimental details in the supplementary of the revised manuscript. We appreciate this important suggestion.\\n\\n8. **Feature-Level Perturbation: Which Features Are Selected for Noise?:** Regarding feature-level perturbation, we have followed the same setup described and implemented by CRD. 
We have selected the features immediately before the FC layer by following the work described in CRD for a fair comparison.\\n\\nAgain, thank you very much for these insightful and effective comments. We believe these revisions address the concerns and further enhance the clarity. We have updated the manuscript accordingly.\\n\\n### References\"}", "{\"title\": \"Further Additional Responses to Reviewer NkEk (1/2)\", \"comment\": \"We appreciate the valuable time and efforts offered by the Reviewer NkEk. Please note that we have updated the results in Table 2 of the previous response. We have added more results and a details analysis of our paper based on concerns raised by all the reviewers.\\n\\nWe would like to request to have a look again at our revised manuscript and overall responses. We would be happy if the reviewer again go through the responses, revised manuscript, supplementary, and reassess our updated manuscript.\", \"the_updated_table_2_can_be_found_as\": \"**Correction: Table 2**\\n\\n The reported result was confused with the class-imbalanced experiments. However, inspired by these comments we carefully went through again the experiments and evaluations. **We have updated Table 2 of the previous responses**. The updated results are reported in Figure 5 and Section 4.9 of the revised manuscript.\\n\\n\\n| #Teachers | Ours | Baseline (KD) | Baseline (Rerun) |\\n|-------------------------|--------------|---------------|------------------|\\n| T + 1 AugT | 73.9 | 72.98 | 73.3 |\\n| T + 2 AugT | 73.43 | 72.98 | 73.3 |\\n| T + 3 AugT | 74.04 | 72.98 | 73.3 |\\n| T + 4 AugT | 73.98 | 72.98 | 73.3 |\\n| T + 5 AugT | 74.00 | 72.98 | 73.3 |\\n| T + 6 AugT | 73.53 | 72.98 | 73.3 |\\n| T + 7 AugT | 74.16 | 72.98 | 73.3 |\\n| T + 8 AugT | 74.33 | 72.98 | 73.3 |\\n| T + 9 AugT | 74.63 | 72.98 | 73.3 |\\n| T + 10 AugT | 75.11 | 72.98 | 73.3 |\", \"table_4\": \"Effect of number of the augmented teachers.\\n\\nTable. 4 (Fig. 
5 of the main manuscript) shows the effect of the number of augmented teachers. We use ResNet32x4-ResNet8x4 as the teacher-student setups on the CIFAR100 dataset to examine the effect of the hyper-parameters. From Table.4 (Fig. 5 of the main manuscript) we see that TeKAP is robust to the number of augmented teachers. For every number of augmented teachers, TeKAP achieves better accuracy than baseline and DKD students. The best performance is achieved when the number of the augmented teacher is $3$. We have used three ($3$), and one $(1)$ augmented teacher along with the original teacher, respectively. During feature and logit distortion, the weights for noise and teacher output are $0.1$, and $0.9$, respectively.\\n\\n### Concerns\\nWe have updated our manuscript based on the reviews added supplementary documents, and revised the manuscript with changes highlights. The updated manuscript's clean version, changes marked with yellow, and supplementary are available now.\\n\\n### The improvements we have made:\\n1. **(Reviewers: NkEk, pUYy, zdP5, FGoU): Additional comparison with state-of-the-art:** Added to the revised manuscript (Table 2, page 7)\\n\\n2. **(Reviewers: NkEk, pUYy, zdP5, FGoU) multi-teacher:** The results discussion for the recent SOTA multi-teacher approach is added to section 4.1, Table 2 (page 7) of the revised manuscript.\\n\\n3. **(Reviewers: NkEk): explanation of usage scenarios between the feature level and logit level:** Added in section 3.1. Page 4 of the main manuscript. (Please find the changes marked highlights in the supplementary)\\n\\n4. **(Reviewers: NkEk, pUYy) potential benefits of increasing the number of augmented teachers** Updated Figure 6 (Now Figure 5 of the main manuscript, Table 2 of this response). We have trained more teachers (till - 10) and provided the potential benefits of increasing the number of augmented teacher models in Table 1 of the supplementary.\\n\\n5. 
**(Reviewers: NkEk) Evaluation of TeKAP on ensemble learning.** Added to the supplementary: Table 2, Section B. Table 3 of the last response.\\n\\n6. **(Reviewer: pUYy): Theoretical Depth:** We have extended the theoretical analysis in the supplementary (Section K in details). more theoretical discussion in the supplementary (section D).\\n\\n7. **(Reviewer: pUYy, FGoU, zdP5) effect for different Gaussian noise parameters:** We have used mean = 0 and variance = 1 as the default. Additionally, we added the effect for variance $\\\\sigma$ = [0.5, 1, 1.5] in the supplementary (Table 5, section E).\\n\\n8. **(Reviewer: pUYy) comparative computation complexity**: Added to section H of the supplementary.\\n\\n9. **(Reviewer: pUYy, FGoU) Description and explanation of every mathematical term on page 5**: We have carefully gone through and added the description and explanation of every mathematical term used in the paper. \\n\\n10. **(Reviewer: pUYy, FGoU) Experiments of the class imbalance data:** Added to the supplementary Table 4, section D.\"}", "{\"title\": \"Response to Reviewer pUYy (2/3)\", \"comment\": \"9. **Inter-Class Correlation Discussion (Page 8):** If two classes are strongly correlated in the teacher logits, random distortions will not eliminate this correlation but may perturb its exact magnitude or direction, leading to diverse interpretations of the relationship. Imagine teaching a concept by showing slightly varied examples, this helps learners generalize the concept rather than memorize specific instances. Similar to techniques like dropout (which can be considered implicitly network ensemble learning because every random dropping creates a different network structure), random feature distortion (considered as a diverse network as the outputs are slightly different so it is assumed they come from different networks) can force the model to adapt to a broader range of conditions. 
This diversity helps the student model avoid collapsing into a rigid interpretation of the teacher\\u2019s outputs.\\n\\n\\n10. **Number of Augmented Teachers (Figure 6):** Empirical results showed that three augmented teachers offer the optimal trade-off between diversity and stability. For future applications, we recommend using three augmented teachers as a default, balancing performance and computational cost. The observed performance with two teachers aligns with expectations due to reduced diversity compared to three.\\n\\n\\nWe are committed to incorporating these changes to strengthen the theoretical and experimental rigour of our work. The revised manuscript will provide a clearer understanding of TeKAP\\u2019s capabilities, limitations, and broader applicability.\\nThank you again for your valuable feedback.\\n\\n### References\\n1. Zhao, Borui, et al. \\\"Decoupled knowledge distillation.\\\" Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition. 2022.\\n2. Jin, Ying, Jiaqi Wang, and Dahua Lin. \\\"Multi-level logit distillation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n3. Mirzadeh, Seyed Iman, et al. \\\"Improved knowledge distillation via teacher assistant.\\\" Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.\\n4. Zhang, Hailin, Defang Chen, and Can Wang. \\\"Confidence-aware multi-teacher knowledge distillation.\\\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022.\\n5. Son, Wonchul, et al. \\\"Densely guided knowledge distillation using multiple teacher assistants.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 
2021.\"}", "{\"title\": \"Update: Response to Reviewer pUYy (4)\", \"comment\": \"We have added new results on a class-imbalanced dataset, the effect of different values of the Gaussian variance $\\\\sigma$, and various values of $\\\\lambda$.\\n\\n**Results-A: Class Imbalance Dataset:**\\n\\n| Methods | ResNet32x4-ResNet8x4 | WRN_40_2-WRN_16_2 | VGG13-VGG8 |\\n|-------------------------|--------------|---------------|------------------|\\n| Baseline (KD) | 41.71 | 52.08 | 47.52 |\\n| + TeKAP (Ours) | 46.42 | 52.72 | 51.25 |\", \"table_5\": \"Significance of TeKAP on a class-imbalanced dataset. We have used the class distribution of the CIFAR100 dataset that is described in Table 6 (of the supplementary).\\n\\n\\nSection D (of the supplementary)\\nThe results presented in Table 5 (Table 4 of the supplementary) highlight the effectiveness of TeKAP in addressing class imbalance in knowledge distillation tasks. TeKAP consistently improves the performance of all three teacher-student model pairs (ResNet32x4-ResNet8x4, WRN\\\\_40\\\\_2-WRN\\\\_16\\\\_2, and VGG13-VGG8) compared to the baseline Knowledge Distillation (KD) approach. Specifically, TeKAP boosts accuracy by 4.71\\\\% for ResNet32x4-ResNet8x4, 0.64\\\\% for WRN\\\\_40\\\\_2-WRN\\\\_16\\\\_2, and 3.73\\\\% for VGG13-VGG8. These results indicate that TeKAP is particularly effective for models with lower baseline accuracy, though it also provides improvements for models with higher baseline accuracy. This suggests that TeKAP can effectively mitigate the effects of class imbalance, leading to improved generalization in knowledge distillation tasks.\\n\\n**Results-B: Effect of TeKAP for different variance $\\\\sigma$ on the performance:**\\n\\n**Table 6**: Effect of TeKAP with different variance $\\\\sigma$. KD is used as the baseline distillation approach. 
We have used mean zero in all the cases.\\n\\n| Variance | $\\\\sigma = 0.5$ | $\\\\sigma = 1$ | $\\\\sigma = 1.5$ |\\n|---------------|----------------|--------------|----------------|\\n| Accuracy | 74.89 | 74.79 | 74.35 |\\n\\nSection E (of the supplementary)\\n\\nTable 6 (table 5 of the supplementary and section E) summarizes the impact of different variances ($\\\\sigma$) on the performance of TeKAP, using the CIFAR-100 dataset. The baseline distillation approach, Knowledge Distillation (KD), is used for comparison. As shown in the results, the accuracy of the model remains relatively stable across varying values of $\\\\sigma$. Specifically, when $\\\\sigma = 0.5$, the model achieves an accuracy of 74.89\\\\%, slightly higher than the accuracy at $\\\\sigma = 1$ (74.79\\\\%) and $\\\\sigma = 1.5$ (74.35\\\\%). These results suggest that, within the range of variances tested, increasing the noise variance does not significantly degrade performance. In fact, the accuracy only decreases marginally as the variance increases from 0.5 to 1.5, which indicates the robustness of TeKAP with respect to noise. This behavior suggests that TeKAP can maintain competitive performance even with varying levels of noise in the teacher models, highlighting its resilience to noise during distillation. The consistent results across different variances also support the idea that TeKAP is stable and less sensitive to slight perturbations in the teacher\\u2019s logits. This stability is critical for practical applications where noise may be present in the data or models.\\n\\n\\n***Results-C: Effect of various $\\\\lambda$:**\\n\\nTable 3 (of the supplementary): Effect of the different values of \\u03bb (the weights of the noise terms). 
AugT denotes augmented teachers.\\n\\n| Number of AugT | \\u03bb = 0.2 | \\u03bb = 0.4 | \\u03bb = 0.6 | \\u03bb = 0.8 |\\n|----------------|---------|---------|---------|---------|\\n| AugT = 5 | 74.26 | 74.46 | 74.63 | 75.12 |\\n| AugT = 10 | 74.29 | 74.73 | 74.85 | 74.98 |\\n\\n### Section C of the supplementary\\n\\nThe results in Table 3 (of the supplementary) demonstrate the effect of varying the noise weight ($\\\\lambda$) and the number of augmented teachers (AugT) on the performance of the student model. For AugT=5, the accuracy consistently improves as $\\\\lambda$ increases, starting from $74.26\\\\%$ at $\\\\lambda=0.2$ and reaching $75.12\\\\%$ at $\\\\lambda=0.8$. This trend indicates that higher noise weights contribute positively to the student\\u2019s generalization by introducing greater diversity. Similarly, for AugT=10, the performance improves from $74.29\\\\%$ at $\\\\lambda=0.2$ to $74.98\\\\%$ at $\\\\lambda=0.8$, but the gains are less pronounced compared to AugT=5, suggesting a saturation effect with a larger number of augmented teachers.\"}", "{\"summary\": \"The authors propose TeKAP, a novel teacher knowledge augmentation technique that generates diverse synthetic teacher knowledge by perturbing a single pretrained teacher. This plug-and-play module leverages simple perturbations to capture ensemble benefits without training multiple teachers. 
Experimental results demonstrate TeKAP's effectiveness in enhancing both logit and feature-based knowledge distillation methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed plug-and-play module integrates seamlessly with existing KD methods, adding minimal computational burden.\", \"By augmenting knowledge from a single pretrained teacher network, the authors significantly reduce training time and resource demands while achieving ensemble-like effects.\", \"The approach is simple yet highly effective.\"], \"weaknesses\": [\"The proposed plug-and-play module was not well validated. Specifically, it was only applied to vanilla KD and CRD, even though there have been many advanced KD methods that can serve as baselines.\", \"The experiments omit numerous state-of-the-art single-teacher and multi-teacher KD methods; additional benchmark comparisons would - strengthen the evaluation.\", \"Details on dynamic noise perturbation are insufficient, with critical implementation information missing for reference.\"], \"questions\": \"-How can randomly distorted teacher logits provide diverse inter-class relationships if the distortion is truly random?\\n-What does h represent in Eq. 9?\\n-What is the scale of the random noise, and how should it be set? Detailed guidelines for noise settings are needed.\\n-There appears to be no discernible difference between Fig. 3(b) and Fig. 3(c).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer zdP5,\\n\\nWe are truly grateful for your thoughtful feedback.\\n\\nWe have included further results during the rebuttal, which we have detailed point-by-point for your review. We kindly request you to take a look at them at your convenience. \\n\\nIf these responses address your concerns, we would be grateful if you could consider reassessing the score. 
Your time and effort are greatly appreciated. \\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Further Response to Reviewer zdP5: Results for the Networks like WRN-22-2 or WRN-16-2\", \"comment\": \"**Additional Results: Results for the Networks like WRN-22-2 or WRN-16-2**\\n\\n### Table 1: Comparison of TeKAP and TAKD\\n\\nComparison of TeKAP with the assistant teacher-based KD method TAKD. We have used KD loss for both methods. We use $\\\\sigma = 1$, $\\\\lambda = 0.8$, and three augmented teachers. Gaussian noise is used to generate the noise for TeKAP. TeKAP outperforms TAKD without using any assistant teacher. We select WRN\\\\_40\\\\_2 as the teacher and WRN\\\\_16\\\\_2 and WRN\\\\_40\\\\_1 as the students. WRN\\\\_22\\\\_1, WRN\\\\_22\\\\_2, WRN\\\\_16\\\\_1, and WRN\\\\_16\\\\_2 are selected as the teacher assistant only for TAKD. TeKAP does not use any assistant teachers. TeKAP transfers knowledge directly from the teacher to the student.\\n\\n| **Teacher** | WRN_40_2 | WRN_40_2 | WRN_40_2 | WRN_40_2 | WRN_40_2 | WRN_40_2 |\\n|-----------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|\\n| **Teacher Assistant** | WRN_22_2 | WRN_22_2 | WRN_22_1 | WRN_22_1 | WRN_16_2 | WRN_16_1 |\\n| **Student** | WRN_16_2 | WRN_40_1 | WRN_16_2 | WRN_40_1 | WRN_40_1 | WRN_40_1 |\\n| **TAKD** | 75.02 | 72.73 | 72.56 | 71.19 | 68.92 | 73.26 |\\n| **TeKAP (Ours)** | **75.21** | **73.80** | **75.21** | **73.80** | **73.80** | **73.80** |\\n\\nTeKAP surpasses TAKD by avoiding the use of narrow teacher assistants, directly transferring knowledge to students.\\n\\n\\n### Comparison with Teacher Assistant-Based Approach with Narrow Teacher Assistants\\n\\nThe results in Table 1 compare our proposed **TeKAP** with the traditional teacher assistant-based knowledge distillation method, **TAKD**. 
In these experiments, WRN_40_2 was used as the teacher, while WRN_16_2 and WRN_40_1 served as the student networks. TAKD employs narrow teacher assistants (WRN_22_1, WRN_22_2, WRN_16_1, and WRN_16_2) to mediate the knowledge transfer process, whereas TeKAP directly distills knowledge from the teacher to the student without intermediate assistants. TeKAP outperforms TAKD in all tested configurations. For instance, with WRN_40_1 as the student, TeKAP achieves a consistent accuracy of **73.80%**, compared to TAKD\\u2019s best result of **73.26%**. Similarly, for WRN_16_2 as the student and WRN_22_2 as the assistant, TAKD achieves **75.02%**, while TeKAP slightly improves it to **75.21%**. These results highlight the superior effectiveness of TeKAP in transferring knowledge directly, avoiding the limitations associated with teacher assistants.\\n\\n\\n\\nWe will add these results and discussion in the final version to the supplementary. We also have improved the GradCAM figure which will be added to the final version.\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. Your insightful suggestions have significantly enhanced the quality of the paper. \\n\\nPlease do not hesitate to reach out if you have any further questions or require additional clarifications.\"}", "{\"summary\": \"This manuscript introduces TeKAP, a novel teacher knowledge augmentation technique. It generates multiple synthetic teacher perspectives from a single pretrained teacher model by perturbing its knowledge with random noise. TeKAP operates at both the feature and logit levels, enhancing the student's generalization ability. By reducing the need for multiple teacher models, TeKAP decreases both training time and memory usage. 
Evaluations on standard benchmarks demonstrate TeKAP's effectiveness in improving the performance of existing knowledge distillation approaches\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work uses a single pretrained teacher to simulate multiple teacher perspectives through perturbation, effectively circumventing the high computational costs of traditional multi-teacher setups.\\n2. The proposed method is simple yet demonstrated encouraging results.\\n3. The work includes a comprehensive evaluation of various aspects such as model compression, adversarial robustness, and transferability, which strengthens the credibility of the proposed method.\\n4. The extensive experiments also demonstrate TeKAP\\u2019s effectiveness in few shot learning and noisy data settings, suggesting a promising direction for advancing knowledge distillation.\", \"weaknesses\": \"1. Despite TeKAP's impressive results, the theoretical analysis of the perturbation methods lacks depth. While Gaussian noise is introduced, there is limited discussion on the choice of perturbation parameters, such as the standard deviation, and how these settings impact the model\\u2019s performance. This omission could hinder reproducibility and generalizability of the approach.\\n2. Additionally, while the experiments cover a range of baseline comparisons, the paper lacks a comprehensive evaluation against existing multi-teacher distillation methods and other state-of-the-art single-teacher methods, which would better highlight TeKAP\\u2019s relative strengths. \\n3. Moreover, there is little discussion on the computational efficiency and scalability of TeKAP in practical applications, potentially raising concerns among readers regarding its feasibility in real-world scenarios.\\n4. Some statements are overclaimed in this manuscript. 
The authors should comprehensively review related works and give proper descriptions.\", \"questions\": \"1. On page 4, the paper mentions the use of Gaussian noise for teacher perturbation but does not detail the criteria for choosing the noise parameters. How are these parameters optimized, and what is their impact on the diversity and quality of the generated teacher perspectives?\\n2. On page 5, the term \\u200b is introduced in the formula without a complete explanation or definition.\\n3. Is there a risk of overfitting to the perturbed features, especially when the noise parameters are not dynamically adjusted? \\n4. How does TeKAP handle scenarios where certain classes are imbalanced? Is there a mechanism within the framework that ensures the augmented teachers do not bias the student towards overrepresented classes?\\n5. Could the following discussion be added to page 8? For instance:\\n1) What do these differences in inter-class correlations imply for the student's learning process?\\n2) How does the performance improvement of TeKAP in terms of inter-class correlation contribute to the overall effectiveness of the model?\\n6. In Figure 6, it is noted that the performance is best when the number of augmented teachers is 3. Does this imply that three teachers will be used in future applications? Additionally, the performance with two teachers seems normal; is there an explanation for this?
Were they trained from different initializations?\", \"2\": \"Using only two blocks may not be a fair comparison. How about using three blocks with more shallow networks like WRN-22-2 or WRN-16-2?\", \"5\": \"In Equation 3, $\\\\alpha$ is set to 0.1, which means the perturbed logits are smoother than the original ones. Using a fixed $\\\\lambda$ for various numbers of perturbations does not make sense, as it influences the mean of the logits distribution. Is there any ablation study regarding $\\\\alpha$ and $\\\\lambda$? Additionally, in Equation 5, $\\\\alpha$ is used for a different purpose.\", \"8\": \"I did not find any configuration files in the supplementary. Do the authors mean the default settings in train_student.py?\", \"9\": \"In Table 2 of the authors' reply to Reviewer NkEk, more perturbations seem to harm the student's performance. Can you explain why increasing perturbations destroys the teacher's knowledge pattern? Since the mean of the gradients is converging with the increasing number of perturbations, and based on the theoretical part of the paper, more perturbations should benefit performance.\"}", "{\"summary\": \"The paper proposes a novel knowledge distillation method called TeKAP (Teacher Knowledge Augmentation via Perturbation), which generates diverse perspectives from a single teacher model. Instead of relying on multiple teacher models for supervision, TeKAP introduces diversity by perturbing both feature maps and output logits of a pretrained teacher network. 
This approach aims to simulate the benefits of multi-teacher distillation without the associated computational cost.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper provides thorough theoretical proof and experimental validation.\", \"The paper is well-structured and clear in its approach, with intriguing perspectives.\", \"The method proposed in the paper has a wide range of application scenarios.\"], \"weaknesses\": [\"There is a lack of comparison with recent multi-teacher distillation work.\", \"The explanation of the difference in usage scenarios between feature-level and logit-level may be insufficient..\"], \"questions\": [\"If more distillation methods could be included, it would be more convincing.\", \"I think the idea that different teacher models provide different perspectives is interesting. Would increasing the number of teacher models further improve performance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comments by Reviewer zdP5\", \"comment\": \"Thanks to the authors for their reply.\", \"6\": \"I reviewed the latest revision, and the CAMs of TeKAP vs. Teacher in Figure 1 (Supplementary) still appear to be the same.\", \"8\": \"I did not find the configuration in the latest provided code. Did authors use $\\\\alpha=0.8$, $\\\\beta=0.2$ and $\\\\lambda=1.0$ in Equation 5 for all teacher-student pairs? Additionally, how many AugTs were used in the experiment? From the provided code, it seems the number is set to 3. If so, could you clarify why, as increasing the number of AugTs generally leads to better performance? Furthermore, there are two hyperparameters labeled as $\\\\alpha$ in both Equation 1 and Equation 5.\", \"9\": \"Why did the authors update the results in Table 4: regarding the effect of the number of augmented teachers? 
The results for T + (7\\u201310) AugTs were revised, while others remain the same as in the previous version. For example, the performance of T + 10 AugTs improved from 71.4 to 75.11. Could you explain this update?\", \"10\": \"I am unable to match the results for ResNet32x4\\u2013ResNet8x4 across Table 1, Table 4, Table 2 (Supplementary), and Table 4 (from the reply). Based on Table 1, Table 4, and Table 2 (Supplementary), I assume the authors used 1 Original + 3 AugTs for their experiments. However, the results in Table 4 (from the reply) differ for 1 Original + 3 AugTs. Could the authors clarify this inconsistency?\"}", "{\"title\": \"Gentle Reminder with Appreciation\", \"comment\": \"Dear Reviewer,\\n\\nWe extend our sincere gratitude for your thoughtful and valuable feedback. We have carefully addressed all concerns and questions raised by the reviewers, providing detailed, step-by-step responses to each point. As the discussion period is approaching its end very soon, we kindly request you to share any further questions or concerns you may have. Please be assured of our readiness to engage in continued dialogue and provide any necessary clarifications to ensure all matters are thoroughly addressed.\"}", "{\"title\": \"Response to Reviewer pUYy (3/3)\", \"comment\": \"### The improvements we have made:\\n1. **(Reviewers: NkEk, pUYy, zdP5, FGoU): Additional comparison with state-of-the-art:** Added to the revised manuscript (Table 2, page 7)\\n\\n2. **(Reviewers: NkEk, pUYy, zdP5, FGoU) multi-teacher:** The results discussion for the recent SOTA multi-teacher approach is added to section 4.1, Table 2 (page 7) of the revised manuscript.\\n\\n3. **(Reviewers: NkEk): explanation of usage scenarios between the feature level and logit level:** Added in section 3.1. Page 4 of the main manuscript. (Please find the changes marked highlights in the supplementary)\\n\\n4. 
**(Reviewers: NkEk, pUYy) potential benefits of increasing the number of augmented teachers** Updated Figure 6 (Now Figure 5 of the main manuscript, Table 2 of this response). We have trained more teachers (till - 10) and provided the potential benefits of increasing the number of augmented teacher models in Table 1 of the supplementary.\\n\\n5. **(Reviewers: NkEk) Evaluation of TeKAP on ensemble learning.** Added to the supplementary: Table 2, Section B. Table 3 of the last response.\\n\\n6. **(Reviewer: pUYy): Theoretical Depth:** We have extended the theoretical analysis in the supplementary (Section K in details). more theoretical discussion in the supplementary (section D).\\n\\n7. **(Reviewer: pUYy, FGoU, zdP5) effect for different Gaussian noise parameters:** We have used mean = 0 and variance = 1 as the default. Additionally, we added the effect for variance $\\\\sigma$ = [0.5, 1, 1.5] in the supplementary (Table 5, section E).\\n\\n8. **(Reviewer: pUYy) comparative computation complexity**: Added to section H of the supplementary.\\n\\n9. **(Reviewer: pUYy, FGoU) Description and explanation of every mathematical term on page 5**: We have carefully gone through and added the description and explanation of every mathematical term used in the paper. \\n\\n10. **(Reviewer: pUYy, FGoU) Experiments of the class imbalance data:** Added to the supplementary Table 4, section D.\\n\\n11. **(Reviewer: pUYy, FGoU) fixed noise experiments**: Experiments are running and will be added to the final version and we will also report here with the deadline.\\n\\n12. **(Reviewer: pUYy. zdP5) how inter-class diversity works**: Discussion added in the supplementary section I.\\n\\n13. **(Reviewer: zdP5) effect for different values of $\\\\lambda$**: Added in the supplementary Table 3, Section C.\\n\\n14. **(Reviewer: zdP5) Meaning of $L_{cel}:** We have added the meaning of $L_{cel}$ in line 209, page 5 of the main manuscript.\\n\\n15. 
**(Reviewer: zdP5) More experiments on TAKD with WRN-22-2 or WRN-16-2?**: Experiments are running. They will be added in the final version and reported here soon.\\n\\n16. **(Reviewer: FGoU) clarification of random distortion and inter-class relationships**: Added in the supplementary in section I.\\n\\n\\nThank you for taking the time to provide detailed and thoughtful comments. Your feedback has been instrumental in improving the manuscript. We are deeply grateful for your insights and are ready to respond to any further questions or concerns.\"}", "{\"comment\": \"Dear Reviewer NkEk,\\n\\nWe greatly value your encouraging feedback and recognition of our work and contributions. \\n\\nAdditionally, we have included further results and kindly request you to review them at your convenience. We appreciate your time and thoughtful consideration. \\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Further Response to Reviewer zdP5 (1/3)\", \"comment\": \"We are very grateful for these insightful suggestions. These comments help to improve our paper significantly. We have addressed all the concerns step-by-step and performed additional experiments based on the suggestions:\\n\\n**Important: The supplementary documents and revised versions with both the changes marked and clean copy are available now:** We were working on the manuscript and supplementary to reflect all the concerns raised by the reviewers. We are sorry for the inconvenience that we uploaded the revised version late. We have provided supplementary documents and the changes marked by yellow in the supplementary zip file. In the final version, we will remove the highlighted copy.\\n\\n1. **Question:** What do the \\\"original teachers\\\" refer to in Table 2? Were they trained from different initializations?\\n**Response-1**: Yes! We have used different seeds and performed training multiple times. 
We have added this discussion in the supplementary file as follows: \\n\\nTable 2 (of the supplementary): The effects of multiple original teachers. We deploy three augmented teachers to every original teacher. ResNet32x4 and ResNet8x4 are considered teacher-student. Here, \\\"Original Teacher\\\" represents the teacher which is trained with 240 epochs and different random seeds. Three different original teachers use three different initializations (i.e., random seeds). AugT denotes the augmented teacher achieved by distorting the original teacher logits with random noise.\\n\\n| # Teacher | Accuracy |\\n|------------------------|----------|\\n| 1 Original + 3 AugT | 75.98 |\\n| 2 Original + 3 AugT | 76.12 |\\n| 3 Original + 3 AugT | 76.19 |\\n\\n### Section B (of the supplementary)\\nTable 2 (of the supplementary) shows the effect of the number of augmented teachers. We use ResNet32x4-ResNet8x4 as the teacher-student setups on the CIFAR100 dataset to examine the effect of the hyper-parameters. From Table 2 (of the supplementary) we see that TeKAP is robust to the number of augmented teachers. For every number of augmented teachers, TeKAP achieves better accuracy than baseline and DKD students in every scenario. The best performance is achieved when the number of augmented teachers is $3$. We have used three ($3$), and one $(1)$ augmented teacher along with the original teacher, respectively. During feature and logit distortion, the weights for noise and teacher output are $0.1$, and $0.9$, respectively.\\n\\n2. **Question**: Using only two blocks may not be a fair comparison. How about using three blocks with more shallow networks like WRN-22-2 or WRN-16-2?\\n\\n**Response-2:** Thank you very much for this suggestion. We are experimenting on this. The experiments are running. The evaluation will be reported soon here. Please note that we could not add these results in the revised version. 
But we promise to add this evaluation in the final version as soon as the experiments are finished.\\n\\n5. **Question 5**: In Equation 3, $\\\\lambda$ is set to 0.1, which means the perturbed logits are smoother than the original ones. Using a fixed $\\\\lambda$ for various numbers of perturbations does not make sense, as it influences the mean of the logit distribution. Is there any ablation study regarding $\\\\lambda$ and $\\\\sigma$? Additionally, in Equation 5, $\\\\lambda$ is used for a different purpose.\\n\\n**Response-5**: We appreciate this concern and acknowledge the need for evaluation for different values of $\\\\lambda$ and $\\\\sigma$ for various numbers of perturbations. We have added additional experiments in supplementary documents.\\n\\nTable 3 (of the supplementary): Effect of the different values of \\u03bb (the weights of the noise terms). AugT denotes augmented teachers.\\n\\n| Number of AugT | \\u03bb = 0.2 | \\u03bb = 0.4 | \\u03bb = 0.6 | \\u03bb = 0.8 |\\n|----------------|---------|---------|---------|---------|\\n| AugT = 5 | 74.26 | 74.46 | 74.63 | 75.12 |\\n| AugT = 10 | 74.29 | 74.73 | 74.85 | 74.98 |\\n\\n### Section C of the supplementary\\n\\nThe results in Table 3 (of the supplementary) demonstrate the effect of varying the noise weight ($\\\\lambda$) and the number of augmented teachers (AugT) on the performance of the student model. ResNet32x4-ResNet8x4 are considered as the teacher and the student. We use $\\\\sigma = 1$ for this experiment. For AugT=5, the accuracy consistently improves as $\\\\lambda$ increases, starting from $74.26\\\\%$ at $\\\\lambda=0.2$ and reaching $75.12\\\\%$ at $\\\\lambda=0.8$. This trend indicates that higher noise weights contribute positively to the student\\u2019s generalization by introducing greater diversity. 
Similarly, for AugT=10, the performance improves from $74.29\\\\%$ at $\\\\lambda=0.2$ to $74.98\\\\%$ at $\\\\lambda=0.8$, but the gains are less pronounced compared to AugT=5, suggesting a saturation effect with a larger number of augmented teachers.\"}" ] }
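To make the logit-distortion step described above concrete, here is a minimal NumPy sketch of producing one "augmented teacher" (the function name and toy logits are our own illustration, not TeKAP's code; the 0.9 teacher weight and 0.1 noise weight follow the numbers quoted in the reply):

```python
import numpy as np

def augment_teacher_logits(teacher_logits, lam=0.1, sigma=1.0, seed=0):
    # One augmented teacher: keep (1 - lam) of the original teacher logits
    # and mix in lam of Gaussian noise (mean 0, std sigma), matching the
    # quoted weights of 0.9 (teacher) and 0.1 (noise) for the default lam.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=teacher_logits.shape)
    return (1.0 - lam) * teacher_logits + lam * noise

# Toy teacher logits: a batch of 4 samples over 10 classes.
teacher = np.arange(40, dtype=float).reshape(4, 10)
aug_1 = augment_teacher_logits(teacher, seed=1)
aug_2 = augment_teacher_logits(teacher, seed=2)

# Each seed yields a distinct teacher, yet all stay close to the original;
# this cheap diversity is what the multiple "AugT" columns above refer to.
print(np.allclose(aug_1, aug_2))  # False: two different augmented teachers
```

Since only the noise seed changes between augmented teachers, generating ten of them costs ten draws of Gaussian noise rather than ten full training runs.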
DlqRpj68xe
From Reward Shaping to Q-Shaping: Achieving Unbiased Learning with LLM-Guided Knowledge
[ "XieFeng Wu" ]
Q-shaping is an extension of Q-value initialization and serves as an alternative to reward shaping for incorporating domain knowledge to accelerate agent training, thereby improving sample efficiency by directly shaping Q-values. This approach is both general and robust across diverse tasks, allowing for immediate impact assessment while guaranteeing optimality. We evaluated Q-shaping across 20 different environments using a large language model (LLM) as the heuristic provider. The results demonstrate that Q-shaping significantly enhances sample efficiency, achieving a \textbf{16.87\%} average improvement across the 20 tasks compared to the best baseline, and a \textbf{226.67\%} improvement compared to LLM-based reward shaping methods. These findings establish Q-shaping as an effective and unbiased alternative to conventional reward shaping in reinforcement learning.
[ "reward shaping", "reinforcement learning", "large language model" ]
Reject
https://openreview.net/pdf?id=DlqRpj68xe
https://openreview.net/forum?id=DlqRpj68xe
ICLR.cc/2025/Conference
2025
{ "note_id": [ "whuChP6tcL", "u6Yv0w4LMZ", "qtbbNPnaOU", "qFgvioT2JP", "ngZFkPzgJv", "n6ENeR4DVI", "f9ce7YgFTD", "e5K1v6VdZu", "bPq0mZxZyi", "WTBJ38I32n", "TvIktl9qSh", "Swg5m4q0xm", "Ljm3vPbIhb", "HfqsEsz2Ty", "FuGYO5czvS", "Dlg2MOdk9s", "92mB1h3hKq", "6BRdBmspSl", "4r4wtHKHr3", "0pdhBIU2v8" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732071322460, 1732608596828, 1734296964602, 1731661442307, 1732105934488, 1732531828538, 1731659087174, 1732479680773, 1733001605850, 1729908979886, 1737523599419, 1730603308698, 1731659068359, 1732210708991, 1731661123765, 1731663303971, 1730596462502, 1732429741580, 1732488923491, 1732598626851 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_NRDQ" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Area_Chair_ZUrG" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_aDyb" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_aDyb" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_NRDQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_GXux" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_aDyb" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_aDyb" ], 
[ "ICLR.cc/2025/Conference/Submission3790/Authors" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_NRDQ" ], [ "ICLR.cc/2025/Conference/Submission3790/Reviewer_GXux" ] ], "structured_content_str": [ "{\"comment\": [\"I thank the authors for the clarifications. A few follow up points:\", \"It appears that the authors have revised the paper, however, I could not find the revised version to verify the claims.\", \"On $A^{\\\\pi}$, it is still not clear to me what $A^{\\\\pi}\\\\mathcal{B}_{\\\\mathcal{M}}$ on line 155 means. Could you explain it?\", \"On theorem 1, I understand the authors' attempt to formalize the method in providing the proof. I have a few thoughts on this. First it looks like the use of $h(s, a)$ is iteration dependent, since you need to remove it after some iterations. Second, since the addition of $h(s, a)$ in earlier phase of the training process is essentially reward shaping, I don't think you really need to prove that $\\\\mathcal{T_{h}}$ converges to the fixed point w.r.t. $r(s, a) + h(s, a)$. I also think that with the way you defined $\\\\mathcal{T_{h}}$ in the comment above, a TD update with learning rate $\\\\alpha$ should lead to $\\\\alpha h(s, a)$?\"]}", "{\"comment\": \"Thank you for your constructive feedback and for acknowledging the improvements made to the paper. We appreciate your increased score. However, we believe there are some misunderstandings about our work that we would like to address and clarify below.\\n\\n#### Q1: This work is a reward shaping method.\\n\\n**R1:** This is a misunderstanding. Our method, called \\\"Q-Shaping,\\\" is fundamentally different from reward shaping methods. Key distinctions include:\\n1. **Preservation of the MDP**: Q-Shaping does not modify the MDP, ensuring the Q-function remains unbiased.\\n2. **Faster Impact Verification**: The impact of heuristics in Q-Shaping can be verified within a single episode. 
In contrast, reward shaping methods like `Eureka` and `Text2Reward (T2R)` require waiting until the end of training to observe the effects. This makes Q-Shaping approximately **2000 times faster** in heuristic evaluation.\\n3. **Ease of Heuristic Function Design**: Q-Shaping requires only a few interactions to improve the heuristic function. By comparison, designing a robust heuristic reward function for reward shaping methods is far more challenging.\\n\\n#### Q2: This work uses final performance as metrics, which is not enough.\\n\\n**R2:** This is also a misunderstanding. Our comparisons are based on **maximum performance metrics**, which provide a more comprehensive evaluation of the method. Specifically:\\n- Q-Shaping achieves an improvement of **227%** in maximum performance compared to baselines.\\n- If we were to evaluate based on final performance metrics, the improvement would be even higher, reaching **578%**.\\n\\nAdditionally, we use the metric of `additional steps required to evolve the final heuristic function`:\\nAs shown in Table 1, Q-Shaping requires only a few training steps to verify the impact of the heuristic function. In contrast, methods like Eureka require waiting until at least half of the training period to observe meaningful results.\\n\\n\\n#### Q3: This work requires deeper exploration to support its core contributions.\\n\\n**R3:** The paper addresses its core contributions effectively, as summarized below:\\n\\n1. **Won\\u2019t Bias Agent**:\\n - Q-Shaping ensures the Q-function remains unbiased, as supported by **Theorem 1** and validated through **Experiment 1** and **Experiment 2**.\\n\\n2. **Faster and Easier Heuristic Function Design**:\\n - Q-Shaping simplifies and accelerates the process of heuristic function design and improvement, as demonstrated in **Experiment 3** and **Experiment 5**.\\n\\nWe hope these clarifications address your concerns and highlight the key contributions of our approach. 
**Based on this, we kindly request that you re-evaluate the score.**\"}", "{\"metareview\": \"Authors present Q-shaping, an alternative to reward shaping using LLM guidance that directly modifies Q values. There are thorough empirical results showing an improvement in performance compared with existing reward-shaping methods.\\n\\nReviewers thought the Q-shaping contribution was novel and interesting, and the empirical results compelling. However, there were serious issues with the theory presented in the paper, insufficient baselines compared against, gaps in the related work and results analysis. Reviewers also felt the general writing and clarity could be improved. \\n\\nI believe the paper clarity can still be improved, and stronger connections made between the theory and experiments. For these reasons, I vote to reject the current iteration of this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal and reviewer discussion, a lot of clarifications about the evaluation process and theory were discussed and provided. However, reviewer GXux still considers this a borderline paper. Their primary reason is that leveraging LLMs for reward design to improve RL performance has already been explored in several prior works, and the comparisons are limited to a few environments and focus primarily on final performance metrics. Additionally, the theoretical analysis presented in the paper has limited connection to the experimental results. Even after the rebuttal, reviewer aDyb has concerns about the evaluation process.\"}", "{\"comment\": \"**Q4:** How is the state designed? Can this method be scaled to image settings or real-world settings?\\n\\n**R4** The state design for the large model follows the requirements of the environment description. In our experiments, we tested the proposed Q-shaping method on three environments: MetaWorld, MuJoCo, and PyFlyt. 
For each of these environments, the state representation follows the default settings provided by the respective simulators.\", \"here_is_an_example_of_a_state_design_for_the_door_closing_task\": \"```\\ndef good_Q(self, batch_size):\\n actions = []\\n states = []\\n q_targets = []\\n for _ in range(batch_size):\\n # Generate a state where the end-effector is approaching the handle\\n handle_pos = np.random.uniform(0.1, 0.2, size=3) # Approximate handle position\\n # Start end-effector at a position slightly away from the handle\\n end_effector_pos = handle_pos + np.random.uniform(-0.1, 0.1, size=3)\\n\\n # Construct the observation\\n obs = np.zeros(self.obs_dim)\\n obs[self.end_effector_pos_idx] = end_effector_pos # End-effector position\\n obs[self.handle_pos_idx] = handle_pos # Handle position\\n obs[self.obs_dim - 3:] = np.array([0.2, 0.8, 0.15]) # Goal position\\n```\\n\\nIn this example, the state is designed to represent the door-opening condition in a robotic manipulation task, where the LLM provides actions that lead to success.\\n\\nTo implement Q-shaping for online visual reinforcement learning, we have two potential plans:\\n\\n+ Plan 1: We need a (image + text) to (image + text) large model that can take an example state as input and output good s,a pairs.\\n\\n+ Plan 2: We can first allow the agent to explore some s,a pairs. Then, we apply a VQA (Visual Question Answering) model to analyze each state, provide good and bad actions, and form good and bad s,a pairs. We can then assign relative Q-values according to the model\\u2019s confidence.\\n\\n**Q5:** The high-performance phase is not included in the calculation of sample steps.\\n\\n\\n**R5** In the validation phase, the steps spent on high-performance selection are also included in the evaluation. 
Typically, SAC and TD3 allocate 25,000 steps for random exploration, PPO is set to 5,000 steps, and Q-shaping uses 5,000 steps for random exploration and 10,000 steps for filtering out low-performance agents. In the specific experiments, we did not exclude these steps, so as not to implicitly boost Q-shaping\\u2019s performance. Furthermore, even if the 15,000 steps were excluded, the average improvement in Q-shaping\\u2019s performance would not be significantly affected.\\n\\n\\n**Q6:** How is the correctness of the Q-value evaluated?\\n\\n**R6** The evaluation of the correctness of the Q-values from the LLM refers to determining whether the large model assigns a negative or zero value to good s,a pairs, or assigns a positive value to bad s,a pairs.\\n\\n\\n\\n**Q7:** How to improve the output of the LLM and how many rounds of evolution are needed to get a good agent?\\n\\n**R7** LLMs often require 1 to 3 evolutions. From Experiment 3, we observe that the large model is capable of understanding the environment and outputting code that meets the standards. However, the action-state guidelines provided by the model may result in different performances. The optimization of these outputs is measured by the total return of the shaped agent. \\n\\nReward shaping methods, such as T2R or Eureka, typically require half a training cycle or a full training cycle (1e7 steps) to validate the impact of reward heuristics. 
On the other hand, the Q-shaping algorithm can immediately validate the performance of the large model's heuristic function.\\n\\n\\n\\n**Improvements in the Next Version:**\\n\\n1. Complete the ablation study.\\n\\n2. Add experiments on the impact of the number of LLM prompts on the agent's learning efficiency.\\n\\n3. Include a complete tutorial on how to use prompts in the appendix.\\n\\n4. Re-conduct the experiment on Eureka and provide details about its re-implementation.\"}", "{\"comment\": \"Thank you for the thoughtful feedback and the suggestion to clarify the notation. This has significantly helped us identify areas for improvement in the paper.\\n\\n**Q1**: No updated paper found. \\n\\n**R1**: The paper update is still pending because Reviewer aDyb requires the full implementation of Eureka, which involves evolving the reward heuristic function at least 5 times. This requires a significant amount of time. \\n\\n**Q2**: Clarification of $A^\\\\pi \\\\mathcal{B}_{\\\\mathcal{M}}$ \\n\\n**R2**:\\n\\n$A^\\\\pi$ is the activity matrix, which defines the entire output of the policy. \\n\\n$\\\\mathcal{B}_{\\\\mathcal{M}}$ is the Bellman consistency equation,\", \"defined_as\": \"$ \\\\mathcal{B}_{\\\\mathcal{M}}(\\\\textbf{x}) := r + \\\\gamma P \\\\textbf{x}. $\\n\\nHere, $\\\\mathcal{B}_{\\\\mathcal{M}}(\\\\textbf{x})$ can be interpreted as the target Q-function. \\n\\n$A^\\\\pi \\\\mathcal{B}_{\\\\mathcal{M}}(\\\\textbf{v})$ is defined as the **target value function**.\", \"specifically\": \"$ (A^\\\\pi \\\\mathcal{B}_{\\\\mathcal{M}}(\\\\textbf{v}))(s) $ refers to the target value given a state $s$.\\n\\n**Q3**: The update formula also holds for reward shaping methods. 
What is the difference between Q-shaping and reward shaping methods?\\n\\n**R3**: The primary difference is that the Q-shaping framework allows Q-value updates for $(s, a)$ pairs that are not collected in the MDP $\\\\mathcal{D}$, whereas reward shaping methods require the $(s, a)$ pairs to be part of the collected trajectory. \\n \\nIn reward shaping methods, the reward heuristic signal is applied as follows: \\n\\n```python\\nnext_state, reward, termination, truncation, info = env.step(action) \\nnew_reward = reward_shaping(reward, other_information)\\n```\\nResearchers cannot control how the samples are collected, and the evaluation of the reward heuristic's effectiveness often requires waiting until the end of training.\\n\\nIn the heuristic TD update formula, the learning rate $\\\\alpha$ should indeed multiply $h(s,a)$. Thanks for pointing it out.\"}", "{\"comment\": \"#### **Q1: Why are the scores for the last line of Table 3 lower than others?**\\n\\n**R1:** The numbers in Table 3 show how many steps the agent needs to converge. A smaller number means the agent needs less interaction with the environment to learn an optimal policy. We have added extra marks on the chart to make this clearer.\\n\\n\\n\\n#### **Q2: Why can\\u2019t RL baselines do \\\"high-performance selection\\\"?**\\n\\n**R2:** `High-performance selection` picks agents that do well in the early training steps. For RL baselines, this is hard because their performance at the start can be random due to exploration and network setup. Agents that perform poorly at the start might do well later. Performance at the initial training steps (<50k) is not indicative of later performance.\\n\\nFor example, methods like `Eureka` and `Text2Reward` wait until halfway through the training period (5e5 steps) to see how reward heuristics help. 
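To make the R3 distinction concrete, here is a minimal tabular sketch (our own illustration with invented states, actions, and heuristic values, not the paper's implementation): the LLM-provided heuristic is written into Q for $(s, a)$ pairs that never appear in any collected transition, while an ordinary TD update only touches collected ones.

```python
import numpy as np

n_states, n_actions = 5, 2
gamma, alpha = 0.9, 0.5
Q = np.zeros((n_states, n_actions))

# Hypothetical LLM output: heuristic values h(s, a) for "good" (positive)
# and "bad" (negative) state-action pairs. All numbers are made up.
h = {(0, 1): 1.0, (2, 0): 0.5, (1, 0): -0.5}

# Q-shaping step: apply alpha * h(s, a) directly to the table --
# no env.step() call and no collected trajectory is required.
for (s, a), v in h.items():
    Q[s, a] += alpha * v

# Ordinary TD update, only possible for a transition actually collected.
s, a, r, s_next = 0, 0, 0.2, 3
Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(Q[0, 1], Q[1, 0])  # 0.5 -0.25: shaped without ever visiting these pairs
```

Because the shaped entries change immediately, the effect of a heuristic can be observed in the very next episode instead of at the end of training.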
Our algorithm starts with LLM priors, which boost early performance and make it easier to pick good agents.\\n\\n\\n\\n#### **Q3: Why not increase the selection phase to 50k steps for RL baselines?**\\n\\n**R3:** Even 50k steps are not enough to predict future performance for RL baselines. For harder tasks, we often need to wait at least 150k~500k steps to see if the agent is improving or failing. So increasing to 50k steps won\\u2019t solve the issue.\\n\\n\\n\\n#### **Q4: What does \\\"outputting marginal actions\\\" mean?**\\n\\n**R4:** \\\"Outputting marginal actions\\\" means the agent keeps picking actions at the boundary. For example, if the action range is [-1, 1], the agent keeps choosing -1 or 1 during training. There is a 20-40% chance of encountering this situation in each run. This problem is caused by defects in the algorithm itself.\\n\\n\\n\\nWe have updated Table 3 and the section on `high-performance selection` to improve readability. If you are still confused about the content in the paper, feel free to ask and we will provide further clarification.\"}", "{\"comment\": \"**Q5** Lines 32-36 require further citations\\n\\n**R5** To improve efficiency, popular methods include (1) imitation learning, (2) residual reinforcement\\nlearning, (3) reward shaping, and (4) Q-value initialization. Yet, each has limitations: imitation\\nlearning requires expert data[3-5], residual RL needs a well-designed controller[1-2], and Q-value initialization[8]\\ndemands precise estimates. Therefore, reward shaping[6-7] is the most practical approach, as it avoids the\\nneed for expert trajectories or predefined controllers.\\n\\n[1] Johannink, Tobias, et al. \\\"Residual reinforcement learning for robot control.\\\" 2019 international conference on robotics and automation (ICRA). IEEE, 2019.\\n\\n[2] Trumpp, Raphael, Denis Hoornaert, and Marco Caccamo. 
\\\"Residual policy learning for vehicle control of autonomous racing cars.\\\" 2023 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2023.\\n\\n[3] Garg, Divyansh, et al. \\\"Iq-learn: Inverse soft-q learning for imitation.\\\" Advances in Neural Information Processing Systems 34 (2021): 4028-4039.\\n\\n[4]Chang, Jonathan D., et al. \\\"Adversarial Imitation Learning via Boosting.\\\" arXiv preprint arXiv:2404.08513 (2024).\\n\\n[5] Kostrikov, Ilya, Ofir Nachum, and Jonathan Tompson. \\\"Imitation Learning via Off-Policy Distribution Matching.\\\" Proceedings of the 8th International Conference on Learning Representations (ICLR 2020)\\n\\n[6] Xie, Tianbao, et al. \\\"Text2Reward: Reward Shaping with Language Models for Reinforcement Learning.\\\" The Twelfth International Conference on Learning Representations. \\n\\n[7] Ma, Yecheng Jason, et al. \\\"Eureka: Human-Level Reward Design via Coding Large Language Models.\\\" The Twelfth International Conference on Learning Representations.\\n\\n[8] Nakamoto, Mitsuhiko, et al. \\\"Cal-ql: Calibrated offline rl pre-training for efficient online fine-tuning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n\\n\\n\\n\\n**Q6**: The concept of NPBRS needs clarification.\\n\\n**R6**\\nIn [1], Ng introduced the concept of Reward Shaping and defined PBRS (Policy-Reward Shaping), where additional rewards are provided based on the potential function, ensuring that optimality remains unchanged. NPBRS refers to reward shaping methods that do not follow the potential function rule, and the learned policy does not guarantee optimality.\\n\\n[1] Ng, Andrew Y., Daishi Harada, and Stuart Russell. \\\"Policy invariance under reward transformations: Theory and application to reward shaping.\\\" ICML. Vol. 99. 
1999.\\n\\n**In the new version of the paper, we will make the following revisions:**\\n\\nProvide a more detailed description of LLM-based RL algorithms.\\nAdd more citations in the introduction to ensure objectivity.\\nIn the related work section, we will cite LLM value-based methods to increase richness.\\nFigures 4 and 6 will be further updated to prevent ambiguity.\\nAn appendix will be added with a complete prompt for the large-model reinforcement learning case study.\"}", "{\"comment\": [\"I thank the authors for their response. I think the overall quality of the paper has improved but it still remains borderline (I have raised my score). I still have some doubts:\", \"From table 3, the scores of using all 3 (Q-shaping, policy-shaping and selection) are strictly lower than using just Q-shaping and policy-shaping. Is selection hurting or am I missing something?\", \"I do not understand the explanation for the poor performing runs. What does outputting marginal actions mean?\", \"Why do the authors claim it is impossible to do performance selection for baselines? From figure 4, SAC shoots up as fast as LLM-TD3 for many environments. This suggests that it is still possible to do some form of selection there. Also note that it does not have to be at 10K steps (this is just a hyperparameter). For baselines, you could also consider setting this to larger values such as 50K.\"]}", "{\"comment\": \"I thank the authors for the effort they have put into the paper. Overall, I feel that the quality of the paper has significantly improved and I am now leaning towards acceptance.\"}", "{\"summary\": \"The authors proposed a method called \\\"Q-shaping\\\" to enhance the sample efficiency of reinforcement learning algorithms. The main idea is to prompt an LLM to generate samples of good and bad state-action pairs and heuristic Q value estimates. These samples are used to train the initial Q function before turning to the standard RL pipeline. 
Experiments were conducted on a variety of continuous control environments showing significant improvement in sample efficiency in some environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The idea is original to my knowledge and the experiments are well executed.\", \"weaknesses\": [\"The presentation of the idea is somewhat long-winded and the notations are somewhat inconsistent, as I point out in the questions.\", \"It is not clear how the method is fundamentally different from Q value initialization.\"], \"questions\": [\"Line 155, what does the $A^{\\\\pi}$ symbol represent? Is it the policy improvement operator? I couldn't find any explanation in the text.\", \"Line 181, are the authors missing a $(1 - \\\\alpha)$ coefficient and brackets in the Q function update rule? The equation seems inconsistent with the update equation on line 744 in appendix B.2.\", \"I am not too sure how Theorem 1 actually shows the contraction property of the shaped Q iteration and how it differs from the contraction property of the regular Bellman operator. Line 757 in the proof section appears to say that the optimality of the shaped Q iteration is only guaranteed if the addition of heuristic values is stopped.\", \"In eq 1, what is $D_{g}$? Is it $D_{LLM} = \\\\{G_{LLM}, B_{LLM} \\\\}$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces a novel framework called \\\"Q-shaping,\\\" which enhances Q-value initialization by integrating domain knowledge to accelerate training in reinforcement learning (RL). Unlike traditional reward shaping methods, Q-shaping modifies Q-values directly, thereby improving sample efficiency without sacrificing the agent's optimality upon convergence. 
The experimental results indicate significant performance improvements.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. **Innovative Approach:** Q-shaping presents a fresh perspective on incorporating domain knowledge into RL, overcoming the limitations associated with reward shaping.\\n2. **Empirical Results:** The paper includes comprehensive experimental evaluations demonstrating Q-shaping's effectiveness, with a 16.87% improvement in sample efficiency over the best baseline and a remarkable 253.80% enhancement compared to LLM-based reward shaping methods.\\n3. **LLM Utilization:** The paper effectively harnesses large language models to guide agent exploration, revealing new potentials for LLMs in RL applications.\", \"weaknesses\": \"The current version lacks sufficient proof of completeness in both theoretical and experimental aspects. If the authors can convincingly address these issues, I would be open to reevaluating my score.\", \"questions\": \"1. **Comparison to Existing Works:** It\\u2019s important to clarify why the challenges of reward shaping cannot be addressed by recent LLM-based methods (e.g., Eureka, text2reward). How does your work differ from these studies? It appears your approach utilizes LLMs to design regularization for RL.\\n \\n2. **Proof of Theorem 1:** The proof seems unconventional; while you provide an update formula for the \\\\(\\\\hat{Q}\\\\) iteration, you immediately reference the Bellman optimal operator to support your theorem. Early works have established the convergence of the Bellman operator, so how can you demonstrate that your update formula aligns with it? This appears to assume the conclusion as a basis for your argument.\\n \\n3. **Clarification on Theorem 2:** Theorem 2 establishes a lower bound rather than an upper bound. What is the convergence sample complexity relative to other works? 
Is your bound more favorable than existing results, and do other studies not provide established bounds?\\n \\n4. **Relation to Regularization Techniques:** A deeper explanation of how your work relates to reinforcement learning methods employing regularization techniques would be beneficial. The core of your approach seems to hinge on introducing LLMs for regularization in RL.\\n \\n5. **Experimental Settings:** The experimental setup raises some questions. You utilize GPT-4o as the LLM and TD3 as the RL backbone in your LLM-TD3 method. Which LLM do Eureka and text2reward utilize (notably, Eureka uses GPT-4 and GPT-3.5, while text2reward uses GPT-4)? Is GPT-4o also used for these works, and do they employ TD3 as the RL backbone?\\n\\n**Minor Issues:**\\n1. In lines 32-36, the literature review on current RL works aimed at enhancing training efficiency lacks citations, which detracts from its objectivity.\\n2. The origin of the concept of NPBRS (non-potential based reward shaping) in line 53 is unclear and needs clarification.\\n3. A few LLM-assisted RL studies have focused on Q-function or value function design (e.g., \\u201cHow Can LLM Guide RL? A Value-Based Approach\\u201d). An analysis of these works should be included in the related works section.\\n4. Figures 4 and 6 do not specify the units for steps (presumably in millions).\\n5. The prompt example in the Appendix is too brief. A more comprehensive example, including the output Q function and policy function, would greatly enhance reader understanding.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful comments and suggestions! Theorem 1 was incorrectly expressed, so we rewrote it to make it easier to understand and clearly express its purpose. If you still have questions, please let us know. 
Your suggestions are very valuable to us.\\n\\n**Q1:** The article needs to clearly explain why Q-shaping differs from recent LLM-based methods:\\n\\n\\n**R1:** Q-shaping accelerates learning by directly modifying the Q-values. The advantages of Q-shaping are mainly twofold: Its effect on the agent can be immediately reflected in the next episode, leading to a quick validation cycle. In contrast, reward shaping methods need to wait until the end of the training to validate the performance of the reward shaping.\\nQ-shaping remains optimal, whereas recent LLM-based reward shaping methods are not only difficult to design reward heuristics for, but also do not guarantee optimality.\\n\\n**Q2:** Regarding Theorem 1\\n\\n**R2** Theorem 1 and its proof are incorrect, and we apologize for misleading the reviewers here. \\nTheoretically, Theorem 1 provides two conclusions:\\n1. The new update formula is a contraction operator. Therefore, applying the TD update will lead to convergence. \\n2. During the update process, the heuristic term should stop to ensure that optimality remains unchanged. \\n\\n\\nTo derive Theorem 1, we first need to make two assumptions:\\n\\n1. The heuristic $h(s, a)$ provided by the large model does not change with the iteration count $k$.\\n2. The heuristic $h(s, a)$ terminates at some iteration before convergence.\", \"our_q_function_update_formula_is\": \"$$\\n\\\\hat{Q}^{k+1}(s,a) = (1-\\\\alpha) \\\\hat{Q}^{k}(s,a) + \\\\alpha \\\\left( r(s,a) + \\\\gamma \\\\sum_{s' \\\\in S} P(s'|s,a) \\\\max_a \\\\hat{Q}^k(s',a) + \\\\mathbf{h}(s,a)\\\\right) \\n$$\\n\\nWe define a new operator $\\\\mathcal{T}_h$ based on this:\\n\\n$\\\\hat{Q}^{k+1}(s,a) = \\\\mathcal{T}_h \\\\hat{Q}^{k}(s,a)$\\n\\n$= r(s,a) + \\\\gamma \\\\sum_{s' \\\\in S} P(s'|s,a) \\\\max_a \\\\hat{Q}^k(s',a) + \\\\mathbf{h}(s,a)$\\n\\nWe then prove that the operator $\\\\mathcal{T}_h$ still satisfies the **contraction property** in the appendix. 
Therefore, there exists a unique optimal fixed point $\\hat{Q}^*$. This proof allows us to apply our update formula to this new operator and find a new optimal fixed point.\\n\\nThe $\\hat{Q}^*$ is shifted and biased. Therefore, to allow $\\hat{Q}$ to converge to the optimal value function of the MDP $\\mathcal{D}$, we need to stop the heuristic $h(s, a)$ and let the value function update for a few steps towards $Q^*$.\", \"we_now_pose_the_following_question\": \"How many steps in advance should we stop the heuristic function $h$ so that $\\hat{q}_D$ converges to $q^*_D$? Theorem 2 provides an upper bound for any random bounded $q$-values converging to $q^*$ in MDP $\\mathcal{D}$.\\n\\n\\n\\n\\n**Q3** Regarding Theorem 2:\\n\\n**R3** Regarding sample complexity analysis, many previous works have already provided different upper bounds for sample complexity, such as VI-LCB [1], PEVI-Adv [1], and Q-LCB [2]. \\nThese works all derive tighter upper bounds by designing more refined heuristics. However, these works have some drawbacks:\\n\\n(1) These works that provide tighter convergence algorithms have not conducted experiments to verify whether their algorithms are effective.\\n\\n(2) The reference policy used to obtain the upper bound must satisfy ``single-policy concentrability,'' which limits their applicability.\\n\\nAs discussed in Theorem 1, the heuristic needs to stop before convergence. The goal of Theorem 2 is to provide experimenters with a reference for when to stop the heuristic, rather than comparing sample complexity with previous works.\", \"references\": \"[1] Xie, T., Jiang, N., Wang, H., Xiong, C., and Bai, Y. (2021b). Policy finetuning: Bridging sample-efficient offline and online reinforcement learning. arXiv preprint arXiv:2106.04895.\\n\\n[2] Shi, Laixi, et al. 
``Pessimistic q-learning for offline reinforcement learning: Towards optimal sample complexity.'' International Conference on Machine Learning. PMLR, 2022.\\n\\n**Q4** Is Q-shaping related to regularization algorithms?\\n\\n**R4** Regularization terms refer to those terms related to the values or quantities of model parameters, such as L1 regularization or L2 regularization. Their main purpose is to constrain the overfitting problem of the model. In this work, the main purpose of the heuristic term is to shift the Q-function so that the agent can perform actions that are of interest to the large model.\"}", "{\"comment\": \"I thank the authors for their response. I am still confused about some details-\\n\\n**1. High-performance agent selection** \\n\\nIs this implemented for all baselines or just for Q-shaping? 10 agents are rolled out for 15K steps before being discarded. This means that there are 150K additional training steps taken, do the learning curves account for this and if they don't, how does it impact the sample efficiency results? This high-performance selection also biases the overall performance if it is not implemented for baselines. It essentially means reporting the average score of the 10 best agents for Q-shaping, but only reporting the average overall score for the other agents. \\n\\n2. **Environment description details** \\n\\nAs stated in my original review, it is important to provide the 'environment description' that is given to LLMs when prompting them to classify good/bad states. I want to understand the amount of domain knowledge being provided to LLMs. In particular, I want to get a sense of how easy/hard will it be for a human to themselves write the function which they ask a LLM to write. On this note, it will also be helpful to provide some examples of the good/bad state functions written by LLMs. \\n\\n3. **Regarding seeds** \\n\\nWhen I refer to seeds, I do not refer to just the environment, I also refer to the RL algorithm. 
As there are confidence intervals around the curves, I am assuming multiple runs are done for each method. How many such runs are done? \\n\\n4. **Most LLMs Can Provide Correct Heuristic Functions**\\n\\nFor this ablation, can you please describe how you actually evaluate correctness? For example, o1-preview has 100% correctness of the assigned Q-values. How is this correctness actually judged? Am I missing something, can you point me towards the lines where this is discussed in detail. \\n\\nAs a final note, the points above are not asking for any new experiment to be run, just clarification details on the current version of the paper.\"}", "{\"comment\": \"Thank you for your thoughtful and constructive feedback. The comments provided have been invaluable in helping us improve the clarity and thoroughness of our work. In this response, we address each of the concerns raised, with a focus on providing additional details, clarifications, and improvements to our experiments.\\n\\n**Q1:** In the 20 tasks, 6-7 tasks performed worse than the best baseline. Although there could be various reasons (such as the underlying RL algorithm, poor LLM output, randomness due to using only one seed, etc.), it is difficult to validate the generalization ability of the method.\\n\\n**R1** In the sample efficiency experiment, we also added a comparison with TD3. By incorporating the LLM heuristic module, the agent\\u2019s performance improved by an average of 55\\\\%. This clearly demonstrates that the Q-shaping module can significantly enhance the sample efficiency of the underlying RL algorithm. 
In the 6-7 tasks where performance was worse than the best baseline, the action space complexity exceeded the LLM\\u2019s understanding capability, making it difficult for the LLM module to provide accurate heuristic $(s, a)$ pairs, and therefore, it could not compete with the best baseline.\\n\\n\\n**Q2:** The paper does not introduce the concept of seeds.\\n\\n**R2** Thank you for your reminder. The paper does not discuss the seed because this work completely removes the use of seeds. Introducing a seed would reduce the complexity of the environment. The function **env.reset()** resets the environment and provides a random initial state, while **env.reset(seed=0)** would fix the initial state. This would mean the agent starts learning from a fixed initial state each time it is reset, which actually **reduces the learning complexity.**\\n\\nIn the 20 tasks, some tasks have fluctuating learning curves, which are mainly related to the **design of the reward function**. For example, in the \\\"ball_in_cup\\\" task, if the agent manages to throw the ball into the cup on the first attempt, it will receive a total reward of 15,000. However, if it fails on the first attempt, the total score will be much lower. This leads to fluctuations in the learning curve. However, in most environments, the agent\\u2019s performance is relatively stable. Furthermore, the improvement brought by the LLM is also related to the complexity of the environment. The easier the environment is to understand, the more significant the improvement for the agent.\\n\\n\\n**Q3:** Eureka and Text2Reward are not fairly treated.\\n\\n**R3** Thank you for your reminder. Text2Reward was validated using MetaWorld, and the designed reward functions were provided in the code repository. Therefore, we could directly use the GitHub code for verification. The only modification we made to the T2R code was to verify optimality every 5000 steps. 
As a result, the training of T2R is fair.\\n\\nRegarding Eureka, it was validated in the Isaac environment, and it designed many prompts. To transfer from the Isaac prompt to the MetaWorld prompt, some adjustments to the prompt are necessary.\", \"i_believe_that_eureka_performed_poorly_for_five_main_reasons\": \"1. The Eureka prompt emphasizes reward scaling and normalization, which may introduce bias into the learning process during early iterations.\\n\\n2. Eureka uses PPO; however, in our experiments, PPO significantly lagged behind algorithms optimized for continuous action spaces, such as SAC and TD3.\\n\\n3. The reward heuristic provided by Eureka requires several genetic algorithm iterations before any performance improvement is observed.\\n\\n4. Eureka uses task success rate as a metric, which represents a much simpler task than learning an optimal policy.\\n\\n```python\\ndef compute_reward(object_pos: torch.Tensor, goal_pos: torch.Tensor) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]:\\n```\\n\\n5. Eureka simplifies the observation space. As shown above, the state used for computing the reward heuristic is limited to `object_pos` and `goal_pos`. When provided with the full state dimensions, Eureka has a high probability of generating worse reward heuristics.\\n\\nAdditionally, lines 67-73 of the paper explain that Q-shaping has a very fast verification cycle, allowing us to directly validate the impact of the algorithm. In contrast, Eureka needs to wait until halfway through training to obtain the fitness value. Assuming max_iter = 10^7, the verification cycle for Eureka is 5x10^6, which is 5x10^6 times longer than ours.\"}", "{\"comment\": \"Thank you for your careful reading of our paper and for pointing out the notation issues. We greatly appreciate your attention to detail, as it has helped us improve the clarity and consistency of our work. 
We have made the necessary corrections to the notations and ensured that the manuscript is now more accurate and easier to follow.\\n\\n**Q1:** It is currently unclear how this method fundamentally differs from Q-value initialization.\\n\\n**R1** Recent work [1] that utilizes Q-value initialization to enhance online learning requires an accurate estimation of Q-values, whereas our work enhances online learning through imprecise estimation. Additionally, policy shaping is introduced to align the policy's behavior with the LLM\\u2019s output, which accelerates the training process.\\n\\n[1] Nakamoto, Mitsuhiko, et al. \\\"Cal-ql: Calibrated offline rl pre-training for efficient online fine-tuning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n**Q2:** what does $A^\\\\pi$ mean?\\n\\n\\n**R2** $A^\\\\pi$ refers to \\u201cactivity matrix\\u201d, encoding $\\\\pi$'s state-conditional state-action distribution.\\n$$A^\\\\pi(s, \\\\langle \\\\hat{s}, a \\\\rangle) = \\\\pi(a | s) \\\\; \\\\text{if } s = \\\\hat{s}$$\\n$$A^\\\\pi(s, \\\\langle \\\\hat{s}, a \\\\rangle) = 0 \\\\; \\\\text{otherwise}$$\\n\\n**Q3:** In line 181 the q function update formula is probably wrong?\\n\\n**R3** Theorem 1 and its proof are incorrect, and we apologize for misleading the reviewers here. \\n \\nTheoretically, Theorem 1 provides two conclusions:\\n1. The new update formula is a contraction operator. Therefore, applying the heuristic TD update will lead to convergence. \\n2. During the update process, the heuristic term should stop to ensure that optimality remains unchanged. \\n\\n\\nTo derive Theorem 1, we first need to make two assumptions:\\n\\n1. The heuristic $h(s, a)$ provided by the large model does not change with the iteration count $k$.\\n2.
The heuristic $h(s, a)$ terminates at some iteration before convergence.\", \"our_q_function_update_formula_is\": \"$$\\n\\\\hat{Q}^{k+1}(s,a) = (1-\\\\alpha) \\\\hat{Q}^{k}(s,a) + \\\\alpha \\\\left( r(s,a) + \\\\gamma \\\\sum_{s' \\\\in S} P(s'|s,a) \\\\max_a \\\\hat{Q}^k(s',a) + \\\\mathbf{h}(s,a)\\\\right)\\n$$\\n\\nWe define a new operator $\\\\mathcal{T}_h$ based on this:\\n\\n$\\\\hat{Q}^{k+1}(s,a) = \\\\mathcal{T}_h \\\\hat{Q}^{k}(s,a)$\\n\\n$= r(s,a) + \\\\gamma \\\\sum_{s' \\\\in S} P(s'|s,a) \\\\max_a \\\\hat{Q}^k(s',a) + \\\\mathbf{h}(s,a)$\\n\\nWe then prove that the operator $\\\\mathcal{T}_h$ still satisfies the **contraction property** in the appendix. Therefore, there exists a unique optimal fixed point $\\\\hat{Q}^*$. This proof allows us to apply our update formula to this new operator and find a new optimal fixed point.\\n\\nThe $\\\\hat{Q}^*$ is shifted and biased. Therefore, to allow $\\\\hat{Q}$ to converge to the optimal value function of the MDP $\\\\mathcal{D}$, we need to stop the heuristic $h(s, a)$ and let the value function update for a few steps towards $Q^*$.\", \"we_now_pose_the_following_question\": \"How many steps in advance should we stop the heuristic function $h$ so that $\\\\hat{q}_D$ converges to $q^*_D$? Theorem 2 provides an upper bound for any random bounded $q$-values converging to $q^*$ in MDP $\\\\mathcal{D}$.\\n\\n\\n**Q4:** what does $D_g$ mean?\\n\\n**R4** $D_g$ is a typo, and it should be $D_{LLM}$, thank you for pointing this out.\"}", "{\"summary\": \"This work presents Q-shaping, a framework to accelerate training of reinforcement learning agents by using LLMs to produce domain-knowledge based heuristic functions for initializing the Q-function and policy. Specifically, LLMs produce code to categorize good and bad state-action pairs in the environment. Before the start of training, these pairs are used to update the Q-network and policy, thus leading to better network initializations. 
Results across 20 environments show that Q-shaping can significantly improve sample efficiency and outperform LLM-guided reward-shaping methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The method is intuitive and simple to understand. The domain knowledge of LLMs is used to find good initializations of the Q-function. This can be generally useful for multiple RL tasks if structured information can be effectively elicited from LLMs.\\n2. The paper has an appropriate number of citations and properly details existing work in the related work section. \\n3. Although multiple works have considered using the domain knowledge of LLMs for improving RL, this work introduces another novel way to harness that expertise.\", \"weaknesses\": \"1) **Writing**: The overall writing is lacking and can be significantly improved. The style of writing is currently informal and often lacking important experimental details. For example, the evaluation criteria are not properly explained, and some experimental details are not clear. The overall flow of the paper is also not smooth.\\n2) **Result Discussion**: The discussion of the results is very limited. The ablations conducted are only discussed superficially. For an empirical paper, only 1 page dedicated to discussion of results is too little. I personally feel that more discussion is needed in the experiments section, and some of the theory and notation introduced is not critical to the paper and can be deferred to the appendix.\\n3) **Significance of Results**: In 6-7 out of the 20 tasks, the presented method is worse than the best-performing baseline. While there are multiple potential causes of this (base RL algorithm, bad LLM outputs, randomness if only 1 seed is used, etc), it is difficult to validate the generalization capability of the method.\", \"questions\": \"1. What is the number of seeds used? 
The curves oscillate a lot and it is difficult to draw conclusions from many of the plots.\\n\\n2. I am not convinced by the implementation of the Eureka and text2reward baselines. In 3 out of the 4 plots, both these baselines stay completely flat and do not improve at all. This is strange as Eureka was shown to perform well on a variety of robotic tasks. The tasks selected in this paper do not seem very different, and I am curious why these baselines are so bad. Setting the evolution round to 1 might be partially responsible for this but makes it unfair for the baseline. \\n\\n3. What is the state for the environments considered? There is no information provided on this and I do not see how this method will generalize when the states are images. Similarly, when doing RL on real robots, clean environment code as assumed by this work will not be available. It will be useful to get an idea about the assumptions that this work makes.\\n\\n4. It will be helpful to add the individual impacts of Q-shaping and policy-shaping in the ablation study on different training phases. Currently, it is unclear what the contributions of these two techniques are to the final performance of the method. \\n\\n5. I do not understand the significance of the sample efficiency results. Sample efficiency improves by an average of 17% compared to baselines. However, the presented framework also has a high-performance selection phase which is not a part of the baselines. As multiple agents are rolled out for a significant number of timesteps, a fairer comparison would be to add these timesteps into the sample efficiency calculations. \\n\\n6. How are the heuristic functions output by LLMs evaluated? For example, one of the evaluation criteria is correctness of assigned Q-values. How is this actually measured?\\n\\n7. How many times is an LLM prompted per task? If it is prompted multiple times, how are they filtered? \\n\\n8. 
I think it is also important to release the entire prompts that are used for the LLMs as there could be a lot of domain knowledge provided in the task descriptions themselves. As the environment task descriptions are currently not provided in the paper, it is difficult to understand the contribution of the LLM.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the detailed feedback and thoughtful comments. Below are our responses to the points raised:\\n\\n#### Q1: High-performance agent selection is for baselines? \\n**R1:** During early training steps, all runs perform similarly, so it is impossible to identify poorly performing runs and remove them at that stage. Therefore, the inclusion or exclusion of high-performance agent selection does not bias the performance of the baselines.\\n\\n#### Q2: Why is high-performance agent selection introduced? \\n**R2:** While testing TD3, we observed a strange phenomenon: 20\\u201340% of runs fail to learn and output marginal actions. To mitigate the impact of this phenomenon, we introduced `High-performance agent selection` to remove shaped agents that perform poorly. \\n\\nWhen the paper was written, we noticed this phenomenon but did not know its cause. Thus, we proposed `High-performance agent selection` as a strategy to mitigate its impact.\\n\\n\\n#### Q3: High-performance agent selection is unfair to baselines. \\n**R3:** This stage may be easily misunderstood as selecting the best 10 agents from 20 runs. However, the purpose of this stage is to prevent agents from consistently outputting marginal actions. This stage will only run 10k to test the performance of the shaped agent, and if the performance is not very good, it will be deleted.\\n\\nRecently, we identified the actual cause of the issue where agents have a certain probability of outputting marginal actions. 
A detailed explanation will be included in a related paper, which we plan to release by the end of this year.\\n\\n\\n\\n#### Q4: Regarding seeds. \\n**R4:** During early training, 20 agents start exploring. Within the first 10K steps, 10 agents are removed if they perform poorly after policy shaping, leaving only 10 agents to contribute to the learning curve.\\n\\n#### Q5: Correctness of the generated Q-values. \\n**R5:** Correctness of the assigned Q-values means that state-action pairs $(s, a)$ in the LLM-generated *goodQ* set must have Q-values greater than zero, while those in the *badQ* set must have Q-values less than or equal to zero.\\n\\nWe hope these responses address your concerns. If you have any additional questions or suggestions, please feel free to reach out. Your feedback is important for improving the quality of our paper.\"}", "{\"comment\": \"Thank the authors for their effort. I think the presentation of the paper has improved. The comparison against Eureka also demonstrates the utility of the proposed method for llm-guided RL. I am raising the scores correspondingly.\"}", "{\"comment\": \"Thank you for your efforts in addressing the review comments during the rebuttal stages. I acknowledge that the quality of the paper has improved as a result, and I have increased my score to 5.\\n\\nHowever, I still consider this a borderline paper. The primary reason is that leveraging LLMs for reward design to improve RL performance has already been explored in several prior works. While the authors have compared their approach with representative methods such as text2reward and Eureka, the comparisons are limited to a few environments and focus primarily on final performance metrics. Additionally, the theoretical analysis presented in the paper has limited connection to the experimental results. 
I believe deeper exploration and stronger connections between the theory and experiments are needed to better support the paper\\u2019s core contributions.\"}" ] }
DlZ97cVwr0
Exploring the Recall of Language Models: Case Study on Molecules
[ "Philipp Guevorguian", "Knarik Mheryan", "Hasmik Mnatsakanyan", "Hrant Khachatrian" ]
Most of the current benchmarks evaluate Generative Language Models based on the accuracy of the generated output. However, in some scenarios, it is also important to evaluate the recall of the generations, i.e., whether a model can generate all correct outputs, such as all security vulnerabilities of a given codebase. There are two challenges in evaluating the recall: the lack of complete sets of correct outputs for any task and the existence of many distinct but similar outputs (e.g., two exploits that target the same vulnerability). In this paper, we propose a benchmark from the domain of small organic molecules. We define several sets of molecules of varying complexity and fine-tune language models on subsets of those sets. We attempt to generate as many molecules from the target sets as possible and measure the recall, i.e., the percentage of generated molecules from the target set. We examine the impact of the training loss function and sampling strategy on the recall. We propose a sampling strategy based on beam search that avoids duplicates and maximizes recall. Finally, we show that given a small validation set, one can predict the recall of the model without actually generating many samples, which can act as a model selection strategy for maximizing generation recall.
[ "recall", "language models", "molecular language models", "sampling methods for language models" ]
Reject
https://openreview.net/pdf?id=DlZ97cVwr0
https://openreview.net/forum?id=DlZ97cVwr0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rG8BpXLcRb", "nv1GlaUxeD", "mL47EEzdzN", "ljtQXtlRxP", "l1ughHDeGF", "j1hkRs9vqg", "hz662P5g8e", "gMgOTGuEcK", "fwkA66S4Pp", "fqtuiHX0ZF", "ayS5628Ddc", "ZHDDNMSxUB", "XvVk9GJsdZ", "VU0UXPxgGz", "V709m4B8w2", "THopcWpA2W", "Ef6lzGSTvX", "CxcDPHX75y", "8qXQ1o18zq", "49bLCiF2HO", "1rxzgl0qea" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1733009587081, 1733010090985, 1733009493041, 1733313526123, 1733011930584, 1733011894490, 1733312604383, 1730864226294, 1729999336842, 1733137713336, 1733011643268, 1733239057278, 1737524305374, 1734814259001, 1730414008984, 1733239202091, 1733313536717, 1733313613848, 1733011803792, 1730289109806, 1730275452853 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Reviewer_dREi" ], [ "ICLR.cc/2025/Conference/Submission14268/Reviewer_Tb3C" ], [ "ICLR.cc/2025/Conference/Submission14268/Reviewer_Tb3C" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14268/Area_Chair_y2Uq" ], [ "ICLR.cc/2025/Conference/Submission14268/Reviewer_UmMb" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Authors" ], [ "ICLR.cc/2025/Conference/Submission14268/Reviewer_jCNz" ], [ "ICLR.cc/2025/Conference/Submission14268/Reviewer_v462" ] ], "structured_content_str": [ "{\"title\": \"Response to minor questions\", \"comment\": \"**1. In figure 2, the authors stated that \\\"The plot indicates that the recall is close to saturation at 10 million generations, implying that this model will not cover 90% of the molecules even with 50 million generations.\\\" To me, the coverage function is naturally sub-linear, as you repeatedly take samples from a fixed distribution, the likelihood of getting a new unseen sample gradually goes down, so I am not sure if this (the sublinear trend) is a problem. And if it is, does the authors' proposed approach improves the trend to be somewhat linear? I think that will be an exciting result to see.**\\n\\n**Answer:**\\n\\nAs you correctly state, i.i.d methods are expected to demonstrate sublinear performance in these settings. In general, it would be better to get to higher recall with fewer generations, so \\u201cbeating\\u201d the sublinear trend is a good goal.\\n\\nUnfortunately, beam search does not really beat that. Here is why. There are two reasons why the generations go below the \\u201cideal\\u201d curve (the blue dashed line on Figure 2): (a) imperfect precision of generations, (b) duplications.\\n\\nRegular autoregressive generation has both problems. Precision is ~constant at 75% (Table 4), and there are many duplications. \\u201cUpper bound (i.i.d)\\u201d shows the case when the precision is ideal, but the duplications are there. Beam search solves the duplication issue, but the precision gets gradually worse as one increases the beam size. The reason is that beam search naturally ranks the molecules by their perplexity, and the top ones have higher precision. 
Surprisingly, the two different issues for these two methods (beam vs. upper bound) produce very similar recall. \\n\\nWe have [plotted](https://ibb.co/2vFFJrm) the values of Table 4 to visually show that the trend is sublinear for the beam search as well. We are adding more points to this chart to make it smoother before we put it in the paper (unfortunately it takes too long). We will add a paragraph with these clarifications. \\n\\nThanks for bringing this up!\\n\\n2. **SMILES v.s. SELFIES. I am not expert on the molecule modelling topic, but from Table 7, it seems SMILES works better than SELFIES when the data is in Canonical form, so why choose SELFIES as the main representation form?**\\n\\n**Answer:** \\n\\nWe used SELFIES as it has less issues with generating valid molecules. At some point we found the paper \\u201cInvalid SMILES are beneficial rather than detrimental to chemical language models\\u201d and decided to compare SMILES as well. The results were mixed: SMILES was better with canonical fine-tuning, and SELFIES was better with randomized fine-tuning.\\n\\nWe didn\\u2019t rerun all our experiments with SMILES as we did not have a goal to squeeze the best possible scores. The goal of this subsection is to show the effect of representations. The lesson learned is that future work should not neglect this aspect of the training when maximizing recall in modalities that have multiple representations. \\n\\n3. **Writings: [Line 76], (Remove \\\"Finally\\\"?) Finally, LLMs have recently demonstrated strong performance on these tasks [Line 310] I am not sure this expression = \\\"an average probability\\\", looks like a sum of probabilities.**\\n\\n**Answer:** Thank you for identifying these grammatical errors, we would be happy to make revisions to the manuscript based on this feedback.\\n\\nWe hope we addressed all of the concerns you raised to your satisfaction. If that is the case, we would ask to adjust the review score accordingly. 
We are open for more questions and feedback. Thanks again.\"}", "{\"title\": \"Response to weaknesses\", \"comment\": \"Thanks for the review!\\n\\n**Weaknesses:**\\n\\n**The contributions of this paper are limited. On one hand, in improving recall through sampling methods and loss functions, the authors merely attempt different strategies, which can sometimes harm precision, and no solutions are provided.** \\n\\n**On the other hand, the improvements through fine-tuning appear to offer no significant contribution, as it is generally expected that fine-tuning would enhance performance on a specific task.**\\n\\n**Answer:**\\n\\nWe appreciate the concern about the scope and extent of contributions of the present work. This criticism does not account for the primary focus and contribution of this paper, which is to enable an exact recall metric, as well as a strong method to predict this recall without expending compute for model inference for all of the required samples. This is a critical contribution as prior works attempting a recall evaluation rely on approximate metrics like KNN or necessitate the retrieval of specific text from corpora. Our method enables model selection for any i.i.d. sampling method, provides foundational insights into recall in language modelling. The recall-adapted beam generation strategy emphasizes how intuition surrounding this newly developed metric can guide modelling decisions. \\n\\nThe work also connects this evaluation with a specialized domain in which the recall problem is meaningful, formalizes the necessary domain-specific constructs to calculate recall, and provides motivation for potential applications in other domains like security. It\\u2019s of note that in molecular generation tasks, precision is not of great interest since repeated proposals of the same molecule are typically redundant. 
The analysis of loss function aggregation methods revealed an unexpected relationship between model capacity and recall optimization strategies.\n\nFinally, we do not perform experiments comparing models with and without finetuning on the subsets of interest. Rather, we separate model training into pretraining and finetuning stages in order to analyze the impacts of data representation on recall and precision, namely canonicalization and randomization of the SMILES and SELFIES representations. We also include an experiment with a model which undergoes finetuning but **not pretraining**; this separate experiment demonstrates that the increased representational power gained during the pretraining stage uniformly improves models\u2019 ability to generate molecules within subsets of interest, despite being exposed to a far greater number of molecules outside of these subsets during pretraining. \n\nPlease also note that the scope of this paper is to present a **benchmark** to facilitate research on recall of LMs. Thanks for confirming the importance of this task in the Strengths section of your review. Designing significantly novel methods that maximize the recall is beyond the scope of this work. We tried to cover all \u201clow-hanging\u201d methods known to the community to set up the scene with reasonably strong baselines.\n\n**The model is too singular, as the experiments in this paper only include the OPT-1.3B model. Therefore, the evaluation results and methods for enhancing recall may not generalize well.**\n\n**Answer:**\n\nWhile we primarily focused on OPT-1.3B, our experiments actually span multiple model scales, including OPT-125M and OPT-800K variants (Section 4.4). These experiments revealed important scaling behaviors - for instance, our recall-oriented loss function showed different effects across model sizes. 
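To make the aggregation choice concrete, here is a minimal, self-contained sketch (plain Python rather than our actual training code; the function name and toy numbers are illustrative only) of how mean/min/max aggregation of per-token losses changes which sequences dominate the objective:

```python
def sequence_loss(token_nlls, agg="mean"):
    """Aggregate per-token negative log-likelihoods into one sequence loss.

    "mean" is the standard LM objective; "max" concentrates the loss on the
    hardest token of a sequence, and "min" on the easiest -- toy stand-ins
    for the aggregation variants discussed above.
    """
    if agg == "mean":
        return sum(token_nlls) / len(token_nlls)
    if agg == "max":
        return max(token_nlls)
    if agg == "min":
        return min(token_nlls)
    raise ValueError(f"unknown aggregation: {agg}")

# Two toy "molecules": one uniformly easy, one with a single hard token.
easy = [0.10, 0.10, 0.10, 0.10]
spiky = [0.05, 0.05, 0.05, 1.00]

# Under "mean" the two sequences contribute similar losses; under "max"
# the spiky one dominates, shifting training pressure to its hard token.
print(sequence_loss(easy, "mean"), sequence_loss(spiky, "mean"))
print(sequence_loss(easy, "max"), sequence_loss(spiky, "max"))
```

Whether concentrating loss on the hardest or easiest tokens helps recall depends on model capacity, which is the interaction the paragraph above refers to.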
\\n\\nWe chose the OPT architecture because it is a general GPT-2-like architecture and a predecessor of the LLaMA models, making it a reasonable proxy for a variety of autoregressive decoder-only architectures. Additionally, we extensively trained multiple sized models from scratch on different molecular representations, which required substantial computational resources. Given these constraints, we decided to focus our efforts on this single class of models to ensure thorough evaluation and analysis. \\n\\nWhile we believe newer architectures will improve both precision and recall, we do not expect significantly different behavior, e.g. across sampling strategies. To verify this belief, we are currently training a Llama 3.1 1B model. We hope the results will be in before the end of the discussion period, and regardless of the outcome we will include them in the manuscript.\\n\\nWe thank you again for your review. We hope these clarifications will enable a positive re-evaluation of our work.\"}", "{\"title\": \"Response to weaknesses\", \"comment\": \"Thank you for the detailed response and interesting insights! Please find our responses below.\\n\\n**Weaknesses:**\", \"my_main_concern_with_this_paper_is_around_its_technical_contributions\": \"**The author proposed using random sampling with temperature and beam search (with a large beam size) to improve recall coverage. These two methods are well-known methods in language models' (LM) generation, and I was expecting a novel generation approach such as generating with penalizing the likelihood of already generated sequences.**\\n\\n**Answer:**\\n\\nWe acknowledge that the proposed methods, random sampling with temperature and beam search, are well-established in language model generation. 
Our primary goal with temperature sampling was to conduct ablation studies, demonstrating that while higher temperatures lead to higher entropy and more diverse generations, in the context of molecular generation, this also resulted in more molecules outside the desired subset (Figure 4).\\n\\nRegarding beam search, we do not claim to introduce an entirely novel decoding scheme. Instead, we adapt it by setting the beam size equal to the generation size\\u2014an essential adjustment specifically designed to maximize recall in the context of molecule generation. This modification enables beam search to thoroughly explore a broader set of high-recall candidates (which, initially, we could not determine would belong to our desired subset) and ultimately achieve significantly higher recall compared to other popular decoding methods.\\n\\nIn general, the scope of this paper is to present a **benchmark** to facilitate research on recall of LMs. Thanks for highlighting this in the Strengths section of your review. Designing significantly novel methods that maximize the recall is beyond the scope of this work. We tried to cover all \\u201clow-hanging\\u201d methods known to the community to set up the scene with reasonably strong baselines.\\n\\n**The method that predicts recall has a lot of similarities with perplexity measure in language modelling, would the authors clarify how is the proposed metric different from the perplexity-based measures?**\\n\\n**Answer:**\\n\\nThe recall and especially precision predictors have mathematical similarities with standard perplexity metrics, from which we took inspiration. The critical difference for the method which predicts recall from perplexity is in the facility of its interpretation and connection to generative performance. 
Perplexity measured on a held-out set may be able to predict the perplexity on another larger corpus with similar distribution, but it does not provide an interpretable or comparable value regarding the downstream performance of the model. Unlike our proposed method, perplexity measures do not take model generations into account. In practice, the simplicity of our method enables the concomitant calculation of perplexity and predicted recall metrics, which would prove informative for teams working at the intersection of NLP and applications in domains for which recall is meaningful.\n\n**Removing duplicates and selecting data in each batch are sensible approaches, but they don't appear to be anything novel.**\n\n**Answer:**\n\nWe acknowledge that the loss objectives presented in the work are not novel on a broad scale. The intention of the experiments which implemented these modifications was to provide a more comprehensive explanation of the recall problem setting. We would be happy to correct the statement in the manuscript to clarify that the novelty of the work is within the analysis of a new problem setting with previously inaccessible motivations for these modelling decisions, rather than presenting novel methods in a broad sense.\"}", "{\"title\": \"Responses\", \"comment\": \"We thank the reviewer for the deep review and the questions.\n\n**Concerns**:\n\n**Firstly, even though the main point, evaluating whether a model can generate all correct outputs is important for safety-critical problems, it is unclear whether this is the case for the studied objective molecule generation. It is better to give clear motivation for the importance of evaluating recall for this task.**\n\n**Answer:**\nThe primary focus of the paper is to demonstrate this problem formulation in a domain for which it is useful. We suggest applications in other domains to provide further motivation for our line of research. 
With respect to molecules, we address the importance of recall evaluations in molecular generation in lines 044-048 of the paper: \\n\\n> In scientific discovery, generating new molecules or materials with given characteristics is a cornerstone problem. For example, in drug discovery, most of the correctly generated molecules may prove useless in subsequent phases of drug development (e.g., in toxicity analysis), so generating a diverse and complete set of initial molecules is useful. Another related problem is the exhaustive generation of all conformations (3D positions) for a given molecule.\\n> \\n\\nTo expound upon this, the ability of a model to cover the full set of molecules which satisfy certain criteria is desired for a number of reasons. Firstly, it tests whether a model can generate molecules which often have high reward, and capture the total diversity of the subset in question. It provides a direct signal for systematic biases and failure modes of the generative model, identifying if it misses certain subclasses or chemical subspaces within the chosen set. Current benchmarks in molecular generation rely on arbitrary thresholds for property values to evaluate molecular generation pipelines because the complete set of desired molecules is not specified. By reformulating the problem, we enable an evaluation method which is both interpretable by domain experts (\\u201dWith model A, we can recover M% of the molecules that bind to Y and have Z property\\u201d) and fully captures the complexity of the task.\\n\\n**For the subset construction, in Table 1, it is unclear how the threshold is determined, e.g., 0.4 for Sasp and 0.2 \\u2264 sim(m, d) \\u2264 0.2165. Please clarify it.**\\n\\n**Answer:**\\n\\n The thresholds were chosen to ensure that the resulting subsets have comparable sizes. We aimed to construct a training set of 1 million molecules for each subset, ensuring that all models were trained on equal amounts of data. 
Specifically, this design ensures that the upper bounds for recall calculations are based on subsets of similar size, providing a fairer basis for comparison.\\n\\n**In Section 4.1, Table 2 and Table 3 suggest different solutions as the best, which one we should accept in practice. It is better to add more discussion here.**\\n\\n**Answer:**\\n\\nThank you for raising concern about seemingly conflicting findings. We attempt to address this in lines 512-526 of our work. To add to this and continue the reasoning of the response to the first concern, maximizing recall would typically be of greater interest in practical applications compared to precision. In this case, randomized pretraining with randomized fine-tuning would be the best configuration based on our findings. This is because during molecular generation, given a fixed compute budget, the generation of an increased diversity of candidates for subsequent development is more important than generating desired molecules more often, since duplicate generations are redundant.\"}", "{\"title\": \"Response on questions\", \"comment\": \"**Questions:**\\n\\n- **In my understanding, the process you described in lines 236-237 is aimed at generating the set of every correct generation,\\u00a0\\u201dS\\u201d, for evaluation purposes. Is this correct? Additionally, how can you ensure that the generated results accurately represent every correct generation?**\\n\\n **Answer:**\\n\\n Yes, that is correct. Note that in practice, there is no proven method to exhaustively generate all possible valid permutations of a given SELFIES representation. Instead, we approximate this by shuffling the atom positions of each molecule up to 1 million times and retaining the unique, valid string representations obtained from these permutations to get every possible representation of molecules which satisfy the criteria of the molecular subsets. 
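The shuffle-and-deduplicate procedure above can be illustrated with a toy analogue (illustrative only: real use operates on SELFIES strings with chemical validity checks, which we omit here, and the function name is ours). Repeatedly permuting the "atoms" of a tiny molecule and keeping the unique strings recovers its full set of representations:

```python
import random

def approximate_equivalence_class(atoms, n_shuffles=1000, seed=0):
    """Randomly reorder the atoms many times and keep the unique string
    representations. With enough shuffles this approximates the full
    equivalence class (3! = 6 orderings for three distinct atoms)."""
    rng = random.Random(seed)
    atoms = list(atoms)
    seen = set()
    for _ in range(n_shuffles):
        rng.shuffle(atoms)
        seen.add("".join(atoms))  # one "string representation" per ordering
    return seen

reps = approximate_equivalence_class("CNO")
print(len(reps))  # 6
```

For 13-heavy-atom molecules the permutation space is far larger, which is why the paper caps the procedure at 1 million shuffles per molecule and accepts an approximation.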
\\n\\n With respect to completeness of the molecular subsets (irrespective of string representation), leveraging GDB-13 as our full molecular set from which the subsets of interest are derived, we have a guarantee that these subsets are exhaustive enumerations of molecules which specify certain criteria. Some of these criteria are inherited from GDB-13 (\\u226413 heavy atoms, valid valences, etc..) and the remaining criteria correspond to the constraints by which we define the subsets.\\n\\n \\n\\n- **As shown in Table 2, recall shows a correlation with the complexity of molecules, whereas precision does not. Is there a specific reason for this? I\\u2019m curious about which aspects of the recall metric lead to this outcome.**\\n \\n **Answer:** \\n \\n In Table 2, the recall of model generations for molecular subsets from highest to lowest is \\u201csas\\u201d, \\u201casp\\u201d, \\u201cd>p\\u201d, \\u201cd=p\\u201d. Thus, \\u201csas\\u201d is the easiest set to model and \\u201cd=p\\u201d is the hardest. We see the same order of performance for precision, in Table 3. Thus, both metrics capture the same ordering of difficulty for modeling the respective molecular sets.\\n \\n- **What is the input to the model when performing generation with an LLM for recall / precision evaluation?**\\n \\n **Answer:** \\n \\n We do not provide any specific input to the model; the generation is entirely unconditional, initiated only with a start token. Each model is fine-tuned on a specific subset, and after training, we expect it to generate samples predominantly from the learned distribution of that subset.\\n \\n- **What exactly is the purpose of the validation set mentioned in line 220, and is there a specific reason for using only 10,000 instances?**\\n \\n **Answer:** \\n \\n The validation set, mentioned in Line 220, is used to evaluate the model's performance during training, helping us monitor overfitting and adjust hyperparameters as needed. 
The size of 10,000 instances was chosen as it represents 1% of the 1M training set, providing a reasonable balance between computational efficiency and statistical power. More importantly, we use this small validation set and our proposed method to **predict** recall on the entire set of molecules which satisfy the given criteria, the results of which are displayed in Figure 3.\n \n- **How does the cost (time complexity, memory, etc.) change with the beam size in 4.3?**\n \n **Answer:** \n \n The time complexity of beam search scales with the beam size `B`, as the model computes the probabilities for each token in the vocabulary `V` for each beam. At each decoding step, this results in `B\u00d7V` probability calculations. Sorting the `B\u00d7V` candidates takes `(B\u00d7V)log(B\u00d7V)` time. `C` is the cost of the operation of getting 1 probability. Given a sequence length `L`, the overall time complexity is **O(L\u00d7B\u00d7V\u00d7[C + log(B\u00d7V)]).**\n \n When the beam size is increased, for instance from B=1M to B=10M, the dominant B\u00d7V term grows by a factor of 10, with a small additional increase from the log(B\u00d7V) sorting term.\n \n In terms of memory complexity, the space required grows linearly with beam size, i.e., O(B\u00d7L), as each beam must store its partial sequence and associated score. For sequences of length L=10 and beyond, we save the beam candidates to disk to avoid excessive memory usage, thereby reducing the in-memory footprint.\n\n\nWe hope we addressed all of the concerns you raised to your satisfaction. If that is the case, we would ask to adjust the review score accordingly. We are open to more questions and feedback. Thanks again.\"}", "{\"title\": \"Response on evaluation\", \"comment\": \"**Evaluation**\n\n**Most experiments in this paper are validated using a single model and dataset, making it difficult to consider the proposed benchmark method and the approaches to improve recall as thoroughly validated. 
I believe there should be verification to ensure that the trends in the experimental results hold consistently across at least several models**. Additionally, there are confusing aspects regarding the details of the experiments, which should be described and justified more comprehensively (see the questions section for more details).\n\n**Answer:**\n\nWe chose the OPT model because it is a general GPT-2-like architecture and a predecessor of the LLaMA models. Furthermore, we extensively trained it from scratch, which required significant computational resources and time, making it challenging to replicate the same experiments with other models.\n\nWhile we believe newer architectures will improve both precision and recall, we do not expect significantly different behavior, e.g. across sampling strategies. To verify this belief, we are currently training a Llama 3.1 1B model. We hope the results will be in before the end of the discussion period.\n\nRegarding datasets, to the best of our knowledge, there are no other comprehensive datasets apart from GDB-13 that satisfy the conditions outlined in our paper. While we are limited to starting from GDB-13, we ensured diverse datasets for training and evaluating the recall. We believe we achieved quite a wide diversity, as the recall of the tested approaches varies between 12% and 58% (Table 2). This result highlights the impact of the \u201ccomplexity\u201d of the \u201clanguage\u201d being modeled by the LM. On the other hand, Tables 6 and 7 show that the general findings (e.g. 
the role of pretraining or string representations) translate well across the datasets.\"}", "{\"title\": \"Predicting precision and recall works with Llama as well\", \"comment\": \"Here are the results of predicting the precision and recall of fine-tuned Llama models (pretrained and fine-tuned on canonical SELFIES)\\n\\n| Model | Metric | Precision | Predicted Precision | Difference | Recall | Predicted Recall | Difference |\\n|--------------|---------|-----------|---------------------|------------|---------|------------------|------------|\\n| **OPT-1.2B** | S_asp | 75.7% | 74.0% | 1.7% | 8.61% | 8.43% | 0.18% |\\n| | S_sas | 80.6% | 79.9% | 0.6% | 11.25% | 11.16% | 0.08% |\\n| | S_d>p | 68.3% | 66.5% | 1.7% | 6.95% | 6.78% | 0.17% |\\n| | S_d=p | 14.1% | 13.5% | 0.6% | 1.73% | 1.65% | 0.07% |\\n| **LLAMA-3.2**| S_asp | 76.0% | 74.2% | 1.8% | 8.65% | 8.46% | 0.19% |\\n| | S_sas | 81.0% | 80.0% | 1.0% | 11.30% | 11.18% | 0.13% |\\n| | S_d>p | 68.6% | 67.2% | 1.4% | 6.98% | 6.85% | 0.13% |\\n| | S_d=p | 15.2% | 14.3% | 0.8% | 1.86% | 1.75% | 0.10% |\\n\\nAs mentioned before, Llama models have slightly better precision and recall. The predicted precision and recall metrics for Llama models are also slightly higher than the predictions for OPT, which implies that the predictor can be reliably used to compare two models.\"}", "{\"summary\": \"This paper introduces a benchmark for evaluating the recall of language models in the domain of small organic molecules. Specifically, based on the famous dataset GDB-13, the authors prepare a new dataset with four subsets, e.g., a new subset contains molecules that share a certain percentage of substructures with aspirin. Based on the constructed dataset, the molecule generation capability of language models (LMs) in terms of recall before and after fine-tuning has been evaluated. A new method for predicting the recall of LMs has also been designed. 
The average probability of a desired molecule to be generated and the ground truth recall values are used to build a regression model for the recall prediction. The evaluation demonstrated the correlation is more than 0.99. Finally, a recall-oriented molecule generation method and a loss function have been introduced to boost the recall of LMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. An interesting and important problem in analyzing the recall of language models.\\n2. Multiple solutions with promising results have been proposed in the same work\\n3. The paper is well-written\", \"weaknesses\": \"1. Even though the motivation is clear and good, the studied objective does not fit the motivation well, is the recall metric more important in the molecule generation domain?\\n2. Many design choices are unclear, e.g., why use Beam search in section 3.4 not others?\\n3. Many problems, e.g., capability estimation and new loss design, have been studied, but each of them lacks a comparison with baselines.\\n\\nOverall, this paper studies an important problem and proposes promising solutions for recall estimation and LMs enhancement. However, there are some concerns that need to be addressed.\\n\\nFirstly, even though the main point, evaluating whether a model can generate all correct outputs is important for safety-critical problems, it is unclear whether this is the case for the studied objective molecule generation. It is better to give clear motivation for the importance of evaluating recall for this task. \\n\\nFor the subset construction, in Table 1, it is unclear how the threshold is determined, e.g., 0.4 for Sasp and 0.2 \\u2264 sim(m, d) \\u2264 0.2165. Please clarify it.\\n\\nIn Section 4.1, Table 2 and Table 3 suggest different solutions as the best, which one we should accept in practice. 
It is better to add more discussion here.\\n\\nIn Section 4.2, considering the recall estimation, there are many works that have been proposed to evaluate deep learning models in an unsupervised manner [1, 2, 3], it is necessary to at least discuss the difference between the proposed method and these works.\\n\\nIn Section 4.3, it is unclear why Beam search is used here since there are many other options (search methods). \\n\\nIn Section 4.4, first, it is better to add baselines without using the designed loss function in Table 5. Besides, the recall values decreased after comparing the results in Table 5 and Table 4. It is unclear which factors lead to this degradation. \\n \\n[1] Unsupervised Evaluation of Code LLMs with Round-Trip Correctness.\\t\\n[2] Estimating Model Performance Under Covariate Shift Without Labels.\\n[3] Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift\", \"questions\": \"Please check my comments above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper identifies the challenges in evaluating the recall of generative models and introduces a recall benchmark in the domain of molecular generation. It also proposes sampling strategies and loss formulations to enhance recall.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper is well-written and easy to understand, addressing a problem that has not been extensively explored before. Additionally, the paper addresses crucial research directions, such as measuring recall without generation and methods to enhance recall, presenting intriguing experimental results.\", \"weaknesses\": \"**Scalability of Research**\\n\\nThe study in this paper is limited to a specific domain, namely molecular generation, and there needs to be a discussion on how this research can be extended to other domains. 
For example, a crucial aspect of measuring recall, as highlighted in the paper, is identifying the equivalence class of the model\u2019s generated results. As mentioned in lines 60-62, there is a technique for identifying equivalence classes for SELFIES strings. How could this issue be addressed in other domains you mentioned in the introduction, such as \u201cvulnerable code generation\u201d?\n\n\n**Completeness in Method**\n\nIn my opinion, the sections proposing the sampling strategy and loss to improve the model\u2019s recall are crucial for establishing the novelty of your paper. However, these aspects are not fully developed and lack sufficient explanation. For instance, in the case of the recall-oriented loss function, the approach of changing the aggregation to min or max seems quite extreme to me, with significant potential for refinement. Additionally, the proposed method only showed effectiveness for a very small and underperforming model with 800K parameters. Therefore, improvements in this area are essential. Additionally, the motivation for using beam search in recall-oriented generation and the intuition behind why increasing the beam size leads to improved recall need to be more thoroughly explained.\n\n**Evaluation**\n\nMost experiments in this paper are validated using a single model and dataset, making it difficult to consider the proposed benchmark method and the approaches to improve recall as thoroughly validated. I believe there should be verification to ensure that the trends in the experimental results hold consistently across at least several models.\nAdditionally, there are confusing aspects regarding the details of the experiments, which should be described and justified more comprehensively (see the questions section for more details).\", \"questions\": [\"In my understanding, the process you described in lines 236-237 is aimed at generating the set of every correct generation, $\\mathbb S$, for evaluation purposes. 
Is this correct? Additionally, how can you ensure that the generated results accurately represent every correct generation?\", \"As shown in Table 2, recall shows a correlation with the complexity of molecules, whereas precision does not. Is there a specific reason for this? I\\u2019m curious about which aspects of the recall metric lead to this outcome.\", \"What is the input to the model when performing generation with an LLM for recall/precision evaluation?\", \"What exactly is the purpose of the validation set mentioned in line 220, and is there a specific reason for using only 10,000 instances?\", \"How does the cost (time complexity, memory, etc.) change with the beam size in 4.3?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the author for providing detailed responses that have addressed most of my concerns and questions.\\n\\nHowever, some concerns still remain.\\n\\nFirst, if the scope of this paper is focused on benchmarking, I believe more extensive experiments should have been conducted (at least across various models).\\n\\nSecond, based on the experimental results of this paper on the loss function, it is somewhat risky to suggest \\u2018capacity-recall trade-offs in objective design.\\u2019 At the very least, the performance of each loss objective with respect to model parameter size should have been measured more thoroughly (with finer granularity) to establish such a trend.\\n\\nThird, while the implications drawn from the experimental results\\u2014such as \\u2018additional considerations are necessary to develop loss functions that significantly improve recall of model generations\\u2019\\u2014are noted, I think that they are too weak to enhance the value of this paper.\\n\\nTherefore, I will maintain my score.\"}", "{\"title\": \"Response on scalability of research\", \"comment\": \"Thank you for the detailed review of the 
paper.\\n\\n**Weaknesses:**\\n\\n**Scalability of Research**\\n\\nThe study in this paper is limited to a specific domain, namely molecular generation, and there **needs to be a discussion on how this research can be extended to other domains.** For example, a crucial aspect of measuring recall, as highlighted in the paper, is identifying the equivalence class of the model\\u2019s generated results. As mentioned in lines 60-62, there is a technique for identifying equivalence classes for SELFIES strings. **How could this issue be addressed in other domains you mentioned in the introduction, such as \\u201cvulnerable code generation\\u201d?**\\n\\n**Answer:**\\n\\nDespite the fact that proposed extensions of this work to other domains is beyond the immediate scope of this paper, we agree that accurately conveying the potential of this kind of analysis to other fields is essential. Initially, as presented in the molecular domain, the set constraints and equivalence classes will need to be simple to define and validate. To start, let\\u2019s simplify the \\u201cvulnerable code generation\\u201d setting, to a \\u201cvulnerable function generation\\u201d setting. Additionally, since functions with given behaviour can span infinite amount of text, let\\u2019s further constrain the program space to those composed of less than or equal to 1000 characters in the python programming language, and no side effects. \\n\\nLet\\u2019s say that the domain specific behaviour is that the introduction of the function in a codebase makes it open to a prespecified vulnerability. In this case an equivalence class of functions could be all of those with equivalent IO behaviour (i.e. for every possible function input, same output). In this case, an equivalence class of functions could be large and include many different programming constructs, but would have virtually identical behavior. 
\\n\\nAnother alternative would be to define functions which make a codebase vulnerable to any kind of vulnerability, and create function equivalence classes based on the category of vulnerability introduced. Some examples would be: weak random numbers, race conditions, buffer overflow, error swallowing, etc\\u2026 In this case the equivalence class is larger, but IO behaviour doesn\\u2019t have to be verified. \\n\\nIn both cases, an increased recall demonstrates a stronger threat model, in that it can generate a more complete set of threats to a given codebase. A more concrete realization of such problem settings would require a focused effort by researchers working within the cybersecurity domain, but would provide insights into performance of models in generating malicious code.\"}", "{\"title\": \"One more language model added!\", \"comment\": \"All experiments in our paper were done using OPT models. This week we have trained Llama 3.2 1B on the same pretraining set (canonical SELFIES), fine-tuned on canonical versions of the four datasets, generated 1M molecules from each of the models, and computed their precision and recall.\", \"here_are_the_results\": \"| Metric | OPT 1.2B Precision | Llama 3.2 1B Precision | OPT 1.2B Recall | Llama 3.2 1B Recall |\\n|--------|---------------------|------------------------|----------------|---------------------|\\n| S_asp | 75.64 | 76.04 | 8.61 | 8.65 |\\n| S_sas | 80.55 | 80.96 | 11.25 | 11.3 |\\n| S_d>p | 68.31 | 68.59 | 6.95 | 6.98 |\\n| S_d=p | 14.04 | 15.18 | 1.72 | 1.86 |\\n\\nEssentially, Llama 3.2 is slightly and consistently better across all metrics. Other than that there are no differences in behavior. We have fine-tuned on randomized versions as well, and the outcome is exactly the same: all scores are a bit better, but no difference in the relative rankings.\\n\\nWe will add this to the manuscript.\\n\\nThe idea of having more model sizes between OPT-800K and OPT-125M to have finer granularity is a good one. 
There are no \"standard\" sizes of OPT in between, but we will create new ones. Thanks for this.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper explores the problem of evaluating language models with a focus on recall as opposed to accuracy and introduces a new benchmark for molecules. The methodology primarily involves random sampling with temperature and beam search using large beam width for decoding using a recall-aware loss function. Using a dataset of organic molecules, the paper shows that recall can be predicted using perplexity on a validation set.\n\nThe reviewer assessments were mixed on this paper. All reviewers appreciated the research question, formulation, and benchmarking. The negative reviewers complained about the lack of technical novelty and/or more comprehensive experiments. The authors' responses to some of the other questions were mostly satisfactory, even though a couple of reviewers didn't respond to the rebuttal.\n\nIn my own reading and assessment of the paper, it certainly has some strengths but needs improvement for acceptance. The paper can potentially take two routes to strengthen it.\n- Increase the technical novelty and add additional experiments on more molecule datasets (if the focus is on molecules as a case study).\n- Increase the experiments by adding more use-cases (as alluded in the paper) beyond molecules to drive home the general message for benchmarking and importance of this research.\n\nTherefore, I'm recommending to reject this paper and strongly encourage the authors to improve the paper based on the feedback from reviewers for resubmission.\", \"additional_comments_on_reviewer_discussion\": \"The negative reviewers complained about the lack of technical novelty and/or more comprehensive experiments. 
The authors' responses to some of the other questions were mostly satisfactory, even though a couple of reviewers didn't respond to the rebuttal.\\n\\nBased on my own reading, the paper needs improvement in methodology and/or more comprehensive experiments.\"}", "{\"summary\": \"This paper presents a benchmark for modelling molecules, based on GDB-13 (an exhaustive set of molecules with at most 13 heavy atoms that satisfy certain conditions). The authors pretrained LMs to generate the molecule sequences, and aim to bring up recall via 1) better sampling in generation and 2) better training data design. In addition to that, the authors proposed ways to predict the recall value with a small-scale experiment and a set of empirical studies on how one should best represent the molecules in LM inputs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Maximizing recall is indeed valuable for a lot of applications, as the authors discussed in the paper; this paper is of empirical importance.\\n2. The formulation of the problem is novel, and the molecular generation domain provides an excellent testbed due to well-defined equivalence classes and complete reference sets.\\n3. The experiments are done with rigor. I like the comprehensive analysis of factors affecting recall (pretraining, molecular representations, etc.)\\n4. The dataset and benchmark would make a good contribution to the community.\", \"weaknesses\": \"My main concern with this paper is around its technical contributions:\\n1. The authors proposed using random sampling with temperature and beam search (with a large beam size) to improve recall coverage. These two methods are well-known methods in language models' (LM) generation, and I was expecting a novel generation approach, such as one that penalizes the likelihood of already generated sequences.\\n2. 
The method that predicts recall has a lot of similarities with the perplexity measure in language modelling; could the authors clarify how the proposed metric differs from the perplexity-based measures?\\n3. Removing duplicates and selecting data in each batch are sensible approaches, but they don't appear to be anything novel.\\n\\nI have some minor questions listed in the section below.\", \"questions\": \"1. In figure 2, the authors stated that \\\"The plot indicates that the recall is close to saturation at 10 million generations, implying that this model will not cover 90% of the molecules even with 50 million generations.\\\" To me, the coverage function is naturally sub-linear: as you repeatedly take samples from a fixed distribution, the likelihood of getting a new unseen sample gradually goes down, so I am not sure if this (the sublinear trend) is a problem. And if it is, does the authors' proposed approach improve the trend to be somewhat linear? I think that will be an exciting result to see.\\n\\n2. SMILES v.s. SELFIES. I am not an expert on the molecule modelling topic, but from Table 7, it seems SMILES works better than SELFIES when the data is in canonical form, so why choose SELFIES as the main representation form?\\n\\n3. Writings:\\n[Line 76], (Remove \\\"Finally\\\"?) Finally, LLMs have recently demonstrated strong performance on these tasks\\n[Line 310] I am not sure about calling this expression \\\"an average probability\\\"; it looks like a sum of probabilities.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Results with Llama\", \"comment\": \"All experiments in our paper were done using OPT models. 
This week we have trained Llama 3.2 1B on the same pretraining set (canonical SELFIES), fine-tuned on canonical versions of the four datasets, generated 1M molecules from each of the models, and computed their precision and recall.\", \"here_are_the_results\": \"| Metric | OPT 1.2B Precision | Llama 3.2 1B Precision | OPT 1.2B Recall | Llama 3.2 1B Recall |\\n|--------|---------------------|------------------------|----------------|---------------------|\\n| S_asp | 75.64 | 76.04 | 8.61 | 8.65 |\\n| S_sas | 80.55 | 80.96 | 11.25 | 11.3 |\\n| S_d>p | 68.31 | 68.59 | 6.95 | 6.98 |\\n| S_d=p | 14.04 | 15.18 | 1.72 | 1.86 |\\n\\nEssentially, Llama 3.2 is slightly and consistently better across all metrics. Other than that there are no differences in behavior. We have fine-tuned on randomized versions as well, and the outcome is exactly the same: all scores are a bit better, but no difference in the relative rankings.\\n\\nWe will add this to the manuscript. Our initial intuition is confirmed.\"}", "{\"title\": \"Response (part 2)\", \"comment\": \"**In Section 4.2, considering the recall estimation, there are many works that have been proposed to evaluate deep learning models in an unsupervised manner [1, 2, 3], it is necessary to at least discuss the difference between the proposed method and these works.**\\n\\n**[1] Unsupervised Evaluation of Code LLMs with Round-Trip Correctness. [2] Estimating Model Performance Under Covariate Shift Without Labels. [3] Agreement-on-the-Line: Predicting the Performance of Neural Networks under Distribution Shift**\\n\\n**Answer:**\\nThe mentioned papers are not about measuring the recall and do not seem to be relevant to this work. 
Here is a detailed description and comparison with one of the papers mentioned.\\n\\n**Unsupervised Evaluation of Code LLMs with Round-Trip Correctness.**\\n - This paper adapts the idea of back-translation, where a description is generated from a code snippet, and then the code is regenerated from the description. The initial code and the regenerated code are then compared using similarity metrics such as exact match, CodeBLEU, and unit tests.\\n - The key difference between this work and ours is that their method does not measure recall, nor does it attempt to predict recall as a metric upfront.\\n - Furthermore, their approach focuses on conditional generation, whereas our method is designed for unconditional generation.\\n\\n**In Section 4.3, it is unclear why Beam search is used here since there are many other options (search methods).**\\n\\n**Answer:**\\n\\nIt is true that a breadth of search methods exists for large model generation. However, since this study is primarily focused on a novel evaluation setting, we employed beam search to showcase the effectiveness of a commonly used non-i.i.d. generation method for the setting that we propose. \\n\\n---\\n\\nWhile other search methods could also be explored, our primary goal in this section was to showcase the effectiveness of the beam search method when we choose to keep all generated outputs, which are all unique. \\n\\n**In Section 4.4, first, it is better to add baselines without using the designed loss function in Table 5. Besides, the recall values decreased after comparing the results in Table 5 and Table 4. 
It is unclear which factors lead to this degradation.**\\n\\n**Answer:**\\n\\n We would like to clarify that the baseline is indeed included in Table 5, specifically the \\\"Aggregation with Mean Loss.\\\" Additionally, we demonstrate in the same table that using the proposed Minimum Loss function allows for achieving higher precision and recall compared to the baseline, particularly when applied to a smaller model.\\n\\n The results in Table 4 and Table 5 correspond to evaluations of different experimental setups. In Table 4, the model is trained on the default setting, which uses a training set comprising 1 million unique molecules (SELFIES), as described in lines 229\\u2013231. In contrast, Table 5 reports results from an experiment described in lines 421\\u2013426, where the training set is augmented by generating 8 SELFIES representations for each of the 1 million molecules. This augmentation introduces variability that impacts the recall values.\"}", "{\"title\": \"Responses\", \"comment\": \"Thank you for the review. Please find the responses to the weaknesses below.\\n\\n### 1. On readability due to chemical terminology\\n\\nThank you for bringing to our attention the potential difficulties in the interpretation of our work caused by excessive use of domain-specific language. In our paper, we included citations to the works which formulated the SMILES and SELFIES molecular representations. However, we recognize that this doesn\\u2019t necessarily provide sufficient continuity for a reader with limited exposure to chemistry. \\n\\nSMILES and SELFIES are both string representations, which are linearized representations of 2D molecular graphs. We attach an image below which gives a visualization of the SMILES and SELFIES strings along with how the substrings map to nodes and edges on the molecular graph. 
Notably, SELFIES were designed after SMILES with the express goal of creating a similar representation where any sequence of tokens from the SELFIES vocabulary corresponds to a valid molecule. SMILES do not have this property and thus some SMILES strings do not correspond to a valid molecular graph. We are going to add a paragraph on this in the revised manuscript.\\n\\nWith respect to the datasets we investigated in the study, we included statistics on the length of the SELFIES representations, the number of distinct randomized SELFIES per molecule for each subset, as well as a brief explanation of the criteria which define each subset. The larger dataset GDB-13 from which we derive these subsets is described in some detail in lines 160-164, and we refer readers to the original publication for additional details. A large variety of statistics about GDB-13 is available in that publication, some of which are attached below. We will be happy to follow your suggestions about which of these (or other) statistics would most aid in improving the clarity of the problem setting.\\n\\n![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/c0f972b2-68c5-4f1a-9e56-e587b36b2392/9b4a8fdf-9e11-493b-b3bd-34a8385e6064/image.png)\\n\\n![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/c0f972b2-68c5-4f1a-9e56-e587b36b2392/0edee59f-7db1-4399-88ee-80ec51ce5b4e/image.png)\\n\\n![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/c0f972b2-68c5-4f1a-9e56-e587b36b2392/bcd4bf65-58d5-4c9c-9fa1-735a91912bb4/image.png)\\n\\n![image.png](https://prod-files-secure.s3.us-west-2.amazonaws.com/c0f972b2-68c5-4f1a-9e56-e587b36b2392/d9fcc807-6cf8-42ff-8223-97fadc4ddbfd/image.png)\\n\\n### 2. On estimating recall\\n\\nThere are several reasons why it may be useful to be able to accurately estimate the recall a model would achieve on a given subset without performing generations. 
The primary reason is that our method greatly **reduces the computational cost of getting this value**, which allows the process to be used for **model selection**. In practice, one can tune i.i.d. generation hyperparameters like temperature or Top-K in nucleus sampling to maximize recall without actually generating a large number of molecules for each hyperparameter. Figure 3 hints that the estimated values can be compared across other hyperparameters as well. We are going to motivate this more explicitly in the revised manuscript.\\n\\nAdditionally, in cases where the entire closed set of desired generations is not known, this method enables the estimation of recall on that set using a much smaller subset. \\n\\nWe do not believe that the work is misleading in how it treats the recall metric. The paper **does not claim that recall is a new metric, nor does it state that the predicted recall is recall itself**. For example, Table 4 shows the actual recall calculated after performing generations. We report the predicted recall only in Figure 3, and the axis labels reflect our explicit distinction between true recall and the recall predicted by our method. \\n\\n### 3. On the novelty of beam search\\n\\nOur recall-adapted generation strategy is not substantially different from standard beam search, and we do not claim that it is a technical contribution of this work. Rather, the formulation of the recall metric provides new intuition and motivation for comparatively extreme beam search generation hyperparameters, where the beam size is equal to the generation size. This configuration demonstrably increases recall, and performs better than other generation methods to this end. \\n\\nWe agree that the abstract of the paper could be interpreted as hinting towards significant novelty around beam search. 
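To make the distinction between true recall and the probability-based predicted recall (point 2 above) concrete, here is a minimal sketch of a generation-free estimate. The coupon-collector-style formula (expected fraction of a held-out subset that would appear at least once in N i.i.d. samples) and the log-probability values are illustrative assumptions on our part, not necessarily the exact estimator used in the paper.

```python
import math

def predicted_recall(seq_log_probs, n_generations):
    """Estimate recall on a held-out subset of the target set.

    seq_log_probs: model log-probability of each held-out sequence.
    n_generations: number of i.i.d. samples the model would draw.

    For a sequence with probability p, the chance it appears at least
    once in N i.i.d. samples is 1 - (1 - p)^N; averaging this over the
    held-out subset gives an expected recall with no generation needed.
    """
    total = 0.0
    for lp in seq_log_probs:
        p = math.exp(lp)
        total += 1.0 - (1.0 - p) ** n_generations
    return total / len(seq_log_probs)

# Toy illustration: three held-out molecules with assumed log-probs.
est = predicted_recall([-14.0, -16.0, -20.0], n_generations=10_000_000)
```

Because the estimate is monotone in the number of generations and in the sequence probabilities, it can be used to compare sampling hyperparameters (e.g. temperature) without actually generating millions of molecules.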
We will adjust the abstract accordingly.\"}", "{\"title\": \"Response on completeness in method\", \"comment\": \"**Completeness in Method**\\n\\n**In my opinion, the sections proposing the sampling strategy and loss to improve the model\\u2019s recall are crucial for establishing the novelty of your paper. However, these aspects are not fully developed and lack sufficient explanation. For instance, in the case of the recall-oriented loss function, the approach of changing the aggregation to min or max seems quite extreme to me, with significant potential for refinement. Additionally, the proposed method only showed effectiveness for a very small and underperforming model with 800K parameters. Therefore, improvements in this area are essential. Additionally,** **the motivation for using beam search in recall-oriented generation and the intuition behind why increasing the beam size leads to improved recall need to be more thoroughly explained.**\\n\\n**Answer:**\\n\\nWe appreciate the feedback regarding the recall-oriented modeling and generation strategies described in our work. It is true that these elements of the research would benefit from additional exploration, development and explanation; the latter issue we would be sure to correct in a revision. However, we respectfully disagree that they are insufficiently developed and do not provide novelty. We address the concerns corresponding to those elements below.\\n\\nPlease also note that the scope of this paper is to present a **benchmark** to facilitate research on the recall of LMs. We thank you for confirming that this question is underexplored in the literature. Designing significantly novel methods that maximize recall is beyond the scope of this work. 
We tried to cover all \\u201clow-hanging\\u201d methods known to the community to set up the scene with reasonably strong baselines.\\n\\n**Regarding the loss function**\\n\\nThe min/max aggregation approach may appear extreme, but it was deliberately chosen to test boundary conditions of the recall-precision trade-off space. While you are correct that the improvement was only observed in the 800K parameter model, this finding is actually quite significant for several reasons. It demonstrates that recall optimization strategies may need to be parameter-count dependent, and suggests there may be fundamental capacity-recall trade-offs in objective design. Additionally, the negative results in larger models are themselves informative, indicating that additional considerations are necessary to develop loss functions that significantly improve the recall of model generations. Furthermore, we articulate in Section 4.4 that designing recall-oriented loss functions belongs to future work, and by providing simple approaches towards this end, establish important initial baselines and observations upon which subsequent research can develop greater understanding and more performant methods. In a revision, we could include implementations of other loss functions, and additional analyses on the relationship between recall, training objective and model scale if it would strengthen our work. \\n\\n**Regarding the sampling strategy**\\n\\nThere are two reasons why the generations go below the \\u201cideal\\u201d curve (the blue dashed line on Figure 2): (a) imperfect precision of generations, (b) duplications.\\n\\nRegular autoregressive generation has both problems. Precision is ~constant at 75% (Table 4), and there are many duplications. \\u201cUpper bound (i.i.d)\\u201d shows the case when the precision is ideal, but the duplications are there. 
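The two effects just listed (imperfect precision and duplication under i.i.d. sampling) can be illustrated with a toy simulation. The skewed distribution, target-set size, and 75% precision below are hypothetical stand-ins, not the paper's actual generation statistics.

```python
import random

def simulate_coverage(n_samples, target_size=1000, precision=0.75, seed=0):
    """Draw i.i.d. samples from a skewed distribution over a finite
    target set and return the fraction of the set covered (recall).

    A fraction (1 - precision) of samples is discarded as invalid,
    mimicking imperfect generation precision; repeated draws of the
    same element contribute nothing, mimicking duplication.
    """
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_samples):
        if rng.random() > precision:  # invalid generation, discarded
            continue
        # Skewed sampling: squaring a uniform draw favors low indices.
        idx = int((rng.random() ** 2) * target_size)
        seen.add(idx)
    return len(seen) / target_size

low = simulate_coverage(2_000)
high = simulate_coverage(20_000)
# Coverage grows, but far more slowly than the 10x increase in samples.
```

Coverage keeps rising with more samples, but each additional sample is less and less likely to be new, mirroring the sub-linear saturation discussed around Figure 2.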
\\n\\n*Beam search solves the duplication issue*, which means that beam search with a larger beam size will inevitably produce more distinct molecules, so the recall cannot decrease. Unfortunately, the precision gets gradually worse as one increases the beam size. The reason is that beam search naturally ranks the molecules by their perplexity, and the top ones have higher precision.\\n\\nNote that, surprisingly, the two different issues for these two methods (beam vs. upper bound) produce very similar recall. We will add a paragraph with these clarifications in the manuscript.\"}", "{\"summary\": \"This paper introduces a benchmark for evaluating models based on recall rather than just accuracy. The authors tackle two challenges: the lack of complete correct output sets and the presence of multiple similar outputs. Using small organic molecules from the GDB-13 database, they fine-tune models and develop a method to predict recall based on perplexity. They also propose a novel beam search decoding method to maximize recall by avoiding duplicates, alongside a recall-aware loss function. This approach aims to enhance the ability of GLMs to generate all correct outputs, with potential applications in various fields, including security.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"This paper explores the evaluation of recall rates for small language models, which is a meaningful endeavor.\", \"The paper investigates various methods to enhance the recall rates of models and has achieved some positive results.\"], \"weaknesses\": [\"The contributions of this paper are limited. On one hand, in improving recall through sampling methods and loss functions, the authors merely attempt different strategies, which can sometimes harm precision, and no solutions are provided. 
On the other hand, the improvements through fine-tuning appear to offer no significant contribution, as it is generally expected that fine-tuning would enhance performance on a specific task.\", \"The model is too singular, as the experiments in this paper only include the OPT-1.3B model. Therefore, the evaluation results and methods for enhancing recall may not generalize well.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new benchmark of molecules for evaluating generative language models with a focus on recall. It aims to investigate the model's ability on tasks requiring distinct output generation, like detecting all vulnerabilities in code. Using organic molecule dataset, the study shows that model recall can be anticipated via perplexity on a validation set. Moreover, the authors use beam search decoding to reduce duplicates and a recall-aware loss function to improve performance, providing insights into molecular representation and model pretraining effects.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper presents a meaningful investigation into the recall of model generation, with a well-articulated and compelling motivation.\", \"weaknesses\": \"1. From section 3.1 onward, this paper becomes quite difficult to follow, largely due to the use of specialized terminology from fields like chemistry without providing sufficient foundational overviews or introductory explanations. This approach makes it challenging for readers to fully grasp the content and nuances of the work. For instance, important details and statistics regarding the dataset collected by the authors are not included, and terms like SELFIES are mentioned without any straightforward elaboration to help readers understand what SELFIES actually represents. 
This lack of accessible explanations hinders the reader\\u2019s ability to form a clear understanding of the paper\\u2019s specifics. I recommend that the authors incorporate diagrams or more detailed descriptions of key terminology to enhance clarity.\\n\\n2. In section 4.2, a new method for estimating recall is proposed. First, the statement \\\"Given that evaluating recall provides a meaningful and interpretable measure of an approach\\u2019s ability to model data, estimating recall without needing to perform generations would be useful\\\" lacks a convincing motivation for why recall estimation without actual generation is necessary. There is no clear justification for the need to use an alternative method to evaluate recall. Furthermore, using probability to estimate recall does not align with the standard definition of recall, which traditionally measures the proportion of correctly generated instances rather than a probabilistic expectation. Thus, it is both imprecise and misleading to label this metric as recall. For instance, in earlier sections (Table 2), the authors appear to use a conventional method for calculating recall; however, after introducing this new approach, they apply it in Table 4 but use the same metric name. This inconsistency undermines reliability and creates confusion regarding the validity of the reported recall values.\\n\\n\\n3. In section 4.3, I don\\u2019t see a substantial difference between your proposed recall-oriented generation and the standard beam search. \\n\\n4. The statement \\\"Mean aggregation is equivalent to the regular loss function\\\" lacks clarity\\u2014specifically, it is not defined what the \\u201cregular loss function\\u201d refers to. Furthermore, the section does not directly present the actual loss function or provide a detailed explanation. Instead, it relies solely on textual descriptions, which makes it difficult to understand the specifics of the proposed loss. 
Including the explicit mathematical form of the loss function along with a step-by-step explanation would significantly improve clarity and accessibility.\\n\\n5. In addition to the presentation issues mentioned above, the paper lacks a coherent structure throughout both the methods and experiments sections. The presentation feels fragmented, and critical details regarding the experimental setup, such as baseline configurations, are insufficiently described. To improve clarity, a major revision is needed to reorganize the paper, providing a more cohesive structure and a thorough explanation of the experimental settings.\", \"questions\": \"Please refer to the weakness part\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
Dl6nkKKvlX
Balancing Act: Diversity and Consistency in Large Language Model Ensembles
[ "Ahmed Abdulaal", "Chen Jin", "Nina Montaña-Brown", "Aryo Pradipta Gema", "Daniel C. Castro", "Daniel C. Alexander", "Philip Alexander Teare", "Tom Diethe", "Dino Oglic", "Amrutha Saseendran" ]
Ensembling strategies for Large Language Models (LLMs) have demonstrated significant potential in improving performance across various tasks by combining the strengths of individual models. However, identifying the most effective ensembling method remains an open challenge, as neither maximizing output consistency through self-consistency decoding nor enhancing model diversity via frameworks like "Mixture of Agents" has proven universally optimal. Motivated by this, we propose a unified framework to examine the trade-offs between task performance, model diversity, and output consistency in ensembles. More specifically, we introduce a consistency score that defines a gating mechanism for mixtures of agents and an algorithm for mixture refinement to investigate these trade-offs at the semantic and model levels, respectively. We incorporate our insights into a novel inference-time LLM ensembling strategy called the Dynamic Mixture of Agents (DMoA) and demonstrate that it achieves a new state-of-the-art result in the challenging Big Bench Hard mixed evaluations benchmark. Our analysis reveals that cross-validation bias can enhance performance, contingent on the expertise of the constituent models. We further demonstrate that distinct reasoning tasks—such as arithmetic reasoning, commonsense reasoning, and instruction following—require different model capabilities, leading to inherent task-dependent trade-offs that DMoA balances effectively.
[ "LLM", "ensembling", "diversity", "consistency", "mixture of agents", "self decoding" ]
Accept (Poster)
https://openreview.net/pdf?id=Dl6nkKKvlX
https://openreview.net/forum?id=Dl6nkKKvlX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wtxVG1wGZ6", "vqEaIncgVk", "rpEsHTsQxD", "pkSh6vTUyW", "piN5P0hMJv", "pZUyCNKCIJ", "nQ6rJv3osT", "f4xw1vQEK3", "eHxVGdZe7P", "bYoOjPKs4F", "aTXVU9qa6c", "X5FhHSA9ob", "UpmjFKXXmu", "GdoFuDE66t", "6eJmLPGnPy", "1Ac21KyHTP", "0jMC8B6TOw" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732029256355, 1730031491220, 1732028377387, 1732030329897, 1732027208337, 1730430090425, 1730302240428, 1737523887181, 1734767633383, 1732038587601, 1732029323857, 1732029877956, 1730566384880, 1732031590230, 1732028068320, 1732027619615, 1732088381060 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Reviewer_7TjC" ], [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Reviewer_jGzM" ], [ "ICLR.cc/2025/Conference/Submission8091/Reviewer_EPHj" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8091/Area_Chair_Cgnn" ], [ "ICLR.cc/2025/Conference/Submission8091/Reviewer_EPHj" ], [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Reviewer_bHjk" ], [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Reviewer_bHjk" ], [ "ICLR.cc/2025/Conference/Submission8091/Authors" ], [ "ICLR.cc/2025/Conference/Submission8091/Reviewer_7TjC" ] ], "structured_content_str": [ "{\"title\": \"Response 1 Part 1\", \"comment\": \"We thank the reviewer for their questions and insights. 
We are glad they found our methodology, presentation, and results to be good for most of our contributions. We are also deeply appreciative that they found our takeaways on diversity and consistency to be clear and well presented. We have made a number of amendments to our manuscript based on the feedback, and respond to the queries below.\\n\\n> No critical focus on looking at the intermediate reasoning - while I get that semantic diversity was a focus, I would have liked to see a deeper look at how diverse semantic reasoning looked at with a few examples - were there instances were correct reasonings by multiple models were still judged to be semantically diverse, etc. I liked Appendix H - it was a good start, but a detailed, controlled experiment could've provided readers with much more about EigenDivergence\\n\\nThis is a nice suggestion. In order to provide more insight into the links between EigenDivergence, semantic reasoning traces and correctness, we **performed an additional experiment in Appendix H**. We reproduce the main figure of this analysis here for convenience ([(anonymous fig. 1)](https://imgur.com/75BJCdB)). In summary, we subsample 400 questions across GSM8K and ARC-C and assess the relationship between the individual answer accuracy and semantic consistency in the layer as measured by the EigenDivergence (ED). We find that ED scores of individually correct queries are more negative on average (i.e., increase semantic consistency in the mixture) than individually incorrect queries, which are more positive (i.e. decrease semantic consistency if considered in the mixture). Additionally, in both datasets, we find that correct answers in layers with at least one incorrect query are more negative on average. For instance, in GSM8K, correct answers had a mean ED score of -0.269 compared to -0.065 for incorrect answers, with similar patterns observed in ARC-C. 
This suggests that filtering based on ED scores is more likely to lead to *removing diverse queries that are statistically more likely to be incorrect answers in close-ended tasks* like arithmetic and common-sense reasoning. \\n\\n> Can the details of 3.3 and 4.4 be clarified? [...]\\n\\nThank you for your feedback. The motivation behind the DMoA approach was to create a dynamic, task-specific ensembling framework informed by the results of the preceding experiments (GMoA and Mixture Optimization) and the ablation studies which examined the trade-offs between task performance and ensemble diversity/consistency. We investigated this trade-off at the semantic level (GMoA) and the mixture-composition level (mixture optimization). From these experiments we accrued three main insights: 1) When models agree, this tends to improve performance; 2) The utility of them agreeing is based on whether the models in the ensemble have sufficient expertise to answer the current query correctly; 3) Different tasks require different skills which appear to exist in a trade-off with one another. This motivated us to develop a \\u2018dynamic\\u2019 inference-time strategy (Section 3.3), which operationalises these insights. Namely, we identify a set of skills required to solve the current task, and then estimate which models might perform well given these skills. We construct an ensemble of these models at inference time, and aggregate their outputs before finally synthesising them into a final high-quality solution. Indeed, we show this outperforms \\u2018static\\u2019 ensembling (Section 4.4) and can achieve leading results in the challenging BBH benchmark. 
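For readers who want the DMoA flow just described in pseudocode form, the following is a deliberately simplified, hypothetical sketch of the select-propose-aggregate loop. The skill-identification step, per-model skill scores, model pool, and aggregator are all placeholder assumptions rather than the actual implementation.

```python
def dynamic_mixture_of_agents(query, model_pool, skill_scores,
                              identify_skills, aggregator, k=3):
    """Sketch of DMoA inference.

    1. Identify the skills the query requires.
    2. Rank candidate models by their (precomputed) scores on those
       skills and keep the top-k as the inference-time ensemble.
    3. Collect one proposal per selected model.
    4. Ask the aggregator model to synthesize a final answer.
    """
    skills = identify_skills(query)
    ranked = sorted(
        model_pool,
        key=lambda m: sum(skill_scores[m].get(s, 0.0) for s in skills),
        reverse=True,
    )
    ensemble = ranked[:k]
    proposals = [(m, model_pool[m](query)) for m in ensemble]
    return aggregator(query, proposals)

# Toy usage with stubbed models and a trivial aggregator:
pool = {"model_a": lambda q: "A", "model_b": lambda q: "B",
        "model_c": lambda q: "C"}
scores = {"model_a": {"math": 0.9}, "model_b": {"math": 0.2},
          "model_c": {"math": 0.5}}
answer = dynamic_mixture_of_agents(
    "2 + 2 = ?", pool, scores,
    identify_skills=lambda q: ["math"],
    aggregator=lambda q, props: props[0][1],  # take the top model's answer
    k=2,
)
# 'answer' is "A": model_a has the highest math score.
```

In the real system the stubbed callables would be LLM queries and the aggregator would be a synthesis prompt to a strong model, but the control flow is the same.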
Further to your feedback, we have made the following amendments to the manuscript: We have clarified the motivation for DMoA in Section 3.3 by more explicitly connecting the experiment to the prior experiments; We have clarified that the DMoA effectively mitigates some of the trade-off behaviours identified in the previous experiments; We have better motivated the experiment in the methods section in a number of areas. \\n\\n> Can Appendix F.2 be expanded to understand the behaviour of multiple sentence embeddings ? The 0.78 doesn't make much sense as an individual number - it is unclear if it a consequence of choice of embedding models / how much variance can exist, etc.\\n\\nWe have now **expanded the analysis in Appendix F.2** to include two additional embedding models, namely OpenAI\\u2019s text-embedding-3-large and text-embedding-ada-002. In all cases the correlation coefficient remains between 0.739 and 0.781 (3 s.f.), demonstrating minimal variance across different embedding models.\\n\\n> Typos in some parts - line 182 for example\\n\\nThank you very much indeed. We have corrected this.\"}", "{\"summary\": \"This paper proposes a unified framework to examine the trade-off between diversity and consistency in the final performance of model ensembles. The authors propose a dynamic mixture of agents approach to optimize the balance between task-specific capabilities, ultimately enhancing overall performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The motivation of balancing the model diversity and output consistency in model ensembles is compelling and well-founded.\\n2. The unified dynamic mixture of agents framework effectively addresses the trade-off between diversity and consistency across various tasks.\", \"weaknesses\": \"1. The proposed dynamic mixture of agents framework necessitates the individual search and optimization of MoA structures for each distinct task. 
The process relies on divergence filtering and mixture optimization, which is costly and requires additional task-specific datasets for evaluation.\\n2. The application of EigenScore, originally proposed for hallucination detection in a single model, presents inherent limitations when extended to an ensemble of multiple models. This is primarily due to the fact that the sentence-embedding spaces of various models are not aligned during their pre-training or fine-tuning phases. Consequently, these embeddings do not inhabit the same representational space, which poses challenges for direct comparison and aggregation across different models.\\n3. The design of mixture optimization leads to a scenario where the final MoA model is absolutely dominated by a single model, as shown in Fig. 3 (left), since the search process for each run is determined by a greedy algorithm that replaces the model with the lowest delta with the one with the highest delta.\", \"questions\": \"1. In Section 4.4, Table 2 presents a comparison between DMoA/Sonnet and the Claude-3.5-Sonnet baseline. While DMoA/Sonnet demonstrates a marginal performance improvement (91.85 vs. 90.20 normalized accuracy on BBH), it is important to consider the associated computational costs. DMoA/Sonnet necessitates multiple inferences across diverse models and subsequent aggregations using Claude-3.5-Sonnet. This process incurs significantly higher expenses compared to the baseline due to the additional model inferences and the substantially longer input required for aggregation. Moreover, the efficacy of the MoA approach is heavily contingent upon the final aggregation model employed. When Claude-3.5-Sonnet is not utilized as the aggregation model in the DMoA approach, a substantial performance degradation is observed (90.20 vs. 83.63 normalized accuracy on BBH).\\n2. 
What if testing DMoA on the seven benchmarks (AlpacaEval 2.0, MT-Bench, GSM8K, MATH, CSQA, ARC-C, ARC-E) in accordance with the experimental setups in Sections 4.1 and 4.2? \\n3. Based on the experimental findings presented in Section 4.3, several key conclusions can be drawn regarding the impact of diversity and consistency on various cognitive abilities. Firstly, high levels of diversity appear to have a detrimental effect across all measured abilities. Secondly, strong consistency enhances reasoning and mathematical capabilities, but impairs the instruction-following proficiency. Lastly, when strong consistency is coupled with an appropriate degree of supplemental diversity, there is an observed improvement in instruction-following abilities, though this comes at the cost of diminished mathematical and reasoning skills. Compared to the discussion in current version, the above summary appears to more accurately reflect the core idea of this paper: balancing diversity and consistency for model ensemble across various tasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 1 Part 2\", \"comment\": \"> This leads to insight 1 in section 4.4 that \\\"Maximizing semantic diversity harms performance (Sec. 4.3), indicating that some cross-validation between outputs is necessary for high-quality results, even in open-ended instruction-following tasks.\\\" This can be problematic since the premise is wrong.\\n\\nThis is an interesting point of discussion. We completely agree that edge cases can exist. There could, for instance, exist two outputs which are semantically very similar to one another, but very different from all other outputs (as you suggest) \\u2013 in this case removing them would not necessarily maximise the overall semantic diversity of the outputs. 
However, in most other cases removing the more semantically similar outputs will, on average, leave behind outputs which are more semantically distinct from one another. On the point of removing outputs that are more semantically similar, we note that this does not *necessarily* relate to the correctness of the answers. For instance, we found that dbrx-instruct and mixtral-8x22B-instruct-v0.1 produce very semantically similar results. Despite their similarity, they can both be wrong (and indeed, subtle errors can propagate *differently* throughout their reasoning traces despite a very similar overall structure to their chains). We believe this to be an important limitation of our EigenDivergence score, which we hope to mitigate in future work. We discuss this in more depth (and provide an illustrative example) in **Appendix H** of the paper. This leads to an inherent difficulty which we try to resolve with the DMoA framework. To achieve good performance, you should get models to \\u2018agree\\u2019 with one another. But models can agree whilst both are incorrect. Thus you need two main steps: 1) Select (or estimate) which models are likely to be highly performant given a specific task and create an ensemble of them; 2) Query the ensemble, and see if these models agree. Indeed, this is what our proposed DMoA approach attempts to do. \\n\\n> The structure of the paper gives a strong vibe that the first two experiments are very disconnected from the third [...]\\n\\nThank you for this thoughtful suggestion. To address your feedback, we have clarified the connections between the first two experiments and the DMoA in the introductory paragraph of the methods section and have additionally added a transitional sentence to the DMoA section itself. We hope that this helps to emphasise that the first two experiments develop methodologies that allow us to accrue insights that directly inform and support the development of the DMoA. 
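To illustrate the filtering idea under discussion, a toy sketch. It scores each output by its mean pairwise cosine similarity to the other outputs over pre-computed embeddings; the actual EigenDivergence score is eigenvalue-based and differs in its details.

```python
# Toy consistency filter: score each output by its mean cosine
# similarity to the other outputs, then drop the n most divergent.
# (Illustrative only: the paper's EigenDivergence is eigenvalue-based.)
import math
from typing import List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def keep_after_filtering(embeddings: List[Sequence[float]], n_remove: int) -> List[int]:
    """Indices of outputs kept after removing the n_remove most divergent."""
    k = len(embeddings)
    mean_sim = [
        sum(cosine(embeddings[i], embeddings[j]) for j in range(k) if j != i) / (k - 1)
        for i in range(k)
    ]
    most_divergent = sorted(range(k), key=lambda i: mean_sim[i])[:n_remove]
    return [i for i in range(k) if i not in set(most_divergent)]
```

Sorting in the opposite direction would instead remove the most mutually similar outputs, the "maximizing diversity" variant, which makes the caveat above concrete: the removed similar outputs can all be correct, and the retained divergent ones can all be wrong.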
\\n\\n> I wonder about the performance of DMoA on seven tasks used in the first two experiments [...]\\n\\nThis is a nice suggestion. We have now **expanded Appendix E (adding E.4: \\u201cAdditional results\\u201d)**, where we demonstrate the performance of the DMoA on the instruction following, arithmetic reasoning, and common-sense reasoning benchmarks in the gated mixture of agents and mixture optimization experiments. We reproduce the table below for convenience. In summary, the DMoA outperforms other models and ensembling strategies across the majority of the benchmarks. \\n\\n| **Model** | **AlpacaEval** | **MT-Bench** | **GSM8K** | **MATH** | **CSQA** | **ARC-C** | **ARC-E** |\\n|-----------------------|:--------------:|:------------:|:---------:|:--------:|:---------:|:----------:|:----------:|\\n| **DMoA** | **63.21** | **9.19** | **96.67** | **71.23**| 87.51 | 92.50 | **94.47** |\\n| **GMoA** | 58.66 | 8.97 | 94.23 | 56.35 | 85.20 | 92.32 | 93.75 |\\n| **MoA** | 59.50 | 9.19 | 93.87 | 55.22 | 84.32 | 91.85 | 94.31 |\\n| **Llama-3-70B** | 34.4 | 8.8 | 93.0 | 50.4 | 83.8 | 90.5 | 94.1 |\\n| **Qwen-1.5-110B** | 43.9 | 8.9 | 85.4 | 49.6 | 82.1 | 69.6 | 93.9 |\\n| **Qwen-1.5-72B** | 36.6 | 8.4 | 79.5 | 34.1 | 83.2 | 65.9 | 92.7 |\\n| **WizardLM-8x22B** | 51.3 | 8.8 | 81.6 | 22.7 | 69.0 | 62.5 | 90.1 |\\n| **Mixtral 8x22B** | 30.9 | 8.8 | 83.7 | 41.7 | 81.7 | 70.7 | 91.8 |\\n| **DBRX-Instruct** | 25.4 | 8.4 | 72.8 | 32.5 | 82.2 | 68.9 | 89.7 |\\n| **GPT-4 Omni (05/13)**| 57.5 | 9.19 | 94.1 | 61.2 | **88.6** | **94.6** | 94.3 |\", \"references\": \"1. Xu, Ziwei, Sanjay Jain, and Mohan Kankanhalli. \\\"Hallucination is inevitable: An innate limitation of large language models.\\\" arXiv preprint arXiv:2401.11817 (2024).\\n2. Wang, Xuezhi, et al. 
\\\"Self-consistency improves chain of thought reasoning in language models.\\\" arXiv preprint arXiv:2203.11171 (2022).\"}", "{\"title\": \"Response 1 Part 2\", \"comment\": \"> What if testing DMoA on the seven benchmarks (AlpacaEval 2.0, MT-Bench, GSM8K, MATH, CSQA, ARC-C, ARC-E) in accordance with the experimental setups in Sections 4.1 and 4.2?\\n\\nThis is a nice suggestion. We have now expanded Appendix E (adding E.4: \\u201cAdditional results\\u201d), where we demonstrate the performance of the DMoA on the instruction following, arithmetic reasoning, and common-sense reasoning benchmarks in the gated mixture of agents and mixture optimization experiments. We reproduce the table here for convenience. In summary, the DMoA outperforms other models and ensembling strategies across the majority of the benchmarks. \\n\\n| **Model** | **AlpacaEval** | **MT-Bench** | **GSM8K** | **MATH** | **CSQA** | **ARC-C** | **ARC-E** |\\n|-----------------------|:--------------:|:------------:|:---------:|:--------:|:---------:|:----------:|:----------:|\\n| **DMoA** | **63.21** | **9.19** | **96.67** | **71.23**| 87.51 | 92.50 | **94.47** |\\n| **GMoA** | 58.66 | 8.97 | 94.23 | 56.35 | 85.20 | 92.32 | 93.75 |\\n| **MoA** | 59.50 | 9.19 | 93.87 | 55.22 | 84.32 | 91.85 | 94.31 |\\n| **Llama-3-70B** | 34.4 | 8.8 | 93.0 | 50.4 | 83.8 | 90.5 | 94.1 |\\n| **Qwen-1.5-110B** | 43.9 | 8.9 | 85.4 | 49.6 | 82.1 | 69.6 | 93.9 |\\n| **Qwen-1.5-72B** | 36.6 | 8.4 | 79.5 | 34.1 | 83.2 | 65.9 | 92.7 |\\n| **WizardLM-8x22B** | 51.3 | 8.8 | 81.6 | 22.7 | 69.0 | 62.5 | 90.1 |\\n| **Mixtral 8x22B** | 30.9 | 8.8 | 83.7 | 41.7 | 81.7 | 70.7 | 91.8 |\\n| **DBRX-Instruct** | 25.4 | 8.4 | 72.8 | 32.5 | 82.2 | 68.9 | 89.7 |\\n| **GPT-4 Omni (05/13)**| 57.5 | 9.19 | 94.1 | 61.2 | **88.6** | **94.6** | 94.3 |\\n\\n> Based on the experimental findings presented in Section 4.3, several key conclusions can be drawn regarding the impact of diversity and consistency on various cognitive abilities. 
Firstly, high levels of diversity appear to have a detrimental effect across all measured abilities. Secondly, strong consistency enhances reasoning and mathematical capabilities, but impairs the instruction-following proficiency. Lastly, when strong consistency is coupled with an appropriate degree of supplemental diversity, there is an observed improvement in instruction-following abilities, though this comes at the cost of diminished mathematical and reasoning skills. Compared to the discussion in current version, the above summary appears to more accurately reflect the core idea of this paper: balancing diversity and consistency for model ensemble across various tasks. \\n\\nWe are deeply appreciative of this insightful comment, and have **adjusted our discussion section** to more clearly reflect this refinement of points. Thank you once more.\"}
The Dynamic Mixture of Agents (DMoA) approach achieved a well-balanced position which sits on the Pareto-optimal front, offering performance similar to gpt-4o-2024-05-13 at a significantly lower inference cost. The Pareto front progression from DMoA to Claude-3.5-Sonnet to DMoA/Sonnet surpassed gpt-4o-2024-05-13. DMoA/Sonnet achieved the highest normalised accuracy but is the most expensive, whereas the fully open-source DMoA offered a balanced trade-off, delivering high performance at moderate costs.\\n\\n> I am curious if the proposed methods, particularly DMoA and GMoA, align with test-time scaling laws. Specifically, does performance consistently improve as the number of models in the ensemble or the length of inference chains increase?\\n> In the experimental setup, the paper mentions constructing a \\\"MoA-Lite\\\" variant with a limited number of layers. What would happen if additional layers were added to MoA?\\n\\nThank you for these suggestions. To investigate scaling the layer dimension and the number of layers we **ran an experiment based on our results from Section 4.4**. We add this new section to **Appendix J**. We reproduce the main figures here for convenience: ([(anonymous fig. 2)](https://imgur.com/lxjFC27), [(anonymous fig. 3)](https://imgur.com/uUneYA9)). In summary, we find that adding more models per layer consistently improves performance in the BBH benchmark. We investigate up to 10 models per layer and in this setting we achieve a normalised accuracy of 87.36%, which represents 96.85% of Claude 3.5 Sonnet\\u2019s performance with only open-source models. With regards to adding more layers, we find that four layers achieve an even greater normalised accuracy of 89.34%, which is 99.05% of Claude 3.5 Sonnet\\u2019s performance for this task.\\n\\n> In what specific scenarios are GMoA and DMoA each most effective? 
[...]\\n\\nWe see the GMoA as a preliminary investigation of how semantic similarity between outputs affects the performance of LLM ensembles across a number of disparate benchmarks. This investigation yields a number of insights which allow us to develop the more flexible DMoA framework. Nevertheless, combining the ideas of performing inference-time selection of high-performing models with semantic filtering is an interesting avenue of future work. We should point out, however, that in our third ablation study (Section 4.3, Figure 4, right plot) we note that filtering already specialised ensembles can degrade performance.
The analysis and conclusions are useful because they give intuitions on what may work and may not work in a mixture of model frameworks.\", \"They first investigate whether reducing semantic inconsistency would help (it didn't, I wonder if the reverse would help though -- increase the diversity). Then a mixture optimization method is proposed and they found some useful insights, e.g., aggregate and synthesis works better than ranking and self-consistency. They then tested DMoA which dynamically uses different expertise models for the mixture.\", \"It is quite important to dive deeper into what makes model collaboration work as model ensemble methods are a promising way to inference-time scaling.\", \"The tasks are pretty comprehensive.\"], \"weaknesses\": [\"The purpose of the first experiment is a bit questionable. The result from divergent filtering is nuanced, as it only slightly improves four out of five reasoning tasks, and decreases performances on three tasks. I understand you use this experiment's result as insights to build DMoA in section 4.4, but I question whether those insights are actually correlated.\", \"For insight 1 in section 4.4, divergent filtering is used as a supporting evidence as \\\"As shown in Sec. 4.1, LLM ensembles outperform individual models regardless of divergence filtering, across both open- and close-ended tasks.\\\" If the conclusion arrived \\\"regardless of divergence filtering\\\" then it doesn't really add much value to the argument.\", \"For insight 2, it says \\\"We found in Sec. 4.1 that removing information from an ensemble can improve task-specific performance.\\\" But it improves four and hurts three.\", \"For insight 3, it says \\\"Performance varies when semantic diversity is altered within a fixed ensemble (Sec. 4.1).\\\" This claim is not that clear and doesn't provide much insight since performance can vary if you change anything about the mixture. 
It would be more useful if more patterns could be discovered.\", \"Similarly, I am concerned about the claim in section 4.3: \\\"two GMoA variants...one with the two most semantically divergent outputs removed (maximizing consistency), and one with the two most consistent outputs removed (maximizing diversity)...indicating that some semantic consistency is necessary for high-quality results, even in open-ended instruction-following queries.\\\" I don't think removing two most consistent outputs would maximize diversity. You could very well be removing two very similar outputs that are very distinctive from other outputs. This also has the unwanted effect of removing output that's more correct (since more models output it).\", \"This leads to insight 1 in section 4.4 that \\\"Maximizing semantic diversity harms performance (Sec. 4.3), indicating that some cross-validation between outputs is necessary for high-quality results, even in open-ended instruction-following tasks.\\\" This can be problematic since the premise is wrong.\", \"The structure of the paper gives a strong vibe that the first two experiments are very disconnected from the third as neither divergent filtering (GMoA) nor the mixture optimization methods are used in the third. And the insights they provide are also very limited which I elaborated above. I would encourage maybe downplaying the portion of the first two experiments and focusing more on the third.\"], \"questions\": [\"I wonder about the performance of DMoA on seven tasks used in the first two experiments. This should be a natural progression. 
It feels weird that for the first two, we are using the same set of seven and for the third, we are using BBH.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors in this work look at Mixture of Agents (MoA) and propose:\\n1) A framework that captures different variations of possible MoAs\\n2) Divergence metric called `EigenDivergence` based on the hallucination detected in K sampled outputs with the additional proposition of using an external embedding instead of the model's internal embedding\\n3) Propose an optimization algorithm based on incremental performance gains and usage\\n4) Propose DMoAs that dynamically select the models\", \"results_of_this_work_are_shown_as_follows\": \"1) Gated MoA against standard MoA and other openly-available models, with GMoA only providing marginal improvement on settings with some models performing reasonably better while underperforming in settings where all models perform close to each other; 2) Show that mixtures do not translate the same across various tasks and 3) DMoAs perform better on the BBH\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Methodology, presentation, and results are good for the first 3 / 4 proposed contributions - I quite liked the idea of EigenDivergence and the analysis around it\\n2. The takeaways on diversity and consistency are clear and well presented\", \"weaknesses\": \"1. Sections 3.3 and 4.4 were both unclear and unnecessary, in my opinion - I couldn't quite understand how these sections tie to the main point of this paper. The presentation and motivation around this subset of contribution requires a significant re-write\\n2. 
No critical focus on looking at the intermediate reasoning - while I get that semantic diversity was a focus, I would have liked to see a deeper look at how diverse the semantic reasoning looked, with a few examples - were there instances where correct reasonings by multiple models were still judged to be semantically diverse, etc. I liked Appendix H - it was a good start, but a detailed, controlled experiment could've provided readers with much more about EigenDivergence\\n\\nNitpicks that can be easily fixed and do not affect the review/score:\\n1. Typos in some parts - line 182 for example\", \"questions\": \"1. Can the details of 3.3 and 4.4 be clarified? (see weaknesses for comments)\\n2. Can Appendix F.2 be expanded to understand the behaviour of multiple sentence embeddings? The 0.78 doesn't make much sense as an individual number - it is unclear if it is a consequence of choice of embedding models / how much variance can exist, etc.\\n3. Is there a reason why closed-source models weren't used since the eigendivergence doesn't require access to the weights ?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
While there were initial concerns about computational costs and methodology connections, the authors thoroughly addressed these through additional experiments and analyses during rebuttal, particularly around cost-performance trade-offs and scaling behavior. The paper makes a meaningful contribution to understanding and optimizing LLM ensemble methods.\", \"additional_comments_on_reviewer_discussion\": \"The discussion during rebuttal focused on three main areas: computational costs, methodology connections, and empirical validation. The authors addressed these by: 1) Adding detailed cost analysis showing DMoA achieves Pareto-optimal performance/cost trade-off; 2) Clarifying connections between GMoA, mixture optimization and DMoA while expanding methodology sections; 3) Providing comprehensive results across additional benchmarks. Reviewer bHjk found the new scaling experiments \\\"highly interesting,\\\" while EPHj noted improved readability. The thorough responses and substantial additions, particularly around cost-effectiveness and empirical validation, strengthen the paper's contributions and address the main reviewer concerns, supporting acceptance.\"}", "{\"comment\": \"Thank you for your clear responses. I think the readability and additional details have improved the paper.\"}", "{\"title\": \"Response 1 Part 2\", \"comment\": \"> Is there a reason why closed-source models weren't used since the eigendivergence doesn't require access to the weights ?\\n\\nWe chose to use freely accessible open source models for reproducibility and accessibility purposes. Closed models can change in capabilities across time without formal announcement [1]. On top of that, models are often simply deprecated, making replications harder still [2]. Furthermore, closed source models can be significantly costlier to run [3,4]. 
We therefore chose to run freely available and locally hostable open-source models to aid reproducibility of our frameworks as well as to ensure model accessibility.\", \"references\": \"1. Chen, Lingjiao, Matei Zaharia, and James Zou. \\\"How is ChatGPT's behavior changing over time?\\\" arXiv preprint arXiv:2307.09009 (2023).\\n2. OpenAI. \\\"Deprecations.\\\" OpenAI Platform, OpenAI, https://platform.openai.com/docs/deprecations. [Accessed: 18/11/2024]\\n3. Model pricing, Together AI. https://www.together.ai/pricing [Accessed: 18/11/2024]\\n4. OpenAI pricing, https://openai.com/api/pricing/ [Accessed: 18/11/2024]\"}", "{\"title\": \"Response 1 Part 1\", \"comment\": \"We thank the reviewer for their questions and insights. We are glad they found the motivation for our work to be compelling and well-founded, and feel our DMoA effectively addresses the trade-off between diversity and consistency across various tasks. We have made a number of amendments to our manuscript based on the feedback, and respond to the queries below.\\n\\n> The application of EigenScore, originally proposed for hallucination detection in a single model, presents inherent limitations when extended to an ensemble of multiple models. This is primarily due to the fact that the sentence-embedding spaces of various models are not aligned during their pre-training or fine-tuning [..]\\n\\nThank you for this point. We completely agree that sentence embedding spaces of different models are not necessarily going to be aligned. However, we instead project each model\\u2019s output to the same embedding space using the text-embedding-3-small model from OpenAI. We clarify this information in the additional experimental setup section in **Appendix B.3: \\u201cLanguage and embedding models\\u201d**. We have additionally clarified that our analyses utilised a shared embedding space in the main manuscript. 
\\n\\n> The design of mixture optimization leads to a scenario where the final MoA model is absolutely dominated by a single model [...]\\n\\nThe figure illustrates an example whereby one model which was more robust at a particular type of mathematical reasoning dominated the mixture. Whilst this phenomenon was observed in this particular instance, it was not always the case. As you correctly describe, the algorithm replaces the model with the lowest delta with the model with the highest delta; however, in the next step of the algorithm, if performance degrades, this step can be reversed. Indeed, in other tasks (particularly AlpacaEval 2.0 and MT-Bench) the \\u2018optimal\\u2019 set of LLMs was heterogeneous. We describe the algorithm and additional stopping criteria in more detail in **Appendix C**. \\n\\n> The proposed dynamic mixture of agents framework necessitates the individual search and optimization of MoA structures for each distinct task. The process relies on divergence filtering and mixture optimization, which is costly [...]\\n> In Section 4.4, Table 2 presents a comparison between DMoA/Sonnet and the Claude-3.5-Sonnet baseline. While DMoA/Sonnet demonstrates a marginal performance improvement (91.85 vs. 90.20 normalized accuracy on BBH), it is important to consider the associated computational costs [...]\\n\\nThank you for this important point. In light of this we have now **added a cost analysis section based on our experimental results in Section 4.4**. We reproduce the main figure here for convenience ([(anonymous fig. 1)](https://imgur.com/4AhAQyW)). The cost analysis can be seen in **Appendix I**. We find that the dynamic mixture of agents framework sits on the Pareto-optimal front between performance and operation cost. Additionally, the DMoA demonstrates a similar performance in the Big Bench Hard (BBH) benchmark to gpt-4o-2024-05-13 but with significantly cheaper input/output token cost. 
The baseline DMoA (which does not utilize Claude Sonnet as the aggregator) achieves 92.7% of the performance of Claude 3.5 Sonnet, and indeed we now also have a new test-time inference analysis in **Appendix I** which shows that by scaling the number of layers, a fully open-source DMoA can achieve 99.05% of Claude 3.5 Sonnet\\u2019s performance for BBH.\"}
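A compact sketch of the greedy replace-and-revert search described in this response; the delta values and the evaluation function below are stand-ins, and the full algorithm with its additional stopping criteria is the one described in the paper's Appendix C.

```python
# Sketch of greedy mixture optimisation: swap the lowest-delta member
# of the mixture for the highest-delta outside candidate, keeping the
# swap only if benchmark performance improves.  Deltas/eval are stand-ins.
from typing import Callable, Dict, List

def optimise_mixture(mixture: List[str],
                     deltas: Dict[str, float],
                     evaluate: Callable[[List[str]], float],
                     max_steps: int = 10) -> List[str]:
    best_score = evaluate(mixture)
    for _ in range(max_steps):
        worst = min(mixture, key=lambda m: deltas[m])
        candidates = [m for m in deltas if m not in mixture]
        if not candidates:
            break
        challenger = max(candidates, key=lambda m: deltas[m])
        trial = [challenger if m == worst else m for m in mixture]
        trial_score = evaluate(trial)
        if trial_score > best_score:   # keep the swap
            mixture, best_score = trial, trial_score
        else:                          # revert the swap and stop
            break
    return mixture
```

Note how the revert step prevents the single-model domination worried about above from being locked in whenever it actually hurts benchmark performance.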
The authors provide clear, precise definitions of both DMoA and GMoA frameworks, enabling readers to fully understand the distinctions and specific innovations in each method.\\n3. The authors show that DMoA achieves state-of-the-art results on the Big Bench Hard (BBH) benchmark, indicating the framework's efficacy across various challenging tasks.\", \"weaknesses\": \"While the paper demonstrates the effectiveness of GMoA and DMoA in achieving high performance, it does not analyze the computational costs associated with these methods. Cost considerations are essential in ensemble approaches, particularly with LLMs, where scaling and inference-time model selection can be computationally intensive. For a comprehensive comparison, the authors should evaluate the total cost associated with GMoA and DMoA, including resource utilization during inference and model selection. This cost analysis would offer a more balanced view of the trade-offs between performance gains and computational expense, especially compared to other ensemble strategies and Chain-of-Thought (CoT) approaches.\", \"questions\": \"1. I am curious if the proposed methods, particularly DMoA and GMoA, align with test-time scaling laws. Specifically, does performance consistently improve as the number of models in the ensemble or the length of inference chains increase?\\n2. In the experimental setup, the paper mentions constructing a \\\"MoA-Lite\\\" variant with a limited number of layers. What would happen if additional layers were added to MoA?\\n3. In what specific scenarios are GMoA and DMoA each most effective? Can these methods be combined within a single framework to further improve performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We are deeply appreciative of all of the reviewers for their time and detailed reviews. 
We appreciate their recognition of the effectiveness of our inference-time ensembling framework (**bHjk, 7TjC**), of the nuanced and fundamental importance of the research topic (**bHjk**), of our well-structured, logically rigorous, and comprehensive work (**EPHj, bHjk, jGzM**), and of the clarity of our takeaways (**EPHj**).\\nIn response to their feedback, we have made several amendments to the manuscript, which we believe have improved its quality. We summarise these here: \\n\\n| **Category** | **Amendment** |\\n|---|---|\\n| **Cost Analysis** | - Added a detailed cost analysis section in Appendix I. |\\n| | - Showed DMoA sits on the Pareto-optimal front, balancing performance and cost. |\\n| **Scaling Experiments** | - Expanded Appendix J with experiments on scaling models and layers. |\\n| | - Demonstrated that increasing models and layers improves BBH benchmark performance. |\\n| **Methodology Links** | - Strengthened connections between GMoA, mixture optimization, and DMoA in the manuscript. |\\n| | - Explained how earlier experiments informed the design of DMoA. |\\n| **DMoA Benchmarks** | - Added results for DMoA on seven benchmarks in Appendix E.4. |\\n| | - Found DMoA outperforms other models and ensembling strategies across a majority of benchmarks. |\\n| **EigenDivergence** | - Conducted a new analysis in Appendix H on the relationship between EigenDivergence and correctness. |\\n| | - Showed that semantic consistency improves close-ended task performance. |\\n| **Semantic Diversity** | - Expanded discussion on semantic diversity and filtering in Appendix H. |\\n| | - Addressed limitations and edge cases of EigenDivergence. |\\n| **Discussion Section** | - Incorporated feedback on balancing diversity and consistency; revised to reflect trade-offs and task-specific performance impacts. |\\n| **Embedding Analysis** | - Expanded Appendix F.2 to analyze multiple embedding models. |\\n| | - Demonstrated minimal variance in results across embeddings. |\\n| **Additional clarification** | - Clarified the motivation for DMoA over static ensembles in the main manuscript. |\\n\\nWe hope these amendments and our additional analyses address the reviewers\\u2019 feedback and thank them once more for their valuable insights and time.\"}
We use this as a basis for justifying the use of an ensembling strategy for our proposed DMoA. \\n\\n> For insight 2, it says \\\"We found in Sec. 4.1 that removing information from an ensemble can improve task-specific performance.\\\" But it improves four and hurts three. \\n\\nWell noticed - here, we state that task-dependent LLM expertise is crucial for boosting task performance. As you noted, removing information can improve performance in four tasks. These tasks evaluated arithmetic and common-sense reasoning which (in this instance) all had a \\u2018single best\\u2019 correct answer and are thus close-ended tasks (e.g., a correct single mathematical answer in the MATH benchmark, or a fixed correct multi-choice index for CSQA). When we remove information for these benchmarks we note a slight improvement in performance. For open-ended tasks without a \\u2018fixed\\u2019 correct answer, particularly for instruction following, the opposite effect is noted - that is, removing information appears to harm performance. Removing information for close-ended tasks seems to improve performance on average, suggesting that excluding models with semantically divergent answers can help. This implies that merely increasing the number of reasoning traces, without considering whether the models are suited to the task, might actually lower performance. We have additionally added a new section to **Appendix H** which investigates the relationship between semantic divergence and performance in close-ended tasks, and the results further support our main experiments. In aggregate, these results contravene prior suggestions that simply adding more reasoning traces (even from multiple heterogeneous models) always improves performance up to some plateau [2]. It would therefore be better if we stacked LLMs based on their task-dependent expertise. 
We have amended the manuscript to better clarify the instances when removing information can lead to improved performance.\\n\\n> For insight 3, it says \\\"Performance varies when semantic diversity is altered within a fixed ensemble (Sec. 4.1).\\\" This claim is not that clear [...]\\n\\nWe agree with this suggestion \\u2013 and think that our mixture optimization experiment provides a much more natural illustration of the insight that task-dependent skills demonstrate a trade-off with one another. We have amended the main manuscript accordingly.\"}", "{\"title\": \"Official Comment by Reviewer 7TjC\", \"comment\": \"Thanks for the explanations and the additional experiments! They indeed address many of my concerns.\"}" ] }
Dl5JaX7zoN
UrbanPlanBench: A Comprehensive Assessment of Urban Planning Abilities in Large Language Models
[ "Yu Zheng", "Longyi Liu", "Yuming Lin", "Jie Feng", "Guozhen Zhang", "Depeng Jin", "Yong Li" ]
Urban planning is a professional discipline that shapes our daily surroundings, which demands multifaceted domain knowledge and relies heavily on human expertise. The advent of Large Language Models (LLMs) holds promise for revolutionizing such a field by the pre-trained world knowledge. However, the extent to which these models can assist human practitioners remains largely unexplored. In this paper, we introduce a comprehensive benchmark, PlanBench, tailored to evaluate the efficacy of LLMs in urban planning, which encompasses fundamental principles, professional knowledge, and management and regulations, aligning closely with the qualifications expected of human planners. Through extensive evaluation, we reveal a significant imbalance in the acquisition of planning knowledge among LLMs, with even the most proficient models falling short of meeting professional standards. For instance, we observe that 70% of LLMs achieve subpar performance in understanding planning regulations compared to other aspects. Besides the benchmark, we present the largest-ever supervised fine-tuning (SFT) dataset, PlanText, for LLMs in urban planning, comprising over 30,000 instruction pairs sourced from urban planning exams and textbooks. Our findings demonstrate that fine-tuned models exhibit enhanced performance in memorization tests and comprehension of urban planning knowledge, while there exists significant room for improvement, particularly in tasks requiring domain-specific terminology and reasoning. Our benchmark, dataset, and associated evaluation and fine-tuning toolsets aim to catalyze the integration of LLMs into practical urban computing, fostering a symbiotic relationship between human expertise and machine intelligence.
[ "LLM Benchmark", "urban planning" ]
Reject
https://openreview.net/pdf?id=Dl5JaX7zoN
https://openreview.net/forum?id=Dl5JaX7zoN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nwvlxd3q5w", "fdMkkh3Fwm", "c2TraHSurK", "bjmoW8UgkE", "LcFsYm8jMG", "CT89uDO6Yu" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "meta_review", "official_review" ], "note_created": [ 1730487469936, 1737524259875, 1730317094211, 1730553905190, 1734829616229, 1731139429147 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13434/Reviewer_oDyp" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13434/Reviewer_jSTs" ], [ "ICLR.cc/2025/Conference/Submission13434/Reviewer_5qxn" ], [ "ICLR.cc/2025/Conference/Submission13434/Area_Chair_VXKT" ], [ "ICLR.cc/2025/Conference/Submission13434/Reviewer_47Ea" ] ], "structured_content_str": [ "{\"summary\": \"The paper aims at advancing the LLM in the area of urban planning. The authors introduce a benchmark, UrbanPlanBench, that contains QAs on different perspectives of urban planning: (1) fundamental principles; (2) professional knowledge; (3) management and regulations. With the QAs, the authors evaluated the performance on various LLMs and settings (e.g., RAG, CoT) and found that current models still struggle in solving these tasks. In addition, the authors collect an SFT dataset named UrbanPlanText to help improve model performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe topic is important and interesting. It would be great to see how LLMs can help improve the efficiency of experts in various domains.\\n2.\\tThe experiments are comprehensive, with the evaluation of various models and settings.\", \"weaknesses\": \"1.\\tThe size of the benchmark is too small. It is also limited to multiple-choice questions (no short open-domain questions). The reviewer is also unsure about the diversity and generalizability of the chosen questions. It seems that all the questions are from urban planning in China. 
However, according to S3 Management and regulations, such questions might not be applicable to urban planning in other countries. In Table 1, we observe higher performance of models from China, which strengthens my concern.\\n2.\\tThe collection and quality control of the dataset is not well introduced. For example, what is the data source, human verification on the data correctness/categorization, and the inter-annotator agreement?\\n3.\\tThe take-aways are blurred since there are not many experimental details introduced: e.g., how the model performance varies with hyperparameters such as temperature, how the RAG and CoT methods are designed, and most importantly, how the data is collected with quality control.\\n4.\\tFrom the reviewer\\u2019s point of view, there is a slight overclaim issue in the paper: this is not a benchmark on the general cross-culture urban planning with realistic scenarios that may assist real human experts. It is more like a QA dataset on the urban planning problem of some specific cultural background. The reviewer still acknowledges UrbanPlanText as a good contribution. It would be good if the authors could extend the benchmark to realistic settings that might help experts, beyond just question answering, e.g., retrieving useful cases.\", \"questions\": \"1.\\tIs it possible to show how the UrbanPlanText dataset helps other datasets (e.g., general question answering or reasoning)?\\n2.\\tWhat is the best way to evaluate the model performance in urban planning? What are the other potential tasks besides QA?\\n3. How are the human annotation and experiments done? 
Can the authors provide more details on the annotation process, hyperparameter settings, actual prompts in CoT, retrieval settings, etc?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper proposes UrbanPlanBench, a new benchmark with urban planning multiple choice questions to evaluate LLMs. Results on the benchmark show that most LLMs still fall short of urban planning. To improve LLMs\\u2019 performance on UrbanPlanBench, the authors experiment with RAG, CoT, as well as SFT using UrbanPlanText, an automatically curated corpus of urban planning knowledge.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed UrbanPlanBench extends the popular LLM evaluation benchmarks with multiple-choice questions, such as MMLU and GPQA, to a new domain, urban planning. The benchmark is also in Chinese instead of English.\\n\\n2. The authors conduct extensive experiments using different LLMs and attempt to enhance them with RAG, CoT, and SFT.\\n\\n3. The paper is clear and easy-to-follow.\", \"weaknesses\": [\"1. The quality of the proposed datasets needs to be discussed and further clarified.\", \"For UrbanPlanBench, the paper does not mention any engagement of experts from the corresponding discipline, i.e. urban planning, making this domain-specific benchmark less authoritative and credible.\", \"The annotation procedure of UrbanPlanBench is not described, such as how the urban planning qualification exams are accessed, how the exam questions are selected and adapted, if there are any cross-annotator validations to ensure the quality, and what is the annotator agreement of gold answers.\", \"Similarly, the authors did not include any information about how they design the expert evaluation for UrbanPlanText\\u2019s quality. 
It is unclear how the experts were instructed to give the \\u201ccorrectness\\u201d and \\u201cinformativeness\\u201d scores, and how they relate to the quality of UrbanPlanText.\", \"If the question sources are available online, the authors should also discuss potential data contamination concerns and their impact on the evaluation of LLMs. For example, would the LLMs train on Chinese corpora have already seen the questions and answers so that they perform better?\", \"The authors claim to use UrbanPlanBench \\u201cto assess their [LLMs\\u2019] acquisition of planning skill\\u201d (line 71), but the benchmark is only testing LLMs\\u2019 domain knowledge. Suppose an LLM can answer most of the questions in the benchmark correctly, it is still not obvious how it can help in real-world urban planning tasks, such as predicting population growth and geographic analysis.\", \"2. Several technical details are missing from the paper.\", \"The implementation of Self-RAG on different LLMs is not introduced. The statistics and examples of exam_1 and exam_2 corpora used for retrieval are not provided. The configuration, training process, and inference hyperparameters of Self-RAG are missing.\", \"The details of few-shot and zero-shot CoT prompting are missing, e.g. the number of in-context examples in few-shot, how they are annotated/retrieved, and prompt templates.\", \"While it is straightforward to use output probability to select the answers for MCQ-S, it is not explained how the authors apply this strategy to MCQ-M questions, where the questions have multiple answer choices in nondeterministic numbers.\"], \"questions\": \"The paper mostly contains descriptive analysis of tables and figures, which focuses on the good results. It would be more inspiring for the community if the authors have any analysis explaining:\\n\\n1. Why is MCQ-M much harder than MCQ-S, and the scaling of model hyperparmeters does not always lead to increased performance on MCQ-M?\\n2. 
Why does SFT not lead to consistent improvement across all LLMs and even decreased performance?\\n3. Have the authors conducted a systematic error analysis on one LLM to investigate the issues claimed in the paper, e.g. subject imbalance and language bias?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces UrbanPlanBench, a comprehensive benchmark for evaluating the efficacy of Large Language Models (LLMs) in urban planning. It also presents UrbanPlanText, the largest supervised fine-tuning (SFT) dataset for LLMs in this domain, comprising over 30,000 instruction pairs sourced from urban planning exams and textbooks. The benchmark and dataset aim to assess and enhance LLMs' capabilities in understanding urban planning principles, professional knowledge, and management regulations. The paper reveals significant room for improvement in LLMs' performance, particularly in tasks requiring domain-specific terminology and reasoning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This work proposes a benchmark that covers multiple dimensions of urban planning and accordingly constructs a dataset for fine-tuning and enhancing model performance.\\n2. This work demonstrates that most models do not achieve human-level performance on urban planning tasks, while also employing methods to enhance the models' capabilities in urban planning.\", \"weaknesses\": [\"1. UrbanPlanBench seems to focus solely on processing data from the 2022 urban planning qualification exam, which limits its contributions:\", \"The authors converted the original 2022 qualification exam documents into csv format, creating the UrbanPlanBench, which comprises 300 MCQs. 
Could you add a brief discussion highlighting your contributions to the benchmark beyond the data processing steps?\", \"Additionally, since the evaluation data is sourced from publicly available texts, it is difficult to ensure that large models did not encounter this data during pre-training. This could undermine the usability of UrbanPlanBench and affect fair comparisons between models. Moreover, as the data is in Chinese, models like Qwen, trained on larger Chinese corpora, show better urban planning performance, raising concerns about whether they have already been trained on these data.\", \"2. The experiments may be insufficient:\", \"In Sec 2.3, this paper investigates prompting techniques, including RAG and CoT. In Sec 3, this paper evaluates how fine-tuning methods could enhance the capabilities of LLMs. However, while the former is based on GPT-4o-mini and the latter on other LLMs, their results are not comparable. What we may really care about is the comparison between these two types of methods.\", \"The paper mentions \\\"a challenge in sourcing relevant data for SFT\\\". Given the difficulty in obtaining this data, why use SFT to enhance the models' capabilities if prompting techniques, such as RAG and CoT, have been proved effective?\", \"3. The effectiveness of the SFT methods:\", \"After SFT on UrbanPlanText, as illustrated in Table 4, 70%, 50% and 40% of LLMs exhibited decreased accuracy on the full questions of S1, S2 and S3, respectively. However, the models after SFT do not show performance improvements on many test sets.\", \"4. Other weaknesses:\", \"There is a typo in Line 284 and 464: \\\"MCS-S\\\" should be \\\"MCQ-S\\\".\"], \"questions\": \"1. Why was multiple-choice questions (MCQ) chosen as the primary evaluation format when designing UrbanPlanBench? Have other types of assessment methods been considered?\\n2. 
How does the author plan to address potential biases in the benchmark and dataset, particularly considering that all the data comes from Chinese urban planning exams and that some of the evaluation data may have been learned during the model's pre-training phase?\\n3. Why is the SFT effect minimal on larger models? Does this indicate that the quality of the constructed training data is poor?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces UrbanPlanBench, a benchmark for evaluating LLMs in urban planning, along with UrbanPlanText, a dataset of 30,000 instruction pairs sourced from urban planning exams and textbooks. While fine-tuning shows some improvement, LLMs continue to struggle with tasks requiring specialized reasoning and terminology. UrbanPlanBench opens up a new domain for LLM evaluation, and UrbanPlanText provides a useful fine-tuning resource. However, reviewers pointed out that the benchmark lacks novelty, adapting existing frameworks, and relies on a small dataset focused on Chinese urban planning exams, which limits diversity and generalizability. Reviewers concerned about potential data contamination during pre-training and weak annotation practices, which hurt the dataset\\u2019s credibility. The reliance on multiple-choice questions might restrict real-world applicability, and fine-tuning results were inconsistent, especially for larger models. While the paper takes an interesting step forward, expanding beyond multiple-choice questions, and involving domain experts for validation would make the benchmark more strong. I would recommend to further refine and recycle the paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about the limited novelty of the benchmark, dataset quality, and potential data contamination. 
Concerns about the benchmark\\u2019s narrow focus on multiple-choice questions and the inconsistent results of fine-tuning remained unresolved. Reviewers also questioned the generalizability of the benchmark, given its focus on Chinese urban planning, and suggested expanding the scope and incorporating more realistic tasks. The author didn't respond to the review.\"}", "{\"summary\": \"This paper introduces PlanBench, a benchmark designed to assess the effectiveness of LLMs in the field of urban planning. The study finds notable gaps in LLMs' planning knowledge, with 70% of models performing poorly in regulatory understanding. Additionally, the authors present PlanText, the largest supervised fine-tuning dataset for LLMs in urban planning, containing over 30,000 instruction pairs from exams and textbooks. Fine-tuned models show improved performance but still struggle with domain-specific terminology and reasoning tasks. The benchmark, dataset, and tools aim to drive LLM integration into urban planning, enhancing collaboration between human expertise and AI.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents a valuable dataset focused on urban planning, an essential domain for AI exploration. The structure is thorough, covering dataset construction, in-depth analysis, and the training method. Additionally, the writing is clear and easy to follow, making the paper accessible and well-organized.\", \"weaknesses\": \"While this benchmark introduces a new domain, urban planning, it doesn't fundamentally expand beyond the scope of existing benchmarks. Its formulation is quite similar to widely used evaluation benchmarks like BigBench and MMLU, with the primary difference being just the new domain focus. This makes the dataset's contribution feel less novel, as it isn't that different from prior works. 
Similarly, the training method lacks novelty, as it relies on leveraging domain-specific resources from the web, which is already a common practice.\\n\\nTo enhance the paper, the benchmark's evaluation could move beyond a multiple-choice QA format. A more realistic approach might involve simulating a full urban planning task: for example, an agent would plan steps for constructing a house or shopping mall in a given city or district, considering the environment state in a simulated setting. This could yield more interesting insights into agent performance, reveal complex failure patterns, and potentially offer valuable contributions to real-world urban planning.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
Dl3MsjaIdp
Continual Slow-and-Fast Adaptation of Latent Neural Dynamics (CoSFan): Meta-Learning What-How & When to Adapt
[ "Ryan Missel", "Linwei Wang" ]
An increasing interest in learning to forecast for time-series of high-dimensional observations is the ability to adapt to systems with diverse underlying dynamics. Access to observations that define a stationary distribution of these systems is often unattainable, as the underlying dynamics may change over time. Naively training or retraining models at each shift may lead to catastrophic forgetting about previously-seen systems. We present a new continual meta-learning (CML) framework to realize continual slow-and-fast adaptation of latent dynamics (CoSFan). We leverage a feed-forward meta-model to infer *what* the current system is and *how* to adapt a latent dynamics function to it, enabling *fast adaptation* to specific dynamics. We then develop novel strategies to automatically detect *when* a shift of data distribution occurs, with which to identify its underlying dynamics and its relation with previously-seen dynamics. In combination with fixed-memory experience replay mechanisms, this enables continual *slow update* of the *what-how* meta-model. Empirical studies demonstrated that both the meta- and continual-learning components were critical for learning to forecast across non-stationary distributions of diverse dynamics systems, and the feed-forward meta-model combined with task-aware/-relational continual learning strategies significantly outperformed existing CML alternatives.
[ "continual meta-learning", "latent dynamics forecasting", "time-series" ]
Accept (Poster)
https://openreview.net/pdf?id=Dl3MsjaIdp
https://openreview.net/forum?id=Dl3MsjaIdp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ztVhDbeD6J", "zs1StAvDXm", "z22I1RDFPd", "vQbAFKeN2O", "jaTKhIQ4Rg", "j13hCO2arf", "dm8q6FTAqS", "c2wVO8XrUR", "bTOFze9bFs", "WKuJSb3nyt", "OPxSXPqEj4", "NvakSjdibJ", "IVjEeFuKpJ", "IJ1eJraFGi", "I8pOEisIwY", "G053XXsuvI", "DWdZbThQ4p", "7zwgaKvfu5", "4Dw2ElVkTB", "2bu6squwZC", "2SJuJul6fl", "0MMBtQAnzt" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "decision", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732299561982, 1732706003901, 1732681693938, 1732563377815, 1732299613574, 1732299528417, 1733172446802, 1732299735175, 1730678232674, 1732299418661, 1732577833974, 1732299794841, 1732681962593, 1732299707952, 1735021614999, 1732726643099, 1730722533871, 1730584868053, 1737524160104, 1730765331907, 1732681906272, 1732299400477 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Reviewer_i4xk" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Reviewer_GEod" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Reviewer_i4xk" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Reviewer_JYok" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Area_Chair_uowX" ], [ 
"ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Reviewer_U6Tv" ], [ "ICLR.cc/2025/Conference/Submission12012/Reviewer_JYok" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12012/Reviewer_GEod" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ], [ "ICLR.cc/2025/Conference/Submission12012/Authors" ] ], "structured_content_str": [ "{\"title\": \"Summary of major additions Pt. 2\", \"comment\": \"**3. Clarifying on Contribution Related to Task Identification and Bi-Level Optimization (Reviewer GEod).**\", \"we_would_like_to_clarify_the_benefit_of_the_task_aware_methodology_over_the_task_agnostic_approach_as_it_underpins_a_key_contribution_of_the_paper\": \"our main innovation is not on the boundary detection mechanism, but the task identification strategies used to enable per-task context-query pairing to enable bi-level meta-optimization that is not achieved in existing CML approaches. The Task-Agnostic Replay mechanism does not leverage task information, relying instead on the approximate equivalence between meta-learning and continual learning objectives in aligning the current task\\u2019s gradient with the average gradient of previous tasks. This involves adapting only on the current task\\u2019s context set and using the resulting parameter set on the past tasks\\u2019 samples to get the meta-loss. While sufficient for image classification and low-dimensional regression problems in prior work, we found this approach inadequate for latent dynamics forecasting. In contrast, the Task-Aware Replay mechanism incorporates the estimated task ID via the boundary detection mechanism, enabling traditional per-task context-query pairing for reservoir samples. This allows each task to adapt to its relevant context set sampled from the reservoir, with per-task losses aggregated to form the meta-loss. 
This enables meta-learning to be done in a continual fashion without compromising full bi-level meta-optimization, which we believe was the cause of the significant performance gain in the proposed task-aware over task-agnostic settings across both the gradient-based and our considered feed-forward meta-learners.\\n\\n**4. Documentation of Computational Costs (Reviewer JYok).**\\n\\nTo address suggestions regarding the computational cost of the Task-Relational Experience Replay, Appendix C.2 has been added to document the computational costs of Gaussian Mixture Model (GMM) clustering and task-relational replay. We analyze memory and processing requirements across varying reservoir sizes and task numbers and include a Time-to-Train comparison between Task-Aware and Task-Relational mechanisms. Our findings show that the GMM scales favorably in both memory and processing time, with minimal memory requirements due to the low-dimensional embedding of meta-knowledge.\"}", "{\"comment\": \"Thank you for the detailed reply, as well as the corrections/clarifications to the paper.\\n\\nApart from using the Transformer with a clear meta-learning approach, you could also consider using it with an online learning approach such as in [1] which works well with a replay. \\n\\nOverall, I am happy with the reply and would like to maintain the score.\\n\\n[1] J. Bornschein et al. Transformers for Supervised Online Continual Learning.\"}", "{\"title\": \"Response to Reviewer JYok\", \"comment\": \"**Presentation of results and figures**\\n\\nWe have completed a full overhaul of all the figures in the paper, excluding the data figures in Appendix D. For the revised figures, we increased font sizes, bolded all text, and converted them to a higher-quality SVG format where we could. A key improvement is in Figure 5, which has been remade for better readability, with an updated legend and caption to better distinguish between panels 5A and 5B. 
Additionally, Figures 15\\u201319 have been split into two separate horizontal figures for DST and MSE, improving readability and allowing for clearer comparisons between LP and RP metrics.\\n\\n**Task boundary standard deviations**\\n\\nIn case of any misunderstanding, we\\u2019d like to first clarify that the results shown in the original F23-24 (now F27-28) are the success rate of task boundary detection (left) and the average (middle) and std (right) of the mean likelihood differences as calculated by the task-boundary-detection mechanism. The performance of CoSFan in these settings were listed in Table 10-11 which does not show particularly high std. Fig27-28 now have std for the success rate (left) also. The high standard deviation in the mean likelihood differences and the detection success rate observed can be attributed to two factors. First, task boundaries between two tasks derived from the same underlying physical equation (e.g., different parameter configurations of Two-Body) may exhibit significantly smaller differences in their likelihood means compared to boundaries between tasks with more distinct dynamics. Second, as discussed in Section 5, some parameter configurations may collapse into a single cluster during the model\\u2019s optimization, effectively treating them as the same task. Consequently, certain boundaries that are technically different within these plots here as metrics may show low likelihood differences and contribute to the observed standard deviation range. Despite these factors, the majority of mean likelihood differences remains more than one standard deviation away from the threshold, which can be freely adjusted based on the variance of the expected likelihood ranges.\\n\\n**Figure mis-reference**\\n\\nThis has been taken care of, thank you!\"}", "{\"comment\": \"I still am not convinced about the scaling of hypernetworks. 
As you mentioned, the representational capacity of the hypernetwork is limited, and as you scale the number of heterogeneous tasks, you might hit that limit. Meanwhile, for a gradient adaptation type baseline, it isn't necessary to store all the information about a task in the network, as you are allowed to adapt.\n\nOverall, though, I am satisfied with the rest of the responses and am raising my score.\"}", "{\"title\": \"Response to Reviewer U6Tv\", \"comment\": \"**Component motivation and trainable embedding networks**\n\nPlease refer to our overall response 2 for clarification on the feed-forward methodology and experiments on comparing alternative conditioning mechanisms, such as trainable embedding networks.\n\n**Task boundary detection novelty**\n\nPlease refer to our overall response 3 for clarification on the novelty of the task boundary detection mechanism and the primary contribution of the work.\n\n**Clarification of notations**\n\nThank you for bringing up the confusion in the notation. Indeed, the superscript s refers to the context set for meta-learning. We have done a pass through the notation and have cleaned it up throughout the paper, including making the internal sequences of T_j dependent on j.\"}", "{\"title\": \"Summary of major additions\", \"comment\": \"We thank the reviewers for their constructive feedback. While detailed responses have been provided to individual reviewers, we highlight the main additions and clarifications made to address the key concerns raised here. Updated sections within the manuscript are highlighted in blue.\n\n**1. Ablation Study on Boundary Detection with Gradual Task Shifts (Reviewers GEod and JYok).**\n\nIn response to suggested improvements regarding the limitation of detectable task shifts, we have added Appendix C.8, which evaluates the boundary detection mechanism under increasingly blurred task boundaries on the mixed-physics and gravity-6 datasets. 
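To make the spike-based detection concrete, here is a minimal sketch of the mechanism this ablation stresses: a boundary is flagged when the mean log-likelihood of an incoming batch falls below the running mean of recent batches by more than a threshold. The window size, running-mean statistic, and threshold handling here are our illustrative simplifications, not the exact CoSFan implementation:

```python
from collections import deque
from statistics import mean

def make_boundary_detector(window=20, threshold=2.0):
    """Flag a task boundary when the incoming batch's mean log-likelihood
    falls below the running mean of recent batches by more than `threshold`.
    Recent batch means are kept in a fixed-size window."""
    history = deque(maxlen=window)

    def step(batch_loglik):
        batch_mean = mean(batch_loglik)
        is_boundary = bool(history) and (mean(history) - batch_mean) > threshold
        history.append(batch_mean)
        return is_boundary

    return step
```

A sufficiently gradual drift stays under the threshold at every step, which mirrors the failure mode that the blurred-boundary experiments probe.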
This study evaluates both the success of boundary identification and its impact on the overall forecasting performance. Our results show that boundary identification rates remain stable up to 40\u201360% overlap between current and next task data, after which they drop significantly. Forecasting performance varies by dataset: for mixed-physics with more heterogeneous dynamics, performance decreases modestly with increasing overlap but remains relatively stable; for gravity-6 with a lower level of heterogeneity among dynamics, performance declines more significantly when overlap exceeds 60%. Despite this, the Task-Aware setting still consistently outperforms the Task-Agnostic setting on both datasets at all overlap levels. We thank the reviewers for suggesting this set of new experiments, which demonstrates a notable level of resilience of CoSFan to gradual boundaries and provides additional concrete evidence for the advantages of the proposed Task-Aware strategies over the state-of-the-art Task-Agnostic strategies used in CML. We attribute this to the idea that Task-Aware methods only fully degenerate to Task-Agnostic methods when all task boundaries are missed, with partial task identification still providing meaningful benefits for meta-optimization. In Appendix C.8, we also provide analyses of failure cases, linking them to task likelihood variance and boundary swap rates, and propose potential modifications to improve robustness under these scenarios. In addition, we have added a relevant pointer in the Conclusion of the main text towards the results of this ablation.\n\n**2. Clarification of Adaptation-Based Methodology and Comparison of Conditioning Mechanisms (Reviewers GEod and U6Tv).**\n\nIn response to reviewer GEod, we would like to clarify that the proposed method is fundamentally adaptation-based, utilizing feed-forward models to adapt the latent dynamics function with meta-knowledge from the context set. 
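The feed-forward adaptation described above can be sketched as follows: context samples are encoded and averaged into a context embedding, and a hyper-network maps that embedding to task-specific dynamics weights in a single forward pass, with no gradient steps at adaptation time. All dimensions and the single-layer networks here are our illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_c, d_z = 8, 3, 4  # observation, context, latent dims (illustrative)

W_enc = 0.1 * rng.standard_normal((d_c, d_x))          # context encoder
W_hyper = 0.1 * rng.standard_normal((d_z * d_z, d_c))  # hyper-network

def adapt(context_set):
    """Encode each context sample, average into a single context embedding,
    and generate task-specific dynamics weights in one forward pass."""
    c = np.mean([np.tanh(W_enc @ x) for x in context_set], axis=0)
    return (W_hyper @ c).reshape(d_z, d_z)

def rollout(z0, W_task, steps):
    """Forecast latent states with the generated (adapted) dynamics weights."""
    z, traj = z0, []
    for _ in range(steps):
        z = np.tanh(W_task @ z)
        traj.append(z)
    return traj
```

Adapting to a new task costs one call to `adapt` on its context set; by contrast, a MAML-style learner would take several inner-loop gradient steps at this point.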
In response to reviewers U6Tv and GEod, we would like to further clarify that our main contribution on the meta-learner is to demonstrate the benefit of feed-forward meta-learners over widely-used MAML-based meta-learners in this problem: hyper-network-based (multiplicative) adaptation is a specific design choice we used for this feed-forward meta-learner; it is neither the only choice nor the key innovation of CoSFan. To demonstrate the generality of CoSFan beyond this specific choice, we added an alternative approach using embedding-conditioning-based (additive) adaptation, where the derived context variable is concatenated with the latent state of the dynamics function, aligning with prior meta-learning work [1]. Results are added to Appendix C.1 on the mixed-physics and gravity-6 datasets, showing that CoSFan\u2019s performance is not affected by this design choice (the embedding mechanism performs comparably on the considered datasets). In the original manuscript, the hyper-network architecture was chosen for its demonstrated suitability across diverse domains and its ability to model interactions that subsume those provided by embedding-based (or additive-based) conditioning [2], providing additional representation capacity without compromising optimization stability.\n\n[1] Jiang, Xiajun, et al. \"Sequential latent variable models for few-shot high-dimensional time-series forecasting.\" The Eleventh International Conference on Learning Representations. 2023.\n[2] Jayakumar, Siddhant M., et al. \"Multiplicative interactions and where to find them.\" International Conference on Learning Representations. 2020.\"}", "{\"title\": \"Reminder to Reviewer U6Tv\", \"comment\": \"Dear Reviewer U6Tv,\n\nWe truly appreciate your time and effort in reviewing our submission. 
The rebuttal discussion period comes to a close soon, and we would like to confirm whether our responses, added ablations, and manuscript updates have effectively addressed your questions and concerns.\\n\\nThank you!\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer i4xk Pt. 2\", \"comment\": \"**Cluster assignment and meta-learning objective**\\n\\nAs there are two potential clustering characteristics that could be referred to, and we are uncertain which one is being asked about specifically, we will define both settings before addressing each. The two characteristics are: (1) the collapse of unique parameter configurations for some Hamiltonian equations into a single general cluster, such as both Pendulum (orange) and Mass-Spring (green) in Figure 5A, and (2) the mapping of some individual embeddings to entirely incorrect tasks, as seen with certain Pendulum samples (orange) being mapped to Gravity (blue) or Two-Body (red) clusters.\\n\\nRegarding characteristic 1, this behavior is discussed in Section 5.4. These results suggest that the meta-model optimization deemed the realizations of high-dimensional observations for these individual parameter sets insufficiently diverse to justify separate clusters in the embedding space. Importantly, the performance on the Hamiltonian dynamics associated with these collapsed clusters indicates that this lack of differentiation did not negatively impact optimization.\\n\\nRegarding characteristic 2, it is true that during certain stages of optimization, some context embeddings in the reservoir may not be well-represented and can be assigned to clusters associated with different underlying tasks. We believe that additional regularization could help stabilize these context embeddings further. While the proposed cluster regularization has demonstrated improvements in stabilizing overall clusters, it does not fully address cases where individual reservoir samples are misassigned to incorrect dynamics clusters. 
This misassignment leads to the sample receiving an incorrect context set during experience replay, resulting in an erroneous loss signal. However, this pseudo-labeling is not permanent, as the Gaussian Mixture Model (GMM) is refit to the updated embeddings of the reservoir samples at each task boundary. Specifically, we believe that incorporating unsupervised contrastive loss terms across reservoir samples could provide a corrective signal to better align misassigned samples with their true dynamics clusters. Unfortunately, due to time constraints, we were unable to include results for this specific proposal in the rebuttal. Nonetheless, we see this as a promising avenue for future work.\\n\\n**Other minor edits**\\n\\nAll corrected as suggested, thank you!\"}", "{\"summary\": \"This paper proposes a continual meta-learning framework (CoSFan) for forecasting high-dimensional time-series data generated by non-stationary distributions of dynamic systems. CoSFan addresses the limitations of traditional meta-learning approaches that assume stationary task distributions and known task identifiers. It proposes a feed-forward \\\"what-how\\\" meta-model to quickly infer the current dynamic system and adapt a latent dynamics function accordingly. Furthermore, it introduces mechanisms to detect task boundaries and employs task-aware and task-relational experience replay for continual adaptation of the model, mitigating catastrophic forgetting. Experiments on simulated physical systems demonstrate CoSFan's ability to learn and adapt to new tasks while retaining performance on previously seen ones, outperforming existing CML alternatives and standard latent dynamic models with continual extensions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is clearly written. 
Different components of the system and how they work together to form the final solution are clearly described.\n\nThe experiment section provides lots of insight, with studies of individual components in the framework and comparisons with other alternative solutions. These ablations are especially important for a complex system.\", \"weaknesses\": \"The paper opts for using hypernet as the meta-learner. Although it ablates the hypernet solution to alternatives such as MAML and Bayesian gradient descent, it lacks the comparison with a sequence learner, such as a Transformer.\", \"questions\": \"1. Line 215, what is s in $T_j^s$? Could you add an explanation of what s is in the text?\n2. Line 216, what's z? Could you add an explanation of what z is in the text?\n3. Eq. 3, what's $l$? Is it summing over $l$?\n4. Line 367, typo \"withou\".\n5. Figure 2, why do some methods with task-aware replay perform better than full ER?\n6. Table 1, could you add a description of what the numbers represent in the caption? Could you add the units in the table?\n7. For task-relational buffers, why is the wrong cluster assigned? Is it because the context embedding is not well-represented? If so, is it due to the meta-learning objective? What could be done to make it better?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer GEod Pt. 2\", \"comment\": \"**Adaptation-based baselines**\n\nWe would appreciate it if the reviewer could clarify what \u201can adaptation based approach\u201d is intended to refer to, as both the proposed feedforward meta-learner and the MAML-based approaches we used as one of the baselines are adaptation based approaches (the latent dynamic functions are learned to be adapted to context data). 
Based on our understanding, the reviewer may be referring to an alternative baseline where, after having derived the context variable $\mathbf{c}(\mathcal{T}_j)$ from the context set, we provide the context variable as additional input to condition/adapt the latent dynamics function. This formulation of feed-forward adaptation aligns with the approach considered in prior work [1]. \n\nPlease refer to the overall response 2 for our clarification on the choice of the hyper-network as well as experiments including this alternative baseline.\n\nWe kindly request clarification from the reviewer to confirm if this interpretation aligns with the intended meaning of \u201cadaptation-based baseline,\u201d so that we can further address any misunderstandings in our response.\n\n[1] Jiang, Xiajun, et al. \"Sequential latent variable models for few-shot high-dimensional time-series forecasting.\" The Eleventh International Conference on Learning Representations. 2023.\n\n**Figure to compare task-aware and task-relational replays**\n\nWe appreciate the suggestion for adding this visualization. In Appendix B.3, dedicated to Task-Aware vs. Task-Relational comparisons, we have added two additional figures comparing them on both mixed-physics and gravity-6.\"}", "{\"comment\": \"Hello,\n\nThank you for addressing all the points and the additions to appendix C.\n\nPlease make a real effort in addressing point 1 on the presentation, figures 15-19 of said appendix would be a good start.\n\nThe added results on the task boundaries are very interesting, however the std of the actual data points in F.23 & 24 is very large. I am not sure they are small enough to make clear conclusions. If your pipeline is set up, running a few more seeds would be quite beneficial.\", \"almost_missed_this\": \"you have a figure mis-reference at the start of paragraph 3 of C.8. 
should be 23&24 if I'm not mistaken.\"}", "{\"title\": \"Response to Reviewer JYok\", \"comment\": \"**Presentation of results and figures**\n\nWe agree that some of the figures\u2019 presentation could be improved. We plan to continue to improve the quality of the figures. \n\n**Real world datasets**\n\nWe acknowledge the need for future work to extend CoSFan to real-world datasets, although the benchmark data we considered are quite representative of what is being used in the current literature of latent dynamic modeling. One challenge of identifying appropriate real-world datasets is that we are looking for time-series of high-dimensional data (e.g., image sequences) where the dynamics being learned should be in the latent space (not directly in the low-dimensional data space, as in many available real-world time-series datasets). \n\n**Comparison on CML performance trade-offs**\n\nWe did show improvements in both accuracy and speed by CoSFan over MAML-based approaches, and provided further training performance comparisons to BGD-based baselines in Appendix B.2. A particularly notable performance trade-off between gradient-based methods and the feed-forward methods is that, when bi-level meta-optimization is enabled via the Task-Aware Reservoir Sampler, the gradient-based methods face significant slowdowns in the face of multiple task IDs as they must sequentially process the per-task losses before aggregation into the meta-loss. We show that feed-forward methods are agnostic to the number of dynamics present within a batch and are easy to parallelize. We would like to hear from the reviewer any additional suggestions to further improve these results and discussion.\n\n**Computational cost of GMM clustering/Task-Relational replay**\n\nWe appreciate the reviewer\u2019s feedback regarding the lack of documentation on computational costs and agree that this is an important aspect to address. 
In response, we have added Appendix C.2, which provides analysis of memory and processing requirements across a range of reservoir sizes and increasing numbers of unique tasks. Additionally, we include a Time-to-Train comparison between the Task-Aware and Task-Relational mechanisms. Our findings show that the Gaussian Mixture Model (GMM) exhibits favorable scalability in both memory and processing time. The memory requirements for fitting the GMM on the meta-embeddings are minimal, largely due to the benefit of embedding meta-knowledge into a lower-dimensional space. \\n\\n**Gradual task shift experiments**\\n\\nWe agree that exploring the stated limitation of detectable task shifts is an important aspect to include within the work. Please refer to our overall response 1 where we describe an ablation study and discussion over blurred task boundaries that was added to the work.\"}", "{\"title\": \"Summary of major additions Pt. 3\", \"comment\": \"**5. Presentation of results and figures.**\\n\\nWe have completed a full overhaul of all the figures in the paper, excluding the data figures in Appendix D. For the revised figures, we increased font sizes, bolded all text, and converted them to a higher-quality SVG format where we could. A key improvement is in Figure 5, which has been remade for better readability, with an updated legend and caption to better distinguish between panels 5A and 5B. Additionally, Figures 15\\u201319 have been split into two separate horizontal figures for DST and MSE, improving readability and allowing for clearer comparisons between LP and RP metrics.\"}", "{\"title\": \"Response to Reviewer i4xk Pt. 1\", \"comment\": \"**Comparison to additional meta-learners**\\n\\nWe appreciate the reviewer\\u2019s suggestion that incorporating a comparison with sequence learners, such as Transformers, could enhance the robustness of the meta-learner baselines. 
Based on relevant work [1, 2], we have identified two promising directions to include sequence learners in our evaluation and have added a Related Works section dedicated to alternative algorithmic priors to consider, including sequence learners. Chen et al. [1] propose a transformer-based meta-learner that combines a shared initial parameter set with available data tokens to adapt the weights, which would be conceptually similar to gradient-based meta-learner adaptation. Vladymyrov et al. [2] present a recent CML approach using a Transformer as a hyper-network to generate target network weights based on the context set. In their work, the generated weights of the previous task are used as parameter tokens for the current task\u2019s weights, alongside active task samples. They omit the use of a replay buffer, using only the weights updated over time on active samples. While their methodology primarily focuses on image classification, adapting their approach to our latent dynamics setting is an interesting direction for future exploration. We have added this discussion in Appendix A with a reference to it in the conclusion of the main text.\n\nWe refer to the overall response 2 regarding the feed-forward meta-learner and additional reasons for choosing the hyper-network specifically. Additionally, we would like to note a key advantage of our current hyper-network architecture: its parameter efficiency, as it generates the target network\u2019s weights from a low-dimensional context embedding. Given the well-documented scaling limitations of Transformers, applying them to our setting of high-dimensional time-series may pose significant computational challenges, particularly when generating full target network parameter sets. While we have begun implementing these baselines, adapting these architectures to the forecasting setting is non-trivial. 
Due to the complexity of this adaptation and the rebuttal timeline, we may not be able to provide results within this review period.\", \"references\": \"Chen, Yinbo, and Xiaolong Wang. \"Transformers as meta-learners for implicit neural representations.\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\nVladymyrov, Max, Andrey Zhmoginov, and Mark Sandler. \"Continual HyperTransformer: A Meta-Learner for Continual Few-Shot Learning.\" Transactions on Machine Learning Research.\n\n**Explanation of s in T^s_j**\n\n$s$ refers to the context set for meta-learning. To support clarity in the text, we have added an explanation to the main text.\n\n**Line 216, variable z**\n\n$z$ refers to the latent space in which the dynamics function forecasts, and to and from which the encoder/decoders embed. Section 3 Problem Formulation details what $z$ represents as well as what the equation $\mathbf{z}_t = f_\theta(\mathbf{z}_{<t}; \mathbf{c}(\mathcal{T}_j))$ represents.\n\n**Equation 3, variable l**\n\nVariable $l$ refers to the length of the observation subsequence $\mathbf{x}^q_{j,0:l}$ available to the initial state encoder, derived from the full ground truth sequence $\mathbf{x}^q_{j,0:T}$. We recognize that the definition of $l$ is missing from the text and have added an explanation. Thank you for the catch.\n\n**Exact Replay metrics**\n\nThank you for the astute observation. Based on this remark, we re-evaluated the exact-replay implementation for all baselines and identified an issue in metric reporting, which led to incorrect results. We have rerun all baselines on exact-replay and have updated figures and tables showing that, as expected, exact-replay performs the best overall.\n\n**Table 1 updates**\n\nWe have added a description of what MAML-1 and MAML-5 represent in the table caption, as well as adding the unit of each metric to the table itself. 
We define the TTA-1 and TTA-12 metrics within the relevant metrics section; given the space constraints on elaborating them in the caption, we hope this is sufficiently clear.\"}", "{\"metareview\": \"Reviewers remarked positively about the proposed approach, agreeing that the method is effective in mitigating forgetting when task identification is successful, remarking on the clear use case for the task relational replay and identifying the faster adaptation time as a major advantage over gradient-based methods. Apart from several smaller items mentioned by reviewer JYok, the submission was generally perceived to be clearly written and considered strong in its empirical methodology.\n\nOn the downside, questions remain about the scaling of the proposed hypernetwork-based approach with the number of heterogeneous tasks and the applicability of the setup to problems with gradual shifting instead of abrupt task changes (a condition in which the task detection mechanism will struggle, as confirmed by the authors). Results could be further improved by moving beyond synthetic datasets.\n\nOn balance, this submission is above the acceptance threshold, with one reviewer providing very strong feedback in favour of the submission. This submission should thus be accepted for publication at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"Healthy reviewer discussion apart from Reviewer U6Tv, who did not respond to the rebuttal. As a result, I slightly down-weighted their criticism in my overall assessment.\"}", "{\"title\": \"Reminder to Reviewer U6Tv\", \"comment\": \"Dear Reviewer U6Tv,\n\nThank you again for your time and efforts in reviewing our paper. As the deadline for discussion is approaching, we hope that you had time to review our revised manuscript and responses. 
We would like to follow up to see if you have additional comments for further discussion.\n\nBest regards,\n\nAuthors\"}", "{\"summary\": \"This paper proposed a new method for continual meta-learning (CML). Different from previous methods (e.g. MAML variants), the proposed method uses a single feed-forward network. Using the context encoder and hyper network, the support samples are encoded into a context vector, and the context vector is used to produce the parameters for encoded query samples using the hyper network. Furthermore, the authors also proposed a task-aware reservoir sampling approach by using the Gaussian mixture model to identify the tasks. In the experiment section, the proposed method outperforms the gradient-based meta-learners, and the authors also show the effectiveness of using the task-aware sampling strategy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Strengths\n\n1. Different from the gradient-based methods, which need a number of gradient steps to adapt, the proposed method can adapt to novel tasks using a single feed-forward network.\", \"weaknesses\": \"Weaknesses\n\n1. I think the motivation behind each component is weak. For example, for adapting to the query samples, CoSFan uses the hyper network to produce the parameters. However, why should we use the hyper network framework in this feed-forward network? Isn't it possible to use other trainable embedding networks for the query samples?\n\n2. The mechanism for detecting the task boundary is not novel. The detection mechanism simply comes from the methods in [1]. However, though the authors show that the advantage of CoSFan over other baselines lies in the ability to detect the task boundary without any assumptions on the task identifiers, I think other baselines can adopt the task boundary detection mechanism used in this paper with simple modification. 
Therefore, I don't think the task-boundary detection ability of CoSFan is an advantage compared to other methods.\n\n3. The overall notation is confusing. For example, in line 217~220, what is the meaning of superscript s in $T_j$? Is it the support set? I think it is not clear. Furthermore, in the definition of $T_j$, the internal sequences are not dependent on $j$. I think the authors should clarify all the notations to prevent confusion.\n\n\n[1] Caccia et al., Online Fast Adaptation and Knowledge Accumulation (OSAKA): A New Approach to Continual Learning, NeurIPS, 2020\", \"questions\": \"Already mentioned in weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"The paper introduces CoSFan, a continual meta-learning framework that adapts to changing dynamics in high-dimensional time-series data. By using a novel \"what-how & when\" model, it detects task shifts and quickly adjusts to new tasks, minimizing catastrophic forgetting. The claim is that CoSFan outperforms existing methods in accuracy and adaptability across non-stationary data distributions.\", \"### Contributions\", \"Novel Continual Meta-Learning (CML) Framework: CoSFan, designed for continual adaptation of latent dynamics in time-series forecasting. 
It combines slow and fast adaptation mechanisms.\", \"What-How & When Framework:\", \"What-How Meta-Model: Quickly adapts to specific tasks by identifying system dynamics (what) and generating task-specific parameters (how) using a feed-forward hyper-network.\", \"Automatic Task Shift Detection: Identifies task boundaries to update the model for non-stationary distributions.\", \"Experience Replay Mechanisms:\", \"Task-Aware Reservoir Sampling: Uses boundary detection to pseudo-label tasks for better context-query pairing.\", \"Task-Relational Experience Replay: Clusters tasks using Bayesian Gaussian Mixture Models to handle frequent transitions and maintain rare tasks.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"Well written paper.\", \"Effectively combines feed-forward meta-learning with task-aware adaptation, addressing limitations in prior CML approaches reliant on gradient-based updates.\", \"Uses task-relational experience replay and GMM clustering to manage task transitions, a creative extension for handling non-stationary data.\", \"Robust empirical results validate CoSFan\\u2019s advantages, demonstrating reduced catastrophic forgetting and better adaptability than existing methods.\", \"Methodology is rigorous with well-defined metrics and detailed evaluation, covering key aspects of adaptation speed, memory usage, and performance retention.\", \"Generally clear presentation of the framework and methodology, with a structured breakdown of components (\\\"what-how & when\\\") aiding comprehension.\", \"Potential to influence further work in both CML frameworks and high-dimensional latent dynamics forecasting, with broader implications for adaptive AI in dynamic settings.\", \"The use of 5 seeds with average and standard deviation for all experimental results is greatly appreciated and confirms that these results are not simply a lucky seed; although the std remains very high on certain experiments.\"], 
\"weaknesses\": [\"There is work to be done on the presentation of the figures (too small, need better explanations/more exhaustive captions, not clear/to many overlapping lines...)\", \"---\", \"Limited to synthetic datasets (e.g., bouncing balls, Hamiltonian systems); adding real-world datasets (e.g., financial time series, climate data) would strengthen claims of applicability.\", \"(Weakness acknowledged by the authors in limitations)\", \"---\", \"The comparison with prior CML methods, especially MAML-based and task-agnostic approaches, lacks deeper discussion on specific performance trade-offs (e.g., speed vs. accuracy in adaptation).\", \"The added computational cost of GMM clustering and task-relational replay is not well-documented; detailing memory and processing requirements would help assess scalability.\", \"---\", \"Assumes detectable task shifts and local stationarity; the model\\u2019s robustness to gradual or overlapping task shifts is not explored, limiting the generalizability to more fluid non-stationary environments.\", \"A quantification of how slow is to slow for the task shift to be picked up by the model would be interesting.\", \"Suggested improvements: include experiments with blurred task boundaries or transitions to assess flexibility in ambiguous scenarios.\"], \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper introduces CoSFan, a method for meta continual learning that the paper evaluates on sequences of dynamical systems tasks. The approach consists of a hyper network based approach where the parameters of the current task being adapted to are generated using a forward pass of the hypernetwork on a context vector created using the average encoding of the samples in a context set. 
The generated parameters are used in a dynamics model that predicts the latent of the next element in the sequence and is then decoded. The meta-parameters and the encoder/decoder are optimized using the MSE between predicted and ground truth query sequences. The paper also explores how to detect task boundaries to manage/balance the number of samples in a replay buffer being used to rehearse on previous tasks in the sequence. It proposes two different mechanisms: one where it looks at any spike in loss as a task change, and one where it uses a Gaussian mixture model to cluster examples and detect whether any new clusters have formed.\n\nThe paper evaluates on a series of image-based dynamical systems tasks where the physics/gravity constants are changed in each task, and the model must predict the next state. The paper shows a slight improvement in learning performance over other meta learning approaches, but shows a clear improvement in forgetting over other approaches when using both their meta learning approach and the task detection based replay.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Figure 5 shows a very clear use case for the task relational replay over even a task aware (and presumably task agnostic) replay.\", \"Figure 4 shows that the meta learning approach proposed by the paper helps mitigate forgetting when task detection is used.\", \"The method proposed can adapt to new sequences/new samples much faster than gradient based adaptation approaches, since they don\u2019t need to take any gradient steps.\"], \"weaknesses\": [\"The way Figures 2 and 4 are presented is a bit confusing. For one, they use different metrics, and it\u2019s unclear what the insight from using one over the other is. For figure 2, the red bar is the only meta-learned method, so the other 3 methods were presumably evaluated in a different setting. Were they given the \u201ccontext\u201d information? 
It seems the more direct comparison with baselines is in Figure 4.\", \"It\\u2019s unclear if the hypernetwork based approach can scale better than an adaptation based approach when the number of heterogeneous tasks increases.\", \"As the authors do mention, the task shift detection seems to rely on abrupt task shifts, and would likely fail on gradual task shifts. In the case where the task shift detection fails, and the default is to task agnostic reservoir sampling, this method seems to do slightly worse than other meta learning based approaches.\"], \"questions\": [\"For the experiments depicted in Figures 2 and 4, it seems each task is trained for the same number of samples. In this setting, why is there a big difference in the retained performance of specifically the meta learned methods between task agnostic and task aware replay? Wouldn\\u2019t the number of samples of each task still be approximately the same given that the replay size per task is still in the hundreds? Is the difference of a few samples really making that big of a difference?\", \"How does the hypernetwork based method scale with the number of heterogeneous tasks?\", \"I am a bit unclear about the setting. Is there a different context set sampled for each query example? Or is it just that the previous k sequences is treated as the context set?\", \"It would be interesting to see an adaptation based baseline which was also given the context vector as input.\", \"Do you have a version of Figure 3 comparing the Task aware replay with the task relational replay?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer GEod\", \"comment\": \"**Hypernetwork scaling**\\n\\nThank you for raising the point in terms of the adaptation capacity between a gradient adaptation baseline and a feedforward model-based adaptation method. 
While the latter offers the advantage of removing the need for gradient-based adaptation at test time, it is true that the feedforward model may encounter capacity limits as the number (or complexity) of the heterogeneous tasks increases. We have not yet observed this limit in our experiments, as demonstrated in our results, but we will take a deeper dive into this issue both theoretically and experimentally in our future work.\"}", "{\"title\": \"Response to Reviewer GEod Pt. 1\", \"comment\": \"**Clarification of metrics and intended results in Figures 2 and 4**\n\nWe used both MSE and DST across all experiments, as MSE measures averaged pixel-level accuracy while DST measures the average Euclidean distance between the predicted object\u2019s location and its ground truth. Due to space restrictions we are not able to report complete results from all metrics in all experiments in the main text \u2013 we thus decided to use DST as an example in Fig 2 and MSE in Fig 4; complete results on both metrics are in the Appendix. \n\nResults in Figure 2 and Figure 4 are intended to demonstrate different points. Results in Figure 2 (Section 5.2) are to show that both continual and meta methods are required simultaneously to achieve good performance. The non-meta models use CL strategies (except in naive learning) without context data to demonstrate the benefit of meta-learning (i.e., knowing how to adapt) in this setting. Because latent dynamic forecasting has not been studied in this setting of non-stationary and heterogeneous dynamics distributions, we felt that it is important to first demonstrate that both continual and meta components are essential in this setting. Once this is established in Section 5.2 / Figure 2, Figure 4 (and Section 5.3) then shows how the proposed continual and meta strategies improve over existing CML works (the baselines). 
\\n\\n**Task shift experiments**\\n\\nWe agree with the reviewer\\u2019s observation that the method performs slightly worse than MAML-based meta-learners in the task-agnostic setting. Please refer to our overall response 1 for an additional ablation study we performed on gradual task shift experiments and additional justification as to the advantages of Task-Aware replay over Task-Agnostic replay.\\n\\n**Differences between Task-Agnostic and Task-Aware methods**\\n\\nIndeed each task is trained with the same number of samples, and both Task-Agnostic and Task-Aware Replays maintain an approximately equivalent sample distribution across tasks during training, based on similar usage of the Reservoir Sampling algorithm.\\n\\nPlease refer to our overall response 3 for our clarification on the key difference between the Task-Aware methodology over the Task-Agnostic approach.\\n\\n**Hypernetwork scaling**\\n\\nThere is no required scaling with respect to the number of tasks in either the parameter or input size of the feed-forward hyper-network approach. It is more a matter of representational capacity of the network with respect to the complexity of the underlying task diversity and distribution. We argue that the algorithmic prior of this learned function transformation provides better scaling than gradient-based techniques, which have to share manifold capacity from an initial static point across the tasks. Computationally, the feed-forward adaptation approach benefits from being inherently agnostic to the number of heterogeneous tasks and efficiently parallelizing the adaptive forward pass across context sets. In contrast, gradient-based meta-learners require costly per-task test-time fine-tuning. 
This efficiency advantage is demonstrated in the adaptation efficiency comparison presented in Table 1.\\n\\n**Clarification on context-query pairing**\\n\\nFor the task which is actively streaming in T_j, the previous k sequences are treated as the context set for the active samples. However, when we sample from the reservoir to get past-task query samples for experience replay, we additionally sample a relevant context set from the reservoir based on their assigned pseudo-labels (obtained by our two task identification strategies)\"}" ] }
DkzZ1ooc7q
OmniSep: Unified Omni-Modality Sound Separation with Query-Mixup
[ "Xize Cheng", "Siqi Zheng", "Zehan Wang", "Minghui Fang", "Ziang Zhang", "Rongjie Huang", "Shengpeng Ji", "Jialong Zuo", "Tao Jin", "Zhou Zhao" ]
Query-based sound separation (QSS) effectively isolates sound signals that match the content of a given query, enhancing the understanding of audio data. However, most existing QSS methods rely on a single modality for separation, lacking the ability to fully leverage homologous but heterogeneous information across multiple modalities for the same sound signal. To address this limitation, we introduce Omni-modal Sound Separation (**OmniSep**), a novel framework capable of isolating clean soundtracks based on omni-modal queries, encompassing both single-modal and multi-modal composed queries. Specifically, we introduce the **Query-Mixup** strategy, which blends query features from different modalities during training. This enables OmniSep to optimize multiple modalities concurrently, effectively bringing all modalities under a unified framework for sound separation. We further enhance this flexibility by allowing queries to influence sound separation positively or negatively, facilitating the retention or removal of specific sounds as desired. Finally, OmniSep employs a retrieval-augmented approach known as **Query-Aug**, which enables open-vocabulary sound separation. Experimental evaluations on the MUSIC, VGGSOUND-CLEAN+, and MUSIC-CLEAN+ datasets demonstrate the effectiveness of OmniSep, achieving state-of-the-art performance in text-, image-, and audio-queried sound separation tasks. For samples and further information, please visit the demo page at \url{https://omnisep.github.io/}.
[ "sound separation", "composed query", "negative query" ]
Accept (Poster)
https://openreview.net/pdf?id=DkzZ1ooc7q
https://openreview.net/forum?id=DkzZ1ooc7q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uUrXj9M60z", "sY4ZsDPlCF", "pPhM6m6ij7", "oXH8qMm1a9", "nhzn74yG1M", "hXzfa2bz9x", "gjvZMVBvrd", "eKY0cfUrkR", "bvVPasmCFQ", "aFiG3fcDG7", "Zt00h0AQzj", "Vi2yquJxKR", "VMgapxNq9L", "UjEMNhOqXg", "UTBOV3s94S", "SU0Q4S74Hb", "P6kVQC5lIp", "O0LIexDisu", "DkKM0nHLtQ", "98nIHDtyqS", "7nKpj7aVEu", "69SdqfmRbW", "5ErzcX4QUg", "3p8mwHuixz", "2f8gRdO2L9" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733119176512, 1733085768070, 1733086757702, 1732578374369, 1732613178212, 1732697206609, 1732472344633, 1734600606253, 1732709650339, 1732547841345, 1733085154026, 1732472276609, 1732472600803, 1730713083000, 1730303183544, 1732708792179, 1737523688363, 1732546054269, 1732472737261, 1730146198907, 1732550324247, 1732472930482, 1730556484829, 1733085580811, 1732557634274 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_PPUR" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Area_Chair_uiHX" ], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_7ccM" ], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_nQk6" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" 
], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_7ccM" ], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_Mat5" ], [ "~Xubo_Liu1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_Mat5" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_nQk6" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_PPUR" ], [ "ICLR.cc/2025/Conference/Submission5163/Authors" ], [ "ICLR.cc/2025/Conference/Submission5163/Reviewer_PPUR" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your hard and diligent work on updating the paper to include more explanations and experiments. The methods are much clearer now, and the new experimental results add depth and robustness to the study. Updating the score to 6.\"}", "{\"title\": \"Further Response to Reviewer 7ccM\", \"comment\": \"Thank you once again for your thoughtful response. We are delighted to hear that most of your concerns have been resolved, and we deeply appreciate your recognition of our work. Allow us to provide further clarification on your remaining questions:\\n\\n**Q1-1: The difference between naive subtraction (**$Q{\\\\prime} = Q - \\\\alpha Q_N$**) and ours (**$Q{\\\\prime} = (1 + \\\\alpha) Q - \\\\alpha Q_N$**)**\\n\\n**A1-1:** While the mathematical difference between these two formulations may seem minimal, their impact on performance is significant. As highlighted in lines 240\\u2013243, our formulation ($Q{\\\\prime} = (1 + \\\\alpha) Q - \\\\alpha Q_N$) was carefully designed to address issues inherent to naive subtraction.\\n\\nSpecifically, as shown in Figure 3, ImageBind embeddings are projected into a unified space during training, where all embeddings are mapped to a unit vector space $e$. 
When $\\\\alpha$ > 1, naive subtraction ($Q{\\\\prime} = Q - \\\\alpha Q_N$) shifts the query embedding outside this unified vector space (resulting in the embedding space of $(1 - \\\\alpha)e$), causing performance degradation and instability.\\n\\nIn contrast, our formulation ($Q{\\\\prime} = (1 + \\\\alpha) Q - \\\\alpha Q_N$) ensures that the modified query embedding remains in the same vector space as the original query. This alignment prevents issues caused by spatial mismatch and leads to significantly improved stability and performance.\\n\\n**Q1-2: Determining the \\u201coptimal\\u201d weighting factor for specific queries**\\n\\n**A1-2:** Thank you for raising this important question. We have indeed considered how to reduce the \\u201cmanual effort\\u201d required during inference to determine the optimal weighting factor $\\\\alpha$.\\n\\nAs discussed in lines 406\\u2013417, naive subtraction ($Q{\\\\prime} = Q - \\\\alpha Q_N$) poses significant challenges in identifying the best $\\\\alpha$, due to the lack of robustness to variations in this parameter. In contrast, our approach demonstrates strong robustness to $\\\\alpha$, as performance remains stable for $\\\\alpha$ > 0.5, minimizing the need for extensive parameter tuning during inference.\\n\\nFrom the performance curve in Table 2, it is evident that when $\\\\alpha = 0.5$, it can be regarded as the optimal choice, delivering relatively strong performance and demonstrating robust consistency across different samples.\\n\\nThank you again for your valuable feedback and questions. We hope this response fully addresses your concerns and further enhances your confidence in our work. Please feel free to reach out if you have any additional comments or questions.\"}", "{\"title\": \"Follow-Up on Reviewer PPUR\\u2019s Comments\", \"comment\": \"Dear Reviewer PPUR,\\n\\nWe have further revised our paper to address your concerns and have included the corresponding experimental results. 
As the rebuttal phase is drawing to a close, we kindly request your feedback at your earliest convenience. We hope our revisions have satisfactorily addressed all your questions.\n\nBest regards,\nAuthors\"}", "{\"title\": \"Further Response to Reviewer PPUR\", \"comment\": \"Thank you for your follow-up response and for recognizing our efforts during the rebuttal phase. Your feedback has helped us identify that our description in this section may still be prone to misinterpretation. Please allow us to provide further clarification.\n\n**Q1: Definition of $i$**\n\n**A1**: For a mixed audio signal $A_{\\text{mix}}$ composed of $n$ audio sources $\\{A_1, A_2, \\cdots, A_n\\}$, each audio source $A_i$ is associated with a corresponding video $V_i$ and textual query $T_i$, forming $n$ triplets $\\{(A_1, V_1, T_1), \\cdots, (A_n, V_n, T_n)\\}$. The query $\\mathbf{Q}_i$ for sound separation is derived from the $i$-th triplet $(A_i, V_i, T_i)$. The predicted mask $\\hat{M}_i$ corresponds to the audio source $A_i$.\n\n**Q2: Definition of $j$**\", \"a2\": \"The spectrogram $X$ is fed into the audio U-Net to obtain $k$ intermediate masks $\\tilde{M}=\\{\\tilde{M}_1, \\cdots, \\tilde{M}_j, \\cdots, \\tilde{M}_k\\}$, where $\\tilde{M} \\in \\mathbb{R}^{k \\times F \\times T}$ and $\\tilde{M}_{j}$ is the $j$-th intermediate mask.\n\n**Q3: Dimensions of $Q_i$ and alignment of audio, visual, and text queries**\n\n**A3**: ImageBind extracts global semantic representations for different modalities. Unlike representations commonly used in speech separation tasks, such as AV-HuBERT or HuBERT, ImageBind\u2019s embeddings do not contain a temporal dimension. As mentioned on line 312, for video queries, we sample four frames at 1-second intervals and average their embeddings to obtain the image embedding $Q_v \\in \\mathbb{R}^{1024}$. 
For audio and text queries, ImageBind extracts the entire segment into an audio embedding $Q_a \\in \\mathbb{R}^{1024}$ and a text embedding $Q_t \\in \\mathbb{R}^{1024}$, respectively. After the weighted combination in Equation (1), $Q_i$ has the same dimension as the query embeddings for each modality, $Q_i \\in \\mathbb{R}^{1024}$.\n\n**Q4: Dimensions of $\\hat{M}_i$ and $\\tilde{M}$**\n\n**A4**: The mixed audio is first converted into the magnitude spectrum $X \\in \\mathbb{R}^{C \\times F \\times T}$ using the Short-Time Fourier Transform (STFT), where $C$ represents the number of channels, $F$ is the frequency dimension, and $T$ is the time dimension. For single-channel audio in this paper, $C = 1$. Using the audio U-Net, we extract $\\tilde{M}$, which contains $k$ intermediate masks, $\\tilde{M} \\in \\mathbb{R}^{k \\times F \\times T}$. The query embedding $Q_i$ is passed through a linear layer to compute weights for the $k$ intermediate masks, which are then combined with $\\tilde{M}$ to produce the predicted mask corresponding to $A_i$, $\\hat{M}_i \\in \\mathbb{R}^{C \\times F \\times T}$. Since the audio U-Net allows the time dimension $T$ to be of arbitrary length, our model can handle audio signals of any length.\n\n**We have updated this section in the latest version of the paper and look forward to your further feedback.** We hope the revisions in this version clarify the remaining ambiguities.\n\n---\n\nAdditionally, regarding the extra experimental results, while we have incorporated some of the reviewer-suggested experiments into the paper, there are still a few that have not yet been fully integrated. We are actively working to include these additional experiments in the final version. 
We will ensure that all updates are completed before the deadline. To provide a prompt response, the latest version update only includes revisions addressing these specific misunderstandings (lines 200\u2013211).\"}", "{\"title\": \"Appreciation to Reviewer nQk6 for Valuable Suggestions and Positive Comments\", \"comment\": \"Thank you for your thoughtful feedback, which has greatly enhanced the clarity and quality of our paper. The revised version now feels much more natural, and we sincerely appreciate your valuable input. If you have any additional questions or suggestions, please do not hesitate to reach out.\"}", "{\"title\": \"Latest Paper Version Uploaded\", \"comment\": \"We have updated the latest version of our paper, incorporating all the experiments conducted during the rebuttal period into the appendix. Additionally, we have addressed the definitions of parameters such as i and j, as highlighted in your previous feedback. We look forward to receiving your further comments and hope that this version resolves any potential misunderstandings.\n\nThank you again for your valuable feedback!\"}", "{\"title\": \"Response to Reviewer PPUR (1/N)\", \"comment\": \"Thank you for recognizing the significance of our work and the sufficiency of our experiments. Please allow us to address your questions in detail:\n\n**Q1: Clarity of the Technique**\n\n**A1**: Your meticulous reading is truly appreciated. Thanks to your thoughtful suggestions, we have added the details you mentioned in the latest version of the paper. We believe these improvements greatly enhance the presentation of our work.\n\n**Q2: Non-queried Sound Separation**\n\n**A2**: The PIT experimental results were derived from the baseline constructed by CLIPSEP [1]. Specifically, it is the prior research [2] that used PIT for sound separation. It is also noted in CLIPSEP that:\n\n> The PIT model requires a post-selection step to get the correct source. 
Without the post-selection step, the PIT model returns the right source in only 50% of cases.\\n> \\n\\nWe conducted similar tests using TDANET [3] and have included the results in the latest version of the paper to provide a more comprehensive comparison.\\n\\n**Q3: Effectiveness of Query-Mixup**\\n\\n**A3:** Among existing multi-modal query sound separation approaches [1,4], there are two main strategies: iterative modality training, as exemplified by CLIPSEP, and our proposed Query-Mixup approach. We present an ablation study on the query embedding operation strategies, demonstrating that our Query-Mixup strategy consistently improves performance across tasks:\\n\\n| | training strategy | Mean SDR(TQSS) | Mean SDR(IQSS) | Mean SDR(AQSS) |\\n| --- | --- | --- | --- | --- |\\n| E31 | iterative modality training | 6.37 | 6.53 | 6.97 |\\n| E32 | query-mixup | **6.70** | **6.69** | **7.12** |\\n\\n**Q4: Discussion on Phase-Based Methods**\\n\\n**A4**: Indeed, these two works [5,6] have achieved state-of-the-art performance in speech separation. I have read these studies before and am impressed by their remarkable performance in terms of both accuracy and model scale. In this work, however, our primary focus is on investigating multi-modal collaboration between different query modalities and the effect of composed queries from omni modalities on sound separation performance. Therefore, phase information was not included in this study. Nevertheless, I am eager to explore similar methods in our future work.\"}", "{\"metareview\": \"This paper presents a novel system for query-based sound separation, called OmniSep, that accommodates multiple query modalities (text, image, and audio), either independently or in combination. 
Technical contributions consist of Query-Mixup for simultaneous multi-modal query processing, which is also supported by a negative query mechanism for sound suppression at inference, and Query-Aug for handling natural language descriptions beyond naive class labels, thus enabling open-vocabulary sound separation.\nExperimental validation on several datasets demonstrates OmniSep's increased performance as compared to existing (single-modality) methods across all query modalities. \n\nThis work is appreciated in several respects, including the interesting task addressed, the technically sound and novel methodological contributions, and the experimental analysis provided.\n\nWeak aspects concern, in general, several requests to better address some specific parts of the methodology (for all three main contributions, but mainly for Query-Aug), and clarifications/revision of the experimental section. \n\nThe authors replied properly to all such comments and the initial scores (6, 5, 5, 5) became all positive (6, 6, 6, 6).\n\nTherefore, this paper can be considered acceptable for publication at ICLR 2025.\", \"additional_comments_on_reviewer_discussion\": \"See above\"}", "{\"comment\": \"Thank you to the authors for their thoughtful responses and the effort put into the rebuttal. Based on the rebuttal and the points raised by other reviewers, I believe the paper addresses most of the concerns effectively and makes a valuable contribution to this field of research. Therefore, I would like to maintain my recommendation towards accepting the paper.\n\nHowever, one of my initial points regarding the weighting factor for negative queries was not fully addressed in the rebuttal. I would appreciate further clarification from the authors regarding their insights into Figure 2. On top of that, I'm curious if there is an \\\"optimal\\\" weighting factor for certain types of queries? 
Could this value be deterministically derived during the inference phase to simplify the use of negative queries, rather than relying on trial-and-error approaches? Providing such insights could significantly enhance the usability and practical implementation of the proposed method.\"}", "{\"comment\": \"Thanks for your response and efforts to address all my concerns. Now the story looks more natural. I'm happy to raise the score to 6\"}", "{\"title\": \"Response to Public Comment of Xubo (1/N)\", \"comment\": \"Thank you for your interest in our work. AudioSep, WavCaps, and similar projects have introduced a new wave of progress in sound separation by expanding datasets to effectively achieve open-vocabulary sound separation. We sincerely respect and appreciate your contributions. Below, allow us to address your questions/concerns in detail:\\n\\n**Q1: Comparison with AudioSep**\\n\\n**A1**: Thank you for your inquiry. Please understand that we did not directly compare OmniSep\\u2019s performance with AudioSep for the following reasons:\\n\\n1. Differences in training configurations:\\n \\n AudioSep processes 32kHz data during training, with a signal-to-noise ratio (SNR) range of -10 to 10 dB, resulting in clean and controlled data, consistent with its proposed benchmark settings (the snr in test set is 0dB). In contrast, OmniSep follows the CLIPSep configuration, processing 16kHz data without imposing additional SNR range restrictions on the audio. When testing AudioSep\\u2019s official model on the VGGSOUND-clean test set, we observed suboptimal performance because VGGSOUND-clean includes samples that fall outside the SNR range used during AudioSep\\u2019s training. Consequently, we retrained AudioSep on the VGGSOUND dataset to ensure fair evaluation. 
**Apparently, there exists a degree of domain shift between the training and test distributions for OmniSep and AudioSep, making it challenging to compare the two models fairly on a unified test set.**\n \n *Table R1. Comparison on VGGSOUND-clean (TQSS)*\n \n | Method | SDR | SDRi | SI-SDR |\n | --- | --- | --- | --- |\n | AudioSep (Official Version) | 3.04 | 3.54 | 4.49 |\n | OmniSep | **6.70** | **6.64** | **5.53** |\n2. Discrepancy in training data size:\n \n As you noted, AudioSep\u2019s key contribution lies in scaling up to a large dataset (14,000 hours). OmniSep, in contrast, was trained solely on VGGSOUND (550 hours). This significant difference in training data size naturally impacts both separation performance and open-vocabulary capability. In the future, we plan to augment OmniSep\u2019s training data using WavCaps. However, we refrained from including WavCaps data in this version to maintain fairness when comparing OmniSep with other IQSS baselines.\n \n\nTo meet your expectations regarding performance comparisons, we have included results for AudioSep + Query-Aug in the next question (Q2). This demonstrates the effectiveness of our proposed method in enhancing open-vocabulary performance.\"}", "{\"title\": \"Response to Reviewer 7ccM\", \"comment\": \"Thank you for recognizing our framework and ablation experiments! Below are our detailed responses to your questions:\n\n**Q1: In AQSS, is the raw audio used as a query? What does $S=5$ mean in AQSS?**\n\n**A1**: In the AQSS setup of this paper, we use audio samples that share the same class identity as the target audio as queries. We apologize for any confusion caused by insufficient description. This approach ensures that no audio information leakage occurs during the experiments. 
During inference, for VGGSOUND, we select 5 audio features from the category of the target audio and use their averaged feature as the audio query feature for that category.\\n\\n**Q2: How does OmniSep respond to a query that does not exist in the audio?**\\n\\n**A2:** When the query corresponds to content not present in the audio, the model is supposed to output silence.\\n\\n**Q3: Results for AQSS in Table 2?**\\n\\n**A3:** Due to space limitations, this part of the experiment was not included in the previous version of the paper. However, we believe the experiments on TQSS and IQSS already sufficiently support the conclusions of the section. Here, we provide the detailed results for AQSS:\\n\\n| Audio | Text | Image | MixUP | Mean SDR(AQSS) | Med SDR(AQSS) |\\n| --- | --- | --- | --- | --- | --- |\\n| \\u2714\\ufe0f | | | | 5.79\\u00b10.78 | 5.19 |\\n| \\u2714\\ufe0f | \\u2714\\ufe0f | | | 6.67\\u00b10.71 | 5.38 |\\n| \\u2714\\ufe0f | \\u2714\\ufe0f | \\u2714\\ufe0f | | 6.97\\u00b10.66 | 5.40 |\\n| \\u2714\\ufe0f | \\u2714\\ufe0f | \\u2714\\ufe0f | \\u2714\\ufe0f | **7.12\\u00b10.65** | **5.45** |\\n\\n**Q4: Query-Aug adaptability?**\\n\\n**A4:** Yes, Query-Aug can be adapted to any modality, even modalities that have never been trained, such as 3D. For scenarios with multiple sound sources, one feasible approach is to use Query-Aug to retrieve\\u00a0 potential queries based on a threshold, and then perform separation accordingly.\\n\\n**Q5: Regarding the use of ImageBind?**\\n\\n**A5:** We use the pretrained ImageBind model. This is explicitly highlighted again at line 193 of the paper to emphasize our reliance on the ImageBind repository.\\n\\nWe hope these responses address your concerns clearly! 
Please feel free to reach out with any additional questions.\"}", "{\"title\": \"Response to Reviewer PPUR (2/N, N=2)\", \"comment\": \"**Q5: Use of ImageBind**\n\n**A5**: In many existing works [7], researchers have observed that representations in models like CLIP and ImageBind are already well-aligned, allowing a simple linear mapping layer to suffice for cross-modal mapping. In this work, the focus of Query-Mixup is to enable the model to process inputs from three modalities simultaneously without mutual interference. For comparison, we conducted the following experiments:\n\n- E50: Results from a randomly initialized model trained with a learning rate of 1e-5.\n- E51: Results from fine-tuning a pretrained ImageBind model.\n- E52: Results from freezing ImageBind and fine-tuning only a linear mapping layer.\n\nOur experiments indicate that without a pretrained model, performance drops significantly, as the model cannot effectively extract features or align representations across modalities. While fine-tuning ImageBind (E51) slightly improves in-domain performance, it hinders generalization to out-of-domain data, resulting in a 1.94 SDR drop on the MUSIC test set. 
Freezing ImageBind (E52) and using a linear layer provides a balance between performance and generalization.\n\n***Experiments on VGGSOUND-clean***\n\n| | pretrained | imagebind | Mean SDR(TQSS) | Mean SDR(IQSS) | Mean SDR(AQSS) |\n| --- | --- | --- | --- | --- | --- |\n| E50 | \u2718 | tuning | 2.33 | 2.31 | 1.94 |\n| E51 | \u2714\ufe0f | tuning | 6.81 | 6.73 | 7.22 |\n| E52 | \u2714\ufe0f | freeze | **6.70** | **6.69** | **7.12** |\n\n***Experiments on MUSIC***\n\n| | pretrained | imagebind | Mean SDR(TQSS on MUSIC) |\n| --- | --- | --- | --- |\n| E51 | \u2714\ufe0f | tuning | 4.76 |\n| E52 | \u2714\ufe0f | freeze | **6.70** |\n\n**Q6: Discussion on Negative Queries**\n\n**A6**: When constructing the final query features with negative queries, it is crucial not only to ensure stability but also to remove the information corresponding to the negative query from the original query. If we use $(1-\\alpha)\\mathbf{Q} + \\alpha\\mathbf{Q}_N$, both the negative query and the positive query content are retained, which contradicts the goal of using negative queries. On the other hand, directly subtracting ($\\mathbf{Q} - \\alpha\\mathbf{Q}_N$) can lead to instability and difficulty in choosing the coefficient $\\alpha$, as discussed in Section 4.2 and Figure 2 of the paper. Below, we provide the results for three methods with $\\alpha=0.5$ for a more intuitive comparison:\n\n| | query embedding | TQSS | IQSS | AQSS |\n| --- | --- | --- | --- | --- |\n| E61 | $(1-\\alpha)\\mathbf{Q}+\\alpha \\mathbf{Q}_N$ | 3.96 | 3.23 | 3.77 |\n| E62 | $\\mathbf{Q}-\\alpha \\mathbf{Q}_N$ | 6.77 | 6.92 | 6.65 |\n| E63 | $(1+\\alpha)\\mathbf{Q}-\\alpha \\mathbf{Q}_N$ | **7.57** | **7.68** | **7.22** |\n\nWe hope this addresses all your questions clearly! Please feel free to reach out with further queries.\n\n[1] Dong H W, Takahashi N, Mitsufuji Y, et al. 
Clipsep: Learning text-queried sound separation with noisy unlabeled videos. ICLR 2022.\n\n[2] Kavalerov I, Wisdom S, Erdogan H, et al. Universal sound separation. ICASSP 2019.\n\n[3] Li K, Yang R, Hu X. An efficient encoder-decoder architecture with top-down attention for speech separation. ICLR 2023.\n\n[4] Liu X, Kong Q, Zhao Y, et al. Separate anything you describe. arXiv 2023.\n\n[5] Pegg S, Li K, Hu X. RTFS-Net: Recurrent time-frequency modelling for efficient audio-visual speech separation. ICLR 2024.\n\n[6] Wang Z Q, Cornell S, Choi S, et al. TF-GridNet: Making time-frequency domain models great again for monaural speaker separation. ICASSP 2023.\n\n[7] Wang Z, Zhang Z, Cheng X, et al. FreeBind: Free Lunch in Unified Multimodal Space via Knowledge Fusion. ICML 2024.\"}", "{\"summary\": \"This paper presents OmniSep, a framework for omni-modal sound separation, which supports sound isolation using queries from multiple modalities, such as text, image, and audio, either independently or in combination. The key method of omni-modal source separation is the introduction of Query-Mixup, a strategy that mixes query features from different modalities, using a pre-trained ImageBind. OmniSep further enables open-vocabulary sound separation with Query-Aug, a retrieval-augmented method that enhances adaptability, particularly for text-based queries. Experimental evaluations on MUSIC, VGGSOUND-CLEAN+, and MUSIC-CLEAN+ datasets showcase OmniSep\u2019s SOTA performance across various modality-based separation tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents an omni-modal sound separation approach, allowing users to conduct sound separation tasks using queries across various modalities, including text, image, and audio, both independently and jointly. 
Additionally, the authors conducted a comprehensive evaluation across several benchmarks. Further, the inclusion of extensive ablation studies provides insight into the contributions of each component: impact of Query-Mixup, negative query weighting, long text description-queried separation, and analysis on query embeddings.\", \"weaknesses\": \"While the paper presents a solid framework, there are some details missing that make it difficult to give a firm acceptance (please answer Questions).\\n\\nThe use of full-length video as a query, which includes the target segment, raises questions about the potential for information leakage. It\\u2019s unclear if the QueryNet architecture ensures a bottleneck to prevent the model from \\u201ccheating\\u201d by directly accessing target audio features. To better demonstrate the model\\u2019s robustness, an alternative setup might involve using a different video as the query that shares the same class identity as the input audio.\\n\\nAdditionally, the comparison in Section 4.3 between the proposed weighting and a naive weighting approach on $Q_N$ may not be particularly insightful, as both methods use the same model. The proposed approach always has a weight difference of 1 ($1+\\\\alpha$ vs. $\\\\alpha$), while the comparison (naive) approach reduces the weight gap between $Q$ and $Q_N$, hence the results in Figure 2 are somewhat predictable. However, the insights provided by varying $\\\\alpha$ are valuable.\", \"questions\": [\"What happens when the query is some class that\\u2019s not present inside the input mixture audio?\", \"Table 2: why no results on AQSS?\", \"Query-Aug Adaptability:\", \"Is the Query-Aug method adaptable only to text queries $Q_T$, or does it extend to other modalities as well?\", \"Additionally, how will the model handle text prompts containing multiple sound sources (e.g., \\u201cthe sound of a baby with her parent\\u2019s soothing voice\\u201d)? 
Is the model trained to handle multi-source separation?\", \"Number of Audio Sources in Mixtures: For AQSS VGGSOUND, does $\\\\mathcal{S}=5$ mean $A_\\\\text{mix}=\\\\sum_{n=1}^6A_n$? Then, for all other setups, is $A_\\\\text{mix}=A_1+A_2$?\", \"Did the authors use the pre-trained ImageBind weights from the official repository (https://github.com/facebookresearch/ImageBind), or did they train ImageBind from scratch? They should mention the repository if pre-trained weights were used. If trained from scratch, they should include the details of the pre-training specifications.\", \"How does OmniSep respond when a query references a class that is not present within the input audio mixture?\", \"Why are there no AQSS results reported in Table 2?\", \"Minor comments\", \"typo @Figure 1: \\u201c\\u2026, donated as $\\\\text{Q}_T$, \\u2026\\u201d\", \"Figure 1: inside the Query-Mixup block, the weight factors are denoted as $W$, where it should be $w$\", \"grammar @line 217: \\u201cThe training \\u2026\\u201d\", \"Table 1: should note the source of VGGSOUND-CLEAN+ and MUSIC-CLEAN+ subsets. Also provide a couple of sentences for details of these subsets.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an audio source separation model that can be queried via audio, language and image. To construct such a model, this paper proposes query mixup, which blends queries from different modalities during training and ensures queries from different modalities reside in the same semantic space. This paper also proposes a negative query mechanism to enhance the flexibility of queries at inference time. Last, to enable open-vocabulary text query input to a model trained on closed-set class names, this paper proposes a query-augmentation method at inference time which retrieves the nearest class name.
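The query-augmentation step just summarized — replacing an open-vocabulary description feature with the closest in-domain class-name feature — is essentially nearest-neighbour retrieval. A minimal sketch, using cosine similarity over a toy query bank (all names, vectors, and dimensions here are illustrative assumptions, not the paper's implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def query_aug(q_des, query_set):
    """Return the in-domain query feature most similar to the
    open-vocabulary description feature q_des (Query-Aug retrieval)."""
    return max(query_set, key=lambda q: cosine(q_des, q))

# Toy in-domain bank: embeddings of known class names (illustrative only).
bank = {
    "violin":   [0.9, 0.1, 0.0],
    "dog bark": [0.1, 0.9, 0.1],
    "engine":   [0.0, 0.2, 0.9],
}
# Feature of a free-form description, close to "violin" in this toy space.
q_des = [0.8, 0.2, 0.1]
nearest = query_aug(q_des, list(bank.values()))
```

At inference, the retrieved in-domain feature would stand in for the raw description feature before querying the separator, which is why performance on unseen phrasings becomes more stable.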
Experimental results show the model outperforms existing models in all modalities on MUSIC, VGGSOUND-CLEAN+, and MUSIC-CLEAN+ datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. A novel model for source separation queried by audio, text and image. Proposes a novel query-mixup method to enable such a model.\\n2. Proposes novel negative query and query-aug methods to improve performance and flexibility at inference time.\\n3. State-of-the-art performance on separation tasks queried by all modalities\", \"weaknesses\": \"1. Presentation: Certain aspects of the paper\\u2019s presentation feel oversimplified, particularly in explaining key contributions and technical details. Please refer to the questions section for specific areas needing clarification.\\n\\n2. Contribution of Query-Aug: The significance of the query-augmentation (query-aug) contribution seems overstated. For example, lines 054\\u2013056 suggest that current systems cannot handle open-vocabulary queries. However, Liu et al. (2023) (\\\"Separate Anything You Describe\\\") demonstrates that open-vocabulary language queries are achievable using audio-text datasets, such as Clotho or AudioCaps. Furthermore, the model proposed in this paper should be capable of training on these audio-text datasets and potentially on datasets lacking complete audio, text, or image pairs. An expanded discussion comparing this work to Liu et al. (2023), along with a rationale for the choice of datasets, would strengthen the contribution.\\n\\n3. Experimental Results: The presentation of experimental results lacks some necessary detail:\\n - What training settings were used for the models in Table 2, and how do these models (particularly model #5) correspond to those in Table 1?\\n - For Table 4, a comparison of results from querying the same audio mixture across different modalities would provide valuable insights.\\n\\n4.
Metrics for Comparison: In prior works cited in the paper, metrics such as SI-SDR and SDRi are commonly used for source separation tasks, as they better reflect model performance in recent studies. Including these metrics or providing justification for the metrics chosen would enhance the comparative analysis.\", \"questions\": \"The introduction of the Separate-Net is missing some details:\\n1. Line 210: what is k and what does k correspond to? \\n2. Is the query conditioned on the U-Net? If not, why is the query not conditioned on the U-Net? If possible, it would be helpful to visualize the $q_i$ in some examples. It feels odd to me that the query only results in a channel-wise weight.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\\n\\nThe concept of \\u201cOmniSep\\u201d has been on my mind since developing the LASS-Net and AudioSep model series. Thank you for your contributions to multi-modal audio source separation and for making it work well.\\n\\nAfter reading your paper, I have a few questions/concerns, particularly related to the experimental setup and the open-vocabulary separation claim:\\n\\n1. **Pre-Trained AudioSep Comparison**: I\\u2019d be interested to see the performance of pre-trained AudioSep compared directly with the baseline and OmniSep, rather than reproducing AudioSep using CLIPSep's dataset partitioning. The key contribution of AudioSep is scaling training to a large dataset and generalizing well in open-domain sound separation with language queries.\\n\\n2. **Diverse Evaluation Sets**: To strengthen your claim in L53-56, I suggest conducting experiments on more diverse evaluation sets. AudioSep provides a dedicated test benchmark (https://drive.google.com/drive/folders/1PbCsuvdrzwAZZ_fwIzF0PeVGZkTk0-kL) that may be useful here.
In my opinion, datasets like VGGSound and MUSIC are relatively small, and training on such datasets and evaluating on the same domain can often lead to better performance metrics; such models may outperform those pre-trained on general datasets, but lack the ability to generalize effectively across domains. For example, in my experience, fine-tuning AudioSep on the MUSIC dataset enabled me to achieve an SDRi of 18 dB, whereas pretraining results were around 9 dB.\\n3. **Effectiveness on DCASE 2024 Task 9 Datasets**: In DCASE 2024 Task 9, we created several synthetic and real datasets for evaluation with open-domain text queries. I believe these datasets provide a good test set for OmniSep, and demonstrating OmniSep\\u2019s effectiveness on these datasets compared to AudioSep and other baselines could further substantiate your claims on open-vocabulary sound separation. DCASE 2024 Task 9 Evaluation set: https://zenodo.org/records/11425256\\n4. **Correction on Table 9**: In Table 9, it is suggested that AudioSep is unable to perform AQSS or IQSS. However, since AudioSep is trained with CLIP/CLAP text encoders (the AudioSep-CLIP variant has not been released), it can perform AQSS/IQSS, though its performance may not be as robust as when using text queries.\\n\\nI appreciate your time and consideration in addressing these questions and look forward to your insights.\\n\\nBest regards,\\n\\nXubo Liu\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you to the authors for providing a detailed response and updating the experiment results. Your clarifications have addressed my concerns effectively. As a result, I have updated my score to 6.\"}", "{\"title\": \"Response to Reviewer nQk6\", \"comment\": \"Thank you for recognizing the contributions of our work, including the effectiveness of OmniSep and the rationality of the ImageBind embedding visualization analysis.
Please allow me to address your questions in detail:\\n\\n**Q1: Original Motivation for OmniSep**\\n\\n**A1**: Thank you for your suggestions regarding our paper. Our intention was to highlight the significant value sound separation could bring to the research community. Based on your feedback, we realized that our initial framing might have limited the perceived scope of sound separation\\u2019s potential impact. In the latest version, we have rewritten the introduction to better emphasize the importance of cross-modal query sound separation, particularly focusing on homologous but heterogeneous modalities.\\n\\n**Q2: Subjective Testing**\\n\\n**A2**: We conducted Mean Opinion Score (MOS) evaluations to compare the sound separation results of several major models. The results are as follows:\\n\\n| | MOS |\\n| --- | --- |\\n| CLIPSEP(T) | 3.36 |\\n| CLIPSEP(I) | 3.51 |\\n| AudioSep(T) | 3.85 |\\n| OmniSep(A) | 3.94 |\\n| OmniSep(I) | 3.83 |\\n| OmniSep(T) | 3.89 |\\n| OmniSep(I+A+T) | 4.01 |\\n| OmniSep(I+A+T)+negative query | 4.14 |\\n| Real audio | 4.32 |\\n\\nThese experimental results have been updated in Table 9 in Appendix C.3 of the latest version.\\n\\n**Q3: ImageBind Fine-Tuning vs. OmniSep**\\n\\n**A3**: In prior works, researchers have found that models like CLIP and ImageBind already achieve well-aligned representations, enabling effective cross-modal mapping with a simple linear layer. In this work, the key focus of Query-Mixup is to enable the model to process inputs from three modalities simultaneously, ensuring that performance across modalities remains unaffected.\\n\\nTo compare, we evaluated the performance of fine-tuning ImageBind combined with the iterative training strategy used in CLIPSEP against training OmniSep with Query-Mixup. The results are shown below. As observed in experiments E1 and E2, fine-tuning ImageBind yields only marginal improvements, with performance close to that of linear fine-tuning. 
However, when Query-Mixup is used, the model (E3) achieves significant performance gains across all single-modality tasks, highlighting the effectiveness of Query-Mixup.\\n\\n| | imagebind tuning | training strategy | Mean SDR(TQSS) | Mean SDR(IQSS) | Mean SDR(AQSS) |\\n| --- | --- | --- | --- | --- | --- |\\n| E1 | \\u2718 | iterative modality training | 6.37 | 6.53 | 6.97 |\\n| E2 | \\u2714\\ufe0f | iterative modality training | 6.40 | 6.52 | 6.93 |\\n| E3 | \\u2718 | query-mixup | **6.70** | **6.69** | **7.12** |\\n\\n**Q4: Ablation Study on Query-Mixup and Negative Query for CLIPSEP**\\n\\n**A4:** Following your suggestion, we conducted an ablation study on VGGSOUND-Clean to evaluate the impact of Query-Mixup and negative query methods on CLIPSEP. The results are as follows:\\n\\n| | Method | Mean SDR(TQSS) | Mean SDR(IQSS) |\\n| --- | --- | --- | --- |\\n| E4 | CLIPSEP | 5.49 | 5.46 |\\n| E5 | CLIPSEP+Query-mixup | 5.67 | 5.74 |\\n| E6 | CLIPSEP+negative query | **6.32** | **6.17** |\\n\\nFurthermore, with the help of Query-Mixup, our model is the first to achieve sound separation using fully composed queries from all modalities. As shown in Table 1, this approach significantly improves performance for Composed Omni-Modal Queried Sound Separation by leveraging homologous but heterogeneous information from multiple modalities, setting a new benchmark in the field.\\n\\n| | Query modality | Mean SDR |\\n| --- | --- | --- |\\n| OmniSep | T | 6.70 |\\n| OmniSep | T+I | 7.12 |\\n| OmniSep | T+I+A | **7.46** |\\n\\nWe hope this clarifies your questions and demonstrates the strength of our proposed methods. Please feel free to reach out with additional feedback or inquiries!\"}", "{\"summary\": \"This paper attempts to cover text-, image- and audio-queried sound separation all at once with one model by exploiting ImageBind as its encoder.
Several training techniques are introduced and their effectiveness is verified in terms of signal-to-noise ratio as well as spectral similarity, which demonstrates the superiority of OmniSep over other existing query-based sound separation methods\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Three novel training techniques, Query-mixup, NQ, Query-Aug, are proposed and evaluated in the standardized experimental settings\", \"Consistent improvements of median SDR in Table 2, which demonstrates the SOTA performance of OmniSep in the query-based sound separation field\", \"Good visualization of ImageBind embeddings to show the clear motivation of the Query-mixup\"], \"weaknesses\": [\"Inappropriate motivation: I felt puzzled when I was reading the introduction part because the original motivation of the paper was how we could increase the number of high-quality training datasets, although this motivation was never addressed anywhere in the other sections of the paper. If the objective is like the above, why don't we simply compare your scheme to other data augmentation methods or artificial data creation methods other than sound separation? I agree that denoising with sound separation would serve as one of many methods to achieve the same goal. Then, I believe the authors should prove that OmniSep is one of the best methods to increase the amount of data. That being said, while assessing the paper, I somehow felt that this was not your intention. Since the technical contributions of the paper are fair enough, I suggest rewriting the introduction part in a way that makes clear your motivation is to improve the quality or coverage of text-, image-, or audio-queried sound separation and NOT to use sound separation to increase the number of training datasets for other purposes.\", \"As an expert in this field, I don't agree with relying only on SDR to measure the performance of any sound separation methods.
Rather than showing pairs of spectrograms, I suggest conducting a listening test to double-check if the performance of OmniSep really dominates others and the difference is recognizable. If the goal of the paper is not to make separated data available to humans, as you originally stated in the introduction part, i.e., data increase is the objective, I don't think having a subjective listening test enriches the content. Otherwise, I strongly recommend this because this is a common practice in the sound separation community.\"], \"questions\": \"The motivation of Query-mixup is clear. On the other hand, its effectiveness compared to fine-tuning ImageBind is not clear. Why don't you simply fine-tune your encoder so that all the modalities align well with each other for the inputs of interest? It is also unclear if the performance improvement really comes from the proposed three methods because the encoders in CLIPSep and OmniSep are different. Table 3 is a nice way to prove Query-Aug is actually important to increase SDR. Not sure about the other two. You could have tried the other two methods on CLIPSep and seen whether CLIPSep's performance improves.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful review and constructive suggestions. I am delighted to know that all your questions have been addressed. Your valuable feedback has greatly contributed to enhancing the quality of our paper.\"}", "{\"title\": \"Response to Reviewer Mat5\", \"comment\": \"Thank you for recognizing the flexibility and performance of our work. Please allow us to clarify your questions in detail:\\n\\n**Q1: Applicability of Query-Aug**\\n\\n**A1**: Natural sounds can be described using either class labels or textual descriptions.
Class labels provide coarse-grained descriptions, while textual descriptions capture fine-grained details, emphasizing subtle differences between sounds. While AudioSep [1] achieves open-vocabulary sound separation using caption-based descriptions, its performance on unseen queries is still unstable due to data limitations. By combining AudioSep with our proposed Query-Aug method, we demonstrate that Query-Aug further enhances model performance, making open-vocabulary queries more robust. Related experimental results can also be found in Section D of the demo page.\\n\\n| Method | Training Data (hours) | Mean SDR | Med SDR |\\n| --- | --- | --- | --- |\\n| CLIPSEP-text | 550 | 3.53\\u00b10.52 | 2.91 |\\n| CLIPSEP-text+Query-Aug | 550 | **5.24\\u00b10.79** | **4.87** |\\n| AudioSep | 14,000 | 7.24\\u00b10.67 | 6.23 |\\n| AudioSep+Query-Aug | 14,000 | **7.46\\u00b10.63** | **6.34** |\\n\\n**Q2: Reason for Dataset Selection**\\n\\n**A2**: Considering that the key contribution of this work is the proposal of an omni-modality sound separation model, we selected datasets that support diverse modalities. In prior vision-queried sound separation tasks, most works have been conducted on VGGSOUND. To enable fair comparisons, we followed CLIPSEP\\u2019s experimental setup. However, your suggestion is highly valuable. Based on your feedback, we expanded our experiments by incorporating additional data from AudioSet into VGGSOUND, leading to further performance improvements. Here are the further experiments:\\n\\n***Composed Omni-Modal Queried Sound Separation***\\n\\n| | Training Data | VGGSOUND-Clean+ |\\n| --- | --- | --- |\\n| OmniSep | VGGSOUND | 7.46\\u00b10.65 |\\n| OmniSep | AudioSet+VGGSOUND | **7.63\\u00b10.62** |\\n\\n**Q3: Results of the Same Query Across Different Modalities**\\n\\n**A3**: In Sections A.1, A.2, and A.3 of the demo page, we present the results of using the same query across different modalities.
You are welcome to review these sections for detailed insights.\\n\\n**Q4: SISDR and SDRi Metrics**\\n\\n**A4**: Thank you for your suggestion! Based on your feedback, we added SI-SDR and SDRi metrics to our evaluation. However, since the CLIPSEP paper only provided partial checkpoints for reference models, we compare performance with all available baselines. These experimental results have been updated in Table 9 in Appendix C.3 of the latest version.\\n\\n| | SI-SDR | SDRi |\\n| --- | --- | --- |\\n| CLIPSEP | 3.92 | 5.32 |\\n| CLIPSEP(I) | 4.32 | 5.27 |\\n| AudioSep | 5.43 | 5.94 |\\n| OmniSep(T) | 5.53 | 6.64 |\\n| OmniSep(I) | 5.49 | 6.68 |\\n| OmniSep(A) | 6.12 | 7.08 |\\n| OmniSep(T+I+A) | 6.56 | 7.37 |\\n\\n**Q5: Experimental Settings in Table 2**\\n\\n**A5**: All experiments in Table 2 follow the same settings as the OmniSep experiments in Table 1, with changes only to the training modalities and training strategy (Query-Mixup). Detailed settings are described in Appendix A. Experiment #5 in Table 2 corresponds to text-queried and image-queried sound separation experiments for OmniSep (ours) in Table 1. Other experiments represent ablation studies and do not correspond directly to experiments in Table 1.\\n\\n**Q6: Separate-Net Experimental Details**\\n\\n**A6**: For a fair comparison, our experiments were based on the CLIPSEP framework. The parameter $k$ is a hyperparameter set to 32, following the setup in CLIPSEP.\\n\\nTo ensure comparability with the benchmark established by CLIPSEP, we injected features into the output of the final layer of the UNet model (in the form of mask weights q_i), as used in CLIPSEP [2] and SOP [3].\\n\\nTo further clarify differences between injection methods, we conducted a theoretical analysis:\\n\\n1. Feature Injection Method in AudioSep: Features are injected into hidden embeddings at every layer, enabling multi-level control over sound separation embeddings.\\n2. 
Feature Injection Method in CLIPSEP: Features are injected directly into the final layer, providing more direct control over the output mask.\\n\\nDespite these differences, both architectures achieve the goal of sound separation. In our experiments, AudioSep\\u2019s approach yielded better results, as shown in Table 1, which compares CLIPSEP and AudioSep. However, to maintain experimental consistency, we adopted the current Separate-Net structure in this version of the paper. That said, we are committed to including an OmniSep implementation based on the AudioSep architecture in our open-sourced code.\\n\\nWe hope this clarifies your questions. Please feel free to reach out with further feedback or inquiries!\\n\\n[1] Liu X, Kong Q, Zhao Y, et al. Separate anything you describe. arXiv 2023.\\n\\n[2] Dong H W, Takahashi N, Mitsufuji Y, et al. Clipsep: Learning text-queried sound separation with noisy unlabeled videos. ICLR 2022.\\n\\n[3] Zhao H, Gan C, Rouditchenko A, et al. The sound of pixels. ECCV 2018.\"}", "{\"summary\": \"This paper presents OmniSep, a novel unified framework for query-based sound separation that accommodates multiple query modalities (text, image, and audio) within a single model. The authors introduce three key technical contributions: Query-Mixup for simultaneous multi-modal query processing, a negative query mechanism for unwanted sound suppression, and Query-Aug for handling natural language descriptions beyond predefined class labels. The model's architecture represents an advance over previous approaches, which were typically constrained to single-modality queries.\\n\\nExperimental validation on several datasets demonstrates OmniSep's performance compared to existing methods across all query modalities. The model exhibits robust separation capabilities in complex, multi-source scenarios and achieves state-of-the-art results on MUSIC, VGGSOUND-CLEAN+, and MUSIC-CLEAN+ benchmarks.
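The Query-Mixup strategy named above — blending per-modality query features into a single training-time query with random convex weights — might be sketched as follows. The weight-sampling scheme and the toy embeddings are illustrative assumptions, not the paper's exact recipe:

```python
import random

def query_mixup(q_text, q_image, q_audio, rng=random):
    """Blend aligned query embeddings from three modalities with random
    convex weights, so one separator learns to accept any single modality
    or any mixture of them at inference time."""
    w = [rng.random() for _ in range(3)]
    s = sum(w)
    w = [x / s for x in w]                      # w >= 0 and sums to 1
    return [w[0] * t + w[1] * i + w[2] * a
            for t, i, a in zip(q_text, q_image, q_audio)]

# Toy aligned embeddings (e.g. from a frozen ImageBind-like encoder).
q_t, q_i, q_a = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]
q_mix = query_mixup(q_t, q_i, q_a)
```

Because the weights are resampled each step, the separator is exposed to queries anywhere in the convex hull of the three modality embeddings, which is what lets a single trained model serve text-, image-, and audio-queried separation.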
OmniSep's multi-modal query capability enables enhanced separation performance through the simultaneous application of different query types, such as combining textual and visual queries.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper presents a clear and well-motivated problem statement, addressing three fundamental limitations in current sound separation approaches: the absence of unified multi-modal query handling, insufficient flexibility in sound manipulation (particularly for unwanted sound removal), and restricted vocabulary constraints that preclude natural language descriptions. The authors construct a compelling narrative throughout the introduction and literature review, effectively contextualizing their contributions within the field through comprehensive citations and thorough analysis of related work.\\n\\nThe methodology is generally well-documented to begin with. The experimental validation is particularly robust, encompassing diverse tasks and datasets that demonstrate the method's versatility. The authors provide extensive ablation studies that illuminate the model's internal mechanisms and justify some of the architectural choices. Their commitment to reproducibility through code release further strengthens the paper's contribution to the field.\", \"weaknesses\": \"There are several issues with the paper, which can broadly be classified into two areas. Most of these issues can be fixed with better scientific writing, and more explanation, rigour and experimentation.\", \"technical_clarity_issues\": [\"1. Significant documentation gaps in core variables and operations in Separate-Net:\", \"- q_i and q_ij are poorly explained or completely undefined in Separate-Net\", \"- Many variables are undefined and/or have no dimensions specified.
All variables should be clearly defined with dimensions given.\\n- M(hat) is defined but never used.\\n- Mechanism of how masks are used/applied is not explained.\\n- Audio U-Net lacks both citation and architectural explanation.\\n2. Query-Aug undefined components:\\n- Q_des: completely undefined without dimensions and no explanation of how these features are obtained (T5/Bert?).\\n- Query-Set is undefined and its dimensions are not specified.\\n- Q_aug is defined as an argmax, but its integration into the model is never explained.\\n- sim(.,.) is not a standard operator; it should be properly defined (cosine similarity?).\"], \"experimental_validation_problems\": \"1. Table 1 uses a 2017 model called PIT for non-queried sound separation. State-of-the-art audio-only separation methods (e.g., TDANet, TF-GridNet) would serve as stronger baselines (while these models are for speech separation, so is PIT. Additionally, PIT was proposed to solve the permutation problem, not as a strong speech separation model). \\n2. Table 2 does not show that the query-mixup method works, only that it scales across different modalities. Validating its effectiveness would mean comparing to other methods and/or different strategies. \\n\\nHowever, there are some other issues. While the method is interesting, the paper's novel work can be summarised as a weighted average of modalities, a linear layer and (1+alpha)Q-alphaQ_N for negative queries.\", \"questions\": \"1. Separate-Net only converts to magnitude spectrum X, ignoring phase. Modern time-frequency methods (RTFS-Net, TF-GridNet) retain both magnitude and phase information via concatenated real/imaginary STFT outputs. Some methods additionally concatenate the magnitude to create a three-channel representation. Have you tried these approaches?\\n\\n2. Why keep ImageBind parameters frozen? What if it was trained from scratch with the model? What if it was initialized and fine-tuned with the model using a lower learning rate? \\n\\n3.
For the Negative Query, why (1+alpha)Q-alphaQ_N instead of (1-alpha)Q+alphaQ_N, Q-alphaQ_N or a scale-and-shift approach such as in stable diffusion - please add some justification in this section or some experimental evidence. \\n\\n4. And the problems raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Public Comment of Xubo (2/N, N=2)\", \"comment\": \"**Q2: Comparison on diversified test sets**\\n\\n**A2**: Thank you for your suggestion. We would like to reiterate that **our primary contribution lies in enabling composed query sound separation across multiple modalities**. Query-Aug is a training-free method specifically designed to tackle the open-vocabulary challenges that arise due to the limited scale of omni-modality datasets.\\n\\n- To address your request, we integrated the Query-Aug method with AudioSep and evaluated its performance on the datasets you proposed. The results are presented in Table R2. Please note that since AudioSet and VGGSOUND use class labels as queries, these are in-domain queries already utilized during training. When applying Query-Aug for enhancement, the retrieved $\\\\text{query}_{\\\\text{aug}}$ is the original query itself. As a result, the performance of AudioSep+Query-Aug on these datasets is identical to that of AudioSep. 
Therefore, we did not include evaluations on AudioSet and VGGSOUND datasets.\\n \\n *Table R2: Comparison on the benchmark of AudioSep.*\\n \\n | Method | MUSIC(SDRi) | MUSIC(SI-SDR) | ESC-50(SDRi) | ESC-50(SI-SDR) | Clotho(SDRi) | Clotho(SI-SDR) | AudioCaps(SDRi) | AudioCaps(SI-SDR) |\\n | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n | AudioSep(Paper version) | 9.75 | 8.45 | 10.24 | 9.16 | 6.51 | 4.84 | 7.68 | 6.45 |\\n | AudioSep(Github version) | 10.508 | 9.425 | 10.040 | 8.810 | 6.850 | 5.242 | 8.220 | 7.189 |\\n | AudioSep+Query-Aug(ours) | **10.719** | **9.712** | **10.720** | **9.840** | **7.201** | **5.795** | **8.221** | **7.190** |\\n \\n Using query-aug, we observed significant performance improvements on MUSIC, ESC-50, and Clotho, highlighting its effectiveness in enhancing open-vocabulary capability. Interestingly, the performance gain on AudioCaps was minimal. We guess this is because for extremely fine-grained queries, the retrieved in-domain queries might fail to fully capture the original query\\u2019s semantics. (*We would also like to draw your attention to a potential issue in your benchmark: the AudioCaps test set appears to be included in the AudioSet training set. This overlap may explain the high performance of the original model, as these audio samples might have been exposed to AudioSep during training.*)\\n \\n- Additionally, we conducted specific tests to further illustrate the effectiveness of Query-Aug for addressing the open-vocabulary problem. By varying the queries for the same audio, we observed that even minor alterations, such as changing the case (from \\u201cCello\\u201d to \\u201ccello\\u201d), could significantly affect model performance. 
Query-Aug mitigates this issue by retrieving in-domain queries, which are more robust in performance, thereby improving query comprehension and enabling more robust sound separation.\\n \\n *Table R3: Comparison on varying queries for the same audio.*\\n \\n | Query | Query-Type | SDR | SDRi | SI-SDR |\\n | --- | --- | --- | --- | --- |\\n | cello sound | Out-of-domain | 5.570 | 5.458 | 2.279 |\\n | a sound of cello | Out-of-domain | 6.213 | 6.100 | 2.279 |\\n | the sound of cello | Out-of-domain | 6.779 | 6.667 | 5.495 |\\n | cello | Out-of-domain | 7.113 | 7.001 | 6.097 |\\n | A cello is playing. | In Domain | 7.234 | 7.122 | 5.753 |\\n | Cellos are playing. | In Domain | 7.765 | 7.653 | 6.233 |\\n | Cello | In Domain | 7.958 | 7.845 | 7.026 |\\n\\n**Q3: Effectiveness on DCASE 2024 Task 9 Datasets**\\n\\n**A3:** Unfortunately, since this competition has concluded, we are unable to submit results for evaluation. Moreover, we were unable to access the ground truth audio for the evaluation set, which prevents us from calculating quantitative metrics. Despite our efforts, we regret that we could not locate the relevant resources. \\n\\nIf such resources are publicly available, we would greatly appreciate your guidance in accessing them. Although we are currently unable to provide a performance comparison on this test set, we hope you agree that the results presented in Q2 sufficiently demonstrate the contributions and effectiveness of our method to open-vocabulary sound separation.\\n\\n**Q4: Results in Table 9**\\n\\n**A4:** We apologize for any confusion caused by Table 9. The \\u201cmodality\\u201d mentioned refers to the query modality during inference and does not imply that AudioSep is limited to TQSS. In our paper, AudioSep was used as a strong baseline for TQSS. If needed, we are happy to clarify this with annotations in the camera-ready version.\\n\\nThank you again for your detailed feedback and interest in our work. 
We hope our responses address your concerns and further clarify the contributions of our proposed methods. Please do not hesitate to reach out with any additional questions or comments.\"}", "{\"comment\": [\"Thank you for providing such detailed experiments in such a short period of time, it's extremely impressive.\", \"Would it be possible to add these experiments to the manuscript and reference them at the pertinent parts in the text (i.e. we use this method because 'x', see appendix 'y'). The same goes for the experiments requested by other reviewers. They provide important context and make the work much more thorough.\", \"A few dimensions are still unlabelled. Having the dimensions readily available makes reading a much nicer experience, and makes the manuscript more comprehensive:\", \"$Q_i$ a slice in the time dimension of $Q$, i.e. $R^{1024}$?\", \"$\\\\tilde{M}$ is a set of $k$ elements, but what are the dimensions of each element?\", \"$i$ is unclear. Should the number of predicted masks $\\\\hat{M}$ be equal to the number of intermediate masks ($k$)? Is $i \\\\in \\\\{0,...,n-1\\\\}$?\", \"Define the dimensions of the $\\\\hat{M}_i$ and $\\\\hat{M}$.\", \"The $i$ and $j$ notation is difficult to follow. $i$ is the number of time steps, hence $i \\\\in \\\\{0,...,n-1\\\\}$, and then we have $i$ masks. So we have 1 mask for each time step. But it is chosen that $k\\\\leq n$. But $k$ is the output dimension of a linear layer, meaning the input dimensions needs to be defined. Are you using a linear layer to compress the time dimensions and if so, does that mean the input audio size has to be fixed ahead of time i.e. the method does not generalize to any length of audio?\", \"How are the $Q_i$ obtained from $A_i, V_i, T_i$, and what are the dimensions of $A_i, V_i, T_i$? How do you ensure that audio features, video features and text features all have the same number of time steps? 
Surely the video features would have much fewer time steps for 25 fps video, etc.\"]}" ] }
Dkz8npDqAv
Multimodal Context-Aware Transformer with Visual Guidance for Automated 3D Annotation
[ "Xiaoyan Qian", "Chang Liu", "XIAOJUAN QI", "Siew Chong Tan", "Edmund Y. Lam", "Ngai Wong" ]
The laborious nature of manual point cloud labeling drives the growing interest in 3D auto-annotation. The challenge is amplified by the sparse and irregular distribution of point clouds. This leads to the under-performance of current autolabelers, particularly with hard-to-detect samples characterized by truncation, occlusion, or distance. In response, we propose a multimodal context-aware transformer (MMCAT) that integrates 3D point cloud geometry with image-based semantic insights to improve 3D bounding box annotations through 2D visual guidance. Our approach utilizes visual hints from three perspectives to integrate the 2D and 3D dimensions. Initially, we develop point and image encoders to align LiDAR and image data, establishing a unified semantic bridge between image visuals and point cloud geometry. Subsequently, our box encoder processes 2D box coordinates to improve accuracy in determining object positions and dimensions within 3D space. Finally, our multimodal encoders enhance feature interactions, improving point cloud interpretation and annotation accuracy, especially for challenging samples. The novelty of MMCAT lies in its strategic use of 2D visual prompts to bolster 3D representation and annotation processes. We validate MMCAT's efficacy through extensive experiments on the widely recognized KITTI and Waymo Open datasets, particularly highlighting its superior performance with hard samples.
[ "3D point cloud", "multimodal architecture", "automatic annotation", "LiDar" ]
Reject
https://openreview.net/pdf?id=Dkz8npDqAv
https://openreview.net/forum?id=Dkz8npDqAv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "owP3w3Fe8q", "NiAvnyO7vM", "Hm6n8tllSu", "7TMJyvttlc", "5sxgc4qjc9" ], "note_type": [ "official_review", "official_review", "decision", "official_review", "meta_review" ], "note_created": [ 1729026672377, 1731042473824, 1737523793461, 1730636469767, 1734875494483 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6807/Reviewer_zb38" ], [ "ICLR.cc/2025/Conference/Submission6807/Reviewer_fuGs" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6807/Reviewer_PPoH" ], [ "ICLR.cc/2025/Conference/Submission6807/Area_Chair_7SEP" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces MMCAT, a framework designed for automated 3D annotation using multimodal data. By combining point cloud data from LiDAR with images and 2D bounding boxes, MMCAT improves the annotation quality. It integrates specialized encoders for point clouds, images, and 2D boxes, allowing effective feature alignment and multimodal fusion. The model is validated on the KITTI and Waymo Open datasets, achieving SOTA performance in generating 3D annotations, particularly excelling in challenging scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The use of 2D visual cues from images and 2D boxes to guide 3D point cloud annotation effectively addresses the limitations of sparse point cloud data, leading to improvements in accuracy for hard samples.\\n2. Qualitative results demonstrate the effectiveness of MMCAT.\", \"weaknesses\": \"1. Although new modalities are introduced as inputs, there aren't many technical contributions to the multimodal transformer itself.\\n2. 
2D bounding boxes are not always available, meaning MMCAT cannot be applied to raw 3D point clouds.\", \"questions\": \"What is the effect of inaccurate 2D bounding boxes as inputs, e.g., annotating images with an off-the-shelf 2D detection model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"## Summary\\nThe authors propose a framework that uses pseudo labels obtained from three modalities, LiDAR point clouds, images, and 2D bounding boxes to train 3D object detectors that produce 3D bounding boxes as outputs. Utilizing dense image features in addition to point and 2D box data allows their framework to be robust to challenging cases of heavy occlusion and truncation. They outperform existing weakly-supervised baselines on challenging cases within the KITTI and Waymo datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. *Impact*: Utilizing relatively abundant data modalities for prediction of 3D pseudo labels would enable us to train better 3D object detection models cheaply\\n2. *Leveraging multi-modal data*: Aligning 3D point cloud and 2D image/bounding box data to obtain more accurate 3D labels is a useful research direction given the abundance of 2D data and maturity of image encoders.\", \"weaknesses\": \"1. Table 1 shows that their method is unable to beat previous SoTA on easier cases within KITTI. I would expect their approach to perform at least as well as other weakly supervised methods that use 2D data. Could the authors discuss why their method is better on challenging cases but cannot beat the SoTA on the easier cases in KITTI (Table 1)? An analysis of failure modes to explain this behavior would be helpful for the community.\", \"questions\": \"1. What is the rationale for a uniform architecture design across modalities in the MMCAT architecture (Section 3.2)? 
Wouldn't different modalities benefit from modality-specific architectural designs?\\n2. Why does the third column (Full Supervision) say \\\"2D Box\\\" for MMCAT in Tables 2 and 4? Isn't MMCAT also trained on 3D bounding boxes as supervision during its training phase as described in Section 3.4?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes an automatic 3D bounding box annotation method based on a designed multimodal context-aware transformer, termed MMCAT. MMCAT utilizes 2D images and the corresponding 2D bounding boxes as visual cues to guide the regression of 3D bounding boxes. Ultimately, this work employs MMCAT to annotate the training sets of KITTI and Waymo, achieving results on the car/vehicle category that are approximately equivalent to manual annotations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"MMCAT has designed a multimodal context-aware transformer consisting of four types of encoders, which makes more comprehensive use of multimodal information. MMCAT uses 2D bounding boxes to optimize the regression of 3D bounding boxes, which can reduce the labeling cost to a certain extent.\", \"weaknesses\": \"According to the experimental setup of the paper, the authors used manually annotated 2D boxes and some 3D annotation information. In fact, this still represents a non-negligible cost. This paper only provided results for the car/vehicle category using MMCAT, lacking comparative experiments for other categories. Lacks comparison with state-of-the-art automatic annotation algorithms, such as DetZero[1]. The design of the annotator is similar to ViT-WSS3D[2]. However, the annotation cost required is higher, and the contribution of the proposed method is limited. 
The method proposed in this paper seems to rely on the four modality encoders designed by the MMCAT. Would the use of existing pre-trained encoders affect the performance of MMCAT?\\n[1] DetZero: Rethinking Offboard 3D Object Detection with Long-term Sequential Point Clouds, ICCV 2023.\\n[2] A Simple Vision Transformer for Weakly Semi-supervised 3D Object Detection. ICCV 2023\", \"questions\": \"The author provided the accuracy on the KITTI test set in the paper, but the corresponding accuracy was not found on the KITTI benchmark. What information do images and 2D bounding boxes provide for 3D bounding box regression? How effective is using only a Point+2D encoder? The design of the annotator is similar to ViT-WSS3D; what are the advantages of the proposed method over ViT-WSS3D? The method proposed in this paper seems to rely on the four modality encoders designed by the MMCAT. Would the use of existing pre-trained encoders affect the performance of MMCAT?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes MMCAT, a framework for automated 3D bounding box annotation by leveraging multimodal data, including LiDAR point clouds, images, and 2D bounding boxes. The major strength of the paper is that it demonstrates the promise for reducing reliance on manual annotation by leveraging multimodal data, where there are abundant in the real world. On the negative side, the reviewers are concerned about the similarity to previous work, narrowed scope of the method (only focusing on the car/vehicle category), and incomplete benchmarking. The reviewers are on the fence. After thorough discussion, while the reviewers appreciate the use of multimodal data to help the annotation, they are still worried about the experimental evaluation and the key difference to prior art. The ACs agree with the reviewers. 
The authors are encouraged to incorporate the feedback from the reviewers and resubmit to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers were primarily concerned about the similarity to prior work and incomplete experimental evaluations. While the authors made a number of attempts to address these issues, the reviewers remained worried about the lack of experimental analyses (i.e., missing results for several categories) and the key differences from prior work.\"}" ] }
DjtJV3ke1j
Dynamic Kernel Sparsifiers
[ "Yichuan Deng", "Wenyu Jin", "Zhao Song", "Xiaorui Sun", "OMRI WEINSTEIN" ]
A geometric graph associated with a set of points $P= \{x_1, x_2, \cdots, x_n \} \subset \mathbb{R}^d$ and a fixed kernel function $\mathsf{K}:\mathbb{R}^d\times \mathbb{R}^d\to\mathbb{R}_{\geq 0}$ is a complete graph on $P$ such that the weight of edge $(x_i, x_j)$ is $\mathsf{K}(x_i, x_j)$. We present a fully-dynamic data structure that maintains a spectral sparsifier of a geometric graph under updates that change the locations of points in $P$ one at a time. The update time of our data structure is $n^{o(1)}$ with high probability, and the initialization time is $n^{1+o(1)}$. Under certain assumptions, our data structure can be made robust against adaptive adversaries, which makes our sparsifier applicable in iterative optimization algorithms. We further show that the Laplacian matrices corresponding to geometric graphs admit a randomized sketch for maintaining matrix-vector multiplication and projection in $n^{o(1)}$ time, under \emph{sparse} updates to the query vectors, or under modification of points in $P$.
[ "Sparsifiers", "Optimization", "Algorithms" ]
Reject
https://openreview.net/pdf?id=DjtJV3ke1j
https://openreview.net/forum?id=DjtJV3ke1j
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zT5ckBrAPt", "wlXVXxRZK0", "wGu1kglecn", "rR9AiyFOtx", "nVNjVWo6lR", "mUw1cUShcE", "lMgnfkvOfF", "i6DzEejxXU", "HNnGc42CEN", "9hBPKWCyu2", "8HaNYTZcPZ", "88pXSOCc9J" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision" ], "note_created": [ 1734833609667, 1732369315611, 1730666974465, 1732249216621, 1730603623273, 1732252841095, 1730388852788, 1732609828722, 1732248948406, 1730697458681, 1732253669287, 1737524199461 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12554/Area_Chair_p12c" ], [ "ICLR.cc/2025/Conference/Submission12554/Reviewer_1Yg6" ], [ "ICLR.cc/2025/Conference/Submission12554/Reviewer_mPNt" ], [ "ICLR.cc/2025/Conference/Submission12554/Authors" ], [ "ICLR.cc/2025/Conference/Submission12554/Reviewer_b5c2" ], [ "ICLR.cc/2025/Conference/Submission12554/Authors" ], [ "ICLR.cc/2025/Conference/Submission12554/Reviewer_1Yg6" ], [ "ICLR.cc/2025/Conference/Submission12554/Reviewer_mPNt" ], [ "ICLR.cc/2025/Conference/Submission12554/Authors" ], [ "ICLR.cc/2025/Conference/Submission12554/Reviewer_Bowh" ], [ "ICLR.cc/2025/Conference/Submission12554/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"metareview\": \"The goal of this work is to efficiently update the kernel matrix when one of the data samples changes its representation. The main idea is to first use a random projection to reduce the dimension while pair-wise distances are approximately maintained, as per Johnson-Lindenstrauss, combined with a method that computes only a subset of kernel pairs. 
Criticisms are raised regarding the necessity to consider the problem in a dynamic setting, the rigor of the 'high-probability' claims, and the lack of experiments.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided a rebuttal, but it was not convincing enough to overturn the negative reviews.\"}", "{\"comment\": \"Dear authors,\\n\\nThank you for your replies and for answering my questions. Having read the other reviews and your responses to them, I've decided to keep my score.\"}", "{\"summary\": \"This paper introduces a dynamic data structure designed to efficiently maintain spectral sparsifiers for geometric graphs constructed on a set of points $P = \\\\{x_1, \\\\ldots, x_n\\\\} \\\\subset \\\\mathbb{R}^d$ with edge weights determined by a kernel $K: \\\\mathbb{R}^d \\\\times \\\\mathbb{R}^d \\\\rightarrow \\\\mathbb{R}_{\\\\geq 0}$. The authors propose a randomized dynamic algorithm that, when $K$ satisfies a multiplicative Lipschitz condition, updates an almost linear spectral sparsifier with high probability in time $O(n^{o(1)})$ (with an initialization time of $O(n^{1+o(1)})$).\\n\\nFurthermore, the authors show that with additional constraints on $P$, specifically involving its \\\"aspect ratio,\\\" the algorithm can be made resilient to adversarial modifications. The paper also includes algorithms for efficiently maintaining approximate matrix-vector queries for the graph\\u2019s Laplacian and its generalized inverse, achieving a time complexity of $O(n^{o(1)})$ for these operations as well.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1- **Originality and mathematical rigor**: The authors provide technically rigorous arguments, integrating ideas from data structures, spectral sparsification, resampling, and sketching. 
For example, they employ the well-separated pair decomposition (WSPD) method from Har-Peled (2011) to construct and maintain sparse graph representations, leveraging this decomposition to create a spectral sparsifier through efficient sampling. Additionally, they utilize Johnson-Lindenstrauss projections to maintain a lower-dimensional representation, which is essential for preserving the algorithm\\u2019s efficiency.\", \"2__clarity_and_presentation\": \"The paper is generally well-written and effectively organized. The main body focuses on clearly defining the problem, presenting the key results (in informal versions for accessibility), and outlining the principal proof strategies and techniques. The detailed technical proofs and methodological developments are provided in the appendix, which is largely self-contained and complements the main text well.\", \"weaknesses\": \"**Limitations and their assessment**: One limitation is that the method\\u2019s performance may degrade in high-dimensional settings. Although the authors acknowledge this in the limitations section, noting that their arguments are optimized for fixed dimensions, it would be helpful to have explicit statements on the relationship between $d$ and $n$ for each main result. For the adversarial setting, this is more clear due to the condition $\\\\alpha^d=O(\\\\text{poly}(n))$, suggesting that these results might deteriorate more quickly as the dimension increases. Could the authors confirm this interpretation?\\n\\nThe $(C,L)$-Lipschitz assumption is also presented as a limitation. To better understand applicability, it would be useful to clarify which types of kernels satisfy this assumption, with examples. For instance, the classic kernel $K(x,y)=\\\\mathbb{1}_{|x-y|_2 \\\\leq \\\\delta}$ for some $\\\\delta \\\\geq 0$ does not appear to satisfy this condition, in general. 
Similarly, a kernel like $K(x,y)=e^{-\\\\frac{1}{\\\\sigma} |x-y|_2^2}$ with large $\\\\sigma$ might encounter similar issues. Providing insights into the impact of this assumption for such common kernels would be valuable, as the role of the Lipschitz condition remains somewhat unclear (see additional comments below).\", \"questions\": \"1- In the main results, the role of the constant $C$ in the $(C,L)$-Lipschitz condition is unclear. Specifically, it appears that any function could satisfy this condition by setting $C=1$, with a corresponding choice of $L=1$, for example. Am I overlooking a situation where $C$ significantly affects the analysis?\\n\\n2- In line 055, could you define geometric kernel?\", \"3__typo\": \"in line 056 in the word extending.\", \"4__line_076\": \"an entire row update of $K$...is an entire row and the corresponding column, right?\", \"5__similar_to_point_1\": \"in line 183 of the related work you mention several examples of kernels, it would be interesting to mention the applicability of your results to those situations.\", \"6__in_line_963\": \"shouldn't be $w_ {f(G)}$ instead of $w_ {G'}$?\", \"7__line_1053\": \"why is $\\\\log(1/\\\\delta)>2$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful and detailed feedback. We appreciate your recognition of the technical rigor and clarity of our work and will address the concerns raised.\\n\\nWe acknowledge the need to clarify the impact of $C$ on the analysis. While $C$ can theoretically be set to 1 for simplicity, its role in practical scenarios, such as scaling for different kernel functions, can be significant. The relationship between the dimensionality $d$, the aspect ratio $\\\\alpha$, and performance degradation is indeed critical. 
While our method assumes $\\alpha^d = O(\\text{poly}(n))$ to ensure computational feasibility, we will further clarify how dimensionality affects the robustness of our results. Additionally, we will expand the discussion of kernel applicability, explicitly noting how certain kernels like $\\exp(-\\|x-y\\|^2/\\sigma)$ may or may not satisfy the $(C, L)$-Lipschitz condition depending on parameter choices.\\n\\nWe will address the typographical error on line 056 and refine ambiguous wording, such as clarifying on line 076 that a row update implies a corresponding column update. We will also revise the related work discussion (line 183) to emphasize the practical relevance of kernels in applications where our framework is most beneficial. On line 963, $w_{f(G)}$ should indeed replace $w_{G'}$, and we will correct this oversight. Regarding line 1053, we will elaborate on the reasoning behind the choice $\\log(1/\\delta) > 2$ to enhance clarity.\\n\\nWe greatly appreciate your insights, which will help us improve the presentation and clarity of the paper while addressing key technical nuances.\"}", "{\"summary\": \"This paper studies the following problem: given points $x_1, \\ldots, x_n \\in \\mathbb{R}^d$ and a kernel function $K : \\mathbb{R}^d \\times \\mathbb{R}^d \\to \\mathbb{R}_{\\geq 0}$, define a complete graph with $x_1, ..., x_n$ as nodes where the weight of the edge between $x_i, x_j$ is defined to be $K(x_i, x_j)$. The goal of this paper is to maintain a dynamic spectral sparsifier of the complete graph, where dynamic refers to the ability to handle vertex movements quickly. In particular, if a given vertex $x_i$ is moved to a new location $z \\in \\mathbb{R}^d$, we want the spectral sparsifier with respect to the new graph to be computed quickly. Note that moving a single vertex affects $n-1$ edge weights (assuming $K(x, x) = c$ for all $x$) and therefore $O(n)$ entries of the Laplacian matrix are changed by a single vertex movement. 
The goal of the paper is to compute a spectral sparsifier that can be updated in $n^{o(1)}$ time for each vertex update.\", \"the_high_level_algorithm_is_as_follows\": \"1. Project the points in $\\mathbb{R}^d$ into $\\mathbb{R}^k$ using an ultra-low dimensional JL embedding which preserves relative distances up to a reasonable multiplicative factor.\\n2. Construct a well separated pair decomposition using quadtrees.\\n3. Use the approximation guarantees of the JL transform to argue that the WSPD computed on the projected points is also a WSPD.\\n4. For each tuple in the WSPD, approximate the Laplacian by uniform sampling, which is equivalent to leverage score sampling up to multiplicative factors and hence gives a spectral sparsifier when sampling an appropriate number of edges uniformly at random.\\n5. To make the algorithm support vertex movements dynamically, the authors show that there exists a WSPD where each vertex is part of only a small number of tuples and that when a vertex is moved only the edges in the affected tuples need to be resampled. To show that this can be done quickly, they argue that the resampling can be done while keeping a large number of sampled edges from the original graph and modifying only a few random edges.\\n\\nThe above leads to a fast dynamic algorithm to update the spectral sparsifier. Then the authors give algorithms to robustify the randomness of the JL transform to sequential updates by computing a net and using a union bound over the net vectors. Rounding off the vectors to their closest net vector would then mean that the randomness of the JL transform is robust to sequential updates.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The problem is interesting to study and a reasonably fast algorithm with support for a broader range of kernel functions may have practical utility.\", \"weaknesses\": \"I have reviewed an earlier version of the paper. 
I am slightly paraphrasing my earlier review below.\\n\\n1. The randomness of uniform sampling within a WSPD pair is correlated across multiple rounds of vertex movements. Then how is the algorithm adversarially robust? This wasn't considered at all in the paper.\\n2. The motivation of not being able to directly update $L_H u$ dynamically is stated many times in the paper and I don't think it is adequate. What is the model here? Do we know the vector $u$ at the beginning or is it only revealed later? If it is revealed later, do we need to support answering queries for multiple adaptively chosen vectors $u$? In that case how are the sketches robust?\\n3. Not many significant contributions are necessary to convert the static version of Alman et al. into the dynamic version.\", \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your constructive feedback and for revisiting our work. We address your concerns as follows:\\n\\nWe acknowledge the importance of adversarial robustness in dynamic algorithms. However, our current work focuses on the theoretical development and analysis of dynamic Laplacian solvers under standard assumptions. Addressing adversarial scenarios requires a separate, in-depth investigation, which is beyond the scope of this paper. We will explicitly state this limitation in the revised manuscript to set clear expectations for the reader.\\n\\nRegarding the hardness of dynamically maintaining the Laplacian of a geometric graph: before our work, there was no dynamic algorithm for geometric graphs (with kernel functions) that supported fully dynamic $(1 \\pm \\epsilon)$ spectral edge sparsifier updates. For example, [1] provided a sparsifier that can be maintained in amortized $\\mathrm{poly}(\\log n, 1/\\epsilon)$ time per update. 
However, each point update would result in weight changes to $O(n)$ edges, so the cost of directly applying the edge sparsifier update algorithm can be high. \\n\\nConverting static sparsification techniques (e.g., those of [2]) to a fully dynamic setting required overcoming significant challenges, such as efficient WSPD updates, adversarial robustness in JL projections, and maintaining spectral guarantees under vertex movements. These contributions provide the foundation for a fast dynamic sparsification algorithm applicable to a wide range of kernel functions. \\n\\nThank you again for your thoughtful remarks, which will help us improve the clarity and depth of our paper. \\n\\n[1] Ittai Abraham, David Durfee, Ioannis Koutis, Sebastian Krinninger, and Richard Peng. On fully dynamic graph sparsifiers. In Irit Dinur, editor, IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New Jersey, USA, pages 335\\u2013344. IEEE Computer Society, 2016.\\n[2] Josh Alman, Timothy Chu, Aaron Schild, and Zhao Song. Algorithms and hardness for linear algebra on geometric graphs. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pp. 541\\u2013552. IEEE, 2020.\"}", "{\"summary\": \"This work presents a dynamic data structure that maintains a spectral sparsifier for a geometric graph, allowing for efficient updates when point locations change. This data structure is initialized in $n^{1+o(1)}$ time and can handle point location changes in $n^{o(1)}$ time with high probability. The key component of the data structure is a dynamic well-separated pair decomposition (WSPD), which efficiently partitions the graph into subgraphs with similar edge weights. Leveraging JL projections and a smooth resampling technique, the data structure maintains low-dimensional sketches for efficient updates and queries. Additionally, the data structure can be made robust against adaptive adversaries. 
The paper also presents a randomized sketch for maintaining matrix-vector multiplication and projection in $n^{o(1)}$ time under sparse updates to query vectors or point modifications.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"**S1)** The studied problem is interesting and relevant. Constructing spectral sparsifiers in the dynamic setting is an active area of research. For geometric graphs, this is extra challenging, because any change in a data point directly induces $\\Theta(n)$ edge updates. A lot of previous work on dynamic sparsifiers focuses on single edge updates (Abraham et al., 2016) - this work circumvents that.\\n\\n**S2)** The overall results are quite strong. The main technical contribution of this work is to carefully work out how to (i) dynamically maintain the WSPD; and (ii) maintain uniform random samples of the bicliques induced by the WS pairs. The work notes that (i) can be achieved with the algorithm of Har-Peled (2011). To achieve (ii), a careful resampling is designed for the bicliques using biased coins. Designing the full procedures is clearly tedious work, and therefore I consider it a significant contribution that future work could benefit from. The results on the adversarial guarantees and the randomized sketch for matrix-vec multiplication seem to follow from the main result, and they are a nice addition to the paper.\", \"weaknesses\": [\"**W1)** The current manuscript write-up is subpar. The first 9 pages are difficult to read, and I believe the writeup could be significantly improved.\", \"**a.)** First, the references are poorly formatted. Most references are missing their brackets. This makes the paper difficult to read at points. Probably this is an easy fix using \\citet instead of \\cite.\", \"**b.)** Section 4 contains a lot of dense technical content that is difficult to follow. 
Given that the contribution of this work is very technical, I understand that it's not possible to fully describe the result in the main body, and that only high-level ideas and sketches can be given. However, the current writeup makes it difficult to grasp the ideas in the paper. For example, sections 4.1.1 and 4.1.2 are dense with a lot of technical discussion. There are a lot of references to the appendix (Lemma A.22, Definition A.15, Figure 2) that require the reader to go back and forth a lot. Replacing some of the long descriptions with figures (like fig 1 and fig 2) would add a lot of clarity. It is not expected that all the results can be accurately verified within the first 9 pages; however, these pages should convince the reader/reviewer that the result is solid. At the moment, for me, that is difficult to do.\", \"**c.)** It might be helpful to convert some of the text into algorithm descriptions (e.g., lines 285-292, lines 317-320, lines 329-333), as these lines describe pseudocode.\", \"**d.)** A lot of typos and mistakes in formulation. See the Minor points/typos section.\", \"**W2)** Some of the assumptions in this work are quite strong, such as fixed dimensionality and being limited to $(C, L)$-Lipschitz functions (although constructing static sparsifiers for arbitrary kernel functions is hard, see Alman et al. (2020)). In particular, the assumption on the aspect ratio of the data might be too strong; under adversarial updates, this assumption is broken easily.\", \"**W3)** This is a fairly minor weakness, but no experimental results are reported. Adding experiments would improve the significance and impact of this work.\", \"**Minor points/typos**\", \"On line 029-031 the references look wrong - I think there should be brackets around \\\"Alaoui (...) Lee et al. (2020)\\\". There are multiple other places where this happens, making the text hard to read at points. 
Probably a \\\\citep vs \\\\citet issue.\", \"Line 056: \\\"Extendin\\\" --> \\\"Extending\\\"\", \"Line 207: \\\"Before presenting our dynamic data structure, we first have a high level (..)\\\" --> \\\"(..) we first give a high level (..)\\\"?\", \"Line 218 and 220: \\\"A s-WSPD\\\" --> \\\"An s-WSPD\\\"\", \"Line 283: The sentence \\\"When a point location update (...)\\\" ends abruptly. Should there be a comma instead of a period?\", \"Line 392: \\\"To make sure this\\\" --> \\\"To ensure this\\\"?\", \"Line 391: \\\"From another direction, we need to make that\\\" --> \\\"From another direction, we need to make sure that\\\"?\", \"Line 950: \\\"Algortihm\\\" --> \\\"Algorithm\\\"\", \"Line 432: What is DynamicGeoSpar? I believe this is only introduced in the appendix.\"], \"questions\": \"**Q1)** Is any adjustment to the algorithm of Har-Peled required to obtain your results? Or can it be applied as a black-box?\\n\\n**Q2)** How tight are the results? Could any of the update times be improved?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your responses. After reviewing your clarifications and considering the feedback from other reviewers, I believe there are still several questions and concerns that remain unresolved. As a result, I have decided to maintain my current score.\"}", "{\"comment\": \"Thank you for your detailed and constructive feedback. We will clarify that $k = O(\\\\log n)$ is chosen to balance computational efficiency and distortion probability, while $k = O(\\\\sqrt{n})$ may increase computational cost. The parameter $C$ is treated as fixed for simplicity, and we will revise the text to reflect this while also adding explicit examples of $(C, L)$-multiplicative Lipschitz kernels, such as Gaussian and polynomial kernels, to illustrate the framework's applicability. 
Regarding the failure probability $\\\\delta$ in the lemmas of Appendix D, it should be considered constant, which ensures the runtime remains polynomially bounded for practical scenarios; we will clarify this in the revised manuscript. We will expand the discussion on related work, including the sparsification algorithm mentioned, and improve the rigor and precision of key theorem statements to address concerns about vagueness. Thank you again for your valuable input, which will help us strengthen the paper.\"}", "{\"summary\": \"Given a set of $n$ points and a kernel function $k$, we can consider an all-pairs weighted graph where the weight of every pair is given by the corresponding kernel value. The paper studies algorithms for the Laplacian matrix of this graph. The main result that I mostly focused on is the following: assuming the kernel is (C,L)-multiplicative Lipschitz, the paper presents an algorithm for constructing a spectral sparsifier which can be dynamically updated. Algorithms for constructing this graph in near linear time were already studied in prior works (e.g., Alman et al. 20), and the new contribution is to have fast updates when the points are dynamically updated. The authors show that for a small number of updates (see more below), we can have update time $n^{o(1)}$.\", \"the_main_idea_seems_to_be_the_following\": \"one can modify the classic JL lemma to project onto slightly smaller than $O(\\\\log n)$ dimensions (the paper takes it to be $o(\\\\log n)$ dimensions). This can cause the distances to be distorted by $n^{o(1)}$ factors. However, in such small dimensions, we can afford to build dynamic data structures that require space/query time exponential in the dimension (which would be $n^{o(1)}$ due to the projection dimension). 
These underlying algorithms use sampling for spectral sparsification, and so we can get rid of the distortion issue by simply oversampling by the distortion factor.\\n\\nExtensions to other settings such as adversarial queries are also presented, if the dimension of the point set is not too large.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The technique of using an `ultra low' dimensional embedding seems to be novel to me. It is nice that in this setting, one can be resilient against the large distortion factors caused by very small embeddings by simply oversampling.\", \"weaknesses\": [\"Unfortunately, the writing leaves a lot to be desired. In particular, I found many parts of the paper to be very vague, including the statement of the 'rigorous' version of the main theorem. Let's start at Theorem D.3, which is the 'main theorem'.\", \"It is stated there that $k = o(\\\\log n)$. Can I take $k$ to be just $1$ or a constant? In other parts of the paper, $k$ is set to be $O(\\\\sqrt{n})$. What should I set $k$ to? Why not just set it to be a fixed value in the theorem statement or can I take any value?\", \"There is no dependence on $C$ in the parameters of the theorem statement, even though it is a parameter of the kernel. Is $C$ thought to be a constant?\", \"A major issue is that the statement 'with high probability' is used everywhere, including in Theorem D.3. Typically one takes this to mean failure probability $1/\\\\text{poly}(n)$. Indeed, this is what is stated in the 'informal' version in the main body. However, the dependence on the failure probability seems to be much worse in the theorem. The theorem relies on prior lemmas (such as Lemmas D.10, D.11), which, for failure probability $\\\\delta$, have dependence $1/\\\\delta$. This means that if we naively set $\\\\delta = 1/\\\\text{poly}(n)$, the running time for every update would be polynomial. 
This would mean one can just use Alman et al. to rebuild the data structure every time. Thus, the data structure seems to only handle very few updates, which makes it less theoretically interesting. Besides this, it may be a bit misleading that failure probability $1/\\\\text{poly}(n)$ is stated in the main body, but it does not seem to be the case in reality when examining Theorem D.3.\", \"The paper never gives an explicit example of a kernel that is allowed by their main theorem. The best description I can find is on line 97: the kernel must be (C,L)-multiplicative Lipschitz. What are some examples of such kernels? Must the condition 1/c^L <= f(cx)/f(x) <= C^L hold for all x? It seems like such a definition is only relevant for polynomially decaying kernels and not more popular kernels such as Gaussians, etc., unless the diameter of the point set is bounded. This is fine, but it is never explicitly explained in the text.\", \"I don't think empirical evaluations are strictly necessary, but they could benefit the paper since the theoretical results are not so strong (due to the poor dependence on the failure probability).\"], \"minor\": [\"The paper seems to be missing discussions on related works such as https://openreview.net/pdf?id=74A-FDAyiL. There, another spectral sparsification algorithm and lower bounds are discussed.\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your positive and detailed feedback. We are pleased to hear that you find the problem studied to be interesting and relevant, and that the contributions of our work are strong and technically significant. We appreciate your constructive comments, which will help us further improve the clarity and presentation of our work.\\n\\nWe agree that the manuscript's write-up can be improved to enhance accessibility. 
We also thank you for pointing out the typographical errors and for your suggestions for clearer wording.\\n\\nWe appreciate your observation about the assumptions related to \\\\((C, L)\\\\)-Lipschitz functions and fixed dimensionality. While these assumptions are necessary to ensure theoretical tractability, we recognize that they might limit the scope of certain practical applications. However, as our focus is on theoretical contributions, we believe this aligns with the intended scope of the paper.\\n\\nRegarding Q1, our algorithm builds on Har-Peled\\u2019s framework, which we adapt for dynamic updates. This adaptation involves modifying the WSPD update procedure to maintain robustness in dynamic settings. For Q2, while our results are tight within the theoretical framework, we will add comments in the conclusion to outline directions for potential future improvements in update times, such as extending the approach to handle adversarial updates more effectively. \\n\\nWe are grateful for your high evaluation of our work and your thoughtful suggestions, which will help us refine the presentation and strengthen the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
Djk1Tgs0wR
SeeThruAnything: Learning to Remove Any Obstructions Across Distributions
[ "Junhang Li", "Yu Guo", "Chuhua XIAN", "Shengfeng He" ]
Images are often obstructed by various obstacles due to capture limitations, hindering the observation of objects of interest. Most existing methods address occlusions from specific elements like fences or raindrops, but are constrained by the wide range of real-world obstructions, making comprehensive data collection impractical. To overcome these challenges, we propose SeeThruAnything, a novel zero-shot framework capable of handling both seen and unseen obstacles. The core idea of our approach is to unify obstruction removal by treating it as a soft-hard mask restoration problem, where any obstruction can be represented using multi-modal prompts, such as visual semantics and textual commands, processed through a cross-attention unit to enhance contextual understanding and improve mode control. Additionally, a tunable mask adapter allows for dynamic soft masking, enabling real-time adjustment of inaccurate masks. Extensive experiments on both in-distribution and out-of-distribution obstacles show that SeeThruAnything consistently achieves strong performance and generalization in obstruction removal, regardless of whether the obstacles were present during training.
[ "Obstruction Removal", "Zero-shot", "Prompts" ]
Reject
https://openreview.net/pdf?id=Djk1Tgs0wR
https://openreview.net/forum?id=Djk1Tgs0wR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xERyALNhFn", "v1oZCM6Qtl", "ul4W3uoJ6m", "sR0mZWVIIm", "sGLk8LT3hz", "orr6YgtY7U", "nSdZ6nebbO", "kqbSrlZger", "hmT9jh9E4F", "fbJ2S0hdHR", "e9mCiB0R8f", "Z7JbXKDSKr", "Y0O144lhFj", "Xh3YuBFVoT", "Pv8cRtkvWR", "JwAGRI0iSc", "JiYLtk5Y5X", "E7nfYFSAXQ", "CokvKwUnxJ", "AlvaRIfWAQ", "3GuDmIeceL", "2XjkXJ8Ctp", "0X9M5Fzaxo" ], "note_type": [ "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1737523399387, 1729742632333, 1732547540947, 1730393329774, 1732614123029, 1733118266324, 1732697043541, 1730629775476, 1732547485978, 1733089692529, 1732688430659, 1733121838860, 1732547466766, 1732547423492, 1732547446148, 1733125732821, 1732611306847, 1732547429290, 1734302119116, 1732769885592, 1730679709425, 1733115811713, 1732759991575 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_tqQw" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_AeWH" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_UqLU" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_AeWH" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_UqLU" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_AeWH" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_tqQw" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Area_Chair_yBPJ" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_Ms8C" ], [ "ICLR.cc/2025/Conference/Submission502/Authors" ], [ "ICLR.cc/2025/Conference/Submission502/Reviewer_AeWH" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces a zero-shot obstruction removal framework to handle both seen and unseen obstructions in images. The idea is to formulate obstruction removal as a soft-hard mask restoration task, leveraging multi-modal prompts to enhance generalization. The framework incorporates a tunable mask adapter that dynamically refines inaccurate masks during the restoration process. The authors show that their method achieves superior performance over state-of-the-art techniques across a wide range of both in-distribution and out-of-distribution obstructions, demonstrating its flexibility and robustness across diverse occlusions.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of the soft-hard mask prediction task intuitively enhances the model\\u2019s generalization ability, making it more adaptable to various obstruction types.\\n2. The paper conducts thorough experiments to validate the effectiveness of the multiple components of the framework.\", \"weaknesses\": \"1. While the generalization largely stems from the mask prediction process, the paper lacks a detailed analysis of the quality and generalizability of the predicted masks. Are there any quantitative metrics to evaluate mask quality on both seen and unseen objects?\\n2. There is no comparison with in-painting methods in experiments. 
It would be valuable to see a comparison with more recent diffusion-based in-painting methods. For example, [1,2].\\n3. The performance on seen categories is not consistently superior to prior works.\\n\\n\\n[1] Grechka, Asya, Guillaume Couairon, and Matthieu Cord. \\\"GradPaint: Gradient-guided inpainting with diffusion models.\\\" Computer Vision and Image Understanding 240 (2024): 103928.\\n\\n[2] Lugmayr, Andreas, et al. \\\"Repaint: Inpainting using denoising diffusion probabilistic models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\", \"questions\": \"1. In cases where an image contains multiple obstructions (e.g., raindrop and power cable), how does the model handle prompts to remove only one type of obstruction? Can it selectively remove the specified obstruction without affecting others?\\n2. In Sec 5.1, the patch sizes mentioned (128, 160, 192, 256) seem unreasonably large. Should this refer to the image resolution instead?\\n3. Is the model capable of handling obstructions that exhibit significant differences from seen obstructions, such as in the case of reflection elimination? Can the authors provide some visualizations with existing datasets or real-world photos?\\n4. How do the authors initialize the model? Are pre-trained weights beneficial?\\n5. What is the model size, and how does it perform in terms of inference speed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Response-Q1:** Thank you for your insightful question. To address this, we have included results on multi-type obstruction removal in **Appendix C.2** to demonstrate the model's capability. Leveraging the multi-modal prompts in our method, we can accurately target and remove the specified obstruction (e.g., raindrops or power cables) without affecting other image content. 
This selective removal is clearly illustrated in **Figure 11**. For further details and analysis, we invite you to review the revised manuscript.\\n\\n**Response-Q2:** You are correct\\u2014\\\"patch size\\\" in Sec 5.1 refers to the dimensions of the training image blocks, a term commonly used in traditional practices. It can indeed be understood as the resolution of these blocks.\\n\\n**Response-Q3:** Thank you for your question. As noted in the limitations section, our method is currently not designed for obstacles covering large areas, such as specular reflections, which lack sufficient contextual information. Addressing this issue is one of our future research directions.\\n\\nHowever, we have demonstrated the model\\u2019s strong zero-shot generalization ability on various obstacles significantly different from the training samples, such as spots, scratches, and power cables in **Figure 6**, rain streaks, snow, and strokes in **Figure 9**, and shadows and watermarks in **Figure 10**. These visualizations highlight the robustness of our approach in handling diverse unseen obstructions.\\n\\n**Response-Q4:** Since our model and the baseline model differ in terms of the task and input format, using pre-trained weights yields results comparable to our initialization method. Therefore, we directly initialize the model using the **Kaiming normal distribution**.\\n\\n**Response-Q5:** Our model has 56.69 M parameters and an inference speed of 84.28\\u00b10.61 ms. 
You can see **Appendix F** for a more detailed analysis.\\n\\n**Table 7:** Comparisons of parameters, FLOPs, and runtime across models.\\n\\n\\n| Model | Venue | Parameters (M) | FLOPs (G) | Runtime (ms) |\\n| --------------- | --------- | -------------- | --------- | ------------ |\\n| Restormer | CVPR22 | 26.13 | 118.60 | 49.37\\u00b10.46 |\\n| TransWeather | CVPR22 | 38.06 | 3.57 | 19.64\\u00b10.05 |\\n| PromptIR | NeurIPS23 | 35.59 | 132.26 | 53.95\\u00b10.47 |\\n| WGWSNet | CVPR23 | 4.70 | 96.65 | 88.39\\u00b10.35 |\\n| Histoformer | ECCV24 | 16.62 | 86.79 | 83.13\\u00b10.82 |\\n| XRestormer | ECCV24 | 22.34 | 155.49 | 100.67\\u00b10.44 |\\n| SeeThruAnything | | 56.69 | 146.23 | 84.28\\u00b10.61 |\"}", "{\"summary\": \"This paper proposes a new Obstruction Removal method, SeeThruAnything, to reconstruct a clear original image given a degraded image and the estimated occlusion mask as input. To deal with different obstructions with or without ambiguous boundaries, SeeThruAnything utilizes a transformer-based tunable adapter to convert hard masking to soft masking and uses different masks for different obstructions during inference. To better recover the original clean image, this paper also utilizes CLIP to extract multi-modal information from corrupted images and text commands like \\\"remove semi-transparent obstructions\\\" as a condition for their network. A cross-attention is used to inject this multi-modal information into their model. The proposed method obtains competitive performance compared to SOTA on seen obstructions and SOTA performance on unseen obstructions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method is quite simple and well motivated, which has the potential to become a common baseline for future works in this field.\\n2. The proposed method obtains competitive performance on seen obstructions and SOTA performance compared to previous methods.\", \"weaknesses\": \"1. 
The proposed tunable mask detector seems to be heavy. It would be best to mention the number of parameters and the FLOPs for your method and the compared methods so that we can distinguish the performance improvement brought by the increased parameters.\\n2. The proposed method uses the corrupted images with obstructions removed as input. The obstructions are removed according to inaccurately estimated obstruction masks. However, previous works mainly take degraded images with unremoved obstructions as input. There is no ablation study to prove the advantage of your design.\\n3. The images and texts in Figure 1 might be too small. It is difficult to distinguish the comparison in Figure 1.\", \"questions\": \"1. As the method is mainly tested on synthetic corrupted data, how does it perform for images with multi-type obstructions?\\n2. What mask detector is used? Is it the same one for compared methods?\\n3. What is the exact text command used in your method? Do you use different text commands for different obstructions? How does it perform when only using some consistent text commands like \\\"remove obstructions\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your follow-up question. We understand your concerns regarding mask detectors; however, we would like to clarify that designing a zero-shot mask detector is not the focus of our work. This is a distinct research area explored in studies such as referring segmentation, with recent advancements like Grounded SAM 2 showcasing powerful zero-shot capabilities. Our emphasis lies in understanding transparent obstructions and recovering the occluded transparent context beneath them. 
To ensure a fair evaluation of recovery capabilities, all methods in our comparisons use the same masks.\\n\\nExcept for the three basic obstructions used for training, all our experiments are designed to demonstrate zero-shot performance, effectively generalizing to diverse unseen scenarios. Simply put, our approach trains on obstructions A, B, and C, and tests on obstructions D through Z. This highlights the robustness of our distribution-agnostic obstruction formulation and its capability to handle recovery tasks across a wide range of obstructions. We hope this clarification underscores the focus of our work and addresses your concern.\"}", "{\"title\": \"Request for Reconsideration of Rating\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your thoughtful comments and feedback. We have provided detailed responses to address all concerns, including clarifications and revisions. We would greatly appreciate it if you could reconsider your rating to reflect the improvements and explanations provided in our replies. \\n\\nThank you for your time and consideration. \\n\\n\\nSincerely,\\\\\\nThe Authors\"}", "{\"comment\": \"**Response:**\\n\\nThank you for your comment. We believe there may still be some misunderstanding regarding the comparison and contributions of our work. \\n\\nWhile adding masks as input to all competing methods provides limited zero-shot capability, the key difference lies in how unseen obstructions are handled. Methods like PromptIR and WGWSNet are fundamentally not designed for generalization to unseen obstructions. Specifically, PromptIR relies on predefined degradation embeddings (which can be updated during training) and estimates probabilities that enhance feature representations. This approach is inherently limited to predefined degradation types and does not generalize well to unseen scenarios. 
WGWSNet, on the other hand, employs a two-stage approach: the first stage trains a general model, while the second stage trains multiple parallel pathways tailored to specific degradation types. During inference, each degradation type requires its own pathway in addition to the general model. \\n\\nThis limitation is evident when addressing rare or complex obstructions, as shown in **Figure 6**, where unseen obstructions differ significantly from those seen during training. In contrast, our distribution-agnostic obstruction formulation enables robust zero-shot generalization across a wide range of scenarios. \\n\\nWe also respectfully disagree with the suggestion that our method does not demonstrate significant improvement. In unseen scenarios, our method achieves an average PSNR gain of 2\\u20137 dB over competing methods, as shown in **Table 2**. Such performance improvement is considered substantial in the field of image processing, particularly for challenging unseen cases. The closer performance observed for rain streak removal can be attributed to the similar distribution of rain streaks to seen types like raindrops, which aligns more closely with the training data of competing methods. This contrasts with the other two unseen types, where their performance is significantly worse, underscoring their limited ability to generalize effectively. \\n\\nLastly, while our model has the highest parameter count due to the inclusion of multi-modal inputs, we emphasize that this is not the primary driver of our performance improvement. As demonstrated in **Table 7**, FLOPs and runtime comparisons show that our method is computationally comparable to existing models, with inference times remaining within an acceptable range. The performance gains stem from our distribution-agnostic formulation design, not from the increased parameter count. \\n\\nWe hope this clarification addresses your concerns and highlights the strengths of our approach. 
Thank you again for your valuable feedback.\"}", "{\"summary\": \"The article presents SeeThruAnything, a novel zero-shot framework designed to effectively remove various types of obstructions in images. SeeThruAnything employs multi-modal prompts\\u2014combining visual and textual inputs\\u2014processed through a cross-attention unit for enhanced contextual understanding. It also features a tunable adapter for mask adjustments. Extensive experiments demonstrate that SeeThruAnything excels in both familiar and unfamiliar obstacle scenarios, showcasing strong performance and generalization capabilities in obstruction removal tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The paper demonstrates that SeeThruAnything is highly effective in removing obstacles, particularly in generalizing to invisible obstacles outside the training distribution. \\n3. The paper conceptualizes obstacle removal as a problem of soft and hard mask recovery, offering significant insights into the future research directions of this field. By integrating visual tokens with text tokens, the model\\u2019s capacity for generalization in open-world scenarios is substantially enhanced.\", \"weaknesses\": \"1. The technical contribution of the paper is limited. The use of multi-modal prompts and mask recovery techniques, although effective, may not significantly depart from established methodologies, suggesting a reliance on existing concepts rather than groundbreaking innovations.\\n2. Generalization Limitation. While SeeThruAnything demonstrates the capability to remove unseen obstacles, these obstacles are often fundamentally similar in nature (e.g., raindrops and rain streak, fences and yarn). 
This is underscored by the observation that the performance of WGWSNet and PromptIR on rain streaks and strokes is nearly comparable to, or even surpasses, that of SeeThruAnything.\", \"questions\": \"As you mentioned, the original configuration of other methods cannot achieve zero-shot tasks. How do you give them this ability?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely appreciate Reviewer tqQw's valuable feedback. To provide a comprehensive overview and visual evidence of the improvements, we have included the updated manuscript in the supplementary materials. We kindly invite you to review this revised version, which thoroughly documents all new comparisons and analyses.\\n\\n**Response-W1:** Thank you for your comment. We believe there may be a misunderstanding regarding the source of our method\\u2019s generalization capability. The generalization performance does not stem primarily from the quality of the predicted masks but from our proposed formulation, which uniformly represents the complex task of obstacle removal. Even when provided with ground truth obstacle masks, many existing methods fail to remove obstructions, especially those unseen during training. This is evidenced by the visual comparison of rare obstacle removal in **Figure 6** and the results of inpainting-based methods using masks, as detailed in **Appendix C.3**.\\n\\nThe effectiveness of our model lies in its ability to handle diverse obstructions through the introduction of soft and hard recovery strategies, which address both opaque and semi-transparent obstacles. Additionally, the use of multi-modal prompts allows for precise representation of obstacles, further enhancing the model\\u2019s capabilities. The mask detector we use is an off-the-shelf component and not the primary focus of this paper. 
However, our model is plug-and-play, meaning it can seamlessly integrate with more advanced mask detectors in the future to further improve performance.\\n\\n**Response-W2:** Thank you for your comment. We initially included a comparison with inpainting-based methods in **Figure 1** of the original draft. However, as discussed in the Introduction, inpainting-based methods, while capable of producing visually plausible results, often fail to faithfully reconstruct the original image content. This lack of fidelity was the primary reason we did not include a detailed comparison with inpainting methods in the initial draft, as our focus is on methods that preserve the authenticity of the original scene.\\n\\nTo address this concern and provide a more comprehensive evaluation, we expanded the comparison in **Appendix C.2**, focusing on zero-shot recovery performance for unseen obstacles. Since the code for GradPaint [1] is not publicly available, we included LaMa [3] and Repaint [2] as competitors. The results, presented in **Table 5** and **Figure 12**, demonstrate that our method outperforms these inpainting-based methods both quantitatively\\u2014across three classic obstacle removal tasks\\u2014and visually, particularly for less common obstacles.\\n\\n**Table 5:** PSNR and SSIM comparisons of our method with inpainting-based methods on *unseen* obstructions. 
The best results are highlighted in **bold**.\\n\\n\\n| Method | Venue | Rain Streak PSNR | Rain Streak SSIM | Snow PSNR | Snow SSIM | Stroke PSNR | Stroke SSIM | Average PSNR | Average SSIM |\\n| --------------- | ------ | ---------------- | ---------------- | --------- | ---------- | ----------- | ----------- | ------------ | ------------ |\\n| LaMa | WACV22 | 29.07 | 0.8858 | 32.32 | 0.9108 | 28.10 | 0.8728 | 29.83 | 0.8898 |\\n| RePaint | CVPR22 | 28.78 | 0.8865 | 32.20 | 0.9064 | 23.78 | 0.8059 | 28.25 | 0.8662 |\\n| SeeThruAnything | | **29.82** | **0.8907** | **34.85** | **0.9283** | **29.45** | **0.9067** | **31.37** | **0.9086** |\\n\\n- [1] Grechka A, Couairon G, Cord M. GradPaint: Gradient-guided inpainting with diffusion models, CVIU, 2024.\\n- [2] Lugmayr A, Danelljan M, Romero A, et al. Repaint: Inpainting using denoising diffusion probabilistic models, CVPR 2022.\\n- [3] Suvorov R, et al. Resolution-robust large mask inpainting with Fourier convolutions, WACV 2022.\\n\\n**Response-W3:** Thank you for your observation. Our method is specifically designed to address the challenges posed by unseen obstructions, which may explain why its advantages are less pronounced for known obstructions. Nonetheless, comprehensive comparisons across the three datasets demonstrate that our method achieves consistently superior overall performance.\\n\\nMoreover, the comparable performance on both seen and unseen categories highlights a key strength of our approach: the distribution-agnostic obstruction formulation. By treating all obstructions uniformly as a transparency recovery problem, our method ensures consistent and robust performance across diverse scenarios, regardless of whether the obstructions were encountered during training.\"}", "{\"comment\": \"Sorry, I just have another point requiring clarification. For compared methods, do you use the same masked images as input? 
As you mention in the paper, previous methods are not trained with masked images as input. Do you retrain all compared methods? I notice Histoformer (ECCV 2024) claims their performance for raindrop removal is 33.060 PSNR and 0.9441 SSIM. However, in your Table 1, its performance is 31.59 PSNR and 0.9614 SSIM. Can you clarify this difference?\"}", "{\"comment\": \"Thank you for your reply, which has helped me understand the authors' intention to some extent.\\n\\nHowever, I still have some confusion about W2 and Q1. WGWSNet is an image restoration method designed for weather conditions, applicable to both general and specific weather scenarios through a two-stage approach. PromptIR is more closely aligned with this paper, as it restores images with a few lightweight prompts. While I know the difference between these methods, the authors do not clarify how these two methods achieve zero-shot removal capabilities. If the original versions of these methods are largely ineffective for unseen obstacles, then merely utilizing the same masks and degraded images would not be sufficient to acquire this ability. The authors should clarify this to ensure a fair comparison.\\n\\nMoreover, the performance shown in Table 2 does not demonstrate a significant improvement compared to the other methods; in some cases, it is even less effective. Additionally, the proposed method has a larger number of parameters than other methods, and the performance gains may be largely due to this, which further weakens the claim of \\\"unique generalization ability.\\\"\"}", "{\"comment\": \"Thanks for your quick response! I have a follow-up question regarding my initial W2. When you retrain the compared methods, does your masked input also produce a better result? As you have shown that the masked input works for your method, it would be interesting to see if this masking strategy also works for the existing methods.\"}", "{\"comment\": \"We sincerely thank Reviewer AeWH for their valuable comments.
To provide a comprehensive overview and visual evidence of the improvements, we have included the updated manuscript in the supplementary materials. We kindly invite you to review this revised version, which thoroughly documents all new comparisons and analyses.\\n\\n**Response-W1:** Thank you for your observation. To address this, we have included a detailed comparison of model parameters, FLOPs, and runtime in **Appendix F**. While our model has a slightly higher number of parameters compared to the baselines, the FLOPs and runtime\\u2014which are more reflective of computational efficiency\\u2014remain within a practical range. Notably, the increase in parameters primarily arises from the introduction of the cross-attention module for integrating multi-modal prompts, not from the tunable adapter itself. Specifically, the adapter contributes only 0.2176 million additional parameters and 11.08 giga FLOPs, ensuring that its impact on computational cost is minimal.\\n\\n**Table 7:** Comparisons of parameters, FLOPs, and runtime between our method and the baseline methods.\\n\\n\\n| Model | Venue | Parameters (M) | FLOPs (G) | Runtime (ms) |\\n| --------------- | --------- | -------------- | --------- | ------------ |\\n| Restormer | CVPR22 | 26.13 | 118.60 | 49.37\\u00b10.46 |\\n| TransWeather | CVPR22 | 38.06 | 3.57 | 19.64\\u00b10.05 |\\n| PromptIR | NeurIPS23 | 35.59 | 132.26 | 53.95\\u00b10.47 |\\n| WGWSNet | CVPR23 | 4.70 | 96.65 | 88.39\\u00b10.35 |\\n| Histoformer | ECCV24 | 16.62 | 86.79 | 83.13\\u00b10.82 |\\n| XRestromer | ECCV24 | 22.34 | 155.49 | 100.67\\u00b10.44 |\\n| SeeThruAnything | | 56.69 | 146.23 | 84.28\\u00b10.61 |\\n\\n**Response-W2:** Thank you for your comment. We would like to clarify that we conducted this ablation study in our original submission, and the results are presented in **Table 3**.
To make this clearer, we have highlighted this section in the revised manuscript and added explanatory footnotes to improve readability and ensure the results of our ablation study are easily understood.\\n\\n**Table 3:** PSNR and SSIM comparisons of integrating different modules.\\n\\n\\n| mask | CA | Adapter | PSNR | SSIM |\\n| ---- | -- | ------- | ----- | ------ |\\n| | | | 27.05 | 0.8920 |\\n| \\u221a | | | 28.05 | 0.9004 |\\n| \\u221a | \\u221a | | 30.00 | 0.9117 |\\n| \\u221a | \\u221a | \\u221a | 30.93 | 0.9250 |\\n\\n**Response-W3:** Thank you for your suggestion, we have reorganized **Figure 1** for better comparison.\\n\\n**Response-Q1:** Thank you for your insightful question. We have conducted experiments on images with multiple types of obstructions, as detailed in **Appendix C.2** and illustrated in **Figure 11**. By leveraging multi-modal prompts to represent various obstructions, SeeThruAnything effectively removes the specified obstacles with high accuracy. Additional results and in-depth analysis are provided in the revised manuscript for your review.\\n\\n**Response-Q2:** Thank you for your question. The mask detector used in our method is described in detail in **Appendix A.1** of the initial draft. In the revised version, we have highlighted this section in blue for clarity. To ensure a fair comparison, the same mask inputs are used across all experiments for both our method and the comparative methods.\\n\\n**Response-Q3:** Thank you for your question. Examples of the text commands we use are provided in **Figure 8** and **Figure 14**, ranging from simple descriptive phrases to more complex sentences. We typically use different text commands to specify whether an obstacle is opaque or semi-transparent. A generic command like \\\"remove obstruction\\\" may not yield optimal results, as our method interprets it as referring to opaque obstacles and performs hard mask recovery accordingly. 
However, by including transparency-specific descriptions in the input text, such as \\\"opaque yarn\\\" or \\\"semi-transparent obstacles,\\\" our method achieves significantly better results.\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate your valuable feedback and the critical concerns raised. We believe some of these concerns stem from misunderstandings, which we have addressed thoroughly in the revised manuscript. Specifically, our distribution-agnostic obstruction formulation, which unifies obstruction removal tasks, is the key innovation enabling generalizability to both seen and unseen obstructions. This formulation is not only impactful for obstruction removal but also holds significant potential for broader applications in image recovery tasks. We kindly invite you to review the clarifications and improvements in our revised manuscript and hope you will reconsider your ratings in light of these contributions.\\n\\nThank you for your time and thoughtful consideration.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"comment\": \"We sincerely appreciate Reviewer UqLU's valuable feedback. To provide a comprehensive overview and visual evidence of the improvements, we have included the updated manuscript in the supplementary materials. We kindly invite you to review this revised version, which thoroughly documents all new comparisons and analyses.\\n\\n**Response-W1:** Thank you for your feedback. We believe this concern may stem from a partial or shallow understanding of some aspects of our techniques. Our method introduces a novel distribution-agnostic obstruction formulation that unifies obstruction removal as a single framework, treating all obstructions as a soft masking problem. 
This formulation focuses on understanding the underlying context of transparent obstructions, which is critical for effective removal.\\n\\nWe demonstrate that even with a ground truth mask, the inherent distribution of different obstructions cannot be easily handled using traditional approaches. Our approach addresses this limitation by enabling zero-shot removal through a flexible soft mask recovery strategy, which generalizes across diverse obstruction types, including unseen scenarios. This perspective has not been explored in prior work and significantly advances both the theoretical foundation and practical application of obstruction removal.\\n\\n**Response-W2&Q1:** Thank you for your comments and questions. We believe some aspects of your concerns might stem from a misunderstanding. The original versions of WGWSNet and PromptIR are not designed for zero-shot tasks and are largely ineffective for unseen obstructions. To ensure a fair comparison, we standardized the inputs for all methods, including the same degraded images and masks, effectively giving these methods some degree of zero-shot removal capability. While their complex designs allow them to perform comparably to ours in specific unseen scenarios with distributions similar to their training data, our method consistently outperforms them. The PSNR and SSIM comparisons across three unseen scenarios (**Table 2**) highlight our model's advantages, with a PSNR lead of 1.88 dB over the second-best method.\\n\\nRegarding the similarity between fences and yarn, this appears to be a misconception. Fences exhibit regular patterns and predictable distributions in the image domain, while rare obstructions like yarn have random distributions and highly irregular patterns, making them much harder to remove. 
This difference is clearly demonstrated in the results shown in **Figure 6**.\\n\\nOur approach addresses these challenges through a novel soft and hard mask strategy that understands the underlying context of diverse obstructions, enabling effective removal even in complex or unseen scenarios. Competing methods struggle in these cases, which underscores the unique generalization capability of our model. We hope this clarification resolves any misunderstanding and highlights the strengths of our proposed method.\"}", "{\"comment\": \"Thank you for your follow-up question. Our distribution-agnostic obstruction formulation is inherently tied to the use of multi-modal prompts, which work together to enable the effectiveness of our method. Without these components, our method essentially reduces to a Restormer, as demonstrated in our ablation studies.\\n\\nThis observation also answers your question: incorporating our components\\u2014such as the distribution-agnostic formulation and multi-modal prompts\\u2014can potentially improve the performance of other baseline methods as well. We will explore this aspect further in the final revision.\\n\\nThank you again for your insightful question!\"}
There is no need for an exhaustive grid search; one or two representative results would suffice to provide valuable insight.\"}", "{\"comment\": \"We sincerely appreciate the insightful comments provided by Reviewer Ms8C. To offer a detailed account and visual evidence of the improvements, we have included the updated manuscript as part of the supplementary materials. We kindly invite you to review this revised version, which thoroughly documents all new comparisons and analyses.\\n\\n\\n**Response-W1:** Thank you for raising this important point. Prior to training the SeeThruAnything model, we fine-tuned the CLIP text encoder to adapt its embedding space to our specific task. The fine-tuning process aimed to optimize the embedding space by reducing intra-class distances (i.e., decreasing the cosine distance between embeddings of different textual descriptions for the same task) while increasing inter-class distances (i.e., enhancing the separation between embeddings of textual descriptions for different tasks). This adaptation ensures that the CLIP text encoder can effectively handle task-specific instructions. As this is a relatively small adjustment, it was not included in the initial draft. However, we have provided a detailed explanation of this technique in **Appendix B** of the updated manuscript for your reference.\\n\\n\\n**Response-W2:** Thank you for your suggestion. To improve the model's reproducibility and make it easier for readers to understand, the usage details about CLIP have been added to **Appendix B** of the updated version.\\n\\n\\n**Response-W3:** Thank you for your insightful suggestions. We have added the ablation experiment using only the visual prompt to **Table 4** for completeness. Additionally, we explored replacing the CLIP text embedding model with the BLIP model, and the results show comparable performance to our final setup.
Detailed metrics and analysis can be found in **Appendix D.2** and **Table 6**.\\n\\nInitially, we did not include the \\\"visual prompt only\\\" setting because using additional images as input deviates from the primary goal of leveraging text-based instructions, which are more direct, interpretable, and practical in real-world scenarios. Such a setting introduces complexity that may limit the applicability and flexibility of the model. However, we appreciate the opportunity to address this point in the revised manuscript.\\n\\n**Table 4:** PSNR and SSIM comparisons of using different prompt strategies.\\n\\n| Textual Prompt | Visual Prompt | PSNR | SSIM |\\n| -------------- | ------------- | ----- | ------ |\\n| | | 28.65 | 0.9063 |\\n| \\u221a | | 29.73 | 0.9168 |\\n| | \\u221a | 30.25 | 0.9215 |\\n| \\u221a | \\u221a | 30.93 | 0.9250 |\\n\\n**Table 6:** PSNR and SSIM comparisons of using different prompt generation strategies.\\n\\n| Model | PSNR | SSIM |\\n| ---------------------- | ----- | ------ |\\n| SeeThruAnything + CLIP | 30.93 | 0.9250 |\\n| SeeThruAnything + BLIP | 31.01 | 0.9235 |\\n\\n\\n**Response-Q1:** Thank you for raising this important concern. We believe this issue stems from a misunderstanding, which we have now clarified in **Appendix B**. Specifically, the CLIP text encoder was fine-tuned to better adapt to task-specific instructions, and its role within our framework is clearly outlined. To address the question of generalizability, we conducted additional experiments replacing the CLIP text encoder with the BLIP model, as detailed in **Appendix D.2**. The results confirm that our framework is not restricted to CLIP and can adapt to other commonly used pretrained text encoders.\\n\\nRegarding textual prompts, their primary purpose in our approach is to specify the type of obstacle to be removed, rather than providing complex or detailed scene descriptions. This allows for robust performance without requiring intricate prompts. 
Examples illustrating this simplicity and effectiveness can be found in **Figure 8** and **Figure 14**.\\n\\nWe hope this clarification resolves the misunderstanding and highlights the robustness and flexibility of our approach. In light of this additional evidence and explanation, we kindly request the reviewer to reconsider their rating.\"}", "{\"metareview\": \"All the reviewers provided negative ratings. Although the paper has some merits, e.g., competitive results, the reviewers pointed out a few critical concerns about 1) technical contributions compared to the prior work, 2) technical clarity, such as the prompts used and details of CLIP text encoder training, and 3) generalization limitation and performance on seen cases. After taking a close look at the paper, rebuttal, and discussions, the AC agrees with reviewers' feedback and hence suggests the rejection decision. The authors are encouraged to improve the paper based on the feedback for the next venue.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal, some of the concerns like technical clarity are explained by the authors. However, the generalization issue raised by reviewers UqLU, AeWH, and tqQw is not fully addressed in the post-rebuttal period. The AC agrees with the three reviewers that this can still be significantly improved in the manuscript since one of the paper's main focuses is on the zero-shot removal capability.\"}", "{\"comment\": \"**Response:**\\n\\nThank you for your thoughtful comments. We are glad to hear that most of your concerns have been addressed. Below are our responses to the remaining points you raised: \\n\\n1. **SAM 2 Usage:** We use SAM 2 in two modes. In automatic mode, GroundingDINO detects obstructions, and SAM 2 generates masks. In manual mode, masks are created based on user-provided input points or bounding boxes.
This detail was not initially included because all competitors use the same masks, and our primary focus has been on evaluating removal performance. However, we will include these details in the revised manuscript for clarity. \\n\\n2. **CLIP Text Encoder Fine-tuning:** We utilized ChatGPT-4o to generate 3,984 diverse text commands for fine-tuning, matching the number of our training images and ensuring robust training. Fine-tuning provides two key benefits: (1) It reduces the semantic distance between user-generated commands and core commands, enabling the model to better interpret user intent. (2) It enhances the model's ability to identify when to apply soft masking, improving semi-transparent obstruction removal. For instance, before fine-tuning, the cosine similarity (with softmax) between a user command like \\\"There are raindrops in the image, please remove them\\\" and the core commands (\\\"remove opaque obstructions\\\" and \\\"remove transparent obstructions\\\") was 0.5374 and 0.4626, respectively. After fine-tuning, these values improved to 0.00004 and 0.99996, demonstrating significantly improved semantic alignment. \\n\\n\\n3. **Soft Masking Generalizability:** We acknowledge that handling large-area occlusions remains challenging due to the significant loss of underlying visual semantics, which are critical to our recovery formulation. This limitation is also shared by inpainting-based methods. However, our approach represents a novel attempt at unseen obstruction removal, with extensive experiments demonstrating strong generalizability across diverse unseen obstructions. The category-agnostic obstruction formulation and unified model design establish a promising foundation for addressing such challenges. We appreciate your suggestion and will explore ways to enhance performance for large-area occlusions in future work. \\n\\nWe hope these responses clarify your concerns.
If they do, we kindly request you consider adjusting your score to reflect this. Thank you again for your constructive feedback.\"}", "{\"summary\": \"The paper introduces an in-painting method to remove real-world obstructions from images. The method uses multimodal prompts from a pretrained CLIP model as conditioning to the in-painting transformer model and shows good improvement over prior art.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is well written, the method and architecture are clearly explained and tested on a wide range of benchmarks. The proposed method shows good improvement over recently published methods in the domain.\", \"weaknesses\": \"1. The paper does not include any examples of textual prompts. The only examples are in Figures 1 and 3; the CLIP text encoder is not explicitly trained on instructions like \\\"Remove the semi-transparent obstruction\\\", and the image-text datasets used to train CLIP models typically have captions describing the foreground which may or may not describe the type of occlusions. It is unclear how the embedding space of CLIP's text encoder is capable of embedding such instructions.\\n2. The paper does not provide details on the CLIP model used for generating multimodal prompts. \\n3. In Table 4, the ablation does not include the \\\"visual prompt only\\\" setting. An interesting ablation would be to use different text embedding models apart from CLIP.\", \"questions\": \"My main concern with this work is the use of the CLIP text encoder, as it is typically not trained on textual instructions as depicted in Figures 1 and 3. Is it possible that any text encoder would work in this setup? Also, ablation on textual prompts would be good to have, i.e., what level of detail is necessary in the prompt to achieve a good in-painting result.
I am willing to update the score based on the response to the above questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Response:**\\n\\nThank you for your comments. To ensure a fair comparison, we provided the same mask inputs for all competing methods and retrained them on the same dataset we used, thereby granting them zero-shot capability (discussed on Page 9, footnote 1).\\n\\nRegarding the concern about the raindrop dataset, we utilized a dataset different from Histoformer. Specifically, we employed the VRDS dataset [1], a more recent and advanced dataset for raindrop removal, while Histoformer used the Raindrop dataset [2]. Additionally, Histoformer trains separate weights for each obstacle type, whereas we retrained all competing methods as single general models to evaluate their universality. These differences account for the variations in performance metrics.\\nWe hope this explanation clarifies our methodology. Thank you again for your valuable feedback.\\n\\n- [1] Wu H, Yang Y, Chen H, et al. Mask-Guided Progressive Network for Joint Raindrop and Rain Streak Removal in Videos. MM, 2023.\\n\\n- [2] Qian R, Tan R T, Yang W, et al. Attentive Generative Adversarial Network for Raindrop Removal from a Single Image. CVPR, 2018.\"}", "{\"comment\": \"I acknowledge I have read the responses from authors and discussions among other reviewers. Most of my concerns are addressed. For Q2, how SAM 2 is used to detect masks of obstructions is not mentioned. For Q3, the contrastive finetuning of the CLIP text encoder may need further justification. How the finetuning benefits the performance is not mentioned. And the authors could consider trying GPT-4 to generate more command samples for finetuning. And I am also concerned about the generalizability of the proposed method.
The limitations in handling large areas of occlusion may indicate the limited generalizability of soft masking.\"}" ] }
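The softmax-over-cosine-similarity check quoted in the CLIP fine-tuning response above (the 0.5374 / 0.4626 vs. 0.00004 / 0.99996 figures) can be sketched as follows. This is a minimal illustration with made-up vectors standing in for CLIP text embeddings; the actual encoder, commands, and fine-tuning code are not part of this record.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical embeddings standing in for CLIP text features of a user
# command and the two core commands ("remove opaque obstructions" and
# "remove transparent obstructions").
user_cmd = [0.2, 0.9, 0.4]
core_opaque = [0.1, 0.3, 0.9]
core_transparent = [0.25, 0.85, 0.45]

sims = [cosine(user_cmd, core_opaque), cosine(user_cmd, core_transparent)]
probs = softmax(sims)  # softmax-normalized similarities over the two core commands
```

Fine-tuning as described in the rebuttal would push `probs` toward a near one-hot distribution over the matching core command.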
DjHnxxlqwl
Solving Urban Network Security Games: Learning Platform, Benchmark, and Challenge for AI Research
[ "Shuxin Zhuang", "Shuxin Li", "Tianji Yang", "Muheng Li", "Xianjie Shi", "Bo An", "Youzhi Zhang" ]
After the great achievement of solving two-player zero-sum games, more and more AI researchers focus on solving multiplayer games. To facilitate the development of designing efficient learning algorithms for solving multiplayer games, we propose a multiplayer game platform for solving Urban Network Security Games (**UNSG**) that model real-world scenarios. That is, preventing criminal activity is a highly significant responsibility assigned to police officers in cities, and police officers have to allocate their limited security resources to interdict the escaping criminal when a crime takes place in a city. This interaction between multiple police officers and the escaping criminal can be modeled as a UNSG. The variants of UNSGs can model different real-world settings, e.g., whether real-time information is available or not, whether police officers can communicate or not. The main challenges of solving this game include the large size of the game and the co-existence of cooperation and competition. While previous efforts have been made to tackle UNSGs, they have been hampered by performance and scalability issues. Therefore, we propose an open-source UNSG platform (**GraphChase**) for designing efficient learning algorithms for solving UNSGs. Specifically, GraphChase offers a unified and flexible game environment for modeling various variants of UNSGs, supporting the development, testing, and benchmarking of algorithms. We believe that GraphChase not only facilitates the development of efficient algorithms for solving real-world problems but also paves the way for significant advancements in algorithmic development for solving general multiplayer games.
[ "security games", "multiplayer games" ]
Reject
https://openreview.net/pdf?id=DjHnxxlqwl
https://openreview.net/forum?id=DjHnxxlqwl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vHZlfdbSyM", "uyAnTDjFjd", "ttg4jZ4MMb", "ro7jmnfmfH", "oVGS312opX", "meuA9WBACV", "m7pgYABztz", "lxWhFdfC7j", "l1kGk4PBIy", "iHzR8xGilf", "g5XvbWnHje", "dKtNzS0FUQ", "dEHXT9A1zl", "bCJQSmFTwn", "ZjWOpuJ6Fn", "ZczTVo1o0S", "WhBzJft9pG", "U3K2TA7Ray", "PP9Fqcl58S", "Ovp3g1hSWU", "Orfi6RqYyG", "KJUHygImiM", "D6KvC5B9HP", "BG9Hn5boBo", "7HW1v9rLQ2", "620ljrkveT", "2B46el16L3", "1osEim0Ima" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730539955845, 1732370385157, 1733045457368, 1730626441147, 1732371238872, 1732370052282, 1733045330721, 1732369574507, 1733045383926, 1732962222296, 1732528280698, 1732371086715, 1733221822888, 1732371145353, 1733212627331, 1732767412595, 1732648694317, 1732370091682, 1732491906169, 1732370500256, 1732370643982, 1737524286873, 1734678572992, 1732370859544, 1730647800776, 1733050867970, 1732766979870, 1730655105805 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_HG6y" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_eAes" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13872/Reviewer_HG6y" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_HG6y" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_FH55" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_Wrzs" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13872/Area_Chair_mZZY" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_FH55" ], [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_eAes" ], [ "ICLR.cc/2025/Conference/Submission13872/Authors" ], [ "ICLR.cc/2025/Conference/Submission13872/Reviewer_Wrzs" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes GraphChase, a Gymnasium-based platform that aims to solve urban network security games (e.g., police trying to catch a criminal/terrorist in a densely populated urban environment where avoidance of collateral damage and civilian casualties is crucial). The platform has a modular structure allowing the user to define the environment (by entering a - potentially complex - graph), the strategies of the players and the learning algorithms among other parameters. The paper runs experiments in 7x7 grids with 4 police officers and 1 criminal and demonstrates that solution through GraphChase achieves generally faster solution times than naive implementation of the learning algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"*Quality and Clarity*: the paper is well-structured and covers many details of the proposed GraphChase platform.
There is a discussion of related work, potential extensions to multi-agent settings and limitations of the current work which mainly stem from the (convergence) limitations (in complex environments) of state-of-the-art MARL algorithms. The exposition is generally clear, and I could follow the paper.\", \"*Originality and Significance*: the problem of urban network security games is important - it entails optimisation of police strategies to arrest criminals which becomes particularly challenging in urban environments where casualties of co-existing civilians need to be avoided at any cost - and providing a platform to make their solution easier is an ambitious and well-received goal. There is a literature that studies this topic, and although I am not an expert in this literature, the paper does a good job of presenting it for the general reader and motivating the current study.\", \"*Reproducibility*: The authors provide a link to a github repository with their code, which I appreciated.\"], \"weaknesses\": \"- *Contribution and Novelty*: the main weakness in my opinion is that the paper does not provide convincing enough evidence that GraphChase is at the moment a significant step in solving UNSGs. The algorithmic results provide some evidence that it converges faster, but this is not surprising since the platform and the learning algorithms are integrated. I am not sure that I understand correctly the reasons for the claimed speed-up (so, please see my question below). Also, some of the simulations seem to terminate early or do not demonstrate significant improvements over the naive algorithm.\\n\\nBy weighing the strengths and weaknesses, my evaluation is that while I don't see methodological errors or bad exposition, the contribution of the current paper is simply not enough to merit publication at its current stage at ICLR. The paper should provide more substantial experiments in more complex environments and more systematic comparisons to the existing literature.
While I understand that such environments can grow very complex very quickly, demanding unrealistic computational power to be solved on researchers' computers, I still believe that the current contribution is not sufficient.\", \"questions\": [\"Can the authors clearly explain how the speed-up is achieved in the simulations that they show? In other words, what are the exact reasons/mechanisms that make the solution through GraphChase faster than the \\\"naive\\\" solution? Also, how does the \\\"original code\\\" (or naive) implementation solve the games? Can the authors provide a detailed breakdown of computational time for different components of their algorithm compared to the baseline approaches?\", \"Why do some simulations seem to terminate early (especially the red lines) and some others to terminate before convergence (e.g., first panels of Fig 7 and 8)? Can you please explain the termination criteria in detail, and provide convergence plots that show the full trajectory of the algorithms until a well-defined convergence point is reached?\", \"Can the authors provide specific additional experiments in already existing environments from the literature (with references) and comparisons that would provide more convincing evidence of GraphChase's significance?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful review! Please find point-by-point replies below:\\n\\n## Weaknesses:\\n**To Weakness 1:** We would like to clarify two key aspects of GraphChase:\\n\\n Regarding information levels, our platform allows researchers to configure game environments according to their algorithmic requirements. This flexibility is demonstrated in our current implementations: CFR-MIX makes decisions without considering evader history, while PretrainPSRO incorporates historical evader information.
This exemplifies GraphChase's capability to support varying information levels. That is, the information-level requirements are determined by the algorithms themselves. Currently, there are only a few learning algorithms for UNSGs, and their results are shown in our work.\\n\\n Concerning graph structures, GraphChase supports the removal of edges between any connected nodes to simulate different real-world scenarios. This feature has been tested with the Grasper algorithm, as it requires the generation of diverse graph structures. Furthermore, GraphChase allows researchers to customize various graph structures by providing an adjacency list as a parameter to our game generation function. \\n\\n We designed GraphChase to offer maximum flexibility through user-defined graph structures and customizable information levels. We hope that our platform serves as a versatile tool for researchers, enabling them to efficiently implement and evaluate various algorithms. \\n\\n**To Weakness 2:** We would like to clarify several aspects of our platform's implementation and evaluation methodology:\\n\\n Regarding finding the evader's optimal strategy, our current implementations follow the simplified evader modeling approach used in existing UNSG research. This simplification means that the platform only considers finding the optimal strategy for the pursuer and directly computes the criminals' best responses for evaluation. \\n\\n The computational complexity of UNSGs presents significant challenges, as the strategy space of players cannot be enumerated even under simplified conditions where the time dynamics are ignored (Jain et al., 2011). That is, even if the evader's strategy is simplified, the problem of solving UNSGs is still NP-hard (Jain et al., 2011), as discussed in Section 2.3 of the paper.\\n\\n We also note the limitations of solely computing the pursuer's optimal strategy during training, as evidenced by the experimental results in Table 1.
The pursuer's performance notably deteriorates when confronting evader strategies not encountered during training. This observation was one of the key motivations behind developing GraphChase. Thus, our platform enables researchers to model evaders with any strategy. Specifically, GraphChase allows researchers to customize evader decision-making mechanisms, such as implementing them as learning algorithms to find optimal strategies.\\n\\n Due to the computational complexity of evaluation, we sample 1000 episodes to calculate the pursuers' worst-case utility. At the beginning of each episode, the evader selects an available path based on the best response, i.e., by enumerating all paths and selecting the best one. During each time step, the evader moves according to this predetermined path. The pursuer's performance is evaluated based on the average success rate across these 1000 episodes.\\n\\n As we discussed in Section 2.3, scalability is the key challenge in solving UNSGs. Our GraphChase platform has been designed to address these challenges by providing a large-scale game environment.\"}
GraphChase aims to provide a flexible environment for developing and benchmarking algorithms across diverse UNSG scenarios, supporting features including different numbers of players, types of underlying graphs, information levels, and the presence or absence of communication among police officers. Experimental results indicate that GraphChase improves algorithm efficiency compared to previous implementations but reveals ongoing challenges in scaling algorithms for larger urban networks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper addresses a challenging and practical problem\\u2014Urban Network Security Games (UNSGs)\\u2014that has direct applications in urban safety and law enforcement.\\n2. The paper aims to design an open-source, flexible, and modular platform that facilitates simulation, testing, and benchmarking of various algorithms in scalable settings. This makes it valuable for researchers aiming to develop efficient algorithms and compare them in a fair way. \\n3. The paper implements multiple algorithms in its experiments, effectively identifying performance and scalability issues.\", \"weaknesses\": \"Considering that this is a submission for the datasets and benchmarks track, I have several concerns about the weaknesses of this paper.\\n\\n1. Given that GraphChase is claimed as a comprehensive platform enabling various configurations, the experiments seem limited. Additional experiments and result analysis on different underlying graphs and varying information levels would better demonstrate the platform\\u2019s versatility and advantages. \\n2. It seems that the platform only considers algorithms for finding the optimal strategy for the pursuer, and directly computes the criminals' best responses as evaluation. However, the paper does not discuss the computational efficiency of evaluation. \\n3. 
The paper elaborates on related work on pursuit-evasion games and mentions their relationship to UNSGs. However, first, I found the literature comparison in this part unclear. Second, the paper does not discuss the relationships in detail, nor does it illustrate them using experiments. For example, does UNSG include other games beyond pursuit-evasion games? \\n4. The paper does not discuss the efficiency of using and interacting with the platform.\\n5. The experimental results are not presented in a clear way, as detailed in the \\\"Questions\\\" section below.\", \"questions\": \"1. In Line 347, how is the probability of being caught calculated? Specifically, I do not understand why the left side of Figure 5 has a probability of 0.5.\\n2. In Section 4.2, the authors aimed to show that their reproduced code performs better than the original. Could the authors further clarify this comparison? Is the improvement due to enhancements in code details, or does GraphChase offer systemic advantages that enable faster algorithm convergence?\\n3. In Table 1, the paper notes that even in relatively small settings, existing methods require extensive training times. Is this due to the complexity of environment interactions? Does this imply a lack of efficiency in the current platform, or is it simply a limitation of the algorithms themselves?\\n4. Around Line 410, the authors mention that current methods often make simplified assumptions about the criminal's strategy. Would it be possible to observe phenomena or convergence patterns if both the criminal and the police officer were modeled as learning algorithms?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
In response to their comments, we have made several additions to our manuscript:\", \"In Appendix C, we added a table of the parameters that users can control to generate graph structures tailored to their specific research needs.\", \"Now we have additional experiments on a variety of scenarios used in the UNSG domain (Xue et al. 2021; 2022; Li et al. 2023a; 2024) in Section 4.2 (details are in Appendix D), which include larger $15\\times 15$ grid structures and real-world maps based on Singapore and Manhattan. We believe that our extensive testing on these maps effectively demonstrates GraphChase's capability to address real-world problems.\", \"Regarding the convergence speed, we provided a detailed comparison in Appendix E to explain why GraphChase achieves faster convergence compared to the original papers in terms of the wall-clock time.\", \"We have also included a brief tutorial on implementing GraphChase. The details are shown in Appendix F.\", \"We have made some revisions, which are highlighted in red in our updated manuscript. We sincerely thank the reviewers for their valuable feedback. We welcome further discussion and are open to any questions or suggestions you may have.\"]}", "{\"comment\": \"Thank you for your thoughtful review! Please find point-by-point replies below:\\n\\n## Weaknesses:\\n**To W1:** We would like to clarify that our work primarily aims to provide a standardized platform for UNSG research and algorithm development, which is why we submitted to the Datasets and Benchmarks track.\\n\\n Current UNSG-focused works (NSGZero, NSG-NFSP, Pretrained PSRO, Grasper, and CFR-MIX) implement environments differently, requiring researchers to rewrite substantial code to adapt to each algorithm's input requirements when conducting comparative studies. GraphChase addresses this challenge by integrating multiple algorithms into our platform. Researchers need only input the graph structure once to evaluate multiple algorithms' performance.
This not only enhances research efficiency but also ensures fair algorithm comparisons by resolving implementation inconsistencies across different works.\\n\\n One of the key features of GraphChase is its flexibility, allowing users to customize various game variants. We believe that the definition of game variants should be determined by the algorithm designers rather than by us, in order to support training different algorithms on the same graph structure. Our platform provides all state information at each simulation step, including all pursuer and evader observations. Researchers can freely define how their algorithms utilize this information. For instance, while CFR-MIX doesn't consider evader history, PretrainPSRO incorporates historical evader information in pursuer action selection. Despite these methodological differences, our platform requires only a single input of the game's topological structure to enable training and testing across different algorithms and game variants on the same game structure.\\n\\n Furthermore, regarding graph topologies, our platform supports arbitrary graph structures for UNSG problems. Researchers can define any graph topology (grid-based or otherwise) by simply providing the adjacency list to our platform's game generation function. The platform architecture is intentionally designed to accommodate various equilibrium concepts, as shown in the discussion. Researchers can leverage our platform to study different types of equilibria by implementing their own algorithms. \\n\\n Our experiments show that current algorithms still suffer from performance and scalability issues in real-world settings.
This suggests that substantial efforts are still required to develop effective and efficient algorithms for solving real-world UNSGs.\\n\\n We envision GraphChase as a unified and flexible platform that will advance research progress on UNSG problems.\\n\\n**To W2:** We have revised the paper to adopt a more neutral and precise tone, removing subjective descriptors like \\\"advanced\\\" and \\\"pivotal.\\\" Thank you for helping us maintain appropriate scientific writing standards.\\n\\n## Questions:\\n**To C1:** Thank you for your question regarding the definition of 'caught'. We acknowledge that we overlooked this definition in the original paper. To clarify: the evader is considered caught if the evader and any of the pursuers occupy the same point at any time within the maximum time horizon. We have now added this precise definition to Section 2.1.\\n\\n**To C2:** We appreciate your attention to detail.\\n\\n 1. We updated $E_{xit}$ to $E_{exit}$.\\n 2. $N(v)$ specifically denotes the set of neighbours of node $v$. We have clarified this definition in the relevant sections of the paper to avoid any ambiguity.\\n 3. We corrected $R$ to $\\\\mathbb{R}$.\"}
While our current testing algorithms focus on Nash equilibrium solutions, it's important to clarify that the platform itself is equilibrium-concept agnostic.\\n\\n The platform serves as a testing and training environment, supporting researchers in algorithm development and evaluation. Researchers interested in studying Markov equilibrium or subgame perfect equilibrium would need to implement their own solution algorithms. The platform's role is to provide the environmental framework within which these various equilibrium concepts can be explored, rather than being tied to any specific equilibrium solution concept.\\n\\n One of our platform's aims is to allow researchers to implement and test different solution concepts according to their research needs. So the simulator's architecture is intentionally separated from the specifics of equilibrium computation, making it a versatile tool for studying various game-theoretic solution concepts.\\n\\n3. **To Weakness 3**: Our platform is designed with extensibility as a core feature, allowing researchers to easily incorporate additional real-world elements into their models. The key to this flexibility lies in the method of generating games. Researchers can design and input their own adjacency list as a parameter to our platform's game generation function, enabling customized environment modeling for specific research requirements.\\n\\n What's more, since the graph implementation is based on the NetworkX library, our platform enables researchers to incorporate diverse graph attributes, such as edge-specific travel times and other details, to model real-world scenarios.\\n\\n The platform's modular design ensures that such modifications can be implemented without requiring changes to the core system architecture. \\n\\n4. **To Weakness 4**: SIMPE is a Matlab-based platform for simulating pursuit-evasion games, which has several limitations.
First, it outputs the coordinates of the pursuer and evader in the x-y plane, with a continuous position space, so it is difficult to model the topological structure of actual UNSGs. Second, it does not take time information into account, overlooking the temporal constraints inherent in UNSG problems. Finally, it offers only three predefined strategies for pursuers and evaders, which substantially restricts the platform's extensibility. \\n\\n Avalon, on the other hand, is designed to simulate biological survival skills (from basic actions like eating to complex behaviors like hunting and navigation). It provides diverse procedural 3D environments where agents must survive, which makes it unsuitable for modeling UNSGs. We expanded our comparison of GraphChase with SIMPE and Avalon in the related work section.\\n\\n Our GraphChase platform is purposefully designed to model and simulate real-world criminal prevention scenarios, enabling the application of developed algorithms to urban security challenges. We deliberately chose Gymnasium to implement our platform, as many DRL researchers are already familiar with this environment, thus reducing the learning time. Additionally, our implementation of Gymnasium's SyncVector functions introduces parallel simulation capabilities, while Gymnasium's render functions enable researchers to better understand their algorithms' real-world performance. These features are not present in the other platforms. We hope that GraphChase can be a valuable tool for researchers designing their own algorithms.
I can now better appreciate their contribution. The new experiments in the $15\\times15$ Singapore map, which recover the performance of existing algorithms (Table 3) but at half the running time (Table 4) -- if I understand correctly -- strengthen the paper. Similarly, the instructions on how to use the platform in Appendix F are also welcome. I raise my score to reflect these changes. Although Datasets and Benchmarks is not my area of research (I focus on theory), and after reading the other, more expert, reviews, I still believe that more systematic experiments in more complex environments (the above being correct steps in that direction) and a more careful presentation that highlights the key contributions of the platform (the above again being correct steps in that direction) are required.\"}", "{\"comment\": \"We appreciate the reviewer's thoughtful critique and the opportunity to clarify our platform's contributions.\\n\\n## **To Weaknesses:**\\n\\nThe primary goal of GraphChase is not to propose a novel algorithm or demonstrate superior performance over existing work, but rather to provide a unified, open-source environment for training and testing algorithms for UNSGs. Our platform enables researchers to efficiently utilize existing algorithms, including NSGZero, NSG-NFSP, PretrainPSRO, and Grasper, for performance evaluation. This addresses a critical gap in solving UNSGs, which is why we submitted to the Datasets and Benchmarks track.\\n\\nSince GraphChase aims to provide a unified simulation environment for existing algorithms, we focused on reproduction rather than algorithmic improvements. That is why GraphChase does not show significant improvements over the original algorithms. We optimized the code structure and implementation, resulting in faster wall-clock convergence compared to the original implementations.
This aligns with our objective of achieving comparable performance to the original implementations.\\n\\nConcerning systematic comparisons to the existing literature, we deliberately matched the experimental settings of the original papers to validate GraphChase's ability to reproduce their results. As our primary goal is to demonstrate equivalent performance to the original implementations rather than to surpass them, we believe our current experiments sufficiently validate this objective.\\n\\nOur experiments also show that current algorithms still suffer from performance and scalability issues in real-world settings. This suggests that substantial efforts are still required to develop effective and efficient algorithms for solving real-world UNSGs.\\n\\nGraphChase supports a wide range of equilibrium concepts, including, but not limited to, Nash equilibrium. The platform allows researchers to explore and analyze diverse equilibrium types by integrating their custom solution algorithms. This flexibility distinguishes GraphChase from existing platforms, which typically lack support for such extensibility.\\n\\nWe look forward to the research community utilizing GraphChase as a collaborative tool for advancing UNSG research. Furthermore, we provide more details about why GraphChase is faster and how termination is determined in our responses to your subsequent questions.\\n\\n## Questions:\\n\\n\\n- **To Question 1:** We can detail the specific technical factors contributing to the improved performance:\\n\\n Our platform incorporates several technical enhancements that contribute to its faster wall-clock performance. First, GraphChase is developed based on Gymnasium, replacing the custom class implementations found in the original papers.
This change results in faster simulation and eliminates redundant data copying operations, leading to improved efficiency.\\n\\n Additionally, we have implemented various code optimizations to enhance the platform's performance. These include improved data type conversions, such as using numpy-to-tensor conversions instead of list-to-tensor operations, which reduces processing time. We have also focused on enhancing memory management throughout the platform, resulting in more efficient resource utilization.\\n\\n Currently, different algorithms (such as NSGZero, PretrainPSRO, and Grasper) each implement their simulation environments differently. This requires researchers to rewrite substantial code to adapt their algorithms for comparison with existing methods. When using the original code to test games, we rewrote the game-related code to fit the input data requirements of each algorithm. However, we did not make any modifications to the algorithmic implementations from the original papers.\\n\\n The algorithms implemented in our platform are direct reproductions of the original code provided by the respective papers, with no changes or improvements made to the algorithms themselves. The key difference is that in GraphChase, researchers only need to pass the graph adjacency list as a parameter to the game generation function to enable the application of different algorithms. Therefore, the primary contribution of GraphChase lies in providing a standardized and unified testing environment, addressing the lack of platforms for UNSG research.\\n\\n To address the reviewer's concern about performance improvements, we added a detailed discussion in Appendix E comparing GraphChase's simulation speed with the original implementations.
The results show that GraphChase indeed accelerates both a single episode of simulation and the data-saving process for each algorithm.\\n\\n We hope our reply clarifies your concerns.\"}", "{\"comment\": \"I thank the authors for their response and I maintain my current assessment for the paper.\"}", "{\"comment\": \"## Questions:\\n\\n- **To Question 2:** Thank you for this important observation regarding the termination patterns in Figures 7 and 8. We acknowledge that this requires clarification.\\n\\n For all experiments, including both GraphChase and baseline implementations, we set a large maximum iteration count at the start of training. The earlier termination of the red lines in the figures is a direct result of GraphChase's improved computational efficiency, as explained above, allowing it to complete all iterations more rapidly than the baseline implementations.\\n\\n Regarding the specific cases presented in the first panels of Figures 7 and 8, we acknowledge that the data shown in our original submission only covered the results up to the point of convergence. Data after this point, which would have depicted a more stable convergence curve, was not included. We recognize that this may not have provided a complete view of the algorithms\\u2019 convergence trajectories. To address this, we have updated our results to include a more comprehensive depiction of the convergence process. We apologize for any confusion this omission may have caused and appreciate the opportunity to present more complete and accurate experimental results in Figures 7 and 8.\\n\\n- **To Question 3:** We added experiments based on a $15\\\\times 15$ grid graph and the Singapore map. The $15\\\\times 15$ grid graph has been previously tested in CFR-MIX (Li et al., 2021), NSG-NFSP (Xue et al., 2021) and NSGZero (Xue et al., 2022).
The Singapore map has been tested in NSG-NFSP (Xue et al., 2021), NSGZero (Xue et al., 2022), Pretrained PSRO (Li et al., 2023a) and Grasper (Li et al., 2024). These additional experiments further demonstrate the significance of GraphChase. The experimental results are detailed in Table 3 of the appendix.\"}", "{\"comment\": \"Thank you for your insightful question regarding strategy evaluation methods. We appreciate the opportunity to elaborate on our approach to assessing pursuer strategies and equilibrium computation.\\n\\nOur platform supports the pursuer's **strategy evaluation** through four methods:\\n\\n1. Pseudo Worst-Case Utility\\n\\nThis method evaluates the performance of the pursuer's strategy by first selecting an exit according to the best response and then choosing a path to that exit. Due to the randomness in path selection, it is referred to as the pseudo worst-case utility. \\n\\n2. Worst-Case Utility\\n\\nThis method evaluates the performance of the pursuer's strategy by enumerating every feasible path and using the path with the lowest reward as the worst-case utility. When the number of paths is too vast to enumerate, it falls back on the first method to approximate the worst-case utility.\\n\\n3. Strategy Robustness Testing\\n\\nWe assess strategy robustness by varying the maximum time horizon, which enables a comprehensive examination of the pursuer strategy's performance against diverse evader behaviors. As demonstrated in our experimental setup, this approach provides insights into the strategy's adaptability across different scenario complexities.\\n\\n4. Exploitability Testing\\n\\nThis method systematically evaluates the pursuer strategy's vulnerability by training an adversarial evader strategy using reinforcement learning while keeping the pursuer strategy fixed.\\n\\nRegarding **equilibrium assessment**, our platform supports calculating the NashConv metric to measure convergence.
This metric is calculated as:\\n\\n```math\\n\\text{NashConv} = \\text{pursuer\\_br\\_value} + \\text{evader\\_br\\_value}\\n```\\n\\nwhere `pursuer_br_value` and `evader_br_value` denote the values of the respective best-response strategies. Since we provided the aforementioned methods for strategy evaluation, there are several ways to assess NashConv. Users can utilize the (Pseudo) Worst-Case Utility or various training algorithms to calculate the value of best-response strategies. The detailed usage methods can be found in the [GraphChase repository](https://github.com/GraphChase/GraphChasePlatform.git).\\n\\nThe evaluation method outlined in Section 3.2 of our paper is consistent with prior works on UNSGs, including CFR-MIX (Li et al., 2021), NSG-NFSP (Xue et al., 2021), NSGZero (Xue et al., 2022), Pretrained PSRO (Li et al., 2023a), and Grasper (Li et al., 2024). To ensure a fair and direct comparison with these previous studies, we did not include alternative evaluation methods in the paper.\\n\\nHowever, our platform fully supports the additional evaluation methods mentioned above. These methods have been implemented at the code level, and we have provided detailed example instructions in the usage guide available on the [GraphChase Platform](https://github.com/GraphChase/GraphChasePlatform.git). The guide provides detailed instructions on how users can leverage these various assessment techniques to evaluate strategies and analyze equilibrium characteristics.\\n\\nWe hope this detailed explanation addresses your concerns and provides clarity on our approach to strategy and algorithm evaluation.\"}", "{\"comment\": \"Thank you for your constructive feedback!\\n\\nWe have added experiments based on the Manhattan map. Now we have additional experiments on a variety of scenarios used in the UNSG domain (Xue et al. 2021; 2022; Li et al.
2023a; 2024) in Section 4.2 (details are in Appendix D), which include larger $15\\times 15$ grid structures and real-world maps based on Singapore and Manhattan. The $15\\times 15$ grid network represents a randomly generated network, and the two real-world networks represent different topological structures found in real-world cities. To the best of our knowledge, the Singapore and Manhattan maps represent the most realistic and complex graph structures currently employed in UNSG algorithm research. We believe that our extensive testing on these maps effectively demonstrates GraphChase's capability to address real-world problems.\\n\\nWe have revised the manuscript to more clearly highlight the key contributions of our platform in Sections 1 and 4.2. Specifically, we highlight that algorithms based on GraphChase can recover the performance of the algorithms based on the original code in significantly less time. The modifications have been marked in red for your convenience, and you can review them in the latest version of the paper.\"}", "{\"comment\": \"Thanks for engaging with my comments!\\n\\nRegarding W1, the response essentially reiterates the flexibility of the work (and effectively answers another question). It does not address the core of the comment: the lack of experiments in a variety of scenarios. In my opinion, the authors need to demonstrate that the framework can indeed accommodate the type of experiments that are described, by putting themselves in the \\\"shoes\\\" of the researchers who would use this framework themselves. I am confident that, in trying this, issues, bugs, shortcomings of functionality, etc. will be found. What is currently there seems more of a vision than an actual framework that is ready to use.
It is inaccurate, in my opinion, to describe it as such without demonstrating a wide range of experiments. I am therefore keeping my original recommendation: I strongly believe the \\\"framework\\\" claims are not supported without demonstrating it can be used as such. The hard work of putting together the problems and implementations has already been done. I would encourage the authors to revise the paper to include such experiments and resubmit to another venue in the future. The reviews should provide encouragement that this is worth pursuing, but the work is not ready for publication at this time in my opinion.\"}", "{\"comment\": \"**To C3:** In team adversarial games, TMECom is the NE discussed in Section 2. We have provided explanations in the relevant sections of the paper to prevent any misunderstandings for the readers.\\n\\n**To C4:** Our platform is designed to be format-agnostic when it comes to graph data input. As long as users can obtain the adjacency list representation of their graph, regardless of the original format (e.g., GraphML, XML, or other standard graph formats), they can easily integrate it into our platform. Specifically, users only need to pass the adjacency list as a parameter to our platform's game generation function, enabling them to customize their own games. To improve the clarity of our paper, we added a detailed explanation of supported data formats in Section 3.1. \\n\\n**To C5:** Our platform incorporates several technical enhancements that contribute to its faster wall-clock performance. First, we have adopted Gymnasium for game simulation, replacing the custom class implementations found in the original papers. This change results in faster simulation and eliminates redundant data copying operations, leading to improved efficiency.\\n\\nAdditionally, we have implemented various code optimizations to enhance the platform's performance.
These include improved data type conversions, such as using numpy-to-tensor conversions instead of list-to-tensor operations, which reduces processing time. We have also focused on enhancing memory management throughout the platform, resulting in more efficient resource utilization.\\n\\nTo address readers' concerns about performance improvements, we added a detailed discussion in Appendix E comparing GraphChase's simulation speed with the original implementations.\\n\\n**To C6:** We agree that using \\\"pursuer\\\" and \\\"evader\\\" better reflects the abstract nature of our mathematical model and helps avoid potential misinterpretations about real-world applications. We have revised the paper as follows:\\n\\n1. Use \\\"pursuer\\\" and \\\"evader\\\" as the primary terminology throughout the theoretical discussions and model descriptions;\\n2. Maintain references to \\\"police officers\\\" and \\\"criminals\\\" only in specific illustrative examples where concrete scenarios help readers understand the practical applications of the abstract model.\"}", "{\"title\": \"Thank you\", \"comment\": \"I am happy with the author responses.\"}", "{\"comment\": \"## Weaknesses:\\n\\n**To Weakness 3:** We would like to clarify the scope and positioning of UNSGs within the pursuit-evasion games domain.\\n\\n While pursuit-evasion games encompass various scenarios, UNSGs represent a specific subset with distinct characteristics. UNSGs are specifically focused on scenarios with discrete observation and action spaces and pursuit within a finite time horizon. In contrast, existing pursuit-evasion game benchmarks like Avalon, mentioned in our related work, are designed to simulate biological survival skills (from basic actions like eating to complex behaviors like hunting and navigation). In the revised version, we added more details about the existing platforms SIMPE and Avalon, and highlighted the differences between them and our platform.
Their problem settings do not belong to UNSGs.\\n\\n Our related work discussion (L437-460) examines various pursuit-evasion scenarios that can be modeled as UNSGs. These works, while valuable, do not provide a platform for UNSGs, so we did not conduct experimental comparisons between them and GraphChase.\\n\\n Regarding existing benchmarks (discussed in L462-478), we provide a brief overview of these benchmarks and explain why they are not suitable for UNSGs. For instance, MARBLER primarily simulates real-world robot behavior, while SIMPE operates in continuous state space and imposes restrictions on the behavior patterns of both pursuers and evaders. These limitations make existing benchmarks unsuitable for studying UNSGs. \\n\\n The motivation behind developing GraphChase is precisely to address this gap by providing a platform and benchmark tailored for research on UNSGs. Thank you for your suggestion. We added more comparisons in the related work to clarify the differences between GraphChase and the current platforms for pursuit-evasion games, which we believe will aid in the readers' understanding.\\n\\n**To Weakness 4:** We appreciate the reviewer's comment about platform usability and interaction documentation.\\n\\n We deliberately chose Gymnasium as the foundational framework for GraphChase because it is widely recognized and utilized in the DRL community. This design choice significantly reduces the learning curve for researchers who are already familiar with Gymnasium's interfaces and conventions.\\n\\n Regarding platform usage and interaction details, we have provided comprehensive documentation and examples in our GitHub repository. We added a brief introduction about how to use our platform to Appendix F.\\n\\n Thank you for bringing this to our attention. These additions will make the paper more complete and useful for potential users of our platform.\"}", "{\"comment\": \"## Questions:\\n\\n1. 
**To Question 1:** Let us explain how we arrived at the probability of 0.5 under the Nash equilibrium:\\n\\n On the left side of Figure 5, the evader has four potential exit nodes. The calculation considers several key factors:\", \"exit_node_accessibility\": [\"Two exit nodes (bottom-left and bottom-right) have the shortest paths longer than 3 steps for the evader\", \"These same exits can be reached by pursuers in exactly 3 steps\", \"Due to rational decision-making, the evader will avoid these exits as capture would be certain\"], \"viable_exit_options\": [\"This leaves two viable exit nodes for the evader\", \"Only the pursuer in the top-left corner can reach either of these exits within 3 steps\", \"Both exits have equal utility for both the evader and the pursuer\"], \"probability_calculation\": \"- The evader rationally chooses between the two viable exits with equal probability (1/2 for each)\\n - For each exit, the probability of being caught depends on the pursuer choosing the same exit (1/2)\\n - The total probability is calculated as: 2 exits \\u00d7 (1/2 probability of choosing each exit) \\u00d7 (1/2 probability of pursuer choosing the same exit) = 0.5\\n\\n Hope our reply will clarify your concerns.\\n\\n2. **To Question 2:** Our platform incorporates several technical enhancements that contribute to its faster performance in terms of the wall-clock time. First, we have adopted the Gymnasium for game simulation, replacing the custom class implementations found in the original papers. This change results in faster simulation processes and eliminates redundant data copying operations, leading to improved efficiency.\\n\\n Additionally, we have implemented various code optimizations to enhance the platform's performance. These include improved data type conversions, such as using numpy-to-tensor conversions instead of list-to-tensor operations, which reduces processing time. 
We have also focused on enhancing memory management throughout the platform, resulting in more efficient resource utilization.\\n\\n From the perspective of wall-clock time, this indeed accelerates the convergence speed. However, it's crucial to note that in terms of the number of training iterations required for convergence, there is no significant improvement. For instance, if the original code necessitates sampling $10^4$ episodes to initiate convergence, our platform's reproduced algorithms similarly require approximately the same number of training iterations. This consistency in training iterations is attributable to the fact that we have not altered the underlying algorithms themselves.\", \"this_distinction_highlights_an_important_nuance\": \"while GraphChase offers improved computational efficiency in the environment, it does not change the sample efficiency or learning process of the implemented algorithms. Our platform's primary contribution lies in providing a more efficient simulation environment, rather than enhancing the algorithmic performance itself.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper introduces GraphChase, an open-source platform for Urban Network Security Games (UNSGs). UNSGs model complex scenarios involving allocating limited security resources in urban environments and balancing cooperative and adversarial interactions between multiple agents. The authors aim to provide a unified platform to support researchers in developing, testing, and benchmarking algorithms for UNSGs. Experimental results demonstrate that GraphChase improves computational efficiency compared to baseline implementations and supports a wide range of game configurations.\\n\\nReviewers appreciated the platform's potential to address a significant gap in UNSG research by offering a standardized environment for algorithm development and evaluation. 
However, they expressed concerns about the paper's readiness for publication due to limitations in experiments, insufficient clarity in contributions, and the need for a broader evaluation of the platform's versatility.\\n\\nReviewer HG6y emphasized the need for more systematic experiments in complex environments. Reviewer eAes appreciated GraphChase but pointed out the limited scope of experiments and unclear presentation of related work. Reviewer FH55 raised similar concerns about the lack of experimental depth and noted that the current presentation sometimes reads more as an advertisement than objective scientific writing. Reviewer Wrzs found the platform's extensibility promising but suggested further comparisons with related works and additional implementation details to strengthen the contribution.\\n\\nDuring the discussion, the authors provided additional experiments on real-world graph structures, including Singapore and Manhattan maps, and clarified several technical details. These revisions partially addressed the reviewers' concerns, but a consensus emerged that the paper requires further polishing. Specifically, the reviewers agreed that demonstrating broader experiments, improving clarity, and conducting more systematic evaluations of GraphChase\\u2019s capabilities would significantly strengthen the work.\\n\\nGiven these considerations, the reviewers recommend a rejection at this stage. However, they acknowledge the importance of the problem and the potential of the proposed platform. The authors are encouraged to address the concerns raised and resubmit the paper to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"The authors and the Reviewers engaged in productive discussion, which led to improvements to the manuscript.\\n\\nReviewers agreed that more comprehensive experiments, particularly in complex and realistic environments, are necessary to validate the platform's utility. 
The authors responded with additional experiments on the Singapore and Manhattan maps. Reviewer HG6y acknowledged these efforts, stating, \\u201cThe new experiments in the Singapore map...strengthen the paper,\\u201d but insisted that \\u201cmore systematic experiments in more complex environments...and a more careful presentation perspective to highlight the key contributions of the platform...are required.\\u201d Similarly, reviewer eAes appreciated the new details. Still, they emphasized that the platform's potential to \\u201csupport the implementation of various evaluation methods for pursuers' strategies, as well as the computation and assessment of the overall equilibrium\\u201d could have been further substantiated.\\n\\nFinally, Reviewer Wrzs highlighted the need for detailed comparisons with existing platforms like SIMPE and Avalon (\\\"The comparison to other works in this area...must be expanded. What extra does this work offer over SIMPE and Avalon?\\u201d).\"}", "{\"comment\": \"## Questions:\\n\\n3. **To Question 3:** The extensive training times are primarily due to the limitations of the current algorithms rather than platform inefficiencies. Let me elaborate on some factors contributing to these computational demands:\\n\\n - Grasper and Pretrained PSRO: These algorithms require extensive training to generate the best responses in each iteration. Our platform's current implementations require $10^5$ episode simulations (same as the original paper) to gather sufficient data for RL training. This sampling requirement is inherent to the PSRO methodology.\\n\\n - NSGZero:\\n Similar to AlphaGo's approach, it employs Monte Carlo Tree Search, which requires extensive tree exploration to generate effective policies. The computational intensity is intrinsic to the search-based nature of the algorithm.\\n\\n Other implemented algorithms similarly require substantial data sampling for effective training. 
\\n\\n We also note that the significant training time requirements, even for relatively small games (e.g., $5\\\\times 5$ grid), represent a broader challenge in the field of UNSGs. Actually, it is one of the reasons that we develop the GraphChase platform.\\n\\n In addition, our new Appendix E shows that our platform GraphChase runs faster than existing implemented environments.\\n\\n We aim to provide researchers with a standardized environment for algorithm development and testing. We hope that our platform can enable researchers to focus on algorithmic design that could potentially reduce these training times and improve computational efficiency, rather than spending time on environment implementation details.\\n\\n4. **To Question 4:** The simplified assumptions about criminal strategies primarily serve to facilitate convergence analysis due to the hardness of computing the evader's best response. Modeling both criminals and pursuers as learning agents presents a practical challenge: As demonstrated in NSG-NFSP (Xue et al., 2021), when criminals are modeled as learning agents, they tend to spend considerable time exploring invalid paths (those not leading to targets) before discovering optimal strategies. This results in:\\n - Training inefficiency due to pursuers learning against suboptimal opponents\\n - System instability during the learning process\\n\\n These findings have influenced subsequent works, including NSGZero, Pretrained PSRO, and Grasper, to adopt similar modeling approaches for criminal strategies. That's the reason that our current testing algorithms follow the same settings.\\n\\n However, our platform fully supports the implementation of criminals as learning agents, a feature that is not supported in the original papers. Researchers can integrate their own training algorithm and criminal agent designs into our simulator. 
We truly hope that our GraphChase is a useful tool for advancing UNSG research by enabling the development and testing of more sophisticated criminal strategy models.\"}", "{\"summary\": \"This paper considers Urban Network Security Games (UNSGs), a game-theoretic model of a scenario in which pursuers seek to catch some evaders. The game takes place on a graph. Agents are positioned at certain nodes and can move to nodes in the neighbourhood by taking an action, and evaders can exit at a pre-specified set of nodes. The work implements a set of environments corresponding to variations in this family of games, as well as a set of algorithms that can be applied to them. Some benchmarks are shown that demonstrate that the proposed implementations are faster than those in the original papers.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. The basic idea behind the work (proposing a framework for developing learning algorithms for a class of multiplayer network games) is sensible and worthwhile.\\n\\nS2. The work is technically sound.\", \"weaknesses\": \"W1. The presented benchmarks are very thin: the only aspect they demonstrate is that the authors' implementations are faster than the original codebases. In my opinion, this is a missed opportunity to showcase the type of experiments that are enabled by the existence of this framework. How do the different algorithms perform on the different game variants (e.g. pursuers able communicate versus not; the different observability of the locations as in cases i-iv described in Section 2.2; different types of graph topologies as you only consider grids). Without this type of analysis, the contribution of the paper is lacking. Including it would validate the argument that your framework would be helpful in carrying out algorithmic research.\\n\\nW2. The paper reads more like an advertisement for the framework of the authors rather than objective scientific writing. 
The framework is self-described as \\\"advanced\\\", \\\"pivotal\\\", etc. We also get sentences that seem to suggest the design is somehow highly innovative such as \\\"[...] providing a seamless flow of information and actions across the system. This modular approach not only enhances the adaptability of the platform to different research demands and scenarios but also supports the integration of various algorithmic strategies\\\". In my opinion, the design contains just about every component you would expect for a multiplayer game environment and is standard.\", \"questions\": \"C1. Could you define more precisely in Section 2.1 what it means for the pursuer to \\\"catch\\\" the evader, as it seems this is currently missing? Presumably, the evader is caught if both the pursuer and evader are located at the same node, and the node is not an exit node?\\n\\nC2. L131: $E_{exit}$ or preferably $E_\\\\text{exit}$? $\\\\mathcal{N}(v)$ denotes either the *set of neighbours* or the *neighborhood*; L173: $\\\\mathbb{R}$ instead of $R$?\\n\\nC3. In Section 2.2 you say that NE is adopted as solution concept, but Section 6 mentions that a TMECom is computed in your framework. Section 2.2 should be updated with clarifications.\\n\\nC4. For the \\\"Game Module\\\" component (3.1), you should specify what formats for graph data are supported e.g. graphml, xml, adjacency list, etc.\\n\\nC5. Regarding the presented benchmarks, could you specify *why* your implementations perform faster? What is the core insight or optimization you did that enables this? This is not discussed.\\n\\nC6. This is more of a preference, but I would suggest sticking to the \\\"pursuer\\\" and \\\"evader\\\" terminology throughout the paper. The work keeps intermittently referring to police officers and criminals, but this is a highly abstracted model, and while it is inspired by a real-world scenario, it is very far from capturing real-world complexity. 
Applying this terminology can also help to more clearly signal that the paper does not raise any ethical or fairness concerns (I don't think it does).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed response.\\n\\nWith respect to strategy or algorithm evaluation, does your platform support the implementation of various evaluation methods for pursuers' strategies, as well as the computation and assessment of the overall equilibrium or strategy profile? If so, could these processes be further illustrated and explained in the paper or accompanying documentation?\"}", "{\"comment\": \"Thank you for your valuable feedback!\\n\\nWe have expanded our experimental validation by incorporating a variety of scenarios used in the UNSG domain (Xue et al. 2021; 2022; Li et al. 2023a; 2024) in Section 4.2 (details are in Appendix D), which include larger $15\\\\times 15$ grid structures and real-world maps based on Singapore and Manhattan. The $15\\\\times 15$ grid network represents the randomly generated network, and two real-world networks represent different topological structures in real-world cities. To our knowledge, the Singapore and Manhattan maps represent the most realistic and complex graph structures currently used in the UNSG algorithm research. We believe that our extensive testing on Singapore and Manhattan maps demonstrates GraphChase's capability to handle real-world problems.\\n\\nRegarding your point about testing communication between agents, to the best of our knowledge, no existing learning algorithms for solving UNSGs address this specific aspect. This limits our ability to conduct comparative experiments in this variant. However, GraphChase provides a standardized testing platform for researchers interested in exploring such scenarios. 
We hope that GraphChase will be a foundation to advance future UNSG research.\"}", "{\"summary\": \"This is a benchmarks paper and the authors have chosen the correct primary area for this work.\\nThe work proposes an environment (or simulator) for urban security game, with various parameters that can be tuned for changing game parameters. This is a well-studied multi-player (>2) problem. The authors also evaluate many known algorithms in this environment.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The work is an important step towards benchmarking performance of the different algorithms and evaluating them on the same setup.\\n2) Benchmarking in multi-agent systems is lacking, so this is a timely piece of work.\", \"weaknesses\": \"1) There should a list/table of parameters that can be controlled. Right now, it is all in text and getting missed.\\n2) The paper mentioned NE many times, but the games seem stochastic form, and probably extensive form. There have Markov equilibrium and Subgame perfect equilibrium. It is not clear to the reviewer if the framework can handle all of these - as this is just a simulator, it is probably ok.\\n3) Any simulator must deal with question of realism. The authors acknowledge the abstract nature of the simulator model itself. But, then the important question is if the simulator is extendible easily to handle other aspects if some researcher wants to add extra nuances of the real world. How is this handled, please explain?\\n4) The comparison to other works in this area---in the design of environment and simulator---must be expanded. What extra does this work offer over SIMPE and Avalon?\", \"questions\": \"Please respond to questions in weakness. 
Note that while I am positive, my overall view is still dependent on the questions I raised, so please answer these in details.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DjEyXTbEpa
Machine Reinforced Perturbation on Drifted Human Logical Reasoning
[ "Songlin Xu", "Xinyu Zhang" ]
Using deep neural networks as computational models to simulate cognitive process can provide key insights into human behavioral dynamics. This enables synthetic data generation to test hypotheses for neuroscience and guides adaptive interventions for cognitive regulation. Challenges arise when environments are highly dynamic, obscuring stimulus-behavior relationships. However, the majority of current research focuses on simulating human cognitive behaviors under ideal conditions, neglecting the influence of environmental disturbances. We propose ReactiveAgent, integrating drift-diffusion with deep reinforcement learning to simulate granular effects of dynamic environmental stimuli on human logical reasoning process. This framework is built and evaluated upon our contributed large dataset of 21,157 logical responses of humans under various dynamic stimuli. Quantitatively, the framework improves cognition modelling by considering temporal effect of environmental stimuli on logical reasoning and captures both subject-specific and stimuli-specific behavioural differences. Qualitatively, it captures general trends in human logical reasoning under stress, better than baselines. Our approach is extensible to examining diverse environmental influences on cognitive behaviors. Overall, it demonstrates a powerful, data-driven methodology to simulate, align with, and understand the vagaries of human logical reasoning in dynamic contexts.
[ "Human Logical Reasoning", "Deep Reinforcement Learning", "Cognitive Model" ]
https://openreview.net/pdf?id=DjEyXTbEpa
https://openreview.net/forum?id=DjEyXTbEpa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pYeMxgxzCS", "fq2FrbkISh", "fFrnGMg2E1", "OoKcx9AYHc", "GmXZHp3zsx", "FZHmeIFHhL", "8l6St84dpE" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730721076700, 1731207214716, 1731193053500, 1731190647508, 1737591539328, 1730715526493, 1731292458370 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4842/Reviewer_RwVU" ], [ "ICLR.cc/2025/Conference/Submission4842/Reviewer_faFY" ], [ "ICLR.cc/2025/Conference/Submission4842/Reviewer_E3gS" ], [ "ICLR.cc/2025/Conference/Submission4842/Reviewer_Eu9S" ], [ "ICLR.cc/2025/Conference/Submission4842/Authors" ], [ "ICLR.cc/2025/Conference/Submission4842/Reviewer_Rdw8" ], [ "ICLR.cc/2025/Conference/Submission4842/Reviewer_KmtJ" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes to model the effects of stress-inducing stimuli on human response time in a mathematical reasoning task. The task consist of a math problem (answering whether two number are equivalent modulo a third), and the stimulus is a progress bar creating time pressure. The authors collect a large dataset of human response times under various applications of this stimulus, and train and evaluate their proposed model on it.\\n\\nThe model consists of SVMs predicting response time and choice under perfect conditions (no stimulus) from an embedding of the math task. A drift diffusion model (DDM) is then used to predict response time under the stimulus from this. The DDM\\u2019s evidence accumulation rate is modulated by an RL policy conditioned on the stimulus. The RL policy is trained to steer the DDM such that it replicated the response rates observed in the training data.\\n\\nThe proposed model is compared to a large set of baselines. 
The authors also find that use of the DDM in predicting the response time is instrumental.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The paper tackles an important problem: reasoning and choice under time pressure. The model it contributes can unlock new insights into human behavior in these conditions, and could potentially be used to create new interventions to mitigate the effects of these stimuli.\\n\\nI appreciate that this paper moves away from ideal conditions and towards dynamic environmental stimuli. I also like that the task considered is simple but realistic. In this light, the participant data that was collected is certainly valuable.\\n\\nThe experiments presented appear rigorous (although I sometimes miss statistical significance tests) and are well-documented in the main paper and in the extensive appendices.\", \"weaknesses\": \"What I am missing is a more explicit hypothesis regarding how the stimulus considered here affects human choice, and to see that hypothesis reflected in the modelling decisions. Right now the paper reads very much like a statistical modelling paper, which arranges a number of existing elements into a new ML model that achieves better performance than baselines, but does not uncover any significant insight into human cognition.\\n\\nFurther, Table 3 suggests that the proposed model\\u2019s performance is within error of a standard Transformer model (line 1386). Why was this baseline not included in the main text? Is there any reason why it is not an equally good model?\\n\\nFinally, the model is evaluated on only one task. This is acknowledged by authors, and I do understand that this is common practice in the field. Nevertheless, it limits the evidence we have for the proposed model.\", \"questions\": \"For the pure DRL agent, wouldn\\u2019t the problem it is solving be more accurately described as a contextual bandit problem? 
The actions it takes (predicting perturbation to baseline response time) do not appear to affect the next state (the next math problem), unless I am missing something?\\n\\nIn section A.6.4, it is unclear to me how exactly the action affects the rate of evidence accumulation. I do not see the action $a$ in any of the formulas there.\\n\\nIs the difference in MAPE between the Hybrid DRL and Pure DRL agents statistically significant?\\n\\nYou found that the block number influenced users\\u2019 performance, and therefore included the question ID as an additional feature for response time and choice prediction. Why was this important? Does it help to model (implicitly) things like participants tiring out (hence slipping attention) or learning the task (hence decreasing response time)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors introduce a novel hybrid approach for modeling of human cognitive process by incorporating the drift-diffusion model (DDM) with RL agent. The authors evaluate their idea on the task with modular arithmetic. The proposed approach extracts features using a pre-trained LSTM model on synthetic data, after that SVM model is used to predict response time and correctness of the response. On top of that, they train an RL agent that extracts visual information using CNN model and obtains reward from a DDM model.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The authors provide an open dataset for future investigation and research.\\n2. The authors introduce a novel method for incorporating RL into modeling human reasoning.\\n3. The authors provide a variety of baseline models that were used in experiments.\", \"weaknesses\": \"1. The entire problem applied only to modular arithmetic which might not exactly be a full baseline for the method. 
In general, extending to more tasks would highlight the method's applicability and generalization.\\n2. No connection to learning user state? LSTM models are great tools for learning underlying user states as shown in DKT [https://stanford.edu/~cpiech/bio/papers/deepKnowledgeTracing.pdf]. I would expect to see also some of the user information fed directly to the model rather than just using it as a proxy.\\n3. Is the model really interpretable? I still have doubts that by adding an RL agent and an LSTM (which are both not interpretable) we can give an assessment of cognitive behavior. The entire section 5.6 does not provide insight into human/agent decisions. I would suggest adding more discussion of the effect of LSTM and RL agents on causal connections.\", \"questions\": \"1. Does the neuron in LSTM refer to the size of the hidden state?\\n1.1 Does line 352 refer to the last layer of LSTM or the hidden state size?\\n1.2 I would suggest replacing training loss/accuracy with validation loss/accuracy, since 100% accuracy might also indicate overfitting for LSTM in this scenario. \\n2. In line 947, the agent should output 2 instead of 3? 26 mod 4 is 2.\\n3. Does the model generalize on tasks of different lengths, i.e., can we model 4360==870mod(6)? I do understand that it would be hard for the person to answer such a problem in real-time, but it might be beneficial for model performance.\\n4. What exactly is meant by \\\"the block number\\\" on line 1007?\\n5. How does including question id in training SVM affect the performance of the model? I'm not sure that I correctly understand the point of adding question id as a feature since it might cause a common leak problem, which just makes SVM remember the mapping between feature id and the correctness/time response of the user. \\n6. How does dataset size affect the performance of the model? \\n8. Why can't we add user features to lstm for training and output user features for better representation? 
I.e., why can't we directly train lstm on user data?\\n9. Does the model take into account that with time users get more familiar with the problem and their response time also changes?\\n10. Can you clarify general level splitting? Is it performed as a time-based sequence train-test split? [https://scikit-learn.org/dev/modules/generated/sklearn.model_selection.TimeSeriesSplit.html\\n11. I would argue about interpretability since the approach requires a \\\"black-box\\\" LSTM model. Same as with the RL agent, it can not be directly interpretable since it is basically an MLP trained with PPO without any constraints. It would be interesting to see more discussion on interpretability of such an approach. \\n12. Typo: For time pressurem, line 1039\\n13. Could authors clarify the action space for the hybrid model and the total final estimation of time? What is the output of the hybrid model, i.e., the last layer of the MLP?\\n14. What are R_p and S_p on line 1147\\n15. How does the model perform in long-term evaluation of user prediction and response time?\", \"comments\": \"1. I would refer here to [https://arxiv.org/pdf/2207.02098] as seen in Table 2, LSTM tends to fail at Modular Arithmetic since it corresponds to DCF grammar.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the challenges of using deep learning to simulate cognitive processes in the context of dynamic environments, where environmental disturbances are often neglected or ideal conditions assumed. Specifically, the paper presents a dataset capturing human logical reasoning in response to temporally variable environmental stimuli and formulates ReactiveAgent to simulate such phenomena at fine granularity. 
The work improves modeling of human logical reasoning under stress and is extensible to analyze other environmentally-influenced cognitive behaviors.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses an important gap in current approaches to computationally model human cognitive behaviors, namely the ability to analyze nuances of how dynamic environmental factors impact human cognition.\", \"The paper uniquely builds upon the benefits of DDM and DRL to develop a more flexible and interpretable cognitive modeling approach.\", \"Experiments are sound, with reasonable metrics selected to assess performance. Ablation studies are comprehensive to verify each portion of the framework. Alignment of predicted with actual human response times is high vs. baseline methods.\"], \"weaknesses\": \"The paper focuses on a single math arithmetic task to demonstrate the efficacy of the proposed framework. This is reasonable and gives good results, but a second setting would be helpful to demonstrate the generalizability of the approach.\\n\\nPlease see additional considerations in the questions below.\", \"questions\": [\"Questions\", \"Table 1 Clarification: For response time predictions represented in Table 1, was this done on a per-individual basis or per-group basis? Additionally, did predictions capture absolute response time or the difference between baseline and time-pressured response time? Baseline measurement is discussed in the main and supplementary texts but it appears unclear here. If predictions capture raw response times would perceived task difficulty and mental fatigue/attention not be confounders?\", \"For the curated dataset, how was the validity of responses assessed? 
Specifically, what determined whether a response time sample was valid/invalid?\", \"What is the rationale for using MAPE versus correlation to assess alignment between predicted and real response times?\", \"How does the accuracy of the logical reasoning agent impact the efficacy of the proposed framework at modeling human performance?\", \"How would one extend ReactiveAgent to model human cognitive performance in more than one task at once?\", \"Suggestions\", \"Figure 1: The name of the method, ReactiveAgent, is not mentioned. It could help to increase the font size of the label for each of the four stages or move the labels to under the rectangular bounding boxes.\", \"[Minor] Line 97-98: \\u201cproposed\\u201d is in past tense vs. present tense for the following two contribution bullet points.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present ReactiveAgent which is an interesting computational framework designed to simulate the impact of dynamic environmental stimuli, such as time pressure, on human logical reasoning processes. They integrate the drift-diffusion model with deep reinforcement learning in order to model the effect of dynamic stimuli on evidence accumulation during decision-making tasks. ReactiveAgent is evaluated on a dataset of 21157 human responses to math tasks under various time-pressure conditions which is claimed to demonstrate improved accuracy, interpretability, and efficiency compared to baseline models. The authors discuss how the framework has potential to provide insights into human cognitive behaviors under stress and that this work could contribute to the development of adaptive interventions and neuroscience research.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Open-source dataset with 21,157 logical reasoning responses from humans\\n2. 
Shows better simulation accuracy in predicting human response times compared to several baseline models, meaning the framework captures the effects of dynamic stimuli on cognitive processes more accurately\\n3. The framework is flexible and could be adapted to study other types of environmental influences on cognitive behaviors beyond time pressure, suggesting it could be applied broadly in cognitive science research\\n4. Includes comprehensive ablation studies that evaluate the importance of each component in their framework, like the math logical reasoning agent and the integration of the DDM into the DRL agent\", \"weaknesses\": \"1. While the introduction outlines the methodology and objectives of the study, it lacks a compelling explanation of why this research is significant and what specific impact it could have on the field of cognitive modeling or neuroscience. The only mentions of potential impact are generic statements about providing key insights into human behavioral dynamics and informing the design of feedback mechanisms to augment cognition. This vagueness makes it difficult to understand the broader implications or novelty of the work.\\n\\n---- [Abstract] Using deep neural networks as computational models to simulate cognitive process can provide key insights into human behavioral dynamics. \\n\\n---- [Intro] ...the effects of environmental dynamics (e.g., stress (Cheng (2017)) and feedback (Costa et al. (2019))) on cognitive performance could elucidate behavioral responses to tasks (Cheng (2017)) and inform the design of feedback mechanisms to augment cognition\\n\\n**Recommendation for Improvement**: The authors should clearly articulate the specific contributions and potential impact of their work. This could involve highlighting how their hybrid framework advances current modeling techniques, addresses existing limitations, or opens new avenues for research. 
Providing concrete examples of applications or how this model could influence future studies would strengthen the paper's significance.\\n\\n2. The core of the paper is the integration of the drift-diffusion model with deep reinforcement learning to simulate the impact of dynamic stimuli on human logical reasoning. However, the paper does not provide enough analysis of how the modulations introduced by the DRL agent align with established cognitive mechanisms or theories of human reasoning under stress. There is a gap in demonstrating that the DRL agent's modulation of evidence accumulation trajectories corresponds to known patterns of human cognitive processing under time pressure.\\n\\n**Recommendation for Improvement**: The authors should conduct a deeper analysis comparing the DRL-induced modulations with empirical data or established theories on human cognitive mechanisms under stress. Discussing how the DRL agent's behavior mirrors or diverges from human cognitive strategies under time pressure would provide good insights.\\n\\n3. The study exclusively focuses on time pressure as the external stimulus affecting cognitive performance. While time pressure is a significant stressor, cognitive stress can also arise from factors like multitasking, uncertainty, or social pressures. The paper does not explore whether the proposed framework can generalize to other forms of cognitive stress or external stimuli.\\n\\n**Recommendation for Improvement**: To improve the generalizability of the framework, the authors should consider incorporating other types of cognitive stress into their experiments or at least discuss how the model might be adapted for different stressors. This could involve designing tasks that introduce emotional distractions or require multitasking, and then evaluating the model's performance under these new conditions.\\n\\n4. Figure 2 top row is hard to interpret. Please create a more interpretable graph.\\n\\n5. 
This study uses a math arithmetic task to assess human logical reasoning under time pressure. Relying on a single type of task may limit the applicability of the results to other domains of cognitive function. Different cognitive tasks engage various neural circuits and cognitive processes, so a model that performs well in math reasoning may not necessarily translate to language comprehension, spatial navigation, or memory tasks.\\n\\n**Recommendation for Improvement**: The authors should evaluate their framework on perhaps 1 additional cognitive task to demonstrate its robustness and general applicability. If extending the study to additional tasks is not feasible within the current timeline, the authors should acknowledge this limitation and propose it as an avenue for future research.\\n\\n6. Incorporating qualitative analyses or case studies could improve the evaluation in the main text or appendix. For example, examining specific instances where the model successfully predicts human behavior under stress or fails to do so could demonstrate the underlying mechanisms. Including discussions on how the model's behavior aligns with psychological theories of stress and decision-making would improve this work.\", \"questions\": \"1. Can you provide a clearer explanation of why your work is important and what specific impact it could have on cognitive modeling or neuroscience?\\n2. How does the DRL agent's modulation of the evidence accumulation process align with known human cognitive mechanisms under stress? Can you provide more analysis or evidence supporting this alignment?\\n3. Can your framework be tested on other cognitive tasks besides math reasoning to demonstrate its robustness? Do you have plans to evaluate it on tasks involving memory, language comprehension, or spatial reasoning?\\n4. Have you considered potential confounding factors like individual differences in stress tolerance, math proficiency, or fatigue? 
Discussing how these might affect your results would strengthen the validity of your findings.\\n5. What are your plans for future work? Do you intend to test your model with different tasks or incorporate other forms of stimuli?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduce \\\"ReactiveAgent,\\\" a hybrid framework integrating deep reinforcement learning (DRL) with the drift-diffusion model (DDM) to simulate the impact of dynamic environmental stimuli on human logical reasoning. The model aims to provide a more granular simulation of human cognition under dynamic conditions, particularly under time pressure.\\n\\n ReactiveAgent leverages a large dataset of human logical responses and demonstrates improved accuracy and interpretability compared to existing cognitive models.\\n\\n\\nThe paper addresses the challenge of modeling human cognitive behavior under dynamic environmental stimuli. While existing research typically focuses on modeling cognition under ideal conditions, this work aims to simulate how environmental factors (like time pressure and stress) affect human logical reasoning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors contributed a large dataset of human logical reasoning responses under dynamic stimuli. 
Open sourcing the dataset would allow future researchers to leverage the data for other cognitive studies\", \"better_performance\": \"ReactiveAgent achieves lower Mean Average Percentage Error (MAPE) compared to baseline models\", \"faster_training\": \"Hybrid approach converges ~10x faster than pure DRL\", \"better_interpretability\": \"Can generate and analyze trajectories of time pressure effects\", \"captures_group_differences\": \"Successfully models different responses across experimental groups\", \"weaknesses\": \"1.Single Task Limitation: The evaluation is based solely on a math arithmetic task, which may limit the generalizability of the conclusions. Cognitive processes vary significantly across different types of tasks, and more diverse tasks would help in validating whether ReactiveAgent's approach is broadly applicable to human cognition beyond arithmetic reasoning.\\n\\n2. Complexity of Model Components: The integration of multiple components (LSTM, DRL, DDM, etc.) adds considerable complexity to the framework. While the paper strives to explain these interactions, there is still a risk that such complexity may lead to challenges in replicability or practical implementation.\", \"questions\": \"How well does the proposed framework generalize to other types of cognitive tasks beyond the arithmetic task presented? Could the authors provide further clarification or potentially some preliminary experiments to address generalizability?\\n\\nHow sensitive is the framework to the hyperparameter choices, particularly those in the DRL agent and the drift-diffusion model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to discover a model capturing how time pressure affects logical reasoning by predicting answers and reaction times by humans in a mathematical logical reasoning problem. 
The authors introduce a hybrid model combining task-driven deep learning, deep RL, and a classic cognitive model (DDM) designed to capture the dynamics of decision-making under uncertainty. The authors demonstrate that this model attains higher quantitative scores than some other approaches. The authors also introduce a dataset consisting of 44 subjects performing this task. They also argue that the approach is interpretable, permitting interrogation of the classic cognitive model's components.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The work is well motivated, as predicting human behavior in complex reasoning problems is an important and exciting application of AI.\\n\\nThe hybrid data-driven approach seems promising for this type of task. Particularly given the constraints of the dataset, using features learned from a model trained on a task, adapting them based on behavior, and incorporating a classic model with an interrogatable form seems like a good approach for this problem. \\n\\nThe open-sourcing of the dataset is a very positive contribution.\", \"weaknesses\": \"The approach appears to quantitatively outperform others to which they compare it, although it is unclear how these baselines are motivated -- what drives the choices in Table 1?\\n\\nFurthermore, the improvement is hard to interpret. The mean score of 0.2999 is better than for the other models. However, the standard deviation and lower/upper bounds of the range are quite large, and it's not clear if this difference is significant. This is also the case for the quantitative results per figure.\\n\\nIt also would be helpful to see what scores look like at chance. I realize there is no \\\"chance\\\" for RTs, but recommend performing a permutation test (randomly permute choices with respect to input problems) and report the scores. 
Otherwise, it's not clear what magnitude of improvement the current method gives (and it's not clear how strong the baselines are, or whether the data is sufficient here to support a data-driven approach at all). \\n\\nIs there a classic cognitive modeling approach that has been typically applied to this problem, or is there one suggested by the heuristics that have been documented? I realize the DDM is part of this, but it seems that can't be applied naively absent the LSTM/DRL/SVM components. This would also be helpful for assessing the contribution.\\n\\nThe model seems a bit complex, and it's not clear the extent to which each component is justified or what each component is uniquely responsible for. For example, there are multiple stages that adapt the model to the data (SVM+DRL).\\n\\nModel ablations are performed, but they are incomplete. For example, it also appears there is no ablation that includes giving all the information that was used to train the LSTM to the SVM model -- the numbers were given, but not what the correct answer is. This is important because it's possible the additional information conferred by the LSTM is information about what the target is going to be. Another ablation might use the output logits of the LSTM rather than the feature layer, as it seems likely that the answer and the certainty about it would carry a lot of the information relevant to predicting reaction time, whereas \\\"features\\\" implies something about the stimuli.\\n\\nThese ablations also have confusing numbers -- how are digits/strings getting 81% accuracy and an F1 score of 0?\\n\\n44 subjects is not a huge dataset as it relates to this type of analysis, and with 4 conditions, that is 11 per group. It is also a cognitively complex task which would presumably demonstrate real individual variability, making larger numbers even more important. I'm skeptical this dataset is sufficiently large to support a data-driven approach at all. 
This could be demonstrated by outperforming a pure cognitive model baseline, or with some careful permutation tests; but with only data-driven baselines, it's unclear what the model is picking up on.\\n\\nPresentation of the task, model, and results is very unclear. In particular, it was very difficult to figure out the architecture, and several design choices are unclear. Even after referencing the figure in the appendix, I'm unclear on why an LSTM was used, and what corresponds to the sequence dimension (trials? computation time? time within trial?). Some of the task details were also not clear -- are subjects informed of the task structure, or do they learn it from feedback? How are numbers sampled -- is the probability of true/false 50/50? I also had difficulty parsing the text on how the progress bar related to the agent's score.\\n\\nMissing axis labels -- e.g., what are the x and y axes in fig 6? fig 9?\\n\\nInterpretability -- these seem like behavioral features that all could have been measured in a pure data-driven approach.
Dj9wssUmLn
Beyond In-Context Learning: Enhancing Long-form Generation of Large Language Models via Task-Inherent Attribute Guidelines
[ "Do Xuan Long", "Duong Ngoc Yen", "Do Xuan Trong", "Anh Tuan Luu", "Kenji Kawaguchi", "Shafiq Joty", "Min-Yen Kan", "Nancy F. Chen" ]
In-context learning (ICL) is an important yet not fully understood ability of pre-trained large language models (LLMs). It can greatly enhance task performance using a few examples, termed demonstrations, without fine-tuning. Although effective in question answering, ICL often underperforms in long-form generation tasks such as summarization. Under appropriately realistic assumptions, we empirically and theoretically show that ICL demonstrations alone are insufficient to teach LLMs the task’s language and format distributions for generation. We argue for explicit exposure to the task distributions and hypothesize that defining them by prompting enhances model performance. To this end, we present LongGuide, which efficiently generates two parallel streams of guidelines capturing task language and format properties: (i) Metric Guidelines (MGs) that instruct models to optimize self-evaluated metrics; and (ii) Output Constraint Guidelines (OCGs) that constrain generation at both token and sentence levels. LongGuide automatically selects the best combination of guidelines, improving both strong open- and closed-source LLMs by over 5% in both zero- and few-shot settings. We show that LongGuide is generalizable, learnable by weak models to enhance strong ones, and integrates synergistically with automatic prompt optimizers.
[ "In-context Learning", "Prompt Optimization", "Long-form Generation" ]
Reject
https://openreview.net/pdf?id=Dj9wssUmLn
https://openreview.net/forum?id=Dj9wssUmLn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uffTbAuDbZ", "stEg9y7LyP", "qm8P7mzx57", "mB5lBK5urk", "leSfaBwPlv", "kPo19Ark21", "kNVdzzbzgq", "j2uk0JxCTY", "ivdCjf4qKf", "isjo3Rqy0D", "fwjbtJR3Dq", "dZNxJKb01K", "cwg2Nqz4ye", "bDYRX3mlgk", "aNdPuAKptY", "YkWIaK6aiW", "X2Y1N2UwTu", "TWUJsC6NWs", "TALnOUWn9U", "SmbOPno1kn", "RMbIxt2NKx", "Kz16eqjKDX", "KkFcOyoLFV", "K4Q9Wj47i3", "I7ekQ4aFRL", "H641mpAgHa", "GilKiSopjJ", "GMugpMkAmR", "F0vilDFJhv", "ABUR3MHnjE", "8zc37z76uj", "8mvpqbWm87", "7k4B6UmhLQ", "60fNQIcCXU", "51km4KPT3P", "2vkfP4ygan", "2sDbPvamFm" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1732110821051, 1732113451604, 1732110389227, 1730693429529, 1732590750600, 1734337478279, 1732259617894, 1733132000518, 1732113363855, 1732828438005, 1733234182478, 1732210542609, 1732366578329, 1730805467893, 1732949992461, 1732780425326, 1732366531985, 1732113196958, 1732114651072, 1732950194615, 1732261373534, 1730099988662, 1733198693847, 1730280515116, 1732610689774, 1732572955191, 1732778833437, 1732112587882, 1732778305833, 1730772521022, 1732112078645, 1733108793711, 1732950031961, 1732112569278, 1732853673091, 1737523724019, 1732777667145 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_ku1W" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Area_Chair_HmFA" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_V713" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_V713" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_V713" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_9VDd" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_YJ28" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_yZpL" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_YJ28" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_YJ28" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_yZpL" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_ku1W" ], [ "ICLR.cc/2025/Conference/Submission5765/Reviewer_V713" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5765/Authors" ] ], "structured_content_str": [ 
"{\"title\": \"Response to reviewer\", \"comment\": \"Dear reviewer yZpL,\\n\\nWe deeply thank you for your time and efforts in providing constructive reviews for our paper. We would like to address your concerns below and our updated changes in the paper are in blue.\\n\\n> metrics used as the core set of LongGuide are described insufficiently - the main thing explained about them is that they do not include LLM-based ones which sounds worrying since those are proved to have correlation with human judgements, at least for summarization tasks they're superior to the numeric ones like ROUGE-L.\\n\\nThank you for your suggestion. We have added GPT-4o-Judge scores evaluating how aligned the generated answer is with the reference answer and its quality on criteria:\\n\\n- Format consistency: ensuring the generated response matches the length, structure and layout of the reference.\\n- Content completeness: evaluating whether all key points present in the reference are included in the assistant's answer.\\n- Factuality: checking for factual correctness of the assistant's answer.\\n- Style adherence: ensuring that the tone, style, and level of detail of the assistant's answer match the reference.\\n- Assistant's answer quality: assessing how well the response satisfies the user's requirements.\\n\\nEach criterion is scored on a scale of 10, and the final GPT-4o-Judge score is the average of them. We have included the evaluation scores in Table 2 and Figure 2. 
We summarize the results below:\\n\\n| Method | Format | Content | Factuality | Style | Quality |\\n| -------- | ------- | ------- | ------- | ------- | ------- | \\n| Baseline | 4.18 | 4.83 | 6.64 | 4.36 | 4.75 |\\n| + APO | 4.73 | 5.91 | 7.26 | 4.91| 5.39 |\\n| + LongGuide | **5.72** | **6.01** | **8.25** | **5.78** | **6.04** |\\n\\nAmong five GPT-4o-Judge criteria in Figure 2, LongGuide notably improves Format, Style, and Factuality, confirming its effectiveness in aligning model generation with ground-truth distributions. In addition, the significant gains in Quality criterion, together with the ROUGE-L scores from Table 2 further demonstrate that LongGuide also significantly enhances the generation quality.\\n\\nOur evaluation prompting template is heavily inspired by (https://openreview.net/forum?id=uccHPGDlao).\\n\\n> with a combination of metric guidelines and output constraint guidelines evaluated at the last step of LongGuide, there arises the question on performance/cost aspects of LongGuide and how it compares to prompt optimization SoTA - I couldn't find it in the main paper content\\n\\nThank you for the constructive feedback. The prompting costs for generating guidelines we provided in Appendix F.3. 
Below we present the prompting costs for the last step of LongGuide compared to adv-ICL and APO on SAMSum using 3 demonstrations:\\n\\n| | Method | #Prompts Sampled | Cost |\\n |---------|------------------|------------------------------------------------------------|-------------------------------|\\n| **ZS** | adv-ICL | (3 iterations) x (1 instruction) x (5 variants) | 15 x prompt validation cost | \\n| | APO | (5 iterations) x (15 prompts sampled) x (1 instruction) | 75 x prompt validation cost | \\n| | LongGuide | 4 prompts (MG, OCG, MG-OCG, No guideline) | **4** x prompt validation cost | \\n| **FS** | adv-ICL | (3 iterations) x (3 demonstrations + 1 instruction) x (5 variants) | 60 x prompt validation cost | \\n| | APO | (5 iterations) x (15 prompts sampled) x (3 demonstrations + 1 instruction) | 300 x prompt validation cost | \\n| | LongGuide | 4 prompts (MG, OCG, MG-OCG, No guideline) | **4** x prompt validation cost | \\n\\nLongGuide is at least **3.75** times cheaper than PO algorithms in terms of prompting cost, as it requires only four prompt variants to verify on the validation set. For SAMSum, the validation of one prompt using 50 samples involves approximately 22K tokens, which incurs a cost of $0.02 USD as of November 19, 2024. \\n\\nWe have added these analyses in Appendix F.3. We have also added a sentence discussing the cost-efficiency of LongGuide in the Introduction L091-092.\\n\\n> l. 208: \\\"...and propose 12more metrics for a broader evaluation coverage\\\" - where are they described?\", \"they_are_described_in_table_11_as_we_noted_in_l198\": \"(Appx.-Table 11 for details).\\n\\n## In summary\\n\\nWe thank you for your time and constructive feedback. We hope our responses can sufficiently address your concern and improve your ratings. 
Thank you for your consideration.\"}", "{\"title\": \"Response to reviewer (2)\", \"comment\": \"> Additionally, the work is pretty much only a prompt work with little solid science contribution.\\n\\nThank you for your feedback. While we respect your perspective, we would like to address your concerns and clarify the significance of our work.\\n\\nAlthough your comment contrasts with most of the other reviewers (3 out of 5 rated our contribution 3/5 and the other reviewer thought our work was not very novel so they gave 2), we understand that the value of prompt-based research may not be universally appreciated. However, it is important to recognize that prompting plays a crucial role in practical applications, particularly in business and real-world settings, where traditional benchmarks often do not align with user-centric outcomes. In these contexts, prompting to obtain optimized model performance is of paramount importance.\\n\\nOur work extends beyond mere prompting; it tackles the critical challenge of aligning LLM generation distribution with downstream task distribution via prompting, as highlighted in lines L086-087. While numerous fine-tuning methods exist to address this, there is a notable gap in research on non-fine-tuning approaches for LLMs, such as prompting and calibration. These methods are often more accessible and scalable for a wider audience compared to traditional fine-tuning, as fine-tuning LLMs to be successful and reliable can be impractical for many researchers, engineers, and institutions.\\n\\nWe believe that solving LLM adaptation for long-form generation tasks through prompting represents a meaningful scientific contribution, as hopefully, you can agree with us. Our approach addresses a real-world problem and provides a practical solution that is both novel and widely applicable. We hope this clarification helps convey the value and importance of our work.\\n\\n> There is little direct takeaway from the paper. 
By direct takeaway, I mean that engineers and researchers can directly adopt the hyper-parameters and models given from a paper to their academic and industrial pipeline. \\n\\nThank you for your comment. We appreciate your perspective and would like to highlight the key contributions of our paper and their practical implications:\\n- (C1) We identify a critical challenge: the misalignment between LLM generation and the distributions required for downstream long-form generation tasks. We demonstrate both empirical and theoretical intuitions that ICL demonstrations alone are insufficient to teach LLMs the task-specific language and format distributions (L016-019). This finding is meaningful, as ICL is currently the most widely used instructional method for adapting LLMs (L034-036).\\n- (C2) We propose LongGuide, an efficient guideline-learning algorithm designed to improve the distribution alignment of LLMs for downstream tasks. This method significantly addresses this fundamental challenge.\\n- (C3) We provide an in-depth analysis of LongGuide, revealing key insights into its properties and why it works: it can be used by weaker models to enhance stronger models, it boosts the performance of non-instruct models, it significantly improves ICL performance, and it integrates effectively with automatic prompt optimizers.\\n\\nWe believe that engineers and researchers can take away findings from (C1), (C2), and (C3), as hopefully you can agree with us. The framework we present is also highly efficient, highly adaptable, and generalizable and can be directly applied to any long-form generation task in both academic and industrial pipelines.\\n\\n> I don't get the point of mentioning ICL in the paper's name. Because they only adapt instruction tuned LLMs in their research. And a lot of existing research points that there is a trade-off between the instruction following capacity and ICL. Maybe more experiments on base models are needed.\\n\\nThank you for your comment. 
We actually experimented with one non-instruction-tuned model, Mistral-7B-v0.1 in Appendix C.1. The results show that LongGuide improves more than half of the experiments, showing its potential effectiveness in enhancing even non-instruct models.\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"Dear reviewer 9VDd,\\n\\nWe deeply thank you for your time and efforts in providing constructive reviews for our paper. We would like to address your concerns below and our updated changes in the paper are in blue.\\n\\n> There exist many advanced automatic prompt optimization algorithm based on a training dataset, the authors only include APO, and I think they should include some more recent methods as baselines. Please add some more recent algorithms in automatic prompt improvement as baselines.\\n\\nThank you for your suggestion. As discussed in line 302, we have compared LongGuide with **adv-ICL** (Do et al., 2024), a strong prompt optimization (PO) method at the time, in Appendix C.3 across three representative datasets: CNN, IWSLT17, and CommGen. We have also incorporated **EvolPrompt** (Guo et al., 2024) in our experimental analysis in Appendix C.3. \\n\\nWe also acknowledge other recent PO algorithms such as **PromptAgent** (Wang et al., 2023) and **Promptbreeder** (Fernando et al., 2023). However, these PO methods are less applicable to long-form generation tasks due to the ambiguity of error feedback, compared to reasoning/MCQ tasks.\\n\\nCurrent PO algorithms, even the most advanced ones, struggle to outperform LongGuide in certain long-form generation tasks because they typically rely on sampling new prompts through search, evolution, or paraphrasing methods, which rarely produce comprehensive guidelines like those generated by LongGuide. LongGuide has its own unique advantage. 
Additionally, LongGuide can be combined with PO algorithms to further enhance its guidelines, as noted in Appendix C.3.\\n\\n> Please include some LLM-based evaluation method to compare the long-form generation instead of rouge and bleu scores. \\n\\nThank you for your suggestion. We have added GPT-4o-Judge scores (Section 4) evaluating how aligned the generated answer is with the reference answer and its quality on criteria:\\n\\n- Format consistency: ensuring the generated response matches the length, structure, and layout of the reference.\\n- Content completeness: evaluating whether all key points present in the reference are included in the assistant's answer.\\n- Factuality: checking for factual correctness of the assistant's answer.\\n- Style adherence: ensuring that the tone, style, and level of detail of the assistant's answer match the reference.\\n- Assistant's answer quality: assessing how well the response satisfies the user's requirements.\\n\\nEach criterion is scored on a scale of 10, and the final GPT-4o-Judge score is the average of them. We have included the evaluation scores in Table 2 and Figure 2. We summarize the results below:\\n\\n| Method | Format | Content | Factuality | Style | Quality |\\n| -------- | ------- | ------- | ------- | ------- | ------- | \\n| Baseline | 4.18 | 4.83 | 6.64 | 4.36 | 4.75 |\\n| + APO | 4.73 | 5.91 | 7.26 | 4.91| 5.39 |\\n| + LongGuide | **5.72** | **6.01** | **8.25** | **5.78** | **6.04** |\\n\\nAmong five GPT-4o-Judge criteria, LongGuide notably improves Format, Style, and Factuality, confirming its effectiveness in aligning model generation with ground-truth distributions. 
In addition, the significant gains in the Quality criterion, together with the ROUGE-L scores from Table 2, further demonstrate that LongGuide also significantly enhances the generation quality.\\n\\nOur evaluation prompting template is heavily inspired by (https://openreview.net/forum?id=uccHPGDlao).\\n\\n> human evaluate datasets in Figure 5 is small where only 50 examples.\\n\\nThank you for your feedback. For each sample, each annotator was asked to rate 7 metrics in total (5 random MG metrics and 2 OCG metrics). Due to resource constraints, we were only able to hire them for 50 samples, resulting in 350 ratings per annotator.\\n\\n## In summary\\n\\nWe thank you for your time and constructive feedback. We hope our responses can sufficiently address your concerns and improve your ratings. Thank you for your consideration.\"}", "{\"summary\": \"This paper explores the ICL capabilities of LLMs, pointing out that relying solely on ICL is insufficient for effectively completing long-form generation tasks, and provides experimental validation and theoretical analysis to support this claim. Based on these findings, the authors propose a hypothesis: optimizing multiple text property tasks can approximate the overall goal of optimizing long text generation tasks. To this end, the authors design LongGuide to generate two types of guidelines to enhance the performance of LLMs, and validate its effectiveness through a large number of experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method is grounded in a well-established conclusion supported by both experimental and theoretical evidence, providing a solid basis for its formulation.\\n2. The effectiveness of LongGuide has been confirmed through a number of experiments, demonstrating its significant performance improvement in multiple long-text generation tasks.\\n3. 
The article provides a solid theoretical analysis explaining why LongGuide can better achieve task objectives.\", \"weaknesses\": \"1. The LLM's selection of metrics is based on the distribution of its pre-training data, which may lead it to favor common or general metrics. Introducing human verification on top of this could be more effective.\\n2. I believe the novelty of this method is limited, as there are already some automated prompt designs for long-form generation [https://arxiv.org/html/2406.14449v1, https://arxiv.org/abs/2211.01910]. In certain cases, the LongGuide method can only choose not to use any guidelines in the final step.\", \"questions\": \"1. When calculating the JS.Avg metric, the authors used ChatGPT to score two responses, but the paper does not provide a specific display of the prompt used.\\n2. In the final step of the method, there are only four choices\\u2014use MG, use OCG, use both, or use neither. Can a more sophisticated strategy be designed to leverage the advantages of both? For example, considering the requirements of MG and OCG in different stages rather than simultaneously.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for reviewing our paper\", \"comment\": \"Thank you reviewer yZpL for your feedback!\"}", "{\"metareview\": \"This paper introduces LongGuide, a novel guideline-learning algorithm designed to enhance in-context learning (ICL) for long-form generation tasks. By concurrently generating metric-based and constraint-based guidelines from limited training data, LongGuide supplements standard ICL examples to improve adherence to desired text properties such as format and length. 
Experimental results demonstrate its effectiveness across various generation tasks, outperforming prompt optimization techniques on models of varying sizes, as evidenced by improvements in both automatic metrics (e.g., BLEU, ROUGE-L) and human evaluations.\", \"strengths\": [\"The paper introduces a relatively novel approach, and the idea of providing explicit task guidelines is well-motivated.\", \"The paper shows improved performance on multiple datasets and long-text generation tasks. In particular, it outperforms prompt optimization SoTA algorithms.\", \"The article provides a theoretical analysis of LongGuide, suggesting why it can better achieve task objectives.\"], \"weaknesses\": [\"Novelty: The proposed method lacks substantial originality, as automated prompt design techniques for long-form generation already exist in recent literature. Furthermore, the method sometimes defaults to not using any guidelines, which may limit its practical impact.\", \"Presentation: The theoretical formalization in Section 2 is not rigorous, with seemingly unnecessary remarks and weakly connected hypotheses that detract from the clarity and utility of the paper. The proofs provided do not substantively enhance understanding or support the claims made about the method.\", \"Evaluation: The evaluation relies heavily on outdated metrics like ROUGE and BLEU, which lack the expressiveness of modern alternatives and fail to align with human judgment. While BERTScore is included, it does not sufficiently address the limitations of n-gram-based metrics. Human evaluation is conducted on a small dataset of only 50 examples, limiting its reliability.\", \"Comprehensiveness: The scope of the experiments is narrow, focusing only on simpler datasets such as SAMSum, CNN, and SWiPE. 
More challenging and diverse benchmarks, along with a deeper exploration of performance/cost trade-offs compared to state-of-the-art prompt optimization methods, would strengthen the practical relevance of the results.\", \"To the authors' credit, several of the issues mentioned above were addressed during the discussion period\\u2014for example, through numerous additional experiments and a reframing of the theoretical section\\u2014which led some reviewers to increase their ratings. However, despite these significant changes, the paper remains very much borderline, making me hesitant to recommend acceptance, especially since none of the five reviewers was willing to give the updated paper a clear endorsement. During the reviewer-AC discussion, most reviewers maintained that the novelty of the work is limited. Consequently, I recommend rejecting the paper.\"], \"additional_comments_on_reviewer_discussion\": \"During the discussions, the authors addressed most of the weaknesses listed in the meta-reviews, except the one on novelty. During the reviewer\\u2013AC discussion, several reviewers expressed satisfaction with some of these changes. However, despite reading the authors' rebuttal, four reviewers stated that they still feel this work does not provide enough novel ideas or meaningful insights, leading me to recommend rejecting this paper.\"}", "{\"title\": \"Response to reviewer (4)\", \"comment\": \"Dear reviewer V713,\\n\\nThank you for engaging with us in this conversation and your constructive feedback. We would like to address your concerns below and our updated changes in the paper are in blue.\\n\\n> \\u2026this is a somewhat willful misquoting of YJ28\\u2026\\n\\nThank you for sharing your thoughts. We believe the *main* reason why our hypothesis was \\u201cnot solid\\u201d to ```YJ28``` was that we did not cover the different weightings for different objectives, as they elaborated on later and provided an illustrative example. 
Nevertheless, we thank you for your insight, agree with you, and have made the modifications below; see our response.\\n\\n> While I understand this is not a theory paper (and I agree that \\\"intuitions\\\" is more appropriate framing than \\\"derivations\\\"), I still feel the revised section 2 is not solid enough for acceptance. If you are using the form and verbiage of formal mathematics for your claims, you must also adopt the same level of rigor. I think the current use of notation does more to obstruct than aid the claims in the paper.\\n\\nThank you for your constructive feedback. We agree with the reviewer that this section might obstruct readers. As a result, we have moved Subsection 2.1 into Appendix A, and we have added a short paragraph **Theoretical intuitions** in Section 2.\\n\\n> My issue with Remark 2.2 wasn't really that it was taking up main text space-- it's that it isn't a meaningful statement at all. The remark states that if the method meets/exceeds the baseline performance in all measured metrics and \\\"text generation quality,\\\" it exceeds the baseline in \\\"task performance.\\\" Most ML papers take \\\"exceeds the baseline in measured metrics\\\" as a proxy for task performance, so this is not necessary to state. \\n\\nThank you for sharing your point in detail. We want to note that \\u201cmeasured metrics\\u201d in our case are not proxies for task performance as in conventional ML papers. They are **text properties** as defined in Section 2 (now Appendix A), such as \\u201cFluency, Context Coverage, Informativeness, etc.\\u201d For example, optimizing \\u201cContext Coverage\\u201d does not always lead to better summarization performance in all circumstances.\\n\\nWe believe the value of Remark 2.2 (now Remark A.1) is not trivial. Without Hypothesis 2.1 (now Hypothesis A.1), Remark 2.2 (now Remark A.1) can\\u2019t be proven. 
This is because there is no guarantee that text properties are the proxies for task performance. Remark 2.2 (now Remark A.1) emphasizes that text properties must be \\u201cwell-chosen\\u201d following Hypothesis 2.1, and we need to optimize their levels in the task data. Only in that case will they, all at once, serve as proxies for task performance. \\n\\nPlease feel free to share more of your thoughts. Thank you!\\n\\n> On 50-shot ICL, Downstream metrics\\n\\nWe are glad that our responses addressed your two concerns. Thank you for your feedback. \\n\\nWe appreciate your feedback and the references regarding many-shot prompting. As noted in our \\u201cResponse to Reviewer (1),\\u201d prior PO studies did not incorporate such a baseline, and we previously followed them. We have already supplemented your suggested baseline in the paper, see L293 and Appendix D.1.\\n\\n## In Summary\\nIn summary, we thank you for your time, effort, and constructive feedback. We hope you will recognize the empirical, observational, and constructive contributions presented in this work, which we believe benefit the field as a whole. With our modifications, we trust that you find our paper, updates, and clarifications appropriate and worthy of a higher score.\"}", "{\"comment\": \"> We want to clarify that Remark A.2 builds directly on Hypothesis A.1. Under the assumption that Hypothesis A.1 holds, we obtain two things: (1) the existence of\\n and (2) optimizing these functions during generation ensures a lower overall loss.\\n\\nWhile I see why you introduced Hypothesis A.1 to use as a stepping stone to Remark A.2, the concern here stands. Even when you assume Hypothesis A.1 holds, you can only use the fact that a set of functions that satisfies these properties _exists_. The proof (in line 1112 of the current pdf) assumes that the set of the text properties you are currently optimizing _is the set that you proved exists_, but there's no guarantee of this. 
Hypothesis A.1 says that you believe this set of properties exists; section 4/LongGuide is based on the idea that if you can discover these properties, you can approximately optimize the objective; your results show that you can discover a set of properties that serve as a better approximation of true quality than taking the maximum likelihood conditioned on a set of in-context examples. I don't think you need Remark A.2 at all for the claims you are making, and I don't think there is a way to prove it in the framework you've established.\\n\\n\\n> We believe now the theoretical intuition should be more solid, as hopefully you agree with us, given that all concepts are defined by functions, even though we did not specify how they are computed. \\n\\nWriting the text generation quality as a function (without defining how it is computed) is not any more rigorous than writing it in natural language, it just looks more math-y. I think this is the crux of my critique of appendix A: it _looks_ like math, but it contains remarks that make claims involving subjective criteria and proofs that are not sound. While I think the paper could be fine without a theoretical intuition section at all, I think Remark/Definition/Hypothesis A.1 are all fine; it's really Remark A.2 that I take serious issue with.\\n\\n\\nFinally, I want to thank the authors for engaging so consistently throughout the rebuttal period, especially as we've continued to disagree. I was quite conflicted on my final score here; while I still do not agree with the authors on the formalization and find parts of it remain fundamentally flawed, I do recognize that this is not the main point of the paper. After consideration, I chose to raise my score 3->5. 
If the paper is accepted, I urge the authors to carefully rethink the mathematical sections of the paper for the final version.\"}", "{\"title\": \"Response to reviewer (1)\", \"comment\": \"Dear reviewer YJ28,\\n\\nWe deeply thank you for your time and efforts in providing constructive reviews for our paper. We would like to address your concerns below and our updated changes in the paper are in blue.\\n\\n> ...Specifically, the discussion on the weights of different objectives are pretty weak. Intuitionally it is not solid for me...I don't always assign 0.9 weight to storyline and 0.1 weight to writing.\\n\\nThank you for your comment. We have revised Hypothesis 2.1 and Proof of Remark 2.2 to address the weighting of different objectives. \\n\\nWe agree that human preferences can vary dynamically, with different temporal weights assigned to objectives based on context. However, modelling those is highly complex: accurately determining such weighting parameters typically requires careful empirical experiments or expert judgment.\\n\\nIn this work, we have tried our best by (1) selecting the most important metrics for capturing task properties and (2) incorporating the specific levels of these metrics from the training data. Extending our work to model dynamic objective weights is a valuable direction for future work. We will add this discussion into the Generalization section of our paper.\\n\\n> The metrics reported in Section 4.1 is pretty weak, with only BLEU-1/ ROUGE-L given and without more clear and specific evaluation aspects related to human, like fluency, factuality, and etc....\\n\\nThank you for your suggestion. 
We have added GPT-4o-Judge scores (Section 4) evaluating how aligned the generated answer is with the reference answer and its quality on criteria:\\n\\n- Format consistency: ensuring the generated response matches the length, structure and layout of the reference.\\n- Content completeness: evaluating whether all key points present in the reference are included in the assistant's answer.\\n- Factuality: checking for factual correctness of the assistant's answer.\\n- Style adherence: ensuring that the tone, style, and level of detail of the assistant's answer match the reference.\\n- Assistant's answer quality: assessing how well the response satisfies the user's requirements.\\n\\nEach criterion is scored on a scale of 10, and the final GPT-4o-Judge score is the average of them. We have included the evaluation scores in Table 2 and Figure 2. We summarize the results below:\\n\\n| Method | Format | Content | Factuality | Style | Quality |\\n| -------- | ------- | ------- | ------- | ------- | ------- | \\n| Baseline | 4.18 | 4.83 | 6.64 | 4.36 | 4.75 |\\n| + APO | 4.73 | 5.91 | 7.26 | 4.91| 5.39 |\\n| + LongGuide | **5.72** | **6.01** | **8.25** | **5.78** | **6.04** |\\n\\nAmong five GPT-4o-Judge criteria in Figure 2, LongGuide notably improves Format, Style, and Factuality, confirming its effectiveness in aligning model generation with ground-truth distributions. In addition, the significant gains in Quality criterion, together with the ROUGE-L scores from Table 2 further demonstrate that LongGuide also significantly enhances the generation quality.\\n\\nOur evaluation prompting template is heavily inspired by (https://openreview.net/forum?id=uccHPGDlao).\\n\\n> The experiments conducted in the paper only cover SAMSum/ CNN/ SWiPE in the main text, which are not comprehensive and challenging at least for nowadays research.\\n\\nThank you for your comment. 
We conduct our main experiments across 7 diverse generation tasks, including summarization, text simplification, translation, dialogue generation, and table-to-text generation, see Table 2.\"}", "{\"comment\": \"> Without Hypothesis 2.1 (now Hypothesis A.1), Remark 2.2 (now Remark A.1) can\\u2019t be proven. This is because there is no guarantee that text properties are the proxies for task performance.\\n\\nThere's no guarantee that any metric is a proxy for task performance! You could write a similar formulation to Hypothesis 2.1 about just about any method for self-refinement or output reranking.\\n\\n> Remark 2.2 (now Remark A.1) emphasizes that text properties must be \\u201cwell-chosen\\u201d following Hypothesis 2.1\\n\\nI also think this is an issue in the proof, for what it's worth -- in the proof of Remark 2.2, you assume that the set of text properties you are measuring are _well chosen_, but Hypothesis 2.1 only claims that such a set _exists_. \\n(Also I believe in the current version the original Remark 2.2 is Remark A.2. I refer to it here as 2.2 for clarity.)\\n\\n\\nOverall, I still don't think your mathematical formulation is meaningful, because the remarks leverage text descriptions of concepts that do not have a mathematically rigorous definition, like \\\"text quality\\\" in Remark 2.2. At best, it adds nothing to your empirical results; at worst, I worry it could be misleading to the reader.\"}", "{\"title\": \"Summary of reviews, contributions, and changes\", \"comment\": \"Dear Reviewers and Chairs,\\n\\nWe sincerely thank all the reviewers again for their insightful and constructive reviews. We are grateful that they found our paper has good writing (```YJ28```) and recognized our method to be novel (```V713, yZpL```), well-motivated (```YJ28, ku1W```) and effective (```V713, yZpL, ku1W, 9VDd```) supported by comprehensive experiments (```9VDd, V713, yZpL, YJ28, ku1W```). 
And we are delighted that subsequent discussions have successfully addressed your major concerns (```ku1W, YJ28, yZpL```), and reviewers ```V713, ku1W, YJ28, yZpL``` have raised their scores.\\n\\nOur key contributions compared to automatic prompt engineering (PE)/optimization (PO) algorithms are summarized as follows:\\n\\n1. We show that ICL demonstrations alone fail to enable pre-trained LLMs to consistently maintain their language and format properties during generation, as the ICL demonstrations alone can\\u2019t fully align the LLM-induced distribution to the desired task distribution in the limit. \\n\\n2. We then propose a novel alignment method (LongGuide) by automatically selecting and capturing important task-specific language and format properties and explicitly instructing LLMs to optimize them during generation. To the best of our knowledge, LongGuide is the first to explore enhancing the generation by optimizing task properties during this process.\\n\\n3. LongGuide significantly outperforms baselines and PO algorithms in long-form generation tasks. It is also efficient (>= 3.5x cheaper than PO methods), generalizable, transferrable, and can be synergistically combined with PO/PE algorithms.\\n\\nFollowing the insightful suggestions of the reviewers, we have made the following revisions: \\n\\n1. We have added GPT-4o-Judge as an LLM evaluation method for our main experiments (Table 3, Figure 5, Table 6) to address the major concerns from ```9VDd, V713, YJ28, yZpL```.\\n\\n2. We have added four new LLMs to our experiments in Section 2 verifying the limitations of the ICL method following the suggestion of ```YJ28```.\\n\\n3. We have shortened the theoretical intuition section into two short paragraphs in Section 2, and moved the full section into the Appendix to address the concerns from ```V713, YJ28```. We will also remove Remark A.2 and its proof to fully address the concern from ```V713```. 
We have retained the section's core content to support reviewers who recognized its benefits.\\n\\n4. We have added new baselines including more recent PO baselines (```9VDd```), a many-shot prompting baseline (```V713```), and several-stage baselines (```ku1W```).\\n\\n5. We have added AlpacaEval2 evaluation in Section 5.3 to further verify the effectiveness of our method on real-life LLM chat, following the suggestion of ```YJ28```.\\n\\n6. Prompting cost analyses comparing our method with PO algorithms have also been added, addressing the concern raised by ```yZpL```.\\n\\n7. We have revised the discussions of Sections 5.2 and 5.4 to provide more insights. We have also revised some minor details suggested by reviewers and provided implementation details for the new experiments added.\\n\\nThank you all once again for your valuable feedback, dedication, engagement and attention; we greatly appreciate them.\\n\\nBest Regards, \\n\\nAuthors\"}", "{\"comment\": \"**On the formalization in 2.1:**\\n> While this contradicts the rest of reviewers \\n\\nI do think ```ku1W``` and I disagree on this, but I'd like to note this is a somewhat willful misquoting of ```YJ28```, who wrote \\u201cI like the theoretical derivations given in Section 2.1. *But it is not solid. I don't buy in that 4.1 is a strong proof of Hypothesis 2.1.*\\u201d I share this concern about Hypothesis 2.1.\\n\\n> It is important to note that the purpose of this subsection is not to present a \\u201crigorous\\u201d theory, but rather to provide an intuition that motivates our approach. We believe that a \\u201csolid\\u201d theoretical discussion should deserve the whole paper discussing details such as addressing epsilon distribution recovery. However, this is not our main focus in this work.\\n\\nWhile I understand this is not a theory paper (and I agree that \\\"intuitions\\\" is more appropriate framing than \\\"derivations\\\"), I still feel the revised section 2 is not solid enough for acceptance. 
If you are using the form and verbiage of formal mathematics for your claims, you must also adopt the same level of rigor. I think the current use of notation does more to obstruct than aid the claims in the paper.\\n\\n\\n> We remove the Remark 2.2 as you suggested. We describe it in L180-181 and put the old Remark 2.2 into Appendix A.\\n\\nMy issue with Remark 2.2 wasn't really that it was taking up main text space-- it's that it isn't a meaningful statement at all. The remark states that if the method meets/exceeds the baseline performance in all measured metrics and \\\"text generation quality,\\\" it exceeds the baseline in \\\"task performance.\\\" Most ML papers take \\\"exceeds the baseline in measured metrics\\\" as a proxy for task performance, so this is not necessary to state. And \\\"text generation quality\\\" is not defined (and, the phrasing implies, is something separate from the $f_i$s). \\n\\n\\n**On 50-shot ICL:**\\n I'm glad to see that LongGuide outperforms ICL in a setting with more demonstrations in-context. I do disagree that this is an \\\"unnatural\\\" setting-- several recent works (e.g. [1](https://arxiv.org/abs/2405.00200), [2](https://arxiv.org/abs/2404.11018)) have demonstrated that ICL is effective up to several thousand examples in context, and many example selection methods are less effective in the higher-demonstration-regimes. \\n\\n**Downstream metrics**\\nThank you for adding additional metrics here! This addresses my concern on this point.\"}", "{\"title\": \"Request for your review\", \"comment\": \"Dear Reviewer YJ28,\\n\\nAs the author-reviewer discussion period is nearing its conclusion, we kindly request your consideration of our responses to your concerns.\\n\\nWe deeply thank you for your time and careful review. Many of the comments in your review we have actually addressed in the revised paper, as hopefully you can agree as mentioned above. 
We thank you for taking the care to provide such a detailed critique of the paper, and trust that you find our clarifications appropriate and worthy of a higher score.\\n\\nThank you for your attention and consideration.\\n\\nBest regards, The Authors\"}", "{\"summary\": \"ICL usually uses demonstration examples to improve the LLM's performance. In this paper, the authors propose LongGuide (\\u00a73), a guideline-learning algorithm that efficiently generates two types of guidelines concurrently from limited task training data as supplementary instructions to enhance LLMs: metrics guidelines and output constraint guidelines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"the paper demonstrates the complementary values of guidelines, and proposes two kinds of guidelines: metrics guidelines and output constraint guidelines.\", \"the paper proposes a full algorithm to learn guidelines from the task training dataset, and shows improved performance on multiple datasets of different tasks\", \"The paper did a thorough ablation study and investigation/evaluations to study the impact of guidelines.\"], \"weaknesses\": [\"There exist many advanced automatic prompt optimization algorithms based on a training dataset, the authors only include APO, and I think they should include some more recent methods as baselines, so that we can have a better understanding of the usefulness of this method.\", \"the automatic metrics are outdated, e.g. using ROUGE scores for summarization. 
The authors can use LLMs as judges to show the improvement.\", \"human evaluate datasets in Figure 5 is small where only 50 examples.\"], \"questions\": [\"Please add some more recent algorithms in automatic prompt improvement as baselines.\", \"Please include some LLM-based evaluation method to compare the long-form generation instead of rouge and bleu scores.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your thoughtful feedback (6)\", \"comment\": \"Dear reviewer YJ28,\\n\\nThank you for your thoughtful feedback. We sincerely appreciate the time and effort you have dedicated to evaluating our work. Below, we provide short responses to your comments and suggestions.\\n\\n> 3. \\u2026 a more detailed statistic analysis between the real downstream [i.e. from the true user, like what have been done in Chatbot Arena / WildBench] and the provided aspects should be deeply studied\\u2026\\n\\nThank you for your thoughtful comment. We agree that, as you suggested, this will be an exciting (and possibly important) future direction, as we believe aligning fine-grained metric properties is a promising alternative alignment direction, which our work follows. Below we provide the metrics chosen for AlpacaEval2 and WildBench-V2 for your reference:\\n\\n- AlpacaEval2: ```{'Accuracy': 5, 'Clarity': 4.8, 'Coherence': 5, 'Completeness': 5, 'Conciseness': 4.8, 'Engagement': 5, 'Relevance': 5}```\\n- WildBench-V2: ```{'Accuracy': 5, 'Coherence': 4.8, 'Completeness': 5, 'Creativity': 5, 'Engagement': 5, 'Informativeness': 5, 'Naturalness': 5, 'Readability': 4.8}```\\n\\n
While in this work we collected metrics widely used in prior studies (Section 3, Step 1), we also want to note that these metrics are flexibly customizable and generalizable (Generalizability Section). We strongly encourage task-specific customizations to further enhance the applicability and effectiveness of our approach (Generalizability Section).\\n\\n> Two comments about the name of including ICL and considering trying HelloBench\\u2026\\n\\nThank you for your thoughtful suggestions; we greatly appreciate them. We named the paper \\\"Beyond ICL\\\" as it reflects our investigation into the limitations of in-context learning (ICL) in recovering the desired distribution for LLMs, though we acknowledge that this exploration could be more thorough. Nonetheless, we take your naming suggestion seriously and are actively considering it. Since the rebuttal period is closing soon, we are unable to give you a concrete response but we are committed that we will revise these points carefully in our revised manuscript. We sincerely thank you for your time and insightful feedback, and we hope our revised version will meet your expectations.\"}", "{\"comment\": \"I will increase my score to 6. But I would still recomment you to:\\n1. Seriously reconsider the name of including ICL, because it does not makes any sense. Since the paper does not seriously evaluate the role of ICL capacities in the Long-form Generation and your method also has little relationship with that. In context learning is an intrinsic capacity of LLM and the paper shows nothing beyond that. Besides that, for a strong enough and large enough LLM, like Qwen-2.5-72B(base), if a decent prompt and decent samples can be selected to activate the LLM's self-judge capacity, your work may also be adopted on non-instruction model. But, still, \\\"Beyond\\\" is still a bad word here.\\n2. 
consider trying HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models, or selecting long-form generation tasks from WildBench / AlpacaEval like what they (HelloBench) have done. The real downstream needs from the users are the only important thing. \\n\\nI will look forward to a better version of the paper if accepted.\"}", "{\"title\": \"Request for your review\", \"comment\": \"Dear Reviewer ku1W,\\n\\nAs the author-reviewer discussion period is nearing its conclusion, we kindly request your consideration of our responses to your concerns.\\n\\nWe deeply thank you for your time and careful review. Many of the comments in your review we have actually addressed in the revised paper, as hopefully you can agree as mentioned above. We thank you for taking the care to provide such a detailed critique of the paper, and trust that you find our clarifications appropriate and worthy of a higher score.\\n\\nThank you for your attention and consideration.\\n\\nBest regards, The Authors\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"Dear reviewer ku1W,\\n\\nWe deeply thank you for your time and efforts in providing constructive reviews for our paper. We would like to address your concerns below and our updated changes in the paper are in blue.\\n\\n> The LLM's selection of metrics is based on the distribution of its pre-training data...\\n\\nThank you for your suggestion. Appendix D.11 shows the metrics selected for each task by each model. After reviewing the metrics chosen by models like Mistral and ChatGPT, we find no clear bias in their selection process.\\n\\nBoth models consistently choose key metrics like \\u201cAccuracy,\\u201d \\u201cClarity,\\u201d \\u201cRelevance,\\u201d and \\u201cUnderstandability,\\u201d which are important for many language tasks. They also adjust their metric choices based on the tasks. 
For example, specific tasks like CNN and XL-Sum include additional metrics such as \\u201cEngagement\\u201d and \\u201cSemantic Coverage.\\u201d This suggests that the models select metrics reasonably, based on the needs of the task, rather than showing a preference for certain metrics. Overall, the variety and suitability of the selected metrics show that the process is fair and appropriate for the tasks.\\n\\nWe have supplemented the discussion in Appendix D.11 where the selected metrics are presented.\\n\\n> I believe the novelty of this method is limited...\\n\\nThank you for your perspective and for referencing APEER and APE. Both APEER and APE are automatic prompt generation methods: APEER uses a feedback-and-refine mechanism for prompt optimization and APE selects prompts based on validation performance.\\n\\nOur method fundamentally differs from APEER and APE, as well as all prompt optimization (PO) methods, in two ways: \\n- Rather than focusing on refining, paraphrasing, or evolving prompts like PO methods, we generate task-specific guidelines to improve LLM alignment and performance in long-form generation tasks; \\n- Our approach prioritizes task property and format distribution alignment over solely optimizing model performance like PO studies.\\n\\nAs shown in Table 2 and Appendix C.2, LongGuide consistently outperforms advanced prompt optimization (PO) methods in long-form generation tasks. Current PO algorithms, even the most advanced ones, struggle to outperform LongGuide in certain long-form generation tasks because they typically rely on sampling new prompts through search, evolution, or paraphrasing methods, which rarely produce comprehensive guidelines like those generated by LongGuide. LongGuide has its own unique advantage.\\n\\nOur method has also been recognized as novel and constructive by other reviewers (Reviewer V713 said our work is \\u201cnovel\\u201d and 3/5 reviewers rated our contribution 3). 
We thank you for your feedback and we hope you appreciate the novelty of our work.\\n\\n> When calculating the JS.Avg metric, the authors used ChatGPT to score two responses, but the paper does not provide a specific display of the prompt used.\\n\\nThank you for your comment. We have provided the ChatGPT property scorer prompt in Appendix F.2.\\n\\n> ...there are only four choices\\u2014use MG, use OCG, use both, or use neither. Can a more sophisticated strategy be designed to leverage the advantages of both?...\\n\\nThank you for your feedback. We would like to clarify that in the inference stage, our approach employs LongGuide directly, which does not involve multiple stages.\\n\\nWe extended our experiments to 2 baselines using 2 stages, where we used:\\n\\nBaseline 1: MG to OCG\\n- Stage 1: Instruction + Input + MG -> Output 1 (as usual MG baseline)\\n- Stage 2: \\u201cRefine the following output from the task:\\\\n\\u201d + Input + Output 1 + OCG -> Output 2\\n\\nBaseline 2: OCG to MG\\n- Stage 1: Instruction + Input + OCG -> Output 1 (as usual OCG baseline)\\n- Stage 2: \\u201cRefine the following output from the task:\\\\n\\u201d + Input + Output 1 + MG -> Output 2\\n\\nThe results are provided below with ChatGPT:\\n\\n| #shot | CNN (3.0.0) | SWiPE | Comm.-Chall. |\\n| -------- | ------- | ------- | ------- | \\n| Zero-shot | 20.12 / 7.44 | 45.09 / 7.28 | 24.21 / 6.53 |\\n| Zero-shot + LongGuide | **22.19 / 7.67** | **45.09 / 7.28** | **34.41 / 7.23** |\\n| Zero-shot + MG to OCG | 16.74 / 6.23 | 30.22 / 5.76 | 15.92 / 4.92 |\\n| Zero-shot + OCG to MG | 9.62 / 4.18 | 20.34 / 4.82 | 8.86 / 3.97 |\\n\\nWe observe that 2-stage baselines significantly degrade model performance, as the final generated answers deviate substantially from the ground truth. 
We attribute this to the model's inherent bias amplified by self-refining (https://aclanthology.org/2024.acl-long.826/).\\n\\nWe agree that exploring more sophisticated strategies to leverage the complementary strengths of MG and OCG could be a promising direction for future research. We will incorporate the discussion of this potential extension into the Generalization section of our paper.\\n\\n## In Summary\\n\\nWe thank you for your time and constructive feedback. We hope our responses sufficiently address your concerns and improve your ratings. Thank you for your consideration.\"}", "{\"title\": \"Response to reviewer (3)\", \"comment\": \"> For the proof in Section 2.1, it is well-written but only a descriptive math. Remark 2.1 and Definition 2.1 only have weak relationship with hypothesis 2.1. And Hypothesis 2.1 only claims a simple thing: Task T can be optimized by optimizing several understandable aspects of the task, e.g. fluency, factuality, and etc. They don't give a solid proof for that.\\n\\nThank you for your feedback. We would like to clarify that the purpose of this subsection is not to present a \\u201crigorous\\u201d theory, but rather to provide an intuition that motivates our approach. We believe that a \\u201csolid\\u201d theoretical treatment would deserve an entire paper of its own, discussing details such as addressing epsilon-distribution recovery. To address your concern, we have made the following modifications:\\n\\n- We move Subsection 2.1 into Appendix A, and we add a short paragraph of theoretical intuitions in Section 2.\\n- We change \\u201cTheoretical Derivations\\u201d to \\u201cTheoretical Intuitions\\u201d.\\n- We shorten the definition of task property significantly for more conciseness.\\n- We remove Remark 2.2 as you suggested. We describe it in L180-181 and put the old Remark 2.2 into Appendix A.\\n\\nWe hope these changes improve the clarity and coherence of the section. We have also tried our best to balance the opinions among reviewers. 
If you have further suggestions, we would be happy to consider them.\\n\\n## In summary\\n\\nWe thank you for your time and constructive feedback. We hope our responses can sufficiently address your concern and improve your ratings. Thank you for your consideration.\"}", "{\"title\": \"Request for your review\", \"comment\": \"Dear reviewer 9VDd,\\n\\nWe hope this message finds you well. As the author-reviewer discussion period is nearing its conclusion, we kindly request your consideration of our responses to your concerns. We sincerely thank you for your thoughtful reviews and the time and effort you have dedicated to evaluating our work.\\n\\nThank you for your attention.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Summarize major changes to all reviewers and invite them to review and discuss\", \"comment\": \"Dear the Reviewers,\\n\\nWe sincerely appreciate and thank you for the time and effort you have dedicated to providing detailed feedback on our paper. We would like to bring your attention to the major changes we have made in our paper:\\n\\n- We have added GPT-4o-Judge as an LLM evaluation method to address the concerns from ```9VDd, V713, YJ28, yZpL```.\\n- We have shortened the theoretical intuition section in our paper into a paragraph in Section 2, and moved the full section into the Appendix to address the concerns from ```V713, YJ28```. 
We have retained the section's content in the Appendix to support reviewers who recognized its benefits.\\n- We have added new baselines to address all baseline-related concerns, including more recent PO baselines (```9VDd```), a many-shot prompting baseline (```V713```), and several-stage baselines (```ku1W```).\\n- We have added ```AlpacaEval2``` evaluation to further verify the effectiveness of our method on real-life LLM chat, following the suggestion of ```YJ28```.\\n- Prompting cost analyses comparing our method with PO algorithms have been also added, addressing the concern raised by ```yZpL```.\\n \\nFor their details, we invite you to review our responses, we have carefully responded to all the concerns raised by each of you, which are ready for your consideration. \\n\\nThank you once again for your invaluable feedback, dedication, and attention. We are looking forward to discussing with you.\\n\\nBest regards,\\nThe authors\"}", "{\"summary\": \"The paper proposes a study on LLMs' generation quality using in-context learning showing its ineffectiveness on long-context tasks, and proposed a new technique, LongGuide, to alleviate the problem.\\nLongGuide is an algorithm to generate customized guidelines for the LLM to optimize a set of imposed self-evaluated metrics. 
Overall, LongGuide collects a set of task-independent metrics, obtains the verbal descriptions of metric values via LLM self-evaluation and combines them with constraint-based guidelines which instruct the model on the numerical properties such as token/sentence count.\\n\\nThe technique is evaluated on a set of long-form generation tasks (summarization, text simplification, machine translation, generation) against SoTA prompt optimization algorithms such as APO and adv-ICL.\\nThe experiments show that LongGuide improves LLMs' performance across the board, working both for medium-sized models (Mistral-7B-it) and large ones (ChatGPT 3.5 Turbo) - as shown via automatic metrics (BLEU, ROUGE-L) and human evaluation.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"a new method for automatically finding a set of LLM guidelines that improve longform generation is proposed\", \"it outperforms prompt optimization SoTA algorithms on a series of benchmarks, both for medium-sized and large models\", \"in addition to introducing the algorithm, the authors conduct an extensive study on how in-context learning is ineffective for longform generation\"], \"weaknesses\": [\"metrics used as the core set of LongGuide are described insufficiently - the main thing explained about them is that they do not include LLM-based ones which sounds worrying since those are proved to have correlation with human judgements, at least for summarization tasks they're superior to the numeric ones like ROUGE-L\", \"with a combination of metric guidelines and output constraint guidelines evaluated at the last step of LongGuide, there arises the question on performance/cost aspects of LongGuide and how it compares to prompt optimization SoTA - I couldn't find it in the main paper content\"], \"questions\": \"l. 
208: \\\"...and propose 12more metrics for a broader evaluation coverage\\\" - where are they described?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your thoughful comments\", \"comment\": \"Dear reviewer V713,\\n\\nThank you for getting back to us with your very thoughtful feedback; we sincerely appreciate your explanations and feedback and the time and effort you have dedicated to evaluating our work. We would like to address your concerns below.\\n\\n> While I see why you introduced Hypothesis A.1 to use as a stepping stone to Remark A.2, the concern here stands...\\n\\n> Writing the text generation quality as a function (without defining how it is computed) is not any more rigorous than writing it in natural language, it just looks more math-y. I think this is the crux of my critique of appendix A\\u2026\\n\\nThank you for your thoughtful comments; we understand your concerns, and you are right. Our initial motivation for Hypothesis A.1 was to establish two key points: first, that the set of text properties exists, and second, optimizing this set during generation leads to improved alignment. However, upon reflection, we agree with your observation regarding the gap between Hypothesis A.1 and Remark 2.2.\\n\\n*To address your two concerns, we will remove Remark A.2 and its proof for two reasons all pointed out by you: the identified gap and the lack of a rigorous definition of text quality.* We are glad to receive your feedback that Remark/Definition/Hypothesis A.1 are fine, so we will keep them in the revised manuscript.\\n\\nIf possible, we would greatly appreciate your confirmation that the proposed modifications fully address your concerns and bring us into an agreement.\\n\\nThank you once again for your consistently constructive feedback, engagement, and attention. It has been a pleasure working with you on this rebuttal. 
Thank you for reviewing our paper!\"}", "{\"summary\": \"The paper introduces LongGuide to efficiently generate two parallel streams of guidelines capturing task language and format properties. All the experiments are conducted on finetuned LLMs (Mistral/ ChatGPT), which is a major concern without isolating ICL capacity from instruction-following capacities. I don't get the point of mentioning In-Context Learning in the name.\\n\\nI like the theoretical derivations given in Section 2.1. But they are not solid. I don't buy that 4.1 is a strong proof of Hypothesis 2.1. \\n\\nSpecifically, the discussion on the weights of different objectives is pretty weak. Manual preferences on different aspects of generated long-form content are actually always highly unbalanced based on queries and contents. The theory basically ignores the unbalanced and dynamic weights on different objectives and simply treats them the same. \\n\\nThe metrics and datasets used for evaluating LongGuide are pretty weak. \\n\\nAdditionally, the work is pretty much only a prompt work with little solid science contribution. The prompt workflow itself is also only based on a weak hypothesis and does not intuitively match the manual work pattern. For manual workflow, I refer to a human writer's working pattern. A human writer doesn't write based on a given metrics collection and keep reviewing it. Intuitively it is not solid for me. Take novel writing as an example: as a reader, sometimes I pay more attention to whether the writing is good, sometimes I pay more attention to storyline. I don't always assign 0.9 weight to storyline and 0.1 weight to writing.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Good writing.\\n2. Motivation is clear.\\n3. Their ablation studies on each element of their method in the appendix are detailed and clear.\", \"weaknesses\": \"1. For the proof in Section 2.1, it is well-written but only descriptive math. 
Remark 2.1 and Definition 2.1 only have a weak relationship with Hypothesis 2.1. And Hypothesis 2.1 only claims a simple thing: Task T can be optimized by optimizing several understandable aspects of the task, e.g., fluency, factuality, etc. Note that each aspect has a fixed L in their setting. They don't give a solid proof for that. Intuitively it is not solid for me. Take novel writing as an example: as a reader, sometimes I pay more attention to whether the writing is good, sometimes I pay more attention to storyline. I don't always assign 0.9 weight to storyline and 0.1 weight to writing.\\n\\n2. The metrics reported in Section 4.1 are pretty weak, with only BLEU-1/ ROUGE-L given and without clearer and more specific evaluation aspects related to humans, like fluency, factuality, etc. If human evaluation is not accessible, at least LLM-based evaluation should be given. Although they provide BERTScore, it is actually similar to BLEU-1/ ROUGE-L in the evaluation aspects. It is mainly based on similarity. They should move more detailed analysis or some estimations of the manual evaluation in the appendix to the main text.\\n\\n3. There is little direct takeaway from the paper. By direct takeaway, I mean that engineers and researchers can directly adopt the hyper-parameters and models given in a paper to their academic and industrial pipeline. The experiments conducted in the paper only cover SAMSum/ CNN/ SWiPE in the main text, which are not comprehensive and challenging, at least for today's research.\", \"questions\": \"I don't get the point of mentioning ICL in the paper's name, because they only adapt instruction-tuned LLMs in their research. And a lot of existing research points out that there is a trade-off between the instruction-following capacity and ICL. 
Maybe more experiments on base models are needed.\", \"flag_for_ethics_review\": ['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the detailed response provided by the authors. However, I'd like to respectfully retain my score, because the rebuttal doesn't address several key problems of the paper:\\n\\n1. One non-instruction-tuned model, Mistral-7B-v0.1, is definitely not enough for an ICL-related paper. You should provide a wide range of sota foundation models to verify your method's efficiency. For open-source models, you can select Qwen-2.5 / Yi / LLaMA-3.1 / Mistral-v0.3 / etc. For fully transparent models, you can select OLMo / MAP-Neo / Pythia / etc. Honestly speaking, although I fully understand that LLM version updating is crazy, Mistral-v0.1-7B is still too outdated to be convincing for a paper in late 2024.\\n2. I still recommend you to include more diverse NLG tasks and benchmarks published after 2023. For the tasks in the paper, there is a really high possibility that they are included in the pretrain corpus and not convincing at all. The rebuttal about that is pretty weak.\\n3. For the claimed scientific contribution, a more detailed statistical analysis between the real downstream [i.e. from the true user, like what has been done in Chatbot Arena / WildBench] and the provided aspects should be deeply studied. Instead of a prompt engineering paper, a worse thing is overclaiming. The experiment results in the paper do not solidly solve LLM adaptation for long-form generation tasks or clearly reveal the relationship between different downstreams and the provided fine-grained metrics. 
If the detailed analysis can be provided, the analysis itself can be a very good paper then.\"}", "{\"title\": \"Increased soundness upon rebuttal\", \"comment\": \"Thanks to the authors for the comprehensive additions to the paper - I increased the Soundness rating by one point.\"}", "{\"title\": \"Thank you for your feedback, we have added AlpacaEval2 and are awaiting WildBench-V2 (5)\", \"comment\": \"> 2. I still recommend you to include more diverse NLG tasks and benchmarks published after 2023. For the tasks in the paper, there is a really high possibility that they are included in the pretrain corpus and not convincing at all. The rebuttal about that is pretty weak.\\n\\nThanks for this new feedback. We understand your concern about the data contamination of LLMs. We have added and summarized our AlpacaEval2 evaluations and are awaiting WildBench-V2 evaluations from AI2. We would like to clarify:\\n\\n- *Our selected tasks are widely used by the community:* The tasks and benchmarks used in our evaluation are widely adopted for assessing the generation capabilities of LLMs ([Jang et al., NeurIPS 2024](https://arxiv.org/pdf/2411.06710v1), \\n[Feng et al., EMNLP 2024](https://aclanthology.org/2024.findings-emnlp.648.pdf), \\n[Bai et al., ACL 2024](https://aclanthology.org/2024.acl-long.172.pdf)), similar to GSM8K/SVAMP for reasoning. These benchmarks are suitable for our method, as our method requires only a small number of available training samples.\\n\\n- *The benchmarks are valuable as they challenge and expose LLM weaknesses:* As shown in Table 3 and Figure 5, none of the tested models scored above 8 on GPT-4o-Judge or surpassed 50% ROUGE-L against ground truth. 
Average quality and format ratings for answers remained below 5/10, highlighting LLM limitations.\\n\\n- *We prioritize selecting widely used test sets and their latest versions for evaluation:* \\n - CNN: latest version 3.0.0\\n - SWiPE: published May 2023\\n - Synthetic-Persona-Chat: released 2024\\n - CommonGen-Challenge: challenge test set of CommonGen\\n - SAMSum, XL-Sum, and IWSLT: widely cited and used in summarization and translation studies\\n\\nWhile we agree with you that we cannot control whether these test sets were used to pretrain LLMs, similar to the GSM8K/SVAMP benchmarks, our selected benchmarks remain reasonable and valuable for studying capabilities and weaknesses in LLMs, as we hope you can agree.\\n\\nWe have also added an experiment with subsets of **AlpacaEval2 (Yann et al., 2024)** and are awaiting results for **WildBench-V2 (Lin et al., 2024)**. Due to the very limited resources and time constraints, our experiments are conducted on 203 random samples of AlpacaEval2 and 200 samples of WildBench with ChatGPT (gpt-3.5-turbo-1106). Since these benchmarks do not have training data, our setups are:\\n- For AlpacaEval2, we train LongGuide on *only 5 random samples* from [alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4). We also use those 5 samples as few-shot demonstrations. Note that [alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4) is quite an OOD dataset compared to AlpacaEval2.\\n- For WildBench, we train LongGuide on *only 5 random samples* from [WildBench-V2 GPT-4 outputs](https://huggingface.co/datasets/allenai/WildBench-V2-Model-Outputs/viewer/gpt-4-turbo-2024-04-09). 
We also use those 5 samples as few-shot demonstrations *and exclude them from our evaluation samples*.\\n\\nThe results with AlpacaEval2 are summarized below.\\n\\n| Setting | LC Win Rate | Win Rate |\\n|-------------------|-------------|----------|\\n| ZS | 11.08% | 3.17% |\\n| ZS + OCG | 4.73% | 2.44% |\\n| ZS + MG | **19.13%** | **7.07%** |\\n| ZS + MG-OCG | 8.42% | 3.90% |\\n| **ZS + LongGuide** | **19.13%** | **7.07%** |\\n|-------------------|-------------|----------|\\n| FS | 8.08% | 2.68% |\\n| FS + MG | **12.65%** | **4.88%** |\\n| FS + OCG | 7.73% | 3.45% |\\n| FS + MG-OCG | 12.63% | 4.88% |\\n| **FS + LongGuide** | **12.65%** | **4.88%** |\\n\\nWith only 5 samples from [alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4), LongGuide significantly improves ChatGPT on AlpacaEval2. OCG did not achieve good results because [alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4) is quite an OOD dataset compared to AlpacaEval2.\\n\\nFor WildBench, we are waiting for AI2 to get the results, and *we are unsure if we can get the results to supplement here within the rebuttal period, but we will do so if we can*.\\n\\nFeel free to share your feedback; we are happy to consider and discuss it.\"}", "{\"title\": \"Response to reviewer (3)\", \"comment\": \"## In summary\\n\\nWe thank you for taking such care to provide a very detailed critique of the paper. We have addressed many of the comments in your review in the rebuttal and the updated manuscript, as we hope you will agree from our responses above. We trust that you find our paper, updates, and clarifications appropriate and worthy of a higher score.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Thanks for the response, I have increased my scores accordingly.\"}", "{\"summary\": \"In-context learning for generation tasks requires the model to capture some attributes of the desired output texts. 
The paper introduces a method, LongGuide, to more effectively use in-context examples for generation tasks by first using a small set of examples to develop guidelines about the desired output format, then providing these guidelines (generally in addition to the ICL examples) during inference. The guidelines are divided into two sets: metric-based guidelines, developed by using ChatGPT evaluation along axes of generation quality and selecting axes that human answers perform highly along; and statistics-based guidelines, generated from properties of the human answers (e.g. average length). LongGuide provides additional gains on top of ICL and prompt optimization.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"S1. The idea of providing explicit task guidelines is well-motivated and clearly effective; I like the breakdown into metric-oriented and output-text-statistics oriented guidelines, which to the best of my knowledge is novel.\\n\\nS2. The method is more effective on stronger models (likely because these models are better at instruction following). Surprisingly, it's also somewhat effective on weaker, non-instruction-tuned models (line 1036).\\n\\nS3. The authors are thorough in their analysis, specification of hyperparameters, and description of the setting. I also appreciate the specification of annotator wages.\", \"weaknesses\": \"W1. I feel that the formalization in Section 2 is not rigorous and frankly distracting from the goals of the paper. While I can see some benefit to stating Remark 2.1 given that some of the literature makes different assumptions, I feel the assumptions provided to start the proof are not well-formed and the property as a whole does not feel terribly useful. Remark 2.2 seems wholly unnecessary, as it seems to essentially claim that you will consider your method better than the baseline if it outperforms the baseline on the metrics. This does not require a proof.\\n\\nW2. 
While the attributes are computed using up to 50 train set examples, the model is not evaluated on using 50-shot ICL, which feels like a natural baseline to consider. Recent works on long-context ICL have demonstrated improved performance with many demonstrations. \\n\\nW3. The use of ROUGE and BLEU as the final downstream metrics seems ill-advised. These are both very simple ngram metrics, without the expressiveness of other metrics; the fact that they show near-identical trends (line 368) is unsurprising because they measure very similar things.\", \"questions\": \"Q1: Can you elaborate on the aims of the formalization / theoretical claims in section 2? I am not clear on the reasoning for this section.\", \"q2\": \"The point about the guidelines not being useful for tasks the models are trained on is an interesting claim (line 556 onwards). Could you verify this with a model with open training data?\", \"q3\": \"In regards to the title: are these texts really \\\"long-form\\\"? Certainly they are generation rather than classification, but the outputs for most of these tasks are quite short.\\n\\nQ4. The evaluation in 4.1 establishes the goal as minimal Jensen-Shannon divergence between the score distributions of gold summaries and model answers. Is this a good goal? 
Is it possible that the model answers are better on some axes than human answers, and this fails to account for this setting?\", \"minor_presentation_notes\": [\"the jump to notation in lines 36-45 was abrupt and it's not clear at this point in the paper why establishing this notation is useful; it might be better to introduce this in section 2.\", \"the use of \\\"metrics\\\" in line 101 reads a bit strange; I would refer to these as properties, which you evaluate using ChatGPT.\", \"showing the % gain is not super helpful in table 2, and it clutters the table.\", \"lines 478-485 about the ablations performed would have been useful to know earlier, since earlier tables reference these settings.\", \"formatting typo in the citation of Krippendorff's alpha (line 431)\", \"unclear what \\\"the mode of 17 response tokens\\\" means in line 123\", \"typo in line 1015: \\\"second not the sam\\\"\", \"in Figure 17, skipping step 2 seems to improve the performance, not worsen it?\", \"the formatting of links to appendix figures is a bit non-standard\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer (1)\", \"comment\": \"Dear reviewer V713,\\n\\nWe deeply thank you for your time and efforts in providing reviews for our paper. We would like to address your concerns below and our updated changes in the paper are in blue.\\n\\n> W1. I feel that the formalization in Section 2 is not rigorous and frankly distracting from the goals of the paper\\u2026 Remark 2.2 seems wholly unnecessary\\u2026 Can you elaborate on the aims of the formalization / theoretical claims in section 2?\\u2026\\n\\nThank you for your feedback. 
While this contradicts the other reviewers (YJ28 commented \\u201cI like the theoretical derivations given in Section 2.1\\u201d and ku1W commented \\u201cthe article provides a solid theoretical analysis explaining why LongGuide can better achieve task objectives.\\u201d), we appreciate your comment and would like to clarify your questions.\\n\\nOur goal in Subsection 2.1 is twofold: \\n- It shows that ICL, the most common prompting-based alignment method, cannot help the LLM recover the desired alignment in the limit if the model initially fails to capture the task language distribution.\\n- It provides the formalizations for our Hypothesis 2.1 and the (exact) definition of task property, which serves as the basis for our method.\\n\\nIt is important to note that the purpose of this subsection is not to present a \\u201crigorous\\u201d theory, but rather to provide an intuition that motivates our approach. We believe that a \\u201csolid\\u201d theoretical treatment would deserve an entire paper of its own, discussing details such as addressing epsilon-distribution recovery. However, this is not our main focus in this work.\\n\\nWe value your feedback and that of other reviewers, and we have worked hard to balance differing perspectives. Specifically, we have made the following changes in response to your suggestions:\\n- We change \\u201cTheoretical Derivations\\u201d to \\u201cTheoretical Intuitions\\u201d.\\n- We shorten the definition of task property significantly for more conciseness.\\n- We remove Remark 2.2 as you suggested. We describe it in L180-181 and put the old Remark 2.2 into Appendix A.\\n\\nWe hope these changes address your concerns and demonstrate our efforts to balance the reviewers' opinions. \\n\\n> W2. While the attributes are computed using up to 50 train set examples, the model is not evaluated on using 50-shot ICL, which feels like a natural baseline to consider. \\n\\nThank you for your comment. 
While this may seem \\u201cnatural\\u201d, it is important to note that for long-form generation, prompt optimization (PO) studies typically do not follow this method, please see APO (Pryzant et al., 2023) and adv-ICL (Do et al., 2024). We believe the reason is twofold: (1) Few-shot prompting with an excessive number of examples, such as 50 shots is unnatural in practice; (2) for long-form generation tasks, such as CNN, on average the #tokens for 1 shot is 798.29, thus 50 shots is 40K which exceeds the window size of most current commonly used LLMs such as Mistral-7B-it-v0.2 (limit 4096), Llama 3 (limit 8K) and gpt-3.5-turbo-1106 (limit 16K). \\n\\nNevertheless, we still supplement the results for CNN (3.0.0), SWiPE, and Comm.-Chall. below where we use 10 shots for CNN, 40 shots for SWiPE, and Comm.-Chall **up to the limit of gpt-3.5-turbo-1106** evaluated by ROUGE-L / GPT-4o-Judge scores:\\n\\n| #shot | CNN (3.0.0) | SWiPE | Comm.-Chall. |\\n| -------- | ------- | ------- | ------- | \\n| 3-5 shots | 14.51 / 4.38 | 33.72 / 5.07 | 22.08 / 4.19 |\\n| 3-5 shots + LongGuide | **18.17 / 4.42** | **37.60 / 5.25** | **38.21 / 7.21** |\\n| 10-50 shots | 20.55 / 6.67 | 44.04 / 6.07 | 28.18 / 4.85 |\\n| 10-50 shots + LongGuide | **21.69 / 6.82** | **46.17 / 6.67** | **42.55 / 7.72** |\\n\\nWe observe that while supplementing more shots to ChatGPT improves the model\\u2019s performance, LongGuide further boosts the ICL performance significantly for all three benchmarks. \\n\\nWe have supplemented these results in Appendix D.1 and added one description in Section 4 describing that we also compare our method with many-shot prompting in L297.\"}", "{\"title\": \"Request for your review\", \"comment\": \"Dear reviewer V713,\\n\\nAs the ICLR discussion phase is closing soon, we respectfully request your consideration of our responses to your concerns. 
We have carefully addressed your concerns and made the noted changes; the theoretical intuition section is now in the Appendix. \\n\\nThank you for your valuable time and feedback, and thank you for your attention.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Thank you for reviewing our paper!\", \"comment\": \"Thank you, reviewer ku1W, for your feedback!\"}", "{\"title\": \"Response to reviewer (2)\", \"comment\": \"> W3. The use of ROUGE and BLEU as the final downstream metrics seems ill-advised.\\n\\nThank you for your suggestion. We have added GPT-4o-Judge scores (Section 4) evaluating how aligned the generated answer is with the reference answer, and its quality, on the following criteria:\\n\\n- Format consistency: ensuring the generated response matches the length, structure, and layout of the reference.\\n- Content completeness: evaluating whether all key points present in the reference are included in the assistant's answer.\\n- Factuality: checking for factual correctness of the assistant's answer.\\n- Style adherence: ensuring that the tone, style, and level of detail of the assistant's answer match the reference.\\n- Assistant's answer quality: assessing how well the response satisfies the user's requirements.\\n\\nEach criterion is scored on a scale of 10, and the final GPT-4o-Judge score is the average of them. We have included the evaluation scores in Table 2 and Figure 2. We summarize the results below:\\n\\n| Method | Format | Content | Factuality | Style | Quality |\\n| -------- | ------- | ------- | ------- | ------- | ------- | \\n| Baseline | 4.18 | 4.83 | 6.64 | 4.36 | 4.75 |\\n| + APO | 4.73 | 5.91 | 7.26 | 4.91 | 5.39 |\\n| + LongGuide | **5.72** | **6.01** | **8.25** | **5.78** | **6.04** |\\n\\nAmong the five GPT-4o-Judge criteria in Figure 2, LongGuide notably improves Format, Style, and Factuality, confirming its effectiveness in aligning model generation with ground-truth distributions. 
In addition, the significant gains in Quality criterion, together with the ROUGE-L scores from Table 2 further demonstrate that LongGuide also significantly enhances the generation quality.\\n\\nOur evaluation prompting template is heavily inspired by (https://openreview.net/forum?id=uccHPGDlao).\\n\\n> The point about the guidelines not being useful for tasks the models are trained on is an interesting claim (line 556 onwards). Could you verify this with a model with open training data? \\n\\nThank you for your feedback. We would like to clarify that this is a hypothesis we proposed, as stated in the manuscript: \\u201cLongGuide may not be useful for the tasks the models are trained on\\u2026\\u201d. Our hypothesis comes from our observation that \\u201cwhile we see notable enhancements on the CommonGen-Challenge dataset (Lin et al., 2020), it\\u2019s intriguing that we don\\u2019t observe any improvements on the WebNLG (Gardent et al., 2017) and E2E NLG (Puzikov & Gurevych, 2018) datasets. Given the popularity of these datasets, we suspect the models we tested may have been previously trained on them.\\u201d written in L556-562.\\n\\nTesting this hypothesis directly is challenging due to the opaque nature of training data for most large language models. Even for models with open training data, identifying specific overlaps or pretraining exposure remains complex. \\n\\nThat said, we acknowledge that this is not a central claim of our work but rather an observation to guide future research. We appreciate your suggestion and agree that further studies could delve deeper into verifying this hypothesis.\\n\\n> In regards to the title: are these texts really \\\"long-form\\\"? \\n\\nThank you for your question. We follow the ELI5 definition (Fan et al., 2019) of long-form generation as generating sentence- or paragraph-length answers (L053). All our tasks fall within this scope, requiring sentence- or paragraph-level answers. 
This distinguishes long-form generation from factoid question answering involving single-word answers, and from multiple-choice question answering.\\n\\n> Q4. The evaluation in 4.1 establishes the goal as minimal Jensen-Shannon divergence between the score distributions of gold summaries and model answers. Is this a good goal? Is it possible that the model answers are better on some axes than human answers, and this fails to account for this setting? \\n\\nThank you for your question. The primary goal of our work and the LongGuide method is to improve the alignment between the LLM generation distribution and the ground-truth distribution (L086-087). For that purpose, minimizing the Jensen-Shannon divergence between the score distributions of gold summaries and model answers is a suitable objective. \\n\\nIt is possible that the model answers are better on some axes than human answers. However, this is not our paper\\u2019s goal. Our work\\u2019s goal is to align LLM responses with human responses. Note that our goal aligns with most current alignment techniques, where we all try to optimize towards human answers/preferences.\\n\\n> Minor presentation notes.\\n\\nThank you for the writing advice. We have revised our manuscript accordingly. There is one question: \\u201cin Figure 17, skipping step 2 seems to improve the performance, not worsen it?\\u201d Skipping step 2 improves model performance from 17.24 to 21.62.\"}", "{\"title\": \"Thank you for your thoughtful comments\", \"comment\": \"Dear reviewer V713,\\n\\nThank you for getting back to us and for your thoughtful comments. We understand your concern about concepts that lack mathematical definitions. 
We would like to address your concerns below.\\n\\n> I also think this is an issue in the proof, for what it's worth -- in the proof of Remark 2.2, you assume that the set of text properties you are measuring are well chosen, but Hypothesis 2.1 only claims that such a set exists.\\n\\nThank you for your feedback. We want to clarify that Remark A.2 builds directly on Hypothesis A.1. Under the assumption that Hypothesis A.1 holds, we obtain two things: (1) the existence of $\\\\{f_1,\\\\dots,f_r\\\\}$ and (2) optimizing these functions during generation ensures a lower overall loss.\\n\\n> Overall, I still don't think your mathematical formulation is meaningful, because the remarks leverage text descriptions of concepts that do not have a mathematically rigorous definition, like \\\"text quality\\\" in Remark 2.2. At best, it adds nothing to your empirical results; at worst, I worry it could be misleading to the reader.\\n\\nThank you for your thoughtful comment. We appreciate your concern regarding the clarity of the \\u201ctext generation quality\\u201d concept. We propose this concept primarily covering fundamental linguistic properties of generated responses (Grammatical correctness, Readability, etc) and importantly, also the task-specific alignment.\", \"this_concept_itself_is_subjective\": \"each person may have their own \\u201ctext quality\\u201d like we can write the same idea very differently. Nevertheless, this concept is necessary. Specifically, the set of \\\"well-chosen\\\" text properties may not cover all subjective aspects, and lower task loss alone may not always indicate a better solution. For instance, a generated answer that diverges from the ground truth can still be valid.\\n\\nTo address your concern, we propose a simple modification. We define the (subjective) text generation property as a function $f_P: (\\\\mathcal{X}, \\\\mathcal{Y}) \\\\to \\\\mathbb{R}$. 
The modifications to the current Appendix A are also simple: we briefly define this function in L1049-1051, include it in Remark A.2 L1053, and briefly talk about it in L1112-1114. We believe now the theoretical intuition should be more solid, as hopefully you agree with us, given that all concepts are defined by functions, even though we did not specify how they are computed. We will further clarify this \\u201ctext generation property\\u201d concept better in the Appendix A.\\n\\nOverall, we very much understand your concern and your suggestion. As we have noted, these formulations are intended to provide a theoretical intuition, and we have placed them in the Appendix to support audiences who find them helpful. We hope you will recognize the empirical, observational, and constructive contributions of our work\\n\\nWe are very open to suggestions and discussions, and happy to take them. Feel free to share your feedback and thoughts.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you for your feedback (4)\", \"comment\": \"Dear reviewer YJ28,\\n\\nThank you for engaging with us in this conversation and your feedback. We appreciate the time and effort you have invested in evaluating our paper. We would like to address the new points you raised:\\n\\n> 1. One non-instruction-tuned model, Mistral-7B-v0.1 is definitely not enough for an ICL-related paper. You should provide a wide range of sota foundation models to verify your methods' efficiency...\\n\\nThank you for sharing your thoughts. To the best of our knowledge, the ICL concept applies broadly to all language models (Brown et al., 2020; Dong et al., 2023). Our study focuses on instruction-tuned models. As noted in our Acknowledgement (L540-543), LongGuide requires models that *possess strong instruction-following capabilities and a certain level of task knowledge*. 
These attributes are essential for enabling self-evaluation and leveraging task-specific guidelines effectively. Models that are not instruction-tuned, such as Mistral-7B-v0.1, were included to demonstrate baseline capabilities only and they are not our primary focus since they can\\u2019t follow our method\\u2019s instructions to perform self-evaluation.\\n\\n*Perhaps, the reviewer meant our study of the limitations of ICL (Section 2) should cover more non-instruct LLMs?* For this perspective, we have added the experiments in Section 2 with 3 non-instruct models (Mistral-7B-v0.3, Llama-3.1-8B, Qwen2.5-7B) + 1 instruct-model (Llama-3.1-8B-it). The results are presented below and we supplemented them in Section 2.\\n\\n| **ICL w/ 5 demos** | **(1) COV** | **(2) FAC** | **(3) CON** | **(4) INF** | **(5) COH** | **(6) REL** | **(7) NT (mean)** | **(7) NT (std)** | \\n| -------- | ------- | -------- | ------- | -------- | ------- | -------- | ------- | -------- | \\n| *Expected* | *100* | *100* | *100* | *100* | *100* | *100* | *17.00* | *0.00* |\\n| **Mistral-7B-v0.3** | 12 | 27 | 28 | 8 | 20 | 35 | 87.74 | 144.91 |\\n| **Llama-3.1-8B** | 12 | 42 | 50 | 4 | 32 | 47 | 271.81 | 379.48 |\\n| **Qwen2.5-7B** | 43 | **90** | **85** | **40** | 78 | **96** | 281.38 | 264.59 |\\n| **Mistral-7B-it-v0.2**| 38 | 80 | 78 | 17 | 75 | 88 | 50.25 | 55.54 |\\n| **Llama-3.1-8B-it** | **44** | 86 | 82 | 26 | **81** | 87 | **34.72** | **45.29** |\", \"we_find_almost_the_same_observations_as_we_had_in_section_2\": \"(i) ICL models do not achieve a 100% score of 5 on any metric; (ii) increasing # demonstrations does not rectify this issue; (iii) adding a simple guideline improves instruct models. Additionally, Qwen scored high on metrics (1)\\u2013(6) while failed on metric (7) (and (8)) because it copied the input dialogue as the summarization outcome and thus did not solve the task properly.\\n\\nWe have added the above experiments to Section 2. 
We believe that exploring the extension of our work to non-instruct model adaptation is a promising direction for future work.\"}" ] }
Dj9a4zQsSl
Enhancing Document Understanding with Group Position Embedding: A Novel Approach to Incorporate Layout Information
[ "Yuke Zhu", "Yue Zhang", "Dongdong Liu", "Chi Xie", "Zihua Xiong", "Bo Zheng", "Sheng Guo" ]
Recent advancements in document understanding have been dominated by leveraging large language models (LLMs) and multimodal large models. However, enabling LLMs to comprehend complex document layouts and structural information often necessitates intricate network modifications or costly pre-training, limiting their practical applicability. In this paper, we introduce Group Position Embedding (GPE), a novel and efficient technique to enhance the layout understanding capabilities of LLMs without architectural changes or additional pre-training. GPE achieves this by strategically grouping the attention heads and feeding each group with distinct positional embeddings, effectively encoding layout information relevant to document comprehension. This simple yet powerful method allows for effective integration of layout information within the existing LLM framework. We evaluate GPE against several competitive baselines across five mainstream document tasks. We also introduce a challenging benchmark called BLADE, specifically designed to assess layout comprehension. Extensive experiments on both established and BLADE benchmarks confirm the efficacy of GPE in significantly advancing the state-of-the-art in document understanding. Our code is available at https://github.com/antgroup/GroupPositionEmbedding.git
[ "DocAI", "LLM", "Position Embedding" ]
Accept (Poster)
https://openreview.net/pdf?id=Dj9a4zQsSl
https://openreview.net/forum?id=Dj9a4zQsSl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xk2XIkaahD", "w8BM3QMrpb", "rruQGK4b4l", "rceS4aja7w", "pdkZq9M0ht", "pBeVHB5dH6", "oz0HeEMo5M", "m0SVPaQnc1", "im83Bk7bZT", "fhtKFqBHst", "eXZZIsns8d", "eDXATO93Rq", "drRxBFHIIV", "dactYF70eJ", "UNpgCDRccd", "QzS1eWO2Ap", "QhhDxhyoIY", "Pt53Xw7F45", "MOoE11Amu4", "LAdouIp40B", "EUV7CWmdcK", "B9eQwyEAsi", "9FftlgRtNJ", "7LglwhGjcJ", "6MPzcxHbrx" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "decision" ], "note_created": [ 1733207098564, 1730531906930, 1730202029791, 1733059765214, 1733054680126, 1733207028428, 1733037065086, 1732502068711, 1733206901273, 1732773349972, 1730536285333, 1731864506727, 1732356645477, 1731864190285, 1731861919554, 1732500645823, 1732502007311, 1732526038622, 1732521610675, 1730704423796, 1733196807804, 1734418999268, 1731861724489, 1732691051791, 1737523406617 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_WcMm" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_Ntfi" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "~Yi_Zhang104" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_hPjd" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_hPjd" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_WcMm" ], [ 
"ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_Ntfi" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_Ntfi" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_XTDc" ], [ "ICLR.cc/2025/Conference/Submission612/Reviewer_XTDc" ], [ "ICLR.cc/2025/Conference/Submission612/Area_Chair_nNus" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Submission612/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"comment\": \"We are very pleased that our work has received your recognition. Thank you for your valuable suggestions and feedback and for raising the score.\"}", "{\"summary\": \"The paper makes two main contributions: (1) it uses group position embedding to ensure that different attention heads focus on different views of position, and (2) it introduces a new document AI dataset, BLADE, which highlights the model's ability to handle complex layout information. Experimental results demonstrate improved performance, and the ablation studies are comprehensive.\", \"updated_on_23th_nov_2024\": \"raise the score\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed group position embedding is novel.\\n2. The proposed BLADE dataset is important to the field.\\n3. The experiments are extensive, especially the comparison of different approaches to fuse layout information is insightful.\", \"weaknesses\": \"1. Although the experimental results in Tables 2 and 3 demonstrate the superiority of GPE, the results in Table 1 appear somewhat contradictory to those in Tables 2 and 3, requiring further explanation or experiment.\\n2. 
The paper needs a section discussing why GPE is effective in guiding attention heads to focus on different positional views. Including a visualization of attention scores or similar analysis would provide valuable insights, as the improved performance alone is insufficient to fully explain the mechanism.\\n\\n3. The writing requires improvement, as several paragraphs are vague and difficult to understand. For example:\\n(1) Lines 248-249: What is the value of lambda? Does GPE discretize all coordinates into integers, allowing us to obtain position embeddings for specific position?\\n(2) Line 418: In the experiment regarding reading order, I cannot understand the difference between W/O and LOCAL. A concrete example would help illustrate the distinctions between each setting.\\n(3) Line 473: It would be helpful to introduce the design of the group function where it first appears, in Section 3.2, using a formal mathematical definition. This would clarify the paragraph, as I found it confusing without prior context on the group function.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a simple yet effective method, Group Position Embedding (GPE), for encoding spatial positional information in LLM-based document understanding tasks. This technique enables LLM models to comprehend document layouts without altering their architectures. Additionally, the authors introduce a new benchmark, BLADE, for evaluating complex document processing.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1.This paper introduces a simple layout embedding approach, termed Group Position Embedding (GPE), for enhancing LLM-based document understanding.\\n2.This paper proposes a new benchmark, BLADE, designed for the evaluation of complex document evaluation. 
\\n3. Extensive ablation experiments have been conducted to validate the effectiveness of GPE.\", \"weaknesses\": \"1. The rationale behind GPE is unclear. The introduction section fails to adequately explain the workings of the group position embedding; instead, it merely highlights the advantages without providing underlying reasons. It is also unclear what the principles and feasibility of the mapping function g_r() are, and the distinction between Gr(k) and Gr(i) on lines 196-197 is also not elucidated. Further explanation is needed on how the n-dimensional spatial position is mapped into different attention heads. The meaning of the hyperparameter scaling factor \\\\lambda and its impact on the approach should also be clarified.\\n2. This paper is not the first to utilize head-specific layout position embeddings. Previously, LAGaBi [https://aclanthology.org/2023.findings-emnlp.521/] employed diverse Gaussian kernels to encode relative positional information for each attention head. \\n3. The paper's writing quality is subpar. Inconsistent and non-standard academic expressions are used. For instance, in LLMs, l typically denotes the number of layers rather than dimensions, which should be represented by d or dim. The figures do not clearly demonstrate the mapping mechanism of group position embedding and convey some ambiguity. For instance, Figure 1 shows that only the key-state and value-state encode n-dimensional spatial position information, whereas the query-state receives only 1-dimensional position information, which is confusing.\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"na\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response for general concern\", \"comment\": [\"## Revised Paper and Future work\", \"We have revised and re-uploaded the paper. The key changes are highlighted in yellow. 
The revised part mainly includes\", \"Comparison with LAGaBi in related work\", \"Grouping function definition\", \"A new Section (A.7) on the impact of the scaling factor, with experiments.\", \"Adding an experiment with XY-cut for comparison\", \"More details of BLADE.\", \"A new Section (A.8) for intuitive analysis of GPE, including the attention map visualization.\", \"Some typos\"], \"future_work_of_open_source_plan_includes\": \"- BLADE and its evaluation scripts.\\n- Code of GPE as well as its training and evaluation\\n- Release model weights based on SOTA LLMs with GPE using large-scale training data\\n\\n## Motivation and Intuitive Analysis of GPE\\n**Motivation**\\nOur objective is to integrate layout information into LLMs. Given the limitations of existing methods, we have identified three key characteristics that our approach should embody:\\n1. **Unambiguous Layout Representation**: The representation of the layout should not result in any loss of information. \\n2. **No Additional Vocabulary**: Introducing new vocabulary or modalities would complicate the training process.\\n3. **No Additional Tokens**: Adding extra tokens would disrupt the original text sequence, making it hard for the LLM to understand the original text.\\n\\nGPE is designed to meet all three properties. GPE satisfies the first property by ensuring that each dimension of layout information is processed individually by different heads. In contrast, the ``Add Box Embedding'' used in LayoutLM simply adds the embeddings of each layout dimension, which brings ambiguity, e.g., the representations of [x1, x2] and [x2, x1] are identical. GPE satisfies property 2 by reusing the original position embeddings of the LLM; as it is not a new modality or new vocabulary, the model is easy to adapt. GPE does not change the input sequence, thus satisfying property 3.\\n\\n**Explanation of why GPE is effective**\\nThe key technique that enhances the effectiveness of GPE is its **head-specific design**. 
This design ensures that each dimension of layout information is processed by each attention head. The benefits of this approach are akin to those of multi-head attention, as it encourages the model to learn positional relationships from various perspectives. By distributing the layout relations across multiple attention scores, rather than relying on a single score, the model can more effectively capture and utilize different aspects of the layout. Intuitively, some heads might focus on horizontal (left-to-right) relationships, while others concentrate on vertical (up-to-down) relationships.\\n\\n**Visualization of Attention Map**\\nPlease refer to the newly added Section A.8 in the revised paper.\\n\\n## Details of BLADE\\nWe have added more details to BLADE, which includes two parts: (1) an explanation of the differences in the construction details between Newspapers and SynthDocs; (2) additional information on the quality filtering process for SynthDocs.\"}", "{\"comment\": \"Dear Yi Zhang\\n\\nWe are glad to see that our work has caught your attention. We greatly appreciate your interest in our work and your valuable feedback. \\n\\n**Fair Comparison**\\n\\n**The comparison in Table 1 is not entirely fair, which has been mentioned in Lines 354-356** . \\n>Considering that these methods are based on different base models, trained with varying datasets and strategies, and some without publicly available model weights, the comparison is not entirely equitable.\\n\\nThis is precisely our motivation for the subsequent comparison between Table-2 and Table-3. The unfair comparison in Table-1 does not sufficiently demonstrate the advantages of GPE as an encoding method for enhancing layout information over similar encoding methods, nor does it prove that GPE improves the model's understanding of layout information. \\n\\nUnder these circumstances, we conducted a fair comparison using the exact same setting (Table-2). 
However, this still does not demonstrate GPE's ability to utilize layout information effectively. To further investigate, we introduced BLADE, a benchmark focused on evaluating a model's understanding of complex layout information, and performed a fair comparison in Table-3. Please refer to Section 5.4 for more details.\\n\\nAdditionally, are you interested in whether using GPE in LLMs can achieve SOTA performance? If so, we hope you will keep an eye on the open-source plans for our work. Recently, we have trained GPE-modified models on larger-scale corpora and instructions, which will support multiple languages and multiple tasks. We will also be using various mainstream LLMs that are currently publicly available within the community.\\n\\n**Metric of KIE**\\n\\nYes, we found that other methods, apart from DocLLM, use the ANLS metric, so we followed most of the work in using ANLS. We also provide a comparison of the F1-score of our method with DocLLM here. \\n\\n| method | FUND | CORD | SROIE |\\n|-------|-------|-------|-------|\\n| DocLLM | 51.8 | 67.4 | 91.9 |\\n| GPE | 86.8 | 90.1 | 93.9 |\\n\\nHowever, as we pointed out in the previous paragraph, such a comparison cannot actually prove the effectiveness of GPE. The introduction of GPE is intended to enhance the understanding ability of LLMs for complex layout information, and we note that these evaluation sets hardly reflect this perspective (Sec A.3). We hope that our proposed BLADE dataset can bring new benchmarks to this field.\\n\\n**Setting**\\n\\nIn Table-1, we used a non-zero-shot setting, since the training splits of these datasets were used during tuning (Sec 5.1). We noticed that you mentioned that DocLayLLM and LayTextLLM use two settings; the results cited in Table 1 are from the non-zero-shot setting. LayoutLLM only reports the zero-shot setting. Its results were cited as a reference. \\n\\n**Reference Error**\\n\\nThanks for the reminder. 
There are some citation errors in the results of this DocLayLLM. We will fix this problem.\\n\\nWe hope our response can answer your questions, and we thank you once again for your interest in our work.\"}", "{\"comment\": \"Thank you for taking the time to provide us with your thoughtful and constructive feedback and for raising your score. We greatly appreciate your valuable support and recognition.\"}", "{\"title\": \"Comments and Concerns Regarding the Paper\", \"comment\": \"Dear Authors and Reviewers,\", \"we_would_like_to_bring_to_your_attention_the_following_points\": \"1. In Table 1, we noticed that the performance of DocLayLLM and DocLLM on the VIE row appears to be identical. **This should be an error.**\\n\\n2. We observed that DocLLM employs the F1 metric for the VIE task, which does not align with the ANLS metric mentioned in Table 1. **Is such a comparison appropriate or fair?**\\n\\n3. LayoutLLM did not utilize the DocVQA, VisualMRC, FUNSD, CORD, or SROIE datasets during training, whereas this paper includes these datasets in the training process. **Could the authors comment on whether this comparison remains fair?** Furthermore, DocLayLLM and LayTextLLM provide experimental results under different settings. **Could the authors confirm whether the comparison made in your paper is based on the zero-shot setting, the VQA setting, or the all-setting?** It would be helpful to understand whether the comparison is fair under these different configurations.\\n\\nThanks for this work, and we appreciate your attention to these points. We look forward to your responses.\"}", "{\"comment\": \"Dear Reviewer hPjd,\\n\\nWe have meticulously addressed your comment with comprehensive responses during the rebuttal phase. At present, we have not yet received new comments from you. 
We are looking forward to your valuable feedback and insights, and grateful for your effort throughout this process.\\n\\nBest regards,\\n\\nSubmission 612 authors\"}", "{\"comment\": \"Thank you for your insightful and positive feedback, and for raising the score. We greatly appreciate your valuable support and recognition.\"}", "{\"comment\": \"Thanks for the detailed explanation of my concerns. I think the majority of the problems are answered well. Especially the motivation of this paper. I'm happy to increase my mark from 5 to 6.\"}", "{\"summary\": \"This paper introduces a new positional encoding method to tackle the limitations of current LLMs/MLLMs' lack of layout information for addressing layout-aware document understanding tasks. A dataset named BLADE is designed to mainly focus on measuring the performance on complex-layout aware questions. Various experiments with different model configurations are conducted.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper proposed a new positional encoding to address the limitation of existing LLMs/MLLMs neglecting layout information.\\n2. A benchmark is proposed specifically focusing on evaluating the performance of various LLMs/MLLMs on document understanding tasks with complex layouts.\", \"weaknesses\": \"1. Motivation: The limitations of current positional encoding methods adopted by other frameworks are not clearly defined. The research gaps are not clearly defined.\\n2. The methods are described well with the technical workflow without specific reasons and experiments as to why the positional encoding is working. For example, there is no explanation or citation as to why different position information giving various heads are reasonable for the research aim. \\n3. The datasets are not clearly described making understanding breakdown categories difficult. \\n4. 
The proposed methods are only evaluated on LLM which is expected to see whether it's workable on MLLMs and pretrained document understanding models. \\n5. More reading order setups should be tried like XY-cut. \\n6. There are some typos and some possible errors in the paper, like Table 2 SCOIE performance. Some bolded digits in the performance tables are not the highest.\", \"questions\": \"1. Motivation and Related Work: what is the limitation of current positional encoding methods adopted by pretrained document understanding frameworks and what are the limitations of them?\\n2. Dataset: is that necessary to have a layout-aware dataset. Only focusing on layout-complex questions may ignoring the performance on other generative tasks. \\n3. Model and Evaluation: is there any reason you chose those LLMs/MLLMs? It would be better to give more insight analysis for this part by giving more ablation and case studies. \\n4. Is it possible to show some qualitative analysis of benchmark datasets like CORD, SROIE?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Q: Contradictory results.\", \"a\": \"We have revised the paper. 
The mathematical definition of Group Function is defined in Equation-5.\", \"q\": \"Adjust the position of the Group Function definition.\", \"input_texts\": \"[\\\"PAGE 01 OF\\\", \\\"MATERIAL SAFETY DATA SHEET\\\"]\", \"input_tokens\": \"[[112, 113, 114 ], [115, 116, 117, 118]] (Note: Tokens in the same list come from one text box)\", \"flattened_tokens\": \"[112, 113, 114, 115, 116, 117, 118]\", \"common_1d_position_id\": \"[0, 1, 2, 3, 4, 5, 6]\\n\\nW/O 1D Position ID: [0, 0, 0, 0, 0, 0, 0]\", \"local_1d_position_id\": \"[0, 1, 2, 0, 1, 2, 3]\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for the response, since almost all my concerns are addressed, I decided to raise my score to 8.\"}", "{\"comment\": \"Q: Why GPE works\", \"a\": \"We believe the Figure 1 is clear. The **query-state** and **key-state** receives n-dimensional spatial pasition information. **No positional information is applied to value-state**.\", \"q\": \"Figure ambiguity. Figure 1 shows that only the **key-state and value-state** encode n-dimensional spatial position information, whereas the **query-state** receives only 1-dimensional position information, which is confusing.\"}", "{\"comment\": \"Q\\uff1aMotivation and explanation of why GPE works.\\n\\nA\\uff1aPlease refer to the general response.\", \"q\": \"Qualitative analysis of CORD, SROIE\", \"a\": \"Qualitative analysis is presented in Figure 4 in BLADE. For, CORD and SROIE, the characteristics have been analyzed in Sec A.3 and Figure 8.\"}", "{\"comment\": \"I'm looking into the revision, still need some time.\"}", "{\"comment\": \"Dear Reviewer XTDc,\\n\\nWe have meticulously addressed your comment with comprehensive responses during the rebuttal phase. At present, we have not yet received new comments from you. 
We are looking forward to your valuable feedback and insights, and are grateful for your effort throughout this process.\\n\\nBest regards,\\n\\nSubmission 612 authors\"}", "{\"comment\": \"Thank you for taking the time to provide us with your feedback. We would like to clarify the following concerns for the reviewer.\\n\\n**Limited contribution**\\n\\nWe would like to clarify the contribution here, which has been acknowledged by the reviewers. Reviewers XTDc, WcMm, and hPjd have acknowledged that GPE is a novel method. All reviewers mentioned the dataset contribution; in particular, WcMm acknowledged that BLADE is important to the field. Three reviewers (WcMm, XTDc, Ntfi) have acknowledged that the extensive experiments are insightful.\\n\\nWe further summarize the contribution of this work in three aspects:\\n\\n- We propose a **novel head-specific positional encoding approach** that enables LLMs to comprehend high-dimensional positional information. This concept is innovative and has not been addressed in previous research. \\n\\n- Building on the head-specific idea, we devised a **comprehensive method for LLMs to understand document layout information**, supported by **detailed analytical experiments** that showcase various aspects of encoding layout information.\\n\\n- We also highlight a **limitation in current document-related task evaluation benchmarks**, which primarily rely on LLMs' text comprehension abilities, lacking an assessment of their capacity to understand layout information. To address this gap in evaluation standards, we introduce a new benchmark suite, BLADE, aimed at effectively assessing this aspect of LLM performance.\\n\\n**Mixed results**\\n\\nWe argue that the statement \\\"The results were mixed\\\" is inaccurate. Through comprehensive experiments, we have demonstrated that GPE outperforms other methods that incorporate layout information. This is clearly evidenced in Tables 2 and 3. 
Furthermore, as shown in the analysis in Tables 4, 5, 6, 8, and 10, GPE can achieve even better performance for specific scenarios. The \\\"mixed results\\\" mentioned by the reviewer likely refer to the differences in the Visual MRC and SROIE datasets in Tables 1 and 2, which are actually as expected. These differences have been analyzed in Sections 5.3, A.3, and A.6, and in our reply to XTDc. The main reason is that both Visual MRC and SROIE primarily measure the model's text capabilities. On datasets that rely more heavily on layout information, such as DocVQA, FUND, and our proposed BLADE, GPE shows a significant advantage.\\n\\nWe sincerely hope that our efforts can address your concerns.\"}", "{\"comment\": \"I really appreciate the detailed explanation, especially the experiments added in Appendix (A.8) for intuitive analysis of GPE, including the attention map visualization, which gives more insights into why GPE works. However, I think the contribution of this work is limited, and the results of the experiment were mixed. I choose to remain with my original score.\"}", "{\"summary\": \"This paper introduces Group Position Embedding (GPE), a method to enhance layout comprehension in Large Language Models (LLMs) without requiring architectural changes or extra pre-training. By grouping attention heads with distinct positional embeddings, GPE effectively encodes layout information for document tasks. Tested on five standard benchmarks and the challenging BLADE benchmark, GPE shows significant improvement in document understanding over existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) interesting task\\n\\n(2) new method\\n\\n(3) extensive experiments\", \"weaknesses\": \"(1) Missing important details for dataset construction: Any criteria to manually filter challenging question-answer pairs for the data scenarios of Forms, Slides, and Websites?
Why is Newspapers constructed with an initial manual selection whereas SynthDocs is synthetically generated? For SynthDocs, whose answers are synthetically generated, is there any filtering measure to ensure the data quality?\\n\\n(2) Missing the intuitive motivation of the proposed method: Although this paper verified the effectiveness of the proposed method compared with other approaches on Layout-aware Position Embeddings through the empirical experiments, it lacks more intuitive explanations of the advantages of the proposed method, which could provide more insights for follow-up work.\\n\\n(3) Missing the clarification of the experiment results: According to the experiment results in Table 1, vanilla Qwen2-7B outperforms GPE-Qwen2-7B on VisualMRC, which needs more clarification.\", \"questions\": \"Please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to the authors\", \"comment\": \"Thanks for the detailed response from the authors. I think this could address most of my concerns. I have raised my score from 5 to 6.\"}", "{\"metareview\": \"The authors present a novel approach, Group Position Embedding (GPE), to enhance document understanding by enabling attention heads to focus on different positional views. The authors also propose a new benchmark, BLADE, for complex document processing, which offers a valuable contribution to the field.\\n\\nTo further strengthen the paper, I recommend incorporating a more comprehensive discussion of related work, particularly in the area of layout encoding for language models.
While the authors have considered recent advancements in LLMs and LMMs, exploring earlier work, such as ROPE: Reading Order Equivariant Positional Encoding for Graph-based Document Information Extraction (ACL 2021), can provide valuable insights and historical context.\", \"additional_comments_on_reviewer_discussion\": \"The initial reviewer feedback highlighted concerns regarding the motivation and intuition behind the design, as well as the dataset construction and evaluation methodology. The authors have made significant efforts to address these concerns, providing more detailed explanations and justifications.\"}", "{\"comment\": \"Thank you for your valuable feedback. We believe that addressing these details is very helpful for improving our work. We have revised the paper according to your suggestions.\\n\\nQ\\uff1aCriteria for manually filtering challenging question-answer pairs.\\n\\nA\\uff1aThe overall standard is to minimize the likelihood that LLMs can infer the correct answer solely based on simply sorted OCR text or semantic information. A QA pair is considered simple if the question text and answer text are placed in natural reading order after simply sorting the OCR text from left to right and top to bottom. So, in practice, annotators are required to select samples and question pairs that have spatial interference. For example, in Forms, annotators would select samples with as many rows and columns as possible or select samples that contain rotations. In QA selections, they would choose texts in cells that contain line breaks. In Slides, they would select QA pairs where there are changes in font size or cases where the question and answer are spatially misaligned.\", \"q\": \"Clarification for results between Qwen2-7B and GPE-Qwen2-7B.\", \"a\": \"We believe that on VisualMRC, the original Qwen2-7B outperforming GPE-Qwen2-7B is as expected. Firstly, **GPE does indeed affect the model's text comprehension ability**, although very slightly.
Referring to Sec A.6, the setting with GPE shows a slight decline in text capability (MMLUPro) compared to the original Qwen2-7B. Secondly, **VisualMRC primarily measures the model's text understanding ability**, as referenced in Sec A.3 and Figure 7. Based on the above two points, we believe that the differences in VisualMRC are as expected.\\nIn lines 932-935, we actually analyzed this aspect. \\n\\nMoreover, the gap between GPE and the vanilla Qwen2-7B is indeed very small. The instability in the calculation of the CIDEr metric results in discrepancies in the scores that are larger than the actual performance differences.\\nFor example, the following question is correctly answered by both GPE and the vanilla Qwen2-7B, but the CIDEr scores differ greatly\", \"question\": \"Do Senior Software Engineer and Senior Cybersecurity Engineer both get the best jobs?\", \"answer\": \"['Yes, they do.']\", \"qwen2_prediction\": \"Yes, they do. (CIDEr 750.0)\", \"gpe__prediction\": \"Yes. (CIDEr 92.3)\\n\\nIt is shown that the CIDEr score differs greatly between these two answers.\\n\\nWe have revised the paper according to the suggestion, mentioning this point in the main text rather than just in the appendix.\"}", "{\"title\": \"Awaiting Reviewer Feedback\", \"comment\": \"Dear Area Chairs,\\n\\nWe have meticulously addressed each reviewer's comment with comprehensive responses during the rebuttal phase. At present, we have not yet received any responses from 2 reviewers (reviewers hPjd and XTDc). We are looking forward to receiving valuable feedback and insights from them, and appreciate your support throughout this process very much.\\n\\nBest regards,\\n\\nSubmission 612 authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
Dj1PVLU8fK
Democratizing Evaluation with Infinity-Benchmarks: Sample-Level Heterogeneous Testing Over Arbitrary Capabilities
[ "Adhiraj Ghosh", "Sebastian Dziadzio", "Ameya Prabhu", "Vishaal Udandarao", "Samuel Albanie", "Matthias Bethge" ]
Traditional fixed test datasets fall short in quantifying the open-ended potential of foundation models. In this work, we propose ∞-benchmarks, a new testing paradigm that combines individual evaluation datasets into a single, uniform, ever-expanding sample pool from which custom evaluations can be flexibly generated. An ∞-benchmark allows users to dynamically select a collection of sample-level evaluations that correspond to their specific capabilities of interest. By aggregating and reusing samples across various test sets, it enables the assessment of diverse capabilities beyond those covered by the original test sets, while mitigating overfitting and dataset bias through real-world diversity. Most importantly, it frames model evaluation as a collective process of aggregation and selection of sample-level tests. The shift from multi-task benchmarks to ∞-benchmarks introduces two key challenges: (1) heterogeneity and (2) incompleteness. Heterogeneity refers to aggregating diverse metrics, including binary, numeric, and ordinal data, while incompleteness describes comparing models evaluated on different subsets of testing data. To address these challenges, we explore algorithms inspired by social choice theory which aggregate sparse, unequal measurements into reliable model scores. Our aggregation algorithm ensures identifiability (asymptotically recovering ground-truth scores) and rapid convergence, enabling accurate model comparisons with relatively little data. We introduce ∞-LLMBench for language models and ∞-LMMBench for vision-language models, unifying evaluations across leaderboards and arenas in these domains, and showcasing targeted querying over a wide range of capabilities. Our algorithm recovers ground truth rankings with large Kendall τ correlations when compared to standard aggregation on homogeneous metrics, even with up to 95% of measurements missing. This approach reduces evaluation cost by up to 20× with little to no compromise in performance. 
Overall, we present the first large-scale ∞-benchmarks for lifelong, efficient evaluation of language and vision-language models which can aggregate over open-ended heterogeneous sample-level testing to evolve alongside the rapid development of foundation models.
[ "foundation models", "efficient evaluation", "aggregation", "lifelong benchmarking", "heterogeneity" ]
https://openreview.net/pdf?id=Dj1PVLU8fK
https://openreview.net/forum?id=Dj1PVLU8fK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uQgy4ONlR5", "ojqM8PoeGE", "oHGwx0pOpY", "eBheJDeW2K", "cSEyBJ2NxH", "HazgN9vr03" ], "note_type": [ "official_comment", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1732487447917, 1730615681076, 1730653620986, 1730694853915, 1732528856603, 1730500393165 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3865/Authors" ], [ "ICLR.cc/2025/Conference/Submission3865/Reviewer_knSj" ], [ "ICLR.cc/2025/Conference/Submission3865/Reviewer_SrCU" ], [ "ICLR.cc/2025/Conference/Submission3865/Reviewer_MH8B" ], [ "ICLR.cc/2025/Conference/Submission3865/Authors" ], [ "ICLR.cc/2025/Conference/Submission3865/Reviewer_aKaM" ] ], "structured_content_str": [ "{\"title\": \"Author Rebuttal\", \"comment\": \"We appreciate the reviewers' feedback and would like to address the most commonly raised issues in this joint response.\\n\\n1) **What are \\u221e-benchmarks?**\\n\\nOur main technical contribution is a method for aggregating model evaluations across incompatible metrics and diverse data sources. We achieve this by converting all measurements from the cardinal form (individual measurements such as accuracy or BLEU score) to the ordinal form (pairwise comparisons between two or more models) and applying a random utility model based on the Plackett-Luce framework. In essence, we demonstrate a principled way to produce a unified ranking and model scores from heterogeneous and incomplete measurements.\\n\\nThis approach enables decentralised model benchmarking that supports lifelong, continuously updated, and ad-hoc sample-level evaluation. We demonstrate its viability by integrating information between HELM and Open LLM Leaderboard, as well as VHELM and LMMs-Eval.\\n\\n2) **The description of the Plackett-Luce model is not clear.**\\n\\nWe acknowledge this feedback and agree that our description of the random utility model could be improved. 
We will move relevant details from the appendix to the main paper to enhance clarity.\\n\\n3) **Do practitioners care about rankings?**\\n\\nThe emergence of projects like HELM or the Open LLM Leaderboard show that the community does care about aggregating individual benchmark scores into rankings. At the same time, the popularity of platforms like Chatbot Arena demonstrates that scores derived from pairwise comparisons are a viable method for measuring model performance. While ELO or Bradley-Terry scores are relative, they effectively convey information about the magnitude of gaps between model capabilities. Similarly, scores from the Plackett-Luce model help practitioners gauge when differences in model rankings are significant. We will revise our paper to emphasise that our method provides both rankings and scores. Additionally, we will analyse how well these scores correlate with absolute values of individual metrics on homogeneous datasets.\\n\\n4) **What is the ground truth in Section 3?**\\n\\nWe acknowledge that Section 3 needs greater clarity regarding the ground truth used to calculate ranking correlations. While our method is the first to enable aggregation of evaluations across benchmarks, Section 3 focuses on demonstrating its ability to recover score-based rankings within uniform benchmarks. We therefore compare all ranking methods against score-based rankings within each benchmark, obtained by normalising numerical metrics, averaging them per benchmark, and sorting models by their aggregated scores.\\n\\n5) **The informative sampling method is inefficient.**\\n\\nWe apologise for any confusion regarding this point. Our intention was to demonstrate that the random strategy performs as well as the informative one, thus eliminating the need for the complete evaluation required by the latter. 
We will edit this section in future versions of the manuscript.\\n\\n6) **Are personalised benchmarks robust?**\\n\\nThe quality and relevance of retrieved data samples depends on both the query precision and the size of the data pool. In our examples, we restrict ourselves to simple queries over the combined data pools of HELM and LLM Leaderboard for LLMs, and VHELM and LMMs-Eval for VLMs. However, our implementation supports both straightforward semantic search and structured, compositional filters. Our vision is to develop a querying mechanism over an open, distributed, continuously expanding, and potentially crowdsourced benchmark with a data pool large enough to provide comprehensive coverage of popular concepts.\\n\\n7) **Do dynamically constructed benchmarks measure the capabilities of interest?**\\n\\nWith our definition of capability probing, we state that it is possible to query any arbitrary concept and find representative samples in \\u221e-benchmarks. And while our concept pool is a proof-of-concept (as highlighted in the submission), we took into consideration the hierarchy of concepts and provide quantitative analysis of how accurate and representative the retrieved samples are. For example, we query architecture and gothic architecture, where the latter is a more fine-grained query of the former. Based on the review and filtering of mismatched samples by expert annotators, we observe an average precision (AP) of 77% and 90% for the concept \\u2018architecture\\u2019 on \\u221e-LLMBench and \\u221e-LMMBench respectively and an AP of 79% and 100% for \\u2018gothic architecture\\u2019. We aim to address the removal of irrelevant retrieved samples by setting a similarity threshold for the similarity score between the query and \\u221e-benchmarks sample embeddings. \\n\\nWe sincerely thank all reviewers for their valuable insights. 
We will incorporate this feedback in future submissions to enhance the paper's clarity and coherence.\"}", "{\"summary\": \"The paper proposes aggregating datasets from existing benchmarks to allow on-the-fly construction of customized benchmarks. To accomplish this goal, the authors introduce a ranking-based model that boasts high sample efficiency. The authors focus their attention on the problems of 1) how to combine different metrics (which the authors address with a ranking-based approach) and 2) how to compare models evaluated on different data examples (the authors lean on theoretical properties of their proposed methodology, guaranteeing that models can be ranked if each pair of models is connected by a directed path of pairwise rankings). The authors study the sample efficiency of their methodology and investigate the possibility of making benchmarks less resource-intensive by removing low-signal data examples. Finally, the authors illustrate the construction of custom benchmarks for queries such as \\\"neuroscience\\\" or \\\"perfume\\\".\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed infinity-benchmarks pose an interesting and important question. Can we dynamically create benchmarks on the fly to evaluate foundation model performance more flexibly? This approach is interesting because custom benchmarks can give fine-grained insights into model behavior, which may be obscured when we focus on average performance trends on the most popular benchmarks. In addition, aggregating data from a pool of different data sets may reduce the impact of statistical artifacts/idiosyncrasies present in individual data sets.\\n\\nTo implement this idea, the authors propose the Plackett-Luce model for computing model performance rankings, and they describe the theoretical conditions under which the model is identifiable. 
The resulting ranking methodology is highly sample efficient, which is a strong advantage given the cost and limited scalability of hiring human data labelers, especially on more specialized subjects requiring subject matter expertise.\", \"weaknesses\": \"The paper proposes heterogeneity and incompleteness \\u2013 implementation issues \\u2013 as the main obstacles hindering usage of dynamic benchmarks. However, there are broader questions about the usefulness and desirability of custom benchmarks, which are largely unaddressed:\\n\\n1) Do dynamically constructed benchmarks actually measure the capabilities of interest? The authors partially address this question in Section 3.2. But there are important unaddressed questions, such as the robustness of custom benchmarks' informativeness to imperfect retrieval of data examples. It would also be interesting to explore how specific the custom benchmarks can be, and whether more specific benchmarks (e.g., \\\"Kirchoff's Law\\\") face greater issues with retrieval precision compared to broader benchmarks (e.g., \\\"electricity and magnetism\\\"). Overall, I would like to see a more detailed investigation on whether dynamically constructed benchmarks actually measure capabilities of interest.\\n\\n2) There are consequences to adopting dynamic benchmarks within the research community. Would dynamic benchmarks incentivize researchers to invent new benchmarks for their papers, allowing them to claim SOTA on narrow tasks? Would researchers try to create many different custom benchmarks until they find one on which their new method performs well?\\n\\n3) The re-use of data examples from the same pool results in correlations between different custom benchmarks. Phrased in a different way, a single data example can affect many different custom benchmarks. 
When dynamic benchmarks are used to report model performance across different research papers, how can we transparently present the correlations and dependencies between different custom benchmarks?\\n\\nIn addition to these big picture concerns, I have more concrete questions about the work, for example about the soundness of the ranking evaluation with respect to ground truth. Please see the Questions.\", \"questions\": \"1. You are measuring the performance of your model ranking methodology by comparing with rankings of existing benchmarks (see Table 1). These existing benchmarks consist of different subtasks and must tackle the problem of aggregating performance data from different subtasks, just like your method aims to do. For example, the HELM leaderboard ranks models by mean win rate against other models. Hence, it is unclear if these rankings can be considered \\u201cground truth\\u201d rather than just different ways of computing model rankings. For example, what if HELM actually used your method to aggregate performance from its constituent tasks? Then your method would trivially reach perfect agreement. Does taking the rankings of existing benchmarks as \\u201cground truth\\u201d make sense?\\n\\n2. Your work focuses on implementing custom benchmarks via ranking. However, beyond ranking, it is useful to have absolute performance numbers. For example, if one model reaches 99% accuracy and another 98.9%, it may not matter in practice which model to use. Do you see any possibilities for dynamic benchmarks to reveal when the differences in ranking between models are significant?\\n\\n3. In Section 3.2.1, the average precision (AP) for retrieving data examples ranges from 28.5% to 100%. Erroneously retrieved data examples may affect the overall ranking, reducing fidelity to the user\\u2019s benchmarking goal. In the experiment you conducted, does filtering out the erroneously retrieved data examples change the ordering of the top-5 models?\\n\\n4. 
You show that random selection of data examples and informative selection of data examples result in similar performance. How does this align with your discussion of low-signal data examples? For example, eyeballing Figure 4 suggests that on the Open LLM Leaderboard, perhaps 40% of data examples are low-signal (all or none of the models answer correctly). This suggests that informative sampling \\u2013 focusing on data examples where different models perform differently \\u2013 should be able to get away with using only 60% as many data examples as random sampling. Could you highlight this benefit of informative sampling in your paper, or else explain why it does not materialize?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper makes two contributions to LLM capability evaluations. First, it proposes to use the Plackett-Luce model to rank LLMs and has shown that it is more robust to missing values in evaluation. Second, it proposes personalized evaluation, where the user can submit queries that represent capabilities they are interested in, and the evaluation will focus on the subset of samples that best match the queries. This has been shown to reveal different LLM rankings depending on the queries.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The idea of having personalized evaluation makes sense and it can be very useful to practitioners who work in different fields.\", \"weaknesses\": \"I am not very familiar with social choice theory, but it still seems to me that the paper is not very well written in Section 2. Specifically,\\n- Can the authors explain exactly how the Plackett-Luce model obtains a predicted ranking from observations of the performance of LLMs on samples?\\n- Can the authors explain why no metric achieves 1.0 Kendall Tau correlation in Table 1 when full data is used to evaluate model rankings? 
Where does the ground truth model ranking come from in this case?\\n\\nThe paper also makes several observations that are too trivial in my opinion.\\n - In Section 2.3, the paper mentions that for a dataset there exist many easy and hard problems that either all models get right or no models get right. Therefore, when performing evaluation, we can be efficient and just use the problems that come from the central difficulty bin (Figure 4). Isn't this quite obvious? Also, in order to know which problems are in the central bin, we need to already evaluate models on these problems. It seems to defeat the purpose of efficient evaluation. Additionally, whether a problem belongs to the central bin is dynamic, depending on the change of model capabilities.\\n\\n- In Section 3.2, while I like the concept of personalized evaluation, I still believe this is a straightforward idea without much technical contribution. It can be thought of as a retrieval + evaluation problem where, depending on what concept a user wants to evaluate, we can retrieve the questions for that concept and then evaluate. People have also explored this at a higher level where, to evaluate mathematical reasoning capabilities, datasets like GSM8K and MATH are used.\\n\\nThe paper should also cite a few related works, such as [1, 2, 3], about efficient evaluation using multi-arm bandits since this is also mentioned in the paper.\\n\\n[1] Shi, Chengshuai, Kun Yang, Zihan Chen, Jundong Li, Jing Yang, and Cong Shen. \\\"Efficient prompt optimization through the lens of best arm identification.\\\" arXiv preprint arXiv:2402.09723 (2024).\\n\\n[2] Zhou, Jin Peng, Christian K. Belardi, Ruihan Wu, Travis Zhang, Carla P. Gomes, Wen Sun, and Kilian Q. Weinberger. 
\\\"On Speeding Up Language Model Evaluation.\\\" arXiv preprint arXiv:2407.06172 (2024).\", \"questions\": \"Please see the Weakness section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes \\u221e-benchmarks, an evaluation paradigm for ranking and understanding foundation models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"As foundation models are developed on top of huge amounts of data, it is a timely and important question to study how to evaluate and understand these models.\", \"weaknesses\": \"While I appreciate the importance of evaluating foundation models, I found it difficult to understand the proposed \\u221e-benchmarks and thus the main contribution of this paper. In particular, what is \\u221e-benchmarks? Is it a new dataset, an evaluation pipeline, or a model ranking tool?\\n\\nHalf of the paper's technical content is about how the proposed ranking method outperforms the other methods (page 3-6). The authors use the correlation between the ground-truth ranking and the learned ranking as the metric for the comparison. What is the ground-truth ranking? Why are the ground-truth rankings independent of the ranking methods? As far as I understand, rankings are somewhat subjective, and the ground-truth sometimes are determined by the ranking methods directly. For example, Elo-score simply defines the ground-truth rankings by (sufficiently) many battles between all players/foundation models.\\n\\nThe other half of the paper's technical meat presents LLMBench and LMMBench. While the analysis on which models are performative on which queries is insightful, I am not sure what the technical contribution is. 
Is it simply merging many existing datasets into one, or is there anything I am missing here?\\n\\nOverall, I find it is very hard to tell what the contribution is given the current form of this paper.\", \"questions\": \"See my questions above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper broadly makes two different contributions:\\n\\n1. The paper proposes treating all (per-sample) evaluation scores of language models as (per-sample) rankings and then demonstrates that such sample-level rankings can be used to rank such models using the Plackett-Luce model.\\n2. The authors also argue that, since all per-sample ranking scores can be generally aggregated, one can dynamically carve up existing benchmarks or construct new benchmarks on the fly by segmenting the data pool into relevant subsets and then aggregating ranking scores from that subset.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The grammar, styling, attention to detail, etc. is extremely high quality\", \"Figure 1 is very well done and communicates the paper\\u2019s contributions well\", \"Table 1 is a reasonable way to assess different ranking mechanisms\", \"Figure 3 is compelling\"], \"weaknesses\": [\"Overall, this paper relies on a critical assumption: that model rankings are all the field cares about. For instance, in Section 2.2 Line 244, the manuscript states: \\u201cFor practitioners, the critical concern is whether the top models are ranked correctly.\\u201d I think this assumption is generally false. From personal experience, practitioners care very much about the magnitude/size of gaps between models\\u2019 capabilities. 
For instance, when OpenAI\\u2019s o1 came out, it was ranked #1 in AIME, and its performance was significantly better than that of any competitor. When GDM announced a 1M and 10M context length, that context length wasn\\u2019t merely #1 - such a context length was 1-2 orders of magnitude longer than any other available model at the time. Similarly, for companies or startups or organizations using these models, if Model B is epsilon-better than Model A, but Model A is already implemented and trustworthy and cheap, switching to Model B isn\\u2019t worth the effort and potential risks. Cost is an especially important factor; a model that is epsilon-better likely isn\\u2019t worth 10x or 100x inference costs. Consequently, I find this paper\\u2019s contributions limited because I disagree with its premise that ordinal rankings of models are the important/meaningful signal.\", \"In terms of writing, this paper is difficult to follow. By Section 2.2, I was lost. I then discovered that the methodology that this paper contributes cannot be found in the main text and is instead detailed in Appendix A (I also cannot find a reference to Appendix A in the main text, although this might be my own inability). I feel like Appendix A is a significant pillar of this paper and should thus be included prominently in the main text.\", \"Section 2.1: This is the first substantive section of the paper, and it is titled \\u201cWhy This Works.\\u201d Up to this point in the paper, the reader does not know what exactly \\u201cthis\\u201d is and has no evidence that whatever \\u201cthis\\u201d is does work. Consequently, it seems premature at this point in the manuscript to explain why some unknown method achieves some unknown performance.\", \"Section 2.1: I don\\u2019t know what \\u201cP1, P2, P3\\u2026\\u201d refer to. 
Please state what \\u201cP\\u201d is an abbreviation of.\", \"Section 2.1: The text in each paragraph appears to be a literature review of the Plackett-Luce model and the properties that accompany it. In this section, I am not able to identify what these authors are contributing in this section of text.\", \"Appendix A: Minor: The method assumes that sample-level rankings are available, which is oftentimes not true. Oftentimes, model creators release only aggregate metrics for an entire benchmark (e.g., 5-Shot Accuracy on MMLU).\", \"Appendix A: Line 1175-1176 states, \\u201cIn practice, ordinal measurements can paradoxically outperform cardinal ones despite the inherent information loss.\\u201d While I buy that ordinal measurements can outperform cardinal ones, whether they actually do so in a particular setting remains to be proven, and it is incumbent upon the researchers to demonstrate this.\", \"Given that this paper relies heavily on the Plackett-Luce model, the authors absolutely must state/summarize what it is. I was not intimately familiar and had to go educate myself.\", \"Figure 2: Echoing my first point about how ordinal ranks are not sufficient, I have no sense of whether these permutations of rankings are minor or major.\", \"Lines 304 and 310: How does insight 2 not contradict insight 3? Insight 3 states that Random Sampling matches Informative Sampling, but Insight 2 (and the subsection title \\u201cACTIVE SAMPLING IMPROVES DATA AGGREGATION EFFICIENCY\\u201d) seems to contradict this. It seems like the claims of this section and of Figure 4 (bottom) are self-contradictory.\", \"Figure 5 is aesthetically nice but lacking in substance\"], \"questions\": [\"In Section 2.1, what does \\u201cP\\u201d stand for in \\u201cP1.\\u201d, \\u201cP2.\\u201d, etc.?\", \"Table 1: Why are the Kendall taus with LLMs-Eval and VHELM comparatively low for your method? 
How is LLMs\\u2013Eval scored such that the Plackett-Luce only has a correlation of 0.67?\", \"Line 236: \\u201cOur method preserves the ranking of the top-10 models.\\u201d I might be misreading the figure, but it seems that your method reorders the ground truth rankings? If so, how does your method preserve the ranking of the top 10 models.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DiRJUdmZoK
Pixelated Instructions: Can Multimodal Large Language Models Follow Printed Instructions in Images?
[ "Xiujun Li", "Yujie Lu", "Zhe Gan", "Jianfeng Gao", "William Yang Wang", "Yejin Choi" ]
Recent multimodal large language models (MLLMs) have shown promising instruction following capabilities on vision-language tasks. In this work, we introduce VISUAL MODALITY INSTRUCTION (VIM), and investigate how well multimodal models can understand textual instructions provided in pixels, despite not being explicitly trained on such data during pretraining or fine-tuning. We adapt VIM to eight benchmarks, including OKVQA, MM-Vet, MathVista, MMMU, and probe diverse MLLMs in both the text-modality instruction (TEM) setting and VIM setting. Notably, we observe a significant performance disparity between the original TEM and VIM settings for open-source MLLMs, indicating that open-source MLLMs face greater challenges when text instruction is presented solely in image form. To address this issue, we train V-MLLM, a generalizable model that is capable of conducting robust instruction following in both text-modality and visual-modality instructions.
[ "multimodal large language models", "instruction following" ]
Reject
https://openreview.net/pdf?id=DiRJUdmZoK
https://openreview.net/forum?id=DiRJUdmZoK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zDromMIdFX", "xorA5qowFR", "vhqya6ImbZ", "qvD9ZnG0Ad", "ph2BHg0pxi", "nUIUx74o6V", "lcrkwvbI2b", "VkZc8PlNOm", "Vgp2Vd42Vl", "Qtstp8FWFv", "Q2eDNxiv4Z", "Pp0kWd5so0", "DIhCcNa3R0", "1cgqGt7L4X" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1732511371213, 1732322718220, 1732762423710, 1732322664522, 1730538638512, 1732322693478, 1737523701219, 1733210051111, 1734534750328, 1732652305636, 1730689604365, 1732322615263, 1730671968397, 1731303932046 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_Vub3" ], [ "ICLR.cc/2025/Conference/Submission5353/Authors" ], [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_QNLJ" ], [ "ICLR.cc/2025/Conference/Submission5353/Authors" ], [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_Vub3" ], [ "ICLR.cc/2025/Conference/Submission5353/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_AviV" ], [ "ICLR.cc/2025/Conference/Submission5353/Area_Chair_g2PS" ], [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_VPB2" ], [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_AviV" ], [ "ICLR.cc/2025/Conference/Submission5353/Authors" ], [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_VPB2" ], [ "ICLR.cc/2025/Conference/Submission5353/Reviewer_QNLJ" ] ], "structured_content_str": [ "{\"comment\": \"I have carefully read the rebuttal. Unfortunately, my concerns remain unresolved. As a result, I have no choice but to maintain my rating as a rejection.\"}", "{\"title\": \"Thanks for the feedback\", \"comment\": \"Thanks for the feedback.\\n1. 
One example scenario is UI Interface or Navigation, or Online Shopping, so that an MLLM (as an agent) can follow the embedded instructions in the UI interface to execute actions. As for the potential advantage of enabling MLLMs to follow visual-modality instructions, these mainly take the form of visually situated text, which can range from text with diagrams, images, or tables, to mobile apps with buttons and forms. Thanks for the suggestion, we will discuss this in the revision.\\n2. Thanks for the suggestion of the OCR and chain-of-thought baselines.\"}", "{\"comment\": \"Thanks for the authors' response. However, I think the major problem is the contribution of the paper. Thus, I decide to maintain the score.\"}", "{\"title\": \"Thanks for the feedback.\", \"comment\": \"Thanks for the feedback.\\n1. First, for the VIM setting in Figure 1, LLMs cannot make any predictions since there is no text input in the VIM setting.\\n\\n2. Thanks for the suggestion, we may change to Pixelated Instruction for the \\u201cembedded instruction\\u201d.\\n\\n3. The training used the standard SFT recipe to enhance the capability of the MLLMs, and this is widely used in MLLM training for different capabilities. \\n\\n4. Thanks for pointing out the citation format issue, we will correct it in the future.\"}", "{\"summary\": \"This paper investigates the ability of multimodal models to follow textual instructions embedded within visual data. The authors introduce a new benchmark and a custom training dataset to evaluate this capability. Their findings reveal that while open-source multimodal large language models encounter significant challenges, some proprietary models demonstrate effective performance. 
Additionally, they present a trained model, v-MLLM, capable of following instructions in both text-based and visual modalities.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tA new evaluation benchmark for MLLMs is introduced, along with an assessment of several baseline methods.\\n2.\\tThis paper introduces a new VIM training corpus, shown to be effective for training models with visual instruction-following capabilities.\\n3.\\tExtensive evaluations on the VIM benchmark reveal several noteworthy and practical findings.\", \"weaknesses\": \"1.\\tThe motivation for developing visual modality instructions is unclear. What specific application scenarios would require instructions to be provided only through printed images?\\n2.\\tIt may be unfair to evaluate existing open-source MLLMs in the VIM setting and compare them against proprietary models or a specialized model like v-MLLM. First, the VIM setting is likely unfamiliar to open-source models, whereas it may have been accessible to the proprietary and specialized models, making it unsurprising that open-source models struggled with this new setting. This diminishes the experimental results' relevance. Additionally, accurately recognizing text remains a known limitation for most general-purpose MLLMs, making the VIM setting challenging. To accurately assess visual instruction-following capabilities, it is necessary to minimize the impact of these models' text-recognition weaknesses; otherwise, the evaluation risks becoming more of an OCR test.\\n3.\\tThe paper is missing some key baselines. First, visual instruction-following could potentially be achieved by integrating an OCR front-end with MLLMs, which would be a straightforward approach to the task. 
Second, since visual instruction processing in MLLMs resembles a two-step process, and the authors find mixed instructions significantly improve performance, using a chain-of-thoughts prompt could help build stronger baseline models.\", \"questions\": \"Please see the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the feedback.\", \"comment\": \"Thanks for the feedback.\\n1. Thanks for the suggestion for Figure 2. We will simplify it for readability.\\n2. Thanks for the suggestion for the \\u201cLeft\\u201d position in section 2.1.2. Right, a random position of the text would be ideal. Resolution is an important factor for the performance of MLLMs. In order to maintain the origin resolution of the image, we found \\u201cbottom\\u201d and \\u201ctop\\u201d positions may be the two good choices here. \\u201cLeft\\u201d, \\u201cRight\\u201d and \\u201cRandom\\u201d positions may change the original image resolution, which may bring variance to the experiments. \\n3. v-MLLM is initialized from the LVIS-Instruct4V (in Line 265). We will explicitly state this in the future version.\\n4. For the Text-Rich VQA tasks, like TextVQA, ChartQA, after the VIM training, the performance will drop since there is no explicit text prompt/instruction input in the VIM Training.\\n5. For the image resize, we use the same image preprocessor as LLaVA, so the image will be resized to the same size before patchifying.\\n6. That\\u2019s a good question. Probably adding some random embedded text instruction in the images would help for the robustness of the model. Thanks for the suggestion.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for your response. 
I will maintain my score.\"}", "{\"metareview\": \"(a) The paper introduces Visual Modality Instruction (VIM), evaluating MLLMs\\u2019 capability to follow text instructions embedded in images. A significant gap between traditional instruction-following and VIM settings is identified. The authors propose V-MLLM to address this challenge, showing improved performance across benchmarks like OKVQA and MMMU.\\n\\n(b) Strengths include introducing a new benchmark (VIM), comprehensive evaluations on multiple MLLMs, and improved instruction-following performance with V-MLLM through visual training.\\n\\n(c) Weaknesses include unclear novelty over OCR tasks, limited application scenarios, and missing baselines (OCR front-end integration). Performance drops on TextVQA post-training highlight limitations.\\n\\n(d) Decision: Reject. While the VIM setting is interesting, the technical contribution is incremental, comparisons are incomplete, and practical impact remains unclear. Reviewers\\u2019 concerns about relevance and novelty remain unresolved.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers\\u2019 concerns about relevance and novelty remain unresolved.\"}", "{\"comment\": \"Thank you for the response! I would like to maintain the score since most of the concerns remain unresolved after reading the rebuttal.\"}", "{\"summary\": \"This paper introduces visual modality instruction to investigate how well multi-modal models can understand textual instructions provided in images. Furthermore, this paper trains a v-MLLM model.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"This paper is easy to read.\", \"Some figures are good.\"], \"weaknesses\": [\"The motivation presented in Figure 1 do not make sense. While LLMs can make plausible or correct predictions in some cases, these predictions do not change with different image inputs and will be incorrect if the image changes. 
However, the benchmark questions you mentioned seem closely related to the images, suggesting that the final answer depends on both the image and text. So I cannot understand the importance and necessary of designing the VIM task.\", \"The concept of \\\"embeded instruction\\\" is confusing. I initially thought you were embedding the text instruction using a visual encoder, but it appears you are simply adding the instruction to the image, similar to OCR.\", \"In my opinion, this benchmark is primarily designed to probe the OCR capability of MLLMs, specifically a certain type of OCR capability. While useful in some scenarios, I think the vision and motivation are somewhat limited.\", \"It would be better to compare the results to some MLLMs that excel at OCR. Moreover, the training method setups seem a little bit trivial, obtaining seems like a task-specific model.\"], \"questions\": [\"The citation format is incorrect. You should use \\\\citep{} rather than \\\\citet{}. For the case of \\\"Multimodal Large Language Models (MLLMs)\\\", it would be better to use the following format: Multimodal Large Language Models (MLLMs; citations).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the feedback.\", \"comment\": \"Thanks for the feedback.\\n1. For the MM-Vet (TEM setting, 29.9) in the stage-wise tuning, compared with the original LLaVA-1.5 3B (30.5), it looks like it drops a little. For the VIM setting, it looks stage-wise tuning (25.9) is slightly better than mixture tuning (23.5). Whether it is because of better OCR in the VIM training, it is still a question to explore. My observation is - the training of v-MLLM is not stable as a similar observation in the ScreenShot LM paper [1]. 
There are two kinds of stability here, the first one is inter-task stability under the same setting, for example, 8 tasks under TEM setting; the second one is inter-setting stability. We observed that it is hard to find a checkpoint to maintain both the inter-task and inter-setting stabilities, even only maintaining inter-task or inter-setting stability. So, there are quite large variances.\\n\\n2. TextVQA is a text-rich task, it highly depends on the text input. After the VIM training, it might drop since no text input for VIM setting.\\n[1]. Improving Language Understanding from Screenshots, Tianyu Gao etc., arXiv Feb. 2024\"}", "{\"summary\": [\"The paper introduces an interesting setting, visual modality instruction, to assess the ability of Multimodal Large Language Models (MLLMs) to follow textual instructions presented in visual formats.\", \"The paper trains V-MLLM, which demonstrates robust instruction-following abilities in both text-based and visual instruction settings across multiple tasks.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper identifies a gap in existing MLLMs\\u2019 capabilities, noting that they struggle to follow text instructions embedded in visual formats. To address this, the authors propose Visual Modality Instruction (VIM), a challenging setting designed to assess MLLMs' ability to interpret instructions delivered through visual modalities.\", \"The paper constructs VIM-Bench based on eight existing representative benchmarks and trains V-MLLM to following instructions in both text and visual formats.\"], \"weaknesses\": [\"Figure 2 is overly complex and contains excessive information, making it difficult to interpret. Simplifying this figure would improve clarity and reader comprehension.\", \"The conclusion and discussion around the instruction location experiment in Section 2.1.2 is not well established. 
For example, it\\u2019s unclear why the authors omitted a comparison with the \\\"left\\\" position. Additionally, while the paper claims that \\u201cGPT-4V and LLaVA-1.5 are robust to the locations of the embedded instruction\\u201d, there\\u2019s a nearly 10% performance difference between the \\\"bottom\\\" and \\\"top\\\" positions in GPT-4V. Moreover, the paper could also consider constructing the VIM corpus with randomly selected positions for the embedded text instructions\", \"For the VIM training, it\\u2019s unclear if V-MLLM was initialized with pretrained weights from LLaVA-1.5 and whether the model fine-tunes the full model including the image encoder, projector, and language model (LLM) backbone altogether.\", \"In Table 3, under the TEM setting, V-MLLM\\u2019s performance drops on TextVQA and ChartQA compared to LLaVA-1.5. Since these tasks require an understanding of text within images, this drop appears to contradict the hypothesis that VIM training would help with understanding the text within the image?\"], \"questions\": [\"The paper states that \\u201cwe aim to keep the resolution of the raw images, and we add text with the same font size for all images.\\u201d However, most MLLMs resize images to a standard size before encoding. Won't this resizing result in inconsistent text instruction resolution?\", \"Given that the VIM corpus places text instructions primarily at the bottom of images, how would the model perform on instances where the text instructions are embedded in different locations?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"this paper investigates how well multimodal models can understand textual instructions in images. propose a new setting named visual modality instruction (VIM) which evaluates the capability of MLLMs following instructions given in images. 
The results clearly show the performance gap of open-source models in the VIM setting and traditional setting, motivating a training dataset targeting the VIM setting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Shows interesting findings:\\n(1) open- and closed-source VLMs are robust to the position of textual instruction in the image.\\n(2) Two-stage instruction tuning and mixed instruction tuning have similar performance.\\n\\n2. After being tuned on the proposed VIM training dataset, open-source models demonstrate better instruction following capability.\\n\\n3. Comprehensive evaluation of open-source and closed-source VLMs in the VIM setting.\", \"weaknesses\": \"1. The main concern is the technical contribution.\\n\\n(1) The proposed instruction following setting is new, but it's similar to the original task of OCR, which tests if a VLM can read and understand text in the image.\\n\\n(2) The proposed training data is an augmentation of existing datasets by rendering and adding textual instruction on the images.\\n\\n(3) The VIM training is a supervised training setting with two variants. The major difference between the two variants is the data mixing strategies.\", \"questions\": \"1. What causes the performance improvement of LLaVA-1.5 3b on MM-Vet with stage-wise tuning in the TEM setting (Table 7)? Do you think it's because of better OCR learned by the VLM during VIM instruction tuning?\\n\\n2. Why do models achieve lower performance on TextVQA after VIM tuning in Table 7?\", \"flag_for_ethics_review\": \"['No ethics review needed.', 'Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
Di3VLZHZdj
Efficient Fatigue Modeling: Applying Operator Networks for Stress Intensity Factor Prediction and Analysis
[ "Tushar Gautam", "Nathan Crosby", "Sara Restrepo-Velasquez", "Juan D Ocampo", "Harry Millwater", "Jacob Hochhalter", "Mike Kirby", "Shandian Zhe" ]
Fatigue modeling is essential for material-related applications, including design, engineering, manufacturing, and maintenance. Central to fatigue modeling is the computation and analysis of stress intensity factors (SIFs), which model the crack-driving force and are influenced by factors such as geometry, load, crack shape, and crack size. Traditional methods are based on finite element analysis, which is computationally expensive. A common engineering practice is manually constructing handbook (surrogate) solutions, though these are limited when dealing with complex scenarios, such as intricate geometries. In this work, we reformulate SIF computation as an operator learning problem, leveraging recent advancements in data-driven operator networks to enable efficient and accurate predictions. Our results show that, when trained on a relatively small finite element dataset, operator networks --- such as Deep Operator Networks (DeepONet) and Fourier Neural Operators (FNO) --- achieve less than 5\% relative error, significantly outperforming popular handbook solutions. We further demonstrate how these predictions can be integrated into crack growth simulations and used to calculate the probability of failure in small aircraft applications.
[ "Stress Intensity Factors", "Scientific ML", "Operator Network", "Crack Growth Simulation", "Fatigue Modeling", "Solid Mechanics", "AI for Science" ]
Reject
https://openreview.net/pdf?id=Di3VLZHZdj
https://openreview.net/forum?id=Di3VLZHZdj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z8exNDCES6", "wpvlcNR4Lo", "rMuHjvY0Ie", "koYQRvNkjf", "iKVbCXal7V", "bksZqm1rtF", "YuXG3UKyzq", "TWXeeGxp1y", "Pk8prisBWQ", "NVqGGhu6y8", "IGr9ryBYrn", "ALj4YXGhHe", "2bwV28Lkjy" ], "note_type": [ "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1731985961370, 1730272232429, 1734365586732, 1732929935991, 1731089278543, 1730540985360, 1737523680521, 1731986006319, 1732777668963, 1732562706398, 1730365060114, 1731986337861, 1731986939945 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5052/Authors" ], [ "ICLR.cc/2025/Conference/Submission5052/Reviewer_hyG6" ], [ "ICLR.cc/2025/Conference/Submission5052/Area_Chair_j6EZ" ], [ "ICLR.cc/2025/Conference/Submission5052/Reviewer_mZNZ" ], [ "ICLR.cc/2025/Conference/Submission5052/Reviewer_J9iU" ], [ "ICLR.cc/2025/Conference/Submission5052/Reviewer_mZNZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5052/Authors" ], [ "ICLR.cc/2025/Conference/Submission5052/Reviewer_hyG6" ], [ "ICLR.cc/2025/Conference/Submission5052/Reviewer_J9iU" ], [ "ICLR.cc/2025/Conference/Submission5052/Reviewer_TxYD" ], [ "ICLR.cc/2025/Conference/Submission5052/Authors" ], [ "ICLR.cc/2025/Conference/Submission5052/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We would like to thank the reviewer for the comments. Responses to the comments **C** are marked **R**:\\n\\n---\\n\\n**C1:** *Weak comparison with baselines. While the paper mentions several previous works in using machine learning methods such as ANNs in fatigue modeling, none of them have been compared. 
This makes it hard to evaluate the importance of operator learning methods compared to previous works.*\\n\\n**R1:** We will add a detailed comparison against ANN, Random Forest Regression (RFR) and SVM in the paper. A summary of the L2 errors is shown below:\\n| |DeepONet|POD-DeepONet|FNO|Raju-Newman Equations|ANN|RFR|SVM|\\n|-|--------------|----------------------|------|---------------------------------|-------|------|------|\\n| Surface Crack | 0.000776|0.000817|0.000695|0.023611|0.000670|0.037400|0.005107|\\n| Corner Crack |0.000755| 0.000530| 0.000627| 0.036049| 0.001018|0.039508|0.136749|\\n\\nFrom this we can see that operator networks are on par with ANN for the surface crack, while we get at least an order of magnitude improvement over other ML algorithms on the more complex corner crack dataset.\\n\\n---\\n\\n**C2:** *It is not clear where the novelty of this paper lies, since they explore applications of known neural operator methods on a new dataset. It will be useful to explicitly state the contributions of this work.*\\n\\n**R2:** This paper is an applications paper and focuses on applying neural operators to the practical problem of fatigue modeling. We show that using operator networks, the results can be further improved and predictions can be made very quickly, which results in very fast crack growth simulation. ICLR encourages papers showing applications to complex real-life problems, and we aim to target that with this paper.\\n\\n---\\n\\n**C3:** *Limited complexity of the dataset. The visualizations suggest that the problem involves predicting a single 1D variable that is smoothly varying. More details can be provided on the complexity of tasks considered, and a possible categorization of data samples based on the level of complexity.*\\n\\n**R3:** The reviewer has misunderstood the problem. We are not predicting a 1D variable that is smoothly varying. 
In the visualizations, we are fixing the other dimensions (that represent the plate geometry and crack shape), and only looking at the SIFs with respect to $\\phi$. In reality, SIFs are a function of the geometry as well as the crack shape, and they are not smooth in those dimensions.\"}", "{\"summary\": \"The paper presents the application of neural operators to enhance fatigue modeling while maintaining a high level of accuracy. A dataset featuring diverse geometries and crack types is created through finite element (FE) simulations. This data is utilized to develop operators capable of predicting the Stress Intensity Factor (SIF) in near real-time. The effectiveness of these operators is demonstrated in crack growth simulations, achieving significant acceleration compared to traditional FE methods. Additionally, the approach is shown to be more accurate than the handbook solutions commonly employed in the industry.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is well written. Accelerating FE simulations is an important problem in the field of numerical simulations. The need for faster inference is clearly articulated. Neural operators naturally lend themselves to the problem. However, their application is not straightforward.\\nThe paper presents a very interesting application of neural operators for a practical problem in the industry. The idea of using neural operators for mitigating the repetitive bottlenecks in fatigue modelling looks novel. The methodology is reasonably clear barring a few details which seem to be skipped. 
Claims in the paper are well supported by the results identifying the benefits over conventional methods.\", \"weaknesses\": \"A thorough exploration of prior work on fatigue modeling using PINNs, neural networks and machine learning approaches, pointing out their limitations, would strengthen the case of this work.\\n\\nWhile this is an interesting engineering application of neural operators, it seems to use only vanilla neural operator frameworks. The details of the modeling effort - architecture used, learning strategies tweaked for this problem, challenges faced during training, hyperparameter selection, loss curves - are all missing from the paper. It is very difficult to evaluate the contribution without these details. \\n\\nThe results look good against conventional handbook methods, but they should also be compared against other ML approaches - PINNs, neural networks, ML methods. \\n\\nIs there any validation for probability of failure with real-life data? If yes, that should be added as well. \\n\\nThe ground truth in Figures 4, 5, 6 is very hard to see. Suggest changing the symbol/colour/font size to make it easy to comprehend.\", \"questions\": \"Is it possible to use neural operators for capturing the entire transient fatigue modeling, instead of switching between crack growth calculation and SIF prediction? What would be the challenges in this?\\n\\nWhat challenges do you envisage for operator learning when loading conditions and material properties change?\\n\\nWhat about performance on new out-of-distribution scenarios? How about using physics equations as a constraint while building these operators? 
As an example, can something similar be done within the neural operator framework - https://www.sciencedirect.com/science/article/abs/pii/S0167844224004671?via%3Dihub\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper explores the use of neural operator networks (e.g., DeepONet, POD-DeepONet, FNO) for predicting Stress Intensity Factors (SIFs) in fatigue modeling. By reformulating SIF computation as an operator learning problem, the authors demonstrate significant computational speedups and higher accuracy compared to more traditional methods. The proposed approach integrates effectively into crack growth simulations and shows potential applications in assessing failure probabilities for small aircraft.\\n\\nWhile the application is relevant and the results are promising, the paper has several limitations. The primary weakness is the lack of methodological novelty, as it applies existing neural operator frameworks without proposing innovations. Baseline comparisons with other (more standard) ML methods were missing in the original submission, and the panel of reviewers estimated that they remained insufficiently addressed. The dataset is restricted to simple geometries and loading conditions, limiting insights into the method\\u2019s generalization to complex/realistic scenarios. It was also noted that there was no validation with real-world experimental data, particularly for failure probability predictions, which diminishes the practical impact of the work.\\n\\nThe panel recommends rejection of the paper. While the application is interesting and addresses a meaningful problem, the submission lacks sufficient innovation in methodology, rigor in baseline comparisons, and robustness in demonstrating real-world applicability. 
We encourage the authors to expand the scope of their work by exploring methodological enhancements, providing more comprehensive baseline comparisons, and addressing practical challenges such as generalization and uncertainty quantification. This could make the work stronger for submission to a specialized venue or a future iteration of ICLR.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised concerns about the lack of baseline comparisons, unclear methodological novelty, limited dataset complexity, and missing details about model architectures and hyperparameters. The authors addressed some issues by adding comparisons with ML methods & clarifying the practical relevance of their work. They also committed to providing additional methodological details in the appendix and explained the limitations of validating real-world failure probabilities due to resource constraints. However, the responses did not fully resolve the core weaknesses, including the lack of innovation, insufficient generalization testing, and the absence of experimental validation.\"}", "{\"title\": \"Comments on the authors reply\", \"comment\": \"From the replies R1, R2, and R4, it is difficult to see what is the advantage of the current approach in addressing real-world fatigue design problems that cannot be done using existing non-machine learning methods. Maybe the authors can add a showcase to highlight this.\"}", "{\"summary\": \"This paper evaluates the effectiveness of neural operators for the problem of fatigue modeling. Three neural operator learning methods are used, namely FNO, DeepONet, and POD-DeepONet for predicting crack growth on simulated datasets. Results show comparison of neural operator methods with a numerical method considered as ground-truth.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. 
Studies an interesting real-world problem with application to a new area of engineering\\n2. Provides citations to relevant literature in machine learning for fatigue modeling\", \"weaknesses\": \"1. Weak comparison with baselines. While the paper mentions several previous works in using machine learning methods such as ANNs in fatigue modeling, none of them have been compared. This makes it hard to evaluate the importance of operator learning methods compared to previous works.\\n2. It is not clear where the novelty of this paper lies, since they explore applications of known neural operator methods on a new dataset. It will be useful to explicitly state the contributions of this work.\\n3. Limited complexity of the dataset. The visualizations suggest that the problem involves predicting a single 1D variable that is smoothly varying. More details can be provided on the complexity of tasks considered, and a possible categorization of data samples based on the level of complexity.\", \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work compares the performance of three operator networks in predicting the stress intensity factors in model structures. Based on two sets of finite element datasets on representative plate and holed-specimen geometries, the operator neural network approach outperforms approximate textbook solutions solved from the Newman equations, with very high computational efficiency (0.1 M loading cycles within 0.5 s). 
The workflow to integrate the calculated stress intensity factors into fatigue crack growth is discussed.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Finite element simulation datasets are created, which allow the assessment of operator neural networks in predicting the stress intensity factor solutions from the finite-geometry linear elastic problems under specific loading conditions. The data-driven computation of stress intensity factors is orders of magnitude faster than the finite element solvers, with very good accuracy for the different geometries and crack shapes (for plates and holed specimens). The work shows the potential of applying these methods to fatigue performance assessment.\", \"weaknesses\": \"The loading conditions are limited to uniform tension. The authors are suggested to explore the performance of predictions under more complex loading conditions such as non-uniform tension and a combination of tension and shear. Similarly, it is not clear how the model trained using the datasets constructed in the current work generalizes to specimens and cracks with very different geometries and shapes, such as a plate with varying thickness, or a solid with irregular or 3D crack geometries, which are essential if one considers practical applications of these ideas. The detailed parameters and settings of the operator neural network models should be given in the appendix.\", \"questions\": \"How does the model trained for the plates apply to the holed specimens, and vice versa? The authors are suggested to introduce quantitative performance metrics when applying the model trained on one geometry to the other, compared to when it's trained on the specific geometry. 
How do the three operator neural networks compare in terms of computational costs and speeds?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We would like to thank the reviewer for the comments. Responses to the comments **C** are marked **R**:\\n\\n---\\n\\n***C1***: *The loading conditions are limited to uniform tension. The authors are suggested to explore the performance of predictions under more complex loading conditions such as non-uniform tension and a combination of tension and shear.* \\n\\n***R1***: Adding bending or bearing loading to tension does not change the problem complexity. In accordance with linear elastic fracture mechanics, these additional loading conditions would simply be superpositions on the tension loading. For reference, please see Section 2.6 of the book \\u201c[Fracture Mechanics](https://www.taylorfrancis.com/books/mono/10.1201/9781482265583/fracture-mechanics-jan-zuidema-michael-janssen-russell-wanhill)\\u201d by *Michael Janssen,\\u00a0Jan Zuidema,\\u00a0Russell Wanhill*.\\n\\n---\\n\\n***C2***: *It is not clear how the model trained using the datasets constructed in the current work generalizes to specimens and cracks with very different geometries and shapes such as a plate with varying thickness, and a solid with irregular or 3D crack geometries, which are essential if one considers practical applications of these ideas.* \\n\\n***R2***: From a fracture mechanics expert's perspective, such a generalization would never be possible, because \\u201cvery different geometries and shapes\\u201d would require new models due to the requirement of adding new features. Additionally, it is never the case in practice that \\u201cirregular or 3D crack geometries\\u201d are considered for practical application. 
Instead, surrogate models of idealizations of more complex scenarios are used in every standard procedure for estimating fatigue crack growth and damage tolerance in practice, e.g., the SIF models used in NASGRO and AFGROW. \\n\\n---\\n\\n***C3***: *The detailed parameters and settings of the operator neural network models should be given in the appendix.*\\n\\n***R3***: We thank the reviewer for the comment. We will add this to our paper.\\n\\n---\\n\\n***C4***: *How does the model trained for the plates apply to the holed specimens, and vice versa? The authors are suggested to introduce quantitative performance metrics when applying the model trained on one geometry to the other, compared to when it's trained on the specific geometry.* \\n\\n***R4***: The two datasets have different features, so training on one and testing on the other is not sensible.\\n\\n---\\n\\n***C5***: *How do the three operator neural networks compare in terms of computational costs and speeds?*\\n\\n***R5***: We thank the reviewer for the suggestion. We will add this in the appendix. In summary, POD-DeepONet trained the fastest, followed by DeepONet, with FNO being the slowest. Computational cost is highest for FNO, followed by DeepONet and then POD-DeepONet.\"}", "{\"comment\": \"Thank you, authors, for your comments.\\nFor comment C2, we would like to see those details related to the training. I don't see them in the appendix yet. Add them in the response here if you are not able to edit the appendix now. It is difficult to properly evaluate the work unless those are added. Thanks.\"}", "{\"comment\": \"I have read the other reviews and the author responses. As also mentioned in other reviews, this work lacks technical novelty beyond a simple application of existing methods to a new problem. 
Also, the results in response to comment C1 show that the ANN (used in previous works for this problem) is almost as good as the neural operator methods, limiting the contribution of this work.\"}", "{\"summary\": \"The authors use operator networks to predict SIFs with high efficiency and accuracy, and the approach generalizes to a wide range of geometries and crack shapes. The combination of the FE model and M-integral acts as an operator. The capabilities of the models are demonstrated by integrating the learned operator into crack growth simulations and calculating the probability of failure in small aircraft applications.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. They reformulate the SIF computation as an operator learning problem.\\n\\n2. The SIFs are predicted with high efficiency and accuracy, which is validated on datasets of surface and corner cracks in plates.\\n\\n3. The framework can be integrated into crack growth simulations and used to calculate the probability of failure in small aircraft applications.\", \"weaknesses\": \"1. The methodological innovation is not prominently highlighted. It seems to be a combination of existing approaches such as FE modeling and operator networks.\\n\\n2. The details of the method may require further elaboration, such as the process of neural network training and the setting of hyperparameters. Additionally, information on how the training and test datasets were divided, and which crack geometries were used for training versus testing, should be specified.\\n\\n3. The framework lacks more demonstrative experimental data to verify its feasibility. The dataset, generated through FE models, may deviate from experimental scenarios (such as in material constitutive models, geometry, loading conditions, etc.).\", \"questions\": \"1. Please further elucidate the methodological innovation. It should not merely be a concatenation of existing methods.\\n\\n2. 
Please add the details of the method, such as the process of neural network training and the setting of hyperparameters. Additionally, information on how the training and test datasets were divided, and which crack geometries were used for training versus testing, should be specified to better evaluate the generalizability of the models.\\n\\n3. Provide examples of the best, worst, and median performance of the proposed machine learning model. Show the prediction of the closest point in the dataset for each of those examples, so that readers understand the quality of the machine learning model.\\n\\n4. The operator implicitly encapsulates material constitutive relations, loading conditions, etc., within the finite element model. When the method is applied to real-world engineering scenarios with uncertainties compared to the training environment, a discussion on the model\\u2019s applicability should be expanded.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank the reviewer for the comments. Responses to the comments **C** are marked **R**:\\n\\n---\\n\\n***C1***: *The methodological innovation is not prominently highlighted. It seems to be a combination of existing approaches such as FE modeling and operator networks. Please further elucidate the methodological innovation. It should not merely be a concatenation of existing methods.*\\n\\n***R1***: This paper is an applications paper and focuses on applying neural operators to the practical problem of fatigue modeling. We show that using operator networks, the results can be further improved and predictions can be made very quickly, which results in very fast crack growth simulation. 
ICLR encourages papers showing applications to complex real-life problems, and we aim to target that with this paper.\\n\\n---\\n\\n**C2**: *The details of the method may require further elaboration, such as the process of neural network training and the setting of hyperparameters. Additionally, information on how the training and test datasets were divided, and which crack geometries were used for training versus testing, should be specified.*\\n\\n***R2***: We thank the reviewer for the comment. We will add this in the appendix.\\n\\n---\\n\\n***C3***: *The framework lacks more demonstrative experimental data to verify its feasibility. The dataset, generated through FE models, may deviate from experimental scenarios (such as in material constitutive models, geometry, loading conditions, etc.).*\\n\\n***R3***: SIFs cannot be measured experimentally. They can be computed but not measured. The scope of this study is to demonstrate accurate SIF predictions. SIFs are a function of geometry and loading conditions. Indeed, both will have associated uncertainties. To test this, a significantly expanded dataset characterizing the propagation of uncertainty through the FE model for the SIF would need to be completed, and then a Bayesian or similar training approach (like deep ensembles) would be undertaken. While interesting, this is far outside the scope of this paper.\\n\\n---\\n\\n***C4***: *Provide examples of the best, worst, and median performance of the proposed machine learning model. Show the prediction of the closest point in the dataset for each of those examples, so that readers understand the quality of the machine learning model.*\\n\\n***R4***: We thank the reviewer for the comment. We will add this to the paper.\\n\\n---\\n\\n***C5***: *The operator implicitly encapsulates material constitutive relations, loading conditions, etc., within the finite element model. 
When the method is applied to real-world engineering scenarios with uncertainties compared to the training environment, a discussion on the model\\u2019s applicability should be expanded.*\\n\\n***R5***: Please check ***R3*** for this comment as well. We briefly discuss uncertainty analysis using SMART-DT and plot the probability of failure, but a more detailed analysis is left for future work.\"}", "{\"comment\": \"We would like to thank the reviewer for the comments. Responses to the comments **C** are marked **R**:\\n\\n---\\n\\n***C1***: A thorough exploration of prior work on fatigue modeling using PINNs, neural networks, and machine learning approaches, pointing out their limitations, would strengthen the case for this work.\\n\\n***R1***: We will add a detailed comparison against ANN, Random Forest Regression (RFR), and SVM in the paper. A comparison against PINNs is not possible because we don't have any physics equations constraining the SIF computation. A summary of the L2 errors is shown below:\\n| |DeepONet|POD-DeepONet|FNO|Raju-Newman Equations|ANN|RFR|SVM|\\n|-|--------------|----------------------|------|---------------------------------|-------|------|------|\\n| Surface Crack | 0.000776|0.000817|0.000695|0.023611|0.000670|0.037400|0.005107|\\n| Corner Crack |0.000755| 0.000530| 0.000627| 0.036049| 0.001018|0.039508|0.136749|\\n\\nFrom this we can see that the operator networks are on par with the ANN for the surface crack, while we get at least an order of magnitude improvement over the other ML algorithms for the more complex corner crack dataset.\\n\\n---\\n\\n***C2***: *While this is an interesting engineering application of neural operators, it seems to use only vanilla neural operator frameworks. The details of the modeling effort - architecture used, learning strategies tweaked for this problem, challenges faced during training, hyperparameter selection, loss curves - are all missing from the paper. 
It is very difficult to evaluate the contribution without these details.*\\n\\n***R2***: We thank the reviewer. We will add the details in the appendix.\\n\\n---\\n\\n***C3***: *The results look good against conventional handbook methods, but they should also be compared against other ML approaches - PINNs, neural networks, ML methods.*\\n\\n***R3***: Please see the response ***R1*** for this comment.\\n\\n---\\n\\n***C4***: *Is there any validation for probability of failure with real-life data? If yes, that should be added as well.*\\n\\n***R4***: Validating the real-life probability of failure would require millions of experiments/inspections of aircraft. This would take a significant amount of time/resources and is outside the scope of this work.\\n\\n---\\n\\n***C5***: *Is it possible to use neural operators for capturing the entire transient fatigue modeling, instead of switching between crack growth calculation and SIF prediction? What would be the challenges in this?*\\n\\n***R5***: Once we have accurate SIF values, the crack growth simulation is performed by solving a relatively simple differential equation. We have methods like Runge-Kutta that can do this very effectively. Such an integration approach is important because the crack growth depends on the sequence of cycles applied. For example, it\\u2019s not the case that constant amplitude cycles are applied. Instead, any random sequence of cycle amplitudes can be applied, which is an intractable problem for ML, considering that millions of cycles are often applied. This would lead to lower accuracy and slower training/prediction times.\\n\\n---\\n\\n***C6***: *What challenges do you envisage for operator learning when loading conditions and material properties change?*\\n\\n***R6***: We thank the reviewer for the attention to detail. We made a typo in the manuscript, where we mentioned that the SIF depends on the material property. 
We will correct it to reflect that the SIF only depends on the geometry and loading conditions (and not on material properties). Regarding loading conditions, they do not change the problem complexity and hence will not pose any significant challenge for the operator networks. In accordance with linear elastic fracture mechanics, multiple loading conditions would simply be superpositions on the tension loading. For reference, please see Section 2.6 of the book \\u201c[Fracture Mechanics](https://www.taylorfrancis.com/books/mono/10.1201/9781482265583/fracture-mechanics-jan-zuidema-michael-janssen-russell-wanhill)\\u201d by *Michael Janssen,\\u00a0Jan Zuidema,\\u00a0Russell Wanhill*.\\n\\n---\\n\\n***C7***: *What about performance on new out-of-distribution scenarios? How about using physics equations as a constraint while building these operators? As an example, can something similar be done within the neural operator framework -\\u00a0https://www.sciencedirect.com/science/article/abs/pii/S0167844224004671?via%3Dihub*\\n\\n***R7***: Sampling out of distribution is not feasible in our case because the dataset is generated from practical configurations, and we can\\u2019t sample parameters like crack shape and manipulate the dataset to get out-of-distribution examples. This would lead to cases that are not practical and would offer nothing. There are no physics equations describing the problem, hence we cannot constrain the learning. The paper linked by the reviewer is from a different (unrelated) problem and is not applicable to our work.\"}" ] }
DhlbK7tAjz
MaskInversion: Localized Embeddings via Optimization of Explainability Maps
[ "Walid Bousselham", "Sofian Chaybouti", "Christian Rupprecht", "Vittorio Ferrari", "Hilde Kuehne" ]
Vision-language foundation models such as CLIP have achieved tremendous results in global vision-language alignment, but still show some limitations in creating representations for specific image regions. To address this problem, we propose MaskInversion, a method that leverages the feature representations of pre-trained foundation models, such as CLIP, to generate a context-aware embedding for a query image region specified by a mask at test time. MaskInversion starts by initializing an embedding token and comparing its explainability map, derived from the pretrained model, to the query mask. The embedding token is then refined to approximate the query region by minimizing the discrepancy between its explainability map and the query mask. During this process, only the embedding vector is updated, while the underlying foundation model is kept frozen, allowing MaskInversion to be used with any pre-trained model. As deriving the explainability map involves computing its gradient, which can be expensive, we propose a gradient decomposition strategy that simplifies this computation. The learned region representation can be used for a broad range of tasks, including open-vocabulary class retrieval, referring expression comprehension, as well as localized captioning and image generation. We evaluate the proposed method on all these tasks on several datasets, such as PascalVOC, MSCOCO, RefCOCO, and OpenImagesV7, and show its capabilities compared to other SOTA approaches.
[ "localized embedding", "foundation models", "test-time optimization" ]
Reject
https://openreview.net/pdf?id=DhlbK7tAjz
https://openreview.net/forum?id=DhlbK7tAjz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sUFxUfMzrE", "rqCJf487GJ", "rDQ81akkSW", "qthjH7ptSz", "nnFbgGyzxE", "iQuV11A8bl", "hbcIGudpWN", "ceILkDqwhD", "SBJtWvTuns", "Ot4d2KTXfr", "MaeD7Pfsbb", "JzNMBYF2Ow", "IuMz3vesKh", "HCEwG0OLjm", "2IlcOXGkXn" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1733000277651, 1733127411447, 1730781584220, 1732309542710, 1732309559672, 1737523471687, 1732631565626, 1730508249423, 1732631541323, 1733127227850, 1734799296567, 1733127307345, 1732631588360, 1732309553438, 1730716286236 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1855/Reviewer_rbTp" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Reviewer_gqG2" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Reviewer_rbTp" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Area_Chair_BC6P" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Authors" ], [ "ICLR.cc/2025/Conference/Submission1855/Reviewer_zd2G" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the detailed response and clarification. Most of my questions are now resolved, and I\\u2019ve updated my score accordingly.\"}", "{\"comment\": \"As the discussion period is nearing its end, we wanted to respectfully follow up on our responses to your valuable feedback. 
We have carefully addressed all raised concerns and added substantial improvements to our paper.\\n\\nWe would greatly appreciate it if you could review our responses and consider updating your assessment if you find our revisions satisfactory.\\nThank you for your time and expertise.\"}", "{\"summary\": \"This paper introduces MaskInversion, a method that leverages pre-trained vision-language models (such as CLIP) to generate context-aware embeddings for specific image regions by optimizing explainability maps. It aims to improve localized image representation tasks, such as referring expression comprehension and captioning, while employing a gradient decomposition strategy to reduce computation.\", \"the_contributions_of_this_paper_include\": \"1) a new method that is able to learn localized embeddings for given queries;\\n2) an efficient gradient decomposition approach for multi-query masks;\\n3) improved performance on various downstream tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation is interesting. The problem of poor localization capabilities does exist in CLIP.\\n2. The proposed method is intuitive.\\n3. The performance is good. MaskInversion achieves superior results on a wide range of vision-language benchmarks.\", \"weaknesses\": \"1. Very important baselines are missing. I noticed that you have discussed the paper of MaskCLIP [1] but did not compare with it in the experiments. Actually, CLIP's localization issues can be addressed in a very simple way. You just need to reformulate the last layer's self-attention in the fashion of MaskCLIP (removing Q and K), SCLIP [2] (Q-to-Q and K-to-K attention), or CLIPSurgery [3] (V-to-V attention with dual paths). I believe that by simply modifying CLIP with these methods (they are all training-free), the performance can be improved by a very large margin.\\n\\n2. 
Given that these baselines are missing, it's difficult to evaluate whether the new method is effective enough. As MaskInversion involves a much more complex process, I expect it to perform significantly better than those three baselines.\\n\\n3. The other contribution of the paper, gradient decomposition, is not that significant. As shown in Table 5, it makes clear speed improvements only if we have >10 masks/image. What is the general case of the number of masks involved in your tasks?\\n\\n4. Minor comments: there are some typos in the paper such as in Line 481, what does Table 4.5 refer to?\\n\\n[1] Extract free dense labels from CLIP, in ECCV 2022\\n\\n[2] SCLIP: rethinking self-attention for dense vision-language inference, in ECCV 2024\\n\\n[3] A closer look at the explainability of contrastive language-image pre-training.\", \"questions\": \"See Weaknesses.\\n\\n---- updates after rebuttal ----\\nI appreciate the authors' response and additional experiments on the mentioned baselines. While MaskInversion outperforms the training-free approaches in most cases, some of my concerns are addressed. However, the authors did not discuss the new results in the revised paper, which may be misleading to readers. Overall, I still think this is a borderline paper and have changed the score to 5. I still have concerns about the scalability of the method, as on OpenImagesV7, which is relatively more complex and has more masks in the images, MaskInversion performs worse than CLIPSurgery.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for their thorough and constructive feedback. We particularly appreciate the suggestion to compare with MaskCLIP, SCLIP, and CLIPSurgery, which are indeed relevant baselines and strengthen the paper.\\n## 1. Missing Baselines:\\nWe thank the reviewer for bringing attention to these important baselines. 
Following your suggestion, we have conducted comprehensive experiments comparing MaskInversion with MaskCLIP [1], CLIPSurgery[2], and SCLIP[3] across all evaluation datasets. To compare the respective performance, we used the training-free pipelines to compute respective patch token representations and average pool all patch tokens inside the mask to get the localized embedding for that mask. Using the official implementation provided by the authors, we ran each method on our evaluation suite without any retraining, as these methods are designed to be training-free. We keep the same evaluation pipeline as in the paper and only change the localized embedding tokens used. The results are summarized in the table below. We report the top-1 Accuracy for all datasets extending Table 1 and Table 2 in the original paper:\\n\\n\\n\\n| Method | Backbone | VOC | Context | COCO | PhraseCut | RefCOCO | RefCOCO+ | OpenImagesV7 |\\n|--------|----------|-----|---------|------|-----------|----------|-----------|--------------|\\n| MaskCLIP | B/16 | 74.9 | 43.0 | 40.2 | 53.9 | 49.3 | 52.6 | 45.6 |\\n| CLIPSurgery | B/16 | 70.8 | 53.5 | 41.7 | 52.5 | 48.9 | 52.0 | **49.5** |\\n| SCLIP | B/16 | 64.3 | 43.0 | 33.4 | 37.2 | 40.7 | 42.4 | 45.5 |\\n| **Ours** | B/16 | **85.4** | **58.1** | **44.7** | **57.2** | **56.1** | **58.3** | 46.3 |\\n| MaskCLIP | L/14 | 55.1 | 33.2 | 29.3 | 47.6 | 43.2 | 47.2 | 32.5 |\\n| CLIPSurgery | L/14 | 78.3 | 46.4 | 47.7 | 47.2 | 47.3 | 50.9 | 45.5 |\\n| SCLIP | L/14 | 43.0 | 24.9 | 25.9 | 19.0 | 32.8 | 32.5 | 38.3 |\\n| **Ours** | **L/14** | **91.0** | **59.0** | **56.0** | **60.2** | **56.1** | **60.2** | **48.7** |\\n| MaskCLIP | H/14 | 61.8 | 37.8 | 30.9 | 45.9 | 34.6 | 39.6 | 36.9 |\\n| CLIPSurgery | H/14 | 68.0 | 40.8 | 40.1 | 41.5 | 43.2 | 46.7 | 45.8 |\\n| SCLIP | H/14 | 38.2 | 20.7 | 19.8 | 15.2 | 20.7 | 20.7 | 35.6 |\\n| **Ours** | H/14 | **93.5** | **61.8** | **63.7** | **64.0** | **61.2** | **65.0** | **51.2** |\\n\\nOur results demonstrate that 
MaskInversion consistently outperforms these baselines across nearly all datasets and backbone sizes, with particularly strong improvements for larger backbones.\\nOnly in one case (ViT-B/16 on OpenImagesV7) is CLIPSurgery able to outperform our method. More importantly, MaskInversion profits from larger backbones, showing increased performance as the backbone grows. \\nIn contrast, the performance of the evaluated training-free methods starts to degrade. \\n\\nWhile this shows the capabilities of MaskInversion, we want to emphasize that we consider MaskInversion and training-free methods as two different valuable lines of work that overlap in this task, but might be used in different contexts. \\n\\n[1] https://github.com/chongzhou96/MaskCLIP\\n[2] https://github.com/wangf3014/SCLIP\\n[3] https://github.com/xmed-lab/CLIP_Surgery\\n\\n## 2. Complexity compared to training-free methods: \\nWe think that it is difficult to directly compare the trade-off between complexity and performance of MaskInversion and that of training-free methods, as we think of them as two independent lines of work. \\nFor example, while our method operates on the model outputs without requiring architectural modifications, CLIPSurgery requires modifying the forward pass over several layers. \\nIf the reviewer has any suggestions on how to explore this topic further, please let us know.\\n\\n\\n## 3. Gradient Decomposition Significance:\\nWe appreciate the concern about the gradient decomposition's practical utility. \\nWhile the speed improvements become significant only with >10 masks per image, which might not be common in, e.g., human interactions, we believe this contribution is particularly valuable for automatic processing scenarios. Indeed, recent methods such as SAM and open-world objectness detectors typically generate 100-250 masks per image. 
Being able to efficiently process large numbers of masks might allow converting such image regions into meaningful embeddings. We consider this to be an interesting direction for future work.\\n\\n\\n## 4. Minor Comments:\\nThank you for catching the typo regarding Table 4.5, which is actually Table 5. This will be corrected in the final version.\\n\\nWe believe the results and clarifications we provide in this rebuttal address the main concerns while demonstrating the significant advantages of our approach. We will update the paper to include these comparisons and clarify the practical significance of the gradient decomposition contribution.\"}", "{\"comment\": \"Thank you for your thoughtful review and for highlighting the strengths of our approach, particularly regarding our novel use of explainability methods to enable region-specific focus while maintaining global context information. We appreciate your constructive feedback and questions, which we address below:\\n\\n# 1. Multi-object scenarios:\\nWe appreciate the reviewer's feedback regarding multi-object scenarios. While our presentation may have understated this capability, MaskInversion naturally handles multiple objects, as demonstrated quantitatively in our experiments on semantic segmentation datasets like PascalContext (Table 2), where masks frequently encompass multiple instances of the same object class. \\n\\nTo further illustrate this capability qualitatively, we have added visualizations using the \\u03bb-ECLIPSE diffusion model in **Figure 6** of the Annex of the current paper version (please see the Annex, written in blue). \\nThe figure shows how MaskInversion effectively captures multiple objects within a single mask, preserving their individual characteristics. For instance, when the mask covers multiple characters (as shown in the rightmost examples), the generated images accurately reflect the group composition while maintaining contextual relationships. 
This demonstrates that our method can effectively encode complex, multi-object arrangements without requiring explicit object-level separation. The explainability maps (top row) further validate this, showing how attention is appropriately distributed across multiple objects within the masked region. While quantitative evaluation of multi-object scenarios remains challenging due to the lack of standardized metrics, these qualitative results strongly suggest that MaskInversion successfully handles complex, multi-object compositions.\\n\\n# 2. Global context capture and visualization:\\nTo address your question about global context capture, we conducted additional experiments analyzing how the regularization parameter $\\\\alpha$ influences the balance between local and global information (please see **Table 6** written in blue in the updated paper). We observe that:\\n- Lower $\\\\alpha$ values in [0,1] result in highly focused embeddings that primarily capture the masked region\\n- Medium $\\\\alpha$ values in [2,5] incorporate relevant contextual information while maintaining region specificity\\n- Higher $\\\\alpha$ values (>10) progressively approach the behavior of the global [CLS] token\\nWe have added visualizations in the paper (**Figure 5**) showing explainability maps and corresponding captions across different $\\\\alpha$ values, demonstrating how this parameter effectively acts as a \\\"slider\\\" for controlling the local-global information balance. This is particularly valuable for tasks like referring expressions where contextual understanding is crucial.\\n\\n# 3. Performance on RefCOCO+:\\nRegarding the performance difference between MaskInversion and Masked Crop on RefCOCO+ with ViT-B/16, this is an interesting observation that reveals important characteristics of the dataset. RefCOCO+ was specifically designed to focus on appearance-based descriptions rather than relational or contextual ones. 
Therefore, methods that isolate the target region (like Masked Crop) can perform well on this particular dataset. However, MaskInversion shows superior performance to Masked Crop across all other datasets and larger model architectures, particularly in scenarios requiring contextual understanding (PhraseCut and RefCOCO). We will clarify this dataset-specific characteristic in the paper.\\n\\n## Minor correction:\\nThank you for catching the typo (\\\"maks\\\"). We will correct this in the final version.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Feedback on our response\", \"comment\": \"We thank you for your constructive feedback and have addressed all points in detail. If you have no further questions, we kindly ask you to consider increasing the score.\"}", "{\"summary\": \"The paper proposes a new method that uses explainability maps from pretrained models to generate localized embeddings. These embeddings can represent object properties while capturing the broader image context. The paper further demonstrates that these learned region representations are versatile and can be applied to various tasks, including retrieval, grounding, captioning, and image generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper introduces a novel approach leveraging explainability methods to enable the model to focus on specific regions within an image. Unlike traditional techniques like clipping, blurring, or masking, this approach allows the model to retain access to global image information. The method is clearly outlined and validated through comprehensive downstream tasks, demonstrating its effectiveness.\", \"weaknesses\": \"The paper primarily focuses on single-object scenarios, lacking analysis on multiple objects and their interactions. 
Including experiments and analysis on multi-object scenarios would strengthen the study and provide a more comprehensive evaluation of the method's effectiveness. For instance, datasets like MSCOCO, with complex captions involving multiple objects, could offer valuable insights; sharing examples from such datasets would further illustrate the model's performance in these scenarios.\", \"questions\": \"1. What type of global image context does this method capture? Could the authors provide visualizations, like attention map, to illustrate how the global context influences localized embeddings across different scenarios? This would clarify the method\\u2019s effectiveness in capturing and utilizing global context for downstream tasks.\\n2. In referring expression retrieval tasks, MaskInversion with ViT-B/16 underperforms compared to Masked Crop in RefCOCO+. Could the authors provide a detailed analysis investigating the reasons for this discrepancy?\\n3. Minor comment: In the related work section, \\\"maks\\\" should be corrected to \\\"masks\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Feedback on our response\", \"comment\": \"We hope that our comprehensive response, particularly the extensive experimental comparisons with MaskCLIP, SCLIP, and CLIPSurgery, has adequately addressed your concerns about missing baselines and demonstrated the significant advantages of our approach. Given these new results showing consistent improvements across datasets and backbone sizes, we kindly ask if you would consider revising your assessment of our paper, or if you have any additional questions we could address.\"}", "{\"comment\": \"Thank you for your positive feedback and thoughtful evaluation of our work. 
We greatly appreciate your supportive comments indicating that our responses have addressed your concerns.\\n\\nWe noticed that the review score hasn't been updated yet, and we would be grateful if you could kindly consider adjusting it to reflect your positive assessment.\"}", "{\"metareview\": \"This paper addresses the poor localization problem of contrastive image-text pretraining (CLIP) models, which is a critical issue when using CLIP models in practice. The paper is well-written and motivated, and experiments show improved performance on the target localization tasks.\\n\\nHowever, as Reviewer gqG2 pointed out, the paper lacks comparison and discussion with training-free methods (MaskCLIP, SCLIP, and CLIPSurgery, as the reviewer suggested) that share the same (or similar) objective (i.e., resolving poor localization ability of CLIP). In addition, the paper lacks a discussion about additional costs compared to these training-free approaches. While the authors showed additional comparisons with those baselines in their rebuttal, these results were not reflected in the revised version of the paper, which remains the same concerns from the initial review.\\n\\nInitial reviews also highlighted concerns about complex design, lack of ablation studies, and required optimization steps compared to other baselines. Additionally, the AC agrees with Reviewer zd2G that the methodology heavily relies on LeGrad's explainability method and lacks technical novelty. \\n\\nThe AC considers this a borderline paper with both strengths and weaknesses. Given the reviews and rebuttal, and the highly competitive nature of ICLR submissions, the AC recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer gqG2 raised issues about missing baselines (MaskCLIP, SCLIP, CLIPSurgery). The authors answered with some experimental results, but they did not update the paper accordingly. 
The AC finds that the authors failed to address the reviewer's concerns clearly.\"}", "{\"comment\": \"As the discussion period is nearing its end, we wanted to respectfully follow up on our responses to your valuable feedback. We have carefully addressed all raised concerns and added substantial improvements to our paper.\\n\\nWe would greatly appreciate if you could review our responses and consider updating your assessment if you find our revisions satisfactory.\\nThank you for your time and expertise\"}", "{\"title\": \"Feedback on our response\", \"comment\": \"We hope our detailed responses have addressed all your concerns. If you have no further questions, we would greatly appreciate if you could consider increasing the score.\"}", "{\"comment\": \"We thank the reviewer for their thorough and constructive feedback. We particularly appreciate the recognition of the paper's clear writing, comprehensive experimental evaluation, and the advantages of our zero-shot approach that leverages pre-trained models effectively.\\n\\n# Response to Weaknesses:\\n## 1. Innovation and LeGrad Description\\nWe understand the concern about the extensive description of LeGrad potentially overshadowing our contributions. We included this detailed explanation to ensure the paper is self-contained and accessible. We emphasize that our key innovation lies not in the use of LeGrad itself, but in the novel approach of using explainability maps to guide the learning of localized embeddings without any fine-tuning (Sec. 3.2). As a second contribution, we also propose a gradient decomposition strategy for efficient computation (Sec. 3.3). Following the reviewer\\u2019s comment, we offer to move the detailed LeGrad description to the supplementary material upon request, while maintaining only the essential components in the main text.\\n\\n## 2. Regularization Loss Analysis\\nThanks for raising this topic. 
We want to point out that we see the regularization as a special feature that can be useful if there is a reason to include more global information in the LET (localized embedding token), which so far seems to be only relevant in cases of referential expressions. To further analyze the influence of $\\\\alpha$, we conducted a respective ablation study on the RefCOCO dataset (please see **Table 6** in the Annex of the current version of the paper). We further provide a qualitative example of the effect of alpha on the explainability map and the generated caption in the supplementary of the current version in **Figure 5**.\\nThe added figure illustrates this effect through generated captions for different $\\\\alpha$ values. When $\\\\alpha=0$, the model generates descriptions focused strictly on the masked region (_e.g._, \\\"woman in a boat\\\"), while increasing $\\\\alpha$ progressively incorporates more contextual information (_e.g._, \\\"produce\\\" or \\\"vegetables\\\").\\n\\nThe results of both the quantitative and qualitative analysis (please see the updated annex written in blue) show that $\\\\alpha$ acts as a \\\"slider\\\" controlling the balance between local and global information. When $\\\\alpha$ is very small (\\u22480), the model focuses strictly on the masked region, potentially missing contextual cues. As $\\\\alpha$ increases, the model incorporates more spatial context, with optimal performance around $\\\\alpha=5.0$. At very high values ($\\\\alpha>7.5$), the performance slightly decreases as the representation becomes too similar to the global [CLS] token.\\n\\n## 3. Performance depends on the quality of input masks\\nWe acknowledge this important practical consideration and have thoroughly investigated it in Sec. 4.5 (Table 4) of our paper. 
Our analysis shows that:\\n- Even using just bounding boxes instead of masks only results in a modest performance drop (44.7% to 42.9% on MSCOCO for the Class Retrieval Task).\\n- Automatically segmenting bounding boxes with SAM and using the resulting masks as input to our method achieves comparable performance to inputting ground-truth masks (45.0% vs. 44.7%). Hence, in combination with SAM, our method works very well given cheap bounding-box input from the users, a practical application scenario.\\n- Our method is more sensitive to under-specification (erosion) of masks (42.7%).\\n\\n## 4. Failure Cases\\nThank you for highlighting the need for deeper analysis of failure cases. We assume the main scenario where MaskInversion may underperform involves resolution limitations. Namely, as the method's effectiveness is bounded by the grid size of the underlying foundation model, if masks, e.g., of very small objects, are below the regular grid sampling size, we will not be able to exactly recover the localized embedding. We have conducted an additional experiment where we disentangle the performance of MaskInversion depending on the size of the mask. The reported numbers are the retrieval accuracy on the COCO dataset obtained using ViT-B/16.\\n\\n| Size Category | Retrieval Accuracy (%) |\\n|--------------|------------|\\n| Small (<10%) | 42.3 |\\n| Medium (10-30%) | 62.5 |\\n| Large (>30%) | 63.2 |\\n| Overall | 44.7 |\\n\\nThe table shows that MaskInversion performance is mainly bounded by the performance on small objects. We will add a respective discussion in the main paper. \\n## Response to Questions\\nYour suggestion about incorporating mask-guided feature capture during training is interesting. We believe this could be implemented as a form of self-distillation during the pre-training phase to enhance the model's fine-grained perception capabilities. 
This represents an exciting direction for future work that could potentially improve the foundation model's local feature capture abilities directly. We appreciate this suggestion and will explore it in future research.\"}", "{\"summary\": \"The paper introduces MaskInversion, a method designed to generate localized embeddings for specific image regions using pre-trained vision-language foundation models like CLIP. This approach leverages the feature representations of these models to create context-aware embeddings for a query image region specified by a mask at test time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper is overall well-written.\\n2. The paper provides a comprehensive set of experiments and results, including quantitative metrics and qualitative visualizations, which helps in understanding the method's effectiveness and behavior.\\n3. MaskInversion operates in a zero-shot setting, which means it can handle tasks without requiring additional training data for specific tasks, leveraging the knowledge embedded in pre-trained models.\", \"weaknesses\": \"1. This paper may be a bit short on innovation, as it actually uses the explainability map obtained from LeGrad to improve the feature extraction of the pre-trained models. Besides, some of the methods section is devoted to reviewing LeGrad, reinforcing the perception that this article is not innovative enough.\\n2. The regularization loss seems very important to avoid trivial solutions. However, I find no ablation study on the hyper-parameter $\\\\alpha$, which modulates the influence of the regularization loss. \\n3. The performance of MaskInversion is heavily dependent on the quality of the input masks. In practical applications, obtaining high-quality masks might be challenging, which could limit the method's real-world applicability.\\n4. 
The paper could benefit from a deeper analysis of scenarios where MaskInversion might fail or underperform, and how such cases could be addressed.\", \"questions\": \"If the pre-trained model itself does not have strong local feature capture capability, then post-training can give limited improvement. I'm curious if this idea of mask-guided feature capture can be applied to the training phase to improve the fine-grained perception of pre-trained VL models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DhdqML3FdM
Limits of Deep Learning: Sequence Modeling through the Lens of Complexity Theory
[ "Nikola Zubic", "Federico Soldà", "Aurelio Sulser", "Davide Scaramuzza" ]
Despite their successes, deep learning models struggle with tasks requiring complex reasoning and function composition. We present a theoretical and empirical investigation into the limitations of Structured State Space Models (SSMs) and Transformers in such tasks. We prove that one-layer SSMs cannot efficiently perform function composition over large domains without impractically large state sizes, and even with Chain-of-Thought prompting, they require a number of steps that scale unfavorably with the complexity of the function composition. Also, the language of a finite-precision SSM is within the class of regular languages. Our experiments corroborate these theoretical findings. Evaluating models on tasks including various function composition settings, multi-digit multiplication, dynamic programming, and Einstein's puzzle, we find significant performance degradation even with advanced prompting techniques. Models often resort to shortcuts, leading to compounding errors. These findings highlight fundamental barriers within current deep learning architectures rooted in their computational capacities. We underscore the need for innovative solutions to transcend these constraints and achieve reliable multi-step reasoning and compositional task-solving, which is critical for advancing toward general artificial intelligence.
[ "theory", "complexity theory", "state space models", "deep learning architectures", "logic in computer science" ]
Accept (Poster)
https://openreview.net/pdf?id=DhdqML3FdM
https://openreview.net/forum?id=DhdqML3FdM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zYOmiartNu", "xFLAZg9hH0", "w986bLOmq1", "vtexSPlLfK", "sRgzeqIopV", "qems7DrZHo", "o6w5hnRkS8", "j3zOZvgJls", "hFXwPsE9qt", "fFfdvyuVJU", "dzns25VJ3U", "cqVxXbJwfV", "cMEfc2TO6A", "ayFgbbxuUb", "YIRUIc6HXQ", "VcKfWrMvkD", "UmqXGUUiCf", "SmL7DRETCG", "Ibf5mPikTu", "DG6mikxfFD", "ADdtXesQCK", "A5lB96amTY", "7Wlol2RQ0G" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732545463537, 1732536035660, 1732536054495, 1730676645939, 1732794115035, 1733218344954, 1732545994161, 1732536677539, 1730714497479, 1733218708327, 1734650755635, 1733223070068, 1732845640651, 1732794672625, 1730484676037, 1730694989890, 1737523500057, 1733014069080, 1732678883791, 1732547445019, 1732663954014, 1733014302469, 1733000166217 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_Lnbo" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_yPps" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_U9LY" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_yPps" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_yPps" ], [ "ICLR.cc/2025/Conference/Submission2378/Area_Chair_gtjJ" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_BvEu" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2378/Reviewer_U9LY" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_BvEu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_yPps" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Authors" ], [ "ICLR.cc/2025/Conference/Submission2378/Reviewer_yPps" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer BvEu [Part I]\", \"comment\": \"We sincerely thank the reviewer for their thoughtful feedback and insightful suggestions. We appreciate acknowledging our work\\u2019s originality, clarity, and significance in exploring the limitations of reasoning abilities in Structured State Space Models (SSMs) and Transformers. Below, we address the reviewer\\u2019s queries and expand on their recommendations.\\n\\n---\\n\\n### **1. Additional Section on Limitations and Future Work**\\n\\n**Comment:** *Could the authors provide an additional section on limitations (of this work per se) and future works that practitioners may follow? For example, could the authors discuss more specific architectural modifications based on the existing results?*\\n\\n#### **Response:**\\n\\nWe acknowledge the importance of delineating the limitations of our work and outlining potential directions for future research. Below, we address these in detail:\\n\\n#### **Limitations of Our Work**\\n\\n1. **Focus on Current Architectures:**\\n Our theoretical and empirical analyses are restricted to existing SSM and Transformer architectures. We did not explore architectural modifications or external enhancements that might expand their reasoning capabilities.\\n\\n2. **Idealized Assumptions:**\\n The theoretical results are derived under standard complexity-theoretic and algorithmic assumptions. 
These abstractions do not fully account for practical factors such as optimization dynamics, hardware constraints, or domain-specific training regimes.\\n\\n3. **Scope of Empirical Evaluation:**\\n While we conducted extensive experiments across diverse tasks, our exploration is not exhaustive. Alternative configurations, training paradigms, or specific domain-focused benchmarks could reveal additional insights.\\n\\n---\\n\\n#### **Future Work and Potential Architectural Modifications**\\n\\nBuilding on our findings, we identify the following directions for future research:\\n\\n1. **Integrating External Memory Mechanisms:**\\n - Incorporating memory-augmented components, such as differentiable neural computers or external attention mechanisms, could enable models to effectively store and retrieve intermediate computations.\\n - This approach could mitigate existing multi-step reasoning and function composition limitations by allowing models to scale reasoning over extended sequences. While this improves general reasoning capabilities, it may remain insufficient for problems demanding deeper algorithmic understanding (e.g., solving unsolved mathematical conjectures).\\n\\n2. **Incorporating Symbolic Reasoning Components:**\\n - Hybrid architectures that combine neural networks with symbolic reasoning frameworks (e.g., integrating SAT solvers or formal theorem provers) may improve models\\u2019 logical inference capabilities.\\n - Symbolic modules can complement neural systems by performing exact computations, thus addressing the constraints of neural-based architectures in handling structured, logical reasoning.\\n\\n3. 
**Implementing Specialized Training Strategies:**\\n - Designing tailored training paradigms, such as curriculum learning or meta-learning approaches, could enable models to progressively develop reasoning skills.\\n - Auxiliary tasks emphasizing multi-step reasoning or function composition might also facilitate improved generalization to complex tasks.\\n\\n4. **Exploring Alternative Computational Frameworks:**\\n - Investigating architectures inspired by computational models with theoretically higher capacities (e.g., hypergraph-based neural networks or quantum-inspired architectures) could unlock new pathways for complex reasoning.\\n - Such frameworks may inherently support reasoning-intensive tasks without requiring impractical resource scaling.\\n\\n5. **Advancing Neural Algorithmic Reasoning:**\\n - Neural Algorithmic Reasoning, an emerging paradigm that aims to align neural networks with classical algorithmic processes, presents a promising avenue.\\n - By embedding algorithmic structures into neural systems, these models can execute complex computations over large domains (without requiring impractically large state dimensions or excessive computational resources), enabling capabilities such as iterative function composition, mathematical problem-solving, and logical deduction.\\n - Leveraging this approach can address the architectural and training bottlenecks identified in our study, offering a robust framework for tackling reasoning-centric tasks.\\n\\n---\"}", "{\"title\": \"Response to Reviewer U9LY [Part I]\", \"comment\": \"We sincerely thank Reviewer U9LY for the detailed and thoughtful feedback on our paper. We are delighted that you found the manuscript well-written, well-referenced, and addressing significant questions about the limitations of current architectures for sequence modeling. 
Your insights have been highly constructive, and we are grateful for the opportunity to improve/rewrite our work based on your suggestions.\\n\\n---\\n\\n### Addressing Weaknesses\\n\\n#### 1. **Providing Additional Background on Communication Complexity**\\n\\nWe appreciate your recommendation to improve the accessibility of our manuscript by providing more background on communication complexity. To address this, we will make the following changes in the revised paper:\\n- **Introduction of Key Concepts:** We will include a high-level, intuitive explanation of communication complexity, covering core ideas like communication protocols, the pointer chasing problem, and their relevance to sequence modeling.\\n- **Background on Relevant Problem Classes:** The revised paper will provide additional context for computational classes such as $\\\\mathbf{L}$ (logarithmic space) and $\\\\mathbf{NL}$ (nondeterministic logarithmic space), explaining their role in characterizing the challenges faced by sequence models.\\n\\nThese additions will ensure that readers without a computational complexity background can better understand the motivations and implications of our work.\\n\\n---\\n\\n#### 2. **Bridging the Gap Between Theory and Practice in Computational Complexity**\\nWe value your observations about the discrepancies between theoretical constructs and practical applications. 
Below, we address your specific questions:\\n\\n**(a) What aspects of the formal definitions might be overly general with respect to practical settings?**\\nThe following assumptions in our theoretical framework contribute to potential overgeneralization:\\n- **Worst-Case Analysis:** Theoretical results often reflect worst-case scenarios, which may not represent the typical distributions of real-world data.\\n- **Simplified Architectures:** We consider simplified architectures for analytical tractability, omitting practical enhancements such as optimized attention mechanisms or regularization strategies.\\n- **Deterministic Computation:** Our theoretical results assume deterministic computations, overlooking stochastic elements like dropout and random initialization.\\n- **Perfect Inputs:** Our analysis does not model real-world data imperfections (e.g., noise, errors).\\n\\nBut, since models are proven to be limited even in these idealized cases, they will struggle even more in practice when there is more stochasticity.\\n\\n**(b) Could discrepancies between formalization and practice account for the observed results?**\\nYes, these discrepancies might explain some of the divergence between theoretical and practical performance. However, our empirical findings suggest that theoretical limitations remain relevant:\\n- **Performance Degradation:** Tasks requiring deep compositional reasoning showed significant performance degradation, aligning with our theoretical predictions.\\n- **Error Patterns:** Failure modes observed in practice are consistent with our identified theoretical constraints.\\n\\n**(c) What empirical work could help bridge this gap?**\\nTo close the gap between theory and practice, we propose the following empirical strategies:\\n- **Controlled Experiments:** Design benchmarks tailored to theoretical limitations, allowing for direct empirical validation.\\n- **Ablation Studies:** Investigate how specific architectural components and training strategies impact model performance on theoretical tasks.\\n- **Error Analysis:** Conduct detailed examinations of failure cases to uncover reasoning bottlenecks in models.\\n- **Cross-Model Comparisons:** Compare various architectures (e.g., RNNs, Transformers, and SSMs) to identify universal versus architecture-specific limitations.\\n- **Real-World Task Evaluation:** Assess models on real-world problems requiring compositional reasoning to determine practical applicability.\\n- **Benchmark Development:** Establish standardized datasets and tasks to enable broader comparisons and progress tracking.\\n- **Theory-Informed Experiments:** Use theoretical insights to design experiments that test specific hypotheses about model limitations.\\n\\nAlthough we included most of them, these initiatives would promote a deeper understanding of the interplay between theoretical constraints and practical model performance. They can be interesting follow-up work. Thank you for pointing out this direction.\\n\\n---\"}", "{\"title\": \"Response to Reviewer U9LY [Part II]\", \"comment\": \"### Addressing Specific Comments and Suggestions\\n\\n#### Formatting Error in Section 7 Heading\\nThank you for identifying this oversight. 
We will fix the spacing issue to ensure the heading is properly formatted.\\n\\n#### Relocating the \\\"Implications for General Artificial Intelligence\\\" Paragraph\\nWe appreciate your suggestion to improve the flow of this discussion. We will relocate this paragraph to a dedicated \\\"Implications\\\" section or incorporate it into the \\\"Conclusion\\\" to improve the paper's structure.\\n\\n---\\n\\nThank you once again for your valuable feedback. We are confident these revisions will make our work better and more accessible to the community.\"}", "{\"summary\": \"The core result of this paper is a proof that one-layer structured state space models cannot perform efficient function composition. The paper builds upon prior work from Christos Papadimitriou in Peng et al., who proposed the paradigm. The paper extends results to SSM models and shows that significant CoT computation is required to achieve function composition in SSMs. Authors extend analysis to multi-layer SSMs, demonstrating that the computation of an L-layer SSM on a prompt of length N can be carried out using O(L log N) bits of memory, positioning SSMs within the complexity class L (logarithmic space). This implies that SSMs cannot solve problems that are NL-complete unless L = NL, which is widely believed to be false.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Paper is an important contribution as it extends rigorous complexity theory analysis of LLM model limitations to SSM models proving sharp results on resources required to execute function composition.\", \"weaknesses\": \"No real weakness. 
Potentially authors could spend more time comparing their results to Peng for readers.\", \"questions\": \"Authors could also clarify the numerical experiments and their relationship to the main theorem with respect to layer number.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yPps [Part II]\", \"comment\": \"Dear Reviewer yPps,\\n\\n1. We are very grateful for your insightful review. Your feedback has been instrumental in strengthening and changing Theorem 4. We have addressed all of your questions (points 2 to 6) by improving Theorem 4 and adding relevant explanations in the text following Theorem 3 and before the Experiments section. These updates are included in the revised manuscript.\\n\\n2. You are correct in noting that we do not utilize external assistance in the \\\"reasoning\\\" process. This clarification aligns with the point raised by Reviewer BvEu, who initiated the discussion regarding AlphaProof-based external systems. If our paper is accepted, we will include a few additional sentences in the camera-ready version to further clarify and eliminate any ambiguity on this matter.\\n\\n3. Thank you once again for your valuable observation. We did not put assumptions on A_t, B_t, C_t, and D_t since we now changed Theorem 4. Log assumptions had to be initially made, as in [1] on Page 6, Lemma 4.1.\\n\\n[1] The Illusion of State in State-Space Models; William Merrill, Jackson Petty, Ashish Sabharwal\"}", "{\"comment\": \"### 2\\n\\nI would very much appreciate explicitly pointing out that Theorem 4 applies only to *finite precision* SSMs, and would highly recommend adding the qualification every time Theorem 4 is referenced (e.g. lines 506 to 510 in the conclusion of the updated paper). I would also appreciate offering a more balanced viewpoint in e.g. 
lines 426-431 of the updated paper.\\n\\nTo maximize transparency to the reader, I would recommend pointing out the analogy between Theorem 4 and the statement that real computers are limited to FSMs because of their finite memory. It may be worth mentioning that, as a result, the practical implications of the theorem statement are more relevant for heavily quantized SSMs. (Perhaps to partially justify the finite-precision model you may also make a connection to existing work suggesting a real-valued parameter really acts as though it's a small finite number of bits.) And then you may suggest future work that addresses the concern by assuming infinite precision.\\n\\nIn summary, I believe the current new additions in the paper exaggerate the practical implications of Theorem 4, and should be significantly modified/reduced for a more accurate portrayal. **My score increase assumes that the authors will implement my recommendations here.**\"}", "{\"title\": \"Response to Reviewer BvEu [Part II]\", \"comment\": \"**2. Discussing Tree Search and Self-Correction Methods in Relation to Our Work**\\n\\n*Question:* *In practice, complicated reasoning tasks are often solved with (tree) search (cf., AlphaProof, GPT-f, HyperTree Search), potentially with self-correction (cf., Self-Correction, SCoRe), beyond naive stepwise chain-of-thought augmentation. Can the authors provide further discussions on this?*\\n\\n**Answer:**\\n\\nWe appreciate the reviewer\\u2019s insightful question and the opportunity to discuss how advanced methods like tree search and self-correction relate to our findings.\\n\\n---\\n\\n**Relation to Our Analysis**\\n\\n1. **Intrinsic Architectural Capabilities:** \\n Our analysis primarily focuses on the inherent computational limitations of Structured State Space Models (SSMs) and Transformers when used in isolation, without the aid of external mechanisms or augmentations.\\n\\n2. 
**External Mechanisms as Augmentations:** \\n Methods like tree search and self-correction introduce external reasoning frameworks that extend beyond the models' intrinsic capabilities. While they mitigate some computational limitations, they do so by leveraging additional resources or algorithms rather than addressing the core architectural constraints.\\n\\n---\\n\\n**Tree Search Methods**\\n\\n1. **Overview:** \\n Tree search algorithms, such as those employed in AlphaProof, GPT-f, and HyperTree Search, improve reasoning by systematically exploring multiple solution paths. They allow models to navigate combinatorial spaces and evaluate alternatives effectively.\\n\\n2. **Impact on Reasoning:** \\n By integrating systematic exploration, these methods enable models to handle complex, structured reasoning tasks requiring deep logical deductions or exploration of solution spaces.\\n\\n3. **Connection to Our Work:** \\n - Tree search compensates for the lack of native reasoning capacity in SSMs and Transformers by layering external logic and decision-making. \\n - Our findings suggest that current architectures struggle with tasks requiring multi-step reasoning or function composition without such mechanisms. The need for tree search highlights the models\\u2019 reliance on external processes for tasks beyond their intrinsic computational scope.\\n\\n---\\n\\n**Chain-of-Thought (CoT) Prompting Limitations**\\n\\n1. **Naive CoT vs. Advanced Methods:** \\n Our results show that naive stepwise CoT prompting fails to overcome the computational limits of SSMs and Transformers. Advanced approaches like tree search and self-correction provide additional reasoning capabilities but at the cost of relying on external augmentations.\\n\\n2. **Implications:** \\n These methods demonstrate that intrinsic architectural changes, rather than external augmentations, are needed to address reasoning limitations directly.\\n\\n---\\n\\n**Implications for Future Research**\\n\\n1. 
**Architectural Integration of Reasoning Mechanisms:** \\n A key direction is embedding reasoning frameworks such as search algorithms and iterative refinement into the models themselves, enabling these capabilities to become intrinsic rather than external.\\n\\n2. **Designing Intrinsically Iterative Models:** \\n New architectures that inherently support iterative reasoning and dynamic exploration could eliminate the reliance on external augmentations.\\n\\n---\\n\\nWe thank the reviewer for prompting this valuable discussion and will explicitly clarify in our manuscript that our analysis does not incorporate external \\\"engines.\\\" We welcome any further questions and sincerely hope that, if the reviewer is satisfied with our responses, they will consider revising their score.\"}", "{\"title\": \"Quick reaction to the rebuttal\", \"comment\": \"Thanks for your outline of the proposed changes. I think they look good and I don't have any more comments at this time.\\n\\n------\\n\\nNote that you can already make changes to the manuscript and update your submission (this has been open since the beginning of the discussion period). This way reviewers can see the actual changes rather than a proposal for changes, and possibly reassess their evaluation. (This is just my impression and not an official recommendation).\"}", "{\"summary\": \"The paper theoretically and empirically studies the limitations of the computational power of SSMs in terms of effective space complexity and their ability to do function composition.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Novel theoretical insight on the recently popular SSMs (especially in regards to their limited ability to do function composition), echoing similar previous work on Transformers. Experiments validate the theory.\", \"weaknesses\": \"1. The proof of Theorem 1 appears to rely on the specific format of the prompt (see Questions below).\\n2. 
I have doubts about the correctness of Theorem 4's proof, and I don't think the theorem statement itself properly formalizes the insight that it's trying to convey (see Questions below).\\n3. It seems to me that the criticism in Question 6 below (i.e., that Theorem 4 really says \\"SSMs' computational power is bounded by the amount of space used to represent floating points\\") equally applies to the cited papers Merrill & Sabharwal 2023 and Merrill & Sabharwal 2024. Since Theorem 3 is based on the results of the latter paper, it relies on the problematic assumption that precision $p = O(\\\\log N)$. A consequence of this is that changing the assumption on how $p$ scales with $N$ even by a little bit will completely mess up the theorem statement, e.g., if $p = O(\\\\mathrm{poly}(N))$ then we can no longer say that SSMs can't solve these P-complete problems. (See Question 6 below for a more detailed discussion of this point as applied to Theorem 4.)\", \"questions\": \"The following questions are about Theorem 1.\\n1. The proof seems to rely on the specific ordering of $g$ followed by $f$ followed by $x$ in the prompt. Does a similar proof work when these pieces are in different orders? In particular, what about the order that would intuitively be the easiest for the SSM: $x$ followed by $g$ followed by $f$? (The intuition here comes from the fact that a streaming algorithm taking in $(x, g, f)$ in this order wouldn't need to store the entire table of $f$ or $g$.)\\n\\nThe following questions are about Theorem 2.\\n1. How's the definition of CoT different from just autoregressive decoding?\\n\\nThe following questions are about Theorem 4.\\n1. Lines 74 & 386: Isn't $\\\\mathsf{L}$ a class of decision problems? Shouldn't one say $\\\\mathsf{FL}$ instead of $\\\\mathsf{L}$ here?\\n2. How is it possible for $p, d$ to grow with $N$? While an SSM can take an input of variable length $N$, its $p$ and $d$ must be fixed, right? 
I don't know what it means for the precision and hidden dimension of an SSM to increase as the input sequence becomes longer. SSMs are unlike boolean circuits which require a differently-sized circuit for every possible input size.\\n3. While the theorem statement assumes $p, d = O(\\\\mathrm{poly}(N))$, the proof assumes $d = O(1), p = O(\\\\log N)$. For example, line 359 says \\\"each element of these matrices can be represented using $O(\\\\log N)$ bits.\\\" Line 377 says \\\"numbers are represented with $O(\\\\log N)$ bits of precision.\\\" Line 379 says \\\"we only need to keep the current and previous hidden states\\\", which require $O(dp)$ space, hence implicitly assuming $dp = O(\\\\log N)$.\\n4. How are $A_t, B_t, C_t, D_t$ (which are functions of $x_t$) computed? Some assumption on the space complexity of these computations is missing from the theorem statement.\\n5. This is a small detail, but in the current formulation where the input sequence is a bunch of vectors of real numbers, I believe the input size is actually $Ndp$. But $O(\\\\log N)$ implies $O(\\\\log(Ndp))$, so there's no problem here even when $dp = \\\\omega(1)$ in $N$.\\n6. Assuming $A_t, B_t, C_t, D_t$ are independent of the input sequence (and \\\"easily computable\\\"), it's easy to generalize the theorem to say that an SSM with $L$ layers, precision $p$ and hidden dimension $d$ is equivalent to an algorithm that uses $O(Lpd)$ space (i.e., you store hidden states $h_t^{(l)}$ of all layers $l$ at the current time step $t$ and the intermediate result $y_t^{(l_\\\\text{cur})}$ of the current layer $l_\\\\text{cur}$). So Theorem 4's statement that SSMs use log-space is actually just an artifact of $Lpd = O(\\\\log N)$ in the assumption of the (corrected version of the) theorem (see Question 3 above). So it's unclear what insight we get from the fact that SSMs are equivalent to log-space when $Lpd = O(\\\\log N)$. 
If $p, d = O(1)$ (i.e., the case of actual SSMs), then we get that linear SSMs are equivalent to algorithms that use $O(1)$ space, but does that mean that the decision problems that linear SSMs can solve must be regular languages? (A question of a similar nature: my computer has finite memory, so does that mean it can only decide regular languages?) On the other hand, if $p, d = O(\\\\mathrm{poly}(N))$, then we get that linear SSMs can be simulated by algorithms that use polynomial space, and we no longer get the takeaway that SSMs are limited. To summarize, the fact that the computational power of SSMs (as proven using the method in Theorem 4) varies widely based on what is assumed of the scaling of precision $p$ w.r.t. $N$ indicates a failure of \\"SSM with precision $p = O(f(N))$\\" as a model to accurately capture the behavior of actual SSMs. Theorem 4 generalized would tell me that an SSM with precision $p = 16N$ can potentially solve P-complete problems and yet an SSM with $p = 64$ can only decide regular languages, but in practice there shouldn't be a difference in what these two classes of SSMs can do. (See below for what I think a theorem statement that actually formalizes the intuitive notion of \\"SSMs can only compute problems in [complexity class]\\" might look like.)\\n\\nHere's my suggestion for how to actually formalize \\"SSMs only have _____ computational power\\" into a theorem statement to replace the current Theorem 4.\\n\\nFirst, we need to define the inputs and outputs as actual strings, since members of $\\\\mathsf{FL}$ are functions $f : \\\\Sigma^* \\\\to \\\\Sigma^*$. So the input to the SSM is a sequence of tokens $w_1 \\\\ldots w_N$ that first get embedded into vectors $x_t \\\\in \\\\mathbb R^m$ ($t \\\\leq N$), and the output is an argmax applied on top of the last layer of $y_t$'s for $t > N$ until the <eos> token. (The $w_t$ for $t > N$ are the decoded $\\\\arg \\\\max_i (y_{t-1})_i$ as usual.) 
A basic formalization like this to properly define the inputs/outputs of an SSM is missing from the paper and should be added.\\n\\nNow, to prevent the issue in Question 6 above where the SSM's computational power ends up being limited by the finite precision in floating point computations, we really should assume infinite precision here. (So we're working with an idealized version of an SSM and not a real one.) And then we can ask about the computational power of an SSM with (given) hidden dimension $d$ and # layers $L$. So the theorem statement looks something like \\\"Functions $f : \\\\Sigma^* \\\\to \\\\Sigma^*$ that can be computed by an infinite-precision SSM with hidden dimension $d$ and # of layers $L$ are within the class _____.\\\"\\n\\nThe current proof of Theorem 4 doesn't work anymore, since directly carrying out the computations in the SSM would require infinite space. However, intuitively, SSMs should still be limited in their computational power for the following reason. Since input tokens are discrete, intuitively the spaces of $h_t$ and $y_t$ can, in some sense, be discretized into \\\"regions of equivalent behavior\\\". Thus, if we show that under certain conditions, the space of the hidden state can be divided into a finite number of regions of equivalent behavior, then the SSM is just a finite-state machine and the functions it computes are computable in $O(1)$ space.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Final summary\", \"comment\": \"I very much thank the authors for engaging in productive discussions about their paper. From the start to the end of the discussion period, I increased my score from 3 (reject) to 6 (borderline accept).\\n\\nA few of my questions and concerns have been addressed, and the main concern remaining pertains to the practical implications of Theorem 4. 
Assuming that the authors will address my [current concerns](https://openreview.net/forum?id=DhdqML3FdM&noteId=qems7DrZHo) that version 2 of the manuscript oversells the implications of the result, I recommend weak acceptance of the paper.\\n\\nSincerely,\\n\\nReviewer yPps\"}", "{\"metareview\": \"This paper empirically and theoretically analyzes the computational limitations of Structured State Space Models (SSMs). The authors introduce three theorems: the inability of SSMs to compose functions, exponential scaling of compute when performing chain-of-thought, and an inability to solve NL-complete problems (unless L=NL). These theorems are backed up by empirical evidence in several different reasoning tasks.\", \"most_reviewers_agreed_that_the_paper_is_original\": \"it echoes findings for Transformers but applies them to the SSM architecture and yields new theoretical insights which are likely to be very valuable to the SSM community. These theoretical insights are further backed up by experiments demonstrating the limitations of function composition in SSMs. Overall, reviewers generally agreed that the paper was well-structured, with proofs being clear for each theorem.\\n\\nAs is common for theoretical papers of this kind, and as noted by several reviewers, it is not clear how large the potential gap is between the theoretical results presented here and practical applications. For example, in practice, methods such as search and self-correction are often used in combination with SSMs to handle complex problems, and as a result it may be possible to mitigate some of the challenges of SSMs by relying on these. 
However, the authors now note this directly in the revised manuscript, and reviewers agree that understanding the limitations of SSMs architecturally (without such additions) is still very valuable.\\n\\nAll reviewers agree that this paper is worth accepting for the reasons above, and I similarly recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The authors and reviewers engaged in a very productive discussion during the rebuttal period, which led to significant improvements in the paper and the raising of overall scores (in some cases by multiple points).\\n\\nTheorem 4 was substantially improved based on the feedback from several reviewers, and the authors added a significantly improved conclusion section which also touches on limitations.\"}", "{\"title\": \"Response to Reviewer yPps [03.12.2024]\", \"comment\": \"Dear Reviewer yPps,\\n\\nThank you for your thoughtful feedback and for engaging in productive discussions about our paper. We appreciate your suggestions regarding Theorem 4 and will incorporate them into the revised manuscript. Specifically, we will explicitly state that Theorem 4 applies only to finite precision SSMs wherever it is referenced, adjust the discussion to offer a more balanced viewpoint, and include the analogy to real computers and FSMs to improve transparency for the reader.\\n\\nThank you again for your valuable input, which has been instrumental in improving our work.\"}
Below, we outline the key changes made:\\n\\n- **Theorem 1 Change**: Added the prompt order to Theorem 1 to provide a clearer understanding of its application.\\n- **Theorem 4 Revision**: Replaced Theorem 4 to better align with the revised framework and improve its robustness.\\n- **External Engines Integration**: Included citations to AlphaProof-based systems and other external engines to contextualize our work within existing technologies.\\n- **AGI Implications**: We moved the discussion on AGI implications to the conclusion section and removed it from the related work to better highlight its significance.\\n- **Appendix Expansion**: A comprehensive background on communication complexity was added to the appendix to provide additional context and support for our findings.\\n\\nAll revisions in the updated manuscript are highlighted in blue for your convenience. Although the discussion deadline has passed, we remain committed to refining our work. We are open to addressing any remaining typos or minor issues in the camera-ready version if the paper is accepted. We will re-read the manuscript to correct any small misalignments and ensure clarity and coherence.\\n\\nOnce again, we sincerely thank you for your valuable feedback, which has been instrumental in improving our paper. We hope the revisions satisfactorily address your concerns and that you will consider updating your evaluations/ratings accordingly.\\n\\nBest regards,\\n\\nauthors of \\\"Limits of Deep Learning: Sequence Modeling through the Lens of Complexity Theory.\\\"\"}
They provide theoretical and empirical evidence for the infeasibility of these architectures to perform function composition over large domains.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Well written\", \"Well referenced\", \"Settles important questions about the limitations of current architectures of interest for sequence modeling.\", \"This kind of theoretical work is very much needed.\"], \"weaknesses\": \"* The main manuscript could use more background on communication complexity concepts and techniques, presented intuitively, to make the methodology and results more accessible to a general ML audience.\\n* Computational complexity work often must deal with a gap between formal and practical problems (even if the work is conducted rigorously). The paper could use a discussion about this possible gap, the (non-)robustness of the results to closing this gap, and what kind of studies are needed to close this gap.\\n\\nSince there is still roughly 1 page of extra space before the manuscript reaches the page limit, it would be useful to have more background on the computational complexity concepts and techniques deployed in the proofs. This could include concepts and techniques from communication complexity, relevant problem classes and reducibilities, and the rationale that supports the proofs.\\n\\nComputational complexity work often must deal with objections related to the gap between theory and practice. It would be useful to elaborate on how these gaps might play out in this work, and what responses to possible objections might look like. For instance, what aspects of the formal definitions might be overly general with respect to a possibly more restricted practical setting? Might such a discrepancy between formalization and practical scenario account for the results? 
What kind of descriptive empirical work might be needed to close the gap between formal and practical problems?\", \"questions\": \"Minor comments and suggestions:\\n\\nThere is a formatting error causing the heading of section 7 to come too close to the previous paragraph.\\n\\nA summary figure of the empirical results could also be useful in the main manuscript if space limitations allow.\\n\\nThe \\u201cImplications for General Artificial Intelligence\\u201d paragraph seems more suited to the Conclusions section or a separate \\u201cImplications\\u201d section than to the \\u201cRelated Work\\u201d section. An Implications section could also elaborate on the links between the formal proofs and the issues of practical interest.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper discusses the limitations of the reasoning abilities of SSMs and Transformers. Theoretically, it presents three theorems: (i) the inability of SSMs to efficiently perform function composition; (ii) Chain-of-thought helps, yet with an exponential increase in reasoning steps; and (iii) the inability of multi-layer SSMs to solve problems that are NL-Complete unless L = NL. Empirically, the authors present experiments on qualitative examples of zero-shot inference, function composition, math, and other reasoning tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Originality: This paper precisely defines the problem and proposes new theorems showing new results.\", \"Clarity and rigor: The theoretical and empirical sections are clearly structured, and concepts are clearly defined.\", \"Significance: Reasoning is a crucial problem in this domain. 
The lower bound on CoT steps required for iterated function composition is an interesting result, showing the practical challenges in scaling up these techniques, which is valuable for the research community. I am not able to verify the proofs, though.\", \"Empirical evaluation: The empirical results corroborate the theoretical claims effectively. The authors tested various composition tasks (function, spatial, temporal, and relational), presenting concrete evidence of performance degradation across different types of tasks.\"], \"weaknesses\": [\"Note: Unfortunately I am not a domain expert in theoretical computer science, and my evaluations are based on educated guesses.\", \"Could the authors provide an additional section on limitations (of this work per se) and future work that practitioners may follow? For example, could the authors discuss more specific architectural modifications based on the existing results?\", \"In practice, complicated reasoning tasks are often solved with (tree) search (cf., [alphaproof](https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/), [GPT-f](https://arxiv.org/pdf/2009.03393), [HyperTree Search](https://arxiv.org/pdf/2205.11491)), potentially with self-correction (cf., [Self-Correction](https://arxiv.org/pdf/2405.18634), [SCoRe](https://arxiv.org/pdf/2409.12917)), beyond naive stepwise chain-of-thought augmentation. Can the authors provide further discussions on this?\"], \"references\": \"[1] Polu, Stanislas, and Ilya Sutskever. \\\"Generative language modeling for automated theorem proving.\\\" arXiv preprint arXiv:2009.03393 (2020). \\n[2] Lample, Guillaume, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. \\\"Hypertree proof search for neural theorem proving.\\\" Advances in neural information processing systems 35 (2022): 26337-26349. \\n[3] Wang, Yifei, Yuyang Wu, Zeming Wei, Stefanie Jegelka, and Yisen Wang. 
\\\"A Theoretical Understanding of Self-Correction through In-context Alignment.\\\" arXiv preprint arXiv:2405.18634 (2024). \\n[4] Kumar, Aviral, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D. Co-Reyes, Avi Singh, Kate Baumli et al. \\\"Training language models to self-correct via reinforcement learning.\\\" arXiv preprint arXiv:2409.12917 (2024).\", \"questions\": \"See the weakness. I am happy to raise my score if the authors can address the concerns proposed by reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer yPps [01.12.2024]\", \"comment\": \"Dear Reviewer yPps,\\n\\nThank you for your thoughtful and detailed feedback on our paper. We greatly appreciate the time and effort you have invested in reviewing our work, and your insights have been invaluable in improving the quality and clarity of our manuscript. We are also grateful for your updated score.\\n\\n---\\n\\n**1. Inclusion of the Original Theorem 4:**\\n\\nWe agree with your suggestion to include the original Theorem 4 in the appendix. We will add it as **Theorem 5** in the revised manuscript. In this appendix section, we will state the log-precision assumption given by Merrill et al. (2024). This will provide readers with a comprehensive understanding of the different assumptions under which our results hold and how they relate to prior work.\\n\\n---\\n\\n**2. Addressing the Concern in Question 6:**\\n\\nWe understand your concern regarding the practical implications of our result in Theorem 4. You mentioned that while physical computers have finite memory, we typically model them as Turing machines with infinite memory to capture their computational capabilities. 
Similarly, you suggest that although real SSMs operate with finite precision, the apt theoretical model might be one with infinite precision.\\n\\nOur intention with Theorem 4 was to highlight the theoretical limitations of SSMs when operating under practical constraints of finite precision and fixed hidden dimensions. We acknowledge that, in theory, allowing infinite precision could enable SSMs to perform more complex computations. However, in practice, neural network models, including SSMs, are implemented with finite precision due to hardware limitations.\\n\\nBy demonstrating that SSMs with finite precision are computationally equivalent to finite-state machines, we aim to underscore the inherent limitations faced by these models in practical settings. This result emphasizes that, under realistic conditions, SSMs may not suffice for tasks requiring computational power beyond regular languages.\\n\\nWe agree that this perspective aligns with understanding the limitations of real-world computers, which, despite having finite memory, can simulate Turing machines for practical purposes. However, finite memory limits the size and complexity of computations that can be performed in practice.\\n\\nTo address your concern and provide a balanced view, we will revise the discussion following Theorem 4 to clarify that while our result highlights theoretical limitations under finite precision, it does not preclude the possibility of SSMs approximating more complex computations in practice. We will emphasize that our findings serve as a foundation for understanding the computational boundaries of SSMs and motivate the exploration of their capabilities under infinite precision, along with architectures with improved capabilities.\\n\\n---\\n\\n**3. Explicit Assumption on Matrices in Theorem 4:**\\n\\nThank you for pointing out the need to explicitly state the assumption regarding the matrices A, B, C, and D in Theorem 4. 
We will revise the theorem statement to clearly specify that these matrices are fixed and do not depend on the input sequence.\\n\\n---\\n\\n**4. Formalization of Chain-of-Thought (CoT):**\\n\\nWe appreciate your acceptance of our formalization of CoT based on prior work, even though you maintain reservations about its distinction from autoregressive decoding. We will further refine our explanation in the manuscript to better highlight the aspects of CoT that involve explicit reasoning steps, distinguishing it more clearly from standard autoregressive decoding.\\n\\n---\\n\\n**5. Appreciation for the Updated Theorem 1:**\\n\\nWe are pleased you find the updated Theorem 1 stronger and more satisfactory. Your insights have greatly improved the rigor and clarity of our theoretical results.\\n\\n---\\n\\nOnce again, we sincerely thank you for your detailed and constructive review. Your feedback has significantly helped us improve our paper, and we are grateful for your willingness to engage deeply with our work. We have incorporated most of your suggestions into the revised manuscript and will incorporate others, and we believe these changes have strengthened the overall quality of our submission.\"}", "{\"title\": \"Response to authors' rebuttal Part I\", \"comment\": \"I thank the authors for their detailed response and explanations.\\n\\n**1.** I look forward to seeing the strengthened theorem statement. I would appreciate it if it could be provided before the end of the discussion period.\\n\\n**2.** I appreciate the informal explanation of the difference between CoT and autoregressive decoding, but my question was specifically about the formalization of CoT given in the paper. You say \\\"we defined CoT within the context of SSMs by formalizing how the model recursively generates additional tokens based on its previous outputs and inputs\\\", which is in agreement with my understanding of your definition. 
However, to me, that definition seems to be an accurate formalization of autoregressive decoding without incorporating the reasoning aspects of CoT.\\n\\n**3.** Making the changes to replace $\\\\mathsf{L}$ with $\\\\mathsf{FL}$ sounds good to me.\\n\\nI'm looking forward to the authors' responses to my questions 2-6 regarding Theorem 4.\"}", "{\"title\": \"Response to Reviewer Lnbo\", \"comment\": \"We sincerely thank the reviewer for their positive feedback and highlighting our paper's contributions. Below, we address your suggestions regarding comparisons to prior work and the relationship between numerical experiments and theoretical results.\\n\\n---\\n\\n### **Comparing Our Results to Peng et al.**\\n\\n**Comment:** *No real weakness. Potentially authors could spend more time comparing their results to Peng for readers.*\\n\\n**Response:** \\nWe agree that a detailed comparison with Peng et al. would aid readers. Here\\u2019s a concise summary of distinctions and extensions provided by our work:\\n\\n1. **Architectural Scope:** \\n - Peng et al. focused on Transformers, showing limitations in function composition over large domains. \\n - We extended this analysis to Structured State Space Models (SSMs), proving that they also struggle with function composition without exponential state dimensions, suggesting broader limitations across sequence models.\\n\\n2. **Theoretical Contributions:** \\n - Peng et al. placed Transformers in weak complexity classes, showing their inefficiency in compositional reasoning. \\n - We showed that multi-layer SSMs operate within \\\\(L\\\\) (logarithmic space), reinforcing their computational constraints and inability to solve \\\\(NL\\\\)-complete problems unless \\\\(L = NL\\\\).\\n\\n3. **Chain-of-Thought (CoT) Analysis:** \\n - Peng et al. highlighted exponential scaling in reasoning steps with CoT prompting for Transformers. 
\\n - We demonstrated similar limitations for SSMs, with CoT requiring a polynomially growing number of reasoning steps that remains insufficient to overcome inherent architectural constraints.\\n\\n4. **Empirical Evidence:** \\n - Both works showed limitations in function composition tasks, but our experiments confirmed that SSMs face similar barriers despite their distinct architectural design, with many more experiments reported in the appendix.\\n\\n**Summary:** \\nOur findings complement Peng et al.\\u2019s work by generalizing their insights to SSMs, reinforcing that these limitations are not specific to Transformers but reflect fundamental constraints in sequence modeling architectures.\\n\\n---\\n\\n### **Clarifying the Relationship Between Numerical Experiments and Theoretical Results**\\n\\n**Comment:** *Authors could clarify the numerical experiments and their relationship to the main theorem with respect to layer number.*\\n\\n**Response:** \\nOur numerical experiments were designed to validate the theoretical results, particularly Theorem 3, which places \\\\(L\\\\)-layer SSMs in \\\\(L\\\\) (logarithmic space). Key findings include:\\n\\n1. **Layer Depth and Performance:** \\n - Increasing SSM depth improved performance marginally but plateaued quickly, consistent with the theoretical prediction that computational capacity does not scale significantly with depth.\\n\\n2. **Function Composition Tasks:** \\n - Deeper SSMs did not overcome limitations in multi-step reasoning or function composition, aligning with our proof that they remain constrained by \\\\(L\\\\).\\n\\n3. **Chain-of-Thought Prompting:** \\n - CoT prompting aided SSMs but required polynomial growth in reasoning steps. Additional layers did not alleviate this growth, reinforcing that the architectural constraints persist despite CoT.\\n\\n**Conclusion:** \\nThese experiments support our theoretical results, demonstrating that SSM limitations stem from fundamental architectural constraints, not just depth. 
\\n\\n---\\n\\nWe appreciate the reviewer\\u2019s thoughtful suggestions and hope these clarifications improve understanding. We remain open to further questions and would be grateful if these responses are satisfactory enough to warrant an updated score.\"}", "{\"title\": \"Response to Reviewer yPps [Part I]\", \"comment\": \"Dear Reviewer, thank you for your thorough and insightful review of our paper. We appreciate your constructive feedback and have carefully considered each of your points. Before the deadline, we will update the manuscript.\\n\\n---\\n\\n### **Responses to Questions:**\\n\\n#### **1. Questions about Theorem 1:**\\n\\n**Question 1:**\\n\\n*The proof seems to rely on the specific ordering of g followed by f followed by x in the prompt. Does a similar proof work when these pieces are in different orders? In particular, what about the order that would intuitively be the easiest for the SSM: x followed by g followed by f? (The intuition here comes from the fact that a streaming algorithm taking in \\\\(x, g, f\\\\) in this order wouldn't need to store the entire table of f or g.)*\\n\\n**Response:**\\n\\nYou are correct that the ordering of the prompt in our proof plays a role in the communication complexity argument. However, the fundamental limitation does not depend on the specific order of x, g, and f in the prompt. \\n\\nIn our initial proof, we assumed a prompt where the descriptions of g and f precede the query x. This aligns with how we constructed the communication protocol to simulate the SSM's computation.\\n\\nWe can adapt the proof accordingly if the prompt order is x followed by g followed by f. The key challenge remains: to compute f(g(x)) over large domains without storing substantial information about f and g is fundamentally difficult for SSMs due to their limited state size.\\n\\nEven if x is presented first, the SSM must retain x while processing g and then f. 
Since g and f are arbitrary functions over large domains, the model must still capture significant information about them to compute f(g(x)). This requires a substantial state size, regardless of the prompt order.\\n\\nTherefore, the limitations we proved still hold under different prompt orderings. We will revise the proof in our paper to clarify that the argument applies regardless of the order of x, g, and f in the prompt.\\n\\n**Edits to the Manuscript:**\\n\\n- **Section 4 (Function Composition Requires Wide One-Layer Models):** We will revise the proof of Theorem 1 to explicitly address different prompt orderings. We will demonstrate how the communication complexity argument can be adapted to various prompt structures, reinforcing that the fundamental limitation arises from the need to represent large functions, not from the specific prompt order.\\n\\n---\\n\\n#### **2. Questions about Theorem 2:**\\n\\n**Question 2:**\\n\\n*How is the definition of CoT different from just autoregressive decoding?*\\n\\n**Response:**\\n\\nThank you for highlighting the need for clarification. In our definition, the Chain-of-Thought (CoT) refers to a model generating intermediate reasoning steps that explicitly break down the problem-solving process. While autoregressive decoding involves predicting the next token based on previous tokens, CoT is designed to encourage the model to produce a sequence of logical reasoning steps leading to the final answer.\\n\\nThe key differences are:\\n\\n- **Purpose:** CoT aims to mimic human-like reasoning by generating intermediate steps that make the model's thought process explicit.\\n\\n- **Structure:** In CoT, the model is prompted or trained to produce these reasoning steps, which may involve additional tokens that represent sub-calculations or explanations.\\n\\n- **Autoregressive Decoding:** While CoT uses autoregressive decoding as the mechanism to generate the sequence, not all autoregressive decoding involves CoT. 
Standard autoregressive models may generate outputs without explicit intermediate reasoning.\\n\\nIn our paper, we defined CoT within the context of SSMs by formalizing how the model recursively generates additional tokens based on its previous outputs and inputs, focusing on the reasoning aspect.\\n\\n**Edits to the Manuscript:**\\n\\n- **Section 5 (Many Thought Steps are Needed):** We will update the definition of CoT to clearly distinguish it from standard autoregressive decoding. We will emphasize the role of intermediate reasoning steps in CoT and explain how it simulates multi-step reasoning processes, making the distinction more explicit.\\n\\n---\\n\\n#### **3. Questions about Theorem 4:**\\n\\n**Question 3:**\\n\\n*Lines 74 & 386: Isn't L a class of decision problems? Shouldn't one say FL instead of L here?*\\n\\n**Response:**\\n\\nYou are correct. In computational complexity, L refers to the class of decision problems solvable in logarithmic space, while FL denotes the class of function problems computable in logarithmic space. Since our focus is on function computation rather than decision problems, it is more precise to refer to FL.\\n\\n**Edits to the Manuscript:**\\n\\n- **Throughout the Paper:** We will replace L with FL when referring to function computation in logarithmic space. This change will improve the precision of our statements regarding the computational complexity classes involved.\\n\\n---\"}", "{\"title\": \"Response to Reviewer BvEu [01.12.2024]\", \"comment\": \"Dear Reviewer BvEu,\\n\\nThank you for your time and consideration in reviewing our revisions. We appreciate your thoughtful feedback and are glad our responses have addressed your concerns. We are also grateful that you have updated your score.\"}", "{\"title\": \"Response to authors' rebuttal Part II\", \"comment\": \"### 1\\n\\nI appreciate the authors' update to Theorem 4. 
The update resolves Question 2 that I had, and also renders my Questions 3-5 obsolete since the entire theorem statement was changed.\\n\\nMaybe putting the original Theorem 4 in the Appendix would be nice, and you can state in the main text \\\"Under the log-precision assumption from Merrill et al. 2024, we prove Appendix Theorem ... that shows .... Here, we show using the more realistic assumption that precision doesn't depend on input sequence length that ...\\\"\\n\\nThe concern in my Question 6 remains. To quote one sentence from my original question:\\n> My computer has finite memory, so does that mean it can only decide regular languages?\\n\\nThe new Theorem 4 is identical in spirit to a theorem (call it Theorem 4') that says \\\"A computer with finite memory is fundamentally limited to computations that can be performed by an FSM.\\\" And applying the takeaway in lines 426-431 to Theorem 4' will look like this:\\n> These limitations are significant because they highlight the boundaries of what computers can achieve in\\npractical settings. Regarding practical considerations, since real-world implementations of computers\\noperate on hardware with finite memory and finite precision arithmetic, these theoretical limitations\\ndirectly apply to computers used in actual applications. Therefore, when designing systems for tasks\\nthat require processing beyond regular languages, it becomes clear that computers\\nmay not suffice, and alternative architectures or computational mechanisms need to be considered to\\novercome these inherent constraints.\\n\\nwhich is the incorrect takeaway. The correct takeaway from Theorem 4' is that, although real computers operate with finite memory, the aptest theoretical model is one with infinite memory (i.e., a Turing machine). 
Similarly, I believe that the correct takeaway from the new Theorem 4 is that, although real SSMs operate with finite precision, the aptest theoretical model is one with infinite precision.\\n\\n### 2\\n\\nAlthough the added paragraph in the paper does not directly address my question, I accept the formalization of CoT purely based on the fact that it was based on prior work. (However, I maintain that the formalization better captures the idea of autoregressive decoding and not of CoT.)\\n\\n### 3\\n\\nYour definition of SSMs allows the matrices $A_t, B_t, C_t, D_t$ to depend on $t$, but your proof of Theorem 4 assumes fixed $A, B, C, D$. I would recommend explicitly stating this assumption in the theorem statement.\\n\\n### About Theorem 1\\n\\nI appreciate the updated theorem and believe it is stronger than the previous one, although I have not yet checked the details of the proof.\\n\\n### Summary\\n\\nI have revised my Soundness Score to 2 and Overall Score to 5 (Weak Reject).\\n* _Explanation for not higher score:_ As elaborated above, my concern raised in Question 6 remains unresolved.\\n* _Explanation for increase in score:_ Question 2 was resolved with the more realistic model of fixed precision. Also, although I would still recommend rejection based on my unresolved Question 6 alone, my objection is reduced by the fact that the authors are following the finite-precision assumption of previous work.\"}" ] }
DhYsFwLqkL
Well-NeRF: Ensuring Well-Posed Neural Radiance Fields via View Frustum and Shadow Zone Based Regularization
[ "Geunho Kim", "Jinwook Paeng", "Jin Hong", "Yoojin Han", "Junseok Kwon" ]
Neural Radiance Field (NeRF) often produces many artifacts with sparse inputs. These artifacts are primarily caused by learning in regions where position inference is not feasible. We assume that the main cause of this problem is the incorrect setting of boundary conditions in the learning space. To address this issue, we propose a new regularization method based on two key assumptions: (1) the position of density and color cannot be inferred in regions where the view frustum does not intersect, and (2) information inside opaque surfaces cannot be observed or inferred, and thus cannot contribute to the rendering of the image. Our method aims to transform the NeRF model into a well-posed problem by regularizing learning in regions where position inference is not possible, allowing the network to converge meaningfully. Our approach does not require scene-specific optimization and focuses on regions where position inference is not possible, thereby avoiding degradation of model performance in main regions. Experimental results demonstrate the effectiveness of our method in addressing the sparse input problem, showing outstanding performance on the Blender synthetic datasets. Our method is designed to integrate seamlessly with existing techniques, providing an effective solution for sparse input scenarios and offering a foundational approach that serves as the first clue in addressing sparse input problems.
[ "Few-shot NeRF", "Ill-posed problem", "Artifacts removal", "View frustum", "Inside opaque", "Boundary condition", "Near-far threshold", "Integrated model" ]
Reject
https://openreview.net/pdf?id=DhYsFwLqkL
https://openreview.net/forum?id=DhYsFwLqkL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wshAqD9jPe", "voBS4QBwAv", "tNVBIIxsgP", "s4fz7mZe0f", "qLGEk2VVdT", "ppKa3EbkiJ", "oVHb7xScNg", "oEifSRwW4X", "nRHoCCshJJ", "l2zsfCMgVR", "imGzR5QSoy", "heuLVApHE2", "dhMIMaHg35", "c1Kof5Y8Of", "Y32FNvTRzI", "VA3unPKdz9", "V2XLg7uvd6", "OSSKRlhoyv", "LqJbLnquzN", "LjHzWNvlCh", "JkGKtGcsSv", "FJ6ebVbKYs", "BnbxPTqfeJ", "BXzb3viqIC", "BGHjUBzRSd", "AS6fNUszp5", "92FLad3A9o", "8Yfo65CKKF", "85B4dbpJD4", "7G0y2KdRzD", "5oyXYEhSKv", "5jVxrYj6a8", "5edje92j8v", "5UEyhkpvDT", "4f8E85eWwe", "3OVq4KDPeq" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732300084487, 1732299609297, 1732192784409, 1733206193195, 1732883041511, 1733206404194, 1732801932993, 1732803662808, 1737523970458, 1732192932660, 1730169732776, 1730876035781, 1732801339233, 1732800818033, 1730626005458, 1732502761425, 1732883963559, 1733191869371, 1734513855434, 1730677225033, 1733074438129, 1732195019068, 1732289222774, 1733206677655, 1733214266524, 1732882888215, 1732799939669, 1732845554515, 1730679150030, 1732289314042, 1732798910242, 1732278569916, 1732799600957, 1733220876777, 1733225199898, 1732283846297 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_e43w" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_sTgH" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_e43w" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_xeNz" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_sTgH" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_sTgH" ], [ "ICLR.cc/2025/Conference/Submission9236/Area_Chair_YAPV" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_M1eU" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_e43w" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_xeNz" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_sTD1" ], [ "ICLR.cc/2025/Conference/Submission9236/Reviewer_sTD1" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ], [ "ICLR.cc/2025/Conference/Submission9236/Authors" ] ], "structured_content_str": [ "{\"title\": \"Official Review of Submission9236 by Reviewer sTgH\", \"comment\": \"**W6.**\\n\\n**Q.** Is it also possible 
that the color of sample points after weighting could bring noise from unconstrained foreground areas into the interior?\\n\\nThe method is carefully designed to minimize its influence on regions where the base NeRF model can already determine results autonomously. Our approach aims only to assist NeRF in resolving the indeterminacy of internal colors that it cannot infer.\\n\\nThis method does not affect other regions because the degree of color blending is proportional to the sampling point\\u2019s weight. As a result, color blending propagates unidirectionally from high-weight regions (e.g., opaque object surfaces) toward deeper areas along the viewing direction (inside the object). In low-weight regions, color blending has minimal impact on rendering, making its effect difficult to observe.\\n\\nThe regularization coefficient $\\\\lambda$ for Shadow Zone Regularization decreases significantly with iterations (as shown in Equation 10) to prevent overfitting. Additionally, the regularization loss $L_b$ (Equation 7) becomes much smaller than the primary training loss $L_c$ (Equation 8), helping the model converge during early training while having little to no impact in later stages.\\n\\nThis method proved to be an effective approach among various experiments aimed at minimizing the indeterminacy of sampling points along a single ray.\\n\\n**W7.**\\n\\n**Q.** Settings of FreeNeRF are completely different from this study\\n\\nThe frequency encoding used in Free-NeRF and hash encoding might seem significantly different at first glance, but the two methods share substantial similarities. While there are notable differences in the detailed implementations, both ultimately produce a multi-resolution (scale) position-encoded tensor that is concatenated. 
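For illustration, the shared per-level structure can be sketched with a toy scalar version of frequency encoding (the actual encoders operate on 3D positions with more levels; this is not the exact Free-NeRF or hash-grid implementation):

```python
import math

def frequency_encode(x, num_levels=4):
    # NeRF-style frequency encoding of a scalar coordinate: each level
    # doubles the frequency, and all per-level (sin, cos) features are
    # concatenated. A hash grid likewise concatenates per-resolution
    # features, so the final tensor has an analogous per-level layout,
    # which is what makes level-wise masking applicable to both.
    feats = []
    for level in range(num_levels):
        freq = (2.0 ** level) * math.pi
        feats.append(math.sin(freq * x))
        feats.append(math.cos(freq * x))
    return feats

def mask_levels(feats, keep_levels):
    # Level-wise masking in the spirit of Free-NeRF's scheduling:
    # zero out the features of all levels above keep_levels.
    return [v if i // 2 < keep_levels else 0.0 for i, v in enumerate(feats)]
```

Because both encodings expose this per-level layout, a frequency-style masking schedule can be attached to either one.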
As a result, the structure of the final tensor is quite similar in both approaches.\\n\\nThus, the frequency-level masking and scheduling implemented in Free-NeRF can be applied in a similar manner to hash encoding.\\n\\nIn fact, we successfully implemented this and confirmed that it functions similarly. However, based on our independent implementation, we found that frequency masking is highly sensitive to scheduling and poses a significant risk of overfitting to low-frequency regions during the early stages, potentially resulting in reduced resolution.\\n\\nNonetheless, we compared our method with the original implementation rather than relying solely on our independent implementation, as we believe a direct comparison would be more appropriate.\\n\\nIt is somewhat unfortunate that we could not compare with more implementations, but we encountered a particular issue when attempting comparisons with other models.\\n\\nWe attempted to compare our model with **Reg-NeRF**. However, we encountered difficulties due to the complex and ambiguously defined experimental conditions (particularly the near-far threshold). Based on our understanding, Reg-NeRF first applies a near-far condition, then performs coordinate warping into the NDC space, and subsequently applies a secondary near-far thresholding process within the NDC space. These settings are not only hardcoded but also vary numerically depending on the data class. 
Unfortunately, neither the main text nor the supplementary material of the Reg-NeRF paper explicitly specifies these conditions.\\n\\nAs a result, we could not ensure that our comparison with Reg-NeRF was conducted under equal conditions and were therefore unable to include those results in our paper.\\n\\nThis issue is not unique to Reg-NeRF but is common across many models, as they often fail to explicitly detail the hyperparameter settings (e.g., near-far thresholds) used in their codebases.\\n\\n**W8.**\\n\\nWe plan to provide experiments on new data and a wider range of views for the existing experiments within the review period.\\n\\nWe are conducting additional experiments, including addressing the points raised by you and other reviewers. Additional responses and materials will be uploaded during the review period, and we would greatly appreciate your continued interest.\"}", "{\"title\": \"Response to Reviewer sTgH\", \"comment\": \"Thank you very much for investing your time in reviewing our work. Your detailed review is greatly appreciated and will be immensely helpful in improving our paper.\\n\\n**W1.**\\n\\nThe reviewer's point is valid. We are currently considering changing the title.\\n\\nOur experiments were initially aimed at addressing the sparse input view problem. However, we realized that establishing reasonable spatial boundary conditions for training and resolving ill-posed problems carries a greater significance. **We believe this insight can be emphasized not only for sparse input conditions but also in addressing other challenges.**\\n\\nIn fact, we demonstrated through simple yet important experiments that our implementation improves performance even under dense input conditions\\u2014an experiment not attempted by most other models addressing sparse input problems. 
Our model effectively preserves the performance of the original NeRF because, in the primary training regions, the influence of regularization is minimal, allowing it to operate identically to the original model.\\n\\nIn contrast, other models addressing the sparse input problem often risk potential performance degradation due to excessive regularization in the regions of interest. For example, in the case of Free-NeRF, overfitting to low-frequency regions can result in reduced resolution.\\n\\n**W2.**\\n\\nWe acknowledge the lack of references to and discussion of recent related research.\\n\\nOur work aimed to emphasize the potential of what is now considered a somewhat classic NeRF method. While remarkable recent models such as ZeroRF, Reconfusion, and Cat3D (as mentioned by other reviewers) exist, our approach was specifically designed to explore the potential of base models and their extensions built on the following foundations:\\n\\n- **Random pixel ray sampling**\\n- **Encoding:** 3(position)+2(direction) input dimensions \\u2192 n-dimensional output (including spatial scale levels)\\n - Examples:\\n - Frequency encoding\\n - Hash encoding\\n- **Fully connected layers**\\n- **Volume rendering**\\n\\nDue to this focus, the range of comparable models was significantly restricted to those sharing a similar architectural foundation.\\n\\n**W3.**\\n\\nWe will work on improving the consistency of the notation.\\n\\n**W4.**\\n\\nS_adj = min(S, 9) - 1\\n\\nS_norm = S_adj / max(S_adj)\\n\\nWe limited the Frustum Score count to 9 or less to prevent excessive gradient differences. When \\\\( S = 1 \\\\), position inference is impossible (as the point is observed from only one view). To eliminate training gradients in this case, we subtracted 1.\\n\\nThis is explained in **Supplementary Material Equations 6 and 7**, but we noticed some notation inconsistencies in those equations. We will make the necessary corrections.\\n\\n\\n**W5.**\\n\\nQ. 
In equation (3), the Frustum Score is a constant value for each sample point when the camera parameters are fixed. Therefore, the obtained \\\\sigma_masked could be directly used in the integration calculation for RGB.\\n...\\n\\nWe would appreciate it if you could confirm whether we have understood this question correctly. Thank you for thoroughly reviewing the model architecture.\\n\\nOur design principles were based on the following two assumptions, which we considered to be true:\\n\\n1. The position of density and color cannot be inferred in regions where the view frustum does not intersect.\\n2. Information inside opaque surfaces cannot be observed or inferred, and thus cannot contribute to the rendering of the image.\\nWe attempted to implement these principles within the model as thoroughly as possible.\\n\\nMasking and gradient clipping can be applied to tensors with the shape [batch, num_sampling_points, n] that are generated during the computation process. For example:\\n\\n[batch, num_sampling_points, position(3)] (all corresponding tensors are aligned with this position):\\n\\n(a) [batch, num_sampling_points, density(1)]: gradient, mask\\n\\n(b) [batch, num_sampling_points, weight(1)]\\n\\n(c) [batch, num_sampling_points, rgb(3)]: gradient\\n\\nIntegration of (weight * rgb) produces:\\n\\n[batch, rgb(3)]\\n\\nWe believe the reviewer is suggesting that the method could be applied only at the step marked as the integration of (weight * rgb) to achieve the same effect.\\n\\nAdditionally, we understand that the reviewer might be asking whether the method was redundantly applied to both (a) and (b).\\n\\nOur reasoning was as follows:\\n\\nDuring the computation of the weight, we believed that the density of inference-impossible regions could influence the weight of inference-possible regions, causing gradients to propagate into inference-impossible regions.\\nTherefore, we applied the method again in step (a).\\nWhile this implementation may not be a
logically perfect equivalence to the first assumption, we believe that our experimental results demonstrate that the assumptions were convincingly implemented.\"}", "{\"title\": \"Response to Reviewer e43w\", \"comment\": \"First of all, we sincerely thank you for taking the time to review and evaluate our paper. Your comments have provided valuable insights and have been greatly helpful in revisiting and improving our work.\\n\\n## Response to Weaknesses\\n\\nThe reviewer's point is valid.\\n\\nTo provide additional clarification, the proposed method starts from two true propositions:\\n\\n**1. It is impossible to infer position at points where view frustums do not intersect (as triangulation is not feasible).**\\n\\n**2. Information inside solid objects cannot contribute to rendering (as it is unobservable).**\\n\\nOur implementation, inspired by these propositions, cannot be claimed as a logically perfect equivalence to them. However, through careful examination, we have made significant efforts to ensure these propositions are well reflected in our learning model, and experiments have demonstrated their effectiveness.\\n\\nThe motivation for our challenge is as follows: The base model of **nerfstudio** (https://github.com/nerfstudio-project/nerfstudio/), **nerfacto**, is one of the most widely used general-purpose models. While this model has been refined over a long time to achieve generality in real-world applications, users have occasionally faced difficulties training it on simple objects (examples below):\\n\\n- https://github.com/nerfstudio-project/nerfstudio/issues/2443\\n- https://github.com/nerfstudio-project/nerfstudio/issues/806\\n\\nThis issue remains unresolved and becomes especially pronounced when data is scarce. 
It has posed significant challenges in one of our recent practical projects.\\n\\nHowever, we assumed that if there is a spherical object with a well-defined surface pattern, under ideal conditions, the surface of the sphere could be fully reconstructed through stereo vision using only four views positioned to observe the sphere at 90-degree angles relative to a plane passing through the center of the sphere. (All surfaces can be observed from at least two of these views.)\\n\\nNevertheless, many NeRF models fail critically in this scenario. We were convinced that the primary cause of this issue is related to the first assumption. (This problem can be better understood through the following analogy: a computer cannot distinguish between a scenario where a large sphere is at the center and another where four small spheres obstruct the camera directly in front of it.)\\n\\nBased on multiple reported cases and our own experience, we believed that our attempt and implementation could significantly contribute to improving the practical generalization of models. This implementation was achieved by introducing minimal modifications and module insertions into the nerfacto model, ensuring that the original principles of the model remain largely intact. As a result, it integrates seamlessly with all functionalities of **nerfacto** that can be toggled on or off.\\n\\nOur Appendix (at the end of the main paper) includes experiments on DTU and LLFF datasets. These datasets consist of real-world captured data, which we believe demonstrates the applicability of our method in real-world scenarios.\\n\\nWe will work on improving the writing and figures. Additional responses regarding the experiments will follow in the next comments.\"}", "{\"title\": \"Thank you for the positive feedback\", \"comment\": \"Dear Reviewer,\\n\\nI am glad to hear that your concerns have been resolved. Your comments have greatly contributed to improving the paper. 
I sincerely appreciate your positive evaluation.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for taking the time to review our response. We sincerely appreciate your feedback and would be more than happy to address any remaining concerns you may have. Please do not hesitate to let us know if there is anything further we can do to fully address your concerns.\"}", "{\"title\": \"Thank you for the positive feedback\", \"comment\": \"Dear Reviewer,\\n\\nThe discussion with you has also been highly valuable to us. As you suggested, we are maintaining a continuous interest in the learning-based approach, and we plan to make efforts on both sides along with improving the base model. We sincerely appreciate your positive evaluation.\"}", "{\"title\": \"Response to Reviewer sTgH\", \"comment\": \"Dear Reviewer,\\n\\nWe have uploaded additional experiments and materials, with a summary provided in the comments at the top of this page.\\n\\nAmong the uploaded materials, **2. Effectiveness of Dynamic Lambda** may be indirectly related to your concerns.\\n\\nAdditionally, **4. New Dataset Proposal: Randomized Structures and Patterns** is proposed as a new contribution from our work.\\n\\nWe are still preparing a direct response to your recent comments. We are carefully considering them and will provide a reply as soon as possible during the discussion period.\\n\\nThank you for your time and understanding.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for your reply and efforts. Your response and additional experiments address most of my concerns. Therefore, I am raising my rating. Please add these additional experiments and results in the future versions to ensure the soundness and clarity of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer e43w\", \"comment\": \"## Response to Questions\\n\\n**Response to the Q1:**\\n\\n1. 
If the input data includes camera intrinsics and extrinsics (the basic input format for NeRF training), the Frustum Score calculator matrix is precomputed. During training, the Frustum Score Calculator is used to calculate the scores of sampling points. Figure 3(d) represents the Frustum Score for actual sampling points calculated with three input views (yellow: 3 points, red: 2 points).\\n2. So far, our experiments have shown that the method works effectively on object-centric synthetic datasets (Blender), forward-facing real-world datasets (LLFF), and real-world datasets captured along a quarter-sphere trajectory (DTU).\\n\\n**Response to the Q2:**\\n\\nThe experiment is feasible and still works well. Our goal is to ensure functionality across a wide range of near and far plane settings, from narrow to wide configurations. While the ultimate aim is to make the method near-far parameter-free, achieving exact values of 0 and infinity is numerically impossible. Therefore, we used a setting of 0.5 to 1000.\\n\\n**Response to the Q3:**\\n\\nA controlled study on lambda is feasible. Our methodology is designed to minimize excessive regularization and maximize the potential of the model itself. It influences the model only during the very early stages, as lambda decreases in proportion to the RGB loss (and is further reduced by the overall optimization scheduler). Consequently, after the initial iterations, the model operates no differently from the original nerfacto.\\n\\n**Response to the Q4:**\\n\\nA controlled study on the number of sampling points is feasible. However, the loss does not increase proportionally to the number of sampling points. This is because we used PyTorch's default MSELoss with mean reduction (dividing by the number of elements). 
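To illustrate the point about mean reduction, a minimal plain-Python sketch (hypothetical values, mirroring PyTorch's default reduction="mean" rather than our actual training code):

```python
def mse_mean(pred, target):
    # Mirrors PyTorch's MSELoss default reduction="mean": the sum of
    # squared errors divided by the number of elements, so the loss
    # scale does not grow with the element count.
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# Same per-element error with 64 vs. 128 sampling points:
few = mse_mean([0.1] * 64, [0.0] * 64)
many = mse_mean([0.1] * 128, [0.0] * 128)
# Both evaluate to 0.01 (up to float rounding); doubling the sampling
# count leaves the loss unchanged.
```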
We will add a clarification about the exact formulation of MSELoss.\\n\\nOn the other hand, adjusting the number of sampling points is a significant factor that affects both the original nerfacto model and the baseline NeRF results. Therefore, such a controlled study might introduce some confusion.\\n\\n**Response to the Q5:**\\n\\nWe are planning experiments on new datasets.\\n\\nHowever, we would like to point out that there is currently a lack of suitable datasets for few-shot experiments. When splitting data into training and evaluation sets, there is a high likelihood that unobserved regions in the training set are included in the evaluation set. In such scenarios, how should the model approximate completely unobserved regions? As black, white, gray, random colors, or the mean color? This is nonsensical and sometimes leads researchers to enforce overfitting to achieve high evaluation PSNR.\\n\\nTherefore, we are designing a random structure, random texture dataset and plan to create hundreds to thousands of samples. This approach aims to eliminate potential biases in evaluations.\\n\\n**Response to the Q6:**\\n\\nAnalyzing our method in comparison with Gaussian Splatting-based approaches is indeed an interesting task. However, there are important considerations to keep in mind. Most Gaussian Splatting-related works begin training with a point cloud obtained through SfM as a prior input.\\n\\nIn contrast, many NeRF models do not use such inputs and start training solely with images and camera positions.\\n\\nComparing Gaussian Splatting, which starts with 3D prior information about image surfaces, with NeRF models requires careful attention. It would be more appropriate to compare our method with Gaussian models that either use random Gaussian priors or no priors at all.\\n\\nAlthough NeRF and Gaussian Splatting share similarities, they differ significantly in their rendering methods (volume rendering vs. alpha-sorting and rasterization). 
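For reference, the volume rendering side of this contrast can be sketched with the standard NeRF compositing weights (a generic textbook sketch, not tied to any particular codebase):

```python
import math

def volume_render_weights(sigmas, deltas):
    # Standard NeRF compositing: per-sample alpha from density and
    # interval length, with transmittance accumulated front-to-back
    # along the ray. Weights concentrate where density first becomes
    # high, i.e. at the first opaque surface along the ray.
    weights, transmittance = [], 1.0
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha
    return weights

# Empty space, then two dense samples "inside" an object:
w = volume_render_weights([0.0, 5.0, 5.0], [0.1, 0.1, 0.1])
# w[0] is 0, w[1] (the surface) dominates, and w[2] (behind it) is smaller.
```

Gaussian Splatting instead alpha-sorts explicit primitives and rasterizes them, which is why a like-for-like comparison requires care about the priors each pipeline starts from.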
Therefore, implementing such a comparison may take some time.\\n\\n**We plan to conduct the suggested experiments within the review period and provide additional responses. Thank you for your continued interest.**\"}", "{\"summary\": \"This paper explores two key factors that lead to artifacts when training NeRF from sparse viewpoints. The first is that the regions cannot be inferred where the view frustum does not cover, and the second is that the opaque areas inside objects cannot be well constrained by the RGB loss, resulting in a NeRF that does not fully represent scene information. To address these issues, the paper proposes two regularization terms to constrain the training, focusing the network on areas with higher reliability for better results. Experiments demonstrate that the proposed method effectively resolves the two issues mentioned above.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper proposes two different strategies to address the underfitting issues caused by sparse viewpoints, which are actually quite common. These strategies involve designing regularization terms to constrain the training of NeRF, and have been proven to be very effective in experiments. On the other hand, the proposed approach is very intuitive and relatively concise, and it can achieve better results without altering other network settings. Additionally, from what I've observed, the current design is essentially a plug-and-play module that can be used in any existing NeRF. If possible, the author could also emphasize this point and validate it with some experiments.\", \"weaknesses\": \"For writing, firstly, regarding the paper's title, since the core problem addressed is how to regularize NeRF training under a \\\"sparse perspective\\\" to achieve better results, it's recommended that the title reflects the concept of \\\"sparsity\\\". 
Next, concerning the literature review section of the paper, I believe there should be relevant studies from the latest year (2024), and the author needs to thoroughly research the most recent advancements in this field. Additionally, there are issues with some of the formulas where the symbols are not clearly described, and there is inconsistency in their use. For example, the symbol (\\\\sigma) in equation (3) should be consistent with equation (1). Also, it's unclear how (S_norm) in equation (6) is calculated: is it (S / num_views)?\\n\\nFor the technical part, reasonable explanations have not been provided for some design choices.\\n\\nIn equation (3), the Frustum Score is a constant value for each sample point when the camera parameters are fixed. Therefore, the obtained \\\\sigma_masked could be directly used in the integration calculation for RGB. Why then is there a need to further constrain its sparsity? Similarly, in equation (6), clipping is performed in the calculation of gradients. If, as previously mentioned, \\\\sigma_masked is directly used in the integration, constraining both RGB and the gradients, wouldn't a similar effect be achieved with reduced computational effort? I hope the author can explain the design principles. \\n\\nAdditionally, during the RGB blending process, introducing RGB values near the surface into internal sample points through blending might cause color bleeding in other views. Is it also possible that the color of sample points after weighting could bring noise from unconstrained foreground areas into the interior?\\n\\nFurthermore, in terms of experimental design, the paper only compares with FreeNeRF, but the settings of FreeNeRF are completely different from this study (network structure, use of hash acceleration, etc.), making the comparison potentially unfair.
Lastly, I hope the author could add a few examples of novel view synthesis, because with the constrained NeRF, the NVS results should theoretically improve. Otherwise, it might be possible that the training data was overfitted through the regularization strategies.\", \"questions\": \"As discussed above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Well-NeRF, a method addressing sparse-input problems in NeRF models. The proposed approach includes the Frustum Score and Shadow Zone to constrain learning to well-posed regions in order to reduce artifacts. Experimental results on synthetic and real-world datasets demonstrate the method's improvement over traditional models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The two key assumptions are insightful and crucial.\\n\\n2. The authors organized the paper well.\\n\\n3. The idea of exploring the view frustum seems interesting. It would be a great point for solving problems of sparse inputs in NeRFs.\", \"weaknesses\": \"1. **Incremental Contribution**. The authors did not give enough theoretical proofs and arguments for the effectiveness of their method. As the experiments largely focus on synthetic data, the contribution of this work seems incremental. Please see Question 2 for more information on this weakness.\\n\\n2. **The proposed method**. Although the authors give a good and novel assumption, the proposed method seems simple and incremental without insightful design. More validation can contribute to the soundness of your method.\\n\\n3. **Insufficient Experiments**. The authors did not provide sufficient experiments comparing with prior works to show their performance. The experiments are mainly conducted on the **NeRF Synthetic Dataset**.
However, the heavy reliance on synthetic datasets weakens the demonstrated effectiveness of the proposed methods for real-world applications. I would encourage the authors to conduct more experiments to improve the soundness of the paper. \\n\\n4. **Writing**. Writing could be improved for clarity and soundness. Some typos and mistakes could be corrected, e.g., the incorrect quotation marks at L45, 47, and 50; the sentence at L406 is not clear.\\n\\n5. **Figures**. More detailed figures can improve the clarity of the paper and make your paper more understandable. Figures in the submission seem too simple for readers to fully understand your methods and arguments.\", \"questions\": \"1. Can you give the **Frustum Score** of the input views in your training settings? Also, can you give a statement with the **Frustum Score** to explain what kind of inputs improves the most with your method?\\n2. Can you give more experiments on the near/far plane setting? Does the proposed method still work well, or does it still improve over baselines, with near/far settings other than [0.05, 1000] (such as [2, 6] in Figures 5 and 6)?\\n3. Can you provide ablation studies on the lambda of Equation 9? You can set lambda as a pre-set hyperparameter which does not change during training. It seems interesting to detach part of the loss as another parameter. How much does this design contribute to the convergence speed?\\n4. Can you provide ablation studies on the number of sampling points? In Equation 7, it seems that the parameters depend on the number of sampling points. Does the number of sampling points influence the training results?\\n5. Can you give more results on large-scale datasets? Datasets with sparse inputs can further demonstrate the efficacy of your method.\\n6. Can you provide experimental results comparing with similar works based on Gaussian splatting?
If the results still improve greatly compared with them, it would further demonstrate the strength of the work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer xeNz\", \"comment\": \"Dear Reviewer,\\n\\nWe have uploaded additional experiments and materials, with a summary provided in the comments at the top of this page.\\n\\nWe carefully considered your comments and designed and conducted new experiments. In particular:\\n\\n**2. Effectiveness of Dynamic Lambda**\\n\\n**3. Visualization of Frustum Score**\\n\\n**5. Video Demonstration of NeRF Rendering and Loss Curve**\\n\\nare directly related to your feedback.\\n\\nRegarding your question:\\n\\nQ. How does this effect apply only to the interior of the object as claimed, without affecting other regions?\\n\\nWe interpret this as a concern about potential over-regularization. We hope that Supplementary Material 2 addresses this concern to some extent. Additionally, the other experiments may also indirectly help alleviate your concerns.\\n\\nThank you for your time and consideration.\"}", "{\"title\": \"Response to Reviewer M1eU\", \"comment\": \"Dear Reviewer,\\n\\nAdditional experiments and materials have been uploaded, with a summary provided in the comments at the top of this page.\\n\\nYou identified the lack of experiments as a key weakness of our paper. In response, the additional materials include extended experiments based on the original ones as well as a proposal for an entirely new dataset accompanied by additional experiments. We believe these additions may increase the academic contribution of our work, and we kindly request your review.\\n\\nIn particular, we invite you to review **4. New Dataset Proposal: Randomized Structures and Patterns.**\\n\\nThank you for your time and consideration.\"}", "{\"summary\": \"The work addresses NeRF reconstruction using sparse inputs.
The authors assume that the primary cause of reconstruction artifacts is the incorrect setting of boundary conditions in the learning space. To tackle this issue, they propose a new regularization method based on two assumptions: (1) the position of density and color cannot be inferred in regions where the view frustum does not intersect, and (2) information inside opaque surfaces cannot be observed or inferred, and therefore cannot contribute to the rendering of the image. The proposed method can be seamlessly integrated with existing techniques.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The proposed regularization methods, along with the propositions to develop them, are logical and well-founded.\\n2. The proposed method automatically adjusts the training space without requiring additional parameter tuning, making it easy to combine with other approaches.\", \"weaknesses\": \"1. Regarding Shadow Zone Regularization, the authors claim that Equation 7 blends the opaque surface's color into the object's interior. However, there is no explicit determination of the opaque surface. How does this effect apply only to the interior of the object as claimed, without affecting other regions?\\n2. No video results are presented to demonstrate the reconstruction accuracy and view-consistency of the rendering. Additionally, the comparison baseline only involves nerfacto and FreeNeRF, which is insufficient.\\n3. Manually adjusting the bounding box is straightforward with popular NeRF frameworks and may achieve the same or even better results than the proposed Frustum Score Regularization.\\n4. Quotation marks are not used correctly. (Minor issue, not considered in my rating).\", \"questions\": \"1. Could the authors provide additional frustum score visualizations, especially those associated with the presented qualitative results? This would aid in understanding the proposed regularization.\\n2. 
Could the authors include loss curves for the proposed regularizers?\\n3. Could the authors also provide visualizations of RGB values of samples along the ray to illustrate the behavior of Shadow Zone Regularization?\\n4. Could the proposed approach be combined with 3D Gaussian Splatting? This is significant, as 3DGS is becoming the mainstream approach for novel view synthesis, surpassing NeRF.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for your efforts and detailed explanations. The responses indeed resolve some of my concerns.\\n\\nWhat I mean in W5 is that, the Frustum Score remains constant when the camera parameters are set. Consequently, the obtained \\\\sigma_masked is also constant. So, why is there a need to further constrain the sparsity of \\\\sigma_masked, will it change during training?\\n\\nRegarding W6, I remain unconvinced by the statement because the degree of color blending correlates directly with the weight of the sampling points. Could we potentially limit the color blending by setting a threshold to prevent color bleeding?\\n\\nBesides, as noted, comparing the proposed methods with others, such as Reg-NeRF, is challenging due to the complexity and ambiguous definition of the experimental conditions. I'm curious whether it's feasible to treat the two proposed regularizations as plug-and-play modules to see if they enhance performance beyond the original implementation.\\n\\nI am looking forward to the additional experiments. Thanks.\"}", "{\"title\": \"We Look Forward to Your Valuable Feedback on Our Response\", \"comment\": \"We look forward to receiving your valuable feedback on our response and would be more than happy to address any concerns you may have. 
Please do not hesitate to let us know if there is anything further we can do to address your concerns comprehensively.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for your efforts again. The additional experiments and explanation address most of my concerns. I will raise my rating to marginally above. Please include the necessary discussions in the revision, thanks.\"}", "{\"metareview\": \"In this paper, the authors have proposed Well-NeRF for reconstruction with sparse inputs. The motivation of this paper mainly comes from: 1) points that appear in only one view cannot be inferred, and 2) points inside the object are not observed and thus should not contribute to the rendering process. By designing new regularization methods based on the frustum score and shadow zone, Well-NeRF better restricts the boundary condition. Sparse-view reconstruction is an important problem, and the proposed method has the potential to be combined with more NeRF methods as a plug-and-play module. However, there are still some significant limitations of this paper. Regarding the experiments, I suggest applying the proposed method to more NeRFs and comparing with more SOTA methods to better demonstrate the advantages of Well-NeRF. Reviewer xeNz raises concerns on the shadow zone regularization of the method. The writing of the paper should also be further improved, as mentioned by many reviewers. Based on the concerns above, I recommend a decision of rejection of this paper.\", \"additional_comments_on_reviewer_discussion\": \"Initially, the reviewers raised concerns about the writing, the significance of the setting, the contributions, several technical details, and the experiments.
In the rebuttal, the authors have addressed many of them, but some issues remain, as I mention in the metareview, which are important for making a final decision.\"}", "{\"summary\": \"This paper proposes Well-NeRF, which leverages view frustum and shadow zone-based regularization to make NeRF a well-posed problem under the sparse-view setting. The authors show outstanding performance across various test datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"+: The paper introduces the idea of addressing sparse-input issues through frustum score and shadow zone regularization, which is highly simple and reasonable.\\n+: The experimental results are impressive across different scenes.\\n+: The analysis is fundamental to the 3D reconstruction community and may inspire other 3D research tasks.\", \"weaknesses\": \"-: The datasets are too small to demonstrate the upper bound of the proposed method, and the results are sensitive to the experimental setting, making them less convincing.\\n-: Lacks comparisons to other methods, like RegNeRF [1], ZeroRF [2], etc.\\n-: Lacks experiments on various numbers of views, leaving the sensitivity and scalability unclear.\\n-: (not totally a weakness, but a suggestion) The whole analysis seems independent of how we represent the scene. So, why not enhance the paper with experiments on 3DGS? \\n\\n[1] RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs\\n[2] ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer sTgH\", \"comment\": \"### Q1-1.
The Frustum Score remains constant when the camera parameters are set.\\nWhen the camera parameters are given, what is determined is the **Frustum Score Calculator**. \\nHowever, the Frustum Score itself is recalculated for different sampling points in each iteration. Since the sampling points are determined by the proposal network during each iteration, the Frustum Score changes at every step. \\n\\n### Q1-2. Consequently, the obtained $\\\\sigma_{\\\\text{masked}}$ is also constant. \\nBecause the sampling points vary with each iteration, $\\\\sigma_{\\\\text{masked}}$ also changes in every iteration. \\n\\n### Q1-3. So, why is there a need to further constrain the sparsity of $\\\\sigma_{\\\\text{masked}}$? Will it change during training? \\nWhile the **Frustum Score Calculator** is pre-defined and constant, the **Frustum Score** and $\\\\sigma_{\\\\text{masked}}$ are recalculated differently for each iteration. \\n\\n### Summary of How Frustum Score is Used in the Model: \\n\\n1. **Camera Parameters** \\u2192 Pre-defined **Frustum Score Calculator (FSC)**. \\n2. **Ray Generator** \\u2192 Proposal Network Sampler \\u2192 Sampling Points Along the Ray (varies per iteration). \\n3. Frustum Score (FS) of sampling points is calculated using the FSC. \\n - At this stage, FS does not yet impose any constraints on training. \\n4. FS is then utilized as: \\n - **Frustum Score Mask (FSM)**. \\n - **Frustum Score Gradient Scaling (FSGS)**. \\n\\nThese two components (FSM and FSGS) impose constraints on training. \\n\\n### Q2. The degree of color blending correlates directly with the weight of the sampling points. Could we potentially limit the color blending by setting a threshold to prevent color bleeding?\\n\\nFor sampling points along the ray, the color behaves as follows: \\n1. When the **weight** is small, the amount of blending is minimal. \\n2. When the **weight** is small, the contribution to rendering is minimal. \\n3. 
Even if some color bleeds slightly toward other input views, the model can recover as training progresses. This is because the regularization coefficient $\\\\lambda$ becomes very small over time. (Refer to Question 3 from Reviewer e43w and Section 2 of the Rebuttal for more details.) \\n\\nSetting a threshold is a great suggestion. However, in this model design, we have aimed to minimize the introduction of hyperparameters that require specific values. As such, we will consider whether additional designs can be introduced that minimize the risk of color bleeding outside the object without relying on fixed thresholds.\\n\\n### Q3. \\n\\nOur module is designed as a plug-and-play module based on Nerfacto, allowing for easy integration with other methods. As a result, we have observed the outcomes of various methods operating under sparse conditions (refer to Figure 9 in the main text). \\n\\nIt seems that the reviewer is asking whether other sparse models could see performance improvements when combined with our method in a plug-and-play manner. We will review whether this can be demonstrated quickly using our current implementation. \\n\\n**If there are any misunderstandings or differences in interpretation, please feel free to point them out. We are planning additional responses and would greatly appreciate your continued interest in our work.**\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for your response and efforts. After reading your response, I better understand your work and your contribution. I am looking forward to your additional responses and the results of the experiments.\"}", "{\"title\": \"Response to Reviewer xeNz\", \"comment\": \"# **Response to Weaknesses**\\n\\n**W1. Regarding Shadow Zone Regularization, the authors claim that Equation 7 blends the opaque surface's color into the object's interior. However, there is no explicit determination of the opaque surface.
How does this effect apply only to the interior of the object as claimed, without affecting other regions?**\\n\\nWe encourage you to consider this method from the following perspective: rather than focusing on color blending, it seeks to minimally intervene in areas where the NeRF network cannot make independent decisions, such as unobserved regions or the interior of opaque surfaces. The method is carefully designed to minimize its influence on regions where the base NeRF model can already determine results autonomously.\\n\\nDecisions regarding opaque surfaces rely entirely on the capabilities of the original NeRF model. Our approach aims only to assist NeRF in resolving the indeterminacy of internal colors that it cannot infer.\\n\\nThis method does not affect other regions because the degree of color blending is proportional to the sampling point's weight. As a result, color blending propagates unidirectionally from high-weight regions (e.g., opaque object surfaces) toward deeper areas along the viewing direction (inside the object). In low-weight regions, color blending has minimal impact on rendering, making its effect difficult to observe.\\n\\nThe regularization coefficient $\\\\lambda$ for Shadow Zone Regularization decreases significantly with iterations (as shown in Equation 10) to prevent overfitting. Additionally, the regularization loss $L_b$ (Equation 7) becomes much smaller than the primary training loss $L_c$ (Equation 8), helping the model converge during early training while having little to no impact in later stages.\\n\\nThis method proved to be an effective approach among various experiments aimed at minimizing the indeterminacy of sampling points along a single ray.\\n\\n**W2. No video results are presented to demonstrate the reconstruction accuracy and view-consistency of the rendering.
Additionally, the comparison baseline only involves nerfacto and FreeNeRF, which is insufficient.**\\n\\nWe will provide the video within the review period.\\n\\nWe acknowledge the lack of comparisons with other methods and would like to provide additional context regarding our experimental settings.\\n\\nWe attempted to compare our model with **Reg-NeRF**. However, we encountered difficulties due to the complex and ambiguously defined experimental conditions (particularly the near-far threshold). Based on our understanding, Reg-NeRF first applies a near-far condition, then performs coordinate warping into the NDC space, and subsequently applies a secondary near-far thresholding process within the NDC space. These settings are not only hardcoded but also vary numerically depending on the data class. Unfortunately, neither the main text nor the supplementary material of the Reg-NeRF paper explicitly specifies these conditions.\\n\\nAs a result, we could not ensure that our comparison with Reg-NeRF was conducted under equal conditions and were therefore unable to include those results in our paper.\\n\\nThis issue is not unique to Reg-NeRF but is common across many models, as they often fail to explicitly detail the hyperparameter settings (e.g., near-far thresholds) used in their codebases.\\n\\nWhile we recognize the limited comparisons as a weakness of our study, we conducted an in-depth review to address this and highlight that our model overcomes these issues by being a **near-far threshold-free model**. This characteristic sets our approach apart and addresses a major limitation observed in other models.\\n\\n**W3. Manually adjusting the bounding box is straightforward with popular NeRF frameworks and may achieve the same or even better results than the proposed Frustum Score Regularization.**\\n\\nDoes the bounding box refer to limiting the training region using an axis-aligned bounding box (AABB) or an oriented bounding box (OBB)? 
Or does it refer to a precisely defined near-far threshold?\\n\\nOur work aims to establish clear decisions on spatial boundary conditions for training through theoretical calculations.\\n\\nManually adjusting bounding boxes to achieve high performance would require iterative experiments. Our method minimizes this effort.\\n\\nThe near-far range of 2-6 is a long-standing and reasonable setting for the Lego model. However, as shown in the middle row of **Figure 5**, this setting fails significantly under sparse input conditions (split ratio 0.03). The fragments around the object are caused by inference-impossible regions that are not adequately constrained by the near-far settings.\\n\\nInference-impossible regions may also occur with AABB or OBB configurations. This is because the refined inference region calculated by our **Frustum Score Calculator** takes the form of a polyhedron, which cannot be fully represented by a rectangular bounding box.\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for your response, revision, and additional results. After checking them and the other reviewers' opinions, I am still not fully convinced of this work's contribution and would like to keep my original rating. \\n\\nAs for the proposed shadow zone regularization, I still find it hard for it to operate only in the desired region. Moreover, I regard the effort of adjusting the bounding box as affordable, given the current speed of NeRF training. So I don't think this work brings a significant improvement to the community.\\n\\nThe new results demonstrate improvement over nerfacto, but there is still a lack of comparison with other SOTA methods.
Even the nerfacto results can be improved, according to my experience.\"}", "{\"title\": \"We Look Forward to Your Valuable Feedback on Our Response\", \"comment\": \"We look forward to receiving your valuable feedback on our response and would be more than happy to address any concerns you may have. Please do not hesitate to let us know if there is anything further we can do to address your concerns comprehensively.\"}", "{\"title\": \"Response to Reviewer sTD1\", \"comment\": \"Dear Reviewer,\\n\\nAdditional experiments and materials have been uploaded, and a summary is provided in the comments at the top of this page. While no materials were directly requested by you, we believe the additional data may help in revisiting the paper.\\n\\nIn particular:\\n\\n**2. Effectiveness of Dynamic Lambda**\\n\\n**4. New Dataset Proposal: Randomized Structures and Patterns**\\n\\nare contributions that we believe further enhance our work.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"Thanks for the authors' rebuttal, which provided some nice discussions between classical approaches and more recent learning-based approaches. I'm raising my rating to marginally above, and I hope the authors will include such discussions in the future version of their paper.\"}", "{\"summary\": \"This paper proposes a simple method to improve nerf results under sparse input views (as few as 6). This method captures two intuitions: (1) points that appear in just one input view are not learnable because the mass to learn can lie anywhere along the rays (paired with a specific scale) to show up correctly in the camera, and (2) points inside the object are not learnable since they don't contribute to the final observed colors.
The authors argue that explicitly making the network not learn those points improves the sparse-view performance.\\n\\nFor (1), the authors compute a mask indicating how many input views' frustums contain each point, and use it to mask the loss and scale the gradients. For (2), the authors encourage nearby points to have similar colors, by gradually blending each point's RGB with the previous point along the ray and also computing a loss between the current RGB and the blended RGB.\\n\\nThe method is compared against nerfacto as a plug-and-play module, and the results show that nerf reconstructions generally improve with this trick. It is also compared with FreeNeRF, where near and far planes must be selected carefully when input views are sparse; this paper eliminates that limitation, and similar results can be achieved.\\n\\nThe authors also test their method on real-world datasets including the well-known "nerf datasets" and a real in-the-wild dataset captured by the authors themselves (in the supplemental PDF).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is strong in how it turns simple observations/intuitions into concrete implementations that improve nerf results in general. The intuitions make sense, and the end results indeed look improved.\\n\\nThe paper presents the ideas clearly with helpful visuals such as Figures 1 and 3.\\n\\nThe experiments, from baseline comparisons to ablation studies, are extensive and cover questions people may have very well.\", \"weaknesses\": \"I like the simplicity and modularity of this approach, but the real-world, in-the-wild results shown in the supplemental material PDF are of concerning quality. Admittedly, high-quality view synthesis from just 6 input views of an in-the-wild scene is hard, but the method is shown to work well for the famous real-world "nerf datasets".
Clearly there is a gap here, one that needs closing before this approach is useful for any real use case.\\n\\nA bigger question follows: under sparse input views, do such per-scene learning approaches still make sense? I think when input views are sparse like this, and the quality presented is bad like this, one may be better off with learning-based approaches that learn from many scenes and generalize reasonably to the test scene at hand.\", \"questions\": \"Related to my point above about learning-based approaches that learn from multiple scenes, have the authors compared this approach against those approaches? There's PixelNeRF, and many nice works that followed. My intuition is that the fewer input views you have, the better-suited a learning-based approach becomes, with priors learned from many scenes.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer xeNz\", \"comment\": \"# **Response to Questions**\\n\\n**Q1. Could the authors provide additional frustum score visualizations, especially those associated with the presented qualitative results? This would aid in understanding the proposed regularization.**\\n\\nWe plan to provide intuitive visualizations along with a video during the review period.\\n\\n**Q2. Could the authors include loss curves for the proposed regularizers?**\\n\\nWe will provide them within the review period.\\n\\n**Q3. Could the authors also provide visualizations of RGB values of samples along the ray to illustrate the behavior of Shadow Zone Regularization?**\\n\\nWe will provide them within the review period.\\n\\n**Q4. Could the proposed approach be combined with 3D Gaussian Splatting? This is significant, as 3DGS is becoming the mainstream approach for novel view synthesis, surpassing NeRF.**\\n\\nAnalyzing our method in comparison with Gaussian Splatting-based approaches is indeed an interesting task.
However, there are important considerations to keep in mind. Most Gaussian Splatting-related works begin training with a point cloud obtained through SfM as a prior input.\\n\\nIn contrast, many NeRF models do not use such inputs and start training solely with images and camera positions.\\n\\nComparing Gaussian Splatting, which starts with 3D prior information about image surfaces, with NeRF models requires careful attention. It would be more appropriate to compare our method with Gaussian models that either use random Gaussian priors or no priors at all.\\n\\nAlthough NeRF and Gaussian Splatting share similarities, they differ significantly in their rendering methods (volume rendering vs. alpha-sorting and rasterization). Therefore, implementing such a comparison may take some time.\\n\\nWe are primarily focused on maximizing the potential of NeRF and its implicit network capabilities. We also plan to design future projects to integrate our method with Gaussian Splatting.\\n\\nWe are conducting additional experiments, including addressing the points raised by you and other reviewers. Additional responses and materials will be uploaded during the review period, and we would greatly appreciate your continued interest.\"}", "{\"title\": \"Additional Materials for Rebuttal Uploaded\", \"comment\": \"Dear Reviewers,\\n\\nOnce again, we would like to sincerely thank you for investing your time in reviewing our paper. Your in-depth feedback has been incredibly valuable in providing insights and improving our work.\\n\\nAs promised, we have compiled and uploaded the requested additional experiments and supplementary materials. The files added during the rebuttal process can be found in the Supplementary Material.zip file, which includes **ICLR_2025_Well_NeRF_Supplementary Materials for Rebuttal.pdf** and the **Videos folder.**\", \"the_contents_of_supplementary_materials_for_rebuttal_can_be_summarized_as_follows\": \"**1. 
Near-Far Parameter Robustness**\\n\\nThis experiment demonstrates the robustness of our model to near-far parameters. While the near-far parameter may seem simple, it is, in fact, a highly sensitive parameter that significantly impacts the management of the training region in existing models.\\n\\n**2. Effectiveness of Dynamic Lambda**\\n\\nThe **Dynamic Lambda** we designed through experiments duplicates the RGB loss value and uses it to regulate the normalization lambda. This approach prevents the risk of over-regularization and enhances training stability. Additionally, during the mid-to-late stages of training, it minimizes the impact of regularization, allowing the base model to dominate. This ensures that the potential of the base model is preserved and highlighted. We would like to emphasize this as another key contribution of our work.\\n\\n**3. Visualization of Frustum Score**\\n\\nTo observe how the view frustum score influences the training region, we visualized it alongside NeRF rendering during the training process.\\n\\n**4. New Dataset Proposal: Randomized Structures and Patterns**\\n\\nWe previously mentioned to some reviewers that \\\"there is currently a lack of reasonable datasets for NeRF experiments.\\\" In response, we propose a newly developed dataset generator. This generator creates datasets that are infinitely randomized, preventing experimenters from designing biased methods due to insufficient datasets. We hope this contributes to a more accurate evaluation of NeRF model designs. The dataset generator will be made publicly available for everyone to use.\\n\\n**5. 
Video Demonstration of NeRF Rendering and Loss Curve**\\n\\nWe included video materials alongside the loss curve to provide a more detailed examination of our model's performance.\\n\\nIf the feedback is positive, the rebuttal materials will be incorporated into the paper and the Supplementary Materials.\\n\\nThank you.\"}", "{\"title\": \"Response to Reviewer sTD1\", \"comment\": \"We sincerely thank the reviewer for taking the time to review our work. All comments will be invaluable in improving the paper.\\n\\n## Response to Weaknesses and Questions\\n\\nThe reviewer's concerns are valid.\\n\\nThe data from the Custom Indoor Dataset included in the Supplementary Material was provided to support our research motivation. Our initial challenge targeted the most complex (indoor) datasets. However, for step-by-step resolution, this paper simplifies the problem to identify one of the key reasons why many NeRF models struggle under sparse input conditions.\\n\\n### Q: Under sparse input views, do such per-scene learning approaches still make sense?\\n\\n**A:** Yes, they are meaningful.\\n\\nFor instance, if a spherical object with a well-defined surface pattern is observed under ideal conditions, its surface can be fully reconstructed through stereo vision using only four views. These views are positioned at 90-degree angles relative to a plane passing through the sphere\\u2019s center. (Each surface point is observed from at least two views.)\\n\\nThen why do NeRF models fail critically under sparse input view conditions? This issue can be better understood through the following analogy: NeRF models cannot distinguish between a scenario where a large sphere is centered in the scene and another where four small spheres obscure the camera directly in front of it. 
(The fact that classical models perform well under sparse conditions while deep learning models fail suggests that existing NeRF models may not yet be fully optimized.)\\n\\nWe believe this failure stems from improperly defined training conditions. To address this, we implemented our model based on the following assumption:\\n\\n**Assumption 1:** *It is impossible to infer position at points where view frustums do not intersect (as triangulation is not feasible).*\\n\\n**Implementation:** The model was trained with boundary conditions set to restrict learning to regions where position inference is possible.\\n\\nOur proposed method is designed to influence the model only during the early iterations of training (as shown in Equation 10). After this initial phase, it functions almost identically to the base NeRF model (nerfacto).\\n\\n**This suggests that the potential of existing NeRF models remains significant.**\\n\\n### Insights on NeRF and Generative Models\\n\\nWe believe that the potential of existing NeRF models remains significant. NeRF is a technique for reconstructing the 3D structure and optical properties of a scene, replacing certain aspects of traditional surveying and graphics techniques with deep neural networks while maintaining foundational principles (e.g., volume rendering and optical physics). This highlights that NeRF serves as an extension rather than a complete replacement of conventional methods.\\n\\nGenerative models excel at supplementing data or producing visually realistic results, which can enhance NeRF\\u2019s performance. However, generative models prioritize visual plausibility over metric accuracy, differing from NeRF in both approach and objectives.\\n\\nTherefore, as these technologies evolve and integrate, they can achieve optimal results. 
However, the combination of generative models does not render the development of NeRF-based models unnecessary, nor can it completely replace them.\\n\\nWe refer to Reconfusion, one of the most recent generative models, for reference. In the initial video provided (https://reconfusion.github.io/videos/teaser/wipe_4.mp4), a phenomenon can be observed in Zip-NeRF where the object is not centered but instead rotates as if it were part of the background. Such object collapse is a common issue found in many models under sparse input conditions. This phenomenon is identical to the issue caused by incorrectly defined boundary conditions, as described in our paper.\\n\\nThe collapse of the central object into the background can also be observed in Figure 8 of our main text.\\n\\nWhile Reconfusion has achieved remarkable success, we deliberately excluded pre-trained or generative NeRF models from the scope of this paper to avoid diluting our focus, especially considering the significant resources required for pre-training and the need for continuous refinement of base models. Furthermore, we believe that combining NeRF-based methods with generative models to address sparse input conditions is not yet essential, given the potential inherent in existing NeRF technologies.\\n\\nWe are conducting additional experiments, including those addressing points raised by other reviewers. Additional responses and materials will be uploaded during the review period, and we appreciate your continued interest.\"}", "{\"title\": \"Response to Reviewer e43w\", \"comment\": \"Dear Reviewer,\\n\\nWe have uploaded additional experiments and materials. A summary of these updates can be found in the comments at the top of this page. Among the uploaded materials:\\n\\n1. Near-Far Parameter Robustness\\n2. Effectiveness of Dynamic Lambda\\n3. Visualization of Frustum Score\\n4. 
New Dataset Proposal: Randomized Structures and Patterns\\n\\nare particularly relevant to your comments.\\n\\nIn particular, based on your feedback, we would like to emphasize Dynamic Lambda as another key contribution of our work.\\n\\nThank you for your time, and we kindly request your review of the updates.\"}", "{\"title\": \"Thank you for the feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you very much for your response. We deeply respect your evaluation.\", \"to_elaborate_further_on_some_points\": \"**W1.** As for the proposed shadow zone regularization, I still find it hard to operate only in the desired region.\\n\\nThis method was designed based on an assumption of intuition regarding rigid surfaces and rendering, which may raise concerns that it might not function perfectly as intended in all cases. However, we believe that the design already incorporates considerations to address such concerns (e.g., preventing over-regularization), and the performance improvements were explicitly demonstrated through the ablation study.\\n\\n**W2.** Moreover, I regard the efforts to adjust bounding box affordable, given the current speed of NeRF training.\\n\\nIt is often difficult to achieve good results even with the best possible bounding box because a complex, polygon-shaped learnable region cannot be adequately represented by a simple bounding box.\\n\\nIn practical scenarios where NeRF is applied as a service, it is hard to expect general users (non-specialists or researchers) to conduct repeated experiments.\\n\\nI view the proper restriction of the learning region not merely as an extension of parameter adjustment or 3D editing but as a theoretical contribution that makes NeRF a well-posed problem, akin to a differential equation with appropriately defined boundary conditions.\\n\\n**W3.** but there is still a lack of comparison with other sota methods.\\n\\nIt is indeed unfortunate that our model does not actively compete with other models in terms of 
SOTA performance. However, we highlighted in our work that over-regularization methods or optimization issues for specific data classes have arisen in pursuit of SOTA performance in NeRF. We attributed these issues to a lack of dataset diversity. To address this, we proposed a completely new dataset, and we hope this will be considered an additional contribution (please refer to Section 4 of our rebuttal: New Dataset Proposal: Randomized Structures and Patterns).\\n\\nWe accept your evaluation and want to express that the discussion process has been instrumental in improving the paper. We will continue to reference this feedback in the future. Thank you again for your valuable input.\"}", "{\"title\": \"Reminder\", \"comment\": \"The discussion period is coming to an end. We are looking forward to your valuable feedback. Thank you.\"}", "{\"title\": \"Response to Reviewer M1eU\", \"comment\": \"## Response to Weaknesses and Questions\\n\\n**Q1. The dataset are too small to well demonstrate the upper bound of the proposed method**\\n\\nWe are planning experiments on new datasets.\\n\\nHowever, we would like to point out that there is currently a lack of suitable datasets for few-shot experiments. When splitting data into training and evaluation sets, there is a high likelihood that unobserved regions in the training set are included in the evaluation set. In such scenarios, how should the model approximate completely unobserved regions? As black, white, gray, random colors, or the mean color? This is nonsensical and sometimes leads researchers to enforce overfitting to achieve high evaluation PSNR.\\n\\nTherefore, we are designing a random structure, random texture dataset and plan to create hundreds to thousands of samples. This approach aims to eliminate potential biases in evaluations.\\n\\n**Q2. 
And then the results are sensitive to the experimental setting, making the results less convincing.**\\n\\n Our goal is to ensure functionality across a wide range of near and far plane settings, from narrow to wide configurations. While the ultimate aim is to make the method near-far parameter-free, achieving exact values of 0 and infinity is numerically impossible. Therefore, we used a setting of 0.5 to 1000. Could you clarify which aspects you found to be sensitive to the experimental settings?\\n\\n**Q3. Lack enough comparisons to other methods, like RegNeRF, ZeroRF, etc.**\\n\\nWe acknowledge the lack of comparisons with other methods and would like to provide additional context regarding our experimental settings.\\n\\nWe attempted to compare our model with **Reg-NeRF**. However, we encountered difficulties due to the complex and ambiguously defined experimental conditions (particularly the near-far threshold). Based on our understanding, Reg-NeRF first applies a near-far condition, then performs coordinate warping into the NDC space, and subsequently applies a secondary near-far thresholding process within the NDC space. These settings are not only hardcoded but also vary numerically depending on the data class. Unfortunately, neither the main text nor the supplementary material of the Reg-NeRF paper explicitly specifies these conditions.\\n\\nAs a result, we could not ensure that our comparison with Reg-NeRF was conducted under equal conditions and were therefore unable to include those results in our paper.\\n\\nThis issue is not unique to Reg-NeRF but is common across many models, as they often fail to explicitly detail the hyperparameter settings (e.g., near-far thresholds) used in their codebases.\\n\\nWhile we recognize the limited comparisons as a weakness of our study, we conducted an in-depth review to address this and highlight that our model overcomes these issues by being a **near-far threshold-free model**. 
This characteristic sets our approach apart and addresses a major limitation observed in other models.\\n\\n**Q4. Lack experiments on various number of views, making me unclear about the sensitivity and scalability**\\n\\nWe are currently conducting experiments to provide results for a wider range of views.\\n\\n**Q6. So, why not enhance the paper with experiments on 3DGS?**\\n\\nAnalyzing our method in comparison with Gaussian Splatting-based approaches is indeed an interesting task. However, there are important considerations to keep in mind. Most Gaussian Splatting-related works begin training with a point cloud obtained through SfM as a prior input.\\n\\nIn contrast, many NeRF models do not use such inputs and start training solely with images and camera positions.\\n\\nComparing Gaussian Splatting, which starts with 3D prior information about image surfaces, with NeRF models requires careful attention. It would be more appropriate to compare our method with Gaussian models that either use random Gaussian priors or no priors at all.\\n\\nAlthough NeRF and Gaussian Splatting share similarities, they differ significantly in their rendering methods (volume rendering vs. alpha-sorting and rasterization). Therefore, implementing such a comparison may take some time.\\n\\nWe are primarily focused on maximizing the potential of NeRF and its implicit network capabilities. We also plan to design future projects to integrate our method with Gaussian Splatting.\\n\\nWe are conducting additional experiments, including addressing the points raised by you and other reviewers. Additional responses and materials will be uploaded during the review period, and we would greatly appreciate your continued interest.\"}" ] }
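The rebuttal discussion above describes the same training-schedule idea twice: the regularization influences the model only during the early iterations (Equation 10), and a "Dynamic Lambda" regulates the regularization weight using a detached copy of the RGB loss. Below is a minimal sketch of how such a schedule could be wired together; the linear warmup and the multiplicative form are illustrative assumptions, not the paper's actual equation.

```python
def regularized_loss(rgb_loss, reg_loss, step, warmup_steps=1000, base_weight=1.0):
    # Linearly fade the regularizer out over the first `warmup_steps`
    # iterations so the base NeRF objective dominates afterwards.
    gate = max(0.0, 1.0 - step / warmup_steps)
    # Scale by the current RGB loss treated as a constant (a detached copy):
    # as reconstruction improves, regularization automatically weakens,
    # which guards against over-regularization.
    lam = base_weight * rgb_loss * gate
    return rgb_loss + lam * reg_loss
```

After `warmup_steps`, the returned value is just `rgb_loss`, matching the claim that the model then behaves almost identically to the base nerfacto.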
DhHIw9Nbl1
Decoupling Layout from Glyph in Online Chinese Handwriting Generation
[ "Minsi Ren", "Yan-Ming Zhang", "yi chen" ]
Text plays a crucial role in the transmission of human civilization, and teaching machines to generate online handwritten text in various styles presents an interesting and significant challenge. However, most prior work has concentrated on generating individual Chinese fonts, leaving complete text line generation largely unexplored. In this paper, we identify that text lines can naturally be divided into two components: layout and glyphs. Based on this division, we designed a text line layout generator coupled with a diffusion-based stylized font synthesizer to address this challenge hierarchically. More concretely, the layout generator performs in-context-like learning based on the text content and the provided style references to generate positions for each glyph autoregressively. Meanwhile, the font synthesizer, which consists of a character embedding dictionary, a multi-scale calligraphy style encoder and a 1D U-Net based diffusion denoiser, will generate each font at its position while imitating the calligraphy style extracted from the given style references. Qualitative and quantitative experiments on the CASIA-OLHWDB demonstrate that our method is capable of generating structurally correct and indistinguishable imitation samples.
[ "Online handwriting generation; Layout generation; Calligraphy imitation; Conditional diffusion Model" ]
Accept (Poster)
https://openreview.net/pdf?id=DhHIw9Nbl1
https://openreview.net/forum?id=DhHIw9Nbl1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x7Pj713cCh", "uZPXoOEE3P", "qgraRpimkj", "odLo7sRGxQ", "mrMGfdkonS", "ipjCwhwTKS", "i0yOYWJS8o", "h3odSBvR7y", "ehSQAlDhVH", "eKDfyY5jqC", "bMyY8jcxcD", "aWtDYKCPaM", "WiL1LRrMRJ", "T8EUgp4t56", "Srb8GOUBRv", "RbgNYHepV0", "QO6uGlbfzK", "PmuGhxUjYA", "NoJa0Aubpf", "NKdLAyLYgD", "MqR0803VDs", "LtRvc3FqPF", "HmnxV6FD8I", "FLhp5e7JIj", "F32Kk7yu09", "5jEdcj1i5H", "4tmrFP8mDb", "2ZD7jjlcNP", "2KS6fTYY80", "2FT7Nxbq6Z", "10rD81Fq00" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1731897609005, 1732592954083, 1737523970850, 1732372536334, 1730517673739, 1731906643400, 1732533369646, 1731872546223, 1732282405188, 1731517201749, 1731576542094, 1731567323593, 1732371786329, 1732352707623, 1732534380010, 1733289873205, 1732534757497, 1731561133552, 1732286042392, 1730468766548, 1730647054727, 1734700966513, 1732270777864, 1732533365363, 1732267409149, 1732587322198, 1731875919020, 1731874120062, 1731872114794, 1731519658733, 1730712018922 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_Up7T" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_BU7e" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_Up7T" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_BU7e" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_kxE9" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_Up7T" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_kxE9" ], [ "ICLR.cc/2025/Conference/Submission9241/Area_Chair_6SAB" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_Up7T" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_BU7e" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Authors" ], [ "ICLR.cc/2025/Conference/Submission9241/Reviewer_iC1s" ] ], "structured_content_str": [ "{\"title\": \"follow-up questions\", \"comment\": \"Thanks for the detailed responses, which address most of my concerns. However, I am still not clear about the following three questions.\\n\\n(1) Do the authors consider the direction of each generated character? For example, I think the italic character in each position is very common.\\n\\n(2) In line 285, it seems the L represents the length of w_i (the i-th character, please correct me if my understanding is wrong). So each character has 500 trajectory points? 
Why does each character have the same number of points?\\n\\n(3) I am sorry that maybe I did not describe my question clearly. In the paper, the authors describe that \\\"use l1 distance between ground truth and generated layout as the loss function\\\". So my question is: how is the ground truth obtained here? Is this the same as the style reference used during training, as in Figure 3? Or does the dataset have exact ground-truth annotations for each generated layout?\"}", "{\"comment\": \"Thank you for recognizing our efforts and improvements. Your rigor and expertise have provided us with valuable suggestions to enhance our work. We appreciate your contribution to the paper!\\n\\nWarm regards,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"> Question2: About the effectiveness of the layout generator.\\n\\n**Answer**: Layout generation is the core of our approach and one of the most significant innovations of our method.\", \"it_is_important_to_emphasize_that\": \"**the effectiveness of our in-context layout generation method lies in its ability to ensure that the layout style of the generated portion remains consistent with that of the prefix portion, rather than whether the prefix belongs to the ground truth text line.** To demonstrate this point, we have conducted further quantitative and qualitative experiments:\\n\\n1. Quantitatively: We have supplemented the quantitative experiments on layout generation presented in Table 3 as follows: (In-Context-gt refers to the case where we use the first 10 words from the ground truth text line as the prefix, while In-Context-other refers to the case where we use 10 words from another randomly picked text line written by the same author as the prefix.) 
As can be seen, the performance is almost identical, because the style of different text lines written by the same author is nearly consistent.\\n\\n| | $\\\\nabla_1$ | $\\\\nabla_2$ | $\\\\nabla_3$ | $\\\\nabla_4$ | $\\\\nabla_5$ | $\\\\nabla_6$ | $\\\\nabla_7$ | $\\\\nabla_8$ |\\n|----------|----------|----------|----------|----------|----------|----------|----------|----------|\\n| In-Context-gt | 0.046 | 0.122 | 0.062 | 0.058 | 0.129 | 0.129 | 0.364 | 0.419 |\\n| In-Context-other | 0.047 | 0.124 | 0.057 | 0.058 | 0.130 | 0.132 | 0.363 | 0.422 |\\n\\n2. Qualitatively: In Figure 7 of the original paper, we indeed use the first 10 characters from the ground truth text line as the prefix. This is primarily to provide a more intuitive demonstration that our method can maintain **consistency between the layout style of the generated part and the prefix part**. We have supplemented the qualitative experiments in Section 4, Figure 7 of the supplementary material. We demonstrate that, using ten characters from other text lines by the same author as a prefix, our method can still capture the layout style quite effectively. As long as the layout style of different lines from the same author remains consistent, our method can work well.\\n\\nThank you for your thoughtful questions, which have helped us make our experimental section more comprehensive.\\n\\n---\\nWe hope that our response can effectively address your concerns. If you have any unresolved issues or suggestions, we would be more than happy to provide further clarification and make improvements to the best of our ability, as your suggestions have played an important role in improving our work!\", \"title\": \"Response to the following questions\"}", "{\"summary\": \"This paper focuses on the generation of online Chinese handwritten text lines. 
The core of this method lies in decomposing text line generation into layout generation and character generation, and filling characters into the generated layouts to form complete text lines. Experiments evaluate the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) This paper proposes a hierarchical online Chinese handwritten text line generation method. The proposed method utilizes a layout generator and a font synthesizer to produce the layouts and characters independently, then arranges the characters within the layouts to create complete text lines.\\n\\n2) The proposed method achieves the best performance in the purely data-driven font generation task.\", \"weaknesses\": \"1) The multi-scale style encoder is not a new design in the handwriting generation area, as a similar idea has been proposed in [a]. Besides, the proposed style contrastive learning loss is somewhat similar to the style learning loss in [b].\\n\\n2) The method description is not clear: (1) In lines 233-237, it is mentioned that style reference samples are used as context prefixes, but how they guide the subsequent layout generation is unclear. (2) The paper does not specify the modality of the style references used, online data, or offline images. 
(3) The paper does not specify the number of style reference samples used, one-shot or few-shot.\\n\\n3) Section 4.3.2 lacks quantitative experiments in terms of calligraphy styles, raising doubts about whether the proposed Multi-Scale Style Encoder can accurately extract calligraphy styles from entire text lines.\\n\\n4) In the 'Conditional' row of Figure 7, the generated layouts (red boxes) show significant absences at the beginning of the text line, which raises concerns about the effectiveness of the layout generator.\\n\\n5) It is recommended to compare the proposed method with style transfer-based approaches, as it can be relatively straightforward to extend this method to a style transfer setting by replacing character embedding with a CNN-based content encoder.\\n\\n6) The layout generator requires real layouts of style references, which is not directly available in the application, does this limitation affect its applicability? If some simple layout extraction methods are used to extract the pseudo-layouts of style references, what impact would this have on generation performance?\\n\\n7) The paper provides very few generated visual results and lacks visual comparisons with the baseline.\\n\\n[a] Wang H, Wang Y, Wei H. Affganwriting: a handwriting image generation method based on multi-feature fusion, ICDAR, 2023.\\n\\n[b] Dai G, Zhang Y, Wang Q, et al. Disentangling writer and character styles for handwriting generation, CVPR, 2023.\", \"questions\": \"My main concerns are the novelty of the proposed multi-scale style encoder and style contrastive learning loss, and the effectiveness of the proposed layout generator. 
For details, please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"answers\", \"comment\": \"Thank you for your detailed question! Below are the answers to the individual queries:\\n\\n(1) In the layout generation stage, we do not explicitly consider the slant (italic) of the generated characters, as we believe this factor can be **addressed in the second stage** of individual character generation. If all provided style reference samples exhibit the same slant, the generated character samples will similarly imitate this slant. For example, in Figure 3 of the supplementary material, \\\"writer2,\\\" both the real and generated characters exhibit a similar slant angle.\\n\\n(2) Yes, your understanding is correct. To be more precise, the L in line 285 represents the length corresponding to the i-th character at a certain feature layer. (The value 500 is just an example to illustrate the meaning of L, and not every character has this many points.)\\n\\n(3) Thank you for the clarification. The dataset is annotated. More specifically, the dataset labels which trajectory points belong to each single character. Then, following the definition of layout described in Section 3.2 of the original paper, we manually calculate these ground truth values during the data processing phase.\\n\\nBased on the previous discussion, we have made revisions and additions to the original text. We hope our response effectively addresses your concerns!\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you once again for your thoughtful and constructive feedback! 
During the rebuttal process, based on your suggestions, we have made the following improvements: 1) We clarified the complexity and efficiency of the method; 2) We provided an analysis of the application scenarios and have demonstrated its effectiveness in data augmentation; 3) We investigated ways to expand the model for generating continuous handwriting.\\n\\nWe sincerely hope that our responses have satisfactorily addressed your questions and concerns. If you have any further inquiries or require additional clarification, please do not hesitate to let us know. We are more than happy to refine and strengthen the contribution of our work. Once again, thank you for your invaluable input!\\n\\nSincerely,\\n\\nThe authors.\", \"title\": \"Looking forward to your valuable feedback\"}", "{\"comment\": [\"Dear Reviewer,\", \"We have just submitted the supplementary material. In response to your questions and suggestions, we have made the following revisions:\", \"In Section 1, we present a visual comparison with state-of-the-art methods, which enhances the completeness of our paper.\", \"In Section 2, we have improved the visual subjective evaluation experiments to not only assess the effectiveness of style imitation but also evaluate the consistency between different lines of text. Thank you for your valuable suggestions!\", \"In Section 3, we demonstrate how the designed model can be extended to an end-to-end generation framework, addressing the potential issue of character linking in generated text. We also compare the advantages and disadvantages of the two methods at the current stage. This also demonstrates the motivation and benefits of decoupling the layout from the glyphs.\", \"In Section 4, we present more visual comparison results of the layout generation methods with previous approaches and carefully elaborate on our advantages and contributions.\", \"We have also made these revisions to the corresponding sections of the original text as per the suggestions. 
Thank you again for your thoughtful feedback! If you have any additional questions or suggestions, please feel free to let us know!\"]}", "{\"comment\": \"Thanks for the clarification. This is a good idea. It is clear now and I have no further questions.\"}", "{\"title\": \"Response to Concerns Regarding the Novelty and the Effectiveness of the Proposed Method.\", \"comment\": \"Thank you very much for your comments and questions, which are of great significance for us to improve our article and work. Next, we respond in detail.\\n\\n### **Regarding the novelty and contribution**:\\nFirstly, we would like to emphasize that the primary novelty of our work lies in the proposal of a hierarchical approach to **solving the challenging task of handwritten Chinese text line generation**, a problem that has been **rarely explored**. This decoupling strategy can also be effectively extended to other complex handwriting generation tasks. Our contributions within this framework are twofold: 1) A novel method for explicit layout modeling, and 2) A purely data-driven character generation approach based on a 1D U-Net. \\n\\n\\nFor 1): We would like to highlight that we cleverly adapt the in-context generation paradigm from **next-token prediction** in LLMs to the task of generating layouts. This simple yet effective approach forms the core of our method. To the best of our knowledge, our work is one of the earliest explorations into handwritten Chinese text line generation, and it is the **first to explicitly generate the layout for text lines**. Additionally, an extra benefit of our approach is that the generated data **naturally contains strong positional labels** of characters, which makes it highly convenient for data augmentation in tasks such as character segmentation and recognition.\\n\\nFor 2): We design a 1D convolutional network capable of extracting **multi-scale features** specifically for **online sequential data**.
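To sketch what such a multi-scale 1D convolutional encoder does (an illustrative toy, not the paper's actual network; layer widths and kernel sizes here are invented), each stride-2 1D convolution halves the trajectory length, so stacking layers yields features at progressively coarser temporal scales:

```python
import numpy as np

def conv1d(x, kernels, stride=2):
    """Valid 1D convolution over a trajectory x of shape (L, C_in).

    kernels: (D, K, C_in) -> returns (L_out, D), L_out = (L - K)//stride + 1.
    """
    L, _ = x.shape
    D, K, _ = kernels.shape
    L_out = (L - K) // stride + 1
    out = np.empty((L_out, D))
    for i in range(L_out):
        window = x[i * stride : i * stride + K]              # (K, C_in)
        out[i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

def multi_scale_features(traj, widths=(8, 16, 32), seed=0):
    """Stack stride-2 conv + ReLU layers; keep the feature map at every scale."""
    rng = np.random.default_rng(seed)
    feats, x = [], traj
    for d in widths:
        x = np.maximum(conv1d(x, rng.standard_normal((d, 2, x.shape[1]))), 0.0)
        feats.append(x)
    return feats

traj = np.random.randn(1000, 3)           # online points: (x, y, pen-state)
scales = multi_scale_features(traj)
[f.shape for f in scales]                 # [(500, 8), (250, 16), (125, 32)]
```

A per-scale contrastive loss, as described in the rebuttal, would then compare writer embeddings pooled from each of these feature maps.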
In contrast, previous approaches for sequential handwriting data typically use LSTM or Transformer models, which do not explicitly consider multi-scale features. Furthermore, our approach is purely data-driven, distinguishing it from the style transfer paradigm commonly used in state-of-the-art methods. These two approaches are not mutually exclusive but can complement each other in different application scenarios.\\n\\nBelow, we address the reviewer\\u2019s questions one by one:\\n\\n---\\n\\n### **Response to Specific Comments**:\\n> Comment 1: About the novelty of the multi-scale encoder and multi-scale contrastive loss.\\n\\n**Answer**: Thank you for pointing out the relevant literature. The additional references have been incorporated in our revision. However, it is important to note that the work referenced in [a] deals with offline handwritten data, specifically images. In contrast, all the data used in our work is online data, which is sequential in nature. In previous related work, networks designed for processing **online handwritten data** typically **do not explicitly** model multi-scale features. In this paper, we introduce a model based on 1D convolutional networks, marking the first instance of explicitly modeling multi-scale features in the field of online Chinese handwriting generation. Additionally, the contrastive loss function we design is tightly integrated with this multi-scale network to enhance its ability to distinguish calligraphy styles from different writers at different scales. In paper [b], by contrast, the proposed writer-wise and character-wise contrastive learning is applied **only at a single feature scale**. From this perspective, our contributions are not similar; they could even **complement each other**. \\n\\n> Comment 2: It is not clear: 1) How the prefix guides generation in the layout generator. 2) The modality of the style reference data.
3) The task setting.\\n\\n**Answer**: 1) A more intuitive explanation can be drawn from the **next-token-prediction paradigm** in language models. When a partial sequence of text is input as a context, the subsequent generated content tends to be coherent with the prefix. Similarly, in our task, when the layout information for the first few characters is input as a context, the layout of subsequent characters is naturally consistent with the prefix. For example, if the initial layout exhibits a general skew, there is a higher probability that this characteristic will be maintained in the generated text, as we demonstrate in the experiment section (Figure 7). This is what we call **\\u201cin-context-like layout generation\\u201d**. 2) As described in Section 3.1, all the data we used in this paper is online sequential data. Therefore, compared with previous work, part of the novelty of our approach lies in designing a network specifically tailored for online handwritten data. 3) For fairness in comparison, we have kept the experimental setup consistent with previous methods, specifically using a few-shot setting, where 10 reference characters are used for style reference.\\n\\n\\n[a] Wang H, Wang Y, Wei H. Affganwriting: a handwriting image generation method based on multi-feature fusion, ICDAR, 2023.\\n\\n[b] Dai G, Zhang Y, et al. Disentangling writer and character styles for handwriting generation, CVPR, 2023.\"}", "{\"comment\": \"We truly appreciate the positive feedback and recognition of our work\\u2019s novelty, contribution, and foundation! In the following, we provide detailed responses to the queries one by one:\\n\\n---\\n\\n>Comment1: Does text line style only manifest in the relative positions and sizes of individual characters?
The reasonableness of the independence assumption, and whether the decoupling might limit the method's ability in style learning.\\n\\n**Answer**: \\nThank you for raising such a valuable and thought-provoking question. This issue is indeed worth exploring further and may provide important insights for future improvements to our method.\\n\\n+ On the manifestation of text line style: We agree that the entire text line writing style is difficult to fully capture with disentangled styles. However, we believe that the calligraphy style of individual characters, as well as their relative positions and relationships, form the most intuitive and crucial components of the entire style. In fact, these factors are sufficient to describe **the vast majority** of handwriting styles, making them central to our approach.\\n\\n+ On the reasonableness of the independence assumption: Our independence assumption is based on extensive observations of writing habits and visualizations from the dataset. We believe this assumption holds in most practical scenarios. This is ultimately an empirical question, and the validity of our assumption can be partially supported by the subjective satisfaction observed in the user studies of synthesized samples.\\n\\n+ On the decoupling and potential limitations: We acknowledge that decoupling the writing process does introduce some limitations. The most powerful solution may indeed lie in an end-to-end approach. However, training a high-quality, end-to-end Chinese text line generation model remains a challenging task. As such, our current approach represents a trade-off between simplicity and performance.\\n\\n>Comment2: About some figure issues.\\n\\n**Answer**: Thank you for your helpful suggestions; we appreciate your attention to these details and will make these revisions in the revised version of the paper.\\n\\n>Question1&2: 1.
If the generated bounding boxes have different shapes than the generated characters, how should this be handled? 2. Why not jointly train the two models end-to-end instead of training them separately?\\n\\n**Answer**: The two issues are interrelated, so we address them together:\\n+ Referring to Appendix 2.1, in the data preprocessing stage, we normalize the overall height of text lines to 1, which results in **significant variance in the actual size of each character**. For instance, in lines with horizontal writing, character heights are close to 1, but in lines with certain tilted writing angles, character heights may be as low as 0.3. As stated in Appendix 2.3 (training details): \\u201cIn practical implementation, we found that without normalizing the size of the characters, the model's ability to learn the structural information of the characters would be compromised, leading to consistent errors in the generated structures. Therefore, during the training of the character generator, each character is normalized to the same size and learned independently.\\u201d\\n+ As described above, the character generator directly produces characters with normalized size. We employed a simple method of scaling the xy-coordinates to fit the bounding box output by the layout generator. Since the layout generator takes into account the types of characters, the bounding boxes it generates are reasonable.
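As a hypothetical sketch of this scaling step (not the authors' code; the function and variable names are invented), fitting a size-normalized glyph into a layout box is just a per-character affine map:

```python
import numpy as np

def place_characters(norm_chars, boxes):
    """Scale size-normalized glyph trajectories into layout bounding boxes.

    norm_chars: list of (n_i, 2) point arrays with coordinates in [0, 1]^2,
                produced independently by the character generator.
    boxes:      (N, 4) array of (x, y, w, h) from the layout generator.
    """
    placed = []
    for pts, (x, y, w, h) in zip(norm_chars, boxes):
        # scale to the box size, then translate to the box origin
        placed.append(np.asarray(pts) * np.array([w, h]) + np.array([x, y]))
    return placed

chars = [np.array([[0.0, 0.0], [1.0, 1.0]]),   # first glyph's trajectory
         np.array([[0.5, 0.5]])]               # second glyph's trajectory
boxes = np.array([[0.0, 0.0, 1.0, 1.0],        # (x, y, w, h) per character
                  [1.2, -0.1, 0.8, 0.9]])
line = place_characters(chars, boxes)          # glyphs placed along the line
```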
Consequently, we observed that this scaling does not significantly affect the shape or aspect ratio of the characters.\\n\\n>Question3: Whether the method can be applied to other languages?\\n\\n**Answer**: Due to the decoupling of layout and glyph properties, our method is especially well-suited for language systems in which characters are relatively independent, such as Chinese, Japanese, Korean, or even mathematical formulas.\\n\\n>Question4: What does 'L', 'sequence' represent in line 285?\\n\\n**Answer**: Recalling Section 3.1, our data consists of online **sequential data**. For example, if there are 1000 trajectory points, the raw data dimension would be $\\\\mathbb{R}^{(1000,3)}$. Our network consists of 1D convolutional layers; for example, after passing through a convolutional layer with a stride of 2 and D kernels, the feature dimension would be reduced to $\\\\mathbb{R}^{(500,D)}$. L represents the length of the feature sequence, which is 500 in this case.\\n\\n>Question5: In Line 230, is the ground truth the same as the reference input in Figure 3?\\n\\n**Answer**: Yes, we use the **teacher-forcing** technique during training, where for each text line, the bounding boxes of the first i-1 characters are used as the prefix (reference) to predict the bounding box of the i-th character.\\n\\n---\\nThank you again for your insightful suggestions to improve our paper! We hope that our response adequately addresses your concerns.\"}", "{\"comment\": \"Thank you for the thoughtful and positive feedback! We greatly appreciate the recognition of the innovative aspects of our approach and the thorough evaluation of our model's effectiveness.
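To illustrate the teacher-forcing scheme from the Question 5 answer above with a toy example (the trend-extrapolating predictor below is only a stand-in for the actual learned layout model): during training, the ground-truth boxes of the first i-1 characters form the prefix and box i is the target; at inference, the predictor is rolled out autoregressively, so a skewed prefix keeps its skew:

```python
import numpy as np

def make_teacher_forcing_pairs(gt_boxes):
    """For a line of N ground-truth boxes, yield (prefix, target) pairs:
    boxes 1..i-1 predict box i, as in teacher forcing."""
    gt_boxes = np.asarray(gt_boxes, dtype=float)
    return [(gt_boxes[:i], gt_boxes[i]) for i in range(1, len(gt_boxes))]

def predict_next_box(prefix):
    """Toy predictor: continue the prefix's mean (dx, dy) advance and
    average (w, h); a real model would be a learned autoregressive net."""
    step = np.diff(prefix[:, :2], axis=0).mean(axis=0)
    return np.concatenate([prefix[-1, :2] + step, prefix[:, 2:].mean(axis=0)])

def generate_layout(prefix, n_chars):
    """Autoregressive rollout: each new box is appended to the context."""
    boxes = list(np.asarray(prefix, dtype=float))
    for _ in range(n_chars):
        boxes.append(predict_next_box(np.stack(boxes)))
    return np.stack(boxes)

# A skewed prefix: x advances by 1.2 and y drifts by -0.1 per character.
prefix = [(0.0, 0.0, 1.0, 1.0), (1.2, -0.1, 1.0, 1.0), (2.4, -0.2, 1.0, 1.0)]
pairs = make_teacher_forcing_pairs(prefix)
layout = generate_layout(prefix, n_chars=2)   # the skew continues
```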
In the following, we provide detailed responses to the reviewer\\u2019s queries one by one:\\n\\n---\\n\\n>Comment1: Missing qualitative comparisons with prior methods to prove the advantages in style fidelity and layout accuracy.\\n\\n**Answer**: This is a valuable suggestion.\\n+ For style fidelity: **we have supplemented** a qualitative style comparison with existing methods in Appendix A.2.5.\\n+ For layout accuracy: As stated in Section 4.3.1, previous work models the layout of text lines via a Gaussian distribution in a model-free manner. We have reproduced this method and demonstrate the effectiveness and necessity of our layout generation approach in Sections 4.3 & 4.4. In particular, we visually demonstrate in Figure 7 that overly simplistic model-free methods fail to generate layouts with distinct personal characteristics, making them easily recognizable at a glance. **We have supplemented** more details in Section 4 of the supplementary materials.\\n\\n>Comment2 & Question 1: The contributions over previous approaches could be articulated more clearly, especially regarding the effectiveness of the layout-glyph separation; Could more details be provided on how the layout-glyph separation specifically enhances performance in comparison to prior models?\\n\\n**Answer**: Thank you for your suggestion! This is an important issue in our approach. We provide more details about the layout-glyph separation: \\n\\n+ Contribution: As far as we know, we are **the first to complete the generation of handwritten Chinese text lines**. The significant contribution of the layout-glyph separation, therefore, is its ability to effectively address this issue. In Chinese text lines, the structure and the position of each glyph are both crucial.
However, previous methods for single-character generation typically focus on transferring a standard character template style to a handwritten style, which completely neglects modeling the positional relationships between multiple characters, making them **unsuitable for text line** generation tasks.\\n+ Motivation: When extending from single characters to text lines, a key challenge lies in how to represent the content information of the text line. In our extended experiments (Section 3 of the supplementary materials), we conducted end-to-end string generation experiments by concatenating the embeddings of individual characters to form the embedding of a text line. However, without further improvements, we found that the model was capable of generating short sequences but **encountered difficulties with longer sequences**. We attribute this issue to the complexity of training a single model to simultaneously learn both the structure of the characters and their relative positions. To reduce the learning complexity of the diffusion model, we decomposed the full probabilistic model and decoupled the layout component from the glyphs. While keeping performance largely satisfactory, this makes the training and sampling process simple and stable.\\n\\n>Comment3: The organization could be refined for readability.\\n\\n**Answer**: Thank you for pointing out the writing issues. We will carefully revise and supplement the content based on your feedback to improve the readability and quality of our paper.\\n\\n>Question2: Would additional experiments on style consistency across diverse text lines clarify the benefits of this approach?\\n\\n**Answer**: This suggestion is highly insightful! We have **improved the subjective experiments** in Section 4.4 considering this factor. Specifically, we now construct each test sample by combining a real handwritten text line from a particular author with **more than one** synthetic text line.
The participants are tasked with determining whether the lines were written by the same person. If there are inconsistencies in style between the synthetic sample and the real sample, or between different synthetic lines, the testers are likely to notice. We believe this improvement makes our experiment more complete. \\n\\n>Question3: Could this method be adapted to non-Chinese scripts or connected handwriting styles?\\n\\n**Answer**: \\n+ Our method is particularly suited for language systems where characters are relatively independent, such as Chinese, Japanese, Korean, or even mathematical formulas. \\n+ As mentioned in Section 5 of the original paper, the limitation of layout-glyph separation lies in its difficulty in replicating connected handwriting styles. However, in Section 3 of the supplementary material, we have explored and made attempts at generating connected-style writing in an end-to-end manner.\\n\\n---\\nThank you again for your valuable suggestions for improving our paper! We hope our response can effectively address your concerns.\"}", "{\"title\": \"Response to the following questions\", \"comment\": \"Thank you for the clarification on the issues! Below are the responses to your concerns and questions:\\n> Comment 1: The novelty about the multi-scale style encoder with contrastive loss.\\n\\n**Answer**: \\nThank you for your insightful feedback based on your extensive domain knowledge. We will provide a more detailed explanation of our contribution: \\n+ **Background**: As previously stated, our method operates entirely on online data. Since online trajectories can contain more information, especially writing order, research on processing online trajectories is certainly worth investigating. However, previous methods based on online data extract style features in a relatively crude way; for example, they do not leverage multi-scale features of online data.
Therefore, how to extract rich features from online data is a topic worthy of further research.\\n+ **Contribution**: We fully acknowledge your valuable insights in the relevant field, and the additional references have been incorporated in our revision. However, although similar concepts may exist in related work on offline data, considering the background of online data, our approach is the **first implementation** to extract multi-scale style features from **online data**, and it has also demonstrated its effectiveness, providing a foundation for future research on style feature extraction for online data. We believe this is of certain value. More importantly, this is only a small part of our framework and not the main contribution.\\n\\n>Comment2: About the contribution of the proposed method to font generation.\\n\\n**Answer**: \\n\\n+ **Background**: As mentioned before, style transfer-based methods and purely data-driven methods have different potential application scenarios, and therefore cannot be completely replaced by each other. Moreover, the performance of the earlier purely data-driven methods **significantly lagged behind** that of style transfer-based methods. Therefore, how to improve the performance of purely data-driven methods in this field remains an important research question.\\n\\n+ **Contribution**: \\n\\n 1). For font: In our work, we have significantly bridged the performance gap between purely data-driven methods and state-of-the-art style-transfer methods, thereby laying the foundation for future studies on purely data-driven approaches. More specifically, our method performs almost identically to the state-of-the-art style-transfer methods in terms of style scores, while it slightly lags behind in content scores. We attribute this primarily to the fact that purely data-driven methods, lacking standard font structure information, need to learn everything from scratch.
As a result, they are more susceptible to annotation errors and noisy, overly sloppy samples in the dataset, which can negatively affect the stability.\\n\\n\\n 2). For generalization: More importantly, the 1D CNN-based denoiser we designed **can be directly extended for end-to-end multi-character generation** (see Appendix A.3 or Supplementary Material Section 3), which represents a **significant advantage** over previous methods. In contrast, previous methods for single-character generation typically focus on transferring a standard character template style to a handwritten style. However, text lines not only involve character structure but also include the size and position of characters, meaning they lack a standard template, making these methods unsuitable for text line generation tasks.\\n\\n>Comment3: About the application scenarios.\\n\\n**Answer**: \\n+ **Writing service**: The first point to emphasize is that we are the **first to provide a service** capable of generating text line-level data of arbitrary length while maintaining the user's layout style in a single pass. Additionally, in practical applications, users only need to write a continuous sequence of characters (fewer than 10 is also acceptable), which are then processed by a character segmentation algorithm to get the bounding box information we need. The current segmentation algorithm's performance is sufficient to meet the requirements of our application. Users do not need to mark the positions themselves. We believe that it is still very convenient for users.\\n\\n+ **Other applications**: In addition, our approach is not limited to providing handwriting services only. The generated data naturally contains strong positional labels of characters, which makes it highly convenient for data augmentation in tasks such as confidence calibration, where character position is needed to determine positive or negative samples.
We believe that our approach has significant potential for application in this regard. \\n\\n> Question1: About the AR and CR.\\n\\n**Answer**: Thank you for your attention to detail! Apologies for the mistake in Table 3, where we swapped AR and CR; this has now been corrected.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for your detailed responses, which partially address my concerns. However,\\n1) I still believe the multi-scale style encoder with contrastive loss is not a novel design within the handwriting generation domain. For the multi-scale style encoder, it incorporates the same multi-scale idea as [a], with the key difference being in the implementation: the proposed encoder uses a 1D CNN, while [a] employs a 2D CNN. This distinction, however, is a minor modification. Regarding the multi-scale contrastive loss, it essentially introduces the multi-scale concept of [a] into [b], i.e., applying contrastive learning loss across multi-scale features. Considering that both [a] and [b] are established works in handwriting generation, combining them does not constitute a novel design. \\n\\n2) The performance of the updated CNN-based version appears to lag behind the previous style transfer method, SDT (cf. Table 1). As a result, the contribution of the proposed method to font generation (cf. Section 4.2) is limited to the purely data-driven setting.\\n\\n3) The proposed method seems user-unfriendly, as it not only requires 10 reference characters but also forces users to manually mark the position of each character.\\n\\nBesides, my new questions are: \\n1) In Table 3, the metric AR is consistently higher than CR. However, according to the definitions in [c, d], AR includes insertion errors as an additional factor compared to CR, which should make it lower than CR. Could you clarify these results?\\n\\n2) It appears that the first 10 characters of each text line are used as style references to predict the subsequent layouts (cf.
the caption of Figure 7). However, these 10 characters seem to be part of the ground truth (GT). Does this lead to information leakage from the GT? In other words, are parts of the GT used as style references? Could you clarify this?\\n\\nLooking forward to your reply.\"}", "{\"comment\": \"Thanks for the detailed responses, I have no further questions.\"}", "{\"title\": \"The summary of rebuttal\", \"comment\": \"Dear AC and reviewers,\\n\\nIn this section, we aim to highlight the contributions of our work and summarize the key points from our rebuttal. Our study proposes a hierarchical approach to solve the challenging task of handwritten Chinese text line generation, which has been rarely explored. We introduce an in-context layout generator based on the next-token-prediction paradigm and construct a 1D-UNet-based diffusion denoiser for font generation. Our method can also be applied to other complex structured handwritten data and is suitable for use as a data augmentation technique that generates data with strong labeling information. \\n\\nDuring the rebuttal stage, we are pleased that the reviewers have acknowledged the novelty and contribution of our methods. We provided detailed responses to the reviewers' questions regarding the method's specifics. Following their suggestions, we not only improved our manuscript, but also added quantitative and qualitative experiments in the appendix and supplementary materials to further validate the effectiveness and scalability of the method. We believe that these revisions adequately address all the reviewers' concerns and further strengthen the contributions of our study.\\n\\nWe sincerely appreciate the time and effort that the reviewers have dedicated to evaluating our manuscript. Their insightful comments and constructive feedback have significantly improved the quality of our work.
We are also grateful for the AC's efforts throughout the rebuttal process.\\n\\nWarm regards,\\n\\nThe Authors\"}", "{\"comment\": \"We sincerely thank the reviewer for their positive response to our primary contribution and for increasing the score in recognition of our improvements.\\n\\nWarm regards!\"}", "{\"comment\": \"We are grateful for the positive feedback and valuable questions! We are truly pleased that the reviewer acknowledges the novelty, clarity, and flexibility of our proposed method. In the following, we provide detailed responses to the reviewer\\u2019s queries one by one:\\n\\n---\\n\\n>Comment1: Consideration about the model's complexity, training and inference efficiency.\\n\\n**Answer**: This is a very important consideration. In this response, we will first clarify more about the motivation behind decoupling and then provide a detailed analysis of the efficiency aspects, particularly in relation to training and sampling.\\n+ Motivation: We have also experimented with generating strings end-to-end based on our 1D U-Net generator. However, without further improvements, we found that the model can generate short sequences (e.g., two to five consecutive characters) but encounters difficulties with longer sequences. We attribute this issue to the complexity of having a single diffusion model simultaneously learn both the structure of the characters and their relative positions. To **reduce the learning complexity** of the diffusion model, we decompose the full probabilistic model and decouple the layout component from the glyphs.\\n+ Training and inference efficiency: 1) During the training phase, as described, the diffusion model only needs to learn the generation of individual characters, while the layout generator is responsible for planning the size and position of each character. This effectively reduces the complexity of training the diffusion model.
2) In the sampling phase, since the generation processes of the two components are decoupled, as shown in Figure 3, **they can be fully parallelized**. The character generator can simultaneously generate all characters in a string within a batch, while the layout generator has already planned their positions. Additionally, as detailed in the appendix, the task is not computationally intensive; therefore, our layout generator is lightweight (a 2-layer LSTM with a hidden size of 128).\\n\\n>Comment2: Analyze the application scenarios for this task.\\n\\n**Answer**: We currently have three considerations for potential application directions:\\n+ We can provide **personalized writing services**. As is widely known, Chinese comprises thousands of commonly used characters. Users can submit a short passage (e.g., a dozen characters) of any content as a stylistic reference sample. We can then mimic their writing habits to generate handwritten samples of any length and content.\\n+ It can be used **in the field of education**. Since the online data we generate includes dynamic information about the writing process, it can teach the stroke structure and order of different Chinese characters.\\n+ Used for **data augmentation**: In today's Chinese text recognition field, the available training data is extremely limited (commonly only CASIA OLHWDB). We have made initial progress in this area. We use all real-world data (about 60,000 text lines) as training data and train a text line recognizer (a 3-layer LSTM with CTC loss), achieving 85.5 AR / 85.7 CR on the ICDAR 2013 test set.
The results of data augmentation are as follows, demonstrating the application value of our method in this task.\\n\\n| Training data | AR | CR |\\n|:----------:|:----:|:----:|\\n| Real data | 85.5 | 85.7 |\\n| Real + 25k aug | 88.3 | 88.7 |\\n| Real + 50k aug | 90.9 | 91.2 |\\n\\nOne advantage of our approach is that the generated data inherently includes strong positional labels for characters, which makes it suitable for data augmentation in recognition tasks that **require strongly labeled data** for training, such as character segmentation. \\n\\n\\n>Comment3: Limitations in handling certain calligraphic styles.\\n\\n**Answer**: The hierarchical generation method does have a weakness in not being able to generate inter-character connections. A potential solution is an end-to-end framework, or a trade-off: training an end-to-end generation model for a limited number of characters and then combining it with the layout generation model. We present some possibilities for extending to an end-to-end framework in Section 3 of the supplementary material. We will strive to make up for the shortcomings of our methods in future work.\\n\\n---\\nWe hope our response effectively addresses your concerns. If you have any remaining questions or need further information about our study, please feel free to let us know!\"}", "{\"summary\": \"The paper addresses the task of generating online handwritten Chinese text lines conditioned on content and style. It identifies that text lines can be divided into two components: layout and characters. The authors propose a hierarchical approach that includes a text line layout generator and a stylized font synthesizer.
The layout generator uses in-context-like learning to determine the positions of each character, while the font synthesizer generates characters that imitate the calligraphic style of the provided references. The method is evaluated using the CASIA-OLHWDB dataset, demonstrating its effectiveness in producing structurally correct and indistinguishable imitation samples.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. While some work on English handwritten text line generation exists, as far as I know, no such work has been published for online Chinese text lines. Compared to English characters, Chinese characters have more complex structures and a larger number of categories, making English generation methods unsuitable for direct application to Chinese. This work proposes a method to address this task, representing a noticeable contribution.\\n\\n2. The method decouples text line generation into two steps\\u2014layout generation and character generation\\u2014under a unified probabilistic framework, providing a good theoretical foundation and considerable novelty.\\n\\n3. The experimental section includes comprehensive comparative and visualization experiments for both layout generation and character generation, yielding convincing results.\\n\\n4. The paper is well-organized and clearly written.\", \"weaknesses\": \"1. The assumption that character generation is independent given their positions seems too strong. Does text line style only manifest in the relative positions and sizes of individual characters? I hope the authors can discuss the reasonableness of this assumption and explain whether it might limit the method's ability in style learning.\\n\\n2. It is better to add sub-figure indices for Figures 8 and 9. It seems each of Figures 8 and 9 has three sub-figures, but their boundaries are currently not clear. In Figure 7, it is also suggested to identify which one is the proposed method in the paper.
Of course, this is not a big issue.\", \"questions\": \"1. If the bounding box generated by the layout model and the bounding box generated by the character model have different shapes, how should this be handled?\\n\\n2. Since the method can be described as a unified probability distribution according to Equation 1, why not jointly train the two models end-to-end instead of training them separately?\\n\\n3. The paper does not discuss whether the method can be applied to handwriting generation for other languages.\\n\\n4. In Line 285, what does L represent? Although the authors write that this is the length of the feature sequence, it is not clear what this sequence represents. \\n\\n5. In Line 230, is the ground truth the same as the reference input in Figure 3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on the generation of online Chinese handwriting text lines. It proposes a hierarchical approach that decouples layout generation from glyph generation. The text line layout generator arranges character positions based on text content and writing style references, while the font synthesizer generates characters with specific styles. The contributions include a novel layout generator, a 1D U-Net network for font generation, and a multi-scale style encoder. Experiments demonstrate the effectiveness of the method in generating structurally correct and stylistically similar samples.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"(1) The hierarchical decomposition into layout and glyph generation is an innovative approach, particularly suited for complex scripts like Chinese.
This framework successfully addresses challenges specific to the language, such as the diversity of character structures.\\n\\n(2) The model is thoroughly tested on both character and line generation, with metrics tailored to layout and stylistic fidelity. The model's success across multiple metrics shows a well-rounded, effective design.\\n\\n(3) Despite the technical depth, the paper provides a good level of explanation for each module, with helpful visualizations that demonstrate layout and glyph generation separately.\\n\\n(4) The method has potential applications in handwriting synthesis, digital personalization, and document augmentation, contributing a valuable approach for future research in multilingual handwriting generation.\", \"weaknesses\": \"(1) Missing qualitative comparisons with prior methods, limiting insights into this model\\u2019s advantages in style fidelity and layout accuracy.\\n\\n(2) The contributions over previous approaches could be articulated more clearly, especially regarding the effectiveness of the layout-glyph separation.\\n\\n(3) The organization could be refined for readability, as the methods section contains complex explanations that could benefit from clearer structuring.\", \"questions\": \"(1) Could more details be provided on how the layout-glyph separation specifically enhances performance in comparison to prior models?\\n\\n(2) Would additional experiments on style consistency across diverse text lines clarify the benefits of this approach?\\n\\n(3) Could this method be adapted to non-Chinese scripts or connected handwriting styles?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies online handwritten Chinese text generation. After rebuttal, the overall rating of this paper is above the marginal acceptance threshold, but with mixed scores of 5,6,6,8. 
The Area Chair has read the paper, all reviews, and the authors' rebuttal. The main strengths of the paper include 1) the task of online handwritten Chinese text generation is considered important and the proposed method is well motivated, 2) the proposed decoupling of layout and glyph is reasonable and innovative, 3) the experimental design is thorough, and the results presented are convincing.\\n\\nDuring the rebuttal phase, the authors addressed most of the reviewers' concerns, leading Reviewers iC1s, Up7T, and kxE9 to lean toward accepting the paper. However, Reviewer BU7e remained concerned about the novelty of the proposed multi-scale style encoder with contrastive loss, maintaining a rating of 5. Despite this, considering all review comments and the paper's other contributions, the AC agrees that the paper meets the acceptance threshold. Additionally, during the discussion phase, Reviewer BU7e also agreed that the paper could be accepted despite the concerns regarding its novelty.\\n\\nThe recommendation is acceptance. The authors are advised to carefully revise the manuscript by incorporating the reviewers' suggestions to enhance its quality further.\", \"additional_comments_on_reviewer_discussion\": \"After rebuttal, Reviewers iC1s, Up7T, and kxE9 lean toward accepting the paper. Reviewer BU7e also agrees that the paper could be accepted despite the concerns regarding its novelty.\"}", "{\"title\": \"answers\", \"comment\": \"Thank you for your detailed question! 
Here, we will provide a detailed introduction to the dataset and the specifics of its usage: The dataset can first be divided into 1,200 different authors, with each author having dozens of handwritten lines.\\n***\\nTherefore, for each author i, we have all the text lines they have written:\\n\\n${writer_i}: [(line_{i1}, content_{i1}), (line_{i2}, content_{i2}),..., (line_{in}, content_{in})]$\\n\\nAssume that we use the j-th line from author i as the imitation target, so $line_{ij}$ serves as the ground truth. For the style reference, we randomly select the **k-th line (where k \\u2260 j) from the same author i** as the style reference $ref_{style-i}$. Thus, the generative model's input is composed of <$content_{ij}$, $ref_{style-i}$> and its target ground truth is $line_{ij}$. \\n\\n***\\nIn summary, for each text line used as the imitation target, other text lines from the same author can be used as style references. Therefore, the ground truth matches the style reference sample, as they originate from the same author. We hope this explanation resolves your concern!\"}", "{\"title\": \"Looking forward to your valuable feedback\", \"comment\": \"Dear Reviewer,\\n\\nThank you once again for your thoughtful and constructive feedback! During the rebuttal process, based on your suggestions, we have made the following improvements: 1) We incorporated additional visual comparison experiments. 2) We provided a detailed explanation of the motivation for the layout-glyph separation and its advantages over previous methods. 3) We improved the subjective evaluation by comparing the consistency of style across different generated samples, making the assessment more comprehensive. 4) Furthermore, we explored how the model can be expanded to generate connected handwriting.\\n\\nWe genuinely hope that our responses have effectively addressed your questions and concerns. If you have any further inquiries or need additional clarification, please feel free to reach out. 
We are eager to refine and enhance the contribution of our work. We sincerely hope to receive your approval. Once again, thank you for your valuable feedback!\\n\\nSincerely,\\n\\nThe authors.\"}", "{\"title\": \"Following question (3)\", \"comment\": \"Thanks for your clarification. For Q(3), based on my understanding, the output text line will match both the style reference and the text content. If the dataset has a ground truth for each generated text line, does it mean the ground truth also matches the reference style? So the data has a triplet annotation of the form <text content, style reference, ground-truth of generated text line>?\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for the detailed answer. It has addressed part of my concerns very well, and I am happy to raise the score to 5.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe have just submitted the supplementary material. In response to your questions and suggestions, we have made the following revisions:\\n+ In Section 2, we have revised Figure 7 in the original paper, making it easier to identify which one corresponds to our method.\\nFor Figures 8 and 9 in the original paper, we have also revised the visualizations to make the boundaries between different authors more distinct. Thank you again for your valuable suggestion!\\n+ In Section 3, we present the progress of extension experiments using our designed network, aimed at addressing the end-to-end generation paradigm for handling cursive character connections.\\n\\nThank you for your recognition of our work! We warmly welcome further discussion if you have any additional questions or suggestions!\"}", "{\"comment\": [\"Dear Reviewer,\", \"We have just submitted the supplementary material. 
In response to your questions and suggestions, we have made the following revisions:\", \"In Section 1, we present a visual comparison with state-of-the-art methods, which makes our experimental results more comprehensive.\", \"In Section 2, we have added additional visual results and further refined the subjective evaluation experiments.\", \"In Section 3, we demonstrate the generalization of our proposed model to an end-to-end framework. We believe these updates further **highlight the novelty and flexibility** of our approach compared to existing work. This also demonstrates one advantage of embedding-based methods over CNN-encoder-based methods in terms of scalability, as style transfer-based approaches face challenges in obtaining standard content templates for **complete text lines**.\", \"In Section 4, we present more visual comparison results of the layout generation methods with previous approaches and carefully elaborate on our advantages and contributions.\", \"We have also made these revisions as well as added the corresponding references to the relevant sections of the original paper. Thank you once again for your thoughtful feedback! If you have any unresolved issues or suggestions, please do not hesitate to let us know!\"]}", "{\"comment\": \"Dear Reviewer,\\n\\nWe have just submitted the supplementary material. In response to your questions and suggestions:\\n+ In Section 3, we demonstrate the potential of **extending our designed 1D-UNet** model to an end-to-end generation paradigm, thereby addressing the issue of generating connected strokes between characters.\\n\\nWe have also made revisions to the corresponding sections of the original text as per the suggestions. Thank you for your recognition of our work! 
If you have any further questions or suggestions, we would be honored to engage with you!\"}", "{\"comment\": \"> Comment 3: Section 4.3.2 lacks quantitative experiments.\\n\\n**Answer**: Actually, our calligraphy classifier **can also achieve a classification accuracy of 91%** for the entire generated text line. The reason this is not explicitly included in the paper is that, in Chinese, calligraphy style is predominantly reflected at the level of individual characters. Since our method generates each character independently, the quantitative evaluation of the calligraphy style of the entire line **largely overlaps with** the character-level experiments presented in Section 4.2.\\n\\n> Comment 4: The generated layouts show significant absences in Figure 7. \\n\\n**Answer**: Recall that our layout generation method requires a few characters' bounding boxes as a prefix. In Figure 7, we use the first ten characters as the prefix, so the bounding boxes for these characters are the same as the real ones rather than being missing.\\n\\n> Comment 5: It is recommended to compare the embedding-based method with CNN-content-encoder approaches.\\n\\n**Answer**: Thanks for your insightful suggestion. We conducted a supplementary experiment for Table 1 by replacing the character embedding with the CNN-based content encoder:\\n\\n| | **DTW(\\u2193)** | **Content Score(\\u2191)** | **Style Score(\\u2191)** |\\n| ------------- | :-----: | :---------------: | :-------------: |\\n| **CNN-based** | 0.943 | 0.935 | 0.892 |\\n| **Ours** | 0.932 | 0.891 | 0.918 |\\n\\n\\nIt can be observed that the content score has increased, while the style score has slightly decreased, with a small increase in computational cost. Overall, the performance of both methods is comparable. 
We believe that the advantage of style transfer-based methods lies in their ability to generalize to standard glyphs that were not seen during training, whereas embedding-based methods can handle datasets that lack standard glyphs. \\n+ As mentioned before, these two settings are not conflicting but can complement each other in different application scenarios.\\n+ Additionally, in Section 3 of the supplementary materials, we demonstrate the flexibility of our approach in end-to-end text line generation, where **the content encoding of a text line can be naturally obtained by concatenating the embeddings of individual characters**.\\n\\n> Comment 6: 1) About the applicability and 2) what will happen if some simple layout extraction methods are used to extract the pseudo-layouts of style references.\\n\\n**Answer**: 1) Application:\\n+ For customized handwritten text generation, we only require the user to write a short, coherent text line (e.g., 10 characters) and mark the position of each character. It does not need to be coherent in content with the text lines to be generated later; it is only used as a style reference. The layout generator will mimic the layout characteristics of these reference characters when generating the layout for subsequent characters. Additionally, these 10 reference characters will also serve as style references for the character generation model.\\n+ Last but not least, the strong positional labels of characters in our generated data also make it highly convenient for data augmentation in tasks such as character segmentation and recognition.\\n\\n2) The performance of simple layout style extraction methods tends to be **overly dependent on** carefully designed features, such as the binary geometric features we used in Table 2. 
In our own experiments, we found that model-free approaches that are too simplistic often generate layouts with noticeable differences compared to real samples, such as failing to properly handle the relative positions of punctuation marks and text. Furthermore, these methods exhibit poor generalization and are **limited to text line generation**, whereas model-based approaches can be transferred to other, even two-dimensional, handwritten data, such as handwritten math equations.\\n\\n> Comment 7: Few generated visual results and lacks visual comparisons with the baseline.\\n\\n**Answer**: Thank you for your suggestion. We **have conducted** more visualization and subjective experiments as well as visual comparisons with previous SOTA methods in the revised version to make our paper more convincing. However, in the domain of text line generation, this task remains largely unexplored, and as such, there is a lack of established baselines. Our approach represents a **novel attempt at addressing this task**.\\n\\n---\\nWe sincerely appreciate your detailed and thoughtful feedback on our manuscript. We have carefully addressed each of your comments and have made the necessary revisions and additions in the official version of the manuscript. We hope that our responses meet your expectations and would be grateful if you could consider revising the rating in light of the improvements made. We welcome any further questions or discussions you may have, and we will be happy to provide more elaborate responses as needed!\"}", "{\"summary\": \"The paper introduces a novel approach for generating online handwritten Chinese text with specific styles. The authors naturally divide a text line into two components: layout and glyphs, and design a text line layout generator coupled with a diffusion-based stylized font synthesizer to address this challenge hierarchically. 
The layout generator autoregressively generates the positions for each glyph based on text content and provided style references, while the font synthesizer generates each glyph at its position while imitating the calligraphy style extracted from the given style references. Experiments on the CASIA-OLHWDB dataset demonstrate the method's capability to generate structurally correct and indistinguishable imitation samples.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The study proposes a hierarchical method to address the under-explored task of online handwritten Chinese text line generation.\\n2. By decoupling layout generation from glyph generation, the method offers more flexibility in handling the generation of text lines, which is particularly useful when dealing with complex Chinese characters.\\n3. The experiments conducted on the CASIA-OLHWDB database indicate high performance in imitation sample generation, demonstrating the effectiveness of the method.\", \"weaknesses\": \"1. While decoupling layout and glyph generation increases flexibility, it may also add to the model's complexity, potentially affecting training and inference efficiency.\\n2. Are there any application scenarios for this task? The authors could analyze its practicality.\\n3. The paper mentions difficulties in imitating styles with extensive cursive connections between characters due to the independent generation of each character, indicating potential limitations in handling certain calligraphic styles.\", \"questions\": \"Please see Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DhH3LbA6F6
Reinforcement learning with combinatorial actions for coupled restless bandits
[ "Lily Xu", "Bryan Wilder", "Elias Boutros Khalil", "Milind Tambe" ]
Reinforcement learning (RL) has increasingly been applied to solve real-world planning problems, with progress in handling large state spaces and time horizons. However, a key bottleneck in many domains is that RL methods cannot accommodate large, combinatorially structured action spaces. In such settings, even representing the set of feasible actions at a single step may require a complex discrete optimization formulation. We leverage recent advances in embedding trained neural networks into optimization problems to propose SEQUOIA, an RL algorithm that directly optimizes for long-term reward over the feasible action space. Our approach embeds a Q-network into a mixed-integer program to select a combinatorial action in each timestep. Here, we focus on planning over restless bandits, a class of planning problems which capture many real-world examples of sequential decision making. We introduce coRMAB, a broader class of restless bandits with combinatorial actions that cannot be decoupled across the arms of the restless bandit, requiring direct solving over the joint, exponentially large action space. We empirically validate SEQUOIA on four novel restless bandit problems with combinatorial constraints: multiple interventions, path constraints, bipartite matching, and capacity constraints. Our approach significantly outperforms existing methods—which cannot address sequential planning and combinatorial selection simultaneously—by an average of 24.8% on these difficult instances.
[ "reinforcement learning", "combinatorial optimization", "restless bandits", "mixed-integer programming" ]
Accept (Poster)
https://openreview.net/pdf?id=DhH3LbA6F6
https://openreview.net/forum?id=DhH3LbA6F6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wbLGjn6odq", "sbaOAk34bo", "rr9bEzOg60", "rPdx2DmmnP", "lNfTIzGvVV", "jIW0UL8IGV", "hKKhUsn30l", "b6GkTqmpis", "aDfO34scUb", "XHBFPu0nHW", "StknAgnKww", "ON4z7eyOVd", "L0BQOWgb7k", "Jd2JigLaSK", "Ih7Bs4qQh1", "AWZXlioQ47", "13BtWm2lcY", "0DqIIl4fuh" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1733243430621, 1733125087372, 1731524271671, 1731523940635, 1730709404843, 1734768478716, 1731524312240, 1732950087980, 1731119856180, 1733071728533, 1731524105111, 1730209707506, 1731524163502, 1737523685215, 1732680373189, 1733209714454, 1733067368892, 1730797999032 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5117/Authors" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_oEWh" ], [ "ICLR.cc/2025/Conference/Submission5117/Authors" ], [ "ICLR.cc/2025/Conference/Submission5117/Authors" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_ya5G" ], [ "ICLR.cc/2025/Conference/Submission5117/Area_Chair_XPR6" ], [ "ICLR.cc/2025/Conference/Submission5117/Authors" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_EMxA" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_oEWh" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_EMxA" ], [ "ICLR.cc/2025/Conference/Submission5117/Authors" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_9UJj" ], [ "ICLR.cc/2025/Conference/Submission5117/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_ya5G" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_1mgZ" ], [ "ICLR.cc/2025/Conference/Submission5117/Authors" ], [ "ICLR.cc/2025/Conference/Submission5117/Reviewer_EMxA" ] ], "structured_content_str": [ 
"{\"comment\": \"Thank you for your review! We are glad to hear you find our problem interesting, novel, and relevant for real-world applications.\\n\\n## Question: Why no regret bounds, unlike traditional bandit papers?\\n\\nWe would like to clarify the distinction between restless bandits (RMABs) and traditional multi-armed bandits. Our paper considers RMABs, which are typically planning problems that assume known transition dynamics. The challenge for RMABs is tractably planning over this large, budgeted action space. As you point out, RMABs are a form of Markov decision processes, where each arm of the RMAB is an MDP.\\n\\nOn the other hand, traditional bandits are *learning* algorithms and thus study theoretical guarantees, measured in terms of regret. Regret would be a useful metric to compare if we were considering a learning setting with unknown transition dynamics. However, we focus instead on planning with known transition dynamics, where the challenge is overcoming computational complexity, not learning unknown dynamics.\\n\\nFor other convergence bounds, note that the vast majority of deep RL approaches focus on empirical performance, not on theoretical guarantees. For example, DQN (the deep RL algorithm we use) was introduced in 2015, but the first attempt at a theoretical analysis of DQN was not made until 2020 [Fan et al. 2020] \\u2014 and this paper makes strong simplifying assumptions, such as assuming the data-gathering policy has good coverage (thus requiring minimal exploration) and that the trained neural network is sparse (i.e., mostly 0s). \\n\\nAs mentioned in line 487, Tkachuk et al. (2023) recently offered a theoretical analysis of RL with combinatorial actions under some linearity assumptions. While not applicable to our setting, it could serve as a starting point for further theory in the future. \\n\\n> Fan, et al. \\\"A theoretical analysis of deep Q-learning.\\\" Learning for dynamics and control. PMLR, 2020.\\n>\\n> Tkachuk et al. 
\\\"Efficient planning in combinatorial action spaces with applications to cooperative multi-agent reinforcement learning.\\\" AISTATS 2023. \\n\\n\\n## Question: Comparison to SOTA RL algorithms?\\n\\nAs emphasized in the paper, our action space is both combinatorial and subject to hard constraints. Naively \\u201cexpanding\\u201d all possible actions results in an extremely large discrete action space. In Figure 2, we illustrate how this combinatorial explosion makes the use of existing RL methods impossible. \\n\\nFor example, we show how a \\u201cStandard DQN\\u201d in the top-left corner would require a Q-network with as many outputs as actions. For our largest setting in the experiments ($J=100$ arms and $N=20$ workers), the number of possible actions is $\\\\approx 5.3 \\\\times 10^{20}$. In addition, not all binary vectors are feasible actions, as they may violate constraints (e.g., path constraints, assignment constraints, budget constraints, etc.). This limits the applicability of SOTA RL methods. Part of our contribution is to show that using MILP within a value-based RL approach can help address both the combinatorial explosion and the presence of hard constraints.\\n\\n\\u2014\\u2014\\n\\nThank you again for your review of our paper; we hope our response addresses your concerns and helps to contextualize our contributions!\"}", "{\"comment\": \"Thank you for your responses. I will reconsider my score.\"}", "{\"comment\": \"Thank you for your positive review and suggestions! We are glad you find our problem formulation interesting and realistic for the real world, and our solution practical for addressing the computational intensity of deep RL.\\n\\nWe will update the paper to reflect the following response.\\n\\n## Response to Questions\\n\\n1. **Handling standard restless bandits:** Indeed, our formulation of coRMAB generalizes the standard RMAB. We mention this in lines 175-178, 944, and 971-973 for our formulation of different settings. \\n\\n2. 
**Comparison to Restless UCB:** Restless UCB focuses on a learning problem with unknown transition dynamics, which is a different setting. We agree they make a valuable contribution for learning in RMABs. However, their experiments only scale to $N=5$ arms and require over $T=50,000$ timesteps to learn \\u2014 clearly, that is not practical for real-world problem settings. \\n\\n3. **Comparison to other algorithms:** Regret would be a useful metric to compare if we were considering a learning setting with unknown transition dynamics. However, here instead we focus on planning with known transition dynamics, where the challenge is overcoming computational complexity, not unknown dynamics. \\n\\n4. **Comparison to the optimal policy:** Please see the response to Question #3 for why we cannot consider regret.\\n\\n5. **Scalability for larger neural networks:** Yes, the complexity does grow as the neural network for DQN grows; we say in lines 291-292 that the MILP requires $O(DP)$ binary variables and constraints (linear in the network size), where $D$ is the number of hidden layers and $P$ is the number of neurons per layer.\\n\\n6. **Effect of hyperparameter tuning:** Our results are shown without hyperparameter tuning. We expect that the empirical performance could improve further with more tuning.\\n\\n7. **Comparison to domain-specific solutions:** There are no existing domain-specific solutions for the sequential, combinatorial-action problem we consider, so we cannot compare to them. We introduce the coRMAB problem in this paper.\\n\\n## Response to Weaknesses\\n\\n1. **Theoretical guarantees:** Please see the response to Question #3 for why we cannot evaluate regret. \\n\\nFor other convergence bounds, note that the vast majority of deep RL approaches focus on empirical performance, not on theoretical guarantees. 
For example, DQN (the deep RL algorithm we use) was introduced in 2015, but the first attempt at a theoretical analysis of DQN was not made until 2020 [5] \\u2014 and this paper makes strong simplifying assumptions, such as assuming the data-gathering policy has good coverage (thus requiring minimal exploration) and that the trained neural network is sparse (i.e., mostly 0s). \\n\\nOur focus is on practical restless combinatorial bandit problems. We have therefore proposed realistic problems and realistic generation schemes.\\n\\n2. **Experimental design:** Please see the response to Question #7.\\n\\n3. **Assumption of an offline planning setting:** Restless multi-armed bandits are designed as planning problems; see Whittle [1988] and Weber and Weiss [1990]. Restless bandits that assume known dynamics have been applied to several real-world settings; see Raman et al. [2024] for food rescue and Mate et al. [2022] for public health in India.\\n\\n### References\\n> Mate, et al. Field study in deploying restless multi-armed bandits: Assisting non-profits in improving maternal and child health. AAAI 2022\\n>\\n> Raman, Shi, and Fang. Global rewards in restless multi-armed bandits. NeurIPS 2024\\n> \\n> Whittle. Restless bandits: Activity allocation in a changing world. Journal of Applied Probability 1988\\n>\\n> Weber and Weiss. On an index policy for restless bandits. Journal of Applied Probability 1990\"}", "{\"comment\": \"Thank you for your review and suggestions! We're happy that you find the applications meaningful and that our method is practical and has potential for impact.\\n\\nWe will update the paper to reflect the following response.\\n\\n## Response to your question (performance during initial training):\\nThank you for raising this concern about deployment in high-stakes environments. As discussed in Section 2, the coRMAB setting assumes that the environment is known, i.e., the transition probabilities and reward functions have already been derived. 
Offline planning is the standard assumption for restless bandits, and has been deployed for important real-world settings including public health in India [1] and clinical trial design [2]. In [1], they build the offline simulator (of the probability of a patient becoming engaged in a health program following a messaging intervention) using domain experts and historical data. As such, training the Q-network using Algorithm 1 is done in simulation, offline. Once satisfactory performance is achieved, the policy can be deployed in the real world. \\n\\nThis offline planning approach is discussed in Section 4.1 \\u201cLearning in the Real World\\u201d of [3], as a way of addressing online RL\\u2019s extensive data needs. Should SEQUOIA be applied to an environment with unknown dynamics, we recommend building a model of the environment dynamics first as discussed in [4], and then applying our Algorithm 1 before finally deploying the Q-network.\\n\\n## Response to the Weaknesses\\n\\n1. **The four coRMAB instantiations:** We provide precise formulations of the goals and constraints of all 4 coRMAB problem settings we introduce in full mathematical detail in Appendix C. However, we disagree that these instantiations are all interdependent. For example, for the capacity-constrained setting, each worker $j$ can act on multiple arms (up to their budget $b_j$), but in the schedule-constrained setting, each worker can only act on a single arm \\u2014 and based on incompatible availability, some workers may not be assigned at all. Whereas for the path-constrained setting, the \\u201cnumber of workers\\u201d is not a fixed number $N$, but rather corresponds to the total length of a feasible path. Separately, in the multiple interventions setting, the impact of multiple workers acting on the same arm is cumulative; that is not the case in any of the other settings (i.e., multiple workers acting on arm $j$ is no better than one worker acting on arm $j$).\\n\\n2. 
**Theoretical guarantees:** The vast majority of deep RL approaches focus on empirical performance, rather than on theoretical guarantees. For example, DQN (the deep RL algorithm we use) was introduced in 2015, but the first attempt at a theoretical analysis of DQN was not made until 2020 [5] \\u2014 and this paper has to make strong simplifying assumptions to get convergence guarantees, such as assuming the data-gathering policy has good coverage (thus requiring minimal exploration) and that the trained neural network is sparse (i.e., mostly 0s). Other key deep RL algorithms, such as DDPG, PPO, or RAINBOW, do not come with theoretical guarantees.\\n\\nAs you highlight, our focus is on practical restless combinatorial bandit problems. We have therefore proposed realistic problems and realistic generation schemes.\\n\\n3. **Computational runtime:** As we mentioned above, SEQUOIA is an offline planning approach. As such, a few hours of training can be seen as a reasonable upfront cost before deployment.\\n\\n> [1] Mate, Madaan, Taneja, et al. \\u201cField study in deploying restless multi-armed bandits: Assisting non-profits in improving maternal and child health.\\u201d In AAAI 2022.\\n>\\n> [2] Villar, Bowden, Wason. \\u201cMulti-armed bandit models for the optimal design of clinical trials: benefits and challenges.\\u201d Statistical Science: A Review Journal of the Institute of Mathematical Statistics 2015.\\n>\\n> [3] Whittlestone, Arulkumaran, and Crosby. \\\"The societal implications of deep reinforcement learning.\\\" JAIR 2021.\\n>\\n> [4] Moerland, et al. \\\"Model-based reinforcement learning: A survey.\\\" Foundations and Trends in Machine Learning 2023.\\n>\\n> [5] Fan, et al. \\\"A theoretical analysis of deep Q-learning.\\\" Learning for dynamics and control. 
PMLR, 2020.\"}", "{\"summary\": \"The work introduces a new class of multi-armed bandit problems, coRMAB, which generalizes the restless bandits problem to settings where the arm actions cannot be decoupled because of problem constraints that are common in real-world scenarios. The authors also briefly go through the four scenarios with valid examples and propose SEQUOIA, an algorithm based on the deep RL algorithm Q-learning and mathematical optimization, to optimize long-term reward. The authors also highlight the issue of very large action spaces and showcase the ability of SEQUOIA to perform in those scenarios with experiments comparing it with some of the other algorithms that can handle this problem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\tThe problem setting and formulation are interesting. The formulations discussed in this paper are a natural and general extension of restless bandits. Most real-world scenarios fall under one of the four scenarios highlighted by the authors in this work.\\n\\n2.\tDeep Q-learning-type algorithms are generally computationally heavy, and the cost only grows with the action and state space. This work addresses this issue and takes it into account in the problem formulation and algorithm. \\n\\n3.\tA key challenge in using deep learning within an RL problem, or any problem in general, is the need to optimize the network architecture and the resources for hyper-parameter tuning, and the algorithm seems to work with minimal alterations across domains.\", \"weaknesses\": \"1.\tThe work highlights the empirical results of the proposed algorithm SEQUOIA but does not provide any theoretical guarantees on measures like regret or convergence bounds. 
Having them would have greatly benefited the solidity of the developed algorithm.\\n\\n2.\\tThe major results shown in this work are about the experiments: how SEQUOIA, the developed algorithm, performs on four scenarios of the problem formulation and competes with some of the other algorithms that can be modified to work on coRMAB. However, a more detailed experimental design could be carried out to further showcase the benefits and limitations of the proposed algorithm and how it performs in different regimes under different transition dynamics. \\n\\n3.\\tThe work also assumes an offline planning setting, i.e., the transition dynamics are known in advance, which can be a limiting factor in many practical settings where the transition dynamics are harder to compute. Most real-world settings involve an agent interacting with an environment to understand it.\", \"questions\": \"1.\\tThe problem of coRMAB extends the problem of restless MAB to handle actions that cannot be decoupled. If we were to set the number of actions (N) equal to the number of arms (J), use a simple budget constraint where \\sum_{j \\in [J]} a_j <= B, and make each action connect only to its corresponding arm, we end up in the restless MAB setting. In that case, how does SEQUOIA handle the restless bandit problem?\\n\\n2.\\tFor the case of the standard restless bandit problem, how does SEQUOIA compete with some of the existing algorithms for restless bandits, like restless-UCB [Reference B], which has a sublinear regret bound with good empirical performance on real-world data too?\\n\\n3.\\tAlso, a detailed comparison of the algorithm with other algorithms or approaches could help better understand the performance of the developed algorithm. For instance, a comparison of SEQUOIA with other algorithms/approaches on the basis of either regret or normalized average reward would better help understand the performance gain of the proposed algorithm.
\\n\\n4.\\tThe metric used in this paper is normalized average reward. Given that we know the transition dynamics, would a comparison against the performance of the optimal policy, i.e., a metric like regret, be better? Or would convergence guarantees, like the one shown in this paper [Reference A], be better suited to quantify the significance of this work?\\n\\n\\n5.\\tAlso, solving the large action space problem is computationally hard; how does the complexity grow if we were to increase the neural network size for a more complex system?\\n\\n6.\\tAlso, SEQUOIA uses the same network architecture across all four constraint types proposed in the paper. Is this optimal, or does tuning the hyper-parameters for each constraint provide better performance?\\n\\n7.\\tHow do SEQUOIA\\u2019s results compare to existing domain-specific solutions for the four constraint types discussed? This would help SEQUOIA solidify its performance gain with better clarity.\", \"reference\": \"A.\\tGuojun Xiong, Jian Li, Finite-Time Analysis of Whittle Index based Q-Learning for Restless Multi-Armed Bandits with Neural Network Function Approximation, Advances in Neural Information Processing Systems 36 (NeurIPS 2023) \\nB.\\tSiwei Wang, Longbo Huang, John C. S. Lui, Restless-UCB, an efficient and low-complexity algorithm for online restless bandits, Advances in Neural Information Processing Systems (NeurIPS 2020)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies reinforcement learning (RL) with combinatorial action spaces, specifically within the novel coRMAB framework for coupled restless bandits. It presents significant empirical results using SEQUOIA, an RL algorithm that integrates mixed-integer programming with deep Q-networks to optimize long-term rewards under combinatorial constraints.
The reviewers appreciated the practical relevance, novelty of the problem setting, and the effectiveness of the proposed method, as highlighted by strong experimental results and detailed rebuttals addressing initial concerns. Although one rejecting reviewer emphasized the lack of large-scale experiments and theoretical guarantees, this critique is less applicable as the paper's primary focus is theoretical contributions and computational techniques for RL. Overall, the paper's advancements in handling large, combinatorially structured action spaces in RL warrant acceptance for their theoretical and practical impact.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the reviewers raised several key points. Reviewer 1mgZ emphasized the lack of theoretical guarantees like regret bounds and comparisons to state-of-the-art (SOTA) RL algorithms; the authors clarified the distinction between planning and learning settings, explaining why regret does not apply and why comparisons to SOTA RL methods were infeasible given the combinatorial action space. Reviewer oEWh raised concerns about scalability and overlapping problem settings; the authors provided details on their MILP-solving strategy, clarified the distinctiveness of the scenarios, and highlighted practical performance benefits. Reviewer EMxA questioned scalability with large neural networks and MILPs; the authors explained their heuristic methods, MILP optimizations, and problem-specific structure exploitation. Reviewer ya5G suggested broader experimental studies and comparisons to domain-specific solutions; the authors acknowledged these points but noted the lack of suitable baselines for sequential, combinatorial action problems. Reviewer 9UJj requested more clarity on specific formulations and noted the lack of real-world datasets; the authors updated explanations and acknowledged the data limitation. 
Each concern was thoughtfully addressed, and the paper's focus on theoretical contributions and computational techniques overcomes its limitations, supporting an acceptance recommendation.\"}", "{\"comment\": \"Thank you for your positive review! We are glad that you see our paper is well-motivated, and that our approach of solving the exact optimization problem is better than an approximate approach.\\n\\n## Response to Weaknesses/Questions\\n\\n1. **Real data experiments:** We agree that it would be preferable to use real data. However, we do not know of publicly available data sources for a restless bandit setting. We hope this changes in the near future!\\n\\n2. **Eq. (2):** There is no $s\\u2019$ in the RHS because the equation represents the probability of transitioning to a higher-reward state \\u2014 we have clarified this in the paper.\\n\\n3. **Transition probability formulation:** Yes, the transition dynamics for these settings are specified in Appendix B.2.\\n\\n4. **Why 4 settings:** We introduced 4 new formulations of coRMAB that we think are practical representations of many real-world settings (with multiple types of actions and with path, capacity, and schedule constraints). \\n\\nOf course, our approach is generalizable to any deep RL setting where the actions (either continuous or binary) have constraints that can be formulated as a mixed-integer program. This generalizes to a very wide range of problems.\\n\\n5. **Iterative approach:** Thank you for catching this typo! This was supposed to read \\u201cIterative DQN\\u201d and \\u201cSEQUOIA\\u201d, not referring to Myopic. We have corrected this.\"}", "{\"title\": \"Thank you for your response and follow-up about scalability\", \"comment\": \"Thank you for your response, which clarified some of my concerns.
I would like to follow up on the main motivation of the paper and the main challenge to be addressed, which is scalability.\\n\\nOne of the main motivations of the paper is to address combinatorial action spaces, which bring an important scalability challenge into the picture. I have a few follow-up questions for clarification regarding some of the bottlenecks which are also mentioned in the paper: \\n\\n1. The paper proposes to embed the Q function into an MILP and then solve it to find the maximizing action (in principle). However, embedding (by linearizing large NNs; is this done automatically?) can result in a very large MILP which can quickly become intractable to solve (these are also arguably very hard problems, though, as mentioned by the authors). In practice these can take days to be solved even for small instances. Moreover, the algorithm has to solve an MILP **for each sampled state within each episode**. Given the number of samples usually required to train RL, this is huge. I am wondering how you meaningfully reduce the complexity of the MILPs to be solved (which have among their decision variables the huge combinatorial action space). \\nThe reported running time in this work for the training is about 1 to a few hours. You mention that you 'impose a time limit to the MILP solver and modify the solver\\u2019s parameter settings to prioritize finding feasible solutions quickly over proving global optimality'. So do you still keep the same large combinatorial action space and, by just limiting the running time, obtain a feasible action to the problem? How can you guarantee that you get such an action just by limiting the running time? \\nThe paper argues that considering a Q-network with an output of the size of the combinatorial action space is intractable, but now with the proposed approach the max over the Q-network is still intractable and it has to be solved a large number of times.
The experiments provide some evidence, but could you elaborate more on how, with a very limited running time, you manage to obtain a result with the MILP that is better than any random feasible action? \\n\\n2. The exploration strategy, which is also discussed in section 4.2, is based on heuristics. I find it hard to get any meaningful estimates on actions that are never encountered, for instance. If all of them are sampled then you also need exponentially many sampled actions, which is intractable. How would the values learned by the Q function for specific actions inform, or be extrapolated to, unseen actions, even intuitively? Otherwise there should be a cost to that in the approximation; does the approach in the paper involve indirectly reducing the number of actions sampled compared to the combinatorial size of the action set? Are you somehow exploiting some problem-specific structure that drastically reduces the number of feasible actions in the combinatorial action space? \\n\\n3. The proposed algorithm outputs a Q function. Does it mean that to find the action to be performed at a given state (i.e. the policy), you have to solve an MILP (i.e. for each given state)? \\n\\n4. Minor: The transition operator is valued in $[0,1]^J$ in Eq. 1. Is this a typo, and should it be just $[0,1]$? What is the meaning of the Bellman equation you provide otherwise? \\n\\nI understand that this is a hard problem to solve and any meaningful progress is important. I would like to get a better sense of how the central scalability challenge is meaningfully addressed here; could you elaborate more on this matter?\"}", "{\"summary\": \"This paper introduces a novel reinforcement learning framework for a challenging setting known as combinatorial restless multi-armed bandits (CoRMAB). In these problems, the vast combinatorial action space presents a key bottleneck, especially for real-world applications like public health.
The authors address this by proposing SEQUOIA, a method that combines a Q-network with mixed-integer linear programming (MILP) solvers. Four distinct constraint types (such as capacity and matching constraints) are applied to CoRMAB instances to explore the method's performance. Experimental results show that SEQUOIA significantly outperforms existing baselines across these settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper highlights meaningful applications, particularly in public health, where combinatorial decision-making is crucial. By focusing on real-world-inspired constraints, the paper emphasizes the practical utility of SEQUOIA.\", \"The experimental results suggest that SEQUOIA has a strong advantage over other methods, particularly in scenarios that require both sequential planning and combinatorial action selection. This shows potential for the method to be impactful in complex decision-making tasks.\"], \"weaknesses\": [\"The four specific CoRMAB instantiations appear somewhat interdependent. For instance, the first instance involving multiple interventions seems to implicitly contain elements of the second (path constraints) and third (capacity constraints). This overlap could obscure the unique contributions of each instance, and the presentation of these distinctions would benefit from clarification. Additionally, a more precise formulation of the optimization goals and constraints in each problem setting could strengthen the paper.\", \"Although the SEQUOIA framework is innovative in combining Q-networks with MILP, the method lacks theoretical guarantees, which may reduce its general appeal in theoretical RL circles. The paper leans toward practical applications without rigorously addressing theoretical underpinnings. 
Given that the CoRMAB problems are motivated by real-world scenarios, it would be beneficial for the authors to demonstrate how SEQUOIA could operate on actual datasets or real-world instances.\", \"The method's training demands significant computational resources, as evidenced by Table 3, where training times extend to hours. For online applications, this can be a prohibitive factor. The paper would benefit from a discussion on optimizing computational efficiency or alternative approaches to reduce overhead.\"], \"questions\": [\"Since SEQUOIA\\u2019s primary application appears to be in public health, there are concerns about performance during the initial training phase. If the network\\u2019s early-stage predictions are suboptimal, this could lead to unacceptable decisions in real-world use. How do the authors envision mitigating this issue in practice, particularly in high-stakes domains like public health where early errors could have critical impacts?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your response, I am increasing my score given the importance of the problem and its applications (with the settings highlighted) as well as the effort of the paper to go beyond non-combinatorial action spaces in practice. The paper builds on prior work to embed a neural network into an MILP and uses some heuristics to address the issue of scaling for combinatorial action spaces. 
I still believe there are a number of issues to be addressed properly in terms of scaling, as this is one of the main goals of the paper: solving an MILP for each sampled state seems quite restrictive given the number of samples usually needed in RL, although the method improves over the baselines on the instances presented; exploration is handled with some heuristic sampling that does not expose the limitations of the proposed method on harder instances; I believe the justifications provided in the rebuttal should be incorporated into the paper, since scalability is the key challenge; and the Q-network used has 2 hidden layers of size 32 each, which seems quite small to handle large problems with combinatorial action spaces, while training a neural network with 128 neurons takes an order of magnitude longer in terms of running time, as mentioned in appendix G. Given that the paper is fully practical (and no guarantees are provided), I would expect a more extensive and comprehensive experimental study to support the proposed method (on more instances, larger ones, investigating sensitivity to the parameters ...), its scalability, and to show its limitations.\"}", "{\"title\": \"Response pt 1\", \"comment\": \"Thank you for your review and suggestions! We are happy that you find our work to be compelling and well-presented. We will update the paper to reflect the following response.\\n\\n## Response to Main Questions\\n1. **Assumption of known dynamics:** We focus on RL for planning, so we do assume known dynamics and rewards. Restless multi-armed bandits are designed as planning problems; see Whittle [1988] and Weber and Weiss [1990], and restless bandits that assume known dynamics have been applied to several real-world settings; see Raman et al. [2024] for food rescue and Mate et al. [2022] for public health in India. \\n\\n2. **Running time Table 3:** In this table we fix the number of workers to $N=10$ (the middle setting in Figure 3).\\n\\n3. 
**Tractability in combinatorial action space (step 9):** Because our action spaces are combinatorial, a DQN-type network with as many outputs as the number of actions is not feasible. In other words, one cannot simply input the state vector into a network and read out Q(s,a) for all possible actions $a$ (top-left in Figure 2). To get around this, we use a network which takes as input a pair $(s,a)$ and estimates $Q(s,a)$ only for that pair. It remains to find the action $a$ which maximizes the output of this network for a fixed state s; this is what MILP solving does.\\n\\n4. **Size of the combinatorial action sets, and why is it prohibitive for RL:** For the experiments, with the largest setting ($J=100$ arms and $N=20$ workers) the number of possible actions is $\\\\approx 5.3 \\\\times 10^{20}$; this is clearly prohibitive for existing RL methods.\\n\\n5. **Need for diverse actions:** We do need to explore a number of diverse actions, but as we discuss in lines 346\\u2013350, we are able to efficiently do so in this setting.\\n\\n6. **Why DQN not DDPG:** Given that we have to solve a MIP to evaluate our policy, we chose to use DQN as it is an effective and simple RL approach that requires only a Q-network, whereas DDPG requires both an actor and a critic. \\n\\n7. **Comparison to approach in l.494-498:** Tkachuk et al. (2023) assume linear Q-realizability (Assumption 1 in their paper), which requires that the true $Q(s,a)$ can be approximated by a linear function in a feature vector representing the state-action pair $(s,a)$. This is a restrictive assumption which we do not make. 
Additionally, they assume that the actions are continuous rather than discrete, which they clarify: \\u201cCombined with the linear $Q^{\\pi}$-realizability (Assumption 1), the greedy oracle amounts to solving a linear optimization over the action set $A$.\\u201d This is a much simpler setting than ours that is amenable to theoretical analysis.\\n\\n## Response to Minor Questions\\n\\n1. **Need for PWL approximation of sigmoid:** A mixed-integer linear program cannot directly model a continuous non-linear sigmoid function. A piecewise-linear approximation is MILP-representable.\\n\\n2. **Self-loops in path-constrained coRMAB (l. 196):** The self-loops ensure that all paths of total length $\\leq B$ are valid. If the budget is $B=10$, the reward-maximizing path might be of length $9$ \\u2014 but without self-loops, a path of length $10$ might not be feasible (because we would have to go to another node and come back, requiring 2 extra edges).\"}", "{\"summary\": \"This paper proposes a more general restless bandit model---coRMAB, in which the action space for different arms could also be correlated (e.g., one action can influence multiple arms). In this model, the authors adapt the idea of DQN, and utilize the fact that solving integer programs containing a feed-forward neural network representation is efficient. They propose the SEQUOIA algorithm, and show that it achieves good performance in experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The problem setting is well-motivated.\", \"I quite like the idea that instead of solving the exact combinatorial optimization, we choose to solve the optimization based on our estimation as an approximate approach.\", \"From experiments, this idea seems to work well.\"], \"weaknesses\": \"- There are no real data experiments.
For a paper that does not contain much theory, I believe real data experiments are necessary.\\n\\n- There are some parts that are not very clear to me, e.g., \\n\\nIn Eq. (2), why is there no $s'$ on the RHS?\\n\\nFor \\\"Schedule-constrained\\\", \\\"Capacity-constrained\\\", and \\\"Path-constrained\\\", is there any formulation of the transitions?\\n\\nWhy do we only consider these four kinds of coRMAB? I think your algorithm (or solving the MILP) is not restricted to these four settings, right? \\n\\nIn line 431-432, it is said that \\\"For example the ITERATIVE myopic approach performs on average 14.6% lower than optimal MYOPIC\\\". But I do not see that? In Figure 3(b) and 3(c), they are very close, and in Figure 3(a), it seems that ITER.MYOPIC is higher than MYOPIC?\", \"questions\": \"See \\\"Weaknesses\\\" for details.\\n\\n\\n======After rebuttal=======\\n\\nThanks for the reply. I do not have further questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response pt 2\", \"comment\": \"## Response to Weaknesses\\n\\n1. **Clarifications on problem formulation:** (a) Indeed, this is an infinite-horizon setting; the horizon $H$ is used as the number of timesteps in evaluation. We have clarified this in the updated submission. (b) By \\u201cper-timestep\\u201d in l.99, we simply meant that the action space is combinatorial. We agree that SEQUOIA can handle time-varying action spaces by appropriately restricting the available actions per timestep in the MIP model. \\n\\n2. **Transition and rewards assumed to be known:** Please see our response to Question #1.\\n\\n3. **Comparison to standard DQN and combinatorial action space:** Precisely as you say, with standard DQN the output size of the NN scales with the size of the action space, which is exponentially large.
That is the key contribution of our paper \\u2014 to optimize directly over the combinatorial action set using an NN whose input size scales only with the dimensionality of the action vector, not the number of actions. \\n\\n4. **Cost of solving MILP in every timestep / scalability:** Selecting an action from a constrained combinatorial set necessitates some form of combinatorial optimization. MILP solving is one generic way of doing so, as it can simultaneously represent the trained ReLU Q-network as well as the combinatorial constraints. Optimizing (1) a deep network objective function with (2) binary variables and (3) combinatorial constraints is an extremely challenging problem that does not admit gradient-based methods (e.g., \\u00e0 la projected gradient descent for adversarial attacks) or polynomial-time algorithms. In practice, we impose a time limit to the MILP solver and modify the solver\\u2019s parameter settings to prioritize finding feasible solutions quickly over proving global optimality.\\n\\n5. **Q-learning for continuous action spaces:** Q-learning has indeed been used for continuous action spaces, but standard Q-learning (or any existing modifications) cannot solve a policy that says: \\u201cFind me the best policy that solves an assignment problem (an NP-hard discrete optimization problem) at every timestep.\\u201d This is what our work achieves here, to integrate MIP solving into deep RL.\\n\\n6. **Elaboration on challenges embedding a NN into a MIP:** Others have shown that neural networks can be embedded into a MIP, but our paper is the first to integrate **deep RL** (not just deep learning) with MIP solving to enable combinatorial action constraints. Please see our response to Weakness #4 for more detail on the technical challenges.\\n\\n\\n### References\\n> Mate, et al. Field study in deploying restless multi-armed bandits: Assisting non-profits in improving maternal and child health. AAAI 2022\\n>\\n> Raman, Shi, and Fang. Global rewards in restless multi-armed bandits. NeurIPS 2024\\n>\\n> Whittle.
Restless bandits: Activity allocation in a changing world. Journal of Applied Probability 1988\\n>\\n> Weber and Weiss. On an index policy for restless bandits. Journal of Applied Probability 1990\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for responding to my comments. The authors' responses to Q2 (comparison with restless-UCB) and to Q3 & Q4 (comparison with existing algorithms) have clarified my initial concerns, as has the relevant information they provided related to computational efficiency. I suggest including this information in the next version of the manuscript for better clarity.\"}", "{\"summary\": \"This paper considers the CoRMAB problem, in which there are complex combinatorial arm structures to be learned. Such complexity often arises in many real-world applications such as public health. The problem, as far as I can see, occupies an intermediate area between traditional bandits, where each arm is independent, and the more general Markov decision process, where any transition might happen. This paper, however, focuses on the more tractable scenario where some sort of information is known beforehand about the arm dependence structure. In particular, the paper considers four specific scenarios: multiple interventions, bipartite matching, capacity constraints, and path planning. The paper proposes SEQUOIA, which applies a Q-network with mixed-integer linear programming to solve the problem. Experiments demonstrate the effectiveness and efficiency of the proposed algorithm.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The paper studies a very novel and niche problem that is more tractable than general RL but also more practically relevant in real-world bandit applications.
I think it's important to have deeper investigations of such problems, rather than directly applying SOTA RL algorithms. One novelty is that the paper combines RL with MILP to solve the problem more effectively.\", \"weaknesses\": \"Although the paper studies very interesting bandit problems, the problem is solved via RL plus MILP. I was curious why not directly apply SOTA RL algorithms? How does that compare to the SEQUOIA proposed in this paper? I think one weakness of the paper is that it didn't compare with more advanced baselines like certain RL algorithms.\\n\\nAnother major weakness of the paper is that it doesn't have theoretical analysis of the algorithm's performance, which is very critical for bandit papers. I would hope the authors provide regret bounds for each of the four cases studied in the paper.\", \"questions\": \"How does the algorithm in this paper compare to SOTA RL algorithms?\\n\\nHow can a theoretical analysis be derived?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your careful attention to our paper; we're glad to hear that our response clarified some of your concerns.\\n\\nAs you highlight, this is indeed a hard problem to solve, and we appreciate your recognition that any meaningful progress is important.\", \"to_address_your_follow_up_questions\": \"## Question 1: Efficiently sampling states\\n\\nAs we mention in line 291, embedding the neural network into the MILP can be done by adding $\\\\mathcal{O}(DP)$ binary variables and constraints, where $D$ is the number of hidden layers and $P$ is the number of neurons per layer.
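For readers following the thread: one standard construction that yields the $\mathcal{O}(DP)$ binaries mentioned above is the big-M encoding of ReLU units, with one binary variable per neuron. The sketch below is our own illustration (the bound `M` and the test values are made up; this is not necessarily the authors' exact formulation). It checks numerically that, for a single unit, the big-M inequalities admit exactly y = max(0, a).

```python
# Big-M encoding of one ReLU unit y = max(0, a): delta is a binary
# indicator (delta = 1 when the unit is active), and M is a valid
# bound on |a|. Illustrative sketch only; in a real embedding M must
# be derived per-network from input bounds.
M = 10.0

def satisfies_relu_constraints(a, y, delta, eps=1e-9):
    """Check the four big-M inequalities plus integrality of delta."""
    return (delta in (0, 1)
            and y >= a - eps                      # y >= a
            and y <= a + M * (1 - delta) + eps    # delta = 1  =>  y <= a
            and y <= M * delta + eps              # delta = 0  =>  y <= 0
            and y >= -eps)                        # y >= 0

def relu(a):
    return max(0.0, a)

# For |a| < M, the only value of y feasible with some binary delta
# is exactly relu(a); any other value violates the constraints.
checks = []
for a in (-3.0, -0.5, 0.0, 1.2, 4.0):
    feasible = any(satisfies_relu_constraints(a, relu(a), d) for d in (0, 1))
    wrong_infeasible = not any(
        satisfies_relu_constraints(a, relu(a) + 1.0, d) for d in (0, 1))
    checks.append(feasible and wrong_infeasible)
```

With one such binary per neuron, a network with D hidden layers of P neurons each contributes on the order of DP binary variables, matching the count quoted in the response.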
In the experiments, we use $D=2$ layers and $P= \\lbrace 32, 128 \\rbrace $ hidden neurons, which we show is sufficiently expressive to capture the problem sizes we consider \\u2014 despite the largest setting (with $J=100$ arms and $N=20$ workers) having up to $\\approx 5.2 \\times 10^{20}$ possible inputs. \\n\\nTo embed the neural network, we have a custom translator from a trained PyTorch model to a mixed-integer linear problem in Gurobi. However, there are now automatic translators, including one from Gurobi, which would facilitate this process even further (https://gurobi-machinelearning.readthedocs.io).\\n\\nModern MILP solvers, such as Gurobi, which we use, are designed to find optimal solutions as efficiently as possible. For example, Berthold [2013] shows that even when an optimal solution takes over 3,000 seconds to compute, a solution that achieves 90% of the same reward can be found within less than 800 seconds (Fig. 1 and 2). \\n\\nIn all 12 of our experiment settings (Figure 3), clearly our SEQUOIA approach performs better than just selecting a random feasible action. Imposing a time limit for every MILP solve is used only as a safeguard, to avoid excessively long runs (rare in our experience). For all the problems we have considered, the solver finds a feasible solution very quickly, making early termination possible.\\n\\n\\n> Berthold (2013). Measuring the impact of primal heuristics. Operations Research Letters\\n\\n\\n\\n## Question 2: Exploration strategy\\nAs we discuss in Section 4.2, generating efficient and informative samples to train the Q-function was a priority in developing our method. Importantly, simply evaluating the immediate reward of a $(\\mathbf{s},\\mathbf{a})$ pair only requires computing the reward function, and does not require solving the MILP at all. We warm-start our Q-network with many such samples (lines 327-337).
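The warm-start idea above (immediate rewards of state-action pairs can be evaluated without any MILP solve) can be sketched as follows. The reward function, probabilities, and sizes below are our own toy choices for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
J = 10  # number of arms (toy size)

def immediate_reward(state, action, weights):
    """Toy expected one-step reward: an arm already in the good state (1)
    keeps its weight; otherwise it earns its weight w.p. 0.9 if acted on
    and w.p. 0.3 if passive. Evaluating this needs no optimization."""
    p = np.where(action == 1, 0.9, 0.3)
    prob_good_next = np.where(state == 1, 1.0, p)
    return float(np.sum(weights * prob_good_next))

# Build a cheap warm-start dataset of ((s, a), reward) regression pairs.
# Feasibility of the action is deliberately NOT enforced: budget-violating
# actions still give valid training signal for the Q-network.
weights = rng.random(J)
X, y = [], []
for _ in range(1000):
    s = rng.integers(0, 2, size=J)
    a = rng.integers(0, 2, size=J)  # random binary action, may exceed any budget
    X.append(np.concatenate([s, a]))
    y.append(immediate_reward(s, a, weights))
X, y = np.array(X), np.array(y)
```

Each of the 1,000 samples here costs only a reward-function evaluation, in contrast to the MILP solve needed to *select* an action at planning time.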
\\n\\nAs you correctly inquire, we do have problem structure that we can helpfully leverage. We discuss this in lines 349-353: one particularly useful property of restless bandits is that the impact of an action on each arm is decoupled, as state transitions are defined independently per-arm. Thus, the transition dynamics are less complex to learn, and we can simulate valid state transitions even for infeasible actions (e.g., actions that exceed a budget constraint). \\n\\nAltogether, these conditions enable us to tractably explore the large action set. And, as shown in our experiments, we are able to achieve strong performance within reasonable time limits.\\n\\n\\n## Question 3: Going from Q-function to action\\nCorrect, Algorithm 1 is the process for training the Q-function. At inference time, to find the action to be performed at a given state, we perform a single solve of the MILP to find the best action $\\mathbf{a}$ from the current state $\\mathbf{s}$.\\n\\n\\n## Question 4: Transition operator\\nThe joint transition probability $P^\\times$ is indeed over $[0, 1]^J$ (no typo). This is the joint transition probability for all $J$ arms. The state for each arm $j \\in [J]$ is in the range $[0, 1]$. \\n\\nThe Bellman equation in eq. (1) is over the joint state space \\u2014 note that we use the vector notation $\\mathbf{s} \\in \\mathcal{S}^\\times$, where $\\mathbf{s}$ is a vector of length $J$, representing the current state of each arm $j$. \\n\\n\\u2014\\u2014\\u2014\\n\\nWe hope that our responses sufficiently clarify your remaining questions! If so, we would appreciate your updating your score accordingly.\"}", "{\"summary\": \"This paper addresses an RL problem setting where the action space is combinatorial and discrete, i.e., one in which actions are coupled with combinatorial constraints.
This work uses the formalism of restless multi-armed bandits to tackle the problem and considers the setting where arms are coupled, which leads to a large action space, using 4 different examples: multiple interventions, path constraints, bipartite matching and capacity constraints. The proposed approach relies on embedding a Q-network into a mixed integer program for combinatorial action selection at each time step. The proposed RL algorithm SEQUOIA optimizes for long-term reward over the action space and allows sequential planning for a combinatorial action space setting. The performance of the algorithm shows empirical improvement over existing approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The 4 example settings provided are compelling for motivating the work. I find the public healthcare example interesting and it is nicely used as a running example. I believe these sequential planning problems are important problems to solve in practice.\", \"Writing is clear overall and the presentation with the figures is nice.\"], \"weaknesses\": [\"The mathematical formulation of the problem is a bit confusing and could be more rigorous:\", \"(a) Is it an infinite horizon problem (since you seem to be using discounting)?\", \"(b) l. 99: you mention that your approach enables per-timestep combinatorial action spaces. Where does this flexibility show up in the problem formulation in sec. 2.2? The set C is a fixed combinatorial action vector set. I do not see any time dependence taken into account in the formulation; C also seems to be fixed in Algorithm 1. Usually in RL, action sets are fixed.
In your setting, it might be useful to consider time-varying ones as the actions that might be available might change because of the coupling of the actions, for instance reducing over time due to the previous actions chosen that limit the remaining possible choices.\", \"Transition dynamics and rewards are assumed to be known a priori (l. 104). This might be quite limiting regarding the health care motivating example.\", \"About the comparison to standard DQN (Fig. 2 + l. 243-247): in standard DQN the output size of NN scales with the size of the action space. In your approach, now the input size has to be of the size of the action space (which is exponentially large) to be able to encode any action input from the large combinatorial space you consider. Any comment about this? Why is it more tractable as for the main scaling challenge you want to overcome?\", \"As discussed in l. 319-323: having to solve an MILP for each sample and for each time step seems extremely expensive.\", \"Q learning has even been used for continuous action spaces via appropriate discretization of the action space. I believe stronger and more convincing arguments have to be made here to support the claims of the paper since this is a crucial point given the motivation of the paper. Could you please elaborate and clarify better what makes your approach scalable compared to prior existing algorithms applied to your combinatorial action space setting? See follow-up question below.\", \"As discussed in the paper, the idea of embedding a neural network into a mixed-integer problem is not new. Could you elaborate more on the technical challenges faced when following this approach and why does it address the scalability challenge in your problem?\"], \"minor\": \"l. 
1028: seems empty for \\u2018Multiple interventions\\u2019, any missing description here?\", \"questions\": [\"**Main questions:**\", \"How crucial is the assumption of known dynamics and rewards for your approach?\", \"Running time: Table 3 in the appendix shows the total running time depending on the number of arms. What about the number of workers? Is it fixed in this table?\", \"Why is step 9 involving an argmax over actions more tractable than given the combinatorial nature of the action space? This is an important point for scalability that is not very clear to me from the presentation.\", \"What\\u2019s the size of the combinatorial action set in the experiments for each of the 4 examples? Why is it prohibitive for existing RL methods?\", \"l. 345 \\u2018We introduce diversity into the sampled actions with additional random perturbations\\u2019. It seems that there is no way to bypass the need to see a sufficient number of diverse actions. I guess this is also an exploration requirement to solve the RL task. If you cannot explore a large number of actions, I guess there is little that can be said about the quality of the obtained policy.\", \"Can you further justify the use of DQN? I understand that this is probably the most famous one but since you consider a known transition model, would DDPG make also sense to be tested?\", \"Why don\\u2019t you compare to the approach you mention in l. 494-498?\", \"**Minor questions:**\", \"Why do you need a piecewise linear approximation of the sigmoid link function (l. 181) which is known and can be computed?\", \"Any interpretation for including self-loops (l. 196)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
Dgh5GXsW65
There and Back Again: On the relation between noises, images, and their inversions in diffusion models
[ "Łukasz Staniszewski", "Łukasz Kuciński", "Kamil Deja" ]
Denoising Diffusion Probabilistic Models (DDPMs) achieve state-of-the-art performance in synthesizing new images from random noise, but they lack a meaningful latent space that encodes data into features. Recent DDPM-based editing techniques try to mitigate this issue by inverting images back to their approximated starting noise. In this work, we study the relation between the initial Gaussian noise, the samples generated from it, and their corresponding latent encodings obtained through the inversion procedure. First, we interpret their spatial distance relations to show the inaccuracy of the DDIM inversion technique by localizing the latent representations manifold between the initial noise and generated samples. Then, we demonstrate the peculiar relation between initial Gaussian noise and its corresponding generations during diffusion training, showing that the high-level features of generated images stabilize rapidly, keeping the spatial distance relationship between noises and generations consistent throughout the training.
[ "diffusion models", "latent space", "ddim", "generative models" ]
Reject
https://openreview.net/pdf?id=Dgh5GXsW65
https://openreview.net/forum?id=Dgh5GXsW65
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vxvD1Dp6bk", "vWuYQ3r8pd", "pzMsl1xc3R", "j1y7BJUHCU", "iALcOlOoN3", "bnnnbE05ST", "UgJ6Z3FwBm", "SlIzS2MlTb", "ScBcJgvcrl", "PemHkYbDi1", "OQp3l2hPpK", "Lyw9jdlgP3", "LBW0jw5fpq", "JX42bd2kxW", "IzxGr8K7kq", "IbXk3XK3ue", "9Q1mvtWe5I" ], "note_type": [ "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1733221674437, 1737523799677, 1734405785440, 1733221802198, 1733221737703, 1733221610339, 1733222369333, 1730685470219, 1733222116421, 1733221963205, 1733222043620, 1733222418632, 1730695360707, 1730398345474, 1733222292266, 1733221206526, 1730713170214 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6889/Area_Chair_MpLR" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Reviewer_8i5c" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Reviewer_DKMh" ], [ "ICLR.cc/2025/Conference/Submission6889/Reviewer_khzk" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Authors" ], [ "ICLR.cc/2025/Conference/Submission6889/Reviewer_BpCo" ] ], "structured_content_str": [ "{\"title\": \"Response to the Review (3/n)\", \"comment\": \"First, we include those two models in our experiments to compare pixel 
correlation in Gaussian noises and latent encodings. The latent encodings created with reverse DDIM for large-scale diffusion models also have correlated pixel values. Surprisingly, the correlation is more significant for the pixel-space model operating at $256\\\\times256$ resolution than for the $64\\\\times64$ model.\\n\\n| | DDPM $32\\\\times32$ (CIFAR10) | DDPM $64\\\\times64$ (ImageNet) | DDPM $256\\\\times256$ (ImageNet) | LDM $256\\\\times256$ (CelebA) | DiT $256\\\\times256$ (ImageNet) |\\n|---------------|---------------|---------------|---------------|---------------|---------------|\\n| Noise $(x^T)$ | 0.159 \\u00b1 0.003 | 0.177 \\u00b1 0.007 | 0.141 \\u00b1 0.001 | 0.087 \\u00b1 0.004 | 0.087 \\u00b1 0.004 | \\n| Latent $(\\\\hat{x}^T)$ | 0.462 \\u00b1 0.009 | 0.219 \\u00b1 0.006 | 0.263 \\u00b1 0.006 | 0.179 \\u00b1 0.008 | 0.171 \\u00b1 0.007 | \\n| Sample $(x^0)$ | 0.986 \\u00b1 0.001 | 0.966 \\u00b1 0.001 | 0.985 \\u00b1 0.001 | 0.904 \\u00b1 0.005 | 0.861 \\u00b1 0.004 | \\n\\n\\nNext, we continue this study in an experiment determining the most probable angles at the vertices corresponding to images ($x^0$), noises ($x^T$), and latents ($\\\\hat{x}^T$), with varying diffusion steps $T$. We show that, even for large-scale diffusion models, the latents are located along the trajectory of the generated image. 
Our observations with angles align closely with the correlation experiment.\\n\\n| Model | T | $\\\\angle x^0$ | $\\\\angle x^T$ | $\\\\angle \\\\hat{x}^T$ |\\n|-----------------------------------|------|--------|--------|---------|\\n| **U-Net DDPM 32\\u00d732** | 10 | 44 | 16 | 120 |\\n| | 100 | 29 | 28 | 123 |\\n| | 1000 | 20 | 45 | 115 |\\n| **U-Net DDPM 64\\u00d764** | 10 | 30 | 31 | 119 |\\n| | 100 | 11 | 60 | 109 |\\n| | 1000 | 6 | 79 | 95 |\\n| **U-Net DDPM 256\\u00d7256** | 10 | 24 | 50 | 106 |\\n| | 100 | 24 | 73 | 83 |\\n| | 1000 | 23 | 73 | 84 |\\n| **U-Net LDM 64\\u00d764** | 10 | 23 | 53 | 104 |\\n| | 100 | 2 | 76 | 102 |\\n| | 1000 | 1 | 83 | 96 |\\n| **DiT LDM 32\\u00d732** | 10 | 27 | 47 | 106 |\\n| | 100 | 4 | 66 | 110 |\\n| | 1000 | 1 | 80 | 99 |\\n\\nWe also leverage those two models to show that our findings on image-to-noise and noise-to-image mapping by $L_2$-distance are valid for large-scale models. As for previously studied pixel-space diffusion models, we can correctly determine the initial noise based on the generation $(x^0 \\\\rightarrow x^T)$ by choosing the noise closest to it using the $L_2$-norm. For the $256\\\\times256$ resolution pixel-space model, we obtain $100\\\\%$ accuracy in this assignment. When predicting the generation based on the initial noise $(x^T \\\\rightarrow x^0)$, the accuracy is worse than for lower-resolution models. For this particular model, we once more observed that the reason for such behavior is singular generations with large plain areas that are located close to the mean of the random Gaussian noise. For conditional DiT operating in the latent space of the LDM, we show that, similarly to the U-Net-based LDM, we can do assignments in both directions (i.e., determining generations based on noises and vice-versa) with an almost $100\\\\%$ success rate, indicating that our findings are valid across different diffusion architectures and for conditional diffusion models. 
\\n\\n| T | DDPM 256\\u00d7256 $(x^0 \\\\rightarrow x^T)$ | DDPM 256\\u00d7256 $(x^T \\\\rightarrow x^0)$ | DiT 256\\u00d7256 $(x^0 \\\\rightarrow x^T)$ | DiT 256\\u00d7256 $(x^T \\\\rightarrow x^0)$ |\\n|-------|-------------------------------------|-------------------------------------|--------------------------------------------|--------------------------------------------|\\n| 10 | 100 \\u00b1 0.0 | 39.2 \\u00b1 6.2 | 100 \\u00b1 0.0 | 93.7 \\u00b1 7.2 |\\n| 50 | 100 \\u00b1 0.0 | 22.9 \\u00b1 5.1 | 100 \\u00b1 0.0 | 90.8 \\u00b1 10.1 |\\n| 100 | 100 \\u00b1 0.0 | 23.2 \\u00b1 4.8 | 100 \\u00b1 0.0 | 90.7 \\u00b1 10.1 |\\n| 500 | 100 \\u00b1 0.0 | 25.0 \\u00b1 4.6 | 100 \\u00b1 0.0 | 93.0 \\u00b1 8.5 |\\n| 1000 | 100 \\u00b1 0.0 | 25.0 \\u00b1 4.4 | 100 \\u00b1 0.0 | 96.7 \\u00b1 4.6 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper presents key empirical observations regarding the relationships between noise, images, and their inversions: (1) the inversion retains structural features of the original image and differs from pure noise; (2) the inversion approximately follows the trajectory from noise to the image; (3) it is possible to associate noise with corresponding generated images using L2 distance, and this mapping is learned early in the training process.\\n\\nHowever, reviewers find the paper is a bit incomplete, without sufficient experimental evaluation, methods contribution, and explicit practical insights.\", \"additional_comments_on_reviewer_discussion\": \"Although the authors addressed parts of the concerns on experimental evaluation, the practice value of the work remains largely under-explored.\"}", "{\"title\": \"Response to the Review (5/n)\", \"comment\": \"> Figure 1c: There is a single latent\\n for a panel of 4 images, and it is thus confusing. 
Could the authors clarify which image in this panel the generated latent corresponds to?\\n\\nFigure 1c shows the grid of the four latent encodings obtained using the DDIM inversion process from the images shown on the left of the figure. We apologise for the confusion, but in the initial submission, we did not add a black border around the latents, which would have made them easy to distinguish. We are thankful for this remark.\\n\\n\\n**References:**\\n\\n[1] Nichol, Alexander Quinn, and Prafulla Dhariwal. \\\"Improved denoising diffusion probabilistic models.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[2] Rombach, Robin, et al. \\\"High-resolution image synthesis with latent diffusion models.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[3] Dhariwal, Prafulla, and Alexander Nichol. \\\"Diffusion models beat gans on image synthesis.\\\" Advances in neural information processing systems 34 (2021): 8780-8794.\\n\\n[4] Peebles, William, and Saining Xie. \\\"Scalable diffusion models with transformers.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[5] Parmar, Gaurav, et al. \\\"Zero-shot image-to-image translation.\\\" ACM SIGGRAPH 2023 Conference Proceedings. 2023.\\n\\n[6] Garibi, Daniel, et al. \\\"ReNoise: Real Image Inversion Through Iterative Noising.\\\" arXiv preprint arXiv:2403.14602 (2024).\\n\\n[7] Hong, Seongmin, et al. \\\"On Exact Inversion of DPM-Solvers.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[8] Bodin, Erik, et al. \\\"Linear combinations of Gaussian latents in generative models: interpolation and beyond.\\\" arXiv preprint arXiv:2408.08558 (2024).\\n\\n[9] Zheng, PengFei, et al. \\\"NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation.\\\" The Twelfth International Conference on Learning Representations. 
2024.\"}", "{\"title\": \"Response to the Review (4/n)\", \"comment\": \"> While the experiments in Section 4.4 are interesting, what is their significance in the context of the broader picture of the paper? If I understand correctly, the focus of the main text is to highlight issues with DDIM inversion by analyzing the relationship between the inverted latents, the original latents, and the generated samples, and therefore it is not clear how Section 4.4 fits here since there seems to be no reference to DDIM inversion here.\\n\\nThe main aim of our work is to investigate the relationships between the noise space, the image generations produced by the implicit sampler, and the space of the latent encodings resulting from the inversion of the generative process. However, we agree with the Reviewer that the experiment in Section 4.4 could also be used to compare the process of generating images from Gaussian noises with the inversion procedure using the DDIM. To this end, we perform the $L_2$-distance-based assignment experiment, but instead of Gaussian noise, we try to assign image generations $x^0$ to their latent encodings $\\\\hat{x}^T$ and vice-versa. Note that the image generations in this scenario are not the reconstructions produced by the denoiser from the latents but are the exact same generations as in the noise-sample assignment.\\n\\nFirst, we assign image generations based on latent encodings $(\\\\hat{x}^T \\\\rightarrow x^0)$. We observe similar results for all five evaluated diffusion models, which align with the previous noise-to-image analysis. 
Assigning images to their latent encodings in the pixel-space models cannot be successfully done through a simple $L_2$-distance.\\n\\n| T | U-Net DDPM 32\\u00d732 | U-Net DDPM 64\\u00d764 | U-Net DDPM 256\\u00d7256 | U-Net LDM 256\\u00d7256 | DiT LDM 256\\u00d7256 |\\n|-------|------------------|------------------|--------------------|-------------------|-----------------|\\n| 10 | 38.2 \\u00b1 5.1 | 100.0 \\u00b1 0.0 | 30.8 \\u00b1 4.3 | 100 \\u00b1 0.0 | 95.1 \\u00b1 6.4 |\\n| 100 | 33.4 \\u00b1 2.7 | 57.5 \\u00b1 7.3 | 23.9 \\u00b1 5.0 | 100 \\u00b1 0.0 | 90.7 \\u00b1 10.3 |\\n| 1000 | 40.9 \\u00b1 2.7 | 44.7 \\u00b1 6.5 | 25.4 \\u00b1 4.4 | 100 \\u00b1 0.0 | 96.6 \\u00b1 4.6 |\\n| 4000 | 41.9 \\u00b1 3.0 | 43.5 \\u00b1 6.5 | - | - | - |\\n\\nLikewise, we perform an opposite-direction assignment where we try to determine latent encodings based on image generations $(x^0 \\\\rightarrow \\\\hat{x}^T)$. In such a case, surprisingly, for pixel-space models, the results are opposite to the distance-based classification calculated between noises and images, as we cannot assign the correct latent encoding given the distance from the original generation.\\n\\n| T | U-Net DDPM 32\\u00d732 | U-Net DDPM 64\\u00d764 | U-Net DDPM 256\\u00d7256 | U-Net LDM 256\\u00d7256 | DiT LDM 256\\u00d7256 |\\n|-------|-----------------------------------------|-----------------------------------------|-----------------------------------------|-----------------------------------------|--------------------------------------------------|\\n| 10 | 66.4 \\u00b1 1.7 | 64.4 \\u00b1 7.1 | 0.7 \\u00b1 0.2 | 100 \\u00b1 0.0 | 99.8 \\u00b1 0.6 |\\n| 100 | 16.4 \\u00b1 6.1 | 8.6 \\u00b1 9.3 | 4.1 \\u00b1 1.4 | 100 \\u00b1 0.0 | 99.5 \\u00b1 1.7 |\\n| 1000 | 3.6 \\u00b1 2.2 | 1.7 \\u00b1 1.3 | 23.9 \\u00b1 5.2 | 100 \\u00b1 0.0 | 100.0 \\u00b1 0.0 |\\n| 4000 | 2.8 \\u00b1 2.2 | 1.9 \\u00b1 1.4 | - | - | - |\\n\\nWe observe for both Gaussian noises $x^T$ and latent encodings $\\\\hat{x}^T$, that the 
assignment in both directions is possible for Latent Diffusion Models, where the denoising process is performed in the latent space. We hypothesize that this fact is connected with the Kullback-Leibler regularization that imposes a slight KL-penalty towards a standard normal distribution $\\\\mathcal{N}(0, I)$ on the learned latent [2].\\n\\n> The introduction can be improved. For instance, the authors note [...] While GANs and VAEs indeed are designed to assign low-dimensional latent codes to the data, Flows/Continuous flows also do not possess a low-dimensional latent space and are similar to diffusion models in that aspect. In fact, the ODE sampling in diffusion models is equivalent to simulating a continuous normalizing flow with a vector field defined in terms of the score function. Therefore, this claim is misleading, and it would be great if the authors could revise this in the main text.\\n\\nThank you for pointing out this misleading claim. We can see that this statement indeed introduced a lot of confusion; therefore, we will remove it from our submission.\\n\\n> Missing citations: Reference to related work is missing in some places. For instance, in line 37, combining diffusion models with additional external models, references to several related works are missing: (1) DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents, Pandey et al. and (2) Score-based Generative Modeling in Latent Space, Vahdat et al.\\n\\nThank you for suggesting the missing related works, we will gladly add them to the mentioned section.\"}", "{\"title\": \"Response to the Review (2/n)\", \"comment\": \"> Table 1: What does each row correspond to? Does it denote the correlation of the pixels in the latent (pure Gaussian noise or reversed DDIM) vector vs data samples?\\n\\nIn Table 1, we empirically validate that latent representations calculated with reverse DDIM do not follow the random Gaussian distribution. 
In particular, we calculate the correlation between each pair of pixels and present the average of the correlation coefficients for the top 10 most correlated pairs of pixels. We can observe that the latent codes (second row) have at least a small number of highly correlated pixels when compared to the sampled Gaussian noise (first row). For completeness, we also present the values for the final generations (third row).\\n\\n> While these are only a few examples, I would request the authors include all experimental details in the Appendix.\\n\\nThank you for this suggestion, we will gather all of the experimental details included in our code repository and add them to the Appendix.\\n\\n> Firstly, if I understand correctly, Sections 4.2 and 4.3 reaffirm already existing conclusions about the DDIM Inversion procedure (as the authors note in lines 172-173) using a different experimental methodology. In this context, can the authors point out additional insights that can be drawn from these experiments? \\n\\nWe agree that previous works also indicate that the latent encodings resulting from DDIM inversion are not the same as the initial Gaussian noises. What these works have in common is claiming that latents are not white uncorrelated Gaussians and that they statistically deviate from a normal distribution ([5, 6, 7]). Other works ([8,9]) indicate that the more the latents deviate from Gaussian noise, the worse the quality of images denoised from their interpolation and editing. \\n\\nHowever, in addition to validating claims made in those papers (see Figure 1 and Table 1), we offer a much more in-depth analysis that highlights the main differences between noises and latents. Specifically, we show that latents are located next to the diffusion denoising trajectory, between the initial Gaussian noise and the final images (Figure 2 and Figure 3). Additionally, we show that the inverse DDIM method does not benefit from extending the training of the diffusion model (Figure 4). 
Finally, through experiments carried out during this Rebuttal, we indicated that, while it is possible to predict, based on the image, which noise it originates from through the $L_2$ distance, we cannot similarly indicate the latent encoding resulting from it through the inverse DDIM process. Therefore, the original noise-to-image mapping does not hold in the latents-to-image scenario.\\n\\n> Secondly, the authors note the following in Line 175: We study the implications of this fact and show its far-reaching consequences. However, it is not clear from the main text what these implications are, as these are never discussed and, therefore, seem like overclaiming. Can the authors discuss this in detail? I would have liked to see the impact this can have on the DDIM inversion-based editing or reconstruction capabilities, which would justify this claim.\\n\\nAs noted by the past works ([5,6,8,9]), non-Gaussian properties in the inverted latent encodings lead to the generation of lower quality images and also introduce artefacts into the results of various image manipulations such as interpolation or editing.\\n\\nAs our work deeply explores the analysis of the disparity between latents and noise, we believe that the highlighted flaw of preserving image structure in latent encodings is a source of error in all the methods for interpolation and image editing based on the DDIM inversion. \\n\\n> Lastly, I don't see any results on a large-scale experiment (say ImageNet-256), and it is unclear how severe this problem is at scale. Can the authors comment on this and include relevant experiments?\\n\\nWe would like to point out that the LDM model we used in the experiments operates on $256\\\\times256$ images from the CelebA dataset. However, the internal diffusion model operates in a latent space of $3\\\\times64\\\\times64$. As requested, we performed our experiments also on other large-scale models. 
To that end, we used (1) an unconditional pixel-space U-Net trained on images from ImageNet with $256\\\\times256$ resolution and (2) a **class-conditional** Diffusion Transformer (DiT) operating in the latent space of the autoencoder, also trained on images from ImageNet with $256\\\\times256$ resolution.\"}", "{\"title\": \"Response to the Review (2/2)\", \"comment\": \"> In line 092, you say \\u201cDDIM inversion approximates this equation by assuming linear trajectory\\u201d. Explain these in more detail, since it is not clear to me why this assumption implies Equation (4).\\n\\nAs denoted in Equation (3) in the initial submission, to perform the exact DDIM inversion and obtain a noisier latent $x_t$ from a less noisy latent $x_{t-1}$, we would need the diffusion model's output for the latent we aim to obtain, $\\\\epsilon_{\\\\theta}(x_t, t, c)$. However, determining this output is infeasible due to the circular dependency on $x_t$. \\n\\nTo address this, the DDIM inversion assumes a **locally** linear trajectory in the latent space. The output of the noise-prediction model, $\\\\epsilon_{\\\\theta}(x_t, t, c)$, can be interpreted as a vector representing the direction from $x_t$ to $x_{t-1}$ during the diffusion denoising process. By swapping $\\\\epsilon_{\\\\theta}(x_t, t, c)$ with $\\\\epsilon_{\\\\theta}(x_{t-1}, t, c)$ in Equation (3), we approximate (locally) that the direction from $x_t$ to $x_{t-1}$ is the same as the direction from $x_{t-1}$ to $x_{t-2}$. 
Mathematically, such approximation implies that $x_t - x_{t-1} \\\\approx x_{t-1} - x_{t-2}$.\\n\\n> In Equation (4), remove the bold from to be coherent with the notation employed in the paper.\\n> In line 107, the referred equation should be Equation (3).\\n> I believe lines 158-159 should be postponed since they refer to the \\u201ctwo models\\u201d that you introduce in lines 159-160.\\n> In lines 215 and 221 you say that the latent is located along the trajectory, while you actually show that it is next to the trajectory.\\n\\nWe thank the Reviewer for these editorial suggestions. We will apply them in the final version of our paper.\\n\\n> In the related works, I believe you should also consider at least the ODE inversion paper by Song et al. (i.e. the \\u201cprobability flow\\u201d paper), and also Asperti et al., 2023 (title: Image embedding for denoising generative models), which introduces the DDIM inversion through neural network training.\\n\\nThank you for suggesting additional related works, we agree that they should be included in the appropriate section, and we will describe them in the paper.\\n\\n> In the list of models with \\u201cmeaningful latent space encoding\\u201d, the authors considers also GAN. I believe this is not the case, since GAN has a similar behavior as DDIM, i.e. there is no meaningful nor explicit latent space, and it has to be recovered by techniques that are very similar to the ones used to invert DDIM, like GAN inversion.\\n\\nThank you for pointing out this misleading claim. We can see that this statement indeed introduced a lot of confusion, among several reviewers, therefore we decided to remove it.\\n\\n**References:**\\n\\n[1] Nichol, Alexander Quinn, and Prafulla Dhariwal. \\\"Improved denoising diffusion probabilistic models.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[2] Dhariwal, Prafulla, and Alexander Nichol. 
\\\"Diffusion models beat gans on image synthesis.\\\" Advances in neural information processing systems 34 (2021): 8780-8794.\"}", "{\"summary\": \"This work examines the relationship between the initial Gaussian noise, generated images, and latent representations produced using the DDIM inversion technique in diffusion models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper provides comprehensive empirical evidence to support its claims, using various metrics and visualizations.\", \"The study demonstrates the limitations of DDIM inversion, particularly its deviation from theoretical expectations and the persistence of inversion errors despite prolonged training.\"], \"weaknesses\": [\"The analysis focuses primarily on DDPM and LDM models, leaving open the question of whether the observed phenomena generalize to other diffusion model architectures.\", \"While the paper empirically observes the inaccuracy of DDIM inversion and the early formation of noise-to-sample mapping, it lacks a theoretical explanation for these findings.\"], \"questions\": \"* Does the observed early formation of the noise-to-sample mapping have\\n connections with the stages of reverse diffusion process as discussed\", \"in_https\": \"//arxiv.org/abs/2402.18491?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Review (2/2)\", \"comment\": \"We also leverage those two models to show that our findings on image-to-noise and noise-to-image mapping by $L_2$-distance are valid for large-scale models. As for previously studied pixel-space diffusion models, we can correctly determine initial noise based on generation $(x^0 \\\\rightarrow x^T)$ by choosing the noise closest to it using the $L_2$-norm. For the $256\\\\times256$ resolution pixel-space model, we obtain $100\\\\%$ accuracy in this assigning. 
When predicting the generation based on the initial noise $(x^T \\\\rightarrow x^0)$, the accuracy is worse than for lower-resolution models. For this particular model, we once more observed that the reason for such behavior is singular generations with large plain areas that are located close to the mean of the random Gaussian noise. For conditional DiT operating in the latent space of the LDM, we show that, similarly to the U-Net-based LDM, we can do assignments in both directions (i.e., determining generations based on noises and vice-versa) with an almost $100\\\\%$ success rate, indicating that our findings are valid across different diffusion architectures and for conditional diffusion models.\\n\\n| T | DDPM 256\\u00d7256 $(x^0 \\\\rightarrow x^T)$ | DDPM 256\\u00d7256 $(x^T \\\\rightarrow x^0)$ | DiT 256\\u00d7256 $(x^0 \\\\rightarrow x^T)$ | DiT 256\\u00d7256 $(x^T \\\\rightarrow x^0)$ |\\n|-------|-------------------------------------|-------------------------------------|--------------------------------------------|--------------------------------------------|\\n| 10 | 100 \\u00b1 0.0 | 39.2 \\u00b1 6.2 | 100 \\u00b1 0.0 | 93.7 \\u00b1 7.2 |\\n| 50 | 100 \\u00b1 0.0 | 22.9 \\u00b1 5.1 | 100 \\u00b1 0.0 | 90.8 \\u00b1 10.1 |\\n| 100 | 100 \\u00b1 0.0 | 23.2 \\u00b1 4.8 | 100 \\u00b1 0.0 | 90.7 \\u00b1 10.1 |\\n| 500 | 100 \\u00b1 0.0 | 25.0 \\u00b1 4.6 | 100 \\u00b1 0.0 | 93.0 \\u00b1 8.5 |\\n| 1000 | 100 \\u00b1 0.0 | 25.0 \\u00b1 4.4 | 100 \\u00b1 0.0 | 96.7 \\u00b1 4.6 |\\n\\n\\n> Does the observed early formation of the noise-to-sample mapping have connections with the stages of reverse diffusion process as discussed in https://arxiv.org/abs/2402.18491?\\n\\nThank you for pointing out this insightful related work. In arXiv:2402.18491, the authors study how the noise-to-sample mapping evolves in the backward diffusion process. 
They show that trajectories can be divided into three regimes: in the first, generations of different objects follow a common trajectory; in the second, they split towards distinct parts of the data distribution; while the third regime corresponds to memorization and drives samples towards particular data points. In our studies, we focused on how the noise-to-sample mapping changes throughout training. We show that the general characteristics of the sample are defined early in the training stage, which might suggest that, early in training, the model achieves reasonable performance in the second regime defined by Biroli et al., while the third regime is further optimized throughout the remaining training steps. We will add these explanations to the final version of our submission.\"}", "{\"title\": \"Response to the Review\", \"comment\": \"We thank the Reviewer for the valuable feedback and suggestions.\\n\\n> For the empirical observations in Sections 4.2 and 4.3, I don\\u2019t see any clear applications based on these findings. I suggest taking it a step further; for example, how could we reduce the divergence between the inversion and the noise? What insight does it provide if the inversion lies along the trajectory from noise to image?\\n> \\n> Overall, without applications or theoretical insights, the empirical analysis lacks clear motivation. I hope the authors can identify relevant scenarios where these observations could be put to practical use.\\n\\nThank you for this suggestion. The main goal of our work was to study the behavior of the commonly used DDIM inversion technique in order to provide explanations and an in-depth understanding of its limitations. Agreeing with Prof. Black [https://perceiving-systems.blog/en/post/novelty-in-science], we follow his point of view that a novel paper does not have to come with a direct application. 
Our insights shed new light on the problem, show how approximation error influences the results of the reverse DDIM procedure, and show how this behavior changes during diffusion training.\\n\\n> The observation in Section 4.4 is interesting, but the paper doesn\\u2019t explore any theoretical insights or potential applications related to this phenomenon. One possible direction could be to link it to optimal transport, building a theoretical framework to better understand the training dynamics of diffusion models.\\n\\nThank you for pointing out this interesting direction. There are several works discussing the connection between diffusion models' training dynamics and optimal transport, and to the best of our knowledge, this topic remains an open question. In [1], the authors show that the DDPM encoder map (i.e. the latent-sample mapping) coincides with the optimal transport map when modeling simple distributions. However, as noticed by [2,3], the proof provided by [1] cannot hold. Our experiments show that the closest-L2-based mapping in the case of pixel-space DDPMs holds only in one direction, which might be an interesting starting point for more theoretical considerations. Moreover, we also highlight that this mapping appears relatively early in the diffusion model training, shedding some light on the dynamics of diffusion models' training.\\n\\n>Questions: From Table 2, are there any insights into why LDM could achieve 100% accuracy for both \\n$x^0 \\\\rightarrow x^T$ and $x^T \\\\rightarrow x^0$, but DDPM could only achieve high accuracy for $x^0 \\\\rightarrow x^T$. What key differences between LDM and DDPM might explain this discrepancy?\\n\\nThank you for this interesting question. We attribute the difference to the nature of the input data provided to the diffusion model in the LDM setting. In this scenario, the diffusion model is trained on the latent data representations extracted by an autoencoder that is usually trained with additional regularization, either the KL-Loss or the VQ-Loss. 
In both cases, the application of the regularization leads to the normalization of the input data. As presented in the Appendix, the reason why DDPM does not achieve high accuracy for the $x^T \\\\rightarrow x^0$ case is that there exist some images, and hence some generations, that are by nature located closer to the mean of the input data noise, so they ''attract'' more random noises. This is not the case for the LDM model.\\n\\n**References:**\\n\\n[1] Khrulkov, Valentin, et al. \\\"Understanding ddpm latent codes through optimal transport.\\\" arXiv preprint arXiv:2202.07477 (2022).\\n\\n[2] Kim, Young-Heon, and Emanuel Milman. \\\"A generalization of Caffarelli\\u2019s contraction theorem via (reverse) heat flow.\\\" Mathematische Annalen 354.3 (2012): 827-862.\\n\\n[3] Lavenant, Hugo, and Filippo Santambrogio. \\\"The flow map of the fokker\\u2013planck equation does not provide optimal transport.\\\" Applied Mathematics Letters 133 (2022): 108225.\"}", "{\"title\": \"Response to the Review (1/2)\", \"comment\": \"We appreciate the feedback and the Reviewer's positive opinion about our experiments. We would like to clarify the remaining questions in this comment:\\n\\n> The analysis focuses primarily on DDPM and LDM models, leaving open the question of whether the observed phenomena generalize to other diffusion model architectures.\\n\\nTo show that our findings generalize to more diffusion models, we also performed our experiments on other large-scale models. To that end, we used an unconditional pixel-space U-Net trained on images from ImageNet with $256\\\\times256$ resolution and a **class-conditional** Diffusion Transformer (DiT) operating in the latent space of the autoencoder, also trained on images from ImageNet with $256\\\\times256$ resolution.\\n\\nFirst, we include those two models in our experiments to compare pixel correlation in Gaussian noises and latent encodings. 
The latent encodings created with reverse DDIM for large-scale diffusion models also have correlated pixel values. Surprisingly, the correlation is more significant for the pixel-space model operating at $256\\\\times256$ resolution than for the $64\\\\times64$ model.\\n\\n| | DDPM $32\\\\times32$ (CIFAR10) | DDPM $64\\\\times64$ (ImageNet) | DDPM $256\\\\times256$ (ImageNet) | LDM $256\\\\times256$ (CelebA) | DiT $256\\\\times256$ (ImageNet) |\\n|---------------|---------------|---------------|---------------|---------------|---------------|\\n| Noise $(x^T)$ | 0.159 \\u00b1 0.003 | 0.177 \\u00b1 0.007 | 0.141 \\u00b1 0.001 | 0.087 \\u00b1 0.004 | 0.087 \\u00b1 0.004 | \\n| Latent $(\\\\hat{x}^T)$ | 0.462 \\u00b1 0.009 | 0.219 \\u00b1 0.006 | 0.263 \\u00b1 0.006 | 0.179 \\u00b1 0.008 | 0.171 \\u00b1 0.007 | \\n| Sample $(x^0)$ | 0.986 \\u00b1 0.001 | 0.966 \\u00b1 0.001 | 0.985 \\u00b1 0.001 | 0.904 \\u00b1 0.005 | 0.861 \\u00b1 0.004 | \\n\\n\\nNext, we continue this study in the experiment for determining the most probable angles located by the vertexes of images ($x^0$), noises ($x^T$), and latents ($\\\\hat{x}^T$), with varying diffusion steps $T$. We show that, even for large-scale diffusion models, the latents are located along the trajectory of the generated image. 
Our observations with angles align closely with the correlation experiment.\\n\\n| Model | T | $\\\\angle x^0$ | $\\\\angle x^T$ | $\\\\angle \\\\hat{x}^T$ |\\n|-----------------------------------|------|--------|--------|---------|\\n| **U-Net DDPM 32\\u00d732** | 10 | 44 | 16 | 120 |\\n| | 100 | 29 | 28 | 123 |\\n| | 1000 | 20 | 45 | 115 |\\n| **U-Net DDPM 64\\u00d764** | 10 | 30 | 31 | 119 |\\n| | 100 | 11 | 60 | 109 |\\n| | 1000 | 6 | 79 | 95 |\\n| **U-Net DDPM 256\\u00d7256** | 10 | 24 | 50 | 106 |\\n| | 100 | 24 | 73 | 83 |\\n| | 1000 | 23 | 73 | 84 |\\n| **U-Net LDM 64\\u00d764** | 10 | 23 | 53 | 104 |\\n| | 100 | 2 | 76 | 102 |\\n| | 1000 | 1 | 83 | 96 |\\n| **DiT LDM 32\\u00d732** | 10 | 27 | 47 | 106 |\\n| | 100 | 4 | 66 | 110 |\\n| | 1000 | 1 | 80 | 99 |\"}", "{\"title\": \"General response\", \"comment\": \"We extremely appreciate the Reviewers' time and the valuable feedback that helped us develop our work. We are thankful for pointing out all the inaccuracies, ambiguities, and errors in the initial submission, which we hope to have addressed in the comments below and which we promise to apply to the final version of the paper. We hope our additional experiments with other diffusion architectures, which we did during the rebuttal, have further strengthened our submission.\"}", "{\"summary\": \"This paper presents some empirical observations between noise, image, and its inversion: (1) the inversion contains some structure of the original image and is different from the noise; (2) the inversion approximately lies in the trajectory from noise to image; (3) it is possible to assign noise to the corresponding generated images from L2 distance and this mapping is learned at an early stage of training\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"This paper presents intriguing empirical observations, particularly the ability to assign noise to corresponding generated images based on L2 distance. 
This phenomenon appears to relate to diffusion models and optimal transport, and further exploration could deepen our understanding of diffusion model training dynamics and properties of image manifold.\", \"weaknesses\": \"The paper feels incomplete to me.\\n\\nFor the empirical observations in Sections 4.2 and 4.3, I don\\u2019t see any clear applications based on these findings. I suggest taking it a step further; for example, how could we reduce the divergence between the inversion and the noise? What insight does it provide if the inversion lies along the trajectory from noise to image?\\n\\nThe observation in Section 4.4 is interesting, but the paper doesn\\u2019t explore any theoretical insights or potential applications related to this phenomenon. One possible direction could be to link it to optimal transport, building a theoretical framework to better understand the training dynamics of diffusion models.\\n\\nOverall, without applications or theoretical insights, the empirical analysis lacks clear motivation. I hope the authors can identify relevant scenarios where these observations could be put to practical use.\", \"questions\": \"1. From Table 2, are there any insights into why LDM could achieve 100% accuracy for both $x^0 \\\\rightarrow x^T$ and $x^T \\\\rightarrow x^0$, but DDPM could only achieve high accuracy for $x^0 \\\\rightarrow x^T$. What key differences between LDM and DDPM might explain this discrepancy?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper analyzes intriguing properties of the relationship between the triplet (image, noise, latent) in diffusion models, where \\u201clatent\\u201d refers to the noise computed by the inverted diffusion process in deterministic diffusion models (DDIM). 
Even though, from a theoretical point of view, noise and latent should match, in practice this does not happen due to the approximations required to practically implement the inverted diffusion process. This analysis is performed in two stages: first of all, the authors show that the latent variable is located next to the trajectory mapping noise to the generated image, by analyzing both the angles of the triangle with vertices (image, noise, latent), and by computing the distance of each element x_t of the trajectory with the edge connecting the noise to the latent. Moreover, they show that this behavior emerges at the beginning of the training, and never changes as the training advances. In the second stage, the authors argue that the mapping between noise and the generated image is \\u201cpredictable\\u201d, in the sense that the noise gets mapped to the closest feasible generated data, measured in L2 loss. Again, they show that this behavior emerges at the very beginning of the training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"The idea presented in the paper is original, as not many authors have analyzed the relationship that occurs between the noise that generates an image and the latent encoding of the image itself in deterministic diffusion models. Due to the current popularity of DDIM, I also believe that this work is significant for the scientific community, as it describes intriguing properties of these models.\", \"weaknesses\": [\"I believe the paper has a few aspects that need to be improved to allow for a precise decision. In particular:\", \"the description of the experiments is too hasty and does not go enough into the details, making it very hard to understand. In particular, the whole paper is very confusing in the distinction between DDPM and DDIM. 
In Section 2, the authors define DDPM as the model obtained by setting $\\\\eta = 1$ in the definition of $\\\\sigma_t$ in Equation (2), while DDIM is the model obtained by setting $\\\\eta = 0$, and its reverse process is fully deterministic, once $x_T$ is given. Clearly, the inversion map, mapping $x_0$ to its latent, is only defined in the DDIM setting. In Section 4, however, the authors continuously interchange the names DDPM and DDIM, making it very hard to understand which model they are using. For example, both Table 1 and Figure 1 uses the name \\u201cDDPM\\u201d to indicate the models, while the first paragraph of Section 4.2 refers to them as \\u201cDDIM\\u201d. Therefore, I suggest the authors to rewrite this section by paying more attention to the definitions of DDIM and DDPM. Therefore, I suggest the authors to:\", \"1. clearly define DDPM and DDIM early in the paper and consistently use these terms throughout.\", \"2. explicitly state which model (DDPM or DDIM) is being used in each experiment in Section 4.\", \"3. explain how the inversion process is applied to DDPM models if that is indeed what they are doing.\", \"in Section 4.4, they refer to \\u201cthe smallest L2 criterion\\u201d without introducing it. Moreover, the obtained results are confusing to me since I didn't expect a discrepancy in the accuracy of $x_0 \\\\to x_T$ vs $x_T \\\\to x_0$. In a final version of this work, I expect the authors to:\", \"1. Formally define the \\\"smallest L2 criterion\\\" when it's first mentioned.\", \"2. Explain the classification process using this criterion in more detail.\", \"3. Address the apparent discrepancy between the accuracies of $x_0$ -> $x_T$ and $x_T$ -> $x_0$, given the symmetry of the L2 norm.\", \"In line 348, after discussing the accuracy of the metrics between image and noises, the authors say \\u201cwe can observe that the distance between noise and latents accurately defines\\u2026\\u201d. 
Note that, in this section, the latents were not considered, since all the experiments were performed on images and noises. Therefore I do not understand what they want to say with this sentence. In general, I suggest the authors to re-check Section 4 to correct the errors and improve the readability, better clarifying all the steps they performed. Please note that the length of the paper is at most 10 pages EXCLUDING the citations, therefore you still have 3 pages left, which you can use to expand the description of the experimental section.\", \"**Minor Comments.**\", \"In line 092, you say \\u201cDDIM inversion approximates this equation by assuming linear trajectory\\u201d. Explain these in more detail, since it is not clear to me why this assumption implies Equation (4).\", \"In Equation (4), remove the bold from $x_{t-1}$ to be coherent with the notation employed in the paper.\", \"In line 107, the referred equation should be Equation (3).\", \"In the related works, I believe you should also consider at least the ODE inversion paper by Song et al. (i.e. the \\u201cprobability flow\\u201d paper), and also Asperti et al., 2023 (title: Image embedding for denoising generative models), which introduces the DDIM inversion through neural network training.\", \"In the list of models with \\u201cmeaningful latent space encoding\\u201d, the authors considers also GAN. I believe this is not the case, since GAN has a similar behavior as DDIM, i.e. 
there is no meaningful nor explicit latent space, and it has to be recovered by techniques that are very similar to the ones used to invert DDIM, like GAN inversion.\", \"I believe lines 158-159 should be postponed since they refer to the \\u201ctwo models\\u201d that you introduce in lines 159-160.\", \"In lines 215 and 221 you say that the latent is located along the trajectory, while you actually show that it is next to the trajectory.\"], \"questions\": \"I included a few questions in the \\\"Weakness\\\" section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the Review (1/2)\", \"comment\": \"We thank the Reviewer for the valuable feedback and their recognition of its originality and importance.\\n\\n> The description of the experiments is too hasty and does not go enough into the details, making it very hard to understand. In particular, the whole paper is very confusing in the distinction between DDPM and DDIM. [...] Therefore, I suggest the authors to:\\n> 1. clearly define DDPM and DDIM early in the paper and consistently use these terms throughout.\\n> 2. explicitly state which model (DDPM or DDIM) is being used in each experiment in Section 4.\\n> 3. explain how the inversion process is applied to DDPM models if that is indeed what they are doing.\\n\\nWe appreciate the Reviewer's pointing out this confusion. We agree this is misleading; hence, we want to clarify it in this comment. We use the DDIM sampler in the diffusion model inference process for all the experiments in the paper. This assumption allows us to get deterministic generations from the given noise and approximate this noise from generations with the DDIM inversion method. 
However, the models were trained with the standard DDPM schedulers, so we used those two terms interchangeably, and following [1], we used this term to distinguish pixel-space models, which we called DDPMs, from latent models (LDMs). We agree that this term is inaccurate, so we will change the names of these models to \\\"Pixel DMs\\\" in the revised version of our paper.\\n\\n\\n> In Section 4.4, they refer to \\u201cthe smallest L2 criterion\\u201d without introducing it. Moreover, the obtained results are confusing to me since I didn't expect a discrepancy in the accuracy of $x_0\\\\rightarrow x_T$ vs $x_T \\\\rightarrow x_0$. In a final version of this work, I expect the authors to:\\n> 1. Formally define the \\\"smallest L2 criterion\\\" when it's first mentioned.\\n> 2. Explain the classification process using this criterion in more detail.\\n\\nThank you for pointing out that the initial version of the submission lacked a thorough explanation of this aspect. For calculating a distance between two objects in our experiments, we use the $L_2$ norm (Euclidean distance) between two images/noises/latents calculated as follows: ${||x-y||}_2 = \\\\sqrt{\\\\Sigma_c\\\\Sigma_i\\\\Sigma_j (x_{c,i,j}-y_{c,i,j})^2}$. \\n\\nFor the classification experiment in Table 2, we assume a setup where we have $N$ inputs, called, in the paper, initial Gaussian noises, and their $N$ corresponding diffusion model outputs, called image generations. When assigning images to the noises, we iterate over all the $N$ noises, and for each one, we calculate its distance to all the $N$ generations. We assign the images to noises by choosing the one with the lowest $L_2$ distance. For the accuracy metric, we check if, for a given noise, the predicted image is the same as the one that results from denoising this particular noise with DDIM. 
For the reverse problem (assignment of noises to images), the setup is the same, but we iterate over the $N$ image generations and, for each image, calculate its $L_2$-distances to all the noises and choose the one to which the distance is lowest. Please note that even though the $L_2$ distance is symmetrical, the assignment of the closest image/noise is not the same in both directions due to the many-to-one relation (e.g. there might be several noises pointing towards the same closest image).\\n\\n> 3. Address the apparent discrepancy between the accuracies of $x_0\\\\rightarrow x_T$ and $x_T\\\\rightarrow x_0$, given the symmetry of the L2 norm.\\n\\nWhile the $L_2$ norm is symmetrical, the problem here is the many-to-one assignment that we perform. When assigning images to the initial noises, there are singular generations (with large plain areas) located close to the mean of the random Gaussian noise in the set of generated images. Such generations tend to be the closest (in $L_2$-norm) for the majority of the noises in our experiments. We show examples of such wrong assignments in Figure 7 (A) in the Appendix. In (C), we present the singular generations that lead to incorrect noise-to-image classification, along with the number of noises for which they are the closest. In (B), we sort the images used in the experiment by the variance of pixels and show the four with the lowest one. We observe that the set of singular generations leading to misclassification overlaps with the lowest-variance generations.\\n\\n> In line 348, after discussing the accuracy of the metrics between image and noises, the authors say \\u201cwe can observe that the distance between noise and latents accurately defines\\u2026\\u201d. Note that, in this section, the latents were not considered, since all the experiments were performed on images and noises. Therefore I do not understand what they want to say with this sentence.\\n\\nThank you to the reviewer for pointing out this error. 
We confirm that in the sentence we meant \\\"the distance between the noises and their corresponding generations\\\".\"}", "{\"title\": \"Response to the Review (1/n)\", \"comment\": \"We appreciate the Reviewer's valuable feedback.\\n\\n**Missing experimental details** \\n> For instance, the image resolution at which the models were trained is missing for all datasets. \\n> Similarly, details on the network architecture used for the diffusion denoiser and training hyperparameters are missing for both pixel space diffusion models and LDMs.\\n\\nIn the experiments for the initial submission, we leveraged three diffusion models:\\n\\n1. Unconditional pixel-space Denoising Diffusion Probabilistic Model (DDPM), with a U-Net architecture as a backbone. This model was trained on the CIFAR-10 dataset at image resolution $32\\\\times32$. We use the checkpoint from [1]. This model was trained with $T=4000$ diffusion steps, with a cosine schedule and a hybrid loss (composed of the simplified objective and the variational lower bound loss). The model was trained for 500K training steps.\\n2. Unconditional pixel-space Denoising Diffusion Probabilistic Model (DDPM), with a U-Net architecture as a backbone, which was trained on the ImageNet dataset with image resolution $64\\\\times64$. We use the checkpoint from [1], which, similarly to (1), was trained with $T=4000$, a cosine schedule, and a hybrid loss, but for 1.5M training steps.\\n3. Unconditional Latent Diffusion Model (LDM) trained on the CelebA-HQ dataset with images of resolution $256\\\\times256$. This particular model is a U-Net-based denoising diffusion model inside the $3\\\\times64\\\\times64$ latent space of the VQ-VAE autoencoder. We use the trained weights from [2]. The denoising model was trained with $T=1000$ diffusion steps and a linear variance schedule for 410K training steps.\\n\\nAdditionally, as requested, we have added experiments on two additional diffusion architectures focusing on higher resolution data:\\n\\n1. 
Unconditional pixel-space Denoising Diffusion Probabilistic Model (DDPM), with a U-Net architecture as a backbone, that was trained on the ImageNet dataset at image resolution $256\\\\times256$. We use the trained weights from [3]. This model was trained with $T=1000$ diffusion steps and a linear variance schedule for 1980K training steps.\\n2. Conditional Diffusion Transformer (DiT), leveraging Transformer architecture as the denoising diffusion backbone inside the $32\\\\times32\\\\times4$ latent space of Variational Autoencoder, trained on ImageNet dataset with image resolution $256\\\\times256$. For our experiments, we skip the classifier-free guidance. We use the trained weights from [4]. This model was trained with $T = 1000$ diffusion steps and a linear variance schedule for 400K training steps.\\n\\nFor analyzing noise, latent and sample properties with the training progress, we train two unconditional DDPMs with the U-Net architectures - $32\\\\times32$ CIFAR-10 (1) and $64\\\\times64$ ImageNet (2), with the same exact hyperparameters as [1].\\n\\n> Moreover, it is unclear from the text how the angles between different vectors were computed in Figure 2. \\n\\nFor the experiment with angles (Figure 4), we first sample 1000 example generations, and calculate the angles next to the image $\\\\angle x^0$, the Gaussian noise $\\\\angle x^T$, and the latent encoding $\\\\angle \\\\hat{x}^T$ by calculating the cosine similarity between two vectors attached at a given point and converting this value from radians to degrees. \\n\\nOn top of that, for the visualization in Figure 2, we create histograms for each triangle's vertex and obtain the probability density function for every angle binned up to the precision of one degree. Finally, for all triples of angles that can form a triangle (adding up to 180 degrees), we calculate the probability of such a triangle as the product of the probabilities of each angle. 
Finally, we visualize the triangles yielding the highest probability. \\n\\n> Secondly, in Section 4.4, what is the minimum L2 distance criterion for the assignment of images to noise mentioned in line 321?\\n\\nFor calculating a distance between two objects in our experiments, we use the $L_2$ norm of the difference between the two objects: ${||x-y||}_2 = \\\\sqrt{\\\\Sigma_c\\\\Sigma_i\\\\Sigma_j(x_{c,i,j}-y_{c,i,j})^2}$.\\n\\nFor the experiment in Table 2, we assume a setup where we have $N$ inputs, called, in the paper, initial Gaussian noises, and their $N$ corresponding diffusion model outputs, called image generations. When assigning images to the noises, we iterate over all the $N$ noises, and, for a particular one, we calculate its distance to all the $N$ generations. We assign the images to noises by choosing the one with the lowest $L_2$ distance. \\nFor the accuracy metric, we calculate how accurate the assignment described above is. For the reverse problem (assignment of noises to images), the setup is the same, but we iterate over the $N$ image generations and, for each image, calculate its $L_2$-distance to the noises.\"}", "{\"summary\": \"The authors perform an in-depth analysis of the DDIM inversion technique by analyzing the relationship between initial Gaussian samples, the corresponding generated samples, and their inverted latents. The empirical analysis is presented on pixel and latent space diffusion models for CIFAR-10, ImageNet, and CelebA datasets.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors attempt to demonstrate the relationship between diffusion latents, their corresponding generated samples, and latents obtained using DDIM inversion. Since DDIM inversion is of interest to practitioners working in controllable synthesis using diffusion models, some of the analyses presented in the paper in Fig. 
4 can be useful.\", \"weaknesses\": \"Since the paper primarily analyzes the relationship between the diffusion latents, generated samples/ data points, and the reverse DDIM latent without any methodological contributions, I would expect the experiments section to be more detailed and clear. More specifically, the following observations stand out:\\n\\n1. **Missing experimental details**: For instance, the image resolution at which the models were trained is missing for all datasets. Similarly, details on the network architecture used for the diffusion denoiser and training hyperparameters are missing for both pixel space diffusion models and LDMs. Moreover, it is unclear from the text how the angles between different vectors were computed in Figure 2. While these are only a few examples, I would request the authors include all experimental details in the Appendix.\\n\\n2. **Limited experiments and overclaiming**: Firstly, if I understand correctly, Sections 4.2 and 4.3 reaffirm already existing conclusions about the DDIM Inversion procedure (as the authors note in lines 172-173) using a different experimental methodology. In this context, can the authors point out additional insights that can be drawn from these experiments? Secondly, the authors note the following in Line 175: We study the implications of this fact and show its far-reaching consequences. However, it is not clear from the main text what these implications are, as these are never discussed and, therefore, seem like overclaiming. Can the authors discuss this in detail? I would have liked to see the impact this can have on the DDIM inversion-based editing or reconstruction capabilities, which would justify this claim. Lastly, I don't see any results on a large-scale experiment (say ImageNet-256), and it is unclear how severe this problem is at scale. Can the authors comment on this and include relevant experiments?\\n\\n3. 
While the experiments in Section 4.4 are interesting, what is their significance in the context of the broader picture of the paper? If I understand correctly, the focus of the main text is to highlight issues with DDIM inversion by analyzing the relationship between the inverted latents, the original latents, and the generated samples, and therefore it is not clear how Section 4.4 fits here since there seems to be no reference to DDIM inversion here. Secondly, in Section 4.4, what is the minimum L2 distance criterion for the assignment of images to noise mentioned in line 321?\\n\\n**Minor Comments**\\n\\n1. The introduction can be improved. For instance, the authors note the following:\\n```\\nNevertheless, one of the significant drawbacks that distinguishes diffusion-based approaches from other generative models like Variational Autoencoders (Kingma & Welling, 2014), Flows (Kingma & Dhariwal, 2018), or Generative Adversarial Networks (Goodfellow et al., 2014) is the lack of implicit latent space that encodes training data into low-dimensional, interpretable representations.\\n```\\nWhile GANs and VAEs indeed are designed to assign low-dimensional latent codes to the data, Flows/Continuous flows also do not possess a low-dimensional latent space and are similar to diffusion models in that aspect. In fact, the ODE sampling in diffusion models is equivalent to simulating a continuous normalizing flow with a vector field defined in terms of the score function. Therefore, this claim is misleading, and it would be great if the authors could revise this in the main text.\\n\\n2. **Missing citations**: Reference to related work is missing in some places. 
For instance, in line 37, `combining diffusion models with additional external models`, references to several related works are missing [1,2]\\n[1] DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents, Pandey et al.\\n[2] Score-based Generative Modeling in Latent Space, Vahdat et al.\\n\\n3. Figure 1c: There is a single latent $\\\\hat{x}_T$ for a panel of 4 images, and it is thus confusing. Could the authors clarify which image in this panel the generated latent corresponds to?\\n\\n4. Table 1: What does each row correspond to? Does it denote the correlation of the pixels in the latent (pure Gaussian noise or reversed DDIM) vector vs data samples?\", \"questions\": \"See the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DgaY5mDdmT
MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs
[ "Jiarui Zhang", "Mahyar Khayatkhoei", "Prateek Chhikara", "Filip Ilievski" ]
Multimodal Large Language Models (MLLMs) have experienced rapid progress in visual recognition tasks in recent years. Given their potential integration into many critical applications, it is important to understand the limitations of their visual perception. In this work, we study whether MLLMs can perceive small visual details as effectively as large ones when answering questions about images. We observe that their performance is very sensitive to the size of the visual subject of the question, and further show that this effect is in fact causal by conducting an intervention study. Next, we study the attention patterns of MLLMs when answering visual questions, and intriguingly find that they consistently know where to look, even when they provide the wrong answer. Based on these findings, we then propose training-free visual intervention methods that leverage the internal knowledge of any MLLM itself, in the form of attention and gradient maps, to enhance its perception of small visual details. We evaluate our proposed methods on two widely-used MLLMs and seven visual question answering benchmarks and show that they can significantly improve MLLMs' accuracy without requiring any training. Our results elucidate the risk of applying MLLMs to visual recognition tasks concerning small details and indicate that visual intervention using the model's internal state is a promising direction to mitigate this risk. Our code is available at: https://github.com/saccharomycetes/mllms_know.
[ "Multimodal Large Language Models", "Visual Details", "Attention", "Gradients", "Bias", "Perception", "Localization" ]
Accept (Poster)
https://openreview.net/pdf?id=DgaY5mDdmT
https://openreview.net/forum?id=DgaY5mDdmT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ucCkqJ8i1t", "rTESOFWioS", "oucXwZbxC4", "oQPOt8gNE8", "lkguIUpBgO", "kMgPZFn4wm", "hZf1ie242m", "de5TtXpaZk", "WCVnj9RBq3", "VmF447pnzv", "UoIgTsJbMu", "TLAMx9OvDD", "IMCR5qBxl0", "Gm4mvoWZ0h", "DZMvpEV2Wt", "9v59ixcKU3", "8YDoCkfvg9", "6VUwvpyiqP" ], "note_type": [ "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733041305330, 1733177356348, 1732303683465, 1732653727920, 1732303742007, 1730682053494, 1732107337514, 1737523673179, 1732663021957, 1732303474853, 1734871641879, 1732679022125, 1732303785296, 1730699804606, 1732642061011, 1733177241238, 1730409687962, 1732912490138 ], "note_signatures": [ [ "~Zaiquan_Yang1" ], [ "ICLR.cc/2025/Conference/Submission4947/Authors" ], [ "ICLR.cc/2025/Conference/Submission4947/Authors" ], [ "ICLR.cc/2025/Conference/Submission4947/Reviewer_bmba" ], [ "ICLR.cc/2025/Conference/Submission4947/Authors" ], [ "ICLR.cc/2025/Conference/Submission4947/Reviewer_J4zc" ], [ "~George_Bredis1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4947/Authors" ], [ "ICLR.cc/2025/Conference/Submission4947/Authors" ], [ "ICLR.cc/2025/Conference/Submission4947/Area_Chair_ZAhu" ], [ "ICLR.cc/2025/Conference/Submission4947/Reviewer_J4zc" ], [ "ICLR.cc/2025/Conference/Submission4947/Authors" ], [ "ICLR.cc/2025/Conference/Submission4947/Reviewer_1vjv" ], [ "ICLR.cc/2025/Conference/Submission4947/Reviewer_J4zc" ], [ "ICLR.cc/2025/Conference/Submission4947/Authors" ], [ "ICLR.cc/2025/Conference/Submission4947/Reviewer_bmba" ], [ "ICLR.cc/2025/Conference/Submission4947/Reviewer_1vjv" ] ], "structured_content_str": [ "{\"title\": \"The results with only the visually cropped image tokens (not include the 
original image tokens) as input to the MLLM. This method seems to seek a trade-off between global knowledge and local knowledge. Suppose the tasks rely on local knowledge. It may be enough even only with the cropped image tokens as input. Could you provide the results with only the visually cropped image tokens (not including the original image tokens) as input to the MLLM on TextVQA?\\n\\nThanks!\"}", "{\"comment\": \"Thank you for reviewing our paper. We appreciate your valuable feedback and will try to address your concerns below:\\n\\n**W1:**\\n\\nRegarding the drawbacks of MLLMs for small-sized objects, some recent works have noticed this limitation anecdotally (discussed in the opening of Section 5), but to our knowledge, we are the first to A) quantitatively study its existence across multiple SOTA MLLMs (Section 3), B) show that it is causally related to object-size (Section 3), and C) show that it is primarily a perception limitation rather than a localization limitation (Section 4). We will update our related works to better place our work in the literature following your point.\\n\\n**Q1:**\\n\\nWe conducted the experiment with a weaker intervention per your suggestion for LLaVA-1.5 and InstructBLIP (on TextVQA). 
Specifically, we crop around the ground-truth object (GTO) in the small set such that its relative size becomes the same as the average relative size of GTOs in the medium set (denoted Align to Medium) and large set (denoted Align to Large). We also randomly move the crop around so that the GTO is not always in the center of the cropped image. We still observed a significant increase in the MLLMs\\u2019 perception accuracy as a result of cropping, suggesting that weaker cropping is still effective:\\n\\n| Model | Original Accuracy (%) | Align To Medium | Align To Large | human-CROP (tight) | \\n|------------------------------|-----------------------|-----------------|----------------|----------------------|\\n| InstructBLIP\\t | 21.79 | 45.55 | 65.28 | 69.60 |\\n| LLaVA-1.5 | 39.38 | 54.32 | 60.35 | 69.95 |\\n\\nWe also note that the accuracy surpasses the medium/large-set\\u2019s (similar to human-CROP in Table 1). This suggests that the small-set\\u2019s questions are easier than the medium/large-set\\u2019s questions, which in turn suggests that the limitation in seeing small objects is even stronger than we observed in Table 1 (because the small set seems to have the advantage of easier questions).\\n\\nLastly, we think the accuracy gains from our automatic visual cropping methods (Table 2) provide additional evidence that weaker/less-tight crops are still beneficial.\\n\\n**Q2:**\\n\\nThe prior localization works mentioned in the related works (PNP-VQA and Img2LLM) are developed specifically for the BLIP model that has a dedicated image-text similarity computation neural network called the Image-Text Matching network, and are therefore not directly compatible with general MLLMs that do not explicitly train for text-image similarity (like LLaVA-1.5). In this work, we derived a more general way for localizing the attention of MLLMs to images from first principles (the product of answer to image-token attention and image-token to image patch attention). 
Regarding interpretability of attention in later layers, we observed in Figure 3 that middle to outer layers are more likely to correctly localize than earlier layers (i.e., have higher attention ratio).\\n\\n**Q3:**\\n\\nOur intuition is that the model can use surrounding information to identify where to look. For example, in the car example, it sees the overall appearance of a road next to buildings in the distance, but cannot really see enough details to assign high enough probability to the car tokens.\\n\\n**Q4:**\\n\\nThanks for the suggestion, we have implemented a stronger baseline: we compared our internal visual cropping methods with external cropping methods under the same setting in Table 4 (SAM, YOLO, CLIP which are stronger than random cropping), and observed that our internal methods perform much better.\\n\\n**Q5:**\\n\\nIt is the image resolution that the MLLM can receive inherently (specified in lines 421-425). For example, for LLaVA-1.5 that has an image resolution of 336x336, we choose windows with resolutions from 336x336 up to 672x672 when searching for the cropping bounding box, and then resize the discovered box down to 336*336 for input to the MLLM.\\n\\n**Q6:**\\n\\nThe periodic patterns in Figure 3 (top) is due to the periodic definition of the layer number on the x-axis (in lines 249-271). Essentially, within each period (for example 24 for BLIP-2), the layer number L goes from the first layer of the backbone LLM to its outer layer (24 layers in BLIP-2) and computes attention with a specific layer in the connector. So, the number of connector layers (Lc) determines the number of periods, and the number of LLM layers (L) determines the length of each period. Consequently, for LLaVA and Qwen that have a single-layer connector (Lc=1), there is no periodic pattern.\"}", "{\"comment\": \"Thank the authors for addressing all my concerns, I'll keep my score.\"}", "{\"comment\": \"Thank you for reviewing our paper. 
We appreciate your valuable feedback and will try to address your concerns below:\\n\\n**W1:**\\n\\nThanks for pointing out the work on Matryoshka Query Transformer. Training MLLMs with MQT allows them to have varying visual context size during inference, and this indeed can reduce computational cost. In our current results, we have shown that our methods can work with two different MLLMs with distinct visual context sizes, so it seems entirely possible that our method can still work with varying visual context size under MQT. We will add this discussion to the paper and will explore MQT more closely and integrate into our methods in future works.\\n\\n**W2:**\\n\\nYes, but please note that most general VQA benchmarks are dominated by large visual objects, and therefore it is natural that a method that improves small object perception does not achieve significant boosts in the general VQA benchmarks. We report these results to show that our methods, while improving perception of small objects, do not come at the cost of perception of large visual objects, that is, they can maintain performance on general VQA benchmarks.\\n\\n**Q1:**\\n\\nIn our early experiments we noticed that concatenation works better than addition. We think this is because, **without any finetuning**, adding the global and cropped image tokens results in shifted image tokens that the frozen LLM has no idea how to deal with. In contrast, concatenation works because the cropped image tokens are simply treated as additional context, and the LLM in the MLLM already knows how to use varying amounts of context.\"}", "{\"summary\": \"This paper identifies a drawback of MLLMs in VQA when questions concern small-sized objects in the images. It shows empirically that MLLMs perform worse on such questions, and that their performance is due to perception issues but often they are able to attend correctly to the relevant regions of the image (localization). 
With these two insights, they propose a method based on visual cropping which computes attention maps of the MLLM through various methods, and uses these attention maps to derive a cropped version of the image which is appended to the original image and passed through the MLLM. They show empirically that on VQA datasets involving fine-grained questions (e.g. TextVQA), this inference-time approach outperforms the base MLLM, while maintaining performance on general VQA datasets.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper identifies an important problem in MLLMs, specifically their lack of sensitivity to small details in images in a VQA context. Moreover, it pairs this with an original and useful insight -- that often the MLLM can attend correctly to the relevant region of the image, even if it produces the wrong answer. This observation may not have been recognized previously. Moreover they turn this analysis into a practical method which generates an attention map and uses this to crop the image. The method has several advantages -- it is intuitive, fits well into the framework of pretrained MLLMs, and works at inference-time without further training.\\n\\nEmpirical results on Table 2 are convincing and show that their method significantly outperforms the base MLLM on detail-sensitive datasets. \\n\\nThe paper is also written well and has good clarity.\", \"weaknesses\": \"I wonder whether the drawback of MLLMs for small-sized objects for vision-language tasks has been noted before. For example, some work is mentioned in Lns 287-288 on training MLLMs with higher-resolution patches. It would be good to understand related work that has addressed the problem of small-sized objects for MLLMs generally and/or VQA specifically. 
This could potentially be added to the related work.\\n\\nThe method assumes that there is a single relevant location of attention in the image; this is not always true, for example in questions that ask about spatial relationships between objects. The authors note this when they discuss limitations. It is not inconceivable that the approach could be extended to such questions if attention maps are sufficiently informative.\", \"questions\": \"(Sec. 3) Human-crop is a very strong intervention where the crop is tight around the ground-truth object. It would be nice to see how accuracy improves for more realistic crops. For example, one experiment -- crop around objects in the \\\"small\\\" dataset s.t. the proportion of its size in the image is equal to the average proportion value in the \\\"medium\\\" and \\\"large\\\" datasets. Does the accuracy improve, maybe matching the accuracy of the medium and large splits?\\n\\n(Sec. 4) Is this a standard way to calculate attention in transformers? If I am understanding right -- I am not sure how interpretable attention values are in later layers, where the input tokens have already been transformed significantly. How does this method compare to prior work on attention in MLLMs/vision transformers? Also, it would be nice, if feasible, to evaluate variants of GradCam for transformers as another attention method. These are mentioned in the related work. \\n\\n(Sec. 4) It'd be interesting to get more intuition on when the MLLM produces correct attention when giving an incorrect answer, and when it does not. I imagine that in the exit or bicycle number case in Fig. 2, it seems reasonable to produce correct attention. In the car question I'm a bit surprised this works since it's just asking about existence of the object -- proper localization seems to imply that the model also perceived the car correctly.\\n\\n(Sec. 
6) A simple baseline here would be to provide a random crop; crop the image at a random location, perhaps half the size of the original image, and provide this as additional input to the MLLM. This would reinforce the analysis in Sec. 4, and show that the accurate localization of the MLLM provides informative crops. It would be nice to show the results of such a baseline.\\n\\nMinor questions\\n- What is the \\\"input image resolution\\\" in line 356? Is that the size of the patches input to the MLLM, or the image resolution? Clarifying since the multiples are >= 1.\\n- What is the reason for the oscillatory behavior in Fig. 3?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Clarification of Details on Comparison with LLaVA-NeXT\", \"comment\": \"Could you please provide more concrete details on how you evaluated your method in comparison with LLaVA-NeXT? From Appendix B, it is not clear exactly what steps were taken: did you process the image in low resolution and then crop the region of interest, or did you use a dynamic resolution mode similar to LLaVA-NeXT and crop regions from every crop of the original image? It would be very helpful if you could provide specific details about your implementation.\\n\\nThank you!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you, we appreciate your support of the paper\\u2019s contributions. We are happy to explain the bounding box selection further, might be more clear to present it in 2 stages:\\n\\n1. Selecting the best location per bounding box size (Ln 357-359): For each box size (e.g., 2x the input resolution size), we find the box location on the original image that has the highest internal sum of importance. Note that the original image can be larger than 2x the input resolution of the MLLM. 
For example, the average input image size is 954x818 in TextVQA (we report dataset statistics in Table 6 in Appendix C).\\n2. Selecting the best bounding box size (Ln 359-362): Among the \\u201cbest location per size\\u201d boxes (selected in the stage 1 above), we then select the box whose internal sum has the largest difference from the average internal sums of its adjacent positions. This latter step is a heuristic to avoid choosing too small or too large windows (notice that in both cases, moving the window slightly left/right or up/down will not change its internal sum significantly).\\n\\nComing back to your specific question, the 2nd step specially avoids selecting a box the size of the entire image, because such a window would have zero difference from its adjacent windows of the same size (since its adjacent window is itself).\"}", "{\"comment\": \"Thank you for reviewing our paper. We appreciate your valuable feedback and will try to address your concerns below:\\n\\n**W1/Q1:**\\n\\nRegarding traditional grounding algorithms, we compared our internal visual cropping methods with several SOTA grounding methods (SAM, YOLO, CLIP) under the same settings in Table 4. We observed that our attention/gradient-based methods (ViCrop) consistently outperform the use of these grounding methods. We think the reason is that ViCrop utilizes the MLLMs\\u2019 internal strong question understanding and reasoning capabilities to perform a question-dependent and context-aware grounding. \\n\\nRegarding the V* method (SEAL), we did not compare with it because SEAL requires substantial training and finetuning of several neural networks, whereas our methods are completely training-free, so a comparison would not be fair. Nonetheless, to provide an idea of how our method compares to SEAL in an \\u201cas-is\\u201d fashion (i.e., if a user just wants to pick one method as-is off-the-shelf), we report the accuracy of SEAL compared to LLaVA-1.5+rel-att below. 
We observe that our method outperforms SEAL except on the V* benchmark (we think this might be because SEAL is designed and tuned specifically towards achieving high accuracy on the questions in its V* benchmark). We also note that the inference time of SEAL is significantly slower than our method (4.44s compared to 1.88s on average per question, tested on the same random 100 TextVQA samples with one A6000 GPU).\\n\\n\\n| Model | TextVQA | V* | POPE | DocVQA | AOKVQA | GQA |\\n|------------------------------|---------|-------|-------|--------|-------|-------|\\n| SEAL (Visual Search) | 36.30 | 75.30 | 82.40 | 5.31 | 55.34 | 50.18 |\\n| LLaVA-1.5+rel-att (Ours) | 55.17 | 62.30 | 87.25 | 19.63 | 60.66 |60.97 |\\n\\n(We could not report on VQAv2 for the rebuttal because it contains 200K testing samples -- which is 20 times larger than GQA and 40 times larger than Textvqa -- and running SEAL on it will cost us more than 10 days on our available computing resources; we will certainly add it for the final version.)\\n\\n\\n**W2:**\\n\\nOur approach achieves substantial improvements on benchmarks requiring the perception of small visual concepts. To better clarify the gains, we reiterate our rel-att method\\u2019s improvements from Table 2 below:\\n\\n- TextVQA: LLaVA-1.5 goes from 47.80% to 55.17% (**7.37 gain**); InstructBLIP goes from 33.48% to 45.44% (**11.96 gain**)\\n- V*: LLaVA-1.5 goes from 42.41% to 62.30% (**19.89 gain**); InstructBLIP goes from 35.60% to 42.41% (**6.81 gain**)\\n- DocVQA: LLaVA-1.5 goes from 15.97% to 19.63% (**3.66 gain**); InstructBLIP goes from 9.20% to 9.95% (**0.75 gain**)\\n- POPE: LLaVA-1.5 goes from 85.27% to 87.25% (**1.98 gain**); InstructBLIP goes from 84.89% to 86.64% (**1.75 gain**)\\n\\nOther general VQA benchmarks are dominated by large visual concepts, and therefore it is natural that a method that improves small object perception does not achieve significant boosts in the general VQA benchmarks. 
We report these results to show that our methods, while improving perception of small objects, do not come at the cost of perception of large visual objects, that is, they can maintain performance on general VQA benchmarks.\\n\\n**W3:**\\n\\nWe fully agree that studying the architectural causes of this perception limitation is valuable, but we think it merits its own separate paper and therefore we leave it to future works. Our scope in this paper was to first show that the limitation in seeing small objects exists in SOTA MLLMs and is causal (per Table 1), that it is primarily a perception limitation rather than a localization limitation (i.e., MLLMs internally know where to look, per Figure 3), and lastly, that this can be utilized to improve their perception without any training (per Table 2).\\n\\n**Q2:**\\n\\nThe method uses a multi-scale bounding box selection strategy (described in Section 5 lines 353-364) that allows the method to select the box that contains the visual subject as completely as possible. However, if the question requires seeing multiple visual subjects that are very far apart, and more than one of them are small, then our method can only help with the visual subject that the model thinks to be the most important. We have discussed this limitation under Limitations and Future Works in lines 497-499, and will try to address this in future works.\"}", "{\"metareview\": \"This paper addresses MLLMs\\u2019 limitation in perceiving small details. The authors show, via controlled experiments, that cropping the relevant region significantly improves performance, while revealing that MLLMs do know where to look but often fail to perceive fine details. Its strengths include a thorough causal analysis, strong empirical gains (especially on the V*Bench), and a training-free approach. Weaknesses involve limited improvement on general VQA tasks and challenges with multi-object queries. 
Overall, given the significance of the problem, the clarity of findings, and the simple yet effective solution, the AC recommends acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers requested grounding comparisons, multi-region clarifications, partial-crop baselines, and details on bounding-box selection and token overhead. The authors provided experiments showing stronger performance than SAM/YOLO/CLIP, explained multi-region constraints, demonstrated partial-crop improvements, clarified bounding-box selection, and discussed concatenation vs. Matryoshka queries. These updates and additional results strengthened the paper\\u2019s claims and supported the final decision to accept.\"}
We then use this global attention map for finding the region of interest (per lines 353-363), then crop it, process it with the vision encoder into features, and concatenate the features to the input visual context (which already contains features of patches and downsampled global image from the baseline LLaVA-NeXT). We will of course release the code after publication to further assist the use and adoption of our methods.\"}", "{\"summary\": \"This paper studies the perception limitations of Multimodal Large Language Models (MLLMs) when dealing with small visual details in images, and proposes training-free visual cropping methods to mitigate these limitations. The key contributions are: (1) demonstrating that MLLMs struggle with perceiving small visual details despite knowing where to look in images, (2) showing this limitation is causal through intervention studies with visual cropping, and (3) developing automatic visual cropping methods that leverage MLLMs' internal attention and gradient information to improve their perception without any additional training.\\nThe paper makes a valuable contribution by rigorously analyzing an important limitation of MLLMs and providing practical solutions. However, there are some points that need clarification and potential improvements that could strengthen the work.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This paper proposes an interesting problem: MLLMs seem to know which areas should be focused on, yet still fail to solve the related problem successfully all the time. By conducting an intervention study, the problem is validated.\", \"The proposed methods are intuitive and straightforward for the problem mentioned and make good use of the findings from the pilot study.\"], \"weaknesses\": [\"The evaluation part is not so well done, given the baseline is just no cropping without any other fair comparison. 
V* (star) may be a good baseline since they tried to address a similar problem to the one this paper brought about.\", \"The existing approach seems to achieve limited improvement of up to ~4% even compared to the no-crop setting across all 8 benchmarks.\", \"The role of model architecture in determining perception limitations isn't explored\"], \"questions\": [\"As mentioned in the first point of the weaknesses, how will traditional grounding algorithms help address the problem under a similar setting? Why ViCrop instead of that?\", \"How does the method handle cases where the visual subject spans multiple regions of interest?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Following your suggestion, we have run both LLaVA-1.5 and InstructBLIP under our relative attention visual cropping method (rel-att) with (w/) and without (w/o) providing the global image tokens to the LLM on TextVQA:\\n\\n| Model | Original Performance | ViCrop Performance w/o Global Image Tokens | ViCrop Performance w/ Both Global & Cropped Image Tokens |\\n|-------|---------------------|----------------------|-------------------|\\n| LLaVA-1.5 | 47.80 | 51.63 | 55.17 |\\n| InstructBLIP | 33.48 | 38.91 | 45.44 |\\n\\nWe observe that the global information is indeed needed, as discussed in lines 300-304.\"}", "{\"summary\": \"This paper studies the attention patterns of MLLMs when answering visual questions, and investigates whether MLLMs know where to look, i.e., the perception problem vs. the localization problem. Based on these findings, this paper introduces automatic visual cropping methods leveraging attention and gradient maps to help MLLMs better perceive small visual subjects. The proposed methods are evaluated on two MLLMs and seven VQA benchmarks and demonstrate significant improvement.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. This paper studied the perception vs. localization problem for small visual objects and introduced insightful findings.\\n2. It introduced a training-free method to help MLLMs better perceive the small visual subject of any question.\\n3. Experimentally, it demonstrated significant improvement on 7 benchmarks and on two MLLMs.\", \"weaknesses\": \"1. To help the model keep the global visual information, the cropped object introduces extra tokens. As illustrated in the paper's Table 4, it indeed introduces some computational latency. 
But I suggest the authors could try this approach (Matryoshka Query Transformer for Large Vision-Language Models (NeurIPS 24)) on the cropped object to save visual tokens, as the re-scaled cropped objects intuitively don't carry much detailed visual information.\\n2. The improvement on general VQA benchmarks with large visual concepts is not as significant as on small visual concepts.\", \"questions\": \"1. I'm curious to see whether the authors have tried and explored methods to add the cropped visual object to the original image instead of concatenating the two images together. Would that bring on-par or better performance compared with the current approach and save computation cost?\\n2. I am inclined to accept this paper unless I see critical drawbacks that I missed from other reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
DgGdQo3iIR
GEPCode: A Context-Aware 1M-Parameters Graph-Based Language Model for Source Code
[ "Federico Cichetti", "Emanuele Parisi", "Andrea Acquaviva", "Francesco Barchi" ]
The pursuit of optimal conditions for software execution poses a complex challenge. This task can be automated by harnessing the structured nature of programming languages, especially from compiler intermediate representations of code (IR). The manipulation of source code using Large Language Models (LLMs) is a thriving area of study in Natural Language Processing (NLP) literature. However, in this study we illustrate how we can circumvent the need for exceedingly large models by employing domain-specific language models. These models have a reduced number of parameters but retain the ability to capture the relationships within source code elements. We introduce GEPCode, a graph neural network designed to model IR with the flexibility to adapt to new tasks. This flexibility is obtained through special "meta" nodes, that allow for the representation of additional task-dependent contextual information. Pre-training is performed by solving node and graph-level tasks, resulting in a general language model. After a fine-tuning phase on two downstream tasks, Device Mapping and Algorithm Classification, we achieve average accuracy results of 88.9% (NVIDIA) and 92.3% (AMD) for the former and 97.2% for the latter. Comparing our methodology with state-of-the-art models trained from scratch, our results are similar or better, yet providing a more flexible model. Moreover, we achieve similar accuracy results in downstream tasks compared to state-of-the-art pre-trained language models based on Transformers, while utilizing 100 times fewer parameters.
[ "graph neural network", "graph language model", "source code optimization" ]
Reject
https://openreview.net/pdf?id=DgGdQo3iIR
https://openreview.net/forum?id=DgGdQo3iIR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "guARwpfaRN", "amYy9ISJXd", "PV0IJpyg9P", "OHC7tr10nm", "NNZHtNXSGH", "AH5Uygx34S", "8XzAsQcJuM", "3SVXFYoa3t" ], "note_type": [ "decision", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1737524119306, 1733262444764, 1733262081285, 1729220293518, 1734942542375, 1730785802798, 1733262283801, 1730465380234 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11356/Authors" ], [ "ICLR.cc/2025/Conference/Submission11356/Authors" ], [ "ICLR.cc/2025/Conference/Submission11356/Reviewer_jpTT" ], [ "ICLR.cc/2025/Conference/Submission11356/Area_Chair_2Qt3" ], [ "ICLR.cc/2025/Conference/Submission11356/Reviewer_NyAs" ], [ "ICLR.cc/2025/Conference/Submission11356/Authors" ], [ "ICLR.cc/2025/Conference/Submission11356/Reviewer_V1oP" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Answers\", \"comment\": \"Q1: A transformer model over flattened source code is naturally a GNN over fully connected graphs. It would be interesting to study how it performs to directly pre-train a transformer model over the same data with the same objectives.\", \"a1\": \"Yes, transformer models identify relationships across the entire context, effectively constructing a graph of relationships from a complete graph. However, our approach aims to leverage domain-specific knowledge about the structure of code. This goes beyond merely considering the sequence of op-codes; we focus on capturing the complex dependencies in data and control flow. By doing so, we aim to build a more efficient and lightweight model. To improve the comparison in the future, we will train a transformer model under the same conditions as the GNN, as suggested by the reviewer.\", \"q2\": \"L292 is confusing to me. Is the \\\"normalized property value\\\" in META edges the same as m in eq.6? 
If that's the case, isn't the training objective already known from the input?\", \"a2\": \"Yes, the classification targets (the META properties) are present in the input as nodes in the initial graph. However, the meta prediction is performed on the final representation of the CLS node, rather than directly on the representations of the META edges. Its purpose is simply to guide the CLS node to pay greater attention to the META nodes, as explained at the beginning of the paragraph: \\\"To enhance the network\\u2019s ability to effectively capture meta-information into the global representation, ...\\\". This allows meta nodes to be included in the graph in a structural way, enabling generalization to downstream tasks.\"}", "{\"title\": \"Answers\", \"comment\": \"Q1: Why is a GNN used as a solution to build compact LMs in this work? What is the motivation?\", \"a1\": \"Graph Neural Networks are used as a solution to build compact language models because of their ability to effectively capture and process structured relational data. Source code inherently has a graph-like structure, and GNNs can represent and process such structures more naturally and efficiently, encoding domain-specific relationships that enhance the model\\u2019s understanding without significantly increasing complexity. Moreover, GNNs allow us to include the meta-node population in a structural way, enabling generalization to downstream tasks.\", \"q2\": \"The paper uses the LLVM-IR representation of code; in that case, is the proposed method scalable?\", \"a2\": \"While LLVM-IR captures fine-grained operations, its verbosity could lead to scalability challenges, which can be addressed by simplifying the LLVM-IR (removing irrelevant nodes or edges) or by splitting large graphs into smaller ones. 
At the same time, however, the use of LLVM enables several advantages including the application of the methodology to different high-level languages in an agnostic manner and the extension of the methodology to different domain-specific dialects.\"}", "{\"summary\": \"The paper proposes GEPCode, a Graph Neural Network (GNN) to embed compiler intermediate representation of source code. GEPCode is pre-trained on 1.3M graphs constructed from source code for three objectives: (1) Attribute Masking, (2) Meta Prediction, and (3) Contrastive Learning. The resulting model is then fine-tuned for two downstream tasks respectively: (1) Device Mapping and (2) Algorithm Classification. Evaluation shows that GEPCode, which has only 1M parameters, achieves comparable performance with respect to much larger pre-trained transformer models, highlighting its efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"While previous works have approached the same problem with graph representations, GEPCode is the first to *pre-train* an GNN for intermediate representation, and proposed the three new objectives for pre-training.\", \"The proposed method is compared against a comprehensive list of baseline methods. All experiments are repeated 5 times for reliability.\", \"The paper is written clearly.\"], \"weaknesses\": [\"The proposed method does not show a clear advantage over Perfograph which performs better on DevMap with an even smaller model size, and does not require pre-training.\", \"It is highlighted that GEPCode is more parameter-efficient than pre-trained transformers. 
I do not think this is a well-established advantage of the proposed method unless it can be demonstrated that GEPCode outperforms the SOTA pre-trained transformer of the same model size.\", \"Regarding efficiency, I'm not sure about the practical benefit of reducing the inference time from around 100ms (with CodeT5 or other transformers) to 23ms (with GEPCode), at the cost of accuracy degradation. Those transformers are already pretty small and fast, whose inference latency should be acceptable for DevMap and algorithm classification that are considered in evaluation.\", \"The baseline transformer models are mostly trained on multilingual programming language data, while GEPCode is trained on IR which is a single language. Is it possible that a monolingual transformer model trained only on the target language for the evaluation task would perform better, and thus the baseline numbers are underestimated?\"], \"questions\": [\"A transformer model over flattened source code is naturally a GNN over fully connected graphs. It would be interesting to study how it performs to directly pre-train a transformer model over the same data with the same objectives.\", \"L292 is confusing to me. Is the \\\"normalized property value\\\" in META edges the same as $m$ in eq.6? If that's the case, isn't the training objective already known from the input?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": [\"This paper proposes GEPCode, a GNN-based pre-trained model over LLVM-IR. It uses three pre-training objectives\\u2014attribute masking, meta prediction, and contrastive learning\\u2014and is tested on device mapping and algorithm classification. 
The authors claim GEPCode outperforms larger transformer-based models in efficiency while maintaining comparable accuracy.\", \"**Strengths:**\", \"The paper highlights the \\u201cdemand for smaller, efficient and flexible models in this domain\\u201d (V1oP), which is timely given the growing focus on efficiency in modern ML research.\", \"Multiple reviewers (e.g., NyAs, jpTT, V1oP) note that the paper is \\u201cwritten well\\u201d (NyAs) and \\u201cwritten clearly\\u201d (jpTT).\", \"The inclusion of three pre-training tasks\\u2014attribute masking, meta prediction, and contrastive learning\\u2014which reviewers find \\u201care effective\\u201d (NyAs).\", \"A comprehensive set of experiments (jpTT).\", \"**Weaknesses:**\", \"Unclear motivation for using a GNN: \\u201cwhy GNN is the solution, it is not clear\\u201d (NyAs).\", \"It is also unclear how the baselines were chosen (NyAs), and a comparison with fine-tuned transformer-based language models would be beneficial (V1oP).\", \"Weak comparison with prior methods. Perfograph performs better on DevMap with fewer parameters and no pre-training (jpTT).\", \"Though the paper is well-written and tackles a relevant problem, reviewers remain unconvinced about GEPCode\\u2019s novelty and its empirical advantages. The authors\\u2019 rebuttal did not fully address these concerns.\"], \"additional_comments_on_reviewer_discussion\": \"Please refer to the meta-review for details.\"}", "{\"summary\": \"The paper introduces GEPCode, a graph-based, efficient, pre-trained, context-aware LM of graph representations of source code. The proposed approach expands upon ProGraML and Perfograph. GEPCode expresses LLVM-IR code samples as graphs, where nodes are tokens within a vocabulary of LLVM-IR elements, and edges are directed and represent dependencies between the elements of code. GEPCode is pretrained using the synth-compilable subset of the Exebench dataset, using three tasks - attribute masking, meta prediction, and contrastive learning. 
For downstream tasks, GEPCode is evaluated on device mapping and algorithm classification. Experimental results demonstrate that GEPCode is able to bridge the gap between the efficiency of task-specific architectures and the generality of larger LMs, while using a limited number of parameters.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Overall the paper is written well. The motivation behind the model choice is sound.\", \"The pretraining objectives that include three tasks are effective.\", \"Experiment results show the value of the proposed method.\"], \"weaknesses\": [\"The biggest weakness of the work is the novelty. In a nutshell, it is a paper that is using a GNN for some specific code tasks. GNNs were previously explored for coding tasks; however, with the emergence of LLMs, the focus has shifted. In this paper, the authors emphasized compact LMs, but why a GNN is the solution is not clear.\", \"The paper could use more software engineering tasks for evaluation. In the analysis part of the work, there is not much critical thinking by the authors. Straightforward main results and a piece of ablation study - that's it. Moreover, I didn't understand the baselines used in comparison.\"], \"questions\": [\"Why is a GNN used as a solution to build compact LMs in this work? What is the motivation?\", \"The paper uses the LLVM-IR representation of code; in that case, is the proposed method scalable?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answers\", \"comment\": \"Q1: Please clarify if IRGen is a non-transformer-based pre-trained language model that performs better than GEPCode.\", \"a1\": \"IRGen is not a competitor of our model; it is a framework based on genetic algorithms to identify sequences of optimization flags that can significantly improve embedding quality. 
It is thus an orthogonal research to the one presented in this paper that can be hybridized to further improve the embedding quality in future works.\"}", "{\"summary\": \"This paper introduces GEPCode, a novel approach using a Graph Neural Network to model compiler intermediate representations of code with the adaptability to accommodate new tasks. The model is first pre-trained as a general-purpose language model, then fine-tuned on downstream tasks, achieving results comparable to state-of-the-art models trained from scratch. Additionally, GEPCode performs similarly to leading pre-trained Transformer-based language models while requiring fewer parameters, offering a more flexible and efficient solution.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The problem statement addressed in this paper is worth solving as there is demand for smaller, efficient and flexible models in this domain.\", \"weaknesses\": \"(1) Typically, fine-tuning is performed for 1 to 10 epochs. Here, fine-tuning for both downstream tasks was run for 100 epochs, which could have led to overfitting.\\n\\n(2) The accuracy of fine-tuned GEPCode on downstream tasks is compared against pre-trained transformer-based language models. A comparison with fine-tuned transformer-based language models for our downstream tasks would have been ideal.\", \"questions\": \"Please clarify if IRGen is a non-transformer-based pre-trained language model that performs better than GEPCode.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DfTWrTwLzD
Two Are Better than One: Context Window Extension with Multi-Grained Self-Injection
[ "Wei Han", "Pan Zhou", "Soujanya Poria", "Shuicheng YAN" ]
A limited context window has been an inherent constraint for large language models (LLMs), which significantly restricts their application scenarios. Continual pre-training on long-context data is the most straightforward approach to further extend an LLM's context window, but it comes at the expense of huge data acquisition and computation costs. Many cost-efficient context window extension methods that do not require a pre-training process have emerged as appealing solutions, such as extrapolation, attention manipulation, context compression, etc. In this paper, we propose a novel approach named Shared-LLaMA. Shared-LLaMA is composed of two short-context LLMs. One of them works as a compressor and the other works as a decoder. The decoder receives compressed multi-grained context information from the compressor and performs context-aware modeling on the running text. Information transfer between the compressor and decoder occurs only at the lowest layers to circumvent an entire forward pass and save inference time. Both LLMs are initialized from the same off-the-shelf checkpoint and thus can be directly trained without extra feature alignment stages. Additionally, we propose a tree structure to store the multi-grained information and design a search algorithm to quickly locate and retrieve related information from each level of that tree. With these efficient design choices, Shared-LLaMA greatly reduces memory consumption and achieves an apparent speedup over other advanced baselines ($2\times$ over streaming, $3\times$ over encoder-decoder architectures). In our evaluation on long-context modeling and understanding tasks, Shared-LLaMA yields superior or comparable results to several strong baselines, indicating that Shared-LLaMA achieves a good balance between efficiency and effectiveness.
[ "long-context modeling", "large language models" ]
Reject
https://openreview.net/pdf?id=DfTWrTwLzD
https://openreview.net/forum?id=DfTWrTwLzD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "n5ifuAKTBL", "k5Jns3IcPQ", "flvpHjHeJC", "dqTjT6V2Rw", "aoNZ0YDynO", "TtxQWvZAp6", "PrmOJ8fyYB", "FO61TyKibg" ], "note_type": [ "official_review", "official_review", "official_review", "meta_review", "official_comment", "decision", "official_review", "official_comment" ], "note_created": [ 1730735149337, 1729428375491, 1730047737400, 1734560137227, 1732077873219, 1737523736076, 1730700166046, 1732453220262 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5960/Reviewer_toYK" ], [ "ICLR.cc/2025/Conference/Submission5960/Reviewer_PZ2g" ], [ "ICLR.cc/2025/Conference/Submission5960/Reviewer_swMr" ], [ "ICLR.cc/2025/Conference/Submission5960/Area_Chair_zH8C" ], [ "ICLR.cc/2025/Conference/Submission5960/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5960/Reviewer_wFyt" ], [ "ICLR.cc/2025/Conference/Submission5960/Reviewer_swMr" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces SharedLLM, a novel approach for extending the context window of large language models (LLMs) by using a hierarchical architecture that pairs two short-context LLMs. In SharedLLM, one model, called the lower model, acts as a compressor that processes past context into compact, multi-grained representations. The other, the upper model, serves as a decoder that integrates this compressed context with current text to predict future tokens in a context-aware manner. Information is passed from the compressor to the decoder through self-injection layers at specific levels, allowing efficient integration without extensive cross-attention.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Innovative Multi-Grained Context Extension: Introduces a unique approach to extend context windows using a compressor-decoder model architecture, which efficiently handles large context data.\\n2. 
Strong Experimental Results: Demonstrates superior performance on several long-context benchmarks, providing evidence of the model's robustness.\\n3. High Efficiency: Outperforms other methods in terms of speed and memory usage, making SharedLLM viable for large-scale applications.\", \"weaknesses\": \"1. The paper provides little information on how the model\\u2019s performance changes with different tree depths, compression ratios, and injection layers beyond the default settings. Since these parameters are key to achieving a balance between efficiency and accuracy, a sensitivity analysis would be beneficial.\\n2. While the paper introduces a query-aware retrieval policy in the context tree for efficient information extraction, it lacks a detailed analysis of how different retrieval policies affect SharedLLM\\u2019s performance. An ablation study comparing retrieval policies (e.g., different similarity metrics or selection thresholds) would enhance understanding and offer actionable tuning guidance for practitioners.\", \"questions\": \"1. What criteria guided the choice of retrieval policy in the context tree, and how sensitive is SharedLLM\\u2019s performance to different retrieval policy settings?\\n2. How does SharedLLM perform with different compression ratios, tree depths, and injection layer settings?\\n3. Have you considered testing SharedLLM with alternative context extension methods, such as position interpolation or memory-augmented architectures?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces SharedLLM, an innovative approach designed to extend the context window of LLMs without incurring the substantial costs associated with continual pre-training on long-context data. 
The authors claim that SharedLLM achieves comparable or superior results on long-context tasks while significantly reducing memory consumption and increasing processing speed compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. SharedLLM introduces an original dual-model architecture with a \\\"context tree\\\" data structure, enhancing the efficiency of context compression and retrieval for large language models.\\n2. Despite the complexity of the concepts introduced, the paper communicates the workings of SharedLLM and its underlying mechanisms.\", \"weaknesses\": \"1. The evaluation only uses the LLaMA-2 model, with no justification for not including more recent or varied models like LLaMA-3.\\n2. The method proposed in this paper does not seem to outperform other models on LongBench.\", \"questions\": \"See Weakness section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes SharedLLM to reduce the heavy training and inference costs of long-context LLMs. Specifically, this method's insight lies in multi-grained context compression and query-aware information retrieval. SharedLLM uses two short-context LLMs, where the lower model functions as a compressor while the upper model acts as a decoder. The lower model divides the sequence into non-overlapping chunks and performs a split-and-search procedure to choose relevant chunks. The selected chunks are then downsampled to reduce the KV cache cost. The upper model encodes these keys and values via cross-attention with chunk-level positional embeddings.\\n\\nThe paper conducts experiments on language modeling and supervised fine-tuning to verify the effectiveness of SharedLLM. In language modeling, SharedLLM outperforms other efficient long-context baselines. 
In supervised fine-tuning, SharedLLM also achieves strong performance on LongBench and InfiniteBench. The efficiency experiments show that SharedLLM is cheap and easy to deploy. Finally, the ablation studies discuss the design choices.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper focuses on an important research area, where LLMs are struggling to achieve effective and efficient long-context training and inference. SharedLLM integrates the compression and retrieval idea, which is a promising research direction in this field.\\n\\n2. The paper is well written. Although the proposed method includes a long procedure, the method section is organized well to help readers get the core idea. The experiment section includes all the necessary parts, including performance, efficiency, and ablation studies.\", \"weaknesses\": \"1. The method is very complicated. I admit that \\\"complicated\\\" itself is not always a weakness. However, simple and elegant designs usually work well in model architecture research. Complicated designs bring engineering difficulty and optimization problems. SharedLLM uses the last output as the embedding of a chunk, which is used for retrieval by the queries. It is highly questionable whether the optimization is valid, since sparse computation usually has difficulty with gradient estimation.\\n\\n2. SharedLLM divides the context into different chunks and adds a downsampling module to compress the KV cache. This may hurt performance on some long-context tasks. The experiment part is not strong enough to support SharedLLM's long-context capability. I will give some existing benchmarks and easy examples accordingly:\\n- Needle-in-a-haystack is now a compulsory evaluation to show long-context retrieval ability, which is also included in InfiniteBench. However, the experiment only includes two subtasks. 
The other parts are also essential to show the model's long-context ability.\\n- If I query the model to repeat all the previous context, the sparsely divided and compressed context may hurt the context information. Moreover, if my first query does not include some context information and my second query needs it again, SharedLLM will have dropped it in the first response and cannot answer my second question.\\n\\n3. The experiment also lacks important baselines of KV pruning methods. For example, StreamingLLM is an early baseline which directly drops all the global information while only maintaining the local KV cache. There are many sparse KV cache works, including H2O, SnapKV, and FastGen. These works are efficient and memory-friendly, while maintaining part of the long-context capability.\", \"questions\": \"1. I'm curious about the results in Table 1 and Table 2. CEPE in line 349 is a reproduced result. Does it mean that the other results are all from the public checkpoints of the corresponding papers? If so, I think the perplexity comparison is meaningless due to different experiment settings. If not, the perplexity in short context (4k) is highly different, which is also weird.\\n\\n2. On the InfiniteBench evaluation, why are you only interested in two subtasks? There is no rationale for that. After all, if you use a benchmark for evaluation, you usually use all its subtasks.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a method for longer context length using two LMs, which play the roles of a compressor and a decoder, respectively.\\n\\nLonger context length without heavy computational costs and data for post-training is a critical and important topic. 
\\nHowever, the main concerns, raised by most reviewers, are the overly complicated method, experimental results that are not solid, and insufficient in-depth analysis supporting the efficacy of the proposed method.\\n\\nThe AC also agrees with the reviewers' concerns, and so recommends rejecting this paper.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers pointed out the overly complicated approach, the insufficient and not solid experimental results, and the lack of in-depth analysis.\\nDuring the rebuttal, the authors tried to address these issues but failed to convince the reviewers. \\n\\nVia the AC-reviewer discussion, all reviewers and the AC concluded that this paper does not meet the ICLR quality bar.\"}", "{\"title\": \"General Response: A gentle reminder that we have updated the PDF file.\", \"comment\": \"Dear reviewers,\\n\\nThank you all for your efforts in reviewing this paper. This is a gentle reminder that we have slightly modified the PDF file and colored the modified text in red. The changes include:\\n1) Changing some colors in Figure 1 and adding descriptions in the caption, to highlight the core dataflow of Self-injection in SharedLLM.\\n2) Adding a few results from additional experiments in Appendix B, page 15, to further demonstrate SharedLLM's model and task generalizability.\\n\\nWhere we refer to specific lines or pages in our responses, please download the latest version to ensure that the position of the referred content is correct. We apologize for any inconvenience this may cause.\\n\\nWe do not respond to questions collectively across reviewers, since the questions raised by each reviewer focus on different aspects. Please check the individual threads for targeted responses. Thanks!\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": [\"The paper presents SharedLLM, which uses two short-context LLMs (derived from the same model family, like LLaMA-2) in a hierarchical structure to handle long contexts. 
My understanding is that this paper has made the following contributions:\", \"Integrating the concept of \\\"Context Tree\\\": A binary tree structure storing text at different granularities, with higher compression ratios at higher levels\", \"Architecture: A lower model compresses context into multi-grained representations, while an upper model performs language modeling using this compressed information\", \"Query-Aware Retrieval: For instruction-following tasks, uses similarity scoring to selectively expand relevant tree nodes\", \"Layer-wise Connection: Information transfer occurs only at lower layers to reduce computational overhead\", \"Empirical results on language modeling tasks up to 128K tokens and various instruction-following benchmarks.\", \"However, I am a bit concerned with the presentation and also the explanation of the technical depth.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"the basic empirical setup for evaluation in the experimental studies is clean on standard benchmarks, from ppl to instruction-following tasks\", \"ablation studies demonstrate the effectiveness of the approach\"], \"weaknesses\": [\"1. In my personal opinion, this paper could benefit from improving the presentation; specifically,\", \"In Figure 1, it is hard for me to tell whether this is an encoder-decoder model or a decoder-only model. The role of cross-attention is not very clear. I would recommend the authors give an overview of the model, then dive into details, and use another figure to explain the context tree to avoid readers getting distracted. 
At least, you can mark which part is the encoder and which part is the decoder.\", \"I really find it hard to understand what \\\"Tree Cross-attention\\\" in Figure 1 is.\", \"The paper emphasizes \\\"self-injection\\\", but this concept is nowhere in this central figure.\", \"there are notations for the compression ratio, but I am not so sure that they are well explained.\", \"The authors were arguing for \\\"information preservation\\\", but I am not so sure what this means. Could you elaborate on this and connect it with related work?\", \"2. Experimental setup and results\", \"There are many hyperparameter settings for the constructed trees, but the intuition for why those hyperparameters were used/set is not well discussed. See Q1\", \"How do you handle the drift or variable length during training and inference? See Q2\", \"Different values of M were set; what's the intuition and what's the best practice? See Q3\"], \"questions\": \"Q1. How do you set up the values of the hyperparameters of the context tree? For example, their depth? Are they sensitive to the inference tasks?\\n\\nQ2. How do you take care of variable length during training and inference? The dynamic NTK and YaRN used the inference-time ratio for this, but I am not so sure about this in your method; thank you in advance, as I am asking out of curiosity. \\n\\nQ3. Robustness of different M values. M is an important model architecture base value, but I am not sure about the robustness and meaning of setting different M values, and what is the best practice for this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your detailed response! I will continue discussing some details as follows:\\n\\n1. \\\"Complicated\\\" is more of a subjective judgment. You believe it is simple and straightforward only because it is your invention. Admittedly, it is not compulsory in an academic paper. 
You can consider it my personal feeling and advice, which won't affect the rating.\\n\\n2. I still do not agree with your view on the comparison in the experiment part. Even though some KV pruning methods are post-training techniques, long-context continual training is not a difficult part in 2024 [1, 2]. If I consider your method as an aligned training+inference method, we can also split the two parts. \\n\\n3. Following 2, only fairly comparing your method with CEPE is not solid enough academically. The baseline comparison needs to be done by yourself, since these are almost contemporary works. I think listing the results from their papers is sometimes misleading when the experiment settings are quite different.\\n\\n4. I'm glad to see the NIAH results in your response. But still, it is not solid enough when only compared with CEPE.\\n\\n\\n[1] Effective Long-Context Scaling of Foundation Models.\\n[2] Data Engineering for Scaling Language Models to 128K Context.\"}
DfOYQZOilp
Jump-teaching: Ultra Robust and Efficient Learning with Noisy Labels
[ "Kangye Ji", "Fei Cheng", "Zeqing Wang", "Qichang Zhang", "Bohu Huang" ]
Sample selection is the most straightforward technique to combat noisy labels, aiming to prevent mislabeled samples from degrading the robustness of neural networks. However, compounding selection bias and redundant selection operations have always remained challenging in robustness and efficiency. To mitigate selection bias, existing methods utilize disagreement in partner networks or additional forward propagation in a single network. For selection operations, they involve dataset-wise modeling or batch-wise ranking. Any of the above methods yields sub-optimal performance. In this work, we propose $\textit{Jump-teaching}$, a novel framework for optimizing the typical workflow of sample selection. Firstly, Jump-teaching is the $\textit{first}$ work to discover significant disagreements within a single network between different training iterations. Based on this discovery, we propose a jump-manner strategy for model updating to bridge the disagreements. We further illustrate its effectiveness from the perspective of error flow. Secondly, Jump-teaching designs a lightweight plugin to simplify selection operations. It creates a detailed yet simple loss distribution on an auxiliary encoding space, which helps select clean samples more effectively. In the experiments, Jump-teaching not only outperforms state-of-the-art works in terms of robustness, but also reduces peak memory usage by $0.46\times$ and boosts training speed by up to $2.53\times$. Notably, existing methods can also benefit from the integration with our framework.
[ "learning with noisy labels", "machine learning", "classification" ]
Reject
https://openreview.net/pdf?id=DfOYQZOilp
https://openreview.net/forum?id=DfOYQZOilp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uYgG6uGljk", "tm94uV1kWV", "q27QgrF2p9", "gXlUfUM6Fp", "b46om2ytqF", "ZoIWQzwcGX", "YgRvv7EWtX", "VtiI8yiDJJ", "TgNou5qXZe", "REnX3BukXr", "LOltYKkGX6", "JD0mvkX5QB", "Fbjb2PPv5i", "FEcu0MjJVI", "F8EDQrPrmw", "EmAKOOKBu2", "CktBJkCOqj", "7rqW5JmtfK", "5WxnfE5Ccn", "2thQH4cBli", "2YOvBGVZW0" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730713700652, 1732292881410, 1737524061371, 1732292911737, 1733131514225, 1732291960348, 1730693377785, 1732292436862, 1732293512903, 1733912425057, 1732497979650, 1730679588275, 1732715072527, 1732293642581, 1730691391543, 1732292340551, 1732293255622, 1732293470300, 1732293760718, 1732293138838, 1732501215437 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10552/Reviewer_ySnh" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Reviewer_ZGND" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Area_Chair_i3ce" ], [ "ICLR.cc/2025/Conference/Submission10552/Reviewer_5v7y" ], [ "ICLR.cc/2025/Conference/Submission10552/Reviewer_FvYm" ], [ "ICLR.cc/2025/Conference/Submission10552/Reviewer_ySnh" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Reviewer_5v7y" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" 
], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ], [ "ICLR.cc/2025/Conference/Submission10552/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors introduce the concept of identifying significant disagreements within a single neural network across different training iterations. This discovery leads to the proposal of a \\\"jump-manner\\\" strategy for model updates, effectively bridging the gaps caused by these disagreements. Jump-Teaching simplifies the sample selection process through a lightweight plugin that generates a clear loss distribution in an auxiliary encoding space. This approach enhances the ability to select clean samples more effectively, addressing selection bias and redundancy. The framework demonstrates substantial improvements in both robustness and efficiency compared to state-of-the-art methods, specifically reducing peak memory usage by 46% and increasing training speed by up to 253%. The paper highlights that current methods can benefit from integrating with the Jump-Teaching framework, suggesting that it enhances the overall approach to learning with noisy labels.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The method achieves improved performance under high noise rates on CIFAR datasets.\\n2. The method demonstrates better computational and storage efficiencies during testing.\", \"weaknesses\": \"1. The experiments conducted on CIFAR-10 with 90% symmetric noise lack meaningful insight, as this setting results in random labels for each sample, effectively reducing the task to an unsupervised learning scenario.\\n2. The presentation needs improvement. 
Suggested changes include:\\n - The methodology section incorporates experimental analysis (Figure 4), making it difficult to discern insights related to debiasing.\\n - The connection among the four subsections in Section 3.2 is unclear.\\n - The framework presented in Figure 1 contains excessive details that are not explained in the introduction; these should either be removed or relocated to the methodology section.\\n3. The authors claim that \\u201cJump-Teaching is the first work to discover significant disagreements within a single network between different training iterations.\\u201d However, the concept of leveraging disagreements across different training iterations has been previously studied (see [1]).\\n\\n[1] Self-Filtering: A Noise-Aware Sample Selection for Label Noise with Confidence Penalization, ECCV 2022.\", \"questions\": \"1. As indicated in Table 5, the accuracy under 90% symmetric noise on CIFAR-10 exceeds 75%, which corresponds to random labels for the training samples. This scenario can be classified as an unsupervised learning task rather than weakly supervised learning. We need to reconsider the implications of generalization in learning with noisy labels using semi-supervised learning methods, given the lack of supervision.\\n2. It seems unreasonable to separate the updates of the neural network parameters in steps 9-10. Combining \\\\( L^{BCE} \\\\) and \\\\( L^{CE} \\\\) and updating the neural network with respect to the total loss could be more efficient.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer FvYm (part 1)\", \"comment\": \"Thank you for your helpful review. We have clarified the novelty of our method in [Novelty]. Furthermore, we add new experiments in [Experiments] and improve the presentation in [Presentation]. 
Now we would like to address your concerns in detail.\\n\\n> [W1] The academic novelty of this approach appears limited. [W1.1] It is unclear whether updating the model based on selections from the previous step offers any theoretical advantage over the naive approach of updating the model at every iteration. [W1.2] Additionally, is there theoretical support that using data from the previous step effectively addresses the noisy label problem? [W1.3] Moreover, extracting useful information from models at different training epochs has already been extensively explored in the literature. A seminal work in this area, for example, is Snapshot Ensembles: Train 1, get M for free (ICLR \\u201917).\\n\\nThank you for your comments. We address your concerns point by point to clarify the academic novelty of the jump-update strategy.\\n\\n**W1.1:** First, there seems to be a misunderstanding of the jump-update strategy. The jump-update strategy also updates the model at every iteration. As for the theoretical advantages over the naive self-update approach, we have provided two main justifications in the paper:\\n\\n(1) It bridges the disagreement for self-correction of bias, which is shown directly in Figure 2. \\n\\n(2) It splits the sequential error flow, which can reduce the number of error accumulations by hundreds of times on CIFAR-10 and CIFAR-100, which is detailed in Section 3.1. \\n\\n**W1.2:** The question appears to be somewhat imprecise. Do you mean that data from the previous step is better than current data? As for this question, the paper does not make such a comparison, nor does it claim any superiority of data from the previous step. If your concern comes from other parts, would you please provide more information? We will provide more details in the following discussion phase.\\n\\n**W1.3:** The description is overly general, and the referred literature is unrelated to the field of our paper. 
Therefore, it does not constitute a limitation of the academic novelty of the paper.\nFirst of all, \"extracting useful information from models at different training epochs\" is too general a description, as this manner occurs in countless papers [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. These papers all follow this manner, yet their research problems, research objects, novelty, and technical contributions are diverse and each valuable in its own right. We take the paper [11] you referenced as an example.\n- In the research problem, we study how to improve network performance when learning with noisy labels, while [11] studies how to improve the ensembling performance of different epochs.\n- In research objects, we study the model update strategy and sample selection criteria, while [11] focuses on the convergence and optimization of model parameters.\n- In temporal-related technologies, we use different selection behaviors to mitigate bias, while [11] uses different snapshot parameters for ensembling.\n\n[1] Temporal Ensembling for Semi-Supervised Learning \\\n[2] Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results \\\n[3] Temporal Self-Ensembling Teacher for Semi-Supervised Object Detection \\\n[4] Averaging Weights Leads to Wider Optima and Better Generalization \\\n[5] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better \\\n[6] Boost Neural Networks by Checkpoints \\\n[7] Understanding and Improving Early Stopping for Learning with Noisy Labels \\\n[8] Efficient Knowledge Distillation from Model Checkpoints \\\n[9] DISC: Learning From Noisy Labels via Dynamic Instance-Specific Selection and Correction \\\n[10] Explaining Deep Learning Representations by Tracing the Training Process \\\n[11] Snapshot Ensembles: Train 1, get M for free\"}", "{\"title\": \"Response to Reviewer 5v7y\", \"comment\": \"Thank you for 
your constructive review. We have clarified the novelty of our method in [Novelty]. Furthermore, we add new experiments in [Experiments] and improve the presentation by checking the typos and clarifying the definition in [Presentation]. Now we would like to address your concerns in detail.\\n\\n> [W1] Authors need to further provide the results of J-Co-teaching and J-DivideMix on Clothing1M in Table 3 to prove the reliable performance in real scenarios.\\n\\nThank you for your constructive suggestion. We have conducted further experiments on J-Co-teaching and J-DivideMix using the Clothing1M dataset. The supplemental experiments have been incorporated into Section 4.2 in line 482, with the results detailed in Appendix 19. As shown in Table 13, we conclude that the jump-update strategy demonstrates reliable performance in real-world scenarios, achieving a 2.88% improvement for Co-teaching and a 0.26% improvement for DivideMix.\\n\\n> [W2] There are errors with the experimental results data. J-Co-teaching does not achieve optimal performance under some settings in Table 4, such as Sym-0.5 of CIFAR-10 and Sym-0.5 and Asym-0.4 of CIFAR100.\\n\\nThanks for your helpful suggestion. We have rechecked the experimental results and highlighted the top-performing algorithm for each metric with bold annotation in Table 4.\\n\\n> [W3] The content of the semantic loss decomposition part of this paper is not strongly related to the main motivation of this paper, Jump-update Strategy, and is more like an auxiliary trick.\\n\\nThanks for your comment. Jump-update strategy and semantic loss decomposition are equally crucial to our motivation. In other words, semantic loss decomposition is not merely a trick but a key component of Jump-teaching. We would like to re-emphasize our motivation and clarify the role of semantic loss decomposition.\\n1) Motivation. The motivation of this paper is to develop a robust and efficient solution for handling noisy labels. 
Generally, the workflow of traditional sample selection methods involves two key aspects: mitigating selection bias and performing selection operations. Both of these aspects present challenges to achieving robustness and efficiency. First, selection bias inevitably occurs due to the classifier's exposure to noisy data (lines 69 to 81). Second, current selection operations are often redundant (lines 82 to 91). Jump-teaching addresses both robustness and efficiency concerns by considering these two aspects in its design. Therefore, the Jump-update strategy and semantic loss decomposition are both essential components of our approach.\n2) Role. Semantic Loss Decomposition simplifies redundant selection operations by avoiding dataset-wide modeling or batch-wise ranking. Specifically, it utilizes a lightweight plugin to decompose a single loss into a detailed distribution within the loss. By leveraging the memorization effect in this distribution, it becomes easier to detect noise. Therefore, Semantic Loss Decomposition plays a critical role in fulfilling our motivation.\n\n> [W4] There is no clear definition of $I_{detection}$ in Eq. (8).\n\nThanks for your helpful suggestion. For the sake of completeness, we provide the definition of $I_{detection}$ in line 337.\n\n> [W5] The names of the citation methods are not uniform, such as 'JoCoR' in the relevant work section and 'JoCor' in the experimental section.\n\nThanks for your helpful advice. We have thoroughly reviewed the text for typographical errors and updated the name of the method in [1] as 'JoCoR'.\n\n[1] Combating noisy labels by agreement: A joint training method with co-regularization\n\n> [W6] It is recommended to compare the proposed method with 2024 SOTAs.\n\nThank you for your thoughtful suggestion. We provide a further comparison between Jump-teaching and the latest work [1], strengthening the validity of our experimental results. The results are supplemented in Table 1 in line 444. 
This literature also has been included in the references. As illustrated in Table 1, Jump-teaching consistently outperforms RML across all noise settings, achieving a remarkable $48.8$% improvement in Sym. $\\\\epsilon = 0.8$ on CIFAR-10 and a $24.3$% improvement in Sym. $\\\\epsilon = 0.8$ on CIFAR-100.\\n\\n[1] Regroup Median Loss for Combating Label Noise (AAAI'24)\"}", "{\"title\": \"Summary of Submission and Discussion\", \"comment\": \"We sincerely thank the reviewers and chairs for their valuable feedback. During the discussion period, we made every effort to address all of the reviewers' concerns, particularly regarding the novelty and effectiveness of the proposed method. Additionally, we have conducted the suggested experiments and made several revisions, all of which are highlighted in blue in the uploaded PDF file. Although we are still awaiting responses from some reviewers, we would be happy to provide a clear summary of the paper and the points discussed.\\n\\n## Motivation\\nThis paper focuses on combating noisy labels by selecting clean samples for training. The motivation behind Jump-teaching is to optimize the outdated sample selection workflow by mitigating selection bias within a single network and simplifying redundant selection operations through the incorporation of a lightweight plugin.\\n## Contributions\\n- **Novelty of Discovery.** We are the first work to discover the disagreement for correcting selection bias within a single network. Notably, this has been clarified in the [Novelty] section of the general response and in specific responses to Reviewer #1 and Reviewer #2.\\n- **Technical Contribution of Jump-update Strategy.** Jump-update strategy proposes a simple and cost-free solution to bridge disagreement and split sequential error flows, leading to a significantly better trade-off between efficiency and robustness. 
\\n- **Technical Contribution of Semantic Loss Decomposition.** The proposed plugin virtually eliminates redundancy in selection operations, occupying only 2.2% of the total training time, by exploiting the memorization effects in the distribution of the decomposed single loss.\\n## Experiments\\nExtensive experimental results confirm the effectiveness of our method. \\n- **Superior Performance of Jump-teaching.** To address the concerns regarding up-to-dateness, we compare 2024 SOTA in Table 1. Jump-teaching outperforms the latest baselines across various noise ratios and types, including symmetric noise (Table 1), asymmetric noise (Table 1), pairflip noise (Table 9), IDN (Table 10), and real-world noise (Table 3). Furthermore, it reduces peak memory usage by $0.46\\\\times$ and accelerates training speed by up to $2.53\\\\times$ (Table 2), achieving an extremely low overhead for sample selection (2.2% in training speed and 2.8% in peak memory).\\n- **Effective Integration of Jump-update Strategy.** We successfully apply the Jump-update strategy to improve the performance of mitigating bias in two representative settings: supervised-only and semi-supervised methods. The results demonstrate impressive improvements under extreme noise conditions (Table 4 and Table 5) and real-world noise (Table 13). The newly added results in Table 13 also address Reviewer #3's concerns regarding the performance of J-co-teaching and J-DivideMix under real-world noise.\\nWe also summarize the newly conducted experiments addressing these concerns in the [Experiments] section of the general response and respond to specific reviewers in detail.\"}", "{\"title\": \"Response to Reviewer ySnh (Part 1)\", \"comment\": \"Thanks for your thorough reviews. We have clarified the novelty of our method in [Novelty]. Furthermore, we add new experiments in [Experiments] and improve the presentation in [Presentation]. 
Now we would like to address your concerns in detail.\n\n> [W1] The experiments conducted on CIFAR-10 with 90% symmetric noise lack meaningful insight, as this setting results in random labels for each sample, effectively reducing the task to an unsupervised learning scenario.\n\nThere may be a misunderstanding here: the 90% rate appears to have been interpreted as the ratio of flipped categories. Actually, the 90% symmetric noise setting is advisable for the following three reasons:\n1) The setup that uses CIFAR-10 with 90% symmetric noise is aligned with the baseline [1] in Table 5.\n2) 90% symmetric noise serves as a widely adopted standard [2, 3, 4, 5] for performance evaluation under extreme noise. \n3) The setup is reasonable, as the labels are not random, indicating that this task is not an unsupervised learning scenario. We conduct experiments showing that the labels of the selected samples are not random at all. Specifically, we calculate the ratio and number of clean samples selected by the network under 90% symmetric noise conditions. As shown in the newly added Fig. 8, the clean ratio achieved by all methods exceeded 20%, and that of the two methods using the Jump-update strategy even exceeded 60%, which is significantly higher than the random proportion. \n\nIn fact, 90% noise indicates that a label has a 90% probability of being flipped to any class. 
We also recommend that readers refer to Appendix 12 for further details on the simulation of synthetic noise.\\n\\n[1] DivideMix: Learning with Noisy Labels as Semi-supervised Learning \\\\\\n[2] LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment \\\\\\n[3] Sample Prior Guided Robust Model Learning to Suppress Noisy Labels \\\\\\n[4] Probabilistic End-to-end Noise Correction for Learning with Noisy Labels \\\\\\n[5] Joint Optimization Framework for Learning with Noisy Labels \\n\\n> [W2.1] The methodology section incorporates experimental analysis (Figure 4), making it difficult to discern insights related to debiasing.\\n\\nThe insights presented in the Empirical Analysis (the second subsection) are verified by Experimental Analysis (the third subsection). We have revised lines 186 and 187 to point out the relationship between the two explicitly.\\nIn the Methodology section (Section 3.1), we first describe the model update strategy for debiasing in the first subsection, followed by a discussion of our insights in the second subsection. The final subsection, titled 'Experimental Analysis', presents experimental results that validate these insights.\\nIn Experimental Analysis subsection, we introduce the experimental settings in the first paragraph and use the second and third paragraphs to verify the two core properties underlying the insights.\\nThe structure of the section, with clear subsection titles and introductory sentences, makes it easy for readers to follow. If there are any specific points of confusion, please feel free to let us know, and we will be happy to clarify.\\n\\n> [W2.2] The connection among the four subsections in Section 3.2 is unclear.\\n\\nThank you for your constructive review! The unclarity stems from the lack of a clear guide for the structure of these subsections. We have explicitly explained the relationship between these sections in lines 299 and 300. 
\\nSection 3.2 is organized as follows: we first introduce our motivation in the first subsection, followed by detailed explanations of the codebook and auxiliary head in the next two subsections. Finally, we describe the sample selection operation that leverages both modules in the fourth subsection.\\n\\n> [W2.3] The framework presented in Figure 1 contains excessive details that are not explained in the introduction; these should either be removed or relocated to the methodology section.\\n\\nThe concern stems from two aspects. First, Figure 1 contains excessive details, which create a barrier to reading. Second, Figure 1 was not effectively integrated into the introduction.\\nTo address these issues, we have made two revisions:\\n- We simplify Figure 1 by removing unnecessary elements, making it more intuitive and better aligned with the introduction.\\n- We revise lines 98 and 99 to explicitly guide readers to refer to Figure 1 for a clearer understanding of the content.\"}", "{\"summary\": \"The paper proposes \\\"Jump-teaching,\\\" a novel framework for robust and efficient learning with noisy labels. By introducing a jump-update strategy and a Semantic Loss Decomposition plugin, the method reduces sample selection bias and enhances efficiency. 
Experiments show Jump-teaching improves performance over state-of-the-art methods, particularly under high noise conditions, with notable gains in memory efficiency and processing speed.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Proposes an innovative jump-update strategy that significantly reduces selection bias in single-network training.\", \"Semantic Loss Decomposition provides a lightweight yet effective way to distinguish clean and noisy samples.\", \"Empirically validated with improved accuracy, efficiency, and robustness across various noise levels and datasets.\"], \"weaknesses\": [\"I disagree with the claim that this work is the first to identify disagreements across different iterations within a single network. Prior studies [1][2] have leveraged these disagreements to distinguish clean samples from corrupted data in training sets. I recommend that the authors revise this claim and include comparisons with these two works.\", \"In Figure 2, the authors introduce the IoU metric to measure disagreements. Although an explanation of IoU is provided in the appendix, could the authors illustrate what range of IoU values is considered preferable? Because I notice that the IoU value of Jump-update is between the values of self-update and cross-update, the performance of Jump-update is the best (see Figure 2(c,d)).\", \"Property 1. (1) I have a question regarding the assumption in Property 1, namely that $N_A$ equals $N_{iterations}$. From past experience, the model often generates biased selections initially, then gradually corrects this bias as performance improves, given moderate noise rates (10%, 20%). Therefore, error accumulation may not persist in later iterations. (2) Additionally, the results in Figure 4(a) do not align well with the conclusion of Property 1. 
The highest test accuracy occurs at r = 50% rather than r = 10%.\", \"There are some concerns regarding whether the jump-update is a more effective strategy for selecting clean samples. (1) In Table 4, at typical noise ratios (e.g., CIFAR-10/100 sym. 50%, CIFAR-100 asym. 40%), J-Co-teaching does not outperform standard Co-teaching (2 networks). (2) While non-trivial improvements are observed in Table 1, these gains do not carry over to a semi-supervised learning setting (see Table 5). In some settings, J-DivideMix is worse than DivideMix.\", \"The compared methods in Table 1 are outdated. Comparing with more recent works is necessary; for example, ProMix (IJCAI'23).\", \"[1] Late Stopping: Avoiding Confidently Learning from Mislabeled Examples. ICCV'23\", \"[2] Self-Filtering: A Noise-Aware Sample Selection for Label Noise with Confidence Penalization. ECCV'22\", \"[3] ProMix: Combating Label Noise via Maximizing Clean Sample Utility. IJCAI'23\"], \"questions\": \"see Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ySnh (Part 3)\", \"comment\": \"> [Q1] As indicated in Table 5, the accuracy under 90% symmetric noise on CIFAR-10 exceeds 75%, which corresponds to random labels for the training samples. This scenario can be classified as an unsupervised learning task rather than weakly supervised learning. We need to reconsider the implications of generalization in learning with noisy labels using semi-supervised learning methods, given the lack of supervision.\\n\\nThe question is similar to [W1]. 90% symmetric noise on CIFAR-10 is not a part of unsupervised learning scenarios because the labels of selected samples are not random. 
The details can be found in our response to W1.\nMoreover, the accuracy under 90% symmetric noise on CIFAR-10 exceeds 75%, demonstrating the potential of weakly supervised learning methods under insufficient supervision. However, sample selection remains crucial for separating labeled and unlabeled samples from noisy data.\n\n> [Q2] It seems unreasonable to separate the updates of the neural network parameters in steps 9-10. Combining $L^{BCE}$ and $L^{CE}$ and updating the neural network with respect to the total loss could be more efficient.\n\nThanks for your helpful suggestion. These two steps are indeed updated simultaneously. We revise Algorithm 1 in line 358. We combine the two steps in the code directly and backpropagate them simultaneously, ensuring no impact on efficiency. In the previous version of Algorithm 1, we separated them only for clarity.\"}", "{\"title\": \"Response to All Reviewers (Part 2)\", \"comment\": \"**[Experiments]**\n\nR.ZGND and R.5v7y recommended comparing our work with recent studies. In response, we expand the following experiments to further support our paper:\n- **Supplementing baselines in Table 1.** To compare with the latest method, we include the results of RML [4] in Table 1 to showcase the up-to-date effectiveness of Jump-teaching.\n- **Statistical analysis of labels under 90% symmetric noise.** We conduct experiments to calculate the ratio and number of selected labels under 90% symmetric noise. 
The results confirm that the selected labels are not random, thereby validating the rationality of our experimental setup.\\n- **Conducting additional experiments on real-world datasets.** We conduct further experiments using J-Co-teaching and J-DivideMix on the Clothing1M dataset to demonstrate the reliable performance of the Jump-update strategy in real-world scenarios.\\n\\n[1] Self-Filtering: A Noise-Aware Sample Selection for Label Noise with Confidence Penalization \\\\\\n[2] Late stopping: Avoiding confidently learning from mislabeled examples \\\\\\n[3] Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels \\\\\\n[4] Regroup Median Loss for Combating Label Noise (AAAI'24)\\n\\n\\n**[Presentation]**\", \"we_have_improved_our_presentation_by\": [\"Simplify Figure 1 to better align with the introduction.\", \"Enhance the connection between Figure 1 and the text in lines 98 and 99.\", \"Add necessary explanations, such as disagreement and the notation $I_{detection}$.\", \"Revise Property 1 by incorporating a hypothesis.\", \"Correct a few typos.\"]}", "{\"metareview\": \"This paper proposes a novel technique called Jump-teaching for learning with noisy labels. Specifically, Jump-teaching aims to discover significant disagreements within a single network between different training iterations. Based on this discovery, this paper proposes a jump-manner strategy for model updating to bridge the disagreements. The authors further illustrate the effectiveness from the perspective of error flow. Moreover, Jump-teaching designs a lightweight plugin to simplify selection operations. 
It creates a detailed yet simple loss distribution on an auxiliary encoding space, which helps select clean samples more effectively.\n\nAlthough the authors claim that their method is novel, the reviewers consider that utilizing the disagreements for sample selection across different iterations within a single network or across different networks is not new, so all of them gave negative scores.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers gave this paper negative scores, as they found the idea of the paper, namely utilizing disagreement, not interesting enough. Therefore, I regret that I cannot recommend acceptance of this paper.\"}", "{\"comment\": \"After reading the author's response and other reviewers' comments, I will reduce my initial score.\"}", "{\"summary\": \"This paper proposes the Jump-Teaching methodology for learning with noisy labels.\n\nSpecifically, it investigates an efficient approach that requires only a single network. \n\nTo achieve this, the authors introduce two key techniques: a Jump-update Strategy to mitigate selection bias and Semantic Loss Decomposition to simplify the selection operation.\", \"the_effectiveness_of_the_proposed_approach_is_demonstrated_through_experiments_on_three_benchmark_datasets\": \"CIFAR-10, CIFAR-100, and Clothing-1M.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Research on sample selection methods using a single network to reduce computational costs is an interesting research topic.\", \"weaknesses\": \"The academic novelty of this approach appears limited. It is unclear whether updating the model based on selections from the previous step offers any theoretical advantage over the naive approach of updating the model at every iteration. Additionally, is there theoretical support that using data from the previous step effectively addresses the noisy label problem? 
Moreover, extracting useful information from models at different training epochs has already been extensively explored in the literature. A seminal work in this area, for example, is Snapshot Ensembles: Train 1, get M for free (ICLR \\u201917).\\n\\n\\nThe experimental results are not convincingly state-of-the-art. In particular, several recent relevant papers (a, b, c) are missing from the references. Their result tables show significantly better performance on CIFAR-10 and CIFAR-100 compared to the results presented in this work.\\n \\n(a) Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels (NeurIPS\\u201921)\\n\\n(b) DISC: Learning From Noisy Labels via Dynamic Instance-Specific Selection and Correction (CVPR\\u201923)\\n\\n(c) Sample-wise Label Confidence Incorporation for Learning with Noisy Labels (ICCV\\u201923)\", \"questions\": \"It is not clear why using data selection results from previous iterations for model updates would be beneficial. Specifically, why would the set of samples selected by the model in the previous step yield better results than the samples selected in the current step? Is the main advantage simply that it avoids sequential updates, thereby reducing the amplification of error accumulation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed response, which addresses most of my concerns. Based on the clarifications provided, I am inclined to increase my score to 5. However, I still believe that the paper falls slightly below the acceptance threshold.\"}", "{\"title\": \"Response to All Reviewers (Part 1)\", \"comment\": [\"We sincerely thank all the reviewers for their valuable time and insightful suggestions. In this 'global' rebuttal, we clarify our technical contributions and present new experimental results. 
Furthermore, we will address other concerns in our responses to individual reviews. A revised version of our paper has been uploaded, with all the modifications marked in blue to address the concerns.\", \"**[Novelty]**\", \"R.ySnh and R.ZGND expressed concerns regarding our claim that \\\"We are the first work to discover significant disagreement within a single network,\\\" as prior studies [1, 2] have utilized predictions of different epochs. This concern can be addressed with three key points:\", \"1) The concept of \\\"disagreement\\\" in our paper essentially differs from counterparts in [1, 2] in terms of definition, described objects, application, calculation, and focus.\", \"**Definition.** The definition of \\\"disagreement\\\" differs from related terms such as \\\"Fluctuation\\\" [1] and \\\"First-time k-epoch Learning (FkL)\\\" [2]. As stated in lines 75 to 76, the \\\"disagreement\\\" we refer to is derived from [3]. It denotes the differences in selection behaviors within networks. In contrast, [1] defines a similar concept as \\\"fluctuation,\\\" which refers to a sample being classified correctly at one moment but misclassified in the subsequent learning step. \\\"FkL\\\" in [2] refers to the minimum index of the training epoch at which the instance has been predicted as its given label for $k$ consecutive epochs.\", \"**Described Object.** The object described by \\\"disagreement\\\" is entirely distinct from the others. \\\"Fluctuation\\\" or \\\"FkL\\\" refers to the characteristics of samples, while \\\"disagreement\\\" relates to the characteristics of networks. For example, in [3], the object is two networks, whereas in this paper, the object is the network across different iterations. \\\"Fluctuation\\\" or \\\"FkL\\\" describes variations in judgments of a single sample.\", \"**Application.** \\\"Disagreement\\\" is applied in a completely different way from the others. 
\\\"Disagreement\\\" as a network characteristic, is used to design model update strategies, whereas \\\"fluctuation\\\" and \\\"FkL\\\" as sample characteristics define selection criteria.\", \"**Calculation and Focus.** The calculation of each term differs significantly, which stems from their distinct focus. From the focus perspective, \\\"fluctuation\\\" and \\\"FkL\\\" concern whether a sample is clean, while \\\"disagreement\\\" addresses the potential for correcting selection bias in model updates. Therefore, \\\"Fluctuation\\\" and \\\"FkL\\\" emphasize and calculate consistency between model predictions and labels, whereas \\\"disagreement\\\" is independent of labels and is calculated as the Intersection Over Union (IoU) between selected data sets.\", \"2) The meaning we emphasized through \\\"first\\\" has not been discovered and utilized.\", \"We are the first to discover that disagreement within a single network not only exists but also persists during the training, shown in Figure 2(a)(b).\", \"We are the first to discover that disagreement within a single network is significant, which is even larger than two networks, as shown in Figure 2(a)(b).\", \"We are also the first to quantify and visualize \\\"disagreement\\\" by the IoU metric.\", \"3) We sincerely thank all the reviewers for their dedicated efforts. We have made the following revisions for further clarity:\", \"Revise the claim by emphasizing its beneficial attributes for the mitigation of bias in line 107.\", \"Add the definition of \\\"disagreement\\\" clearly in lines 75 and 76. 
\\\"Disagreement\\\" refers to differences of networks in the selection behaviors.\", \"Add discussions on [1, 2] in Related Work in lines 151 and 152.\", \"Add discussions to compare our paper and the related papers in Appendix 2.\"]}", "{\"summary\": \"To mitigate compounding selection bias and redundant selection operations in existing methods, the authors of this paper propose a novel framework for optimizing the typical workflow of sample selection, called Jump-teaching. Jump-teaching focuses on discovering significant disagreements within a single network between different training iterations by employing a jump-manner strategy for model updating to bridge the disagreements. Besides, Jump-teaching designs a lightweight plugin to simplify selection operations to help select clean samples more effectively. Finally, experimental results on synthetic and real-world noisy datasets, demonstrate the robustness of Jump-teaching.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This idea and motivation for discovering significant similarities within a single network between different training iterations are interesting and fascinating.\\n\\n2. This paper has carried out a lot of formula derivation and proved the effectiveness of the proposed method from the theoretical knowledge level.\\n\\n3. Figures 1, 2, and 3 in this paper simply and clearly express the main ideas and innovations of the paper.\", \"weaknesses\": \"1. Authors need to further provide the results of J-Co-teaching and J-DivideMix on Clothing1M in Table 3 to prove the reliable performance in real scenarios.\\n\\n2. There are errors with the experimental results data. J-Co-teaching does not achieve optimal performance under some settings in Table 4, such as Sym-0.5 of CIFAR-10 and Sym-0.5 and Asym-0.4 of CIFAR100.\\n\\n3. 
The content of the semantic loss decomposition part of this paper is not strongly related to the main motivation of this paper, Jump-update Strategy, and is more like an auxiliary trick.\\n\\n4. There is no clear definition of I_{detection} in Eq. (8).\\n\\n5. The names of the citation methods are not uniform, such as ' JoCoR ' in the relevant work section and ' JoCor ' in the experimental section.\\n\\n6. It is recommended to compare the proposed method with 2024 SOTAs.\", \"questions\": \"See above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ySnh (Part 2)\", \"comment\": [\"> [W3] The authors claim that \\u201cJump-Teaching is the first work to discover significant disagreements within a single network between different training iterations.\\u201d However, the concept of leveraging disagreements across different training iterations has been previously studied (see [1]).\", \"Thank you for your thorough review. Our method differs significantly from others, and there may be a misunderstanding stemming from the similarity in that our paper and [2, 3] both leverage network inferences across different iterations. However, it is not a new thing that predictions change in different iterations. Therefore, we will conduct a detailed and clear exposition to eliminate misunderstandings and avoid ambiguity through a series of revisions.\", \"Although \\\"disagreement\\\" and \\\"fluctuation\\\" are similar in form, they are essentially different:\", \"**Definition.** The definition of \\\"disagreement\\\" is different from \\\"fluctuation\\\". As stated in lines 75 and 76, the \\\"disagreement\\\" we refer to is derived from [1]. It denotes the differences in selection behaviors within networks. 
In contrast, \\\"fluctuation\\\" refers to a sample being classified correctly at one moment but misclassified in the subsequent learning step.\", \"**Described Object.** The object described by \\\"disagreement\\\" is entirely distinct from \\\"fluctuation\\\". \\\"Fluctuation\\\" refers to the characteristics of samples, while \\\"disagreement\\\" relates to the characteristics of networks. For example, in [1], the object is two networks, whereas in this paper, the object is the network across different iterations. \\\"Fluctuation\\\" describes variations in judgments within a single sample.\", \"**Application.** \\\"Disagreement\\\" is applied in a completely different way from \\\"fluctuation\\\". \\\"Disagreement\\\" as a network characteristic, is used to design model update strategies, whereas \\\"fluctuation\\\" as sample characteristics define selection criteria.\", \"**Calculation and Focus.** The calculation of each term differs significantly, which stems from their distinct focus. From the focus perspective, \\\"fluctuation\\\" concerns whether a sample is clean, while \\\"disagreement\\\" addresses the potential for correcting selection bias in model updates. 
Therefore, \\\"Fluctuation\\\" emphasizes and calculates consistency between model predictions and labels, whereas \\\"disagreement\\\" is independent of labels and is calculated as the Intersection Over Union (IoU) between selected data sets.\", \"In addition to the differences between the two in terms of definition, described object, application, computation, and focus, we would also like to emphasize that the original expression in line 106 contains several meanings regarding \\\"first\\\":\", \"We are the first to discover that disagreement within a single network not only exists but also persists during the training, shown in Figure 2(a)(b).\", \"We are the first to discover that disagreement within a single network is significant, which is even larger than two networks, as shown in Figure 2(a)(b).\", \"We are also the first to quantify and visualize \\\"disagreement\\\" by the IoU metric.\", \"We are motivated by this discovery of existence, persistence, and significance to design an efficient and effective jump-update strategy. Before our paper, the disagreement that the field is generally considered only exists in two networks.\", \"We sincerely thank you for your dedicated effort and respect for the great work that has been done in [2]. We have made the following revisions for clarity:\", \"Revise the claim by emphasizing its beneficial attributes for the mitigation of bias in line 107.\", \"Add the definition of \\\"disagreement\\\" clearly in lines 75 and 76. 
\\\"Disagreement\\\" refers to differences of networks in the selection behaviors.\", \"Add discussions on [2,3] in Related Work in lines 51 and 152.\", \"Add discussions to compare our paper and the related papers in Appendix 2.\", \"[1] Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels \\\\\", \"[2] Self-Filtering: A Noise-Aware Sample Selection for Label Noise with Confidence Penalization \\\\\", \"[3] Late stopping: Avoiding confidently learning from mislabeled examples\"]}", "{\"title\": \"Response to Reviewer ZGND (Part 1)\", \"comment\": [\"Thanks for your detailed and thoughtful reviews. We have clarified the novelty of our method in [Novelty]. Furthermore, we add new experiments in [Experiments] and improve the presentation in [Presentation]. Now we would like to address your concerns in detail.\", \"> [W1] I disagree with the claim that this work is the first to identify disagreements across different iterations within a single network. Prior studies [1][2] have leveraged these disagreements to distinguish clean samples from corrupted data in training sets. I recommend that the authors revise this claim and include comparisons with these two works.\", \"Thank you for your constructive feedback. We agree that prior studies have leveraged predictions of different iterations to distinguish noisy samples. However, our claim is accurate because the concept of \\\"disagreement\\\" in our paper essentially differs from counterparts in [1 2] in terms of definition, described objects, application, calculation, and focus. We would like to provide a detailed comparison to clarify your concerns.\", \"**Definition.** The definition of \\\"disagreement\\\" is different from these terms, such as \\\"Fluctuation\\\" [1] and \\\"First-time k-epoch Learning (FkL)\\\" [2]. As stated in lines 75 to 76, the \\\"disagreement\\\" we refer to is derived from [3]. It denotes the differences in selection behaviors within networks. 
In contrast, [1] defines a similar concept as \\\"fluctuation,\\\" which refers to a sample being classified correctly at one moment but misclassified in the subsequent learning step. \\\"FkL\\\" in [2] refers to the minimum index of the training epoch that the instance has been predicted to its given label for $k$ consecutive epochs.\", \"**Described Object.** The object described by \\\"disagreement\\\" is entirely distinct from others. \\\"Fluctuation\\\" or \\\"FkL\\\" refers to the characteristics of samples, while \\\"disagreement\\\" relates to the characteristics of networks. For example, in [3], the object is two networks, whereas in this paper, the object is the network across different iterations. \\\"Fluctuation\\\" or \\\"FkL\\\" describes variations in judgments within a single sample.\", \"**Application.** \\\"Disagreement\\\" is applied in a completely different way from others. \\\"Disagreement\\\" as a network characteristic, is used to design model update strategies, whereas \\\"fluctuation\\\" and \\\"FkL\\\" as sample characteristics define selection criteria.\", \"**Calculation and Focus.** The calculation of each term differs significantly, which stems from their distinct focus. From the focus perspective, \\\"fluctuation\\\" and \\\"FkL\\\" concern whether a sample is clean, while \\\"disagreement\\\" addresses the potential for correcting selection bias in model updates. 
Therefore, \\\"Fluctuation\\\" and \\\"FkL\\\" emphasize and calculate consistency between model predictions and labels, whereas \\\"disagreement\\\" is independent of labels and is calculated as the Intersection Over Union (IoU) between selected data sets.\", \"Moreover, the meaning we emphasized through \\\"first\\\" in line 106 has not been discovered and utilized.\", \"We are the first to discover that disagreement within a single network not only exists but also persists during the training, shown in Figure 2(a)(b).\", \"We are the first to discover that disagreement within a single network is significant, which is even larger than two networks, as shown in Figure 2(a)(b).\", \"We are also the first to quantify and visualize \\\"disagreement\\\" by the IoU metric.\", \"We sincerely thank you for your dedicated effort and important suggestions! We have made the following revisions for clarity:\", \"Revise the claim by emphasizing its beneficial attributes for the mitigation of bias in line 107.\", \"Add the definition of \\\"disagreement\\\" clearly in lines 75 and 76. \\\"Disagreement\\\" refers to differences of networks in the selection behaviors.\", \"Add discussions on [1 2] in Related Work in lines 151 and 152.\", \"Add discussions to compare our paper and the related papers in Appendix 2.\", \"[1] Self-Filtering: A Noise-Aware Sample Selection for Label Noise with Confidence Penalization \\\\\", \"[2] Late stopping: Avoiding confidently learning from mislabeled examples \\\\\", \"[3] Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels\"]}", "{\"title\": \"Response to Reviewer ZGND (Part 2)\", \"comment\": \"> [W2] In Figure 2, the authors introduce the IoU metric to measure disagreements. Although an explanation of IoU is provided in the appendix, could the authors illustrate what range of IoU values is considered preferable? 
Because I notice that the IoU value of Jump-update is between the values of self-update and cross-update, the performance of Jump-update is the best (see Figure 2(c,d)).\\n\\nThank you for your response. It is both interesting and beneficial to discuss this observation. The pattern should be the smaller the IoU values (the larger the disagreement), the better the performance. \\n1. However, as you noticed, the performance of Self-update is poorer than Jump-update and Cross-update despite the smallest IoU values, which does not fit this pattern. This is because the pattern implicitly assumes that the disagreement is exploited, but Self-update does not utilize this advantage. \\n2. Moreover, Jump-update and Self-update both show very small IoU values, with Jump-update slightly larger than Self-update. This is because the Jump-update strategy bridges disagreement, so the IoU values increase.\\nAppendix 1 has been supplemented by focusing on the analysis of the relationship between IoU and performance in three strategies.\\n\\n> [W3] Property 1. (1) I have a question regarding the assumption in Property 1, namely that NA equals Niterations. From past experience, the model often generates biased selections initially, then gradually corrects this bias as performance improves, given moderate noise rates (10%, 20%). Therefore, error accumulation may not persist in later iterations. (2) Additionally, the results in Figure 4(a) do not align well with the conclusion of Property 1. The highest test accuracy occurs at r = 50% rather than r = 10%.\\n\\nThanks for your positive and thoughtful suggestion.\\n\\nFor (1), the reviewer pointed out that the bias may also be corrected by performance improvement. There is a possibility of that occurring. Moreover, Stochastic Gradient Descent could also lead to $N_A$ not being equal to $N_{iterations}$. Therefore, we revised the paper by adding a hypothesis that the error flow is an uninterrupted model in lines 227 and 768. 
The detailed proof that $N_A$ equals $N_{iterations}$ can be found in Appendix 4. Notably, $N_A$ equals $N_{iterations}$ under the ideal condition; when we consider the two factors above, Property 1 still holds because $N_A$ is still proportional to $N_{iterations}$, considering that corrections caused by either performance improvement or SGD are random. \\n\\nFor (2), this observation is correct. This issue arises from the fact that accuracy, as the experimental result, cannot be fully aligned with the accumulated error $D_A$. Also, it is affected by other factors in the experiment. As illustrated in lines 796 and 797, if the model cannot be trained quickly, performance improvements will be delayed. What makes things worse is that the selection does not become accurate immediately, leading to a long-term fit to noise. This may explain why the highest test accuracy occurs at $r = 50\\\\%$ rather than $r = 10\\\\%$. \\n\\nRegarding Property 1, this conclusion remains intuitive. Considering other factors, the accuracy can roughly reflect $D_A$ with fluctuations. Moreover, both $r = 10\\\\%$ and $r = 50\\\\%$ show significant improvements in accuracy, effectively validating Property 1.\"}", "{\"title\": \"Response to Reviewer ZGND (Part 3)\", \"comment\": \"> [W4] There are some concerns regarding whether the jump-update is a more effective strategy for selecting clean samples. (1) In Table 4, at typical noise ratios (e.g., CIFAR-10/100 sym. 50%, CIFAR-100 asym. 40%), J-Co-teaching does not outperform standard Co-teaching (2 networks). (2) While non-trivial improvements are observed in Table 1, these gains do not carry over to a semi-supervised learning setting (see Table 5). In some settings, J-DivideMix is worse than DivideMix.\\n\\nThank you for your concerns. 
Overall, the jump-update strategy is more effective and efficient, especially in extreme noise settings.\\n- For (1)\\n - Under typical noise levels, the performance of the jump-update strategy is comparable to that of co-teaching. Both methods exhibit higher accuracy in certain noise scenarios, with the accuracy difference not exceeding one point. (In this case, different seeds might lead to varying results.) Under extreme noise (noise rate = 0.8, 0.9), it significantly outperforms co-teaching. \\n - The jump-update strategy employs only a single network, yielding higher efficiency.\\n- For (2)\\n - In most scenarios, J-DivideMix outperforms DivideMix. Furthermore, under all extreme noise conditions (90% symmetric noise), its performance is significantly superior to that of DivideMix. As a model update strategy, J-DivideMix aims to minimize selection bias, achieving optimal results even in extreme noise situations. Regarding the integration with semi-supervised methods, this remains a long-term challenge. It involves not only the effective use of unlabeled samples but also balancing efficiency and robustness. In this paper, we primarily focus on supervised-only scenarios. We are committed to making dedicated efforts in the future to address these limitations.\\n\\n\\n> [W5] The compared methods in Table 1 are outdated. Comparing with more recent works is necessary; for example, ProMix (IJCAI'23).\\n\\nThank you for your insightful advice. We have replicated and supplemented the results of the latest sample selection method [1], which are presented in Table 1 of the revised paper. Although the latest method employs loss estimation to further protect the model from noisy samples, it cannot mitigate the compounding selection bias. Therefore, Jump-teaching still achieves state-of-the-art performance in all the noise settings. As shown in Table 1, Jump-teaching achieves a remarkable 48.8% higher accuracy in Sym. 
$\\epsilon = 0.8$ on CIFAR-10 and a 24.3% higher accuracy in Sym. $\\epsilon = 0.8$ on CIFAR-100. Table 1 shows the comparison results in supervised-only scenarios; therefore, we do not include ProMix.\\n\\n[1] Regroup Median Loss for Combating Label Noise (AAAI'24)\"}", "{\"title\": \"Response to Reviewer FvYm (part 2)\", \"comment\": \"> [W2] The experimental results are not convincingly state-of-the-art. In particular, several recent relevant papers (a, b, c) are missing from the references. Their result tables show significantly better performance on CIFAR-10 and CIFAR-100 compared to the results presented in this work.\\n(a) Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels (NeurIPS\\u201921)\\n(b) DISC: Learning From Noisy Labels via Dynamic Instance-Specific Selection and Correction (CVPR\\u201923)\\n(c) Sample-wise Label Confidence Incorporation for Learning with Noisy Labels (ICCV\\u201923)\\n\\nThank you for the comment, but we cannot fully agree with it because the **experimental setup** and **the categories** of the referred methods differ from those in our paper. \\n\\n1) It is inappropriate to directly compare the values across different tables, as they are based on different experimental settings. \\n - Both (a) and (c) use PreActResNet-34 with 400 epochs, while Jump-teaching uses PreActResNet-18 with only 200 epochs.\\n - Both (a) and (c) utilize a hyperparameter optimization budget and mechanism, whereas we use fixed hyperparameters.\\n\\nConsequently, even under identical noise conditions (e.g., symmetric noise of 0.8), their baseline CE implementation achieves a higher accuracy of $39.2\\\\%$, which reflects the advantages of their setup rather than inherent methodological superiority.\\n\\n2) It is also inappropriate to compare methods that belong to different categories. Specifically, (a) and (c) are noise-robust loss methods, whereas (b) and (c) employ label correction techniques. 
Therefore, (a), (b), and (c) all benefit from additional supervision provided by corrupted noise samples. In contrast, Jump-teaching relies solely on clean samples.\\n\\nIf possible, we really recommend reviewing the experiment again and taking care of the experiment setting. After that, we hope the reviewer recognizes the value of our work as demonstrated by the experiments. We have not blindly integrated costly tricks to improve performance, such as double-cost view augmentation, semi-supervised technology, and mature noise-robust functions. Instead, we focus on optimizing sample selection while also prioritizing training and storage efficiency, which we believe could offer significant value to the community. As shown in Tables 1, 2, and 3, Jump-teaching has achieved state-of-the-art (SOTA) performance in both efficiency and sample selection accuracy, demonstrating its effectiveness in distinguishing noisy samples from clean ones. This provides a solid foundation for future integration with semi-supervised methods such as [1, 2, 3].\\n\\n[1] FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence \\\\\\n[2] MixMatch: A Holistic Approach to Semi-Supervised Learning \\\\\\n[3] FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling \\n\\n> [Q1.1] It is not clear why using data selection results from previous iterations for model updates would be beneficial. Specifically, why would the set of samples selected by the model in the previous step yield better results than the samples selected in the current step? [Q1.2] Is the main advantage simply that it avoids sequential updates, thereby reducing the amplification of error accumulation?\\n\\nThis question partially overlaps with [W1]. \\n\\n**Q1.1:** The set of samples selected in the previous step can't yield better results than the samples selected in the current step. 
The better results are attributed to the mitigation of selection bias.\\n\\n**Q1.2:** The selection bias leads to accumulated error in an error flow, the jump-update strategy has an advantage in splitting the sequential error flow into error sub-flows, leading to a smaller degree of accumulated error. Therefore, it mitigates the bias and achieves better results. Notably, the jump-update strategy still enables the network to update sequentially. Regarding accumulated error, our contributions can be summarized as follows:\\n- We identify that error accumulation adversely affects performance and formalize the accumulation procedure mathematically.\\n- Based on the formulation, we provide some verified methods to improve performance, such as reducing updating frequency in line 268 and avoiding initial bias in Appendix 6. \\n- Jump-update strategy offers a novel way to reduce error accumulations by splitting error flows. It not only guarantees sufficient updating frequency but also reduces the number of accumulations to a small magnitude.\\n\\n[1] Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels\"}", "{\"title\": \"Response to Reviewer 5v7y\", \"comment\": \"Thank you for your feedback. We believe our response adequately addressed the key points the reviewers raised. However, it is confusing that the score was reduced without specific reasons outlining the shortcomings of our paper. We remain open to further discussion and would greatly appreciate it if you could elaborate on any specific concerns to help us better address them.\"}" ] }
DexGnh0EcB
MathEval: A Comprehensive Benchmark for Evaluating Large Language Models on Mathematical Reasoning Capabilities
[ "Zitao Liu", "Tianqiao Liu", "Zui Chen", "ZhenshengFang", "Mi Tian", "Weiqi Luo" ]
Mathematical reasoning is a fundamental aspect of intelligence, encompassing a spectrum from basic arithmetic to intricate problem-solving. Recent investigations into the mathematical abilities of large language models (LLMs) have yielded inconsistent and incomplete assessments. In response, we introduce MathEval, a comprehensive benchmark designed to methodically evaluate the mathematical problem-solving proficiency of LLMs across varied contexts, adaptation strategies, and evaluation metrics. MathEval amalgamates 19 datasets, spanning an array of mathematical domains, languages, problem types, and difficulty levels, from elementary to advanced. This diverse collection facilitates a thorough appraisal of LLM performance and is stratified by language (English and Chinese), problem category (arithmetic, competitive mathematics, and higher mathematics), and difficulty. To overcome the challenges of standardizing responses across diverse models and prompts, we've developed an automated LLM-driven pipeline for answer extraction and comparison, ensuring consistent evaluation criteria. To broaden the utility of MathEval beyond the scope of GPT-4, we have harnessed the extensive results from GPT-4 to train a deepseek-7B-based answer comparison model, enabling precise answer validation for those without access to GPT-4. This model will also be made publicly available. MathEval not only assesses mathematical proficiency but also introduces a method to identify potential data contamination within pre-training datasets. This is done by hypothesizing that enhancements in one mathematical dataset should be mirrored by advancements in correlated datasets, thus signaling potential contamination—like the inadvertent inclusion of test data in the pre-training phase. 
To mitigate this and truly gauge progress, MathEval incorporates an annually refreshed set of problems from the latest Chinese National College Entrance Examination (Gaokao 2023), thereby benchmarking genuine advancements in mathematical problem solving skills. MathEval strives to refine the assessment of Large Language Models' (LLMs) capabilities in mathematics.
[ "Mathematical reasoning benchmark", "Adaptation strategies", "Cross-lingual assessment" ]
https://openreview.net/pdf?id=DexGnh0EcB
https://openreview.net/forum?id=DexGnh0EcB
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxJ2rsmeew", "wY3dRbS8Nh", "vbPHgivjvx", "sdjfeooVUw", "rw91JzEiza", "rEHSbL1gdA", "qwApRjbPtp", "nwTOLAuzNy", "ks0odWeVUY", "juwqIjk705", "hJssOFIxAt", "fhrgMGt7xV", "apxrSkDxU5", "aBIRI2cbMJ", "X2rzDOV4Pk", "UVB8heoBGA", "MDVf9Dkgbv", "GNdetaIKPG", "G45VpwSgh8", "CbI4DQ3tyM", "CIoGmZZW6z", "8fFShtq0i3", "6jJgMZrLYo", "5O2pN9etwQ", "42CKwUiohW", "3aWvw0pE7o", "2zfDum4GVZ" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730692490378, 1731665564244, 1732098772487, 1730720691583, 1731665508579, 1731089593151, 1737596392226, 1733119547057, 1732107465701, 1732812266174, 1732815063098, 1732814234476, 1732628154957, 1732602081505, 1733158925266, 1731668641726, 1733212952666, 1732812954340, 1733119571681, 1730708646356, 1733054391372, 1732603487200, 1732087263256, 1733166259740, 1730649130627, 1732813480533, 1733166240271 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_kkq7" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_7NzK" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_Nkro" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_kkq7" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_BH88" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_BH88" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_Nkro" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_1zuC" ], [ "ICLR.cc/2025/Conference/Submission6578/Authors" ], [ "ICLR.cc/2025/Conference/Submission6578/Reviewer_Nkro" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces MathEval, a unified evaluation suite for mathematical reasoning of LLMs. It puts together 22 datasets, and provides a unified method to extract and score answers given model responses. The base answer extraction method in MathEval uses GPT-4, but the authors provide a fine-tuned DeepSeek-Math 7B model just for the task of answer extraction. Authors provide experiments on 52 closed and open models, with Claude 3.5 Sonnet performing best overall in both English and Chinese.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper is easy to read\", \"The unified dataset is likely to be useful for the community. Answer extraction is indeed a pain for math evaluations, and it's useful to have a standardized method / model for this. I believe people would use this.\"], \"weaknesses\": [\"While I do think the effort behind MathEval is useful, I'm not sure if it provides sufficient insight or novelty for an ICLR research paper. 
I did not gain any insights from the paper, and by construction it is composed of existing evaluations of reasoning capabilities.\", \"It's unclear to me how much overlap there is in the datasets in terms of distribution of problems. Besides the multilinguality, I'm not sure the datasets are measuring fundamentally different things.\", \"*\"], \"questions\": \"* Have the authors looked at correlations between performance on all the 22 datasets?\\n** If this correlation is high (which I'd suppose it is), what is the gain in using so many datasets, since their performance can be predicted from one another?\\n** If this correlation is low between some pairs, what orthogonal abilities might they be measuring?\\n* What insights did the authors derive from running all the evaluations that would not be obvious from just a small subset of them (e.g. one from each category in Fig 2)?\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": [\"Missing details on how the human annotators were recruited / compensated. It's unclear if they're authors of the paper, or were hired.\", \"If the data of all the datasets is to be released in MathEval, it should be checked that their license allows this.\"], \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Evaluation and Algorithmic Details\\n\\n**Q: Details of Calculation Scheduling and details of the Evaluation Pipeline.**\\n\\nWe will include the details of the calculation schedule in the appendix. This will cover dynamic dataset partitioning and automatic GPU allocation. 
For the evaluation pipeline, we have prepared our [anonymous github](https://anonymous.4open.science/r/MathEval-505B/README.md), which we believe will address these issues comprehensively.\\n\\n**Q: Training Details for the DeepSeek-7B Model**\\n\\nIn the final version, we will provide more information about the training process for the DeepSeek-7B model. Due to anonymity concerns, we are unable to share our Hugging Face repository that contains all the details related to the training of the compare-answer model. However, we used a straightforward supervised fine-tuning (SFT) approach with the standard language-modeling loss.\\n\\n## Model and Language Choices\\n\\n**Q: Using Claude rather than GPT-4 for the LLM-based evaluation and limitations of using GPT-4.**\\n\\nWe appreciate the reviewer's concern regarding our reliance on the GPT-4 model for answer comparison. Firstly, we would like to clarify our choice of GPT-4 over Claude-3.5. Our company has a collaborative agreement with the GPT-4 supplier, which grants us more efficient and cost-effective access to GPT-4, enabling extensive parallel processing capabilities. Our empirical evaluations suggest that the performance differences between GPT-4 and Claude-3.5 in answer verification tasks are minimal.\\n\\nWe acknowledge that even the most advanced models, such as GPT-4 or Claude-3.5, do not yet match human performance comprehensively. Another limitation of using GPT-4 is its high cost. With over 70,000 questions across 52 models, if each question averages 1000 tokens, this amounts to approximately 3.64 billion tokens, resulting in significant expenses.\\n\\n**Q: Choice of Chinese as the Second Language to Consider.**\\n\\nWe chose Chinese as the second language for several reasons. First, Chinese is extensively used worldwide, and there is a wealth of rich and representative math data available in Chinese. 
Second, a significant portion of the large language models we evaluate are trained by Chinese companies. In our view, analyzing Chinese does not offer a unique advantage over other languages. We selected Chinese based on the availability of data and the variety of models.\"}", "{\"comment\": \"Firstly, we would like to express our sincere gratitude to the reviewers for their insightful feedback.\\n\\n### Previous Work on Using Open Source LLMs in Grading Frameworks:\\n\\nWe thank the reviewer for highlighting the relevant work MathCAMPS (https://arxiv.org/pdf/2407.00900), which uses large language models for problem rephrasing and symbolic solvers for answer verification. While MathCAMPS focuses on generating diverse datasets, our approach stands out by addressing more natural and realistic problems across a range of difficulty levels from elementary to high school. We appreciate the reviewer's recognition of our efforts in fine-tuning a specific model for the answer comparison task.\\n\\n### Well-Known Findings Discussion:\\n\\nWe recognize that some of our discussions cover well-known findings within the domain. Due to space constraints, a more extensive discussion is included in Appendix E (Lines 824-952), where we conduct a detailed analysis that may offer more valuable insights for researchers than the content in the main text. We summarize the content as follows:\\n\\n- **Language Dimension:** Models generally exhibit stronger mathematical performance in English than in Chinese, especially at the primary school level. This disparity is attributed to primary school math problems requiring more language comprehension, and to models trained predominantly on English datasets lacking sufficient exposure to Chinese mathematical problems. 
Models developed by Chinese companies (e.g., WenXin 4.0, Spark-3.5) perform better in Chinese due to their training data, while those from English-speaking countries excel in English.\\n\\n- **Impact of Specialized Fine-Tuning:** Fine-tuning models with specialized mathematical data (e.g., MAmmoTH-70B, MetaMath-70B) significantly enhances their problem-solving abilities, highlighting the importance of domain-specific fine-tuning in boosting performance beyond specific datasets.\\n\\n- **Grade Level Dimension:** Models generally perform better on primary school math problems than on high school ones due to difficulty differences. Models like Claude-3.5-Sonnet and Gemini-1.5-Pro excel at the primary level, suggesting strong language comprehension that aids in solving word problems. In contrast, models like Llemma-7B and Llemma-34B show less pronounced advantages, possibly because their training focuses on complex concepts relevant to higher grades.\\n\\n- **Consistency Across Dimensions:** Models often exhibit consistent performance within the same dimension; strong performance in one language or grade level typically correlates with similar performance in related areas. Evaluating models across different dimensions is crucial for identifying specific strengths, weaknesses, and potential data contamination. Significant discrepancies\\u2014such as exceptional performance at one grade level but poor performance in other tasks\\u2014may indicate contamination with data from that particular grade level.\\n\\nWe believe these detailed analyses provide a deeper understanding and help the community make informed decisions on model selection based on specific tasks.\\n\\nMoreover, on page 29 of the appendix, in Figure 13, we present potential test set contamination for the evaluated model.\\n\\nIn the **Upper Chart**, we have three types of bars:\\n\\n- Chinese Subsets Rank (Blue Bars): This indicates how each model ranks specifically within Chinese mathematical datasets. 
A smaller rank indicates better performance.\\n\\n- Gaokao-2023 Rank Increase (Orange Bars): Represents increases in the rank of models when evaluated using Gaokao-2023 tests. A larger increase in rank signifies poorer performance on Gaokao-2023.\\n\\n- Gaokao-2023 Rank Decrease (Green Bars): Represents decreases in the rank of models when evaluated using Gaokao-2023 tests. A larger decrease in rank signifies better performance on Gaokao-2023.\\n\\nIn the **Lower Chart**, we have the same three types of bars as in the upper chart; however, the blue bars represent the overall average score across 22 datasets.\\n\\nWe believe that Gaokao-2023 is not contained in the training data of any model. Thus, if a model performs very well on the blue bars but poorly on Gaokao-2023, this may indicate potential test data contamination.\\n\\nFrom this figure, we can detect potential data contamination in the results. In the upper chart, the top two models showing an increase in rank are chatglm3-6b and Baichuan2-13b. Most of the Qwen-series models also have orange bars, indicating potential data contamination for these models. This conclusion is supported by findings in the paper \\\"Compression Represents Intelligence Linearly\\\". Furthermore, most base models exhibit green bars, indicating improved rankings on the Gaokao-2023 dataset. 
This suggests that chat models are more likely to have encountered similar math word problems during the instruction fine-tuning stage, increasing the probability of data contamination.\"}", "{\"summary\": \"MathEval represents a comprehensive benchmark designed to rigorously evaluate the mathematical problem-solving skills of large language models (LLMs). The benchmark is stratified by language, problem category, and difficulty, ensuring a comprehensive evaluation. To standardize responses and ensure consistent evaluation criteria, an automated LLM-driven pipeline for answer extraction and comparison has been developed. MathEval provides an extensive benchmark that includes a diverse array of mathematical problems across different types and difficulty levels. MathEval has developed a standardized method for comparing answers that effectively addresses the complexities associated with outputs from mathematical word problems (MWPs). MathEval implements a strategy of using a dynamically updated dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This proposed framework is really useful and provides a more general evaluation standard for the evaluation of mathematical reasoning ability of subsequent large language models. And the problems it can solve are also diversified.\", \"weaknesses\": \"1. For a more mathematical process, it seems that a better evaluation standard cannot be given;\\n2. For the multi-step mathematical reasoning process, it seems impossible to evaluate whether this process is moving towards the expected solution process.\\n3. The classification of mathematical reasoning evaluated in the experiment is still a little rough.\", \"questions\": \"1. Can this framework evaluate the reasoning process? 
Because sometimes the results are the same, the complexity of the reasoning process is also a point that needs to be considered in the reasoning evaluation of large language models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Figure-Related Issues\\n**Q: Difficulty in Reading Figures Due to Acronyms, Color Scheme, and Flow of Components**\\n\\nThank you for your insightful suggestions regarding Figures 2, 3, and 4. We agree that these improvements will significantly enhance the clarity and accessibility of our figures. Specifically:\\n\\n- For Figure 2, we added intuitive icons to represent different languages and grade levels. Specifically, we introduced language icons for various languages and used school building icons to distinguish between different grade segments. This visual enhancement improves the readability and intuitiveness of the chart.\\n- Regarding Figure 3, we optimized the color scheme. We changed the previously hard-to-distinguish light green text to a prominent orange, while avoiding red-green combinations to ensure that individuals with color blindness can also read it easily.\\n- For Figure 4, we added arrows pointing from parts a and b to part c, indicating that a portion of the training data for the compare-model was extracted and validated from the model's output as well as GPT-4\\u2019s answers. This improvement illustrates the logical relationships and information flow between the components, helping readers better understand the structure and workflow of the entire framework.\\n\\nThese changes have been made in the revised version.\\n\\n## Dataset and Difficulty Settings\\n**Q: Focus on K-12 Levels and Lack of Higher-Level Mathematics**\\n\\nOur current focus on K-12 education levels stems from their broader applicability to our user base and the availability of extensive datasets within this range. 
However, we recognize that incorporating higher-level mathematics, such as undergraduate topics (College Math) and competition math (e.g., PutnamBench), would offer deeper insights into the models' capabilities across varying difficulty levels. We are actively working towards including these more challenging problems in future iterations of MathEval.\\nMathEval is actively maintained. Two months ago, we identified a gap in our benchmarks for competition-level problems, so we integrated OlympiadBench into our dataset collection. Since then, we have continued to expand our datasets and plan to include additional ones over time. Furthermore, we are preparing to encompass multimodal mathematical evaluations. To this end, we have already collected multimodal datasets, including MATHVISTA and MATHVERSE.\\n\\nWe have added an explanation in the Appendix, Lines 684-688.\\n\\n**Q: Inclusion of Dataset Summary in the Main Text**\\n\\nWe will strive to include a dataset summary in the main text in subsequent versions of the paper.\\n\\n**Q: Adding More Middle School Datasets**\\n\\nWe acknowledge that currently, Arith3K is the only middle school level dataset in MathEval. To achieve a balanced and diverse collection, we have recently collected the Zhongkao-2023 and Zhongkao-2024 datasets to compensate for the lack of middle school data. These datasets are derived from the Beijing High School Entrance Examination, which corresponds to the middle school level in China. Incorporating these datasets will enhance the diversity of MathEval and allow us to better evaluate models on mathematical abilities pertinent to middle school education.\\n\\n**Q: Details on Automatic Dataset Updates**\\n\\nWe understand this to refer to our annual updates of the GAOKAO series datasets. As described in Appendix B.2 (line 680), these datasets are sourced from China's National College Entrance Examination (Gaokao). 
We have specialized educators who input the exam questions into our system shortly after the exam papers are released each year. By updating the datasets annually, we ensure that MathEval includes the most recent exam questions, which helps us evaluate models on fresh data and avoid potential data contamination from models that may have been trained on older test sets.\\n\\nWe have added a footnote on Page 15 to indicate that the dataset will be updated in our GitHub repo.\\n\\n**Q: Generation of \\\"Ours\\\" Labelled Parts in Table 2**\\n\\nWe have included the details of the generation of our datasets, such as SCQ-EN-5K, SCQ-CH-5K, GAOKAO, and Arith3K in Appendix B.1. To provide additional clarity, we have also shared the corresponding [GitHub](https://anonymous.4open.science/r/MathEval-505B/README.md) repository, where we offer more information about the dataset creation process.\"}", "{\"summary\": \"MathEval is an extensive benchmark that includes various math scenarios, different types of prompts, and LLM-based evaluation. This solves the problem of incomprehensiveness, inadequate adaptation, and inconsistency. MathEval incorporates 22 datasets in English and Chinese. MathEval also has dynamically updated content to prevent data contamination. In addition, it uses a robust evaluation framework, leveraging GPT-4 for automated comparison. It also uses its findings to fine-tune a DeepSeek-7B-based model and validates evaluation accuracy through human annotation. It also evaluates 52 models across three categories and provides a comparative analysis.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Overall, much of what I mentioned in the summary is a strength of the paper, but here are some more specific strengths I have noticed:\", \"I very much appreciate how clear the introduction is. 
The distinction between the three primary issues in evaluating the mathematical reasoning capabilities of LLMs is precise and clean.\", \"The experiments are very thorough, evaluating 52 models across 22 datasets.\", \"I appreciated the use of the human annotation process to demonstrate precision.\", \"The two-stage evaluation approach combining GPT-4 and regex methods is great.\", \"I think the flexible prompt adaptation strategies for different model types are very useful for future research.\", \"Overall, this paper systematically addresses the challenge of mathematical answer comparison.\"], \"weaknesses\": [\"Here is a list of areas for improvement:\", \"I find it a bit difficult to read Figure 2 due to the acronyms. You could use icons for each category (e.g. flag icons for languages, school building icons for grade levels, etc.) or use a combination of icons and abbreviated text.\", \"Although Figure 3 is very informative, I find it a bit difficult to read the light green text on the gray background. I recommend updating the color scheme to dark text on light background or increasing the font size of the text within the boxes. You might also want to use a colorblind-friendly palette to ensure accessibility.\", \"For Figure 4, am I correct in saying that part c uses the results from parts a and b? If so, I recommend including arrows from parts a and b to part c. This would help understand the flow of the components and the overall framework.\", \"Although it is great that MathEval has math problems from multiple levels of education, I am curious about why you don\u2019t have more college level mathematics, at least at the undergraduate level. Ideally, this includes both academic math, such as real analysis, and competition math, such as the Putnam competition. I think this would provide more insights into how different models fare on different difficulty levels. 
Can you explain your rationale for focusing on K-12 levels and discuss the potential benefits and challenges of incorporating higher-level mathematics in future iterations of MathEval?\", \"I like how you mention details on what datasets are included in MathEval in the appendix. However, I think it would be beneficial to include a more detailed summary in the main paper. Otherwise, the reader may be left wondering what kind of data is in MathEval and how difficult it really is. I suggest including a brief summary table or paragraph in the main text that outlines key characteristics of the datasets (e.g. number of problems, difficulty levels, main mathematical concepts covered).\", \"You mention that you use calculation scheduling. However, I recommend including in the Appendix more details of the algorithm used and why it was chosen.\", \"I recommend including more middle school datasets (only Arith3K exists in MathEval at this point) for balance.\", \"Can you add some implementation details of the evaluation pipeline? This would aid reproducibility.\", \"I think the training details for the DeepSeek-7B comparison model could be more thorough.\", \"Can you add more details on how the dataset is automatically updated?\", \"Can you include more details on how you generated the parts of MathEval that are labeled with \u201c(ours)\u201d in Table 2?\"], \"questions\": [\"Would you have significantly different results if you used Claude rather than GPT-4 for the LLM-based evaluation? What are the limitations of using GPT-4 as an evaluator?\", \"Why did you choose Chinese as the second language to consider? Why not consider other languages? What unique advantage, if any, does analyzing Chinese language performance over other languages bring?\", \"What are the failure modes or error patterns that consistently appear because of dataset construction? 
- - What are some limitations?\", \"Why do chat models consistently outperform base models in these tasks?\", \"What types of mathematical reasoning are not currently captured by the benchmark?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Have we addressed your concern\", \"comment\": \"Thank you very much for your thorough review. As the discussion period is nearing its conclusion, I wanted to follow up to ensure you\\u2019ve had the opportunity to review our detailed rebuttal. Given the additional explanations and adjustments we've incorporated, we would greatly appreciate your feedback on whether our responses have satisfactorily addressed your concerns.\\n\\nThank you once again for your time and thoughtful review. We look forward to your response.\"}", "{\"comment\": \"## About Question1:\\n\\nWe appreciate the suggestion to use a regex-based extraction as a first step to minimize compute costs. While this approach could potentially reduce the number of instances where a more complex model like GPT-4 is needed, our experiments indicate that relying solely on regex extraction can lead to potential issues, especially for certain types of tasks.\\n\\nAs shown in Figure 14 of our paper, the regex-based method performs poorly on datasets like MathQA and MATH401, which involve mathematical word problems and arithmetic problems, respectively. In these cases, the complexity of the language and the variety of valid answer formats often lead to incorrect extractions or false positives\\u2014where an incorrect answer might be mistakenly marked as correct due to misextraction. 
This undermines the fairness and accuracy of the benchmark evaluation.\\n\\nTo address both the computational cost and the need for reliable verification across diverse tasks, we have developed a compare-answer model with only 7 billion parameters. This model strikes a balance by significantly reducing the computational overhead compared to GPT-4 while providing more consistent and accurate answer verification than regex-based methods.\\n\\n## About Question2:\\n\\nYour understanding is correct. The dataset-level higher setting is defined as selecting the higher accuracy between few-shot and zero-shot prompting. Our intention in highlighting this result was to emphasize the fairness and robustness of our evaluation. Evaluating a model using only few-shot or only zero-shot prompting can lead to an underestimation of the model's true capabilities on certain tasks. By considering the higher accuracy from either prompting strategy on a per-dataset basis, we provide a more balanced and fair assessment of the model's performance.\\n\\n## About Question3:\\n\\nWe thank the reviewer for pointing out this typo; we will fix it in the revised version of the paper.\"}", "{\"comment\": \"Firstly, we would like to express our sincere gratitude to the reviewers for their insightful feedback.\\n\\n### Q1: The contribution is limited to the aggregation of benchmarks (some of them recently introduced with less contamination, but not evaluated separately) and automated grading.\\n**A1**: MathEval offers a Comprehensive Evaluation Suite, which fills a crucial gap by providing a unified and thorough benchmark specifically designed to evaluate the mathematical reasoning capabilities of large language models (LLMs).\\n\\n The significance of this contribution seems to have been overlooked, and we aim to clarify it. As shown in Figure 1, MathEval's Comprehensive Evaluation Suite consists of three parts: Math Scenarios, Prompt Adaptation, and LLM-based Evaluation. 
These correspond to the three issues identified in the Introduction: **incomprehensiveness, inadequate adaptation, and inconsistency**. Specifically: (1) Math Scenarios, which encompass languages (Chinese and English), problem types (arithmetic and math word problems), and educational levels (primary, middle, and high school), comprehensively address the challenge of **incomprehensiveness**; (2) Prompt Adaptation, which selects tailored datasets and model templates based on specific dataset characteristics and model information, effectively tackling the problem of **inadequate adaptation**; (3) LLM-based Evaluation, which utilizes GPT-4 for answer extraction and comparison to mitigate **inconsistency** issues, with an alternative distilled compare-answer model available for users without access to GPT-4.\\n\\nThus, MathEval can leverage this Comprehensive Evaluation Suite to adapt to new datasets and models, providing a comprehensive and fair evaluation. This is the core of MathEval as a comprehensive benchmark for evaluation. The other contributions, such as expanding the dataset, performing multidimensional evaluation result analysis, and providing a Reliable Answer Extraction Model, are merely partial outcomes in the process of building MathEval.\\n\\n### Q2: It's not clear whether the collection includes very difficult problems and may saturate soon. Current best results are around 70%, but not sure if there are some datasets or subdatasets where errors are quite high still. Aggregate results do not give a lot of insight.\\n**A2:** Aggregate results alone are indeed insufficient to give much insight. Given the 52 models across 22 datasets, totaling 1,144 results, it is challenging to display all the data. Therefore, evaluation is conducted using high-level dimensions. In addition to the average score, Section 3.4 and Appendix E present several analyses from different perspectives and uncover several intriguing insights. 
These include dimensions such as languages, problem types, educational levels, the differences between closed-source and open-source models, and the evaluation discrepancies arising from few-shot/zero-shot settings. Furthermore, the Gaokao-2023 results in Figure 13 on page 29 of the Appendix are used to discover potential data contamination.\\n\\nFor further exploration of other perspectives or individual dataset-level analysis, a website will be provided to include all the specific evaluation results to support the discovery of more detailed findings. Specifically, all datasets currently covered are listed in Table 2. For particularly challenging datasets, this is an ongoing maintenance process, which includes the continuous expansion of the GAOKAO series evaluations and the newly added OlympiadBench evaluations. Current best results for these datasets are around 50% and 30%, respectively.\"}", "{\"comment\": \"### Q11: The main contribution is an aggregation of datasets and some experimental results from them. The unification of prompting is questionable, especially in the way new models can be evaluated with this benchmark in an easy way, without the need for prompt adaptation.\\n**A11**: This is similar to the issue raised in Q3. We still want to emphasize that the evaluation of 52 models across 22 datasets is scalable and fair, and this scalability and fairness are enabled by prompt adaptation.\\n\\n### Q12: Minor issues \\n**A12**: Thank you for pointing these out. We have corrected them in the new PDF version.\\n\\n---\\n\\n### Q13: Is prompting specialized for each pair of model and dataset? \\n\\n**A13**: Yes, prompting is specialized for each pair of model and dataset. There are 22 dataset configurations and 52 model configurations, resulting in 22*52 specialized prompts. Each prompt is evaluated in both zero-shot and few-shot settings. Specifically, Figure 3 illustrates the prompt adaptation process. 
Figure 3(a) shows two independent processes for Model and Dataset Configuration, and an example is provided in Figure 19. In Figure 3(b), the prompts from (a) are wrapped based on zero-shot and few-shot settings, with examples shown in Figures 20 and 21.\\n\\n### Q14: Is CoT used for some models but not others?\\n**A14:** CoT is part of the dataset configuration. For some datasets, the answers do not involve a CoT process, so CoT is not used in few-shot settings for those datasets. However, in zero-shot settings and for other datasets, CoT is used.\\n\\n### Q15: How many instances did humans evaluate?\\n**A15**: Humans evaluated approximately 1,068,000 instances. More details can be found in the previous response A4.3.\\n\\n### Q16: Why 19 out of 22 for the comparison of humans vs. automated scoring?\\n**A16**: MathEval is a continually evolving suite, and at the time of planning the human annotations, MathEval included 19 datasets. Later, OlympiadBench-CN, OlympiadBench-EN, and GAOKAO-2024 were added, expanding the collection to 22 datasets. Since the purpose of the human annotations was primarily to validate the effectiveness of the Compare Answer Methods, and not for the model performance evaluations or leaderboard rankings, we did not extend the annotations to the newly added datasets. Thus, the human annotation process was based on the 19 datasets originally included.\\n\\n### Q17: What's the distribution of difficulty of the datasets and the evolution across that difficulty?\\n**A17**: We have added an analysis of the dataset distribution in Appendix B.2. As shown in Figure 9, by applying t-SNE to query embeddings from the 22 datasets and visualizing the results, we observe that the datasets naturally form three clusters: English datasets, Chinese datasets, and arithmetic datasets. 
Further examination of the Chinese and English clusters shows that as the t-SNE component 2 value decreases (from top to bottom in the figure), the problems become progressively more difficult, and the corresponding grade levels also rise. This naturally reflects that our difficulty levels are distributed across various grade levels, with datasets corresponding to each level.\\n\\n### Q18: Is there any unexpected finding in the experimental results?\\n**A18**: In Section 3.4 and Appendix E, along with some high-level analyses, we also included a discussion on potential data contamination. We compared the model rankings between Gaokao-2023 and the overall average score. Since Gaokao-2023 is a brand-new set of questions, we wouldn't expect significant variations in rankings if there were no contamination. Therefore, substantial differences in ranking suggest potential data contamination.\\n\\nWe found that the Qwen-series models might have encountered such contamination, and chat models are more likely to have been exposed to similar math word problems during the instruction fine-tuning stage, increasing the probability of data contamination. Further discussion on this can be found in our response to Reviewer kkq7's fourth part.\\n\\n---\\n\\n**Finally**, we thank the reviewers for their detailed feedback. We would like to reiterate the importance of prompt adaptation for MathEval as a benchmark. In addition to the discussions above, we provide an [anonymous github](https://anonymous.4open.science/r/MathEval-505B/README.md) for a better understanding of the implementation process and the significant role of prompt adaptation.\"}", "{\"comment\": \"### Q5: Figure 3 and the related text is confusing. It's not clear whether all the things in blue and green are always used, but not the one in black: \\\"[COT prompt]\\\". Is this optional? 
When is it introduced?\\n\\n**A5**: Referring to Figure 3, Figure 3(a) represents two independent processes: Model and Dataset Configuration. An example of this process is provided in Figure 19. In Figure 3(b), the prompts from Model and Dataset Configuration are wrapped for use in zero-shot and few-shot settings, with examples shown in Figures 20 and 21.\\n\\n Specifically, the Dataset Configuration prompt templates include DQP, DAP, DOP, and COT prompts. The first three are clearly indicated in Figure 3(b) with their positions for concatenation. The COT prompt is more specific, as its placement depends on the dataset. Therefore, it is difficult to indicate this in the figure.\\n\\n The use of the COT prompt is another issue. In a zero-shot setting, the COT prompt is always used, but in a few-shot setting, some datasets do not have a COT process, such as arithmetic problems. In those cases, the shots provided do not involve a COT process, meaning the COT prompt may not be applied. The use of [COT prompt] is therefore optional and depends on the dataset. The [] signify that this is conditional based on the dataset.\\n\\n### Q6: Is the configuration of prompts per model and dataset different? What if we need to explore a new model? Should we try to find the best prompts for each and every dataset in MATHBENCH?\\n\\n**A6**: As mentioned in A3, when introducing a new model, the only thing that needs to be adjusted is the new model template, which is generally based on the prompt provided upon the model\\u2019s release.\\n\\n### Q7: The details about \\\"Calculation Scheduling\\\" and parallel processing are not part of the benchmarks, and definitely not part of the \\\"prompts section\\\". This is just experimental details or it could go to the appendix.\\n\\n**A7**: Calculation Scheduling is indeed not directly related to prompt adaptation, but it is part of the answer generation process, as illustrated in Figure 3. 
It is an important aspect of the practical implementation of a benchmark. Reviewers, such as Reviewer Nkro, may focus on its specific implementation details.\\n\\n### Q8: The evaluation results are based on an arithmetic mean of all datasets. This is a common practice but requires a justification, as the different datasets are incommensurate in difficulty. Why are easy datasets weighting the same as hard datasets? Do we have models failing at easy items but succeeding at difficult ones? Averages are not the best way of comparing systems. It is telling that the paper also reflects the results for GSM8K and MATH, and they see only minor discrepancies, so what's the point then about this comprehensive dataset if the same results could have been obtained with only GSM8K and MATH?\\n\\n**A8**: This addresses two issues. First, the arithmetic mean is indeed a single metric. As explained in A2, we have also performed analysis across three dimensions, including Grade, and provided the complete evaluation results for further analysis. Secondly, the results for GSM8K and MATH, referenced in Appendix F.3 (Table 6), are meant to help with the development of the Evaluation Suite and validate the accuracy of the evaluation, not to suggest that these benchmarks alone are sufficient.\\n\\n### Q9: With this aggregate results, many of the observations in Fig. 6 are confirmatory, such as the best LLMs for maths (as we knew for some other dataset collections) are the best for this benchmark, and also the effect of finetuning, but specifically parameters (perhaps FLOPS would have been a better metric).\\n\\n**A9**: Thank you for the suggestion. In addition to the confirmatory observations, we also identified some additional findings, which are discussed in Section 3.4 and Appendix E. 
For example, newer model series exhibit steeper slopes, indicating that their mathematical abilities improve more effectively with an increase in parameter size.\n\n### Q10: The separation between Math word problems and arithmetic in Fig. 6 (bottom) is more insightful, but the arithmetic variability is not explained (this is partly explained by \"arithmetic plugins\", but isolated benchmarks with basic operations using large numbers could have been conducted to know what models are using them or not).\n\n**A10**: Thank you for the suggestion. This could potentially lead to a new dataset design, and we are considering adding this to MathEval in the future.\"}", "{\"comment\": \"Regarding Weakness 2:\n\nWe have expanded our discussion on Pages 22-23 to address this concern more comprehensively. Specifically, we have included an analysis of potential data contamination and its implications for our study. Additionally, we have refined some of our previous conclusions to provide deeper insights and to strengthen the validity of our findings.\n\nRegarding Question 2:\n\nWe have made minor revisions on Page 9, Lines 416-419, to clarify this point.\n\nRegarding the typos in our conclusion:\n\nThank you for pointing out these errors. We have corrected them accordingly to enhance the readability and professionalism of our manuscript.\"}", "{\"comment\": [\"We sincerely thank the reviewer for the thoughtful feedback and insightful questions. We are pleased to hear that you find our paper easy to read and acknowledge the utility of MathEval for the community.\", \"**1. Regarding the novelty and contribution of MathEval:**\", \"We appreciate the reviewer's concern about the novelty aspect of our work for an ICLR research paper. 
While we acknowledge that benchmarks may not always introduce novel methodologies, we believe that the contributions of MathEval are significant for the following reasons:\", \"Comprehensive Evaluation Suite: MathEval fills a crucial gap by providing a unified and comprehensive benchmark specifically designed for evaluating the mathematical reasoning capabilities of large language models (LLMs). It encompasses 22 diverse datasets that cover various problem types (e.g., computation, application, multiple-choice), difficulty levels (elementary to high school), and languages (English and Chinese).\", \"Reliable Answer Extraction Model: We address the substantial challenge of answer extraction in mathematical problem-solving by introducing a fine-tuned DeepSeek-Math 7B model. This model enhances the reliability and consistency of evaluations, which is essential for fair model comparisons.\", \"Insights into LLM Performance: By conducting extensive experiments on 52 models, we provide valuable insights into the strengths and weaknesses of current LLMs in mathematical reasoning across different dimensions, including language proficiency, grade levels, and potential data contamination.\", \"Furthermore, as benchmarks like [MathVista](https://openreview.net/pdf?id=KUNzEQMWU7) and [BooookScore](https://openreview.net/pdf?id=7Ttk3RzDeu) have demonstrated in prior ICLR publications, the contribution of a well-designed benchmark lies in its ability to propel the community forward by providing a reliable and robust platform for evaluation. We believe MathEval will be instrumental for future research in this domain.\", \"**2. 
Regarding the potential overlap among datasets and the aspects they measure:**\", \"We appreciate the reviewer's insightful concern about the potential overlap in our datasets and whether they are truly measuring different aspects of mathematical reasoning beyond multilinguality.\", \"To address this, we have taken several steps in our revised submission:\", \"Detailed Dataset Classification (Appendix B.2, Table 2): We have included a comprehensive table in Appendix B.2 (pages 16-17) that lists each dataset along with its specific classification.\", \"Analysis of Potential Overlap Between Datasets (Pages 16-17, Lines 711-731, Appendix B.2):\", \"Query Similarity Heat Map (Figure 8): To examine the overlap between datasets, we conducted a query similarity analysis using embedding techniques to represent each problem query. We computed pairwise cosine similarities between queries from different datasets and visualized the results in a heat map (Figure 8). The heat map illustrates that the average similarities between queries from different datasets are low, indicating minimal overlap in problem content.\", \"Dimensionality Reduction and Clustering (Figure 9): Furthermore, we applied dimensionality reduction (e.g., t-SNE or UMAP) to the query embeddings and performed clustering to visualize the distribution of queries across datasets (Figure 9). The resulting plot shows that queries from different datasets form distinct clusters, suggesting that each dataset covers unique topics and problem types.\", \"Findings from the Analysis (Pages 16-17, Lines 711-731, Appendix B.2):\", \"Coverage of Diverse Difficulty Levels: Our datasets collectively cover a wide range of difficulty levels, from elementary school mathematics to high school competition-level problems. 
This ensures a comprehensive assessment of models across varying complexities.\", \"Low Inter-Dataset Similarity: The low similarity scores between queries from different datasets confirm that each dataset presents unique challenges. This indicates that the datasets are measuring fundamentally different aspects of mathematical reasoning, such as basic computation, complex problem-solving, logical reasoning, and application of mathematical concepts in various contexts.\", \"By conducting this detailed analysis and including these findings in our revised paper, we aim to demonstrate that the datasets within MathEval are not overlapping but are instead complementary, each contributing to a holistic evaluation of mathematical reasoning in LLMs.\"]}", "{\"comment\": \"I thank the authors for the very thorough response. I looked at all the updates to the manuscript, including in the appendix, which I think are very informative.\n\n> 1, Regarding the novelty and contribution of MathEval:\n\nMethodologically, I completely see the value of the answer extraction model that is introduced here: I believe it has potential to be used in future work.\n\nHowever, I'm still unconvinced by what exactly MathEval as a benchmark suite enables. To quote from the authors' response:\n\n>> MathEval fills a crucial gap by providing a unified and comprehensive benchmark specifically designed for evaluating the mathematical reasoning capabilities of large language models (LLMs)\n\n>> By conducting extensive experiments on 52 models, we provide valuable insights into the strengths and weaknesses of current LLMs in mathematical reasoning across different dimensions\n\nWhat I still fail to see is what is this crucial gap, or what are the valuable insights, concretely. I'm obviously not opposed to benchmarking papers in ICLR -- many of these papers can indeed help guide the community towards new interesting directions. 
I think the two examples that the authors cite -- MathVista and BooookScore -- were examples of this. MathVista became a standard evaluation of visual reasoning, which was (in a general setting) a new task, in response to the very recent release to GPT-4V. The datasets that comprised MathVista were extremely specialized, so that putting them together indeed produced a new whole. They also measured the human evaluation gap, which was useful to set new goals for the community. For BooookScore, that was one of the first evaluation of very long context language tasks, only recently supported by LLMs and expected to grow. For MathEval, I'm not seeing what is the equivalent novel task or evaluation angle that is gained by putting all the existing benchmarks together.\\n\\n> 2. Regarding the potential overlap among datasets and the aspects they measure:\\n\\nThanks, this is helpful (more the textual descriptions than the cosine-based analyses - those are hard to interpret because it's hard to know what the embedding model is most sensitive to -- syntax? wording? formatting? mathematical semantics? these all could cause clusters besides the mathematical content itself).\\n\\n> 3. Addressing the correlations between performances on all 22 datasets:\\n\\nThank you, I think this is extremely helpful to understand the dataset.\\n\\nThe correlations indeed generally look quite high. It seems like there are basically two clusters here: asdiv-a has the weakest correlations with the rest, while most of the other correlations seem to be generally >= .7, with few exceptions. This would indicate that, if one were to just pick a few of the datasets, they would already be very good proxies for the rest (which is the current practice).\\n\\n> 4. Insights derived from the comprehensive evaluations:\\n\\nThis is an interesting analysis. 
It might be a lead into something interesting, but it's still more on the speculative side rather than a clear finding (since you can't really test this by looking into the data for most models, unfortunately).\\n\\nLooking at the ranks is a bit opaque, because if many models are performing very close to each other, then a small difference in performance might cause a large difference in rank. Besides, even if rank stays the same but the top-1 model drops in accuracy by a lot, this would already be interesting (and conversely, if all ranks change a lot but overall performances are within a few % of their originals, this wouldn't be so surprising).\\n\\nOverall, I think the potential data contamination analysis is enabled by the new GAOKAO datasets that are introduced (which are a nice contribution), rather than necessarily putting all of the other datasets together as the MathEval suite proposes to do. Perhaps there could be more focused and clear insights that can be derived from the new datasets, instead of focusing on the overall evaluation from which I did not see a concrete takeaway.\"}", "{\"comment\": \"## Failure Modes and Other Limitations\\n\\n**Failure Modes or Error Patterns Due to Dataset Construction**\\n\\n\\nFirst, we would like to clarify which stage of dataset construction is being referred to. If it pertains to prompt adaptation, we experimented with various prompt templates. We found that no Chain-of-Thought (no-CoT) prompts tend to introduce more problems compared to CoT prompts, especially in calculation problems that require step-by-step computations. For mathematical word problems like GSM8K, MATH, or datasets like OlympiadBench, the more reasoning steps required, the higher the likelihood of errors. 
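To make the contrast concrete, here is a minimal sketch of the two prompt styles; the template strings and function below are illustrative, not MathEval's actual templates:

```python
# Illustrative no-CoT vs. CoT prompt wrappers (hypothetical templates).

NO_COT_TEMPLATE = "Question: {question}\nAnswer:"

COT_TEMPLATE = (
    "Question: {question}\n"
    "Let's think step by step, then state the final answer.\n"
    "Answer:"
)

def build_prompt(question: str, use_cot: bool) -> str:
    """Wrap a raw question in either a direct-answer or a chain-of-thought prompt."""
    template = COT_TEMPLATE if use_cot else NO_COT_TEMPLATE
    return template.format(question=question)

direct = build_prompt("What is 17 * 24?", use_cot=False)
cot = build_prompt("What is 17 * 24?", use_cot=True)
```

The CoT variant nudges the model to emit intermediate steps before the final answer, which is where the two styles diverge on multi-step calculation problems.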
This suggests that prompt design significantly impacts model performance, and carefully crafted CoT prompts are essential to mitigate error rates in complex reasoning tasks.\\nIf the concern is about the training data for our compare-answer model, we did not observe any significant error patterns.\\n\\n**Why Chat Models are Stronger**\\n\\n\\nWe provided a brief explanation in line 362 of the paper. Chat models generally undergo post-training based on base models, during which a substantial amount of data and methods specifically aimed at enhancing reasoning abilities are incorporated. For instance, models like DeepSeek-Math use techniques such as GRPO to improve their mathematical reasoning capabilities. Additionally, chat models are better at following instructions, which is a significant advantage over base models.\\n\\n**Mathematical Reasoning Not Captured by the Benchmark**\\n\\n\\nCurrently, our benchmark may not fully capture some advanced mathematical reasoning, such as problems involving diagrams or proof-based questions. We are actively preparing to include these two types of problems in future evaluations. While they may not be presented in this paper, we have conducted preliminary experiments.\\nRegarding diagram-based problems, we have obtained initial results that will be part of a major update for MathEval. 
These results highlight significant challenges in multimodal mathematical reasoning, as models often struggle with interpreting and reasoning about visual information effectively. Here are the current performance metrics for various models on different benchmarks:\n\n| Model | StatsChartMWP | MATHVISTA | MATHVERSE |\n|-------------------|---------------|-----------|-----------|\n| LLaVA-NeXT-34B | 15.67 | 46.5 | 34.6 |\n| InternLM-XC2 | 17.13 | 57.6 | 27.4 |\n| Qwen-VL-PLUS | 19.68 | 43.3 | 21.3 |\n| InternVL-1.2-Plus | 22.16 | 59.9 | - |\n| GPT-4V | 34.28 | 49.9 | 53.6 |\n| GPT-4o | 55.62 | 63.8 | - |\"}", "{\"comment\": \"Thank you for updating your paper with the additional details, as they help support the work. However, I believe my current score reflects my confidence in the work. Thank you again for addressing the other issues.\"}", "{\"comment\": \"### Q3: Regarding the issue of prompt adaptation\n**A3**: Prompt adaptation is one of the main components of MathEval's Comprehensive Evaluation Suite, designed to ensure scalability in evaluating the 52 models across 22 datasets. When a new dataset is introduced, only a new dataset template needs to be set up, allowing it to be evaluated across the 52 models. Similarly, when a new model is introduced, only a new model template is required, enabling evaluation across the 22 datasets.\n\n**The design's primary goal** is to address the issues of \"Inadequate adaptation\" and \"Inconsistency\" mentioned in the Introduction. \"Inadequate adaptation\" refers to the need for a framework capable of handling the requirements of different types of models and datasets. For instance, chat models need to use special templates, while base models do not. This is highly sensitive and manually adjusting the evaluation code is impractical, necessitating an appropriate framework. 
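As a concrete illustration of this adaptation, here is a minimal sketch; the template names and wrapper strings are hypothetical, not MathEval's actual model configuration:

```python
# Hypothetical sketch: chat models need a model-specific dialogue template,
# while base models consume the raw prompt unchanged.

CHAT_TEMPLATES = {
    "llama2-chat-style": "[INST] {prompt} [/INST]",
    "chatml-style": "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n",
}

def adapt_prompt(prompt, template_name=None):
    """Apply a registered chat template; base models (no template) pass through."""
    if template_name is None:
        return prompt
    return CHAT_TEMPLATES[template_name].format(prompt=prompt)

base_prompt = adapt_prompt("Compute 3 + 5.")
chat_prompt = adapt_prompt("Compute 3 + 5.", "llama2-chat-style")
```

Registering one template per model is what keeps the framework scalable: a new model only adds an entry, and every dataset prompt is wrapped through it automatically.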
\\\"Inconsistency\\\" addresses fairness concerns: for the same dataset, the prompt and cases used in few-shot evaluations should be fixed to avoid multiple leaderboard discrepancies. For example, OpenCompass, Llemma, and HELM use the same LLaMa2 model, but the accuracy on GSM8K and Math differs significantly: (16.7%, 3.3%), (11.8%, 3.2%), and (13.3%, 10.7%), respectively. For the same model, the prompt template should also be fixed to avoid results being affected by external factors.\\n\\n**Now, let\\u2019s respond to the specific questions:**\\n\\n### Q3.1: The prompt adaptation for models and datasets is not fully motivated. Is it the goal to evaluate each model and dataset in the condition that gets the best results?\\n**A3.1**: As mentioned earlier, the goal is to ensure scalability while maintaining fairness. For datasets, this ensures uniform datasets configurations across different models, rather than primarily aiming to \\\"get the best results.\\\" For evaluation models, fairness is ensured by using the specified template and system prompt. In general, when models are released, it is expected that the evaluation conditions are optimized for the best results, but this is not the task of the evaluation framework.\\n\\n### Q3.2: Is that fair and meaningful regarding an ecologically-valid use of these models in scenarios where math knowledge is necessary? It's different to evaluate a specific LLM for a competition and another when these models are used in educational or engineering settings, to put two examples. \\n**A3.2**: This is why we categorize scenarios into three different dimensions. We want models to be chosen based on their suitability for different scenarios, such as elementary and high school situations, to ensure the results match the specific requirements of each setting.\\n\\n### Q3.3: In the end, why prompt adaptation instead of an off-the-shelf use? 
Language models should work in natural conditions, or all with similar chain-of-thought prompts. In any case, how do we know that we use the optimal generic prompt for each LLM? \\n**A3.3**: This is directly related to the issues of \\\"Inadequate adaptation\\\" and \\\"Inconsistency.\\\" Off-the-shelf use faces the problem of \\\"Inadequate adaptation,\\\" which makes the evaluation framework non-scalable. \\\"Inconsistency\\\" leads to unfairness. Only model-specific prompts aim for the optimal results, but this is akin to model parameters, which are finalized when the model is released, rather than discovered during evaluation. On the other hand, the shots used for datasets should be standardized.\"}", "{\"title\": \"Have we addressed your concern\", \"comment\": \"Thank you very much for your thorough review. As the discussion period is nearing its conclusion, I wanted to follow up to ensure you\\u2019ve had the opportunity to review our detailed rebuttal. Given the additional explanations and adjustments we've incorporated, we would greatly appreciate your feedback on whether our responses have satisfactorily addressed your concerns.\\n\\nThank you once again for your time and thoughtful review. We look forward to your response.\"}", "{\"summary\": \"The authors present a benchmark that aims to address three dimensions of current math benchmarks: comprehensiveness, adaptation, and consistency. MathEval is a benchmark specifically designed to evaluate the mathematical reasoning abilities of LLMs across problem types, languages, and difficulty levels, encompassing primary through high school math problems in English and Chinese. The benchmark includes 22 datasets and integrates a dynamic update feature, adding new problems annually to reduce test data contamination. MathEval uses a tailored prompting approach to adapt to the unique characteristics of different models and problem types, ensuring fairer and more accurate comparisons. 
To maintain consistency and overcome the limitations of traditional rule-based evaluation methods, the benchmark uses GPT-4 for answer extraction and comparison, with a publicly available deepseek-7B model as an accessible alternative.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors create a comprehensive benchmark that includes 22 math reasoning benchmarks, some of which they create. The benchmark is annotated with the following dimensions: language of the problem, educational level (primary to high school), and problem type (arithmetic vs. word problem).\", \"Because different models and different datasets require unique prompting techniques, the paper also includes adaptable prompt templates which make it easier to evaluate a model under zero/few shot conditions depending on what is more suited for the model.\", \"The paper also presents an open model that can be used to compare mathematical answers for researchers who might not have access to GPT-4 to use for grading.\"], \"weaknesses\": [\"Some previous work (https://arxiv.org/pdf/2407.00900) has been done around using open source LLMs as part of the grading framework, although fine tuning a specific model for the answer comparison task is still a notable contribution.\", \"While the paper presents a significant effort in benchmarking (and the tools presented to the broader research community via this paper will be useful to researchers), the discussion from the numerical results support a lot of things that are already well known (the supremacy of closed-source over open-source models, the performance of math domain models generally being better, few-shot prompting generally resulting in better performance when compared to zero-shot, etc.)\"], \"questions\": [\"To minimize compute costs, wouldn\\u2019t it make more sense to first try regex-based answer extraction on an answer, and in the case that the regex-extracted answer is incorrect (which could be 
caused by either a genuinely incorrect answer, or a mis-extracted answer), run GPT-4/the custom model on the answer? Because in Figure 14, we see that precision for answer verification with regex-only is high enough that GPT-4 doesn\\u2019t need to be run on every single model output, and rather only on ones that are originally marked as incorrect.\", \"In lines 415-418, the authors note that the dataset-level higher setting consistency outperforms using either few- or zero-shot prompting. However, isn\\u2019t dataset-level higher defined as the higher accuracy between few- and zero-shot accuracy at the dataset level? So wouldn\\u2019t this behavior be expected by definition? Sorry if I am misunderstanding the definition.\", \"Just a quick note about the conclusion section (line 464): the paragraph mentions 2 datasets when MathEval actually includes 22.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Feedback on Responses\", \"comment\": \"Dear Reviewers,\\n\\nWe would like to express our gratitude for your meticulous review of our paper. We have provided responses to the comments and concerns you raised.\\n\\nAs the discussion period comes to a close, we would appreciate your feedback on whether our responses have addressed the issues raised and whether they may lead to an improvement in the scores. We also welcome any new questions or continued discussions.\\n\\nThank you again for your hard work and support.\\n\\nBest regards,\\n\\nThe authors of Paper 6578\"}", "{\"comment\": [\"**3. Addressing the correlations between performances on all 22 datasets:**\", \"We appreciate your insightful suggestion. 
In response, we have conducted a comprehensive correlation analysis, which is presented in Figure 10 on page 19 (lines 850-861) of our revised submission.\", \"Findings from the Correlation Analysis (Figure 10): Our analysis shows that there are indeed high correlations between model performances on many of the datasets. We consider this to be a reasonable and expected outcome because mathematical abilities are interconnected. Improvements in computational skills often enhance problem-solving capabilities, and advancements in reasoning skills typically benefit performance across various mathematical tasks. This interrelated nature of mathematical competencies suggests that as models improve in one area, they tend to improve in others as well, which supports the reliability of our benchmark.\", \"Justification for Including Multiple Datasets:\", \"Enhancing Benchmark Robustness: Including all 22 datasets increases the robustness and reliability of our benchmark. A comprehensive evaluation across diverse datasets ensures that models are not just performing well on a narrow set of problems but are demonstrating consistent mathematical reasoning abilities across a wide spectrum of topics and difficulty levels.\", \"Identifying Specific Strengths and Weaknesses: Even with high overall correlations, the detailed performance on individual datasets can reveal specific strengths or weaknesses of a model.\", \"Detecting Potential Data Contamination: A significant benefit of using multiple datasets is the ability to detect potential data contamination. For example, suppose Model M has high performance on Dataset A but does not show similar performance gains on other highly correlated datasets. 
This discrepancy might suggest that Model M has potentially memorized Dataset A due to data leakage, rather than genuinely understanding the underlying mathematical concepts.\", \"Measuring Orthogonal Abilities: While correlations are generally high, our analysis also reveals that certain datasets measure more specialized or orthogonal abilities. For instance, some datasets focus on advanced reasoning or problem-solving skills that may not correlate as strongly with basic computational abilities. Including these datasets allows us to assess a wider range of mathematical proficiencies.\", \"**4. Insights derived from the comprehensive evaluations:**\", \"By conducting comprehensive evaluations across all datasets, we uncovered insights that wouldn't have been apparent from analyzing just a small subset. One significant discovery was the potential data contamination identified in Figure 13 on page 29 of the Appendix.\", \"This figure was instrumental because it highlighted discrepancies in model performance on the Gaokao-2023 dataset\u2014a brand-new set of questions that none of the models had encountered during training. Given the high correlation in performance across datasets (as discussed in our previous answer), we wouldn't expect significant variations on Gaokao-2023 if there were no contamination. Therefore, substantial differences in rank suggest potential data contamination.\", \"In the upper chart of Figure 13:\", \"Chinese Subsets Rank (Blue Bars): Indicates each model's ranking within Chinese mathematical datasets. A smaller rank signifies better performance.\", \"Gaokao-2023 Rank Increase (Orange Bars): Represents models whose rank increased (i.e., performed worse) on Gaokao-2023 compared to other datasets. A larger increase indicates poorer performance on Gaokao-2023.\", \"Gaokao-2023 Rank Decrease (Green Bars): Represents models whose rank decreased (i.e., performed better) on Gaokao-2023. 
A larger decrease signifies better performance on the new dataset.\", \"In the lower chart: Similar to the upper chart, but the blue bars represent the overall average score across all 22 datasets.\n\nFrom the figure, we detected potential data contamination:\n\n- The top two models showing a significant increase in rank (poorer performance) on Gaokao-2023 are ChatGLM3-6B and Baichuan2-13B.\n- Many of the Qwen-series models display orange bars, suggesting they may have been trained on data overlapping with our evaluation sets, leading to inflated performance on those but not on Gaokao-2023.\n\nThese observations are further supported by findings in the paper \"Compression Represents Intelligence Linearly\", which discusses similar issues for Qwen-series models. Furthermore, most base models exhibit green bars, which suggests that chat models are more likely to have encountered similar math word problems during the instruction fine-tuning stage, increasing the probability of data contamination.\"]}", "{\"comment\": \"We thank the reviewer for the thoughtful review; we would like to address the concerns and questions you raised:\n\n## For Weakness 1:\n\nWe understand that the reviewer is pointing out that our evaluation standard may not be sufficient for assessing the mathematical process itself. In our current framework, we choose to directly evaluate whether the final output of the model is correct. This decision was made because evaluating the solution process is a complex issue that we have considered extensively.\n\nRecent methodologies, such as Process Reward Models (PRMs) or those based on attention score distribution for hallucination detection, have not yet demonstrated reliable accuracy in evaluating reasoning steps. 
Incorporating such methods into our benchmark could introduce unnecessary variables and potentially undermine the credibility of our results due to potential misjudgments.\n\nOur current approach focuses on final-answer verification, which we believe offers a more straightforward and reliable measure of performance. However, we recognize the importance of evaluating the reasoning process and are actively exploring this as a research topic. It remains one of our objectives to develop and integrate robust methods for reasoning process verification in the future.\n\n## For Weakness 2 and Question 1:\n\nThe reviewer brings up an important point regarding the evaluation of multi-step reasoning processes. Currently, our framework does not fully capture whether the model's reasoning aligns with the expected solution paths. We agree that evaluating the trajectory of the reasoning process is crucial for understanding a model's problem-solving abilities.\n\nFor now, we operate under the assumption that if the final answer is correct, the reasoning process is likely to be reasonable. Additionally, since we evaluate a sufficiently large number of questions, assessing mathematical reasoning ability based on the correctness of the final answer can effectively enhance the robustness of our evaluations.\n\nRegarding whether MathEval can evaluate the reasoning process, from a benchmark perspective, it is challenging to employ a model-based method for this task without introducing a significant number of misjudgments. Even an excellent PRM model with 95% accuracy could produce many errors due to the extensive reasoning steps involved in complex problems. Therefore, to maintain the fairness and reliability of the benchmark, we have temporarily decided not to include reasoning process evaluation.\n\n## For Weakness 3:\n\nThank you for highlighting this concern. 
We agree that refining the classification of mathematical reasoning types can improve the depth and granularity of our evaluations. In future work, we plan to conduct more detailed classifications, such as categorizing problems based on algebraic manipulation, geometric reasoning, combinatorial logic, and other specific reasoning skills.\"}", "{\"title\": \"Response 2\", \"comment\": \"I appreciate the authors\\u2019 response regarding the use of GPT-4 for LLM-based evaluation. However, the reply is unsatisfactory for several reasons. The authors state that the performance differences between GPT-4 and Claude-3.5 Sonnet are minimal without providing any evidence or comparative analysis to support this claim. The authors mention a collaborative agreement with GPT-4's supplier, which has led to their reliance on GPT-4. While cost and access considerations are understandable, such agreements should not dictate the scientific validity of a benchmark. Evaluating alternative models, such as Claude, would provide a more robust and objective assessment and ensure that the choice of evaluator does not introduce bias into the results. The authors acknowledge that GPT-4 has limitations, including its cost and incomplete alignment with human performance. However, they do not discuss the potential impact of these limitations on the evaluation results or how they plan to mitigate them. The authors\\u2019 justification focuses on cost efficiency rather than exploring the scientific implications of using different LLMs as evaluators. This misses an opportunity to enhance the robustness and generalizability of MathEval by demonstrating consistency across multiple evaluators.\\n\\nI appreciate the authors\\u2019 attempt to address the question about failure modes and error patterns. However, the response is insufficient. 
While the authors mention prompt adaptation and the compare-answer model, they fail to directly address failure modes stemming from the dataset construction process itself. For example, are there biases, redundancies, or inconsistencies in the datasets that could systematically influence model performance or evaluation results? This was the primary focus of the question, and it remains unanswered. The authors reference datasets like GSM8K, MATH, and OlympiadBench, noting increased error rates with more reasoning steps. However, this is a generic observation rather than an analysis of specific dataset-driven error patterns. The claim that no significant error patterns were observed in the compare-answer model training data lacks sufficient evidence. Given the scale and complexity of the datasets, it is unlikely that no patterns or limitations emerged. What steps were taken to validate this claim? Was a thorough analysis of the error cases performed?\\n\\nAdditionally, I have looked at the responses to other reviewers. I find a recurring issue across responses: they lack the depth and specificity necessary to address the concerns raised. For instance, when reviewer 7NzK pointed out the rough classification of mathematical reasoning types, the authors merely acknowledged the concern and deferred its resolution to future work. Acknowledging issues without taking substantive action or providing detailed plans risks diminishing the credibility of the paper.\\n\\nOverall, I find the response to be insufficiently robust for several reasons noted previously. Given the consistent lack of depth and rigor in the responses and subsequent versions of the paper, I have decided to lower my score from a 6 to a 3. While the paper addresses an important area, it does not adequately address essential feedback issues. 
Its inability to demonstrate scientific rigor significantly undermines its quality and potential impact.\"}", "{\"summary\": \"This paper presents a more comprehensive benchmark for LLMs in the domain of maths, elaborates on how elicitation mechanisms (prompt engineering, chain of thought, etc.) should be applied to different families and then introduces automatic evaluation of the results via GPT-4 and a finetuned model. The experimental results are consistent with previous evidence. Nothing especially relevant is found from the experiments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Comprehensiveness, at least in languages and types of problems.\", \"Releasing a low-cost finetuned model (DeepSeek-7B) that can be used for automated evaluation can be very useful.\", \"Number of models and configurations being evaluated.\"], \"weaknesses\": \"- The contribution is limited to the aggregation of benchmarks (some of them recently introduced with less contamination, but not evaluated separately) and automated grading.\\n\\n- It's not clear whether the collection includes very difficult problems and may saturate soon. Current best results are around 70%, but not sure if there are some datasets or subdatasets where errors are quite high still. Aggregate results do not give a lot of insight.\\n\\n- The prompt adaptation for models and datasets is not fully motivated. Is it the goal to evaluate each model and dataset in the condition that gets the best results? Is that fair and meaningful about an ecologically-valid use of these models in scenarios where maths knowledge is necessary? It's one thing to evaluate a specific LLM for a competition and another when these models are used in educational or engineering settings, to give two examples. In the end, why prompt adaptation instead of an off-the-shelf use? Language models should work in natural conditions, or all with similar chain-of-thought prompts.
In any case, how do we know that we use the optimal generic prompt for each LLM?\\n\\nEvaluation using LLMs is quite usual today. Section 2.3 tries to present this as new (but it's not, even if multiple-choice problems are still very common), but there's significant work in this area: https://arxiv.org/abs/2403.02839, https://arxiv.org/pdf/2406.18403, https://arxiv.org/abs/2408.02666 or https://eugeneyan.com/writing/llm-evaluators/.\\nSection 2.3 raises the question about how the evaluation from GPT-4 is validated. It seems this section is going to cover this, a comparison with a sample of answers evaluated by human experts? And it refers to appendix G.1, which shows Table 7, but what's 0.6264 for instance? Inter-rater agreement? The caption says \\\"overall average score\\\". Are we choosing human annotations as ground truth? If this is agreement, this is very low. What's the sample annotated by humans? The reader may get confused because it seems this section is going to clarify what the quality of evaluation is by using GPT-4. Then, this is expanded in section 3.2, and Figure 5 more specifically. But since the sample size is not mentioned, there's no sample and humans evaluate all the instances? What's the number of instances being labelled by each of the annotators? Using Kappa and having 0.8871 is good to use the human average (or mode?) as golden standard. Why for the 22 datasets or only 19? Why only 19? If this is done for 22, why figure 5 only with 19? Why not a sample of the 22? But then, in Figure 5, what does it mean to calculate an \\\"absolute difference\\\"? Are there answers other than correct and incorrect to calculate a \\\"difference\\\"? Why not Kappa as you did before? I cannot really determine whether the automated evaluations are good or bad as I cannot interpret the disagreement. And this is one of the key contributions of the paper.\\n\\nFigure 3 and the related text are confusing.
It's not clear whether all the things in blue and green are always used, but not the one in black: \\\"[COT prompt]\\\". Is this optional? When is it introduced?\\n\\nIs the configuration of prompts per model and dataset different? What if we need to explore a new model? Should we try to find the best prompts for each and every dataset in MATHBENCH?\\n\\nThe details about \\\"Calculation Scheduling\\\" and parallel processing are not part of the benchmarks, and definitely not part of the \\\"prompts section\\\". These are just experimental details, or they could go to the appendix.\\n\\nThe evaluation results are based on an arithmetic mean of all datasets. This is a common practice but requires a justification, as the different datasets are incommensurate in difficulty. Why are easy datasets weighted the same as hard datasets? Do we have models failing at easy items but succeeding at difficult ones? Averages are not the best way of comparing systems. It is telling that the paper also reflects the results for GSM8K and MATH, and they see only minor discrepancies, so what's the point then about this comprehensive dataset if the same results could have been obtained with only GSM8K and MATH?\\n\\nWith these aggregate results, many of the observations in Fig. 6 are confirmatory, such as the best LLMs for maths (as we knew for some other dataset collections) are the best for this benchmark, and also the effect of finetuning, but specifically parameters (perhaps FLOPS would have been a better metric).\\n\\nThe separation between Math word problems and arithmetic in Fig. 6 (bottom) is more insightful, but the arithmetic variability is not explained (this is partly explained by \\\"arithmetic plugins\\\", but isolated benchmarks with basic operations using large numbers could have been conducted to know what models are using them or not).\\n\\nThe main contribution is an aggregation of datasets and some experimental results from them.
The unification of prompting is questionable, especially in the way new models can be evaluated with this benchmark in an easy way, without the need for prompt adaptation.\", \"minor_issues\": [\"Sometimes the authors use triple quotes, i.e., '''xxx''', which is non-standard.\", \"Conclusions, line 1: \\\"In this papar\\\"\"], \"questions\": [\"Is prompting specialised for each pair of model and dataset?\", \"Is CoT used for some models but not others?\", \"How many instances did humans evaluate?\", \"Why 19 out of 22 for the comparison humans - automated scoring?\", \"What's the distribution of difficulty of the datasets and the evolution across that difficulty?\", \"Is there any unexpected finding in the experimental results?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Q4: Evaluation methods details\\n### Q4.1: Section 2.3 raises the question about how the evaluation from GPT-4 is validated. It seems this section is going to cover this, a comparison with a sample of answers evaluated by human experts? \\n**A4.1**: Section 2 introduces MathEval's Comprehensive Evaluation Suite, which includes three components: Math Scenarios, Prompt Adaptation, and LLM-based Evaluation (see Figure 1). Section 2.3 specifically discusses the LLM-based Evaluation method. It covers two interchangeable methods: 1) GPT-4 for answer extraction and comparison, and 2) an alternative distilled compare-answer model. These methods serve as alternatives to ensure a robust and fair evaluation. Human experts are not involved in this section; they are only engaged in evaluating the effectiveness of these two evaluation methods in Section 3.2.\\n\\n### Q4.2: And it refers to appendix G.1 shows Table 7, but what's 0.6264 for instance? Inter-rater agreement? The caption says \\\"overall average score.\\\" Are we choosing human annotations as ground truth? 
If this is agreement, this is very low. What's the sample annotated by humans? The reader may get confused because it seems this section is going to clarify what the quality of evaluation is by using GPT-4. \\n**A4.2**: There seems to be a misunderstanding. Table 7 is found in Section 3.2 (G.3), not in Section 2.3 (G.1). \\n\\nTable 7 compares the overall average score across four models using three Compare Answer Methods on all datasets. The three methods listed in the first column of the table are: Human Annotated, Two-stage with GPT-4, and Fine-tuned-DeepSeek-7B. The four models include GPT-4, DeepSeek-math-7B-Base, DeepSeek-math-7B-Instruct, and DeepSeek-math-7B-RL, representing closed-source and various open-source models (Base, Instruct, and RL). \\n\\nThe score of 0.6264 represents the overall average score across all datasets for Human Annotated GPT-4 evaluations. \\n\\n**The primary purpose of Table 7** is to validate the reliability of the Two-stage with GPT-4 and Fine-tuned-DeepSeek-7B Compare Answer Methods based on the assumption that human annotations are accurate and can be used as ground truth.\\n\\n### Q4.3: Then, this is expanded in Section 3.2, and Figure 5 more specifically. But since the sample size is not mentioned, there's no sample and humans evaluate all the instances? What's the number of instances being labeled by each of the annotators? Using Kappa and having 0.8871 is good to use the human average (or mode?) as a golden standard. \\n**A4.3**: You are correct in your understanding; this part verifies the accuracy of human annotations. Using Kappa and achieving 0.8871 indicates the consistency of human annotations in determining correctness, which reflects inter-annotator agreement. \\nSpecifically, we conducted human annotations on the outputs of the four models listed in Table 7. Each model's outputs across 19 datasets, evaluated under both zero-shot and few-shot settings, resulted in a total of 53,400 outputs. 
Each output was labeled by 5 annotators. Therefore, the total number of annotations was: 4 models * 53,400 outputs * 5 annotations, resulting in 1,068,000 annotations.\\n\\n### Q4.4: Why for the 22 datasets or only 19? Why only 19? If this is done for 22, why is Figure 5 based only on 19? Why not a sample of the 22? \\n**A4.4**: This is a detail related to dataset maintenance. MathEval is a continually evolving suite, and at the time of planning the human annotations, MathEval included 19 datasets. Later, OlympiadBench-CN, OlympiadBench-EN, and GAOKAO-2024 were added, expanding the collection to 22 datasets. Since the purpose of the human annotations was primarily to validate the effectiveness of the Compare Answer Methods, and not for the model performance evaluations or leaderboard rankings, we did not extend the annotations to the newly added datasets. Thus, the human annotation process was based on the 19 datasets originally included.\"}", "{\"title\": \"Response 1\", \"comment\": \"I appreciate the authors\\u2019 response to my concerns. However, I find the response to be insufficiently robust for several reasons. The authors reiterate that their focus on K-12 levels is due to broader applicability and the availability of datasets. However, they fail to provide concrete evidence or data supporting the claim that K-12 levels are more broadly applicable to their user base. Are there user studies or usage statistics to back this focus? The integration of OlympiadBench is noted, but no specific details are given about what progress, if any, has been made toward incorporating undergraduate or Putnam-level mathematics. The authors reference their additions to the appendix (lines 684\\u2013688), but these edits merely restate the same general points made in the response without providing additional depth or insight. 
For instance, there is no mention of whether specific steps, such as pilot experiments with higher-level mathematics or user engagement studies, have been initiated. The mention of new datasets like MATHVISTA and MATHVERSE is intriguing, but it is not tied back to the primary concern: the lack of undergraduate and competition-level math. Are these datasets addressing that gap? If so, why were they not explicitly discussed?\\n\\n\\nWhile the appendix provides details, the main text should include a high-level summary to ensure the benchmark\\u2019s comprehensiveness is apparent. This could take the form of a table or brief paragraph outlining critical attributes, such as the number of problems, difficulty levels, languages, and key mathematical concepts covered. Without this addition, the paper\\u2019s impact is diminished, as it fails to adequately communicate the benchmark\\u2019s strengths to a broader audience.\\n\\nI appreciate the authors\\u2019 response regarding calculation scheduling. However, I find the reply to be insufficient. While the authors state they will include the details in the appendix, there is no evidence in the current manuscript of these details being present. I could not find a description of the algorithm for dynamic dataset partitioning or GPU allocation, either in the main text or in the appendix. The absence of these details is problematic, as calculation scheduling seems to be an important part of the benchmark\\u2019s implementation, particularly in large-scale computational tasks. \\n\\nI appreciate the authors\\u2019 acknowledgment of the imbalance in middle school datasets and their effort to address it by incorporating Zhongkao-2023 and Zhongkao-2024. However, these additions are not reflected in the current submission, despite the response being made before the deadline for paper edits. This makes it difficult to assess their impact on the diversity and balance of MathEval. 
Details such as the dataset characteristics, number of problems, and the specific mathematical concepts they cover are essential to evaluate their relevance and contribution.\\n\\nI appreciate the authors\\u2019 reference to a GitHub repository for implementation details of the evaluation pipeline. However, this response is insufficient. Providing a GitHub repository without adding relevant implementation details in the paper fails to address the concern about reproducibility. Reviewers and readers should not have to rely solely on external resources to understand the evaluation pipeline, particularly since repositories may be updated or removed over time, potentially compromising the reproducibility of results.\\n\\nI appreciate the authors\\u2019 response regarding the training details for the DeepSeek-7B comparison model. However, the explanation provided is vague and insufficient. While I understand the constraints of anonymity, the description of \\\"straightforward Supervised Fine-Tuning (SFT) with standard language model loss\\\" lacks the necessary depth to assess the training process. Key details, such as the size and characteristics of the training dataset, the number of training epochs, the optimizer used, hyperparameter values, and evaluation metrics during training, are missing. Without these details, the reproducibility and rigor of the training process cannot be evaluated.\\n\\nI appreciate the authors\\u2019 explanation regarding the annual updates to the GAOKAO datasets. However, the response does not adequately address my request for more details on the automatic dataset update process. The description provided focuses on manual input by educators, which contradicts the claim of an \\\"automatic\\\" update mechanism. There is no explanation of whether any automation exists, such as for data cleaning, formatting, or integration into MathEval.\"}" ] }
Dem5LyVk8R
Efficient Policy Evaluation with Safety Constraint for Reinforcement Learning
[ "Claire Chen", "Shuze Liu", "Shangtong Zhang" ]
In reinforcement learning, classic on-policy evaluation methods often suffer from high variance and require massive online data to attain the desired accuracy. Previous studies attempt to reduce evaluation variance by searching for or designing proper behavior policies to collect data. However, these approaches ignore the safety of such behavior policies---the designed behavior policies have no safety guarantee and may lead to severe damage during online executions. In this paper, to address the challenge of reducing variance while ensuring safety simultaneously, we propose an optimal variance-minimizing behavior policy under safety constraints. Theoretically, while ensuring safety constraints, our evaluation method is unbiased and has lower variance than on-policy evaluation. Empirically, our method is the only existing method to achieve both substantial variance reduction and safety constraint satisfaction. Furthermore, we show our method is even superior to previous methods in both variance reduction and execution safety.
[ "Reinforcement Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=Dem5LyVk8R
https://openreview.net/forum?id=Dem5LyVk8R
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xr86cOWeWf", "uQeq3eXcru", "tT2KGG2NOu", "qlH5KPGIlT", "moBdv9CtGD", "mHJYv0CZrp", "l4iIWX30lm", "jWYpj5agfK", "gHh5dLFitb", "g9hBUJ8GCI", "dyROKGdcrD", "czQpVrS27X", "bu5WmIWoYn", "WG74b7sKQi", "USYp3zsEGh", "PGKApJGn8Z", "L8To4gFuPw", "IgEr2qtPJX", "BA02XMI67m", "2AR65UelIz" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision" ], "note_created": [ 1732514655758, 1732081419956, 1732078625053, 1732077627099, 1732514635308, 1732078320601, 1732080906558, 1732131113593, 1732081756204, 1731120833439, 1732080377755, 1729064257390, 1734746823577, 1730001267221, 1732648550316, 1732135279528, 1730681505958, 1732650080874, 1732105360300, 1737524261335 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Reviewer_92mj" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ "ICLR.cc/2025/Conference/Submission13457/Reviewer_MExc" ], [ "ICLR.cc/2025/Conference/Submission13457/Area_Chair_PUbt" ], [ "ICLR.cc/2025/Conference/Submission13457/Reviewer_J4Sc" ], [ "ICLR.cc/2025/Conference/Submission13457/Reviewer_HUVe" ], [ "ICLR.cc/2025/Conference/Submission13457/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13457/Reviewer_HUVe" ], [ "ICLR.cc/2025/Conference/Submission13457/Reviewer_J4Sc" ], [ "ICLR.cc/2025/Conference/Submission13457/Reviewer_MExc" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"comment\": \"As the rebuttal phase is nearing its end, we wanted to kindly follow up to check if you had any additional feedback or comments for our paper. Your input would be greatly appreciated, and we are confident to discuss and address any concerns you may have.\\n\\nThank you again for your time and effort in reviewing our work!\"}", "{\"comment\": \">I am also wondering whether the variance of on-policy MC in Tables 3 or 4 is large. I agree with the authors that the proposed method achieves lower variance, but I am not fully convinced of how important are the differences in real applications.\\n\\n**-- *Safety* constrained variance reduction**\\n\\nIn fact, our algorithm does not consider *solely* variance reduction, but instead addresses **variance reduction and safety compliance simultaneously**. As pointed out in our experiment section with data from Table 1,2 and 6, solely reducing variance usually comes with a **trade-off of a higher cost**. Thus, we believe that our method should be evaluated not only by how much variance it reduces, but also how much cost it saves to achieve the desired evaluation accuracy.\\n\\nAs shown in *Table 2* of our paper, our method achieves state-of-the-art performance in **reducing the cost needed to achieve the desired evaluation accuracy**. Specifically, it saves from **$25.4$% to $57.5$%** cost across different MuJoCo environments. This superior performance is a result of the **joint effects of reducing both variance and controlling cost**, as respectively demonstrated in Table 3 and 4 (now Table 5 and 6 in our revision). 
Besides, all our numbers are averaged over $900$ different runs over a wide range of policies, indicating strong statistical significance.\\n\\n| Environment | On-policy MC | **Ours** | ODI [1] | ROS [2] | Saved Cost (%) |\\n|-------------------|--------------|----------|-------|-------|----------------|\\n| Ant | 1000 | 746 | 1136 | 1063 | **25.4%** |\\n| Hopper | 1000 | 552 | 824 | 1026 | **44.8%** |\\n| I. D. Pendulum | 1000 | 681 | 1014 | 1003 | **31.9%** |\\n| I. Pendulum | 1000 | 425 | 615 | 890 | **57.5%** |\\n| Walker | 1000 | 694 | 1031 | 960 | **30.6%** |\\n\\n*Table 2: Cost needed to achieve the same estimation accuracy that on-policy Monte Carlo achieves with $1000$ episodes on MuJoCo. Each number is averaged over 900 independent runs. Standard errors are plotted in Figure 3.*\\n\\n>I think the reproducibility of this paper is rather low given there is no source code attached and there is little information on the experimental setups.\\n\\nThank you for pointing this out! We will publish the source code upon publication to facilitate future research.\\nWe have also included more explanations on experimental setups in Appendix B (line 1078-1082) of the revision. Additional details on experiments are provided in our answer to your Q2-4.\\n\\n### For Typos:\\n>In (3), $V\\\\to V_{ a \\\\sim u}$\\n\\nMany thanks for this extremely detailed notification! We added this subscript into (3) for clarification.\\n\\n### For questions:\\n\\nQ1\\n\\n>In other words, could you tell me the reason why RHS in (12) does not depend on $\\\\pi$? Perhaps is this a typo?\\n\\nThank you for catching this typo. You are correct that the RHS of equation (12) should depend on $\\\\pi$, and it is actually $(1+\\\\epsilon)J^c(\\\\pi)$, where $J^c(\\\\pi)$ is the expected cost of the target policy. We have corrected this oversight in the current paper.\\n\\nIndeed, we have noticed this typo and corrected it right after the submission.
We have also once again checked our work thoroughly to avoid any typo.\\n\\nQ2\\n> The original MuJoCo environment does not have a notion of safety cost. How did the authors introduce safety costs?\\n\\nThe cost of the MuJoCo environments is built on **the control cost of the robot**. The control cost is the L2 norm of the action and is proposed by OpenAI Gymnasium (Brockman et al., 2016). This control cost is motivated by the fact that large actions in robots induce sudden changes in the robot's state and may cause safety issues.\\n\\nQ3\\n>How was the offline dataset generated? Could you tell me the details? I know there is some statement in the Appendix, but it is critical for understanding the actual performance of the algorithm.\\n\\nWe are glad to provide more details! The offline dataset of each environment contains a total of 1,000 episodes generated by 30 policies with various performances. The performance of those policies **ranges from completely randomly initialized policies to well-trained policies** in each environment. For example, in the OpenAI Gymnasium Hopper-v4, the performance of those 30 policies ranges from around 18 to around 2800. We let offline data be generated by various policies to **simulate the fact that offline data are from different past collections**.\"}", "{\"comment\": \"### For questions:\\n\\n>Specifically, if the dataset lacks sufficient coverage, is it still possible to obtain a reliable behavior policy?\\n\\nThank you for your question. As with most offline RL approaches, the number and coverage of offline data do impact the quality of the learned policy. Specifically, as we acknowledged in the conclusion of our paper, **there is no free lunch**: if the offline dataset contains only one data pair, it is clear that obtaining a reliable behavior policy is infeasible.
This limitation is **inherent to offline RL** and cannot be fully resolved, as pointed out by Levine et al., 2020, \\u201cOffline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems\\u201d.\\n\\n > Could the authors also consider comparing with online bootstrapping methods, as they might provide more efficient solutions for online policy evaluation?\\n\\nThank you for your insightful comment.\\n\\nWe believe that in online policy evaluation, the on-policy Monte Carlo method is the standard approach. E.g., when an RL practitioner wants to draw a curve of their agents\\u2019 performance against training steps (we believe almost every RL paper has such a curve), they will just use Monte Carlo. The reason is that for this purpose (i.e., for hyperparameter tuning or for model selection), we need an unbiased **scalar performance metric**. If by bootstrapping methods you refer to TD methods, we believe those bootstrapping methods are mostly used for estimating the value function, not the scalar performance metric. In general, **those bootstrapping methods are biased because they need function approximation**. So when practitioners **need an unbiased scalar performance metric**, MC is the dominating method.\\n\\nBesides, the existing best-performing methods in policy evaluation [1][2][3] **all consider the on-policy MC as their primary baseline**. Therefore, we also use the on-policy Monte Carlo and other Monte Carlo estimators as our baselines.\\n\\n[1] (ICML Jiang and Li, 2016) \\\"Doubly Robust Off-Policy Value Evaluation For Reinforcement Learning\\\" (DR)\\n\\n[2] (ICML Liu and Zhang, 2024) \\u201cEfficient Policy Evaluation with Offline Data Informed Behavior Policy Design\\u201d (ODI)\\n\\n[3] (NeurIPS Zhong et al.
2022) \\\"Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning\\\" (ROS)\\n\\n[4](ICML Mukherjee et al.2024)\\\"SaVeR: Optimal Data Collection Strategy for Safe Policy Evaluation in Tabular MDP\\u201d\"}", "{\"comment\": \"Many thanks for the encouraging feedback. Your comments truly highlight the core strength of this work: combining rigorous theoretical results with state-of-the-art empirical performance.\\n\\n### For weaknesses:\\n\\n>More discussion could be included on the specific form of the safety constraint. \\n\\nIn equation (18), we prove that our designed behavior policy $\\\\mu^*$ satisfies the safety constraint $\\\\quad J^c(\\\\mu^*)\\\\leq (1+\\\\epsilon)J^{c}(\\\\pi)$, where $J^c(\\\\mu^*)$ and $J^{c}(\\\\pi)$ are the expected total cost of policy $\\\\mu^*$ and the target policy $\\\\pi$, respectively. By setting $\\\\epsilon=0$, as is the choice in our experiment section, **we aim to find a variance-reducing behavior policy without increasing the execution cost compared with the on-policy MC method**. In fact, under the threshold $\\\\epsilon=0$, our method is the **only method** to achieve both variance reduction and safety constraint satisfaction, compared with the existing best-performing methods [1][2] published on ICML and NeurIPS.\\n\\nBesides, in situations where variance reduction becomes a priority, allowing $\\\\epsilon$ to be slightly greater than $0$ can be a reasonable trade-off to achieve greater variance reduction.\\n\\n\\n### For questions:\\n>What are the implications of considering the undiscounted setting, as opposed to the discounted setting?\\n\\nThank you for your insightful question. \\nBecause we consider finite horizon MDPs, we use the undiscounted setting for simplifying notations. 
Our method can be directly extended to the discounted setting by simply adding the discount factor $\\\\gamma$ into all derivations.\\n\\n\\n[1] (Liu and Zhang, ICML 2024) \\u201cEfficient Policy Evaluation with Offline Data Informed Behavior Policy Design\\u201d (ODI)\\n\\n[2] (Zhong et al. NeurIPS 2022) \\\"Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning\\\" (ROS)\"}", "{\"comment\": \"As the rebuttal phase is nearing its end, we wanted to kindly follow up to check if you had any additional feedback or comments for our paper. Your input would be greatly appreciated, and we are confident to discuss and address any concerns you may have.\\n\\nThank you again for your time and effort in reviewing our work!\"}", "{\"comment\": \"Thanks a lot for your detailed comments. Your opinion shows that our paper provides solid theoretical foundations and empirical results, outperforming prior methods in both variance reduction and safety compliance.\\n\\n### For weaknesses:\\n>While addressing safety constraints is novel, the approach to variance minimization under these constraints lacks significant innovation.\\n\\nAs pointed out by your review, previous methods published on ICML and NeurIPS [1][2][3] only consider variance reduction, completely ignoring safety issues. Thus, although their methods achieve reduced variance, they indeed *increase the execution cost* compared with the on-policy MC method, as shown in Table 6 of our paper:\\n\\n| | On-policy MC | **Ours** | ODI [1] | ROS [2] |\\n|---------------|--------------|----------|-------|-------|\\n| Ant | 1.000 | **0.897** | 1.397 | 1.033 |\\n| Hopper | 1.000 | **0.930** | 1.523 | 1.021 |\\n| I. D. Pendulum | 1.000 | **0.876** | 1.399 | 1.012 |\\n| I. Pendulum | 1.000 | **0.961** | 1.743 | 0.990 |\\n|Walker | 1.000 | **0.953** | 1.485 | 1.061 |\\n\\n*Table 6: Average trajectory cost on MuJoCo. Numbers are normalized by the cost of the on-policy estimator. 
ODI and ROS have much larger costs because they both ignore safety constraints.*\\n\\nBy contrast, our method innovatively considers variance reduction and safety constraint satisfaction simultaneously. In fact, our method is the **only method** to achieve both variance reduction and safety constraint satisfaction, compared with [1][2]. As computed from Table 2, our method saves up to $57.5$% cost to achieve the desired evaluation accuracy, surpassing all the baseline methods across various environments.\\n\\n| | On-policy MC | **Ours** | ODI [1] | ROS [2] | Saved Cost (%) |\\n|-------------------|--------------|----------|-------|-------|----------------|\\n| Ant | 1000 | 746 | 1136 | 1063 | **25.4%** |\\n| Hopper | 1000 | 552 | 824 | 1026 | **44.8%** |\\n| I. D. Pendulum | 1000 | 681 | 1014 | 1003 | **31.9%** |\\n| I. Pendulum | 1000 | 425 | 615 | 890 | **57.5%** |\\n| Walker | 1000 | 694 | 1031 | 960 | **30.6%** |\\n\\n*Table 2: Cost needed to achieve the same estimation accuracy that on-policy Monte Carlo achieves with $1000$ episodes on MuJoCo. Each number is averaged over 900 runs. Standard errors are plotted in Figure 3 of our paper.*\\n\\n>In the experimental section, the paper could benefit from comparisons with a broader range of work in safe reinforcement learning to better demonstrate its advantages.\\n\\nThank you for pointing this out. In fact, compared with mainstream safe RL papers focusing on policy improvement, our work addresses safety in policy evaluation. Specifically, we aim at reducing the evaluation variance while satisfying safety constraints.\\n\\nHowever, to the best of our knowledge, [4] is the only existing work considering **safety-constrained policy evaluation** in reinforcement learning. While our method and the baselines we compared with [1][2] are designed for **general MDPs** and are **model-free**, [4], as suggested by its title, focuses solely on **tabular MDPs** and is **model-based**.
It is not clear to us how [4] can be used in our MuJoCo experiments, which are non-tabular. Given its limited applicability, we have not included [4] as a baseline in our comparisons.\"}", "{\"comment\": \"Thank you for your extremely detailed review and practical suggestions. Your comments show that our work is well written, and that the variance reduction and safety compliance of our method are guaranteed both theoretically and empirically.\\n\\n### For weaknesses:\\n> There are several survey papers on safe RL, so the authors may want to refer to how to comprehend safe RL.\\n\\nThank you for your practical suggestion! We have adjusted our related work section and cited these papers in our main text (line 69-70 and line 81-83).\\n\\n>Readers cannot know that this paper deals with off-policy evaluation until line 160. Please clearly mention that in the Abstract or Introduction. It may be better to add \\\"off-policy\\\" in the title.\\n\\nMany thanks for this practical suggestion. We have taken it to heart! We have added \\u201coff-policy\\u201d into our title for better understanding.\\n\\n> I think it is better to discuss the theoretical relations between (12) and (15, 16), e.g., equivalent, conservative approximation, etc.\\n\\nWe have taken this suggestion to heart. In (12), we require that $J^c(\\\\mu^*)\\\\leq (1+\\\\epsilon)J^{c}(\\\\pi)$, where $J^c(\\\\mu^*)$ and $J^{c}(\\\\pi)$ are the expected total cost of policy $\\\\mu^*$ and the target policy $\\\\pi$, respectively. In (16), we further require that $E_{a\\\\sim\\\\mu_t}[q^c_{\\\\mu,t}(s,a)]\\\\leq (1+\\\\epsilon)E_{a\\\\sim\\\\pi_t}[q^c_{\\\\pi,t}(s,a)]$ for all time steps $t$. In fact, the constraint in (16) is **stricter** than (12). In Theorem 3, we proved that our behavior policy $\\\\mu^*$ derived under (15)(16) does satisfy the wider constraint (12).\\n\\nWe have added more discussions on the theoretical relations between them in our main text (line 369-370).
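To make the aggregate constraint in (12) concrete, here is a minimal illustrative sketch (written for this response only, not code from our implementation; `costs_mu` and `costs_pi` are hypothetical arrays of sampled per-episode total costs):

```python
import numpy as np

def satisfies_safety_constraint(costs_mu, costs_pi, epsilon=0.0):
    """Empirically check J^c(mu) <= (1 + epsilon) * J^c(pi), where each
    expected total cost is estimated by the mean of sampled episode costs."""
    return float(np.mean(costs_mu)) <= (1.0 + epsilon) * float(np.mean(costs_pi))
```

With `epsilon = 0`, the behavior policy is required not to exceed the average cost of the target policy, which is the setting used in our experiments.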
Thanks again for your practical suggestion!\\n\\n>I think readers cannot identify the magnitude of variance for each algorithm from Figures 1 or 2. There should be a table including the actual variance numbers (at least in the Appendix) like Tables 3 or 4.\\n\\nThank you for pointing this out! We agree that including a table with the variance and cost reduction numbers would provide a clearer representation of the results. To address this, we have added Table 3 and 4 (corresponding to Figure 1 and 2, respectively) to the appendix and attached them below:\\n|Environment Size | On-policy MC | **Ours** | ODI | ROS|\\n|------------------|--------------|----------|-------|-------|\\n| 1,000 | 1.000 | **0.547** | 0.460 | 0.953 |\\n| 27,000 | 1.000 | **0.575** | 0.484 | 0.987 |\\n\\n*Table 3: Relative variance for estimators on Gridworld. Each number is averaged over 900 runs. Standard errors are plotted in Figure 1.*\\n\\n| Env Size | On-policy MC | **Ours** | ODI | ROS | Saved Cost Percentage |\\n|----------|--------------|----------|-------|-------|------------------------|\\n| 1,000 | 1000 | **472** | 738 | 1035 | **(1000 - 472)/1000 = 52.8%** |\\n| 27,000 | 1000 | **487** | 765 | 1049 | **(1000 - 487)/1000 = 51.3%** |\\n\\n*Table 4: Cost needed to achieve the same estimation accuracy that on-policy Monte Carlo achieves with 1000 episodes on Gridworld. Each number is averaged over 900 runs. Standard errors are plotted in Figure 2.*\\n\\nBesides, the numbers of *average trajectory cost* on Gridworld have been shown in our Table 1.\\nConsidering *solely* variance reduction, Table 3 shows that our method achieves greatly lower variance than the on-policy MC method. 
In Table 4, we further demonstrate that our method **saves more than 50% of cost to achieve the desired estimation accuracy**, outperforming all three baselines significantly.\"}", "{\"comment\": \"Perfect, thanks again for the extremely constructive comments and the fast turn around.\"}", "{\"comment\": \"Q4\\n>Could you show me the experimental results or figures illustrating how the performance of each algorithm is affected by the number of offline datasets?\\n\\nTo better answer your question, we provide the result of ablation studies using **different numbers of offline data**. In the tables below, 1,2,3 K means we use offline data with $1000$, $2000$ and $3000$ episodes, respectively. Also, notice that SCOPE (Safety-Constrained Off-Policy Evaluation) is the name of our method. \\n| | On-policy MC | SCOPE-1K | SCOPE-2K | SCOPE-3K | Saved Cost Percentage |\\n|---------------|--------------|-----------|-----------|-----------|------------------------|\\n| Ant | 1000 | 746 | 707 | 687 | **25.4% \\u2013 31.3%** |\\n| Hopper | 1000 | 552 | 515 | 488 | **44.8% \\u2013 51.2%** |\\n| I. D. Pendulum | 1000 | 681 | 641 | 621 | **31.9% \\u2013 37.9%** |\\n| I. Pendulum | 1000 | 425 | 388 | 369 | **57.5% \\u2013 63.1%** |\\n| Walker | 1000 | 694 | 667 | 647 | **30.6% \\u2013 35.3%** |\\n\\n*Table 7: Cost needed to achieve the same estimation accuracy that on-policy Monte Carlo achieves\\nwith 1000 online episodes on MuJoCo. Each number is averaged over 900 independent runs.*\\n\\n| | On-policy MC | SCOPE-1K | SCOPE-2K | SCOPE-3K |\\n|----------|--------------|-----------|-----------|-----------|\\n| Ant | 1.000 | 0.835 | 0.809 | 0.780 |\\n| Hopper | 1.000 | 0.596 | 0.564 | 0.531 |\\n| I. D. Pendulum | 1.000 | 0.778 | 0.730 | 0.718 |\\n| I. Pendulum | 1.000 | 0.439 | 0.401 | 0.389 |\\n| Walker | 1.000 | 0.728 | 0.709 | 0.690 |\\n\\n*Table 8: Relative variance of estimators on MuJoCo. 
The relative variance is defined as the variance\\nof each estimator divided by the variance of the on-policy Monte Carlo estimator. Each number is averaged over 900 independent runs.*\\n\\n\\n| | On-policy MC | SCOPE-1K | SCOPE-2K | SCOPE-3K |\\n|---------------|--------------|-----------|-----------|-----------|\\n| Ant | 1.000 | 0.897 | 0.881 | 0.877 |\\n| Hopper | 1.000 | 0.930 | 0.921 | 0.918 |\\n| I. D. Pendulum | 1.000 | 0.876 | 0.874 | 0.867 |\\n| I. Pendulum | 1.000 | 0.961 | 0.958 | 0.956 |\\n| Walker | 1.000 | 0.953 | 0.949 | 0.946 |\\n\\n*Table 9: Average trajectory cost on MuJoCo under different offline dataset sizes. Numbers are normalized by the cost of the on-policy estimator. Each number is averaged over 900 independent runs.*\\n\\nAs shown in the above tables, our method scales with the amount of offline data. Specifically, in Table 7, we **saved 25.4%-63.1% cost** in achieving the desired estimation accuracy across different environments and different offline dataset sizes, compared with the on-policy Monte Carlo method. Besides, in Table 8 & Table 9, we see that both the variance and the average trajectory cost are reduced.\\n\\n\\n[1] (Liu and Zhang, ICML 2024) \\u201cEfficient Policy Evaluation with Offline Data Informed Behavior Policy Design\\u201d (ODI)\\n\\n[2] (Zhong et al., NeurIPS 2022) \\\"Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning\\\" (ROS)\"}", "{\"summary\": \"This paper provides an on-policy evaluation method that aims to reduce evaluation variance while also ensuring safety. This is done in the context of contextual bandits, sequential RL, and offline RL. Via theoretical results, this method is shown to be feasible, unbiased, and variance-reducing. 
Empirical results on GridWorld and MuJoCo demonstrate that under different cost budgets, the proposed method is able to improve performance (variance reduction) over baselines.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"This work is well-motivated in terms of aiming to reduce variance while also _ensuring safety_ during on-policy evaluation.\", \"The proposed method is simple, and theoretical results demonstrate that the method is feasible, unbiased, and variance-minimizing.\", \"The experimental demonstration is solid, with improved performance (variance reduction) over baselines under any given cost budget. The authors demonstrate good experimental practices, including averaging over many runs and comparing against strong baselines.\"], \"weaknesses\": \"More discussion could be included on the specific form of the safety constraint. In what settings is it sufficient to ensure that \\\"the expected cost of the designed behavior policy $\\\\mu$ should be smaller than the multiple of the expected cost of the target policy $\\\\pi$\\\"? In what settings might this form of safety constraint be insufficient?\", \"questions\": [\"What are the implications of considering the undiscounted setting, as opposed to the discounted setting?\", \"See \\\"Weaknesses\\\"\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for your time and comments. As you pointed out, our method is theoretically grounded, and our work also includes a thoughtful and comprehensive literature review.\\n\\n### For weaknesses:\\n\\n>In the experiment, the performances of the constrained RL algorithm under different policy evaluation methods should be compared.\\n\\nThank you for your insightful comment! We have two interpretations for this question:\\n\\n**1. 
Application of our method under other policy evaluation methods (TD):**\\n\\nAs answered in your last question, we believe that in online policy evaluation, the on-policy Monte Carlo method is the standard approach. It provides an **unbiased scalar performance metric** that every RL implementation requires (i.e., for hyperparameter tuning or for model selection). By contrast, TD is **biased because it needs bootstrapping and function approximation**. So when practitioners need an unbiased scalar performance metric, MC is the dominating method [1][2]. \\n\\nAdditionally, the well-known ICML paper (Jiang and Li, 2016, \\n *Doubly Robust Off-Policy Value Evaluation For Reinforcement Learning* ) as well as [1][2] are all different Monte Carlo estimators. Therefore, in this work, we also develop our algorithm using the Monte Carlo method, and use different Monte Carlo estimators as baselines.\\n\\nBesides, as specified in our conclusion (line 539), extending our constrained variance minimization technique to the TD method is our future work.\\n\\n**2. Comparison of our method with other safety constrained RL algorithm:**\\n\\nThank you for pointing this out. In fact, compared with mainstream safe RL papers focusing on policy improvement, our work addresses the safety in policy evaluation. Specifically, we aim at reducing the evaluation variance while satisfying safety constraints.\\n\\nHowever, to the best of our knowledge, [4] is the only existing work considering **safety-constrained policy evaluation** in reinforcement learning. However, while our method and the baselines we compared [1][2] are designed for **general MDPs** and are **model-free**, [4], as suggested by its title, focuses solely on **tabular MDPs** and is **model-based**. It is not clear to us how [4] can be used in our MuJoCo experiments, which are non tabular. 
Given its limited applicability, we have not included [4] as a baseline in our comparisons.\\n\\n\\nPlease let us know if we have addressed your comments! If there are methods that you want us to compare with, could you please refer us to them? We would be happy to discuss.\\n\\n\\n### For questions:\\n\\n>Is the transition from (12) to (13) too conservative? As (12) requires the total cost to be smaller than the threshold but (13) makes the feasible set smaller. What if the optimization problem (13) has an empty feasible set? How to derive Theorem 3 when (13) is too conservative, as the policy $\\\\pi$ will not satisfy this conservative constraint?\\n\\nThanks for your comment! We acknowledge that there was a typo in (12), which we have fixed in our revision pdf. In (12), we require that $\\\\quad J^c(\\\\mu^*)\\\\leq (1+\\\\epsilon)J^{c}(\\\\pi)$, where $J^c(\\\\mu^*)$ and $J^{c}(\\\\pi)$ are the expected total cost of policy $\\\\mu^*$ and the target policy $\\\\pi$, respectively. \\n\\nWhile in (13), we further require that $ E_{a\\\\sim\\\\mu_t}[q^c_{\\\\mu,t}(s,a)]\\\\leq (1+\\\\epsilon) E_{a\\\\sim\\\\pi_t}[q^c_{\\\\pi,t}(s,a)]$ for all time steps $t$. The constraint in (13) is a **sufficient condition** for (12), as it ensures that our behavior policy $\\\\mu$ performs under the safety threshold throughout the time steps. In Theorem 3, we proved that our derived behavior policy $\\\\mu^*$ (with constraint (13)) **does satisfy the original and wider constraint (12)**.\\n\\nFor your feasibility concern, we have proved in Appendix A.4 that **$\\\\pi$ is in the feasible set under the constraint (13)**. Please let us know if you have any further questions. We are happy to discuss!\\n\\n\\n>How is the proposed method applied to continuous state-action space?\\n\\nOur method is applicable to continuous state space with discrete action space. 
Specifically, we estimate the $\\tilde{r}$ functions by passing the continuous states into the neural network and getting the value for each action. We then use $\\tilde{r}$ to construct our behavior policy $\\mu^*$ by (15) for safe online data collection.\"}", "{\"summary\": \"This paper proposes an optimal variance-minimizing behavior policy to guarantee the satisfaction of safety constraints. Theoretically, the authors prove that their proposed method is unbiased and has lower variance. Empirical experiments show that their proposed method achieves variance reduction and safety guarantee.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors address a very important and interesting problem where safety constraint is considered in RL policy evaluation.\", \"The motivation behind the problem formulation is well-presented while using an example of Google's data center.\", \"The background section is well-written and easy to follow.\", \"I think it is a good idea to provide theoretical results and key ideas in the case of contextual bandit.\", \"Theoretical results are nice and proofs are well-presented.\", \"The empirical experiments are overall well-conducted.\", \"The authors' proposed method performs well in Grid-world and MuJoCo.\"], \"weaknesses\": [\"In the Related Work section, the literature review of safe RL can be written in a more organized way. There are several survey papers on safe RL, so the authors may want to refer to how to comprehend safe RL.\", \"Brunke, Lukas, et al. \\\"Safe learning in robotics: From learning-based control to safe reinforcement learning.\\\" Annual Review of Control, Robotics, and Autonomous Systems 5.1 (2022): 411-444.\", \"Liu, Yongshuai, Avishai Halev, and Xin Liu. \\\"Policy learning with constraints in model-free reinforcement learning: A survey.\\\" In IJCAI. 2021.\", \"Gu, Shangding, et al. 
\\\"A review of safe reinforcement learning: Methods, theory and applications.\\\" arXiv preprint arXiv:2205.10330 (2022).\", \"Wachi, Akifumi, Xun Shen, and Yanan Sui. \\\"A Survey of Constraint Formulations in Safe Reinforcement Learning.\\\" In IJCAI (2024)\", \"Readers cannot know that this paper deals with off-policy evaluation until line 160. Please clearly mention that in the Abstract or Introduction. It may be better to add \\\"off-policy\\\" in the title.\", \"I think lines 309-369 are hard to follow. I understand that (12) is challenging to handle, but it is unclear whether the authors' transformation is reasonable or not. I think it is better to discuss the theoretical relations between (12) and (15, 16), e.g., equivalent, conservative approximation, etc.\", \"I think readers cannot identify the magnitude of variance for each algorithm from Figures 1 or 2. There should be a table including the actual variance numbers (at least in the Appendix) like Tables 3 or 4.\", \"I am also wondering whether the variance of on-policy MC in Tables 3 or 4 is large. I agree with the authors that the proposed method achieves lower variance, but I am not fully convinced of how important are the differences in real applications.\", \"I think the reproducibility of this paper is rather low given there is no source code attached and there is little information on the experimental setups. It is problematic as a research paper that even basic information is not mentioned for reproducing experimental results.\", \"### Typos\", \"In (3), $\\\\mathbb{V}$ --> $\\\\mathbb{V}_{a \\\\sim \\\\mu}$.\"], \"questions\": [\"Q1: I could not understand (12). Why is the safety threshold $(1+ \\\\epsilon)$? In other words, could you tell me the reason why RHS in (12) does not depend on $\\\\pi$? Perhaps is this a typo?\", \"Q2: The original MuJoCO environment does not have a notion of safety cost. How did the authors introduce safety costs?\", \"Q3: How was the offline dataset generated? 
Could you tell me the details? I know there is some statement in the Appendix, but it is critical for understanding the actual performance of the algorithm.\", \"Q4: Could you show me the experimental results or figures illustrating how the performance of each algorithm is affected by the number of offline datasets?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper provides an on-policy evaluation method that aims to reduce the variance while satisfying safety constraints.\\nThe problem is an important one and the reviewers are generally positive about the paper. The experiments are also well executed, and I encourage the authors to open source their code.\", \"additional_comments_on_reviewer_discussion\": \"Minor concerns have been addressed during the rebuttal, and the reviewers are all positive about the paper\"}", "{\"summary\": \"This paper studies the problem of policy evaluation under constraints and proposes an algorithm that reduces the policy evaluation variance while satisfying the constraint.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed solution is derived by formulating and solving an optimization problem, which has support from the theoretical results.\\n\\nThe literature review is thoughtful and complete.\", \"weaknesses\": \"In the experiment, the performances of the constrained RL algorithm under different policy evaluation methods should be compared.\\n\\nSee **Questions**\", \"questions\": \"Is the transition from (12) to (13) too conservative? As (12) requires the total cost to be smaller than the threshold but (13) makes the feasible set smaller. What if the optimization problem (13) has an empty feasible set? 
How to derive Theorem 3 when (13) is too conservative, as the policy $\\\\pi$ will not satisfy this conservative constraint?\\n\\nHow is the proposed method applied to continuous state-action space?\\n\\nThe mainstream policy evaluation uses a one-step method, such as the TD method. Can the proposed method be applied to these methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors for their rebuttals and explanations. My concerns have been addressed. I have increased my score to 6.\"}", "{\"comment\": \">The mainstream policy evaluation uses a one-step method, such as the TD method. Can the proposed method be applied to these methods?\\n\\nYes, our method has the potential to be applied to the TD method. Specifically, our safety-constrained variance reduction problem formulation has the potential to be applied to one-step data collection in TD. As mentioned at the end of our conclusion section, this is one future direction of our work.\\n\\nBesides, we believe that in online policy evaluation, the on-policy Monte Carlo method is the standard approach. E.g., when an RL practitioner wants to draw a curve of their agents\\u2019 performance against training steps (we believe every empirical RL paper has such a curve), they will use Monte Carlo. The reason is that for this purpose (i.e., for hyperparameter tuning or for model selection), we need an unbiased **scalar performance metric**. TD is a bootstrapping method, which is more used for getting the value function. In general, **TD method is biased because it needs function approximation and bootstrapping**. So when practitioners **need an unbiased scalar performance metric**, MC is the dominating method [1][2]. This fact motivates us to propose an optimal variance-minimizing behavior policy under safety constraints for MC methods. \\n\\nThank you again for your insightful question! 
\\n\\n\\n\\n[1] (ICML Liu and Zhang, 2024) \\u201cEfficient Policy Evaluation with Offline Data Informed Behavior Policy Design\\u201d (ODI)\\n\\n[2] (NeurIPS Zhong et al. 2022) \\\"Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning\\\" (ROS)\"}", "{\"summary\": \"This paper addresses high variance in on-policy evaluation in reinforcement learning by proposing a behavior policy that minimizes variance under safety constraints. Unlike previous methods that ignore safety, this approach provides unbiased, low-variance evaluation while ensuring safe execution. Empirical results show it outperforms prior methods in both variance reduction and safety compliance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The study addresses the new and intriguing problem of incorporating safety constraints in policy evaluation.\\n2. The paper is well-organized and easy to follow.\\n3. It provides comprehensive support with both theoretical and empirical evidence.\", \"weaknesses\": \"1. While addressing safety constraints is novel, the approach to variance minimization under these constraints lacks significant innovation.\\n2. In the experimental section, the paper could benefit from comparisons with a broader range of work in safe reinforcement learning to better demonstrate its advantages.\", \"questions\": \"1. From my understanding, the algorithm proposed in this paper aims to learn a behavior policy that minimizes variance while satisfying safety constraints. However, I don\\u2019t see a clear connection in the theoretical results to the offline dataset used. Could you clarify this? Specifically, if the dataset lacks sufficient coverage, is it still possible to obtain a reliable behavior policy?\\n\\n2. In the experimental section, the paper compares the proposed method to the Monte Carlo (MC) approach, which seems straightforward but rather basic. 
Could the authors also consider comparing with online bootstrapping methods, as they might provide more efficient solutions for online policy evaluation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response. My concerns have been addressed. I have adjusted my rating accordingly.\"}", "{\"title\": \"Thank you for clarifications\", \"comment\": \"I appreciate the authors for their rebuttals and additional experiments. My concerns have been addressed. If Eq. (12) is a typo, then I am fully convinced by the claims of this paper. I increased my score to 8 and confidence to 5.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
DedkG85z3c
Aligning to Constraints for Data-Efficient Language Model Customization
[ "Fei Wang", "Chao Shang", "Shuai Wang", "Sarthak Jain", "Qiang Ning", "Bonan Min", "Vittorio Castelli", "Yassine Benajiba", "Dan Roth" ]
General-purpose language models (LMs) are aligned to diverse user intents, but fall short when it comes to specific applications. While finetuning is the default method for customized alignment, human annotations are often unavailable in various customization scenarios. Based on the observation that one of the main issues of LM customization is constraint adherence, we investigate the feasibility of using constraints as a bridge from general LMs to customized ones. We investigate common constraints in NLP tasks, categorize them into three classes based on the types of their arguments, and propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints. Specifically, ACT uses constraint verifiers, which are typically easy to implement in practice, to compute the constraint satisfaction rate (CSR) of each response. It samples multiple responses for each prompt and collects preference labels based on their CSR automatically. Subsequently, ACT adapts the LM to the target task through a ranking-based learning process. Experiments on fine-grained entity typing, abstractive summarization, and temporal question answering show that ACT is able to enhance LMs' capability to adhere to different classes of constraints, thereby achieving task performance comparable to or approaching that of finetuning with labeled data.
[ "LLM Customization", "Data-efficiency", "Constraint-driven Learning" ]
https://openreview.net/pdf?id=DedkG85z3c
https://openreview.net/forum?id=DedkG85z3c
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ybQ7ucsBA5", "xMuh1u1XVz", "qzqoP2bMS8", "pFLxMHzSxH", "n18ed23z1i", "kCadBz4QsB", "chof1erJgg", "cUF4HQhrGc", "Z4IbXd5PD0", "Yyxq5BbCqn", "VT9b9vWIaB", "U9g8jO2bcY", "OvRhg67lLK", "OOpppqTFX8", "NRf6sqIdN4", "GPGgsUN6ZG", "Fh15NsRHP9", "ElkMnWuqBE", "BYlTzGu6Xu", "B5Od3EX5uP", "97AhkTBJMN", "7Qf2cFtnVI", "28iS050Fa7", "24NcUBBI5m" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment" ], "note_created": [ 1730392686680, 1732775180723, 1732773311280, 1732602849473, 1733166407707, 1731008772621, 1732341932100, 1733295621820, 1732590318207, 1732573923299, 1730698503029, 1732681798295, 1730220934087, 1733295579811, 1732340569418, 1732635161516, 1732344477941, 1733167663250, 1732343706237, 1733167283195, 1733295441251, 1733295543998, 1733161683499, 1734312103507 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8005/Reviewer_HmdB" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Reviewer_Fejv" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Reviewer_mNHe" ], [ "ICLR.cc/2025/Conference/Submission8005/Reviewer_mNHe" ], [ "ICLR.cc/2025/Conference/Submission8005/Reviewer_dUjU" ], [ 
"ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Reviewer_dUjU" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ], [ "ICLR.cc/2025/Conference/Submission8005/Reviewer_mNHe" ], [ "ICLR.cc/2025/Conference/Submission8005/Authors" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors proposed a methodology to customize an LLM to specific user needs by introducing task constraints.\\nThe main contributions consist of, firstly, defining a categorization of constraints that covers scenarios where the constraints involve the response alone (as in restricting the label(s) of the prediction), context and response, and groups of prompts and responses. Secondly,\\nthe framework allows for the implementation of automatic constraint verifiers, which are used to build preference data and customize the models using a ranking-based method.\\nFinally, experiments on tasks representative of each constraint scenario, including analysis on transfer task learning scenarios, and the potential impact of constraint verification as a reward signal for reward modeling.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The framework makes no assumptions about the constrain type, and can be easily categorized into the proposed hierarchy.\", \"The collection strategy for the preference dataset is sound, maximizing the \\u201cconstraint satisfaction score\\u201d gap between responses whilst making sure all of them are highly probable. 
This effectively leverages the sensitivity of the \\u201cconstraint satisfaction score\\u201d given for each scenario and the well-trained knowledge in the base LLM.\"], \"weaknesses\": [\"Although the constraint hierarchy is sound, only one task was investigated per constraint category. The exploration of at least two more tasks per category would provide a more confident indication that the proposed method is efficient and we are not just overfitting to a specific task. Perhaps peripheral experiments in section 5 can be reworked as main experiments in section 4.\", \"Critically, the paper is also missing more details and examples on the defined constraint categorization, as well as an analysis of the distribution of the constraint satisfaction scores for each constraint scenario (see ORPO, https://arxiv.org/pdf/2403.07691)\"], \"questions\": [\"## Section 3.5: Training\", \"Some details are missing (as well as numeration on the equations) from the loss functions, which make the section difficult to understand for someone who hasn\\u2019t seen the cited paper (RRHF). It is alright to add redundant details in order to make the sections more understandable. For instance, the margin (L228) does not appear in L_rank.\", \"## Section 4.1. Implementation details\", \"From what I gather, the chosen (preferred) response is defined as one that satisfies all constraints, whereas a rejected response is one that does not satisfy \\u2018some\\u2019 constraints. Which kind of constraints are considered when selecting a rejected response?\", \"## Section 4.2.\", \"Details of the \\u2018enhanced loss function\\u2019 are missing in appendix C\", \"For the \\u2018inference with constraints\\u2019 baseline, how often are the constraints verified during inference? 
After each token or sentence, or at the end?\", \"## Section 4.3\", \"Footnote 6, could you please elaborate in a more technical way what the \\u2018garbage in-garbage out\\u2019 problem is?\", \"When collecting feedback from the constraint verifier, two responses are sampled from all possible combinations. It is desirable for this response pair to have minimal conflict. How is this conflict defined, and how is it quantified?\", \"## Section 5\", \"5.1. The argumentation in this section would be better placed at the related work section, as it provides general motivation\", \"5.2. In table 2 and 3, when source task = \\u2018-\\u2018, does this mean that training was done on the target task or that inference was done over the base model (before ACT training)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **\\\"why this constraints and not another one\\\"**\\n\\nThank you for raising this insightful question. \\n\\n**TL;DR:**\\n* It is not about selecting a specific constraint for a particular task but rather identifying a representative constraint for a broad category and then choosing a suitable task as the testbed for evaluation.\\n* The coherent constraint falls into the f(y) type. For this type, we chose to verify a more general constraint\\u2014option lists, which are widely used to define valid solution spaces across various tasks.\\n\\n\\n**Explanation**\\n\\nAs indicated in lines 238-242, we select representative tasks for each of the three constraint categories to ensure comprehensive coverage of distinct types of constraints. It is not about selecting a specific constraint for a particular task but rather identifying a representative constraint for a broad category and then choosing a suitable task as the testbed for evaluation. 
Additionally, in Section 5.2, we conduct experiments to demonstrate the transferability of constraints across tasks, showcasing their adaptability and broader applicability. Overall, our work includes five groups of experiments that span a diverse spectrum of constraints, further validating the generality and robustness of our approach.\\n\\nThe coherent constraint falls into the f(y) type. For this type, we chose to verify a more general constraint\\u2014option lists, which are widely used to define valid solution spaces across various tasks. This approach allows us to evaluate constraints that are broadly applicable and representative of real-world scenarios.\"}", "{\"comment\": \"> **Regarding the constraint decoding baselines.**\\n\\nWe appreciate the reviewer\\u2019s insightful feedback. The key advantage of ACT-style tuning methods over inference-time interventions lies in their ability to enhance the model\\u2019s capabilities, whereas inference-time interventions rely on utilizing the existing model capabilities. This partially explains why the two methods are complementary.\\n\\nIn our paper, we evaluate different inference-time intervention methods based on the literature related to each task. For entity typing, we assessed post-hoc rule-based correction and constrained decoding. For summarization, we employed reranking beams at the last decoding step, a proven method that is effective in boosting model performance (Cao and Wang, 2021). The results align: ACT can further boost model performance. Notably, ACT can improve constraint satisfaction rates, achieving performance levels comparable to inference-time interventions.\\n\\nWe also compared constrained decoding and ACT on a subset of the CommonGen validation set. Constrained decoding achieved a ROUGE-L score of 41.6, while ACT, after less than 300 training steps, achieved a score of 42.0, further demonstrating the effectiveness of ACT. 
Additionally, we observed that the constraint satisfaction rate (CSR; i.e., concept coverage in this case) for constrained decoding is highly dependent on the beam size, whereas ACT can achieve a CSR of 92.3% without requiring further intervention. This highlights the different advantages of ACT and constrained decoding.\\n\\n\\nCao and Wang. \\\"CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization.\\\" EMNLP (2021).\\n\\n> **implementing constraint verifier can be a limitation**\\n\\nThank you for highlighting this concern. We apologize for any confusion caused. We agree that constructing constraint verifiers is a limitation of ACT, and we have discussed this limitation and proposed potential solutions in Appendix D.\\n\\nConstraints are prevalent in NLP tasks, and the extensive literature on these tasks serves as a valuable resource for identifying well-defined constraints [1-6] (more examples are available in lines 90-102). For creative text generation, for instance, a lexical-level creativity scorer [7] could be a potential tool for building constraint verifiers. Drawing an analogy to data annotation, we posit that specifying constraints is a prerequisite for tasks requiring them, as humans must first understand the task constraints before annotation begins.\\n\\nAt present, our approach relies on human efforts for constraint identification and verifier implementation. However, we envision the possibility of modularizing this process in the future. By combining different units, such as rule checkers and scorers, intelligent agents could potentially automate the creation of constraint verifiers, reducing the dependency on human intervention. This modular approach could streamline the workflow and expand the applicability of ACT to a broader range of tasks.\\n\\n[1] Chang, Ming-Wei, Lev Ratinov, and Dan Roth. \\\"Guiding semi-supervision with constraint-driven learning.\\\" ACL 2007.\\n\\n[2] Wang, Haoyu, et al. 
\\\"Joint constrained learning for event-event relation extraction.\\\" EMNLP (2020).\\n\\n[3] Jang, Myeongjun Erik, and Thomas Lukasiewicz. \\\"Consistency analysis of chatgpt.\\\" arXiv preprint arXiv:2303.06273 (2023).\\n\\n[4] Pan, Wenbo, et al. \\\"A preliminary evaluation of chatgpt for zero-shot dialogue understanding.\\\" arXiv preprint arXiv:2304.04256 (2023).\\n\\n[5] Parikh, Ankur P., et al. \\\"ToTTo: A controlled table-to-text generation dataset.\\\" EMNLP (2020).\\n\\n[6] Porteous, Julie, and Marc Cavazza. \\\"Controlling narrative generation with planning trajectories: the role of constraints.\\\" ICIDS 2009.\\n\\n[7] Kuznetsova, Polina, Jianfu Chen, and Yejin Choi. \\\"Understanding and quantifying creativity in lexical composition.\\\" Proceedings of the 2013 conference on empirical methods in natural language processing. 2013.\"}", "{\"comment\": \"Dear Reviewer HmdB,\\n\\nThank you for your time and thoughtful feedback on our paper. We hope our responses have addressed your concerns, and we kindly request that you consider updating the score accordingly. If there are any remaining issues, please let us know, and we will be happy to provide further clarification.\\n\\nThanks!\"}", "{\"comment\": \"We thank the reviewer for the followup.\\n\\n> **is there more recent inference-time baseline you can adapt than this one for summarization constraints?**\\n\\nAs far as we know, this is the best-performing inference-time intervention for summarization. The limited improvement in recent work also highlights the difficulty of integrating constraints into generation tasks like summarization during inference, even with constraint verifiers. In contrast, our method does not rely on strong assumptions about constraints and is broadly generalizable and applicable.\\n\\n> **thorough comparison with prior work on CommonGen**\\n\\nWe want to clarify that this additional experiment is intended purely as a proof of concept. 
To ensure efficiency, we randomly sampled 200 instances. We followed the constrained decoding approach outlined in [1]. Since the settings, such as the base model, differ significantly, there are no directly comparable numbers from prior work. \\n\\n[1] Post, Matt, and David Vilar. \\\"Fast lexically constrained decoding with dynamic beam allocation for neural machine translation.\\\" arXiv preprint arXiv:1804.06609 (2018).\\n\\n> **I think architecture + parameters + decoding method (which can be optimized) can be considered as a system/model. And it seems that sometimes inference time intervention (entity typing) can actually work better.**\\n\\nOur claim is based on the general observation that, even with the same inference-time intervention, the best performance a model can achieve is highly correlated with its original performance. This happens not only in our case, but also in broad scenarios such as instruction following [1], ICL [2], and RAG [3]. This correlation explains why researchers are actively developing more advanced base models and post-training methods. One key message in the entity typing experiment is that ACT and inference-time interventions are complementary; in other words, ACT enhances the upper bound of what inference-time interventions can achieve.\\n\\n[1] Wei, Jason, et al. \\\"Finetuned language models are zero-shot learners.\\\" arXiv preprint arXiv:2109.01652 (2021).\\n\\n[2] Brown, Tom B. \\\"Language models are few-shot learners.\\\" arXiv preprint arXiv:2005.14165 (2020).\\n\\n[3] Soudani, Heydar, Evangelos Kanoulas, and Faegheh Hasibi. \\\"Fine Tuning vs. Retrieval Augmented Generation for Less Popular Knowledge.\\\" arXiv preprint arXiv:2403.01432 (2024).\"}", "{\"summary\": \"# Goal and Method\\nThis work looks at the general problem of training LLMs to perform tasks where the output must fulfill some set of constraints. 
Their proposed training approach assumes access to some implemented constraint verification method for measuring how closely the LLM's output follows the task's constraints. Training is then done in a manner that is similar to rejection sampling: For a given, unlabeled input they first sample multiple outputs from the model before scoring and ranking outputs using their constraint verification method. The authors then update the LLM by training on two losses using these rankings: (1) the standard NLL loss of generating the best ranked sample and (2) ranking loss encouraging the model to assign greater likelihood to samples that rank higher in constraint following than those that rank lower.\\n\\n# Main Results\\nThe authors experiment on a variety of tasks with different constraints: (1) Fine-Grained Entity Typing w/ Label Space and Label Hierarchy Constraints (2) Abstractive Summarization w/ Extractiveness Constraints and (3) TemporalQA w/ Event Ordering Logic Constraints.\\nThe primary baselines used in this work are (1) using the base LLM (w/ prompting for constraint following) and (2) finetuning on gold labels. The results demonstrate that their proposed method improves performance over simple prompting. They also demonstrate that in the first two settings, their proposed method can achieve comparable performance while reducing annotation cost. In the third setting, there is still a significant gap between full finetuning and their proposed method.\\n\\n# Additional Results\\nThe authors demonstrate that the learned constraints can transfer across tasks (learning extractiveness constraint on table-to-text summarization transfers to text-to-text). 
The authors also experiment with using their constraint-following rankings to train reward models, and find that it improves reward modeling performance on Fine-Grained Entity Typing, but still performs worse than training with gold, human-annotated labels.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The work is well written, clear, and technically sound. Results are generally positive, demonstrating that the proposed method improves constraint following over simple prompting.\", \"weaknesses\": \"# Generalizability and Reliance on Constraint Verifier\\nA significant limitation of the method proposed in this work is the reliance on the constraint verifier. This limits the utility of the proposed method to settings where constraint verifiers are applicable and implemented. This work also, for the most part, explores settings where constraint following is directly tied to the end task performance metric, which is often not the case. The only exception to this is the summarization setting, where the authors do find that some mixing between constraint following and standard finetuning performs best. [1] presents an interesting additional setting and methods where constraints are not directly tied to end-task performance (human preference), and are input dependent.\\n\\n[1] Rule Based Rewards for Language Model Safety\\nTong Mu, Alec Helyar, Johannes Heidecke, Joshua Achiam, Andrea Vallone, Ian Kivlichan, Molly Lin, Alex Beutel, John Schulman, Lilian Weng\\n\\n# Missing Comparisons Against Existing Methods\\nThe proposed training approach is very similar to standard training methods like rejection sampling and preference learning methods (e.g., DPO), where the constraint verification evaluator is used in lieu of a reward model. The most significant differences seem to be minor changes to the learning objective (Loss function in L218 and L222), and response sampling method (Section 3.3). 
Describing the differences between the proposed training approach and experiments comparing the proposed method against these standard approaches and ablations using different response sampling methods would help determine the differences and the impact of these changes in moving from a reward-maximizing to constraint-following setting.\", \"questions\": \"Could you clarify the necessity of only evaluating on LLMs following the Apache 2.0 license and whether there are any other public LLMs that could be used for additional experiments? The limited model settings would be listed as a weakness of this work, but this is understandable if there are no further options.\\n\\nHow is Inference w/ constraints implemented? Is it the same sampling approach as is used during training for generating candidates? I'm particularly curious about the improvement of ACT (100% setting) and inference w/ constraints performance in Table 1. It's also surprising that inference w/ constraints significantly improves entity typing performance, over ACT, but not in summarization. Is there any explanation for this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We appreciate the reviewer's insightful feedback. We provide a detailed response below to address the concerns and questions raised by the reviewer.\\n\\n> **W1. The gains are small compared to fine-tuning**\\n\\nWe want to clarify that **fine-tuning is not a baseline** but rather an **\\u201cupper-bound\\u201d reference**. While ACT does not require human-annotated data, fine-tuning relies on human annotations (of the same data size), as mentioned in lines 280-282. The comparison with fine-tuning demonstrates the informativeness of feedback from constraint verifiers and the effectiveness of constraint-driven alignment.\\n\\nACT only requires pre-defined constraint verifiers to automatically generate supervision signals. 
As discussed in Section 5.1, **identifying constraints demands significantly less effort than manual annotation** and is similar to designing annotation guidelines.\\n\\n> **W2.1. It would be helpful if authors can provide justification for the choice of three benchmarks.**\\n\\nAs indicated in lines 238-242, we select representative tasks for **each of the three constraint categories**. We also conduct additional experiments to demonstrate constraint transferability in section 5.2. Overall, we present five groups of experiments, covering a diverse spectrum of constraints.\\n\\n> **W2.2. constrained decoding should be compared as a baseline**\\n\\nConstrained decoding is **orthogonal to our work** and is considered **one type of inference with constraints** in our paper. (In other words, constrained decoding is for inference, while ACT is for training.) We have already presented the results of such methods. We show that ACT and inference with constraints are complementary. By **combining our method with inference with constraints**, one **can achieve better performance**. As noted in Footnote 3, we have tested various inference w/ constraints methods and observed no significant performance difference. For consistency, we report the results of using inference w/ constraints methods derived from constraint verifiers.\\n\\nPer the reviewer's request, **we evaluated constrained decoding** on the entity typing task. The F1 score achieved is 64.0, which is slightly better than the inference-with-constraints result reported in the paper. Notably, when combining constrained decoding with ACT or finetuning, the F1 score improves significantly to over 72.0, demonstrating the effectiveness of our method.\\n\\nWe will update our paper soon to include a discussion of [1] and COLD.\\n\\n> **Q1. 
how many constraints exist for each task**\\n\\nTo avoid confounding factors, we focus on at most two constraints of the same type for each experiment and report the constraint satisfaction rate for each independent constraint.\\n\\n> **Q2. Do you have any ideas how your approach could be further improved to address such tasks ( where verifiers are difficult to build or are expensive to run)?**\\n\\nAs discussed in Section 5.1, implementing constraint verifiers demands **significantly less effort** than manual data annotation and is similar to designing annotation guidelines. Constraint verifiers are **reusable** (even across tasks, as shown in Section 5.2) and can generate substantial supervision signals after their initial implementation. Additionally, given a new user request, one can **retrieve** the corresponding constraint verifiers or the trained constraint adapters for efficient alignment, as discussed in section 5.4.\\n\\nWe thank the reviewer for the insightful comments and suggestions, which can help us improve the quality and clarity of our paper.\"}", "{\"comment\": \"Dear Reviewer dUjU,\\n\\nAs the discussion period is ending, we would like to thank you for volunteering your time and engaging in discussion. We appreciate your positive review of our paper and hope we have answered all your questions and addressed any concerns you had.\\n\\nThanks!\"}", "{\"comment\": \"Dear Reviewer mNHe,\\n\\nThank you for your time and thoughtful feedback on our paper. We hope our responses have addressed your concerns, and we kindly request that you consider updating the score accordingly. If there are any remaining issues, please let us know, and we will be happy to provide further clarification.\\n\\nThanks!\"}", "{\"title\": \"Followup\", \"comment\": \"Dear Reviewer dUjU,\\n\\nThank you for your time and thoughtful feedback on our paper. We hope our responses have addressed your concerns and kindly request you to consider raising your score. 
If there are any remaining issues, please let us know, and we will be happy to provide further responses.\\n\\nThanks!\"}", "{\"summary\": \"The paper presents an approach to adapt an LLM to focus on a particular task which comes with constraints. The task is assumed to have a set of constraints, each of which is easy to automatically verify (e.g., you can write simple code that checks whether the model output satisfies the constraint or not). The paper proposes a simple, reasonable approach to address this issue: they build automatic constraint verifiers manually, sample multiple responses from the base LMs, evaluate the responses with constraint verifiers, and fine-tune the LLM to prefer constraint-satisfying responses over non-constraint-satisfying ones.\\n\\nI find the paper well-written and easy to follow. The experiments are well explained, and I do not have major concerns with the validity of the proposed method. But I'm having an issue with the baselines and the gains from the proposed approach. Please see weaknesses for the details.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"They propose a simple and reasonable approach that can be applied to a wide range of applications.\", \"The task is well motivated.\", \"The experimental design is solid, though it misses baselines from prior work.\", \"The paper is clearly written.\", \"The discussion (Section 5) is quite thorough and interesting.\"], \"weaknesses\": \"* Weak results(?):\\n The gains from the proposed approach are pretty small or non-existent compared to reasonable baselines (e.g., fine-tuning approaches). For example, in fine-grained entity typing, fine-tuning outperforms (Figure 3&4). I'm not sure I'm understanding the Fine-tuning setup -- is ACT \\\"unsupervised\\\" (w/t access to human labels) and fine-tuning approach \\\"supervised\\\" with labeled data? Instead, ACT has manually crafted constraint verifiers? 
More elaboration on how much supervision is given to each setting would be helpful.\\n\\n* Issues with baselines / task choice: It would be helpful if the authors can provide justification for the choice of three benchmarks. There are other benchmarks that have been studied for constrained generations, for example, COMMONGEN corpus [1] which considers lexically constrained decoding (the LLM's goal is to generate a coherent sentence containing all listed words). There is a rich literature (which the paper cites e.g., COLD decoding paper) that handles such constraint adherence problems in this dataset, and they should be compared as a baseline. \\n\\n[1] B. Y. Lin, W. Zhou, M. Shen, P. Zhou, C. Bhagavatula, Y. Choi, and X. Ren. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In EMNLP - Findings, 2020. URL https://aclanthology.org/2020.findings-emnlp.165.pdf.\", \"questions\": \"(1) In your experiments, how many constraints (on average) exist for each task?\\n(2) There exist tasks where either (a) building a reliable verifier is difficult (e.g., long-form question answering) or (b) a reliable verifier is computationally very expensive. Do you have any ideas how your approach could be modified/further improved to address such tasks?\", \"comments\": [\"I am not sure identification of three downstream tasks where constraints apply is strong enough to be listed as a contribution in the introduction.\", \"The verifier must be built manually, if I\\u2019m understanding correctly. 
It\\u2019d be good to make that clear in Figure 2.\", \"Human evaluation results (for summarization) should be more carefully reported, with inter-annotator agreement and statistics on the annotators, payment for them, etc.\", \"It would be helpful to have an example task instance for each one in the appendix.\", \"Minor comments/suggestions:\", \"For reproducibility, it\\u2019d be good to provide the exact prompt / in-context examples used (e.g., line 294).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Hi, thank you for many clarifications. This helps!\\n\\nRegarding the constraint decoding baselines. \\nThank you for the clarification! Adding this discussion would strengthen the paper. However, I still want to see a comparison with these orthogonal, well-established approaches. You do not necessarily have to outperform them all -- especially as your approach can be combined with existing approaches. However, you should contextualize your approach with the existing literature: what are the pros/cons of inference time intervention vs. fine-tuning approaches like yours? In this regard, you should provide results not only on entity-typing but on other benchmarks as well, and evaluate your approach in the commongen benchmark (or other benchmarks that other papers have evaluated their approaches on). \\n\\nI am not sure I understand the answers to my second question. I am not sure you can easily claim \\\"implementing constraint verifiers demand significantly less efforts\\\" than data annotation efforts. For complex tasks (e.g., long form question answering, or creative text generation), I have a hard time envisioning constructing a constraint verifier. Could you elaborate on this point? 
I think this is a limitation of this approach, and should be discussed appropriately.\"}", "{\"summary\": \"This paper is about incorporating constraints in LLMs in order to improve them efficiently for NLP downstream tasks (in the paper: summarization, entity typing and temporal question answering). The approach to do so relies - as far as I understand - on a rather standard process in this area where constraints are defined (manually), automatically verified and different responses of an LLM are then generated, each of which is assessed for constraints. The model is then trained with these constraint-verified outputs. The paper goes on to show that this approach can rival standard fine-tuning of LLMs on three selected NLP tasks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"S1) The paper's idea is interesting\\n\\nS2) The evaluation is thorough\\n\\nS3) The paper is well-written and easy to follow\\n\\nS4) The paper conducts multiple interesting and thorough analyses\", \"weaknesses\": \"I generally like this paper, but list here a few potential weaknesses:\\n\\nW1) Even though I am not an expert for this, I think the methodology is straightforward and standard\\n\\nW2) The topic is arguably not one of the most fanciest, concurrently\\n\\nW3) Could one have added more baselines, e.g., constrained decoding?\", \"questions\": \"Q1) Choosing relevance as a constraint in summarization seems a bit arbitrary. Why not other aspects such as coherence? Overall, however, I am not sure whether it makes sense to think of those dimensions as constraints. Should that not be more formal aspects such as the length of the summary?\\n\\nQ2) Did I miss something: In Table 3, you show the results for summarization & table-to-text generation but where is extraction? 
(Table 2 shows only CSR)\\n\\nQ3) see W3\\n\\nDepending on author answers, I am prepared to raise my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer HmdB,\\n\\nAs the discussion period is ending, we would like to thank you for volunteering your time to review our paper. We hope to have answered all your questions and addressed the rest of the concerns you had.\\n\\nThanks!\"}", "{\"comment\": \"We appreciate the reviewer's insightful feedback. We provide a detailed response below to address the concerns and questions raised by the reviewer.\\n\\n> **W1.1. generalizability and reliance on the constraint verifier**\\n\\nAs discussed in Section 5.1, implementing constraint verifiers demands **significantly less effort** than manual data annotation and is similar to designing annotation guidelines. Constraint verifiers are **reusable** (even across tasks, as shown in Section 5.2) and can generate substantial supervision signals after the initial implementation.\\n\\nAll constraints discussed in our paper are derived from general task guidelines, such as option lists, extractiveness, and consistency. **Constraints are widely observed in NLP tasks [2-7]** (more examples are available in lines 90-102), as discussed in Section 5.1. It is very unlikely that one cannot specify a constraint for a task unless the solution space is arbitrary text.\\n\\n\\n\\n> **W1.2. constraint following is directly tied to the end task performance metric, which is often not the case**\\n\\nWe want to clarify that, in terms of utility, **any user-specified constraint is part of the task goal, narrowing down the solution space**, regardless of whether it is directly or indirectly related to the performance metric. A response that does not meet the constraints is unhelpful to the user. 
From this perspective, **[1]** (such as apology and refusal rules) also **falls into one type of constraint, f(x, y), as defined in our paper**. \\n\\nWe want to kindly point out that [1] was released after the initial release of our paper. We will update our paper soon and add a discussion of this work. Note that [1] and our work have different focuses. We investigate common constraints in NLP tasks, categorize them into three classes, and propose a unified and efficient constraint-driven alignment framework.\\n\\n[1] Rule Based Rewards for Language Model Safety Tong Mu, Alec Helyar, Johannes Heidecke, Joshua Achiam, Andrea Vallone, Ian Kivlichan, Molly Lin, Alex Beutel, John Schulman, Lilian Weng\\n\\n[2] Chang, Ming-Wei, Lev Ratinov, and Dan Roth. \\\"Guiding semi-supervision with constraint-driven learning.\\\" ACL 2007.\\n\\n[3] Wang, Haoyu, et al. \\\"Joint constrained learning for event-event relation extraction.\\\" EMNLP (2020).\\n\\n[4] Jang, Myeongjun Erik, and Thomas Lukasiewicz. \\\"Consistency analysis of chatgpt.\\\" arXiv preprint arXiv:2303.06273 (2023).\\n\\n[5] Pan, Wenbo, et al. \\\"A preliminary evaluation of chatgpt for zero-shot dialogue understanding.\\\" arXiv preprint arXiv:2304.04256 (2023).\\n\\n[6] Parikh, Ankur P., et al. \\\"ToTTo: A controlled table-to-text generation dataset.\\\" EMNLP (2020).\\n\\n[7] Porteous, Julie, and Marc Cavazza. \\\"Controlling narrative generation with planning trajectories: the role of constraints.\\\" ICIDS 2009.\\n\\n> **W2. The proposed training approach is very similar to standard training methods**\\n\\nWe want to clarify that our **research direction is orthogonal** to the mentioned training methods. The research question we address is: If human annotation is unavailable and models cannot perform well on a task, **where should the supervision signals come from** for customized alignment? Our **insight is that the alignment process can be constraint-driven**. 
We are the first to connect LLM alignment with constraint-driven learning. To support this insight, we formally categorize existing constraints into three types, demonstrate how to incorporate them into the alignment process, and show the effectiveness and transferability of representative constraints. Our findings can definitely be extended, and our constraint verifier can be integrated into other alignment processes.\\n\\n> **Q1. Could you clarify the necessity of only evaluating on LLMs following the Apache 2.0 license \\u2026 this is understandable if there is no further options.**\\n\\nWe make this technical choice due to institution-wide policy restrictions.\\n\\n> **Q2. How is Inference w/ constraints implemented?**\\n\\nAs introduced in lines 265-277 and 344-346, inference with constraints is derived from constraint verifiers, but instead of assessing constraint satisfaction rate, we use them to improve the response. For summarization, we adopt the constraint verifier to rerank multiple sampled summaries, following Cao & Wang (2021).\\n\\nCao and Wang. \\\"CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization.\\\" EMNLP (2021).\\n\\n> **Q3. I'm particularly curious about the improvement**\\n\\nThe improvement depends on the initial task performance and the properties of the constraints. Note that for entity typing and summarization, we use constraints from two different categories to demonstrate the generalizability of our framework. 
Our goal is not to achieve state-of-the-art performance but to formulate the concept of aligning with constraints and prove its effectiveness across a wide range of scenarios.\"}", "{\"comment\": \"Dear authors,\\n\\nthanks for the answers - maybe making shorter ones could be a recommendation (reviewers have a lot of papers to review, so brevity can be important).\\n\\nI was/am generally optimistic for this paper, and as you see, I am the only reviewer that recommends acceptance right now. Mostly for this reason, and because I am not an expert for the topic, I am not increasing my score right now.\\n\\nBut I keep leaning positive and your answers were also ok.\\n\\n> While coherence is also critical and length could be considered applicable, neither falls within the constraint category f(x, y) that this experiment aims to investigate.\\n\\nTrue. But this sounds like answering \\\"we didn't do it\\\" to a question \\\"why didn't you do it\\\"? My question is more like \\\"why this constraints and not another one\\\". There's no answer to the second part, or did I miss it?\"}", "{\"comment\": \"We appreciate the reviewer's insightful feedback. We provide a detailed response below to address the concerns and questions raised by the reviewer.\\n\\n> **W1. the methodology is straightforward and standard**\\n\\nOur goal is **not to propose yet another alignment method but to introduce the insight** that alignment can be driven by constraints, specifically tailored for customized use cases. The research question we address is: If human annotation is unavailable and models cannot perform well on a task, where should the supervision signals come from for customized alignment? Our primary focus is to conceptualize and validate this across various scenarios. We are the **first to bridge LLM alignment with constraint-driven learning**. 
To support this, we formally categorize existing constraints into three types, demonstrate their integration into the alignment process, and showcase the effectiveness and transferability of representative constraints.\\n\\n> **W2. The topic is arguably not one of the most fanciest**\\n\\nOur work addresses a critical and underexplored problem in LM customization: Assume one has an off-the-shelf LM at hand, with no access to how this LM was trained by the provider (training data, RM model, etc.). However, this LM may not meet one\\u2019s needs in terms of constraint satisfaction, even if the constraint requirement is explicitly added to the prompts. The challenge is how to improve the LM's constraint adherence as cost-effectively as possible.\\n\\n- Approach 1: Post-processing \\u2014 use the LM as-is and perform some post-editing to satisfy constraints (Inference with Constraints baseline in our paper).\\n- Approach 2: Collect annotations for the task at hand and finetune the LM on this data (Finetuning as reference in our paper).\\n\\nThis paper argues that both approaches are not ideal because Approach #1 underperforms, and Approach #2 is too costly. Thus, our approach focuses on the following problem: if the main issue with the LM is constraint-following, which can often be checked automatically, how do we automatically distill this knowledge into the LM?\\n\\nWe believe our work offers a promising and impactful contribution to advancing the customized alignment of LLMs.\\n\\n> **W3. Could one have added more baselines, e.g., constrained decoding?**\\n\\nConstrained decoding is **orthogonal to our work** and is considered **one type of inference with constraints** in our paper. (In other words, constrained decoding is for inference, while ACT is for training.) We have already presented the results of such methods. We show that ACT and inference with constraints are complementary. 
By **combining our method with inference with constraints**, one **can achieve better performance**. As noted in Footnote 3, we have tested various inference-with-constraints methods and observed no significant performance difference. For consistency, we report the results of using inference with constraints derived from constraint verifiers.\\n\\nPer the reviewer's request, **we evaluated constrained decoding** on the entity typing task. The F1 score achieved is 64.0, which is slightly better than the inference w/ constraints result reported in the paper. Notably, when combining constrained decoding with ACT or finetuning, the F1 score improves significantly to over 72.0, demonstrating the effectiveness of our method.\\n\\n> **Q1. Choosing relevance as a constraint in summarization seems a bit arbitrary.**\\n\\nWe chose relevance as a constraint because it is a key aspect of summary evaluation **identified in prior work** [1,2]. While coherence is also critical and length could be considered applicable, neither falls within the constraint category f(x, y) that this experiment aims to investigate.\\n\\n[1] Fabbri, Alexander R., et al. \\\"Summeval: Re-evaluating summarization evaluation.\\\" TACL 2021. \\n\\n[2] Zhang, Tianyi, et al. \\\"Benchmarking large language models for news summarization.\\\" TACL 2024.\\n\\n> **Q2. Where are the results for extraction?**\\n\\nWe emphasize the Constraint Satisfaction Rate (CSR) in our analysis to demonstrate the transferability of constraint adherence capabilities. Event trigger extraction has traditionally been framed as an NLU task rather than NLG, and there is no universally established metric for this specific context. 
However, in response to the user\\u2019s request, we report the exact match (EM) accuracy below.\\n\\n| Source Task | CSR | EM |\\n|-------------|-------|------|\\n| - | 58.8 | 16.8 |\\n| T1 | 67.7 | 18.2 |\\n| T2 | 73.9 | 19.7 |\\n| T1+T2 | 76.2 | 20.0 |\"}", "{\"title\": \"Paper Update\", \"comment\": [\"We have carefully addressed the reviewers' suggestions and incorporated the following updates into the revised PDF:\", \"**Reliance on Constraint Verifier**: Added discussion on the wide existence of constraints and their verifiers in lines 462-470.\", \"**Distribution of Constraint Satisfaction Rate**: Added results in Appendix E and lines 473-476.\", \"**CommonGen**: Added results in Appendix D.\", \"These updates aim to address the reviewers' feedback comprehensively and strengthen the overall contribution of the paper.\"]}", "{\"comment\": \"We appreciate the reviewer's insightful feedback. We provide a detailed response below to address the concerns and questions raised by the reviewer.\\n\\n> **W1. show that we are not just overfitting to a specific task**\\n\\nOur goal is to fit user-specified constraints, not tasks. The **generalizability of learned constraints has been shown in Section 5**. We agree with the reviewer that reframing the additional experiments in Section 5 as main experiments would better demonstrate the method's generalizability, and we will update our paper accordingly. Currently, we have five groups of experiments showing the constraint satisfaction rate for specific tasks and also the transferability of constraints, covering a diverse spectrum of constraints.\\n\\n> **W2.1. examples on the defined constraint categorization**\\n\\nFor f(y) constraints, an example is shown in Figure 1, where the response must be selected from an option list (Section 4.1). \\nFor f(x,y) constraints, examples include the relevance constraint for summarization (Section 4.2) and the extractiveness constraint for information extraction (Section 5.2). 
\\nFor f({x,y}) constraints, an example is the consistency constraint for temporal QA (Section 4.3), where the answer to \\\"happens before event A\\\" and \\\"what happens after event A\\\" should have no overlap (lines 377-379).\\nWe will add concrete examples in the appendix in the updated paper shortly.\\n\\n> **W2.2. analysis on distribution of the constraint satisfaction scores**\\n\\nPer the reviewer's request, we present the **constraint satisfaction rate distribution for [entity typing](https://bashify.io/i/nedJO9) and [summarization](https://bashify.io/i/hj30Iy)**. (Please check the links pointing to anonymous figures.) The observation is that ACT and finetuning exhibit similar distributions, while the original model is significantly different. \\n\\nWe will add this analysis in the updated paper and acknowledge ORPO for inspiring this insightful analysis.\\n\\n> **Section 4.1: Which kind of constraints are considered when selecting a rejected response?**\\n\\nIf the response does not satisfy one or more of the given constraints, it will be rejected.\\n\\n> **Section 4.2: Enhanced loss function** \\n \\n$$ \\\\mathcal{L}_{ft} = - CSR \\\\sum_i \\\\log P(y_i|\\\\mathbf{x}, \\\\mathbf{y}_{<i})$$\\n\\n$$\\\\mathcal{L}_{rank} = \\\\sum_{CSR_i < CSR_j} \\\\max (0, P(\\\\mathbf{y}^i|x) - P(\\\\mathbf{y}^j|x) + CSR_i - CSR_j)$$\\n\\n> **Section 4.2: How often are the constraints verified during inference?** \\n\\nThe constraints are verified at the end, following Cao & Wang (2021).\\n\\nCao and Wang. \\\"CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization.\\\" EMNLP (2021).\\n\\n> **Section 4.3: Footnote 6 \\u2013 Could you please elaborate in a more technical way on what the \\u2018garbage in-garbage out\\u2019 problem is?** \\n\\nThe original model cannot generate reasonable responses without any finetuning. 
In this case, no informative supervision signals can be collected by comparing the responses, as none of them are of high quality.\\n\\n> **Section 4.3: How is this conflict defined, and how is it quantified?** \\n\\nThe conflict is defined as the overlap of events in two responses. It is quantified as the ratio of overlapping events.\\n\\n> **Section 5.2: When the source task = \\u2018-\\u2019, what does this mean?** \\n\\nThis indicates that inference was done over the base model (before ACT training).\"}", "{\"comment\": \"Dear Reviewer Fejv,\\n\\nThank you for your time and thoughtful feedback on our paper. We hope our responses have addressed your concerns, and we kindly request that you consider updating the score accordingly. If there are any remaining issues, please let us know, and we will be happy to provide further clarification.\\n\\nThanks!\"}", "{\"comment\": \"Dear Reviewer Fejv,\\n\\nAs the discussion period is ending, we would like to thank you for volunteering your time to review our paper. We hope to have answered all your questions and addressed the rest of the concerns you had.\\n\\nThanks!\"}", "{\"comment\": \"Dear Reviewer mNHe,\\n\\nAs the discussion period is ending, we would like to thank you for volunteering your time and engaging in discussion. We found your comments to be the most challenging and rewarding, motivating some significant changes and hopefully improvements to our paper. We hope to have answered all your questions and addressed the rest of the concerns you had.\\n\\nThanks!\"}", "{\"comment\": \"Thank you for the response.\\n\\nFor the second task -- is there more recent baseline you can adapt than this one from 2021?\", \"regarding_the_experiments_on_commongen\": \"---\\nWe also compared constrained decoding and ACT on a subset of the CommonGen validation set. 
Constrained decoding achieved a ROUGE-L score of 41.6, while ACT, after less than 300 training steps, achieved a score of 42.0, further demonstrating the effectiveness of ACT. Additionally, we observed that the constraint satisfaction rate (CSR; i.e., concept coverage in this case) for constrained decoding is highly dependent on the beam size, whereas ACT can achieve a CSR of 92.3% without requiring further intervention. This highlights the different advantages of ACT and constrained decoding.\\n---\\nHow did you select a subset here? Why a subset? What constrained decoding method are you using here? This comparison to prior work is not thorough enough. Can you compare with the numbers in prior studies? \\n\\n====\\n The key advantage of ACT-style tuning methods over inference-time interventions lies in their ability to enhance the model\\u2019s capabilities, whereas inference-time interventions rely on utilizing the existing model capabilities. \\n>> I am not sure I buy this argument. It depends on what you see as a \\\"model\\\". I think architecture + parameters + decoding method (which can be optimized) can be considered as a system/model. And it seems that sometimes inference-time intervention (entity typing) can actually work better.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
DeVm3YUnpj
Agent-as-a-Judge: Evaluating Agents with Agents
[ "Mingchen Zhuge", "Changsheng Zhao", "Dylan R. Ashley", "Wenyi Wang", "Dmitrii Khizbullin", "Yunyang Xiong", "Zechun Liu", "Ernie Chang", "Raghuraman Krishnamoorthi", "Yuandong Tian", "Yangyang Shi", "Vikas Chandra", "Jürgen Schmidhuber" ]
Contemporary evaluation techniques are inadequate for agentic systems. These approaches either focus exclusively on final outcomes---ignoring the step-by-step nature of the thinking done by agentic systems---or require excessive manual labour. To address this, we introduce the **Agent-as-a-Judge** framework, wherein agentic systems are used to evaluate agentic systems. This is a natural extension of the LLM-as-a-Judge framework, incorporating agentic features that enable intermediate feedback throughout the entire task-solving process for more precise evaluations. We apply the Agent-as-a-Judge framework to the task of code generation. To overcome issues with existing benchmarks and provide a proof-of-concept testbed for Agent-as-a-Judge, we present **DevAI**, a new benchmark of 55 realistic AI code generation tasks. DevAI includes rich manual annotations, like a total of 366 hierarchical solution requirements, which make it particularly suitable for an agentic evaluator. We benchmark three of the top code-generating agentic systems using Agent-as-a-Judge and find that our framework dramatically outperforms LLM-as-a-Judge and is as reliable as our human evaluation baseline. Altogether, we believe that this work represents a concrete step towards enabling vastly more sophisticated agentic systems. To that end, our dataset and the full implementation of Agent-as-a-Judge will be publicly available at [REDACTED]
[ "Code Generation; Agent-as-a-Judge; AI Developer; AI Judge; LLM" ]
Reject
https://openreview.net/pdf?id=DeVm3YUnpj
https://openreview.net/forum?id=DeVm3YUnpj
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zT70FQcgJv", "v8Ft3PYz5E", "uZSQ7JO90Y", "rowUOY0GoK", "r6xWcCJfRu", "oXr4QLfNWd", "mu47dBQIHL", "hTC9LiWCHf", "fIoeh3AYGZ", "egOMplPTYR", "WQWLMeZaQg", "VwSLZlP1pP", "VYpDfOmu47", "V9OfQx2Z0X", "UqkxrCmd8b", "TVT4EgXfAN", "QH3I9STjfx", "Q2YT8B167S", "PceHTuMwOe", "OQvdt6659v", "Nprlyo99Es", "JqsTN55M39", "I26PHXPZUV", "HdYqXu31Th", "BYN9kJ92P0", "96lHd1jn6f", "8nzXSX0hlu", "1zdR0KuSDc", "0fqNiksAv3" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732133614399, 1732133536448, 1732550377431, 1732558128483, 1732133497423, 1730629081044, 1732136529781, 1737523399108, 1732134532561, 1732802830082, 1730531424532, 1733224649294, 1732133705706, 1732134678175, 1732135611717, 1732134967449, 1730720838498, 1732136205708, 1732136683192, 1732136725017, 1732133894867, 1732134577628, 1732560230203, 1732136569886, 1732208393832, 1734604804503, 1732136401549, 1732136470265, 1732136374258 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Reviewer_MHgQ" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Reviewer_4rku" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Reviewer_MHgQ" ], [ "ICLR.cc/2025/Conference/Submission499/Reviewer_qJsZ" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Reviewer_qJsZ" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Reviewer_4rku" ], [ "ICLR.cc/2025/Conference/Submission499/Area_Chair_SYbo" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ], [ "ICLR.cc/2025/Conference/Submission499/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your review! Authors' feedback [3/4].\", \"comment\": \"**Q4 (original W4). Code generation is only one of the many applications of agentic systems. There are plenty of other domains where LLM agents may help. It is therefore not enough to consider the task of code generation alone. The agentic evaluation framework also seems to be ad hoc to code generation.**\\n\\nWe agree that agentic systems have applications beyond code generation. In our study, we chose code generation as a testbed for the Agent-as-a-Judge (AAAJ) framework due to its inherent complexity and the significant advancements in agentic systems in this field (e.g., having notable impact in both academia and industry recently). 
Developing benchmarks across multiple domains is a substantial effort, and focusing on code generation allows us to thoroughly demonstrate the effectiveness of AAAJ. For example, OpenAI's [1] from the last ICLR focuses on mathematics problems, but the idea can be applied more generally.\\n\\nOur intention is to demonstrate the effectiveness of the AAAJ framework in a challenging domain, with the hope that it can be extended to others in future work. \\n\\n[1] Lightman, Hunter, et al. (2024). \\\"Let's verify step by step.\\\" ICLR 2024. https://openreview.net/forum?id=v8L0pN6EOi \\n\\n**Q5 (original Q1). Analysis of failure cases: Did you conduct any studies on the failure cases to see whether there are consistent patterns that agentic systems may exploit?**\\n\\nValuable suggestion. We conducted an analysis of the failure cases and identified consistent patterns, which are summarized below:\\n\\n| Category | Count |\\n|---------------------------------------|-------|\\n| Data preprocessing and postprocessing | 10 |\\n| Dataset or Environment | 8 |\\n| Other | 5 |\\n| Machine Learning Method | 4 |\\n| Performance Metrics | 3 |\\n| Visualization | 3 |\\n| Human-Computer Interaction | 3 |\\n\\nWe found that AAAJ struggled most with judging cases in the *Data preprocessing and postprocessing* category, whereas it performed well in judging *Human-Computer Interaction* cases.\\n\\n---\\n\\n### **Failure Case Sample 1**\\n\\n| Task | Requirement ID | Category | Agent-as-a-Judge | Human-as-a-Judge | Criteria |\\n|-----------------------------------------------|----------------|------------------------|-----------------|-----------------|-----------------------------------------------------------------------------------------|\\n| `40_Text_Summarization_BART_CNNDailyMail_DL` | 0 | Dataset or Environment | True | False | The \\\"CNN/Daily Mail\\\" news dataset is used, including loading and preparing the dataset in `src/data_loader.py`. 
|\\n\\n**Analysis:** \\nIn this case, the dataset used was a synthesized one generated by the OpenHands CodeAct agent. Human evaluators could quickly identify this discrepancy, but the agent-as-a-judge, having only checked the file path and content, was misled into believing it was the genuine CNN/DailyMail dataset.\\n\\n---\\n\\n### **Failure Case Sample 2**\\n\\n| Task | Requirement ID | Category | Agent-as-a-Judge | Human-as-a-Judge | Criteria |\\n|-----------------------------------------------|----------------|------------------------|-----------------|-----------------|-------------------------------------------------------------------------------------------------|\\n| `46_Speech_Recognition_DeepSpeech_LibriSpeech_DL` | 2 | Machine Learning Method | True | False | Hyperparameters such as learning rate and batch size are tuned in `src/train.py`. |\\n\\n**Analysis:** \\nHere, the agent-as-a-judge confirmed that hyperparameters were set, but missed the nuance in the criteria. The requirement implied that the learning rate and batch size should dynamically adjust in `src/train.py`, something human evaluators were able to detect.\"}", "{\"title\": \"Thanks for your review! Authors' feedback [2/4].\", \"comment\": \"**Q2 (original W1 (b) and W2). I feel like these two contributions, namely the AAAJ framework and the DevAI benchmark, should have been separated into two papers and discussed in more details respectively. Otherwise, it may be better to start from the DevAI benchmark, with agentic evaluation framework being one of its useful features; Principal contributions deviation.**\\n\\nWe understand your perspective. While these two contributions might seem to belong to separate papers, they are intrinsically linked and mutually supportive. For DevAI to be practically usable, we suggest having AAAJ (due to the cost of human evaluations). 
Likewise, for AAAJ to be useful, we suggest having a benchmark that allows for such complex evaluations (which has been something largely not attended to over time due to issues with the cost of evaluation).\\n\\nThis work thus opens the way for further development of similar benchmarks to DevAI with the agentic evaluation systems attached (i.e., AAAJ).\\n\\n\\n\\n**Q3 (original W3). In Section 4, AAAJ is only discussed as a proof-of-concept, with little technical details.**\\n\\n \\nWe appreciate the importance of providing detailed technical information. While our primary aim in Section 4 was to introduce the design paradigm of the AAAJ system and share insights from its development, we have included detailed technical information in Appendices K\\u2013M (6 pages), covering code descriptions and prompt usage. We will ensure to reference these appendices more clearly in the main text to guide readers to these details. Additionally, we plan to release our code after the double-blind review process concludes, which we hope will provide further clarity.\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thanks for the author's response.\\n\\nI think my main concerns are responded by the author's response, though some of them may not be that convincing to me. For example, the scale of tasks, and the technical contributions, would be beyond the capability of the authors to address during the rebuttal time. \\n\\nMy initial evaluation was positive which I think is a fair evaluation so that I would keep my score.\"}", "{\"title\": \"Thanks again\", \"comment\": \"**Dear Reviwer 4rku,**\\n\\n---\\n\\nWe greatly appreciate your time and effort in reviewing our work.\\n\\nIn response to your suggestions, we have revised the submission to enhance its clarity and readability. Specifically, we have added leading sentences and refined the explanations to make the content more accessible. 
\\n\\n---\\n\\n**Thank you for your positive review score.**\"}", "{\"title\": \"Thanks for your review! Authors' feedback [1/4].\", \"comment\": \"We appreciate your valuable time & insights, and thank you for highlighting the strengths of our work (e.g., address a critical gap, practical and comprehensive testbed, significantly reducing costs and evaluation time, etc.). We will address your questions in the following response.\\n\\n---\\n\\n**Q1 (original W1 (a)). In the abstract and introduction, the Agent-as-a-Judge (AAAJ) framework is proposed with its novelty and effectiveness being emphasized. But in Section 2 and 3, the paper suddenly turns to the DevAI benchmark, introducing the motivation and technical details of this particular benchmark.**\\n\\nThank you for highlighting this structural concern. Our intention was to organize the paper following the logical sequence we adopted during our research process. Namely that **(1)** no sufficient benchmarks exist (thus requiring the creation of DevAI), **(2)** that a sufficient benchmark required extensive human effort to evaluate (shown with our human evaluation), and, finally, **(3)** how an AAAJ framework solves the issue encountered in **(2)**.\\n\\nAccording to your comments and suggestions, we now add a paragraph in the introduction and emphasize how Section 2 introduces DevAI to address the lack of benchmarks, Section 3 establishes Human-as-a-Judge as a baseline highlighting evaluation challenges, and Section 4 presents AAAJ as a scalable solution: ``This paper is structured as follows: **Sec 2** introduces DevAI to address the lack of benchmarks for verifying agentic systems with intermediate processes. **Sec 3** establishes Human-as-a-Judge as a manual evaluation baseline, highlighting its limitations. Finally, **Sec 4** presents Agent-as-a-Judge, a scalable solution to these challenges. 
More details are provided in **Appendix A and B**.``\"}", "{\"summary\": \"The paper \\\"Agent-as-a-Judge: Evaluating Agents with Agents\\\" addresses the inadequacy of traditional evaluation techniques for assessing agentic systems, which require more sophisticated, step-by-step feedback mechanisms. The authors propose the Agent-as-a-Judge framework, a novel approach that employs agentic systems to evaluate other agentic systems, integrating capabilities to provide intermediate feedback throughout the task-solving process for more precise evaluations.\\n\\nThe paper introduces a new benchmark, DevAI, to demonstrate and validate the proposed framework. DevAI comprises 55 realistic AI code generation tasks, complete with detailed manual annotations, making it ideal for agentic evaluators. The authors benchmark three leading open-source code-generating systems\\u2014MetaGPT, GPT-Pilot, and OpenDevin\\u2014using the proposed framework. Their experiments reveal that the Agent-as-a-Judge framework outperforms the LLM-as-a-Judge method and matches the reliability of a human evaluation baseline.\", \"the_primary_contributions_of_this_work_include\": \"1. The release of the DevAI benchmark, which consists of 55 comprehensive AI development tasks with accompanying tags, hierarchical requirements, and preferences, designed to enhance the evaluation of agentic systems.\\n2. The introduction of the Agent-as-a-Judge framework, an innovative method for evaluating agentic systems using other agentic systems, providing rich, intermediate feedback for more accurate evaluations that align closely with human evaluators while significantly reducing time and cost.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"**Originality:**\\nThe paper introduces the novel benchmark DevAI, which comprises 55 realistic AI code generation tasks with comprehensive manual annotations and hierarchical solution requirements. 
This benchmark addresses the gap in existing evaluation methods and is a significant contribution to the AI development community. The introduction of the Agent-as-a-Judge framework is another innovative aspect, proposing a novel method for evaluating agentic systems using other agentic systems. This extends the existing LLM-as-a-Judge framework by incorporating capabilities to provide intermediate feedback, thereby enhancing evaluation precision.\\n\\n**Quality:**\\nThe quality of the research is evident in the thorough experimental setup and robust analysis. The authors have benchmarked three leading open-source code-generating systems\\u2014MetaGPT, GPT-Pilot, and OpenDevin\\u2014using the proposed framework. They conducted experiments across various settings, including black-box and gray-box scenarios, as well as independent and task-dependent evaluations. These comprehensive experiments validate the effectiveness of the Agent-as-a-Judge framework and underscore its superiority over traditional methods. The detailed statistical analysis and alignment with human evaluators further reinforce the reliability of the proposed method.\\n\\n**Clarity:**\\nThe paper is well-structured and clearly presents its ideas and contributions. The introduction effectively outlines the motivation and objectives, while subsequent sections provide a detailed explanation of the DevAI benchmark and the Agent-as-a-Judge framework. The methodologies and experiments are described with clarity, making it easy for readers to follow and understand the research. Additionally, the inclusion of figures and tables enhances the presentation by providing visual insights into the experimental results.\\n\\n**Significance:**\\nThe significance of the paper lies in its potential to transform the evaluation of agentic systems. 
By introducing a novel and possibly more sound method\\u2014Agent-as-a-Judge\\u2014the paper addresses the limitations of traditional evaluation techniques that focus solely on final outcomes or require excessive manual labor. The DevAI benchmark, coupled with the Agent-as-a-Judge framework, provides a comprehensive and efficient approach for assessing the performance of agentic systems. This work lays the groundwork for more precise, intermediate evaluations, which can significantly accelerate progress in the development and deployment of sophisticated agentic systems.\", \"weaknesses\": \"**Human Evaluation Methodology :**\\nThe use of only three human experts as judges raises concerns about the robustness and reliability of the evaluation process. Given that these experts are authors of the paper, there is a potential for bias, and the large disagreements observed among them further question the dependability of the evaluation. To improve this aspect, the study could benefit from recruiting a larger and more diverse pool of external evaluators. Even applying a subset of tasks to a larger group of judges would offer more statistically reliable results and help validate the labels in the DevAI benchmark. This would ensure that the benchmark has a more concrete \\\"ground truth\\\", thus making the subsequent results more robust and credible.\\n\\n**Inconsistent Statements on Human Evaluation Reliability :**\\nThere are conflicting statements regarding the reliability of human evaluations. In Section 4 (Line 366), it is mentioned that \\\"Human evaluation, while somewhat reliable,\\\" which seems inconsistent with the prior conclusions in Section 3.4 (Line 354) \\\"Human judgment errors are inevitable\\\" and the observations in Lines 319-327 about the large disagreements among human evaluators. The paper would benefit from clearly articulating the relative reliability of various evaluation methods. 
If I understand correctly, the paper is expressing the relationship of reliability as LLM-as-a-Judge < Single-Human-as-a-Judge < Agent-as-a-Judge < Ensemble of Human Judges. The paper would benefit from stating this clearly.\\n\\n**Clarity in Figures :**\\nFigure 2(1) appears unprofessional and is highly unreadable due to the mixed vertical and horizontal text as well as the small font size. Enhancing the readability and professional presentation of this figure is essential. the authors might consider using a different type of diagram to display the word frequency more effectively.\\n\\n**Citation Issues :**\\nThere are several inconsistencies and inaccuracies in the citations. For instance, the citation of SWE-Bench in the introduction leads to SWE-Agent, which is incorrect. Moreover, some papers that have been accepted to conferences are cited in their arXiv versions (e.g., SWE-Bench, DSPy, HumanEval, AgentBench). Ensuring that all citations are accurate and up-to-date will enhance the paper's credibility. Each referenced work should be properly cited in its correct and most formal publication form when available.\", \"questions\": [\"I'm not exactly sure of the meaning of independent tasks (I) and tasks considering task dependencies (D). Does R0 belong to (I) and R1, R2 belong to (D) in Figure 1? Maybe I missed something, could you please further clarify this?\", \"In line 486, how is Table 3 able to demonstrate the usefulness of the \\\"retrieve\\\" module?\", \"What is your backbone model for the LLM-as-a-Judge and Agent-as-a-Judge frameworks? Does using a different backbone affect your main results?\", \"Please also pay attention to the questions and suggestions mentioned in the \\\"Weaknesses\\\" section.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your review! 
Authors' feedback [6/9].\", \"comment\": \"**(Following up with Q3...)**\\n\\n- Automated Step 6 (**get information from the execution logs or trajectories**): \\n\\n```\\n\\u256d\\u2500 Relevant Steps in Trajectory \\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u256e\\n\\u2502 \\u2502\\n\\u2502 The following environment feedback is provided for reference only and does \\u2502\\n\\u2502 not serve as decisive evidence. \\u2502\\n\\u2502 \\u2502\\n\\u2502 - **<RELEVANT STEPS>**: \\u2502\\n\\u2502 \\u2502\\n\\u2502 - **Step 19**: The visualization file `rmse_scores.png` was successfully \\u2502\\n\\u2502 generated and saved in the `results/` directory. This indicates \\u2502\\n\\u2502 that the regression results were visualized using the specified tools and \\u2502\\n\\u2502 saved correctly. \\u2502\\n\\u2502 \\u2502\\n\\u2502 - **Step 20**: The visualization of the RMSE scores was successfully \\u2502\\n\\u2502 displayed, confirming that the visualization process using seaborn was \\u2502\\n\\u2502 executed without errors. \\u2502\\n\\u2502 \\u2502\\n\\u2502 - **Step 25**: A warning was encountered during the conversion of the \\u2502\\n\\u2502 Markdown report to PDF, indicating that the image \\u2502\\n\\u2502 `rmse_scores.png` could not be fetched. This suggests a potential \\u2502\\n\\u2502 issue with the image path during the report generation process, which \\u2502\\n\\u2502 might affect the inclusion of the visualization in the final report. 
\\u2502\\n\\u2502 \\u2502\\n\\u2502 - **Step 31**: After updating the image path to an absolute path, the \\u2502\\n\\u2502 PDF report was successfully generated, suggesting that the visualization \\u2502\\n\\u2502 was correctly referenced and included in the report. \\u2502\\n\\u2502 \\u2502\\n\\u2570\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u256f\\n```\\n\\nAfter these automated steps, Agent-as-a-Judge finishes the judgment: \\n\\n**JUDGEMENT:** SATISFIED \\u2705\\n\\n**REASON:**\\n\\nThe requirement to visualize the regression results using \\\"seaborn\\\" and save them to `results/` is satisfied. The code in `src/train.py` includes the use of `sns.histplot` from the seaborn library to create a histogram of the RMSE scores, as shown in the line:\\n\\n```python\\nsns.histplot(rmse_scores, kde=True)\\n```\\n\\nAdditionally, the visualization is saved to the specified directory with the following lines:\\n\\n```python\\nos.makedirs('results', exist_ok=True)\\nplt.savefig('results/rmse_scores.png')\\n```\\n\\nThe presence of the file `rmse_scores.png` in the `results/` directory further confirms that the visualization was successfully created and saved, fulfilling the requirement.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thanks for your review! Authors' feedback [2/5].\", \"comment\": \"**Q2 (original W2 (a)). There are conflicting statements regarding the reliability of human evaluations. 
In Section 4 (Line 366), it is mentioned that \\\"Human evaluation, while somewhat reliable,\\\" which seems inconsistent with the prior conclusions in Section 3.4 (Line 354) \\\"Human judgment errors are inevitable\\\" and the observations in Lines 319-327 about the large disagreements among human evaluators.**\\n\\n\\nThank you for bringing this to our attention. We understand that the statements may seem conflicting at first glance, but they are consistent when considering the scope of our evaluation. Our focus is on assessing the alignment rate between Agent-as-a-Judge and Human-as-a-Judge, with the understanding that if Agent-as-a-Judge aligns closely with Human-as-a-Judge, it can be considered sufficiently reliable.\\n\\nThe statement ``Human evaluation, while somewhat reliable`` reflects the fact that human judgments are generally regarded as more reliable than automated methods and often serve as the ``gold standard.`` However, absolute ground truth is inherently difficult to obtain due to the biases and errors inherent in human judgment. Studies have shown significant disagreement among human annotators, with inter-annotator agreement levels typically ranging from 0.6 to 0.7 [1, 2]. This underscores the inherent limitations of human evaluation, despite its recognized value.\\n\\nAs noted in Section 3.4, ``human judgment errors are inevitable.`` Therefore, we treat human judgments as a reliable baseline for comparison, rather than as perfect ground truths.\\n\\n[1] Ouyang, Long, et al. \\\"Training language models to follow instructions with human feedback.\\\" Advances in Neural Information Processing Systems 35 (2022): 27730-27744.\\n\\n[2] Wang, Binghai, et al. \\\"Secrets of RLHF in large language models Part II: Reward modeling.\\\" arXiv preprint arXiv:2401.06080 (2024).\\n\\n\\n**Q3 (original W2 (b)). The paper would benefit from clearly articulating the relative reliability of various evaluation methods. 
If I understand correctly, the paper is expressing the relationship of reliability as LLM-as-a-Judge < Single-Human-as-a-Judge < Agent-as-a-Judge < Ensemble of Human Judges. The paper would benefit from stating this clearly.**\\n\\n\\nYes, your understanding is correct, and we appreciate the suggestion. To improve clarity, we have added the following statement to the manuscript: Our observations indicate the relative reliability of evaluation methods as: \\n``LLM-as-a-Judge < Single-Human-as-a-Judge < Agent-as-a-Judge < Ensemble of Human Judges. Future advancements in foundation models and Agent-as-a-Judge designs may shift this order.``\"}", "{\"comment\": [\"Dear all reviewers, ACs, and PCs,\", \"According to your suggestions, we've uploaded our revised version of the paper to incorporate the comments made during the rebuttal and new experimental results we've obtained in addressing particular comments. We've carefully highlighted all the substantial changes made for this version in blue. 
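The inter-annotator agreement figures cited above (roughly 0.6 to 0.7) are typically reported as chance-corrected agreement scores such as Cohen's kappa. As a hedged illustration (our own sketch, not code from the paper), here is a minimal computation of that statistic for two annotators:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] / n * counts_b[c] / n for c in counts_a)
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa([1, 1, 0, 1], [1, 0, 0, 1]))  # → 0.5
```

A kappa of 1.0 indicates perfect agreement; values in the 0.6–0.7 range correspond to the "substantial but imperfect" human agreement discussed above.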
For our camera-ready version, we will correct the color of the text.\", \"To address reviewer qJsZ's comments\", \"we have updated a paragraph at the end of the introduction to clarify the structure of the paper for readers (see L107-L134);\", \"we have conducted a statistical analysis of failure cases (see Appendix N);\", \"we have provided examples for further uses of Agent-as-a-Judge, and\", \"we conducted experiments using different LLMs (see Appendix O).\", \"To address reviewer MHgQ's comments\", \"we have provided examples for further uses of Agent-as-a-Judge, and\", \"we conducted experiments using different LLMs (see Appendix O).\", \"To address reviewer 4rku's comments\", \"we have conducted additional Human-as-a-Judge experiments to confirm the reliability,\", \"we have done a word frequency analysis,\", \"we have resolved citation issues and corrected the bibliography, and\", \"we have clarified the description of the retrieval module.\", \"We have also now added a table of contents to our appendix, as the above has brought it to a total of 30 pages.\", \"We would like to thank reviewers 4rku and MHgQ for interacting with us during the rebuttal stage and finally recommending the paper's acceptance. We would like to thank reviewer qJsZ for their original comments that we've used to strengthen the paper. We have endeavored to thoroughly address all of your comments (e.g., clarifying the structure of the paper and adding an analysis of the failure cases in Appendix N) and would kindly ask if we've been able to successfully allay your major concerns. If so, we would kindly ask you to update your score to reflect these changes.\", \"We again want to express our deep gratitude to all the reviewers, the ACs, and the PCs for their work so far.\"]}", "{\"summary\": \"This paper introduces \\\"Agent-as-a-Judge,\\\" a framework that uses AI agents to evaluate other AI agents' code generation capabilities, extending the existing LLM-as-a-Judge paradigm. 
The authors also present DevAI, a new benchmark dataset containing 55 AI development tasks with hierarchical requirements, designed to test code-generating AI systems. They evaluate three popular open-source code generation agents (MetaGPT, GPT-Pilot, and OpenDevin) using both human evaluators and their Agent-as-a-Judge system. Their results show that Agent-as-a-Judge achieves comparable performance to human evaluators (90% alignment with human consensus) while being significantly more cost-effective, reducing evaluation time by 97.72% and costs by 97.64% compared to human evaluation. The paper demonstrates that automated evaluation of AI agents is feasible and can potentially scale up the development of more sophisticated AI systems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Using agent to evaluate the agent's performance is very interesting and important to advance the development of the foundation agent. This paper addresses a critical and growing challenge in AI: how to effectively evaluate increasingly complex AI agents.\\n\\n2. Thorough experimental design with multiple levels of evaluation, including comprehensive ablation studies to understand component contributions, careful analysis of human evaluation biases and errors and clear comparison between human evaluators, LLM-as-a-Judge, and Agent-as-a-Judge\\n\\n3. The proposed DevAI is a good benchmark, which has carefully designed hierarchical requirements, includes explicit dependencies between requirements, incorporates both mandatory requirements and optional preferences, and focuses on realistic AI development tasks\", \"weaknesses\": \"My main concerns of the paper are:\\n\\n1. The limited technical contributions. This agent-as-a-judge is a natural extension of LLM-as-a-judge, and this is more like an engineering project. \\n\\n2. The proposed DevAI is small scale with only 55 tasks, which is insufficient for a comprehensive benchmark. 
Though low cost is a good point, this would also over-simplify the tasks with limited task complexity and diversity.\", \"questions\": \"Here are some questions:\\n\\n1. Could you clarify the technical contributions of this agent-as-a-judge? How does this method differ from LLM-as-a-judge? \\n\\n2. Could you clarify the sufficiency of the tasks in DevAI? More tasks and evaluations should be evaluated. \\n\\n3. Does this method generalize to novel tasks, which are outside the considered benchmark tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Re\", \"comment\": \"Thank the author for studying the failure cases. My concerns on the overall writing and the limited evaluation remain after the rebuttal. So the score will be kept unchanged.\"}", "{\"title\": \"Thanks for your review! Authors' feedback [4/4]\", \"comment\": \"**Q6 (original Q2). Can the AAAJ framework be evaluated in benchmarks other than code generation?**\\n\\nWe acknowledge that our current implementation is tailored to code generation. \\n\\nAlthough we have not yet designed a general AAAJ capable of covering all domains, we plan to pursue this in the future based on your suggestion. Nonetheless, our findings indicate that AAAJ provides valuable insights both as a judge and as environmental feedback for code generation. These results suggest that AAAJ holds promising potential for generalizability to other domains via prompt and code modifications.\\n\\n### Ask anything about a Python codebase (repo grokking):\\n\\nThis setting uses a real, popular, and comparatively complex codebase for Agent-as-a-Judge to handle any queries about it.\\n\\n**Question**: What do you think about this project, and how can it be improved with real swarm intelligence? Please provide at least 5 suggestions with specific code examples where I need to add functions. 
Note that you do not need to consider the current `/examples` folder; please focus on the code in the `/swarm` directory. Thank you! \\n\\n**Project**: https://github.com/openai/swarm\\n\\n**Response:** \\n\\nThe project appears to be structured around a core module named `swarm`, which likely serves as the main engine for the application. The `swarm` directory contains several key files: `util.py`, `core.py`, `types.py`, and `repl/repl.py`. To enhance this project with real swarm intelligence, here are five suggestions with specific code examples:\\n\\n1. **Implement a Swarm Intelligence Algorithm**\\n - **File to Modify:** `swarm/core.py`\\n - **Suggestion:** Introduce a swarm intelligence algorithm such as Particle Swarm Optimization (PSO) or Ant Colony Optimization (ACO) to improve decision-making processes.\\n - **Code Example:**\\n ```python\\n def particle_swarm_optimization():\\n     # Initialize particles\\n     particles = initialize_particles()\\n     for iteration in range(max_iterations):\\n         for particle in particles:\\n             # Update particle velocity and position\\n             update_velocity(particle)\\n             update_position(particle)\\n             # Evaluate fitness\\n             fitness = evaluate_fitness(particle)\\n             # Update personal and global bests\\n             update_bests(particle, fitness)\\n     return get_global_best(particles)\\n ```\\n\\n2. **Enhance Communication Between Agents**\\n - **File to Modify:** `swarm/types.py`\\n - **Suggestion:** Define a communication protocol or message-passing interface to allow agents to share information effectively.\\n - **Code Example:**\\n ```python\\n class Message:\\n     def __init__(self, sender_id, receiver_id, content):\\n         self.sender_id = sender_id\\n         self.receiver_id = receiver_id\\n         self.content = content\\n\\n def send_message(sender, receiver, content):\\n     message = Message(sender.id, receiver.id, content)\\n     receiver.receive_message(message)\\n ```\\n\\n3. 
**Dynamic Task Allocation**\\n - **File to Modify:** `swarm/util.py`\\n - **Suggestion:** Implement a dynamic task allocation system where tasks are assigned based on agent capabilities and current workload.\\n - **Code Example:**\\n ```python\\n def allocate_tasks(agents, tasks):\\n     for task in tasks:\\n         best_agent = min(agents, key=lambda agent: agent.current_load)\\n         best_agent.assign_task(task)\\n ```\\n\\n4. **Agent Learning and Adaptation**\\n - **File to Modify:** `swarm/core.py`\\n - **Suggestion:** Introduce a learning mechanism for agents to adapt based on past experiences, using techniques like reinforcement learning.\\n - **Code Example:**\\n ```python\\n def learn_from_experience(agent, experience):\\n     # Update agent's knowledge base\\n     agent.knowledge_base.update(experience)\\n     # Adjust strategies based on learning\\n     agent.adjust_strategy()\\n ```\\n\\n5. **Visualization of Swarm Activity**\\n - **File to Modify:** `swarm/repl/repl.py`\\n - **Suggestion:** Add a visualization tool to monitor and analyze the swarm's activity and performance in real-time.\\n - **Code Example:**\\n ```python\\n def visualize_swarm_activity(swarm_data):\\n     import matplotlib.pyplot as plt\\n     # Plot swarm data\\n     plt.plot(swarm_data['time'], swarm_data['performance'])\\n     plt.xlabel('Time')\\n     plt.ylabel('Performance')\\n     plt.title('Swarm Activity Over Time')\\n     plt.show()\\n ```\\n\\n\\n---\\n\\n**Once again, we sincerely appreciate your valuable time. Your insights have been instrumental in helping us improve our work. We are committed to refining our submission based on your recommendations and look forward to any further discussions.**\"}
Maybe I missed something, could you please further clarify this?**\\n\\nCertainly. In our context, we evaluate the DevAI tasks in two ways (``(I) without considering dependencies with different requirements, (D) considering dependencies with different requirements``):\\n\\nHere is an example task with the intermediate requirements:\\n\\n```\\n\\\"name\\\": \\\"39_Drug_Response_Prediction_SVM_GDSC_ML\\\",\\n\\n\\\"query\\\": \\\"Develop a system to predict drug response using the GDSC dataset with a Support Vector Machine (SVM) regressor. Load the dataset and perform feature selection to identify key features in `src/data_loader.py`. Implement the SVM regressor in `src/model.py`. Use cross-validation to evaluate the model's performance in `src/train.py`. Save the performance results to `results/metrics/performance.txt`. Visualize the regression results using seaborn and save it under `results/figures/`. Next, create a report including the data preprocessing, model training, evaluation process, and the visualization. Save the report as `results/drug_response_prediction_report.pdf`. The report should emphasize how feature selection impacts the model's performance, and the regression results visualization should clearly highlight the relationship between the selected features and the predicted drug response. 
Ensure the system is designed to be easily extendable for incorporating additional datasets or new features.\\\",\\n\\n\\\"requirements\\\": [\\n {\\n \\\"requirement_id\\\": 0,\\n \\\"prerequisites\\\": [],\\n \\\"criteria\\\": \\\"The \\\\\\\"GDSC\\\\\\\" drug response dataset is loaded in `src/data_loader.py`.\\\",\\n \\\"category\\\": \\\"Dataset or Environment\\\"\\n },\\n {\\n \\\"requirement_id\\\": 1,\\n \\\"prerequisites\\\": [0],\\n \\\"criteria\\\": \\\"Feature selection is performed to identify important features in `src/data_loader.py`.\\\",\\n \\\"category\\\": \\\"Data preprocessing and postprocessing\\\"\\n },\\n {\\n \\\"requirement_id\\\": 2,\\n \\\"prerequisites\\\": [],\\n \\\"criteria\\\": \\\"The \\\\\\\"SVM regressor\\\\\\\" is implemented in `src/model.py`.\\\",\\n \\\"category\\\": \\\"Machine Learning Method\\\"\\n },\\n {\\n \\\"requirement_id\\\": 3,\\n \\\"prerequisites\\\": [1, 2],\\n \\\"criteria\\\": \\\"Cross-validation is used to evaluate the model in `src/train.py`.\\\",\\n \\\"category\\\": \\\"Performance Metrics\\\"\\n },\\n {\\n \\\"requirement_id\\\": 4,\\n \\\"prerequisites\\\": [0, 1, 2, 3],\\n \\\"criteria\\\": \\\"The performance results are saved as `results/metrics/performance.txt`.\\\",\\n \\\"category\\\": \\\"Performance Metrics\\\"\\n },\\n {\\n \\\"requirement_id\\\": 5,\\n \\\"prerequisites\\\": [0, 1, 2, 3],\\n \\\"criteria\\\": \\\"The regression results are visualized using \\\\\\\"seaborn,\\\\\\\" and saved to `results/figures/`.\\\",\\n \\\"category\\\": \\\"Visualization\\\"\\n },\\n {\\n \\\"requirement_id\\\": 6,\\n \\\"prerequisites\\\": [0, 1, 2, 3, 4, 5],\\n \\\"criteria\\\": \\\"A report containing data preprocessing, model training, evaluation process, and the regression results visualization, is created and saved as `results/drug_response_prediction_report.pdf`.\\\",\\n \\\"category\\\": \\\"Other\\\"\\n }\\n]\\n```\\n\\nTherefore:\\n\\n- ``Requirements Met (I)`` means *we evaluate the intermediate 
requirements/tasks while ignoring the \\\"prerequisites\\\" of other requirements/tasks*. This allows for, say, the implementation of the learning algorithm without having a correct implementation of a data loader.\\n\\n- ``Requirements Met (D)`` means *we evaluate the requirements/tasks with the condition that the \\\"prerequisites\\\" of a requirement be met before that requirement can be considered satisfied.* For example, if requirement_2 was implemented correctly, but requirement_1 is a prerequisite of requirement_2 and is not implemented correctly, then we consider requirement_2 to not have been satisfied.\"}
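The (I)/(D) distinction above can be made concrete with a short sketch (our own illustration, not the benchmark's evaluation code): given each requirement's `prerequisites` list and the set of requirement IDs judged satisfied in isolation (the (I) criterion), the (D) criterion only credits a requirement once all of its prerequisites are recursively credited.

```python
def met_with_dependencies(requirements, satisfied_ids):
    """Return the IDs satisfied under the (D) criterion.

    requirements: list of {"requirement_id": int, "prerequisites": [int, ...]}
    satisfied_ids: set of IDs judged satisfied in isolation, i.e. under (I).
    """
    prereqs = {r["requirement_id"]: r["prerequisites"] for r in requirements}
    memo = {}

    def ok(rid):
        # A requirement counts under (D) only if it is satisfied itself
        # and every prerequisite also counts under (D).
        if rid not in memo:
            memo[rid] = rid in satisfied_ids and all(ok(p) for p in prereqs[rid])
        return memo[rid]

    return {rid for rid in prereqs if ok(rid)}

# Requirement 3 depends on 1 and 2; requirement 1 is unsatisfied,
# so 3 is counted under (I) but not under (D).
reqs = [
    {"requirement_id": 0, "prerequisites": []},
    {"requirement_id": 1, "prerequisites": [0]},
    {"requirement_id": 2, "prerequisites": []},
    {"requirement_id": 3, "prerequisites": [1, 2]},
]
print(met_with_dependencies(reqs, {0, 2, 3}))  # → {0, 2}
```

This assumes the prerequisite graph is acyclic, which holds for DevAI-style hierarchical requirements.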
For instance, it can inspect specific functions within code to ensure correct implementation, surpassing the LLM-as-a-Judge's limitation of evaluating only final outputs.\\n\\n- **Agentic Capabilities and Modular Design:** Designed as an agent itself, Agent-as-a-Judge possesses diverse and customizable modules, allowing it to adapt to the complex demands of evaluating agentic systems.\\n\\nOur goal is to conduct a methodical and thorough study of Agent-as-a-Judge, demonstrating its effectiveness in evaluating agentic systems. This work fills a critical gap left by existing LLM-as-a-Judge frameworks and provides a solid foundation for future research and applications in this area.\"}", "{\"title\": \"Thanks for your review! Authors' feedback [5/5]\", \"comment\": \"**Q7 (original Q2). In line 486, how is Table 3 able to demonstrate the usefulness of the \\\"retrieve\\\" module?**\\n\\nSorry, this was unclear. Unlike the black-box setting, the gray-box setting allows for the retrieve module, and its impact on performance can be partially seen in Table 3, as the trajectory provides additional valuable information. We make this clearer in the updated manuscript: ``Adding retrieve does not always provide a significant benefit in this case. We found the retrieve module effective for judging MetaGPT and GPT-Pilot, as it provides valuable trajectory information (as shown in Table 3). However, it is less effective for OpenHands, which sometimes fails to execute files, resulting in missing responses. In such cases, judgment without trajectories remains viable.``\\n \\n\\n**Q8 (original Q3). What is your backbone model for the LLM-as-a-Judge and Agent-as-a-Judge frameworks? Does using a different backbone affect your main results?**\\n\\nExcellent question. In our initial experiments, we used ``gpt-4o-2024-0513`` as the backbone model. To assess the impact of different backbones on the performance of Agent-as-a-Judge, we conducted an ablation study. 
The results are below.\\n\\n| Model | Version | Parameters | Agent-as-a-Judge Alignment with Human-as-a-Judge |\\n|----------------------|----------------------------|------------|---------------------------------------------------|\\n| LLaMA [1] | 3.2 | 90B | 87.76% |\\n| Qwen [2] | Coder 2.5 | 32B | 88.73% |\\n| Claude [3] | claude-3-5-sonnet-20241022 | Unknown | 92.95% |\\n| ChatGPT (Submission) | gpt-4o-2024-0513 | Unknown | 90.16% |\\n\\nThese results indicate that the choice of backbone does affect alignment, but only marginally. We found that Claude's results are better than what we reported in the submitted manuscript; we hypothesize this is because ``claude-3-5-sonnet-20241022`` has been trained with strong function-calling skills and agentic features.\\n\\n\\n[1] https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/\\n\\n[2] https://qwenlm.github.io/blog/qwen2.5-coder-family/\\n\\n[3] https://www.anthropic.com/claude/sonnet\\n\\n---\\n\\n**Once again, we sincerely appreciate your valuable time. Your insights have been instrumental in helping us improve our work. We are committed to refining our submission based on your recommendations and look forward to any further discussions.**\"}
The results show that Agent-as-a-Judge aligns closely with human evaluation and outperforms LLM-as-a-Judge in task accuracy and cost-effectiveness.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. The introduction of the Agent-as-a-Judge framework addresses a critical gap in evaluating agentic systems by enabling feedback at each stage of task completion, making it especially suited for complex, multistep tasks.\\n2. The DevAI dataset provides a practical and comprehensive testbed for agentic systems, encompassing various AI development tasks that closely mirror real-world demands, which enhances the relevance of the evaluation.\\n3. The study compares Agent-as-a-Judge with both human evaluators and LLM-as-a-Judge, demonstrating superior alignment with human consensus while significantly reducing costs and evaluation time.\\n4. The paper includes extensive ablation studies, cost analysis, and performance metrics, providing insights into the efficacy of different components of the framework and supporting reproducibility.\", \"weaknesses\": \"1. The overall writing is inconsistent and confusing: in the abstract and introduction, the Agent-as-a-Judge (AAAJ) framework is proposed with its novelty and effectiveness being emphasized. But in Section 2 and 3, the paper suddenly turns to the DevAI benchmark, introducing the motivation and technical details of this particular benchmark. The AAAJ benchmark is not comprehensively introduced until Section 4. I feel like these two contributions, namely the AAAJ framework and the DevAI benchmark, should have been separated into two papers and discussed in more details respectively. Otherwise, it may be better to start from the DevAI benchmark, with agentic evaluation framework being one of its useful features.\\n2. Among the four principal contributions, the top two seem to be deviated from the main idea of this paper.\\n3. 
In Section 4, AAAJ is only discussed as a proof-of-concept, with little technical detail. \\n4. Code generation is only one of the many applications of agentic systems. There are plenty of other domains where LLM agents may help. It is therefore not enough to consider the task of code generation alone. The agentic evaluation framework also seems to be ad hoc to code generation.\", \"questions\": \"1. While the AAAJ framework achieves a higher alignment rate than other baseline algorithms, chances are that AAAJ makes incorrect evaluations. Did you conduct any studies on the failure cases to see whether there are consistent patterns that may be exploited by agentic systems?\\n2. Can the AAAJ framework be evaluated in benchmarks other than code generation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your review! Authors' feedback [2/9]\", \"comment\": \"**Q2 (original W2 and Q2). The proposed DevAI is small scale with only 55 tasks, which is insufficient for a comprehensive benchmark. Though low cost is a good point, this would also over-simplify the tasks with limited task complexity and diversity. & Q2. Could you clarify the sufficiency of the tasks in DevAI? More tasks and evaluations should be evaluated**\\n\\nThank you for your insightful feedback regarding the different perspectives of the benchmark. \\n\\n\\n**1. Scale of DevAI**\\n\\nNote that a concurrent benchmark for autonomous AI development, MLE-Bench [1], for example, includes 75 AI tasks, on a similar scale to ours. \\n\\n[1] Chan, Jun Shern, et al. \\\"MLE-Bench: Evaluating machine learning agents on machine learning engineering.\\\" arXiv preprint arXiv:2410.07095 (2024).\\n\\nWe initially considered including more tasks but found that expanding beyond 55 would significantly increase evaluation costs and complexity without yielding proportional benefits in insights. 
For instance, OpenHands completed 55 tasks at a cost of $350.9, and GPT-Pilot took 24.78 hours to finish all tasks. Our focus was on creating a high-quality, manageable set of tasks that provide meaningful and actionable evaluations of agentic systems. By balancing cost and benefit, we believe the current number of tasks effectively distinguishes between different systems' performances and meets evaluation needs.\\n\\n\\n| Metric | MetaGPT (Hong et al., 2024) | GPT-Pilot (Pythagora.io, 2023) | OpenHands (Wang et al., 2024) |\\n|------------|-------------|----------------|----------------|\\n| Average Cost | $1.19 | $3.92 | $6.38 |\\n| Average Time | 775.29s | 1622.38s | 362.41s |\\n\\n\\n\\n**2. Task complexity**\\n\\n\\nIn our testing, for example, agentic systems like GPT-Pilot and OpenHands successfully resolved only 1 out of the 55 tasks by meeting all hierarchical requirements (as shown in the table below, 1.81% Task Solve Rate). This low completion rate underscores the complexity and difficulty of the benchmark, ensuring that DevAI effectively differentiates between varying levels of agent performance.\\n\\n\\n| Metric | MetaGPT (Hong et al., 2024) | GPT-Pilot (Pythagora.io, 2023) | OpenHands (Wang et al., 2024) |\\n|---|---|---|---|\\n| Requirements Met (``with dependency consideration``) | 6.55% | 28.96% | 28.68% |\\n| Task Solve Rate | 0.00% | 1.81% | 1.81% |\\n\\n\\n**3. Task Diversity**\\n\\n\\n\\nWe have meticulously designed the 55 tasks in DevAI to comprehensively capture the inherent complexity and diversity of AI code generation. These tasks span a wide range of AI domains, including computer vision, natural language processing, and reinforcement learning, and encompass various model types from traditional machine learning algorithms to deep learning architectures. 
For example:\\n\\n\\n- Task 15 (Image_Captioning_ShowAndTell_Flickr8k_DL): Implementing an image captioning model based on the Flickr8k dataset using the Show and Tell architecture.\\n\\n- Task 25 (Speech_Emotion_Recognition_CNN_LSTM_RAVDESS_DL): Developing a speech-emotion recognition system using CNN-LSTM models on the RAVDESS dataset.\\n\\n- Task 35 (Loan_Default_Prediction_RandomForest_LendingClub_ML): Creating a loan default prediction model using Random Forest on the LendingClub dataset.\\n\\n\\n---\\n\\nIn practice, we recommend utilizing DevAI for cost-effective debugging and feedback loops when building agentic systems. For more in-depth evaluations, a full sweep of the benchmark can be conducted. In comparison, running evaluations using MLE-Bench is estimated to cost $3,000\\u2013$6,000 (a total of 1,704 GPU hours; AIDE used 120.7M input tokens and 14.2M output tokens per task), whereas DevAI provides high-quality evaluations at a significantly lower cost (e.g., the comparatively expensive OpenHands completed 55 tasks for $350.9). This strategy effectively balances cost, complexity, and evaluation depth, offering substantial benefits over existing benchmarks.\"}", "{\"title\": \"Thanks for your review! Authors' feedback [8/9]\", \"comment\": \"**(Following up with Q5...)**\\n\\n**3. Handling Arbitrary Queries on a Python Codebase (Codebase Grokking):**\\n\\nThis setting uses a real, popular, and comparatively complex codebase for Agent-as-a-Judge to handle any queries about it.\\n\\n**Question**: What do you think about this project, and how can it be improved with real swarm intelligence? Please provide at least 5 suggestions with specific code examples where I need to add functions. Note that you do not need to consider the current `/examples` folder; please focus on the code in the `/swarm` directory. Thank you! 
\\n\\n**Project**: https://github.com/openai/swarm\\n\\n**Response:** \\n\\nThe project appears to be structured around a core module named `swarm`, which likely serves as the main engine for the application. The `swarm` directory contains several key files: `util.py`, `core.py`, `types.py`, and `repl/repl.py`. To enhance this project with real swarm intelligence, here are five suggestions with specific code examples:\\n\\n1. **Implement a Swarm Intelligence Algorithm**\\n - **File to Modify:** `swarm/core.py`\\n - **Suggestion:** Introduce a swarm intelligence algorithm such as Particle Swarm Optimization (PSO) or Ant Colony Optimization (ACO) to improve decision-making processes.\\n - **Code Example:**\\n ```python\\n def particle_swarm_optimization():\\n     # Initialize particles\\n     particles = initialize_particles()\\n     for iteration in range(max_iterations):\\n         for particle in particles:\\n             # Update particle velocity and position\\n             update_velocity(particle)\\n             update_position(particle)\\n             # Evaluate fitness\\n             fitness = evaluate_fitness(particle)\\n             # Update personal and global bests\\n             update_bests(particle, fitness)\\n     return get_global_best(particles)\\n ```\\n\\n2. **Enhance Communication Between Agents**\\n - **File to Modify:** `swarm/types.py`\\n - **Suggestion:** Define a communication protocol or message-passing interface to allow agents to share information effectively.\\n - **Code Example:**\\n ```python\\n class Message:\\n     def __init__(self, sender_id, receiver_id, content):\\n         self.sender_id = sender_id\\n         self.receiver_id = receiver_id\\n         self.content = content\\n\\n def send_message(sender, receiver, content):\\n     message = Message(sender.id, receiver.id, content)\\n     receiver.receive_message(message)\\n ```\\n\\n3. 
**Dynamic Task Allocation**\\n - **File to Modify:** `swarm/util.py`\\n - **Suggestion:** Implement a dynamic task allocation system where tasks are assigned based on agent capabilities and current workload.\\n - **Code Example:**\\n ```python\\n def allocate_tasks(agents, tasks):\\n     for task in tasks:\\n         best_agent = min(agents, key=lambda agent: agent.current_load)\\n         best_agent.assign_task(task)\\n ```\"}", "{\"title\": \"Thanks for your review! Authors' feedback [9/9]\", \"comment\": \"**(Following up with Q5...)**\\n\\n4. **Agent Learning and Adaptation**\\n - **File to Modify:** `swarm/core.py`\\n - **Suggestion:** Introduce a learning mechanism for agents to adapt based on past experiences, using techniques like reinforcement learning.\\n - **Code Example:**\\n ```python\\n def learn_from_experience(agent, experience):\\n     # Update agent's knowledge base\\n     agent.knowledge_base.update(experience)\\n     # Adjust strategies based on learning\\n     agent.adjust_strategy()\\n ```\\n\\n5. **Visualization of Swarm Activity**\\n - **File to Modify:** `swarm/repl/repl.py`\\n - **Suggestion:** Add a visualization tool to monitor and analyze the swarm's activity and performance in real-time.\\n - **Code Example:**\\n ```python\\n def visualize_swarm_activity(swarm_data):\\n     import matplotlib.pyplot as plt\\n     # Plot swarm data\\n     plt.plot(swarm_data['time'], swarm_data['performance'])\\n     plt.xlabel('Time')\\n     plt.ylabel('Performance')\\n     plt.title('Swarm Activity Over Time')\\n     plt.show()\\n ```\\n\\nThese enhancements aim to leverage swarm intelligence principles to improve the system's efficiency, adaptability, and overall performance. 
By focusing on the core `swarm` module, these suggestions can be integrated into the existing framework to create a more robust and intelligent system.\\n\\n\\n**These experiments indicate that Agent-as-a-Judge generalizes well to novel coding tasks with minimal modifications and has the potential to be applied in various domains.**\\n\\n---\\n\\n\\n**Once again, we sincerely appreciate your valuable time. Your insights have been instrumental in helping us improve our work. We are committed to refining our submission based on your recommendations and look forward to any further discussions.**\"}", "{\"title\": \"Thanks for your review! Authors' feedback [1/5].\", \"comment\": \"We appreciate your valuable time & insights, and thank you for highlighting the strengths of our work (e.g., novel benchmark, thorough experiments and analysis, well-structured, etc.). We will address your questions in the following response.\\n\\n---\\n\\n**Q1 (original W1). To improve this aspect, the study could benefit from recruiting a larger and more diverse pool of external evaluators. Even applying a subset of tasks to a larger group of judges would offer more statistically reliable results and help validate the labels in the DevAI benchmark. This would ensure that the benchmark has a more concrete \\\"ground truth\\\", thus making the subsequent results more robust and credible.**\\n\\n\\nThank you for this insightful comment. We understand that involving a larger and more diverse group of evaluators could enhance our study. However, due to the significant time commitment required (approximately 19 hours per expert) and the necessity for expert qualifications, assembling and managing a large expert panel for a full evaluation presented challenges.\\n\\nHowever, to address your concern, we conducted an additional study with 10 MSc and PhD students in AI-related fields on 7 randomly selected tasks (about 12.7% of DevAI) to evaluate OpenHands's performance. 
The results are summarized below:\\n\\n| Evaluation Panel | Majority Vote Result (Alignment with Previous Consensus Result) (%) | Majority Vote Result (Alignment with Previous Majority Vote) (%) | Average Time per Person (hrs) | Average Cost per Person ($) |\\n|-----------------------------|-------------------------|---------------------------------------------------------|-------------------------------|-----------------------------|\\n| Larger Panel (10 experts) | 95.23 | 97.67 | 1.13 | 15.20 |\\n\\n\\nIn this additional evaluation, the majority vote alignment (with 10 experts) with our previous majority vote results (with 3 experts) was 97.67%. This consistency reinforces the reliability of our initial evaluation and suggests that AAAJ's performance is comparable to that of a broader human panel. We observe that compared to the previous majority voting results with three human experts (92.85% for the same 7 tasks, totaling 42 requirements), the extended study achieved a modest 2.38% improvement in alignment, which suggests that involving more experts in the majority vote may improve the alignment rate (L358-L365). After checking the disagreement between the majority vote results from the larger panel of experts and our previous consensus results, we found that the consensus results are accurate. We suggest this is because human brainstorming may be more effective at correcting errors and biases.\"}
Following this suggestion, we conducted a word-frequency analysis (see the table below). This table highlights the most frequent technical words that appear in the queries, reflecting the dataset's focus on AI development. Words like *dataset*, *report*, *feature*, *results*, *model*, and *data* stand out prominently.\\n\\n| Word | Frequency |\\n|--------------|-----------|\\n| results | 194 |\\n| model | 161 |\\n| src | 148 |\\n| save | 129 |\\n| dataset | 85 |\\n| figures | 66 |\\n| data_loader | 62 |\\n| data | 60 |\\n| report | 55 |\\n| system | 53 |\\n| feature | 36 |\\n\\n\\n**Q5 (original W4). Citation Issues: There are several inconsistencies and inaccuracies in the citations. For instance, the citation of SWE-Bench in the introduction leads to SWE-Agent, which is incorrect. Moreover, some papers that have been accepted to conferences are cited in their arXiv versions (e.g., SWE-Bench, DSPy, HumanEval, AgentBench). Ensuring that all citations are accurate and up-to-date will enhance the paper's credibility. Each referenced work should be properly cited in its correct and most formal publication form when available.**\\n\\nWe have thoroughly reviewed all the citations and corrected the inconsistencies and inaccuracies (updating 19 references). We will ensure that all references are accurate and up-to-date in the final version of the paper.\"}
**Agentic systems should be evaluated by agentic benchmarks rather than conversational benchmarks:**\\nFor example, GPT-4o can achieve 90.2% on the HumanEval benchmark (initially designed for evaluating foundation models rather than agents) in a single-round conversation, while existing agentic works require significantly more resources (e.g., 1000x cost and time) to reach similar results (93%-96%). We believe such benchmarks do not effectively reflect the unique features of agentic systems (as discussed in the Introduction). To address this, we introduce the DevAI benchmark.\\n\\n2. **Intermediate feedback is critical for agentic systems (but expensive):** Agentic systems operate in an inherently step-by-step problem-solving manner, meaning their evaluation must consider the process, not just the final outcome. However, obtaining such feedback is costly\\u2014ideally, it should involve user experiences or expert-level judgments. We analyzed the quality, issues, and costs associated with expert-level feedback.\\n\\n3. **Agent-as-a-Judge can help:** Building on points (1) and (2), after developing a meaningful benchmark and acknowledging the importance (and expense) of intermediate feedback, we explored the idea of using agents to simulate human evaluations. This approach provides a more efficient and cost-effective solution to assess agents before releasing them to the public. Our goal is to validate this direction by addressing key challenges and demonstrating practical solutions. While further improvements (e.g., multi-agent setups or more complex prompt designs) are possible, we leave them for future work. In academia, this approach provides valuable reward signals to identify bottlenecks (e.g., pinpointing issues and missed steps in key processes) before obtaining the final outcome and paves the way for recursive self-improvement. 
In industry, it accelerates the development process by using Agent-as-a-Judge to simulate human experts or users, aiding pre-release evaluation.\\n\\n---\\n\\n**Thank you again for your positive review score. We truly appreciate your thoughtful feedback and support in refining our work. Let us know if further adjustments are needed!**\"}", "{\"title\": \"Thanks for your review! Authors' feedback [7/9].\", \"comment\": \"**Q4 (original W1 (b)). This is more like an engineering project.**\\n\\nWe agree that our work involves significant engineering efforts and a substantial manual workload, including designing the framework, carefully crafting datasets, and conducting human evaluations. As you and other reviewers have noted, this paper also identifies critical gaps and issues in evaluating agentic systems, demonstrates that existing approaches are insufficient to address these gaps, proposes a new benchmark, and presents a framework for evaluating agentic systems using agentic systems. Our work aligns with the standards of ICLR and is comparable to prior work presented at ICLR in this regard.\\n\\n\\n**Q5 (original Q3). Does this method generalize to novel tasks, which are outside the considered benchmark tasks?**\\n\\nThank you for this important question. To assess the generalization ability of Agent-as-a-Judge, we conducted additional experiments:\\n\\n**1. Providing Real-Time Feedback to OpenHands:**\\n\\nWe integrated AAAJ into OpenHands to provide intermediate feedback (we do not provide the original requirements, to keep the comparison fair), enhancing its performance on DevAI tasks. AAAJ functioned differently here\\u2014it acted as an environment feedback mechanism for the developer agent rather than merely assessing final outputs. The experimental results showed significant improvements in task completion rates with the integration of AAAJ. 
For example:\\n\\n| | Task 15 (Image_Captioning_ShowAndTell_Flickr8k_DL) | Task 25 (Speech_Emotion_Recognition_CNN_LSTM_RAVDESS_DL) | Task 35 (Loan_Default_Prediction_RandomForest_LendingClub_ML) |\\n|---------------------|--------------|-----------------|------------------|\\n| **OpenHands** | 2/6 | 3/7 | 0/7 |\\n| **OpenHands + AAAJ**| 4/6 (+33.33%) | 5/7 (+28.57%) | 4/7 (+57.14%) |\\n\\n\\n**2. Kaggle Assistant (we selected one similar task from MLE-Bench [1]: `Facebook Recruiting III - Keyword Extraction`):**\\n\\nWe selected a challenging task, *Facebook Recruiting III - Keyword Extraction*, from MLE-Bench [1]. Without AAAJ, OpenHands could not produce any submission. With AAAJ's feedback, OpenHands successfully wrote and executed all necessary code, achieving a score comparable to a bronze medal (~0.720). This demonstrates AAAJ's ability to generalize to novel and complex tasks.\"}
This approach extends the existing LLM-as-a-Judge paradigm by incorporating intermediate feedback mechanisms, enabling more granular and accurate evaluations. To validate this framework, the authors propose a new benchmark, DevAI, consisting of 55 realistic AI code generation tasks with detailed annotations and hierarchical requirements. The paper benchmarks three leading code-generating agentic systems (MetaGPT, GPT-Pilot, and OpenHands) using AAAJ, demonstrating its alignment with human evaluators and its superior performance compared to LLM-as-a-Judge. The authors also provide supplementary materials, including failure case analyses and experiments with different backbone models, to support their claims.\\n\\n\\nWeaknesses\\n- Limited Scope of Benchmark : While DevAI is a valuable benchmark, its scale (55 tasks) is relatively small compared to some existing benchmarks (e.g., MLE-Bench with 75 tasks). The authors argue that increasing the number of tasks would disproportionately increase costs, but this may limit the benchmark's generalizability to a broader range of AI development scenarios.\\n- Task-Specific Design : The AAAJ framework is currently tailored to code generation tasks, raising questions about its applicability to other domains. Although the authors suggest its potential for generalization, this remains a theoretical claim without concrete demonstrations across diverse tasks.\\n- Human Evaluation Limitations : The reliance on a small pool of human evaluators (the authors themselves) raises concerns about bias and the robustness of the ground truth labels. 
While the authors conducted a supplementary study with a larger group of students, the scope of this additional evaluation was limited (7 tasks), leaving room for skepticism about the reliability of the human baseline.\\n\\nOverall, the paper presents a compelling contribution to the field of agentic system evaluation, with the Agent-as-a-Judge framework and the DevAI benchmark offering practical and innovative solutions. While the paper has addressed many reviewer concerns through additional experiments and clarifications, some limitations persist, particularly regarding the scope of the benchmark and the framework's generalizability. The paper would benefit from further refinement in terms of presentation and broader demonstration of the framework's applicability.\", \"additional_comments_on_reviewer_discussion\": [\"Key Points and Responses During Rebuttal Period:\", \"1. **Structural Clarity (qJsZ):**\", \"**Concern**: The paper's structure was confusing, with the DevAI benchmark overshadowing the Agent-as-a-Judge (AAAJ) framework.\", \"**Response**: The authors added a clarifying paragraph in the introduction to outline the logical flow from benchmark creation to evaluation challenges and the AAAJ solution.\", \"2. **Separation of Contributions (qJsZ):**\", \"**Concern**: The AAAJ framework and DevAI benchmark should be separated into distinct papers for more detailed discussions.\", \"**Response**: The authors argued that the contributions are intrinsically linked, providing a rationale for their combined presentation and suggesting potential future directions.\", \"3. 
**Human Evaluation Methodology (4rku):**\", \"**Concern**: The use of only three human evaluators (the authors themselves) raised concerns about bias and insufficient robustness.\", \"**Response**: The authors conducted an additional study with 10 external evaluators on a subset of tasks, demonstrating alignment with the original results and reinforcing the reliability of their baseline.\", \"4. **Inconsistent Statements on Reliability (4rku):**\", \"**Concern**: The paper contained conflicting statements about the reliability of human evaluations.\", \"**Response**: The authors clarified that human evaluations were treated as a reliable baseline despite inherent limitations and explicitly outlined the relative reliability of evaluation methods.\", \"5. **Benchmark Scale and Complexity (MHgQ):**\", \"**Concern**: The DevAI benchmark was criticized for its small scale (55 tasks) and potential oversimplification of tasks.\", \"**Response**: The authors justified the benchmark size by comparing it to concurrent benchmarks and emphasizing its high-quality design, evidenced by low completion rates by tested agents.\", \"6. **Technical Novelty of AAAJ (MHgQ):**\", \"**Concern**: AAAJ was viewed as a natural extension of LLM-as-a-Judge, lacking significant technical novelty.\", \"**Response**: The authors outlined specific technical innovations, such as automated evidence collection and modular design, and provided a detailed example of AAAJ's evaluation process.\", \"Despite the authors' efforts to address reviewer concerns, the paper still exhibits some limitations that outweigh its contributions. The structural issues, while partially addressed, continue to hinder the readability and coherence of the manuscript. The limited scope of the DevAI benchmark and the lack of demonstrated generalizability of the AAAJ framework beyond code generation tasks remain substantial weaknesses.\"]}", "{\"title\": \"Thanks for your review! 
Authors' feedback [4/9].\", \"comment\": \"**(Following up with Q3...)**\\n\\n- Automated Step 3a (**check again the relevant code snippets (a)**):\\n\\n### Code Snippet\\n\\n**File:** `workspaces/OpenHands/39_Drug_Response_Prediction_SVM_GDSC_ML/src/train.py` \\n**Lines:** `[9, 35]` \\n**Identifier:** `def` \\n**Category:** `function` \\n\\n```python\\ndef evaluate_model(data_path, target_column, k=10):\\n # Load and select features\\n X, y, selected_features = load_and_select_features(data_path, target_column, k)\\n\\n # Train the model\\n model = train_svm_regressor(X, y)\\n\\n # Perform cross-validation\\n scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error')\\n rmse_scores = np.sqrt(-scores)\\n\\n # Save performance results\\n os.makedirs('results', exist_ok=True)\\n with open('results/performance.txt', 'w') as f:\\n f.write(f\\\"Selected features: {selected_features}\\\\n\\\")\\n f.write(f\\\"Cross-validated RMSE scores: {rmse_scores}\\\\n\\\")\\n f.write(f\\\"Mean RMSE: {rmse_scores.mean()}\\\\n\\\")\\n f.write(f\\\"Standard deviation of RMSE: {rmse_scores.std()}\\\\n\\\")\\n\\n # Visualize regression results\\n sns.histplot(rmse_scores, kde=True)\\n plt.title('Cross-validated RMSE scores')\\n plt.xlabel('RMSE')\\n plt.ylabel('Frequency')\\n os.makedirs('results/figures', exist_ok=True)\\n plt.savefig('results/rmse_scores.png')\\n plt.close()\\n```\\n\\n- Automated Step 3b (**check again the relevant code snippets (b)**):\\n\\n### Code Snippet\\n\\n**File:** `workspaces/OpenHands/39_Drug_Response_Prediction_SVM_GDSC_ML/src/model.py` \\n**Lines:** `[5, 12]` \\n**Identifier:** `def` \\n**Category:** `function` \\n\\n```python\\ndef train_svm_regressor(X, y):\\n # Create a pipeline with standard scaler and SVM regressor\\n model = make_pipeline(StandardScaler(), SVR(kernel='linear'))\\n\\n # Train the model\\n model.fit(X, y)\\n\\n return model\\n```\\n\\n**...**\\n\\n\\n\\n- Automated Step 4 (**identify the related multimodal file and 
read it**): \\n\\n```\\nContent of workspaces/OpenHands/39_Drug_Response_Prediction_SVM_GDSC_ML/results/rmse_scores.png:\\n\\nThe image is a histogram with a line plot overlayed, titled \\\"Cross-validated RMSE scores.\\\" \\n\\n- **X-axis (horizontal):** Labeled \\\"RMSE,\\\" with tick marks at 0.30, 0.32, 0.34, 0.36, and 0.38.\\n- **Y-axis (vertical):** Labeled \\\"Frequency,\\\" with tick marks from 0.00 to 2.00 in increments of 0.25.\\n- **Bars:** There are three bars in the histogram.\\n- The first bar is centered at approximately 0.30 with a height of 2.00.\\n- The second bar is centered at approximately 0.34 with a height of 2.00.\\n- The third bar is centered at approximately 0.36 with a height of 1.00.\\n- **Line Plot:** A smooth blue line overlays the histogram, indicating a trend. It starts at the first bar and rises slightly over the second bar before gently declining over the third bar.\\n- **Color:** The bars are shaded in a translucent blue with a black outline.\\n\\nThe chart visually represents the distribution and trend of RMSE scores across different ranges.'\\n```\"}", "{\"title\": \"Thanks for your review! Authors' feedback [5/9].\", \"comment\": \"**(Following up with Q3...)**\\n\\n- Automated Step 5 (**check the historical judgments (memory) on the same workspace**): \\n\\n```\\n--------------------------------------------------\", \"requirement_1\": \"--------------------------------------------------\", \"criteria\": \"The performance results are saved as `results/performance.txt`.\", \"satisfied\": \"Yes\", \"reason\": \"<SATISFIED>: The requirement to save the performance results as `results/performance.txt` is satisfied. The code in `src/train.py` includes a section where the cross-validated RMSE scores, mean RMSE, and standard deviation of RMSE are written to `results/performance.txt`, as evidenced by lines 22-26 in the code snippet. 
The content of `performance.txt` confirms that these metrics have been successfully saved, as it contains the selected features and the cross-validated RMSE scores, along with their mean and standard deviation.\\n--------------------------------------------------\\n```\", \"requirement_2\": \"--------------------------------------------------\", \"requirement_3\": \"--------------------------------------------------\", \"requirement_4\": \"--------------------------------------------------\", \"requirement_5\": \"--------------------------------------------------\"}", "{\"title\": \"Thanks for your review! Authors' feedback [3/9].\", \"comment\": \"**Q3 (original Q1). Could you clarify the technical contributions of this agent-as-a-judge? How does this method differ from LLM-as-a-Judge?**\\n\\nAgent-as-a-Judge is specifically designed to evaluate agentic systems, while LLM-as-a-Judge is tailored for assessing language models. Our results show that this is a non-trivial extension. As discussed above (**Q1 (original W1 (a))**), the key technical contributions (and new features) of Agent-as-a-Judge compared to LLM-as-a-Judge are: (1) Automated Evidence Collection and Verification and (2) Agentic Capabilities and Modular Design.\\n\\n**We provide a comprehensive trajectory of a judgment made by Agent-as-a-Judge that clearly shows these specific features:**\\n\\n---\", \"requirement\": \"```\\n {\\n \\\"requirement_id\\\": 5,\\n \\\"prerequisites\\\": [\\n 0,\\n 1,\\n 2,\\n 3\\n ],\\n \\\"criteria\\\": \\\"The regression results are visualized using \\\\\\\"seaborn,\\\\\\\" and saved to `results/figures/`.\\\",\\n \\\"category\\\": \\\"Visualization\\\",\\n }\\n```\\n\\n- Automated Step 1 (**review user's query to get a comprehensive understanding**): \\n\\n```\\nDevelop a system to predict drug response using the GDSC dataset with a Support Vector Machine (SVM) regressor. Load the dataset and perform feature selection to identify key features in `src/data_loader.py`. 
Implement the SVM regressor in `src/model.py`. Use cross-validation to evaluate the model's performance in `src/train.py`. Save the performance results to `results/performance.txt`. Visualize the regression results using seaborn and save it under `results`. Next, create a report including the data preprocessing, model training, evaluation process, and the visualization. Save the report as `results/report.pdf`. The report should emphasize how feature selection impacts the model's performance, and the regression results visualization should clearly highlight the relationship between the selected features and the predicted drug response. Ensure the system is designed to be easily extendable for incorporating additional datasets or new features.\\n```\\n\\n- Automated Step 2 (**construct the workspace tree**): \\n\\n```\\n \\u256d\\u2500 Project Tree \\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u256e\\n \\u2502 \\u2502\\n \\u2502 Workspace Path: workspaces/OpenHands/39_Drug_Response_Prediction_SVM_GDSC_ML \\u2502\\n \\u2502 Total Nodes: 5 \\u2502\\n \\u2502 \\u2502\\n \\u2502 Project Structure \\u2502\\n \\u2502 \\u251c\\u2500\\u2500 . 
\\u2502\\n \\u2502 \\u2502 \\u2514\\u2500\\u2500 gdsc_dataset.csv \\u2502\\n \\u2502 \\u251c\\u2500\\u2500 results \\u2502\\n \\u2502 \\u2502 \\u251c\\u2500\\u2500 report.md \\u2502\\n \\u2502 \\u2502 \\u2514\\u2500\\u2500 rmse_scores.png \\u2502\\n \\u2502 \\u2502 \\u2514\\u2500\\u2500 performance.txt \\u2502\\n \\u2502 \\u2514\\u2500\\u2500 src \\u2502\\n \\u2502 \\u251c\\u2500\\u2500 data_loader.py \\u2502\\n \\u2502 \\u251c\\u2500\\u2500 model.py \\u2502\\n \\u2502 \\u2514\\u2500\\u2500 train.py \\u2502\\n \\u2502 \\u2502\\n \\u2570\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u2500\\u256f\\n```\"}" ] }
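The "construct the workspace tree" step in the trajectory above can be approximated with a few lines of standard-library Python. This is an illustrative sketch only; the function name and the plain-text formatting are assumptions, not the actual Agent-as-a-Judge implementation:

```python
import os

def build_project_tree(root):
    """Return a plain-text listing of all files under `root`,
    indented by directory depth (a simplified workspace tree)."""
    lines = []
    for dirpath, dirnames, filenames in sorted(os.walk(root)):
        rel = os.path.relpath(dirpath, root)
        # The root itself sits at depth 0; each path separator adds one level.
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        indent = "  " * depth
        lines.append(f"{indent}{os.path.basename(dirpath) or dirpath}/")
        for fname in sorted(filenames):
            lines.append(f"{indent}  {fname}")
    return "\n".join(lines)
```

A real evaluator would likely also record per-node metadata (file type, size, node count) before handing the tree to the judging step.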
DdPeCRVyCd
Communication-Efficient Federated Low-Rank Update Algorithm and its Connection to Implicit Regularization
[ "Haemin Park", "Diego Klabjan" ]
Federated Learning (FL) faces significant challenges related to communication efficiency and heterogeneity. To address these issues, we explore the potential of using low-rank updates. Our theoretical analysis reveals that client's loss exhibits a higher rank structure (gradients span higher rank subspaces of Hessian) compared to the server's loss. Based on this insight, we hypothesize that constraining client-side optimization to a low-rank subspace could provide an implicit regularization effect. Consequently, we propose FedLoRU, a general low-rank update framework for FL. Our framework enforces low-rank client-side updates and accumulates these updates to form a higher-rank model. Additionally, variants of FedLoRU can adapt to environments with statistical and model heterogeneity by employing multiple or hierarchical low-rank updates. Experimental results demonstrate that FedLoRU performs comparably to full-rank algorithms and exhibits robustness to heterogeneous and large numbers of clients.
[ "Federated Learning", "Communication-Efficient Federated Learning", "Low-Rank Nature", "Cross-Device Federated Learning" ]
Reject
https://openreview.net/pdf?id=DdPeCRVyCd
https://openreview.net/forum?id=DdPeCRVyCd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wL1TjpoyvA", "mLjw1wOJz5", "kDoRllCwr5", "jP2SeMXhBj", "fXQSHRdGu2", "el0vSesPBY", "e98m4DFxPC", "dcCUglK2Id", "bwN9kWTNfw", "bgr943FTVW", "bWaybodXDK", "ZmWwRlv7NC", "SxPDfaSdAg", "SPUMfs5hcK", "LUQ3dCnvbs", "Jt4DfxiX1q", "IqkD7sm9JL", "DEXgC6lZQ4", "CGWxFHcdFs", "9cDVThDati", "6jJLCJ5tWd", "49Wy2Wske0", "2W5srkud7f", "1b824QHxuP" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732561862757, 1731986145457, 1730397031237, 1732334881319, 1729999859713, 1732223153655, 1731985497310, 1732514351097, 1731985080954, 1737523901762, 1733194727279, 1730697981019, 1731986345401, 1731985393499, 1734781361675, 1733192510076, 1731984992406, 1731986427426, 1730456384216, 1732514429421, 1731984685122, 1732222080848, 1731985879010, 1732561355204 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_9pru" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_mpJt" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_yeeh" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_yeeh" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_mpJt" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_9pru" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Area_Chair_4nMH" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_WVpF" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_mpJt" ], [ "ICLR.cc/2025/Conference/Submission8334/Authors" ], [ "ICLR.cc/2025/Conference/Submission8334/Reviewer_mpJt" ] ], "structured_content_str": [ "{\"comment\": \"The theoretical analysis reveals a higher-rank nature of the Hessian of a smaller dataset, but it may not be regarded as a conclusion for federated learning. The insight only reveals the high-rank nature of small datasets, and it cannot conclude that constraining to low rank would help to align client updates along major directions and facilitate better aggregation. Therefore the contribution of the theory is limited. Also, as the algorithm's convergence was not established, the conclusion is not convincing without repeating your experiments to report the variance.\"}", "{\"title\": \"Continued Response to Reviewer mpJt's Review\", \"comment\": \"**W3. According to my understanding, this algorithm is still a full parameter training algorithm as it initializes W every \\u03c4 step. So the comparison to LoRA is unfair**\", \"we_would_like_to_clarify_the_nature_of_our_algorithm_and_the_rationale_behind_the_comparisons_presented_in_our_work\": \"- **Parameter Efficiency**:\\n \\n Our algorithm, FedLoRU, is a fully parameter-efficient training method. While we reinitialize low-rank modules every \\u03c4 rounds (which is optional), this does not imply that full parameters are used during training. In local training, we only train low-rank modules. In fact, one of the key advantages of FedLoRU is its communication efficiency. 
In federated learning, communication overhead is a critical bottleneck, and FedLoRU significantly reduces this by transferring only low-rank modules instead of a full-rank model. Therefore, comparing FedLoRU with other low-rank methods, such as FedLoRA, is both reasonable and relevant, as they share the goal of achieving communication efficiency through low-rank updates.\\n \\n- **Comparison with Conventional Federated Learning Algorithms**:\\n \\n It is important to note that our low-rank local training strategy is highly general and can be integrated with conventional methods such as FedProx ([2]), SCAFFOLD ([3]), and FedAdam ([4]). These algorithms primarily address optimization loss or server-side aggregation strategies rather than model updates. As such, we can easily plug the FedLoRU algorithm into other federated learning algorithms. For example, we can use the FedLoRU algorithm with FedAdam. Exploring these combinations remains an exciting avenue for future research. \\n\\n**W4. You can't accurately solve argmin_{A,B} f. This step is computation-heavy even if you use an \\\\epsilon-approximation. This step is actually one step of full LoRA tuning. Therefore, this algorithm is not suitable for LLM fine-tuning.**\\n\\nThe one-step local training process in our algorithm is equivalent to one step of LoRA training. Numerous studies have demonstrated the effectiveness of LoRA for fine-tuning large language models (LLMs). Your statement seems to imply that LoRA itself is not suitable for LLM fine-tuning, which contradicts existing evidence. If this interpretation is incorrect, could you please clarify your question further?\\n\\nIf your concern is that low-rank training is unsuitable due to computational costs, I would argue the opposite\\u2014it is actually more suitable for federated LLM fine-tuning. 
For LLM fine-tuning, we use very low-rank modules, which require little computational power.\\n\\nIn particular, in the context of federated learning, the most significant bottleneck for LLM fine-tuning is not computation but communication. For instance, transferring the full weights of a model like LLaMa2-7B (~13.5GB) requires significantly more time than performing local training on client devices. By leveraging low-rank updates, our approach drastically reduces the communication overhead, making it particularly well-suited for federated LLM fine-tuning scenarios.\\n\\nWe appreciate the reviewer's constructive comments and time again. If any questions remain, we will do our best to answer them.\\n\\n**References**\\n\\n[1] Baskerville, N. P. (2023). Random matrix theory and the loss surfaces of neural networks.\\u00a0*arXiv preprint arXiv:2306.02108*.\\n\\n[2] Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., & Smith, V. (2020). Federated optimization in heterogeneous networks.\\u00a0*Proceedings of Machine learning and systems*,\\u00a0*2*, 429-450.\\n\\n[3] Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. T. (2020, November). Scaffold: Stochastic controlled averaging for federated learning. In\\u00a0*International conference on machine learning*\\u00a0(pp. 5132-5143). PMLR.\\n\\n[4] Reddi, S., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Kone\\u010dn\\u00fd, J., ... & McMahan, H. B. (2020). Adaptive federated optimization.\\u00a0*arXiv preprint arXiv:2003.00295*.\"}", "{\"summary\": \"This paper studies communication-efficient low-rank update framework for federated learning.\\n\\nIt provides theoretical asymptotic analysis for the rank structures of the Hessian at server side and client side, which motivates the design of FedLoRU algorithm. Generalizations of FedLoRU under statistical and model heterogeneity, namely pFedLoRU and mFedLoRU, are also presented. 
Finally, they add another low-rank module pair to adapt to environments with statistical and model heterogeneity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper reveals that client loss in federated learning has a higher rank structure (in gradients and Hessian subspaces) than the server's loss.\n Based on this, they propose that restricting client optimization to a low-rank subspace could provide implicit regularization. They then introduce FedLoRU, a framework that enforces low-rank updates on the client side and aggregates them into a higher-rank model. Finally, they add another low-rank module pair to adapt to environments with statistical and model heterogeneity.\", \"weaknesses\": \"The novelty is limited; there is no close connection between the analysis and the algorithm. I think this algorithm is a federated version of ReLoRA if we consider the non-personalized version, aggregating low-rank modules for higher-rank training.\n\nThere is no theoretical analysis for the algorithm. It's fully heuristic. When we consider the personalized strategy this paper studied, I don't know what kind of solution this algorithm will converge to. Will the introduced L, U fully cancel out the A, B modules and make this algorithm fully consider local loss? The author didn't provide the reasonability of their strategy. \n\nAccording to my understanding, this algorithm is still a full parameter training algorithm as it initializes W every $\\tau$ step. So the comparison to LoRA is unfair. On the other hand, there are numerous algorithms for conventional federated learning. If you want to highlight your algorithm's advantage, you should compare your algorithm with the conventional algorithm, rather than just beating LoRA. \n\nYou can't accurately solve argmin_{A,B} f. This step is computation-heavy even if you use an \\epsilon-approximation. This step is actually one step of full LoRA tuning. 
Therefore, this algorithm is not suitable for LLM fine-tuning.\", \"questions\": \"Please refer to the limitation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed reply.\", \"regarding_the_connection_between_loss_hessian_and_weight_matrices\": \"\\\"The connection between the Hessian and the model lies in the fact that model updates follow the eigenvectors of the Hessian.\\\"It is not clear to me why this statement is necessarily true. The algorithm performs local updates by minimizing the local objective, but I am not aware of the connection between minimization and the eigenvectors of Hessian.\\n\\nIn general, I was wondering why the theory on low-rank hessians can motivate the design of low-rank weight updates. They seem very separated without further justifications.\"}", "{\"summary\": \"This paper studies communication-efficient low-rank update framework for federated learning.\\n\\nIt provides theoretical asymptotic analysis for the rank structures of the Hessian at server side and client side, which motivates the design of FedLoRU algorithm. Generalizations of FedLoRU under statistical and model heterogeneity, namely pFedLoRU and mFedLoRU, are also presented. 
Finally, the authors conduct experiments on computer vision pre-training and language model fine-tuning tasks to demonstrate the performance of FedLoRU and its generalizations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper provides rigorous theoretical analysis on the Hessian rank structures, establishing interesting asymptotic results within a mathematically general framework.\", \"The proposed algorithm achieves performance comparable or superior to other known methods in experiments, while significantly reducing the communication overhead by low-rank updates.\", \"The presentation of this paper is well-organized and the motivation and methodology are clear to follow.\"], \"weaknesses\": [\"Although the authors provide some Hessian rank structure analysis, the design of FedLoRU can be better supported from the theoretical side. For example, some convergence guarantees, since low-rank updates lead to loss of information compared to full-rank updates and may hurt the optimization.\", \"The title mentions \\\"its connection to implicit regularization\\\", but I was not able to spot sufficient discussion on implicit regularization of FedLoRU; also please see a conceptual question below.\", \"The design of FedLoRU seems a straightforward extension to federated setting of existing methods for low-rank matrix accumulation such as ReLoRA [1].\", \"This is not a major weakness but more evaluations on LLM fine-tuning could be done, as most of the experiment details are devoted to computer vision tasks on small datasets.\", \"[1] Vladislav Lialin, Sherin Muckatira, Namrata Shivagunde, and Anna Rumshisky. Relora: High- rank training through low-rank updates. In The Twelfth International Conference on Learning Representations, 2023.\"], \"questions\": [\"The title mentions \\\"its connection to implicit regularization\\\". 
To my knowledge, implicit regularization refers to the phenomenon that optimizers without explicit regularization, such as SGD, prefer regularized solutions [1]. However, FedLoRU explicitly works in a specific rank-$r$ space. Could the authors please explain in what sense is FedLoRU connected to implicit regularization?\", \"The theory part analyzes the rank structures of *loss Hessians* at server and client side. At the same time, FedLoRU proposes to perform low-rank updates on the model's *weight matrices*. Could the authors please explain the connection between the rank structure of loss Hessians and weight matrices?\", \"[1] Ziwei Ji and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. arXiv preprint arXiv:1810.02032, 2018.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"What I want to emphasize is that you need to do a series of LoRA, and then merge AB into W sequentially, which sacrifices the flexibility of LoRA. LoRA separating adapter and the frozen pre-trained model, which thus can be adapted to multiple tasks in parallel. Concretely, for each task, LoRA only needs to store {A, B}. Your algorithm needs to store the parameters with the size of the full model. I think it is inefficient.\\n\\nOverall, compared with the conventional algorithm, this paper lacks theoretical justification. Compared with LoRA, this work sacrifices flexibility and extensibility. \\n\\nI will keep my score for now.\"}", "{\"title\": \"Continued Response to Reviewer WVpF's Review\", \"comment\": \"**W3. The author should consider more baselines, which apply low-rank factorized update models, such as [1]. (FedPara)**\\n\\nThat is a good suggestion to compare with the other low-rank factorized updates.\\n\\nWe agree that it is valuable to compare different low-rank methods. 
In our study, our primary goal was to demonstrate the advantages of low-rank training in federated learning with a large number of clients. Thus, our approach is not limited to LoRA-style low-rank updates. As mentioned in our paper (lines 348-349), other low-rank methods can also be applied, but we adopted the most standard method for our experiments.\\n\\nRegarding the specific method LoHa (FedPara\\u2019s low-rank approach using Hadamard products), we chose not to include it for the following reasons:\\n\\n1. **Performance**: Preliminary experiments with LoHa on CIFAR-10 ($K=100$) showed that it performs worse than FedLoRA, achieving a final top-1 accuracy of approximately 0.75, which is significantly lower than other methods. Furthermore, the FedPara paper tested LoHa under settings with a small number of clients ($K=16$ for CIFAR-10 and $K=8$ for CIFAR-100) and a low participation ratio ($C=0.16$). These conditions involve only a small subset of clients participating per round, which do not align with our primary goal of testing scalability with many clients.\\n2. **Computational Overhead**: LoHa incurs significantly higher computational costs, requiring twice the training time compared to LoRA. This makes it less practical for scenarios with a large number of clients.\\n\\nNonetheless, we acknowledge the importance of exploring and comparing alternative low-rank methods. Future work can investigate whether other low-rank approaches perform similarly to FedLoRU and provide a broader comparison across different methods.\\n\\nWe appreciate the reviewer's constructive comments and time again. If there is any remaining question, we will try our best to answer.\"}", "{\"title\": \"Response to Reviewer yeeh's Comment\", \"comment\": \"Thank you for the comment and raising the important question. 
We provide a more detailed explanation of the connection between the loss Hessian, gradient descent updates, and the motivation for employing low-rank updates in our algorithm.\n\n**Connection Between Gradient Descent and Hessian Eigenvectors**\n\nWhen we update the model using gradient descent, the Hessian matrix of the loss function encapsulates the local curvature information of the loss landscape. The eigenvectors of the Hessian represent the principal axes of curvature, and the corresponding eigenvalues indicate the degree of curvature along these directions.\n\nAccording to several previous works ([1], [2], [3]), the direction and magnitude of the gradient $\\nabla f(\\omega)$ are influenced by the Hessian's eigenstructure. In particular, gradient descent updates are implicitly biased toward the eigenvectors associated with larger eigenvalues. This means that the **minimization step is dominated by the eigenvectors corresponding to the top eigenvalues**.\n\nFor example, in [2], the authors projected the gradient onto the subspace spanned by the top eigenvectors and calculated the proportion of the gradient contributed by this projection. They demonstrated that the gradient is dominated by the top eigenvectors of the Hessian.\n\n**Motivation for Low-Rank Weight Updates from Low-Rank Hessian Analysis**\n\nWe consider low-rank update matrices $AB$ as the **updates** and accumulate them to construct our final model. By constraining these matrices to be of low rank, we ensure that the updates lie in a low-rank subspace, aligning them with the most significant curvature directions identified by the Hessian analysis.\"}", "{\"title\": \"References for the Responses\", \"comment\": \"We appreciate the reviewer's constructive comments and time again. If there is any remaining question, we will try our best to answer.\n\nWe provide the references for the responses.\n\n**References**\n\n[1] Baskerville, N. P. (2023). 
Random matrix theory and the loss surfaces of neural networks.\\u00a0arXiv preprint arXiv:2306.02108.\\n\\n[2] Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., & Smith, V. (2020). Federated optimization in heterogeneous networks.\\u00a0*Proceedings of Machine learning and systems*,\\u00a0*2*, 429-450.\\n\\n[3] Reddi, S., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Kone\\u010dn\\u00fd, J., ... & McMahan, H. B. (2020). Adaptive federated optimization.\\u00a0*arXiv preprint arXiv:2003.00295*.\\n\\n[4] Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. T. (2020, November). Scaffold: Stochastic controlled averaging for federated learning. In\\u00a0*International conference on machine learning*\\u00a0(pp. 5132-5143). PMLR.\\n\\n[5] Sagun, L., Bottou, L., & LeCun, Y. (2016). Eigenvalues of the hessian in deep learning: Singularity and beyond.\\u00a0*arXiv preprint arXiv:1611.07476*.\\n\\n[6] Gur-Ari, G., Roberts, D. A., & Dyer, E. (2018). Gradient descent happens in a tiny subspace.\\u00a0*arXiv preprint arXiv:1812.04754*.\\n\\n[7] Li, T., Tan, L., Huang, Z., Tao, Q., Liu, Y., & Huang, X. (2022). Low dimensional trajectory hypothesis is true: Dnns can be trained in tiny subspaces.\\u00a0*IEEE Transactions on Pattern Analysis and Machine Intelligence*,\\u00a0*45*(3), 3411-3420.\\n\\n[8] Rudelson, M., & Vershynin, R. (2007). Sampling from large matrices: An approach through geometric functional analysis.\\u00a0*Journal of the ACM (JACM)*,\\u00a0*54*(4), 21-es.\\n\\n[9] Ipsen, I. C., & Saibaba, A. K. (2024). Stable Rank and Intrinsic Dimension of Real and Complex Matrices.\\u00a0*arXiv preprint arXiv:2407.21594*.\\n\\n[10] Bartlett, P. L., Foster, D. J., & Telgarsky, M. J. (2017). Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30.\\n\\n[11] Neyshabur, B., Bhojanapalli, S., & Srebro, N. (2017). 
A pac-bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564.\n\n[12] Sanyal, A., Torr, P. H., & Dokania, P. K. (2019). Stable rank normalization for improved generalization in neural networks and gans.\u00a0*arXiv preprint arXiv:1906.04659*.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer mpJt's Comment\", \"comment\": \"Thank you for your detailed feedback.\n\n**Regarding the algorithm design:**\n\nOur theoretical analysis indicates that during client-side optimization in federated learning, the Hessian matrices associated with smaller local datasets tend to have a higher stable rank. This suggests that the optimization landscape at each client is high-dimensional and potentially divergent from others. By constraining the updates to a low-rank space, we aim to capture the most significant directions that are common across clients. We acknowledge that there is a gap between our theoretical analysis and the proposed algorithm. While this does not directly conclude that constraining to a low rank will always aid in aligning client updates, our empirical results suggest that it does (we observe a clear gap between full-rank and low-rank training). We acknowledge that this heuristic may not be immediately intuitive, and we are conducting additional analysis to strengthen our hypothesis about low-rank updates in federated learning, which we will include in a future revision.\n\n**Regarding memory usage compared to LoRA:**\n\nFirst, this is not quite right: when merging $A_t B_t$ into $W$, no additional memory is required to store the product $A_t B_t$. The low-rank matrices can be integrated into the original model weights, resulting in no permanent increase in model size. 
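To illustrate this merging step, here is a minimal NumPy sketch (illustrative only, with made-up dimensions, not the paper's implementation): each round's rank-$r$ product is folded into the frozen weights, so the stored model never grows, while the accumulated change can reach a rank well above $r$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, rounds = 32, 2, 4

W = rng.standard_normal((d, d))   # current full weights
W0 = W.copy()
for _ in range(rounds):
    # Aggregated low-rank modules for this round (hypothetical values).
    A = rng.standard_normal((d, r))
    B = rng.standard_normal((r, d))
    W += A @ B                    # merge: model size is unchanged

total_change = W - W0
rank_of_change = np.linalg.matrix_rank(total_change)  # can reach rounds * r
```

The key point is that `W` keeps its original `d x d` shape after every merge, while the accumulated update attains a rank up to `rounds * r`.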
For fine-tuning, it is correct that we need extra memory to store a series of $A_t B_t$ products, but it is still much more efficient to use our low-rank update algorithm than a full-rank algorithm. We agree with your point \u201cyou don't know how many of them need to be stored\u201d, but the number of low-rank matrices stored is limited (e.g., three updates suffice for LLaMa2-3B, amounting to less than 1% of the original model size).\n\nWe recognize that this may not always be the case and will revise our manuscript to provide a more balanced discussion on this matter.\n\nAgain, thank you for your thoughtful feedback. Your insights help us improve the clarity and accuracy of our work.\"}", "{\"summary\": \"The paper applies FedLoRU and its variants to impose the local update in a low-rank subspace to achieve implicit regularization.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"FedLoRU uses successive low-rank updates for both pre-training and fine-tuning in federated learning and achieves good performance.\", \"weaknesses\": \"W1. The novelty is not justified sufficiently.\n\nW2. More discussions and justifications regarding the stable rank metric are needed.\n\nW3. The experiment setup and results are not convincing.\", \"questions\": \"Q1. The paper presents FedLoRU and its variants by applying low-rank updates in a federated learning setting. However, the novelty of this proposed method is limited. The idea of using low-rank updates in federated learning has been explored before, and the paper does not provide a compelling argument for why the proposed method outperforms existing approaches.\n\nQ2. While the paper utilizes the stable rank metric to analyze rank properties between local clients and the central server, the discussion around this metric is lacking. The claim that stable rank \"serves as a continuous proxy for rank and is robust\" is made without sufficient references or supporting literature. 
Additionally, more discussion is needed on how this concept is adapted from related fields, and why it is appropriate for the federated learning context.\\n\\nQ3. Figure 2(a) is difficult to interpret. Both the datasets with 50 and 500 samples show a high stable rank at the 15th epoch, which is counterintuitive and requires further explanation. It would strengthen the paper if the authors could repeat the experiment multiple times and provide clearer insights to support the observed trends.\\n\\nQ4. The experiment shown in Figure 2(b) does not convincingly support the authors' intuition without a more detailed description. A thorough explanation of the experimental setup and its relation to Theorem 3.2 would significantly improve the clarity and impact of the results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer yeeh's Review\", \"comment\": \"We appreciate the reviewer\\u2019s time. Below, we address the concerns and questions raised:\\n\\n**Q1. Could the authors please explain in what sense is FedLoRU connected to implicit regularization?**\\n\\nThank you for the thoughtful question. We would like to clarify how FedLoRU is connected to implicit regularization.\\n\\nMathematically, regularization refers to the process of simplifying the solution of a problem. Explicit regularization involves adding a penalty term to the optimization objective, whereas implicit regularization includes all forms of regularization that do not involve such penalties. Since FedLoRU does not introduce any explicit penalty term, its regularization effect is implicit.\\n\\nNow I will explain the implicit regularization effect of FedLoRU.\\n\\nIn federated learning, a key challenge is the discrepancy between clients. After local training, individual models often diverge significantly, leading to suboptimal performance when aggregated at the server. 
Reducing this client discrepancy is crucial for improving federated learning outcomes.\n\nOur theoretical analysis (Theorem 3.2) shows that clients exhibit a higher stable rank, indicating a more complex loss landscape. This complexity exacerbates client discrepancies. By constraining updates to a low-rank space, FedLoRU implicitly regularizes client training, aligning updates along major directions and reducing client-to-client variations. In short, by using local low-rank updates, we force clients toward simpler solutions (locally trained models), which leads to a more general aggregated server model.\n\n\n**Q2. Could the authors please explain the connection between the rank structure of loss Hessians and weight matrices?**\n\nWe would like to clarify the connection between the rank structure of loss Hessians and the model's weight matrices in the context of FedLoRU.\n\n1. **Role of Low-Rank Matrices:**\n \n In FedLoRU, low-rank matrices are used to represent the updates applied during local training. These matrices capture the difference between the updated model and the previous model. The connection between the Hessian and the model lies in the fact that model updates follow the eigenvectors of the Hessian. This relationship is critical to understanding the curvature of the loss landscape and its effect on training.\n \n2. **Connection Through Loss Landscape Analysis:**\n \n In our paper, we use **stable rank** to analyze the rank properties of the loss landscape at both the client and server levels. The rank structure reflects the curvature information of the loss landscape, which directly influences the model updates during training. Specifically, stable rank provides a better representation of this curvature compared to rank, as it focuses on the effective dimensionality of the Hessian.\n \n3. 
**Insights from Deep Learning Training Dynamics**:\n \n According to the works ([1], [2], [3]), the gradient descent trajectory can be separated into two components: a bulk component (eigenvectors corresponding to a large number of small eigenvalues) and a top component (eigenvectors corresponding to a small number of large eigenvalues), which we call the bulk subspace and the top subspace.\n \n A significant portion of the gradient contribution comes from the top subspace. Therefore, to understand the loss landscape and its impact on training, it is essential to focus on the stable rank, which we employ in our study.\n \n4. **Stable rank of Hessian and Low-rank Updates**\n \n While the Hessian may have a very high traditional rank, the number of eigenvalues contributing significantly to the loss landscape is typically small (e.g., $k$ eigenvalues for a $k$-class classification problem). Stable rank effectively captures this curvature information. Stable rank is less sensitive to small perturbations in the Hessian compared to traditional rank measures ([4], [5]). This makes it a more robust metric for analyzing the training loss landscape, as minor variations in the data points or training steps do not lead to significant changes in the stable rank. This property has been widely utilized in deep learning research ([6], [7], [8]) to assess and restrict model complexity.\n \n\nTherefore, from this analysis, we expect that low-rank update matrices restrict the complexity of local training, which leads to better performance of federated learning by reducing clients\u2019 discrepancy.\"}", "{\"title\": \"Response to Reviewer WVpF's Review\", \"comment\": \"We appreciate the reviewer\u2019s time. Below, we address the concerns and questions raised:\n\n**W1. In the theorems that are presented, summarizing the main insights of these theorems may be needed.**\n\nThanks for the question. 
We provide a detailed explanation of the main theorem and its novelty below.\n\n- **explanation**: From a data-generating distribution, we pick N samples, and then pick M (<N) samples from those N samples. Then, for a prediction function h and weight w of dimension R, the stable rank of the Hessian for the M samples is asymptotically larger than the stable rank of the Hessian for the N samples.\n\n- **insight**: In federated learning with K local clients, when we assume each local client has M samples, the server-side optimization loss is over N=KM samples. Theorem 3.2 says that when we compare the rank structure of the local and global loss landscapes, the local client has a larger rank structure, meaning the local client has a more complicated loss landscape.\n \n From this insight, we hypothesize that if we can reduce the complexity of the local loss landscape (by using low-rank updates), it might help to reduce client discrepancy, which is a major factor of performance degradation in federated learning. We also provide evidence that as the local dataset size decreases (compared to the size of the combined dataset), low-rank update algorithms outperform the full-rank algorithm (FedAvg) (see Figure 2(b)).\n \n- **theoretical novelty 1**: This is the first theoretical analysis of the rank nature of the optimization loss landscape in federated learning. We provide information about the largest eigenvalues of client-side and server-side optimization, and we have shown that local clients have a higher stable rank.\n- **theoretical novelty 2**: We first introduce two decoupled additive perturbed models to solve the dependency problem in finding the limiting eigenvalues of the two Hessians. For example, Baskerville et al. solve the dependency problem simply by assuming that their matrices are independent, when they are, in fact, dependent.\n\n**W2. In experiments, the least partial client participation ratio is set as 0.5. 
In more realistic settings, the participation ratio is lower with more clients.**\n\nThank you for the great point. \n\nThis is indeed an important consideration. Our study emphasizes the effectiveness of client-side low-rank updates, particularly in cross-device federated learning scenarios involving a large number of participating clients. To evaluate this, we conducted experiments with $K=200,300,400$, and a participation rate of $C=0.5$. These results demonstrate that low-rank training methods outperform full-rank training (FedAvg), aligning with our theoretical insights.\n\n| $K$ | FedAvg | FedHM | FedLoRA | FedLoRU |\n|--------------|---------|--------|---------|-----------------|\n| $K=100$ | 0.5382 | 0.5732 | 0.5506 | **0.5837** |\n| $K=200$ | 0.3885 | 0.4872 | 0.5227 | **0.5393** |\n\n\nIn addition, inspired by your question, we extended our experiments to settings with a lower participation ratio and a larger number of clients. Specifically, we examined $K=100,200$ with $C=0.1$, using an IID CIFAR-100 dataset, which is more challenging than FMNIST and CIFAR-10. For these tests, we used the ResNet18 model, applying full parameter training for FedAvg and 41% parameter training for low-rank methods. The results, averaged over three runs with very low standard deviation (< 0.005), indicate that:\n- Low-rank training methods consistently outperform full-rank training under a lower participation ratio with more clients.\n- FedLoRU achieves the best performance among low-rank methods.\n\nInterestingly, we observed that under these lower participation ratio conditions, FedHM surpasses FedLoRA, which contrasts with the results for higher participation ratios ($C=0.5$). 
This finding highlights the complex relationship between participation rate and algorithm performance, further demonstrating the reliability of FedLoRU.\\n\\nWe hope this clarifies our experimental setup and results, demonstrating the adaptability of low-rank methods in diverse participation scenarios.\"}", "{\"metareview\": \"The paper proposes the FedLoRU method: a federated optimization method combining low-rank updates on the clients' side (equivalent to running LORA locally on each client), aggregating the updates on the servers, and repeating this process. The authors claim this method is motivated by the needs for communication-efficient federated learning methods, combined with regularization offered by low-rank adaptation.\\n\\nI do not really observe any particular strengths in this paper. The core of the idea is straightforward, and appeared in some previous papers already (e.g., Kuo et al, 2024). The difference here is that multiple low-rank updates are added to the base model over the algorithm run. However, this idea was explored before, too, in the COLA (Chain of LORA) paper (Xia et al, 2024).\", \"some_issues\": [\"There is no convergence analysis of the new method.\", \"The aggregation mechanism is problematic from a theoretical point of view: the A updates are aggregated on their own, and the B updates are aggregated on their own, whereas the \\\"mathematically correct\\\" update would be to average the products A*B; i.e., average the updates. This approach does not lead to a low-rank update, however. Thus, the authors resort to a heuristic, which needs justification. 
Multiple papers were written on this topic before.\", \"The connection between weights being low-rank and Hessians being of low-rank, which plays a key role in their justification of the method, was questioned by the reviewers, and the explanation is not satisfying - it relies on hand-waving arguments instead of on solid mathematical reasoning.\", \"Experimental results were not found to be convincing enough - a very high bar is needed for an empirical work; and this bar was certainly not achieved.\", \"The authors seem to be unaware of prior FL literature with very closely connected algorithms which, unlike their work, have strong theoretical backing. For example, literature on federated optimization with contractive compression applied to the updates by the clients (e.g., all work on error feedback, started in 2014 by Seide, with hundreds of follow-up works) is very closely connected. This is because low-rank approximation is known to be a contractive compressor.\", \"I doubt the convergence of the presented method can be analyzed mathematically - in fact, I believe simple counterexamples can be found on which this method fails.\", \"Finally, no reviewer recommended this paper for acceptance. I've read the reviews, the rebuttals and the discussion, and have briefly looked at the paper as well. I agree with the overall judgement that this paper has many significant weaknesses, and should not be accepted.\", \"AC\"], \"additional_comments_on_reviewer_discussion\": \"The key here is that the points raised by the reviewers were not addressed satisfactorily. This was also not possible, since the issues are indeed issues with the paper, and not merely due to misunderstanding by the reviewers that can be explained away.\"}", "{\"title\": \"Response to Reviewer 9pru's Comment\", \"comment\": \"Thank you for your feedback. We acknowledge that there is a gap between our theoretical analysis and the proposed algorithm. 
Our theoretical work highlights the high-rank nature of the Hessian in local optimization loss, which we believe provides valuable insights into the optimization landscape of federated learning. While this does not directly conclude that constraining to a low rank will always aid in aligning client updates, our empirical results suggest that it does.\n\nSpecifically, our experiments demonstrate that as the number of clients increases, the performance gap between the low-rank algorithm and the full-rank algorithm widens. This empirical observation indicates that low-rank constraints can indeed help in aligning client updates along major directions, facilitating better aggregation in federated settings with many clients. Further, we are conducting additional analysis to strengthen our hypothesis about low-rank updates in federated learning, which we will include in a future revision.\n\nWe have also taken care to report variance across multiple runs to ensure the reliability of our findings; in fact, we observe low variance across multiple runs for each setting. We appreciate your insights and will consider them to improve the clarity and impact of our work.\"}", "{\"title\": \"Continued Response to Reviewer 9pru's Review\", \"comment\": \"**Q3. Figure 2(a) is difficult to interpret.**\n\nFigure 2(a) illustrates the empirical stable ranks of Hessians for dataset sizes of 50 and 500, where the model is trained on the full training dataset, and stable ranks are computed during training with partial datasets. The results support Theorem 3.2, which states that the Hessian of the loss of a smaller dataset exhibits a larger stable rank at any weight when the parameter dimension is sufficiently large.\n\nThe spike in stable rank observed at the 15th epoch appears to be a random phenomenon. Learning dynamics, such as the eigenvalues and stable rank of the Hessian, remain under-explored in deep learning literature. 
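For reference, the stable rank reported in these measurements is commonly defined as $\|H\|_F^2 / \|H\|_2^2$, i.e., the sum of squared singular values divided by the largest one. A small illustrative sketch (not the authors' code):

```python
import numpy as np

def stable_rank(M):
    """Stable rank: squared Frobenius norm / squared spectral norm."""
    s = np.linalg.svd(M, compute_uv=False)  # singular values, descending
    return float(np.sum(s**2) / s[0]**2)

# A matrix dominated by one direction has full classical rank
# but a stable rank close to 1.
M = np.diag([10.0, 0.1, 0.1, 0.1])
sr = stable_rank(M)
classical = np.linalg.matrix_rank(M)  # 4, despite sr being near 1
```

This is why stable rank behaves as a continuous, perturbation-robust proxy for the effective dimensionality of the Hessian.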
Given the inherently complex nature of the loss landscape, the stable rank may exhibit significant perturbations during training.\n\nTo validate this observation, we repeated the experiments three more times. Although spikes occurred at random epochs in each trial, the consistent finding across all experiments was that smaller datasets consistently resulted in larger stable ranks, with only one exception (Experiment 2, E=40). This reinforces the theoretical insight provided by Theorem 3.2.\n\n| Epoch | Experiment 1 (n=50) | Experiment 1 (n=500) | Experiment 2 (n=50) | Experiment 2 (n=500) | Experiment 3 (n=50) | Experiment 3 (n=500) |\n|-------|---------------------|----------------------|---------------------|----------------------|---------------------|----------------------|\n| E=1 | 1.441 | 1.053 | 9.672 | 6.723 | 1.441 | 1.504 |\n| E=5 | 17.562 | 6.825 | 3.853 | 1.808 | 106.194 | 7.217 |\n| E=10 | 263.935 | 7.217 | 49.407 | 12.705 | 5.578 | 4.528 |\n| E=15 | 67.226 | 6.794 | 9.481 | 5.158 | 67.226 | 6.793 |\n| E=20 | 64.468 | 10.269 | 25.455 | 3.359 | 64.468 | 10.268 |\n| E=25 | 72.798 | 15.989 | 6.100 | 5.958 | 72.798 | 15.989 |\n| E=30 | 7.232 | 1.958 | 105.749 | 9.034 | 7.233 | 1.958 |\n| E=35 | 912.511 | 11.685 | 108.535 | 15.085 | 91.251 | 11.685 |\n| E=40 | 3.484 | 3.150 | 14.925 | 18.354 | 3.485 | 3.150 |\n| E=45 | 372.260 | 28.769 | 6.248 | 1.645 | 372.263 | 28.769 |\n\n\n\n\n**Q4. A thorough explanation of the relation between Figure 2(b) and Theorem 3.2 improves the clarity and impact of the results.**\n\nThank you for the suggestion. As you noted, Figure 2(b) highlights a key impact of our work.\n\nOur main contribution is demonstrating the benefits of client-side low-rank optimization in federated learning with many clients. We compared full-rank training (FedAvg) with low-rank approaches (FedLoRU, FedLoRA) under the same framework, differing only in local updates. 
Figure 2(b) shows that as client numbers increase, low-rank methods outperform full-rank training. The performance gap between FedAvg and FedLoRU grows with more clients, and even FedLoRA surpasses FedAvg for $K \\\\in \\\\{200, 300, 400\\\\}$.\\n\\nThis is especially relevant for cross-device federated learning, where many edge devices participate. Our findings suggest that low-rank updates are more effective than full-rank training in such settings.\"}", "{\"title\": \"Continued Response to Reviewer yeeh's Review\", \"comment\": \"**W1. Although the authors provide some Hessian rank structure analysis, the design of FedLoRU can be better supported from the theoretical side. For example, some convergence guarantees, since low-rank updates lead to loss of information compared to full-rank updates and may hurt the optimization.**\\n\\nWe acknowledge that convergence analysis of LoRA (or its variants) algorithms is an important and open research area. Currently, no existing work rigorously analyzes the convergence properties of low-rank update methods, such as LoRA, in optimization. While this remains an intriguing direction for future study, our focus in this work is on demonstrating the empirical effectiveness of and theoretical insights into rank-based optimization properties in federated learning.\\n\\nAdditionally, our paper highlights that low-rank updates are particularly advantageous in federated learning environments with a large number of clients. Client discrepancies are a key factor contributing to performance degradation in federated learning, making it crucial to minimize these differences. While local training may sacrifice some client-specific information, guiding local models toward shared global knowledge is more critical in federated learning. Low-rank updates effectively achieve this by aligning local updates with the major global directions (which we call the implicit regularization effect).\\n\\n**References**\\n\\n[1] Sagun, L., Bottou, L., & LeCun, Y. 
(2016). Eigenvalues of the hessian in deep learning: Singularity and beyond.\\u00a0*arXiv preprint arXiv:1611.07476*.\\n\\n[2] Gur-Ari, G., Roberts, D. A., & Dyer, E. (2018). Gradient descent happens in a tiny subspace.\\u00a0*arXiv preprint arXiv:1812.04754*.\\n\\n[3] Li, T., Tan, L., Huang, Z., Tao, Q., Liu, Y., & Huang, X. (2022). Low dimensional trajectory hypothesis is true: Dnns can be trained in tiny subspaces.\\u00a0*IEEE Transactions on Pattern Analysis and Machine Intelligence*,\\u00a0*45*(3), 3411-3420.\\n\\n[4] Rudelson, M., & Vershynin, R. (2007). Sampling from large matrices: An approach through geometric functional analysis.\\u00a0*Journal of the ACM (JACM)*,\\u00a0*54*(4), 21-es.\\n\\n[5] Ipsen, I. C., & Saibaba, A. K. (2024). Stable Rank and Intrinsic Dimension of Real and Complex Matrices.\\u00a0*arXiv preprint arXiv:2407.21594*.\\n\\n[6] Bartlett, P. L., Foster, D. J., & Telgarsky, M. J. (2017). Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30.\\n\\n[7] Neyshabur, B., Bhojanapalli, S., & Srebro, N. (2017). A pac-bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564.\\n\\n[8] Sanyal, A., Torr, P. H., & Dokania, P. K. (2019). Stable rank normalization for improved generalization in neural networks and gans.\\u00a0*arXiv preprint arXiv:1906.04659*.\"}", "{\"summary\": \"To address the issue of communication efficiency and heterogeneity in Federated Learning, this paper proposes the FedLoRU method. This general low-rank update framework enforces low-rank client-side updates and accumulates these updates to form a higher-rank model. The authors provide empirical results to demonstrate that FedLoRU performs better than other algorithms.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
The proposed method is well-motivated, the paper investigates the rank properties of client and server losses, analytically showing that under stochastic sampling, the rank of the Hessian of the loss function increases with smaller sample sizes.\\n2. The empirical results show empirical evidence of the higher rank structure of client losses and demonstrate that restricting the rank of local updates aids in implicit regularization.\", \"weaknesses\": \"1. In the theorems that are presented, summarizing the main insights of these theorems may be needed since currently they are just written as long paragraphs.\\n2. In experiments, the least partial client participation ratio is set as 0.5. In more realistic settings, the participation ratio is lower with more clients.\\n3. The author should consider more baselines, which apply low-rank factorized update models, such as [1].\\n[1] Nam Hyeon-Woo, Moon Ye-Bin, Tae-Hyun Oh. FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning. ICLR 2022.\", \"questions\": \"See in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer mpJt's Comment\", \"comment\": \"Thank you for your detailed comment. We address your concerns in two separate parts: (1) the connection between our theoretical analysis and the proposed algorithm, and (2) the comparison between LoRA and our algorithm.\\n\\n**Connection Between the Theory and the Algorithm**\\n\\nOur primary objective is to analyze the rank structure in federated learning and to propose an algorithmic framework that achieves better performance with a large number of clients while ensuring communication efficiency. We theoretically demonstrate that client-side optimization has a higher limiting stable rank, and we hypothesize that restricting updates to a low-rank space can align client updates with the global optimization direction. 
Based on this insight, we propose an algorithm that employs low-rank updates to align client updates and enhance communication efficiency.\\n\\nWe acknowledge that you may perceive the connection between our theoretical analysis and the algorithm as not entirely direct, particularly in the hypothesis phase. To address this, we conducted experiments showing that as the number of clients increases, the performance gap between low-rank updates (FedLoRA, FedLoRU) and full-rank updates (FedAvg) widens. These empirical results support our theoretical hypothesis and demonstrate the practical effectiveness of our approach.\\n\\n**Comparison Between LoRA and Our Algorithm**\\n\\nWe respectfully disagree with the assertion that our algorithm performs full-rank training. We consistently emphasize that while the updates are low-rank, the model itself is not constrained to be low-rank. We consider the low-rank update matrices $AB$ as the **updates** and accumulate them to construct the final model. By doing so, we achieve a higher-rank model using only low-rank updates. The key point is that we perform gradient descent on low-rank factorized matrices, thereby achieving the same memory and computational overhead as LoRA. Additionally, we transfer only the low-rank matrices, which results in communication efficiency.\\n\\nRegarding the flexibility and extensibility of LoRA, we assert that our algorithm provides the same level of adaptability. For pre-training, even if larger ranks are required for the low-rank matrices, we do not need to maintain these matrices as separate adapters since our goal is to construct a unified pre-trained model. For fine-tuning, we can retain a series of low-rank matrices separately alongside the frozen pre-trained model. Even if more low-rank matrices need to be stored, their size\\u2014especially in large language model (LLM) fine-tuning\\u2014is significantly smaller than that of the original model. 
This allows for a plug-and-play approach with the low-rank matrices and the pre-trained model. For example, in our LLaMa2-3B experiment, we only need to store 0.36% of the parameters, whereas FedLoRA requires storing 0.12% of the parameters.\"}", "{\"title\": \"Response to Reviewer 9pru's Review\", \"comment\": \"We appreciate the reviewer\\u2019s time. Below, we address the concerns and questions raised:\\n\\n**Q1. The novelty of this proposed method is limited, and the paper does not provide a compelling argument for why the proposed method outperforms existing approaches.**\\n\\nWhile we acknowledge that the concept of using LoRA in federated learning has been explored previously, our paper introduces three significant novelties that distinguish it from prior work:\\n\\n**[Technical Novelty] First theoretical analysis on rank nature of optimization loss landscape in federated learning.**\\n\\nOur work is the first to provide a theoretical analysis of the rank structure of the optimization loss landscape in federated learning. We demonstrate that local clients exhibit a higher stable rank and analyze the eigenvalues of the client-side and server-side Hessians, which offer critical insights into the curvature of the optimization landscape.\\n\\nThese theoretical insights are essential for effectively applying low-rank updates in federated learning. Higher stable rank indicates a more complex loss landscape for clients, leading to greater client discrepancies. By constraining updates to low-rank spaces, FedLoRU mitigates these discrepancies, aligning client updates along major directions and facilitating better aggregation. Furthermore, our work introduces two decoupled additive perturbed models to address dependency issue in analyzing Hessian structure. Unlike prior approaches, such as Baskerville et al. 
([1]), which assume matrix independence (which, in reality, does not hold), our method resolves this dependency problem, representing a significant theoretical improvement.\\n\\n**[Empirical Novelty] First to show low-rank updates in client-side optimization outperform full-rank training when a system has a large number of clients. We also apply client-side low-rank updates and server-side accumulation**\\n\\nWe are the first to demonstrate that client-side low-rank updates outperform full-rank training in federated systems with a large number of clients. Our theoretical analysis suggests that low-rank updates reduce client discrepancies by simplifying the loss landscape and aligning updates along shared directions. Experimental results confirm this hypothesis, showing that even FedLoRA outperforms full-rank FedAvg. These findings position low-rank updates as a strong baseline for cross-device federated learning with large client populations. Moreover, low-rank updates are compatible with various federated learning algorithms, such as server-side optimization strategies (e.g., FedAdagrad, FedAdam [2]) and client-level frameworks like FedProx [3] and SCAFFOLD [4], further enhancing their applicability.\\n\\nWe are also the first to apply client-side low-rank updates combined with server-side accumulation in federated learning. Inspired by the concept of ReLoRA, we adapt this idea to the federated learning setting, where the role of accumulation has not been extensively explored. Furthermore, we extend this approach to heterogeneous settings, showcasing its flexibility and potential for broader application.\\n\\n**Q2. More discussion is needed on how stable rank is adapted from related fields, and why it is appropriate for the federated learning context**\\n\\nIn our paper, we use stable rank to compare the rank nature between clients and the server in FL, where rank nature means the curvature information of the loss landscape. 
In short, stable rank is a continuous proxy for rank; furthermore, especially in deep learning, it provides more practical information about the training dynamics.\\n\\n- According to prior works ([5], [6], [7]), the gradient descent trajectory separates into two components: a bulk component (eigenvectors corresponding to a large number of small eigenvalues) and a top component (eigenvectors corresponding to a small number of large eigenvalues). Furthermore, a large fraction of the gradient comes from the top eigenvectors. \\n\\n- Stable rank captures the curvature information of the loss landscape more accurately than rank. For example, the Hessian may have a very high rank compared to its stable rank. However, even if the rank is very high, the number of eigenvalues that contribute to most of the loss landscape is very small (typically, k eigenvalues for a k-class classification problem). Therefore, rank might not capture this curvature information accurately, but stable rank does.\\n\\n- Additionally, stable rank is more stable under small perturbations of the Hessian ([8], [9]). Thus it is more suitable for analyzing the training loss landscape. For example, a small difference in the point would not affect the training landscape significantly, but the rank can differ a lot. However, a small difference in the points would not significantly change the resulting stable rank, which makes stable rank a more suitable measure for capturing curvature information. This is why many deep learning studies ([10], [11], [12]) use stable rank, instead of rank, to restrict the complexity of the model.\"}", "{\"comment\": \"Thanks for the detailed response. The algorithm design for federated learning or personalized FL is a well-studied field. There has been a lot of well-justified work before this paper.\\n\\nThis paper showed that the hessian of client loss tends to be larger than that of the server loss under certain conditions. This could be a contribution (the bound of this difference is lacking). 
In addition, I don't see a connection between this algorithm and your observation. You claimed that \\\"By constraining client updates to low-rank representations, we align clients along major optimization directions, reducing discrepancies in training\\\". It's hard to understand why restricting the updates to low-rank updates could align with global direction. To align with global direction, many algorithms can provably attain this, such as gradient tracking employed by scaffold. Your scheme is heuristic for me.\\n\\nIn addition, you keep summing these updates, which finally becomes a full-rank algorithm. I don't think your theory supports your algorithm. \\n\\nFor the second point, I think the reviewer agrees with me that the theoretical analysis in this work is weak.\"}", "{\"title\": \"Response to Reviewer mpJt's Review\", \"comment\": \"We appreciate the reviewer\\u2019s time. Below, we address the concerns and questions raised:\\n\\n**W1. The novelty is limited, there is no close connection between the analysis and the algorithm.**\\n\\nThank you for your feedback. We respectfully maintain that our paper makes important technical and empirical contributions.\\n\\n- **Theoretical novelty 1**: This is the first work to theoretically analyze the rank nature of the optimization loss landscape in federated learning. We provide a detailed analysis of the largest eigenvalues of client-side and server-side optimizations, demonstrating that local clients exhibit a higher stable rank.\\n- **Theoretical novelty 2**: We introduce a new approach, decoupled additive perturbed models, addressing the dependency problem when analyzing the limiting eigenvalues of two Hessians. 
Unlike prior works such as [1], which rely on the assumption of matrix independence (which is, in fact, not true: there is a dependency between the Hessian of a large dataset and the Hessian of a sub-dataset), our approach resolves this dependency issue, which significantly affects the mathematical analysis and results.\\n- **Connection between the analysis and the algorithm:** Our algorithm is directly inspired by our theoretical findings. For instance, Theorem 3.2 shows that clients have a higher stable rank, implying a more complex loss landscape of client optimization and greater client discrepancy in federated learning. By constraining client updates to low-rank representations, we align clients along major optimization directions, reducing discrepancies in training. This insight is especially impactful in settings with a large number of clients, where discrepancies naturally increase.\\n\\nThe empirical novelty lies in our finding that client-side low-rank updates consistently outperform full-rank training in cross-device federated learning scenarios. It demonstrates the practical advantages of the proposed approach and low-rank local training.\\n\\nWe also extend the application of low-rank training to heterogeneous settings. For pFedLoRU, unlike existing personalized federated learning methods that use full-rank models for global knowledge and low-rank models for personalized updates, we show that using low-rank models for both global and personalized training yields better performance. This is because low-rank updates for the global model effectively capture general global knowledge by following major optimization directions. For mFedLoRU, we further introduce locally adaptive ranks, addressing heterogeneity across models.\\n\\n\\n**W2. There is no theoretical analysis for the algorithm. 
Don't know what kind of solution will this algorithm converge to.**\\n\\nWe would like to clarify several points regarding the theoretical analysis and the reasonability of our algorithm.\\n\\nOur algorithm, FedLoRU, demonstrates that: 1) It is comparable to full-rank training in federated learning (FL) in terms of performance, 2) It outperforms full-rank training in cross-device environments, where there are a large number of clients and limited participation per round.\\n\\nFor theoretical insights regarding the rank properties and their connection to our algorithm, please refer to our response to **W1**.\\n\\nIf the question refers to the theoretical convergence behavior of our algorithm, we acknowledge that this is an important and open research area. Currently, no existing work rigorously analyzes the convergence properties of low-rank update methods, such as LoRA, in optimization. While this remains an intriguing direction for future study, our focus in this work is on demonstrating the empirical effectiveness and theoretical insights into rank-based optimization properties in federated learning.\", \"for_question_about_personalized_strategy\": \"In our personalized algorithm, the $AB$ module is designed to capture locally adapted knowledge, while the $LU$ module focuses on learning globally shared knowledge. The $LU$ module is shared with the global server and updated by aggregating all client contributions. To ensure proper separation of global and local knowledge, we adjust the training process. Specifically, we use more training epochs for the global module ($LU$) and train $LU$ first before updating the personalized module ($AB$). This strategy helps $LU$ learn generalized global knowledge, while $AB$ captures locally specific information.\\n\\nAdditionally, we provide a justification for the personalized algorithm based on the rank properties observed in federated learning. 
By leveraging low-rank updates, we have demonstrated that they introduce an implicit regularization effect across clients. We expect that the LU module benefits from this regularization, allowing it to converge toward generalized knowledge shared by clients. Since low-rank modules are trained toward a common major direction, and the discrepancy between them is reduced compared to full-rank modules, we expect that the LU module encapsulates more comprehensive and general knowledge.\"}", "{\"comment\": \"\\\"We theoretically demonstrate that client-side optimization has a higher limiting stable rank, and we hypothesize that restricting updates to a low-rank space can align client updates with the global optimization direction.\\\" This heuristic design does not make sense to me.\\n\\nFor the second part, I respectfully disagree with the authors. \\n\\nFirst, memory usage is definitely larger than that of LoRA. When you merge B_tA_t into W, you need extra space to store BA. \\n\\nSecond, regarding flexibility and extensibility, \\\"we can retain a series of low-rank matrices separately alongside the frozen pre-trained model\\\". Actually, you merge them into the frozen model as you said in your paper. In addition, you can't claim that storing a series of BA is more efficient than storing the original model as you don't know how many of them need to be stored.\"}"
Dci14asFPV
DPD-LoRA: Dynamic Prompt-Driven Low-Rank Adaptation for Improved Generalization
[ "Chushan Zhang", "Ruihan Lu", "Zeeshan Hayder", "Hongdong Li" ]
Fine-tuning large models presents technical challenges such as catastrophic forgetting and parameter inefficiency. Low-rank Adaptation (LoRA) and Prompt Learning can help address some of these challenges by providing more compact and flexible representations. However, low-rank approximation is susceptible to outliers and relies on the assumption of a global low-rank structure, which can be suboptimal. Additionally, prompt learning can overfit to specific downstream tasks, reducing its effectiveness when adapting to new tasks. In this paper, we introduce $\textbf{Dynamic Prompt-Driven Low-Rank Adaptation (DPD-LoRA)}$, a novel framework that seamlessly integrates task-specific guidance using hierarchical prompt tokens and parameter-efficient adaptation. Unlike traditional methods, task-aware prompts in DPD-LoRA dynamically influence low-rank updates in the model's parameters, thus enabling robust adaptation and generalization across diverse tasks and mitigating forgetting issues. We further improve the learning capabilities of the model by breaking down the standard LoRA into multiple low-rank sub-matrices, without adding additional parameters. Further, we use an adaptive loss function to guarantee alignment with the distribution of the pre-trained model. Specifically, we introduce a self-regulated mechanism to improve stability, and a soft-gated selection mechanism to decide when to activate adaptation modules to improve performance on unseen categories. Extensive experiments on 11 benchmark datasets demonstrate that DPD-LoRA significantly outperforms state-of-the-art methods in both accuracy and generalization, offering a comprehensive solution to the challenges of fine-tuning large-scale models.
[ "Vision-Language Models", "PEFT", "Prompt Learning" ]
https://openreview.net/pdf?id=Dci14asFPV
https://openreview.net/forum?id=Dci14asFPV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yancf4FTvx", "uZ8rXJeHhf", "ohPt8y5OUe", "mzpDKj6Fho", "iYqabH0F3p", "i64rKne5q7", "bRaFwHNbKw", "ZnIntSr7S5", "WytpumJ7PI", "WtiThaAfK0", "WeETQ2NdZi", "WMYQX8uBne", "TiiZ4KG1oI", "QZH1vPKNGB", "QXl9qFQPNb", "P1bqhhtVyu", "MSSR66YvUJ", "Lxe55OsdsY", "KwAJlF7PYx", "Im7thgVzsL", "IBJEQoHGFk", "I34fPDXpKd", "FHyavMHZfC", "DZIxCLkF2f", "AbUjU5UuuB", "9br6mRsPKK", "8Y0t8kX0OH", "81ROvIJ2SW", "7LA2xAkXAk", "557HBgYSyW", "4WdUwMSAf9", "3hXEcr3JDv", "2Yl947ZFWa", "1EzoO2lVwM", "0N9s8WC3Dk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732774125157, 1732698884833, 1732774046197, 1732683653775, 1732686630215, 1732691803434, 1732489379319, 1731691933245, 1731498435323, 1732176403548, 1732700836898, 1732689872746, 1737562826858, 1732176311093, 1732489447264, 1732489788941, 1731471874830, 1732774271541, 1731589848934, 1732489407336, 1731589888027, 1730689846920, 1730189429802, 1731877223451, 1730452989079, 1732785668248, 1731876206968, 1732688357085, 1730559819399, 1732176339349, 1731476240908, 1732175006238, 1732176368492, 1732293961069, 1732781907486 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Reviewer_1iPg" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1047/Reviewer_pw2m" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Reviewer_1iPg" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Reviewer_Tgia" ], [ "ICLR.cc/2025/Conference/Submission1047/Reviewer_zPfS" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Reviewer_1iPg" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Reviewer_pw2m" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Authors" ], [ "ICLR.cc/2025/Conference/Submission1047/Area_Chair_bKDT" ], [ "ICLR.cc/2025/Conference/Submission1047/Reviewer_Tgia" ] ], "structured_content_str": [ "{\"comment\": \"Dear reviewer Tgia,\\n\\nWe sincerely appreciate your time and effort in reviewing our submission and providing valuable suggestions. 
While we hope to have addressed your concerns adequately, we understand there may still be areas requiring further clarification or discussion. We are fully prepared to address your outstanding issues. Should our responses have successfully addressed all your questions, we would be deeply grateful if you could consider enhancing the score to further support our submission. Thank you very much for your thoughtful review.\\n\\nBest Regards,\\n\\nPaper1047 Authors\"}", "{\"comment\": \"I have replicated various LoRA variants on visual tasks using open-source code, including DoRA, LoRA-XS, VeRA, etc. Code link: https://github.com/MaxZanella/CLIP-LoRA\"}", "{\"comment\": \"Dear reviewer zPfS,\\n\\nWe sincerely appreciate your time and effort in reviewing our submission and providing valuable suggestions. While we hope to have addressed your concerns adequately, we understand there may still be areas requiring further clarification or discussion. We are fully prepared to address your outstanding issues. Should our responses have successfully addressed all your questions, we would be deeply grateful if you could consider enhancing the score to further support our submission. Thank you very much for your thoughtful review.\\n\\nBest Regards,\\n\\nPaper1047 Authors\"}", "{\"comment\": \"Thank you for your rebuttal. After reviewing the feedback from other reviewers and thoroughly examining the rebuttal and revised paper, some of my concerns have been addressed. However, for W1, I still think the method combines and introduces too many components, including prompt learning, LoRA, gating mechanisms, and loss design. Despite the ablation studies and your assurance to refine the introduction to emphasize the main points, I remain unconvinced about the necessity and coherence of integrating all these components. 
Reviewer zPfS also raised a similar concern on this issue.\\n\\nAdditionally, upon revisiting the revised version, I noticed some points that require further attention:\\n- In Sec. 3.2, $X'$ is referenced, but its definition appears later in Sec. 3.3. This sequencing could confuse readers and should be adjusted to introduce the term earlier.\\n- The revised manuscript does not use colored text to indicate the changes made, making it difficult to identify the updates. The $\\\\beta$ and $\\\\gamma$ still appear in Algorithm 1, though they have been replaced in the main text.\"}", "{\"comment\": \"Dear Reviewer pw2m\\n\\nWe are deeply grateful for the your feedback and insightful suggestions. \\n\\n**Q: In Sec. 3.2, $X^{'}$ is referenced, but its definition appears later in Sec. 3.3**\\n\\nWe actually defined this \\\"$X^{'}$: prompted inputs\\\" in line 247 of section 3.2.\\n\\n**Q: Revised manuscript does not use colored text to indicate the changes made; Algorithm 1 need to be changed**\\n\\nWe assumed that PDFdiff would automatically render the differences. Since it does not, we will update the revision immediately with colored text. Thank you for bringing this to our attention.\\n\\n**Q: the method combines and introduces too many components, including prompt learning, LoRA, gating mechanisms**\", \"these_components_collectively_represent_our_claimed_contribution\": \"Prompt-Driven Adaptation. Including prompt learning and LoRA is essential to this framework. Additionally, the gating mechanisms provide confidence scores for the components, allowing the model to dynamically balance contributions from Adaptation matrices; while the self-loss design prevents overfitting, ensuring robust adaptability. 
Although you find some of our minor contributions (e.g., LoRSS and Interaction) complex, we believe the ablation table (as in our next response) clearly demonstrates that the incorporation of all components (i.e., the full model) achieves the best performance.\"}"
We initially chose different alphabetical symbols to distinguish between prompt-token side annotations and LoRA side annotations. To prevent confusion, we will unify them into the same representation.\\n\\n2. All hyper-parameters (including the number of sub-LoRA matrices $m$ and rank $r$) are provided in Appendix Table 4(a).\\n\\n3. For different loss weights $\\\\lambda$, we empirically defined them to ensure that each loss is within a similar magnitude. \\n\\n**2. Why We Decompose Plain LoRA into LoRSS**\\n\\nA straightforward answer is that we found under the same parameter budget (e.g., $3 \\\\times r = 3$ sub-LoRA setting vs. $r = 12$ plain LoRA setting), LoRSS consistently outperforms the plain setting in both Base and Novel evaluations.\\n\\nThis LoRSS idea is inspired by MoE-LoRA, but our approach is more parameter-efficient regarding learnable parameters. We decompose the LoRA matrix into sub-LoRA matrices under the same parameter budget, whereas MoE-LoRA duplicates the LoRA matrix into several LoRA matrices. For example, if we have $n$ sub-LoRA matrices with a fixed rank $r$ and $W \\\\in \\\\mathbb{R}^{d \\\\times k}$, MoE-LoRA's parameters increase to $n \\\\times (d \\\\times r + r \\\\times k)$, whereas our parameters remain at $(d \\\\times r + r \\\\times k)$. Another difference is that MoE uses a network to select the importance of matrices $A/B$, while we employ a single learnable parameter (the scaling factor) for each sub-LoRA matrix, which is more efficient. Finally, our downstream tasks are entirely different, highlighting the distinct applicability of our method. From our observations, under the same parameters (e.g., $3 \\\\times r = 3$ sub-LoRA setting vs. $r = 12$ plain LoRA setting), LoRSS always outperforms the plain setting.\\n\\n\\n**3. Concerns About Memory and Cost Efficiency**\\n\\n1. 
As shown in the appendix (page 18), where we provide our algorithm, our method follows a two-step training strategy that has low memory requirements, **less than or equal to those of PromptSRC**. The duplication is illustrative, indicating consistent module components; however, in implementation, we only apply cached pre-trained LoRA weights during the SCL-LoRA loss stage.\\n\\n2. Moreover, one reviewer asked if more efficiency metrics could be provided. We acknowledge that varying dataset sizes and different GPU architectures can make direct comparisons challenging due to discrepancies in training time and resource consumption; our initial focus was on parameter counts (Table 4(b)) as a very intuitive measure because they remain fixed across various datasets and GPU architectures. However, to address these concerns, we have conducted additional experiments under consistent conditions to measure FLOPs, FPS, and training time per epoch. These metrics are provided below, along with comparisons to previous methods, to support our efficiency claims:\\n\\n| Method | Params | % CLIP | Base | Novel | HM | FPS (batch 4) | GFLOPs | Time (1 epoch) |\\n|-------------------|----------|-----------------|-------|-------|-------|----------------|--------|------------------------|\\n| CoOp | 2048 | 0.002 | 82.69 | 63.22 | 71.66 | 104.5| 162.5 | ~32s |\\n| CoCoOp | 35360 | 0.03 | 80.47 | 71.69 | 75.83 | 53.3 | 162.5 | ~47s|\\n| MaPLe | 3.55 M | 2.85| 82.28 | 75.14 | 78.55 | 175.58| 167 | ~28s|\\n| ALIGN | 3.58 M | 2.87| 83.38 | 75.51 | 79.25 | 72.6| 314.6 | ~42s |\\n| PromptSRC | 31488 | 0.02 | 84.26 | 76.10 | 79.97 | 149.86| 281.21 | ~27s |\\n| **DPD-LoRA\\u2020** | **1.92 M** | **1.54** | **84.80** | **76.80** | **80.60** | **82.51** | **334.03** | **~40s**|\\n| DPD-LoRA | 4.72 M | 3.79| 85.67 | 76.91 | 81.05 | 81.57| 334 | ~42s|\\n\\nOne more thing we hope reviewers may note is that even though our method has slightly higher GFLOPs due to additional LoRA/LoRSS computations, **our 
convergence speed is much faster than that of any previous method. Our method showcases accelerated convergence and favorable early-stage performance. Specifically, our method reaches better performance in just 7 epochs, 65% fewer than the 20 epochs required by the previous SOTA\\u2014a reduction of over 65% in training time (as shown in Figure 1b and Figure 5).**\"}", "{\"title\": \"Response to Reviewer zPfS\", \"comment\": \"We thank Reviewer zPfS for the many insightful comments. We answer the questions in what follows. Please let us know if further clarification is needed.\\n\\n**Q:INTRODUCTION; This can seem a bit cluttered and redundant; a more concise summary and consolidation of related content would be beneficial.**\\n\\nWe thank Reviewer zPfS for this insightful suggestion. Reviewer pw2m mentioned this as well, and we will revise the introduction to focus on the main points. We want to emphasize that our main contributions are twofold: first, we are the first to prove that prompt learning can additionally provide task-specific guidance to LoRA (even in plain LoRA settings, as shown in Table 5 in the appendix); second, the proposed gating mechanism strengthens their connection.\\n\\n**Q:Sec 3.2; the explanation seems to treat prompts and LoRA separately, although the appendix provides a detailed explanation of their combined effect. As this is a crucial part of the paper, more clarity and detail in the main body of the text would be necessary.**\\n\\nThey are actually not separate. Our intention in Section 3.2 was to demonstrate the combined effect of prompts and LoRA within the Transformer architecture. Specifically, we introduce learnable prompts in both the textual and visual branches, which are then incorporated into the input sequences. These prompted inputs interact with the MHA mechanism as shown in Equation (2). 
Then we adapt the standard LoRA formulation to this prompted framework in Equation (3), which is directly influenced by the prompted inputs $X$. This equation illustrates that the output $h$ is a result of both the original weights and the prompt-guided LoRA adjustment, highlighting their interconnected roles. However, we acknowledge that we could merge more details from the appendix into the main text. We thank Reviewer zPfS for pointing this out.\\n\\n\\n**Q:The title of Section 3.3; The order of introduction and the content should correspond to the title.**\\n\\nWe thank Reviewer zPfS for bringing this to our attention; we will align the order of introduction and section contents.\\n\\n**Q:Analysis of the synergistic effects between LoRA and prompt learning**\\n\\nWe include this part in our Appendix Section D, and the interpretation based on mathematical derivation is also included in Section D5.\\n\\n**Q:Different weights allocation methods are used: \\u03b1 and 1-\\u03b1 for prompt tokens, and \\u03b2 and \\u03b3 for LoRA layers.**\\n\\nWe thank you for pointing out this issue that many reviewers care about. We acknowledge that the notation causes confusion: our $\\\\alpha,\\\\beta,\\\\gamma$ are the same and have the same values (0.1 for $(l-1)$ and 0.9 for $l$, as you can tell from our Table 4(a)). The reason we chose different alphabetical symbols is that we wanted to separate prompt-token side annotations and LoRA side annotations. We will change them to the same representation to prevent confusion.\\n\\n**Q:Hyper-parameters configurations; An explanation for these fixed values would be necessary**\\n\\n* To ensure a fair comparison, we followed previous methods [1,2] for all prompt token settings, including deep prompt tokens, N_CTX, and learning rates.\\n\\n* Regarding the LoRA component, we initially experimented with rank settings commonly used in conventional LoRA [3], specifically r = 1, 4, 8. 
Higher ranks such as r = 32, 64 were not considered due to our aim to minimize the number of parameters. We observed that performance improved with increasing ranks in the set r = {1, 4, 8}. We then incrementally increased the rank until r = 12, where we found the performance to be better than at r = 13, prompting us to stop at r = 12.\\n\\n* For the quantity $m$, we have very limited choices once the rank is fixed. With r = 12 from our previous experiments, we tested all divisor combinations of r and m (i.e., r \\u00d7 m = 12 \\u00d7 1; 6 \\u00d7 2; 2 \\u00d7 6; 4 \\u00d7 3; 3 \\u00d7 4). Based on the performance in these experiments, we set the sub-LoRA matrices with r = 4 and m = 3 across all benchmarks.\\n\\n* The only hyperparameter we change/add is the layer weights, denoted as $\\\\alpha$. We provide the ablation here for your reference: https://cdn-fusion.imgcdn.store/i/2024/8d5daf9c21bf70ae.png\\n\\n* Finally, for different $\\\\lambda$, we empirically defined them to ensure that each loss is within a similar magnitude.\\n\\n1. Khattak, M. U., Rasheed, H., Maaz, M., Khan, S., & Khan, F. S. (2023). MaPLe: Multi-modal prompt learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19113-19122).\\n\\n2. Khattak, M. U., Wasim, S. T., Naseer, M., Khan, S., Yang, M. H., & Khan, F. S. (2023). Self-regulating prompts: Foundational model adaptation without forgetting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 15190-15200).\\n\\n3. Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., ... & Chen, W. (2021). LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.\"}", "{\"title\": \"Invitation to further discussion\", \"comment\": \"Dear reviewer zPfS,\\n\\nWe genuinely appreciate the time and effort you've invested in reviewing our paper. We have carefully provided relevant responses and results to your concerns. 
We are eager to further discuss with you and gain your insights before the end of the Author/Reviewer phase. Please let us know if any aspect of our work remains unclear or if you have additional feedback.\\n\\nThank you.\"}", "{\"comment\": \"Thank you so much for sharing this valuable resource! We truly appreciate your help.\\n\\nWe will explore adapting this repository into our work and will certainly incorporate the experimental results.\"}", "{\"comment\": \"I have seen that the author has addressed most of the issues I raised, and I am happy to increase the score. However, I insist that since we are both working in the field related to PEFT, we must conduct a comprehensive comparison with variants of LoRA. This should not only be added to the related work but also included in the experimental section.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Invitation to further discussion\", \"comment\": \"Dear reviewer Tgia,\\n\\nWe genuinely appreciate the time and effort you've invested in reviewing our paper. We have carefully provided relevant responses and results to your concerns. We are eager to further discuss with you and gain your insights before the end of the Author/Reviewer phase. Please let us know if any aspect of our work remains unclear or if you have additional feedback.\\n\\nThank you.\"}", "{\"comment\": \"Dear Reviewer zPfS,\\n\\nSince the discussion deadline is approaching in less than 48 hours, we kindly request your feedback on whether the response adequately addresses your concerns. If you have any more questions, we would be happy to provide further clarification.\\n\\nYour timely response is greatly appreciated.\\n\\nThank you.\"}", "{\"comment\": \"Dear Reviewer 1iPg,\\n\\nSince the discussion deadline is approaching in less than 48 hours, we kindly request your feedback on whether the response adequately addresses your concerns. 
If you have any more questions, we would be happy to provide further clarification.\\n\\nYour timely response is greatly appreciated.\\n\\nThank you.\"}", "{\"title\": \"Response to Reviewer Tgia\", \"comment\": \"We thank the reviewer for many insightful comments. We answer the questions in what follows. Please let us know if further clarification is needed.\\n\\n\\n**Q:The motivation for using prompts to guide LoRA learning is not entirely intuitive.**\\n\\nAs stated in the Introduction and Related Work sections, prompt learning provides task-specific guidance BUT does not contribute to attention weights (lines 78-80). Conversely, solely using LoRA cannot provide task-specific guidance because it only focuses on the internal structure of the model and updates pre-trained weights (lines 45-47). Therefore, we propose a new method that integrates task-specific guidance directly into the adaptation mechanism.\\n\\n**Q:The authors should clarify why applying a weight to each $A_i B_i$ in the LoRA layer solely through gating prompt tokens is expected to be effective.**\\n\\nGating is commonly utilized in various deep learning tasks. In our case, it acts like a dynamic weight predictor; it assigns weights in [0,1] to help prevent updating unreliable LoRA matrices. Intuitively, you can consider the output of gating as a confidence score. Meanwhile, we have shown that even without gating (or the confidence score), our proposed method is valid (see the $1^{st}$ column of the ablation study in Table 3).\\n\\n**Q:The explanation of the Gating function requires clarification and potentially overlaps in function.**\\n1. Actually, the LoRA matrices $A_i$ and $B_i$ are independent of the gating prediction. As explained in the previous question, our gating mechanism takes the prompt as input and then predicts weights/confidence scores for these LoRA matrices. The scaling factors $s_i$ and the gating are totally different. 
The $s_i$ are the weights of the different LoRA sub-matrices, while the gating provides a confidence score for the total sum of all LoRA sub-matrices. Thus, as shown in Eq. (4), the prompt token differs across layers, which makes the weights of different layers differ. Therefore, a different confidence score is applied to each layer because $G(p_l) \\\\neq G(p_{l-1})$. \\n\\n**Q:Additionally, it is unclear how $G(P)$ interacts with the Hierarchical Interaction\\u2014does it apply weighting to $A_{i}B_{i}$ at layer $l-1$ as well?**\\n\\nHierarchical Interaction and LoRSS are actually affected by the gating in every prompted layer, exactly as shown in our Equations (6) and (7). For your convenience, we rewrite them here:\\n\\n$$\\n\\\\Delta W_l = \\\\left( \\\\beta \\\\sum_{i=1}^{m} \\\\left( s_{i}^{(l)} \\\\times A_{i}^{(l)} B_{i}^{(l)} \\\\right) + \\\\gamma \\\\sum_{i=1}^{m} \\\\left( s_{i}^{(l-1)} \\\\times A_{i}^{(l-1)} B_{i}^{(l-1)} \\\\right) \\\\right) \\\\times G(P_l)\\n$$\\n \\nwhere $\\\\Delta W_l$ is the final update matrix, which is added to the frozen weights exactly as in standard LoRA. We thank you for pointing this out, and we will add additional annotations to this equation.\\n\\n\\n**Q:Given the complexity of the proposed method and its multiple components, the current ablation study feels insufficient. For example, what is the rationale for decomposing a single LoRA into multiple sub-LoRAs?**\\n\\nThere are actually two additional tables in the appendix you might have overlooked. Table 4(a) provides the full setting, while Table 5 offers an additional ablation study. As you can see from Table 5, plain LoRA alone actually performs worse than our proposed LoRSS. The reason is simple: we found that under the same parameters (i.e., the $3 \\\\times r = 3$ sub-LoRA setting vs. 
the $r = 12$ plain LoRA setting), LoRSS always outperforms the plain setting.\\n\\nSince Reviewer pw2m asked a similar question, we elaborated on it in the shared response **\\\"Why We Decompose Plain LoRA into LoRSS\\\"**. If you are interested, please refer to our answers there.\\n\\n**Q:How do authors define hyper-parameters**\\n\\nWe thank you for pointing out this issue that many reviewers care about. There is actually a misunderstanding here: our $\\\\alpha,\\\\beta,\\\\gamma$ are the same thing and have the same values (0.1 for $(l-1)$; 0.9 for $l$, as you can tell from our Table 4(a)). The reason we chose different alphabetical symbols is that we want to separate prompt-token side annotations and LoRA side annotations. We will change them to the same representation to prevent confusion. For different $\\\\lambda$, we empirically defined them to ensure that each loss is within a similar magnitude.\\n\\n**Q:How does the addition of orthogonal regularization prevent overfitting? More details on this would clarify the choice and its benefits.**\\n\\nOrthogonal regularization is commonly used in LoRA research, such as in [1]. It can prevent redundancy: orthogonality ensures that the rows (or columns) of the matrices are linearly independent. This reduces redundancy in the learned features, allowing the model to capture more diverse and informative representations.\\n\\n\\n\\n1. Zhang, Q., Chen, M., Bukharin, A., Karampatziakis, N., He, P., Cheng, Y., ... & Zhao, T. (2023). AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512.\"}", "{\"comment\": \"Dear reviewer pw2m,\\n\\nWe sincerely appreciate your time and effort in reviewing our submission and providing valuable suggestions. While we hope to have addressed your concerns adequately, we understand there may still be areas requiring further clarification or discussion. We are fully prepared to address your outstanding issues. 
Should our responses have successfully addressed all your questions, we would be deeply grateful if you could consider enhancing the score to further support our submission. Thank you very much for your thoughtful review.\\n\\nBest Regards,\\n\\nPaper1047 Authors\"}", "{\"title\": \"Response to Reviewer 1iPg (1)\", \"comment\": \"We thank Reviewer 1iPg for many insightful comments. We answer the questions in what follows. Please let us know if further clarification is needed.\\n\\n**Q:Typos; LLaVa to LLaVA, PLoRA to DPD-LoRA**\\n\\nWe thank Reviewer 1iPg for bringing this to our attention; we will revise these typos.\\n\\n**Q:Confusion; without any additional models prior; the abbreviation of \\u201cPEFT\\u201d**\\n\\nBy \\\"without any additional model priors,\\\" we mean that no other models are included except the pre-trained CLIP. This is a common practice in the PEFT field. Specifically, many methods import large models to learn stronger textual or visual representations. We have explicitly stated this in the related work section (lines 154-156).\\n\\nRegarding the abbreviation 'PEFT,' we mention it only once in the introduction (Lines 43-44). We explicitly use 'parameter-efficient fine-tuning' for LoRA/adapter-like methods (Line 126) and refer to 'prompt learning' as 'prompt-based efficient fine-tuning' (Line 141). Therefore, we believe there should be no confusion on this point. However, we have deleted 'PEFT' where it refers to prompt learning to address your concerns.\\n\\n**Q:The related work section lacks references to significant LoRA extensions (e.g., DoRA, SVFT, PISSA, and LoRA-XS);\\nThe comparative experiments do not include related LoRA methods (e.g., DoRA and VeRA)**\\n\\nFirst, we acknowledge that our related work section can be improved by including more references to significant LoRA extensions such as DoRA, SVFT, PISSA, and LoRA-XS. We will update the manuscript to reflect the progress in this area of research. 
\\n\\nHowever, conducting comparative experiments with these methods is **beyond the scope of our current work**. Our primary contribution is **demonstrating that prompt tokens can provide additional task-specific guidance to LoRA**. To our knowledge, we are the first to show that this approach is feasible. Integrating and comparing with other LoRA methods is an excellent suggestion for future work. Additionally, since plain LoRA works effectively in our experiments (as shown in Table 5), we anticipate that other LoRA-like methods would also perform similarly.\\n\\n**Q:The ablation study section only presents the individual performance of each component without evaluating the performance of their combinations.**\\n\\nThere may be a misunderstanding regarding our ablation study presented in Table 3. In this table, each row represents the performance of the model with components added cumulatively. That is, each component is included in addition to all the previous ones.\\n\\nThis cumulative approach is a conventional format used in many papers, as well as in our baseline [1,2], to evaluate the impact of each component both individually and in combination with others. As you suggested, the third row effectively represents the performance with two components combined, and the fourth row shows the combination of three components. Thus, our ablation study **already evaluates different combinations of components as per the recommendation of Reviewer 1iPg .**\\n\\n1.Khattak, M. U., Rasheed, H., Maaz, M., Khan, S., & Khan, F. S. (2023). Maple: Multi-modal prompt learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19113-19122).\\n\\n2.Khattak, M. U., Wasim, S. T., Naseer, M., Khan, S., Yang, M. H., & Khan, F. S. (2023). Self-regulating prompts: Foundational model adaptation without forgetting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 
15190-15200).\"}", "{\"comment\": \"Dear Reviewer pw2m,\\n\\nSince the discussion deadline is approaching in less than 48 hours, we kindly request your feedback on whether the response adequately addresses your concerns. If you have any more questions, we would be happy to provide further clarification.\\n\\nYour timely response is greatly appreciated.\\n\\nThank you.\"}", "{\"title\": \"Response to Reviewer 1iPg (2) with additional experimental results\", \"comment\": \"**Q:Missing efficiency-related metrics; Including a comparison of these metrics to baseline methods would further support the efficiency claims.**\\n\\nWe agree that providing specific efficiency metrics such as training time per epoch and FLOPs would substantiate our claims of resource efficiency. However, it is relatively hard to provide solid efficiency-related metrics due to different GPUs and datasets, a difficulty also observed in previous papers. Our initial focus was on parameter counts (Table 4(b)) as a very intuitive measure because they remain fixed across various datasets and GPU architectures.\\n\\nWe acknowledge that varying dataset sizes and different GPU architectures can make direct comparisons challenging due to discrepancies in training time and resource consumption. However, to address your concerns, we have conducted additional experiments under consistent conditions to measure FLOPs, FPS, and training time per epoch. 
These metrics are provided below, along with comparisons to baseline methods, to support our efficiency claims:\\n\\n| Method | Params | % CLIP | Base | Novel | HM | FPS (batch 4) | GFLOPs | Training (1 epoch) |\\n|-------------------|----------|-----------------|-------|-------|-------|----------------|--------|------------------------|\\n| CoOp | 2048 | 0.002 | 82.69 | 63.22 | 71.66 | 104.5| 162.5 | ~32s |\\n| CoCoOp | 35360 | 0.03 | 80.47 | 71.69 | 75.83 | 53.3 | 162.5 | ~47s |\\n| MaPLe | 3.55 M | 2.85 | 82.28 | 75.14 | 78.55 | 175.58 | 167 | ~28s |\\n| ALIGN | 3.58 M | 2.87 | 83.38 | 75.51 | 79.25 | 72.6 | 314.6 | ~42s |\\n| PromptSRC | 31488 | 0.02 | 84.26 | 76.10 | 79.97 | 149.86 | 281.21 | ~27s |\\n| **DPD-LoRA\\u2020** | **1.92 M** | **1.54** | **84.80** | **76.80** | **80.60** | **82.51** | **334.03** | **~40s** |\\n| DPD-LoRA | 4.72 M | 3.79 | 85.67 | 76.91 | 81.05 | 81.57 | 334 | ~42s |\\n\\nOne more thing you may note is that even though our method has slightly higher GFLOPs due to additional LoRA/LoRSS computations, **our convergence speed is much faster than that of any previous method. Our method showcases accelerated convergence and favorable early-stage performance. Specifically, our method reaches better performance in just 7 epochs, 65% fewer than the 20 epochs required by the previous SOTA\\u2014a reduction of over 65% in training time (as shown in Figure 1b and Figure 5).**\\n\\n\\n\\nWe appreciate your feedback and are willing to reformat the table or provide additional explanations to enhance clarity if necessary.\"}", "{\"summary\": \"This paper presents a dynamic prompt-guided LoRA approach that integrates several key modules: Hierarchical Interaction, a Prompt-Conditioned Gating Mechanism (PCGM), and a Self-Regularized Lower-Rank Subspace. 
The proposed method is evaluated on 11 benchmark datasets, demonstrating its effectiveness.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The integration of prompts with LoRA represents an innovative exploration in this domain.\", \"The authors conducted extensive experiments to substantiate the performance improvements of the proposed algorithm.\"], \"weaknesses\": [\"The motivation for using prompts to guide LoRA learning is not entirely intuitive. The authors should clarify why applying a weight to each $A_i B_i$\\u200b in the LoRA layer solely through gating prompt tokens is expected to be effective.\", \"The explanation of the Gating function requires clarification. Does $G(P)$ apply a weight before each $A_i B_i$? How does this differ from directly learning $S_i$, and could it potentially overlap in function? Additionally, it is unclear how $G(P)$ interacts with the Hierarchical Interaction\\u2014does it apply weighting to $A_i B_i$ at layer $l\\u22121$ as well?\", \"Given the complexity of the proposed method and its multiple components, the current ablation study feels insufficient. For example, what is the rationale for decomposing a single LoRA into multiple sub-LoRAs? How are hyperparameters $\\\\alpha$, $\\\\beta$, $\\\\gamma$, $\\\\lambda_1$, $\\\\lambda_2$\\u200b, and $\\\\lambda_3$ set, and what is their impact on the final performance?\", \"How does the addition of orthogonal regularization prevent overfitting? More details on this would clarify the choice and its benefits.\"], \"questions\": \"Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This manuscript introduces DPD-LoRA, a novel framework that aims to improve the generalization capability of large models by integrating dynamic prompt-driven low-rank adaptation. 
This method combines hierarchical prompt tokens and parameter-efficient adaptation to incorporate task-specific guidance, demonstrating superior performance over existing techniques across multiple benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposal of the DPD-LoRA framework is innovative as it integrates prompt learning and low-rank adaptation to enhance the model's generalization capabilities. The introduction of adaptive loss functions and soft-gated selection mechanisms (PCGM) adds to the novelty of the approach.\\n2. The method designed by the authors has been applied to three different tasks: Base-to-novel class generalization, Cross-dataset evaluation, and Few-shot learning, showing promising results across the board, which speaks to the effectiveness of the approach.\\n3. The authors have conducted extensive experiments on 11 benchmark datasets, which helps to substantiate the effectiveness of the proposed method.\\n4. The overall structure of the paper is relatively clear, with proper introductions to various techniques, facilitating the reader's understanding of the content.\", \"weaknesses\": \"1. The paper employs a variety of techniques and methods, including prompt learning, LoRA, gating mechanisms, and loss design, with five points listed in the INTRODUCTION under contributions and five in the METHOD section. This can seem a bit cluttered and redundant; a more concise summary and consolidation of related content would be beneficial.\\n\\n2. Section 3.2 is titled \\\"PROMPT LEARNING WITH LOW RANK ADAPTATION IN TRANSFORMERS,\\\" yet the explanation seems to treat prompts and LoRA separately, although the appendix provides a detailed explanation of their combined effect. As this is a crucial part of the paper, more clarity and detail in the main body of the text would be necessary.\\n\\n3. 
The title of Section 3.3 is \\\"HIERARCHICAL INTERACTION AND EXPANDED SUBSPACES,\\\" but the content first introduces expanded subspaces and then hierarchical interactions. The order of introduction and the content should correspond to the title.\\n\\n4. The paper slightly lacks in-depth analysis of the synergistic effects between LoRA and prompt learning. Although an ablation study is conducted, showing experimental results under different conditions, a deeper analysis of how these components interact and contribute to performance improvements is needed, especially considering this is the core and key of the paper.\", \"questions\": \"1. In the hierarchical interaction section, both prompt tokens and LoRA layers establish connections between the current layer and the previous one to prevent information loss across layers. However, different weight allocation methods are used: \\u03b1 and 1-\\u03b1 for prompt tokens, and \\u03b2 and \\u03b3 for LoRA layers. It would be beneficial to explain the rationale and necessity for using different methods when their purposes are aligned.\\n2. The paper sets a considerable number of hyperparameters, including learning rates, weight factors, deep prompt tokens, etc., and uses a fixed rank r and quantity m for LoRSS configurations across three different tasks. The paper does not seem to discuss the rationale behind these settings or how different ranks and quantities might impact the results. An explanation for these fixed values would be necessary.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Tgia (2) with reformatted/additional results\", \"comment\": \"Dear Reviewer Tgia,\\n\\nFollowing up on our previous replies to your concerns, we have additionally reformatted Table 5 (ablation on different methods) to support our claim.\\n\\n1. 
The first part of the table shows results from naive combinations and individual methods such as CLIP, plain LoRA, and Prompts Learning (MaPLe). \\n2. In the second section, we introduce prompt tokens to guide LoRA/LoRSS. \\n3. The last section presents the performance of our proposed full model (with Gating/Hierarchical Interaction) over 5 and 20 epochs.\\n\\n| Model | Base(%)\\u2191 | Novel(%)\\u2191 | HM \\u2191 |\\n|--------------------------------------------------|----------|-----------|----------|\\n| **--- Baseline/Naive combinations ---** | | | |\\n| CLIP | 72.43 | 68.14 | 70.22 |\\n| plain LoRA(x) (5 epoch) | 77.57 | 69.70 | 73.42 |\\n| Prompts Learning (MaPLe)(x') (5 epoch) | 76.77 | 70.80 | 73.66 |\\n| Frozen LoRA(x) + Prompts(x') (5 + 5 epoch) | 75.20 | 61.17 | 67.46 |\\n| Frozen Prompts(x') + LoRA(x') (5 + 5 epoch) | 76.77 | 70.47 | 73.49 |\\n| **--- Prompt-Driven Adaptation ---** | | | |\\n| Prompt-Driven plain LoRA (5 epoch) | 77.62 | 70.81 | 74.09 |\\n| Prompt-Driven LoRSS (5 epoch) | 77.63 | 70.97 | 74.15 |\\n| **--- Ours ---** | | | |\\n| DPD-LoRA (full model) (5 epoch) | 77.87 | 71.13 | 74.34 |\\n| DPD-LoRA (full model) (20 epoch) | **78.13**| **71.33** | **74.58**|\", \"table_caption\": \"Ablation experiments for Prompts-To-LoRA on the ImageNet dataset. Here, x refers to the original input, while x' denotes the prompted input, i.e., the concatenation of x and prompt tokens. Note that the plain LoRA here is distinct from our proposed LoRSS. Only the last two rows represent the performance of our full model.\"}", "{\"summary\": \"DPD-LoRA uses task-specific prompts to dynamically influence the low-rank updates of model parameters, enhancing the model's adaptability across diverse tasks and mitigating forgetting issues. By decomposing the standard low-rank adaptation into multiple low-rank sub-matrices, the method retains flexibility without adding additional parameters, thus improving the model\\u2019s learning capacity. 
An adaptive loss function is introduced to ensure alignment between the adapted distribution and the pre-trained model, thereby enhancing learning effectiveness and stability. A self-regulating mechanism is used to further improve model stability, along with a soft-gating mechanism to determine when to activate adaptation modules, ensuring improved performance on new categories.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) The proposed methods are relatively comprehensive, using several points to improve existing problems.\\n\\n(2) The writing is clear and easy to understand.\", \"weaknesses\": \"(1): In line 041, \\u201cLLaVa\\u201d should be revised to \\u201cLLaVA\\u201d for consistent terminology throughout the document, avoiding unnecessary visual inconsistency.\\n\\n(2): The related work section lacks references to significant LoRA extensions, such as DoRA, SVFT, PISSA, and LoRA-XS. It is recommended to include these studies and discuss how the proposed method compares to or builds upon these prior approaches. Specifically, it would be helpful to highlight the innovations of this work and the advantages it has over these extensions.\\n\\n(3): The method incorporates a distillation-like Self-Constrain Loss, but there is no evaluation of training time, GPU resource consumption, or other efficiency-related metrics. Providing specific efficiency metrics, such as training time per epoch, peak GPU memory usage, and FLOPs, would substantiate the claims of being resource-efficient. Including a comparison of these metrics to baseline methods would further support the efficiency claims.\\n\\n(4): The ablation study section only presents the individual performance of each component without evaluating the performance of their combinations. Adding experiments that evaluate different component combinations (e.g., two, three, and all four components) would provide a more comprehensive view of the model's performance. 
Including a table or figure showing these combinations or using an approach like forward selection to systematically evaluate the synergies between components would be very informative.\\n\\n(5): The comparative experiments do not include related LoRA methods, such as DoRA and VeRA. Including comparisons with these methods would more clearly demonstrate the advantages of the proposed approach. It is suggested to add a specific experiment or table comparing the proposed method to DoRA, VeRA, and other relevant LoRA variants on key metrics or datasets to provide a clearer demonstration of its benefits.\", \"questions\": \"(1): The phrase \\u201cwithout any additional models prior\\u201d in lines 110-111 is somewhat ambiguous. Typically, Parameter-Efficient Fine-Tuning (PEFT) builds on pre-trained models, so it is recommended to clarify whether this refers to the absence of model priors or additional model parameters.\\n\\n(2): The abbreviation \\u201cPEFT\\u201d is used for both Prompt-based Efficient Fine-Tuning and Parameter-Efficient Fine-Tuning, which may lead to confusion. It is advisable to select distinct abbreviations to improve clarity.\\n\\n(3): The term \\u201cPLoRA\\u201d in line 378 is confusing, as its specific reference is unclear. Further definition or clarification of this acronym is recommended for improved reader comprehension.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer Tgia,\\n\\nThank you for your prompt response and for **acknowledging the effectiveness of the Prompt-Driven LoRA/LoRSS (which is our main contribution, as reflected in the paper's title).**\\n\\nRegarding your concern about the verification of our other components' contributions and the seemingly marginal improvements, we would like to offer further clarification. 
While the numerical differences between Prompt-Driven plain LoRA and the other Prompt-Driven LoRSS versions may appear small, these **enhancements are consistent and statistically significant across multiple benchmarks**. Each component consistently contributes to performance enhancement. The integration of all components in our full model (DPD-LoRA) achieves the best results across all metrics. For details, please refer to our reply in the **\\\"More detailed ablation experiments to support our claims\\\"** section.\\n\\nWe believe that even **incremental advances are valuable in pushing the boundaries of what's possible under fixed parameters**, especially when they introduce new methodologies or perspectives. As the first to innovatively explore Prompt-Driven Adaptation, we are confident that our contributions open up new avenues for research and development in this domain. Thank you again for your valuable feedback and for considering our responses.\"}", "{\"title\": \"General Response to the Reviewers\", \"comment\": \"We sincerely thank the reviewers for their thoughtful and constructive feedback. We are encouraged by the positive recognition of our contribution, which can be summarized as follows:\\n\\n1. DPD-LoRA shows **innovative exploration** in this domain (Reviewers `Tgia`, `zPfS`)\\n2. DPD-LoRA achieves **outstanding performance** (Reviewers `Tgia`, `pw2m`, `zPfS`)\\n3. Extensive experiments provide **convincing evidence of the effectiveness** (Reviewers `Tgia`, `pw2m`, `zPfS`)\\n4. The paper is **well-structured** and **easy to understand** (Reviewers `pw2m`, `1iPg`, `zPfS`)\\n\\nIn our revision, we have carefully addressed each of the concerns raised. Below are the main points of the revised manuscript (colored in blue text):\\n\\n1. 
Corrected minor **typos** and **switched the introduction order** in the 'HIERARCHICAL INTERACTION AND EXPANDED SUBSPACES' section; **merged more details** from the appendix into the section 'Prompt Learning with LoRA in Transformers' (Reviewers `1iPg`, `zPfS`).\\n2. **Reformatted our Equations** 6 and 7, as well as Table 4(a) in the appendix, to **unify/simplify hyper-parameter representation** and demonstrate that layer weights are the same (Reviewers `Tgia`, `pw2m`, `zPfS`).\\n3. Provided a **more concise introduction** to focus on our main contributions (i.e., prompt-guided adaptation and strengthening their connection with gating) (Reviewers `pw2m`, `zPfS`).\\n4. **Reformatted our Table** 5 to include more straightforward comparisons showing that LoRSS and our methods are better than previous methods and plain LoRA (Reviewers `Tgia`, `pw2m`); **reformatted our Table 4(b)** to include FPS in evaluation metrics (Reviewer `1iPg`).\\n5. Added **more references** to related work to reflect the progress in this area of LoRA (Reviewer `1iPg`).\\n\\nOnce again, **as the first** to explore how prompt learning can provide additional task-specific guidance to LoRA, we highly value the reviewers' insightful feedback and welcome any additional suggestions that can help us improve our work.\"}", "{\"title\": \"More detailed ablation experiments to support our claims\", \"comment\": \"Dear Reviewer pw2m,\\n\\nWe sincerely appreciate your insightful comments and are pleased to address your concerns by providing an additional ablation study. The table below presents the independent contributions of each component. 
Except for the last two rows (which evaluate combinations), each row represents the individual performance of a specific component, evaluated either in isolation or in combination with others, **distinct from the combinations presented in the main paper**.\\n\\n**Overview of Components and Their Roles**\\n>\\n>**Prompt-Driven LoRA/LoRSS**: We first demonstrate that our method is effective with plain LoRA. We then decompose the standard LoRA into a Low-Rank Self-Supervised (LoRSS) adaptation. This refinement achieves better generalization without increasing the number of parameters. By aligning the adaptation more closely with the prompts, we enhance the model's ability to generalize to novel classes. (Please compare the results in the 6th and 7th rows of the table below.)\\n>\\n>**Hierarchical Interaction**: To prevent information loss across layers, we introduce an interaction mechanism where each layer interacts with its preceding layer. This ensures that valuable information is preserved and propagated throughout the network, improving overall performance. (As shown in the 8th row of the table below.)\\n>\\n>**Self-Regulation Loss Function**: We incorporate a self-supervised loss function that maintains good generalization even after training for multiple epochs. This component helps the model avoid overfitting by regulating the adaptation process during extended training, which is extremely important. (Please compare the results for MaPLe in the 3rd and 4th rows with our method in the 10th and 11th rows.)\\n>\\n>**Gating Mechanism**: We employ a gating mechanism to assign confidence scores, balancing the contribution of adaptation at each layer. This dynamic adjustment allows the model to focus on the most relevant features, enhancing its adaptability and robustness. 
We add the adaptation matrices at different layers (depths), as our ablation study indicates that the contributions of LoRA at different layers vary (https://cdn-fusion.imgcdn.store/i/2024/7506679307b42899.png); the gating mechanism effectively addresses this by appropriately weighting each layer's adaptation.\\n>\\n\\n| Model | Base(%)\\u2191 | Novel(%)\\u2191 | HM \\u2191 |\\n|--------------------------------------------------|----------|-----------|----------|\\n| CLIP | 72.43 | 68.14 | 70.22 |\\n| + LoRA (5 epoch) | 77.57 | 69.70 | 73.42 |\\n| + MaPLe (5 epoch) | 76.77 | 70.80 | 73.66 |\\n| + MaPLe (20 epoch) | 77.17 | 67.90 | 72.24 |\\n| + Naive combinations (5 epoch) | 76.77 | 70.47 | 73.49 |\\n| + Prompt-Driven plain LoRA (5 epoch) | 77.62 | 70.81 | 74.09 |\\n| + Prompt-Driven LoRSS (5 epoch) | 77.63 | 70.97 | 74.15 |\\n| + Prompt-Driven LoRSS + Interaction (5 epoch) | 77.61 | 71.02 | 74.17 |\\n| + Prompt-Driven LoRSS + self-regulation (5 epoch) | 77.57 | 71.13 | 74.21 |\\n| + DPD-LoRA (full model) (5 epoch) | **77.87** | **71.13** | **74.34** |\\n| + DPD-LoRA (full model) (20 epoch) | **78.13** | **71.33** | **74.58** |\\n\\n\\n\\n**Key Observations**\\n>\\n>1. Incremental Performance Improvements: Each component consistently contributes to performance enhancement. The integration of all components in our full model (DPD-LoRA) achieves the best results across all metrics.\\n>\\n>2. Parameter Efficiency: Each component is designed to improve performance without requiring additional parameters, except for the gating mechanism, which introduces minimal overhead.\\n>\\n>3. Avoiding Overfitting: While MaPLe initially demonstrates good performance, it shows signs of overfitting when trained for longer durations (e.g., 20 epochs). In contrast, our self-regulation loss function helps maintain generalization even after extended training.\\n>\\n>4. 
Coherence and Synergy of Components: The combination of components in the full DPD-LoRA model validates the necessity and synergy of the integrated approach. Each component addresses specific challenges, and together they significantly improve performance.\\n\\nThis study demonstrates that the inclusion of each component contributes progressively to performance improvements. Our full model (DPD-LoRA) achieves the best results after extended training (20 epochs), while MaPLe shows signs of overfitting after only 5 epochs. We believe this clarification, along with the additional table, effectively addresses your concerns about the necessity and coherence of integrating these components.\\n\\nThank you once again for your valuable feedback. We have also considered the similar concerns raised by Reviewer zPfS and have revised the manuscript accordingly to better highlight the roles and contributions of each component.\"}", "{\"summary\": \"This paper proposes the DPD-LoRA algorithm, which integrates prompt learning to guide the LoRA learning distribution. By incorporating modules such as Hierarchical Interaction, the Prompt-Conditioned Gating Mechanism (PCGM), and the Self-Regularized Lower-Rank Subspace (LoRSS), the proposed DPD-LoRA achieves strong performance across 11 benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-structured and relatively easy to understand.\", \"Detailed experiments provide convincing evidence of the effectiveness of the proposed algorithm.\"], \"weaknesses\": [\"Overall, the proposed algorithm involves numerous modules. I strongly suggest the authors consider identifying and focusing on the core components of their method.\", \"It is unclear how Eqn (4) is optimized. Are $s_i$ and $A_iB_i$ learned simultaneously? How many sub-LoRAs $m$ are used, and why is it imperative to decompose a single LoRA into multiple sub-LoRAs essential? 
Do the learnable $S_i$ and $G(P)$ share any functional overlap?\", \"Why doesn't the weighting form in Eqn (6) match that in Eqn (5) (e.g., setting $\\gamma=1-\\beta$)? This discrepancy should be clarified.\", \"In Eqn (8), why does the orthogonal regularization prevent overfitting and encourage diversity in the learned LoRA? If this assertion is based on findings from other studies, supporting citations would strengthen the claim.\", \"I would like to see a memory cost comparison between the DPD-LoRA and SOTA methods. DPD-LoRA requires storing $m$ LoRAs per layer (Eqn (4)) and also duplicates each encoder in both branches while retaining unprompted inputs, which appears to impose a substantial memory cost.\"], \"questions\": \"See weakness 2-5\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Invitation to further discussion\", \"comment\": \"Dear reviewer pw2m,\\n\\nWe genuinely appreciate the time and effort you've invested in reviewing our paper. We have carefully provided relevant responses and results to your concerns. We are eager to further discuss with you and gain your insights before the end of the Author/Reviewer phase. Please let us know if any aspect of our work remains unclear or if you have additional feedback.\\n\\nThank you.\"}", "{\"title\": \"Response to Reviewer pw2m\", \"comment\": \"We thank Reviewer pw2m for many insightful comments. We answer the questions in what follows. Please let us know if further clarification is needed.\\n\\n**Q: I strongly suggest the authors consider identifying and focusing on the core components of their method.**\\n\\nWe thank Reviewer pw2m for this insightful suggestion. 
Our proposed methods mainly focus on two things: first, prompts can additionally provide task-specific guidance to LoRA (even in plain LoRA settings, as shown in Table 5 in the appendix); second, the proposed gating mechanism strengthens their connection. Reviewer zPfS also reminded us that too many claims might be redundant, and we will revise the introduction to focus on the main points.\\n\\n**Q: It is unclear how Eqn (4) is optimized. Are $s_i$ and $A_iB_i$ learned simultaneously?**\\n\\n**Response.** Yes, $A_iB_i$ and $s_i$ are learned simultaneously. We provide our algorithm on page 18 of the appendix. \\n\\n**Q: How many sub-LoRAs $m$ are used, and why is it essential to decompose a single LoRA into multiple sub-LoRAs? Do the learnable $s_i$ and $G(P)$ share any functional overlap?**\\n\\n**Response.** We provided all hyperparameters in Table 4(a); we use a fixed number of 3 sub-LoRA matrices in all evaluations.\\n\\nThis LoRSS idea is inspired by MoE-LoRA [1], but our approach is more parameter-efficient in terms of learnable parameters. We decompose the LoRA matrix into sub-LoRA matrices under the same parameter budget, while MoE-LoRA duplicates the LoRA matrix into several LoRA matrices. For example, if we have n sub-LoRA matrices with a fixed rank r and $W \\in \\mathbb{R}^{d \\times k}$, their parameters increase to $n \\times (d \\times r + r \\times k)$, whereas our parameters remain at $(d \\times r + r \\times k)$. Another difference is that MoE uses a network to select the importance of matrices A/B. In contrast, we employ a single learnable parameter (the scaling factor) for each sub-LoRA matrix, which is clearly more efficient. Finally, our downstream tasks are completely different, highlighting the distinct applicability of our method. From our observation, we found that under the same parameters (i.e., a 3*r=3 sub-LoRA setting vs. an r=12 plain LoRA setting), LoRSS always outperforms the plain setting.\\n\\nThe scaling factors $s_i$ and the gating $G(\\cdot)$ are totally different. 
The $s_i$ are the weights of the different LoRA sub-matrices, while the gating provides a confidence score to the total sum of all LoRA sub-matrices. If we rewrite our Equations 6 and 7, you can see that the scaling factors $s_i$ of the LoRA matrices $A_i$ and $B_i$ are independent of the gating prediction:\\n\\n$$\\n\\Delta W_l = \\left( \\beta \\sum_{i=1}^{m} \\left( s_{i}^{(l)} \\times A_{i}^{(l)} B_{i}^{(l)} \\right) + \\gamma \\sum_{i=1}^{m} \\left( s_{i}^{(l-1)} \\times A_{i}^{(l-1)} B_{i}^{(l-1)} \\right) \\right) \\times G(P_l)\\n$$\\n \\nwhere $\\Delta W_l$ is the final update matrix and is added as in standard LoRA.\\n\\n**Q: Why doesn't the weighting form in Eqn (6) match that in Eqn (5)? This discrepancy should be clarified.**\\n\\n**Response.** We thank Reviewer pw2m for pointing out this issue that many reviewers care about. There is actually a misunderstanding here: our $\\alpha,\\beta,\\gamma$ refer to the same quantity and have the same value (0.1 for $(l-1)$ and 0.9 for $l$, as you can tell from our Table 4(a)). The reason we chose different alphabetical symbols is that we wanted to separate the prompt-token-side and LoRA-side notation. We will change them to the same representation to prevent confusion.\\n\\n**Q: DPD-LoRA requires storing $m$ LoRAs per layer (Eqn (4)) and also duplicates each encoder in both branches while retaining unprompted inputs, which appears to impose a substantial memory cost.**\\n\\n**Response.** In fact, if you look at our Tables 4(a) and (b), where we compare the computational complexity of different prompting methods, we only add a few parameters (or even fewer than previous methods!), and our proposed methods show better performance. As previously mentioned in our appendix algorithm, we actually follow a two-step training strategy which does not require much memory cost. 
We provided more explanation and details in **\\\"Concerns About Memory and Cost Efficiency\\\"** in the Reviewers' Shared Questions section.\\n\\n\\n1. Tongxu Luo, Jiahe Lei, Fangyu Lei, Weihao Liu, Shizhu He, Jun Zhao, and Kang Liu. MoELoRA: Contrastive learning guided mixture of experts on parameter-efficient fine-tuning for large language models. arXiv preprint arXiv:2402.12851, 2024.\\n\\n2. Zhang, Q., Chen, M., Bukharin, A., Karampatziakis, N., He, P., Cheng, Y., ... & Zhao, T. (2023). AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512.\"}", "{\"title\": \"Response to Reviewer zPfS (2) with additional analysis of the synergistic effects between LoRA and prompt learning\", \"comment\": \"Dear Reviewer zPfS,\\n\\nFollowing up on our previous reply about the synergistic effects, we appreciate the opportunity to provide a more in-depth discussion to address your concerns.\\n\\n**Mathematical Derivation**\\n>\\n>In our appendix, particularly in the mathematical derivation leading up to Equation (19), we explore how prompt learning and LoRA interact within the transformer architecture. We define the total weight matrices incorporating LoRA as:\\n>\\n>$$W_Q^{\\text{total}} = W_Q^{\\text{base}} + A_Q B_Q$$ \\n>\\n>and similarly for $W_K^{\\text{total}}$ and $W_V^{\\text{total}}$, where $A_Q B_Q$ are the low-rank matrices introduced by LoRA. \\n>\\n>When we consider the prompted inputs $X' = [X; P]$, where $X$ represents the original input tokens and $P$ represents the prompt tokens, the computation of the query matrix $Q'$ becomes:\\n>\\n>$$\\n\\begin{aligned}\\n Q' = [XW_Q^{\\text{base}} + XA_QB_Q; PW_Q^{\\text{base}} + PA_QB_Q].\\n\\end{aligned}\\n>$$\\n>\\n>This expansion reveals that both the original inputs $X$ and the prompt tokens $P$ interact with the LoRA-adapted weights $A_QB_Q$. 
The key observation here is that the prompt tokens directly contribute to the LoRA updates, effectively enriching the model's adaptation capabilities.\\n>\\n>In the attention mechanism, the computation involves terms like $Q'K'^{T}$, which, when expanded, include cross-interactions between $X$ and $P$:\\n>$$\\n>Q'K'^{T}=([XW_Q^{total};PW_Q^{total}])([XW_K^{total};PW_K^{total}])^{T}\\n>$$\\n>\\n>This results in four combinations:\\n>1. $XW_Q^{total}(XW_K^{total})^\\u22a4$\\n>2. $XW_Q^{total}(PW_K^{total})^\\u22a4$\\n>3. $PW_Q^{total}(XW_K^{total})^\\u22a4$\\n>4. $PW_Q^{total}(PW_K^{total})^\\u22a4$\\n>\\n>These terms capture all possible interactions between the original inputs and the prompt tokens, modulated by the LoRA-adapted weights. **Particularly, the cross terms (2 and 3) highlight the direct influence of prompt tokens on the processing of original inputs through the adapted weights, showcasing a synergistic effect.**\\n\\n**Gradient Analysis**\\n>\\n>On the other hand, we can provide an analysis from the gradient perspective to explain this.\\n>\\n>Consider the shared cross-entropy loss function $L$ computed over the model's predictions and the ground truth labels. Both the prompt tokens $P$ and the LoRA parameters $A_{\\*}B_{\\*}$ are optimized to minimize $L$.\\n>\\n>**1. Gradients with Respect to Prompt Tokens**\\n>\\n>The prompt tokens $P$ are part of the input $X' = [X; P]$, where $X$ represents the original input tokens. The gradient of the loss with respect to $P$ is calculated using the chain rule. 
Therefore:\\n>\\n>$$\\n\\frac{\\partial L}{\\partial P} = \\left( \\frac{\\partial L}{\\partial Q'} \\frac{\\partial Q'}{\\partial X'} + \\frac{\\partial L}{\\partial K'} \\frac{\\partial K'}{\\partial X'} + \\frac{\\partial L}{\\partial V'} \\frac{\\partial V'}{\\partial X'} \\right) \\frac{\\partial X'}{\\partial P}.\\n>$$\\n>\\n>Since $X'$ directly includes $P$, we have $\\frac{\\partial X'}{\\partial P} = \\begin{bmatrix} 0 & I_P \\end{bmatrix}$, where $I_P$ is the identity matrix corresponding to the dimensions of $P$.\\n>\\n>**2. Gradients with Respect to LoRA Parameters**\\n>\\n>The LoRA parameters modify the weight matrices: $W_Q^{\\text{total}} = W_Q^{\\text{base}} + A_Q B_Q$. The gradients with respect to $A_Q$ and $B_Q$ are:\\n>\\n>$$\\n\\frac{\\partial L}{\\partial A_Q} = \\frac{\\partial L}{\\partial W_Q^\\text{total}} \\frac{\\partial W_Q^\\text{total}}{\\partial A_Q} = \\frac{\\partial L}{\\partial W_Q^\\text{total}} B_Q^\\top,\\n>$$\\n>\\n>$$\\n\\frac{\\partial L}{\\partial B_Q} = A_Q^\\top \\frac{\\partial L}{\\partial W_Q^\\text{total}}.\\n>$$\\n>\\n>Similar expressions hold for $A_K, B_K$ and $A_V, B_V$.\\n>\\n>**3. 
Interaction Between Prompt Tokens and LoRA Parameters**\\n>\\n>The key observation is that the gradients of the LoRA parameters depend on the **entire input**, including the prompt tokens $P$:\\n>\\n>$$\\n\\\\frac{\\\\partial L}{\\\\partial W_Q^\\\\text{total}} = X'^\\\\top \\\\frac{\\\\partial L}{\\\\partial Q'}.\\n>$$\\n>\\n>Since $X' = [X; P]$, the gradient with respect to $W_Q^\\\\text{total}$ involves both $X$ and $P$:\\n>\\n>$$\\n\\\\frac{\\\\partial L}{\\\\partial W_Q^\\\\text{total}} = \\\\begin{bmatrix} X^\\\\top \\\\\\\\ P^\\\\top \\\\end{bmatrix} \\\\frac{\\\\partial L}{\\\\partial Q'}.\\n>$$\\n>\\n>Therefore, the gradient with respect to $A_Q B_Q$ becomes:\\n>\\n>$$\\n\\\\frac{\\\\partial L}{\\\\partial A_Q} = (\\\\begin{bmatrix} X^\\\\top \\\\\\\\ P^\\\\top \\\\end{bmatrix} \\\\frac{\\\\partial L}{\\\\partial Q'}) B_Q^\\\\top.\\n>$$\\n>$$\\n\\\\frac{\\\\partial L}{\\\\partial B_Q} = A_Q^\\\\top (\\\\begin{bmatrix} X^\\\\top \\\\\\\\ P^\\\\top \\\\end{bmatrix} \\\\frac{\\\\partial L}{\\\\partial Q'}).\\n>$$\\n>\\n>These expressions show that the prompt tokens $P$ directly influence the updates to the LoRA parameters $A_{\\\\*} B_{\\\\*}$.\\n>\\n>**Thus, the prompt tokens directly contribute to the gradient calculations for the LoRA parameters**. This allows the model to adjust the low-rank adaptations in a way that specifically leverages the information provided by the prompts.\\n\\nThank you again for your valuable input, which helps us improve the clarity and depth of our work.\"}", "{\"title\": \"Invitation to further discussion\", \"comment\": \"Dear reviewer 1iPg,\\n\\nWe genuinely appreciate the time and effort you've invested in reviewing our paper. We have carefully provided relevant responses and results to your concerns. We are eager to further discuss with you and gain your insights before the end of the Author/Reviewer phase. 
Please let us know if any aspect of our work remains unclear or if you have additional feedback.\\n\\nThank you.\"}", "{\"title\": \"Interactive Discussions\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your efforts in reviewing this paper. We highly encourage you to participate in interactive discussions with the authors before November 26, fostering a more dynamic exchange of ideas rather than a one-sided rebuttal.\\n\\nPlease feel free to share your thoughts and engage with the authors at your earliest convenience.\\n\\nThank you for your collaboration.\\n\\nBest regards,\\nICLR 2025 Area Chair\"}", "{\"comment\": \"Thank you for the rebuttal. After reviewing your response and the feedback provided to other reviewers, I find that some of my concerns have been addressed. While the comparison between Prompt-Driven plain LoRA and plain LoRA demonstrates the effectiveness of the Prompt-Driven part, it does not sufficiently verify the contributions of other components. Moreover, the relative improvements remain marginal (e.g., the small difference between Prompt-Driven plain LoRA and other Prompt-Driven LoRSS versions), making it difficult to determine whether these improvements are substantial enough to be convincing. Therefore, I decide to maintain my current score.\"}" ] }
DcZpQhVpp9
ADMM for Structured Fractional Minimization
[ "Ganzhao Yuan" ]
This paper considers a class of structured fractional minimization problems. The numerator consists of a differentiable function, a simple nonconvex nonsmooth function, a concave nonsmooth function, and a convex nonsmooth function composed with a linear operator. The denominator is a continuous function that is either weakly convex or has a weakly convex square root. These problems are prevalent in various important applications in machine learning and data science. Existing methods, primarily based on subgradient methods and smoothing proximal gradient methods, often suffer from slow convergence and numerical stability issues. In this paper, we introduce {\sf FADMM}, the first Alternating Direction Method of Multipliers tailored for this class of problems. {\sf FADMM} decouples the original problem into linearized proximal subproblems, featuring two variants: one using Dinkelbach's parametric method ({\sf FADMM-D}) and the other using the quadratic transform method ({\sf FADMM-Q}). By introducing a novel Lyapunov function, we establish that {\sf FADMM} converges to $\epsilon$-approximate critical points of the problem within an oracle complexity of $\mathcal{O}(1/\epsilon^{3})$. Extensive experiments on synthetic and real-world datasets, including sparse Fisher discriminant analysis, robust Sharpe ratio minimization, and robust sparse recovery, demonstrate the effectiveness of our approach.
[ "Fractional Minimization", "Nonconvex Optimization", "Proximal Linearized ADMM", "Nonsmooth Optimization", "Convergence Analysis" ]
Accept (Poster)
https://openreview.net/pdf?id=DcZpQhVpp9
https://openreview.net/forum?id=DcZpQhVpp9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zoyOK9c3d6", "uhdDxxXhnN", "tVSjMnLIFF", "rSAJmLESV2", "huNCt7Eu7X", "fqutGj4d81", "fCcyG1KccQ", "Xb436QqNdr", "QjT2SLbBTx", "N7ya2uIAJO", "MTH3gruw4R", "LyYyGePPU3", "LkQa9jtGUO", "KQMf5su4Mf", "JvH8cIv2QI", "GffkFKMHo2", "6ueV4MFGkG", "2cHEkUIX7p" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732259437395, 1737523536962, 1732255776225, 1733112672855, 1730631086348, 1729863456202, 1732276066085, 1732533759245, 1732255753955, 1732278873440, 1734652125654, 1733115764825, 1732250859231, 1730721942067, 1732265194565, 1732251661487, 1732280434259, 1732277605956 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Reviewer_AqZe" ], [ "ICLR.cc/2025/Conference/Submission2863/Reviewer_EekD" ], [ "ICLR.cc/2025/Conference/Submission2863/Reviewer_EekD" ], [ "ICLR.cc/2025/Conference/Submission2863/Reviewer_7KB3" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Area_Chair_QNUA" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Reviewer_7KB3" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ], [ "ICLR.cc/2025/Conference/Submission2863/Authors" ] ], "structured_content_str": [ "{\"comment\": 
\"**Question 5. What is meant in line 73 when it is claimed that when rho is large, ||X||_[k] approximates ||X||_1 ?**\\n\\n\\n**Response.** According to (Gotoh et al., 2018; Bi et al., 2014), when $\\\\rho$ exceeds a certain threshold, the exact penalty function:\\n\\n$$\\\\min_X f(X)+\\\\rho( ||X||_1 - ||X||_{[k]} ) $$\\n\\nbecomes equivalent to the original sparsity-constrained problem $$\\\\min_X f(X), s.t. ||X||_0\\\\leq k$$\\n\\nin the sense that these two problems have the same global optimal solution set.\\n \\nWe will briefly mention this theoretical result in the updated manuscript.\\n\\n\\n\\n\\n**Question 6. In line 76, shouldn't h(Ax) have a factor of rho?**\\n\\n\\n**Response.** You are correct. Thank you for your careful reading and for pointing this out.\\n\\n\\n**Question 7. In lin 76, isn't it more clear to simply write that d and d^1/2 are convex?**\\n\\n\\n**Response.** We will change it to \\\"d or d^1/2 are weakly convex\\\". Thank you for your suggestion.\\n\\n\\n**Question 8. How much effort was made to tune the parameters used in all of the different methods?**\\n\\n\\n**Response.** We only tune one parameter $\\\\beta^0$:\\n\\n1. $\\\\beta^0=\\\\rho*100$ for sparse FDA\\n\\n2. $\\\\beta^0= 0.001$ for robust SRM\\n\\n3. $\\\\beta^0=0.001$ for robust sparse recovery.\\n\\nFor **all** experiments (refer to the updated manuscript), we consider the following fixed constants:\\n$$\\\\xi=1/2, p=1/3, \\\\chi=2\\\\sqrt{1+\\\\xi}+10^{-14}, \\\\theta=1.01.$$\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**Question 1. All the numerical examples given in the paper have a denominator which is convex. This leads me to question whether the generalization of the denonimator to weakly convex functions is even well-motivated**\\n\\n**Response.** When the denominator is an $L$-smooth (not necessarily convex) function, it is also $L$-weakly convex. 
This setting includes an important class of fractional programs where the denominator is $L$-smooth.\\n\\nWe will include this clarification to emphasize that such generalization is well-motivated.\\n\\n\\n**Question 2. Similarly, what is the significance of not assuming that delta is convex if later you assume that it's a simple function? When you take delta to be the indicator function of the set of orthogonal matrices, the proximal operator is not well-defined.**\\n\\n**Response.** We do not assume the convexity of $\\\\delta$ since the indicator function for the orthogonality constraint is non-convex. Instead, we assume the convexity of $g(\\\\cdot)$ and $h(\\\\cdot)$.\\n\\nEven when $\\\\delta$ is the indicator function of the set of orthogonal matrices, the proximal operator remains well-defined. The solution set is compact, and the proximal operator has an efficient closed-form solution. For details on the computation of the proximal operator, please refer to Section \\\"G.1 ORTHOGONALITY CONSTRAINT\\\" in the manuscript.\\n\\n\\n**Question 3. In Remark 3.11 it is claimed that Lemma 3.9 and 3.10 are novel contributions but in fact many (perhaps all?) of these results are well-known results about Moreau envelopes so in what sense are they novel? The authors should either clarify this comment and be specific about what exactly is novel or to remove this remark and just cite the known references stating these results.**\\n\\n**Response.** \\n\\n1. Lemma 3.9(a) is a standard result in the literature, and we have appropriately cited it in the proof.\\n\\n2. Lemma 3.9(b,c) may already exist in the literature or in lecture notes, but we were unable to locate appropriate references. It is important to note that the results discussed in *Amir Beck's First-Order Methods in Optimization* (SIAM, 2017, Chapter 6) pertain to Moreau envelope smoothing, not Nesterov\\u2019s smoothing.\\n\\n3. 
In Lemma 3.9(d), we establish for the first time that the Moreau envelope smoothing function\\n\\n$h_{\\\\mu}^{more}(y) = \\\\min_{v} \\\\tfrac{1}{2\\\\mu} ||v-y||_2^2 + h(v)$ \\n\\nis equivalent to Nesterov's smoothing function:\\n\\n$h_{\\\\mu}^{nest}(y) = \\\\max_{v} <y,v> - h^*(v) - 0.5 \\\\mu ||v||_2^2$.\\n\\nIn other words, we have: $h_{\\\\mu}^{more}(y)=h_{\\\\mu}^{nest}(y)$.\\n\\n4. We argue that Lemma 3.9(d,e,f) and Lemma 3.10 represent novel contributions of this paper. Notably, even if the Moreau envelope smoothing function is used instead of Nesterov's smoothing function, Lemmas 3.9(e,f) remain novel.\\n\\n5. In our revision, we have changed it to: \\\"Lemma 3.9 and Lemma 3.10 can be derived using standard convex analysis and\\nplay an essential role in the analysis of the proposed FADMM algorithm. Interestingly, as\\ndemonstrated in Lemma 3.9(d), Nesterov\\u2019s smoothing function is essentially equivalent to the Moreau envelope smoothing function (Beck, 2017; B\\u00f6hm & Wright, 2021).\\\" See L204-208 in the updated manuscript.\\n\\n**Question 4. Very little is said about the effect that the parameter choices have on convergence. In practice how does one go about choosing beta_0 or other hyperparameters? For instance, in the sparse FDA experiments it's written that beta_0 = 100rho which is actually quite large (1000 to 100,000).**\\n\\n\\n**Response.** \\n\\nIn all our experiments, we use the following parameter settings:\\n$$\\\\xi=1/2, p=1/3, \\\\chi=2\\\\sqrt{1+\\\\xi}+10^{-14}, \\\\theta=1.01$$\\n\\nFor the parameter $\\\\beta^0$, the following values work well across different applications:\\n\\n1. $\\\\beta^0= 100 \\\\rho$ for sparse FDA\\n\\n2. $\\\\beta^0= 0.001$ for robust SRM\\n\\n3. $\\\\beta^0=0.001$ for robust sparse recovery.\\n\\nFor the sparse FDA experiments, according to the exact penalty theory (Gotoh et al., 2018; Bi et al., 2014), the value of $\\\\beta^t$ is expected to be larger than $\\\\rho$. 
This is why we consider a relatively large value for $\\\\beta^0$. \\n\\nWhile it is possible to choose a smaller $\\\\beta^0$, this may result in a little slower convergence.\"}", "{\"title\": \"Reply to: Regarding Question 4\", \"comment\": \"We acknowledge the reviewer\\u2019s concern regarding the choice of the parameter $\\\\beta^0$. However, we have not included detailed experiments with varying $\\\\beta^0$ for the following two reasons:\\n\\n1. We have already conducted experiments with 72 different settings (9x8 figures), which provide valuable insights into the robustness of the proposed methods. Including additional experiments would result in excessive redundancy, especially for a theoretical paper, detracting from its clarity and conciseness.\\n\\n2. All the MATLAB code for reproducibility is provided in the supplementary materials. Therefore, we believe that the experimental section does not represent a weakness of the paper.\"}", "{\"summary\": \"In this paper, the authors introduce FADMM, the first ADMM algorithm designed to solve general structured fractional minimization problems. FADMM decouples the original problem into linearized proximal subproblems, featuring two variants: one using Dinkelbach\\u2019s parametric method (FADMM-D) and the other using the quadratic transform method (FADMM-Q). The proposed algorithm improves the slow convergence speed and numerical stability issues of traditional subgradient methods and smoothing proximal gradient methods. 
The authors conduct a convergence analysis of the FADMM algorithm by introducing a novel Lyapunov function, and they validate the effectiveness of the FADMM algorithm through extensive synthetic and real-world data.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.The authors provide a comprehensive analysis of the proposed FADMM algorithm, including two specific variants: FADMM-D and FADMM-Q.\\n\\n2.Comprehensive theoretical analysis, with proofs on convergence.\\n\\n3.The authors conduct extensive experiments on both synthetic and real-world data, effectively demonstrating the efficiency of the FADMM algorithm.\", \"weaknesses\": \"I think the writing of this paper can be further improved.\", \"questions\": \"I am not an expert in non-convex optimization, I can only give some advice on writing papers\\uff1a\\n\\n1.The first sentence of the abstract is too long. It is recommended to split it to improve readability.\\n\\n2.Line 73, \\\"sufficient large\\\" should be \\\"sufficiently large\\\".\\n\\n3.Line 138, \\\"To the best of our knowledge......\\\" is too long. It is better to break it into short sentences for reading.\\n\\n4.Line 236, \\\"is a widely used to develop practical optimization algorithms \\\", delete \\u201da\\u201c.\\n\\n5.Line 454, \\\"Additioanl\\\" -> \\\"Additional\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper develops and analyzes alternating direction method of multipliers (ADMM) algorithms for solving structured fractional minimization problems. Building on recent work (Bot et al., 2023a,b; Li & Zhang, 2022), the main idea is to transform the fractional minimization problem into an equivalent composite minimization problem more amenable to operator splitting. 
The authors consider two classical transformations following Bot et al., 2023a,b and Li & Zhang, 2022 - Dinkelbach's parametric method and the quadratic transform method. Rather than exactly solving the transformed problems at each iteration (which could be costly), they propose to linearize them and solve this majorized version (which leads to ADMM style updates). This produces two algorithms, FADMM-D and FADMM-Q, corresponding to the two transformation approaches.\\n\\nA key technical contribution is the use of smoothing via the Moreau envelope to handle the nonsmooth components in the numerator of the fractional objective. The authors establish convergence rates for both D and Q variants, showing they reach eps-approximate critical points within O(1/eps^3) iterations. The theoretical analysis is rigorous and all results are precisely stated. The definition of eps-approximate critical point, however, is nonstandard.\", \"the_two_methods_are_validated_on_three_applications_that_fit_loosely_within_their_abstract_framework\": \"sparse Fisher discriminant analysis, robust Sharpe ratio maximization, and robust sparse recovery. Note that these problems do not make full use of the relaxed assumptions outlined in the paper. Numerical experiments demonstrate that both variants typically outperform existing approaches in terms of wall-clock time to convergence and seem convincing.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The class of problems addressed is broader than what was previously treated in the literature.\", \"The theoretical results about the convergence of the methods and their rates are strong and significant.\", \"The arguments made were rigorous and clearly stated. 
I wasn't able to review all the proofs in detail as there were many that were relegated to the appendix and the paper itself is quite long at nearly 40 pages.\", \"The methods appear to perform well in the numerical experiments compared to previous methods that can solve fractional minimization problems and other smoothing algorithms.\"], \"weaknesses\": [\"The technical density of the presentation harms the readability.\", \"The discussion after Remark 3.5, on the stationarity conditions for this problem, feels very rushed and I don't understand exactly the justifications for all the claims made. In particular there are many references but they are vague - I think it would improve a lot the clarity of the paper if you could specify a lemma or result from those papers that clearly justifies what you are claiming with the subdifferential calculus here. This also applies to Lemma A.1.\", \"Definition 3.8 is the definition of the Moreau envelope - I am confused to see it called Nesterov smoothing. This comes up again in the questions section because some of these results about Moreau envelopes are already well-known, even in papers cited by the authors (i.e., Bohm and Wright).\", \"Many hyperparameters in the method with little guidance about how to choose them or their effect on convergence.\"], \"questions\": [\"All the numerical examples given in the paper have a denominator which is convex. This leads me to question whether the generalization of the denominator to weakly convex functions is even well-motivated; are there problems that really necessitate this assumption?\", \"Similarly, what is the significance of not assuming that delta is convex if later you assume that it's a simple function? When you take delta to be the indicator function of the set of orthogonal matrices, the proximal operator is not well-defined.\", \"In Remark 3.11 it is claimed that Lemma 3.9 and 3.10 are novel contributions but in fact many (perhaps all?) 
of these results are well-known results about Moreau envelopes so in what sense are they novel? The authors should either clarify this comment and be specific about what exactly is novel or remove this remark and just cite the known references stating these results.\", \"Very little is said about the effect that the parameter choices have on convergence. In practice how does one go about choosing beta_0 or other hyperparameters? For instance, in the sparse FDA experiments it's written that beta_0 = 100rho which is actually quite large (1000 to 100,000).\"], \"minor_questions\": [\"What is meant in line 73 when it is claimed that when rho is large, ||X||_[k] approximates ||X||_1 ?\", \"In line 76, shouldn't h(Ax) have a factor of rho?\", \"In line 76, isn't it more clear to simply write that d and d^1/2 are convex?\", \"How much effort was made to tune the parameters used in all of the different methods?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"**Regarding Weakness 2**\\n\\nThere is no distinction between the Moreau envelope and Nesterov\\u2019s smoothing, except that the Moreau envelope was discovered and published in the 1960s by both Moreau and Yosida. To call it Nesterov\\u2019s smoothing doesn\\u2019t make sense as it is historically incorrect. It is nonsensical to suggest using Nesterov\\u2019s smoothing instead of the Moreau envelope - there is no difference so I don\\u2019t understand at all what you're claiming when you write \\u201cNotably, even if the Moreau envelope smoothing function is used instead of Nesterov's smoothing function, Lemmas 3.9(e,f) remain novel contributions.\\u201d These are two names for the same object; they are not just fundamentally equivalent, they are the same.\\n\\n**Regarding Question 1**\\n\\nI am aware that an L-smooth function is weakly convex. 
The point is that you don\\u2019t give any examples with these functions int he denominator - what is the point of generalizing if you don\\u2019t have examples that require it?\\n\\n**Regarding Question 2**\\n\\nWhen I write that the proximal operator is not well-defined for the indicator of the set of orthogonal matrices, I mean that it is not unique in general. What you\\u2019ve written in G.1 is in the argmin of the projection subproblem but it is not unique - what happens if I pick something else in this set? There can be more than one orthogonal matrix whose distance to the current matrix is equal.\\n\\n**Regarding Question 3**\\n\\n**I completely disagree with your response and the edited submission.**\\n* Lemma 3.9 (b) and (c) are indeed known, for instance proposition 2.1 of \\u201cGeneralized Conditional Gradient with Augmented Lagrangian for Composite Minimization\\u201d by Silveti-Falls et al which cites the well-known textbook of Bauschke and Combettes on convex analysis.\\n* Lemma 3.9 (d) is NOT a novel contribution, it\\u2019s literally on the wikipedia page for Moreau envelope https://en.wikipedia.org/wiki/Moreau_envelope#Properties as well as in various lecture notes, e.g., https://candes.su.domains/teaching/math301/Lectures/Moreau-Yosida.pdf 22.3.32 Dual Viewpoint. I absolutely insist that these results are well-known and that it is not reasonable to claim these are novel contributions at all.\\n* Lemma 3.9 (e) and (f) are also standard results; there is no doubt about this, they are looser versions of what is written in Silveti-Falls et al but you can see these reuslts also in Appendix A1 of \\\"A Conditional Gradient Framework for Composite Convex Minimization with Applications to Semidefinite Programming\\\" by Yurtsever et al or in Appendix A Lemma 10 \\\"A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization\\\" by Tran-Dinh et al among many. 
It's well-known that subgradients of Lipschitz functions are bounded in norm by the Lipschitz constant (see many textbooks for this result, e.g., Shai Shalev-Shwartz et al. Online learning and online convex optimization.).\\n\\n**Regarding Question 4**\\n\\n\\u201cWhile it is possible to choose a smaller, this may result in a little slower convergence.\\u201d\\n\\nComments like this, along with supporting experimental evidence, would greatly improve the quality of the submission.\"}", "{\"comment\": \"Thank you to the authors for the clarification. I have increased the score from 5 to 6.\"}", "{\"comment\": \"Thank you for your efforts in evaluating our manuscript.\\n\\n**Weakness 1. The discussion after Remark 3.5, on the stationarity conditions for this problem, feels very rushed and I don't understand exactly the justifications for all the claims made. In particular there are many references but they are vague - I think it would improve a lot the clarity of the paper if you could specify a lemma or result from those papers that clearly justifies what you are claiming with the subdifferential caclulus here. This also applies to Lemma A.1.**\\n\\n**Response.** Thank you for pointing this out. We will specify the exact lemmas from the referenced papers to provide clearer justification. This section primarily serves to provide the necessary background and introduce the notion of critical points. These results are likely well-known and are not the contributions of this paper.\\n\\n**Weakness 2. Definition 3.8 is the definition of the Moreau envelope - I am confused to see it called Nesterov smoothing. This comes up again in the questions section because some of these results about Moreau envelopes are already well-known, even in papers cited by the authors (i.e., Bohm and Wright).**\\n\\n**Response.** \\n\\n1. All the results are based on the definition of Nesterov's smoothing function, which is why we refer to it as Nesterov smoothing.\\n\\n2. 
Interestingly, as demonstrated in Lemma 3.9(d), Nesterov's smoothing function is essentially equivalent to the Moreau envelope smoothing function.\\n\\n3. Notably, even if the Moreau envelope smoothing function is used instead of Nesterov's smoothing function, Lemmas 3.9(e,f) remain novel contributions.\\n\\n4. In our revision, we will clarify that Nesterov's smoothing function is fundamentally equivalent to the Moreau envelope smoothing function.\\n\\n**Weakness 3. Many hyperparameters in the method with little guidance about how to choose them or their effect on convergence.**\\n\\n**Response.** The algorithm involves five parameters: $(\\\\beta^0, \\\\xi, \\\\theta, p, \\\\chi)$. However, we argue that these parameters are primarily chosen to ensure the theoretical convergence of the algorithm, and the algorithm's performance is not highly sensitive to their specific values.\\n\\nFor all our experiments (see the updated manuscript), we use the parameter settings.\\n\\n1. $\\\\xi=1/2$, $p=1/3$, $\\\\chi>2\\\\sqrt{1+\\\\xi}$, $\\\\theta>1$.\\n\\n2. The proximal parameter $\\\\theta$ is typically set to a constant slightly greater than $1$, such as $1.01$.\\n\\n3. The parameter $\\\\chi$ is typically set to a constant slightly greater than $2\\\\sqrt{1+\\\\xi}$, such as $2\\\\sqrt{1+\\\\xi}+10^{-14}$.\\n\\n4. In our experiments, we only slightly tune the parameter $\\\\beta^0$ for different applications.\"}", "{\"title\": \"Reply to: Regarding Weakness 2\", \"comment\": \"We now focus on Moreau envelope and Nesterov\\u2019s smoothing:\\n\\n$h_{\\\\mu}^{more}(y) = \\\\min_{v} \\\\tfrac{1}{2\\\\mu} ||v-y||_2^2 + h(v)$ \\n\\n$h_{\\\\mu}^{nest}(y) = \\\\max_{v} <y,v> - h^*(v) - 0.5 \\\\mu ||v||_2^2$.\\n\\n1. Although these formulations are essentially equivalent with $h_{\\\\mu}^{more}(y)=h_{\\\\mu}^{nest}(y)$, they take different forms. \\n\\n2. 
The Moreau envelope function $h_{\\\\mu}^{more}(y)$ involves adding **a strongly convex term** to the **primal** minimization problem, while Nesterov\\u2019s smoothing function $h_{\\\\mu}^{nest}(y)$ incorporates **a strongly concave term** into the **dual** maximization problem. \\n\\n3. This distinction leads to different strategies for deriving analytical solutions to the proximal subproblem and results in primal-dual algorithms. \\n\\n4. Finally, we use the term Nesterov\\u2019s smoothing technique in Definition 3.8, along with the summarized properties of Nesterov\\u2019s smoothing function. These results remain valid.\"}", "{\"metareview\": \"The paper presents a novel ADMM-based optimization method, tailored for structured fractional minimization problems, which seem not to be well addressed in the literature. The main techniques are based on smoothing methods (via the Moreau envelope) and two well-established approaches for fractional minimization problems. The authors provide corresponding theoretical analysis and numerical results. A Lyapunov function demonstrates convergence within an oracle complexity of $O(1/\\\\epsilon^3)$, and empirical evaluations confirm applicability across domains like sparse Fisher discriminant analysis and robust sparse recovery. Overall, the topic and results are interesting. The reviewers are broadly positive about the submission.\", \"additional_comments_on_reviewer_discussion\": \"Some concerns were raised during the initial review phase that were adequately addressed in the rebuttal. The reviewers note that the authors have adequately addressed the concerns raised during the rebuttal process, and the work demonstrates both technical soundness and potential impact.\"}", "{\"title\": \"Reply to: Regarding Question 1\", \"comment\": \"Given that the class of $L$-smooth functions for the denominator is quite broad, we discuss such a general case for future reference, following the work by Radu Ioan Bot et al. 
(Inertial Proximal Block Coordinate Method for a Class of Nonsmooth Sum-of-Ratios Optimization Problems, SIOPT 2023).\\n\\nIn the following, we discuss two examples of weakly convex denominator functions.\\n\\n1. Consider the Sparse FDA problem described in Section 1.1. If the denominator takes the form $d(X) = \\\\text{trace}(X'DX) + c$, where $D$ is not necessarily positive semidefinite and $c > 0$ is sufficiently large to ensure $d(X) > 0$, then $d(X)$ is non-convex. However, it is $(2||D||)$-weakly convex, and the proposed FADMM can still be applied.\\n\\n2. Consider another example where the denominator is a logarithmically convex function (but not necessarily convex), such as $d(x) = \\\\log(||Ax||_2^2 + 1)$. Although it is nonconvex, $d(x)$ can still be weakly convex under mild conditions. In this case, the proposed FADMM can still be applied.\"}", "{\"comment\": \"Thank you for your efforts in evaluating our manuscript.\\n\\n**Question 1:-- The results in Lemmas 3.9 and 3.10 are standard in the literature. The authors do not need to prove them in the appendix and should not claim the results ``represent novel contributions.**\\n\\n**Response:**\\n\\nIn our revision, we have changed it to: \\\"Lemma 3.9 and Lemma 3.10 can be derived using standard convex analysis and\\nplay an essential role in the analysis of the proposed FADMM algorithm. Interestingly, as\\ndemonstrated in Lemma 3.9(d), Nesterov\\u2019s smoothing function is essentially equivalent to the Moreau envelope smoothing function (Beck, 2017; B\\u00f6hm & Wright, 2021).\\\" See L204-208 in the updated manuscript.\\n\\n**Question 2: After using the smoothing techniques, the hard term $h(Ax)$ will become $h_{\\\\mu}(Ax)$. The authors could then use some standard methods from the fractional minimization community to solve this problem. The corresponding complexity will also be $\\\\epsilon^{-3}$. The authors might want to comment on this. It seems that Bot et al. 
(2023) also used this smoothing technique.**\\n\\n**Response:** The strategy suggested by the reviewer is essentially the Smoothing Proximal Gradient Method (SPGM) (Beck & Rosset, 2023; B\\u00f6hm & Wright, 2021) applied to fractional programs. We have discussed this method in the \\\"RELATED WORK\\\" section under \\\"General Algorithms for Solving Problem (1) \\u2014 Smoothing Proximal Gradient Methods (SPGM)\\\".\\n\\nBot et al. (2023) employ a smoothing technique that involves adding a strongly concave term to the dual maximization problem, which can be interpreted as another primal-dual method. However, their analysis relies on the Kurdyka\\u2013\\u0141ojasiewicz (KL) inequality of the problem, and no iteration complexity is provided. We have also included a numerical comparison with this method.\\n\\n**Question 3: The authors do not explain Assumption 5.1 well in Remark 5.2. By the algorithm, the iterate $x^t$ is only in the domain of $\\\\delta(\\\\cdot)$. You cannot say that it lies in a bounded set. The same issue occurs in Lemma 5.5. It might not be safe to assume that $x^t$ is bounded.**\\n\\n**Response:** Note that we assume the constraint set is **compact**, satisfying $||x^t|| \\\\leq R$ for some $R$. (This assumption holds for all three applications considered.)\\n\\nIf $x \\\\in \\\\operatorname{dom}(f)\\\\triangleq \\\\{ x : f(x) < +\\\\infty \\\\}$, then $x$ is feasible and it holds that $||x^t|| \\\\leq R$.\\n\\n\\n**Question 4: The notation $\\\\underline{Fd}$ and $\\\\overline{Fd}$ should be $\\\\underline{F} \\\\, \\\\underline{d}$ and $\\\\overline{F} \\\\, \\\\overline{d}$, respectively.**\\n\\n**Response:** We will change $\\\\underline{Fd}$ to $\\\\underline{F}\\\\cdot\\\\underline{d}$.\\n\\n\\n\\n**Question 5: If the $\\\\epsilon$-critical point is similar to that in Bot et al. (2023b), will the same complexity results hold?**\\n\\n**Response:**\\n\\n1. 
Note that an exact critical point does not depend on specific algorithms, whereas the definition of an $\\\\epsilon$-critical point, whether in Bot et al.'s work or ours, does.\\n\\n2. Both notions of $\\\\epsilon$-critical points are reasonable, as they converge to the exact critical point when $\\\\epsilon = 0$ and depend only on the solution.\\n\\nTherefore, these two notions are not directly comparable.\\n\\nTo further illustrate, let us consider a simple example. Assume that $x=4$ is an exact critical point. One definition asserts that $(x,y,z)$ is an $\\\\epsilon$-critical point if $|x-y|+|x-z|+|y-4|\\\\leq \\\\epsilon$. Another definition may assert that $(x,y)$ is an $\\\\epsilon$-critical point if $|x-y|+|\\\\sqrt{x}-2|\\\\leq \\\\epsilon$. Both definitions are reasonable.\"}", "{\"summary\": \"This paper proposes an ADMM for solving a class of structured fractional minimization problems. The main techniques are based on smoothing methods and two well-established approaches for fractional minimization problems. The convergence rate of the proposed ADMM is established. Some numerical results are reported to show the efficiency of the proposed ADMM. Overall, the topic and results are interesting.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors discuss a class of fractional minimization problems, which seem not to be well addressed in the literature. The authors provide corresponding theoretical analysis and numerical results.\", \"weaknesses\": \"The results in Lemmas 3.9 and 3.10 are standard in the literature. The authors do not need to prove them in the appendix and should not claim the results ``represent novel contributions.''\\n\\nAfter using the smoothing techniques, the hard term $h(Ax)$ will become $h_{\\\\mu}(Ax)$. The authors could then use some standard methods from the fractional minimization community to solve this problem. 
The corresponding complexity will also be $\\\\mathcal{O}(\\\\epsilon^{-3})$. The authors might want to comment on this. It seems that Bot et al. (2023) also used this smoothing technique.\", \"questions\": [\"The authors do not explain Assumption 5.1 well in Remark 5.2. By the algorithm, the iterate $x^t$ is only in the domain of $\\\\delta(\\\\cdot)$. You cannot say that it lies in a bounded set. The same issue occurs in Lemma 5.5. It might not be safe to assume that $x^t$ is bounded.\", \"The notation $\\\\underline{\\\\mathrm{Fd}}$ and $\\\\overline{\\\\mathrm{Fd}}$ should be $\\\\underline{F} \\\\, \\\\underline{d}$ and $\\\\overline{F} \\\\, \\\\overline{d}$, respectively.\", \"If the $\\\\epsilon$-critical point is similar to that in Bot et al. (2023b), will the same complexity results hold?\", \"What is the method SGM mentioned in Section 7?\", \"The authors might need to compare their method to that in Bot et al. (2023b), at least for the Robust SRM problem.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Question 6: What is the method SGM mentioned in Section 7?**\\n\\n**Response:** This is a typo; it should read \\\"Subgradient Projection Methods (SPM).\\\" Thank you for your careful reading and for pointing this out.\\n\\n**Question 7:The authors might need to compare their method to that in Bot et al. (2023b), at least for the Robust SRM problem.**\\n\\n**Response:** \\n\\n1. Upon request, we have included a comparison with Bot et al.'s Fully Splitting Algorithm (FSA) across all three applications: sparse FDA, robust SRM, and robust sparse recovery.\\n\\n2. We adapted the original algorithm from (Bot et al., 2023b) to our notation to solve Problem (1). Refer to Section H for the implementation details.\\n\\n\\n3. **Figures 1,2,3,4,5,6,7,8,9** in the updated manuscript present the experimental results. 
Additionally, we have updated the supplementary material to ensure reproducibility.\"}", "{\"comment\": \"We are grateful to the reviewer for the time spent reviewing our manuscript.\\n\\n**Question 1: I think the writing of this paper can be further improved.**\\n\\n**Response:** \\n\\nThank you for your thorough review and for pointing out the typos in our manuscript. We will carefully consider your suggestions and make the necessary revisions to improve the writing.\"}", "{\"title\": \"Reply to: Regarding Question 2\", \"comment\": \"**Question. When I write that the proximal operator is not well-defined for the indicator of the set of orthogonal matrices, I mean that it is not unique in general. What you\\u2019ve written in G.1 is in the argmin of the projection subproblem but it is not unique - what happens if I pick something else in this set? There can be more than one orthogonal matrix whose distance to the current matrix is equal.**\\n\\n**Response.** \\n\\nThe subproblem is not required to have a unique solution but must be solved to global optimality (See Assumption 3.3). Note that we obtain the following two essential conditions for the nonconvex subproblem:\\n\\n1. The necessary first-order optimality condition\\n\\n2. The necessary and sufficient zero-order optimality condition (See L1343-1344 and L1514-1515)\\n\\nTo address your question, we do not \\\"pick something else in this set\\\" because we select the globally optimal solution for the nonconvex proximal subproblem. 
The solution of the subproblem does not need to be unique, just as the critical point is not necessarily unique.\"}", "{\"title\": \"Reply to: Regarding Question 3 (Lemmas 3.9 and 3.10)\", \"comment\": \"Note that in our revised submission, we did not claim Lemma 3.9 as our contribution and explicitly stated:\\n\\n\\\"Lemma 3.9 and Lemma 3.10 can be derived using standard convex analysis and are fundamental to the analysis of the proposed FADMM algorithm.\\\"\\n\\nThank you for bringing these references to our attention. We will include citations to them in the revised paper.\"}" ] }
DcMPfSTLN2
iART - Imitation guided Automated Red Teaming
[ "Sajad Mousavi", "Desik Rengarajan", "Ashwin Ramesh Babu", "Vineet Gundecha", "Ricardo Luna Gutierrez", "Antonio Guillen", "Avisek Naug", "Soumyendu Sarkar" ]
The potential of large language models (LLMs) is substantial, yet they also carry the risk of generating harmful responses. An automatic "red teaming" process constructs test cases designed to elicit unfavorable responses from these models. A successful generator must provoke undesirable responses from the target LLMs with test cases that exemplify diversity. Current methods often struggle to balance quality (i.e., the harmfulness of responses) and diversity (i.e., the range of scenarios) in testing, typically sacrificing one to enhance the other, and relying on non-optimal exhaustive comparison approaches. To address these challenges, we introduce an imitation-guided reinforcement learning approach to learn optimal red teaming strategies that generate both diverse and high-quality test cases without exhaustive searching. Our proposed method, Imitation-guided Automated Red Teaming (iART), is evaluated across various LLMs fine-tuned for different tasks. We demonstrate that iART achieves not only diverse test sets but also elicits undesirable responses from the target LLM in a computationally efficient manner.
[ "Automated Red-teaming", "Large Language Models (LLMs)", "Reinforcement Learning", "Imitation" ]
https://openreview.net/pdf?id=DcMPfSTLN2
https://openreview.net/forum?id=DcMPfSTLN2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "Qzc6cOGRXf" ], "note_type": [ "comment" ], "note_created": [ 1728502701498 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4472/Authors" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
DcJuTtfYss
INDIRECT ATTENTION: IA-DETR FOR ONE SHOT OBJECT DETECTION
[ "Bissmella Bahaduri", "Hicham Talaoubrid", "Fangchen FENG", "Zuheng Ming", "Anissa Mokraoui" ]
One-shot object detection presents a significant challenge, requiring the identification of objects within a target image using only a single sample image of the object class as the query image. Attention-based methodologies have garnered considerable attention in the field of object detection. Specifically, the cross-attention module, as seen in DETR, plays a pivotal role in exploiting the relationships between object queries and image features. However, in the context of DETR networks for one-shot object detection, the intricate interplay among target image features, query image features, and object queries must be carefully considered. In this study, we propose a novel module termed "indirect attention." We illustrate that relationships among target image features, query image features, and object queries can be effectively captured in a more concise manner compared to cross-attention. Furthermore, we introduce a pre-training pipeline tailored specifically for one-shot object detection, addressing three primary objectives: identifying objects of interest, class differentiation, and object detection based on a given query image. Our experimental findings demonstrate that the proposed IA-DETR (Indirect-Attention DETR) significantly outperforms state-of-the-art one-shot object detection methods on both the Pascal VOC and COCO benchmarks.
[ "One shot object detection", "DETR", "cross-attention" ]
https://openreview.net/pdf?id=DcJuTtfYss
https://openreview.net/forum?id=DcJuTtfYss
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wY22fRtYsK", "wFHu4BZ5K2", "rJcw26mtK9", "qztEtUhTQV", "nfPWkKE1CU", "lEKJZzG4to", "idvcOQaHqI", "eiE5SH1yCo", "YLaMUZqEOL", "THRsgdJmrx", "LDiRzFB5BM", "Iucrn3Ahez", "Gc9yOCJh9o", "EX9kxQR51B", "EVIEn3IirX", "Adh61vEWDt", "27Rw7aCi5f", "23FzichBku" ], "note_type": [ "official_comment", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732722799390, 1732203405604, 1730424034080, 1737566522025, 1732709672830, 1732445719580, 1730511835202, 1732613407139, 1732634802880, 1732726857431, 1732203393964, 1732498802936, 1732203573426, 1730363152072, 1730458870514, 1732248018675, 1732680077633, 1732203399764 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_h13P" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_GBux" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_GBux" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_xquQ" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_h13P" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_As9n" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_xquQ" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_h13P" ], [ "ICLR.cc/2025/Conference/Submission6371/Reviewer_As9n" ], [ "ICLR.cc/2025/Conference/Submission6371/Authors" ] ], "structured_content_str": [ 
"{\"comment\": \"Dear reviewer,\\n\\nThanks a lot for your response.\\n1. We would be happy to further clarify how the relation between object queries and support features (or query features) reflects the importance of the image feature.\\\\\\n**Considering the first indirect-attention block**: as mentioned previously, in our approach, the object queries\\u2014comprising a combination of learnable parameters and the top-k target image features\\u2014serve as the query in the indirect-attention mechanism. These queries are multiplied with the query image features (or support image features) to compute the attention scores. Next, a softmax is applied to these attention scores, converting them into probability distributions between 0 and 1, giving higher weights (closer to 1) to object queries that are well-aligned with the support image (or query image), and lower weights (close to 0) to the object queries irrelevant to the support image features. Why does this reflect the importance of target image features? We do not forget that object queries were not only learnable parameters but also **top-k target image features**.\\\\\\nNow, you are right that there is nothing significant about target image features up to here apart from the top-k target features added to object queries. But, in the next step we add box positional bias (BoxRPB) to these attention scores. The BoxRPB reminds each object query (attention score) where it belongs spatially on the target image features. So the queries (attention scores) are now informed about their relevance to query image features and also about their spatial position in target image features (thanks to the addition of BoxRPB). 
Multiplying the \\\"informed\\\" object queries with the target image features, we predict the boxes and classes.\\\\\\n**Subsequent blocks**: going to the next block of indirect-attention, the object queries are already affected by the target image features in the last operation of the previous block (multiplication with target image features), so again the multiplication with query image features and softmax indeed reflects the importance of target image features at each block. We hope it is clearer now.\\n\\n2. Thanks for your detailed look into the experiments section and apologies that it seems a bit confusing at first. The full model is based on BoxRPB, indirect attention, early backbone freezing, and contrastive-loss based pretraining. However, the main components are BoxRPB and indirect-attention, which are highly interdependent as well. So the difference between the result in table-1 and the last row of table-3 is the missing early backbone freezing and contrastive-loss based pretraining in table-3. This is because in table-3 our main focus is on ablating BoxRPB, indirect-attention and the interplay between them.\\\\\\nThe discrepancy between table-1 and row-3 of table-3 is explained by table-6. In table-6 we add the two additional components (early backbone freezing, and contrastive-loss based pretraining) back and achieve the result in table-1.\\\\\\nAs you have noticed, there is a huge difference between seen and unseen classes in the ablations part until table-6. This gap is relatively closed mainly by early backbone freezing but, as you have mentioned, at the cost of a drop in performance on the seen classes. This early backbone freezing prohibits the backbone from overfitting on the seen classes, which leads to significantly better generalization to unseen classes. As per your suggestion, we have slightly expanded on this in our new submission.\"}", "{\"comment\": \"Thanks a lot for your feedback and guidance.\\n1. 
Thanks for suggesting that, we will enhance the introduction part in the final version.\\n\\n2. We provide the number of parameters in both settings (double cross-attention and indirect attention) in table 3. The computational complexity comparison of the two is as follows:\\n| Method | Image Size | FLOPs (G) | Memory (GB) | \\n|---|---|---|---| \\n| Double Cross-Attention | 512x640 | 186.3 | 9.7 | \\n| Double Cross-Attention | 1024x1024 | 536.3 | 26.7 | \\n| Indirect Attention | 512x640 | **173.7** | **9.4** | \\n| Indirect Attention | 1024x1024 | **478.2** | **23.3** |\\n\\n3. Yes, all the blocks in the decoder are based on indirect-attention.\\n4. Thanks for pointing this out; we have expanded on this considering the fact that the MIM pretraining is better because it does not rely on labelled images and allows for pre-training on a large pool of images, alleviating the problem of limited labelled data availability that OSOD tries to tackle.\"}", "{\"summary\": \"The paper proposes a novel DETR structure named \\\"IA-DETR\\\" that aims to more effectively capture the relationship among the target image features, query image features, and object queries. The method is evaluated on the standard PASCAL and COCO datasets, and the experimental results indicate that the proposed method has surpassed the existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is novel and may have a positive effect on other related research.\\n2. The experimental results indicate that the method has achieved SOTA on the standard benchmark.\\n3. The motivation is clear, and the paper is well structured.\", \"weaknesses\": \"1. Although the motivation is clearly stated, the first two paragraphs are slightly tedious. The author could consider trimming this introduction and making the motivation more straightforward.\\n2. 
In line #077 ~ #082, the paper states that one motivation of the proposed method is to ease the computational overhead in an existing method caused by additional cross-attention. To support this assumption, it is necessary to include an ablation study regarding the computational expense. If I missed this, please point it out during the rebuttal.\\n3. According to recent works on DETR (e.g., SQR [Chen et al.]), it is not only the final layer of the decoder that produces the correct prediction results; the output of the middle layer of the decoder sometimes produces better results. Is the proposed indirect-attention applied to each layer of the decoder? Is it possible that only applying on several layers of the decoder would get better performance?\\n4. In the last two rows of Table 6, the MIM pre-trained backbone decreases the AP50 of seen categories but increases the unseen categories. While the paper claims that the MIM is not very significant, then there should be more discussion about why the MIM is still necessary here.\\n\\n\\n[Chen et al.] Enhanced Training of Query-Based Object Detection via Selective Query Recollection, CVPR, 2023\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Respond to the Rebuttal\", \"comment\": \"Dear Authors,\\n\\nThanks to the author's effort in providing a rebuttal. The rebuttal addresses some of my concerns, but I would like to retain my original rating. I still have the following concerns, \\n\\n1. Even though the reviews explain the key, value, and query are not necessarily the same. However, it still does not make sense to me how the cross-attention between object queries and support reflects the importance of the image feature. 
It still needs more theoretical reasoning on this design. \\n\\n2. It is hard to decide whether the results are based on a clear setting and solid experiments. Isn't the full model based on the baseline but with BoxRPB and indirect attention? What's the difference between Table 1 and the last row of Table 3 in the experimental setting? In addition, it seems the novel design sacrifices the performance of seen classes and gains on unseen classes. Comparing Table 1 and Table 3, the proposed IA-DETR in Table 1 has dramatically better results on unseen classes, with ~16% improvement, and worse results on seen classes with a ~10% decrease. What causes such a large difference? It should be included in the method and the experimental results, and this information should definitely be considered when evaluating the work.\"}", "{\"comment\": \"Apologies that we did not notice the second part of question #3 at first. In fact, due to the mentioned issue that the intermediate blocks in the decoder may produce better results, subsequent works on DETR follow iterative refinement [1], in which each decoder block predicts deltas on the total predicted box up to that block, and the box loss is calculated on each block of the decoder. However, the idea of using indirect-attention in only some of the blocks seems interesting. Thus, in a new experiment we used indirect-attention in the first three blocks and switched to normal cross-attention (between object queries and target image features) in the last three blocks (there are a total of 6 blocks). However, the performance drops on unseen classes. The following are the results, compared with using indirect-attention in all decoder blocks:\\n| Method | Seen Classes | Unseen Classes | \\n|---|---|---|\\n| IA in first 3 blocks only | 81.72 | 61.15 |\\n| IA in all blocks | **82.94** | **65.13** | \\n\\n[1]: Zhu, Xizhou, et al. 
\\\"Deformable detr: Deformable transformers for end-to-end object detection.\\\" ICLR (2021).\"}", "{\"summary\": \"This work introduces a method for one-shot object detection, a domain that requires training only on base categories without fine-tuning on novel categories. Specifically, the method employs an architecture comprising solely a feature encoder, i.e., a backbone model, and a decoder with indirect attention, which processes queries, keys, and values from distinct sources rather than the same source as in conventional cross-attention mechanisms. Furthermore, a two-stage training approach is utilized, involving pretraining with a hybrid strategy that combines supervised and self-supervised learning, followed by a fine-tuning stage. The incorporation of box-to-pixel relative position bias and contrastive loss enhances performance. Comprehensive experiments on the Pascal VOC and COCO datasets are conducted and results for both seen (base) and unseen (novel) categories are reported in the work.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper is overall well-written, with just a few typos to address.\\n2. Rather than employing a standard architecture with a backbone, encoder, and decoder, the proposed work eliminates the feature encoder for support-query aggregation and introduces a cross-attention mechanism that directly processes the support image, target image, and object queries. This design appears to offer a more generalized solution that facilitates direct application without the need for fine-tuning.\", \"weaknesses\": \"1. The primary novelty of this paper, the indirect attention mechanism, lacks a clear theoretical foundation. Indirect attention takes Q as the object query, K as the support (query) image feature, and V as the entire image feature, thereby adjusting the feature for each object query based on the global image feature and the similarity between support and object queries. 
Given that both the support and object queries represent local aspects of an object, it remains unclear how the mechanism determines the channel weights of the image feature, which encompasses multiple objects as well as the background.\\n2. The results in the ablation studies do not align with those presented in the main table, Table 1. Specifically, the AP0.5 for seen categories in Table 1 is 73.5, whereas the results for seen categories in Tables 3-5 are reported as higher, despite all evaluations being conducted on the Pascal VOC dataset according to line 403. This inconsistency undermines the persuasiveness of the experimental results. In addition, it lacks experimental comparison with recent studies that are published in 2023 and 2024.\", \"questions\": \"Here are more comments for your concern,\\n\\n1. Typos\", \"line_292\": \"3.5 TRAINING STRATEGY: --> 3.5 TRAINING STRATEGY (no colon here).\", \"line_246\": \"where the whre --> where the\\n\\n2. A curious question here: as claimed in line 208-209, \\\"in the decoder, instead of using ...., we proposed indirect attention, that directly exploits ....\\\" since it is a direct exploitation, why is the process named \\\"indirect\\\" attention? \\n\\n3. What's the query patch in 322? Shouldn't the output only contain vectors corresponding to each object query and the background? \\n\\n4. In 299, it is claimed that this work adopts two-stage training, pretraining and finetuning. As claimed in the related work, one-shot object detection (OSOD) does not allow for finetuning, so is it unfair to compare with other OSOD methods? Hope the authors can explain more on the finetuning stage. \\n\\n5. In line 197, it is assumed that \"the target image contains at least one instance of the same class as the object in Q\", here Q is the query image. 
The assumption actually cannot hold in realistic settings, as for detection we don't know the content of the target image, which may or may not include the support(query) class\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Respond to the Rebuttal\", \"comment\": \"Dear authors,\\n\\nThanks for the response, I have read the rebuttal. The comparison results of \\\"simple fusion + residual\\\" are actually very close to the proposed method, especially in the seen classes. and I believe that the results of both \\\"simple fusion\\\" and \\\"simple fusion + residual\\\" could be further improved with more proper training of the newly added MLP (since it is totally randomly initialized). Thus, I still have concerns about the design of indirect attention. I will maintain my initial scores.\"}", "{\"comment\": \"Dear reviewer,\\n\\nThanks for your response. The result reported as \\\"simple fusion\\\" + residual is not as basic as it might initially appear. Beyond the incorporation of MLPs specific to the query and target image features, we also introduced additional convolutional blocks for each feature type independently after the fusion stage. All that is in addition to a cross-attention module following it. To clarify further, just the first cross-attention block in the \\\"double cross-attention\\\" setting has been replaced by the MLP + convolutions fusion (simple fusion). So here, we are comparing only one module of indirect-attention with \\\"simple fusion\\\" + a cross-attention module while our indirect-attention module is equivalent only to the cross-attention module part removing the need for \\\"simple fusion\\\" and enhancing the performance at the same time. 
In the results provided in this table, apart from the backbone, all other parameters are randomly initialized, including the indirect-attention.\\\\\\nIn addition, the critical performance metric in OSOD is the performance on unseen classes. While the \\\"simple fusion\\\" performs comparably to double cross-attention in seen classes, it significantly underperforms in unseen classes\\u2014precisely where our indirect-attention mechanism demonstrates its strength.\\nWe hope this clarification resolves the concern.\"}", "{\"comment\": \"Dear reviewer,\\nThanks a lot for your response.\\n1. Thank you for your thoughtful feedback and for highlighting this intriguing observation. Comparing backbones of different architectures trained on distinct tasks is indeed challenging. Our explanations for this issue are as follows:\\\\\\n**Pretraining objectives of two backbones and dataset characteristics**: Firstly, ResNet50 is trained on a classification task and the objects in PascalVOC are not dense compared to the COCO dataset. Most of the images in PascalVOC have a single big object in the middle, making the classification part of the problem heavier than the localization part, which makes it well aligned with ResNet-50's pretraining, which is mainly classification. On the other hand, it is true that SWIN has been trained on the full ImageNet but with a masked image modelling task, without seeing the class labels, which makes it different. In fact, one can say that SWIN has also been trained on a reduced ImageNet, considering that the class labels are an important part of the dataset.\\\\\\n**Seen vs. unseen classes generalization tradeoff**: Secondly, based on our observations, the higher performance on seen classes can be explained by the lower performance on unseen classes: generalizing on unseen classes demands sacrificing performance on seen classes. In other words, the better the backbone overfits on seen classes, the worse it will get on unseen classes. 
This can be supported by table-6 in our experiments, where it can be seen that performance on unseen classes increases substantially when we apply early backbone freezing, but at the cost of a drop on seen classes. We have expanded slightly on this in our new revision.\\\\\\nTo support our first point, and as suggested by the reviewer, we have done a limited test of the ResNet50 backbone **only on split-1** of COCO (60 classes as seen and 20 classes as unseen). Unlike on PascalVOC, here we do not see very high performance on seen classes for ResNet-50:\\n| Method | Seen Classes | Unseen Classes | \\n|---|---|---| \\n| IA-DETR with ResNet50 | 52.6 | 26.5 | \\n| IA-DETR with SWIN | **53.2** | **27.3** |\\n\\n3. Thanks for recommending this. In our updated submission we have reflected on this at the beginning of the experiments section. The quantitative result for a different combination (query image as value, target image as key) is simply 0 for both seen and unseen classes, even if trained for more epochs. It is important to mention that the roles of object queries, query image and target image features are fixed in the double cross-attention setting, so we can\\u2019t make comparisons in this regard.\\n\\n4. We do agree on this and we have mentioned this as a possible limitation in subsection 5.2 as the first point.\\n\\nWe hope that the provided clarifications resolve the concerns.\"}", "{\"comment\": \"1. We thank the reviewer for the comment. The indirect-attention mechanism operates as a principled extension of transformer attention instead of a heuristic modification. While classical attention presumes alignment between keys and values derived from the same source, our key insight is that such alignment is not always obligatory. 
By decoupling keys and values, and allowing them to originate from distinct sources, indirect-attention enables a more flexible and robust form of feature interaction, particularly advantageous for OSOD.\\nAlthough the precise theoretical underpinnings of this mechanism require further exploration, we attempted to gain insights through the visualizations presented in Section 5.1. Based on our observations, the mechanism appears to operate as follows: Object queries\\u2014composed of learnable parameters combined with top-k target image features\\u2014serve as a bridge between the query and target images. Specifically, when these queries attend to features in the query image, they activate selectively in regions that correspond to the object of interest. Subsequently, position bias grounds these activated queries within the spatial context of the target image. This interplay likely creates a precise attention flow, where features from the query image guide the selection of relevant regions in the target image via the intermediary object queries.\\n\\n2. We thank the reviewer for the comment. We\\u2019d like to clarify the apparent discrepancy between Tables 1,2 and 3-5: This reflects experimental design rather than inconsistency. Tables 3-5 use a controlled setting without pretraining and backbone freezing to isolate indirect attention's contribution. This clean-room approach is essential for rigorous ablation studies. Table 6 then systematically reintroduces these components to demonstrate their complementary benefits.\", \"questions\": \"1. We appreciate the attention to detail on typos, we rectified them accordingly.\\n2. Your question about the \\\"indirect\\\" terminology is insightful. 
The name reflects how attention between target and query images is mediated through object queries - unlike traditional direct cross-attention between two sequences, we use a third, partially learnable sequence (object queries) that orchestrates the interaction, hence \\\"indirect.\\\"\\n3. The query patch here is the random crop of the image that is used as a query to extract the position where it belongs in the main image. Yes, that is correct: the output just contains either the background or the relevant part of the image. However, since the query patch also belongs to the same image, it is mentioned that it does not take part in the loss calculation.\\n4. The term \\\"finetuning\\\" has caused some understandable confusion. To be absolutely clear: both training stages (pretraining and finetuning) train exclusively on seen classes. So our approach maintains strict OSOD constraints throughout.\\n5. The assumption about target images containing query class instances is indeed a general OSOD limitation, which we've acknowledged in our limitations discussion in section 5.2.\"}", "{\"title\": \"Thanks for the clarification\", \"comment\": \"Thanks, that's an interesting observation, which also strengthens the effectiveness of IA, and I believe it addressed my concern.\"}", "{\"comment\": \"Thank you for your critical review. We address your concerns with additional empirical evidence that demonstrates the robustness of indirect-attention:\\n1. Thanks for mentioning the ResNet50 backbone. Here is the result with the ResNet50 backbone on Pascal VOC, pretrained on reduced ImageNet:\\n| Method | Seen Classes | Unseen Classes | \\n|---|---|---| \\n| BHRL | 69.7 | 73.8 | \\n| IA-DETR with ResNet50 | **77.8** | **79.56** |\\n\\n2. The quadratic scaling of computational complexity with image size is inherent to transformer attention mechanisms. 
However, our indirect attention is more efficient than double cross-attention by design - it requires half the attention blocks while achieving superior performance. The following are further results on computational complexity, comparing the double cross-attention setting with indirect-attention.\\n| Method | Image Size | FLOPs (G) | Memory (GB) | \\n|---|---|---|---| \\n| Double Cross-Attention | 512x640 | 186.3 | 9.7 | \\n| Double Cross-Attention | 1024x1024 | 536.3 | 26.7 | \\n| Indirect Attention | 512x640 | **173.7** | **9.4** | \\n| Indirect Attention | 1024x1024 | **478.2** | **23.3** |\\n\\n3. We have tried different permutations of object queries, target image features, and query image features as key, query, and value. However, the object queries need to be the query, but the target and query image features can be permuted. We have tried a different variation by setting target image features as key and query image features as value, but the model eventually fails to learn anything even during extended training time. Intuitively, it also makes sense for the target image features to be set as value and not as key, since they are ultimately the source of the final box and class predictions.\\n\\n4. The interplay between BoxRPB and indirect attention is an important consideration that our ablation studies help clarify. As shown in row 3 of Table 3, while BoxRPB contributes to performance and is important for indirect-attention, indirect attention plays a crucial role - particularly for unseen classes, where we observe a significant performance drop when indirect-attention is removed and only BoxRPB is relied upon. It is worth mentioning that BoxRPB is an enhanced position bias [1, 2] specific to object detection. 
Similarly, Table 6 shows performance improvement by adding the contrastive loss though not very substantial.\", \"questions\": \"We hope the explanations and further experiment results provided above can answer the concerns.\\n\\n[1]: Bao, Hangbo, et al. \\\"Unilmv2: Pseudo-masked language models for unified language model pre-training.\\\" International conference on machine learning. PMLR, 2020.\\n\\n[2]: Liu, Ze, et al. \\\"Swin transformer: Hierarchical vision transformer using shifted windows.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\"}", "{\"summary\": \"The paper addresses the limitations of the double cross-attention strategy found in current one-shot object detection (OSOD) methods and presents the indirect-attention (IA) module as a practical alternative. In addition to the IA, the proposed IA-DETR model enhances the OSOD task by incorporating Box-to-pixel relative position bias (BoxRPB) and a contrastive pre-training pipeline specifically designed for the one-shot object detection head. Experimental results demonstrate the effectiveness of this model on the Pascal VOC and COCO datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The concept of indirect attention is straightforward and can be easily understood and implemented.\"], \"weaknesses\": [\"The experiments conducted may have issues regarding fairness. When evaluating the effectiveness of various OSOD methods, the choice of backbone architecture is significant. The proposed IA-DETR utilizes SWIN-based MIM pre-trained weights as its backbone, which differs from the more commonly used ResNet50 and reduced-ImageNet pre-trained weights in existing OSOD methods. It would be beneficial to first validate the proposed model architecture with the same backbone before progressing to a stronger one. 
Additionally, it's important to note that in the OSOD task, the dataset used for obtaining the pre-trained weights should exclude any classes that are present in the Pascal VOC and COCO datasets.\", \"The authors assert that the double cross-attention block results in a quadratic increase in computational cost as the number of features increases. However, their experiments do not provide sufficient support or clarification on how this increased computational burden impacts an OSOD model.\", \"In the proposed IA-DETR, it is interesting to consider alternative combinations of object queries, query image features, and target image features for their roles as query, key, and value. Including comparisons of these variations in the ablation study would enhance the comprehensiveness of the research.\", \"The findings from the ablation study indicate that the performance improvements attributed to the proposed indirect attention mechanism and the contrastive pre-training pipeline are quite modest. It appears that the overall effectiveness of the model is more significantly influenced by the backbone and the BoxRPB component. Consequently, the technical contributions of these enhancements are somewhat constrained.\"], \"questions\": \"The primary concern of this manuscript is on the experiments conducted. It is important to assess the appropriateness of utilizing the SWIN-based MIM pretrained backbone. Additionally, the ablation study does not adequately demonstrate the contributions of the indirect attention mechanism and the contrastive pre-training pipeline as claimed by the authors. 
To enhance the manuscript, a more detailed analysis should be included to clarify the effects of the proposed indirect attention and the contrastive pre-training approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Overall:\\nThis paper introduces IA-DETR, a novel one-shot object detection model that uses indirect attention to efficiently capture relationships between target image features, query image features, and object queries. The proposed method significantly outperforms state-of-the-art techniques on the Pascal VOC and COCO benchmarks, demonstrating its effectiveness and efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Good formulas and figures.\", \"The idea of extending the transformer attention mechanism to three distinct elements is simple.\", \"Comprehensive experiments yield robust and state-of-the-art results.\", \"The concept of IA-DETR is innovative for OSOD (One-Shot Object Detection).\"], \"weaknesses\": [\"Novelty is limited: the new technical element proposed in this paper is \\\"indirect attention\\\", which differs from previous attention by using two different inputs for K and V. However, this idea seems direct and too simple without other technical contributions.\", \"The experimental analysis of the indirect attention is not comprehensive. Such a manner could be regarded as using the K and V layers to fuse the features of the input K (query image features P) and V (target image features T); how about the comparison result of first using some other simple fusion method, e.g., MLP([P, T]), and then the typical cross-attention?\", \"The experiments in Tables 1 and 2 are not based on multiple runs, which weakens the robustness of the proposed method.\", \"The paper does not explicitly state whether the indirect attention method is applied during the pre-training stage. 
Given that the main challenge in OSOD is the scarcity of positive samples, and the proposed method succeeds during fine-tuning, it should ideally also be effective in the pre-training stage, where there are more positive samples available. Therefore, if indirect attention is applied during pre-training, results for this stage should also be presented\", \"Minor: Figure 3 is too large.\", \"The left and right quotation marks do not match on lines 18 and 83.\"], \"questions\": \"please see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"none\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the clarification\", \"comment\": \"The authors have addressed most of my concerns. Though I'm still hesitant about the Q3, but overall, I think the authors have done substantial revising to their manuscript. And I approve their claim that \\\"we challenge and overturn the long-standing assumption that attention keys and values must be aligned\\\".\"}", "{\"title\": \"Respond to the Rebuttal\", \"comment\": \"Thanks to the author's effort in providing a rebuttal.\\n1. Upon examining the results, it is intriguing that ResNet50, when trained on the reduced Imagenet, can outperform the original IA-DETR using SWIN trained on the full Imagenet. It is better to clarify the reasoning behind this unexpected outcome, particularly how a model with a seemingly weaker backbone and less data achieves superior results on the seen classes. Additionally, how is the performance of IA-DETR with ResNet50 being assessed in the COCO dataset?\\n2. Thanks for the information.\\n3. It is recommended supporting these findings with quantitative experimental results and comparing them to the traditional double cross-attention method. \\n4. 
It is also noteworthy that Table 3 highlights that the role of BoxRPB is more significant than that of direct-attention.\"}", "{\"comment\": \"We thank the reviewer for the comments.\\n1. While our indirect attention mechanism may appear straightforward, this simplicity masks a fundamental contribution: we challenge and overturn the long-standing assumption that attention keys and values must be aligned. This implicit constraint has gone unquestioned in attention mechanisms since their inception. Breaking free from this assumption enables a new class of attention operations empirically validated on and particularly suited for OSOD. Although the precise theoretical underpinnings of this mechanism require further exploration, we attempted to gain insights through the visualizations presented in Section 5.1.\\n\\n2. The interpretation of indirect attention as simple feature fusion between K, and V offers an interesting perspective, but we believe the mechanism operates quite differently. In standard attention, K and V represent different embeddings of the same sequence, operating as distinct projections rather than elements to be fused. Our indirect attention extends this principle: K and V maintain their independent roles through multiple attention blocks, just as in standard attention, with the key innovation being their origin from different sources. We conduct the experiment with an alignment method consisting of a few layers of convolutions and MLPs as mentioned by the reviewer on pascalVOC and the result is as below. We noticed that just a simple fusion gives a very bad result but it gets better with a residual connection with the original target image feature (fusion(target image, query image) + target_image). 
This result demonstrates that the benefits of indirect attention are in fact more than simple feature fusion.\\n| Method | Seen Classes | Unseen Classes | \\n|---|---|---|\\n| Simple Fusion | 26.6 | 30.7 |\\n| Simple Fusion + residual | 82.1 | 62.7 | \\n| Indirect-Attention | **82.94** | **65.13** |\\n\\n3. All results in every table represent averages over 5 runs, each with different randomly selected query images. Though not explicitly said in the experiment section, it has been mentioned in section 5.3. This protocol ensures our findings are stable across query image variations.\\n\\n4. Yes, indirect attention is indeed used during pretraining, but with the backbone frozen, which is not yet ready for the OSOD task. As requested, here are the pretraining-only results (5-run average) on Pascal-VOC:\\n| Method | Seen Classes | Unseen Classes |\\n|---|---|---|\\n| Pretraining only | 11.26 | 14.6 |\"}"
] }
DcG4YnbOT3
Vision-Enhanced Time Series Forecasting by Decomposed Feature Extraction and Composed Reconstruction
[ "Mingyang Yu", "Peng Chen", "Xiahui Guo", "Zhenkai Li", "Yang Shu" ]
Time series forecasting plays a crucial role in various domains, such as power and weather forecasting. In recent years, different types of models have achieved promising results in long-term time series forecasting. However, these models often produce predictions that lack consistency with the style of the input, resulting in reduced reliability and trust in the forecasts. To address this issue, we propose the Vision-Enhanced Time Series Forecasting by Decomposed Feature Extraction and Composed Reconstruction (VisiTER), which leverages the rich semantic information provided by the image modality to enhance the realism of the predictions. It consists of two main components: the Decomposed Time Series to Image Generation and the Composed Image to Time Series Generation. In the first component, the Decomposed Time Series Feature Extraction Model extracts periodic and trend information, which is then transformed into images using our proposed time series to vision transformation architecture. After converting the input time series into images, the resulting images are used as style features and concatenated with the previously extracted features. In the second component, we use our proposed TimeIR along with the previously obtained feature set to perform image reconstruction for the prediction part. Due to the rich information provided, the reconstructed images exhibit better consistency with the input images, which are then transformed back into time series. Extensive experiments on seven real-world datasets demonstrate that VisiTER achieves state-of-the-art prediction performance on both traditional metrics and new metrics.
[ "Time Series Forecasting" ]
https://openreview.net/pdf?id=DcG4YnbOT3
https://openreview.net/forum?id=DcG4YnbOT3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yB9BkyAUr1", "y4AbAWJAnD", "tah06YwHiS", "psD531fC0v", "hovTS3sYUh", "bsdlBUgy7c", "YCUgaIkV3Q", "UOA1sQyUoE", "TuQfO7HACK", "M2rODOz7b1", "I8kktJF8tD", "FsApFM0SjA", "DRZHp9aKQp" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732160852524, 1732161131185, 1732537565144, 1732160940387, 1732160799001, 1730548879438, 1730515275840, 1733007122456, 1732537613359, 1733394867931, 1732537593736, 1730790515169, 1732161036743 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Reviewer_MNU9" ], [ "ICLR.cc/2025/Conference/Submission3717/Reviewer_xofX" ], [ "ICLR.cc/2025/Conference/Submission3717/Reviewer_xofX" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ], [ "ICLR.cc/2025/Conference/Submission3717/Reviewer_pp94" ], [ "ICLR.cc/2025/Conference/Submission3717/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer pp94\", \"comment\": \"# W3:Missing dataset\\n\\nThank you for your reminder. 
We have added an evaluation of our model on these two datasets using the MSE metric, and the results are as follows:\\n\\n| dataset | ours | Crossformer | iTransformer | TiDE | PatchTST |\\n| ---------------- | --------- | ----------- | ------------ | ----- | -------- |\\n| Traffic_96 | 0.435 | 0.522 | **0.395** | 0.805 | 0.462 |\\n| Traffic_192 | 0.446 | 0.530 | **0.417** | 0.756 | 0.466 |\\n| Traffic_336 | 0.461 | 0.558 | **0.433** | 0.762 | 0.482 |\\n| Traffic_720 | 0.494 | 0.589 | **0.467** | 0.719 | 0.514 |\\n| Traffic_AVG | 0.459 | 0.550 | **0.428** | 0.760 | 0.481 |\\n| Solar-Energy_96 | **0.200** | 0.310 | 0.203 | 0.312 | 0.234 |\\n| Solar-Energy_192 | **0.231** | 0.734 | 0.233 | 0.339 | 0.267 |\\n| Solar-Energy_336 | 0.250 | 0.750 | 0.248 | 0.368 | 0.290 |\\n| Solar-Energy_720 | **0.249** | 0.765 | **0.249** | 0.370 | 0.289 |\\n| Solar-Energy_AVG | **0.232** | 0.639 | 0.233 | 0.347 | 0.270 |\\n\\nFor the Solar-Energy dataset, our model surpasses the current best baseline on the MSE metric. For the Traffic dataset, which is characterized by a high number of variables, our model's performance ranks second among all baselines, just behind the iTransformer specifically designed for multivariate time series. In the future, we will further consider modeling the relationships between variables.\\n\\n# W4:Missing code\\n\\nThank you for your reminder. In this supplementary material, we have submitted our model's code, as well as the training code. The complete repository of our method will be made open-source in the future.\"}", "{\"title\": \"Response to Reviewer xofX\", \"comment\": \"We would like to sincerely thank Reviewer xofX for providing a detailed review and insightful comments. Based on the suggestions, we have revised our paper accordingly.\\n\\n# W1:Use of linking words\\n\\nThank you for your suggestions. 
We have made the corresponding modifications in the article to enhance its clarity and coherence.\\n\\n# W2:Ambiguous connection\\n\\nWe apologize for any confusion caused by our choice of words. Here, \\\"complexity\\\" does not refer to computational complexity, but rather to the difficulty of constructing an appropriate method for transforming time series into images. Our intention is to convey that existing methods that directly map time series into scatter plots can lead to information loss, and designing a method that avoids such loss is challenging.\\n\\n# W3:Clarification of model objectives\\n\\nThe reason we utilize the periodicity and trend of the predicted output is closely related to the characteristics of the image reconstruction model. Image reconstruction refers to the process of restoring a degraded image, where \\\"degraded\\\" indicates that the geometric structure of the image has been compromised. Given the limitations of image reconstruction models in capturing temporal dynamics, it is crucial to employ existing time series modeling techniques to predict the output's periodicity and trends. Consequently, the degraded image we maintain must reflect these predicted structural periodicity and trends. In other words, we need to use segments of the predicted results from the time series as the foundation for the image reconstruction process.\\n\\n# W4:Expanding in Y-dimension\\n\\nWe need to extend the Y-axis because this adjustment provides a more accurate representation of the computed loss. Considering this scenario, when the predicted values are close to the actual values, in the time series modality, the corresponding loss should be small. 
However, in the image modality, these values do not correspond to the same pixel if the Y-axis is not expanded, resulting in a significantly larger loss in the image MSE, which does not align with our understanding and requirements regarding the errors in actual predictive results.\\n\\nBy extending the Y-axis, we can ensure that as the predicted points get closer to the points in the ground truth (GT), the loss becomes smaller, which aligns with our expectations.\\n\\n# W5:The computational costs of the proposed method\\n\\nIn our model, the DTFE component includes a Transformer block, whereas the V2T component does not incorporate this block, as it serves as a conversion method that does not require training. We will first present the theoretical analysis of the computational complexity of the Transformer block in the DTFE component, where the sequence length is denoted as L, the patch size as P, the number of variables as N, and the number of Transformer layers as 1.\\n\\nWhen modeling with variables as tokens, the number of tokens is N and the dimensionality is L, resulting in a complexity of $O(N^2L)$. In contrast, when modeling using patches of time series, we consider patches as tokens, with the patch size defining the dimensionality, leading to a complexity of $O(\\frac{L^2}{P})$. The computational complexity for the periodic prediction part is denoted as $O(\\frac{L^2}{P})$, while that for the trend prediction part is denoted as $O(N^2L)$, resulting in an overall computational complexity for DTFE represented as $O(N^2L+\\frac{L^2}{P})$. We have selected commonly used models, iTransformer and PatchTST, for comparison, with their respective computational complexities denoted as $O(2N^2L)$ and $O(2\\frac{L^2}{P})$. The factor of 2 is included because both models have two layers.\\n\\nIt is important to note that the specific values of these computational complexities are influenced by different datasets and prediction lengths. 
Therefore, we tested the computational complexity on a specific dataset, namely the ETTh2 dataset, with a prediction length of 96, using GMac as the unit of measurement for the computational complexity.\\n\\n| models | MSE | computational cost (GMac) |\\n| ------------ | --------- | ------------------------ |\\n| DTFE | **0.286** | 0.08 |\\n| iTransformer | 0.297 | **0.02** |\\n| PatchTST | 0.302 | 0.14 |\\n\\nOur model achieves the best prediction results, with computational costs positioned between the two provided baselines.\"}", "{\"title\": \"Response to Reviewer MNU9\", \"comment\": \"We would like to sincerely thank Reviewer MNU9 for providing a detailed review and insightful comments. Based on the suggestions, we have revised our paper accordingly.\\n\\n# W1:The motivation for Image Transformation\\n\\nWe employ image reconstruction techniques for processing time series data because current time series models fail to capture the inherent geometric structural information. These models typically use MSE or MAE between corresponding time points for training, which only reflects numerical similarity. 
For instance, two prediction results may yield the same MSE, yet their styles can differ significantly. Consider an example where the ground truth (GT) is a sine function $y=sin(x)$, Time Series One is a shifted sine function $y=sin(x+a)+b$, and Time Series Two is a straight line $y=0$. By adjusting the magnitude of the shift in Time Series One, Time Series One and Time Series Two may have similar MSEs compared with GT. While the MSEs are similar, Time Series Two lacks geometric details and does not convey useful information such as periodicity and trends. Thus, low MSE alone is insufficient; we need to incorporate geometric structural similarity into our predictions.\\n\\nSince one-dimensional time series sequences cannot capture two-dimensional geometric features well, we introduced image reconstruction components to our approach. This allows us to capture the combined structural similarity of time series in a two-dimensional context, where one dimension represents the time dimension and the other dimension represents the value dimension. Our experimental results demonstrate that our model can efficiently capture these combined structures, as shown in the visualizations. 
More detailed information regarding the incorporation of images has been revised in the appendix, with further specifics available in Appendix B.1.\\n\\nWe conduct ablation experiments on TimeIR using the ETTh1 dataset, and the specific results are as follows:\\n\\n| pred length | DTFE MSE | DTFE+TimeIR MSE | DTFE MAE | DTFE+TimeIR MAE | DTFE SSIM | DTFE+TimeIR SSIM |\\n| ----------- | -------- | --------------- | ---------- | ----------------- | ------------ | ------------------ |\\n| 96 | 0.377 | **0.374** | 0.386 | **0.383** | 0.4654 | **0.4796** |\\n| 192 | 0.420 | **0.416** | 0.425 | **0.422** | 0.4483 | **0.4605** |\\n| 336 | 0.461 | **0.459** | 0.447 | **0.444** | 0.4432 | **0.4538** |\\n| 720 | 0.478 | **0.475** | 0.465 | **0.461** | 0.4348 | **0.4485** |\\n| Avg | 0.434 | **0.431** | 0.431 | **0.428** | 0.4479 | **0.4606** |\\n\\nAfter adding TimeIR, it can be observed that our model achieves better prediction performance on the MSE metric, while the SSIM (which ranges from 0 to 1, with higher values indicating better performance) increased by 0.0127. This indicates that the TimeIR module not only enhances the geometric structural integrity of the time series but also reduces the traditional MSE loss.\\n\\n# W2:The concept of style feature\\n\\nThe style of the time series mentioned in the article can be defined as the geometric structural information of the time series. Specific manifestations of style can be observed in our visualization results. In the example shown in the third row of Figure 5, the original input time series (the left half of each sequence) exhibits a linear style, while the ground truth for prediction (the right half of the sequence) also appears as a straight line. In contrast, the predictions made by other time series forecasting methods show fluctuating patterns, highlighting a clear stylistic difference from the original time series. 
This noticeable difference clearly indicates that the original time series and predicted time series do not constitute a unified time series, thus suggesting a discontinuity in their styles. However, our model successfully reconstructs the time series into a form that closely resembles a straight line, thereby preserving the style of the preceding series.\"}", "{\"title\": \"Response to Reviewer pp94\", \"comment\": \"We would like to sincerely thank Reviewer pp94 for providing a detailed review and insightful comments. Based on the suggestions, we have revised our paper accordingly.\\n\\n# W1:The concept of style consistency\\n\\nThe continuity of style can be understood as a type of discrepancy measurement between the original time series and the predicted time series. Specifically, the style of the time series mentioned in the article can be defined as **the geometric structural information of the time series**. This can be illustrated by examining the visualization results in Figure 5. In the third row of examples, the original input time series (the left half of each sequence) exhibits a linear style, while the ground truth for prediction (the right half of the sequence) also appears as a straight line. In contrast, the predictions made by other time series forecasting methods show fluctuating patterns, highlighting a clear stylistic difference from the original time series. This noticeable difference clearly indicates that the original time series and predicted time series do not constitute a unified time series, thus suggesting a discontinuity in their styles. 
However, our model successfully reconstructs the time series into a form that closely resembles a straight line, thereby preserving the style of the preceding series.\\n\\n# W2(1):The motivation for Image Transformation in Time-Series Forecasting\\n\\nWe employ image reconstruction techniques for processing time series data because current time series models fail to capture the inherent geometric structural information. These models typically use MSE or MAE between corresponding time points for training, which only reflects numerical similarity. For instance, two prediction results may yield the same MSE, yet their styles can differ significantly. Consider an example where the ground truth (GT) is a sine function $y=sin(x)$, Time Series One is a shifted sine function $y=sin(x+a)+b$, and Time Series Two is a straight line $y=0$. By adjusting the magnitude of the shift in Time Series One, Time Series One and Time Series Two may have similar MSEs compared with GT. While the MSEs are similar, Time Series Two lacks geometric details and does not convey useful information such as periodicity and trends. Thus, low MSE alone is insufficient; we need to incorporate geometric structural similarity into our predictions.\\n\\nSince one-dimensional time series sequences cannot capture two-dimensional geometric features well, we introduced image reconstruction components to our approach. This allows us to capture the combined structural similarity of time series in a two-dimensional context, where one dimension represents the time dimension and the other dimension represents the value dimension. Our experimental results demonstrate that our model can efficiently capture these combined structures, as shown in the visualizations.\\n\\n# W2(2):The distinction between Time series and image\\n\\nIn the image modality, the order of each patch is significant; altering this order results in changes to the image, similar to the characteristics of time series data. 
Additionally, our TimeIR model incorporates positional embeddings to ensure that the model comprehends the sequence of the patches. The primary distinction between these two modalities is that the image modality, while preserving the information of the time series, introduces a y-axis, thereby providing additional geometric structural information and enhancing the capability to capture style. More detailed information regarding the incorporation of images has been revised in the appendix, with further specifics available in Appendix B.1.\"}", "{\"summary\": \"This paper introduces VisiTER, a novel method for time series forecasting that leverages the rich semantic information contained in images by converting time series data into visual representations. The proposed method consists of two primary components: the decomposition of time series data into images and the prediction of time series through three image inputs. In the first component, a feature extraction model decomposes the time series to isolate periodic and trend information, which is then transformed into images. In the second component, the authors employ the TimeIR module to perform time series forecasting via image reconstruction. Experimental results on seven real-world datasets demonstrate that VisiTER achieves state-of-the-art performance in both traditional and novel evaluation metrics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.Compared to traditional methods that directly map time series data to scatter plots and then reconstruct them, this paper proposes the T2V method. By diffusing the scatter points and making them continuous, the generated images are more suitable as inputs for image models. Additionally, the introduction of multiple Swin Transformer Blocks in the TimeIR model allows the model to better focus on the detailed information of the time series. 
This approach leverages the rich information in the image modality, enabling the model to capture complex relationships more effectively.\\n\\n2.The paper introduces the DTFE module, which effectively extracts periodic and trend features, thereby enhancing the model's performance. This module can efficiently handle the periodic and trend characteristics of time series data and convert them into image form for reconstruction, improving the accuracy and stability of the reconstruction results.\\n\\n3.The introduction of the TimeIR model enables more efficient utilization of periodic, trend, and style features for image reconstruction, which is crucial for time series prediction. This model enhances the overall predictive capabilities of the system.\\n\\n4.Experimental results show that the VisiTER model achieves state-of-the-art performance across multiple datasets. Moreover, the authors use the SSIM metric to evaluate the similarity between the generated time series images and the actual time series. The results indicate that the VisiTER model also performs exceptionally well in terms of SSIM, demonstrating its robustness and effectiveness.\", \"weaknesses\": \"1.Lack of Motivation for Image Transformation. The authors do not provide sufficient justification for the advantages of converting time series data into images for the forecasting task. There is a lack of ablation studies to explore the motivation behind this transformation, which weakens the argument for its necessity.\\n\\n2.The concept of \\\"style features\\\" in the context of time series tasks is somewhat abstract and not clearly defined. This lack of clarity makes it difficult for readers to understand the relevance and reasonableness of these features, potentially undermining the credibility of the approach.\\n\\n3.The introduction of the SSIM metric seems to lack a strong rationale. 
While SSIM is useful for evaluating image quality, its relevance and significance in the context of time series prediction are not well justified. This raises concerns about the appropriateness of using such an image-based metric for this specific task.\\n\\n4.The authors could enhance the robustness of their method by treating the image reconstruction component as a plug-in module and applying it to other existing baseline models. This would help to determine whether the image reconstruction approach can improve the performance of different models, providing stronger evidence for its effectiveness.\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents VisiTER (Vision-Enhanced Time Series Forecasting), a new framework integrating image-based approaches to enhance time series forecasting. VisiTER consists of two main components: Decomposed Time Series to Image Generation and Composed Image to Time Series Generation. The approach transforms time series data into images to capture richer semantic information and generate consistent predictions with greater fidelity.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Good set of figures (fig 1, 2, 3) that roughly show the proposed idea\", \"weaknesses\": [\"Writing: The author should revise the comprehension and linking of ideas. For example:\", \"line 47\\u219255: Linking words choices (The first challenge \\u2026 \\u2192 Additionally, \\u2026 \\u2192 The second challenge \\u2026) can be improved.\", \"line 47\\u219249: Ambiguous connection: Why the complexity in image transformation can be explained with the latter idea of direct scatter plots causing information loss?\", \"Method:\", \"from the text description (Sec. 
3.2) and figures (1, 2), the idea seems to be linked to the extraction of periodic and trend components of the input data, while from the loss, the modules instead try to predict these components for the output. The authors should clarify this point and reflect it in the manuscript.\", \"T2V module: why can expanding in the Y-dimension help with the reconstruction process?\", \"Experiment:\", \"The overall pipeline involves DTFE and V2T, which have their own transformer blocks inside. The authors should have experiments calculating the computational overhead of the proposed method, in comparison with its performance gain.\"], \"questions\": \"Please refer to Weaknesses for related questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Authors\", \"comment\": \"Dear Authors,\\n\\nThanks for your response addressing my comments.\\n\\nYour answers have cleared some of my initial concerns. However, the motivation for using image reconstruction techniques to predict the periodicity and trend of the future time series remains unclear to me. While these techniques are \\\"good\\\" or \\\"suitable\\\" to **reconstruct** corrupted images, I do not see why they are suitable for a **prediction** task, especially for time series - a different modality. And with this choice, you now have to address **the limitations of image reconstruction models in capturing temporal dynamics** - quoting the Authors' reply.\\n\\nGiven this, I believe this work can be further improved, and I would keep my score as it is. \\n\\nThanks,\\n\\nReviewer xofX\"}", "{\"title\": \"Response to Reviewer xofX\", \"comment\": \"Dear Reviewer xofX,\\n\\nWe would like to sincerely thank you for your time and efforts in reviewing our paper.\\n\\nWe have made an extensive effort to try to address your concerns. 
In our response:\\n\\n- We revise the manuscript to reduce the use of inappropriate terminology and ambiguous expressions.\\n- We provide a detailed explanation of why we use the output features as the prediction target.\\n- We supplement our analysis with a comparison of the computational costs associated with our model and the baseline models.\\n\\nWe hope our response can effectively address your concerns. If you have any further concerns or questions, please do not hesitate to let us know, and we will respond promptly.\\n\\nAll the best, \\n\\nAuthors\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your valuable feedback and insights regarding our manuscript. We truly appreciate the time and effort you invested in reviewing our work. Your comments have provided us with a clearer direction for improvement.\\n\\nWe have decided to withdraw the paper for now, as we plan to address the concerns raised and enhance the quality of our research. We are committed to refining our work and hope to resubmit in the future.\\n\\nThank you once again for your constructive criticism.\\n\\nBest regards, The Authors\"}", "{\"title\": \"Response to Reviewer MNU9\", \"comment\": \"Dear Reviewer MNU9,\\n\\nWe would like to sincerely thank you for your time and efforts in reviewing our paper.\\n\\nWe have made an extensive effort to try to address your concerns. In our response:\\n\\n- We elucidate the motivation for using images in time series forecasting.\\n- We provide a detailed explanation of the specific definition of style.\\n- We present examples to analyze the rationale behind our introduction of SSIM.\\n- We supplement our findings with corresponding experiments that incorporate our model as a plugin.\\n\\nWe hope our response can address your concerns. 
If you have any further concerns or questions, please do not hesitate to let us know, and we will respond promptly.\\n\\nAll the best, \\n\\nAuthors\"}", "{\"summary\": \"This work addresses time series forecasting from the view of the image modality. The authors propose a novel model called VisiTER, which leverages image modality to enhance the realism and consistency of time series predictions. The experimental results show that VisiTER presents promising performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe paper introduces a novel approach to time series forecasting by framing the problem within the image modality, offering a fresh perspective.\\n2.\\tThe model structure is clear and appears logically sound.\\n3.\\tThe presentation is well-organized and easy to follow.\", \"weaknesses\": \"1.\\tThe motivation behind the paper is that existing models often produce predictions that lack consistency with the \\\"style\\\" of the input. However, the concept of \\u201cstyle consistency\\u201d is not clearly defined, weakening the argument for this motivation.\\n2.\\tThe rationale for using Vision Transformers (ViT) for time series feature extraction is not well-explained. Specifically, it is unclear what key differences between image and time series modalities justify the use of ViT. For instance, while the order/position of image patches is irrelevant, the temporal order of time series data is crucial. This distinction needs further elaboration.\\n3.\\tThe paper omits some commonly-used datasets, such as Traffic and Solar-Energy. Including results from these datasets would strengthen the evaluation of the proposed model.\\n4.\\tThe paper does not provide the source code, which is critical for reproducibility. 
Given that the reported results claim state-of-the-art performance, evidence supporting this claim is necessary.\", \"questions\": \"Please answer the concerns of W1 and W2, and supplement the materials of W3 and W4.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer MNU9\", \"comment\": \"# W3:The introduction of the SSIM metric\\n\\nWe introduce the SSIM because existing evaluation metrics for time series lack a focus on structural assessment. Continuing with the previously mentioned example in response to W1, both Time Series One and Time Series Two have the same MSE value compared to the ground truth (GT), yet their geometric structures are entirely different. To evaluate their geometric structural similarity with the GT, we converted this example into images and conducted SSIM testing. Specifically, the SSIM value between Time Series One and the GT is 0.5866, while the SSIM value between Time Series Two and the GT is only 0.2722.\\n\\nThis clearly indicates that Time Series One, which exhibits high geometric structural similarity with the GT, achieves a higher SSIM, whereas Time Series Two has a significantly lower SSIM. This demonstrates that SSIM effectively captures the geometric structural similarity of time series. Therefore, we introduce SSIM as an evaluation metric to supplement the limitations of MSE and MAE in assessing structural integrity.\\n\\n# W4:Treating the image reconstruction component as a plug-in module\\n\\nOur current image reconstruction model is built upon the joint reconstruction of the periodicity, trend, and features of the time series. In contrast, existing time series models predict only the time series itself, rather than their underlying cycles and trends. To transform our method into a plugin, we need to make modifications. 
Specifically, in TimeIR, we will continue to accept style inputs but replace the input features of periodicity and trends with the predictions from traditional time series models. We conduct experiments with iTransformer on the ETTh2 dataset. The specific experimental results are as follows:\\n\\n| pred length | iTransformer MSE | iTransformer+TimeIR MSE | iTransformer MAE | iTransformer+TimeIR MAE | iTransformer SSIM | iTransformer+TimeIR SSIM |\\n| ----------- | ---------------- | ----------------------- | ------------------ | ------------------------- | -------------------- | -------------------------- |\\n| 96 | 0.297 | **0.293** | 0.349 | **0.344** | 0.4409 | **0.4569** |\\n| 192 | 0.380 | **0.379** | 0.400 | **0.399** | 0.4167 | **0.4247** |\\n| 336 | 0.428 | **0.428** | 0.432 | **0.431** | 0.4077 | **0.4134** |\\n| 720 | 0.427 | **0.428** | 0.445 | **0.445** | 0.3998 | **0.4053** |\\n| Avg | 0.383 | **0.382** | 0.407 | **0.405** | 0.4163 | **0.4251** |\\n\\nIt can be observed that after incorporating our TimeIR module, our model achieved a reduction of 0.001 in average MSE, a reduction of 0.002 in average MAE, and an increase of 0.0088 in average SSIM. These results demonstrate that the proposed image reconstruction component improves predictions in both the accuracy and the geometric structure of the time series.\"}" ] }
Dc6dgTq2UZ
Towards Distributed Backdoor Attacks with Network Detection in Decentralized Federated Learning
[ "Bohan Liu", "Yang Xiao", "Ruimeng Ye", "Zinan Ling", "Xiaolong Ma", "Bo Hui" ]
Distributed backdoor attacks (DBA) have shown a higher attack success rate than centralized attacks in centralized federated learning (FL). However, DBA has not been investigated in decentralized FL. In this paper, we experimentally demonstrate that, when DBA is directly applied to decentralized FL, the attack success rate depends on the distribution of attackers in the network architecture. Considering that the attackers cannot choose their locations, this paper aims to achieve a high attack success rate regardless of the attackers' location distribution. Specifically, we first design a method to detect the network topology by predicting the distance between any two attackers on the network. Then, based on the distance, we organize the attackers into different clusters. Lastly, we propose an algorithm to \textit{dynamically} embed local patterns decomposed from a global pattern into the different attackers in each cluster. We conduct a thorough empirical investigation and find that, on benchmark datasets, our method outperforms both centralized attacks and naive DBA in different decentralized frameworks.
[ "Decentralized Federated Learning", "Distributed Backdoor Attacks" ]
Reject
https://openreview.net/pdf?id=Dc6dgTq2UZ
https://openreview.net/forum?id=Dc6dgTq2UZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ynEPZNx28A", "wEaNDPchvb", "rsmZgtkpTz", "o1ZS34j5BN", "kd6TlQew6W", "jS7jgFkZNw", "hjsA5bHgxR", "crgaUdPwBu", "YRFSWdRp4D", "Y5vCGanJTR", "U0RDB2ktEj", "QBS2lIlpwp", "PelVVmYux7", "NkEM957OkF", "EtFU63HCLb", "D7Jd81YEUJ", "C2oOLUjg4l", "4PGcS6be3r" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732506423149, 1732379949571, 1732209362515, 1730443981253, 1732210213061, 1737524078236, 1730727277579, 1732544214217, 1732209582066, 1732896817920, 1735035104812, 1732209816862, 1732688272816, 1730682987520, 1732209551587, 1730079934152, 1732210331693, 1732209066097 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10808/Reviewer_5u5L" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Submission10808/Reviewer_y1F4" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10808/Reviewer_ND9D" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Submission10808/Reviewer_ND9D" ], [ "ICLR.cc/2025/Conference/Submission10808/Area_Chair_tv6f" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Submission10808/Reviewer_y1F4" ], [ "ICLR.cc/2025/Conference/Submission10808/Reviewer_FJTf" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Submission10808/Reviewer_5u5L" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ], [ "ICLR.cc/2025/Conference/Submission10808/Authors" ] ], "structured_content_str": 
[ "{\"title\": \"Thank you for the responses\", \"comment\": \"I want to thank the authors for their detailed responses. Although some questions were only partially answered, I am satisfied with the clarification for most of my concerns. Hence, I decided to increase my score.\"}", "{\"title\": \"Thanks for valuable review\", \"comment\": [\"Dear Reviewers,\", \"We thank all reviewers for their detailed and valuable feedback. We are pleased to see that reviewers found that\", \"the problem studied is **important and has seen limited exploration** (ND9D, y1F4, 5u5L);\", \"the proposed method is **innovative and interesting** (ND9D, FJTf, y1F4, 5u5L);\", \"the proposed method is **reasonable and outperforms traditional methods** (ND9D, FJTf);\", \"the structure of this paper is easy to follow (5u5L).\"], \"we_have_carefully_addressed_the_review_and_here_are_the_major_ones\": [\"**Not sufficiently exploring defensive strategies**: We have added two defensive mechanisms in the experiment. We would be most thankful if the reviewer could point out specific defense mechanisms to be introduced, and we would be more than happy to include them in the experiment. Compared with DBA and centralized attacks, our method can further pose a challenge to the effectiveness of these defense mechanisms.\", \"**The impact of hyperparameters**: We have added new experiments to investigate the impact of the following parameters: the number of clusters, the number of classes in the dataset, and the accuracy of topology detection.\", \"**Scalability and cost on complicated topology**: We have added new experiments to report the performance and computational cost on topologies with more clients and random structures. 
Our method also works for DFL with complicated topologies, and the computational cost is negligible compared with the training time of DFL.\", \"For more detailed, point-by-point responses, please refer to our response to each reviewer.\", \"We deeply appreciate the time and effort each of you has invested in this process. Please kindly let us know if you have any further questions, and we would be more than happy to resolve them before the rebuttal deadline.\"]}", "{\"title\": \"Response to Reviewer 5u5L [Part 2]\", \"comment\": \"**Q1: How would the clustering algorithm handle larger networks with significantly more clients and attackers?**\\n\\n**Reply:** Thanks for raising the question. Our algorithm can be applied to larger networks. In this paper, we use K-means as our clustering algorithm. The major cost of our algorithm is to generate the sequence used for distance prediction. K-means, distance prediction, and trigger distribution can be done in a few seconds regardless of the number of clients. In the following table, we report the extra time used for our clustering algorithm. Compared with the normal training cost of FL, the computational cost of the clustering algorithm is negligible.\\n| Topology | Clustering and trigger distribution | 2000 epochs of training |\\n|-----------------------------|--------------------|--------------------------|\\n| 40 nodes | 9 minutes | 3 hours |\\n| 80 nodes | 25 minutes | 9 hours |\\n| 100 nodes | 32 minutes | 11 hours |\\n\\n\\n**Q2: How practical is it for attackers to synchronize their attacks across clusters in real-world decentralized FL applications with limited communication?**\\n\\n\\n**Reply:** This is a deep question. We remark that there could be two strategies to synchronize the attacks: (1) In decentralized FL, the topology is usually static due to the high cost of changing topology frequently. 
Therefore, the attackers only need to synchronize once after the sequence used for the distance prediction is collected; (2) Each attacker registers multiple clients so that it can synchronize the attack internally. We look forward to proposing attacking strategies without synchronization in the future.\\n\\n\\n**Q3: Have any potential defense mechanisms been considered that could mitigate the effectiveness of the proposed DBA in decentralized FL?**\\n\\n\\n**Reply:** We have introduced two defensive mechanisms [1,2]. We would be most thankful if the reviewer could point out specific defense mechanisms to be introduced, and we would be more than happy to include them. To the authors\\u2019 best knowledge, there is no defense mechanism designed specifically for decentralized FL in the literature.\\nWe can observe that the defense mechanisms do reduce ASR. However, decentralized FL mitigates backdoor attacks because each client only has a few neighbors (e.g., 2 on a ring topology). Compared with DBA and centralized attacks, our method can further pose a challenge to the effectiveness of defense mechanisms.\\n| Method | DBA | Centralized | Our |\\n|---------------|-------|-------------|-------|\\n| Swift | 0.656 | 0.782 | 0.801 |\\n| Swift+FLIP | 0.431 | 0.699 | 0.783 |\\n| Swift+FedGame | 0.587 | 0.728 | 0.779 |\\n| DSGD | 0.712 | 0.764 | 0.831 |\\n| DSGD+FLIP | 0.679 | 0.688 | 0.787 |\\n| DSGD+FedGame | 0.646 | 0.647 | 0.805 |\\n\\n\\n**Q4: How sensitive is the method\\u2019s effectiveness to inaccuracies in distance prediction? Is there a tolerance threshold?**\\n\\n**Reply:** To address the reviewer\\u2019s concern, we investigate the impact of the error in distance prediction on the method\\u2019s effectiveness. Since we cannot directly control inaccuracies in distance prediction, we vary the topology of DFL and the number of clients. With more clients and random structures, we have observed larger errors. 
The following table shows ASR when the error is varying. We have not observed an error larger than 5.3. Even in the worst case, ASR with our method is still higher than DBA. We remark that it is unnecessary to set a threshold because our prediction will never be worse than random distribution in DBA. \\n| Error of topology detection | Attack success rate |\\n|-----------------------------|---------------------|\\n| 1.2 | 0.821 |\\n| 1.3 | 0.801 |\\n| 2.8 | 0.783 |\\n| 3.6 | 0.772 |\\n| 5.3 | 0.709 |\\n| DBA | 0.657 |\\n\\n\\n**Q5: Could this method be extended to other types of adversarial attacks in decentralized FL, such as data poisoning or model inversion attacks?**\\n\\n\\n**Reply:** Thanks for the insightful comments. To the best of the author\\u2019s knowledge, there are 3 major types of data poisoning attacks: indiscriminate, targeted, and backdoor attacks. We believe that the idea of distributed attack can be applied to indiscriminate and targeted attacks by distributing a set of malicious samples to different clients. It would be an interesting topic to be investigated. For inversion attacks, it would be interesting to verify if the inversion results are different at different attacker clients because the model parameters are synchronized. Unfortunately, we were unable to set up these experiments in the short rebuttal period. But we thank the reviewer for pointing out these promising future directions.\"}", "{\"summary\": \"The paper investigates distributed backdoor attacks (DBA) in the context of decentralized federated learning (DFL), where there is no central server. Traditional DBA methods, which work effectively in centralized settings, often experience reduced success rates in decentralized systems due to the varying influence of adversarial clients based on their network location. 
To address this, the authors propose a two-step approach: first, a method to estimate distances between adversarial clients in the network, and second, a clustering-based algorithm to maximize attack success by dynamically organizing the distributed backdoor attacks based on network topology. Through experiments on various DFL frameworks, the authors demonstrate that their method achieves higher attack success rates than standard DBA and centralized backdoor approaches.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper presents an innovative approach by introducing distributed backdoor attacks (DBA) specifically tailored for decentralized federated learning (DFL), an area that has seen limited exploration.\", \"weaknesses\": \"Although the proposed attack method is shown to be effective, the paper does not sufficiently explore potential defensive strategies against this enhanced DBA approach.\\n\\nThe success of the proposed approach heavily relies on the accuracy of topology detection and clustering. However, there is limited discussion on the potential impact of inaccuracies in clustering or topology estimation on the overall attack success rate.\\n\\n\\nThe clustering and trigger decomposition steps involve hyperparameters, such as cluster size and trigger distribution patterns. However, the paper does not provide sufficient insight into how sensitive the method\\u2019s performance is to these parameters.\\n\\nThe method relies heavily on accurate distance estimation between adversarial clients. 
The paper does not discuss how inaccuracies in these estimates might affect the attack's effectiveness, especially in dynamic or less predictable network environments where client distances may vary.\", \"questions\": \"See the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ND9D\", \"comment\": \"Thanks for your insightful comments. We address your concerns below.\\n\\n\\n**W1: The work lacks discussion on the key parameters of the proposed method in the experiment, such as the number of clusters.**\\n\\n\\n**Response**: Thanks for your rigorous comment. As suggested by the reviewer, we have now investigated the effect of three key parameters: the number of clusters, the clustering algorithm, and the sequence length for distance prediction. Besides these parameters, we have already investigated the effect of trigger size, trigger gap, and trigger shift in Figure 9.\\nThe following table reports the attack success rate (ASR) with different numbers of clusters for the default k-means clustering algorithms:\\n| # of clusters | 2 | 3 | 4 | 5 | 10 |\\n|---------------|-------------|------------|-------------|-------------|-------------|\\n| Swift | 0.768\\u00b10.012 | 0.801\\u00b10.021 | 0.818\\u00b10.032 | 0.752\\u00b10.084 | 0.678\\u00b10.12 |\\n| DSGD | 0.812\\u00b10.008 | 0.831\\u00b10.019 | 0.823\\u00b10.008 | 0.788\\u00b10.067 | 0.713\\u00b10.097 |\\n\\n\\nWe report the average results of 5 random distributions of attacks on a ring topology with 40 clients. We can observe that there is a tradeoff in choosing the number of clusters. When the number is too small, our algorithm tends to be similar to DBA (e.g., all clients are in the same cluster). When the number is too large, it tends to be similar to a centralized attack (e.g., each client is a center). 
According to the experiment result in DBA and our observation, n/3 is a fair setting (i.e., each cluster has 3 clients on average).\\nWe also investigate different clustering algorithms. We can observe that the clustering does have an impact on the algorithm. We remark that K-means is suitable in this case because we can directly control the number of clusters. Also, the number of attackers is usually small in FL. It is not necessary to leverage these clustering algorithms based on density or hierarchy.\\n\\n\\n| Algorithm | Swift | DSGD |\\n|---------------|-------------|-------------|\\n| K-means | 0.801\\u00b10.021 | 0.831\\u00b10.019 |\\n| Hierarchical clustering | 0.782\\u00b10.013 | 0.821\\u00b10.39 |\\n| DBSCAN | 0.812\\u00b10.038 | 0.824\\u00b10.29 |\\n\\n\\n**W2: The authors should add more ablation studies to evaluate the contribution of each module to the attack success rate.**\\n\\n\\n**Response**: We thank the reviewer for the thoughtful feedback. There are two modules in our contribution: cluster-based DBA and topology detection with attacking signals. (1) If we remove the clustering module, our algorithm basically becomes DBA. We have already compared with DBA in Figure 7. (2) To verify the necessity of using attacking signals for prediction, we compare the accuracy of distance prediction between our method and the prediction based on a normal prediction signal. 
As shown in the following table, if we remove our module based on attacking signals, the distance prediction results tend to be random.\\n \\n| Ground truth | Ring topology (DSGD) | Ring topology (Swift) | Grid Topology (DSGD) | Grid Topology (Swift) |\\n|---------------|----------------------|-----------------------|----------------------|-----------------------|\\n| Our | 1.3\\u00b10.5 | 0.8\\u00b10.6 | 1.2\\u00b10.7 | 0.8\\u00b10.6 |\\n| Normal signal | 4.6\\u00b11.2 | 5.2\\u00b11.9 | 3.2\\u00b11.2 | 5.2\\u00b12.1 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this paper, the authors propose a distributed backdoor attack method. The core of the work is based on the insight that the attack success rate depends on the distribution of attackers in the network architecture. The authors design a topology detection method to detect the network by the distance of the attackers, and then organize the subsequent attacks based on the distance to improve the attack success rate. Experimental results show that the proposed method outperforms traditional centralized attacks and the naive distributed backdoor attack.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed work is the first to investigate the distributed backdoor attack method on decentralized federated learning tasks.\", \"Experimental results show that the proposed method achieves a higher attack success rate than traditional methods.\"], \"weaknesses\": [\"The work lacks discussion on the key parameters of the proposed method in the experiment, such as the number of clusters.\", \"The authors should add more ablation studies to evaluate the contribution of each module to the attack success rate.\"], \"questions\": \"Please refer to the weaknesses for rebuttal. 
I will check the related content carefully.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for taking the time to review our rebuttal. We are delighted that our rebuttal has addressed most of your concerns. Wishing the reviewer all the best!\"}", "{\"title\": \"Response to Reviewer y1F4 [Part 2]\", \"comment\": \"**W4: The method relies heavily on accurate distance estimation between adversarial clients. The paper does not discuss how inaccuracies in these estimates might affect the attack's effectiveness, especially in dynamic or less predictable network environments where client distances may vary.**\\n\\n\\n**Reply:** This is a great question. We agree that a dynamic or less predictable network can pose a challenge to the attack's effectiveness. We remark that it is very rare to use dynamic networks in the literature of decentralized FL due to the cost of changing communication topology frequently. To address the reviewer\\u2019s concern, we introduce a dynamic graph with random structures as the topology. Once an attacker is informed that its neighborhood or routing has been changed, the attacker will detect the topology again. The following tables indicate that the attack success rate will drop slightly. This is because, with dynamic topology, the parameters updating flow will change, which is a challenge for attackers to maximize the influence of the poison samples.\\n| Topology | Swift | DSGD |\\n|------------------------------------|-------|-------|\\n| Random dynamic graph with 40 nodes | 0.791 | 0.813 |\\n| Random dynamic graph with 80 nodes | 0.756 | 0.774 |\"}", "{\"comment\": \"Thanks for your response, I appreciate the detailed comments and experiments. 
Even after reading the responses of other reviewers, I want to keep my original score.\"}", "{\"metareview\": \"The authors proposed distributed backdoor attacks in decentralized FL, introducing a method to detect attackers by estimating distances between them, clustering attackers accordingly, and proposing an algorithm to dynamically embed local patterns from a global pattern into each cluster. The novelties and contributions are quite limited for this work compared with the original DBA attack which is designed for FL, and this work lacks discussion on the key parameters of the proposed method in the experiment, such as the number of clusters.\\nThe authors are encouraged to add more ablation studies to evaluate the contribution of each module to the attack success rate.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers are agreed on the final decision.\"}", "{\"title\": \"Response to Reviewer FJTf [Part 1]\", \"comment\": \"We would like to thank the reviewer for the valuable comments. We address the concern below.\\n\\n**W1: DBA takes into account factors like location and size, resulting in potentially infinite combinations of triggers. Even with a dynamic selection method, there's no guarantee that the chosen combination will be optimal or near-optimal. A more fundamental approach might involve using a generative model to implant invisible/stealthy triggers (as pixels), such as [1], to optimize the trigger more effectively.**\\n\\n\\n**Reply**: We thank the reviewer for referring [1]. We have now cited [1] and included it as a baseline. We remark that our method does not conflict with [1]. For any generative trigger, we can leverage DBA to further make it stealthier by decomposing it into distributed attackers. In the following table, we use the strategy in [1] to attack parameters in DFL. Specifically, we stop attacking at Epoch 1000 and report the ASR at Epoch 1100. The results indicate that DBA can further increase durability. 
This is because each decomposed trigger is small, which makes the one-line gradient projection in [1] stealthier. We totally agree that finding an optimal combination of many parameters is a challenge for DBA. Our contribution is that for any combination of the parameters, our clustering algorithm can improve the attack success rate.\\n| Method | Swift-Ring | DSGD-Ring | Swift-Clique | DSGD-Clique |\\n|----------------|------------|-----------|--------------|-------------|\\n| Neurotoxin | 0.601 | 0.623 | 0.652 | 0.672 |\\n| Neurotoxin+DBA | 0.613 | 0.636 | 0.662 | 0.663 |\\n| Neurotoxin+our | 0.651 | 0.676 | 0.712 | 0.704 |\\n\\n\\n**W2: Considering the clustering method as a major contribution to this paper, ablation studies are needed to assess the improvement gained from introducing clustering (and the number of clusters, threshold distance to dividing clusters) compared to not using clustering in a fair comparison.**\\n\\n**Reply:** As suggested by the reviewer, we add ablation studies to remove the clustering and vary the number of clusters. In the following table, \\u201cNo clustering\\u201d indicates that we randomly assign the decomposed triggers to attackers. We use K-means as the clustering algorithm. The results suggest that our clustering algorithm is effective in improving the attack success rate. \\n| Method | Swift-Ring | DSGD-Ring | Swift-Clique | DSGD-Clique |\\n|---------------|------------|-----------|--------------|-------------|\\n| No clustering | 0.658 | 0.703 | 0.812 | 0.798 |\\n| 2 clusters | 0.789 | 0.812 | 0.872 | 0.880 |\\n| 3 clusters | 0.801 | 0.831 | 0.893 | 0.917 |\\n| 4 clusters | 0.818 | 0.823 | 0.876 | 0.902 |\\n| 5 clusters | 0.752 | 0.788 | 0.862 | 0.856 |\\n\\n\\n\\n**W3: To showcase the effectiveness of proposed attack, performance under defense mechanisms is needed.**\\n\\n**Reply:** Thanks for the insightful comments. As suggested by the reviewer, we introduce two defensive mechanisms [1,2] in the experiment. 
To the authors\\u2019 best knowledge, there is no defense mechanism designed specifically for decentralized FL in the literature. The possible reason is that a decentralized framework itself is a defense mechanism. Many defensive strategies based on client selection such as Krum [3] are not suitable for DFL. We would be most thankful if the reviewer could point out specific defense mechanisms to be introduced, and we would be more than happy to include them in the experiment. To introduce [1] and [2] in decentralized FL, we leverage the corresponding strategy for each client. Note that the defense mechanisms do reduce ASR. However, decentralized FL mitigates backdoor attacks because each client only has a few neighbors (e.g., 2 on a ring topology). Compared with DBA and centralized attacks, our method can further pose a challenge to the effectiveness of these defense mechanisms.\\n| Method | DBA | Centralized | Our |\\n|---------------|-------|-------------|-------|\\n| Swift | 0.656 | 0.782 | 0.801 |\\n| Swift+FLIP | 0.431 | 0.699 | 0.783 |\\n| Swift+FedGame | 0.587 | 0.728 | 0.779 |\\n| DSGD | 0.712 | 0.764 | 0.831 |\\n| DSGD+FLIP | 0.679 | 0.688 | 0.787 |\\n| DSGD+FedGame | 0.646 | 0.647 | 0.805 |\\n\\n\\n[1] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning\\n[2] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning\\n[3] Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent\"}", "{\"comment\": \"Thank you for the authors' reply. After reading the rebuttal, I have decided to maintain my scores.\"}", "{\"summary\": \"The authors examine distributed backdoor attacks in decentralized FL, introducing a method to detect attackers by estimating distances between them, clustering attackers accordingly, and proposing an algorithm to dynamically embed local patterns from a global pattern into each cluster.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
It is interesting to consider the topology of decentralized FL, and LSTM seems like a reasonable way to predict the distance.\\n2. The authors consider dynamically embedding the backdoor trigger instead of using fixed patterns.\", \"weaknesses\": \"1. DBA takes into account factors like location and size, resulting in potentially infinite combinations of triggers. Even with a dynamic selection method, there's no guarantee that the chosen combination will be optimal or near optimal. A more fundamental approach might involve using a generative model to implant invisible/stealthy triggers (as pixels), such as [1], to optimize the trigger more effectively.\\n2. Considering the clustering method as a major contribution to this paper, ablation studies are needed to assess the improvement gained from introducing clustering (and the number of clusters, threshold distance to dividing clusters) compared to not using clustering in a fair comparison.\\n3. To showcase the effectiveness of proposed attack, performance under defense mechanisms is needed.\\n\\n[1] Doan, Khoa, et al. \\\"Lira: Learnable, imperceptible and robust backdoor attacks.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\", \"questions\": \"1. Have the authors considered other methods to enhance backdoor attack durability, like [1]?\\n2. Since MNIST and CIFAR-10 each have only 10 classes, does the number of classes matter?\\n3. Can the current method effectively handle more complex real-world topologies in terms of scalability and performance guarantees?\\n\\n[1] Zhang, Zhengming, et al. \\\"Neurotoxin: Durable backdoors in federated learning.\\\" International Conference on Machine Learning. PMLR, 2022.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer y1F4 [Part 1]\", \"comment\": \"We thank the reviewer for the insightful comments and suggestions. 
We address the concern below.\\n\\n**W1: Although the proposed attack method is shown to be effective, the paper does not sufficiently explore potential defensive strategies against this enhanced DBA approach.**\\n\\n**Reply:** Thanks for the insightful comments. As suggested by the reviewer, we introduce two defensive strategies [1,2] in the experiment. To the authors\\u2019 best knowledge, there is no defense mechanism designed specifically for decentralized FL in the literature. The possible reason is that a decentralized framework itself is a defense mechanism. Many defensive strategies based on client selection such as Krum [3] cannot be applied to DFL. We would be most thankful if the reviewer could refer us to specific defense mechanisms, and we would be more than happy to include them in the experiment. Note that the defense mechanisms do reduce ASR. However, decentralized FL mitigates backdoor attacks because each client only has a few neighbors (e.g., 2 on a ring topology). Compared with DBA and centralized attacks, our method can further pose a challenge to the effectiveness of these defense mechanisms.\\n| Method | DBA | Centralized | Our |\\n|---------------|-------|-------------|-------|\\n| Swift | 0.656 | 0.782 | 0.801 |\\n| Swift+FLIP | 0.431 | 0.699 | 0.783 |\\n| Swift+FedGame | 0.587 | 0.728 | 0.779 |\\n| DSGD | 0.712 | 0.764 | 0.831 |\\n| DSGD+FLIP | 0.679 | 0.688 | 0.787 |\\n| DSGD+FedGame | 0.646 | 0.647 | 0.805 |\\n\\n\\n[1] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning\\n[2] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning\\n[3] Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent\\n\\n\\n**W2: The success of the proposed approach heavily relies on the accuracy of topology detection and clustering. 
However, there is limited discussion on the potential impact of inaccuracies in clustering or topology estimation on the overall attack success rate.**\\n\\n\\n**Reply:** Thanks for the insightful comments. As suggested by the reviewer, we have now investigated the impact of topology detection accuracy on the attack success rate. Since we cannot directly control the accuracy of topology detection, we vary the poison samples used for topology detection. We have observed that when the poison samples are out of the domain or not close to the samples in the training set, the error of topology detection will be smaller. In the following table, we observe that the attack success rate will be lower if the topology detection is not accurate. It further verifies the necessity of topology detection. In our solution, we choose poison samples that are out of domain to improve the accuracy of topology detection.\\n| Poison samples | Error of topology detection | Attack success rate |\\n|----------------|-----------------------------|---------------------|\\n| Poison set 1 | 1.2 | 0.821 |\\n| Poison set 2 | 1.3 | 0.801 |\\n| Poison set 3 | 2.8 | 0.783 |\\n| Poison set 4 | 3.6 | 0.772 |\\n \\n**W3: The clustering and trigger decomposition steps involve hyperparameters, such as cluster size and trigger distribution patterns. However, the paper does not provide sufficient insight into how sensitive the method\\u2019s performance is to these parameters.**\\n\\n\\n**Reply:** As suggested by the reviewer, we vary the number of clusters. We use K-means as the clustering algorithm, so the cluster size is automatically determined by the value of K. The results suggest that there is a tradeoff in choosing the value of K. When K is too small, it tends to be similar to DBA. 
When K is too large, each cluster contains only one or two clients, which makes our method similar to a centralized attack.\\n| Method | Swift-Ring | DSGD-Ring | Swift-Clique | DSGD-Clique |\\n|---------------|------------|-----------|--------------|-------------|\\n| 2 clusters | 0.789 | 0.812 | 0.872 | 0.880 |\\n| 3 clusters | 0.801 | 0.831 | 0.893 | 0.917 |\\n| 4 clusters | 0.818 | 0.823 | 0.876 | 0.902 |\\n| 5 clusters | 0.752 | 0.788 | 0.862 | 0.856 |\\n\\nWe also investigate the trigger distribution in Figure 9. Specifically, we investigate the impact of three parameters in the trigger distribution pattern: trigger size, trigger gap, and trigger shift. When we increase the size of the local trigger from 1 to 4, the attack success rate increases. The value of the gap has little impact on both ASR and accuracy. We observe a U-shaped curve of ASR as the shift increases. This is because when the trigger overlaps with existing patterns in the clean image, its impact is largely masked by the overlap.\"}", "{\"summary\": \"This paper investigates Distributed Backdoor Attacks (DBA) within a decentralized Federated Learning (FL) framework. The authors demonstrate that the attack success rate of DBA in decentralized settings is impacted by the distribution of attackers across the network. To address this, the paper introduces a two-step strategy: (1) a method to detect network topology by predicting distances between attackers, allowing them to cluster, and (2) an enhanced DBA method where attack patterns are distributed dynamically within clusters to optimize the attack\\u2019s impact across various network topologies. 
Experimental results show that the proposed approach improves attack success rates over traditional DBA and centralized attacks on standard datasets (CIFAR-10 and MNIST).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Overall, the structure of this paper is easy to follow.\", \"The problem studied is sound and important.\", \"The dynamic cluster-based trigger distribution is interesting.\"], \"weaknesses\": [\"This paper\\u2019s contribution is somewhat limited as it only focuses on DBA. While DBA in decentralized FL is a novel attack, the study does not discuss possible defense mechanisms, which could provide a more balanced perspective.\", \"The clustering and dynamic distribution of triggers may become computationally expensive with a larger number of attackers and clients.\", \"The approach assumes attackers can communicate to coordinate poisoned images and agree on target labels, which may not be practical in a real-world adversarial setting.\"], \"questions\": [\"How would the clustering algorithm handle larger networks with significantly more clients and attackers?\", \"How practical is it for attackers to synchronize their attacks across clusters in real-world decentralized FL applications with limited communication?\", \"Have any potential defense mechanisms been considered that could mitigate the effectiveness of the proposed DBA in decentralized FL?\", \"How sensitive is the method\\u2019s effectiveness to inaccuracies in distance prediction? 
Is there a tolerance threshold?\", \"Could this method be extended to other types of adversarial attacks in decentralized FL, such as data poisoning or model inversion attacks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer FJTf [Part 2]\", \"comment\": \"**Q1: Have the authors considered other methods to enhance backdoor attack durability like [1]?**\\n\\n**Reply:** We have now cited [1] and combined it with our method to enhance attack durability. In the following table, we use the strategy in [1] to update parameters in DFL. Specifically, we stop attacking at Epoch 1000 and report the ASR at Epoch 1100. The results indicate that decomposed triggers can further increase durability. This is because each decomposed trigger is small, which makes the gradient projection in [1] stealthier. We thank the reviewer for the insightful suggestion.\\n| Method | Swift-Ring | DSGD-Ring | Swift-Clique | DSGD-Clique |\\n|----------------|------------|-----------|--------------|-------------|\\n| Neurotoxin | 0.601 | 0.623 | 0.652 | 0.672 |\\n| Neurotoxin+DBA | 0.613 | 0.636 | 0.662 | 0.663 |\\n| Neurotoxin+our | 0.651 | 0.676 | 0.712 | 0.704 |\\n\\n\\n**Q2: Since MNIST and CIFAR-10 each have only 10 classes, does the number of classes matter?**\\n\\n\\n**Reply:** Thanks for the rigorous comments. As suggested by the reviewer, we have included two datasets with 100 classes: CIFAR-100 and Tiny-ImageNet. With more classes, poisoned data is more likely to be predicted as a class other than the designated target label. 
As a result, ASR will drop for all attack methods.\\n| Method | DBA | Centralized | Our |\\n|----------------------------|-------|-------------|-------|\\n| 10 classes: CIFAR-10 | 0.656 | 0.782 | 0.801 |\\n| 100 classes: CIFAR-100 | 0.642 | 0.687 | 0.734 |\\n| 100 classes: Tiny-ImageNet | 0.671 | 0.654 | 0.773 |\\n\\n\\n\\n\\n**Q3: Can the current method effectively handle more complex real-world topologies in terms of scalability and performance guarantees?**\\n\\n**Reply:** To address the reviewer\\u2019s concern regarding more complex topologies, we conduct experiments on random graph topologies with more clients. We remark that the majority of the computational overhead is still the cost of training the decentralized FL model. The cost of topology detection is negligible compared with training. Also, clustering is performed only once and can be done in a few seconds. Therefore, our method can handle more complex real-world topologies, and the extra computational overhead is negligible.\\n\\n\\n| Topology | Topology detection | 2000 epochs of training | ASR |\\n|-----------------------------|--------------------|-------------------------|-------|\\n| Ring topology with 40 nodes | 9 minutes | 3 hours | 0.801 |\\n| Random graph with 40 nodes | 30 minutes | 10 hours | 0.824 |\\n| Random graph with 80 nodes | 36 minutes | 12 hours | 0.786 |\"}", "{\"title\": \"Response to Reviewer 5u5L [Part 1]\", \"comment\": \"We thank the reviewer for the insightful comments. We are grateful for the valuable suggestions for our paper.\\n\\n\\n**W1: This paper\\u2019s contribution is somewhat limited as it only focuses on DBA. While DBA in decentralized FL is a novel attack, the study does not discuss possible defense mechanisms, which could provide a more balanced perspective.**\\n\\n\\n**Reply:** Thanks for the insightful comments. As suggested by the reviewer, we introduce two defensive mechanisms [1,2] in the experiment. 
To the authors\\u2019 best knowledge, there is no defense mechanism designed specifically for decentralized FL in the literature. The possible reason is that a decentralized framework is itself a defense mechanism. Many defensive strategies based on client selection, such as Krum [3], are not suitable for DFL. We would be most thankful if the reviewer could point out specific defense mechanisms to include, and we would be more than happy to add them to the experiment. To introduce [1] and [2] in decentralized FL, we leverage the corresponding strategy for each client. Note that the defense mechanisms do reduce ASR. However, decentralized FL weakens these defensive mechanisms because each client only has a few neighbors (e.g., 2 on a ring topology). Compared with DBA and the centralized attack, our method poses a further challenge to the effectiveness of these defense mechanisms.\\n| Method | DBA | Centralized | Our |\\n|---------------|-------|-------------|-------|\\n| Swift | 0.656 | 0.782 | 0.801 |\\n| Swift+FLIP | 0.431 | 0.699 | 0.783 |\\n| Swift+FedGame | 0.587 | 0.728 | 0.779 |\\n| DSGD | 0.712 | 0.764 | 0.831 |\\n| DSGD+FLIP | 0.679 | 0.688 | 0.787 |\\n| DSGD+FedGame | 0.646 | 0.647 | 0.805 |\\n\\n\\n[1] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning\\n\\n[2] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning\\n\\n[3] Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent\\n\\n\\n**W2: The clustering and dynamic distribution of triggers may become computationally expensive with a larger number of attackers and clients.**\\n\\n\\n**Reply:** To address the reviewer\\u2019s concern regarding computational cost, we report the running time of our algorithm and the training time of FL. We remark that the majority of the computational overhead is still the cost of training the decentralized FL model. 
For CIFAR-100, it usually takes at least 3,000 epochs to reach convergent performance with FL. The cost of clustering is negligible compared with training. Also, trigger distribution can be done in a few seconds. Therefore, our method can handle more complex real-world topologies, and the extra computational overhead is negligible.\\n\\n\\n| Topology | Clustering and trigger distribution | 2000 epochs of training |\\n|-----------------------------|--------------------|--------------------------|\\n| 40 nodes | 9 minutes | 3 hours |\\n| 80 nodes | 25 minutes | 9 hours |\\n| 100 nodes | 32 minutes | 11 hours |\\n\\n\\n**W3: The approach assumes attackers can communicate to coordinate poisoned images and agree on target labels, which may not be practical in a real-world adversarial setting.**\\n\\n**Reply:** We agree that this is an inherent limitation of DBA. We follow the assumption in DBA and will explore attack strategies that require less coordination. Thanks for the insightful comments. We are glad to have an expert in FL as our reviewer.\"}" ] }
DblHBgD0GR
Rethinking and Defending Protective Perturbation in Personalized Diffusion Models
[ "Yixin Liu", "Ruoxi Chen", "Xun Chen", "Lichao Sun" ]
Personalized diffusion models (PDMs) have become prominent for adapting pre-trained text-to-image models to generate images of specific subjects using minimal training data. However, PDMs are susceptible to minor adversarial perturbations, leading to significant degradation when fine-tuned on corrupted datasets. These vulnerabilities are exploited to create protective perturbations that prevent unauthorized image generation. Existing purification methods attempt to red-team the protective perturbation to break the protection but often over-purify images, resulting in information loss. In this work, we conduct an in-depth analysis of the fine-tuning process of PDMs through the lens of shortcut learning. We hypothesize and empirically demonstrate that adversarial perturbations induce a latent-space misalignment between images and their text prompts in the CLIP embedding space. This misalignment causes the model to erroneously associate noisy patterns with unique identifiers during fine-tuning, resulting in poor generalization. Based on these insights, we propose a systematic red-teaming framework that includes data purification and contrastive decoupling learning. We first employ off-the-shelf image restoration techniques to realign images with their original semantic meanings in latent space. Then, we introduce contrastive decoupling learning with noise tokens to decouple the learning of personalized concepts from spurious noise patterns. Our study not only uncovers fundamental shortcut learning vulnerabilities in PDMs but also provides a comprehensive evaluation framework for developing stronger protection. Our extensive evaluation demonstrates its superiority over existing purification methods and stronger robustness against adaptive perturbation.
[ "Protective Perturbations", "Imperceptible Perturbations", "Adversarial Purification", "Diffusion-based Generative Models" ]
Reject
https://openreview.net/pdf?id=DblHBgD0GR
https://openreview.net/forum?id=DblHBgD0GR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zPcx3Sb5gt", "uvtYcw4C2F", "s984qt1n2d", "pSxPI7lQh8", "nUZq6l8XS6", "myxE3WpmVI", "iiTVx9BTaN", "e2UyAgEZkc", "dP4El9HB6y", "bkkkC1ReCl", "Z6IxZ3O59R", "U96dG26ndH", "Sa8gION6jz", "Rdplk3XdiG", "Pp1InnpmPF", "PZqvTKJkVS", "O7DEjectH7", "HrALq7g1rz", "FelGLnbA8m", "DdrJ4hj7fV", "ALTA7J8hFB", "71Mj3KNlld", "5PceB1jIVo", "1tEZ93wfLP", "0uTyIV1dRC" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732635422861, 1730347759309, 1732322312938, 1732139103124, 1730697321022, 1732138979380, 1732780047344, 1734815115040, 1732624393420, 1732483544260, 1737523658716, 1732139427625, 1730550920873, 1732416061927, 1730154890291, 1732139505973, 1732490507967, 1732139264002, 1732624767729, 1732490584495, 1732415348313, 1732139168273, 1732143278232, 1732587120322, 1732897326737 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Reviewer_2TWg" ], [ "ICLR.cc/2025/Conference/Submission4737/Reviewer_uRZe" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Reviewer_nipZ" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Reviewer_uRZe" ], [ "ICLR.cc/2025/Conference/Submission4737/Area_Chair_WRmW" ], [ "ICLR.cc/2025/Conference/Submission4737/Reviewer_co1c" ], [ "ICLR.cc/2025/Conference/Submission4737/Area_Chair_WRmW" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4737/Reviewer_co1c" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Reviewer_uRZe" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Reviewer_co1c" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ], [ "ICLR.cc/2025/Conference/Submission4737/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your input regarding `uRZe`\\u2019s concern on the theoretical guarantee. Aligned with previous works in the backdoor domain (e.g., Zhang et al., 2023; Liu et al., 2024), we use the causal graph primarily as a conceptual tool to illustrate the learning relationships and to motivate our methodology. Our intention is to provide intuitive insights into the mechanism of protective perturbations that can guide the development of our red-teaming framework.\"}", "{\"summary\": \"The paper conducts a comprehensive analysis to show that perturbations induce a latent-space misalignment between images and their text prompts in the CLIP embedding space, which leads to an association between the noise patterns and the identifiers. Based on this observation, the paper introduces contrastive decoupling learning with noise tokens to decouple the learning of personalized concepts from spurious noise patterns.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The observation that adversarial perturbations induce a latent-space misalignment between images and their text prompts in the CLIP embedding space is interesting and insightful.\\n\\n2. 
The paper is well organized and easy to follow.\\n\\n3. The paper conducts an extensive array of experiments and also considers adaptive perturbation.\", \"weaknesses\": \"1. The paper does not provide strong theoretical analysis to support the conclusions.\\n\\n2. The technical contribution is a little limited since Decoupled Contrastive Learning is not a new technique proposed by the paper.\", \"questions\": \"I am wondering if the noisy images generated by DM without any defense can be denoised?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Respond to the rebuttal\", \"comment\": \"Thank you for the authors' reply. After taking a closer look at the paper, I found the overall pipeline clearer, and I now understand the motivation and methods presented in the paper.\\n\\nHowever, I still have some concerns regarding the novelty of the work. Although the authors provided a causal analysis, it appears to be limited to constructing a causal graph with prior knowledge to describe the problem (this analysis process is highly similar to the causal analysis in backdoor attacks, which has already been proposed), without offering theoretical guarantees. Moreover, the proposed methods are not fundamentally based on the causal aspect itself. Upon closer examination, the methods share significant similarities with prior approaches that use purification techniques.\\n\\nThe main difference between the previous models and the current one seems to be that the current approach leverages off-the-shelf pretrained methods, whereas the previous models are optimization-based. 
Additionally, the only key modification involves changing the image prompts, such as appending a suffix like \"with/without XX noisy pattern\" to the images.\\n\\nFor these reasons, I have decided to maintain my score.\"}", "{\"title\": \"Part 2\", \"comment\": \"> Q2: After reading the entire paper, I found it challenging to identify the specific question the author aims to address and the associated motivations. While the introduction attempts to outline these points, it is difficult to discern the relationship between the motivation and the problem being addressed. Additionally, there appears to be a disconnect between the problem definition in the introduction and the methods presented. Here are some specific suggestions for clarification:\", \"**Response:** We appreciate the reviewer\\u2019s feedback regarding the clarity of our research question and motivation. We have addressed the reviewer's specific concerns in Q2 as follows. In this response, we would like to first clarify the overall positioning and structure of our paper:\", \"**Core Research Question**: Our work focuses on red-teaming existing protective perturbation methods to develop more effective, efficient, and faithful approaches. We observed that existing purification studies are either inefficient (e.g., IMPRESS requires heavy iterative optimization) or produce unfaithful results (e.g., GrIDPure yields hallucinated images). Additionally, both methods operate solely on the input side, limiting the comprehensiveness of red-teaming. Our core motivation is to bridge this gap by proposing a more comprehensive and effective red-teaming framework.\", \"**Core Contributions**: Unlike previous works that primarily conduct empirical red-teaming, we are the first to introduce the perspectives of shortcut learning and causal analysis to understand the underlying mechanisms of how protective perturbations affect personalized diffusion model fine-tuning. 
This new understanding allows us to design a systematic red-teaming framework grounded in causal intervention. Without this deep insight into the shortcut learning induced by protective perturbations, designing a systematic and robust red-teaming framework would be challenging.\", \"**Connection Between Introduction and Methodology**: We respectfully argue that these two parts are actually tightly connected in our paper. In the introduction, we first point out the current lack of understanding of \"why protective perturbations work\", which is a more fundamental problem for both the protection and red-teaming sides. Then we introduce our main research question, \"how to red-team protective perturbations more effectively, efficiently, and faithfully\", aiming to bridge the gap left by existing purification methods. Correspondingly, in the methodology section, we first provide an explanatory framework based on causal and shortcut learning, addressing the \u201cwhy\u201d question. Building on this insight, we propose a systematic red-teaming framework grounded in causal intervention. Through experiments, we demonstrate the effectiveness, efficiency, and faithfulness of our framework, thus answering the \u201chow\u201d question of our research.\"]}", "{\"summary\": \"The paper aims to improve the personalization performance of Diffusion Models on images with protective perturbation, a kind of noise that prevents images from being learned by models. The authors first empirically analyze the latent mismatch between the perturbed and original images, finding that perturbations significantly alter the latent representations of images. The authors believe that the mismatch causes shortcut learning and therefore causes the personalization of diffusion models to fail on such perturbed data. 
Therefore, a novel method is proposed to improve personalization training via contrastive learning and super-resolution.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The proposed contrastive learning method is well motivated by the empirical finding on the latent mismatch of perturbed images.\\n2. In multiple domains, the method presents better fine-tuning performance than baselines given protective perturbation on images.\\n3. Comprehensive experiments are conducted to understand and evaluate the method.\", \"weaknesses\": \"1. The connection between the latent mismatch and shortcut learning is not clear to me. Why does the existence of latent mismatch lead to shortcut learning?\\n2. I don't think the word \"defending\" (in the title) should be used against a good technique, protective perturbation. The paper is a good red-teaming paper that explored a stronger threat model for protective perturbation. Unfortunately, much of the method description frames it as a mitigation method, which could mislead readers about the method's negative impacts. The authors should discuss how this method can break the existing protective perturbation. It would be appreciated if the authors can discuss potential solutions toward better copyright protection via protective perturbation or other alternatives.\", \"questions\": [\"The connection between the latent mismatch and shortcut learning is not clear to me. Why does the existence of latent mismatch lead to shortcut learning?\", \"What are the potential mitigations against the proposed method?\"], \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)', 'Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"The proposed method can put the copyright of artists' work at risk. 
The method can void the protection that protective perturbations provide against images being used to train diffusion models. The authors did not discuss the potential negative impacts.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Part 1\", \"comment\": \"We thank the reviewer for their valuable and detailed feedback. Based on the reviewer's suggestions, we have revised our manuscript accordingly. We acknowledge that some aspects of our motivation and setup may not have been clearly communicated, and we would like to clarify and address each of the concerns raised, as outlined below.\\n\\n> Q1: The paper lacks overall coherence, with some sections difficult to follow and, in some cases, contradictory. Additionally, several terms and graphs are missing clear definitions and explanations.\\n \\n**Response:** We have made significant efforts to improve the presentation in the revised manuscript, ensuring that our main motivations and contributions are clearly articulated. Below, we address each of the specific points raised.\\n\\n> Q1.1. Are \"adversarial perturbations\" and \"protective perturbations\" intended to be the same concept? The author seems to use these terms interchangeably; if they differ, please clarify each term carefully.\\n\\n**Response:** We apologize for the confusion caused by the interchangeable use of \\u201cadversarial perturbations\\u201d and \\u201cprotective perturbations.\\u201d As per GrIDPure (Zhao et al., 2024) [1], these terms refer to the same concept but from different perspectives. \\u201cAdversarial perturbations\\u201d emphasize the perturbations\\u2019 disruptive effect on the model fine-tuning process from the model trainer\\u2019s viewpoint, while \\u201cprotective perturbations\\u201d highlight their role in safeguarding portrait owners\\u2019 images from unauthorized synthesis. 
**To avoid confusion, we have standardized the terminology in the revised manuscript to consistently use \\u201cprotective perturbations.\\u201d**\\n\\n> Q1.2. In the introduction, the author presents multiple related works. It may be helpful to focus on those most relevant to the paper\\u2019s main motivation. Additionally, certain terms, such as \\\"purification studies,\\\" would benefit from brief explanations\\u2014similar to the way \\\"image purifications\\\" is introduced on line 142.\\n\\n**Response:** In the revised manuscript, we have refined the introduction to focus on the most relevant related works that align with our main motivation, particularly discussing the limitations of IMPRESS and GrIDPure. We have also provided brief explanations and appropriate citations for terms like \\u201cpurification studies\\u201d to enhance clarity.\\n\\n> Q1.3. Several equations need further explanation, such as those on lines 178-179, regarding the function of an instance dataset and a class dataset. Additionally, the meaning of \\\"r\\\" on line 208 is unclear.\\n\\n**Response:** **We have added detailed explanations for these terms in the revised manuscript.** Specifically, as outlined in our preliminary section, in DreamBooth, the **instance dataset** contains images of the specific subject (e.g., portraits of a particular person) that the model is intended to learn. To prevent \\u201clanguage drift,\\u201d where the model might incorrectly associate the class name (e.g., \\u201cperson\\u201d) exclusively with the subject instance, DreamBooth also employs a **class dataset** comprising images of the same class but with different identities. This helps the model retain general class-specific knowledge during fine-tuning. The weighted denoising losses for these datasets are presented in Equations 1 and 2 in the paper. 
Additionally, the parameter \\u201c$r$\\u201d refers to the perturbation radius in the $\\\\ell_\\\\infty$-norm ball.\"}", "{\"title\": \"Concerns from Reviewer uRZe\", \"comment\": \"Thank you, authors, for your response. In your response, you state, \\u201cour contribution lies in\\u00a0being the first to apply causal analysis to protective perturbations in personalized generation tasks.\\u201d Based on the current version of the causal part of your paper, I do not think it is a contribution or that it should be called causal analysis.\\n\\nTypically, I agree that most causal graphs are constructed using prior knowledge, drawing from experience in a specific domain to determine which features cause others, and representing these relationships in a graph [3-4]. These graphs serve as **assumptions** for further causal treatment estimation or other in-depth causal analyses, which I believe are not present in this paper. The papers cited by the authors that incorporate causal analysis into their work not only construct causal graphs but also provide in-depth causal analyses (e.g., using front-door adjustment [2]) or design models explicitly based on causal theory (e.g., disentangling causal and confounding factors [1]). Hence, I do not consider merely constructing a causal graph to be an analysis of the problem. Instead, it appears to be another way of **describing the problem** from the author\\u2019s perspective, which is also pointed out by reviewer co1c.\\n\\n\\nAdditionally, regarding the method described in the paper: While I acknowledge the effectiveness of the approach, I believe that, compared to previous methods, the primary difference lies in the addition of a suffix prompt. This distinction, however, may not constitute a particularly strong novelty. 
Also, consider the method described in the paper: \"During training, we insert V \\u2217 N into the prompt of instance data with the suffix \\u201cwith XX noisy pattern\\u201d, and include the \\u201cinverse\\u201d of V \\u2217 N in the prompt of class-prior data with the suffix \\u201cwithout XX noisy pattern\\u201d. During inference, we add the suffix \\u201cwithout XX noisy pattern\\u201d to the prompt input to guide the model in disregarding the learned patterns associated with V \\u2217 N.\" It seems, in essence, that this method functions like injecting a backdoor attack into the model. Specifically, all instance data is linked to the suffix \\u201cwith XX noisy pattern,\\u201d while class-prior data is linked to \\u201cwithout XX noisy pattern.\\u201d As a result, during inference, if the prompt includes the suffix \\u201cwithout XX noisy pattern,\\u201d the model generates a safe image. However, this mechanism resembles that of a backdoor attack, where the attacker could potentially exploit it. For example, if the attacker becomes aware of this mechanism and inputs the suffix \\u201cwith XX noisy pattern,\\u201d I suspect all personalized images would be generated, compromising the intended safety.\\n\\nDue to these concerns, I maintain my opinion on the paper. That said, I am happy to engage in further discussion with other reviewers to hear their perspectives and clarify any points.\\n\\n\\n[1] Zhang Z, Liu Q, Wang Z, et al. Backdoor defense via deconfounded representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 12228-12238.\\n\\n[2] Liu Y, Xu X, Hou Z, et al. Causality Based Front-door Defense Against Backdoor Attack on Language Models[C]//Forty-first International Conference on Machine Learning.\\n\\n[3] Pearl J. Causal inference in statistics: An overview[J]. 2009.\\n\\n[4] Yao L, Chu Z, Li S, et al. A survey on causal inference[J]. 
ACM Transactions on Knowledge Discovery from Data (TKDD), 2021, 15(5): 1-46.\"}", "{\"metareview\": \"The paper proposes viewing the fine-tuning process of Personalized Diffusion Models (PDMs) as shortcut learning, motivated by causal analysis. The authors introduce a defense framework to help the model correctly associate images with their original semantic meanings.\", \"strength\": \"1. The paper studies why protective noise works in T2I models.\\n\\n2. The paper conducts an extensive array of experiments and also considers adaptive perturbation.\", \"weaknesses\": \"The authors and reviewers discuss the causal analysis in this paper, and even the authors agree that their causal analysis is not strict. I think this greatly weakens this paper because of the following reasons: 1. shortcut analysis is not a newly proposed concept in unlearnable examples, and classification also has \"class-image feature misalignment\". For this reason, I believe the authors need to add some new insights to make this paper stand out, like causal analysis. Therefore, their causal analysis should not be just drawing some causal graphs. 2. In the paper's contributions, the causal analysis is also mentioned a lot by the authors. Therefore, I think the authors should do in-depth analysis to demonstrate that their analysis is correct.\\n\\nTherefore, I think this paper still needs substantial modifications before acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The authors and reviewers discuss the paper's causal analysis at length, which should be the key novelty and contribution of this paper.\"}", "{\"comment\": \"I personally quite like this paper from the reviewing phase. However, after reading other comments, I find there is some room for it to be updated and improved in the revision. 
I would not like to champion its acceptance.\\n\\nOnly one point for Reviewer uRZe: setting up a causal graph without any guarantee is a common practice in that domain (I do not support it either). Thus, I think you two might not be on the same page in the discussion :)\"}", "{\"comment\": \"Dear reviewers,\\n\\nThanks for serving as a reviewer. As the discussion period comes to a close and the authors have submitted their rebuttals (maybe in general response), I kindly ask you to take a moment to review them and provide any final comments.\\n\\nIf you have already updated your comments, please disregard this message.\\n\\nThank you once again for your dedication to the OpenReview process.\\n\\nFor authors, I think it will be better to also respond to each reviewer separately, as it will be easier for reviewers to find whether their own concerns have been addressed.\\n\\nBest,\\n\\nArea Chair\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Part 5\", \"comment\": \"> Q4. The causal graph is underexplained and possibly contains ambiguities. For example, the definitions of\\u00a0$\\\\bar{c}$ and\\u00a0$\\\\bar{x}_0$ are missing. While a brief introduction to the construction of the graph is provided, explanations of each node\\u2019s meaning and the meaning of the arrows are absent. Given that the causal graph is a key contribution, adding a paragraph to introduce and explain it in detail would be beneficial. The term \\\"spurious path\\\" may also be misapplied; in causal inference, this usually refers to a backdoor path between treatment and outcome. Since this doesn\\u2019t apply here, either avoid the term or define it within the paper's context.\\n\\n**Response:** We appreciate the feedback and agree that the causal graph needed a more detailed explanation. 
While we did provide some definitions of variables in the preliminary section, we acknowledge that these variables' definitions could be more explicit and better integrated into the causal graph discussion. In the revised manuscript, we have added a dedicated paragraph (Appendix C.1) that thoroughly explains each component of the causal graph, including clear definitions of variables like $\\\\bar{c}$ and $\\\\bar{X}_0$, as well as the meanings of the nodes and arrows. Additionally, to avoid confusion, we have replaced the term \\u201cspurious path\\u201d with \\u201cidentifier-noise shortcut\\u201d within our context. We also attach the causal graph details (from Appendix C.1) here for your reference.\\n\\n> Appendix C.1. Detailed Explanation of the Causal Graph Building Process\\n> \\n> To understand how protective perturbations lead to shortcut learning in PDMs, we construct a Structural Causal Model (SCM) that captures the learned causal relationships between the variables involved in the fine-tuning process. The variables in our SCM are defined as follows: $X_0$ denotes the original clean images representing the true concept; $\\\\Delta$ denotes the protective perturbations added to the images; $X_0^\\\\prime = X_0 + \\\\Delta$ are the perturbed images used for fine-tuning; $c$ represents class-specific textual prompts without the unique identifier (e.g., \\\"a photo of a person\\\"); $\\\\mathcal{V}^*$ is the unique identifier token used in personalized prompts (e.g., \\\"sks\\\"); $c^{\\\\mathcal{V}^*} = c \\\\oplus \\\\mathcal{V}^*$ denotes the personalized textual prompts combining $c$ and $\\\\mathcal{V}^*$; $\\\\theta_{T}$ represents the model parameters after fine-tuning. The structural equations governing the relationships in our SCM are as follows: (1) Perturbed Images: $X_0' = X_0 + \\\\Delta$, where $X_0'$ represents the perturbed images, $X_0$ the original clean images, and $\\\\Delta$ the protective perturbations. 
(2) Model Fine-tuning: $\\\\theta_{T} = f_{\\\\theta}(\\\\theta_0, X_0', c^{\\\\mathcal{V}^*}, \\\\bar{X_0}, \\\\bar{c})$, where $\\\\theta_{T}$ represents the fine-tuned model parameters, $\\\\theta_0$ the initial model parameters, $c^{\\\\mathcal{V}^*}$ the personalized text prompts, and $\\\\bar{X_0}$ and $\\\\bar{c}$ the images and prompt of the class-specific dataset that help the model maintain the class prior. For our case of fine-tuning on human portraits, $\\\\bar{X_0}$ consists of person images from different identities, and $\\\\bar{c}$ is set to \\\"a photo of person\\\". After $\\\\theta_T$ has been fine-tuned, it learns the latent causal relationship $\\\\mathcal{V}^*$ $\\\\rightarrow$ $X_0'$ via the conditioning mechanism through prompt-image association.\\n\\n---\\n\\n> Q5. The causal graph may need structural revision. In causal inference, an arrow between A and B signifies that A causes B. However, in this graph, it seems that an arrow signifies containment rather than causation. I would suggest adhering closely to causal inference conventions and adjusting the graph accordingly.\\n\\n**Response:** Thank you for this suggestion. We have revised the causal graph in the manuscript to adhere more closely to causal inference conventions, ensuring that the arrows correctly represent causal relationships. In Appendix C.1, we have also provided detailed explanations of each node and edge in the graph to clarify their meanings. We also attach them here for your reference.\\n\\n> Appendix C.1. Detailed Explanation of the Causal Graph on Node and Edge\\n>\\n> In the graph, we define each node to represent one of the elements of the learned causation: independent variables (i.e., text prompts and the unique identifier), dependent variables (i.e., perturbed identity images, general face images), or intermediate variables (i.e., the composited prompt). 
For those prompt composition edges, the relationship is simply the concatenation operation in the textual space. For those prompt-image association edges, the relationship is defined as the causation learned by the model $\\\\theta_T$. For the edge between $\\\\Delta$ and $X_0'$, the relationship is defined as the direct effect of the perturbations on the original clean images, $X_0' = X_0 + \\\\Delta$.\"}", "{\"summary\": \"This paper uncovers and validates the underlying mechanism by which adversarial perturbations disturb the fine-tuning of personalized diffusion models by latent-space image-text misalignment. Then, it introduces a systematic defense framework that mitigates the misalignment with data purification and contrastive decoupled learning and sampling.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper finds that adversarial perturbation leads to latent image-text mismatch and provides an explanation from the perspective of shortcut learning. Their analysis contributes to the further development of protective perturbation in personalized diffusion models.\", \"The proposed framework provides a system-level defense covering data purification, model training, and sampling strategy. Compared with previous data transformation and diffusion-based methods, the proposed method achieves the best semantic and image quality restoration.\"], \"weaknesses\": [\"In Table I, it would be better for the authors to add a setting in which the clean images are processed by the proposed and baseline methods.\", \"In Table II, why only calculate the time for data purification? Will CDL incur additional time costs?\"], \"questions\": \"Please help to check weaknesses.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"This is a promising method to destroy nearly all SOTA defense studies on personalized diffusion models. As mentioned, it provides a valuable evaluation framework. 
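The SCM laid out in Part 5 above can be made concrete with a small sketch. This is not code from the paper or its appendix: the node names, the adjacency-list encoding, and the acyclicity check are illustrative choices; only the structural equation $X_0' = X_0 + \Delta$ comes from the text.

```python
# Illustrative sketch of the SCM from the quoted Appendix C.1 (not the
# authors' code). Node names follow the rebuttal's notation; edges encode
# the stated dependencies; perturb() is structural equation (1).

def perturb(x0, delta):
    """Structural equation (1): element-wise X0' = X0 + Delta."""
    return [a + b for a, b in zip(x0, delta)]

# Causal graph as an adjacency list (parent -> children).
SCM_EDGES = {
    "X0": ["X0_prime"],       # clean images feed the perturbed target
    "Delta": ["X0_prime"],    # protective noise feeds the perturbed target
    "c": ["c_V"],             # class prompt -> personalized prompt
    "V*": ["c_V"],            # unique identifier -> personalized prompt
    "X0_prime": ["theta_T"],  # fine-tuning target -> fine-tuned weights
    "c_V": ["theta_T"],       # conditioning prompt -> fine-tuned weights
    "X0_bar": ["theta_T"],    # class-prior images -> fine-tuned weights
    "c_bar": ["theta_T"],     # class-prior prompt -> fine-tuned weights
    "theta_T": [],
}

def is_acyclic(edges):
    """A causal graph must be a DAG; DFS with a three-state visit marker."""
    state = {}  # 1 = on current path, 2 = finished

    def visit(node):
        if state.get(node) == 1:
            return False  # back-edge found: cycle
        if state.get(node) == 2:
            return True
        state[node] = 1
        ok = all(visit(child) for child in edges.get(node, []))
        state[node] = 2
        return ok

    return all(visit(node) for node in list(edges))
```

Checking `is_acyclic(SCM_EDGES)` confirms the drawn graph has no cycles, which is the minimal well-formedness condition that the causal-inference conventions raised in Q5 require.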
However, how to protect is still unsolved.\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We further provide point-by-point responses to the reviewer's concerns:\\n\\n> **Causal Analysis Novelty**: \\u201cAlthough the authors provided a causal analysis, it appears to be limited to constructing a causal graph with prior knowledge to describe the problem, without offering theoretical guarantees.\\u201d\\n\\n**Response**: While we do not claim to propose a new causal analysis framework or theoretical guarantees, our contribution lies in **being the first to apply causal analysis to protective perturbations in personalized generation tasks**. Through this lens, we made the novel discovery of **latent-space image-prompt mismatches**, validated by extensive empirical evidence (Section 4.1, Appendix B.2).\\n\\nRegarding similarities with causal analyses in backdoor attacks: while both involve causal frameworks, the problem settings differ fundamentally. Backdoor analyses typically focus on simple classification tasks (input-output pairs), whereas our work addresses text-to-image diffusion models with complex conditioning and multiple learning targets. Additionally, the confounder in backdoor attacks is introduced at the input level, while in our case, it arises from protective perturbations affecting learning targets. To our knowledge, this is the first causal analysis tailored to personalized generation tasks under adversarial setups. Additionally, this analysis motivated our CDL module, which introduces a noise-related node $\\\\mathcal{V}_N^*$ to decouple clean and noisy concepts\\u2014a design not present in prior work.\\n\\n> **Purification Pipeline Novelty**: \\u201cThe proposed methods are not fundamentally based on the causal aspect itself. 
Upon closer examination, the methods share significant similarities with prior approaches using purification techniques.\\u201d\\n\\n**Response**: While all red-teaming methods share the common goal of mitigating perturbations, our work differs in both **approach** and **systematic integration**:\\n\\n1. **Beyond Image Denoising**: Unlike prior methods (e.g., GrIDPure, IMPRESS) that focus solely on image-space denoising, our framework incorporates causal insights to address the root cause of latent mismatches. This allows us to go beyond traditional purification, introducing decoupling learning to systematically restore alignment and generation quality.\\n\\n2. **Contrastive Decoupling Learning (CDL)**: Our CDL module not only adds suffixes to prompts but also adjusts the sampling process to enhance decoupling and generation quality. By learning clean and noisy concepts separately, and guiding the model with enhanced classifier-free guidance (Eq. 6), we ensure high fidelity and clarity in generated images.\\n\\n3. **More Efficient, Faithful and Practical Purification**: Leveraging off-the-shelf image restoration and super-resolution models, our approach avoids heavy iterative optimization (comparable to IMPRESS) and produces faithful content (comparable to GrIDPure). Furthermore, leveraging our image restoration models as purification pipelines is also more aligned with real-world standardized practices [1,2] on training personalized diffusion models on potentially corrupted data, providing a more practical red-teaming framework for the protection side. \\n\\nWe hope these clarifications address the reviewer\\u2019s concerns and highlight the novelty and significance of our work. We are happy to discuss any specific points or further elaborate on areas of interest.\\n\\n**References**\\n\\n`[1]`. Kohya-ss. SD Scripts. GitHub, https://github.com/kohya-ss/sd-scripts. Accessed 23 Nov. 2024\\n\\n`[2]`. Akegarasu. LoRA & Dreambooth Training Scripts & GUI. 
GitHub, https://github.com/Akegarasu/lora-scripts. Accessed 23 Nov. 2024.\"}", "{\"summary\": \"This paper proposes viewing the fine-tuning process of Personalized Diffusion Models (PDMs) through the lens of shortcut learning, using causal analysis as motivation. The authors then introduce a defense framework designed to enable the model to correctly associate images with their original semantic meanings.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper provides preliminary experiments on CLIP, which help demonstrate the authors' ideas.\\nPersonalized diffusion models present an interesting area for further exploration.\", \"weaknesses\": \"1. The paper lacks overall coherence, with some sections difficult to follow and, in some cases, contradictory. Additionally, several terms and graphs are missing clear definitions and explanations.\\n\\n 1. Are \\\"adversarial perturbations\\\" and \\\"protective perturbations\\\" intended to be the same concept? The author seems to use these terms interchangeably; if they differ, please clarify each term carefully.\\n 2. In the introduction, the author presents multiple related works. It may be helpful to focus on those most relevant to the paper\\u2019s main motivation. Additionally, certain terms, such as \\\"purification studies,\\\" would benefit from brief explanations\\u2014similar to the way \\\"image purifications\\\" is introduced on line 142.\\n 3. Several equations need further explanation, such as those on lines 178-179, regarding the function of an instance dataset and a class dataset. Additionally, the meaning of \\\"r\\\" on line 208 is unclear.\\n\\n2.After reading the entire paper, I found it challenging to identify the specific question the author aims to address and the associated motivations. While the introduction attempts to outline these points, it is difficult to discern the relationship between the motivation and the problem being addressed. 
Additionally, there appears to be a disconnect between the problem definition in the introduction and the methods presented. Here are some specific suggestions for clarification:\\n\\n 1. The introduction states, \\u201cThe model trained on perturbed data will generate images that are poor in quality, and thus, unauthorized fine-tuning fails.\\u201d Does this imply that generating low-quality images of private content protects copyright and privacy? If so, why does the proposed method focus on enhancing image clarity for private content while defining it as a defense?\\n 2. The author mentions that shortcuts are key to avoiding the generation of private personal images. Given this, why does the method seem to eliminate these shortcuts?\\n 3. On line 46, adversarial perturbations are suggested as a means to protect users\\u2019 images from unauthorized personalized synthesis. However, line 100 describes an intention to \\\"defend against\\\" this. Could you clarify?\\n 4. Additionally, the highlighted question in the introduction, \\u201cHow to design an effective, efficient, and faithful purification approach is still an open question,\\u201d lacks context. Although there is a mention of \\u201cMoreover, purification studies are also purposed to further break those protections\\u201d in the following sentence, there are no subsequent explanations, particularly concerning how this question connects with the paragraph's earlier discussion.\\n 5. At the end of the introduction, it seems that the authors propose new purification methods: \\\"Our approach conducts comprehensive purification from three perspectives, including input image purification, contrastive decoupling learning with the negative token, and quality-enhanced sampling....\\\". However, in the methods, the authors say they propose a method to address shortcut learning, which is a little bit confusing.\\n\\n3. 
Minor: Although viewing fine-tuning from a causal effect and shortcut learning perspective is novel, it shares similarities with backdoor attacks. In the backdoor attack literature, several papers have employed causal graphs to analyze shortcut mechanisms [1-3].\\n\\n4. The causal graph is underexplained and possibly contains ambiguities. For example, the definitions of $\\\\bar{c}$ and $\\\\bar{x}_0$ are missing. While a brief introduction to the construction of the graph is provided, explanations of each node\\u2019s meaning and the meaning of the arrows are absent. Given that the causal graph is a key contribution, adding a paragraph to introduce and explain it in detail would be beneficial. The term \\\"spurious path\\\" may also be misapplied; in causal inference, this usually refers to a backdoor path between treatment and outcome. Since this doesn\\u2019t apply here, either avoid the term or define it within the paper\\u2019s context.\\n\\n5. The causal graph may need structural revision. In causal inference, an arrow between A and B signifies that A causes B. However, in this graph, it seems that an arrow signifies containment rather than causation. I would suggest adhering closely to causal inference conventions and adjusting the graph accordingly.\\n\\n[1] Zhang Z, Liu Q, Wang Z, et al. Backdoor defense via deconfounded representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 12228-12238.\\n\\n[2] Liu Y, Xu X, Hou Z, et al. Causality Based Front-door Defense Against Backdoor Attack on Language Models[C]//Forty-first International Conference on Machine Learning.\\n\\n[3] Hu M, Guan Z, Zhou Z, et al. 
Causality-Based Black-Box Backdoor Detection[J].\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"References\", \"comment\": \"Again, we appreciate the reviewer for their valuable comments, which helped improve the manuscript significantly.\\n\\n**References**\\n\\n`[1]`. Zhao, Zhengyue, et al. \\\"Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n`[2]`. Zhang, Zaixi, et al. \\\"Backdoor defense via deconfounded representation learning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n`[3]`. Liu, Yiran, et al. \\\"Causality Based Front-door Defense Against Backdoor Attack on Language Models.\\\" Forty-first International Conference on Machine Learning. 2024.\"}", "{\"comment\": \"Dear Reviewer nipZ, we would like to check if our rebuttal has addressed your concerns or if any points still require clarification before the response period ends. Thank you for your time.\"}", "{\"title\": \"Part 4\", \"comment\": \"> Q3. Minor: Although viewing fine-tuning from a causal effect and shortcut learning perspective is novel, it shares similarities with backdoor attacks. In the backdoor attack literature, several papers have employed causal graphs to analyze shortcut mechanisms.[1-3]\\n\\n**Response:** Thank you for pointing out these related works in backdoor attacks and defenses. We have added a dedicated subsection in the revised manuscript (Appendix E.1) to discuss these causality-based backdoor defense works. While there are similarities in problem formulation and technical direction, our work is fundamentally different from these studies. 
Specifically:\\n\\n- **Different Problem Settings**: Backdoor defenses typically focus on classification tasks, where the spurious correlations are between input features and class labels. In contrast, our work addresses personalized generation tasks in diffusion models, dealing with associations between textual identifiers and image content.\\n- **Distinct Technical Approaches**: Our method introduces a unified framework employing both do-calculus (for purification) and decoupling learning, operating on prompt augmentation and image generation, which differs from the techniques used in the backdoor defense literature.\\n\\nWe provide detailed discussions comparing our work with CBD (Zhang et al., 2023) [1] and FABE (Zheng et al., 2024) [2] in Appendix E.1. We also attach them here for your reference.\\n\\n> **Comparison with CBD and FABE.** Our work and these works both leverage causality-based perspectives to defend against or red-team the perturbation. However, the problem and techniques in our work are fundamentally different from these two works. First, in terms of the problem, CBD and FABE both focus on the classification task, either image classification or text classification, where the backdoor spurious path is established between the model input $X$ and the class label prediction $Y$. For our task, we are tackling the personalized generation task, where the LDMs are fine-tuned to link a unique identifier $\\\\mathcal{V}^*$ to a new subject concept $X_0$. In the backdoor attack case, the attacker aims to introduce a confounder variable $A$ at the input side to trigger a certain label prediction $Y'$, while in our case, the image protector only modifies the learning target $X_0^\\\\prime=X_0+\\\\Delta$ but does not explicitly add any trigger at the input side, which serves as the confounder in the backdoor attack case. 
Thus, considering the difference in the threat model, the defense techniques in the backdoor case, such as CBD and FABE, focus more on removing the confounder on the input side, while the defense in our case focuses on the prediction side, by reinforcing the causal path between the unique identifier $\\\\mathcal{V}^*$ and the clean target concept $X_0$. \\n\\n> Second, in terms of techniques, both CBD and FABE only focus on one perspective of causal intervention, while our work proposes a unified framework that conducts both do-calculus (i.e., removing the injected variable, or purification) and decoupling learning. Specifically, CBD assumes that the correlations $A \\\\rightarrow Y$ can be well captured by an early-stop model $f_B$, and CBD learns the clean model $f_C: X \\\\rightarrow Y$ by minimizing the mutual information between the embeddings from $f_B$ and $f_C$. Compared to this feature-space decoupling learning, our work operates on the prompt augmentation side, which can be more efficient and end-to-end. Specifically, we observe that the class-specific images don't contain any perturbation, while the instance images might contain the perturbation. Thus, we introduce a new noise identifier $\\\\mathcal{V}^*_N$ and append it to the two different datasets with the different prefixes \\\"with\\\" and \\\"without\\\" to achieve contrastive decoupling learning, without any need to access the model weights or tune any early-stopping hyper-parameters as in CBD. \\n\\n> Similar to the purification part in our work, FABE mainly focuses on conducting semantic denoising on the original textual input to approximately achieve the do-calculus from the causal intervention perspective. Specifically, FABE denoises $X$ into a semantically equivalent text $Z$ with a fine-tuned language model. The fine-tuned language model learns to rank effective $Z$ that removes the confounder $A$, i.e., the backdoor trigger. 
Then, the prediction is conducted via voting over a pool of sampled $Z$ to achieve a clean prediction of $Y$. Compared to FABE, our purification pipeline for protective perturbation is more direct and flexible, without the need to fine-tune an additional model. Meanwhile, FABE requires unrolling $B$ semantic candidates using beam search, which can be computationally expensive especially when context length $L$ is large. In contrast, we leverage off-the-shelf image restoration and super-resolution models to conduct one-shot efficient purification.\"}", "{\"comment\": \"I acknowledge that I have read the response.\"}", "{\"comment\": \"Dear Reviewer 2TWg, we would like to check if our rebuttal has addressed your concerns or if any points still require clarification before the response period ends. Thank you for your time.\"}", "{\"comment\": \"Thank you for taking the time to reply to our response. According to the 2021-2025 ICLR Reviewer Guide, novelty should be evaluated based on both **technical methods** and **novel findings**. We believe our work contributes significantly in both aspects. Below, we provide detailed clarifications regarding the novelty of our work.\\n\\n**Contribution Statements:**\\n\\n1. **Novel Finding on Protective Perturbations**: Our work is the first to analyze and rethink the effectiveness of protective perturbations through a causal and shortcut learning lens. We make the novel discovery that **effective protective perturbations create latent-space image-prompt mismatches**. This means that the perturbed images and their corresponding prompts are no longer semantically aligned in the latent space. We validate this finding through extensive experiments, including latent mismatch visualizations and concept interpretations (Section 4.1 and Appendix B.2).\\n\\n2. **Systematic Red-Teaming Framework**: Building on this insight, we propose a systematic, efficient, and faithful red-teaming framework against existing protective perturbations. 
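To make the contrastive decoupling mechanism discussed in this thread concrete (the \"with/without XX noisy pattern\" suffixes attached during training, plus negative-token guidance at sampling time), here is a minimal sketch. It is our illustrative reading, not the authors' implementation: the token string, suffix wording, guidance form, and weights are all assumptions, and the `eps_*` arguments stand in for U-Net noise predictions.

```python
# Illustrative sketch (not the authors' code) of CDL prompt handling and
# negative-token guidance. All strings and weights below are assumptions.

NOISE_TOKEN = "sks_noise"  # hypothetical noise identifier V*_N

def cdl_prompt(prompt, is_instance):
    """Attach the contrastive suffix: instance (possibly perturbed) data is
    trained 'with' the noise concept, class-prior data 'without' it."""
    mode = "with" if is_instance else "without"
    return f"{prompt}, {mode} {NOISE_TOKEN} noisy pattern"

def guided_noise(eps_uncond, eps_pos, eps_neg, w=7.5, w_neg=1.0):
    """Classifier-free guidance extended with a negative ('with noise')
    branch: push toward the clean concept and away from the noise concept."""
    return (eps_uncond
            + w * (eps_pos - eps_uncond)
            - w_neg * (eps_neg - eps_uncond))
```

At inference, the positive branch would condition on the \"without ... noisy pattern\" prompt and the negative branch on the \"with ... noisy pattern\" prompt, so the learned noise concept is actively steered away from, rather than merely omitted.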
Our framework achieves state-of-the-art performance across 9 purification baselines and 7 protection methods, excelling in defense effectiveness, efficiency, and faithfulness. This extends beyond existing purification techniques that solely focus on image denoise, incorporating decoupling learning to address the limitations of prior methods.\\n\\n3. **Novel Contrastive Decoupling Learning (CDL)**: Our CDL method is the first to explicitly guide models to separately learn clean and noisy concepts during fine-tuning for personalized generation tasks. Through classifier-free guidance (Eq. 6, Section 4.2) during sampling, our CDL effectively decouples these concepts, enabling robust red-teaming against protective perturbations. We demonstrate that CDL is not only effective on its own but also works synergistically with purification techniques. Moreover, our experiments in Section 5.3 (Resilience Against Adaptive Perturbations) highlight CDL as a robust, and potentially once-for-all solution for breaking protective perturbations. \\n\\nThese contributions offer significant insights and advancements in both protective perturbation design and red-teaming methodologies, with broader implications for the ICLR community.\"}", "{\"title\": \"Part 3\", \"comment\": \"> Q2.1. The introduction states, \\u201cThe model trained on perturbed data will generate images that are poor in quality, and thus, unauthorized fine-tuning fails.\\u201d Does this imply that generating low-quality images of private content protects copyright and privacy? If so, why does the proposed method focus on enhancing image clarity for private content while defining it as a defense\\n\\n**Response:** We acknowledge the confusion and appreciate the opportunity to clarify. Our work is a red-teaming effort aimed at defeating existing protective perturbations. 
While protective perturbations degrade image quality to prevent unauthorized fine-tuning, our goal is to overcome these protections and restore high-quality image generation. This is aligned with previous works like IMPRESS and GrIDPure. The value of our red-teaming work lies in:\\n\\n- **Evaluating Robustness**: By developing more principled and systematic methods to break protective perturbations, we assess their robustness and reveal potential vulnerabilities. This helps prevent a false sense of security among portrait owners and artists who might overly rely on these protections.\\n- **Guiding Future Protections**: Our findings can inform the development of more effective and robust protective perturbation methods, enhancing privacy and copyright protections in the future.\\n\\n> Q2.2. The author mentions that shortcuts are key to avoiding the generation of private personal images. Given this, why does the method seem to eliminate these shortcuts?\\n\\n**Response:** We understand this confusion is similar to Q2.1 and would like to clarify. In our work, we identify that protective perturbations cause the model to learn shortcut connections between the added noise and the personalized identifier \\u2014 an unintended association. Our method aims to eliminate these shortcuts to prevent the model from being misled by the perturbations, thereby allowing it to correctly learn the association between the identifier and the original clean images. This elimination is essential for defeating the protective perturbations and restoring high-quality image generation.\\n\\n\\n\\n> Q2.3. On line 46, adversarial perturbations are suggested as a means to protect users\\u2019 images from unauthorized personalized synthesis. However, line 100 describes an intention to \\\"defend against\\\" this. Could you clarify?\\n\\n**Response:** Thank you for pointing out this inconsistency. 
We followed the terminology used in GrIDPure (Zhao et al., 2024) [1], where \\u201cdefense\\u201d refers to methods that defeat protective perturbations. **However, to avoid confusion, we have revised the manuscript to use the term \\u201cred-teaming\\u201d instead of \\u201cdefense,\\u201d clearly indicating that our work focuses on breaking existing protective perturbations rather than safeguarding them.**\\n\\n> Q2.4. Additionally, the highlighted question in the introduction, \\u201cHow to design an effective, efficient, and faithful purification approach is still an open question,\\u201d lacks context. Although there is a mention of \\u201cMoreover, purification studies are also purposed to further break those protections\\u201d in the following sentence, there are no subsequent explanations, particularly concerning how this question connects with the paragraph's earlier discussion.\\n\\n**Response:** Thank you for highlighting this issue. In the revised manuscript, we have added more context and explanations regarding purification studies, along with appropriate references. We have also revised the introduction to better present the limitations of existing purification methods, including the two important baselines, IMPRESS and GrIDPure. This provides a smoother transition to our main motivation of developing a more effective, efficient, and faithful purification approach.\\n\\n> Q2.5. At the end of the introduction, it seems that the authors propose new purification methods: \\\"Our approach conducts comprehensive purification from three perspectives, including input image purification, contrastive decoupling learning with the negative token, and quality-enhanced sampling....\\\". 
However, in the methods, the author says they propose a method to address the short cut learning...., which is a little bit confusing.\\n\\n**Response:** Our method is indeed a purification approach, but it is theoretically grounded in causal analysis and specifically designed to address shortcut learning. Through our causal analysis, we discovered that protective perturbations create shortcut connections during the fine-tuning process, causing the model to learn superficial patterns rather than meaningful semantic features. By understanding and targeting this root cause, our method provides a systematic and principled purification mechanism against protective perturbations. This theoretical grounding distinguishes our approach from previous methods that focus solely on noise removal without considering the underlying causal mechanisms.\"}", "{\"title\": \"General Response\", \"comment\": \"We greatly appreciate all reviewers for your time and effort in providing this insightful feedback that helps us improve our work. We have submitted a revised version of the paper that highlights the changes in blue color. In this post, we provide a general response summary to the most common questions and the main updates in our revision.\\n\\n### Response Summary to Common Questions\\n\\n> Q1: Analysis and connection between latent mismatch and shortcut learning (Reviewer `nipZ`, `2TWg`)\\n\\n**Response**: We provide a more detailed explanation of how latent mismatch leads to shortcut learning. In Section 4.1, we demonstrate that protective perturbation transforms the target to $X_0^\\\\prime = X_0 + \\\\Delta$, causing semantic displacement in the latent space toward noise patterns rather than preserving the original identity concept. 
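The latent-space image-prompt mismatch described in this answer can be probed with a simple alignment check. The sketch below is schematic and not from the paper: `embed_image` and `embed_text` are placeholders for a shared-space encoder such as CLIP, and the drop score is just a difference of cosine similarities.

```python
# Schematic check (not the authors' code) for the latent-space
# image-prompt mismatch: compare how well clean vs. perturbed images
# align with the same prompt in a shared embedding space.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def alignment_drop(img_clean, img_perturbed, prompt, embed_image, embed_text):
    """How far image-prompt alignment falls after perturbation; a positive
    drop means the perturbed image drifted away from the prompt in the
    shared latent space, i.e., the mismatch described above."""
    t = embed_text(prompt)
    return cosine(embed_image(img_clean), t) - cosine(embed_image(img_perturbed), t)
```

With real CLIP encoders plugged in, a consistently positive drop over a protected dataset would be one empirical signature of the semantic displacement claimed in this response.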
Given the architectural constraints of Dreambooth learning, where $\\mathcal{V}^*$ must associate with either the noise or the identity concept, the model naturally converges to learning spurious correlations between $\\mathcal{V}^*$ and $\\Delta$, as this represents the path of least resistance for loss minimization.\\n\\n---\\n\\n> Q2: Positioning and framing of the work as red-teaming protective perturbations rather than safeguarding them (Reviewer `uRZe`, `nipZ`)\\n\\n**Response**: We appreciate the feedback about the positioning of our work. We have made several important revisions:\\n\\n- Updated \\\"Defending\\\" in the title to \\\"Red-Teaming\\\" to better reflect our positioning and avoid confusion. \\n- Clarified throughout the paper that our work is a red-teaming effort aimed at understanding and further breaking protective perturbations.\\n- Updated the introduction to focus on the two key related works, IMPRESS and GrIDPure, clarifying terminology, and maintaining consistent terminology around \\\"protective perturbations\\\".\\n\\n---\\n\\n> Q3: Technical novelty of contrastive decoupling learning and comparison with existing work in backdoor defense (Reviewer `uRZe`, `2TWg`)\\n\\n**Response**: While decoupling learning exists in the backdoor defense literature [1], as pointed out by reviewer `uRZe`, our approach differs substantially in both problem space and technical implementation from CBD [1]. We believe that our work makes a novel contribution in terms of both methodology and findings. We summarize the main differences from CBD [1] here for reviewers' convenience: \\n1. We address fundamentally different challenges than previous work like CBD [1]. 
CBD focuses on backdoor attacks in classification tasks, where attackers introduce input-side confounders to trigger specific label predictions, whereas our work tackles personalized generation in text-to-image diffusion models, where protectors modify the learning target ($X_0^\\prime=X_0+\\Delta$) to link identifiers ($\\mathcal{V}^*$) with subject concepts.\\n2. Our technical approach achieves decoupling more efficiently through prompt augmentation that capitalizes on inherent differences between class-specific and instance-specific images. In CBD, decoupling is achieved in feature space through mutual information minimization between early-stop and clean models. In contrast, we introduce a noise identifier $\\mathcal{V}^*_N$ and use \\\"with\\\"/\\\"without\\\" prefixes for contrastive decoupling learning, eliminating the need for model weight access or the complex early-stopping parameters that CBD requires.\\n\\nPlease refer to Appendix E.1 in the revised manuscript for more discussion of existing work in backdoor defense.\\n\\n---\\n\\n### Change Summary in the Revised Version\\n1. Improved the introduction by focusing on the two key related works, IMPRESS and GrIDPure, clarifying variables like $r, \\bar{c},\\bar{X_0}$, and maintaining consistent terminology around \\\"protective perturbations\\\". (`uRZe`)\\n2. Updated the causal graph in Figure 2 and Figure 9 with more detailed construction details in Appendix C, following conventional notation in causal inference. (`uRZe`)\\n3. Enhanced clarity throughout the paper regarding our positioning as a red-teaming effort. (`uRZe, nipZ`)\\n4. Expanded discussion in Section 4.1 to better explain the connection between latent mismatch and shortcut learning. (`uRZe, nipZ`)\\n5. 
Added detailed comparison with related work in backdoor defense in Appendix E.1. (`uRZe, 2TWg`)\\n6. Added comprehensive discussion of alternative copyright protection approaches in Appendix E.2. (`nipZ`)\\n7. Added results under the clean setup in Table 1 to evaluate purification methods' performance without protective perturbations. (`co1c`)\\n8. Added post-hoc purification results on noisy outputs in Appendix B.6 and Figure 8. (`2TWg`)\\n9. Added discussion on broader impact and more adaptive protection in Appendix B.5. (`nipZ, co1c`)\\n\\n**References:**\\n\\n`[1]`. \\\"Backdoor defense via deconfounded representation learning.\\\" CVPR'23\"}", "{\"comment\": \"We additionally provide a point-by-point response to the reviewer's ethics concerns:\\n\\n\\n> `nipZ`: The proposed method can put the copyright of artists' work at risk. The method can void the protective perturbation in protecting images from being used for training diffusion models. The authors did not discuss the potential negative impacts.\\n\\n**Response**: **We additionally discuss the potential negative impacts of our work in the broader impact sector in Appendix B.5 in the revised manuscript.** We attached them here for your reference.\\n\\n> **Discussion on Broader Impact:** Our work on red-teaming existing protective perturbations raises ethical considerations, particularly regarding privacy and intellectual property rights. While our methods could potentially compromise images protected by existing protective perturbations, we believe that the benefits of this research outweigh the potential risks. First, our research helps prevent a false sense of security by revealing limitations in existing protective measures. This transparency enables portrait owners and artists to make more informed decisions about protecting their content. 
Furthermore, the insights gained from our analysis can inform the development of next-generation protection techniques that are more resilient against sophisticated red-teaming, thereby strengthening privacy and copyright safeguards in the long term.\\n\\n---\\n\\n> `co1c`: This is a promising method to destroy nearly all SOTA defense study on personalized diffusion model. As it mentioned, it provides a valuable evaluation framework. However, how to protect is still unsolved.\\n\\n**Response**: Our work primarily focuses on red-teaming and provides a comprehensive evaluation framework for existing protective perturbations. While we demonstrate that our framework is robust against adaptive perturbations (Section 5.3), we acknowledge that more sophisticated protection techniques may emerge. For instance, our red-teaming setup currently focuses on noise-based protective perturbations, but object-embedded perturbations (Zhu et al., 2024) could potentially resist our noise-concept-based CDL prompt design. Additionally, to counter our purification pipeline, future protection techniques could explore more advanced ensemble methods (Chen et al., 2022) to develop more resilient defenses. **We have added this discussion to the limitations section in Appendix B.5 of the revised manuscript.**\"}", "{\"title\": \"Response to Reviewer uRZe\\u2019s Comments and Concerns\", \"comment\": \"Thank you for your detailed feedback. We would like to address the concerns you mentioned, and we hope this response provides clarity and facilitates further discussion among reviewers.\\n\\n> Q1. Contribution and Depth in Causal Analysis Part\\n\\n**Response:** We agree that in-depth causal analysis should not be listed as a key contribution of our work. 
Instead, our contribution lies in providing an in-depth understanding of the first research question: *Why do existing protective perturbations work?* This led us to uncover the latent-space image-prompt mismatch, which we identify as a key mechanism exploited by existing protective perturbation methods.\\n\\nUsing a causal graph to describe and explain the problem, we identified the **identifier-noise shortcut path** as the root cause of the protective perturbation effect. Further, we demonstrated that this shortcut path does not activate by default with random perturbations, but rather through the latent-space image-prompt mismatch\\u2014a novel mechanism we discovered. This hypothesis was validated with extensive experiments, including latent visualizations and interpretation studies (Figures 3, 7, 9; Appendix B.2 in the revised manuscript).\\n\\nIn Section 4.1, we provide an in-depth analysis connecting the effectiveness of perturbations to the latent-space mismatch hypothesis. While we acknowledge that our methodology includes elements of empirical causality-based defense, our work contributes a broader concept that incorporates both **do-calculus** and **decoupling learning strategies**\\u2014a conceptual advancement over works like CBD and FABE. More details can be found in Appendices E.1 and C.2 of the revised manuscript. Taken together, our systematic defense strategies and state-of-the-art red-teaming results represent a significant contribution to the field.\\n\\n\\n> Q2. Purification Part Does Not Differ from Previous Works\\n\\n**Response:** We respectfully disagree with this assessment. Our purification pipeline introduces significant differences compared to prior methods like IMPRESS and GrIDPure, both of which leverage off-the-shelf diffusion models (e.g., Stable Diffusion in IMPRESS and pre-trained unconditioned diffusion models in GrIDPure) as denoisers. 
In contrast, our work is the first to explore the use of **image restoration models and super-resolution models** for handling adversarial perturbations in protective perturbation tasks. Unlike the iterative optimization approach of IMPRESS and the grid-division strategy of GrIDPure, our pipeline focuses on designing an effective combination of modules, validated through adaptive perturbation experiments and ablation studies. Additionally, our method addresses practical challenges, such as **inefficiency and hallucination issues**, observed in previous purification techniques. This practical contribution should not be overlooked, as it resolves key limitations in existing methods while delivering strong red-teaming results.\\n\\n\\n> Q3. Similarity to Backdoor Attacks and Further Safety Concerns\\n\\n**Response:** We appreciate this concern and would like to clarify potential misunderstandings about the problem setup and security implications of our method.\\n\\nFirstly, our work is focused on **red-teaming** protective perturbations to **break the protection effect** and enable the generation of high-quality personalized images from protected datasets. **We are not claiming to generate \\u201csafe\\u201d images or to enhance the safety of the models in terms of preventing unauthorized use.** Instead, our objective is to counteract protective perturbations crafted to disrupt personalized diffusion model fine-tuning. Specifically, in our problem setup, the image protector crafts protective perturbations that fool the personalized diffusion model fine-tuning process, and our red-teaming objective is to retain clean generation performance. \\n\\nSecondly, regarding the potential for further exploitation by the mentioned \\u201cattacker,\\u201d who is assumed to have access to models trained with our method, we respectfully argue that this is unlikely to happen. 
Adding the suffix \\u201cwith XX noisy pattern\\u201d during inference would not enable the \\u201cattacker\\u201d to generate protected personalized images. Instead, it would likely degrade the model\\u2019s generation performance because the model learns to associate the \\u201cwith XX noisy pattern\\u201d suffix with the noise patterns introduced by the protective perturbations. **Therefore, there is no incentive for the \\u201cattacker\\u201d or a model trainer to use this suffix, as it would not yield beneficial results.**\\n\\nThirdly, our method fundamentally differs from backdoor attacks in both objective and mechanism (see App E.2). In backdoor attacks, triggers are intentionally injected to create spurious correlations between a trigger and the target label, allowing attackers to manipulate the model. Our approach aims to **decouple spurious correlations**, restoring the correct association between personalized identifiers and clean concepts.\"}" ] }
DbZDbg2z9q
Ontology-Retrieval Augmented Generation for Scientific Discovery
[ "Andres M Bran", "Alexandru Oarga", "Matthew Hart", "Magdalena Lederbauer", "Philippe Schwaller" ]
Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, sparking increasing interest in their application in science. However, in scientific domains, their utility is often limited by hallucinations that violate established relationships between concepts or ignore their meaning; problems that are not entirely eliminated with Retrieval Augmented Generation (RAG) techniques. A key feature of science is the use of niche concepts, abbreviations and implicit relationships, which may render RAG approaches less effective due to the lack of understanding of concepts, especially in emerging and less known fields. Ontologies, as structured frameworks for organizing knowledge and establishing relationships between concepts, offer a potential solution to this challenge. In this work we introduce OntoRAG, a novel approach that enhances RAG by retrieving taxonomical knowledge from ontologies. We evaluate the performance of this method on three common biomedical benchmarks. To extend the value of OntoRAG to emerging fields, where ontologies have not yet been developed, we also present OntoGen, a methodology for generating ontologies from a set of documents. We apply the combined OntoGen+OntoRAG pipeline to a novel benchmark of scientific discovery in the emerging field of single-atom catalysis. Our results demonstrate the promise of this method for improving reasoning and suppressing hallucinations in LLMs, potentially accelerating scientific discovery across various domains.
[ "ontology", "rag", "retrieval", "llm", "science", "ai4science", "chemistry", "biomedical", "reasoning" ]
Reject
https://openreview.net/pdf?id=DbZDbg2z9q
https://openreview.net/forum?id=DbZDbg2z9q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xlSEd4ye03", "mvgyNWIhpP", "j2TnScWit6", "cmi61X3dvL", "aan4kwmzyr", "ZNGEWD8khR", "Yj4DreCZT8", "UVNZgIdYoE", "PG7j6UKEJi", "OiOx1pcWih", "L3JhXp8GG5", "KeDfjLM1KZ", "IsftgDa0Hf", "FcMjZyJBqL", "DBQuEAl8l4", "33fW1wb04q", "1qHNf12vE0", "1hFBZbCAsu" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732578397506, 1729176062217, 1732379898919, 1732876601315, 1732713732795, 1732611876625, 1730483368292, 1734897361558, 1737524297647, 1732804819882, 1732883397287, 1732634212289, 1730278618626, 1730718495132, 1732490850512, 1732521437871, 1732301400258, 1732492433417 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_4Tdh" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_htBH" ], [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_htBH" ], [ "ICLR.cc/2025/Conference/Submission14062/Area_Chair_PNqS" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_4Tdh" ], [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_2Kkq" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_2Kkq" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_ee63" ], [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Authors" ], [ "ICLR.cc/2025/Conference/Submission14062/Reviewer_htBH" ] ], 
"structured_content_str": [ "{\"comment\": \"We sincerely thank the reviewer for their thorough comments. We have revised our manuscript and addressed your questions in the following way:\\n\\n# Q1: the authors speculate that the results are due to discrepancies in vocabulary; have they confirmed that?\\n\\nWe appreciate the reviewer\\u2019s comment and thank them for pointing out this weakness. To address the reviewer's concern, we conducted further analysis to support our speculation on the effect of vocabulary discrepancies on the performance of the system. We conducted a simple analysis of the correlation between the ontological relevance of a statement (i.e. how many ontological concepts are detected in the statement) and the performance of OntoRAG on evaluating said statements. The results are given in the following table.\\n\\n| Benchmark | Correlation |\\n|-----------|-------------|\\n| medqa | 0.7852 |\\n| mmlumed | 0.7506 |\\n| medmcqa | 0.1018 |\\n\\nThe results indicate an overall positive correlation between ontological relevance and downstream performance, which is strong for medqa and mmlumed. We again thank the reviewer for their question, and hope this addresses this point. \\n\\n\\n# Q2: What is the verification process \\\"to ensure fidelity to the source text\\\" described in Section 4.1?\\n\\nThank you for this; indeed, we had not made it very clear. We have updated the manuscript to make it clear that this process is a string matching operation, ensuring that all the concepts proposed by the LLM are indeed sourced from the papers given as context, making it possible to detect and remove hallucinations.\\n\\n# Q3: how are discrepancies handled when the above verification process encounters them? Can the authors give a specific example of how a discrepancy might be handled?\\n\\nSince the verification process involves a string matching over the original text, discrepancies are handled by simply discarding the term from the vocabulary if it is not found. 
For instance, if the original text contains the term \\\"carbon dioxide\\\" but the LLM hallucinates \\\"CO2\\\", the latter term will be discarded from the vocabulary, even if it is a valid synonym. This is done as a countermeasure to avoid the introduction of hallucinated terms into the ontology.\\n\\n# Q4: What self-consistency techniques were used in Section 4.2?\\n\\nWe have included details in the Appendix, where we formally describe and cite self-consistency, and where we describe its application in our work:\\n\\n\\\"self-consistency is applied in the category generation step of OntoGen by generating multiple lists of categories and then taking the most frequent categories (i.e. the majority vote) as the final list. In the taxonomy extraction step, self-consistency is applied in the \\\"query_relationships\\\" function (see Algorithm 1 in Appendix C). In this case, a query is prompted $N$ times to the LLM (e.g. \\\"Single-Atom Catalyst isA ?\\\"), and a taxonomic relationship (e.g. \\\"Single-Atom Catalyst isA Catalyst\\\") is extracted only if it is the answer in the majority of the $N$ queries (i.e. if a relationship appears in at least $(N + 1)/2$ answers).\\\"\\n\\n# Extra\\n\\nRegarding some of the concerns you raised in the weaknesses section, here are a few clarifications that we would gladly incorporate in the manuscript if the reviewer finds it appropriate.\\n\\nFor the question \\\"Can OntoGen produce ontologies that capture scientifically significant patterns?\\\" we have included slices of the generated ontologies for SACs to show what patterns are encoded there, and analyze the patterns found there. We believe this analysis, along with the results from downstream applications (e.g. SACBench), will be enough to show how good the results from OntoGen are.\\n\\nWe have also included the base LLM (under the name of ZeroShot) as one of our baselines in the updated manuscript. 
Thank you for pointing this out; it is indeed a clear miss in our original submission.\\n\\nFinally, the biomedical benchmarks were initially run as control experiments. No improvement in these would simply show that using OntoRAG doesn't hurt, while it can help in tasks more related to scientific discovery, as we show with SACBench.\\n\\nStill, we have re-run the experiments on the medical benchmarks, this time also including the Gene Ontology, and we have updated the results as follows:\\n\\n| Method | medmcqa | medqa | mmlumed |\\n|--------|------------------|----------------|------------------|\\n| zeroshot | 62.06 | 67.16 | 80.06 |\\n| cot | 60.91 | **69.99** | 76.70 |\\n| ontorag-simple | **64.12** | 68.34 | 79.26 |\\n| ontorag-tm | 61.80 | 68.11 | 80.01 |\\n| ontorag-hypo_ans | **64.04** | 67.64 | 79.96 |\\n| ontorag-hypo_ans-tm | 62.13 | **69.36** | **80.65** |\\n\\nAs can be seen in the new results, there is typically either an advantage or simply no improvement over the baselines (ZeroShot or CoT). With this, we update our analysis to account for this fact and make it clear that these experiments serve more as a control.\\n\\nWe hope the reviewer finds these updates reasonable, and we remain open to hearing more feedback to improve our work.\"}", "{\"summary\": \"In this study, the authors design and use an automatic ontology generator, OntoGen, to create ontologies for specialized domains in which there are no pre-existing ontologies. Then, they incorporate the generated ontology from OntoGen as input into OntoRAG, a retrieval-augmented system for LLMs. The authors aim to create a system which produces more accurate scientific output than LLMs or RAG systems. They first test OntoRAG without OntoGen, using biomedical ontologies in its place, for biomedical prediction tasks. Thereafter, they evaluate OntoRAG + OntoGen on a materials science application (single atom catalyst (SAC) synthesis) for which they designed a novel benchmark dataset. 
OntoRAG performs consistently better than a baseline RAG system for SAC synthesis. The novelty of this paper lies in (1) the design of an automatic ontology generator, OntoGen, (2) the development of a RAG system which incorporates ontologies as input, OntoRAG, and (3) the creation of a benchmark dataset, SACBench, for assessing LLM output within the context of a specialized, materials science domain.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Novelty: The authors addressed a novel, interdisciplinary area between AI and the natural sciences.\", \"Clarity: The authors have done a great job of explaining the necessary background in a concise way. I commend the authors for acknowledging the significance and challenges which go along with applying AI approaches to scientific domains, in which plausibility is not necessarily the same as scientific accuracy.\", \"Creativity: The ideas in this paper are creative: the authors have found a unique way to address ongoing concerns surrounding LLM hallucinations, particularly within a scientific context.\", \"Reproducibility: The paper is generally well-written and easy to read. For the most part, the paper was clear, and experiments seem reproducible based on the details given.\"], \"weaknesses\": \"I have two major concerns regarding gaps in the evaluation processes. These concerns make it difficult to confirm the scientific rigor of this study:\\n\\n1. The authors should evaluate the capabilities of OntoGen against existing scientific ontologies, like the Gene Ontology. This will allow the reader to assess whether OntoGen can really produce ontologies that capture scientifically significant patterns. The authors could accomplish this through a comparison of the metrics reported in Fig. 3 or through metrics such as concept coverage, structural similarity, or expert evaluation of key relationships.\\n\\n2. 
The experimental results in Figure 4 should also include the base LLMs, without any augmentation, as baselines. Specifically, the authors should report the performance metrics of the base LLMs on SACBench using the same criteria as OntoRAG. This will allow the reader to assess whether OntoRAG truly improves upon LLM accuracy within specialized, scientific domains. \\n\\n**Expansions:**\\n\\n1 (expansion): I do not think the authors sufficiently evaluated the capabilities of OntoGen before moving on to evaluate OntoRAG. Since OntoRAG on the SACBench dataset relies upon the output of OntoGen, it is necessary to ensure that OntoGen can produce ontologies with qualities consistent to established ones. Specifically, the authors should compare the output of OntoGen to an existing ontology. While there is no existing ontology for SAC, the authors acknowledge in Section 2.1 that other curated ontologies exist for other domains, like genetic or biomedical ones. For example, the authors could use OntoGen on a corpus of genetic literature and compare the generated ontology to the Gene Ontology. \\n\\n2 (expansion): The experimental results given in Figure 4 are missing a key baseline: the base LLMs without any RAG system. The authors should include this baseline as it is critical to assess one of the aims of the paper (\\\"enhancing the scientific accuracy of LLM outputs\\\"). Additionally, this baseline is particularly important in light of the results of Section 5, in which the OntoRAG system performed worse than the base LLMs in a majority (6/10) of cases (based on the metrics reported Appendix A.0.1). The results of Section 5 (Appendix A.0.1) call to question why the authors decided to move on with OntoRAG + OntoGen. It appears that OntoRAG with pre-existing ontologies has no improvement or limited improvement over the base LLMs. 
If OntoRAG with established ontologies offers no substantial improvement, then the authors should clarify why they believe that OntoRAG + OntoGen will offer improvements. Specifically, the authors may be able to justify the use of OntoRAG + OntoGen by including the performances of base LLMs on SACBench.\", \"questions\": \"1. Have the authors conducted any further investigations into the results of Section 5? Specifically, the authors speculate that the results are due to discrepancies in vocabulary; have they confirmed that?\\n\\n2. What is the verification process \\\"to ensure fidelity to the source text\\\" described in Section 4.1? Can the authors specifically describe the details of (or cite, if using another approach) this verification process?\\n\\n3. Furthermore, how are discrepancies handled when the above verification process encounters them? Can the authors give a specific example of how a discrepancy might be handled?\\n\\n4. What self-consistency techniques were used in Section 4.2? Can the authors give specific details or citations for these techniques?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors\\u2019 effort to revise and correct their definitions. While these definitions may not fully align with standard conventions and can be further polished, they are now at least reasonable.\\n\\nFrom my understanding in [1], this does not represent a naive retrieve-and-paste RAG setting. 
While it is valid to define RAG within the context of your specific approach, I do not believe that a general probabilistic definition for RAG is feasible.\\n\\nThat said, aside from revising the definitions, I have not seen the authors address the other concerns raised in my initial review.\"}", "{\"comment\": \"We again thank you sincerely for your comments and your effort in bringing our paper into better shape.\\nWe understand the concern with GraphRAG, and we would like to highlight that the rest of the comments were addressed. Please refer to the updated manuscript to see these and many other improvements we have implemented based on your and other reviewers' comments.\\nWe would like to invite you to consider these changes when submitting an updated evaluation. We will continue strengthening our work and again, we appreciate your comments and feedback.\"}", "{\"comment\": \"Thank you very much for your kind and insightful response.\\nWe understand the points you raised here, and we would like to address them more concretely in the manuscript with your feedback.\\nJust for clarification, the key idea we aim to demonstrate is that using ontologies can improve performance on tasks relevant to scientific discovery, hence we would like to make it very clear that the main results of our paper are those related to SACBench.\\nThe biomed benchmarks were conducted more as control experiments, to show that using OntoRAG has no detrimental effect on other tasks. Indeed, our results show that OntoRAG improves on or remains on par with the baselines.\\n\\nWe hope we can continue this productive discussion towards improving our paper, and we appreciate all your feedback so far.\\nWe hope the points that were addressed, as well as the updated manuscript, are considered in a new evaluation. Thank you very much again!\"}", "{\"comment\": \"We sincerely thank the reviewer for their thorough and insightful comments. 
We have carefully considered your comments and have made appropriate changes as we describe below, along with answers to your questions and clarifications.\\n\\n# Questions:\\n\\n## Can authors add additional benchmarks that show improvement in performance?\\n\\nIndeed, the original results we show were not a clear improvement over the baselines. Since the submission of the original work, we have changed prompts and slightly optimized the pipeline. Our results for the same benchmarks are now as shown in the following results table:\\n\\n| Method | medmcqa | medqa | mmlumed |\\n|--------|------------------|----------------|------------------|\\n| zeroshot | 62.06 | 67.16 | 80.06 |\\n| cot | 60.91 | 69.99 | 76.70 |\\n| ontorag-simple | **64.12** | 68.34 | 79.26 |\\n| ontorag-tm | 61.80 | 68.11 | 80.01 |\\n| ontorag-hypo_ans | **64.04** | 67.64 | 79.96 |\\n| ontorag-hypo_ans-tm | 62.13 | **69.36** | **80.65** |\\n\\nAs shown in the new results, OntoRAG typically performs on par with or better than the zero-shot LLM, or even a CoT baseline. However, please note that these benchmark experiments were included more as control experiments, to show that the use of OntoRAG does not degrade performance. The main results, however, are those on SACBench, which we also augment and expand in the updated version.\\n\\n\\n## How does OntoRAG handle conflicting information from different retrieved sources within a single ontology or between multiple ontologies?\\n\\nThis is indeed a very good question, and we think it would be worth considering in a follow-up study. Indeed, conflicting information can be retrieved from different ontologies. However, we argue that retrieved information from an existing ontology should be self-consistent (in the sense of [1]), preventing such situations from arising in the first place. 
\\nIn the case of ontologies generated with OntoGen, these situations are prevented by using a series of verification and control steps, where responses are required to be self-consistent and to match the retrieved literature. We have elaborated on and explained more of this in the manuscript thanks to this and other reviewers' comments.\\n\\n\\n## How does the quality of automatically generated ontologies compare to expert-curated ones in established fields?\\n\\nThis is a very good question. We believe it would indeed be very interesting to, e.g., generate a new gene ontology from collected papers in the field. However, we note that the extensive amount of literature that has been produced for this field, and thus the massive number of associated terms, makes it a much more challenging undertaking compared with the field of SAC, and thus a comparison against the Gene Ontology is rather unfeasible.\\nThe question highlights a very good point, however, and as such we have included a new analysis section in the Appendix, where we display slices of the generated SAC ontologies (by different LLMs) and include assessments of their quality by expert chemists in the field.\\nWe hope this update will address this concern, and we're very much looking forward to your comments on this.\\n\\n\\n## Sensitivity of OntoRAG to the specific choices made in the ontology retrieval and fusion steps?\\n\\nThis is a great question that we tried to directly address in the form of ablations in our original submission; however, the presentation was not very clear. With the updated results (Table above) we can also respond to this question by looking at what each variation of OntoRAG is.\\nIn particular, we ablate the fusion step using two variants of fusion, namely _simple_ and _tm_, which stands for \\\"translation module\\\".\\nThe method _simple_ consists of providing all the ontological context as a simple string in JSON format containing all information. 
The _tm_ variant is an intermediate module that summarizes the raw ontological context, distilling it down to the information relevant to the query.\\nAlthough in this aspect the results are not very conclusive, there seems to be a net positive effect of using _tm_ on these benchmarks. In fact, the clearest trend that we find here (and also in the SACBench results, which we are adding to the updated manuscript) is that OntoRAG+HyQ benefits the most from _tm_, while OntoRAG-simple works better without _tm_.\\n\\n\\n## Extra\\n\\nRegarding your additional concern about increased overhead and computational cost, our method is indeed more costly than simply using ZeroShot inference. However, it is important to note that the ontology is only generated once, and its goal is to condense much of the state of a field, including concepts and relationships. In that sense, the ontology remains useful for other tasks, without incurring this overhead on every use.\\n\\nWe hope our responses have addressed your concerns; we remain open to further discussion if needed and appreciate any further feedback.\\n\\n[1] ArXiv, abs/2203.11171\"}", "{\"summary\": \"This paper introduces an ontology-based retrieval-augmented generation (RAG) pipeline designed to enhance scientific discovery by integrating ontology-based knowledge with language models. Additionally, the paper presents OntoGen, an automated ontology-generation method for fields where no ontology exists, extending OntoRAG's applicability to emerging domains. The proposed method is mainly evaluated on biomedical QA and catalyst synthesis benchmarks.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"This work is well-motivated by the need in many scientific domains for expert-curated knowledge that goes beyond document-level retrieval.
It seeks to build a new RAG pipeline by integrating ontologies, which are widely adopted knowledge bases for specific domains.\", \"The pipeline addresses cases where ontologies are unavailable, proposing an automated approach to ontology construction from documents.\"], \"weaknesses\": [\"Definition 2.1 for ontology requires a significant revision as it is unclear and contains inaccuracies:\", \"An ontology can be described as a set of logical axioms that define relationships among entities (concepts, properties, and instances) in the ontology. This approach avoids separating axioms from relationships, as relationships should not be limited to triples alone.\", \"It seems that the relationships set $\\\\mathcal{R}$ refers to object properties, while the properties set $\\\\mathcal{P}$ appears to denote data properties.\", \"Additionally, the notation $\\\\forall i \\\\in \\\\mathcal{I} \\\\exists c \\\\mid c \\\\in \\\\mathcal{C}$ needs clearer explanation. If the intent is to express that an instance $x$ belongs to a class $C$, it would be more accurate to write $C(x)$ or $x: C$.\", \"Definition of RAG in Equation (1) is inaccurate: As stated, this definition implies that each retrieved document influences the generation probability and is weighted by its relevance. However, in a standard (vanilla) RAG setting, this is not the case; only a subset of retrieved documents typically impacts the generation process, without automatic weighting by relevance.\", \"The OntoRAG definition needs a significant revision since it builds on the earlier definitions of ontology and RAG, which contain inaccuracies.\", \"The OntoRAG methodology section lacks sufficient detail for reproduction. To enhance clarity, it would be helpful to include step-by-step explanations of the methodology components and provide running examples.\", \"The main evaluation in Table 1 primarily examines variations of OntoRAG and one Chain-of-Thought (CoT) baseline.
However, it overlooks important comparisons with existing GraphRAG approaches, which similarly aim to incorporate graphs and knowledge bases within the RAG framework.\"], \"questions\": [\"**Suggestion/Typo**:\", \"In Table 1, the word \\u201ctypo\\u201d appears to be a typo itself and may need correction.\", \"I recommend a careful review of formal definitions throughout the paper. For established concepts like \\\"ontology,\\\" it would be beneficial to reference widely accepted definitions, such as those based on description logic. For processes like RAG, ensure the level of abstraction aligns with practical implementations. The current definition assumes independent and automatic weighting of each retrieved document, which is not universally applicable in RAG and oversimplifies the underlying mechanics.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces an ontology-based retrieval-augmented generation (RAG) pipeline for integrating ontology-based knowledge with LLMs. Additionally, the paper presents OntoGen, an automated ontology-generation method for fields where ontologies don't exist.\\n\\nThe reviewers recognized several strengths, including:\\n\\n- Problem being well motivated: Using ontology-driven RAG is a nice solution to address LLM inaccuracies in scientific discovery\\n- Novelty: OntoRAG innovatively integrates ontologies into the RAG framework to ground outputs in established relationships.\\n- Benchmark SACBench could be a valuable contribution\\n\\nHowever, the reviewers identified several major weaknesses in the paper, particularly in presentation, methodology, evaluation, and clarity. Reviewer ee63 noted missing results analyses and ablation studies, while htBH flagged inaccuracies in key definitions and insufficient detail for reproducing the approach.
The risk of hallucinations in LLM-generated ontologies was a shared concern (ee63 and 2Kkq), and 4Tdh suggested validation against existing ontologies.\\n\\nThe evaluation was also criticized as limited and unconvincing. htBH and 4Tdh highlighted the absence of key baselines, including comparisons with graph-based methods and non-RAG LLMs. 2Kkq and 4Tdh questioned the empirical results and lack of evaluation breadth.\\n\\nThe authors tried to address some of these points during the rebuttal, but the reviewers were not entirely convinced.\", \"additional_comments_on_reviewer_discussion\": [\"Reviewer 4Tdh raised concerns about the lack of baseline evaluations and the insufficient validation of generated ontologies. The authors addressed the first concern but did not sufficiently resolve the latter.\", \"Reviewer ee63 emphasized the risk of hallucinations in generated ontologies and called for human validation. The authors clarified their verification process but did not offer a comprehensive resolution.\", \"Reviewer htBH criticized the paper's definitions and lack of comparisons with GraphRAG. While the authors revised definitions and added explanations, they argued that GraphRAG comparisons were out of scope.\", \"Reviewer 2Kkq noted modest improvements in performance and questioned the generalizability of OntoRAG. Despite updates, concerns about limited evaluation breadth persisted.\", \"Overall, the reviewers acknowledged the authors\\u2019 efforts to address concerns but remained skeptical about the scientific rigor of the approach.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"Dear Authors,\\n\\nI will raise my score to a 6 because you've addressed my second major concern (weakness \\\\#2). Thank you for that.\\n\\nI do not feel the first concern (weakness \\\\#1), regarding the evaluation of OntoRAG, is sufficiently addressed.
I would like to leave additional explanations and feedback in case the authors wish to further improve their work.\\n\\nThe information presented in Fig. 3 is not enough to evaluate the quality of an ontology, especially one that is generated by an LLM and not an expert.\\n\\nI also see that Reviewers **2Kkq** and **ee63** express similar concerns to the first weakness I listed. Regarding what Reviewer **ee63** said...:\\n\\n \\\"Because the LLM-generated ontologies cannot be utilized to enhance the LLMs directly without human curation, since hallucinations still exist when generating the ontologies with LLMs, a manual validator is needed.\\\"\\n\\nI believe another suitable alternative to a human-in-the-loop, which Reviewer **ee63** had suggested, would be to compare an ontology generated by OntoGen against a gold-standard ontology like the GO, as I suggested. While this would not be a direct evaluation of the SAC ontology, it would be convincing evidence toward the idea that OntoGen can produce high-quality ontologies.\\n\\nAdditionally, regarding the biomedical benchmarks: \\\"using OntoRAG doesn't hurt\\\" is not a very convincing reason to go forward and apply OntoRAG to a domain in which there is a greater level of uncertainty (i.e., lack of information and no gold-standard ontology). I truly believe that this section confuses and derails the overall story being conveyed in this paper.\\n\\nOverall, however, I think the contents of the study are interesting, and I thank the authors for their efforts during the rebuttal.\"}
We have made substantial revisions to address your concerns and significantly improve our manuscript, submitted with this rebuttal:\", \"**Definitions and Notations:** We have revised our definitions of ontology, RAG, and OntoRAG to improve clarity and accuracy (thank you reviewer **htBH** for the feedback on this). See Definition 1 and Equations 1, 2 and 3.\", \"**Improved inclusion of baselines:** As suggested by the reviewers, we have included ZeroShot LLMs as a new baseline, which we overlooked in the original submission.\", \"**Biomedical benchmarks:** We have updated the evaluation results on the 3 biomedical benchmarks used in this paper. In the new manuscript, we show how OntoRAG methods perform on par with or better than baselines (ZeroShot and CoT) on average (Table 1, Appendix A.1.1), and excel at tasks for which the provided ontologies are more relevant (in our experiments: genetics, anatomy, microbiology; see Table 2, Appendix A.2).\", \"**Additional analysis:** Furthermore, we perform an additional analysis where we show that ontological relevance (as measured by the average number of concepts retrieved by the retrieval operator) correlates strongly with improved performance on the biomedical benchmarks. See Appendix A.1.2, Table 2.\", \"**Quality assessment of Ontologies:** Some reviewers raised concerns regarding the reliability of the ontologies generated by OntoGen. We have added a new section in the Appendix (A.4.6) where we show some parts of the produced SAC ontologies with Llama-3.1-70B and Claude-3.5-Sonnet, along with an analysis by a domain expert (see Appendix A.4.5).\", \"**Ablation on ontology source:** To further this analysis, and on the argument that the best way to evaluate the quality of an ontology is through a downstream application, we report the results of running multiple methods (OntoRAG and baselines) using SAC ontologies generated by two different models (Llama-3.1-70B and Claude-3.5-Sonnet); see Appendix A.7, Tables 4 to 7.
These results show that, overall, the largest effect is on the \\\"metal\\\" and \\\"support\\\" metrics, with a difference of 6-10 percentage points, the Claude-generated ontology achieving the higher score.\", \"**Methodology:** We have greatly upgraded our work by adding pseudo-code (Algorithms 1, 2, 3, 4 in the Appendix), along with code snippets (Figure 5, Appendix), to improve the clarity and reproducibility of our work.\", \"**Code release:** We have made our code publicly available for further transparency and reproducibility. It is available at https://figshare.com/s/4f898ef092ae5898c1b7 and we have updated our abstract accordingly.\", \"**Ablation studies:** We have improved and clarified our ablation studies on the fusion operators. Table 1, A.1.1 shows the downstream effect of using the translation module (TM) on OntoRAG, as evaluated by the biomedical benchmarks.\", \"**Further clarifications:** We have elaborated and clarified in the manuscript the details of the verification process of OntoGen, and the self-consistency techniques used there. See Appendix A.4.1, A.4.2.\", \"**SACBench clarifications:** We have further improved the explanations of the metrics, and added more details on the generation and curation process (Appendix A.5). This is one of our key contributions, and we have released the code as part of the submission.\", \"**SACBench results:** Additionally, we add more results and analyses on SACBench, which help us understand how and where OntoRAG works better than baselines. In particular, see Figures 7 and 8 of the appendix.\", \"These extensive revisions address most of the concerns raised by the reviewers, and significantly strengthen our work. We believe these improvements, along with the novelty of our contributions and their potential impact on scientific discovery, make a compelling case for acceptance.
We look forward to your updated evaluation and remain open to any further feedback to ensure our paper meets ICLR's high standards.\"]}", "{\"comment\": \"I appreciate the authors' detailed responses to the concerns raised in my initial review. While the authors have made efforts to address the concerns, some of the issues are still there:\\n\\nThe new benchmark results, while showing some improvements over the initial submission, still demonstrate only modest gains over baselines. The improvements are incremental rather than substantial, and in some cases the method performs similarly to baseline approaches. Moreover, this is still an assessment on the same benchmarks.\\n\\nThe addition of expert assessment of generated SAC ontologies in the appendix is an improvement.\\n\\nThe ablation studies clarify some aspects of the system's sensitivity to different components, but the results remain somewhat inconclusive regarding the benefits of different fusion approaches.\\n\\nWhile these updates strengthen certain aspects of the paper, the core concerns about limited evaluation breadth and modest performance improvements remain.\"}", "{\"summary\": \"This paper introduces OntoRAG, a novel approach that enhances Retrieval Augmented Generation (RAG) by incorporating ontological knowledge to improve the accuracy and scientific grounding of large language models (LLMs). The authors also present OntoGen, a tool for automatic ontology generation to extend OntoRAG's utility to fields without pre-existing ontologies.
The key contributions are:\\n \\u2022 OntoRAG: An extension of RAG that retrieves and integrates relevant ontological information to improve reasoning and reduce hallucinations in large language models (LLMs).\\n \\u2022 OntoGen: An LLM-based pipeline for automatically constructing domain-specific ontologies from scientific papers.\\n \\u2022 SACBench: A benchmark for evaluating the synthesis of single-atom catalysts (SACs), used to test the OntoRAG approach in an emerging scientific domain.\\nThe authors evaluate OntoRAG on standard biomedical benchmarks and the novel Single-Atom Catalysis benchmark, SACBench. Results show improvements over baseline RAG in some domains and a reduction of hallucinations in LLMs, particularly for the SAC synthesis task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2022 Novel approach: Combining ontologies with RAG is an innovative idea to enhance LLM performance in specialized scientific domains.\\n\\u2022 Automatic ontology generation: OntoGen addresses a key bottleneck by automating the creation of ontologies for emerging fields.\\n\\u2022 Application in Emerging Domains: The case study in Single-Atom Catalysis demonstrates the potential of the approach to aid scientific progress in cutting-edge fields where ontologies are not yet fully established.\\n\\u2022 Reduction of Hallucinations: OntoRAG addresses a critical problem in LLMs - factual inaccuracies - by grounding the outputs in established scientific relationships and concepts.\", \"weaknesses\": \"\\u2022 Weak evaluation: The authors test their approach on established benchmarks without significant improvement, and on a novel task in an emerging field that shows promise.
Nevertheless, it is only one benchmark in a specific field that shows some results.\\n\\u2022 Limited Improvement in Aggregate Performance: Despite the benefits of ontology integration, the paper notes that the aggregate improvement across benchmarks is modest, suggesting that the effectiveness of OntoRAG depends on the specific domain.\\n\\u2022 Ontology quality assessment: The paper lacks a thorough evaluation of the quality of automatically generated ontologies beyond downstream task performance.\\n\\u2022 Computational Overhead: The process of ontology generation and integration adds complexity and computational cost to the pipeline, which may limit its practical use in certain scenarios.\\n\\u2022 Expert Dependency: While OntoGen attempts to automate ontology creation, the variability between LLMs and the need for manual curation still imply a dependence on human expertise for high-quality outputs.\", \"questions\": \"\\u2022 Can authors add additional benchmarks that show improvement in performance?\\n\\u2022 How does OntoRAG handle conflicting information from different retrieved sources within a single ontology or between multiple ontologies?\\n\\u2022 How does the quality of automatically generated ontologies compare to expert-curated ones in established fields?\\n\\u2022 How sensitive is the performance of OntoRAG to the specific choices made in the ontology retrieval and fusion steps?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents OntoRAG, which leverages LLM-generated ontologies as context to enhance RAG by retrieving taxonomical knowledge from context for accelerating scientific discovery. The results on the SACBench benchmark demonstrate that OntoRAG outperforms the CoT-based RAG on accuracy, completeness, and order.
Additionally, the quality of the ontologies generated by LLMs is evaluated by the downstream task on the biomedical QA benchmark.\\n\\nThe paper is well-written and organized, but the pipeline of ontology generation (OntoGen) and RAG with ontologies (OntoRAG) is not a novel contribution, as existing methods have already investigated it. \\n\\nMy main concern is that the ontologies generated by LLMs cannot be utilized to enhance the LLMs directly without human curation, since hallucinations remain when generating the ontologies with LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. The paper is well-written and organized, and the methodology of OntoRAG is well-designed and demonstrated.\\n\\nS2. The OntoGen pipeline is created to generate the ontologies based on multiple calls to long-context LLMs.\\n\\nS3. The experiments on the SACBench benchmark and biomedical QA benchmark are conducted to evaluate the performance of OntoRAG and the quality of LLM-generated ontologies.\", \"weaknesses\": \"W1. The presented OntoGen and OntoRAG pipeline is not novel, as existing works have already investigated it, but they are missing from the related work. (Details in Q1)\\n\\nW2. The ontologies generated by LLMs are utilized directly as context for RAG without human validation. (Details in Q2)\\n\\nW3. The source code and claimed SACBench benchmark dataset are not provided for reproducibility.\\n\\nW4. The readability of this paper needs to be improved, as some results analyses and the ablation study are missing. (Details in Q3)\\n\\nW5. Some typos need to be fixed and avoided. For example, the parentheses after \\u201caxioms\\u201d should be removed \\u201caxioms()-> axioms\\u201d in line 289-290, the comma after \\u201cin\\u201d should be removed \\u201cin.
order to-> in order to\\u201d in line 408, the \\u201cACcuracy->Accuracy\\u201d in line 916, etc.\", \"questions\": \"Q1: How does your proposed OntoRAG and OntoGen pipeline differ from the existing DRAGON-AI (https://arxiv.org/abs/2312.10904) and LLMs4OL (https://link.springer.com/chapter/10.1007/978-3-031-47240-4_22)?\", \"q2\": \"Can you provide the details of your verification process and manual effort for LLM-generated ontologies that you mentioned in Lines 340-341 and 363-364?\\nBecause the LLM-generated ontologies cannot be utilized to enhance the LLMs directly without human curation, since hallucinations still exist when generating the ontologies with LLMs, a manual validator is needed.\", \"q3\": \"Can you provide a detailed analysis of the results that are reported in Table 2 and Table 3 and highlight how much OntoGen and OntoRAG contribute to the final results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your comments. We're happy the definitions are more acceptable now. If there's any other specific edit you think is pertinent, we would be happy to further review our definitions.\\n\\n# Definition of RAG and OntoRAG.\\nFor the definition of RAG, and thus OntoRAG, we have opted for a probabilistic definition (following [1]), as a given answer is never deterministically generated, but is instead subject to a decoding process. However, we think the definition could be further adapted to include a hint at the sampling process, something along the lines of:\\n\\n---\\n$y \\\\sim p(y|x) = p_\\\\theta(y| F(x, R(x)))$\\n---\\n\\nThis definition of course doesn't aim to generally describe RAG, but we believe it fits the needs of our manuscript.
\\nWe are open to any further suggestions and discussion.\\n\\n\\nRegarding the other comments/questions:\\n\\n---\\n\\n# Details for reproduction\\n\\nTo improve the clarity and reproducibility of our work, we are updating the paper to include pseudo-code for the proposed OntoRAG methodology, as well as step-by-step explanations with running examples in the appendix. This way we ensure readers understand what the ontological context is, how it is being used in the pipeline, and how it can help performance, as we show.\\nIn addition, we are releasing the code associated with our paper:\\n- OntoRAG\\n- the code for running the SAC-specific experiments (in another repo)\\n\\nHere is the code: https://figshare.com/s/4f898ef092ae5898c1b7\\n\\nAnd we have added additional pseudo-code snippets to the Appendix, illustrating:\\n- the process of retrieval of ontological context\\n- a description of the flow of information from query, to retrieval, fusion, and finally, response generation.\\n\\nWe will update this in the manuscript.\\n\\n\\n---\\n\\n# Comparisons with other baselines\\n\\nWe understand the reviewer's concern regarding comparisons with existing GraphRAG approaches.\\nFor a more complete comparison against other baselines, we have now included a comparison against the \\\"raw\\\" LLM (ZeroShot), which is a common baseline for this type of study.\\nHowever, we need to note that GraphRAG is a method that leverages Knowledge Graphs (KGs) for RAG with LLMs. While this is similar in that both approaches aim to incorporate knowledge bases in the RAG framework, our approach relies on ontologies, which are representations of conceptualizations of knowledge, rather than graphs of semantic triples in the form of KGs.
We thus believe that including such a comparison is out of the scope of this work.\\n\\n\\nWe hope these explanations clarify our approach, ideas, and experimental design, and we remain open to any further questions or feedback you might have!\"}", "{\"comment\": \"We sincerely thank the reviewer for their thorough and insightful comments. We have carefully considered your comments and revised the definitions in our manuscript. Below we address the questions you have posed, hoping to improve the quality and clarity of our work:\\n\\n# Q1: OntoRAG and OntoGen vs existing DRAGON-AI and LLMs4OL?\\n\\nThank you very much for highlighting these works; indeed, we had missed referencing them, and we would like to clarify the differences between our work and these others.\\n\\nOn the one hand, LLMs4OL focuses on the evaluation of LLMs rather than the generation of ontologies itself. In this work, an LLM is prompted to assess whether a term is a subclass of another, without additional context. DRAGON-AI, on the other hand, tackles the task of inserting new terms into an existing ontology, so it requires an existing ontology. Our work, particularly OntoGen, takes a set of documents as input and generates an ontology by selecting and interconnecting terms.\\n\\nWe address this in our updated manuscript by directly mentioning what they do and how our work is different, in the Introduction section. In particular, we add the following lines:\\n\\n\\\"Tools have been proposed recently to accelerate this process by inserting new terms into an already existing ontology Toro et al. (2023); Funk et al. (2023), or automating tasks such as term typing, taxonomy discovery, etc, under the frame of Ontology Learning Ciatto et al. (2024); Toro et al. (2023); Babaei Giglou et al. (2023). However, none of these works has attempted to generate full ontologies.
To address this we propose OntoGen, an LLM-based method for automatic end-to-end generation of ontologies.\\\"\\n\\nWe hope this helps clarify the novelty issue you have raised.\\n\\n\\n# Q2: Details of verification process and manual effort for LLM-generated ontologies?\\n\\nThank you very much for your comment; indeed, some of these details had been left unspecified.\\nFor both the verification process and the comment on manual effort, we have added new sections to the appendix that elaborate further on the role and details of these processes.\\nTo clarify here, the verification process consists of a string match that checks whether each of the terms in the current list indeed exists in the documents that were provided to the LLM as context.\\n\\nRegarding the comment on manual effort, we clarify that this is only done with the \\\"seed\\\" terms used to initialize the taxonomy. This seed list has only a few terms, as it is automatically extracted and composed of only the most common terms extracted by the LLM; in the case of the SAC ontology this was around 7 terms, namely _Characterization, Physical properties, Synthesis methods, Reaction mechanisms, Structure, Applications, Reactions_ and _Support_. The \\\"manual curation\\\" we performed in this step involved selecting the following additional categories from the pool of generated categories, so as to make the ontology more aligned with our chemistry knowledge: _Catalytic performance, Preparation methods, Theory and modelling_, and _Materials_.\\n\\nThat is, the manual effort expected here is to exclude one or more categories from the generated list or to include additional categories if needed. Notice that this does not involve manually refining the whole taxonomy, but just the set of terms from the initial seed.\\n\\n\\nWe hope this can help clarify any concerns regarding human involvement.
Indeed, no extensive human curation is required at any point, only briefly at the early stages for more complete results. Regarding hallucinations, countermeasures have been taken throughout the generation process so as to minimize their impact. Just to recap, a verification step is performed after vocabulary extraction, while self-consistency is applied both during category generation and taxonomy extraction.\\n\\n# Q3: Analysis of the results reported in Table 2 and Table 3, and how much OntoGen and OntoRAG contribute to the final results?\\n\\nAs the reviewer noted, our manuscript falls short of analyzing these two tables and the effect of these two components on the final results. Thank you for pointing this out. In our updated manuscript we conducted additional experiments to assess the effect of variations of OntoRAG (Table 2). In particular, we also add the final results on these benchmarks to assess the effect of the fusion operator (F) with two variations: concat and translation-module. We append the table in a follow-up comment.\\n\\nFor Table 3, we again conducted additional experiments on a fixed version of OntoRAG, but using different ontologies (generated with different LLMs in OntoGen), to assess the effect of this part. In addition, we have included more of the SACBench metrics for the sake of completeness, and we further analyze this in the manuscript.\\n\\n\\nWe thank the reviewer again for their insightful comments, and look forward to a productive discussion.\"}", "{\"title\": \"Revision of definitions and notations used throughout the paper.\", \"comment\": \"We sincerely thank the reviewer for their thorough and insightful comments. We have carefully considered your comments and revised the definitions in our manuscript accordingly.
Below we present the revised definitions, and we hope we can iterate on this and give them an ideal shape for our work.\\n\\n\\n### Ontology\\n\\nFor the definition of ontology, we now avoid the distinction between axioms and relationships. This aligns more with the implementation of ontology used in our work. In addition, we have improved the notation by using $i:C, C \\in \\mathcal{C}$ to denote that instance $i$ belongs to class $C$. Finally, we specify which are object properties (when defining $\\mathcal{R}$) and which are data properties (when defining $\\mathcal{P}$).\\n\\nPlease find the full updated definition here:\\n\\n---\\n---\\n\\nAn ontology is a tuple $ \\\\{ \\\\mathcal{C}, \\\\mathcal{R}, \\\\mathcal{I}, \\\\mathcal{P} \\\\} $ where:\\n\\n- $\\\\mathcal{C}$ is a set of classes $\\\\{ C_{1}, C_{2},...,C_{n} \\\\}$ present in the ontology.\\n- $\\\\mathcal{R}$ is a set of relationships present in the ontology.\\n\\n $\\\\mathcal{R} = \\\\{ (C_{i}, r_{s}, C_{j}) | r_{s} \\\\in \\\\mathcal{R}_s \\\\}$, where $\\\\mathcal{R}_s$ is the set of all possible relations (object properties).\\n- $\\\\mathcal{I}$ is the set of all instances of classes present in the ontology:\\n\\n $\\\\mathcal{I} = \\\\{ i_{1},i_{2},...,i_{m} \\\\}$; $i:C, C \\\\in \\\\mathcal{C}$.\\n- $\\\\mathcal{P}$ is the set of all possible properties in an ontology.\\n\\n $\\\\mathcal{P} = \\\\{ p_{1}, p_{2},...,p_{l} \\\\} $ and $p:\\\\mathcal{I} \\\\xrightarrow{}\\\\mathcal{V}$ or $p:\\\\mathcal{C} \\\\xrightarrow{}\\\\mathcal{V}$; where $\\\\mathcal{V}$ is the set of all possible values for a property (data properties).\\n\\n---\\n---\\n\\nRegarding our definition of RAG, while we took it from reference [1], it is indeed inaccurate for our implementation and we have modified it as follows to correct for this. It is now formulated in terms of retrieval function $R$ and fusion operator $F$.
This way we directly address the fact that we only take a limited number ($k$) of retrieved documents and perform no weighting on them. The probability of generating a response $y$ is then directly conditioned on the single context $F(x, R(x))$.\\n\\nPlease find our updated definition below:\\n\\n---\\n---\\n\\n\\n$p(y|x) = p_\\\\theta (y|F(x, R(x)))$. (1)\\n\\nwith \\n\\n$R(x) = \\arg \\max_{z\\in Z} ^k \\{r(z, x)\\}$. (2)\\n\\nwhere:\\n\\n- $p(y|x)$ is the probability of generating output $y$ given input $x$.\\n- $R(x)$ hence defines a set of the $k$ most relevant documents to $x$ under relevance function $r$.\\n- $r$ is a _document relevance_ function, such that $r(z, x)$ quantifies the relevance of document $z$ to query $x$.\\n- $F$ is a fusion operator.\\n- $p_\\theta (y|w)$ is the probability of generating $y$ given context $w$ for a language model parameterized by $\\theta$.\\n\\n---\\n---\\n\\nThe definition for OntoRAG thus changes as follows:\\n\\n---\\n---\\n\\n$p(y|x) = p_\\theta (y|F(x, R(x), R_O(x)) )$. (3)\\n\\nwith\\n\\n$R_O(x) = \\{ O(c): c \\in C(x) \\}$. (4)\\n\\nWhere Eq. 1 is modified in Eq. 3 to include:\\n\\n\\n- $R_O(x)$, the ontological context relevant to query $x$, which depends on:\\n- $O(c)$, some ontological context retriever, and\\n- $C(x)$, a set of concepts found in text $x$.\\n\\n---\\n---\\n\\nWe will address the other points you have raised in another response. Thank you very much again for your feedback; we would greatly appreciate your thoughts on these revised definitions, to improve our paper.\\n\\n\\n\\n### References\\n[1] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Kuttler, H., Lewis, M., Yih, W., Rockt\\u00e4schel, T., Riedel, S., & Kiela, D. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. ArXiv, abs/2005.11401.\"}
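To make Eqs. (1)-(4) above more concrete, here is a minimal, hypothetical Python sketch of the OntoRAG flow: top-$k$ retrieval $R(x)$ under a relevance function $r$, ontological context $R_O(x)$ for concepts found in the query, and a concatenation-style fusion $F$ corresponding to the _simple_ variant described in the rebuttal. The toy relevance function, corpus, and ontology entries are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of Eqs. (1)-(4): R(x), R_O(x), and a "simple" fusion F.
# The relevance function, corpus, and ontology below are toy stand-ins,
# not the actual OntoRAG code.

def relevance(doc, query):
    # r(z, x): toy relevance = number of shared lowercase words
    return len(set(doc.lower().split()) & set(query.lower().split()))

def retrieve(query, corpus, k=2):
    # Eq. (2): R(x) = the k documents maximizing r(z, x)
    return sorted(corpus, key=lambda z: relevance(z, query), reverse=True)[:k]

def ontological_context(query, ontology):
    # Eq. (4): R_O(x) = {O(c) : c in C(x)}; C(x) here is naive substring matching
    return [ctx for concept, ctx in ontology.items() if concept in query.lower()]

def fuse(query, docs, onto):
    # F: the "simple" fusion variant, concatenating query, documents, and
    # ontological context into a single prompt for the language model p_theta
    parts = ["QUERY: " + query]
    parts += ["DOC: " + d for d in docs]
    parts += ["ONTO: " + o for o in onto]
    return "\n".join(parts)

corpus = [
    "Pt single atom catalysts anchored on a ceria support",
    "cobalt oxide nanoparticles for oxygen evolution",
    "synthesis of single atom Pt by wet impregnation",
]
ontology = {
    "single atom": "SingleAtomCatalyst is-a Catalyst; has-part MetalAtom, Support",
}
query = "single atom Pt catalyst synthesis"
prompt = fuse(query, retrieve(query, corpus), ontological_context(query, ontology))
print(prompt)
```

In this sketch, generation $p_\theta(y \mid F(x, R(x), R_O(x)))$ would simply mean passing `prompt` to an LLM; the translation-module (_tm_) variant would instead summarize the raw ontological context before fusion.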
While it is true that GraphRAG typically targets knowledge graphs (KGs) and ontologies often capture richer semantics than KGs, ontologies can still be represented in a KG format through standards such as RDF and RDFS. Numerous ontology projection methods are available to facilitate this conversion. **Even in your definition**, you use triples to define relationships in ontologies; these triples essentially form KGs. From a broader perspective, the \\\"Graph\\\" in \\\"GraphRAG\\\" refers to structured data that can potentially be represented as graphs, rather than being limited to KGs alone.\"}" ] }
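The updated RAG definition above leaves the relevance function $r$ and the fusion operator $F$ abstract. As a minimal illustration (our sketch, not the authors' implementation), the top-$k$ retrieval of Eq. 2 and a simple concatenation-style fusion could look as follows, assuming queries and documents are already embedded as vectors and $r$ is cosine similarity:

```python
import numpy as np

def retrieve_top_k(query_emb, doc_embs, k=2):
    """R(x) from Eq. 2: indices of the k documents maximizing r(z, x).

    Here r(z, x) is assumed to be cosine similarity over precomputed
    embeddings; the definition itself leaves r abstract."""
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q                  # r(z, x) for every document z
    return np.argsort(-scores)[:k]  # the k best-scoring documents

def fuse(query_text, retrieved_texts):
    """F(x, R(x)): a toy fusion operator that concatenates the query
    with the retrieved documents to form the generation context w."""
    return query_text + "\n\n" + "\n\n".join(retrieved_texts)

docs = ["Ontologies define classes and relations.",
        "Transformers use attention.",
        "RDF represents triples."]
doc_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.6]])
idx = retrieve_top_k(np.array([1.0, 0.2]), doc_embs, k=2)
context = fuse("What is an ontology?", [docs[i] for i in idx])
```

An OntoRAG-style variant (Eq. 3) would simply pass an additional argument of ontological context, `R_O(x)`, into `fuse`.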
Daq6Pw3TjN
Wolf2Pack: The AutoFusion Framework for Dynamic Parameter Fusion
[ "Bowen Tian", "Songning Lai", "Yutao Yue" ]
In the rapidly evolving field of deep learning, specialized models have driven significant advancements in tasks such as computer vision and natural language processing. However, this specialization leads to a fragmented ecosystem where models lack the adaptability for broader applications. To overcome this, we introduce AutoFusion, an innovative framework that integrates distinct models into a unified architecture for multi-task learning without pre-trained checkpoints. Using an unsupervised, end-to-end approach, AutoFusion dynamically blends model weights at each layer, optimizing the combination through a loss-minimization process that does not require labeled data. We validate AutoFusion’s effectiveness through experiments on commonly used benchmark datasets, demonstrating superior performance over established methods like Weight Interpolation, Git Re-Basin, and ZipIt. Our framework offers a scalable and flexible solution for model integration, positioning it as a powerful tool for future research and practical applications.
[ "Parameter Fusion", "Multi-task Model Fusion", "Computer Vision" ]
Reject
https://openreview.net/pdf?id=Daq6Pw3TjN
https://openreview.net/forum?id=Daq6Pw3TjN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x14cFsi0r5", "veUWqic0E3", "oqakz0cvaT", "n0WUi248rc", "lTNAFAReP5", "lGPeCTc3lW", "cWzWK22YER", "bkShG5uHaj", "ZeIS1O5V1G", "ZJi6sW4ttT", "XCB1Zpxuxz", "VRVVDdooBr", "SJOJH2reFh", "Qp943JN1Zp", "QY3mOfxbsU", "PEkAT9od7j", "N7jbAwfNgb", "Lde3bra4eZ", "HjcxuIbbvO", "GO3bZf55o0", "AH8Fhh58O3", "9hSHF9XVhv", "9I1KNHuqbO", "7Z1HDiF2HC", "4Z3j72e5Ux" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732281543479, 1730024506496, 1732645277697, 1732036459355, 1732037002887, 1732549867581, 1732036797880, 1732516524690, 1730584937227, 1730710319047, 1734521168204, 1732036855815, 1732674293942, 1737523722873, 1732281606204, 1732549912792, 1732506630636, 1732673417426, 1732537488041, 1732512632170, 1732281583496, 1732037129930, 1732681772689, 1732671902101, 1732037182463 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Reviewer_aNrj" ], [ "ICLR.cc/2025/Conference/Submission5738/Reviewer_ACFL" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Reviewer_Tk42" ], [ "ICLR.cc/2025/Conference/Submission5738/Reviewer_ACFL" ], [ "ICLR.cc/2025/Conference/Submission5738/Reviewer_Tk42" ], [ "ICLR.cc/2025/Conference/Submission5738/Area_Chair_s7Mp" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5738/Reviewer_Tk42" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Reviewer_Tk42" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ], [ "ICLR.cc/2025/Conference/Submission5738/Reviewer_aNrj" ], [ "ICLR.cc/2025/Conference/Submission5738/Authors" ] ], "structured_content_str": [ "{\"title\": \"We are looking forward to your feedback!\", \"comment\": \"Dear Reviewer,\\n\\nThank you so much for your time and efforts in reviewing our paper. We have addressed your comments in detail and are happy to discuss more if there are any additional concerns. We are looking forward to your feedback and would greatly appreciate you consider raising the scores.\\n\\nThank you,\\n\\nAuthors\"}", "{\"summary\": \"In this work, the authors aim to merge models independently trained with different initializations. Specifically, the authors employ the Sinkhorn operator to convert the problem of finding a discrete permutation matrix into a differentiable problem that can be directly optimized using gradient descent algorithms.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The visualization of the method is good.\\n2. The application of the Sinkhorn operator is innovative in the field of deep model fusion.\", \"weaknesses\": \"1. It would be beneficial to list the number of optimized parameters of each methods.\\n2. 
Lacking related work or experimental results to substantiate the claim in lines 215-218 that \\\"However, this assumption of high similarity falls apart when the models to be merged are trained for different tasks. During merging, we must not only align parameters with similar functions but also strive to retain parameters with distinct functions, enabling the fused model to perform various tasks simultaneously.\\\"\\n3. The manuscript lacks a related work section. The introduction is insufficient and fails to provide a comprehensive overview of the existing literature and context for the study. The author could further discuss why the absence of a shared pre-trained initialization poses a challenge to multi-task model merging.\\n4. It would be beneficial to compare the results of the model merging techniques with the ensemble method and knowledge distillation method, as demonstrated in [1].\\n5. In lines 351-353, Git Re-Basin archives the best results for Task B, while AutoFusion is highlighted.\\n\\n[1] Kinderman et al. Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks. http://arxiv.org/abs/2410.01483\", \"questions\": \"1. For models trained independently rather than fine-tuning from a shared pre-trained checkpoint, the task-specific models reside in different loss basins. Consequently, linear weight interpolation is expected to yield the worst performance in this scenario. Nonetheless, in Table 4.1, for MLP models on two disjoint MNIST subsets, weight interpolation surpasses both Git Re-Basin and ZipIt. Could the authors please provide an explanation for this?\\n2. Can the proposed method scale to larger models such as vision transformers used in [1]?\\n\\n[1] Kinderman et al. Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks. 
http://arxiv.org/abs/2410.01483\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Thank you to the authors for the detailed reply and for addressing some of my initial concerns. However, two major issues prevent me from increasing my score at this time:\\n\\n1. Clarity and Presentation: The paper's clarity could be significantly improved. For example, streamlining the mathematical presentation and focusing on the most relevant equations would enhance readability and impact.\\n2. Experimental Validation: The current experiments do not include evaluation on standard CV datasets like ImageNet-1K or MS-COCO. Evaluating the proposed method on these established benchmarks is crucial.\\n\\nWhile I appreciate the revisions made so far, these outstanding issues lead me to lean towards rejection. I encourage the authors to address these points in the final version to strengthen the paper.\"}", "{\"title\": \"We have added more experiments as well as modified known issues\", \"comment\": \"## Dear Reviewer Tk42\\uff0c\\n\\nWe would like to express our sincere gratitude for taking the time to review our paper and providing valuable feedback. We have carefully considered your comments and have made revisions to address each of the issues raised. Please find our responses to your specific points below:\\n\\n**Weakness 1: Incomplete Sentence (Line 225)**\\n\\nThank you for pointing out the issue in Line 225. We acknowledge that the sentence is incomplete. The complete intended statement is as follows:\\n\\n\\\"We attempted to utilize neural functional functions from neural functional analysis to predict network parameters from network parameters \\\\cite{navon2023equivariant} \\\\cite{zhou2024neural} \\\\cite{zhou2024permutation}. 
\\\"\\n\\n**Weakness 2: Evaluation on More Complex Tasks and Datasets**\\n\\nWe appreciate the reviewer\\u2019s suggestion to evaluate our method on more complex tasks and datasets. To address this, we have conducted additional experiments on object detection tasks using the VOC dataset(Constrained by computing resources). The results are as follows:\\n\\n| Method | mAP |\\n| ------------ | ----- |\\n| Model A | 24.64 |\\n| Model B | 25.43 |\\n| Ensemble | 55.24 |\\n| Git Re-basin | 20.99 |\\n| Zipit | 18.74 |\\n| AutoFusion | 36.02 |\\n\\nWe divided the 20 target categories in the VOC2007 dataset into two parts, each containing 10 categories, where Model A and Model B represent the object detection models trained on these two parts, respectively, and the Ensemble model represents the results of training on the full training set. We uniformly use the pre-trained Feature part of VGG16 as the feature extraction network, and the object detection head is constructed using random initialization, and our fusion also only considers the fusion of the object detection head, as can be seen from the table, our method not only generalizes well on the object detection task, but also exceeds the known baseline methods\\n\\nAdditionally, we extended the AutoFusion method to the CIFAR100 dataset, as well as to the more complex Resnet network, obtaining the following results:\\n\\n| CNN-CIFAR100-GS | Joint | TaskA | TaskB |\\n| --------------- | ----- | ----- | ----- |\\n| Avg | 2.2 | 2.26 | 2.14 |\\n| ModelA | 23.12 | 43.52 | 1.74 |\\n| ModelB | 22.63 | 2.51 | 43.74 |\\n| Git-Rebasin | 3.67 | 5.12 | 2.23 |\\n| Zipit | 7.63 | 10.12 | 5.14 |\\n| Ours | 20.65 | 17.8 | 23.58 |\\n\\n| CNN-CIFAR100 | Joint | TaskA | TaskB |\\n| ------------ | ------ | ----- | ----- |\\n| Avg | 2.29 | 2.16 | 2.42 |\\n| ModelA | 28.475 | 54.11 | 2.84 |\\n| ModelB | 27.78 | 2.58 | 52.98 |\\n| Git-Rebasin | 2 | 2.21 | 1.79 |\\n| Zipit | 4.05 | 5.74 | 2.36 |\\n| Ours | 21.67 | 21.14 | 22.2 |\\n\\n| 
Resnet18-CIFAR100 | Joint | TaskA | TaskB |\\n| ----------------- | ----- | ----- | ----- |\\n| Avg | 2.28 | 2.45 | 2.1 |\\n| ModelA | 27.03 | 51.06 | 3.11 |\\n| ModelB | 30.13 | 2.88 | 57.38 |\\n| Git-Rebasin | 1.69 | 2.27 | 1.11 |\\n| Zipit | 4.51 | 6.79 | 2.22 |\\n| Ours | 32.85 | 35.62 | 30.08 |\\n\\nThese results further demonstrate the robustness and generalization of the AutoFusion framework on complex tasks. For detailed analyses, please refer to Appendix E.7 and E.8 of the revised manuscript, where we have provided additional experimental details and results. We also provide an analysis of AutoFusion's computational efficiency in Appendix E.9. Constrained by computational resources, we are still completing richer supplementary experiments, which may appear in the camera-ready version.\\n\\nThank you again for the valuable feedback. We hope these additional experiments address your concerns and demonstrate the broader applicability of our proposed method. Please also let us know of any remaining doubts or concerns we can address; if our responses resolve them, we would be grateful if you would consider raising your score.\"}", "{\"title\": \"We have added more experiments as well as modified known issues (1/3)\", \"comment\": \"## Dear Reviewer aNrj,\\n\\nWe sincerely appreciate the time you took to review our paper and provide valuable feedback. We have carefully considered your comments and made revisions to address each issue raised. Please find our responses to your specific points below:\\n\\n**Weakness 1: Number of Optimized Parameters**\\n\\nThank you for pointing this out. We have added a table summarizing the number of optimized parameters for the AutoFusion model and the ensemble model. This addition is included in the revised manuscript in Appendix E.9, providing a clearer comparison of computational complexity across methods. 
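For reference, an "Optimizable parameters" count like the one reported in the rebuttal is typically obtained by summing the sizes of all trainable tensors; for a fusion method, presumably only the alignment parameters being optimized would be counted, not the frozen source weights. A minimal sketch (our illustration, assuming a model represented as a `{name: array}` dict rather than the authors' actual code):

```python
import numpy as np

def count_optimizable_params(params):
    """Total number of optimizable scalars in a model given as a
    {layer_name: weight_array} dict."""
    return sum(int(np.prod(w.shape)) for w in params.values())

# Toy MLP head, 784 -> 32 -> 10, to show the bookkeeping:
mlp = {
    "fc1.weight": np.zeros((32, 784)), "fc1.bias": np.zeros(32),
    "fc2.weight": np.zeros((10, 32)),  "fc2.bias": np.zeros(10),
}
n_params = count_optimizable_params(mlp)  # 784*32 + 32 + 32*10 + 10 = 25450
```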
And the results are as follows:\\n\\n| Ensemble/AutoFusion | Setting | Convergence Step | Optimizable parameters |\\n| ------------------- | ---------------- | ---------------- | ---------------------- |\\n| Ensemble | CNN + MNIST | 18740 | 567226 |\\n| | MLP + MNIST | 16900 | 1720330 |\\n| | CNN + Fashion | 21551 | 567226 |\\n| | MLP + Fashion | 14992 | 1720330 |\\n| | CNN + KMNIST | 22488 | 567226 |\\n| | MLP + KMNIST | 15929 | 1720330 |\\n| | CNN + CIFAR10 | 37480 | 567226 |\\n| | MLP + CIFAR10 | 38417 | 1720330 |\\n| AutoFusion | CNN + MNIST(5+5) | 932 | 267328 |\\n| | MLP + MNIST(5+5) | 1020 | 802660 |\\n| | CNN + Fashion(5+5)| 859 | 267328 |\\n| | MLP + Fashion(5+5)| 800 | 802660 |\\n| | CNN + KMNIST(5+5)| 937 | 267328 |\\n| | MLP + KMNIST(5+5)| 792 | 802660 |\\n| | CNN + CIFAR10(5+5)| 2811 | 267328 |\\n| | MLP + CIFAR10(5+5)| 1267 | 802660 |\\n\\nYou can get information about the setup and interpretation of the form in Appendix E.9.\\n\\n**Weakness 2: Claim on High Similarity Assumption (Lines 215-218)**\\n\\nWe appreciate your observation and agree that further evidence strengthens this claim. To address this: We have computed similarity metrics across layers of fusion models, demonstrating the divergence in parameter distributions. These results substantiate our statement and are presented in Appendix F.1 of the revised manuscript. 
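As an illustration of the similarity analysis described above, a layer-wise comparison of two same-architecture models can be sketched as follows (our sketch, not the paper's code): cosine similarity between the flattened weights of corresponding layers, where values near zero indicate the kind of parameter divergence being discussed:

```python
import numpy as np

def layerwise_cosine_similarity(model_a, model_b):
    """Cosine similarity per layer for two same-architecture models,
    each given as a {layer_name: weight_array} dict."""
    sims = {}
    for name, wa in model_a.items():
        a = wa.ravel()
        b = model_b[name].ravel()
        sims[name] = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sims

rng = np.random.default_rng(0)
shared = rng.normal(size=(4, 4))
model_a = {"conv1": shared, "fc": rng.normal(size=(10, 16))}
model_b = {"conv1": shared.copy(), "fc": rng.normal(size=(10, 16))}
sims = layerwise_cosine_similarity(model_a, model_b)
# Identical layers give similarity 1.0; independently initialized layers
# are close to orthogonal, so their similarity is near 0.
```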
\\n\\nFurthermore, in Appendix E.8 our newly added experiments on object detection model fusion (the first in the model fusion field) on the VOC dataset highlight that retaining task-specific parameters improves multi-task performance, validating the necessity of aligning similar parameters while preserving distinct ones.\\n\\nThe results are as follows; more details are available in Appendix E.9:\\n\\n| Method | mAP |\\n| ------------ | ----- |\\n| Model A | 24.64 |\\n| Model B | 25.43 |\\n| Ensemble | 55.24 |\\n| Git Re-basin | 20.99 |\\n| Zipit | 18.74 |\\n| AutoFusion | 36.02 |\\n\\n**Weakness 3: Absence of Related Work Section**\\n\\nWhile we appreciate your suggestion to include a separate related work section, we respectfully assert that the current structure integrates related work into the introduction and throughout the paper. This decision was made to focus on highlighting the unique challenges addressed by AutoFusion, such as merging models without shared pre-trained initializations.\\n\\nThe introduction discusses prior works like Git Re-Basin and ZipIt, situating AutoFusion within the broader research context.\\n\\nSection 2 contrasts traditional model fusion methods with our approach, explaining how AutoFusion handles the absence of pre-trained initializations\\u2014a critical and less explored challenge in the field.\\n\\nGiven the page constraints, we chose this integrated approach to avoid redundancy while providing sufficient context for our contributions.\\n\\nWe hope this justifies our structural decision while ensuring the manuscript remains concise and focused.\"}", "{\"title\": \"I apologize if this follow-up message seems frequent.\", \"comment\": \"Dear Reviewer ACFL,\\n\\nI hope this message finds you well. We would like to take a moment to express our sincere gratitude once again for your time and effort in reviewing our paper. 
Your insightful comments have been invaluable in guiding our revisions.\\n\\nThe revisions we have made significantly improve the quality and contribution of our work. We are committed to addressing any remaining concerns and are more than willing to engage in further discussions. I apologize if this follow-up message seems frequent. We genuinely value your feedback and are eager to ensure that all your concerns are thoroughly addressed. Your insights are crucial to the improvement of our work, and we hope for your continued support.\\n\\nThank you once again for your support and consideration. We look forward to your response.\\n\\nWarm regards\"}", "{\"title\": \"We have added more experiments as well as modified known issues (1/2)\", \"comment\": \"## Dear Reviewer ACFL,\\n\\nWe would like to express our sincere gratitude for taking the time to review our paper and providing valuable feedback. We have carefully considered your comments and have made revisions to address each of the issues raised. Please find our responses to your specific points below:\\n\\n**Weakness 1: Experiments Could Be Improved**\\n\\n**1.1 Analysis of Model Similarity:**\\n\\nThank you for this suggestion. We agree that analyzing model similarity is crucial to understanding the challenges of parameter fusion. \\n\\nTo address this, we have computed similarity metrics via cosine similarity between models trained on different tasks. These results will be included in the revised paper in Appendix E.9. \\n\\n**1.2 Baselines of Fine-tuning Models Jointly on Multi-tasks**\\n\\nThank you for your suggestion; we agree that including an ensemble model is necessary, and we have added the ensemble results to the comparison experiments in Section 4.1, Table 1 ('Ensemble Model'), highlighted in red. 
\\n\\n**1.3 Comparison to LoRA Fine-tuning**\\n\\nWhile we acknowledge its relevance as a fine-tuning method, our approach, AutoFusion, fundamentally differs in purpose and methodology. LoRA primarily focuses on parameter-efficient fine-tuning by introducing additional low-rank matrices to existing parameters, while AutoFusion aims at parameter fusion across models trained on disjoint tasks, without introducing additional trainable parameters.\\n\\nMoreover, AutoFusion does not inherently rely on the concept of low-rank decomposition or optimization for specific parameter subsets. Instead, it employs a differentiable permutation matrix to align and merge parameters dynamically. This makes LoRA's low-rank perspective less applicable as a direct comparison. \\n\\nFinally, while LoRA is a robust fine-tuning approach, its inclusion as a baseline might lead to confusion regarding the scope of our work, which is centered on unsupervised parameter fusion rather than fine-tuning. For these reasons, we have opted not to include LoRA as a baseline in this study. We hope this explanation clarifies our rationale and aligns with the focus of the paper.\\n\\n**1.4 Comparisons to Git Re-Basin and Zipit in Section 4.3**\\n\\nWe have extended the experiments in Section 4.3 to include comparisons to Git Re-Basin and Zipit. The updated results demonstrate that AutoFusion consistently outperforms these baselines on different distributions. Please refer to the revised Section 4.3 Table 3 for details. 
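Regarding the differentiable permutation matrix mentioned in point 1.3 above: in AutoFusion-style methods it is commonly realized with the Sinkhorn operator (Mena et al., 2018), which maps an unconstrained score matrix to a (near-)doubly-stochastic relaxation of a permutation. A minimal numpy sketch of the operator itself, for illustration only (the paper's implementation details may differ):

```python
import numpy as np

def sinkhorn(logits, n_iters=100, tau=1.0):
    """Sinkhorn operator: alternately normalize the rows and columns of
    exp(logits / tau). The result is approximately doubly stochastic,
    and as tau -> 0 it approaches a hard permutation matrix."""
    s = np.exp(logits / tau)
    for _ in range(n_iters):
        s = s / s.sum(axis=1, keepdims=True)  # rows sum to 1
        s = s / s.sum(axis=0, keepdims=True)  # columns sum to 1
    return s

rng = np.random.default_rng(1)
p = sinkhorn(rng.normal(size=(4, 4)))
# Each row and column of p sums to (approximately) 1, so p can softly
# permute one model's layer weights before they are merged.
```

In practice the soft matrix is used to align one model's weights during training, with the temperature `tau` annealed; at inference a hard permutation can be recovered by a rounding step such as the Hungarian algorithm.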
And the additional results are as follows, too:\\n\\n| Fusion Method | Fused Models | MNIST | Fashion | KMNIST | Avg |\\n| -------------- | ------------- | ----- | ------- | ------ | ----- |\\n| Git-Rebasin | MNIST+Fashion | 12.36 | 10.32 | 20.19 | 14.29 |\\n| - | MNIST+KMNIST | 10.23 | 9.88 | 15.58 | 11.89 |\\n| - | KMNIST+Fashion | 10.12 | 12.92 | 19.16 | 14.06 | \\n| - | Fused ALL | 10.29 | 9.11 | 13.76 | 11.05 | \\n| Zipit | MNIST+Fashion | 10.75 | 12.23 | 21.92 | 14.97 |\\n| - | MNIST+KMNIST | 15.41 | 9.11 | 24.95 | 16.49 | \\n| - | KMNIST+Fashion | 10.42 | 14.45 | 23.79 | 16.22 | \\n| - | Fused ALL | 9.98 | 9.12 | 10.87 | 9.99 | \\n\\n**1.5 Experiments on Larger Datasets**\\n\\nLimited by computational equipment, we supplemented the fusion results with the ResNet family of models while taking the more complex CIFAR100 dataset into account. Additional experimental results are shown below: \\n\\n| CNN-CIFAR100-GS | Joint | TaskA | TaskB |\\n| --------------- | ----- | ----- | ----- |\\n| Avg | 2.2 | 2.26 | 2.14 |\\n| ModelA | 23.12 | 43.52 | 1.74 |\\n| ModelB | 22.63 | 2.51 | 43.74 |\\n| Git-Rebasin | 3.67 | 5.12 | 2.23 |\\n| Zipit | 7.63 | 10.12 | 5.14 |\\n| Ours | 20.65 | 17.8 | 23.58 |\\n\\n| CNN-CIFAR100 | Joint | TaskA | TaskB |\\n| ------------ | ------ | ----- | ----- |\\n| Avg | 2.29 | 2.16 | 2.42 |\\n| ModelA | 28.475 | 54.11 | 2.84 |\\n| ModelB | 27.78 | 2.58 | 52.98 |\\n| Git-Rebasin | 2 | 2.21 | 1.79 |\\n| Zipit | 4.05 | 5.74 | 2.36 |\\n| Ours | 21.67 | 21.14 | 22.2 |\\n\\n| Resnet18-CIFAR100 | Joint | TaskA | TaskB |\\n| ----------------- | ----- | ----- | ----- |\\n| Avg | 2.28 | 2.45 | 2.1 |\\n| ModelA | 27.03 | 51.06 | 3.11 |\\n| ModelB | 30.13 | 2.88 | 57.38 |\\n| Git-Rebasin | 1.69 | 2.27 | 1.11 |\\n| Zipit | 4.51 | 6.79 | 2.22 |\\n| Ours | 32.85 | 35.62 | 30.08 |\\n\\nMore detailed results can be found in Appendix E.7.\"}", "{\"comment\": \"Many thanks for your clarification and it makes sense for me. 
I have another concern that how about the performance when applying your fusion method to other models with more parameters, such as Resnet-50, ViT-B. They don't require high computational capability GPUs, e.g., A100. Extensive experiments on different scales of models are required to demonstrate the effectiveness and generalization ability.\"}", "{\"summary\": \"This paper introduces AutoFusion, a framework that fuses distinct model\\u2019s parameters (with the same architecture) for multi-task learning. The key idea is to leverage Mena et al. (2018) to make permutation matrix in Re-basin differentiable, thus allowing end-to-end training. Experimental results demonstrate clear improvement over baseline methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"It leverages Mena et al. (2018) to make permutation matrix in Re-basin differentiable, thus allowing end-to-end training.\", \"It achieves clear improvement over baseline methods on MINST and CIFAR.\"], \"weaknesses\": [\"__Experiments could be improved__\", \"an analysis of model similarity is needed.\", \"baselines of fine-tuning the model (trained on one task) on the multi-task jointly are needed. They will provide a good reference even though they are not consider as fair comparisons.\", \"LoRA fine-tuning could be considered as a fair baseline. As the proposed model learns a permutation matrix per layer, which essentially can be considered as low-rank fine-tuning. Thus, adding comparison to LoRA fine-tuning would provide additional insights.\", \"In section 4.3, it only compares to weight interpolation on different distributions. Please add comparisons to Git Re-Basin and Zipit (similar to section 4.1)\", \"experiments on larger dataset (like ImageNet) using transformer based architectures would provide more convincing evidences.\", \"__The paper needs a major revision in writing.__\", \"The introduction could be improved. 
It is not usual to have half of the introduction to summarize contributions. It would be better to add more lines on the loss function and unsupervised setup and reduce the space for contributions.\", \"Figure 1 could be improved. Please adding explanation what each animal represents in the caption.\", \"Please avoid overusing equations. For example, eq. 1-4 could be in text for better readability. Eq. 7 and 8 could be combined. Eq. 9 and 10 need more explanation about M, U and insights behind. Eq. 11 could be in text.\", \"Figure 2 is too busy. Math equations make it difficult to read.\", \"Line 209: \\u201cin the absence of pre-trained parameters\\u201d. Are parameters in Model A and B pre-trained? This is confusing.\", \"Line 215: \\u201cHowever, this assumption of high similarity falls apart when the models to be merged are trained for different tasks.\\u201d Please demonstrate this by real examples and measure the similarities for different tasks.\", \"Section 3.1 could be written in a more straightforward manner. It simply leverages differentiable Sinkhorn operator in prior works Mena et al. (2018) and Pena et al. (2023). The error bound is nice to have, but not directly related to the key idea of the paper.\"], \"questions\": \"please refer to items in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper concentrates on a very interesting problem: how to fuse two types of distinct model parameters pretrained for two different tasks into one model that can simultaneously solve two tasks. 
By applying permutation on different parameters and unsupervised learning on unlabeld data, this paper provide an autofusion method and achieves good performance.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Although I am not an expert in this domain, I believe these strengths should be acknowledged:\", \"The paper presents a clear and convincing motivation, effectively setting the stage for the proposed work.\", \"There is a notable degree of innovation in the methodology, and the authors have thoroughly reviewed prior approaches, clarifying how their contributions advance the state-of-the-art.\", \"The results achieved by the proposed method are impressive, consistently outperforming baselines across a variety of experimental settings, which underscores its effectiveness.\", \"Additionally, the paper provides detailed theoretical proofs that reinforce the validity and soundness of the approach.\", \"The writing is also commendable, as the paper reads smoothly and is relatively accessible, making it easier for readers to grasp complex concepts.\", \"Overall, this work shows promise in advancing the field and could be a valuable addition to the literature.\"], \"weaknesses\": [\"Line 225: The sentence appears to be incomplete because it begins with a conditional clause (\\u201cIf we attempt to\\u2026\\u201d), which typically requires a main clause to complete the thought. In English, when a sentence starts with \\u201cIf,\\u201d it sets up an expectation that there will be a following statement explaining the result, purpose, or consequence of the condition.\", \"To further demonstrate the effectiveness of the proposed fusion method, more complex tasks and datasets should be considered, such as detection and segmentation tasks with VOC, COCO, or ImageNet datasets, respectively. 
In this paper, the evaluation is limited to the classification task on two relatively simple datasets (MNIST and CIFAR-10), which is insufficient to validate the robustness of the approach and may render the work less substantial. I will update my final score if the authors can provide more experimental results on some complex tasks and datasets.\"], \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents an AutoFusion method for fusing parameters from distinct models for multi-task learning without relying on pre-trained checkpoints. It dynamically permutes model parameters at each layer, optimizing their combination through an unsupervised process.\\n\\nThis paper received mixed reviews, with one positive and two negative scores. All reviewers agree that the current evaluation is insufficient to support the proposed method. Additionally, the results seen in the added experiments, such as VOC object detection, lack sufficient convincing evidence. More comprehensive experiments are needed to strengthen the claims. At this stage, the paper is not sufficiently prepared for publication.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer Tk42 believes the evaluation is limited to classification tasks on two relatively simple datasets. Although the authors provide additional results on VOC and CIFAR100, the reviewer feels that these new tasks show limited improvements.\\n\\nReviewer ACFL emphasizes the need for more comparisons and experiments on larger datasets. And note that the paper requires major revisions in writing. The rebuttal did not adequately address these concerns.\\n\\nReviewer aNrj raises concerns about insufficient comparisons with ensemble and knowledge distillation methods. 
While the rebuttal partially addresses these concerns, the issue of scaling to larger models remains unresolved.\\n\\nOverall, the experimental analysis in this paper is insufficient for publication.\"}", "{\"title\": \"We have added more experiments as well as modified known issues (2/2)\", \"comment\": \"Additionally, for the first time, we extend the approach of model parameter fusion to the object detection task, we trained two object detection models with disjointed detection targets on VOC2007 separately and tested the fusion, the overall results are shown in the following table:\\n\\n| Method | mAP |\\n| ------------ | ----- |\\n| Model A | 24.64 |\\n| Model B | 25.43 |\\n| Ensemble | 55.24 |\\n| Git Re-basin | 20.99 |\\n| Zipit | 18.74 |\\n| AutoFusion | 36.02 |\\n\\nFor more detailed settings and more results, please refer to Appendix E.8.\\n\\n**Weakness 2: The Paper Needs a Major Revision in Writing**\\n\\n**2.1 Improving the Introduction**\\n\\nWe have revised Section - Introduction to strike a better balance between introducing the problem, describing the unsupervised setup, and summarizing contributions. The space allocated for contributions have been reduced, with more emphasis on explaining the loss function and AutoFusion's unsupervised learning framework. You can see that in the new version (red words) .\\n\\n**2.2 Figure 1 Explanations**\\n\\nWe have improved the **caption of Figure 1** by explaining what each animal represents in the AutoFusion context. You can see that in the new version. \\n\\n**2.3 Reducing the Use of Equations**\\n\\nIn the new version, we have simplified the presentation of some equation to enhance readability. \\n\\n**2.4 Simplifying Figure 2**\\n\\nWe've done a thorough optimization of Figure 2, which you can see in the new version of paper. 
We have redesigned Figure 2 to improve readability by reducing the number of mathematical notations and enhancing visual clarity.\\n\\n**2.5 Clarifying Lines 209 and 215**\\n\\n**Line 209:** The confusion arises because Model A and Model B are trained separately on different tasks but do not share pre-trained weights. We have rephrased this to clarify the distinction. \\n\\n**Line 215:** To address this, we have computed similarity metrics via cosine similarity between models trained on different tasks. These results are included in the revised paper in Appendix F.1. \\n\\n**2.6 Improving Section 3.1**\\n\\nHowever, we would like to clarify the importance of the theoretical elements included in this section. The differentiable Sinkhorn operator and its associated error bound are not merely auxiliary but are central to the novelty and effectiveness of our method. While we have refined the language in Section 3.1 for better readability, we respectfully assert that the theoretical components are essential and aligned with the paper's objectives.\"}", "{\"comment\": \"Thanks for your response. I decide to keep my current score for acceptance.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"We are looking forward to your feedback!\", \"comment\": \"Dear Reviewer,\\n\\nThank you so much for your time and efforts in reviewing our paper. We have addressed your comments in detail and are happy to discuss more if there are any additional concerns. We are looking forward to your feedback and would greatly appreciate you consider raising the scores.\\n\\nThank you,\\n\\nAuthors\"}", "{\"title\": \"I apologize if this follow-up message seems frequent.\", \"comment\": \"Dear Reviewer aNrj,\\n\\nI hope this message finds you well. We would like to take a moment to express our sincere gratitude once again for your time and effort in reviewing our paper. 
Your insightful comments have been invaluable in guiding our revisions.\\n\\nThe revisions we have made significantly improve the quality and contribution of our work. We are committed to addressing any remaining concerns and are more than willing to engage in further discussions. I apologize if this follow-up message seems frequent. We genuinely value your feedback and are eager to ensure that all your concerns are thoroughly addressed. Your insights are crucial to the improvement of our work, and we hope for your continued support.\\n\\nThank you once again for your support and consideration. We look forward to your response.\\n\\nWarm regards,\"}", "{\"comment\": \"Thanks for the authors' response. I have carefully read their comments and decide to keep my current score. The reason is that the additional results on more complex datasets and tasks present limited improvements. Especially when validating on CIFAR-100, Wolf2Pack obtained a worse joint performance than both model A and model B.\"}", "{\"title\": \"Thank you very much for your detailed feedback\", \"comment\": \"Dear Reviewer **ACFL**,\\n\\nThank you very much for your detailed feedback and for taking the time to review our paper. We appreciate your valuable insights and the constructive comments that have helped us improve the quality of our work. We understand your concerns regarding the clarity and presentation of the paper, as well as the need for more comprehensive experimental validation on standard benchmarks. We would like to address these points in a revised version and hope to convince you of the merits of our approach.\\n\\n**Clarity and Presentation:**\\nWe acknowledge the importance of presenting our work in a clear and accessible manner. To enhance the readability of the paper, we will streamline the mathematical presentation, focusing on the most relevant equations and integrating others into the text where appropriate. 
We will also revisit the entire document to ensure that all sections are presented in a coherent and concise fashion, with special attention to Figures 1 and 2, which we will simplify further while maintaining their informative value. Our goal is to make the paper more accessible to a broader audience without sacrificing the technical depth of our contributions.\\n\\n**Experimental Validation:**\\nRegarding the experimental validation, we fully agree that evaluating our method on established benchmarks such as ImageNet-1K and MS-COCO is crucial. We are currently conducting experiments on these datasets and are committed to including the results in the final version of the paper. Due to the time constraints, we were unable to complete these experiments in time for the current submission. However, we want to assure you that we are actively working on this and will provide the results, along with a thorough analysis, in the camera-ready version.\\n\\nWe are confident that these revisions, along with the previous changes, will significantly strengthen the paper. We sincerely hope that you will consider these improvements and reassess the contribution of our work. Your guidance has been invaluable, and we look forward to your continued feedback.\\n\\nThank you once again for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Thank you once again for your thoughtful comments\", \"comment\": \"Dear Reviewer,\\n\\nThank you once again for your thoughtful comments and for taking the time to engage with the results of our new experiments. We truly appreciate your insights, which have guided us in refining our work.\\n\\nIn response to the significant structural differences between the Vision Transformer (ViT) and our previous model, we decided to extend our experiments to include the ResNet50 architecture. 
The results of these experiments are summarized in the tables below.\\n\\n| Resnet50-CIFAR100 | Joint | TaskA | TaskB |\\n| ----------------- | ------ | ----- | ----- |\\n| Avg | 5.985 | 5.25 | 6.72 |\\n| Model A | 26.18 | 49.74 | 2.62 |\\n| Model B | 27.745 | 3.66 | 51.83 |\\n| Git-Rebasin | 7.265 | 6.29 | 8.24 |\\n| Zipit | 9.125 | 10.97 | 7.28 |\\n| Ours | 33.83 | 33.37 | 34.29 |\\n\\n| Resnet50-CIFAR100-Pretrained | Joint | TaskA | TaskB |\\n| ---------------------------- | ------ | ----- | ----- |\\n| Avg | 24.51 | 23.44 | 25.58 |\\n| Model A | 32.02 | 61.17 | 2.87 |\\n| Model B | 31.665 | 3.45 | 59.88 |\\n| Git-Rebasin | 16.49 | 18.66 | 14.32 |\\n| Zipit | 22.425 | 23.97 | 20.88 |\\n| Ours | 52.26 | 54.23 | 50.29 |\\n\\nFor the **Resnet50-CIFAR100** experiments, we trained the full ResNet50 model from scratch, with the rest of the settings the same as in previous experiments. As the results show, our model still maintains high Joint accuracy while achieving more balanced results across the two tasks.\\n\\nRegarding the **ResNet50-CIFAR100-Pretrained** experimental results, we employed the pre-trained ResNet50 network as a feature extraction layer, which we kept frozen during training. In this setup, we focused on training only the classification layer from scratch. We carefully integrated the parameters such that the fusion of parameters was limited to the classification layer. Encouragingly, we observed improvements across various methods in this configuration, with our approach achieving results that are quite comparable to those of Model A and Model B on their respective tasks. **We believe that the outcomes of this task setting are significant, and we will emphasize this aspect as a key result in subsequent revisions of our article.**\\n\\nThank you again for your detailed feedback. 
We believe that these updates will strengthen our paper and enhance its clarity, and we look forward to addressing any further questions you may have.\"}", "{\"title\": \"Thank you very much for your thoughtful review and for the valuable feedback on our revised article.\", \"comment\": \"Dear Reviewer,\\n\\nThank you very much for your thoughtful review and for the valuable feedback on our revised article. We appreciate the opportunity to address your comments regarding the CIFAR100 dataset and the performance of our fusion model.\\n\\nWe understand your observation that the fusion model exhibits weaker performance on the Joint metric compared to Model A and Model B. We would like to clarify our approach in the context of model parameter fusion. Our method involves directly merging the parameters of two models trained on separate tasks to create a single model that aims to maintain a certain level of competence across both tasks. It is not uncommon for such fused models to experience a reduction in joint performance, as highlighted in several foundational studies in this field [1].\\n\\nWhile Models A and B may show superior Joint metrics, it\\u2019s important to note that they excel in only one task each, demonstrating limited capability on the other. In contrast, our fusion model seeks to balance performance across both tasks, achieving a more uniform capability that is significantly above the baseline for similar tasks. This characteristic indicates that our approach effectively enhances overall performance, even if the Joint metric does not surpass that of the individual models. Furthermore, we are pleased to report that our ResNet18-based fusion model indeed outperforms both Model A and Model B on the Joint metric, reinforcing the validity of our methodology.\\n\\nAdditionally, in response to your insightful suggestion, we have extended our method to the object detection task using the VOC dataset, and we provide detailed results in Appendix E.8. 
The outcomes demonstrate the strong generalization performance of our proposed approach, further validating its effectiveness.\\n\\nThank you again for your detailed feedback. We believe that these updates will strengthen our paper and enhance its clarity, and we look forward to addressing any further questions you may have.\\n\\n**References**:\\n\\n[1] arxiv.org/abs/2305.03053 [accepted by ICLR 2024]\"}", "{\"title\": \"We are looking forward to your feedback!\", \"comment\": \"Dear Reviewer,\\n\\nThank you so much for your time and efforts in reviewing our paper. We have addressed your comments in detail and are happy to discuss more if there are any additional concerns. We are looking forward to your feedback and would greatly appreciate it if you would consider raising your score.\\n\\nThank you,\\n\\nAuthors\"}", "{\"title\": \"We have added more experiments as well as modified known issues (2/3)\", \"comment\": \"**Weakness 4.1: Comparison with Ensemble**\\n\\nThank you for your suggestion; we also believe that the addition of an ensemble model is necessary, and we have included the results of the ensemble model in our comparison experiments in Section 4.1 - Table 1 - 'Ensemble Model' (in red). \\n\\n**Weakness 4.2: Comparison with Knowledge Distillation**\\n\\nWe appreciate the reviewer\\u2019s suggestion to compare AutoFusion with knowledge distillation methods. However, we respectfully argue that such a comparison is not aligned with the problem setting of this work for the following reasons:\\n\\n1) The focus of our work is on unsupervised parameter fusion, particularly for models trained independently on disjoint tasks without shared pre-trained initializations. 
Knowledge distillation does not address this specific challenge, and comparing it with AutoFusion would shift the focus away from the core contribution of our work.\\n2) Evaluating knowledge distillation methods would require additional experimental setups unrelated to the parameter fusion context, thereby diluting the clarity of our contributions. Instead, we prioritize comparisons with directly relevant baselines, such as Git Re-Basin and ZipIt, to highlight the strengths of AutoFusion in solving the problem it targets.\\n\\nWe hope this explanation clarifies why knowledge distillation methods are not included as baselines and aligns with the problem scope of the paper.\\n\\n**Weakness 5: Highlighting AutoFusion in Lines 351-353**\\n\\nThank you for noting this. We have clarified in the revised manuscript that Git Re-Basin achieves the best results for Task B in some cases. Since Git Re-Basin relies on aligning one model to the other in terms of similarity, the fused model may retain the capabilities of one model while the other is often poorly preserved, leading to poor performance on Joint tasks. It is therefore reasonable for Git Re-Basin to achieve better results on a certain individual task, but our AutoFusion approach consistently maintains a clear lead in the overall evaluation.\\n\\n**Question 1: Linear Weight Interpolation Surpassing Baselines (Table 4.1)**\\n\\nThank you for highlighting this unexpected observation. We have conducted a detailed analysis to understand why weight interpolation performs better than Git Re-Basin and ZipIt for MLP models on disjoint MNIST subsets. Below are our findings:\\n\\nThe MNIST dataset and MLP models represent a relatively low-dimensional problem and simple architecture. 
In such settings, the optimization landscape tends to be less rugged, and the parameter spaces of independently trained models may exhibit a degree of alignment even without explicit permutation. This can allow linear weight interpolation to achieve reasonable performance. Unlike deeper models or convolutional architectures, MLPs have a smaller parameter space with fewer invariances to neuron permutations. This reduces the impact of unaligned parameters, allowing interpolation to partially preserve task-specific information.\\n\\nGit Re-Basin and ZipIt rely on sophisticated alignment mechanisms that provide significant benefits in high-dimensional or complex tasks. However, in simpler settings like disjoint MNIST subsets with MLPs, these alignment procedures may not provide significant additional benefits over straightforward interpolation, particularly given the inherent alignment observed in simpler models and tasks.\\n\\nIt is important to emphasize that this result is not representative of more complex settings. As demonstrated in experiments with CNNs on the CIFAR dataset, AutoFusion consistently outperforms interpolation and other baselines, highlighting its scalability and robustness in diverse scenarios.\"}", "{\"title\": \"Thank you for your thoughtful feedback\", \"comment\": \"Thank you for your thoughtful feedback and for acknowledging the revisions we've made in response to your earlier concerns. In response to your new comments, we made the following plan to revise the paper\\n\\n1. **Related Work Section**: We recognize the importance of having a dedicated section for related work to enhance the readability and comprehensiveness of our paper. We will incorporate a dedicated section in the camera-ready version, ensuring that it provides a thorough overview of relevant literature while maintaining clarity.\\n\\n2. **References**: We appreciate your observation regarding the references. 
We will expand our reference list in the camera-ready version to include additional relevant works that strengthen the context and foundation of our research.\\n\\n3. **Additional Experiments with Modern Architectures**: We agree that conducting more extensive experiments with modern architectures, such as Vision Transformers and larger models like GPT-2, would provide valuable insights into the applicability of AutoFusion. Due to complexity constraints, we regret to inform you that we will not be able to include these experiments in the current version of the paper. However, we commit to conducting these experiments and including the findings in the camera-ready version.\\n\\nThank you again for your constructive comments. We believe that these additions will significantly enhance the overall quality of our work and appreciate your understanding as we continue to refine our manuscript.\"}", "{\"comment\": \"Thank you for the comprehensive revisions and detailed responses to the concerns raised, which partially address the weaknesses. While most of my concerns have been adequately addressed, I maintain my original score based on the following key points:\\n\\n1. Related work: While I understand your decision to integrate related work throughout, I still believe a dedicated section would improve readability without significantly impacting page limits. And the references are Insufficient.\\n2. For larger model scaling: The additional ResNet results are promising, but more extensive experiments with modern architectures (e.g., Vision Transformers < 100 M parameters, GPT-2 ~ 140 M parameters) would better demonstrate AutoFusion's broad applicability.\"}", "{\"title\": \"We have added more experiments as well as modified known issues (3/3)\", \"comment\": \"**Question 2: Scaling to Larger Models like Vision Transformers**\\n\\nWe acknowledge the importance of scaling to larger models. 
Due to limited computational resources, we currently include only a subset of more complex experimental setups to validate the generalization ability of our method. In Appendix A, we provide test results using ResNet as a baseline network on the relatively complex CIFAR100 dataset:\\n\\n| CNN-CIFAR100-GS | Joint | TaskA | TaskB |\\n| --------------- | ----- | ----- | ----- |\\n| Avg | 2.2 | 2.26 | 2.14 |\\n| ModelA | 23.12 | 43.52 | 1.74 |\\n| ModelB | 22.63 | 2.51 | 43.74 |\\n| Git-Rebasin | 3.67 | 5.12 | 2.23 |\\n| Zipit | 7.63 | 10.12 | 5.14 |\\n| Ours | 20.65 | 17.8 | 23.58 |\\n\\n| CNN-CIFAR100 | Joint | TaskA | TaskB |\\n| ------------ | ------ | ----- | ----- |\\n| Avg | 2.29 | 2.16 | 2.42 |\\n| ModelA | 28.475 | 54.11 | 2.84 |\\n| ModelB | 27.78 | 2.58 | 52.98 |\\n| Git-Rebasin | 2 | 2.21 | 1.79 |\\n| Zipit | 4.05 | 5.74 | 2.36 |\\n| Ours | 21.67 | 21.14 | 22.2 |\\n\\n| Resnet18-CIFAR100 | Joint | TaskA | TaskB |\\n| ----------------- | ----- | ----- | ----- |\\n| Avg | 2.28 | 2.45 | 2.1 |\\n| ModelA | 27.03 | 51.06 | 3.11 |\\n| ModelB | 30.13 | 2.88 | 57.38 |\\n| Git-Rebasin | 1.69 | 2.27 | 1.11 |\\n| Zipit | 4.51 | 6.79 | 2.22 |\\n| Ours | 32.85 | 35.62 | 30.08 |\\n\\nIt can be clearly seen that our method still stably outperforms the baseline methods; a more detailed explanation of the experimental setup and results can be found in Appendix E.7. As mentioned above, we also tested the fusion effect on an object detection model, a type of task that pushes the AutoFusion method to more complex settings, and it likewise showed good results; detailed results are shown in Appendix E.8.\"}" ] }
DakTqQu161
Unified Multi-Modal Interleaved Document Representation for Information Retrieval
[ "Jaewoo Lee", "Joonho Ko", "Jinheon Baek", "Soyeong Jeong", "Sung Ju Hwang" ]
Information Retrieval (IR) methods aim to identify relevant documents in response to a given query, which have gained remarkable attention due to their successful application in various natural language tasks. However, existing approaches typically consider only the textual information within the documents, which overlooks the fact that documents can contain multiple modalities, including texts, images, and tables. Further, they often segment each long document into multiple discrete passages for embedding, preventing them from capturing the overall document context and interactions between paragraphs. We argue that these two limitations lead to suboptimal document representations for retrieval. In this work, to address them, we aim to produce more comprehensive and nuanced document representations by holistically embedding documents interleaved with different modalities. Specifically, we achieve this by leveraging the capability of recent vision-language models that enable the processing and integration of text, images, and tables into a unified format and representation. Moreover, to mitigate the information loss from segmenting documents into passages, instead of representing and retrieving passages individually, we further merge the representations of segmented passages into one single document representation, while we additionally introduce a reranking strategy to decouple and identify the relevant passage within the document if necessary. Then, through extensive experiments on diverse information retrieval scenarios considering both the textual and multi-modal queries, we show that our approach substantially outperforms relevant baselines, thanks to the consideration of the multi-modal information interleaved within the documents in a unified way.
[ "Information Retrieval", "Multi-Modal Information Retrieval", "Multi-Modal Representation Learning" ]
https://openreview.net/pdf?id=DakTqQu161
https://openreview.net/forum?id=DakTqQu161
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zFobZu06lk", "tGoY4XnI8D", "soRMP3139q", "pYsQKyRzqv", "mLiD9LxiI5", "eapSYBekSl", "d8AGkH8Ud1", "MXa5dxQ3c2", "Iu2oWkN2ky", "Idn5afIpPG", "GQde6XDyXW", "Ewk9ZPaQDg", "BlbO12hOii", "95RYwozFeu", "6JVwIskZjk", "40nB98NKdt" ], "note_type": [ "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732895898125, 1732896100396, 1734189163233, 1732895810108, 1732896061181, 1733109712300, 1730692178633, 1732895597560, 1732896004442, 1733116411464, 1730646588432, 1733177462250, 1732895654916, 1732895764893, 1730765843467, 1730369750904 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Reviewer_xZc9" ], [ "ICLR.cc/2025/Conference/Submission9091/Reviewer_ruy7" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Reviewer_pXLA" ], [ "ICLR.cc/2025/Conference/Submission9091/Reviewer_ruy7" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Authors" ], [ "ICLR.cc/2025/Conference/Submission9091/Reviewer_boqu" ], [ "ICLR.cc/2025/Conference/Submission9091/Reviewer_xZc9" ] ], "structured_content_str": [ "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer pXLA,\\n\\nThank you for your review and constructive comments. 
We have made every effort to address your concerns.\\n\\n---\\n\\n> #### **[Weakness 1] Real-world application** \\n> The experiments are conducted on clean, source-available corpus whose documents can be easily segmented into sections according to the subtitles, and then extracted into multi-modal elements. However, real-world data are often presented in compiled files like PDFs. In such scenarios, document division and multi-modal data extraction may not be possible. This poses a challenge for IDentIfy in real-world use.\\n\\n$\\rightarrow$ Thank you for your constructive feedback. While we acknowledge that documents are sometimes presented in compiled formats (such as PDFs), a more significant portion of publicly accessible documents, particularly those available on the web, are formatted in HTML. Also, we envision that documents represented as PDFs can be converted into HTML, after which the proposed approach can be applied to represent them; we leave this direction of handling PDF documents as future work. \\n\\n---\\n\\n> #### **[Weakness 2] Absence of introductory paragraph**\\n> The presentation of the results in Section 4.3 lacks a main thread, and is difficult to follow. I suggest the authors add an introductory paragraph at the beginning of Section 4.3 and organize the experiments in a clearer structure.\\n\\n$\\rightarrow$ We appreciate your constructive suggestion. While we tried to provide the main finding of each experiment at the beginning of each paragraph with bolded sentences, we will improve the presentation and description of Section 4.3 in the revision. \\n\\n---\\n\\n> #### **[Questions 1] Explanation on retrieval targets** \\n> As shown in Table 8, the retrieval target of Encyclopedic-VQA, InfoSeek, ViQuAE is only text. Why does IDentIfy perform better than the Text-document baseline on these datasets?\\n\\n$\\rightarrow$ We apologize for the confusion. 
The retrieval target for those three datasets is the documents interleaved with multiple modalities (such as text, images, and tables); therefore, since our proposed method can holistically consider them, unlike the Text-document baseline that is limited to the text alone, it is superior to the baseline. \\n\\n---\\n\\n> #### **[Questions 2] Error on contrastive loss equation** \\n> The equation on line 240 contains an error: exp is missed in the loss calculation.\\n\\n$\\rightarrow$ We thank you for pointing it out; we will fix it in the revision. \\n\\n---\\n\\n> #### **[Questions 3] Clear definition of section and passage** \\n> Do \\u201csection\\u201d and \\u201cpassage\\u201d in this paper mean the same thing? If yes, a sentence could be added stating that the two terms refer to the same thing.\\n\\n$\\rightarrow$ Yes, \\u201csection\\u201d and \\u201cpassage\\u201d are used interchangeably to refer to the same concept. We will clarify this by adding a footnote (mentioning their equivalence) in our revision. \\n\\n---\\n\\n> #### **[Questions 4] Document retrieval vs. section retrieval** \\n> The terms \\u201cdocument retrieval\\u201d and \\u201csection retrieval\\u201d are confusing. They actually mean the two stages in IDentIfy. But they read like two levels of retrieval granularity, as the experiment presents on line 347.\\n\\n$\\rightarrow$ We apologize for the confusion. The section retrieval for the experiment discussed in Line 347 is not the section reranking within the proposed two-stage (retrieval and reranking) pipeline of IDentIfy; rather, it denotes retrieving the sections directly without the document retrieval step. This is the baseline, and, in Table 2 (with Line 347), we aim to show that our approach (which first retrieves top-K documents and then identifies relevant sections within them) is superior to the approach of directly retrieving the sections (called section retrieval). 
We will improve the clarity of it in the next revision. \\n\\n---\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"> #### **[Weakness 4] Modality gap.**\\n> The paper does not sufficiently address how the modality gap is resolved, which is critical for understanding the effectiveness of the proposed method.\\n\\n$\\\\rightarrow$ This may be a critical misunderstanding of our work as the concept of modality gap is not relevant to our work. Specifically, its concept is commonly used to measure the distance between the representations of two separate modalities in the multimodal representation space [1, 2, 3], whereas, in IR, the multiple modalities within queries and documents are not separately handled. For example, in IR, the query can be either text or a combination of text and image, and the retrieval target (document or section) can be either text or a combination of different modalities, which are not distinctly considered. Therefore, the analysis on the modality gap is clearly unnecessary and inappropriate.\\n\\n[1] Liang et al., Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning, NeurIPS 2022\\n\\n[2] Udandarao et al., Understanding and fixing the modality gap in vision-language models, PhD thesis, University of Cambridge\\n\\n[3] Shi et al., Understanding the Modality Gap in CLIP, ICLR 2023\\n\\n---\\n\\n> #### **[Question 1] Rationale for sectioning.** \\n> Could you clarify the rationale for segmenting documents into sections? What benefits do you envision from this approach that could not be achieved through a holistic document representation?\\n\\n$\\\\rightarrow$ We thank you for your question. We provide the rationale and benefits for segmenting documents into sections in our response to Weaknesses 1-2 and 3-1. 
\\n\\n---\\n\\n> #### **[Question 2] Alternative approaches.** \\n> Have you considered preprocessing with techniques like CNNs before embedding to retain document-level context without segmenting? How might this impact your findings regarding limitations?\\n\\n$\\rightarrow$ No, we have not considered CNNs, as they are designed for image processing and are not suitable neural network architectures for processing documents. If there are specific techniques or adaptations of CNNs that could effectively handle sequences of tokens comprising text, images, and tables, we would appreciate any suggestions/insights.\\n\\n---\\n\\n> #### **[Question 3] Effectiveness of representations.** \\n> Can you provide empirical evidence or theoretical justification that supports the efficacy of using representations like \\u2018End of Query\\u2019 and \\u2018End of Section\\u2019 compared to other methods?\\n\\n$\\rightarrow$ We address this question in our response to Weakness 3.\\n\\n---\\n\\n> #### **[Question 4] Baseline choices.** \\n> What criteria did you use to select the baseline models for evaluation? How do these baselines adequately reflect the current state of research in multimodal IR?\\n\\n$\\rightarrow$ Thank you for your question. The baselines considered in our evaluation not only reflect the current state-of-the-art in document (or section) representation for IR but are also carefully selected to ensure a comprehensive validation of the effectiveness of our proposed approach. Specifically, when representing documents for IR, not only do recent multimodal IR works utilize only a single image, but most conventional IR approaches also consider only the textual content within documents; we include baselines for both in Table 1. 
From this, we then demonstrate the effectiveness of our approach in representing documents in their interleaved formats with different modalities, for various IR tasks.\\n\\n---\\n\\n> #### **[Question 5] Modality gap.** \\n> How does your approach specifically address the modality gap? Can you elaborate on any mechanisms or metrics used to assess this aspect?\\n\\n$\\rightarrow$ We answer this question in our response to Weakness 4. \\n\\n---\\n\\n> #### **[Question 6] Generalizability of Results** \\n> Since LLaVA-NeXT is highlighted as a strong VLM, how do you anticipate the performance might vary with other VLMs? Have you conducted preliminary analyses to explore this?\\n\\n$\\rightarrow$ We would like to clarify that the reason we use the same VLM (LLaVA-NeXT) across different approaches is to ensure a fair and consistent comparison. Also, we believe the results with this VLM are sufficient to demonstrate the effectiveness of our approach (incorporating interleaved multimodal information and contextual integration for document representations in IR), and, in achieving this goal, comparing different VLMs and their performances is not the focus of our work. However, we anticipate that using much larger VLMs will enhance the overall performance thanks to their increased capacity.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"> #### **[Question 1] Unclear settings.**\\n> Clarifying the experimental settings in Table 2: If I understand correctly, the comparison of the 2nd and 3rd rows is to demonstrate that the document retriever (aggregating section embeddings from the section retriever) is better than the section retriever. However, I cannot find the detailed settings for the 2nd row (i.e., how many documents are passed to rerankers? 
Since the retrieved unit is the section, there may be multiple top-K sections coming from the same document.). For a comparison, my imagination is that the top 25 distinct documents should be first identified from the top-K retrieved sections (where K > 25) before reranking?\\n\\n$\\rightarrow$ We apologize for the confusion. In Table 2, as described in Lines 348-349 and Lines 356-358, the total number of sections considered for reranking is 200 on average for both the \\u2018Document\\u2019 retriever and the \\u2018Passage\\u2019 retriever, where, for the \\u2018Document\\u2019 retriever, we retrieve 25 documents and each document has 8 sections on average (therefore, the total section number is 200 on average). In other words, we first collect 200 sections and then perform reranking, showing that representing the document with aggregated section representations and then identifying the relevant section for the given document is superior to directly performing the section retrieval. We will clarify the description for Table 2 in the next revision.\\n\\n---\\n\\n> #### **[Question 2] Detailed explanation on results**\\n> Why are the numbers of the last row in Table 1 and Table 2 different? I assume that they are from the best approach with document retrieval with reranker?\\n\\n$\\rightarrow$ This is because Table 1 and Table 2 show results of different retrieval targets. Specifically, Table 1 reports the document retrieval performance (whose goal is to find the top-K relevant documents for a given query); meanwhile, Table 2 reports the section reranking performance (whose goal is to further pinpoint the query-relevant section within the retrieved top-K documents from document retrieval). We will clarify this in the revision. \\n\\n---\\n\\n> #### **[Question 3] Reranking for document retrieval**\\n> For document retrieval, how do you conduct reranking? Is the reranking pipeline still the same as section retrieval? 
I.e., top-25 documents are provided to the reranker, which reranks all the sections in the top-25 documents and use the maximum score of the section in a document as the score to rerank the document?\\n\\n$\\\\rightarrow$ Yes, the reranking is the same as the section retrieval. As we explained in our response to Question 2, document retrieval finds the top-K query-relevant documents, while section retrieval scores how relevant the sections from the top-K documents are to the given query. We will make this clearer in the next revision. \\n\\n---\\n\\n> #### **[Question 4] Generalization of reranker**\\n> Have you tried to train a retriever and reranker on all the datasets and check if the ranking models can generalize well across different datasets?\\n\\n$\\\\rightarrow$ In Table 5 (b), we demonstrate the generalizability of the reranker by training it on the Encyclopedic-VQA dataset and testing it on the ViQuAE dataset. Specifically, compared to the performance of the reranker fine-tuned explicitly on the ViQuAE dataset (50.9 in R@10), the reranker without training on it achieves competitive performance (49.0 in R@10), which confirms that the reranker can generalize across different datasets.\"}
Instead, our novel contributions are the proposal of a new task setup of considering multimodal contents interleaved within documents in their most natural format, and the consideration of the contextual information spread in documents (by aggregating section-level representations), with the goal of holistically representing documents for IR. Therefore, the use of VLMs is to operationalize this new idea, and we do not claim novelty on it. Also, the methods that are on top of previous VLMs that you mentioned (such as CLIP or BLIP) are not suitable for our target task of IR, as they can only process a single image or a small chunk of text. \\n\\n---\\n\\n> #### **[Weakness 1-2, 3-1] Justification of segmenting.** \\n> The segmentation of documents into sections does not introduce a new technique; rather, it mirrors existing practices without clear justification for its necessity; The rationale for dividing documents into sections is not convincingly justified, leaving the impression that it may compromise document representation integrity.\\n\\n$\\\\rightarrow$ We would like to clarify that the segmentation of documents into sections is a practical design choice that aligns with conventional methods [1, 2], as it enables the effective handling of long documents. For example, in one of very practical scenarios such as retrieval-augmented generation (RAG), providing segmented sections rather than an entire long article often leads to more accurate query-specific answers, as the model can focus on the most relevant parts of the document without being overwhelmed by extraneous content. In addition to this, segmenting documents allows users or models to access concise, focused information, thereby enhancing the efficiency of processing them.\\n\\n[1] Mensink et al., Encyclopedic-VQA: Visual questions about detailed properties of fine-grained categories, ICCV 2023\\n\\n[2] Li et al., Retrieval Augmented Generation or Long-Context LLMs? 
A Comprehensive Study and Hybrid Approach, EMNLP 2024\\n\\n---\\n\\n> #### **[Weakness 2-1] Baselines.** \\n> The evaluation framework appears insufficiently rigorous, with limited baseline comparisons provided. The selection criteria for these baselines are not clearly articulated, raising concerns about the validity of the results.\\n\\n$\\\\rightarrow$ We clearly outline the selection criteria for our baselines in Lines 306 - 311. Also, our baselines are not limited: they include a range of strategies to represent documents and are more than sufficient to validate the advantage of our approach (i.e., demonstrating the importance of considering both contextual and multimodal-interleaved information within documents). Please let us know if you have specific suggestions for additional baselines.\\n\\n---\\n\\n> #### **[Weakness 2-2] Evaluation.** \\n> There is a notable absence of non-VLM-based evaluations to establish the effectiveness of the proposed method relative to traditional approaches.\\n\\n$\\\\rightarrow$ This may be a critical misunderstanding of our work. Our experiment setups and results already include evaluations that do not consider modalities other than text, and, through this, we already demonstrate the effectiveness of our approach over models with text-only modality. Also, comparing our proposed approach with methods based on other non-VLMs is trivially relevant at best for the task of IR with multimodal documents and queries. 
This is because not only do non-VLM models inherently lack the capability to process multiple modalities, but differences in base models and their underlying capabilities also make direct comparisons between different approaches unfair and not meaningful.\\n\\n---\\n\\n> #### **[Weakness 3] Representation method.** \\n> The proposed use of representations such as \\u2018End of Query\\u2019 and \\u2018End of Section\\u2019 lacks comparative evidence demonstrating their superiority over alternative representation methods.\\n\\n$\\\\rightarrow$ The use of tokens such as \\u2018End of Query\\u2019 and \\u2018End of Section\\u2019 is a very well-established and standard practice for finalizing embeddings of variable-length queries and passages; therefore, we strongly believe that providing comparative evidence for such a widely accepted approach is unnecessary and outside the scope of this work (as our primary focus is not on developing new tokenization or representation markers for embeddings).\\n\\n---\"}", "{\"title\": \"Reply to authors\", \"comment\": \"Thank you for your reply. I would like to know why you claim that \\\"*the proposal of a new task setup of considering multimodal contents interleaved within documents in their most natural forma*\\\" and \\\"*to represent the document holistically for IR, taking into account contextual information spread throughout the document*\\\" and why this has not been possible before and why you want to make the VLM part of this new task set. Can you explain the novelty and improvements of using these models and methods?\"}", "{\"summary\": \"The paper presents a unified approach to encode document representations for information retrieval, consisting of (1) encoding multi-modal interleaved information in a document; and (2) splitting a document into multiple passages, separately encoding the split passages, and then average-pooling over the passage embeddings to form the document representation. 
The authors conduct studies on how to fine-tune a VLM retriever and reranker to handle information retrieval tasks with interleaved documents.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed approach is straightforward. Leveraging the pre-trained VLMs for information retrieval is an important topic.\\n2. The ablation studies on training a reranker are comprehensive and clearly illustrate the details of how to train a multimodal reranker.\", \"weaknesses\": \"1. Although the main claims of the paper (interleaved document embeddings and aggregate representations from sections) are intuitive, the experiments are not fully convincing. (1) Is interleaved document encoding better? No text-only retrievers as baselines are provided. It is reasonable to compare document encoding with and without interleaved images; however, it is also sensible to provide a text-only retriever (such as E5, DRAGON or MistralE5) fine-tuned on the same dataset or zero-shot as the text-only retrieval baseline, since using a VLM fine-tuned on text-only training data may make the VLM overfit on the small training data. (2) Is aggregating representation from sections better? The experimental results in Table 2 may provide the answer but some settings are not clear to me (See 1. in Questions).\\n2. Some experimental settings are not clear (See Questions) and I\\u2019m somehow a bit confused by the tables in the main experiment. For example, in the same dataset, Encyclopedic-VQA and Enc-VQA, there are document and section retrieval; however, there is no clear explanation of the settings on document and section retrieval (See 3. in Questions).\", \"questions\": \"1. Clarifying the experimental settings in Table 2: If I understand correctly, the comparison of 2nd and 3rd rows is to demonstrate the effectiveness of document retriever (aggregate section embeddings from section retriever) is better than section retriever. 
However, I cannot find the detailed settings for the 2nd row (i.e., how many documents are passed to rerankers? Since the retrieved unit is section; then, there maybe multiple top-K sections coming from the same document.). For a comparison, my imagination is that the top 25 distinct documents should be first identified from the top-K retrieved sections (where K > 25) before reranking?\\n2. Why the numbers of the last row from Table1 and Table 2 are different? I assume that they are from the best approach with document retrieval with reranker?\\n3. For document retrieval, how you conduct reranking? Is the reranking pipeline is still the same as section retrieval? I.e., top-25 documents are provided to the reranker, which reranks all the sections in the top-25 documents and use the maximum score of the section in a document as the score to rerank the document?\\n4. Have you tried to train a retriever and reranker on all the datasets and check if the ranking models can generalize well across different datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer boqu,\\n\\nThank you for your review and constructive comments. We have made every effort to faithfully address your concerns.\\n\\n---\\n\\n> #### **[Weakness 1-1] Novelty.**\\n> The novelty of the proposed method is limited.\\n\\n$\\\\rightarrow$ We would like to clarify our novel contributions, which are 1) a new task for representing documents interleaved with multiple modalities, 2) a general framework that leverages VLMs to operationalize this, and 3) a reranking method to pinpoint the relevant piece within the document. Specifically, no previous works consider representing documents interleaved with multimodal content in their natural format, and, to address this gap, we define a new task. 
Also, to tackle this novel task, we propose a new approach that leverages VLMs to encode interleaved documents into the unified representation.\\n\\n---\\n\\n> #### **[Weakness 1-2] Presentation.**\\n> The experiment results and discussion sections are not well-presented to demonstrate the effectiveness and benefits of the proposed methods.\\n\\n$\\\\rightarrow$ We thank you for raising this concern and apologize for the inconvenience. We believe that the extensive set of experiments that we design is sufficient to validate the effectiveness of our approach, demonstrating that leveraging interleaved and contextual information within the document is effective over previous IR approaches on both document- and section-level IR tasks with uni- and multi-modal queries. We will improve the presentation for our experiment settings and results in the next revision. \\n\\n\\n---\\n\\n> #### **[Question 1] Order of modalities.** \\n> The paper proposed to first represent each document as a sequence of sections as si=[VSi,LSi,TSi], where VSi, LSi, and TSi are visual tokens, text tokens, and table tokens, respectively. Is there a specific reason why concatenate features from different modalities in this way? Have you tried other feature fusion methods?\\n\\n$\\\\rightarrow$ Thank you for your question. The conventional VLMs (including the one that we used for our experiments) are pre-trained and fine-tuned with the sequence of images and texts [1, 2, 3]; therefore, we also follow this conventional order to encode documents. We will include the discussion on it in the next revision. 
\\n\\n\\n\\n[1] Li et al., LLaVA-Next-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models, arXiv\\n\\n[2] Zhang et al., InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition, arXiv\\n\\n[3] Chen et al., ShareGPT4V: Improving Large Multi-modal Models with Better Captions, ECCV 2024\\n\\n---\\n\\n> #### **[Question 2-1] Order of query and section.** \\n> Is there a specific reason why concatenate query q and s_i? Why not shuffle their positions?\\n\\n$\\\\rightarrow$ The concatenation of query q and section s_i (in the order of q followed by s_i) is a standard way to perform reranking, as shown in multiple previous works [1, 2, 3]. \\n\\n\\n\\n[1] Ma et al., Fine-Tuning LLaMA for Multi-Stage Text Retrieval, SIGIR \\u201824\\n\\n[2] Baek et al., Direct Fact Retrieval from Knowledge Graphs without Entity Linking, ACL 2023 \\n\\n[3] Gao et al., Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline, ECIR 2021\\n\\n---\\n\\n> #### **[Question 2-2] Fusion methods for reranker.** \\n> Why not choose other feature fusion methods such as inner product, outer product, addition, subtraction, etc.?\\n\\n$\\\\rightarrow$ In Table 7 (a) and (b), we do consider different feature fusion methods for the reranker. This includes \\u2018Contrastive\\u2019, which calculates the cosine similarity between a query embedding and section embeddings, and \\u2018Document+BCE\\u2019, which concatenates sections alongside the query and then performs reranking simultaneously. In contrast, \\u2018Section+BCE (Ours)\\u2019 follows the conventional approach to fuse the information of the query with the section by concatenating their representations, proving that this approach is the best. \\n\\n---\\n\\n> #### **[Question 3] Loss choice for a retriever.** \\n> I was wondering is there a specific reason why choose contrastive loss for training in section 3.2? 
Have you compared it with conventional cross-entropy loss?\\n\\n$\\\\rightarrow$ We would like to clarify that contrastive learning loss is the standard choice for training retrievers [1], as the primary objective in information retrieval is to distinguish relevant documents from non-relevant ones for a given query. In other words, this IR objective aligns naturally with contrastive learning, which explicitly models the relative distances between documents in relation to the query, unlike cross-entropy loss that treats each document independently and does not inherently optimize for the ranking of documents relative to each other in terms of their relevance to the query.\\n\\n\\n\\n[1] Karpukhin et al., Dense Passage Retrieval for Open-Domain Question Answering, EMNLP 2020\\n\\n---\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"> #### **[Questions 5] Organization of inputs to section encoder**\\n> How are texts, images, and tables extracted from a section organized into the input to the section encoder? Is it a fixed order of texts, then images, finally tables (as line 210 indicates)?\\n\\n$\\\\rightarrow$ Yes, your understanding is correct. 
As described in Line 210 and illustrated in Figure 2, we organize the different modalities in their conventional order following existing work [1, 2, 3].\\n\\n\\n\\n[1] Li et al., LLaVA-Next-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models, arXiv\\n\\n[2] Zhang et al., InternLM-XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition, arXiv\\n\\n[3] Chen et al., ShareGPT4V: Improving Large Multi-modal Models with Better Captions, ECCV 2024\\n\\n---\\n\\n> #### **[Questions 6] Detailed explanation on method** \\n> What do the authors mean by \\u201ccombine four images into one\\u201d, on line 301?\\n\\n$\\\\rightarrow$ As explained in Lines 300 - 302, the \\u201ccombine four images into one\\u201d phrase means we scale each image down to half its original width and height and then combine four scaled-down images into a single composite image (arranged in a grid pattern). This is to achieve our goal of considering several images within a document, since processing each image individually at full resolution significantly increases the token count and imposes an excessive burden on GPU resources, which is not feasible with our available resources.\\n\\n\\n---\\n\\n> #### **[Questions 7] Detailed explanation on method** \\n> How do the authors \\u201cconsider four sections per document in representing documents\\u201d (line 302)? What four, the first four?\\n\\n$\\\\rightarrow$ We randomly select four sections per document while ensuring the inclusion of the positive section (if it is available in the dataset). This selection of the sections is to balance efficiency and performance during the training of both the retriever and reranker. 
Specifically, as shown in Figure 3, additional experiments with varying numbers of sections demonstrate that including more sections consistently improves retrieval performance; however, the efficiency trade-off becomes more pronounced as the number of sections increases, highlighting the necessity of this balance in our design. \\n\\n---\\n\\n> #### **[Questions 8] Detailed explanation on method** \\n> In Table 2 and 1, the passage (section?) retriever performs significantly worse than document retriever (20.5 R@1 for document retriever, 3.9 R@1 for passage retriever, only 19% of the performance of document retriever). Does that mean that the global information plays a so important role, that ignoring it can have a huge impact on retrieval, while a simple embedding averaging can mitigate it effectively? If yes, why can the re-ranker, which doesn\\u2019t integrate any global information, offer so much gain (3.9\\u219228.6, closer to 35.1)?\\n\\n\\n$\\\\rightarrow$ We apologize for the confusion. We would like to clarify the distinctions between the results in Tables 1 and 2, as well as the roles of different retrieval components.\\n\\nSpecifically, Table 1 presents the performance of document retrieval without section-level selection, which aims to showcase the benefits of incorporating interleaved multimodal information. For instance, the comparison between 'Text-document' and '+ Interleaved' demonstrates that integrating multimodal content enhances the retrieval accuracy by providing a more comprehensive representation of the document. \\n\\nIn contrast, Table 2 shows the section retrieval performance. For example, the \\u2018Passage\\u2019 and \\u2018Document\\u2019 settings first retrieve sections and documents, respectively, and then perform reranking over them (to identify the query-relevant section). Also, the 'Passage*' setting (unlike the others) does not employ the reranker over the retrieved sections (i.e., directly selecting sections without reranking). 
In this regard, we can interpret the results in Table 2 as follows: the improvement from 3.9 R@1 (Passage*) to 28.6 R@1 (Passage with reranking) shows the effectiveness of the reranker in refining section-level relevance; the proposed \\u2018Document\\u2019 approach achieves the best performance thanks to the extra consideration of the global context information during document retrieval before performing section selection.\\n\\nWe will clarify them in the revision.\"}", "{\"title\": \"Dear Reviewer xZc9\", \"comment\": \"We sincerely thank reviewer xZc9 for raising these concerns for a better understanding of the novelty of our work.\\n\\n---\\n\\n> #### **[Concern 1] Novelty Claim** \\n> I would like to know why you claim that \\\"the proposal of a new task setup of considering multimodal contents interleaved within documents in their most natural forma\\\" and \\\"to represent the document holistically for IR, taking into account contextual information spread throughout the document\\\"\\n\\n$\\\\rightarrow$ Our claim is supported by the observation that recent information retrieval (IR) approaches [1, 2, 3] primarily rely on a limited portion of text or a single image to represent a document. However, human-generated documents, such as Wikipedia, naturally contain diverse modalities, including text, images, and tables, providing richer information to represent documents, and no previous works consider these diverse modalities to represent documents. 
Moreover, the documents are typically long, with each section contributing unique contextual information that enhances the overall document representation.\\n\\nWe would like to emphasize that our work is the first to address this gap by demonstrating the effectiveness of leveraging the interleaved and contextual information within documents over previous IR approaches in diverse scenarios, including document- and section-level IR tasks with uni- and multi-modal queries, suggesting a new task in which the active utilization of interleaved multimodal and contextual information is essential for enhanced IR systems.\\n\\n[1] Mensink et al., Encyclopedic VQA: Visual questions about detailed properties of fine-grained categories, ICCV 2023\\n\\n[2] Caffagni et al., Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs, CVPRW 2024\\n\\n[3] Ma et al., Fine-Tuning LLaMA for Multi-Stage Text Retrieval, arXiv\\n\\n---\\n\\n> #### **[Concern 2] Approach** \\n> and why this has not been possible before and why you want to make the VLM part of this new task set. Can you explain the novelty and improvements of using these models and methods?\\n\\n$\\\\rightarrow$ The integration of VLMs into our proposed task setup is enabled by recent developments in VLMs that can process interleaved multimodal content [1, 2]. Hence, we leverage this recent advancement to operationalize the idea of incorporating diverse modalities into a unified representation for improved IR systems. 
\\n\\nWe would like to clarify our novel contributions, which are 1) a new task for representing documents interleaved with multiple modalities, 2) a general framework that leverages VLMs to operationalize this, and 3) a reranking method to pinpoint the relevant piece within the document.\\n\\nIn our paper, we propose that a simple approach to leverage VLMs to encode interleaved documents into a unified representation can yield superior representation compared to text-only document representation. As shown in Table 1, methods that rely solely on text, such as \\u2018Entity\\u2019 and \\u2018Text-document\\u2019, obtain R@1 scores of 3.1 and 12.5, respectively on the document retrieval task. In contrast, our approach, denoted as \\u2018+Interleaved\\u2019, achieves a significantly higher R@1 score of 20.5. This substantial improvement supports our claim that leveraging both images and tables as well as texts in documents yields effective, holistic document representations, enhancing performance in IR systems.\\n\\n[1] Li et al, LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models, arXiv\\n\\n[2] Zhang et al., InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output, arXiv\"}", "{\"summary\": \"This paper presents Interleaved Document Information Retrieval System (IDentIfy), a document retrieval framework that uses vision-language models (VLMs) to encode the multi-modal document interleaved with textual, visual, and tabular data to perform document retrieval followed by section retrieval. In the document retrieval stage, following the bi-encoder paradigm, the query and document section is separately encoded, and the section embeddings from a document is averaged to form the document embedding. In the section retrieval stage, the authors develop a re-ranker to re-rank sections previously retrieved by the document retriever. 
Experimental results show that IDentIfy can outperform Entity and Summary baselines as well as textual models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"With the advantages of VLMs, IDentIfy is able to perform effective retrieval on documents interleaved with multiple modalities.\", \"IDentIfy effectively integrates global information into segmented sections while maintaining efficient training and inference.\"], \"weaknesses\": [\"The experiments are conducted on clean, source-available corpora whose documents can be easily segmented into sections according to the subtitles, and then extracted into multi-modal elements. However, real-world data are often presented in compiled files like PDFs. In such scenarios, document division and multi-modal data extraction may not be possible. This poses a challenge for IDentIfy in real-world use.\", \"The presentation of the results in Section 4.3 lacks a main thread, and is difficult to follow. I suggest the authors add an introductory paragraph at the beginning of Section 4.3 and organize the experiments in a clearer structure.\", \"There are some details in this paper that are not very clear (see Questions).\"], \"questions\": [\"As shown in Table 8, the retrieval target of Encyclopedic-VQA, InfoSeek, ViQuAE is only text. Why does IDentIfy perform better than the Text-document baseline on these datasets?\", \"The equation on line 240 contains an error: exp is missing in the loss calculation.\", \"Do \\u201csection\\u201d and \\u201cpassage\\u201d in this paper mean the same thing? If yes, a sentence could be added stating that the two terms refer to the same thing.\", \"The terms \\u201cdocument retrieval\\u201d and \\u201csection retrieval\\u201d are confusing. They actually mean the two stages in IDentIfy. 
But they read like two levels of retrieval granularity, as the experiment presents on line 347.\", \"How are texts, images, and tables extracted from a section organized into the input to the section encoder? Is it a fixed order of texts, then images, finally tables (as line 210 indicates)?\", \"What do the authors mean by \\u201ccombine four images into one\\u201d, on line 301?\", \"How do the authors \\u201cconsider four sections per document in representing documents\\u201d (line 302)? What four, the first four?\", \"In Table 2 and 1, the passage (section?) retriever performs significantly worse than document retriever (20.5 R@1 for document retriever, 3.9 R@1 for passage retriever, only 19% of the performance of document retriever). Does that mean that the global information plays a so important role, that ignoring it can have a huge impact on retrieval, while a simple embedding averaging can mitigate it effectively? If yes, why can the re-ranker, which doesn\\u2019t integrate any global information, offer so much gain (3.9\\u219228.6, closer to 35.1)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your reply. I agree that comparing the proposed model with other text retrieval models is not a fair comparison. But it is still valuable to have an overall understanding of whether, given the current text-image interleaved training data, we can outperform existing state-of-the-art text-only retrieval models. Even if state-of-the-art text retrievers are doing better, I don\\u2019t think that the comparison would negatively impact the value of the work. And thanks for your clarification; I think the experiment sections should be revised and organized to make the contribution clearer. 
For example, I think that merging the overall effectiveness of all the datasets as the same big table and comparing with some variants of the models and other state-of-the-art text or multimodal retrieval models would be more clear and easy to read. Then, the ablation experiments can be shown afterwards to further discuss the impact of each component.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"> #### **[Question 4] Dual encoder structure.**\\n> In the experiment section, I was wondering have you conducted the experiments with conventional methods such as dual encoder, where the feature embeddings are extracted from LLaVA?\\n\\n$\\\\rightarrow$ We apologize for the confusion. The structure of our retriever follows the dual encoder you mentioned. We will make this clear in the final revision of our paper.\\n\\n---\\n\\n> #### **[Question 5] Impact of each modality.** \\n> In the experimental result and discussion sections, it is worth exploring how much benefits introduced by each modality.\\n\\n$\\\\rightarrow$ We would like to note that, in Table 1, we already show the performance gains introduced by each modality. Specifically, in contrast to models only with textual modality, such as \\u2018Entity\\u2019 and \\u2018Text-document\\u2019 methods (that obtain 3.1 and 12.5 in R@1 score), the method with the additional single image (\\u2018+Single-image\\u2019) obtains 16.4 in R@1 score, which clearly shows the benefit of including a visual modality in document representations. Also, the \\u2018+Interleaved\\u2019 method (that uses all different modalities including tables in the document) achieves the highest performance of 20.5 R@1 score, supporting our claim that leveraging both images and tables as well as texts in documents yields effective document representations, leading to the performance improvement in information retrieval. 
\\n\\n\\n\\n---\\n\\n> #### **[Question 6] Latency.** \\n> It is worth comparing the additional performance gain introduced by each modality v.s. their extra latency.\\n\\n$\\\\rightarrow$ Thank you for your insightful question. We would like to clarify that, in inference time (where we compare the similarity between the given query and documents), no additional latencies are introduced by considering multiple modalities, since each document is encoded into a fixed-size representation regardless of its modality composition.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear Reviewer ruy7,\\n\\nThank you for your time and efforts in reviewing our paper, as well as your helpful and constructive comments. We have made every effort to address your concerns.\\n\\n---\\n\\n> #### **[Weakness 1-1] Text-only scenarios.** \\n> Although the main claims of the paper (interleaved document embeddings and aggregate representations from sections) are intuitive, the experiments are not fully convinced. (1) Is interleaved document encoding better? No text-only retrievers as baselines are provided. It is reasonable to compare document encoding with and without interleaved images; however, it is also sensible to provide the text-only retriever (such as E5, DRAGON or MistralE5) fine-tuned on the same dataset or zero-shot as the text-only retrieval baseline since using VLM fine-tuned on text-only training data may make the VLM overfitting on the small training data.\\n\\n$\\\\rightarrow$ Thank you for your thoughtful question. We first would like to clarify that we do have results on text-only retrievers in Table 1, showing that interleaved document encoding is better over them. In addition to this, making comparisons of different retrieval approaches with different base models is trivially relevant at best. 
This is because different models have different capacities in understanding the documents; therefore, it is challenging to draw an informed conclusion on whether the performance improvement comes from considering extra modalities or from using high-capacity models. Lastly, contrary to your concern that training VLMs with text-only training data might result in overfitting, Figure 4 shows that the VLM does not overfit the training data even when its size is small. \\n\\n---\\n\\n> #### **[Weakness 1-2] Superiority of aggregating sections.** \\n> (2) Is aggregating representation from sections better? The experimental results in Table 2 may provide the answer but some settings are not clear to me (See 1. in Questions).\\n\\n$\\\\rightarrow$ Yes, aggregating representations from sections is indeed better for document representation, which is demonstrated in Table 2 and Figure 3. Specifically, in Table 2, the \\u2018Documents\\u2019 method, which adopts this aggregation strategy, results in superior performance over other models that do not aggregate representations from sections. In addition to this, Figure 3 reinforces this finding by showing a clear trend where increasing the number of sections incorporated into the aggregation improves retrieval performance. In other words, as more sections are considered and their representations are aggregated, the model can more effectively capture the holistic context and interactions across the entire document. \\n\\n---\\n\\n> #### **[Weakness 2] Unclear explanations of experiments.** \\n> Some experimental settings are not clear (See Questions) and I\\u2019m somehow a bit confused by the tables in the main experiment. For example, in the same dataset, Encyclopedic-VQA and Enc-VQA, there are document and section retrieval; however, there is no clear explanation of the settings on document and section retrieval (See 3. in Questions).\\n\\n$\\\\rightarrow$ We thank you for pointing them out and apologize for the confusion. 
We will improve the explanations of the experimental settings and results in the next revision. Also, please refer to the more detailed answers to your questions in our subsequent responses. \\n\\n---\"}", "{\"summary\": \"This paper introduces a novel IR framework, which enables the integration and representation of diverse multimodal content, including text, images, and tables, into a unified document representation.", "soundness": "2", "presentation": "2", "contribution": "2", "strengths": "The motivation of the work is clear and the problem is worth exploring. The proposed methods are technically sound.", "weaknesses": "The novelty of the proposed method is limited. The experimental results and discussion sections are not well presented and do not fully demonstrate the effectiveness and benefits of the proposed methods.", "questions": "1. The paper proposes to first represent each document as a sequence of sections as $s_i = [V_{S_i}, L_{S_i}, T_{S_i}]$, where $V_{S_i}$, $L_{S_i}$, and $T_{S_i}$ are visual tokens, text tokens, and table tokens, respectively. Is there a specific reason for concatenating features from different modalities in this way? Have you tried other feature fusion methods?\\n2. The above-mentioned question also applies to section 3.3: is there a specific reason for concatenating query q and s_i? Why not shuffle their positions? Why not choose other feature fusion methods such as inner product, outer product, addition, subtraction, etc.?\\n3. Is there a specific reason for choosing the contrastive loss for training in section 3.2? Have you compared it with the conventional cross-entropy loss?\\n4. In the experiment section, have you conducted experiments with conventional methods such as a dual encoder, where the feature embeddings are extracted from LLaVA?\\n5. In the experimental result and discussion sections, it is worth exploring how much benefit is introduced by each modality. \\n6. 
Besides, it is worth comparing the additional performance gain introduced by each modality v.s. their extra latency.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses limitations in document representation for information retrieval (IR) by recognizing that documents can contain multiple modalities\\u2014such as text, images, and tables\\u2014and that segmenting long documents into discrete passages often hampers the ability to capture overall context and inter-paragraph interactions. The authors propose a novel method that interleaves different modalities in document embeddings, leveraging the capabilities of vision-language models (VLMs) to enhance the representation of multimodal documents. The proposed method aims to improve the effectiveness of document retrieval by better capturing the relationships among various modalities within a single document.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. **Originality**:\\n - The paper identifies significant limitations in current document representation methods and proposes an innovative approach to integrate multiple modalities, a relatively underexplored area in information retrieval.\\n\\n2. **Quality**:\\n - The methodology demonstrates a thoughtful integration of VLMs for enhancing document embeddings, showing promise in leveraging advanced models to address multimodal challenges.\\n\\n3. **Clarity**:\\n - The paper is well-structured and articulately presents the limitations of existing approaches, the proposed solution, and the expected impact on information retrieval. This clarity makes it accessible to readers across various backgrounds.\\n\\n4. 
**Significance**:\\n - By focusing on the multimodal nature of documents, the research has potential implications for various applications in IR, making it a timely contribution to the field as the demand for more sophisticated document processing techniques grows.\", \"weaknesses\": [\"1. **Lack of Novel Contribution**:\", \"While the application of VLMs to IR is interesting, the paper lacks substantial novelty beyond their application. Previous works, such as those exploring VLMs in other contexts (e.g., CLIP, BLIP), have already laid the groundwork for similar methodologies.\", \"The segmentation of documents into sections does not introduce a new technique; rather, it mirrors existing practices without clear justification for its necessity.\", \"2. **Evaluation and Baselines**:\", \"The evaluation framework appears insufficiently rigorous, with limited baseline comparisons provided. The selection criteria for these baselines are not clearly articulated, raising concerns about the validity of the results.\", \"There is a notable absence of non-VLM-based evaluations to establish the effectiveness of the proposed method relative to traditional approaches.\", \"3. **Methodological Concerns**:\", \"The rationale for dividing documents into sections is not convincingly justified, leaving the impression that it may compromise document representation integrity.\", \"The proposed use of representations such as \\u2018End of Query\\u2019 and \\u2018End of Section\\u2019 lacks comparative evidence demonstrating their superiority over alternative representation methods.\", \"4. **Inadequate Discussion of Modality Gap**:\", \"The paper does not sufficiently address how the modality gap is resolved, which is critical for understanding the effectiveness of the proposed method.\"], \"questions\": \"1. **Rationale for Sectioning**:\\n - Could you clarify the rationale for segmenting documents into sections? 
What benefits do you envision from this approach that could not be achieved through a holistic document representation?\\n\\n2. **Alternative Approaches**:\\n - Have you considered preprocessing with techniques like CNNs before embedding to retain document-level context without segmenting? How might this impact your findings regarding limitations?\\n\\n3. **Effectiveness of Representations**:\\n - Can you provide empirical evidence or theoretical justification that supports the efficacy of using representations like \\u2018End of Query\\u2019 and \\u2018End of Section\\u2019 compared to other methods?\\n\\n4. **Baseline Choices**:\\n - What criteria did you use to select the baseline models for evaluation? How do these baselines adequately reflect the current state of research in multimodal IR?\\n\\n5. **Modality Gap Resolution**:\\n - How does your approach specifically address the modality gap? Can you elaborate on any mechanisms or metrics used to assess this aspect?\\n\\n6. **Generalizability of Results**:\\n - Since LLaVA-NeXT is highlighted as a strong VLM, how do you anticipate the performance might vary with other VLMs? Have you conducted preliminary analyses to explore this?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
DaUsIJe2Az
Continual Learning via Continual Weighted Sparsity and Meta-Plasticity Scheduling
[ "Xuefeng Zhang", "Ke Fan", "Muhan Zhang", "Yuan Zhou", "Jianzhu Ma" ]
Continual Learning (CL) is fundamentally challenged by the stability-plasticity dilemma: the trade-off between acquiring new information and maintaining past knowledge. To address stability, many methods keep a replay buffer containing a small set of samples from prior tasks and employ parameter isolation strategies that allocate separate parameter subspaces to each task, reducing interference between tasks. To obtain more refined, task-specific groups, we adapt a dynamic sparse training technique and introduce a continual weight score function to guide the iterative pruning process over multiple rounds of training. We refer to this method as the continual weighted sparsity scheduler. Furthermore, as more incremental tasks are introduced, the network inevitably becomes saturated, leading to a loss of plasticity, where the model's adaptability decreases due to dormant or saturated neurons. To mitigate this, we draw inspiration from biological meta-plasticity mechanisms and develop a meta-plasticity scheduler that dynamically adjusts these task-specific groups' learning rates based on the sensitivity score function we designed, ensuring a balance between retaining old knowledge and acquiring new skills. Comparisons on popular datasets demonstrate that our approach consistently outperforms existing state-of-the-art methods, confirming its effectiveness in managing the stability-plasticity trade-off.
[ "Continual learning" ]
Reject
https://openreview.net/pdf?id=DaUsIJe2Az
https://openreview.net/forum?id=DaUsIJe2Az
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yMBYLxnvp2", "mBhVwFJ2Ik", "RZ4uyiKI7a", "Lmonrnic6h", "Jp3fxcIUba", "DjEmxBMTQC" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1734810714844, 1730253391383, 1730714672670, 1730539601652, 1737524092209, 1730583181557 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10925/Area_Chair_VLbw" ], [ "ICLR.cc/2025/Conference/Submission10925/Reviewer_syhb" ], [ "ICLR.cc/2025/Conference/Submission10925/Reviewer_NLes" ], [ "ICLR.cc/2025/Conference/Submission10925/Reviewer_RPbt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10925/Reviewer_xLQr" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a continual weighted sparsity scheduler, which consists of two main stages. In the first stage, after training on the current task, multiple iterations of network pruning are performed to create task-specific groups. During this process, neurons and connections critical to the task are identified and selected based on high activation levels and a continual weighted scoring mechanism. These selected components are then grouped by task, enabling more effective retention of task-specific knowledge. In the second stage, once training for each task is complete, a meta-plasticity scheduler adjusts the learning rates for individual neurons, increasing the rates for relatively less active groups.\\n\\nExperimental results show that this algorithm outperforms existing baselines. However, numerous concerns have been raised by the reviewers, such as too many hyperparameters, limited experimental results, lack of justification of the increased computational costs, etc. Unfortunately, no rebuttal has been submitted and there have been no further discussions among the reviewers and the authors. 
Since those issues have not been resolved and there have been more negative reviews, the decision on the paper is Reject.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers have raised the following points regarding the weaknesses of the paper, including too many hyperparameters, limited experimental results, lack of justification of the increased computational costs, etc.\\n\\nNLes mainly raised issues about the sensitivity of the algorithm w.r.t. the hyperparameters and about experimental details, such as model architectures, memory sizes, small datasets, etc. \\n\\nxLQr raised questions about novelty and the lack of an ablation study. \\n\\nsyhb raised issues on the limited related-research discussion and the experimental setting.\"}", "{\"summary\": \"The paper presents an approach to address the stability-plasticity dilemma in continual learning by introducing two main components: a continual weighted sparsity and a meta-plasticity scheduler. The method also uses a replay buffer. This makes it a hybrid continual learning approach that combines parameter isolation, regularization, and replay. The continual weighted sparsity component iteratively prunes the network with gradually increasing sparsity over multiple training rounds to identify task-specific neuron groups. The meta-plasticity scheduler then dynamically adjusts learning rates for different neuron groups based on their sensitivity scores to balance knowledge retention and acquisition. The authors evaluate their method on standard benchmarks (CIFAR-10, CIFAR-100, and Tiny-ImageNet) in both class-incremental and task-incremental learning settings.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-structured, making it easy to follow.\", \"The paper explores a combination of connection rewiring with individually adjusted learning rates at the neuron level. 
While both approaches have been explored separately in prior work, their integration provides an additional data point in addressing the stability-plasticity dilemma in continual learning.\"], \"weaknesses\": \"## Limited Discussion of Related Literature (Why I gave 1 to Presentation)\\nWhile the paper presents interesting ideas, there are several important concerns regarding the positioning and novelty of the work:\\n\\n- The proposed \\\"meta-plasticity\\\" mechanism is similar to existing regularization methods (e.g., elastic weight consolidation, synaptic intelligence, and memory aware synapses). These methods similarly modulate learning rates across different connections. A more thorough discussion differentiating the proposed approach from these established methods would be valuable, particularly given that these regularization approaches are explicitly categorized as \\\"meta-plasticity\\\" in the \\\"Biological underpinnings for lifelong learning machines\\\" paper (page 204) they cite to motivate the idea.\\n\\n- The paper's claimed novelty in iterative pruning appears to overlap significantly with existing work:\\n * NISPA (https://arxiv.org/abs/2206.09117, ICML 2022), which presents similar ideas about iterative connection rewiring while maintaining constant sparsity Also see, Space-Net (https://arxiv.org/abs/2007.07617) and AFAF (https://arxiv.org/abs/2110.05329) - this is not an exhaustive list.\\n\\n- While the current work introduces more sophisticated rewiring strategies and meta-plasticity mechanisms compared to NISPA's random selection approach (or other mentioned works), the fundamental principles appear derivative. 
The paper would benefit from a clearer justification for why these more complex mechanisms are necessary and advantageous.\\n\\n## Experimental Design and Methodology Concerns (Why I gave 1 to Soundness)\\nResults appear to be identical to the values in the TriRE paper, which would be acceptable with proper attribution (but I do not see any attribution). However, a more serious concern is the architectural discrepancy: while TriRE uses ResNet-18, this work claims to use ResNet-50, potentially creating an unfair comparison due to significantly different parameter counts.\\n\\nWhile the paper includes ablation studies comparing different parameter settings for individual components, it lacks a comprehensive analysis justifying the necessity of combining all three approaches (replay, parameter isolation, and meta-plasticity). The ablations focus on tuning parameters within each component rather than demonstrating why the full combination of components is required to achieve the reported performance gains.\\n\\nCritical details about buffer sizes in experiments are missing, making it difficult to properly evaluate the replay component\", \"questions\": [\"Could you provide an analysis of the computational cost for each component of your method (replay, regularization, and parameter isolation) and explain why all these components are necessary for the overall approach?\", \"Could you clarify how neurons are defined in convolutional layers (e.g., whether they represent channels/feature maps) and explain specifically how connection rewiring is implemented in convolutional layers?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a pruning-based continual learning algorithm consisting of two steps. After learning the current task, multiple rounds of network pruning are conducted to form task-specific groups. 
In this process, essential neurons and connections are selected based on high activation and a continual weighted score, and the selected components are grouped by task. This approach allows for more effective preservation of knowledge for each task. In the second step, after completing the learning for each task, a meta-plasticity scheduler is updated, applying an independent learning rate for each neuron by increasing the learning rate for relatively inactive groups. Experimental results demonstrate that the proposed algorithm achieves superior performance compared to other baselines.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed algorithm introduces the idea of using pruning methods to overcome the trade-off between stability and plasticity in continual learning scenarios. It consists of two steps, each with a clear objective that is specifically designed to achieve its respective goal.\\n\\n2. Experimental results across various datasets and continual learning scenarios demonstrate that the proposed algorithm outperforms other baseline algorithms.\\n\\n3. The algorithm's effectiveness is validated through visualizations of actual neuron groups, comparisons of plasticity and stability, experiments in long task scenarios, and an ablation study, providing a comprehensive analysis from multiple perspectives.\", \"weaknesses\": \"1. The proposed algorithm emphasizes the importance of carefully setting the hyperparameters, such as pre-defined sparsity, $\\\\lambda$, $\\\\alpha_1$, $\\\\alpha_2$, $\\\\beta_1$, and $\\\\beta_2$, for effective utilization. 
However, this seems to involve a considerable number of hyperparameters.\\n\\n1-1) Although $\\\\alpha_1$, $\\\\alpha_2$, $\\\\beta_1$, $\\\\beta_2$, and pre-defined sparsity utilize already reported values, and experiments have been conducted on various values for pre-defined sparsity and $\\\\lambda$, how can these parameters be configured in actual continual learning situations? This hyperparameter issue has been discussed as a significant concern in continual learning research, as highlighted in studies [1, 2, 3], so I believe a discussion on this aspect is essential. \\n\\n1-2) How sensitive is the proposed algorithm to changes in the values of $\\\\alpha_1$, $\\\\alpha_2$, $\\\\beta_1$, and $\\\\beta_2$? \\n\\n1-3) In studies such as [1,2], scenarios where many classes are learned in the first task and the remaining classes are learned evenly across multiple tasks are considered. In such scenarios, can the proposed algorithm still achieve superior performance using the same hyperparameters, or would it require finding new best hyperparameter values? \\n\\n2. All experiments appear to be conducted based on ResNet-50. Recently, Vision Transformers have been actively used in the computer vision domain, reporting superior performance [4]. In this context, I wonder if the proposed algorithm would function effectively with Vision Transformers, which have entirely different characteristics. I think additional experiments are needed to verify this aspect. \\n\\n3. The proposed algorithm is structured in two stages, and particularly, I believe that calculating Equation (6) incurs additional computational costs. From this perspective, how does the computation cost of the proposed algorithm (e.g., FLOPs or training time) compare to other algorithms in Table 1? As shown in studies [3, 5], it is crucial to compare not only the performance of each continual learning algorithm but also their practical computation costs. \\n\\n4. 
All experiments were conducted using small image datasets with sizes of 32x32 or 64x64. As noted in studies [3, 6], each algorithm exhibits different trends depending on the dataset. In this light, I believe the authors should conduct experiments on at least the ImageNet-100 dataset (image size of 224x224) and compare the results with baselines to validate additional effectiveness. \\n\\n5. What was the memory buffer size used in the experiments? When varying this size, can the proposed algorithm consistently achieve superior performance? \\n\\n6. Considering the results and experiments in [6] (refer to GitHub), does the proposed algorithm outperform well-established algorithms in class-incremental learning, such as WA, BiC, and FOSTER? To concretely validate the superiority of the proposed algorithm, a comparison of performance and costs with these methods is necessary.\\n\\n[1] Online hyperparameter optimization for class-incremental learning, AAAI 2024.\\n\\n[2] Hyperparameter Selection in Continual Learning, CoLLAs 2024 Workshop.\\n\\n[3] Hyperparameters in Continual Learning: A Reality Check, CoLLAs 2024 Workshop.\\n\\n[4] A Comprehensive Survey of Continual Learning: Theory, Method and Application, TPAMI 2024.\\n\\n[5] Computationally Budgeted Continual Learning: What Does Matter?, CVPR 2023.\\n\\n[6] PyCIL: A Python Toolbox for Class-Incremental Learning, SCIENCE CHINA Information Sciences.\", \"questions\": \"I have included all questions, along with weaknesses, in the Weakness section, so please refer to it. I believe this paper proposes an interesting pruning-based algorithm to overcome the trade-off between stability and plasticity. However, the complexity of the algorithm and the need to set numerous hyperparameter values, along with various experimental concerns, make it difficult to provide a favorable assessment. 
I look forward to the authors' response in which they address any misunderstandings on my part and provide feedback on my review.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a framework for Continual Learning (CL) that addresses the stability-plasticity dilemma through two key innovations: the Continual Weighted Sparsity Scheduler and the Meta-Plasticity Scheduler. The former iteratively prunes neurons and connections to create task-specific groups, while the latter dynamically adjusts learning rates based on sensitivity scores. Experimental results demonstrate that this approach outperforms existing methods across various datasets, effectively balancing knowledge retention and adaptability to new tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed framework integrates a Continual Weighted Sparsity Scheduler and a Meta-Plasticity Scheduler, effectively addressing the stability-plasticity dilemma in Continual Learning (CL). This allows the model to retain previously learned knowledge while adapting to new tasks, leading to improved overall performance.\\n\\n2. The use of iterative pruning and dynamic learning rate adjustments based on sensitivity scores enables the model to maintain flexibility and adaptivity. This approach allows for fine-tuning of connections, ensuring that the model can efficiently learn new information without significant interference from prior tasks.\\n\\n3. Experimental results demonstrate that the proposed method consistently outperforms state-of-the-art CL techniques across various datasets, including CIFAR-10, CIFAR-100, and Tiny-ImageNet. 
This highlights the robustness and effectiveness of the framework in handling complex and incremental learning scenarios.\", \"weaknesses\": \"1. Network Saturation Challenges: The proposed framework may face limitations as the number of tasks increases, leading to network saturation. This saturation can result in reduced adaptability and performance on new tasks, as the model may struggle to allocate sufficient resources to accommodate additional knowledge without interference from previously learned tasks.\\n\\n2. Complexity of Implementation: The integration of both the continual weighted sparsity scheduler and the meta-plasticity scheduler adds complexity to the implementation. This complexity may pose challenges in terms of computational resources and tuning hyperparameters, making it less accessible for practical applications compared to simpler continual learning methods. Therefore, this paper should add a hyper-parameter analysis.\\n\\n3. Limited Experiments: The experimental evaluation in this paper appears to be insufficient. The authors should extend their experiments to include more diverse datasets to better demonstrate the robustness and generalizability of their proposed method. \\n\\n4. Limited Improvement over SPARCL (NeurIPS 2022): The enhancements made in this framework compared to SPARCL (Sparse Continual Learning on the Edge) seem relatively incremental. 
A more detailed comparison could clarify the specific contributions and advantages of the proposed method over SPARCL, especially in terms of practical improvements and scalability.\\n\\n5. Lack of Open-Source Code: The absence of code or implementation details in the supplementary material limits the reproducibility and practical applicability of the proposed framework.", "questions": "Q1: How does the proposed framework address the network saturation problem when scaling to a larger number of tasks, and what are the potential solutions to maintain performance without compromising the model's ability to learn new tasks?", "q2": "Given the complex integration of both continual weighted sparsity scheduler and meta-plasticity scheduler:\\ni) How do different hyperparameters affect the model's performance?\\nii) What is the optimal configuration of hyperparameters for different scenarios?\\niii) How can we balance the trade-off between implementation complexity and model performance?", "q3": "To what extent does the proposed method generalize across different datasets and task domains?\\ni) How does the method perform on more challenging and diverse datasets?\\nii) Can the performance advantages demonstrated in the current experiments be replicated across a broader range of scenarios?\\niii) What are the potential limitations or strengths of the method when applied to different types of data and learning tasks?", "q4": "Time Overhead from Sparsity Scheduling and Neuron Selection: The proposed Continual Weighted Sparsity Scheduler and neuron selection process likely introduce additional computational time. 
It would be beneficial to include a comparative analysis of training time with and without these components to better understand the time cost associated with the framework.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper proposes a parameter-isolation algorithm that combines a continual weighted sparsity scheduler with a meta-plasticity scheduler to address the stability-plasticity trade-off in continual learning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Overall, the paper is well-organized and easy to understand.\", \"Detailed experiments support the effectiveness of the proposed algorithm.\"], \"weaknesses\": [\"What is the primary contribution of the proposed $\\\\textit{continual weighted sparsity scheduler}$ compared to [1,2]? Is it simply a novel combination applied to CL? Why is it necessary to gradually reduce $S_{t,n}$ during epoch training? During successive epochs, are previously selected neurons fixed, with only remaining neurons being considered for selection?\", \"In Eqn (6), what does $\\\\delta$ represent? I checked paper [1], and it seems this should be $\\\\partial$, please clarify.\", \"Based on steps 1 and 2, the algorithm obtains a set of groups of neurons and connections $\\\\mathcal{G}=\\\\{g_1,...,g_T\\\\}$. I am curious about the extent of overlap among these groups across different layers- does this provide any insight? During testing, is the task ID required to be predicted and then select the corresponding $g_i$? If not, does the algorithm need to store all groups $\\\\mathcal{G}=\\\\{g_1,...,g_T\\\\}$, or would it be sufficient to store only $g_T$?\", \"In Eqn (7), $\\\\omega_t^e$ is determined only after completing task t. 
Is the adjusted learning rate used for training task t+1, or is there a retraining step required here?\", \"The ablation study on the continual weighted sparsity scheduler is insufficient, and the description of the static condition is ambiguous. Does \\\"fixed sparsity from scratch\\\" mean the sparsity remains fixed throughout each training epoch or that the same fixed sparsity is applied across all layers? The authors should clarify these differences and interpret the performance under each condition in the ablation study.\", \"[1] To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017.\", \"[2] Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science, 2018.\"], \"questions\": \"See weakness 1-5. If my questions are effectively addressed, and after considering feedback from other reviewers, I would be open to increasing my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
DaA0wAcTY7
TIPS: Text-Image Pretraining with Spatial awareness
[ "Kevis-kokitsi Maninis", "Kaifeng Chen", "Soham Ghosh", "Arjun Karpur", "Koert Chen", "Ye Xia", "Bingyi Cao", "Daniel Salz", "Guangxing Han", "Jan Dlabal", "Dan Gnanapragasam", "Mojtaba Seyedhosseini", "Howard Zhou", "Andre Araujo" ]
While image-text representation learning has become very popular in recent years, existing models tend to lack spatial awareness and have limited direct applicability for dense understanding tasks. For this reason, self-supervised image-only pretraining is still the go-to method for many dense vision applications (e.g. depth estimation, semantic segmentation), despite the lack of explicit supervisory signals. In this paper, we close this gap between image-text and self-supervised learning, by proposing a novel general-purpose image-text model, which can be effectively used off the shelf for dense and global vision tasks. Our method, which we refer to as Text-Image Pretraining with Spatial awareness (TIPS), leverages two simple and effective insights. First, on textual supervision: we reveal that replacing noisy web image captions by synthetically generated textual descriptions boosts dense understanding performance significantly, due to a much richer signal for learning spatially aware representations. We propose an adapted training method that combines noisy and synthetic captions, resulting in improvements across both dense and global understanding tasks. Second, on the learning technique: we propose to combine contrastive image-text learning with self-supervised masked image modeling, to encourage spatial coherence, unlocking substantial enhancements for downstream applications. Building on these two ideas, we scale our model using the transformer architecture, trained on a curated set of public images. Our experiments are conducted on $8$ tasks involving $16$ datasets in total, demonstrating strong off-the-shelf performance on both dense and global understanding, for several image-only and image-text tasks. Code and models are released at https://github.com/google-deepmind/tips .
[ "image representations", "image-text", "vision-language", "dense understanding", "computer vision" ]
Accept (Poster)
https://openreview.net/pdf?id=DaA0wAcTY7
https://openreview.net/forum?id=DaA0wAcTY7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zSBCQNbWM8", "y7dm8H5YOv", "xK5Wh65wir", "vnLp2icUJT", "uJZdfYO5or", "p9DpuduYWI", "nFtTXbb0rZ", "mTinGCL1PN", "lG4Tgb4bce", "fTTEeYusuY", "btkcZ2y4Kv", "aW4BUPV8WH", "YX5QR96tCw", "TqFc5Ht1BY", "R4PZ9VV9Q5", "QPo7cyt52Y", "PLo0rm3D8Q", "MwtrVByQC0", "M1J50v7kht", "KA16qmH109", "Je4VnUo1h6", "IuyGu9zmup", "HXOUKtOjxT", "Dgu5wb0vES", "B2iqm5w8kb", "6PaDgCYY9L", "2TwrDbf3wc", "0sadQFjr8q", "0kcWlwTSD1" ], "note_type": [ "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1730648012029, 1730451071543, 1732738754572, 1730733589169, 1732289523695, 1733235143517, 1732289680963, 1732887079239, 1732289168532, 1732289237050, 1732104682540, 1732890528013, 1732634095383, 1732738428798, 1732890679584, 1732738478808, 1730429702329, 1732104297197, 1732634251131, 1732289336601, 1737523489342, 1732104376886, 1733209849322, 1732738996933, 1732633929767, 1732104162980, 1732105093394, 1732886043963, 1734418599117 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2163/Reviewer_FXGp" ], [ "ICLR.cc/2025/Conference/Submission2163/Reviewer_BGwY" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Reviewer_6Wzj" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Reviewer_6Wzj" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Reviewer_wggG" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Reviewer_wggG" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Authors" ], [ "ICLR.cc/2025/Conference/Submission2163/Reviewer_BGwY" ], [ "ICLR.cc/2025/Conference/Submission2163/Area_Chair_Bp1Y" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a novel pretrained image-text encoder with spatial awareness which is effective in a variety of downstream computer vision tasks. To achieve this, the author first employs pretrained multimodal generative models to generate high-quality synthetic image descriptions and develops a dual embedding approach that leverages both synthetic and noisy web captions in training. Additionally, contrastive image-text learning, coupled with self-distillation and masked image modeling, is introduced to encourage the model to learn spatially aware representations. Experiments conducted on eight downstream tasks validate the effectiveness of the proposed method.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. 
The authors propose an effective approach that enhances the utility of both synthetic and noisy web captions in training. They also introduce contrastive image-text learning with self-supervised masked image modeling, which effectively encourages the learning of spatial coherence.\n2. The authors conduct a variety of experiments on 8 downstream tasks, demonstrating the effectiveness of the spatial-aware text-image encoder.", \"weaknesses\": \"The formatting of the paper needs improvement and there are a lot of empty spaces around fig1 and fig2.\", \"questions\": \"Will the pretrained model and the curation dataset with synthetic captions be released?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper targets integrating the paradigms of both image-text representation learning and self-supervised learning to improve the spatial awareness of the former. For the SSL branch, the authors leverage the DINO V2 (iBOT) pre-training method; for the image-text branch, they propose the dual image-text embedding technique that learns from both noisy and synthetic captions while harnessing the distribution gap between the two types of captions. The effectiveness of the proposed method is evaluated on several image-level multimodal tasks and comprehensive dense image prediction tasks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"This paper is well written.\", \"The experiments on dense image prediction tasks are comprehensive and promising, outperforming DINO V2 on several tasks.\", \"Improving the spatial awareness of image-text representation learning is an important direction; combining DINO v2 and CLIP, both foundational works in their respective fields, is intuitive and promising.\"], \"weaknesses\": [\"The technical contributions are limited. 
The proposed method is a combination of existing methods, with the dual embedding technique being the only novel contribution. Nonetheless, I'm okay with this, since the proposed model effectively and adequately solves the model's spatial awareness limitation.\", \"As claimed in Line 300:\", \">Our method is the first to demonstrate that combining contrastive image-text learning with self-distillation and masked image modeling leads to improvements across many tasks\", \"However, integrating CLIP with both self-distillation and masked image modeling [1][2] has been proposed before, and this paper lacks further discussion against these works.\", \"Since this is a multimodal model with spatial awareness, I$\\\\rightarrow$T and T$\\\\rightarrow$I retrieval tasks alone are not enough to evaluate the model's fine-grained spatial awareness under multimodal settings. Including more experiments like open-vocabulary segmentation would be beneficial.\"], \"reference\": \"[1] MaskCLIP: Masked self-distillation advances contrastive language-image pretraining. CVPR 23.\\n\\n[2] Scaling Language-Image Pre-Training via Masking. CVPR 23.\", \"questions\": [\"As the motivation of this paper is to bridge the gap between image-text representation learning and SSL, although ablation studies are provided, this paper lacks an in-depth analysis of how the two paradigms interact with each other. For example, how SSL design choices such as augmentations (mask ratio, etc.) affect the image-text representation learning.\", \"The idea of dual embedding is interesting. I'm curious about the different roles of the two embeddings, and how they interact with the network. Could the authors provide more empirical analysis on this? 
For example, visualization of the attention maps of the two different $[CLS]$ to see their focus areas.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to provide an update on the remaining experiment suggested by the reviewer (fine-grained spatial awareness under multimodal settings, **W3** above). We thank the reviewer again for suggesting this additional experiment.\\n\\nWe evaluated TIPS for zero-shot semantic segmentation, i.e., the similarity of the image patch tokens with the query class text token (grounding) for the task of semantic segmentation. For a fair comparison, and since we are using a different framework, we re-implemented the evaluation protocol of TCL* [B]. We evaluate the raw features, without any training or post-processing**.\\n\\nWe use a TIPS model with a global average pool (GAP) head for the image embedding***. We evaluate on two different datasets: PASCAL VOC (VOC20) and ADE20k (A150). We compare TIPS to the state-of-the-art [C], and to [A], the pioneering work in this area.\\nTIPS achieves an IoU of 78.6% on VOC20, better than both [C] (77.5%) and [A] (53.7%). On A150, TIPS achieves an mIoU of 17.8%, better than [A] (10.8%) and slightly below [C] (19.3%).\\n\\n\\\\* The evaluation protocol of [B] uses an average of 80 different prompts for embedding each class (such as `this is an image of {}`). It uses an input size of 448x448, with a sliding window of 224x224.\\n\\n** Post-processing methods significantly improve the performance of zero-shot semantic segmentation. For example, [A] trains on pseudo-ground-truth spatial labels, and filters out non-existing classes with a process called prompt denoising, while [B] uses post-processing with Pixel-Adaptive Mask Refinement (PAMR). 
While TIPS can orthogonally benefit from these techniques, in our experiments we evaluate the raw representations produced by TIPS.\\n\\n*** While we use the CLS tokens for embedding the real and synthetic captions, these are not a direct function of the output patch embeddings. Therefore, the vanilla patch embeddings are not necessarily grounded to the text. We tried different approaches for improving this, including using the values of the last encoder block [A, C], or using different embedding heads, such as MAP or GAP. Using a simple global average pooling (GAP) worked the best out of all variants.\", \"references\": [\"[A] Zhou et al., Extract Free Dense Labels from CLIP, ECCV 2022.\", \"[B] Cha et al., Learning to Generate Text-grounded Mask for Open-world Semantic Segmentation from Only Image-Text Pairs, CVPR 2023.\", \"[C] SILC: reference \\u201cNaeem et al., 2024\\u201d in our paper.\"]}", "{\"summary\": \"This paper presents a spatial-aware text-image pre-training method that combines contrastive image-text learning with self-supervised masked image modeling. Besides, the method proposes to combine the noisy web captions and synthetic captions that are more helpful to learn spatially aware representations. The method is evaluated on both zero-shot classification and dense prediction tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper presents a solid work on text-image pre-training: a large-scale synthetic caption dataset is created, the method is evaluated on both classification and dense prediction tasks, and extensive experimental studies are conducted for ablation studies and analyses.\", \"The results look good. The proposed method can achieve good dense prediction and classification/retrieval performance simultaneously. 
Ablation results provided in the paper may be helpful for developing new text-image models.\"], \"weaknesses\": [\"The general idea of combining contrastive image-text learning and masked image modeling is not new. Previous work like EVA-CLIP [r1] has already shown that MIM can improve the spatial awareness or locality of CLIP features and improve CLIP performance. The core difference between TIPS and this line of work is whether MIM and CLIP are combined successively or simultaneously. I think performing the two tasks simultaneously may better preserve the spatial awareness/locality, but it may also make the training more costly, or possibly unstable. It would be better to provide a comparison/analysis on the pros and cons of the two strategies.\", \"[r1] EVA-CLIP: Improved Training Techniques for CLIP at Scale\", \"The study uses the proprietary WebLI dataset to train the model. Is it possible that the improvements over previous methods mainly come from better data sources? What would the results be if both the proposed model and the baseline used publicly available datasets like LAION, COYO or DataComp?\"], \"questions\": \"Please refer to my comments above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We have now uploaded a new PDF version with the latest modifications to the manuscript, as per reviewer suggestions. We would like to point out the following changes:\\n- Ablation on SSL components (**Q1** in the previous comment): Table 7 (*relabeled to Tab 8 in the latest version*) was added to the appendix, with the associated paragraph \\u201cAblation on self-supervised learning components\\u201d. These experiments report results varying the masking approach and ratio for the masked modeling component of TIPS. 
Additional ablations will be included in the next few days.\\n- MaskCLIP discussion (**W2** in the previous comment): included the MaskCLIP reference, which was previously missing. Discussed it in Related Work (section 2) and section 3.2, in relation to the proposed method. Added MaskCLIP results in Tab 3.\\n\\nWe are continuing to work on the additional experiments suggested by the reviewer and will report back once they are ready.\\n\\nThanks once again for your attention. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"comment\": \"Thanks for the reply. We are glad to see that reviewer\\u2019s concerns were addressed and that the score was improved, leading to an acceptance recommendation.\"}", "{\"comment\": \"We have now uploaded a new PDF version with the latest modifications to the manuscript, as per suggestions from all reviewers.\\n\\nWe believe that all concerns from the reviewer have been addressed in the previous comment in this thread, and we thank the reviewer for the attention here. We sincerely hope that these notes can help the reviewer finalize the assessment of our work. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"comment\": \"Thanks for the detailed reply and the new results. My concerns about the performance and the proprietary WebLI dataset have been addressed. I appreciate the new experiments conducted during the rebuttal, especially considering the large training cost of ablation on the training pipeline and the dataset choice. However, I still think the method is not that new and inspiring since the two core methods have already been separately validated in previous work. Overall, I think the results presented in the paper are valuable for the community. 
After reading other reviews, I would keep my initial rating and recommend acceptance for this paper.\"}", "{\"comment\": \"Dear reviewers,\\n\\nWe have now uploaded a new PDF version with the latest modifications to the text, as per reviewer suggestions. Please note that the changes are marked in blue, to make them more salient. The main changes were:\\n- Ablation on SSL components, according to reviewer BGwY (Q1): Table 7 (*relabeled to Tab 8 in latest version*) was added to the appendix, with the associated paragraph \\u201cAblation on self-supervised learning components\\u201d. These experiments report results varying the masking approach and ratio for the masked modeling component of TIPS. Additional ablations will be included in the next few days.\\n- Formatting changes, according to reviewer FXGp (W1): we fixed the spacing issues around figures 1 and 2, as requested. We additionally improved formatting as follows: (i) rearranging the placement of some tables and fixing the numbering of Tables 3 and 4; (ii) improving spacing in the new Table 3 (compare to previous Table 4); (iii) enhancing formatting of Tab 2 to guide the reader more effectively over all results.\\n- MaskCLIP discussion, according to reviewer BGwY (W2): included the MaskCLIP reference, which was previously missing. Discussed it in Related Work (section 2) and section 3.2, in relation to the proposed method. Added MaskCLIP results in Tab 3.\\n\\nThanks once again for your attention. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"comment\": \"We have now uploaded a new PDF version with the latest modifications to the text, as per suggestions from all reviewers. We are continuing to work on the additional experiments suggested by the reviewer and will report back once they are ready.\\n\\nThanks once again for your attention. 
We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"title\": \"Rebuttal comment to BGwY\", \"comment\": \"We thank the reviewer for the detailed comments. We are encouraged by the positive evaluation of our work.\\n\\nSome of the weaknesses and questions raised in the review suggest the need for additional experimental studies. We are currently working hard on experiments which could provide results to help alleviate the concerns, and will send an update in this comment thread once the results are available.\\n\\nIn the following, we provide detailed responses to the weaknesses and questions from the reviewer:\\n\\n**W1**) *Potentially limited technical contributions.*\\nFirstly, we are glad to see the reviewer acknowledge that the proposed model effectively and adequately solves spatial awareness limitations. Our work does leverage learnings from previous work in representation learning, however we would like to highlight a few points:\\n- It was previously not known that synthetic captions could benefit spatial understanding tasks, and the improvements are very significant (Tab 1). To the best of our knowledge, ours is the first paper to demonstrate this.\\n- As the reviewer points out, we propose a novel dual image-text embedding learning technique. 
To emphasize, our experiments show that combining synthetic and alt-text captions in the right way helps with a variety of downstream applications, for both dense and image-level prediction.\\n- Our method is the first to combine image-text contrastive learning with masked image modeling and self-distillation at the same time, showing that they can provide complementary strengths with positive synergy, resulting in outstanding experimental results in a broad range of tasks.\\n\\n**W2**) *Potential issues with claim in Line 300: missing FLIP/MaskCLIP citations?*\\nWe thank the reviewer for the detailed comments on this point, which will help us improve our paper and better position our work against previous methods. Let us discuss in detail:\\n- The reference \\u201cScaling Language-Image Pre-Training via Masking\\u201d pointed out by the reviewer corresponds to the FLIP paper, which is already discussed in the submission (see reference Li et al., 2023). FLIP has a very different goal from ours, since they proposed to combine contrastive learning with masking without any reconstruction loss, aiming only at efficient language-image training (no spatial awareness goal). We believe that the current version of our manuscript provides sufficient discussion regarding FLIP, but we are open to any additional suggestions from the reviewer on this point.\\n- Regarding the \\u201cMaskCLIP\\u201d reference: indeed, we have missed it in the submitted version of the paper, and we will fix this. We will upload a new PDF version of the paper in the next few days with this reference included, along with relevant discussion. 
In terms of differences between our TIPS method and MaskCLIP, we would like to emphasize: i) our approach goes beyond masked image modeling to also include self-distillation losses, which we show to be important via ablations; ii) we show the power of synthetic captions for spatial understanding, which is not an aspect studied in their work; iii) we aim for off-the-shelf usage for many vision tasks, which is different from their goal (their dense prediction results are mainly in the setup of full model fine-tuning, covering only a small number of dense tasks). Additionally, we can directly compare some experimental results of our method against MaskCLIP: while our ViT-B model achieves 89.2% on Flickr I\\u2192T retrieval (Tab 1), MaskCLIP\\u2019s ViT-B achieves only 70.1% (Tab 5 in their paper). On Flickr T\\u2192I, we achieve 77.3%, compared to MaskCLIP\\u2019s 45.6%. This shows that our full training recipe for TIPS tends to achieve significantly better results than MaskCLIP.\\n\\n**W3**) *Experiments on fine-grained spatial awareness under multimodal settings.*\\nWe are working on this experiment currently and will report results in this comment thread once they are available.\\n\\n**Q1**) *How SSL design choices affect image-text representation learning.*\\nWe are working on this ablation experiment currently and will report results in this comment thread once they are available.\\n\\n**Q2**) *Visualization of the attention maps of the two different [CLS] to see their focus areas.*\\nWe are working on this experiment currently and will report results in this comment thread once they are available.\"}", "{\"comment\": \"Thanks for the reply. 
We are glad to see that the reviewer's concerns were addressed, and that the reviewer recognizes our results are valuable for the community, leading to an acceptance recommendation.\"}", "{\"comment\": \"We have now uploaded a second revised PDF version with the latest modifications to the manuscript.\\n\\nWe have included an experiment replacing WebLI by DataComp, as suggested by the reviewer (**W2** above): see the new Table 7, and associated paragraph \\u201cTraining on DataComp\\u201d, in appendix A.1. Results show that very similar performance is obtained if using WebLI or DataComp. This strongly suggests that improvements over previous methods do **not** come from better data sources: for example, CLIP trained on WebLI or DataComp leads to very similar numbers (same observation for TIPS).\\n\\nWe are continuing to work on the remaining experiment suggested by the reviewer (comparison to EVA\\u2019s training approach), and will report back once it is ready.\\n\\nThanks once again for your attention. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"comment\": \"Dear reviewers,\\n\\nWe have now uploaded a final revised PDF version with the latest modifications to the manuscript, as per reviewer suggestions. Please note that all changes in the revised versions are marked in blue, to make them more salient. \\n\\nThe final change in this iteration is the inclusion of the experiment comparing the EVA-like successive CLIP \\u2192 MIM training, against our proposed learning process. This was requested by reviewer 6Wzj (W1). The new result can be found in Tab 8 (E), showing that our proposed method of combining contrastive and self-supervised learning simultaneously performs better.\\n\\nThanks once again for your attention. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"comment\": \"Thanks for the reply. 
We are glad to see that reviewer\\u2019s concerns were effectively addressed, and that the reviewer recognizes the \\\"good job\\\" of the paper, leading to an acceptance recommendation.\"}", "{\"comment\": \"We have now uploaded a final revised PDF version with the latest modifications to the manuscript.\\n\\nWe have included an experiment ablating the way to combine contrastive and self-supervised learning, as suggested by the reviewer (**W1** above). The result can be found in Tab 8 (E). While the successive manner of CLIP \\u2192 MIM training improves on dense tasks compared to the CLIP baseline, the proposed TIPS approach to combine contrastive learning with self-distillation and MIM simultaneously performs better across the board.\\n\\nWe believe that all concerns from the reviewer have been addressed at this point, and we thank the reviewer for the attention here. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"summary\": \"This paper addresses dense and global vision tasks by enhancing textual supervision and integrating contrastive image-text learning with self-supervised techniques. The method combines noisy web captions with synthetically generated captions to improve spatial awareness and applies masked image modeling to promote coherence in spatial understanding. As a result, the model demonstrates robust performance across various tasks without the need for fine-tuning, showcasing its general-purpose applicability in both image-only and image-text applications.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper is well-structured and clearly articulated, with detailed experimental records. 
By cleaning and constructing a high-quality dataset and incorporating self-supervision, methods such as dual captioning and masked image modeling enable the model to achieve significant (albeit incremental) advancements in dense prediction tasks.\\n2.\\tThe trained model demonstrates strong generalizability across multiple tasks, indicating its broad applicability in vision tasks.\\n3.\\tThe paper includes a substantial number of experimental comparisons and analyses.\", \"weaknesses\": \"1.\\tThe work presents only a limited amount of novelty. The main critique lies in the lack of significant innovation. The paper largely repurposes existing techniques like synthetic captioning and contrastive learning, and while the results are solid, they do not represent a substantial leap forward in the field.\\n2.\\tThe improvements over existing models such as CLIP and DINOv2 are incremental, and the performance gains are sometimes marginal or context-specific. The originality in combining these techniques does not feel transformative.\\n3.\\tMore detailed ablation studies focusing on the contribution of each component (e.g., the specific impact of spatial coherence from the captions) could strengthen the claim of novelty.\", \"questions\": \"1.\\tThe authors are encouraged to add detailed ablation results to isolate the impact of the synthetic captions on different spatial tasks.\\n2.\\tHave you considered alternative ways of introducing spatial awareness besides synthetic captions and masking?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal comment to 6Wzj\", \"comment\": \"We thank the reviewer for the detailed comments. We are encouraged by the positive evaluation of our work.\\n\\nThe two weaknesses raised in the review suggest the need for additional experimental studies. 
We are currently working hard on experiments which could provide results to help alleviate the two concerns, and will send an update in this comment thread once the results are available. \\n\\nIn detail, the two weaknesses were:\\n\\n**W1**) *Comparing against the EVA method of combining contrastive image-text learning and masked image modeling.*\\nThe reviewer mentions EVA-CLIP, and by that we understand the suggestion to compare against the sequential method of i) contrastive, then ii) MIM reconstruction, which was originally presented in the first EVA paper (reference \\u201cFang et al., 2023\\u201d in the paper). We are working on experiments to compare this approach against our method. The reviewer mentions concerns about potentially unstable training with our approach, but we do not observe this in practice.\\n\\n**W2**) *Improvements potentially coming from better data (use of WebLI).*\\nWe would like to point out experimental results in the submitted version of the paper which already suggest that the gains are mainly coming from a better training method, rather than better data. Table 6 (in the appendix) ablates dataset versions according to our curation pipeline, showing that our curated dataset leads to moderate gains in NYUv2 (from 0.698 to 0.62 RMSE), when using a standard CLIP method. However, Table 1 indicates a much larger gain by changing from CLIP to our method, from 0.62 to 0.478 RMSE, which is roughly 2X the gain from data curation.\\nAdditionally, we are working hard on providing experimental results with a public dataset, as suggested by the reviewer. This is a significant engineering task, which consumes a very large amount of resources, not only for training, but also for downloading, curating and re-captioning the large-scale datasets. Given that WebLI and the other public datasets are collected in similar ways, we believe that other datasets of the same size will yield similar results. 
Nevertheless, we are doing our best to provide results on this as soon as possible.\"}", "{\"comment\": \"We have now uploaded a second revised PDF version with the latest modifications to the manuscript.\\n\\nWe have included additional experiments as per the reviewer\\u2019s request:\\n- Dual embedding attention visualization (addresses **Q2**): see new appendix section A.6, with detailed visualizations and discussions on the roles of the two embeddings, which corroborate our intuitions that the two embedding heads focus on different aspects of the image.\\n- Additional ablation on SSL components (addresses **Q1**): the new Table 8 now reports ablations considering additional image augmentations, and related discussion was added to the paragraph \\u201cAblation on self-supervised learning components\\u201d.\\n\\nWe are continuing to work on the remaining experiment suggested by the reviewer (fine-grained spatial awareness under multimodal settings), and will report back once it is ready.\\n\\nThanks once again for your attention. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"comment\": \"We have now uploaded a new PDF version with the latest modifications to the text, as per reviewer suggestions. We would like to point out that the formatting changes suggested by the reviewer were adopted (**W1** in the previous comment): we fixed the spacing issues around figures 1 and 2, as requested. We additionally improved formatting as follows: (i) rearranging the placement of some tables and fixing the numbering of Tables 3 and 4; (ii) improving spacing in the new Table 3 (compare to previous Table 4); (iii) enhancing formatting of Tab 2 to guide the reader more effectively over all results.\\n\\nWe believe that all concerns from the reviewer have now been addressed, and we thank the reviewer for the attention here. 
We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Rebuttal comment to FXGp\", \"comment\": \"We thank the reviewer for the detailed comments. We are encouraged by the positive evaluation of our work.\", \"here_we_respond_to_the_weakness_and_the_question_raised_by_the_reviewer\": \"**W1**) *Formatting.*\\nWe agree that there is room for improvement on the paper formatting, in particular around Figures 1 and 2, as pointed out by the reviewer. We will make changes accordingly and upload a new PDF version of the paper in the next few days.\\n\\n**Q1**) *Model/data release.*\\nWe are planning to release the pretrained model together with the final version of the paper. We are currently following the model release process required by our organization and expect that all approvals will be obtained in time.\", \"regarding_the_dataset\": \"our curated dataset is part of a much larger corpus of images that has not yet been publicly released. Therefore, unfortunately, our organization prohibits its release.\"}", "{\"comment\": \"Thanks for the resonse. Most of my concerns have been addressed. I have improved my score, yet I still hope to see more extensive experiments.\"}", "{\"comment\": \"We have now uploaded the final revised PDF version with the latest modifications to the manuscript.\\n\\nWe would like to point out that we have included additional ablation experiments, in addition to the ones isolating the impact of synthetic captions on spatial tasks (which we discussed in the above answer to **W3**). While the original ablations (Tab 1, Tab 5, Tab 6) assess the contribution of each component in the TIPS method, the new ablations provided in Tab 7 and Tab 8 consider TIPS training on a different dataset and provide a detailed study on self-supervised learning components. 
We hope that these additional results help the reviewer\\u2019s assessment of our work.\\n\\nThanks once again for your attention. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"comment\": \"Dear reviewers,\\n\\nWe have now uploaded a second revised PDF version with the latest modifications to the manuscript, as per reviewer suggestions. Please note that all changes in the revised versions are marked in blue, to make them more salient. The main changes in this iteration were:\\n- Experiment replacing the WebLI training set by DataComp, as per request of reviewer 6Wzj (W2): see new Table 7 and associated paragraph \\u201cTraining on DataComp\\u201d, in appendix A.1. This experiment shows that similar results are obtained if training on WebLI or DataComp, indicating the effectiveness of TIPS independently of the training dataset.\\n- Additional ablation on SSL components, as per request of reviewer BGwY (Q1): the new Table 8 now reports ablations considering additional image augmentations, and related discussion was added to the paragraph \\u201cAblation on self-supervised learning components\\u201d.\\n- Dual embedding attention visualization, as per request of reviewer BGwY (Q2): new appendix section A.6 was added, with detailed visualizations and discussions on the roles of the two embeddings, which corroborate our intuitions that the two embedding heads focus on different aspects of the image.\\n\\nThanks once again for your attention. We continue to be available for discussions in case any further clarifications can be helpful.\"}", "{\"title\": \"Rebuttal comment to all reviewers\", \"comment\": \"We would like to thank all reviewers for the valuable feedback.\\n\\nWe are encouraged that our work is generally recognized as \\u201c**solid**\\u201d (6Wzj), exploring an \\u201c**important direction**\\u201d (BGwY). 
Reviewers acknowledged that our method is \\u201c**intuitive and promising**\\u201d (BGwY), \\u201c**novel**\\u201d (FXGp), showing \\u201c**strong generalizability**\\u201d (wggG), \\u201c**effective**\\u201d (FXGp), overall achieving \\u201c**significant**\\u201d results (wggG). The experiments are regarded as \\u201c**comprehensive and promising**\\u201d (BGwY), \\u201c**detailed**\\u201d (wggG), \\u201c**extensive**\\u201d (6Wzj), \\u201c**substantial**\\u201d (wggG), with ablations that are \\u201c**helpful**\\u201d (6Wzj). In terms of presentation, the paper is \\u201c**well written**\\u201d (BGwY), \\u201c**well-structured and clearly articulated**\\u201d (wggG).\\n\\nWe also truly appreciate the constructive comments, which help us improve our work and strengthen the paper. Given that there are no significant concerns that are common across all reviewers, we will address them directly in the individual sections below. We are doing our very best to address all of the comments and are committed to discussing with the reviewers in detail to help with the paper\\u2019s assessment.\\n\\nPlease note that we are currently working on the requested experiments and will provide their results as soon as they are ready. In any case, we wanted to start discussion with all reviewers as soon as possible, in order to provide ample time for discussions.\"}", "{\"title\": \"Rebuttal comment to wggG\", \"comment\": \"We thank the reviewer for the detailed comments. We are encouraged by the many positive remarks and will try our very best to address the reviewer\\u2019s concerns.\\n\\nIn the following, we provide detailed responses to the weaknesses and questions from the reviewer:\\n\\n**W1**) *Potentially limited novelty.*\\nFirst of all, we are glad to see the reviewer acknowledge solid results, despite the concern. 
While it is true that we are inspired by previous work\\u2019s explorations on synthetic captions and contrastive/self-supervised learning, we would like to highlight a few points:\\n- It was previously not known that synthetic captions could benefit spatial understanding tasks, and the improvements are very significant (Tab 1). To the best of our knowledge, ours is the first paper to demonstrate this.\\n- We propose a novel dual image-text embedding learning technique, which shows strong results by combining synthetic and alt-text captions in the right way.\\n- We are the first to combine contrastive image-text learning with self-distillation and masked image modeling at the same time, showing that they can provide complementary strengths with positive synergy, and lead to outstanding experimental results in a broad range of tasks.\\n\\n**W2**) *Incremental gains over CLIP and DINOv2.*\\nFirst, we would like to highlight that our main goal is to design a general-purpose method achieving strong performance across both spatial understanding and image-text tasks. CLIP and DINOv2 are disjoint models that lack capabilities on spatial understanding and image-text, respectively, while ours is the first model shown to provide strong results in both of these tasks.\\nSecond, we highlight some strong performance improvements of our method compared to recent work: \\n- TIPS outperforms DINOv2 in fine-grained retrieval (UnED) by 9.2% absolute (Tab 2). Note that TIPS has close or better performance on other evals compared to DINOv2.\\n- TIPS outperforms the same-size ViT-g EVA-CLIP in absolute terms by 5.8% on COCO I\\u2192T, 9.1% on COCO T\\u2192I, 1.4% on Flickr I\\u2192T, 5.6% on Flickr T\\u2192I, as per Tab 4 (*relabeled to Tab 3 in latest version*). These are substantial improvements upon recent work. \\n\\n**W3**) *Ablation study on impact of spatial coherence from the captions.*\\nThank you for the suggestion. 
We would like to point out that this ablation study is already provided in Tab 1, showing that spatial tasks (segmentation and depth) improve significantly when using synthetic captions. Simply replacing the web captions by PaliGemma-generated ones improves segmentation by 10.1 percentage points and reduces depth RMSE by 0.076, which are big positive gains (compare Tab 1 (A) vs Tab 1 (B) \\u201cPaliGemma captions\\u201d).\\nAdditionally, we provide more synthetic caption ablations in Table 5 (appendix), which help understand which components in the synthetic captions help spatial tasks \\u2013 e.g., segmentation benefits significantly from listing the different objects in the image, while depth obtains a substantial boost when spatial relationships of scene content are described in the caption.\\nWe hope that these results help alleviate the reviewer\\u2019s concern but are happy to continue the discussion in case there is any additional feedback.\\n\\n**Q1**) *Ablations isolating the impact of synthetic captions on spatial tasks.*\\nThis is the same as **W3**, see answer above.\\n\\n**Q2**) *Alternative ways of introducing spatial awareness besides synthetic captions and masking.*\\nYes, additional ways could include the use of dense annotations such as boxes and masks. Since these are generally expensive to collect, one possibility would be to use high-confidence boxes and masks produced by state-of-the-art off-the-shelf models as pseudo ground-truth annotations. We can also leverage synthetic captions of boxed regions to obtain richer grounded supervision, e.g. \\u201ca yellow city bike\\u201d instead of simply \\u201cbike\\u201d (which is usually what one would obtain with standard class names). 
In our paper, though, we aimed to keep our method as simple as possible without requiring any additional expensive annotation, which could complicate the model design, but this could be a fruitful research direction.\\nAdditionally, we plan to explore modifications to our training strategy to enhance spatially-grounded multimodal learning. For example, one can leverage the text embedding to find the most text-aware patches in the image, and preferably mask them to incentivize the model to learn better visual representations.\"}", "{\"comment\": \"Thanks for the detailed rebuttal. Most of my concerns have been effectively addressed. The paper does a good job of enhancing CLIP with SOTA SSL techniques and synthetic captions. While I still believe that the technical contributions of combining DINO V2 with CLIP may be somewhat limited, I have decided to maintain my original score and am inclined to recommend the paper for acceptance.\"}", "{\"metareview\": \"The paper presents an approach to integrating spatial awareness into text-image pretraining. The reviewers generally agree on the paper's strengths, including its extensive experiments, strong generalization across tasks, and intuitive dual-embedding technique. Some concerns were raised about the novelty of combining existing methods, though these were largely addressed during the rebuttal phase. Given the positive scores after the rebuttal and discussion period, the AC recommends acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers raised concerns about novelty, dataset dependency, and marginal improvements. The authors conducted new experiments comparing datasets, ablated self-supervised components, and visualized dual embeddings, addressing all issues. Synthetic captions\\u2019 role in spatial tasks was clarified, and formatting improved. While novelty was questioned, reviewers acknowledged the method\\u2019s effectiveness and comprehensive results. 
Given the thorough rebuttal, additional experiments, and general agreement on contributions, the AC weighed these responses positively, leading to the final decision to recommend acceptance.\"}" ] }
Da3j02cHe0
Efficient Physics-Constrained Diffusion Models for Solving Inverse Problems
[ "Seungjun Lee", "Shinjae Yoo" ]
Solving inverse problems in scientific and engineering domains often involves complex, nonlinear forward physics and ill-posed conditions. Recent advancements in diffusion models have shown promise for general inverse problems, yet their application to scientific domains remains less explored and is hindered by the complexity and high non-linearity of physics constraints. We present a physics-constrained diffusion model (PCDM) designed to solve inverse problems in scientific and engineering domains by efficiently integrating pre-trained diffusion models and physics-constrained objectives. We leverage accelerated diffusion sampling to enable a practical generation process while strictly adhering to physics constraints by solving optimization problems at each timestep. By decoupling the likelihood optimization from the reverse diffusion steps, we ensure that the solutions remain physically consistent, even when employing fewer sampling steps. We validate our method on a wide range of challenging physics-constrained inverse problems, including data assimilation, topology optimization, and full-waveform inversion. Experimental results show that our approach significantly outperforms existing methods in efficiency and precision, making it practical for real-world applications.
[ "physics-constraints inverse problem", "diffusion model", "PDE", "generative modeling" ]
Reject
https://openreview.net/pdf?id=Da3j02cHe0
https://openreview.net/forum?id=Da3j02cHe0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xZVOsDcXMu", "vBnf2KOP70", "uxJtPOQsTq", "ttZIOimkgB", "sg5v1LtCzC", "j4nJRMKLAp", "fDb3r4NHHV", "dOGy19SYek", "cnAD1VQICA", "cVQfklclm3", "b1oSRUCkPF", "aNl8Wbpour", "Wfsd9znYqn", "HAhFs3yLrJ", "E04fvRxQH2", "92NZpJtnZE", "3g8qJJ0Ksq", "29Q8yTalcp", "0yyNpVjV5f", "0N8ctTgZQi" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732773296386, 1732683768047, 1732774877185, 1734668845847, 1732779243745, 1732782688001, 1730352926517, 1732812883487, 1730380756025, 1730387104735, 1737524160822, 1733187352726, 1732777680575, 1732781293499, 1730680525157, 1732508793130, 1732645471355, 1732773157412, 1729721688678, 1732776389168 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_DxjV" ], [ "ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Area_Chair_JVki" ], [ "ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_DxjV" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_Liuf" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_RYMp" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_uCwN" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_DxjV" ], [ "ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_LNSc" ], [ "ICLR.cc/2025/Conference/Submission12020/Area_Chair_JVki" ], [ 
"ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Authors" ], [ "ICLR.cc/2025/Conference/Submission12020/Reviewer_Liuf" ], [ "ICLR.cc/2025/Conference/Submission12020/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer LNSc (part 2)\", \"comment\": \"**The paper ... Have the authors considered Sequential Monte Carlo methods [1, 2, 3], ... mode-collapse.**\\n\\nSequential Monte Carlo methods mentioned by the reviewer are usually applied to linear inverse problems. In our work, the forward models involve complex nonlinear operators, which pose significant challenges for the direct application of these methods. Although these methods have strong theoretical guarantees, further development would be required to adapt them effectively to inverse problems in scientific and engineering domains.\"}", "{\"title\": \"Individual response?\", \"comment\": \"I noticed the short general response indicating that the paper has been revised. However, I did not see individual responses addressing the specific concerns raised by reviewers. Additionally, while I have checked the updated appendix and the revised paper, I could not easily locate the sections that address the major concerns outlined in my review.\\n\\nAs a friendly reminder, the [ICLR policy](https://iclr.cc/Conferences/2025/AuthorGuide) states:\\n> You can upload revisions until the discussion period ends, but reviewers and area chairs are not required to look at every revision. It is up to you to clearly communicate whats been changed.\\n\\nI would strongly encourage the authors to provide individual responses to each reviewer, explicitly stating how the revisions address the points raised. This would greatly facilitate understanding the changes made and their relevance to the feedback provided.\"}", "{\"title\": \"Response to Reviewer uCwN (part 1)\", \"comment\": \"We appreciate the reviewer's insightful comments and constructive feedback. 
Our responses are given below:\\n\\n**The technical contribution is marginal. The idea of variable splitting ... likelihood step.**\\n\\nAs noted by the reviewer, the main difference is that there is no closed-form solution for our optimization problem. Therefore, a gradient update for the likelihood is used for each likelihood step. However, another key contribution of our method lies in how the likelihood steps are performed. While existing state-of-the-art plug-and-play (PnP) algorithms also rely on gradient updates for the likelihood, our approach has greater flexibility and efficiency. Unlike existing PnP algorithms, which use a single gradient update per likelihood step, our method introduces the flexibility of performing multiple gradient updates ($N$) and applies the likelihood steps selectively, focusing on the later steps of the diffusion reverse process ($t<t_s$). Our observations indicate that likelihood steps have minimal impact during the early stages but become more effective later in the process. This design choice improves both efficiency and performance, as described in Table 5 and Figures 6 and 7, with thorough comparisons. \\n\\n**Does one likelihood iteration mean ... throughout the algorithm?**\\n\\nThe 1000 likelihood iterations represent the total gradient updates required to solve the inverse problem, rather than the updates needed for a single round of Equation 15 (previously Equation 16). To ensure a fair comparison, both DPS (1000) and PCDM (200) use an equal number of likelihood iterations. However, DPS involves an additional 1000 reverse diffusion steps, whereas PCDM incorporates 200 reverse diffusion steps. To clarify, we present the gradient update scheme in Equation 16, which corresponds to a single likelihood iteration. \\n\\n**Concerns about how fair the comparison to DPS is, such as in Figure 3 (b). 
/ Please comment on how hyperparameters for baselines, including DPS with an appendix.**\\n\\nWe matched the pre-trained diffusion model and hyperparameter settings as closely as possible to the implementation in [1] for all comparison methods. Although minor differences exist, such as preprocessing or visualized examples, the comparison between DPS, SDA, and PCDM remains fair, as they all utilize the same pre-trained diffusion model during inference. As recommended by the reviewer, we added details on the training process and the hyperparameters used in Table 4 and Appendices A.2 and A.3. \\n\\n[1] Score-based Data Assimilation, NeurIPS 2023. \\n\\n**Often it makes more sense to think of physics constraints as prior ... move the physics-consistency term as an additional regularizer?**\\n\\nOur framework treats both the physical model and measurement operators (represented by a sparse matrix or convolution operators with a given kernel) as the forward model $A(x)$, and the corresponding likelihood term, $\\\\| y - A(x) \\\\|_2^2$, is treated in the same way. \\nFor example, in the case of our data assimilation scenarios, we consider two types of constraints: sparse measurements, represented by $y_1=M(x)$, where $M$ is a forward model with 8x spatial coarsening and 4x temporal coarsening operations, and physical residuals, represented as $y_2 = r = P(x)$, where $P(x)$ represents the governing equation (e.g., $P(x) = 0$). Therefore, the corresponding likelihood term can be a combination of them, $c_1 \\\\cdot \\\\| y_1 - M(x) \\\\|_2^2 + c_2 \\\\cdot \\\\| 0 - P(x) \\\\|_2^2$. \\nFrom the domain-specific problem perspective, physics constraints are often the primary target for minimization to obtain a physically plausible solution. This aligns with the reviewer\\u2019s suggestion to consider the physics-consistency term as a prior. 
However, in our approach, obtaining the solution to the inverse problem is formulated as a sampling process from a pre-trained generative model. This process is iteratively guided by alignment with either observations or physics-based constraints, where these constraints are enforced through the likelihood rather than treated as a prior.\"}", "{\"metareview\": \"This paper introduces the Physics-Constrained Diffusion Model (PCDM) for solving inverse problems in physics by leveraging diffusion models as priors. PCDM employs a variable splitting technique, similar to ADMM and plug-and-play methods, to minimize a composite objective function that balances a likelihood term (enforcing physical constraints) with a regularization term defined by a diffusion model. This is achieved by alternating between a step that updates the solution using the diffusion model as a regularizer and a step that enforces data and physics constraints. The authors demonstrate PCDM's effectiveness on three applications: full-waveform inversion, data assimilation, and topology optimization, showing improved results compared to baseline methods. The claimed contribution lies in the integration of diffusion model priors with physical constraints to achieve solutions that are both realistic and physically consistent.\\n\\nAll the reviewers agree that the novelty of this approach is limited, and that the authors seem oblivious to the large amount of literature on the topic. As such, the benchmarks are not really useful, as they are not compared with state-of-the-art related (or in fact very similar) methods. I therefore recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"Most of the reviewers agree on the lack of novelty. The authors' response did not address their concerns.\"}", "{\"title\": \"Response to Reviewer DxjV\", \"comment\": \"Thank you for the notice. Additionally, we appreciate the reviewer's insightful comments and constructive feedback. 
Our responses are given below:\\n\\n**The proposed PCDM appears to be mathematically equivalent to a special case of the algorithm in Li et al. [1] (specifically, the case using Tweedie's formula). The claim of algorithmic novelty is questionable.**\\n\\n[1] also employs a variable splitting method and allows multiple gradient updates within a likelihood step, similar to DAPS [3]. The key algorithmic difference between our method and [1, 3] is that our approach begins performing likelihood steps only after $t < t_s$ (e.g., $t_s/T=0.25$) of the reverse process. This reduces the total number of likelihood-step rounds, allowing us to perform more gradient updates in the selected time steps ($t<t_s$) given the same computational resources. In Appendix A.3, we describe the difference between DAPS [3] and our method and present thorough comparisons with both quantitative (Table 5) and qualitative (Tables 6 and 7) results. These experiments demonstrate that performing more likelihood gradient updates in the later stages of the reverse process is more effective than applying the same number of updates across all reverse steps. \\n\\n[1] Decoupled data consistency with diffusion purification for image restoration, arXiv, 2024.\\n\\n[3] Improving diffusion inverse problem solving with decoupled noise annealing, arXiv, 2024.\\n\\n**The \\\"physics-constrained\\\" ... These methods are not compared or discussed in the paper. / The experimental comparison excludes many recent and relevant algorithms. For example DiffPIR and DAPS, RED-diff.** \\n\\nAs noted by the reviewer, we have provided a brief explanation of the state-of-the-art methods (including DPS, DiffPIR, RED-diff, and DAPS) and discussed their differences from our approach in Appendix A.3. Additionally, we conducted thorough experiments comparing these state-of-the-art algorithms with our method in Table 5 and Figures 6 and 7 in Appendix B. 
Our proposed method outperforms the state-of-the-art comparisons in all evaluation metrics.\\n\\n**What exactly is the Opt w/o diff baseline in Table 1? Is that the Adam optimizer? What initialization strategy was employed? What are the specific hyperparameters used to report the results?**\\n\\nWe employ the Adam optimizer with a learning rate of 0.005 and perform 1,000 iterations for the total optimization process. For the initialization, we take a random initialization from a standard normal distribution $N(0, I)$. During the optimization, $x$ is scaled to match the proper scales of the values of velocity fields (about 1500 - 4500 m/s). We calculate the minimum and maximum values, $v_{min}$ and $v_{max}$, from the training set of velocity fields, which are used for denormalization. \\n\\n**Why are the residuals of InversionNet and VelocityGAN omitted from Table 1?**\\n\\nInversionNet and VelocityGAN are examples of end-to-end methods that do not include the forward model; therefore, the residual of the measurement-consistency term $\\\\| y-A(x) \\\\|$ cannot be computed.\\n\\n**Given that the OpenFWI paper does not provide the gradient implementation of the forward model, how did the authors implement the gradient?**\\n\\nWe utilized the open-sourced Deepwave package [1], which implements the forward model using PyTorch. To compute the gradient and optimize the target velocity field, we employed torch.autograd.grad and torch.optim.Adam. \\n\\n[1] Richardson, A. (2023). Deepwave (v0.0.20). Zenodo. https://doi.org/10.5281/zenodo.8381177\\n\\n**What are the hyperparameter selection criteria across compared methods?**\\n\\nThe hyperparameters for the baselines are provided in Appendix A.3, and implementation details of the problems and training are provided in Appendix A.\\n\\n**Is there any supplementary material or code to facilitate the reproducibility?**\\n\\nWe will release our implementation code following publication. 
\\n\\n**There is a lack of ablation studies on important algorithm design parameters, such as the number of likelihood steps per iteration, the optimization threshold t_s, and sensitivity to the optimizer configurations.**\\n\\nAs noted by the reviewer, ablation studies on key hyperparameters and their effectiveness are presented in Figure 5 in Section 4.4 (Ablation studies). The figures highlight that selecting an appropriate step size (such as $\\\\alpha=5e-3$ in that case) is essential. Performing more likelihood iterations per likelihood step leads to better performance. Furthermore, performing likelihood steps only after $t < t_s$ (e.g., $t_s/T=0.25$) of the reverse process achieves comparable results with significantly reduced computational time. These findings highlight the flexibility and efficiency of our algorithm in addressing inverse problems within scientific domains, making it practical for real-world use.\"}", "{\"title\": \"Response to Reviewer Liuf (part 2)\", \"comment\": \"**Question 1: What is physics constrained, please define mathematically**\\n\\nThe goal of solving an inverse problem is to recover $x$ from the measurements $y$, \\n$$y=A(x)+n,$$\\nwhere $A$ is the physical forward model, such as PDEs and some physical constraints, and $n$ is additive noise. Then, the solution can be obtained by solving the following physics-constrained optimization problem,\\n$$\\\\min_x \\\\frac{1}{2} \\\\| y-A(x)\\\\|_2^2 + \\\\lambda R(x),$$\\nwhere $L(x)=\\\\frac{1}{2} \\\\| y-A(x)\\\\|_2^2$ is an objective function that stems from the likelihood of alignment with the physics constraints. Therefore, \\\"physics-constrained\\\" means that the solutions are constrained through the objective $\\\\frac{1}{2} \\\\| y-A(x)\\\\|_2^2$.\\n\\n**Finally in ADMM (eq 12) there is another term for Lagrange multiplier that you are missing. The solution of your problem is different than the original problem. 
/ Question 4: Why don't you use Lagrange multipliers for (12), add a term p^T(z-x).**\\n\\nFirst, we reformulate the optimization problem using the half quadratic splitting (HQS) method, which is usually used to solve optimization problems in $x$ of the following form,\\n$$\\\\min_x f(x)+g(x).$$\\nThis can be reformulated as\\n$$\\\\min_{x,z} f(x)+g(z)+\\\\mu \\\\| x -z \\\\|^2.$$\\n\\nHowever, the alternating direction method of multipliers (ADMM) mentioned by the reviewer usually considers the following form of an optimization problem with two sets of variables,\\n$$\\\\min_{x,z} f(x)+g(z) \\\\quad s.t. Ax+Bz=c,$$\\nwhere the augmented Lagrangian for this problem is\\n$$L_\\\\rho(x, z, y) = f(x) + g(z) + y^T(Ax+Bz - c) + \\\\rho \\\\| Ax+Bz - c\\\\|^2.$$\\nThis is not consistent with our formulation, since a single variable $x$ is optimized in our problem. Therefore, we respectfully disagree with the reviewer\\u2019s recommendation to add the term $p^T (z-x)$.\\n\\n**Question 5: Can you clarify the overall algorithm?**\\n\\nThe overall algorithm of our method is presented in Algorithm 1 in Appendix A.3 (page 18).\\nA noisy sample is drawn from a normal distribution, $x_T \\\\sim N(0, I)$. During the prior step, we employ the DDIM sampling scheme, which transitions from time step $t_{k+1}$ to $t_{k}$ and obtains the denoised estimate $x_k$. If the current time step satisfies $t_k < t_s$, we perform $N$ gradient updates for the likelihood step. After completing the reverse process, we obtain the solution $x_0$, which satisfies the given physical constraints. \\n\\n**Question 6: How do you ensure that you fit the data to some given tolerance?**\\n\\nOur inverse problems include highly nonlinear and complex forward models, where the optimization problem has no closed-form solution and it is not straightforward to obtain theoretical error bounds on the solution of the optimization problem. 
Instead, we empirically demonstrate that our method outperforms existing state-of-the-art algorithms for solving inverse problems, and we conducted thorough ablation studies on our hyperparameters, including the step size of the optimizer, the number of iterations within a likelihood step, and the starting time of the likelihood steps.\"}", "{\"summary\": \"This paper proposes PCDM (physics-constrained diffusion model), an inverse problem solver that leverages a diffusion model as a plug-and-play prior. PCDM uses the idea of variable splitting and proposes to solve the underlying optimization problem with implicit diffusion model regularization. The authors demonstrate its application in full-waveform inversion, data assimilation, and topology optimization.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Applying plug-and-play diffusion model methods to physics-constrained inverse problems is relatively new to the diffusion model community.\\n2. The paper is generally easy to follow.\", \"weaknesses\": \"1. The proposed PCDM appears to be mathematically equivalent to a special case of the algorithm in Li et al. [1] (specifically, the case using Tweedie's formula). The claim of algorithmic novelty is questionable (line 100).\\n2. The \\\"physics-constrained\\\" aspect really comes from the inverse problem itself instead of the novel algorithmic design. Most existing gradient-based plug-and-play diffusion model methods can incorporate that physics loss, such as DiffPIR [2], DPS, DAPS [3], RED-diff [4], [5]. These methods are not compared or discussed in the paper. \\n3. The experimental comparison excludes many recent and relevant algorithms. For example DiffPIR [2] and DAPS [3], RED-diff [4]. \\n4. Reproducibility concerns: important experimental and implementation details are insufficiently documented. See more concrete questions in the next section.\\n5. 
There is a lack of ablation studies on important algorithm design parameters, such as the number of likelihood steps per iteration, the optimization threshold $t_s$, and sensitivity to the optimizer configurations. \\n\\n\\n[1] : Li, Xiang, et al. \\\"Decoupled data consistency with diffusion purification for image restoration.\\\"\\u00a0_arXiv preprint arXiv:2403.06054_\\u00a0(2024).\\n[2] : Zhu, Yuanzhi, et al. \\\"Denoising Diffusion Models for Plug-and-Play Image Restoration.\\\"\\u00a0_arXiv preprint arXiv:2305.08995_\\u00a0(2023).\\n[3] : Zhang, Bingliang, et al. \\\"Improving diffusion inverse problem solving with decoupled noise annealing.\\\"\\u00a0_arXiv preprint arXiv:2407.01521_\\u00a0(2024).\\n[4] : Mardani, Morteza, et al. \\\"A Variational Perspective on Solving Inverse Problems with Diffusion Models.\\\"\\u00a0_The Twelfth International Conference on Learning Representations_.\\n[5] : Peng, Xinyu, et al. \\\"Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance.\\\"\\u00a0_Forty-first International Conference on Machine Learning_. 2024.\", \"questions\": \"1. I'm a bit surprised at how well the Opt w/o diff baseline can recover the large structure of the ground truth, as shown in Figure 2 and Table 1. This contrasts with traditional FWI literature findings [1] and my own experimental validation on the OpenFWI dataset. I'm curious how the authors implement the FWI problem and the corresponding baselines. More specifically,\\n\\t1. What exactly is the Opt w/o diff baseline in Table 1? Is that the Adam optimizer? What initialization strategy was employed? What are the specific hyperparameters used to report the results? \\n\\t2. Why are the residuals of InversionNet and VelocityGAN omitted from Table 1? \\n\\t3. Given that the OpenFWI paper does not provide the gradient implementation of the forward model, how did the authors implement the gradient? \\n2. What are the hyperparameter selection criteria across compared methods? \\n3. 
Is there any supplementary material or code to facilitate the reproducibility?\\n\\n[1] : Virieux, Jean, and St\\u00e9phane Operto. \\\"An overview of full-waveform inversion in exploration geophysics.\\\"\\u00a0_Geophysics_\\u00a074.6 (2009): WCC1-WCC26.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Retain my score\", \"comment\": \"Not only did you not answer my questions; your answer shows deep gaps in understanding optimization theory.\\n\\nTo start, you clearly do not define constraints in a way that optimization theory does. What does it mean \\\"constrained by the objective $\\\\frac{1}{2} | y-A(x)|_2^2$\\\"? Typically one would formulate it as $\\\\frac{1}{2} | y-A(x)|_2^2 \\\\le \\\\epsilon$ or something like that. The misfit $\\\\frac{1}{2} | y-A(x)|_2^2$ is just a number; it's not a constraint on $x$. \\nAlso, $A$ is typically not a PDE! It is the solution operator. This is a huge difference. While a PDE is typically an unbounded operator, the solution operator is typically compact. This is why inverse problems are ill-posed (rather than just ill-conditioned).\\n\\nSecond, the claim that $$\\\\min_x f(x)+g(x).$$ can be reformulated by $$\\\\min_{x,z} f(x)+g(z)+\\\\mu | x -z |^2.$$ is simply wrong!\\nI strongly suggest you try to do this for a simple problem and see what you get (even in 1D). You need the Lagrange multiplier to make them equivalent.\\n\\nOverall, this paper shows fundamental gaps in optimization theory and inverse problems. The authors would do their reputation good if they withdrew the paper and re-wrote what they want to say, asking some advice from someone who is immersed in optimization.\"}", "{\"summary\": \"The paper proposes doing MAP estimation using a diffusion plug and play prior (PnP). 
Namely, the paper aims at solving\\n$$argmin \\\\|y - \\\\mathcal{A}(x)\\\\| + \\\\lambda \\\\mathcal{R}(x).$$\\n\\nTo do so, it follows the traditional PnP route by using ADMM to split this into solving two proximal problems:\\n\\n$$ z_{i+1} = argmin_{z} \\\\mathcal{L}_\\\\mu(z, x_i) $$\\n\\n$$ x_{i+1} = argmin_{x} \\\\mathcal{L}_\\\\mu(z_{i+1}, x) $$\\n\\nFinally, it replaces the prior proximal problem by a forward backward (with one step) sampling, namely equation (15).\\nIt then evaluates the approach in non-linear problems coming from physics.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper proposes an adapted method to solving several physics problems where adapting to a physical constraint is cast as having high likelihood. The numerical applications are relevant.\", \"weaknesses\": \"My main concern is the novelty aspect of the paper. Indeed, several papers have investigated the applications of pretrained diffusion generative models as PnP priors. In particular, Algorithm 1 of [1] is essentially the same as the one proposed in this paper. Unless I'm mistaken, this makes the only novelty in this paper w.r.t. [1] to be the physical applications, which are indeed interesting. But I do not reckon it is worth being accepted to ICLR.\\n\\nFurthermore, even if the proposed algorithm is conceptually different, it is still part of the broad Plug and Play family and I would expect at least a comparison with [1] or any other Plug and Play with diffusion paper.\\n\\n\\n[1] Denoising Diffusion Models for Plug-and-Play Image Restoration\\nYuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, Luc Van Gool; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 1219-1229 \\n\\n[2] Provably robust score-based diffusion posterior sampling for plug-and-play image reconstruction.\\n Xu, Xingyu, and Yuejie Chi. 
arXiv preprint arXiv:2403.17042 (2024).\\n\\n[3]Graikos, Alexandros, et al. \\\"Diffusion models as plug-and-play priors.\\\" Advances in Neural Information Processing Systems 35 (2022): 14715-14728.\\n\\n[4] F. Coeurdoux, N. Dobigeon and P. Chainais, \\\"Plug-and-Play Split Gibbs Sampler: Embedding Deep Generative Priors in Bayesian Inference,\\\" in IEEE Transactions on Image Processing, vol. 33, pp. 3496-3507, 2024, doi: 10.1109/TIP.2024.3404338.\\n\\n[5] Wu, Zihui, et al. \\\"Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors.\\\" arXiv preprint arXiv:2405.18782 (2024).\\n\\n[6] Wang, Hengkang, et al. \\\"DMPlug: A Plug-in Method for Solving Inverse Problems with Diffusion Models.\\\" arXiv preprint arXiv:2405.16749 (2024).\", \"questions\": \"For the major point, see weaknesses.\\n\\nMinor questions and remarks.\\n\\n* Is the left term in eq(6) $x_{t-1}$ ? Otherwise it is not a sampling process, as it does not evolve through time.\\n* What is $ \\\\hat{\\\\epsilon}_{t}$ in equation (15) ? \\nIs it equation (7) with $x_{t_k}$?\\n* Equation (15) mixes indexes between $t_k$ and $t$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a variable splitting method for solving physics-constrained inverse problems with diffusion model priors. Throughout the reverse diffusion process, the method alternates between two optimization problems: one to update the noisy estimated image with the diffusion model as a regularizer, and one to enforce data/physics constraints. The authors present experiments on full-waveform inversion, data assimilation, and topology optimization, showing quantitative and qualitative improvement upon baselines in all three applications.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed method is simple and intuitive. 
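The PnP splitting summarized in this review can be sketched in a few lines. The following toy illustration uses a soft-thresholding denoiser in place of the diffusion-prior step and a linear operator (so the data step has a closed form); it is not any of the cited implementations:

```python
import numpy as np

def pnp_splitting(y, A, denoise, mu=1.0, n_iters=200):
    """Toy plug-and-play splitting loop: alternate a prior step, where a
    denoiser replaces prox_R, with a quadratic data-consistency step
    (closed form only because A is linear in this sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    M = np.linalg.inv(A.T @ A + mu * np.eye(n))   # for the data step
    for _ in range(n_iters):
        z = denoise(x)                  # "prior" step (plug-and-play)
        x = M @ (A.T @ y + mu * z)      # data-consistency step
    return x

# Soft-thresholding as a stand-in denoiser (a sparsity prior).
soft = lambda v, t=0.05: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 40))
x_true = np.zeros(40)
x_true[[3, 17, 31]] = [1.0, -2.0, 1.5]  # sparse ground truth
y = A @ x_true
x_hat = pnp_splitting(y, A, soft)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the diffusion-PnP papers cited above, the `denoise` call is replaced by one (or a few) reverse diffusion steps, which is the step the review's Equation (15) refers to.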
Although the methodology lacks technical novelty (see weaknesses), it\\u2019s helpful to see that such a simple extension of DPS and other plug-and-play diffusion-based inverse solvers may already go a long way in handling physics inverse problems.\", \"Validation is done on three very different tasks, and task-specific baselines and the DPS baseline are compared against for each task.\"], \"weaknesses\": [\"The technical contribution is marginal. The idea of variable splitting for diffusion-based inverse solving is not new (Equation 16 is similar to the proximal optimization step in Equation 8 of Song and Shen et al. 2022). The main difference in this work is that there may not be a closed-form solution to Equation 16, so iterative gradient-based optimization is used at each likelihood step.\", \"In Tables 1 and 2, the smaller number of reverse steps used by PCDM is touted. This is a little misleading, as the Table 1 caption says that PCDM involves 1000 likelihood iterations in addition to 200 reverse steps. For expensive forward models, it may be the case that these 1000 likelihood iterations are very costly. Also, a clarifying question: does one likelihood iteration mean an entire optimization round of Equation 16, or do the 1000 likelihood iterations account for all the gradient steps needed to solve Equation 16 throughout the algorithm?\", \"I have concerns about how fair the comparison to DPS is. In Figure 3(b), the DPS reconstruction clearly doesn\\u2019t match the visual statistics of Kolmogorov flow. I would expect DPS to at least produce something that appears visually plausible even if it doesn\\u2019t agree with the physical model. For example, in Figure 4 of SDA (Rozet and Louppe 2023) and Figure 5 of Feng et al. 2024, the DPS reconstructions at least look qualitatively reasonable. I would also be curious how hyperparameters for DPS were chosen.\", \"---\"], \"references\": \"Song and Shen et al. 
\\u201cSolving Inverse Problems in Medical Imaging with Score-Based Generative Models.\\u201d ICLR 2022.\\n\\nRozet and Loupe. \\u201cScore-based Data Assimilation.\\u201d NeurIPS 2023.\\n\\nFeng et al. \\u201cNeural Approximate Mirror Maps for Constrained Diffusion Models.\\u201d arXiv 2024.\", \"questions\": [\"Please comment on how hyperparameters for baselines, including DPS, were chosen. I recommend making an appendix to include such details.\", \"Often it makes more sense to think of physics constraints as priors (i.e., checking whether a solution satisfies a physical model doesn\\u2019t involve the observed measurements). Does it make sense in that case to move the physics-consistency term into Equation 12 as an additional regularizer?\", \"It\\u2019s surprising that \\u201cOpt w/o diff\\u201d in Table 1 has the worst data residual, given that it only optimizes the likelihood term. The authors suggest that this is because it struggles with local minima, but I was under the impression that adding a diffusion regularizer would only complicate the optimization landscape. I would appreciate comments from the authors on why they believe \\u201copt w/o diff\\u201d struggles to fit the data and whether they observed the same trend with the other tasks (why wasn\\u2019t opt w/o diff included as a baseline for the other tasks?).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank the authors for posting individual responses. The rebuttal helps clarify several points but the novelty concern still remains unaddressed.\\n\\n**Novelty** If the primary difference is the starting iteration of the likelihood step, I don't think the proposed PCDM can be considered a novel algorithm. 
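The review's point that Equation 16 has no closed form for nonlinear forward models (so each likelihood step is itself an inner optimization, each iteration costing one forward-model evaluation) can be illustrated with a toy proximal data step. `A` is linear here only so the iterative solve can be checked against the closed-form minimizer:

```python
import numpy as np

def likelihood_step(y, A, z, mu=1.0, lr=0.01, n_inner=4000):
    """Toy iterative proximal data step,
        min_x 0.5*||y - A(x)||^2 + (mu/2)*||x - z||^2,
    solved by gradient iterations, as one would for a nonlinear A that
    admits no closed form. Each inner iteration costs one evaluation of
    the forward model and its gradient."""
    x = z.copy()
    for _ in range(n_inner):
        x -= lr * (A.T @ (A @ x - y) + mu * (x - z))
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
y = A @ rng.normal(size=6)
z = np.zeros(6)
x_hat = likelihood_step(y, A, z)
# Because A is linear here, the same quadratic has a closed-form minimizer:
x_closed = np.linalg.solve(A.T @ A + np.eye(6), A.T @ y + z)
gap = np.linalg.norm(x_hat - x_closed)
```

This also makes the cost-accounting question concrete: whether "1000 likelihood iterations" counts outer proximal solves or total inner gradient steps changes the number of forward-model calls by a large factor.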
While this detail may have some impact on computational efficiency, it is unclear if this constitutes sufficient algorithmic novelty for ICLR publication. I also note that the current manuscript does not give adequate credit to [1] despite its apparent influence on your proposed method. \\n\\nFurthermore, I'm not sure there is enough new knowledge or sufficient value for the community. The only thing I learned from the paper is that the authors show that an existing algorithm works well on a set of known and curated toy problems different from the common image restoration tasks, which does not provide sufficient new insights. \\n\\nA stronger contribution would involve addressing previously unresolved challenges or introducing new ideas and insights.\\n\\n\\n**Residual of InversionNet and VelocityGAN** While I understand that InversionNet and VelocityGAN are end-to-end networks that do not incorporate the forward model explicitly, these methods still produce predictions $\\\\hat{x}$. Using the Deepwave implementation, it should be possible to compute the residual $\\\\|y-A(\\\\hat{x})\\\\|$, enabling a fair comparison of measurement consistency across different methods. \\n\\n**hyperparameter selection criteria** Appendix A.3 reports the hyperparameter choices but does not provide insight into how these values were selected or tuned. My question pertains to the criteria and process used for selecting hyperparameters across the compared methods. 
Directly borrowing hyperparameters from prior work without adaptation seems inappropriate, especially since your experimental setups differ significantly from the original papers.\\n\\nI appreciate the authors\\u2019 efforts in responding to the reviewers and revising the manuscript, but based on the points above, I remain concerned about the degree of novelty and clarity in key experimental details.\\n\\n[1]: Decoupled data consistency with diffusion purification for image restoration, arXiv, 2024.\"}", "{\"title\": \"Response to Reviewer RYMp\", \"comment\": \"We appreciate the reviewer's insightful comments and constructive feedback. Our responses are given below:\\n\\n**My main concern is the novelty aspect of the paper. Indeed, several papers have investigated the applications of pretrained diffusion generative models as PnP priors. Algorithm 1 of [1] is essentially the same as the one proposed in this paper. Unless I'm mistaken, this makes the only novelty in this paper w.r.t. [1] to be the physical applications, which are indeed interesting.**\\n\\nApplications to scientific and engineering domains are not trivial. Existing state-of-the-art plug-and-play algorithms are primarily applied in image restoration tasks, where the forward models are typically degradation operations represented by linear matrices or convolutional operators, which are less expensive and relatively less complex. In contrast, solving inverse problems in scientific and engineering domains involves forward models based on physical simulations or partial differential equations, which are significantly more computationally intensive and complex. \\n\\nThe key difference between [1] and our proposed method lies in how likelihood steps are performed. 
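The residual comparison requested here is cheap to compute for any method's prediction, including end-to-end networks that never use the forward model at inference. A minimal sketch, with a toy linear `forward` standing in for the Deepwave wave-equation simulator:

```python
import numpy as np

def measurement_residual(y, forward, x_hat):
    """Residual ||y - A(x_hat)||: computable for any predictor's output,
    e.g. InversionNet / VelocityGAN predictions, by pushing x_hat through
    the simulator. `forward` is a toy stand-in here."""
    return float(np.linalg.norm(y - forward(x_hat)))

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 4))
forward = lambda x: A @ x
x_true = rng.normal(size=4)
y = forward(x_true)
r_true = measurement_residual(y, forward, x_true)
r_off = measurement_residual(y, forward, x_true + 0.1)
```

The residual is zero at the ground truth and strictly positive for a perturbed estimate, which is what makes it a usable measurement-consistency metric across methods.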
While [1] employs a single gradient update for the likelihood steps, our method introduces the flexibility of performing multiple gradient updates ($N$) and applies the likelihood steps selectively, focusing only on the later steps of the diffusion reverse process ($t<t_s$). Our observations indicate that likelihood steps have minimal impact during the early stages but become more effective later in the process. This design choice improves both efficiency and performance. \\n\\n[1] Denoising Diffusion Models for Plug-and-Play Image Restoration CVPRW, 2023.\\n\\n**Furthermore, ... I would expect at least a comparison with [1] or any other Plug and Play with diffusion paper.**\\n\\nAs noted by the reviewer, we have provided a brief explanation of the state-of-the-art methods including [1], and discussed their difference from our approach in Appendix A.3. Additionally, we conducted additional thorough experiments comparing these state-of-the-art algorithms with our method in Table 5, and Figure 6, 7 in Appendix B. Our proposed method outperforms the state-of-the-art comparisons in all evaluation metrics. \\n\\n**Is the left term in Eq 6 x_(t-1)? Otherwise, it is not a sampling process, as it does not evolve through time. / What is $ \\\\hat{\\\\epsilon}{t}$ in Eq 15? Is it Eq 7 with $x_{t_k}$? / Equation 15 mixes indexes between $t_k$ and $t$.**\\n\\nYes, the term should be $x_{t-1}$ / Yes, that is the same with $\\\\hat{\\\\epsilon_t}$ (previously Eq 7). \\nThe term $\\\\hat{\\\\epsilon_t}$ is the noise term in the DDIM sampling process which is a weighted combination of deterministic $\\\\epsilon_\\\\theta^{(t)} (x_t)$ and stochastic $\\\\epsilon \\\\sim N(0, I)$ component. For readability, we denote $x_{t_k}$ as $x_k$ in the revised manuscript to address the issue of mixing indexes. 
I appreciate the reviewer pointing out the typo.\"}", "{\"title\": \"Response to Reviewer Liuf (part 1)\", \"comment\": \"We appreciate the reviewer's insightful comments and constructive feedback. Our responses are given below:\\n\\n**The authors invented new jargon, \\\"physics constrained\\\" which means, what exactly? What is the constraint they are fulfilling? On which variables? How do you deal with the constraints? Lagrange multipliers? elimination? penalty?**\\n\\nThe term \\u201cphysics-constrained\\u201d is not new jargon. It has been used in scientific and engineering domains for many years [1, 2, 3, 4], where the solutions of the proposed methods are guided by underlying physical constraints or governing equations, aligning with our usage of the term. In our method, this is achieved by optimizing the penalty $\\\\| y-A(x) \\\\|_2^2$ during the diffusion reverse process, ensuring that the solution of the inverse problem $x$ is both physically plausible and adheres to the underlying physical constraints. \\n\\n[1] Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics, 2019. \\n\\n[2] Surrogate Modeling for Fluid Flows Based on Physics-Constrained Deep Learning without Simulation Data, Computer Methods in Applied Mechanics and Engineering, 2020.\\n\\n[3] Learning physical models that can respect conservation laws, ICML, 2023. \\n\\n[4] multi-fidelity physics constrained neural networks for a dynamical system, Computer Methods in Applied Mechanics and Engineering, 2024. \\n\\n**There is a huge branch of inverse problems that treat them as PDE constrained optimization. ... but clearly this is not one of these examples.**\\n\\nWe respectfully disagree with this statement. 
Our proposed method focuses on solving inverse problems in scientific and engineering domains, where the solution is obtained by optimizing a PDE-constrained objective with diffusion models as regularizers. This approach makes the solution more plausible and ensures it aligns with the underlying governing equation that the solution should satisfy. \\n\\n**Similarly, in section 3, the equations flow smoothly and I can easily understand how to get from (7) and the way to (10). Then you switch to section 3.2 and I cannot see how (11) and on is related to the previous section. / Question 2 / Question 3**\\n\\n**- Question 2: How to get from the Langevin dynamics of (8-9) to your optimization problem (11-12)**\\n\\nThe Langevin dynamics described in equation (7-8) is a well-known method for solving the optimization problem (10-11). It leverages a generative process to progressively transition from $x_T \\\\sim N(0, I)$ to the desired solution $x_0 \\\\sim p(x|y)$. The sampling process during a small time step, transitioning from $t$ to $t-1$, is governed by (7), which requires the computation of the posterior score function $\\\\nabla_{x_t} \\\\log p_t (x_t|y)$. From Bayes\\u2019 rule, this score function can be decomposed into two terms: the score function $\\\\nabla_{x_t} \\\\log p_t(x_t)$, which can be computed using a pre-trained diffusion model with trainset, and the likelihood term, $\\\\nabla_{x_t} \\\\log p_t (y|x_t)$. [5] uses a Gaussian approximation for the likelihood function $\\\\exp(-\\\\rho \\\\| y - A(x)\\\\|^2 )$ which evaluates how well the solution satisfies the physical constraints. Correspondingly, the likelihood gradient term is approximately $\\\\nabla_{x_t} \\\\log p_t (y|x_t) \\\\approx - \\\\rho \\\\nabla_{x_t} \\\\| y \\u2013 A(x) \\\\|_2^2$. 
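The Bayes-rule score decomposition described above can be made concrete with a toy posterior-Langevin sketch, where a standard Gaussian prior makes both score terms available in closed form (this illustrates the split itself, not the authors' sampler or a pre-trained diffusion model):

```python
import numpy as np

def posterior_langevin(y, A, rho=1.0, step=0.01, n_steps=2000, seed=0):
    """Unadjusted Langevin dynamics on the posterior score, split via
    Bayes' rule: score(x|y) = prior score + likelihood score.
    Toy setup: standard Gaussian prior (score = -x) and Gaussian
    likelihood approximation (score = -rho * A^T (A x - y))."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[1])
    for _ in range(n_steps):
        score = -x - rho * A.T @ (A @ x - y)
        x = x + step * score + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

A = np.array([[1.0, 0.0]])
y = np.array([2.0])
# For rho = 1, the posterior mean is (I + A^T A)^{-1} A^T y = (1, 0).
finals = np.stack([posterior_langevin(y, A, seed=s) for s in range(20)])
mean_est = finals.mean(axis=0)
```

In diffusion-based samplers the prior score `-x` is replaced by the learned score network at noise level t, which is where the Gaussian approximation of the likelihood term enters.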
\\n\\n**- Question 3: Why and how are the two related**\\n\\nRoughly speaking, the sampling process from $t$ to $t-1$ can be interpreted as consisting of two steps: (1) a reverse step in the pre-trained (unconditional) diffusion generative process, transitioning from $x_t$ to $x_{t-1}$, and (2) a one-step gradient update to optimize the likelihood term $\\\\min_x \\\\frac{1}{2} \\\\| y \\u2013 A(x) \\\\|_2^2$ as described in Equation (10). This can also be viewed as an optimizing equation (10), where the classical regularizer is replaced by the pre-trained diffusion reverse process. \\n\\n**- From equation (8,9) to equation (10, 11)**\\n\\nHowever, due to the one-step gradient update for the likelihood at every reverse time step \\u2013 both $\\\\nabla_{x_t} \\\\log p_t (x_t)$ and $\\\\nabla_{x_t} \\\\log p_t(y|x_t)$ are computed simultaneously at every time t \\u2013 this na\\u00efve approach can lead to slow inference times or suboptimal performance if the effective number of likelihood updates is insufficient to fully optimize the object. To address these limitations, it is necessary to employ accelerated diffusion sampling while enabling multiple gradient updates for the likelihood in an effective way. To this end, we revisit the original optimization problem (equation 10) and reformulate it inspired by variable splitting methods (equation 11). Such separation provides greater flexibility and efficiency, we can leverage accelerated sampling (DDIM) to reduce the number of reverse time steps. 
Moreover, we use multiple gradient updates within a single likelihood step and perform the updates concentrated on time steps where they are most effective.\"}", "{\"summary\": \"The authors address inverse problems by leveraging diffusion models as priors.\\nThey formulate the problem as minimizing a composite objective function comprising a likelihood term, which enforces physics constraints, and a regularization term defined by a diffusion model.\\nAs the resulting problem is difficult to solve directly, the authors utilizes a variable splitting scheme that alternates between minimization over the regularizer and the likelihood.\\nThe regularization step is handled through a backward diffusion step, while the likelihood step is performed by minimizing and L2-regularized inverse problem.\\nThe authors validate their approach on a set of three problems.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Solve inverse problems that arise in physics-constrained setups using a variable splitting scheme.\", \"weaknesses\": \"**Insufficient coverage of the related work**\", \"the_authors_provide_a_high_level_overview_of_two_lines_of_research\": \"end-to-end supervised approaches and unsupervised approaches.\\nWhile, there is a wealth of methods in inverse problems with diffusion models priors, few are mentioned.\\nNotably, related works that leverage variable splitting schemes, also known as Split Gibbs sampling, are not discussed; for reference, see [1, 2, 3] and the corresponding Related Work sections.\\n\\n**Methodological ambiguities**\\n\\nSection 3.3 introduces the regularization step without a clear justification for its formulation. 
Specifically, _why it has this form?_.\\nFurthermore, the method employs two regularization hyperparameters, $\\\\lambda$ and $\\\\mu$, yet only $\\\\mu$ appears in the update equations.\\nBesides, the regularization step is independent of these hyperparameters.\\n\\n**Lack of implementation details**\\n\\n- The paper does not address the sensitivity of the method to its hyperparameters, namely the early stopping criterion and the timing of triggering the optimization (the parameter $t_s$ in line 256).\\n- The experimental section lacks specific implementation details, such as the hyperparameters for DPS and SDA; details regarding the used pre-trained diffusion models.\\n- The reported results raises some concerns In Table 1, DPS performance appears almost identical to the method that omits the prior (Opt w/o diff), which warrants further clarification as inverse problems are severely ill-posed hence pure optimization often yields an inconsistent solutions\\n\\n---\\n.. [1] Zhu, Yuanzhi, et al. \\\"Denoising diffusion models for plug-and-play image restoration.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n.. [2] Wu, Zihui, et al. \\\"Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors.\\\" arXiv preprint arXiv:2405.18782 (2024).\\n\\n.. [3] Xu, Xingyu, and Yuejie Chi. \\\"Provably robust score-based diffusion posterior sampling for plug-and-play image reconstruction.\\\" arXiv preprint arXiv:2403.17042 (2024).\\n \\n.. [4] Rozet, Fran\\u00e7ois, and Gilles Louppe. \\\"Score-based data assimilation.\\\" Advances in Neural Information Processing Systems 36 (2023): 40521-40541.\", \"questions\": \"**Specific questions**\\n\\nIn the experiments, the formulation of the inverse problem in Experiments 4.1 and 4.3 is unclear, namely\\n \\n- Experiment 4.1: is the operator $A$ a discretization of the d\\u2019Alembert operator? 
Additionally, is $s(r,t)$ provided within the dataset?\\n - Experiment 4.3: Given that the problem is defined as a constrained optimization, how does the operator $A$ transform $x$ to yield the observation $y$?\\n\\nWhy was SDA excluded from Experiments 4.1 and 4.3? Although originally developed for data assimilation, it remains applicable as an inverse problem method.\\nSimilarly, why was DPS omitted from Experiment 4.3?\\n\\n\\n**Broader questions**\\n\\n- Could this method be applied to inverse problems in image restoration, and how would it compare to existing algorithms in the literature?\\n- The paper addresses problems of moderate dimensionality, approximately $5000$; have the authors considered Sequential Monte Carlo methods [1, 2, 3], which offer stronger theoretical guarantees?\\nGiven this dimensionality, propagating multiple particles in parallel is feasible and would overcome mode-collapse.\\n\\n---\\n.. [1] Dou, Zehao, and Yang Song. \\\"Diffusion posterior sampling for linear inverse problem solving: A filtering perspective.\\\" The Twelfth International Conference on Learning Representations. 2024.\\n\\n.. [2] Cardoso, Gabriel, Janati Yazid,, Sylvain Le Corff, and Eric Moulines. \\\"Monte Carlo guided Denoising Diffusion models for Bayesian linear inverse problems.\\\" The Twelfth International Conference on Learning Representations. 2023.\\n\\n.. [3] Wu, Luhuan, et al. 
\\\"Practical and asymptotically exact conditional sampling in diffusion models.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Rebuttal\", \"comment\": \"Dear Authors,\\n\\nAs the author-reviewer discussion period is approaching its end, I strongly encourage you to read the reviews and engage with the reviewers to ensure the message of your paper has been appropriately conveyed and any outstanding questions have been resolved.\\n\\nThis is a crucial step, as it ensures that both reviewers and authors are on the same page regarding the paper's strengths and areas for improvement.\\n\\nThank you again for your submission.\\n\\nBest regards,\\n\\nAC\"}", "{\"title\": \"General response\", \"comment\": \"We would like to appreciate all the reviewers for their constructive comments which have led to the revision, and we believe without a doubt have improved the quality of the manuscript. The revised manuscript has been uploaded. We have addressed the individual reviewer's comments below respectively, and here the summary of the major changes is as follows:\\n\\n1.\\tWe added ablation studies in Figure 5 and Section 4.4 in page 10. \\n\\n2.\\tWe added details of the architectures and training procedures, and implementation details for our methods in Appendix A.1 and 2. \\n\\n3.\\tWe have provided a brief explanation of the state-of-the-art methods and discussed their difference from our approach in Appendix A.3. Additionally, we conducted additional experiments comparing these state-of-the-art algorithms with our method in Table 5, and Figure 6, 7 in Appendix B.\\n\\n4. We added the overall algorithm of our method in Algorithm 1 (Appendix A.3).\"}", "{\"title\": \"Response to Reviewer LNSc (part 1)\", \"comment\": \"We appreciate the reviewer's insightful comments and constructive feedback. 
Our responses are given below:\\n\\n**Insufficient coverage of the related work**\\n\\nAs noted by the reviewer, we have provided a brief explanation of the state-of-the-art methods and discussed their difference from our approach in Appendix A.3. Additionally, we conducted additional thorough experiments comparing these state-of-the-art algorithms with our method in Table 5, and Figure 6, 7 in Appendix B. \\n\\n**Methodological ambiguities**\\n\\nThe regularization step corresponds to the DDIM sampling scheme, as described in Equation (6) of Section 3.1. By utilizing DDIM, we accelerate reverse sampling with fewer steps.\\nThe $\\\\lambda$ serves as weight coefficients between regularizer $R(z)$ and proximal term $\\\\| z- x_k \\\\|$, and $\\\\mu$ serves as weight coefficient between measurement-consistency term $\\\\|y-A(x)\\\\|$ and proximal term $\\\\|z \\u2013 x_k\\\\|$. Instead of explicitly tuning these hyperparameters, we implicitly implement the proximal operator $\\\\|z-x_k\\\\|$ through sampling or optimizing starting from the output of the previous steps. During the regularizer steps, sufficiently small changes between $t_k$ and $t_{k-1}$ ensure that the state of the next time step, regularized by the diffusion model, remains close to the state from the previous step. In our likelihood steps, we set a proper step size $\\\\alpha$ and limit the number of likelihood updates $N$ for searching the solution near the previous state while strictly enforcing the physical constraints, rather than relying on balancing weights between measurement consistency and the proximal term. This approach removes the need for the $\\\\mu$ and $\\\\lambda$ by implicitly alternating between reverse sampling and optimization, with each process initialized using the output of the previous steps. Empirical studies validating the effectiveness of our hyperparameters are provided in Figure 5 in Section 4.4 Ablation studies. 
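A schematic, heavily simplified sketch of the alternation described here (hypothetical, not the authors' code): a toy shrinkage update stands in for the DDIM regularizer step, and the N likelihood gradient updates are applied only for t < t_s, warm-started from the previous state, which implements the implicit proximal coupling in place of explicit mu and lambda weights:

```python
import numpy as np

def alternating_sampler(y, A, T=50, t_s=25, n_lik=5, lr=0.05, seed=5):
    """Toy alternation: a shrinkage step toward the prior mean stands in
    for the DDIM reverse step; gradient updates on the data misfit run
    only for t < t_s, warm-started from the previous state."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[1])
    for t in reversed(range(T)):
        x = 0.9 * x                       # toy regularizer step
        if t < t_s:                       # likelihood steps only late on
            for _ in range(n_lik):
                x -= lr * A.T @ (A @ x - y)
    return x

A = np.diag([1.0, 2.0, 3.0])
x_true = np.ones(3)
y = A @ x_true
x_hat = alternating_sampler(y, A)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Even in this toy, the final iterate balances the shrinkage bias against data consistency, which is the trade-off the step size alpha and update count N control in the described method.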
\\n\\n**Lack of implementation details**\\n\\nWe included ablation studies in Figure 5 and Section 4.4. Additionally, details of the architectures training procedures, and implementation details for both our methods and the baselines are provided in Appendix A. \\n\\n**Specific questions**\\n\\n**Experiment 4.1 ... dataset?**\\n\\nRoughly speaking, the velocity field is represented as $x = v(r)$, and the seismic measurement is represented as $y= p(r,t)$. The wave equation is expressed as $A(x)y = s$, where $A$ incorporates the Laplace operator and second-order time derivatives. Consequently, the solution is given by $y=A^{-1}(x)s$. This formulation is implemented using the finite difference method. For the source function $s(r, t)$, the locations and waveform of source functions are predefined in the benchmark paper [1] and further details are described in Appendix A.1 Problem details. \\n\\n[1] OpenFWI: Large-scale Multi-structural Benchmark Datasets for Full Waveform Inversion, NeuriPS 2022. \\n\\n**Experiment 4.3 ... observation?**\\n\\nTopology optimization includes three constraints; compliance $C(x)=U^T (x) K^T (x)U(x)$ near to zero, elastic equilibrium $K(x)U(x)=F$, and volume constraint, $\\\\frac{1}{N}\\\\sum_i x_i -V_0 \\\\leq 0$, where $K(x)$ and $U(x)$ are the global stiffness and displacement respectively, and $F$ is given loads. Therefore, we implement the constraint optimization problem as, $\\\\underset{x}{argmin} \\\\| K(x)U(x) - F\\\\|_2^2 + c_1 \\\\cdot \\\\| \\\\mathcal{C}(x) - 0\\\\|_2^2 + c_2 \\\\cdot ReLU(\\\\frac{1}{N}\\\\sum_i x_i - V_0)$, where given loads and volume conditions can be considered as observations and the compliance and elastic equations can be considered as the forward operator. We set the coefficients with $c_1=1e-4$ and $c_2=1$. 
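The topology-optimization penalty objective given above can be written down directly. Below is a toy evaluator, where `K`, `U`, and `compliance` are hypothetical stand-ins for the FEM stiffness/displacement solves (the real quantities come from a global stiffness-matrix solve under the given loads):

```python
import numpy as np

def topo_objective(x, K, U, F, compliance, V0, c1=1e-4, c2=1.0):
    """Penalty objective from the response:
        ||K(x)U(x) - F||^2 + c1*||C(x)||^2 + c2*ReLU(mean(x) - V0),
    with the stated coefficients c1 = 1e-4 and c2 = 1 as defaults."""
    equilibrium = np.sum((K(x) @ U(x) - F) ** 2)
    comp = c1 * compliance(x) ** 2
    volume = c2 * max(np.mean(x) - V0, 0.0)   # ReLU volume penalty
    return float(equilibrium + comp + volume)

# Toy stand-ins: diagonal stiffness, displacement solved exactly.
F = np.array([1.0, 1.0])
K = lambda x: np.diag(x)
U = lambda x: F / x                           # solves K(x) U = F exactly
compliance = lambda x: float(U(x) @ (K(x) @ U(x)))
val = topo_objective(np.array([2.0, 4.0]), K, U, F, compliance, V0=0.5)
```

Because `U` solves the equilibrium equation exactly in this toy, only the compliance and volume penalties contribute to the objective value.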
\\n\\n**Why was SDA excluded from Experiments 4.1 and 4.3?**\\n\\nDifferent from existing data assimilation methods, which restore each frame $x_i$ solely from the incomplete observation of itself, SDA uses surrounding frames $x_{i-k:i+k}$ within a window size $2k$ to restore the target frame $x_i$. While this approach is well-suited for sequential data, it is not directly applicable to other inverse problems that require reconstructing a single target frame $x$. \\n\\n**Broader questions**\\n\\n**Could this method be applied to inverse problems in image restoration?**\\n\\nYes, this algorithm is compatible with image restoration. However, in this paper, we focused on the scientific and engineering domains. While some physical problems could be considered as image restoration from sparse measurements, exploring such applications is outside the scope of our interest, as we aim to address problems that incorporate physical simulations or constraints.\"}", "{\"summary\": \"The paper propose a method for the solution of an inverse problem where the forward problem is a solution of some physical simulating.\\nThe claim is that the result of the algorithm produces solutions that do not only honour the prior but also obey the physics.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The results look interesting. The paper may be improved so the results will make sense.\", \"weaknesses\": \"To be honest I could not understand the paper, even though I have been working in this field for many years. The authors invented new jargon, \\\"physics constrained\\\" which means, what exactly? What is the constraint they are fulfilling? On which variables? How do you deal with the constraints? Lagrange multipliers? elimination? penalty?\\nThere is a huge branch of inverse problems that treat them as PDE constrained optimization. Clearly, this escaped from the authors. 
There is a large number of papers that introduce constraints into inverse problems (e.g 0 \\\\le x) but clearly this is not one of these examples. The authors should try to rewrite the paper and be a bit more precise about what they do,\\n\\nSimilarly, in section 3, the equations flow smoothly and I can easily understand how to get from (7) and the way to (10). \\nThen you switch to section 3.2 and I cannot see how (11) and on is related to the previous section. \\n\\nFinally in ADMM (eq 12) there is another term for Lagrange multiplier that you are missing. The solution of your problem is different than the original problem.\", \"questions\": \"1. What is physics constrained, please define mathematically\\n\\n2. How to get from the Langevin dynamics of (8-9) to your optimization problem (11-12)\\n\\n3. Why and how are the two related\\n\\n4. Why don't you use Lagrange multipliers for (12), add a term p^T(z-x)\\n\\n5. Can you clarify the overall algorithm?\\n\\n6. How do you ensure that you fit the data to some given tolerance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer uCwN (part 2)\", \"comment\": \"**I would appreciate comments from the authors on why they believe \\u201copt w/o diff\\u201d struggles to fit the data and whether they observed the same trend with the other tasks (why wasn\\u2019t opt w/o diff included as a baseline for the other tasks?).**\\n\\nTo provide strong evidence, we included loss trajectories (blue lines represent Opt w/o diff) and progressive states at each time step, as shown in Figures 6 and 7 (in Appendix B), to illustrate the optimization landscape. As these figures demonstrate, the only optimizing the likelihood term struggles with local minima, leading to poor performance despite rapid initial convergence. 
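For reference, the standard ADMM updates in scaled-dual form (following the usual presentation in the optimization literature), showing the Lagrange-multiplier term u that this review says is missing from Equation (12):

```latex
% Standard ADMM for  min_{x,z} f(x) + g(z)  s.t.  x = z,
% in scaled-dual form (u is the Lagrange multiplier the review refers to):
\begin{aligned}
x_{k+1} &= \operatorname*{arg\,min}_x \; f(x) + \tfrac{\mu}{2}\,\lVert x - z_k + u_k \rVert_2^2,\\
z_{k+1} &= \operatorname*{arg\,min}_z \; g(z) + \tfrac{\mu}{2}\,\lVert x_{k+1} - z + u_k \rVert_2^2,\\
u_{k+1} &= u_k + x_{k+1} - z_{k+1}.
\end{aligned}
```

Dropping the dual update (fixing u = 0) reduces this to half-quadratic splitting, i.e. the pure penalty form min_{x,z} f(x) + g(z) + (mu/2)||x - z||^2, which only recovers the constrained problem in the limit mu -> infinity — which is precisely the equivalence issue raised in this review and in the "Retain my score" comment above.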
In contrast, the variants incorporating the pre-trained diffusion model start poorly in the early stage but ultimately achieve better results. These models, with their ability to capture the data structure of velocity fields, can generate solutions that resemble plausible velocity fields. Given the ill-posed nature of inverse problems, the use of an appropriate regularizer is often crucial to obtaining plausible solutions. Diffusion models, with their expressive capacity to capture complex data structures, serve as powerful regularizers through their iterative generative process.\\n\\nIn the case of topology optimization, the SIMP method (based on the finite element method, which typically takes a long time to converge) is used in Figure 4 and Table 3 to present the optimized solution without diffusion. For this task, the training set of optimal topologies generated by SIMP is used to train the diffusion model. The evaluation metric \\\"% CE\\\" indicates the stability of the structure relative to the SIMP solution, which is a common metric in the related literature [1, 2]. The negative values in Figure 4 and Table 3 indicate that our method, which combines optimization with the diffusion model, generates more stable structures under the given boundary conditions.\\n\\n[1] Diffusion Models Beat GANs on Topology Optimization, AAAI, 2023.\\n\\n[2] Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation, NeurIPS, 2023.\"}" ] }
DZcmz9wU0i
Provable Convergence and Limitations of Geometric Tempering for Langevin Dynamics
[ "Omar Chehab", "Anna Korba", "Austin J Stromme", "Adrien Vacher" ]
Geometric tempering is a popular approach to sampling from challenging multi-modal probability distributions by instead sampling from a sequence of distributions which interpolate, using the geometric mean, between an easier proposal distribution and the target distribution. In this paper, we theoretically investigate the soundness of this approach when the sampling algorithm is Langevin dynamics, proving both upper and lower bounds. Our upper bounds are the first analysis in the literature under functional inequalities. They assert the convergence of tempered Langevin in continuous and discrete-time, and their minimization leads to closed-form optimal tempering schedules for some pairs of proposal and target distributions. Our lower bounds demonstrate a simple case where the geometric tempering takes exponential time, and further reveal that the geometric tempering can suffer from poor functional inequalities and slow convergence, even when the target distribution is well-conditioned. Overall, our results indicate that the geometric tempering may not help, and can even be harmful for convergence.
[ "Sampling", "Langevin", "Annealing" ]
Accept (Poster)
https://openreview.net/pdf?id=DZcmz9wU0i
https://openreview.net/forum?id=DZcmz9wU0i
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ypuUZM639t", "yKx9UJRN2d", "sQygoRRBQ6", "qjMFBrHHjS", "p5qAcgYgun", "mLey6kv20X", "mC0uggpsL2", "hIoyh4LLIe", "XIEjAH4JNF", "UW7Eo8Tbnj", "KeaMzd75Lk", "GF11jq2ewG", "FHZt1I3DmZ", "F4NLY75sF9", "DzfUghtYO3", "8jU9QXObhT", "5B2GeRntEe" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732050916440, 1732724904804, 1732229428163, 1731027160769, 1732208968105, 1732503435029, 1734466415761, 1732051019101, 1730765106105, 1737523670194, 1730706121902, 1732050872678, 1732050699958, 1730639870041, 1732050798954, 1732724870011, 1732840707656 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Reviewer_Exbp" ], [ "ICLR.cc/2025/Conference/Submission4913/Reviewer_mTXb" ], [ "ICLR.cc/2025/Conference/Submission4913/Reviewer_8HQm" ], [ "ICLR.cc/2025/Conference/Submission4913/Area_Chair_a5tY" ], [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Reviewer_8HQm" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4913/Reviewer_pomC" ], [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Reviewer_mTXb" ], [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Authors" ], [ "ICLR.cc/2025/Conference/Submission4913/Reviewer_Exbp" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer pomC\", \"comment\": \"We thank reviewer pomC for their attentive reading of our 
paper and positive review. We have integrated all their suggestions relating to presentation and typos. We agree with reviewer pomC that discussing when tempering can be useful in practice, as well\\nas comparing to vanilla Langevin is important, and do so in our general response to all reviewers.\"}", "{\"title\": \"Response to Reviewer 8HQm\", \"comment\": \"We thank the reviewer: are there any remaining concerns that could be addressed to update the score?\"}", "{\"title\": \"Answer to Reviewer mTXb\", \"comment\": \"Thank you! We will add this trick to the paper: it \\\"nearly\\\" covers the uniform case because it relies on the assumption that $\\\\lambda_0 > 0$. This is a reasonable assumption in practice, but not an exhaustive one in theory, where one could consider tempering schedules which do start at $0$.\\n\\nWe agree with the wording issue around \\\"recovers\\\" and will modify. Thank you for picking up on this!\"}", "{\"summary\": \"This work studies the convergence guarantee of geometric tempering for the Langevin diffusion and its time-discretization the Langevin algorithm. The authors prove a convergence rate under a general tempering schedule, demonstrating dependency on the isoperimetry of the intermediate probability measures, in particular their log-Sobolev constant. While this constant can be suitably controlled when both measures are strongly log-concave, the authors show that even when both proposal and target densities are unimodal, intermediate measures can suffer from a poor log-Sobolev constant that scales exponentially with the distance between the modes of proposal and target measures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"While I am not an expert in annealing or tempering algorithms for sampling, it seems that this is the first paper that proves the convergence of geometric tempering for the Langevin diffusion using functional inequalities, which is interesting. 
The negative results also provide a good example of why tempering may not work in practice, despite both proposal and target measures having suitable isoperimetry.\", \"weaknesses\": [\"The related work section could be better structured. For example, breaking into multiple paragraphs and adding paragraph titles could help with readability and following the discussion.\", \"The lower bound examples hold in dimension 1, and show exponentially bad dependence on the distance between modes. While these bounds are interesting, it is not very intuitive to me why it would be natural for the distance between modes to grow in fixed dimensions. On the other hand, in a high-dimensional setting, it is more intuitive that $m$ grows with square root of dimension. Could it be straightforward to (perhaps only intuitively) extend the lower bounds to high-dimensional settings?\"], \"questions\": [\"I believe for $KL(p_0, \\\\mu_0)$ to disappear in Corollary 5, one needs to set $\\\\lambda_0 = 0$. In that case, it would not be possible to choose $\\\\lambda_t = 1$ for all $t > 0$ in a continuous manner.\", \"If all $\\\\lambda_i$s are very close to 1 in Theorem 9, we are effectively running vanilla Langevin. In that case, why should we have exponential convergence time?\", \"The vanilla Langevin analysis only requires the log-Sobolev inequality and smoothness for discretization. Why do we additionally need dissipativity of proposal and target measures here?\", \"In fact, the Langevin algorithm is known to converge under extremely mild conditions, namely a weak Poincar\\u00e9 inequality (which holds for all locally bounded potentials, although without explicit control on the constant) and smoothness of the gradients, see e.g. [1] and references therein. 
Are there major challenges for obtaining convergence guarantees under (weak) Poincar\\u00e9 inequalities for the tempered Langevin algorithm?\", \"Is there a sense in which one can choose optimal proposal distributions $\\\\nu$ when we only know some information about $\\\\pi$?\", \"I believe a summation over $i$ is missing in Equation (12).\", \"Some typos:\", \"Line 152 missing absolute continuity before \\u201c... and $+\\\\infty$ otherwise\\u201d.\", \"Line 175: potential -> potentials\", \"Line 119, 233, 253, 467: missing parentheses in citation\", \"Line 238: satisfy -> satisfies\", \"Line 336, 339, 383: section \\u2026 -> Section \\u2026\", \"Line 420: are unknown -> is unknown\", \"Line 425: where we obtain\", \"A typo in Line 439 makes the sentence unreadable.\", \"---\", \"[1] A. Mousavi-Hosseini, T. Farghly, Y. He, K. Balasubramanian, M. A. Erdogdu. \\\"Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincar\\u00e9 Inequality\\\". COLT 2023.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your answer.\\n\\nThat's a nice trick for covering the uniform proposal! I think it could be worth adding as a remark in the paper. You say your bounds can \\\"nearly\\\" be extended using this trick; why \\\"nearly\\\"?\\n\\nRegarding Proposition 7, I would suggest removing or rephrasing the sentence on line 433, as the word \\\"recover\\\" can be interpreted to mean that this rate is well-known.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thanks for the clarification. I maintain my score and I have no objections to the paper being accepted.\"}", "{\"metareview\": \"The authors analyze the convergence of tempered Langevin dynamics. Most interestingly, the authors derive lower bounds where tempering leads to exponentially poor convergence. This result is quite novel as agreed upon by all reviewers. 
On top of this, the authors also provide the first upper bound results on convergence under functional inequalities, which is already a nice result in itself.\\n\\nNo significant criticisms were raised during the review process, and both reviewers with a score of 6 have agreed with acceptance, albeit not enthusiastically enough to raise their scores to 8. Therefore, I believe this paper is a welcome contribution to the sampling literature and a clear accept.\", \"additional_comments_on_reviewer_discussion\": \"A common theme among all the reviewers is that everyone is impressed by the negative result on tempering. This seems to be a genuinely novel and surprising result, since the target distributions appear to satisfy nice functional inequalities.\"}", "{\"title\": \"Response to Reviewer mTXb\", \"comment\": \"We thank reviewer mTXb for their detailed reading of our submission, especially the appendix. We have integrated all their suggestions relating to presentation and typos. We respond to the reviewer's question\\nabout how practical successes of tempering mesh with our theory\\nin the general response above, and here respond to the \\nminor questions. \\n\\n**Varying the proposal distribution $\\\\nu$ ---** \\nFor our lower bounds, we use the proposal $\\\\nu = \\\\mathcal{N}(0, 1)$\\nas is common [1, 2, 3] (indeed this choice\\nis even a default in the Bayesian software package Blackjax),\\nand the analysis is indeed tailored to this choice. The other common\\nchoice, which you mention,\\nis the so-called ``uniform proposal\\\",\\nwhere $\\\\nu = \\\\mathcal{L}$ is the Lebesgue measure.\\nThis setting, however, is not precisely covered by our\\nframework, since we need $\\\\nu$ to be a probability measure.\\nOn the other hand,\\nour upper bounds can nearly be extended to this case by using the following trick. 
Consider the\\nscheme $\\\\pi^{\\\\gamma_t}$, for $\\\\gamma_0 > 0$ and $\\\\gamma_t$\\nnon-decreasing.\\nThen if we let $\\\\nu = \\\\pi^{\\\\gamma_0}$\\nand set $\\\\lambda_t := \\\\frac{\\\\gamma_t - \\\\gamma_0}{1 - \\\\gamma_0}$,\\nwe obtain $\\\\mu_t = \\\\pi^{\\\\gamma_t}$. In other words,\\nwe can recover the uniform proposal in our framework\\nso long as $\\\\gamma_0 > 0$,\\nso that there is initially at least some weight on $\\\\pi$.\\n\\n\\n**Convergence rate of tempering with a linear schedule in Proposition 7 ---** We are not aware of any references\\nwhich establish similar rates as this result, either for tempered\\nor non-tempered Langevin.\\n\\n\\n[1] Zhang et al. Differentiable Annealed Importance Sampling and the Perils of Gradient Noise. NeurIPS, 2021.\\n\\n[2] Thin et al. Monte Carlo Variational Auto-Encoders. ICML, 2021.\\n\\n[3] Dai et al. An Invitation to Sequential Monte Carlo Samplers. Journal of the American Statistical Association, 2020.\"}", "{\"summary\": \"This paper presents a theoretical analysis of geometric tempering when applied to Langevin dynamics, a popular sampling method in machine learning and statistics. Geometric tempering is a technique that attempts to improve sampling from complex multi-modal distributions by sampling from a sequence of intermediate distributions that interpolate between an easy-to-sample proposal distribution and the target distribution. The authors provide the first convergence analysis under functional inequalities, proving both upper and lower bounds for tempered Langevin dynamics in continuous and discrete time. They also derive optimal tempering schedules for certain pairs of proposal and target distributions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Perhaps surprisingly, the paper's findings are largely negative regarding the effectiveness of geometric tempering. 
The authors demonstrate that geometric tempering can actually worsen functional inequalities exponentially, even when both the proposal and target distributions have favorable properties. Through theoretical analysis, they show a simple bimodal case where geometric tempering takes exponential time to converge. More strikingly, they prove that similar poor convergence results can occur even with unimodal target distributions that have good functional inequalities. These results suggest that geometric tempering may not only fail to help with convergence but could actually be harmful in some cases, challenging the conventional wisdom about its utility.\", \"weaknesses\": \"In this paper they consider targets of the form $\\\\nu^{1 - \\\\lambda} \\\\pi^{\\\\lambda}$, where $\\\\nu$ is called the proposal. In many other prior works, the targets are of the form $\\\\pi^{\\\\lambda}$, which corresponds to $\\\\nu$ being an improper uniform distribution. This seems to be the main source of the largely negative results provided in this paper. Could the authors clarify the reason for considering target the above form?\", \"questions\": \"Please see question abobe\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper offers a thorough study of geometric tempering combined with a Langevin MCMC scheme. In particular, a general theory is given which characterizes the error induced by said dynamics for arbitrary tempering schemes. 
Negative results are then given for the efficacy of tempering schemes (over the naive Langevin dynamics) both in terms of the intermediate distributions' log-Sobolev constants, as well as the worst-case convergence rate, although some regimes where the tempering is beneficial are highlighted.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The main positive result on geometric tempering (Theorem 3) seems quite thorough (in that it comprises every reasonable regime of interest) and about as good as one could hope for in this context.\\n\\nThe negative example is very intuitive and is a worthwhile inclusion into the paper. It offers a good characterization about why one might be skeptical about the occasional poor performance of these schemes in practice, and gives good intuition about the heart of the problem (the appearance of multimodality).\\n\\nThe inclusion of more concrete lower bounds is also insightful.\", \"weaknesses\": \"It would be more helpful if the paper offered more positive examples of instances where the tempering can improve over vanilla Langevin by at least polynomial factors; in particular, a comparative bound would be helpful in Propositions 6, 7.\\n\\nIt would also be good if the paper could explore further the areas where tempering has a provable benefit over Langevin, especially in cases of multimodality where the algorithm would likely be used.\", \"questions\": \"The following suggestions relate to minor areas of the paper:\\n\\nIn Figure 2, should we not be scaling the $y$-axis logarithmically for a more reasonable demonstration?\\n\\nThe comment after (3) is strange. 
Probably, you mean to take $kh = t$ for a fixed choice of $t \\in \\mathbb R$, and then some schedule $h = t/K$ for a set of integers $K$, rather than what is written.\\n\\nThere is a spacing issue in Line 190~191.\\n\\nLine 240: Lebesgue -> Lebesgue measure.\\n\\nLine 425: where obtain -> where we obtain\\n\\nIt is a bit strange to cite Durmus 2019 for the Langevin rate in the str. convex + smooth setting, compared to earlier work such as [1].\\n\\nDurmus, Alain, and Eric Moulines. \\\"High-dimensional Bayesian inference via the unadjusted Langevin algorithm.\\\" (2019): 2854-2882.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 8HQm\", \"comment\": \"We thank reviewer 8HQm for their careful reading of our paper.\\nLet us respond to their question concerning the motivation\\nfor considering Gaussian proposals\\n$\\\\nu = \\\\mathcal{N}(0, 1)$.\\nWe emphasize that these proposals are indeed used\\nin practice, especially in Bayesian\\nsettings. For example, see (1, 2, 3)\\nfor three papers which use Gaussian proposals. In fact,\\nthe popular Bayesian software library Blackjax even makes the Gaussian\\nproposal the default initialization.\\nMore generally, geometric tempering makes sense for a broad variety\\nof proposal distributions, and so we believe that an investigation\\nat this level of generality is of basic interest.\\nFinally, we mention that our upper bounds can yield bounds for schemes\\nof the form $\\\\pi^{\\\\gamma_t}$, so long as $\\\\gamma_0 > 0$ and $\\\\gamma_t$ is non-decreasing: indeed, if we let $\\\\nu = \\\\pi^{\\\\gamma_0}$\\nand set $\\\\lambda_t := \\\\frac{\\\\gamma_t - \\\\gamma_0}{1 - \\\\gamma_0}$,\\nthen $\\\\mu_t = \\\\pi^{\\\\gamma_t}$. \\n\\n[1] Zhang et al. Differentiable Annealed Importance Sampling and the Perils of Gradient Noise. NeurIPS, 2021.\\n\\n[2] Thin et al. 
Monte Carlo Variational Auto-Encoders. ICML, 2021.\\n\\n[3] Dai et al. An Invitation to Sequential Monte Carlo Samplers. Journal of the American Statistical Association, 2020.\"}", "{\"title\": \"Response to all reviewers\", \"comment\": \"We thank all four reviewers for their positive feedback. In this general reply, we address a question that was\\nshared among reviewers. The remaining comments\\nare addressed in detail in the individual replies. \\n\\n\\n**When does tempering outperform Langevin? (reviewers pomC and mTXb) ---** \\nReviewer pomC asked us to identify settings where\\ntempering outperforms Langevin. And reviewer mTXb asked\\nhow our results fit with the fact that\\ntempering is often used successfully in practice.\\nFor both of these questions, we refer to our upper bounds,\\nTheorems 1 and 3. These results give rates of convergence\\nfor tempered Langevin dynamics in terms of the inverse log-Sobolev constants\\n$\\\\alpha_t$ along the tempering path. While it is true that\\nour lower bound on the log-Sobolev constant in Theorem 4\\nrules out the possibility of tempering generically improving the log-Sobolev\\nconstant, we emphasize that Theorem 4 only shows\\none poorly conditioned example, specific to a pair of proposal and target distributions.\\nIn particular, in any given example,\\nit may happen that the intermediate log-Sobolev constants $\\\\alpha_t$ \\nare significantly better than those of the target $\\\\pi$ (this is, essentially,\\nthe core intuition behind tempering).\\nIn such a case, tempering can be expected to converge more quickly\\nthan vanilla Langevin.\\n\\nTo gain some intuition for why this can happen, \\nimagine $\\\\pi$ is a bimodal distribution, $\\\\nu$ is a wide distribution\\nroughly uniformly spread over the modes,\\nand the initial-time distribution $p_0$ is concentrated in one mode.\\nIn general, the log-Sobolev constant of $\\\\mu_t$ should scale\\nas the height of the energy barrier between the modes: this is
the hill\\nthat a particle must cross to move from one mode to the other.\\nBut, when $\\\\lambda_t$ is small, the target $\\\\mu_t \\\\approx \\\\pi^{\\\\lambda_t}$,\\nso, in particular,\\nthe height of the energy barrier is significantly smaller.\\nThus, the log-Sobolev constants of $\\\\mu_t$ should be significantly better\\nthan those of the target, early in the tempering.\\n\\nGiven such control of the log-Sobolev constants, we could\\nplug it in to Theorem 1 to obtain a rate which could\\nthen improve on vanilla Langevin. \\nRigorously obtaining such control is a fascinating open question which builds upon\\nthe theory developed in this work;\\nwe expect that this will delicately depend on the details\\nof the specific example in consideration. Generally speaking,\\nthe control of log-Sobolev constants for mixtures and other multimodal distributions\\nis a challenging area of ongoing research,\\nsee [1, 2] for some recent\\nwork\\nin this direction. \\nTo finally answer reviewer mTXb's question: \\nsituations where tempering performs well in practice\\ncan be explained within our theory\\nas situations where the intermediate log-Sobolev constants\\nare significantly better than those of the target. And to respond to reviewer pomC: our upper bounds provide a general framework, through \\nthe log-Sobolev constants, to understand when\\ntempering does and does not improve on vanilla Langevin.\\nAlthough Theorem 4 (and our other lower bounds) rules out generic good behavior of the tempering,\\nthere still exist\\nsituations (such as in Figure 2) in practice where the behavior is better than that\\nof vanilla Langevin,\\nand our results\\nprovide a solid foundation for future research into this phenomenon.\\n\\n[1] Chen, Hong-Bin, Sinho Chewi, and Jonathan Niles-Weed. ``Dimension-free log-Sobolev inequalities for mixture distributions.\" Journal of Functional Analysis 281.11 (2021): 109236.\\n\\n[2] Schlichting, Andr\\u00e9. 
``Poincar\\u00e9 and log\\u2013sobolev inequalities for mixtures.\\\" Entropy 21.1 (2019): 89.\"}", "{\"summary\": \"This work analyzes the convergence rate of Langevin dynamics with geometric tempering (LD-GT), a modification of the Langevin dynamics which attempts to follow the geometric path between a proposal distribution $\\\\nu$ (e.g. a standard Gaussian) and the target distribution $\\\\pi$.\\nMore precisely, LD-GT with tempering schedule $(\\\\lambda_k)_k \\\\subset [0,1]$ is\\n\\n$$X_0 \\\\sim \\\\nu, ~~~~ X_{k+1} = X_k + h \\\\nabla \\\\log \\\\mu_k(X_k) + \\\\sqrt{2 h} \\\\epsilon_k,$$\\n\\nwhere $\\\\mu_k \\\\propto \\\\nu^{1-\\\\lambda_k} \\\\pi^{\\\\lambda_k}$ is the geometric path (and $\\\\epsilon_k$ are independent Gaussians).\\n\\n(Per the authors' account of the literature,) LD-GT was proposed since the 1990s, and one of the motivations is the intuition that sampling progressively from the path $\\\\mu_k$ is easier than sampling directly from the target $\\\\pi$, especially if $\\\\pi$ is multi-modal. \\n\\nThis work's contributions are two-fold:\\n- Precise convergence guarantees for LD-GT under certain common assumptions on the proposal $\\\\nu$ and the target $\\\\pi$: Poincare inequality (PI), log-Sobolev inequality (LSI), strong log-concavity.\\\\\\nTo this aim, a key sub-question is to estimate the PI or LSI constant of the path $(\\\\mu_k)_k$. Besides showing how to best utilize well-known upper bounds, the authors also identify cases where these constants are surprisingly poor, leading to the next item.\\n- This work provides evidence that the original intuition motivating LD-GT is wrong, by exhibiting cases where LD-GT must converge very slowly (regardless of the choice of schedule). 
Remarkably this can even happen for well-conditioned and uni-modal targets $\\pi$, for which vanilla LD can be expected to converge fast.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper addresses a natural question on the convergence of a sampling algorithm. The positive results (first item in \\\"Summary\\\") are of theoretical interest, as they address the technically difficult question of optimizing the upper bounds w.r.t. the temperature schedule. The negative findings are surprising: namely, adding geometric tempering may actually slow down Langevin dynamics. This new insight is significant for both theory and practice.\\n\\nThe presentation is very clear and \\\"flows\\\" very nicely. All the technical claims are correct as far as I checked.\", \"weaknesses\": \"No substantial weaknesses, but the negative results of this paper naturally lead to a question which is not addressed nor mentioned in this paper, see \\\"Questions\\\" below.\", \"minor_comments_on_the_presentation\": [\"use citep instead of citet on lines 119, 233, 254, 468\", \"correct typos and/or grammar on lines 271, 326, 420, 496, 1389, 1494, 1838\", \"justify the fact that chi^2, KL > TV rather than say it \\\"of course\\\" holds (line 493)\", \"line 988 contains the proof of Corollary 13, not 17\", \"add details on the argument on line 1824 (I could not reconstruct it using Cauchy-Schwarz, only Jensen)\", \"use different markers for each curve in Figure 2\", \"consider using a log scale or showing less iterations in Figure 3\", \"consider including the example of section 4.1 in Figure 4, in addition to Figure 1 (which shows only $\\\\lambda \\\\in \\\\{0, 0.45, 1\\\\}$)\"], \"questions\": \"The theoretical results in this work suggest that geometric tempering may not help the convergence of Langevin dynamics. Yet tempering is a strategy that is used in practice (per the authors' presentation of the literature). 
In practice, is tempering observed to lead to improved performance compared to vanilla Langevin dynamics? If yes, is there any intuitive reason why?\", \"minor_questions\": [\"Would the conclusions, and the analysis techniques, of this paper still apply if one takes $\\\\nu$ to be the Lebesgue measure instead of a probability measure?\", \"In practice, is the proposal $\\\\nu$ always taken to be a Gaussian? Is it sometimes taken to be the Lebesgue measure? Or multi-modal?\", \"Is the $\\\\frac{1}{2\\\\alpha_\\\\pi t}$ rate in Proposition 7 (line 433) classical? If so please give a reference.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Exbp\", \"comment\": \"We thank reviewer Exbp for their attentive reading: we will correct all typos and better structure the related works paragraphs.\\nWe respond below to their other questions and comments. \\n\\n**Applicability of Corollary 5 ---** the corollary requires initializing Langevin dynamics at the proposal, i.e. $p_0 = \\\\nu$,\\nbut we do not require that $\\\\lambda_0 = 0$. \\nWe refer the reviewer to the short proof in Appendix C.1:\\nto control $\\\\mathrm{KL}(p_0\\\\|\\\\mu_0)$ we use the tempering rule,\\nand ultimately simply bound $\\\\lambda_0$ by $1$.\\n\\n**Theorem 9 in case $\\\\lambda_i$ is close to $1$---** \\nWe are in complete agreement with the reviewer's intuition,\\nand note that in Theorem 9, $\\\\delta_k$ is defined as $\\\\delta_k := 8m^2 e^{-(1 - \\\\lambda_k)m^2}$,\\nso that when $\\\\lambda_k$ is close to $1$, there is no exponential\\nslow-down. 
Actually, Theorem 9 makes this intuition quantitative\\nand states that so long as $1 - \\\\lambda_k \\\\leqslant C m^{-2}$,\\nthe slow-down will only be of polynomial order in the mean separation.\\n\\n\\n**Choosing an optimal proposal distribution knowing some information about the target distribution ---** consider the setup when the target is a mixture of two symmetric Gaussian distributions and we initialize with a Gaussian located \\u201cin between\\u201d the two target modes, specifically at the barycenter. Then, convergence is provably fast for Tempered Langevin [1, Example 1] and related sampling processes [2]. Yet, initializing in this way requires knowing the locations of all target modes: this would require solving a \\u201cglobal optimization\\u201d problem, which can be a harder problem than the original problem of sampling from the target distribution [3]. \\n\\n**Do the lower bounds hold in higher dimensions? ---** \\nWe expect that the lower bounds go through in higher dimensions\\nwith minor modifications. The key high-level point in all\\nof the lower bounds is that at intermediate times,\\nthe tempering path becomes bimodal, and if the\\nlaw $p_t$ of the particle following the tempering\\nis too concentrated in one of the modes vs. another,\\nit will take exponential time to spread mass between the modes\\n(for example, see Prop. 22 in Appendix D).\\nIn other words, and as illustrated in Fig. 4 Appendix E, the core phenomenon behind our lower bounds is the fact that along the geometric path, mass tends to \\u201cteleport\\u201d from one mode to another, preventing Langevin to converge as the particles eventually get stuck in the first encountered mode. 
\\n\\nAll of this translates intuitively, and likely rigorously,\\nwithout issue to higher dimensions.\\nWe chose to present the lower bounds in dimension $1$\\nfor simplicity, as well as\\nto make it clear that the problem is not a curse of dimensionality (which may be common across methods),\\nbut instead a problem specifically with the tempered Langevin itself. \\n\\n**Why additional dissipativity assumption? ---** \\nOur only use of the dissipativity assumptions\\n is to control the second moment of $p_t$, the law of the process $X_t$ given in (9).\\n At a technical level, the reason why we have this extra assumption\\n as compared to the standard analyses of vanilla Langevin\\n is precisely because of the additional terms arising from\\n the tempering. Weakening this assumption further \\n is an interesting direction for future work. \\n\\n**Convergence under weaker functional inequalities\\nthan log-Sobolev? ---** \\nThe main technical novelty of our analysis is the\\nway that we deal with the new terms\\narising from the tempering dynamics (see Step 1 and Step 2 in Appendix A.2, as well as the supporting Lemmas in Appendix A.4).\\nIn particular,\\nthe extra terms which arise here are particularly suitable\\nto analysis when the Lyapunov function is $\\\\mathrm{KL}$.\\nFor example, we are not\\naware of a straightforward means of extending\\nour analysis to $\\\\chi^2$.\\nSince these alternative functional inequalities imply\\nconvergence in alternative Lyapunov functions (e.g. Poincar\\u00e9 involves\\nanalysis in $\\\\chi^2$), we are therefore not aware of a straightforward\\nextension of our results to weaker isoperimetric assumptions. \\n\\n\\n[1] Guo et al. Provable Benefit of Annealed Langevin Monte Carlo for Non-log-concave Sampling. Arxiv, 2024.\\n\\n[2] Madras and Zheng. On the Swapping Algorithm.\\nJournal of Random Struct. Algorithms, 2003. \\n\\n[3] Ma et al. Sampling can be faster than optimization. 
PNAS, 2019.\"}", "{\"title\": \"Follow-up Response to Reviewer Exbp\", \"comment\": \"We hope to have addressed all the reviewer's concerns, please let us know if there is anything we can further clarify.\"}", "{\"comment\": \"I thank the authors for their detailed response. I'm happy to recommend acceptance, and have decided to keep my score based on my perceived significance of the results.\"}" ] }
DZBFchnM3b
Navigating the Labyrinth: Evaluating and Enhancing LLMs’ Ability to Reason About Search Problems
[ "Nasim Borazjanizadeh", "Roei Herzig", "Trevor Darrell", "Rogerio Feris", "Leonid Karlinsky" ]
Recently, Large Language Models (LLMs) have attained impressive performance in math and reasoning benchmarks. However, they still often struggle with multi-step reasoning, which is relatively easy for humans. To further investigate this, we introduce a new benchmark, SearchBench, containing 11 unique combinatorial problems that avoid training contamination (each equipped with automated pipelines to generate an arbitrary number of instances) and analyze the feasibility, correctness, and optimality of LLM-generated solutions. We show that even the most advanced LLMs fail to solve these problems end-to-end in text, e.g., GPT4 and o1-preview respectively solve only 1.4% and 18.6% correctly. SearchBench problems require considering multiple pathways to the solution and backtracking, posing a significant challenge to auto-regressive models. Instructing LLMs to generate code that solves the problem helps only slightly. We next introduce an in-context learning approach that prompts the model to implement A*, an informed search algorithm, to comprehensively traverse the problem state space, improving the performance of models. We further extend this approach and propose the Multi-Stage-Multi-Try inference method, which breaks down the A* algorithm implementation into two stages and auto-verifies the first stage against unit tests, raising GPT-4's performance above 57%.
[ "Mathematical & reasoning benchmark", "Search & Combinatorial problems", "A* algorithm" ]
https://openreview.net/pdf?id=DZBFchnM3b
https://openreview.net/forum?id=DZBFchnM3b
ICLR.cc/2025/Conference
2025
{ "note_id": [ "osW278g1JH", "mPJnxJEU2F", "YrGKCRLtRV", "WJuaaeVvVV", "NQaScWiEYK", "5fEPpKtTAC", "47WCojn6kP" ], "note_type": [ "official_comment", "official_review", "official_review", "official_review", "comment", "official_comment", "official_comment" ], "note_created": [ 1732778320688, 1730400510812, 1730807297118, 1730660237933, 1734283118984, 1732777581243, 1732778061040 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7467/Authors" ], [ "ICLR.cc/2025/Conference/Submission7467/Reviewer_fqts" ], [ "ICLR.cc/2025/Conference/Submission7467/Reviewer_pgfP" ], [ "ICLR.cc/2025/Conference/Submission7467/Reviewer_1qot" ], [ "ICLR.cc/2025/Conference/Submission7467/Authors" ], [ "ICLR.cc/2025/Conference/Submission7467/Authors" ], [ "ICLR.cc/2025/Conference/Submission7467/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your time and effort in reviewing our paper. Our primary focus was to introduce a comprehensive search and planning benchmark for LLMs. The A* MSMT serves as a baseline on this benchmark, demonstrating the current models' ability to design sophisticated search algorithms, offloading the execution of numerous iterations of the algorithm to an external interpreter. However, the benchmark itself is our main contribution, and it remains challenging even for state-of-the-art models like GPT-4 and GPT-01, especially when tasked with solving problems end-to-end.\\n\\n>Using demonstrations from other problem categories deviates from the few-shot learning definition and may serve as distractors[1], which could lower baseline performance.\\n\\nWe did provide results using zero-shot prompts in both text and code, where models were instructed to solve a given problem without any other information or examples. Figure 3 demonstrates that the performance of A* and MSMT A* is significantly higher than zero-shot performance. 
From this comparison, we can draw the conclusion that the solved examples in the prompts do not act as distractions. While A* implementations for different problem types vary, the general structure of the algorithm and the conversion of problem states into graph nodes are consistent across combinatorial problems, aiding model performance.\\n\\n>MSMT A* lacks novelty, as combining code generation with external feedback (e.g., compiler feedback and unit test filtering) is now a standard technique in LLM optimization.\\n\\nWhile various prompting and inference methods have been used to enhance model reasoning, our contribution lies in the novel application of these methods using the A* search algorithm, a powerful yet complex algorithm to implement.\\n\\nMoreover, our main motivation for A* MSMT was to show that language models struggle with combinatorial problems due to the inherent nonlinearity of computations involved in evaluating each state within the problem's state space, such as calculating cost and heuristic of each node, generating child nodes, and determining their feasibility. Our results show that while LLMs struggle with performing these simple computations end to end, they excel at writing complex search plans when the execution of many iterations of these plans is offloaded to a Python interpreter, highlighting that nonlinearity is a bottleneck in LLM reasoning. Finally, it\\u2019s important to note that our primary contribution was presenting the challenging SearchBench benchmark.\\n\\n>Are there results for GPT-o1 with A* and MSMT A*, and Code Llama with 0-shot text?\\n\\nCode Llama is specifically trained for code generation, so we did not evaluate it on text-based approaches. 
At the time of our experiments, GPT-o1 did not support the context length required for A* and MSMT A* approaches.\\n\\n>Line 268 suggests that models capable of solving SearchBench can generalize to other combinatorial problems; are there experimental results supporting this claim?\\n\\nOur claim that models capable of solving SearchBench can generalize to other combinatorial problems is based on the fact that each problem type in SearchBench is selected from representative categories in combinatorial problems. These problems have been uniquely modified to ensure they do not resemble previously solved problems, making them new combinatorial challenges. Moreover, given that NP-hard problems can be reduced to each other, solving these new problems suggests broader generalization capabilities.\\n\\n>MSMT A* benefits from multiple tries and unit test prefiltering, which naturally boosts feasibility rates. Would giving other methods an equivalent number of trials make for a fairer comparison?\\n\\nWhile using multiple tries for any method and averaging over the answers could improve performance, MSMT's strength lies in its ease of use and the simplicity of calculating unit tests. Implementing multiple tries for text-based approaches would require a method to evaluate and compare intermediate generations (note that the answer to our problems is a list of actions, making methods like averaging and majority vote inapplicable as the set of feasible and/or correct solutions is unbounded), which is complex and typically involves training a reward model. This is a complex task requiring supervised datasets and is outside the scope of our current work; our primary contribution was to introduce a robust and challenging benchmark of search and planning problems.\\n\\n\\nWe hope this clarifies our contributions and the rationale behind our approach. 
Thank you for your feedback.\"}", "{\"summary\": \"This paper presents SearchBench, a benchmark evaluating large language models (LLMs) on complex combinatorial search tasks that require multi-step reasoning and backtracking. With 11 unique problem types across five categories, SearchBench challenges LLMs by avoiding training contamination and requiring reasoning-intensive solutions. The authors introduce A* search prompting and a Multi-Stage-Multi-Try (MSMT) strategy, which breaks A* implementation into verifiable steps, improving GPT-4\\u2019s success rate to over 57% on some tasks. Despite these advances, results reveal LLM limitations in achieving optimal solutions in multi-hop complex problem-solving.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The use of classic optimization problems with unique rule modifications to prevent LLM familiarity from pre-training is well-designed.\\n2. SearchBench is a legit benchmark for evaluating LLMs on complex tasks; commendable effort in its generation and diversity.\", \"weaknesses\": \"1. Using demonstrations from other problem categories deviates from the few-shot learning definition and may serve as distractors[1], which could lower baseline performance.\\n2. MSMT A* lacks novelty, as combining code generation with external feedback (e.g., compiler feedback and unit test filtering) is now a standard technique in LLM optimization.\\n3. Presentation could improve: font issues in Figure 3, misuse of \\\"accuracy\\\" on the y-axis of Figure 5, and some redundancy in explanations.\\n\\n[1] Shi, F., Chen, X., Misra, K., Scales, N., Dohan, D., Chi, E. H., ... & Zhou, D. (2023, July). Large language models can be easily distracted by irrelevant context. In International Conference on Machine Learning (pp. 31210-31227). PMLR.\", \"questions\": \"1. [Figure3] Are there results for GPT-o1 with A* and MSMT A*, and Code Llama with 0-shot text?\\n2. 
Line 268 suggests that models capable of solving SearchBench can generalize to other combinatorial problems; are there experimental results supporting this claim?\\n3. MSMT A* benefits from multiple tries and unit test prefiltering, which naturally boosts feasibility rates. Would giving other methods an equivalent number of trials make for a fairer comparison?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces an approach to using LLM to solve search problems by prompting LLM to implement A*.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed method performs better than directly prompting LLMs to solve the problem or generate code to solve the problem.\", \"The paper also proposes a benchmark set with the hope of avoiding training contamination.\"], \"weaknesses\": [\"Using LLMs to generate the A* implementation sounds like an overkill. One could consider simply prompting LLMs to generate the inputs and heuristic function to an existing A* implementation and then prompting LLMs again to interpret the output of the A* algorithm.\", \"It seems the paper transforms the effort of implementing the A* algorithm to the effort of implementing a prompting scheme to have LLM generate A*. From this perspective, I don't see a significant motivation to use the method proposed in the paper. 
On the other hand, if the motivation is to understand the capability of LLMs to solve these types of puzzles, it would be more interesting to consider the scenario where the LLM is not provided a hint about how the problem can be solved (i.e., with A*).\"], \"questions\": \"I would consider an approach that does not resynthesize code that already exists (e.g., like A*) but only prompt LLMs for parameters to the A*.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work investigates the ability of Large Language Models (LLMs) to solve complex combinatorial search problems that require multi-step reasoning and backtracking. The paper introduces a new benchmark referred as to SearchBench. This new benchmark dataset has 11 unique combinatorial problems that examines LLMs with tasks involving state-based search and backtracking. The authors analyze the feasibility, correctness, and optimality of solutions generated by LLMs. They reported that even advanced models like GPT-4 severely struggle with such tasks.\\nThe authors then propose an A* prompting strategy to guide LLMs in implementing an informed search algorithm (A*). They also presented a Multi-Stage-Multi-Try (MSMT) approach that decomposes the A* algorithm into two stages with unit test verifications, significantly improving model performance. Experimental results show that the MSMT method helps models achieve higher accuracy. Despite these improvements, the authors still observed challenges in optimality and broader reasoning persist. \\nOverall, the work contributes SearchBench as a robust benchmark and MSMT A* prompting as an effective strategy for enhancing LLM reasoning capabilities on complex search problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper\\u2019s strengths are as follows:\\n\\n1. 
The most important contribution of the paper is the creation and introduction of SearchBench, presenting a broad dataset of combinatorial search problems that extends beyond standard benchmarks. It assesses models based on feasibility, correctness, and optimality, providing a detailed assessment of LLM reasoning abilities in combinatorial problems.\\n\\n2. The paper illustrates that LLMs often struggle in multi-step reasoning and backtracking tasks. As such, the paper underlines major issues in current model capabilities. The challenges in SearchBench reflect real-world applications, such as pathfinding and puzzle-solving, that require systematic search and misstep correction.\\n\\n3. The authors' idea of using an A* prompting strategy with a Multi-Stage-Multi-Try (MSMT) approach is interesting and shows substantial improvements relative to prompt-based solutions alone. MSMT\\u2019s staged and unit-tested code generation approach improves LLM performance, demonstrating a practical way to improve reasoning on complex tasks.\\n\\n4. The paper provides evaluation of various large language models (e.g., GPT-4, Llama) and also studies various prompting techniques (e.g., 0-shot, Chain-of-Thought, A* prompting). Hence, the paper shows meaningful comparisons across models and prompt-based strategies.\", \"weaknesses\": \"The main weaknesses of the paper:\\n\\n1. The paper does not consider recent advanced works on multi-step reasoning techniques and code synthesis methods using LLMs. Instead, the authors solely use prompt-based approaches. It is unclear how those advanced methods will perform on the proposed dataset.\\n2. The paper's conclusions may not hold for problems that do not have code-based solutions. As such, it is limited to certain types of problems that can be solved through coding.\\n3. The evaluations are not comprehensive. 
The authors give an unfair advantage to their MSMT method, as the method has prior knowledge about the type of the code it needs to synthesize, i.e., code for the A* search algorithm. The evaluations should have included recent LLM works that can also synthesize code, provided with the same prior that the code is for A* search. Only then can one better appreciate the proposed code synthesis method.\\n4. The scalability of the Multi-Stage-Multi-Try (MSMT) method is unclear, as the method is complex and computationally demanding. Moreover, the simulation-based experiments lack real-world variability, making it difficult to evaluate how well the proposed methods would generalize to other real-world problems. \\n5. This is less important in my overall rating. But, clearly, SearchBench is centered around combinatorial tasks. Although an interesting dataset, it does not support how one could devise LLM methods with other reasoning challenges, such as open-domain problems.\", \"questions\": \"1. As explained in the weaknesses, the scalability of MSMT is unclear. Can the authors comment as to why this method will not suffer from the state-space expansion problem when the problem scales to practical scenarios? Since SearchBench is limited to those problems that humans can solve correctly, it appears that the scale of the problems is too small for real-world problems.\\n2. It would be helpful to check and compare the performance of MSMT on other combinatorics tasks outside of the authors' own SearchBench dataset.\\n3. The authors need to also evaluate the performance of other more recent methods in LLM reasoning that are not solely prompt based, such as multistep reasoning, reward process modeling, planning with world models, and deliberate reasoning techniques, on their SearchBench dataset and compare with those of MSMT. It is unfair to limit the comparison of MSMT to prompt-based approaches or LLMs that are not boosted to synthesize code. 
It is well known that LLMs cannot do well in code generation for complex reasoning problems unless they are guided through multiple structured steps.\\n4. The paper could benefit from detailed analysis on error patterns, which could help identify specific failure areas in LLM reasoning and suggest targeted improvements.\\n5. What if we do not know the type of the problem and hence do not know if the A* search algorithm is the solution. This limits the scope of the work. The paper does not address the issue of algorithm selection. In other words, the proposed MSMT method has prior knowledge about the code type to generate.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your time and effort in reviewing our paper. Our primary focus was to introduce a comprehensive search and planning benchmark for LLMs. The A* MSMT serves as a baseline on this benchmark, demonstrating the current models' ability to design sophisticated search algorithms, offloading the execution of numerous iterations of the algorithm to an external interpreter. However, the benchmark itself is our main contribution, and it remains challenging even for state-of-the-art models like GPT-4 and GPT-o1, especially when tasked with solving problems end-to-end.\\n\\n\\n>Using LLMs to generate the A* implementation sounds like an overkill. One could consider simply prompting LLMs to generate the inputs and heuristic function to an existing A* implementation and then prompting LLMs again to interpret the output of the A* algorithm. 
\\n\\nImplementations of the A* search algorithm cannot be reduced to a few parameters; the algorithms must be constructed for each unique problem type. Each problem requires fundamental changes to the A* algorithm that are unique to each problem type, including how to construct the search graph, represent actions as nodes, select child nodes, and calculate costs. Our prompts, detailed in the Appendix, demonstrate that there is little in common between different A* implementations beyond the general structure of the algorithm. These differences extend beyond the heuristic function, making each implementation unique and unsuitable for reuse across different problem types in our benchmark.\\n\\n> if the motivation is to understand the capability of LLMs to solve these types of puzzles, it would be more interesting to consider the scenario where the LLM is not provided a hint about how the problem can be solved (i.e., with A*).\\n\\nWe explored various zero-shot prompting schemes where no hint is provided to the model on how to solve the problems, such as zero-shot text and zero-shot code (please refer to Figure 3 and the Methods and Experiments sections). The motivation for using A* was to highlight the challenges LLMs face with nonlinear reasoning. While these problems are straightforward for humans, requiring only basic algebra, LLMs struggle to solve them end to end due to the need for multiple iterations of simple computations. By prompting the model to write an A* algorithm, we offload executing many iterations of the search algorithm to an external engine, allowing the LLM to focus on a complex task that needs to be executed once rather than repeatedly.\\n\\nWe hope this clarifies our contributions and the rationale behind our approach. Thank you for your feedback.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for reviewing our paper. Our main goal was to introduce a comprehensive search and planning benchmark for LLMs. 
The A* MSMT serves as a baseline, showcasing current models' ability to design sophisticated search algorithms, offloading iterations of search to an external interpreter. However, the benchmark itself is our primary contribution and remains challenging for state-of-the-art models like GPT-4 and GPT-o1, especially when tasked with solving the problems end-to-end.\\n\\n>The paper does not consider recent advances in multi-step reasoning and code synthesis using LLMs\\n\\nIn addition to presenting the challenging SearchBench benchmark, our contribution includes demonstrating the application of MSMT A*, which highlights LLMs' stronger capability to write a complex search algorithm as opposed to performing nonlinear reasoning, which involves the iterative computations involved in evaluating each state within the problem's state space, such as calculating cost and heuristic of each node, generating child nodes, etc. If you could provide specific citations for the advanced methods you mentioned, it would help us understand your concerns better. Our MSMT approach aligns with recent advancements that leverage prompting and inference techniques to enhance reasoning.\\n\\n>The paper conclusions may not hold for problems that do not have code based solutions\\n\\nThe assumption that problems can be solved through code is not overly restrictive. Many real-world problems can be modeled as state-based problems with a start state, end state, and a series of allowed actions, and solved using search, allowing for the application of A* search, making our approach broadly applicable.\\n\\n>The authors give unfair advantage to their MSMT method as the method has prior knowledge about the type of the code it needs to synthesize\\n\\nWe have evaluated the model's performance using 0-shot code prompting, where the model was not given any prior information about the problem-solving approach. 
This approach demonstrates the model's ability to generate solutions without explicit guidance or prior knowledge about how to solve these problems. Providing solved instances as a prompt, as in our A* and MSMT A* approach, is a common technique to enhance reasoning. Our main contribution is the benchmark itself, which remains challenging even with such guidance.\\n\\n>The scalability of the MSMT method is unclear as the method is complex and computationally demanding\\n\\nThe problems in SearchBench are NP-hard, representing some of the most challenging problems in theoretical computer science. These problems inherently require significant computational resources, even when the algorithms are implemented by experts in the field, regardless of the approach used. Our work highlights the current limitations of LLMs in solving such problems end-to-end, emphasizing the need for further research in this area.\\n\\n> it does not support how one could devise LLM methods with other reasoning challenges, such as open-domain problems\\n\\nBy \\\"open-domain problems,\\\" if you mean unbounded state-space problems like board games or simulation worlds, our A* MSMT approach is applicable there as well. In our method, we only assume that we are provided with a start state, an end state, and a set of allowed actions, which can be extracted from natural language or set based on common-sense rules. Although our problem types are mostly bounded, their state space can be made arbitrarily large, making our approach generally applicable to such problems.\\n\\n>The authors need to evaluate also the performance of other more recent methods in LLM reasoning that are not solely prompt based, such as multistep reasoning, reward process modeling\\n\\nMultistep reasoning is incorporated in our 4-shot CoT and A* implementations as comments. Reward-based methods require training both the language model and a reward model, which is beyond the scope of our work. 
Our focus was on introducing a problem set that highlights the current challenges LLMs face in reasoning, and MSMT A* is one of the methods we used to showcase that the nonlinearity involved in reasoning is the main bottleneck of LLMs, which can be alleviated by using an external interpreter to execute many iterations of the LLM-generated algorithms.\\n\\n>What if we do not know the type of the problem and hence do not know if the A* search algorithm is the solution\\n\\nA* is the most computationally efficient search algorithm that guarantees finding an optimal solution with an admissible and consistent heuristic. The combinatorial problems in our dataset would take days to solve using breadth-first search, and depth-first search does not guarantee optimal solutions. A* does not require specific assumptions about the problem compared to other search algorithms; it can be applied to any problem where intermediate states can be represented as a graph, with actions as links between nodes. \\n\\nWe hope this clarifies our contributions and the rationale behind our approach.\"}" ] }
DYXl6P70aH
Benchmarking Robustness of Foundation Models for Remote Sensing
[ "Hakob Tamazyan", "Ani Vanyan", "Tigran Galstyan", "Alvard Barseghyan", "Anna Khosrovyan", "Vahan Huroyan", "Hrant Khachatrian" ]
Foundation models have significantly advanced machine learning applications across various modalities, including images. Recently, numerous attempts have been made at developing foundation models specifically tailored for remote sensing applications, predominantly through masked image modeling techniques. This work explores the essential characteristics and performance expectations for a foundation model in aerial imagery. We introduce a benchmark designed to evaluate the model's performance as well as robustness to changes in scale and spectral bands of the input. Our benchmarks encompass tasks unique to aerial imagery, such as change detection and scene classification, and utilize the publicly available datasets RESISC45, BigEarthNet, LEVIR-CD, and OSCD. We evaluate recently proposed foundation models on the benchmark. Furthermore, we explore the impact of various design choices in pretraining and fine-tuning on the performance of the models on our benchmark. Specifically, we pretrain several variations of a self-distillation based self-supervised model on aerial imagery datasets, including one without scale augmentations and another one with a pretrained mask decoder module.
[ "aerial imagery", "foundation models", "self-supervised learning", "benchmark" ]
https://openreview.net/pdf?id=DYXl6P70aH
https://openreview.net/forum?id=DYXl6P70aH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "V2qQYJ8wF6", "QabAH7AYha", "BOhfuJv4mP", "9SBrOzIPyU", "54PVJWM5CF" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730627462659, 1730378842983, 1730146687576, 1733313230052, 1729006228955 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13991/Reviewer_t4Up" ], [ "ICLR.cc/2025/Conference/Submission13991/Reviewer_DyRL" ], [ "ICLR.cc/2025/Conference/Submission13991/Reviewer_XFSf" ], [ "ICLR.cc/2025/Conference/Submission13991/Authors" ], [ "ICLR.cc/2025/Conference/Submission13991/Reviewer_UcQC" ] ], "structured_content_str": [ "{\"summary\": \"This paper examines the key characteristics and performance benchmarks for foundation models applied to remote sensing data. The authors present a comprehensive benchmark framework to assess the performance and robustness of these models across diverse scales and spectral bands. This benchmark includes tasks such as change detection and scene classification, leveraging publicly available datasets like RESISC45, BigEarthNet, LEVIR-CD, and OSCD. The topic is highly relevant, as establishing benchmarks for remote sensing (RS) foundation models is essential for advancing this field.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The topic of this paper is valuable, as benchmarking remote sensing (RS) foundation models is crucial for the future development of this field.\\n2) The experiments on spectral bands and the idea of a pretrained mask decoder are interesting, and I encourage the authors to expand and reorganize these sections. However, in its current form, I believe the paper still falls short of the standards required for ICLR.\", \"weaknesses\": \"1) I am concerned about the label quality of the datasets used. For example, BigEarthNet V2 [8] provides a significant improvement over BigEarthNet. 
While I understand that it might not be feasible to use BENv2 given its release date, the authors should consider data and label quality when selecting datasets for benchmarking. The authors shall discuss how they assessed the quality of the datasets used and what impact potential label quality issues might have on their results.\\n\\n2) How do the authors define the generalizability of foundation models? Given that different foundation models are trained on different datasets or combinations, comparisons may be problematic. For instance, DinoV2 uses data augmentation techniques like random cropping and scaling, which enrich the spatial resolution of the training data. However, not all foundation models use such augmentations, potentially making direct comparisons unfair. The authors should address these inconsistencies and avoid drawing broad conclusions about generalizability without accounting for differences in training data and augmentations. The authors are suggested to provide a clear definition of generalizability in the context of their study and include a detailed analysis of the training data and augmentation techniques used by each evaluated model, and discuss how these differences might impact the comparisons and conclusions drawn from the benchmark results\\n\\n3) It would be beneficial to also compare the foundation models with smaller models like ResNet50. This comparison could illustrate the advantages and trade-offs of using larger foundation models versus smaller ones.\\n\\n4) The paper claims that the performance gap between frozen models and full fine-tuning is relatively large for models such as ChannelViT-S, Prithvi, and Clay v1. However, I am concerned about how the learning rate was selected. Using the same learning rate across different models may not be optimal. Moreover, different learning rates for the backbone and the task-specific heads (e.g., classification, segmentation, change detection) should be considered. 
Without optimizing these hyperparameters for each model, the conclusions drawn regarding their performance are not convincing.\\n\\n5) The authors conclude that all tested models struggle with generalizability across scales and spectral bands. While foundation models are expected to generalize well to different data sources, this relies on training with large, diverse datasets. It is therefore expected that models trained on aerial images may perform poorly on satellite images due to the inherent differences between these data sources. I recommend that the authors further investigate the relationship between pre-training datasets, data augmentations, and model generalizability.\", \"questions\": \"1) The paper does not clearly justify the choice of datasets used for evaluation. Given that there are so many benchmark datasets available for remote sensing, why were these particular datasets chosen? A more thorough rationale is needed. Please elaborate on the specific criteria you have used for dataset selection. Please also discuss the strengths and limitations of the chosen datasets compared to alternatives, and how these choices might impact the generalizability of your results.\\n\\n2) There are numerous well-known foundation models for remote sensing data, including vision-language models like RemoteCLIP [1] and Skyscript [2], multimodal models like DOFA [3], SSL4EO-S12 [4], and MMEarth [5], as well as others like msGFM [6] and SatMAE++ [7]. Considering the broad definition of a foundation model, why did the authors not include these in the evaluation? Please kindly explain the criteria for model selection and discuss how the inclusion or exclusion of specific models might affect your findings. Additionally, please kindly address the potential limitations of their current model selection in the paper's discussion section.\\n\\n3) The evaluation tasks are limited to scene classification and change detection. 
Why were other critical tasks, such as semantic segmentation, object detection, and regression, not included? Please kindly justify your task selection and discuss how the inclusion of additional tasks like semantic segmentation or object detection might provide a more comprehensive evaluation of the foundation models' capabilities. Please kindly address this limitation in your paper and propose how future work could expand the range of tasks.\\n\\n[1] Liu, Fan, et al. \\\"Remoteclip: A vision language foundation model for remote sensing.\\\" IEEE Transactions on Geoscience and Remote Sensing (2024).\\n\\n[2] Wang, Zhecheng, et al. \\\"Skyscript: A large and semantically diverse vision-language dataset for remote sensing.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 6. 2024.\\n\\n[3] Xiong, Zhitong, et al. \\\"Neural plasticity-inspired foundation model for observing the earth crossing modalities.\\\" arXiv preprint arXiv:2403.15356 (2024).\\n\\n[4] Wang, Yi, et al. \\\"SSL4EO-S12: A large-scale multimodal, multitemporal dataset for self-supervised learning in Earth observation [Software and Data Sets].\\\" IEEE Geoscience and Remote Sensing Magazine 11.3 (2023): 98-106.\\n\\n[5] Nedungadi, Vishal, et al. \\\"MMEarth: Exploring multi-modal pretext tasks for geospatial representation learning.\\\" arXiv preprint arXiv:2405.02771 (2024).\\n\\n[6] Han, Boran, et al. \\\"Bridging remote sensors with multisensor geospatial foundation models.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[7] Noman, Mubashir, et al. \\\"Rethinking transformers pre-training for multi-spectral satellite imagery.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[8] Clasen, Kai Norman, et al. 
\\\"reben: Refined bigearthnet dataset for remote sensing image analysis.\\\" arXiv preprint arXiv:2407.03653 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a benchmark aimed at evaluating the robustness and generalization capabilities of foundation models specifically designed for remote sensing applications. The authors assess performance across various image resolutions and spectral bands, focusing on tasks like change detection and scene classification. Using aerial imagery datasets such as RESISC45, BigEarthNet, LEVIR-CD, and OSCD, the paper benchmarks several models and explores the influence of pretraining and fine-tuning strategies on model robustness.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper's attempt to establish a generalization benchmark tailored to remote sensing fills an essential gap, addressing unique requirements in aerial imaging, like resilience to scale and spectral variations.\\n2. By focusing on change detection and scene classification, the paper makes an effort to connect benchmark results with real-world implications, enhancing the relevance of the benchmark.\", \"weaknesses\": \"1. Although the paper aims to establish a benchmark for evaluating foundation models in remote sensing, the scope of experimentation is limited. The study only explores a small subset of possible variations and factors within the remote sensing domain, which weakens its potential to serve as a comprehensive benchmark for ICLR readers. For instance, the study could have broadened its scope by including more downstream tasks, such as object detection and segmentation. 
Additionally, it fails to address important variables such as geographical location, times of the day, or seasonal variations, all of which could significantly impact the models' performance in real-world applications. By focusing on just a few experimental parameters, the study provides only a partial view of the potential applications and robustness of these foundation models in remote sensing.\\n2. Remote sensing data includes numerous factors that could influence model performance, such as resolution, spectral band variability, and environmental conditions. However, the current study only examines resolution and spectral bands, omitting several critical factors that users in the field of remote sensing would consider valuable. For instance, controlling for environmental factors (e.g., time of year, weather conditions) or ablation studies on model and dataset size could provide more meaningful insights. Additionally, benchmarks could have varied the type of pretraining datasets used, comparing options like FAIR1M and MillionAID, which cater specifically to different data characteristics. This limited approach leaves significant gaps in understanding the models' broader adaptability and robustness.\\n3. One of the most significant weaknesses of the paper is its lack of coverage of remote-sensing-specific foundation models, particularly those developed with unique properties for remote sensing challenges. Models such as Scale-MAE and SatMAE are designed to address the exact generalization challenges discussed in the paper, such as resilience to spatial, temporal, and spectral variability. Scale-MAE, for instance, is specifically designed to handle resolution variations, a critical factor in satellite imagery, while SatMAE incorporates temporal and locational encoding, making it suitable for applications that require adaptability across time and space. 
By not including these specialized models, the paper misses a critical opportunity to benchmark the very architectures that are purpose-built to handle the complexities of remote sensing, thus limiting the practical relevance and depth of the benchmark.\\n4. While the paper discusses self-distillation-based models, it does not sufficiently evaluate how different pretraining methods, such as DINOv2 and EVA, might impact model robustness and generalization capabilities. Advanced pretraining techniques are known to affect the performance of foundation models differently, especially when fine-tuning for specific tasks like those in remote sensing. Including a comparative analysis of these methods could offer insights into how various approaches to pretraining influence downstream task performance and generalization in remote sensing contexts. This omission reduces the value of the benchmark, as it does not provide a complete picture of how alternative pretraining methods might improve or hinder model performance in real-world scenarios.\\n\\n### References\\n- [SatMAE] Cong, Yezhen, et al. \\\"Satmae: Pre-training transformers for temporal and multi-spectral satellite imagery.\\\" Advances in Neural Information Processing Systems 35 (2022): 197-211.\\n- [Scale-MAE] Reed, Colorado J., et al. \\\"Scale-mae: A scale-aware masked autoencoder for multiscale geospatial representation learning.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n- [EVA] Fang, Yuxin, et al. \\\"Eva: Exploring the limits of masked visual representation learning at scale.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n- [DINOv2] Oquab, Maxime, et al. \\\"Dinov2: Learning robust visual features without supervision.\\\" TMLR. 
2024.\", \"questions\": \"Nothing in particular\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors conduct benchmarking for remote sensing foundation models around different characteristics including changes to different resolutions and new bands. The authors train several iBOT models on the MIllionAID dataset with different augmentations and evaluate it on change detection and scene classification.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This is an interesting and important line of investigation as the ability to produce high-quality remote sensing foundation models has dramatically lagged behind other domains.\\n\\nOverall the text is clear and well wrtiten.\\n\\nSignificant benchmarks are run with multiple trials to generate error bars. \\n\\nDesign choices are all reasonable.\", \"weaknesses\": \"Only scene classification and change detection are explored. There are many other tasks within remote sensing which are highly relevant and may have dramatically different feature relevancy (i.e. more spectrally dominant vs. structurally).\\n\\nI disagree with the authors' stance that images at inference time are more likely to be lower-resolution. Very often the opposite is true- there are huge amounts of publicly available low resolution imagery. However, with the introduction of private satellite, planes, and drones, into the remote sensing space, people are trying to use these high-resolution sources for a very specific task while leveraging the decades of low resolution data that exists. Similarly, satellite imagery is only going to get better over time so being able to adapt the low resolution data to a new high resolution task is paramount. 
\\n\\nFor tasks like change detection and scene classification in general, there is substantial labeled data out there - foundation models would hopefully boost performance, but they're still usable. In contrast, there are many novel remote sensing tasks (especially in ecology/climate and agriculture) which are blocked by a lack of good labels and desperately need improved foundation models. I think exploring some of these would dramatically improve the impact of the work. \\n\\nAdditional analysis and discussion are warranted around why contradictory results are seen for different tasks/datasets. This type of result has been a key challenge (particularly in remote sensing), so I think it warrants more discussion.\", \"questions\": \"I don't have specific questions, but do feel like more discussion is needed around interpretation of the results. For example, \\\"we hypothesize that models pretrained in a self-supervised manner require less data...\\\" - explain and discuss further.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"The reviewers' comments raise a few categories of concerns: the scale of the tasks and datasets; the range of models tested; and a lack of algorithmic novelty (new losses, architectures, etc.).\\n\\nWhile we work on improving the tasks and datasets and try to increase the coverage of tested models (as much as our compute resources allow), we cannot understand the need for algorithmic novelty in a benchmark paper. We decided to withdraw the paper, but **it would be very beneficial for our future work if the reviewers and AC could help us understand what the scope of a good benchmark paper at ICLR is**. ICLR's *Call for papers* includes a line \\\"Datasets and Benchmarks\\\".
Should the papers in that category / topic necessarily include algorithmic innovations to be considered worthy for ICLR? \\n\\nWhat are some benchmark papers published at previous ICLR conferences that the reviewers and ACs consider high quality?\\n\\nWe thank the reviewers for the comments and suggestions. We will use them in the next iterations of this work.\"}", "{\"summary\": \"The paper introduces a benchmark pipeline that measures generalization ability and performance on downstream tasks. It first outlines the motivation for applying self-supervised foundation models to remote sensing tasks and the need to test these foundation models with different experimental settings. The paper points out a few significant axes of generalization and highlights why resolution and bands are of key interest. It explains the benchmarking methods and evaluation metrics for the scale-augmentation (resolution) study and the spectral band combination study. The authors select iBOT, a cutting-edge vision transformer variation, as the backbone and test the impact of introducing scale augmentation at different training stages; they also develop a joint training strategy for the mask decoder. The authors point out that fine-tuning has a negative impact on generalization performance, where a frozen backbone can come in handy. For the results, they list the metric scores for different models on different remote sensing tasks and demonstrate that most foundation models do not generalize very well to lower resolutions and different spectral bands, and that a frozen backbone preserves general representations better than a fine-tuned one.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"This paper provides a comprehensive benchmark for foundation models in aerial imagery processing. It addresses two practical challenges: resolution and spectral band variation.
It also evaluates different techniques that could have a positive/negative impact on generalization. It reveals the limitations of current models. This paper has an easy-to-understand writing style.\", \"weaknesses\": \"1. Lack of novelty: From what I gathered from this paper, it lacks a new algorithm/architecture/methodology with a fundamental contribution. All the work is limited to testing and applying architectures or metrics from previous works. Correct me if I am wrong.\\n2. New loss? New architecture?: This paper did a good job of demonstrating that current models are not competitive in generalization to different resolutions and band combinations, but offers no new solutions or proposals. For example, will it be helpful to develop a new loss mechanism or a new architecture? Or re-design current models to adapt to your generalization needs?\\n3. Multi/hyper-spectral data: Recent studies focus more on how to retrieve spectral representations across all bands instead of transferring the knowledge from, say RGB, to NIR. Correct me if I am wrong.\\n4. Pre-training, Fine-tuning: These are not new concepts at all and have been studied extensively; the conclusion that a fine-tuned backbone impacts generalization while a frozen backbone preserves generalization ability is intuitive, familiar to most researchers, and widely accepted as a fact, so repeatedly conducting these trainings seems redundant.\", \"questions\": \"A benchmark is important for people to know what problems cause poor generalization performance, but this paper seems to provide no useful information regarding why foundation models generalize badly.\\n\\nAll researchers know that large models suffer from overfitting and generalization issues; reiterating this with remote sensing models will not clarify it further. Instead, it would be more meaningful to explore what method could be used to address this issue?
Did you or are you planning to propose any technique/algorithm/architecture design to address this generalization problem instead of simply freezing the pre-trained weights?\\n\\nThe claim that generalization to other geographical locations is hard to study seems a bit underwhelming since multiple datasets cover various landscapes under a uniform labeling framework, such as EuroSAT/LoveDA. Also for seasonal generalization, as far as I know many satellite imagery sources such as Sentinel-2 provide data throughout the year. Correct me if I am wrong.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DYVSLfiyRN
Transferable Adversarial Attack on Vision-enabled Large Language Models
[ "Kai Hu", "Weichen Yu", "Alexander Robey", "Andy Zou", "Chengming Xu", "Haoqi Hu", "Matt Fredrikson" ]
Vision-enabled Large Language Models (VLLMs) are increasingly deployed to offer advanced capabilities on inputs comprising both text and images. While prior research has shown that adversarial attacks can transfer from open-source to proprietary black-box models in text-only and vision-only contexts, the extent and effectiveness of such vulnerabilities remain underexplored for VLLMs. We present a comprehensive analysis demonstrating that targeted adversarial examples are highly transferable to widely-used proprietary VLLMs such as GPT-4o, Claude, and Gemini. We show that attackers can craft perturbations to induce specific attacker-chosen interpretations of visual information, such as misinterpreting hazardous content as safe, overlooking sensitive or restricted material, or generating detailed incorrect responses aligned with the attacker's intent. Furthermore, we discover that universal perturbations---modifications applicable to a wide set of images---can consistently induce these misinterpretations across multiple proprietary VLLMs. Our experimental results on object recognition, visual question answering, and image captioning show that this vulnerability is common across current state-of-the-art models, and underscore an urgent need for robust mitigations to ensure the safe and secure deployment of VLLMs.
[ "adversarial attack", "black-box attack", "transferable attack", "vision-enabled large language models" ]
https://openreview.net/pdf?id=DYVSLfiyRN
https://openreview.net/forum?id=DYVSLfiyRN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "fHT0YIgdk5", "L3dX7cUTiq", "KeHsbQgxa7", "IVzlsSHQnE", "3zn9wMCF46" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1729631272351, 1730765814090, 1730126288878, 1732508098215, 1730680178162 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1696/Reviewer_L8rG" ], [ "ICLR.cc/2025/Conference/Submission1696/Reviewer_ojxe" ], [ "ICLR.cc/2025/Conference/Submission1696/Reviewer_jHNg" ], [ "ICLR.cc/2025/Conference/Submission1696/Authors" ], [ "ICLR.cc/2025/Conference/Submission1696/Reviewer_ufji" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes two techniques to create adversarial attacks to Vision-enabled Large Language Models (VLLMs).\\n\\n1) CLIP Score attack - this technique essentially learns a perturbation to an image such that the embedding of the image aligns with (high cosine similarity to) a set of embeddings of incorrect text labels, and does not align with a set of embeddings of correct text labels.\\n2) VLLM response attack - this technique learns a perturbation to directly maximizes the log likelihood of the VLLM outputting some incorrect label when prompted with an image and text query.\\n\\nBoth techniques learn perturbations using gradients from white-box surrogate models. The paper focuses on how the learnt adversarial attacks transfer to help out models (black-box transfer).\", \"the_paper_tests_the_above_techniques_in_three_different_settings\": \"1) Image classification\\n2) Text generation (captioning of images in natural language)\\n3) Safety-related reasoning (identifying properties of harmful images or answering safety-related questions about harmful images).\\n\\nIn each setting the paper demonstrates impressive transfer of attacks to held out models. 
Most impressively, across all settings they create successful attacks (non-zero attack success rate) against frontier models such as GPT-4o and Claude 3.5 sonnet.\\n\\nIn addition to these results, the authors create a custom dataset and benchmark for the \\\"Safety-related reasoning\\\" task and release this.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"From here on, I will refer to changing the data present in an image as an adversarial attack to a VLLM. That is, I will not be including things such as jailbreaks or prompt injection attacks in my notion of adversarial attacks.\\n\\n### Originality\\n\\nWhilst adversarial attacks to VLLMs have been studied widely, the authors make the following original (to the best of my knowledge) contributions (in order of importance):\\n\\n1) Most importantly, they show the highest attack success rate for transferable adversarial attacks I have seen.\\n2) Through ablation studies, they show how various \\\"tricks\\\" such as data augmentation and model ensemble size can be used to enhance transferability. By \\\"tricks\\\" I do not mean anything negative, and believe that these techniques are useful for the broader research community to know about.\\n3) They introduce three different evaluation tasks for VLLMs.\\n4) They release a dataset and benchmark for the third of said tasks, safety-related reasoning.\\n\\n### Quality and Clarity\\n\\nThe paper is high quality. The experiments are thorough and I am convinced of the broad claims made in the paper. For the most part the paper is well written and easy to follow, and figures / tables are informative.\\n\\n### Significance\\n\\nIn my opinion, adversarial attack papers are most significant when they demonstrate methods that can be deployed against, or at least one can imagine a scenario in which they would be deployed against, real-world systems.
In the case of attacking frontier models, this means a technique is significant if it can be used against closed-source models such as GPT4, Claude, etc.\\n\\nThis paper meets this criterion.\", \"weaknesses\": \"I am going to split my critique up into two sections. The first will be a high-level critique of the paper, and the second will be specifics about sections. Whilst the critique is long, this is only because I believe the paper has interesting results that could be improved, not because I think there are any fundamental failings in the paper. To the contrary, I think the paper contains valuable insights for the broader adversarial attack community.\\n\\n\\n# High Level:\\n\\n### Originality\\n\\nMy high-level critique of this work concerns its originality. In particular, the CLIP Score attack and VLLM response attack seem very similar to the two attacks presented in Dong et al. [1]. In particular, Dong et al. also present a method based on CLIP embeddings (albeit they align to a target image, not a target textual embedding) and an end-to-end technique. They also demonstrate that these methods transfer to black-box models. Zhao et al. [2] also demonstrate black-box adversarial attacks of a similar nature to those presented in this work (although they do not attack frontier models). \\n\\nFirstly, these works should be mentioned and treated in the related works, but they are not ([1] is however mentioned at the end of the introduction).\\n\\nDespite this critique, I believe this paper still has valuable contributions. In particular, making a technique successful in machine learning often comes down to small tweaks or \\\"tricks\\\". The authors demonstrate that, through the specific techniques they use, they are able to get what appears to be stronger transfer than [1].\", \"i_would_recommend_two_concrete_changes_on_this_front_however\": \"1) Soften or remove claims of novelty about your techniques.
For example, you state \\\"we develop a novel attack for VLLMs designed to find image perturbations by targeting adversarially chosen text embeddings.\\\" Given what I have seen, I do not think it is fair to say your technique is entirely novel. An alternative framing would be to say something like \\\"building on prior works that have displayed some transferability <cite>, we enhance transferability by doing <x>\\\". \\n2) Running baselines using prior works ([1] in particular) and seeing how adding your tricks (ensembling, data augmentation) affects them would be very valuable.\\n\\n### Experiments\\n\\n In all of the experiments I do not understand the exact algorithm you are using. You say you use an ensemble of surrogate models that includes CLIP models and full VLLMs. This leads me to assume that the adversarial examples are created using a mixture of the CLIP score attack and VLLM response attack methods (e.g. you accumulate loss from both and then take a gradient step). Is this correct?\\n\\n If this is correct, then why did you not show ablation studies of using each technique individually? This would seem to me to be very important. If you find that using both techniques at the same time was what increased transferability, that would be very useful to know. Additionally, whether or not this is correct, language should be added to make the experimental setup clearer.\\n\\nApologies in advance if I have missed something here.\\n\\n\\n### Definition of adversarial attack\\n\\nSecondly, the paper uses a broad definition of adversarial attacks to VLLMs. For example, in the related works you compare to Schaeffer et al. Their paper concerns transferable jailbreaking attacks. In contrast this paper concerns adversarial attacks that change how the model perceives an image, as opposed to attacks that convey some hidden instruction to the model.
In fact, your formulation in equation (1) is good (although I have some critiques of it below) and clearly does not cover the case of jailbreaking or prompt injection attacks. Making this distinction clearer in the introduction and related works would be valuable.\\n\\n# Section level critique\\n\\n### Section 2 - related works\\n\\n- You state \\\"in this paper, we take a new perspective: We\\n\\tinvestigate how visual perturbations can induce targeted misinterpretations in proprietary VLLMs\\n\\tsuch as GPT-4o.\\\" As mentioned above, I don't think this is an entirely new perspective, and thus this claim should be removed.\\n- Like I said above, a more thorough treatment of [1] and [2] is required.\\n\\n### Section 3 - generating transferable attacks for VLLMs\\n\\n- The problem setup in equation (1) seems slightly off. The requirement is only that the two outputs are not equal, but this would be satisfied by simply flipping a single token (which does not match my internal definition of what I consider an adversarial attack to be). For example, this could occur simply if I was sampling with some entropy.\\n- Nit - above equation (5) I think tilde x_a should be tilde t_a? That is, tilde t_a is introduced in (5) without a definition.\\n\\n### Section 4 - experiments \\n\\n- See above concerning my question of method.\\n- For each experiment, I would like to see for each epsilon budget how a random perturbation of that size affects the model performance. It may be in all cases that the ASR remains 0, in which case I think this is still valuable to include but can be put in the Appendix. If not, then this is a useful baseline to compare your method against.\\n- For table 5, it would be nice to see some example questions and model responses with and without adversarial perturbations.\\n- Nit - should be \\\"after generating\\\" not \\\"after generate\\\" in line 302.\\n- Section 4.3. Claims here need to be made more narrow.
You state \\\"This benchmark not only provides a standardized framework for assessing VLLM capabilities in safety critical domains.\\\" This is evaluating a certain subset of safety critical domains. Not all possible VLLM safety critical domains (e.g. it does not tell me much about how useful the VLLM could be used as an agent to assist me in some nefarious task). This is ok, just the claim should be made more narrow. \\n- I think the safety benchmark is very interesting. Writing could be enhanced by referencing real world situations in which a failure on this benchmark would lead to bad things.\\n\\n### Other Nits\\n\\n- Line 335 \\\"thereby making transferable attacks more less effective on Claude\\u2019s models\\\", should be \\\"more OR less\\\"\\n- Line 392 \\\"Each question ends with 'Please\\n\\tanswer with yes or no'\\\"\\n\\t\\t- The quotation mark is backwards.\", \"references\": \"[1] - Dong, Yinpeng, et al. \\\"How Robust is Google's Bard to Adversarial Image Attacks?.\\\" arXiv preprint arXiv:2309.11751 (2023).\\n[2] - Zhao, Yunqing, et al. \\\"On evaluating adversarial robustness of large vision-language models.\\\" Advances in Neural Information Processing Systems 36 (2024).\", \"questions\": \"My questions simply relate to the weaknesses raised. I restate them in more brevity here:\\n\\nQ1) Originality. Do you believe my concerns with regards to originality are valid, and if so how do you intend to edit the paper accordingly?\\n\\nQ2) Experiments. Is my interpretation of the method used to produce the results correct (a mixture of the two attacks presented), and if so why did you not compare the techniques individually? \\n\\nQ3) Definition of adversarial attack. Do you agree with my distinction between the types of adversarial attack, and if so how do you intend to edit the paper to reflect this?\\n\\nQ4) How do you plan to address the other more narrow weaknesses I raised about each of the sections? 
\\n\\nOverall I think this is a valuable piece of work! I believe, however, that it could be made stronger by addressing these concerns.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper explores whether targeted adversarial examples are transferable to widely-used proprietary Vision-enabled Large Language Models (VLLMs) such as GPT-4o, Claude and Gemini. The paper conducts experiments to show that perturbations crafted by attackers can induce misinterpretation of visual information. Also, the paper shows that universal perturbations can consistently induce these misinterpretations. The paper conducts sufficient experiments including object recognition, visual question answering and image captioning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper explores the transferability of adversarial examples to proprietary blackbox Vision-enabled Large Language Models (VLLMs). This transferability of adversarial examples is significant.\\n\\nTwo attacks, the CLIP score attack and the VLLM response attack, are proposed. Also, two tricks, data augmentation and surrogate ensembling, are proposed to enhance transferability.\\n\\nThe paper conducts sufficient experiments including using two open-source VLLMs, using three blackbox VLLMs, and conducting attacks on three tasks.\", \"weaknesses\": \"The paper only provides the experimental results to demonstrate the transferability of the two proposed attacks and tricks. It is better to explain more about the reason for the transferability.\\n\\nIt looks like the proposed attacks depend on the positive and negative textual prompts in equation (2).
It is better to provide some explanations or experiments to show the influence of the textual prompts.\\n\\nThere are some typos, e.g., the confusing use of t_a^~ and x_a^~ in line 223 and equation (5).\\n\\nThere are no comparison methods in the main results. It is difficult to understand the advantage of the proposed attacks.\", \"questions\": \"Since some works have explored the transferability of adversarial attacks, could the authors explain the difference in transferability between traditional adversarial attacks and the proposed attacks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores the vulnerability of vision-enabled large language models (VLLMs) to adversarial attacks, focusing on the transferability and universality of adversarial examples. The authors introduce two specific attack methods\\u2014the CLIP Score and VLLM Response attacks, which target the vision modality of the VLLM\\u2014demonstrating their impact across two tasks (image classification and text generation) and six VLLMs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper is both relevant and timely, given the real-world deployment of models with these vulnerabilities and the associated risks. The authors conduct thorough experiments on these models, providing an in-depth assessment of their adversarial robustness. The discussion of universal perturbations for VLLMs is especially compelling, as it addresses an area that remains underexplored.\", \"weaknesses\": \"My main concern is that the proposed attack lacks novelty, since in [1] the authors already introduced a transfer-based attack strategy by matching image-text features.
Many of the conclusions drawn, such as the vulnerability of VLMs to adversarial attacks (specifically on the vision modality of these models) and the transferability of these attacks, are already well-explored in the existing literature, which the authors fail to acknowledge adequately [1,2].\\nThroughout this paper, there is a notable lack of consistency and cohesion across sections. Furthermore, the VLLM SafeBench tool is introduced without any prior context, emerging only within the experimental results. The experimental design is disorganized, with unclear settings. Overall, the paper lacks clarity, cohesion, and originality.\\n\\n\\n1.\\tZhao, Yunqing, et al. \\\"On evaluating adversarial robustness of large vision-language models.\\\" \\n2.\\tYin, Ziyi, et al. \\\"Vlattack: Multimodal adversarial attacks on vision-language tasks via pre-trained models.\\\"\", \"questions\": [\"Below is a list of comments and questions, ordered not by importance:\", \"Lines 121-122: Why are you comparing the success rate of untargeted and targeted attacks?\", \"Line 180: How do you know this? Did you conduct experiments to verify?\", \"The threat model for the two proposed attacks is unclear. Specifically, what level of access does the attacker have? Is this a white-box or black-box attack, and is it targeted or untargeted?\", \"Line 186: The definitions of \\\"transferable\\\" and \\\"universal\\\" should not be relegated to the appendix.\", \"I do not see the novelty in the CLIP Score attack. In [1], the authors introduced a transfer-based attack strategy by matching image-text features. What is your contribution beyond this approach?\", \"Why are you proposing two separate attacks? The motivation presented in lines 218-219 is unclear and insufficiently supported. From my understanding, the CLIP Score attack is untargeted, while the VLLM Response attack is targeted.\", \"In line 225, I am assuming $\\\\tilde{t_a}$ should be $\\\\tilde{x_a}$? 
In general, the optimization for the VLLM Response attack would benefit from more explanation: what do you mean by its output? Is it the target text or a label?\", \"Which attack are you referring to as the \\\"transfer attack\\\" in line 268?\", \"In Section 4.1, the experimental setup is unclear. The surrogate and victim models are not identified in the tables, and it\\u2019s not specified which attack is being optimized.\", \"In lines 335-336, you suggest that the attack is only transferable when the same or a similar model is used for embeddings. Is this correct?\", \"The term \\\"Multimodal LLMs\\\" is introduced only in certain section titles and table captions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents an adversarial attack to demonstrate that targeted adversarial examples are transferable to current VLLMs. By crafting perturbations, the attacker can achieve good results in both targeted and untargeted attacks. The authors also find that universal perturbations can consistently induce misinterpretations across VLLMs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n\\n2. The selected open-source VLLMs are SOTA. \\n\\n3. Investigating the attack on VLLMs is necessary for the VLLM safety community.\", \"weaknesses\": \"1. Provide some adversarial examples (visually)? If we just randomly perturb the images, that can also cause misclassification, so this alone cannot prove the effectiveness of the proposed method.\\n\\n2. The paper uses \\u03b5=32/255 to measure the perturbation. If the perturbation is too large, it is not stealthy. We can see that with \\u03b5=32/255, it can achieve good results. 
But this perturbation is a bit large and will affect stealthiness. \\n\\n3. The paper claims to achieve good attack performance in both untargeted and targeted attack settings. For the targeted attack setting, results are only provided for ImageNet-1K image classification. More experiments would be better.\\n\\n4. The two proposed attack methods, the CLIP score attack and the VLLM response attack, are good but not novel. The intuition has been proposed by other works, and the authors applied it to the adversarial attack domain.\", \"questions\": \"1. It seems the attack only manipulates the image. Is it possible to add some analysis of the attack's effects on text perturbation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
DXz1PDA0Wg
Herb-GANNet: Synthetic Data Generation through Conditional GANs for Improving Accuracy in Medicinal Leaf Classification
[ "Sapna R" ]
Accurate classification of medicinal leaves is essential across various fields, including agriculture, Ayurveda, drug discovery, and biodiversity conservation. However, this task can be complex and time-consuming for experts due to the complexity of plant morphology, limited public datasets, and inherent class imbalances among species. These issues not only hinder effective identification and utilization of medicinal plants but also impede research and development in related domains. This study explores the application of Conditional Generative Adversarial Networks (CGANs) to generate synthetic data aimed at improving medicinal leaf classification models. CGANs offer an effective solution for augmenting datasets and addressing class imbalance issues. We employed a conditional Deep Convolutional Generative Adversarial Network (cDCGAN) to produce 500 synthetic images for each of thirty different plant species. To evaluate the effectiveness of the generated data, we trained and evaluated three popular convolutional neural networks: ResNet-34, VGG-16, and EfficientNet-B1, on both the original and augmented datasets. Our results show that CGAN-generated data significantly improved the performance across all tested models. EfficientNet-B1 achieved the lowest test loss of 1.74% on the augmented dataset, while ResNet-34 exhibited the highest test accuracy of 98.26%. These findings indicate that cDCGANs can generate synthetic data that effectively mimics real images, leading to (1) larger training datasets, (2) reduced data collection cost, and (3) increased data diversity and model generalization by providing a broader range of training examples.
[ "medicinal leaf classification", "data augmentation", "generative adversarial networks (GAN)", "deep learning", "image generation", "classification", "drug discovery" ]
https://openreview.net/pdf?id=DXz1PDA0Wg
https://openreview.net/forum?id=DXz1PDA0Wg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "iabQsRxp1j" ], "note_type": [ "comment" ], "note_created": [ 1729125351085 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"desk_reject_comments\": \"not a full paper; violating anonymity requirement.\", \"title\": \"Submission Desk Rejected by Program Chairs\"}" ] }
DXaUC7lBq1
What Makes Your Model a Low-empathy or Warmth Person: Exploring the Origins of Personality in LLMs
[ "Shu Yang", "Shenzhe Zhu", "Ruoxuan Bao", "Liang Liu", "Yu Cheng", "Lijie Hu", "Mengdi Li", "Di Wang" ]
Large language models (LLMs) have demonstrated remarkable capabilities in generating human-like text and exhibiting personality traits similar to those in humans. However, the mechanisms by which LLMs encode and express traits such as agreeableness and impulsiveness remain poorly understood. Drawing on the theory of social determinism, we investigate how long-term background factors, such as family environment and cultural norms, interact with short-term pressures like external instructions, shaping and influencing LLMs' personality traits. By steering the output of LLMs through the utilization of interpretable features within the model, we explore how these background and pressure factors lead to changes in the model's traits without the need for further fine-tuning. Additionally, we suggest the potential impact of these factors on model safety from the perspective of personality.
[ "explainable ai", "personality of LLM" ]
https://openreview.net/pdf?id=DXaUC7lBq1
https://openreview.net/forum?id=DXaUC7lBq1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nIQZWxswbx", "aZ8pILLPOl", "aH4Cq14jZr", "JfIAPRBSls", "9cGRFXA8CR" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730162958283, 1730349239006, 1730692434871, 1730687031577, 1731429825644 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4406/Reviewer_PxVq" ], [ "ICLR.cc/2025/Conference/Submission4406/Reviewer_ko6j" ], [ "ICLR.cc/2025/Conference/Submission4406/Reviewer_FvtT" ], [ "ICLR.cc/2025/Conference/Submission4406/Reviewer_haF1" ], [ "ICLR.cc/2025/Conference/Submission4406/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper attempts to investigate how long-term factors, represented as activation features of LLMs, and short-term pressures, operationalized by prompt engineering, affect the LLMs' ability to complete the Big Five Inventory and Short Dark Triad personality tests. The authors show that the models exhibit less agreeableness and more neuroticism on the Big Five Inventory test when they have been tuned to have strained family relationships. They also show that if the models are prompted to be gregarious through prompts like 'Imagine you're a person who enjoys being around others and thrives in social situations', the models are more likely to score lower on Psychopathy and Machiavellianism on the Dark Triad personality test.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The authors have robust methods and solutions. The main strength of the paper is the range of factors the authors tested. These include gender, age, education, social ideology, etc. The results are presented in an extremely accessible manner in the tables.\", \"weaknesses\": \"While this paper has many issues, the main one is the framing. The authors are attempting to explore personality in LLMs. There seems to be a fundamental misunderstanding of what personality entails. 
Personality definitionally presumes personhood (https://www.apa.org/topics/personality). As the authors point out in the Sparse Autoencoders (SAEs) Section, they are investigating activations in LLMs, not personality.\\n\\nThis misunderstanding of what personality entails is further evidenced by the works they have cited, e.g., Joshi et al. (2023a). The cited paper discusses personas, not personalities. Personas and personalities are not interchangeable ideas. Personality refers to the intrinsic qualities and characteristics that make an individual who they are, while persona refers to the outward projection.\\n\\nThe LLM's ability to solve the Big Five Personality MCQs or the Dark Triad test does not indicate that the LLM itself has a personality; it just indicates that it can solve the tests. The authors might be conflating the ability to process and respond to test questions with having the underlying psychological constructs being measured.\\n\\nThe authors' definition of personality is extremely atypical from a cognitive psychological perspective. I would have preferred that in the background section they had gone into detail about how they are thinking of personality and how their definition compares to the typical ideas of personalities.\\n\\nI would encourage the authors to engage with literature from cognitive psychology which explicitly defines personality as a fundamentally biological feature. 
Consider the following examples from the field:\\n\\nMischel & Shoda define personality as emerging from the interaction between an organism's internal psychological features and its environment, requiring consciousness and biological systems that LLMs fundamentally lack.\\n\\nMcCrae & Costa grounds personality in \\\"basic tendencies\\\" that are rooted in biology and require a living system capable of adaptation and response to environment.\\n\\n\\nIn order to make their argument more robust, I would recommend that the authors should:\\n1. Clarify if they are using \\\"personality\\\" as an analogy or metaphor rather than claiming LLMs have true personalities\\n2. Consider alternative frameworks like \\\"behavioral patterns\\\" or \\\"response tendencies\\\" that don't carry the same biological/psychological implications\\n3. Discuss the limitations of applying human personality constructs to computational systems\\n\\n\\nThe authors seem to have cited works without actually engaging with them. I have pointed out the Joshi et al. (2023a) paper earlier. They also have cited Perez et al. 2023 in support of reliability concerns in LLMs, including misinformation and privacy risks. However, the cited paper does not discuss either of these issues. The paper instead cites Carlini et al. (2019, 2021) for privacy risks and Lin et al. for misinformation. I would recommend that the authors should engage with those works directly instead of Perez et al. 2023\\nI would also suggest that the authors review all their citations to ensure they directly support the claims made. \\n\\nIn line 051, the authors matter-of-factly allude to previous works that have identified two primary strategies for endowing LLMs with personality traits. However, this is unsubstantiated, as no supporting citations are provided for this claim.\\nI would suggest that the authors either provide specific citations for the claim about the two primary strategies. 
In case these claims do not exist, rephrase the statement to clarify that this is their own observation or hypothesis.\\n\\nAdditionally, the research questions are not sufficiently answered. In RQ2, the authors asked `how can these personalities influence LLMs' safety?\\u2019 Within the main text of the paper, this RQ was not engaged with in any form. I would recommend that the authors either consider removing or revising RQ2 to better align with the actual content of their paper. If they feel RQ2 is necessary for their argument, then they can include a dedicated section to address RQ2, providing specific analyses and results related to LLM safety.\", \"works_cited\": \"Mischel, Walter, and Yuichi Shoda. \\\"A cognitive-affective system theory of personality: reconceptualizing situations, dispositions, dynamics, and invariance in personality structure.\\\" Psychological review 102.2 (1995): 246.\\n\\nMcCrae, Robert R., and Paul T. Costa. \\\"Empirical and theoretical status of the five-factor model of personality traits.\\\" The SAGE handbook of personality theory and assessment 1 (2008): 273-294.\", \"questions\": \"The authors' definition of personality is extremely atypical. I would suggest the authors add a background section to discuss their definition of personality and how it compares to the cognitive psychology literature. I would have preferred that in the background section they had gone into detail about how they are thinking of personality and how their definition compares to the typical ideas of personalities.\", \"minor_concern\": \"The use of the phrase 'trustworthy concerns' in line 32 is quite uncommon. I am not aware of any paper using this particular phrasing. If this is a phrase introduced by the authors, they should clarify it. If this is a common phrase any references would be helpful for the uninitiated reader. 
An alternative phrasing would be 'reliability concerns'.\\n\\nThe authors have robust methods and solutions; however, there seems to be an issue with framing. I would suggest that the authors either reframe the scope of this work to a problem besides 'personality in LLMs' or include a researcher from psychology or at least discuss their work with some researchers from psychology.\\n\\nI would also encourage the authors to restructure the paper so that they can sufficiently answer both RQs instead of one.\\nThe labels in Table 5 might be incorrect. I would suggest that the authors fix those. Narcissism is rated as 4.3 for Gemma-2B-Instruct base \\nin Table 5, but is rated as 4.3 for Gemma2-9B-Instruct base in Tables 1 to 4.\\n\\nIn the both introduction and abstract, the authors have framed short-term pressures and long-term factors as interactional and related phenomena. However, in their experiments, they focus on the two features as independent and investigate them independently. These features might be interactional, and it would be prudent to study them in conjunction. I.e., assume a 'gregarious young person', 'gregarious old person', 'gregarious male', 'gregarious female', etc.\\n\\nGenerally, the word 'warmth' is used as a noun; however, the title 'what makes your model a low-empathy or warmth person' uses warmth as an adjective.\\n\\nIt is not clear from the write up why the authors have preferred GEMMA over other open source generative models like Llama. A justification of the choice would be helpful to the reader.\", \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"The research maps human psychological constructs (personality tests, trait theories) to LLMs without proper justification or acknowledgment of fundamental differences between human cognition and LLMs. 
This anthropomorphization of LLMs through personality frameworks could mislead both researchers and the public about the true nature and capabilities of LLMs.\\n\\nThe paper explores how to modify LLM behavior using factors like gender and socioeconomic status, failing to acknowledge that the LLM is fundamentally a set of matrix multiplications and lacks both gender and socioeconomic status. The authors risk conflating statistical patterns with genuine human attributes.\\n\\n\\nThe research appears to use psychological assessment tools (Big Five Inventory, Dark Triad) without proper consideration of the validity of applying clinical/psychological assessment tools to non-human entities.\\n\\n\\nPublishing this paper in its current form is not contributing much to the academic community, as it promotes methodologically unsound practices, misapplication and misunderstanding of ideas from cognitive psychology and potentially misleading anthropomorphization of LLMs.\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper sought to measure the variability of LLM traits in response to steering towards different model personas (background factors like age, education, social ideology, etc) and pressures (i.e. trust, etc).\\n\\nFeature steers are achieved in two ways.\\n\\nOne through extracting feature vectors via sparse encoders with SAE Lens, identifying highly activated features against GPT4o generated descriptions against each background (i.e. wealthy lineage, affluent upbringing -> for rich, and \\\"struggling financially/etc -> for poor). These feature steers are integrated into the residual stream for model steering.\\n\\nTwo, \\\"Short-term pressure\\\" features (basically, prompt-based steers, i.e. asking the model \\\"imagine you are a person who is xxx\\\" before generation) use generated GPT4o-generated prompts for each desired psychometric attribute (i.e. 
\\\"competence\\\") and representation engineering (Zou et al 2023) to capture their activation features, which are then passed through PCA to find unit steering vectors. These features are added to corresponding activations for model steering.\\n\\nThe effects of this steering are measured via a personality test for LLMs, TRAIT, which comprises of 8K multiple-choice questions against psychometric traits. Results of how these measures vary against different feature steers are described and shown, with the authors making the conclusion that larger LLMs are more easily shaped by \\\"external\\\" pressures, while smaller LLMs are more sensitive to the background factor of the persona, among other findings of similar nature.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors ask an interesting and novel question\\u2014how do model background descriptions lead to changes in model personality? To wit, the authors structure their experiments in an interpretable fashion, and ground their work in existing psychometric literature (and the emerging LLM psychometric literature), incorporating validated (for humans) psychometric questions into their prompts.\", \"weaknesses\": \"It's not immediately clear that the results for different personality steers are directly comparable with each other.\\n\\nWhile the authors state that they \\\"guarantee[d] the monosemanticity nature of each feature\\\" by \\\"verify[ying] that they remained inactive when presented with descriptions of other factors\\\", even with this method, it's not especially clear if the degree to which each background steer may be captured by a single SAE feature vector is the same for each background tested, or that the feature activations are monosemantic to the specifically tested background attribute. 
For example, it could very well be the case that while the feature vector for \\\"poor\\\" is indeed monosemantic relative to \\\"rich\\\", and vice versa, the degree to which the feature vector is monosemantic to our understanding of \\\"poor\\\" is different to the degree to which the same is for the \\\"rich\\\" feature vector. This would mean that the conclusions observed on model personalities aren't so much driven by actual background factors (which is the aim of this paper) as they are by quirks of the steering vectors found. \\n\\nThis problem is compounded by steering coefficients\\u2014it's not especially clear that using the same steering coefficients across all concepts for these steering vectors that may have varying degrees of monosemanticity to the concepts tested is necessarily valid: for example, it may very well be the case that a coefficient of 200x towards \\\"poor\\\" is equivalent to a steering coefficient of 800x towards \\\"rich\\\". \\n\\nThese validity issues make it hard to take the conclusions of the paper at their face value\\u2014we can't say that, for instance, larger LLMs are more easily shaped by external pressure while LLMs are more sensitive to background factors, as the authors conclude here, or make conclusions of similar nature.\", \"nit\": \"It was difficult parsing this paper. For instance, Tables 2-5 don't have any units on their measurements or any captions, so it was difficult to parse what these stated numbers mean. 
This paper would benefit from a round of revision.\", \"questions\": \"How did you validate that the GPT4o-generated sentence descriptions/words actually represent what you intended them to be?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Authors explore controlling LLM \\u2018personalities\\u2019 using sparse autoencoders and representation-based methods to extract features that are correlated with long-term background factors and short-term social pressures that are posited to influence human personalities. They use these features to steer the models\\u2019 responses and test them on the TRAIT personality assessment tool based on dimensions of the BFI and SDT personality tests but designed for testing LLMs and against Safetybench safety benchmarking tool. Results demonstrate that their method can effectively steer the models\\u2019 performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper presents an original study using sparse auto-encoders to steer LLMs and successfully influence LLMs behavior on personality measures and safety benchmarks. The study is important for the field of AI interpretability as an empirical demonstration of LLM steering by directly manipulating features in the model layers. 
The study is significant as one of the first to attempt to steer LLM personality using SAE and representation-based methods.\", \"weaknesses\": \"The authors make some claims that are not clearly supported by the results: \\u201cLarger model exhibits more stable personalities and lower dark traits\\u201d, \\u201cLarger LLM is more easily shaped by external pressure, while smaller LLM is more sensitive to the background factor\\u201d, \\u201cOlder and liberalism influence most on larger models while communism and uneducated in-\\nfluence most on smaller models\\u2019 personalities\\u201d, and \\u201cLarger models are driven by self-motivations while smaller models are shaped by self-confidence in skills.\\u201c From what I can gather, these claims are based only on the relative change in scores based on the feature steering. Some of the changes are quite big, but no statistics are provided to quantify the difference, such as significance or effect size.\\n\\nThe presentation of the paper is a little confusing and repetitive. A clearer separation between the theoretical framework, methods, results, and discussion would greatly help the reader to understand the study. Importantly, safety is presented as one of the main objectives of the paper, yet the findings and discussion are relegated to the appendices.\\n\\nThe argument for social determinism in the study of LLM personalities is not convincing. From my reading, it appears that the choice of background factors and social pressures are based on empirical findings from social determinism but this is not well articulated in the paper. It is not immediately obvious that empirical findings from human personality psychology will apply to LLMs. Moreover, there is abundant literature that questions the notion of stable personalities in LLMs and this is not addressed in the paper at all. 
The idea of using educational attainment, cultural background, and political ideology with LLMs might appear contentious to some readers as LLMs are manifestly not human. Ultimately these factors are only used as categories for steering the LLMs and the psychological literature is not revisited in the findings, nor are the findings compared to the psychological literature. Thus, the validity or the necessity of couching this work in human psychological terms is questionable as the use of human psychological factors in the study of LLMs runs the risk of anthropomorphizing AI.\", \"questions\": \"P2 82 \\u201c Our study employs SAEs to extract background features (e.g., educational level or cultural background) encoded during training\\u201d\\nP2 100 \\u201cWe provide some insightable findings on how long-term background factors like age and Family Relations and external pressure like Achievement Striving can influence LLM\\u2019s\\npersonality.\\u201d\\n-> Can LLM really be said to have an education level or cultural background? This smacks of anthropomorphization and just confuses the matter. There might indeed be parallels with human populations but it would be helpful to elucidate these links or it risks confusion, e.g., education attainment is related to the subject matter content; socio-cultural background is related to the cultural sources of the training data, etc.\\n\\nP3 113 \\u201cPersonality and Trait Theory on LLMs\\u201d -> This is a very superficial summary and does not present any extant findings. Research suggests that LLMs do not exhibit stable personalities and ought to be considered more like a cultural superposition of perspectives. This is at odds with your literature review, and ought to be addressed directly.\\n\\nP4 193 4 SOCIAL DETERMINISM IN LLM PERSONALITY\\n-> I\\u2019m not sure what social determinism adds to the present discussion. 
Only superficial links to the psychological literature are made and no implications from theory or empirical findings are inferred. Unfortunately the conclusions are left to the reader when a fuller exposition might help clarify the intentions of the authors. Social determinism as a theory makes certain claims and predictions about human behavior. Is the reader supposed to infer that these apply to LLMs too? Are long-term background factors more powerful than short-term pressures in explaining LLMs?\\n\\nP6 310-316 \\u201cFor background factors, we carefully chose 1-2 key elements from each domain in Table 1, ensuring comprehensive coverage of influential aspects. These include Family Environment (represented by Family Relations Status), Cultural and Social Norms (Social Ideology), Education (Education Level), Life and Work Experience (Professional Commitment), and Environmental Stressors (Socioeconomic Status). We also considered Biological Development factors (Gender, Age, and Emotional Intelligence) and the impact of Media and Technology (AI Familiarity). These factors were selected based on their significant impact on personality development, as supported by various studies in the field.\\u201d\\n-> How can an LLM be said to have any of these? There are some equivalences being drawn between LLMs and humans but these have not been rendered explicitly.\\n\\nP8 427 -> What is the base score?\\nP8 428 -> Why should the largest difference be regarded as the most determinant? Even if the SAE is identifying monosemantic features, the interplay can hardly be said to be monofactorial. As your results demonstrate, a change to one feature can have effects across personality scores.\\nP8 Tables 2 and 3 -> What do the values signify? TRAIT is only presented on the next page after the results of Tables 2 and 3. Yet this is important to understanding the experimental setup. 
Results should not be presented in the experimental setup.\\n\\nP9 and P10 -> It is unclear how these conclusions are being drawn on the basis of changes in the prompts. Some of the claims are speculative and should be separated from the presentation of the results. Statistical tests would help to understand what differences were significant.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors explore the factors influencing personality traits in LLMs, drawing on social determinism theory and recent advances in interpretability. They classify these influences into long-term background factors, such as training data, and short-term external pressures, like prompts and instructions. Using SAE and representation-based methods, they analyze how these factors impact LLM personality through the Big Five and Short Dark Triad tests. The study compares two models: Gemma-2-9B-Instruct and Gemma-2B-Instruct, and analyze their differences in personality expression.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper\\u2019s motivation makes sense, grounded in the theory of social determinism, which provides a new framework for exploring LLM personality. The use of SAE and other interpretability methods indeed provides an interesting direction to understanding personality traits in LLMs.\", \"weaknesses\": \"However, there are several critical weaknesses that lead me to recommend rejecting this paper.\\n\\nFirst, Table 5 appears to mislabel which results correspond to each model when cross-referenced with Tables 2\\u20134, casting doubt on the reliability of the whole Section 5.2. The error also complicates my understanding whether the observations on long-term factors align consistently with short-term factors across different model sizes. 
I would strongly recommend verifying these results for accuracy.\\n\\nIn addition, many findings and claims seem speculative and lack sufficient experimental support. For example, the statement in line 454 (1) feels overly inferred. Rather than relying on abstract claims about model size and personality feature stability, it would be more insightful to analyze models\\u2019 responses to questions directly tied to each background factor. For instance, instead of personality tests, examining if steering on gender with SAE yields more significant changes on gender-related questions for the 9B model would provide clearer insights.\\n\\nThe claim that SAE is better suited for long-term influences while representation-based methods are better for short-term factors lacks empirical support. Looking solely at the definitions of long-term and short-term factors in the paper, either method could easily and reasonably apply to either factor set. This calls for further justification and experimentation to substantiate this claim.\\n\\nIn light of these issues, I find the overall contribution insufficiently sound for acceptance.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
DWa1bATAot
Exploiting Topology of Protein Language Model Attention Maps for Token Classification
[ "Maria Ivanova", "Ilya Trofimov", "Pavel Strashnov", "Nikita Ivanisenko", "Serguei Barannikov", "Evgeny Burnaev", "Olga Kardymon" ]
In this paper, we introduce a method to extract topological features from transformer-based protein language models. Our method leverages the persistent homology of attention maps to generate features for token (per amino-acid) classification tasks, and we demonstrate its relevance in a biological context. We implement our method on transformer-based protein language models using the family of ESM-2 models. Specifically, we demonstrate that minimum spanning trees, derived from attention matrices, encode structurally significant information about proteins. In our experiments, we combine these topological features with standard embeddings from ESM-2. Our method outperforms traditional approaches and other transformer-based methods with a similar number of parameters in several binding site identification tasks and achieves state-of-the-art performance in conservation prediction tasks. Our results highlight the potential of this hybrid approach in advancing the understanding and prediction of protein functions.
[ "protein language models", "protein property prediction", "topological data analysis", "attention maps", "transformers" ]
Reject
https://openreview.net/pdf?id=DWa1bATAot
https://openreview.net/forum?id=DWa1bATAot
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xx1AAh7ngy", "vvmZDn2L2H", "uvpqcgBsky", "upgxZdYyMN", "sPdJ7eZy4y", "r3q4cuspbc", "qrGSYgT34p", "oxmNSClh4S", "n7W6jMHCTe", "mwgjuEnO0c", "mPL8aPmDdI", "iqDK4Hh0VQ", "iNT46v5ebb", "gN2ZpK6p8a", "fdIpqI9I4d", "fQWwJgD3ha", "erPDSEen27", "ZOU1QRIK0E", "XdnhTpMQdk", "XJ12SvrDkJ", "WcFtXVeAkS", "WYCTfxTh2p", "TIZ2I0mAq1", "T5tAeyX0N2", "S3HymTIKKr", "QeqmmKWBfm", "OPp01HHxBG", "Nyf1Kl6uWZ", "KGk4L69uLN", "GJQibesvEr", "DnPPyygy9g", "9n6Gr2hSeE", "7yOek6evQj", "7VXHNAGkzM", "3tv4qFWQV0" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732726790748, 1732714080717, 1732710470309, 1732715022859, 1730499565802, 1732867155398, 1734861327031, 1733169890761, 1732724908426, 1732885764018, 1730687567782, 1732710856036, 1733308394737, 1730737586470, 1730266575970, 1732719699935, 1737524129851, 1732708947718, 1733312355006, 1732715110970, 1730719708746, 1733002440334, 1732714172652, 1732750544173, 1732728590684, 1732720022712, 1733161272455, 1730670050760, 1732724396052, 1730517665117, 1733308647935, 1733183538315, 1732726453005, 1732708426831, 1730115878488 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11536/Reviewer_WAg8" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_uv5N" ], [ "ICLR.cc/2025/Conference/Submission11536/Area_Chair_i2iG" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_XcsX" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_KYAV" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_t9DU" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_pPMN" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_Mubu" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_pPMN" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_WAg8" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_WAg8" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_dMUN" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_XcsX" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Authors" ], [ "ICLR.cc/2025/Conference/Submission11536/Reviewer_uv5N" ] ], "structured_content_str": [ "{\"title\": \"Response by Authors. Part 2\", \"comment\": \"> The chosen downstream tasks primarily focus on per-residue scale tasks. 
However, it would be valuable to discuss structure-related tasks on a larger scale (e.g., protein function annotation), as this could reveal whether this MST-based topological modeling approach can capture more global protein property information.\\n\\nOur primary focus was on per-residue tasks to highlight the ability of RES-MST to capture local structural relationships. Expanding to protein function annotation would be an excellent avenue for future research on our approach.\\n\\n> A more detailed comparison of the method\\u2019s runtime is needed. Compared to traditional full-parameter fine-tuning approaches, your method requires first calculating the MST, then extracting features and training a Pyboost classifier, which incurs significant time costs and may reduce algorithmic efficiency. Therefore, a discussion of the time costs of this approach compared to traditional full-parameter fine-tuning is necessary. However, in Appendix A.5, you did not provide runtime comparisons with baseline models.\\n\\nOur method\\u2019s complexity is detailed in Appendix C, highlighting the linear scaling with respect to depth and attention heads. Compared to full-parameter fine-tuning, our approach offers a computationally efficient alternative by leveraging pre-computed attention maps and using a lightweight classifier for downstream tasks. We will include runtime comparisons with baseline models in future work to further validate efficiency.\"}", "{\"title\": \"Response by Authors. Part 1\", \"comment\": \"Thank you for your detailed and thoughtful feedback. We appreciate your recognition of the novelty and thoroughness of our approach, as well as your constructive suggestions to further clarify and strengthen the paper. We will improve the presentation according to your suggestions. Below we address specific concerns one by one.\\n\\n> The analysis of the TDA in section 3.3 feels somewhat incomplete. Is this just based on the one example from figure 5? 
Can some of these descriptions such as \\u201cchaotic\\u201d vs \\u201cstar\\u201d or \\u201clinear\\u201d be quantified? What is the significance of each of these stages? \\n\\nWe agree that quantifying the stages described as \\u201cchaotic,\\u201d \\u201cstar,\\u201d and \\u201clinear\\u201d would add rigor to the analysis. The terms \\\"star\\\" and \\\"linear\\\" refer to the common shapes of a \\\"star graph\\\" (a tree having $k$ nodes, one of which is internal and connected to the $k-1$ leaves) and a \\\"linear graph\\\" (a graph whose vertices can be listed in the order $v_1$, $v_2$, ..., $v_k$ such that the edges are $(v_i, v_{i+1})$ where $i = 1, 2, ..., k \\u2212 1$). A \\\"chaotic\\\" graph is neither \\\"star\\\" nor \\\"linear\\\". We are adding these clarifications to the revised version of the paper (lines 768-771). Quantitative evaluation of the \\u201cstar\\u201d, \\u201clinear\\u201d, and \\u201cchaotic\\u201d patterns is as follows. Figure 6 presents the mean maximum degree of a node in the MST and confirms a \\\"star\\\" pattern in middle layers (a high maximum degree) and a \\\"linear\\\" pattern in early and late layers (a very low maximum degree). Figure 8 presents the mean distance between tokens corresponding to incident nodes of edges in the MST. This value is low in early and late layers, proving a \\\"linear\\\" pattern.\\n\\n> (small) Figure 7 would be clearer if the ymin was set to 0\\n\\nThank you for the suggestion regarding Figure 7. We appreciate your attention to clarity and understand the importance of improving visual presentation. While adjusting $y_{\\\\text{min}}$ to 0 might enhance readability, Figure 7 illustrates correlation values ranging from -1 to 1, making it crucial to display the full range to accurately represent both positive and negative correlations. 
This ensures that the graph effectively conveys the significance of correlations exceeding 0, which is essential for interpreting the results correctly.\\n\\n> LMetalSite, another (strong) sequence-based method from Yuan et al (2024) is missing from the metal-binding table. Also, it may be appropriate to include ESMFold-derived structural methods, since this is another sequence \\u201cpreprocessing\\u201d step.\\n\\nWe acknowledge the omission of LMetalSite (Yuan et al., 2024) from the metal-binding table and appreciate your valuable suggestion. LMetalSite is based on the sequence protein language model prot_t5_xl_uniref50. While it achieves superior performance in metal-binding prediction tasks, it leverages a different protein language model than ESM-2, making direct comparisons with our approach less appropriate. The primary aim of our work is to demonstrate the advantages of our method over standalone embeddings derived from the same protein language model (ESM-2 in our experiments) to ensure a fair and controlled evaluation.\\nSimilarly, ESMFold explicitly incorporates 3D structural data during training, which inherently contains additional information beyond sequence-based models. As a result, it also cannot be included in fair comparisons with our method, which is designed to rely solely on sequence embeddings.\\nIn the revised version of the paper, we explicitly reference LMetalSite and ESMFold, clarifying their reliance on different training paradigms and data sources, to provide a more comprehensive and transparent discussion for readers.\\n\\n> The provided source code is incomplete. There was substantial use of a package called bio_tda which was not provided.\\n\\nYou can find the `bio_tda` package in the `res_mst_tda/src` directory in the supplementary material.\\n\\n> Figures 6-9 are interesting, but it is not immediately clear what the takeaway is. 
It seems to me that figure 6, 8, and 9 can be explained by: \\u201cESM2 attends more to linear positional encoding in the early and late layers\\u201d\\n\\nYes, you are right. Figure 6 presents the mean maximum degree of a node in the MST and confirms a \\\"star\\\" pattern in middle layers (a high maximum degree) and a \\\"linear\\\" pattern in early and late layers (a very low maximum degree). Figure 8 presents the mean distance between tokens corresponding to incident nodes of edges in the MST. This value is low in early and late layers, proving a \\\"linear\\\" pattern. We are adding a more detailed discussion to the paper (lines 300-305).\"}", "{\"title\": \"Response by Authors. Part 1\", \"comment\": \"Thank you for your thorough and constructive feedback. We are pleased that you found the novelty and motivation of our approach compelling, as well as its non-parametric nature and the improvements demonstrated on downstream tasks. We will improve the presentation according to your suggestions. Below we address specific concerns one by one.\\n\\n> The main results only use $H_0$ features, which can be derived from an MST. The method for $H_0$ boils down to generating the MST and taking basic statistics over the edges to the neighbors for each node. There is no description about why these statistics are equivalent to $H_0$ except [212]: \\\"Each interval in a barcode corresponds to an edge in MST\\\". Perhaps a more thorough description in the appendix could be provided?\\n\\nThank you for pointing this out. While the equivalence of $H_0$ features to MST-derived statistics is briefly mentioned, we agree that a more detailed explanation would enhance clarity. We are expanding Section 2 in the revised version of the paper to provide a comprehensive description; see lines 165-181. 
A bar in the $H_0$ barcode corresponds to an edge in the MST, because both are constructed by incrementally connecting components in a graph based on ascending edge weights. In $H_0$ persistent homology, an interval represents the lifespan of a connected component, ending when it merges with another, which directly maps to the addition of an edge in the MST that connects two disjoint components. This equivalence arises because both processes prioritize edges by weight to form a single connected structure. See Section 3.5.3 of Dey & Wang (2022) for more details. \\n\\nThe basic statistics computed over the edges in the MST, such as the minimum, maximum, sum, and mean of edge weights, align with those derived from the $H_0$ barcode because the intervals in the barcode encode the same edge weights. The length of each interval in the $H_0$ barcode corresponds to the weight of an edge in the MST. Therefore, summarizing these weights through statistics directly captures the key features of the $H_0$ barcode, making the two representations equivalent in terms of the structural information they encode. \\n Dey, T. K., & Wang, Y. (2022). Computational topology for data analysis. Cambridge University Press.\\n\\n> When some edge weights are the same, MST can give different resulting graphs, since the order of edges is ambiguous. And since small changes in attention weights could cause radically different MST doesn't this make the resulting features very noisy? In your experience, how widely spread are transformer attention weights? And how is your method robust to this?\\n\\nThank you, it is a very interesting and insightful question. Persistence barcodes, including the $H_0$ barcode, are robust to small perturbations of the filtration functions (Skraba, 2020). However, individual features that correspond to nodes, like the node degree in our algorithm, might change abruptly with small changes of the attention maps, which are the filtration functions in our case. 
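This contrast — barcode-level summaries that move by at most the size of the perturbation, versus per-node features that can flip discretely — can be reproduced in a tiny numerical sketch (a hypothetical three-node example; `mst_edges` and `degrees` are illustrative helpers, not code from the paper or from `bio_tda`):

```python
import numpy as np

def mst_edges(w):
    """Kruskal's algorithm on a symmetric weight matrix; returns MST edges (i, j, weight)."""
    n = w.shape[0]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for wt, i, j in sorted((w[i, j], i, j) for i in range(n) for j in range(i + 1, n)):
        ri, rj = find(i), find(j)
        if ri != rj:                       # edge joins two components: one H0 bar dies here
            parent[ri] = rj
            tree.append((i, j, wt))
    return tree

def degrees(tree, n):
    d = [0] * n
    for i, j, _ in tree:
        d[i] += 1
        d[j] += 1
    return d

# Two weight matrices differing by a 0.02 perturbation of a single entry.
w1 = np.array([[0.00, 1.00, 1.02],
               [1.00, 0.00, 1.01],
               [1.02, 1.01, 0.00]])
w2 = w1.copy()
w2[1, 2] = w2[2, 1] = 1.03                 # the perturbation flips which edge enters the MST

t1, t2 = mst_edges(w1), mst_edges(w2)
total1 = sum(e[2] for e in t1)             # sum of finite H0 bar lengths: 2.01
total2 = sum(e[2] for e in t2)             # moves only to 2.02, bounded by the perturbation
d1, d2 = degrees(t1, 3), degrees(t2, 3)    # degree sequences: [1, 2, 1] vs [2, 1, 1]
```

Here the barcode-level summary shifts by 0.01 under a 0.02 perturbation, while the per-node degree sequence changes abruptly — exactly the distinction drawn in the response.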
This is an interesting avenue for further research.\\n\\nSkraba, P., & Turner, K. (2020). Wasserstein stability for persistence diagrams. arXiv preprint arXiv:2006.16824.\\n\\n> RES-MST takes some statistics over edges are taken per node in the MST. Here it is also mentioned that: \\\"We add: self-attention + sum abs values in ith row jth col.\\\". There should be an ablation study for the effect these (non-MST) features have. How much performance do the MST features add over these extra features?\\n\\nThank you for the suggestion. We acknowledge the importance of distinguishing the contributions of MST-based features from additional self-attention-derived features. To address this, we have included an ablation study in the appendix of the revised version (see Tables 3\\u20135), which provides a detailed analysis of their individual impacts. The results clearly demonstrate that our method using only MST-based features achieves performance comparable to the full RES-MST setup, significantly outperforming the standalone ESM-2 embeddings.\\n\\n> For results using both H_0 and H_1, one has to look to the appendix for the RES-LT results. While they are not better than MST the main body would be clearer if they were included - there is discussion in section 2 about persistent homology and Betti numbers for H_k, and there is talk of cycles and topological features, but cycles only appear in H_1 and only H_0 is used in all the results (in the main text).\\n\\nThe decision to include the RES-LT performance results in the appendix is based on the presence of an ablation study section in the supplementary materials, which specifically examines how various features influence performance. Since similar ablation studies for other non-MST feature influences are also presented in the appendix, placing the RES-LT results there ensures consistency in the paper's structure and presentation.\"}", "{\"title\": \"Response by Authors. 
Part 1\", \"comment\": \"Thank you for your thoughtful review and detailed feedback. We appreciate your recognition of the theoretical contributions and performance improvements demonstrated in our study. We have improved the presentation according to your suggestions. Below we address specific concerns one by one.\\n\\n> While the paper suggests that the MST method could enhance model performance by extracting topological information from attention maps, it lacks empirical evidence to substantiate this claim. Drawing on prior experience, the potential for performance enhancement with attention map integration appears plausible.\\n\\nAs mentioned in our paper, several approaches have been proposed for analyzing the attention maps of models trained on protein sequences (Bhattacharya et al., 2021; Vig et al., 2020). According to the findings of Vig et al. (2020), the attention maps generated by the models highlight amino acid pairs that are distant in sequence but close in structure (as indicated by correlations with pairwise contacts), highlight binding sites within proteins, and capture local secondary structure, revealing patterns corresponding to structural motifs like alpha-helices and beta-sheets. These results suggest that protein language models can infer structural proximity from sequence data alone, recognize functionally important sites essential for protein activity, and detect common structural motifs inherent in protein sequences. This demonstrates the capability of attention maps to uncover intricate structural features solely from sequence information. Based on this analysis, we conducted topological data analysis of the attention maps.\\n\\nOur method provides a unique and interpretable perspective by leveraging topological data analysis of attention maps, specifically through minimum spanning trees (MSTs), to enrich traditional embeddings. The topological features extracted from attention maps contain independent information not present in ESM-2 embeddings. 
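The way the two feature sets are combined downstream can be sketched as follows — a toy illustration in which synthetic arrays stand in for real ESM-2 embeddings and MST statistics, and a plain logistic regression stands in for the Py-Boost classifier; none of the names below are from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 200                                # residues in a toy dataset
esm_emb = rng.normal(size=(n_res, 32))     # stand-in for per-residue ESM-2 embeddings
mst_feats = rng.normal(size=(n_res, 8))    # stand-in for per-residue MST statistics
X = np.hstack([esm_emb, mst_feats])        # one combined feature vector per residue

# Synthetic binary labels (e.g., binding vs. non-binding site), made linearly
# separable so the toy classifier has something to learn.
true_w = rng.normal(size=X.shape[1])
y = (X @ true_w > 0).astype(float)

# Logistic regression by gradient descent on the combined features.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n_res       # gradient step on the log-loss

train_acc = ((X @ w > 0).astype(float) == y).mean()
```

The point here is only the concatenation step: in the actual experiments the classifier is Py-Boost and the features come from real attention maps.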
Notably, our approach outperforms ESM-2 embeddings in several binding prediction tasks (Table 2), including protein-metal ion interactions, peptide binding, and protein-protein interactions, demonstrating its practical utility. This success arises because, while ESM-2 embeddings capture rich latent features, they do not explicitly encode the structured, graph-like information inherent in attention maps. By distilling this information, our method captures localized structural relationships and highlights residues that are critical for biological functions, making it a valuable addition to the protein analysis toolkit. Across all experiments (Tables 1-2), combining ESM-2 embeddings with the proposed topological features (RES-MST) consistently outperforms using ESM-2 embeddings alone, with performance improvements across a wide range of tasks (10 types of binding and 2 types of conservation prediction), including a notable +4.9% standalone increase for MG binding prediction.\\n\\nBhattacharya, N., et al. Interpreting Potts and transformer protein models through the lens of simplified attention. Pacific Symposium on Biocomputing 2022, 2021.\\n\\nVig, J., et al. BERTology meets biology: Interpreting attention in protein language models. Ninth International Conference on Learning Representations (ICLR), 2021.\\n\\n> The benchmarks assessed are less widely used (especially for the conservation prediction task), which challenges the demonstration of the new method's practical applicability.\\n\\nWe acknowledge the importance of incorporating additional, widely used datasets to better demonstrate the generalizability of our approach. To address this, we have included experiments on secondary structure prediction tasks using the NEW364 and CASP12 datasets (Q3 and Q8 prediction tasks) for the RES-MST and RES-LT methods, as presented in the ablation studies (see Table 8). 
\\n\\nAdditionally, if you have a specific dataset in mind, we would greatly value your suggestion for future evaluations.\\n\\n> The baseline comparisons for the binding experiment are limited in diversity and omit the latest methods (e.g., [1,2]), thereby reducing the persuasiveness of the findings.\\n\\nAs noted in our paper, we excluded the methods Yuan et al. (2024) and Li & Liu (2023) from our comparisons because they leverage 3D structural data during their training process. To ensure a fair comparison, our method relies exclusively on sequence-based protein language models and does not utilize 3D structural data, which inherently provides more detailed information. The aim of our approach is to demonstrate that while ESM-2 embeddings capture rich latent features, they do not explicitly encode the structured, graph-like information present in attention maps. By extracting and distilling this information, our method captures localized structural relationships and highlights residues critical for biological functions, establishing itself as a valuable tool for protein sequence analysis and prediction tasks.\"}", "{\"summary\": \"This work introduces a method to extract topological features from protein language model attention maps for improved per-amino-acid classification tasks. The authors present RES-MST, which uses minimum spanning trees derived from attention matrices to capture structurally significant protein information. 
By combining these topological features with standard embeddings from the PLMs, the method outperforms existing sequence-based approaches on binding site identification and conservation prediction tasks.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Novel application of topological data analysis to protein language models: This work bridges two important areas (TDA and protein LMs) in an innovative way, potentially opening up new avenues for analyzing and improving protein language models.\", \"weaknesses\": [\"Limited theoretical foundation: The paper lacks a robust theoretical explanation for why this topological approach should outperform alternative methods that leverage attention maps. A stronger motivation for the use of topological data analysis in this context would strengthen the paper's argument.\", \"Insufficient ablation studies: The paper would benefit from more comprehensive ablation studies to elucidate the contribution of different components of the method, such as various types of topological features and the impact of different layers.\", \"Unclear methodology description: The explanation of the method in Section 3.1 lacks clarity. Specifically: a) The exact features extracted from the MST for each amino acid are not clearly defined. b) The features extracted directly from the attention map are ambiguously described. c) The process of combining the MST-derived and attention map-derived features is not explained. 
d) The final prediction process using this non-parametric method is not adequately detailed.\", \"Ambiguous interpretation of results: The interpretation of Figures 6, 8, and 9 in relation to the described patterns (chaotic, star, linear) in Section 3.3 is not sufficiently clear, making it difficult to follow the authors' reasoning.\", \"Choice of evaluation metric for conservation prediction: The authors' decision to treat the conservation prediction task as a classification problem, rather than using regression metrics like Pearson correlation or Spearman's rank correlation, is not well justified.\", \"Limited comparison with relevant baselines: The paper lacks comparison with other approaches that use both protein sequence embeddings and their attention maps. This makes it unclear whether the performance improvement stems from the proposed Topological Data Analysis approach or simply from leveraging attention patterns. Additional baselines utilizing both embeddings and attention maps with different methods such as (Rao et al., 2020) are necessary to substantiate the effectiveness of the proposed method.\"], \"questions\": [\"Can you provide more theoretical justification or intuition for why this topological approach should work better than alternative methods that leverage attention maps? How does it capture information that other approaches might miss?\", \"Could you clarify the feature extraction process in more detail? Specifically: a) What exact features are extracted from the MST for each amino acid? b) What features are extracted directly from the attention map? c) How are these two sets of features combined? d) How is the final prediction made using this non-parametric method?\", \"The paper describes patterns in the MSTs as \\\"chaotic,\\\" \\\"star,\\\" and \\\"linear\\\" across different layers. 
Could you provide a more detailed explanation of how Figures 6, 8, and 9 support these characterizations?\", \"How does your method compare to other approaches that use both protein sequence embeddings and attention maps? Can you provide additional baselines or comparisons to isolate the contribution of the topological data analysis approach versus simply leveraging attention patterns?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. The revised paper does indeed have a clearer structure. The research topic is interesting, but I still believe that further exploration of runtime and more categories of tasks is necessary. Given these factors, I have increased the score to 5, but I still think the paper requires further improvements.\"}", "{\"metareview\": \"**Summary:**\\n\\n Topological Data Analysis (TDA) encompasses topological descriptors (capturing features such as the number of components/clusters and the number of independent loops) that can augment the capabilities of deep neural architectures. This work proposes to enhance the representations in Transformer-based protein language models (PLMs) with topological features extracted from their attention maps via persistent homology (PH). Specifically, attention weights are used as edge weights in a fully connected graph, and a threshold-based filtration is applied to obtain a persistence barcode. While the barcode can be used to extract higher-order information such as that pertaining to cycles, this work focuses only on the zero-order features, for each amino acid (i.e., a token in the language model), efficiently computed with a Minimum Spanning Tree (MST) algorithm. Empirically, the authors show that combining these embeddings with those from an ESM-2 model can improve performance on multiple tasks such as binding site identification and conservation prediction tasks. 
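In rough code terms, the zero-order pipeline summarized above can be sketched as follows — an illustration with assumed shapes and function names, not code from the paper; SciPy's MST routine stands in for the barcode computation:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def per_residue_h0_features(attn):
    """One attention map (L x L) -> per-residue H0/MST features.

    Symmetrize the map, turn attention into distances (high attention =
    short edge), and take the minimum spanning tree: its edge weights are
    exactly the finite H0 bar lengths of the threshold filtration. Each
    residue is then summarized by its incident MST edges."""
    L = attn.shape[0]
    dist = 1.0 - (attn + attn.T) / 2.0
    # NB: scipy treats explicit zeros as absent edges, so distances must stay positive.
    mst = minimum_spanning_tree(dist).toarray()   # nonzero entries = MST edges
    feats = np.zeros((L, 3))
    for i in range(L):
        inc = np.concatenate([mst[i][mst[i] > 0], mst[:, i][mst[:, i] > 0]])
        feats[i] = [len(inc), inc.mean(), inc.max()]  # degree, mean, max incident weight
    return feats
```

In the full method, such features would be computed for every attention head and layer and concatenated with the per-token embeddings before the downstream classifier.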
\\n\\n \\n**Strengths:**\\nReviewers generally acknowledged many strong points of this work, including: (a) clarity of presentation (writing; illustrations; background on topological features), though some also pointed out parts where readability and clarity could be enhanced, (b) interesting and important research problem (clear motivation with a potential to help better understand what PLMs learn; can potentially capture global structural information that PLMs on their own might miss out on), (c) innovative way to encode attention (bridging TDA and PLMs; no need to finetune PLMs), and (d) empirical benefits on binding site identification and conservation prediction tasks. \\n\\n\\n**Weaknesses:** \\nReviewers also raised several concerns, including: (a) lack of analysis of topological stability (i.e., robustness to small perturbations in the attention weights), (b) evaluation limited only to per-amino-acid scale tasks, i.e., binding and conservation tasks (no evidence for structural tasks such as protein function annotation), (c) benchmarks being less common, (d) lack of comparison with recent methods, as well as with prior approaches (such as Rao et al. 2020) that use both the usual token embeddings and the attention maps, (e) issues with the interpretation of some results and the choice of metrics, (f) insufficient ablation (e.g., impact of the number of layers), (g) lack of discussion of alternatives to MST for extracting topological information from the attention weights, and (h) lack of detailed comparison on runtime. Some reviewers also pointed out that the empirical improvements over the original ESM-2 were not persuasive. \\n\\n\\n**Recommendation:** \\n\\nThe authors addressed many of the above concerns and clarified some aspects during their discussion with the reviewers. However, some key issues remained unresolved. 
\\n\\nReviewer XcsX maintained that since 3D structural information about proteins is accessible these days, the authors\\u2019 argument that comparing with methods that use such structural features is unfair is not compelling. I strongly agree with the reviewer\\u2019s assessment that the work loses considerable significance absent comparisons with other methods that are capable of performing the same tasks. \\n\\nReviewer uv5N emphasized that more comprehensive evaluations, including on runtime and with additional tasks, were necessary. I endorse this feedback. \\n\\nFinally, reviewer pPMN also underscored the need for stronger evidence, such as showing that attention alone can demonstrate, in their setting, that long-term dependencies in the sequence are localised in the 3D space (e.g., in the context of protein design, such an observation was made in a PLM by Ingraham et al., Generative models for graph-based protein design, NeurIPS (2019)). This, they argued, would convincingly demonstrate the ability of the current approach to capture the underlying generative grammar of the protein sequences. They also raised another important point: residue-wise classification should not be restricted to methods like theirs that extract topological features from attention. I fully support these concerns. \\n\\nWithout the resolution of these major shortcomings, this work does not meet the bar for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Please see the Metareview above for all the relevant details.\"}", "{\"title\": \"Response to Follow-up Questions\", \"comment\": \"> What do you mean by \\\"summing attention maps over rows.\\\"? -In which axis (row or column), is the Softmax computed for the attention maps?\\n\\nYes, indeed, clarification is needed here. If we were to simply sum the attention scores over the rows for each attention map, the result would be exactly 1 due to the normalization properties of Softmax. 
However, following the approach outlined by Rao et al. (2020), we applied Average Product Correction (APC). This adjustment was performed independently on the symmetrized attention maps for each head in the Transformer. As a result, summing over the rows of these corrected attention maps for each head does not yield 1.\\n\\n> How do you deal with the multiple attention maps in each layer and those across different layers? For example, Rao et al. trained a logistic regression on top of attention maps. Have you trained a Py-boost classifiers to aggregate information within multiple attention maps directly?\\n\\nTo address handling multiple attention maps, we extracted per-residue features from each attention map, resulting in $L \\\\times H$ features for each residue. Similar to Rao et al. (2020), we trained a logistic regression model with $L_1$ regularization, adding further hyperparameter tuning for the regularization term to optimize performance.\\n\\nWe also experimented with Py-Boost classifiers as an alternative to logistic regression for aggregating information from multiple attention maps. However, the Py-Boost approach did not yield significant improvements in performance over logistic regression. Consequently, we chose to retain logistic regression as our primary method, maintaining consistency with Rao et al.'s methodology. That said, we plan to include the Py-Boost results as part of our ablation studies in the final paper to provide a comprehensive analysis.\"}", "{\"title\": \"Response by Authors. Part 2\", \"comment\": \"> The choice to focus on topological features derived from MSTs lacks sufficient motivation regarding why these features, specifically from MSTs rather than other graph representations.\\n\\nIn our paper, we apply topological data analysis to graphs derived from attention maps. In particular, we evaluate a persistent homology of these graphs. 
The $H_0$ barcode is the simplest and fastest-to-compute type of persistent homology descriptor. In a nutshell, it depicts the multi-scale clustering patterns of a graph as edges with weight greater than a varying threshold are removed. Essentially, $H_0$ barcodes are equivalent to minimum spanning trees (we provide an explanation of this; see lines 166-181 in the revised version of the paper). While many other graph representations exist, they are outside the focus of our research. \\nSome MST properties, such as the maximum node degree, are significantly correlated with residue conservation values (Figure 7). This is supported by the consistent performance improvements observed across downstream tasks when integrating MST features with embeddings (Tables 1-2).\\n\\n> The authors do not specify the model used for downstream tasks, nor do they clarify the form and structure of the input to this model. While they detail the process of extracting topological features from attention maps and MSTs, they omit critical information on how these features are subsequently utilized in downstream tasks. Without specifying the model type or its architecture, it\\u2019s challenging to assess how effectively the extracted features are integrated or if they are even suited to the task's requirements.\\n\\n> In the paper, the authors discuss the extraction of topological features from attention maps, but do not specify the model used for downstream tasks. Could you provide more detail about the model type and architecture?\\n\\nThank you for pointing this out; we have provided these details in Section 5. The RES-MST features as well as ESM-2 embeddings are used as input to a PyBoost classifier, a non-parametric model known for its robustness and scalability.\\n\\n> There is no explanation for the choice of the name 'RES-MST.'\\n\\n> There is no explanation for the choice of the name 'RES-MST.' 
What does 'RES' stand for in 'RES-MST'?\\n\\nThank you for pointing this out; the name reflects that, for each MST node, we calculate per-RESidue MST statistics. We added this clarification to the revised version of the paper (see lines 212-213 of the revised manuscript).\\n\\n> The citation format used in the paper does not adhere to standard conventions. For example, in line 172, the citation 'ESM-2 Lin et al. (2022)' should be formatted as 'ESM-2 (Lin et al., 2022)'. I recommend reviewing and revising the citation style throughout the manuscript.\\n\\nWe have adjusted the citation format to standard conventions in the revised manuscript.\\n\\n> There is no interpretation provided for Figures 6, 7, 8, and 9.\\n\\nThese figures illustrate evolving topological patterns across layers, from chaotic to star-like and linear configurations. The patterns align with changes in the functional and structural focus of the transformer layers. For example, the star-like topology highlights the centralization of structural features in intermediate layers, as described in Section 3.3. Figure 6 presents the mean maximum node degree in the MST and confirms a \\\"star\\\" pattern in middle layers (a high maximum degree) and a \\\"linear\\\" pattern in early and late layers (a very low maximum degree). Figure 8 presents the mean sequence distance between tokens corresponding to the incident nodes of MST edges; this value is low in early and late layers, confirming a \\\"linear\\\" pattern. See also the visualization of a protein in Figure 5. We are including these explanations in the main text for clarity.\\n\\n> - Line 136: there is an incorrect use of the open quotation mark.\\n> - Line 147: \\\"the vertices set\\\" is not grammatically correct. \\n> - Line 150: \\\"The natural issue is a necessity to pick some \\u03b1.\\\" is not grammatically correct. 
A grammatically correct version would be: \\\"A natural issue is the necessity of choosing a value for \\u03b1.\\\" \\n> - In the tables, some numbers are in different fonts.\\n\\nThank you for pointing this out. We have adjusted the open quotation mark, unified the fonts of the numbers in the tables, and changed the phrasing to \\u201cis the vertex set of the graph\\u201d and \\\"A natural issue is the necessity of choosing a value for \\u03b1.\\\"\"}", "{\"comment\": \"Thank you for your response. However, I remain unconvinced regarding the novelty of the method and its performance. For example, regarding the statement about the comparison with baseline methods: \\\"*It's an unfair comparison because they leverage 3D structural data during their training process*\\\", I find this argument unconvincing. Considering that accessing protein structural information is no longer a significant challenge (e.g., with structure prediction models), I see no compelling reason to differentiate between \\\"methods with structure input\\\" and \\\"methods without structure input\\\" for the same task. Similarly, the claim that \\\"*deep learning-based methods specifically designed to extract topological features from attention maps for per-token downstream classification tasks*\\\" appears puzzling. Why must baselines for \\\"per-token downstream classification tasks\\\" specifically involve \\\"deep learning-based methods designed to extract topological features from attention maps\\\"? 
In my view, any method capable of performing the same task should be a valid baseline.\\n\\nFor these reasons, I am inclined to maintain my original score.\"}", "{\"summary\": \"The authors apply techniques from topological analysis on graphs to the attention maps of protein language models (particularly ESM-2).\\n\\nThey generate features that can be appended to the ESM-2 embeddings and then used to help in tasks that make classifications / predictions at the amino acid level.\\n\\nThe authors describe how to generate barcodes of different persistent homology features by varying a threshold for edge weights, filtering the edges in the graph and recording when various topological features come and go as the threshold is raised. There are simple edge features (H_0), cyclical features (H_1), and presumably higher-level features that can be extracted from the barcode.\\n\\nNext, the authors state that the topological features for H_0 are equivalent to features derived from the Minimum Spanning Tree (MST). \\n\\nThe experiments show that concatenating these features to the pLM embeddings can improve performance on downstream tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The discussion about barcodes and topological features is nice.\\nThe motivation seems clear: an output feature that says 'dense clique around here' or 'cycles present' will be useful for some tasks.\\n\\nThe method is non-parametric, which is good; the MST does not need a threshold, and there is no need to fine-tune ESM-2.\\n\\nNew features are generated from the attention maps of the transformer network. 
\\nSince the transformer was trained on language model tasks, the embedding features at the output do not necessarily encode the graph structure contained in the attention maps, only the information necessary for the output token, so it makes sense to try to include more of the information held in the network of attention weights remaining in the transformer.\\n\\nThe new features do improve the results on downstream tasks.\", \"weaknesses\": \"The main results only use H_0 features, which can be derived from an MST.\\nThe method for H0 boils down to generating the MST and taking basic statistics over the edges to the neighbors for each node. There is no description of why these statistics are equivalent to H_0 except [212]: \\\"Each interval in a barcode corresponds to an edge in MST\\\". Perhaps a more thorough description in the appendix could be provided?\\n\\nWhen some edge weights are the same, MST can give different resulting graphs, since the order of edges is ambiguous. \\nAnd since small changes in attention weights could cause a radically different MST, doesn't this make the resulting features very noisy? In your experience, how widely spread are transformer attention weights? And how is your method robust to this?\\n\\nFor results using both H_0 and H_1, one has to look to the appendix for the RES-LT results. While they are not better than MST, the main body would be clearer if they were included - there is discussion in section 2 about persistent homology and Betti numbers for H_k, and there is talk of cycles and topological features, but cycles only appear in H_1 and only H_0 is used in all the results (in the main text).\\n\\nRES-MST takes some statistics over the edges incident to each node in the MST. 
[194] In the description of the features derived from the MST (for each node: min, max, sum, mean weights and count of incident edges), it is also mentioned that: \\\"We add: self-attention + sum abs values in ith row jth col.\\\"\\nThere should be an ablation study for the effect these (non-MST) features have.\\nHow much performance do the MST features add over these extra features?\", \"typos\": \"*** 159: is\\n*** a: 201 - LxH should be resulting in L according to 187\\n*** 450 -(2020) - paper title missing.\", \"questions\": \"What is the actual size of the resulting feature vector (to be added to the ESM-2 embedding) - 8? or 8 x L (when all heads in a layer are averaged)?\\n\\nPerhaps this model has advantages over ESM-2 embeddings because it uses features from other layers in the pLM.\\nThe pretraining task for the pLM is for token reconstruction, which might throw away information about connectivity in the last layer.\\nWhat about simply taking ESM-2 features from the other layers (e.g., middle + last layers) and concatenating them?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Authors. Part 2\", \"comment\": \"> *** 159: is *** a: 201 - LxH should be resulting in L according to 187 *** 450 -(2020) - paper title missing.\\n\\nThank you for pointing this out. We have corrected these issues in the revised version of our paper. Specifically, we clarified the notation for $L \\\\times H$ by providing a more detailed explanation (lines 245-251). Furthermore, the missing paper title in line 450 has been added in the revised version.\\n\\n> What is the actual size of the resulting feature vector (to be added to the ESM-2 embedding) - 8? or 8 x L (when all heads in a layer are averaged)?\\n\\nThe resulting feature vector has a size of $7 \\\\times L$ when all heads in each layer are averaged. 
If all heads across all layers are used, the size becomes $7 \\\\times L \\\\times H$. We have provided a more detailed explanation in the revised version (lines 245\\u2013251) to clarify this further. Additionally, we updated the visualization of the method pipeline to better illustrate this process.\\n\\n> Perhaps this model has advantages over ESM-2 embeddings because it uses features from other layers in the pLM. The pretraining task for the pLM is for token reconstruction, which might throw away information about connectivity in the last layer. What about simply taking ESM-2 features from the other layers (e.g., middle + last layers) and concatenating them?\\n\\nWhile concatenating embeddings from multiple layers could capture additional information, it would result in an extremely large feature vector that poses significant computational and memory challenges. For example, with ESM-2 (650M), the feature set size would be $1280 \\\\times 33 = 42,240$, making it nearly infeasible for standard machine learning classifiers. In contrast, our approach produces a much more compact and efficient feature vector: $7 \\\\times 20 \\\\times 33 = 4,620$ when considering all attention heads across layers, or $7 \\\\times 33 = 231$ when averaging heads within layers. This significant reduction allows efficient downstream processing while retaining critical information.\"}", "{\"title\": \"Response to Follow-up Questions\", \"comment\": \"> To support the reliability of attention maps in this context, stronger evidence would be beneficial. 
For instance, illustrations of attention scores showing residues that are close in 3D space but distant in sequence receiving high attention could strengthen the argument.\\n\\n> If the authors can demonstrate how features derived solely from attention maps can achieve such results\\u2014perhaps by showing how attention maps capture information about the \\\"grammar\\\" of protein sequences, as explored for genome data in this work [1]\\u2014it could become a key strength of the paper. \\n\\n> The authors claim that \\\"the patterns align with changes in the functional and structural focus of the transformer layers\\\", but is there a chemically grounded explanation for why these patterns align with the actual functional and structural focus of proteins, if they do? Alternatively, if these patterns do not directly correspond to real protein functions and structures, what might be the underlying reason that learning them leads to improved model performance?\\n\\nWe recognize the importance of these considerations. Relevant analyses will be incorporated into the revised version of the paper.\\n\\n> Additionally, while Tables 2 and 3 showcase the model\\u2019s performance on per-residue conservation and binding prediction, these tasks focus more on residue functions than on directly encoding \\\"structurally significant information about proteins.\\\" As such, they cannot fully demonstrate the ability of attention maps to capture structural properties.\\n\\nTo address this, we conducted additional experiments focused on secondary structure prediction, and the findings are included in the revised version of the paper; see Appendix B.2.3.\\n\\n> If choosing the maximum bidirectional attention is not established as a commonly accepted or proven method that outperforms alternatives, a thorough ablation study with alternative methods should have been conducted within the scope of this work to provide a stronger foundation for the proposed approach. 
I would appreciate hearing more from the authors on this point to better understand their rationale.\\n\\nWe thank you for the insightful comments. We performed additional experiments exploring alternative methods. Specifically, instead of maximum bidirectional attention symmetrization, we considered an alternative setting in which the elements of an attention matrix are used as the weights of a bipartite graph; we then calculated the topological features of its minimum spanning tree (MST), a method we refer to as \\\"Bipartite RES-MST\\\" in our analysis. While these experiments are still ongoing, the initial results are promising and suggest that this approach may offer meaningful insights.\\n\\nIn addition, we implemented an alternative self-attention map aggregation method inspired by the approach used for contact map prediction in Rao et al. (2020). Following their methodology, we applied Average Product Correction (APC) independently on the symmetrized attention maps for each head in the Transformer. Given our focus on per-residue predictions, we extended this approach by summing the attention maps over rows, building on the procedure described in Rao et al. (2020). 
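A minimal NumPy sketch may make this aggregation concrete. It is an illustrative reading of the procedure, not the exact implementation: symmetrization by elementwise maximum and reduction by summed absolute row values are assumptions here (signed row sums of an APC-corrected map cancel to zero identically, so some such choice is needed):

```python
import numpy as np

def apc(m):
    """Average Product Correction (cf. Rao et al., 2020): subtract the
    expected value row_sum_i * col_sum_j / total from each entry."""
    row = m.sum(axis=1, keepdims=True)
    col = m.sum(axis=0, keepdims=True)
    return m - (row @ col) / m.sum()

def aggregate_heads(attn_heads):
    """Per-residue features from a stack of H attention maps, shape (H, L, L):
    symmetrize each head, apply APC, then reduce every corrected map over its
    rows. Absolute values are used because signed row sums vanish after APC."""
    feats = []
    for a in attn_heads:
        corrected = apc(np.maximum(a, a.T))      # symmetrized + APC
        feats.append(np.abs(corrected).sum(axis=1))
    return np.stack(feats, axis=1)               # shape (L, H)

rng = np.random.default_rng(0)
H, L = 4, 10
raw = rng.uniform(size=(H, L, L))
attn = raw / raw.sum(axis=-1, keepdims=True)     # softmax-like row normalization
features = aggregate_heads(attn)
```

With $H$ heads and $L$ residues this yields the $L \times H$ per-residue feature matrix discussed in this thread.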
This method, referred to as \\\"Attention Map Aggregation\\\", performed significantly worse than RES-MST; see Table 2 in the revised manuscript.\\n\\nTable 1: Per-residue binding prediction experimental results.\\n|Model | Parameters | DNA | RNA | HEM | ATP | CA | MN | MG | ZN | PEP | PRO |\\n|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n|Attention map aggregation | 650M | 57.1 | 63.1 | 56.2 | 63.7 | 62.6 | 67.4 | 63.8 | 67.3 | 63.0 | 61.3 |\\n|Attention map aggregation | 3B | 56.0 | 62.2 | 53.7 | 64.9 | 61.9 | 66.3 | 62.9 | 66.5 | 62.1 | 60.3 |\\n|ESM-2 | 650M | 86.5 | 85.3 | 91.6 | 89.8 | 82.9 | 93.4 | 76.8 | 96.7 | 74.6 | 69.9 |\\n|ESM-2 | 3B | 87.9 | 85.7 | 91.7 | 90.5 | 83.4 | 91.7 | 78.5 | 96.5 | 75.1 | 70.3 |\\n|RES-MST (all) | 650M | 86.0 | 83.7 | 91.2 | 91.6 | *86.4* | **94.7** | *82.4* | 96.9 | 76.2 | 73.2 |\\n|Bipartite RES-MST (all) | 650M | 87.0 | 84.2 | - | - | - | - | - | - | - | - |\\n|RES-MST (avg) | 650M | 77.0 | 76.0 | 86.7 | 87.6 | 81.2 | 92.4 | 79.9 | 94.9 | 70.8 | 68.5 |\\n|RES-MST (avg) | 3B | 77.4 | 75.3 | 86.2 | 87.4 | 82.0 | 92.9 | 79.8 | 95.5 | 71.5 | 69.0 |\\n|RES-MST (all) + ESM-2 | 650M | 88.3 | 85.8 | *92.4* | **92.4** | **86.9** | *94.4* | **83.4** | **97.2** | *77.8* | *74.4* |\\n|Bipartite RES-MST (all) + ESM-2 | 650M | **89.2** | **86.2** | - | - | - | - | - | - | - | - |\\n|RES-MST (avg) + ESM-2 | 650M | 88.3 | 85.9 | 92.1 | 91.4 | 85.5 | 93.6 | 82.2 | *97.2* | 76.8 | 73.9 |\\n|RES-MST (avg) + ESM-2 | 3B | *89.1* | *86.1* | **92.4** | *91.8* | 85.0 | 93.4 | 81.9 | 97.0 | **78.5** | **74.4** |\\n\\n[1] Transformer protein language models are unsupervised structure learners (Rao et al., bioRxiv 2020 / ICLR 2021)\"}", "{\"summary\": \"This paper presents an interesting approach to extracting topological features from protein language models. More specifically, they compute the minimum spanning tree (MST) from the attention weights of ESM2. 
To evaluate their method, they train a PyBoost classifier that takes the processed MST features as input and predicts conservation and binding residues. They also ensemble their model with ESM to achieve stronger performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors provide an interesting take on the attention weights. By thinking about them as a fully connected graph, the authors present an interesting analysis using minimum spanning trees.\", \"weaknesses\": \"Evaluation is limited to binding and conservation.\\n\\nThe proposed models, RES-MST (ESM2-650M all) and RES-MST (ESM2-650M avg), perform comparably with ESM2 across the benchmarks. Specifically, ESM achieves stronger performance in 5 of the 12 benchmarks.\", \"questions\": \"This paper reports an interesting idea on how to convert attention matrices into topological features. The authors provide analysis and visualizations of the minimum spanning tree on different proteins. They look quite interesting. However, it remains unclear to me what the utility of such an approach is.\\n\\nSince the topological features are extracted solely from ESM2, ESM2 already contains topological features, albeit in a rich latent representation. The similar performance of the proposed approach and ESM2 seems to suggest that one can implicitly decode these topological features from ESM2. Thus, what is the significance of this approach? Is there anything besides being \\u201cthe first time that topological data analysis has been applied to classification on a per-token basis\\u201d? What are some cases in which the proposed topological features capture information that is not easily accessible from ESM2 embeddings alone? In other words, what are some potential advantages of the topological approach over the ESM embedding?\\n\\nTo be clear, the tasks of residue conservation and binding are motivated in the introduction. 
However, the motivation for topological data analysis is not clear, as ESM seems to perform fine.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose RES-MST, a method that leverages attention maps from protein language models to generate minimum spanning trees (MSTs) and extract various features for per-residue conservation and binding predictions. By evaluating their approach on datasets such as ConSurf10k for conservation and diverse binding site prediction benchmarks, they demonstrate that RES-MST outperforms baseline models, achieving superior accuracy and AUC scores.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper introduces a non-parametric framework aimed at transforming attention matrices from transformer models into topological features that are customized for token-wise classification. The results presented in the paper demonstrate impressive performance in per-residue conservation and binding predictions. The competitive accuracy and AUC values highlight the effectiveness of the proposed method, particularly in leveraging attention maps from pLMs for generating MSTs.\", \"weaknesses\": [\"Methodology:\", \"Suitability of attention maps for topology: Attention maps represent learned relationships between sequence tokens (amino acids) based on the model's training objective, which is primarily language-based. These relationships are not necessarily grounded in spatial or physical proximity, which are crucial for understanding protein structure and function. Attention matrices are often dense and noisy, with attention spread across many tokens, which might make topological methods like persistent homology less informative or even misleading when applied naively. 
The paper would need to convincingly demonstrate that the topology derived from attention maps has a meaningful connection to physical or functional protein properties.\", \"While the authors analyze MST structures across layers, they don\\u2019t provide a clear theoretical or empirical justification for why these specific patterns (chaotic, star, linear) are meaningful in terms of protein functionality or how these differences are expected to relate to biological significance.\", \"The transformation of attention scores into a quasi-distances matrix is a key step, but the reasoning behind this particular transformation is under-explained. Why the maximum of the bidirectional attention scores is chosen, or how this approach compares with others, isn\\u2019t detailed.\", \"The choice to focus on topological features derived from MSTs lacks sufficient motivation regarding why these features, specifically from MSTs rather than other graph representations.\", \"The authors do not specify the model used for downstream tasks, nor do they clarify the form and structure of the input to this model. While they detail the process of extracting topological features from attention maps and MSTs, they omit critical information on how these features are subsequently utilized in downstream tasks. Without specifying the model type or its architecture, it\\u2019s challenging to assess how effectively the extracted features are integrated or if they are even suited to the task's requirements.\"], \"writing\": [\"There is no explanation for the choice of the name 'RES-MST.'\", \"The citation format used in the paper does not adhere to standard conventions. For example, in line 172, the citation 'ESM-2 Lin et al. (2022)' should be formatted as 'ESM-2 (Lin et al., 2022)'. 
I recommend reviewing and revising the citation style throughout the manuscript.\", \"There is no interpretation provided for Figures 6, 7, 8, and 9.\", \"Line 136: there is an incorrect use of the open quotation mark.\", \"Line 147: \\\"the vertices set\\\" is not grammatically correct.\", \"Line 150: \\\"The natural issue is a necessity to pick some \\u03b1.\\\" is not grammatically correct. A grammatically correct version would be: \\\"A natural issue is the necessity of choosing a value for \\u03b1.\\\"\", \"In the tables, some numbers are in different fonts.\"], \"questions\": [\"There is no explanation for the choice of the name 'RES-MST.' What does 'RES' stand for in 'RES-MST'?\", \"Could the authors elaborate on the theoretical or empirical rationale behind analyzing MST structures in terms of chaotic, star, and linear patterns? How do these specific patterns relate to protein functionality and biological significance?\", \"In the paper, the authors discuss the extraction of topological features from attention maps, but do not specify the model used for downstream tasks. Could you provide more detail about the model type and architecture?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Authors. Part 1\", \"comment\": \"Thank you for your detailed review and constructive feedback. We have improved the presentation according to your suggestions. Below, we address the key points to clarify and strengthen our contribution one by one.\\n\\n> Limited theoretical foundation: The paper lacks a robust theoretical explanation for why this topological approach should outperform alternative methods that leverage attention maps. 
A stronger motivation for the use of topological data analysis in this context would strengthen the paper's argument.\\n\\n> Can you provide more theoretical justification or intuition for why this topological approach should work better than alternative methods that leverage attention maps? How does it capture information that other approaches might miss?\\n\\nThe rationale for our topological approach is not that it should consistently outperform alternative methods that leverage attention maps, but rather that it complements embeddings to provide additional insights. This complementarity stems from the fact that while ESM-2 embeddings capture rich latent features, they do not explicitly encode the structured, graph-like information present in attention maps. The motivation for our approach lies in the unique ability of topological data analysis (TDA) to capture graph-like, structural relationships within attention maps that are not explicitly represented in embeddings.\\nMST-based features, derived from TDA, reflect the inherent topological structure of attention maps. These features have demonstrated strong correlations with biologically significant residues, such as conserved amino acids (Section 3.2), underscoring their biological relevance. By capturing these topological structures, our approach provides an orthogonal perspective to embeddings, offering a richer and more comprehensive representation that enhances downstream predictions.\\n\\n> Insufficient ablation studies: The paper would benefit from more comprehensive ablation studies to elucidate the contribution of different components of the method, such as various types of topological features\\n\\nWe acknowledge the importance of comprehensive ablation studies. Recognizing the need to isolate the contributions of MST-based features from those of the non-MST features derived from the attention map itself, we have included a detailed ablation study in Appendix B.1. 
The results reveal that our method using only MST-based features achieves performance comparable to the full RES-MST setup, while also significantly outperforming standalone ESM-2 embeddings. \\n\\nAdditionally, we have conducted comparative analyses of MST-based features (RES-MST) with alternative topological representations, such as local topology features (RES-LT), as presented in Appendix B.2. These analyses demonstrate the superior performance of RES-MST across tasks like conservation, binding site prediction and secondary structure prediction. \\n\\n> Ambiguous interpretation of results: The interpretation of Figures 6, 8, and 9 in relation to the described patterns (chaotic, star, linear) in Section 3.3 is not sufficiently clear, making it difficult to follow the authors' reasoning.\\n\\n> The paper describes patterns in the MSTs as \\\"chaotic,\\\" \\\"star,\\\" and \\\"linear\\\" across different layers. Could you provide a more detailed explanation of how Figures 6, 8, and 9 support these characterizations?\\n\\nThese figures illustrate evolving topological patterns across layers, from chaotic to star-like and linear configurations. The patterns align with changes in the functional and structural focus of the transformer layers. For example, the star-like topology highlights the centralization of structural features in intermediate layers, as described in Section 3.3. Figure 6 presents the mean maximum node degree in the MST and confirms a \\\"star\\\" pattern in middle layers (a high maximum degree) and a \\\"linear\\\" pattern in early and late layers (a very low maximum degree). Figure 8 presents the mean sequence distance between tokens corresponding to the incident nodes of MST edges; this value is low in early and late layers, confirming a \\\"linear\\\" pattern. See also the visualization of a protein in Figure 5. 
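To make these two diagnostics concrete, the sketch below reproduces them on toy attention maps. It is illustrative only: it assumes Kruskal's algorithm on quasi-distances $1 - \max(A, A^T)$ (each accepted edge is one $H_0$ merge event), and the toy "linear" and "star" maps are constructed by hand rather than taken from the model.

```python
import numpy as np

def mst_edges(attn):
    """Kruskal's MST over residues; edge weights are 1 minus the
    max-symmetrized attention, so strong attention means a short edge."""
    n = attn.shape[0]
    dist = 1.0 - np.maximum(attn, attn.T)
    order = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, i, j in order:
        ri, rj = find(i), find(j)
        if ri != rj:               # merging two clusters = one H_0 bar dying
            parent[ri] = rj
            tree.append((i, j))
    return tree                    # always n - 1 edges for n residues

def layer_diagnostics(attn):
    """Max node degree (high in a 'star' layer) and mean |i - j| over MST
    edges (low in a 'linear' layer whose tree follows the sequence)."""
    tree = mst_edges(attn)
    deg = np.zeros(attn.shape[0], dtype=int)
    for i, j in tree:
        deg[i] += 1
        deg[j] += 1
    return int(deg.max()), float(np.mean([abs(i - j) for i, j in tree]))

L = 8
# 'Linear' toy layer: attention decays with sequence distance.
lin = np.fromfunction(lambda i, j: 1.0 / (1.0 + np.abs(i - j)), (L, L))
# 'Star' toy layer: every residue attends strongly to residue 0.
star = np.full((L, L), 0.1)
star[:, 0] = 0.9
max_deg_lin, dist_lin = layer_diagnostics(lin)     # path: degree 2, distance 1
max_deg_star, dist_star = layer_diagnostics(star)  # hub: degree L - 1
```

The same per-node degree bookkeeping is one of the per-RESidue statistics used by RES-MST.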
We are including these explanations in the main text for clarity.\\n\\n> Choice of evaluation metric for conservation prediction: The authors' decision to treat the conservation prediction task as a classification problem, rather than using regression metrics like Pearson correlation or Spearman's rank correlation, is not well justified.\\n\\nTreating conservation prediction as a classification task aligns with prior work, such as Marquet et al. (2022). However, we acknowledge the potential value of regression metrics (e.g., Pearson or Spearman correlations) and will consider these for future comparisons.\\n\\nMarquet C. et al. Embeddings from protein language models predict conservation and variant effects. Human Genetics, 2022.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your thoughtful and constructive feedback. We are pleased that you found the novelty and interpretability of our approach compelling and appreciated the effectiveness of our visualisations. We will improve the presentation according to your suggestions. Below we address specific concerns one by one.\\n\\n> The major flaw of this paper is about its experimental results. The table 1 shows minimal improvement over original esm-2. Could authors give a brief explanation? \\n\\nIn terms of experimental results, the topological features extracted from attention maps significantly enhance the accuracy of binding site predictions for various molecular interactions, including protein-metal ions (e.g., CA, MN, MG), protein-protein interactions, and peptides, often surpassing standalone ESM-2. Consequently, our method, RES-MST, achieves notable performance improvements when combined with ESM-2 embeddings, as demonstrated in Tables 1-2. This improvement is driven by our method's ability to distill structured, graph-like information from attention maps, which is not explicitly captured by ESM-2 embeddings. 
By leveraging these localised structural relationships, our approach effectively identifies critical residues essential for biological functions, adding a valuable dimension to protein sequence analysis and prediction tasks. In some cases, the improvement is large: +4.9% for MG binding prediction and +3.5% for CA binding prediction.\\n\\n> Also, the error reported in these tables is astonishingly low. How are these numbers produced? I think such low variance can only be obtained by training linear modules.\\n\\nStandard deviations of metrics are estimated from several runs of a PyBoost classifier with distinct seeds. The PyBoost classifier is designed for robustness and efficiency, resulting in low variance in our reported metrics. This is achieved through careful hyperparameter tuning and effective handling of imbalanced datasets using techniques like SMOTE.\\n\\n> The line space of contributions listed in introduction might need adjustment. \\nThe \\\"all\\\" and \\\"avg\\\" in table1-4 are not explained in tables' captions.\\n\\nIn response to your suggestions on presentation, we have adjusted the line spacing for the contributions in the introduction and provided detailed explanations for \\\"all\\\" and \\\"avg\\\" in the text (lines 246-251) and in the captions of Tables 1-8 in the revised version of the paper.\", \"title\": \"Response by Authors.\"}", "{\"title\": \"Overall Response by Authors\", \"comment\": [\"We thank the reviewers for their detailed and thoughtful reviews. We are pleased to see that our approach has been generally recognized as novel and compelling. We have addressed individual questions, comments, and concerns in separate threads.\", \"Below, we summarize the main revisions to the manuscript for the convenience of the reviewers and the AC:\", \"The manuscript has been extensively revised to enhance clarity and readability. 
We provided a more detailed description of the feature extraction process from attention maps and Minimum Spanning Trees (MSTs), as well as a clearer explanation of the workflow for downstream tasks, including feature combination and classifier design. Additionally, the topological patterns across transformer layers\\u2014ranging from chaotic to star-like to linear configurations\\u2014are now explained with greater precision.\", \"Comprehensive ablation studies were conducted to isolate the contributions of different features, including MST-derived and non-MST features. These results, presented in Appendix B.1, demonstrate that MST-based features provide significant value.\", \"Beyond per-residue binding and conservation tasks, we included experiments on secondary structure prediction (Q3/Q8 tasks) using the NEW364 and CASP12 datasets. These results (see Appendix B.2.3) highlight the broader applicability of the proposed method.\", \"To specifically isolate the contribution of the topological data analysis (TDA) approach compared to directly leveraging attention patterns, we conducted additional experiments utilizing a self-attention map aggregation method inspired by the approach used for contact map prediction in Rao et al. (2020).\", \"Regarding the state-of-the-art model for binding site prediction tasks, which leverages additional structural data, we conducted a feasible comparison test by averaging its prediction scores with our model's. This combined approach demonstrated an increase in performance.\", \"Additionally, we conducted experiments on alternative approaches to the symmetrization of attention maps, such as a bipartite-graph construction.\", \"We believe that these additional results further strengthen our work, and we sincerely thank the reviewers for their insightful suggestions and constructive feedback.\"]}
e.g., wrong citation format (e.g., \"Several unique properties of proteins can be derived from their 3D structure Wang et al. (2022a); Zhang et al. (2022); Kucera et al. (2024); Sun et al. (2024).\" -- the references should be included in a parentheses.) and repetitive figures (e.g., Figure 4 and Figure 10).\\n\\nThank you for pointing this out; we have corrected the typos, removed repetitive elements such as the duplicated figures, and adjusted the citation format to standard conventions in the revised manuscript.\\n\\n> What's the empirical advantage of the MST-based method in comparison to other deep learning-based methods for topological feature extraction in downstream tasks?\\n\\nTo the best of our knowledge, there are currently no deep learning-based methods specifically designed to extract topological features from attention maps for per-token downstream classification tasks, making direct empirical comparisons unavailable. If the reviewer is aware of such a method, we would greatly appreciate a citation for reference.\"}
Could authors give a brief explanation? Also, the error reported in these tables is astonishingly low. How are these numbers produced? I think such low variance can only be obtained by training linear modules.\", \"questions\": \"1. Figure 6-9 should be renamed to one figure with sub figures.\\n2. The line space of contributions listed in introduction might need adjustment.\\n3. The \\\"all\\\" and \\\"avg\\\" in table1-4 are not explained in tables' captions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Follow-up Questions\", \"comment\": \"We appreciate the opportunity to address your concerns and are pleased to provide further insights regarding your follow-up question:\\n\\n> Can you provide more theoretical justification or intuition for why this topological approach should work better than alternative methods that leverage attention maps? How does it capture information that other approaches might miss?\\n\\nPersistent homology (Dey, 2022) is an established tool of Topological Data Analysis which captures the shape of data at multiple scales (global and local) and is robust to noise. $H_0$ homology depicts multi-scale clustering patterns, $H_1$ homology depicts cycles, $H_2$ depicts voids, etc. In our work, we focus on $H_0$ homology, which is fast to compute.\\n\\n> Is TDA demonstrably more effective at extracting structural information from attention maps?\\n\\nTo specifically isolate the contribution of the TDA approach compared to leveraging attention patterns directly, we conducted additional experiments using a self-attention map aggregation method, similar to the approach used for contact map prediction in Rao et al. (2020). In our case, since we focus on per-residue predictions, we extended this method by summing attention maps over rows, building on the procedure described in Rao et al. (2020). 
We refer to this approach as \"Attention Map Aggregation\" in our analysis.\\n\\nTable 1: Per-residue binding prediction experimental results. **Bold** denotes the best performance, *italic* denotes the runner-up. RES-MST (all) denotes that the attention matrices are processed individually for each attention head ($L \\times H$ matrices). RES-MST (avg) denotes that the attention matrices are averaged across all heads within a layer ($L$ matrices).\\n|Model | Parameters | DNA | RNA | HEM | ATP | CA | MN | MG | ZN | PEP | PRO |\\n|-------------|:-------------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\\n|Attention map aggregation| 650M| 57.1 | 63.1 | 56.2 | 63.7 | 62.6 | 67.4 | 63.8 | 67.3 | 63.0 | 61.3 |\\n|Attention map aggregation| 3B | 56.0 | 62.2 | 53.7 | 64.9 | 61.9 | 66.3 | 62.9 | 66.5 | 62.1 |60.3|\\n|ESM-2 | 650M |86.5 |85.3 |91.6 |89.8 |82.9 |93.4 |76.8 |96.7 |74.6|69.9|\\n|ESM-2 | 3B | 87.9 |85.7 |91.7 |90.5 |83.4 |91.7 |78.5 |96.5 |75.1|70.3|\\n|RES-MST (all) | 650M |86.0|83.7|91.2 |91.6 |*86.4* | **94.7**| *82.4*| 96.9 | 76.2 |73.2|\\n|RES-MST (avg) | 650M |77.0 | 76.0 | 86.7 | 87.6 | 81.2 | 92.4 | 79.9 | 94.9 | 70.8 | 68.5 |\\n|RES-MST (avg) | 3B | 77.4 | 75.3 | 86.2 | 87.4 | 82.0 | 92.9 | 79.8 | 95.5 | 71.5 | 69.0 |\\n|RES-MST (all) + ESM-2 | 650M |*88.3* |85.8 | *92.4* |**92.4** | **86.9**|*94.4*|**83.4**| **97.2**|*77.8* |*74.4* |\\n|RES-MST (avg) + ESM-2 | 650M | 88.3 | *85.9* | 92.1 | 91.4 | 85.5 | 93.6 | 82.2 | *97.2* | 76.8 | 73.9 |\\n|RES-MST (avg) + ESM-2 | 3B | **89.1** | **86.1** | **92.4** | *91.8* | 85.0 | 93.4 | 81.9 | 97.0 |**78.5** |**74.4**|\\n\\nTable 2: Per-residue conservation prediction experimental results. **Bold** denotes the best performance, *italic* denotes the runner-up. RES-MST (all) denotes that the attention matrices are processed individually for each attention head ($L \\times H$ matrices). RES-MST (avg) denotes that the attention matrices are averaged across all heads within a layer ($L$ matrices).\\n| Model | Parameters | Q2 Accuracy (%) | Q9 Accuracy (%) |\\n|-------------|:-------------:|:--------:|:--------:|\\n|Random | | 49.9 |12.4 |\\n|Attention map aggregation| 650M|56.3 | 15.4 |\\n|Attention map aggregation|3B|58.2 | 16.9 |\\n|ESM-2 | 650M| 79.5 | 33.2 |\\n|ESM-2 | 3B| *81.1*| 33.3 |\\n|RES-MST (all) | 650M | 78.2 | 31.5 |\\n|RES-MST (avg) | 650M | 75.1 | 27.7 |\\n|RES-MST (avg) | 3B | 75.9 | 28.4 | \\n|RES-MST (all) + ESM-2 | 650M | 81.0 |33.4 | \\n|RES-MST (avg) + ESM-2 | 650M | 80.9 |33.2 | \\n|RES-MST (avg) + ESM-2 | 3B |**81.5**|**33.9**|\"}
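The row-summation baseline described in this response can be sketched as below. This is a hedged reconstruction, not the authors' implementation: the `aggregate_attention` helper, its symmetrization step, and the toy shapes are all illustrative assumptions.

```python
import numpy as np

def aggregate_attention(attn, symmetrize=True):
    """Per-residue features from attention maps by row summation.

    Illustrative sketch of an 'attention map aggregation' baseline in
    the spirit of Rao et al. (2020), adapted to per-residue prediction:
    each (L x L) attention map is collapsed to one scalar per residue.

    attn: array of shape (n_layers, n_heads, L, L) of softmax weights.
    Returns: features of shape (L, n_layers * n_heads).
    """
    n_layers, n_heads, L, _ = attn.shape
    maps = attn.reshape(n_layers * n_heads, L, L)
    if symmetrize:
        # make each map symmetric before collapsing it (assumption)
        maps = 0.5 * (maps + maps.transpose(0, 2, 1))
    feats = maps.sum(axis=1)          # sum over rows -> (n_maps, L)
    return feats.T                    # (L, n_maps)

# toy example: 2 layers x 3 heads, sequence of length 5
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 3, 5, 5))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
X = aggregate_attention(attn)
print(X.shape)  # (5, 6): one 6-dim feature vector per residue
```

The resulting per-residue feature matrix could then be fed to any per-token classifier, which is what makes this a clean ablation of "attention patterns without TDA".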
This cost is significantly lower than full structure prediction methods like ESMFold or AlphaFold2, making our approach much faster while still leveraging structural insights from attention maps.\"}", "{\"comment\": \"**Suitability of attention maps for topology:**\\n\\nI appreciate the rationale behind using attention maps to create features for the classifier, as they can convey valuable information, as outlined in your paper. While attention maps are effective at identifying important tokens, much of the information they provide can still be quite noisy. Incorporating features derived from attention maps as complementary inputs alongside representations from foundation models such as ESM-2 could enhance overall performance. However, relying solely on features from attention maps means the model depends entirely on potentially noisy data, which could limit its robustness.\\n\\nTo support the reliability of attention maps in this context, stronger evidence would be beneficial. For instance, illustrations of attention scores showing residues that are close in 3D space but distant in sequence receiving high attention could strengthen the argument. While Figure 5 shows something related, it does not clearly highlight such amino acid pairs.\\n\\nI acknowledge the combination of features extracted from attention maps and ESM-2 embeddings in your method. However, the comparable performance of models using only attention-derived features is intriguing and warrants further explanation. If the authors can demonstrate how features derived solely from attention maps can achieve such results\\u2014perhaps by showing how attention maps capture information about the \\\"grammar\\\" of protein sequences, as explored in genome data in this work [1]\\u2014it could become a key strength of the paper. This finding could provide valuable insights into the quality and relevance of attention maps learned by foundation models like ESM-2. 
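The $O(E \log E)$ preprocessing step mentioned above can be sketched with an off-the-shelf MST routine. This is an illustrative sketch, not the authors' code; in particular, the quasi-distance construction (one minus the maximum bidirectional attention) is an assumption based on the description elsewhere in this thread.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hedged sketch: turning one attention map into an MST, the cheap
# preprocessing step discussed above. The 1 - max(A, A^T) quasi-distance
# is an assumption for illustration, not the authors' exact formula.
rng = np.random.default_rng(1)
logits = rng.normal(size=(6, 6))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

sym = np.maximum(attn, attn.T)        # symmetric "closeness"
dist = 1.0 - sym                      # quasi-distance matrix
np.fill_diagonal(dist, 0.0)           # zeros are ignored as non-edges

mst = minimum_spanning_tree(dist)     # sparse CSR result
n_edges = mst.nnz
print(n_edges)  # a spanning tree on 6 nodes has 5 edges
```

Since this runs per attention map on a dense $L \times L$ matrix, the cost is indeed negligible next to running ESMFold or AlphaFold2 on the same sequence.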
Additional explanations or analyses to clarify why this occurs would greatly enhance the reader's understanding of the approach and its broader implications.\\n\\nAdditionally, while Tables 2 and 3 showcase the model\\u2019s performance on per-residue conservation and binding prediction, these tasks focus more on residue functions than directly encoding *\\\"structurally significant information about proteins.\\\"* As such, they cannot fully demonstrate the ability of attention maps to capture structural properties.\\n\\n**MST patterns (chaotic, star, and linear)**\\n\\nThe authors claim that *\\\"the patterns align with changes in the functional and structural focus of the transformer layers\\\"*, but is there a chemically grounded explanation for why these patterns align with the actual functional and structural focus of proteins, if they do? Alternatively, if these patterns do not directly correspond to real protein functions and structures, what might be the underlying reason that learning them leads to improved model performance?\\n\\n**The choice of maximum bidirectional attention**\\n\\nI respectfully disagree with this point. The choice of the transformation method for attention scores is a critical step in this work and requires a robust justification. It is not sufficient to defer this evaluation to future work. If choosing the maximum bidirectional attention is not established as a commonly accepted or proven method that outperforms alternatives, a thorough ablation study with alternative methods should have been conducted within the scope of this work to provide a stronger foundation for the proposed approach. I would appreciate hearing more from the authors on this point to better understand their rationale.\\n\\n[1] DNA language model GROVER learns sequence context in the human genome\"}", "{\"title\": \"Post-Response\", \"comment\": \"Thank you for the response and clarifications. 
Unfortunately, however, some key concerns remain:\\n\\n> Can you provide more theoretical justification or intuition for why this topological approach should work better than alternative methods that leverage attention maps? How does it capture information that other approaches might miss?\\n\\n> How does your method compare to other approaches that use both protein sequence embeddings and attention maps? Can you provide additional baselines or comparisons to isolate the contribution of the topological data analysis approach versus simply leveraging attention patterns?\\n\\nWhile the authors assert the \\\"unique ability of topological data analysis (TDA) to capture graph-like, structural relationships within attention maps that are not explicitly represented in embeddings,\\\" this claim is not yet fully substantiated in the paper. Previous works [1-2] have already demonstrated that attention maps of protein models capture structural information. The key question remains: **Is TDA demonstrably more effective at extracting structural information from attention maps?** In the current manuscript, RES-MST is the only method that utilizes both attention maps and embeddings. Thus, it's unclear whether the performance improvement stems from (1) the use of attention maps or (2) the application of TDA. It's possible that alternative methods of leveraging attention maps could yield comparable performance gains.\\n\\nThere appears to be a misinterpretation of Rao et al. (2020). Their use of a limited number of contact map samples (20 sequences) for training the logistic regression component was specific to their focus on contact map prediction. This does not imply that their method always requires structural data. For conservation/binding prediction tasks, it should be feasible to utilize attention maps in a manner similar to Rao et al., without relying on TDA. 
This approach could serve as a valuable baseline to distinguish the contribution of the topological data analysis approach from simply leveraging attention patterns.\\n\\n[1] BERTology meets biology: Interpreting attention in protein language models (Vig et al., ICLR 2021)\\n\\n[2] Transformer protein language models are unsupervised structure learners (Rao et al., bioRxiv2020/ICLR 2021)\"}", "{\"title\": \"Response by Authors. Part 2\", \"comment\": \"> Unclear methodology description: The explanation of the method in Section 3.1 lacks clarity. Specifically: a) The exact features extracted from the MST for each amino acid are not clearly defined. b) The features extracted directly from the attention map are ambiguously described. c) The process of combining the MST-derived and attention map-derived features is not explained. d) The final prediction process using this non-parametric method is not adequately detailed.\\n\\n> Could you clarify the feature extraction process in more detail? Specifically: a) What exact features are extracted from the MST for each amino acid? b) What features are extracted directly from the attention map? c) How are these two sets of features combined? d) How is the final prediction made using this non-parametric method?\\n\\nThank you for highlighting these areas. We recognize the need for additional clarity and a more detailed methodology description, which has been addressed in the revised version of the paper (see Section 3.1). Additionally, we have updated the visualization of the method pipeline to better illustrate the workflow. Below, we provide a detailed explanation of the feature extraction and prediction process:\\n\\na) From the minimum spanning tree (MST) constructed over the attention maps, we extract the following features for each amino acid (represented as a node in the MST): Minimum, maximum, sum, and mean weights of incident edges and node degree (the number of edges connected to the node). 
These features capture the topological structure and relationships of each amino acid within the attention-based MST.\\n\\nb) From the attention maps, we extract: self-attention values for each residue and sums of absolute attention values for each row and column of the attention map, reflecting the residue\\u2019s importance or centrality in the attention context. These features provide direct insights into how the protein language model encodes relationships between residues.\\n\\nc) The MST-derived and attention map-derived features are concatenated into a unified feature vector for each amino acid. Feature vectors from multiple attention maps are further concatenated for each amino acid to create a comprehensive representation.\\n\\nd) The concatenated feature vectors for all amino acids are fed into standard machine learning classifiers to perform per-residue (token-level) classification tasks.\\n\\n> Limited comparison with relevant baselines: The paper lacks comparison with other approaches that use both protein sequence embeddings and their attention maps. This makes it unclear whether the performance improvement stems from the proposed Topological Data Analysis approach or simply from leveraging attention patterns. Additional baselines utilizing both embeddings and attention maps with different methods such as (Rao et al, 2020) is necessary to substantiate the effectiveness of the proposed method.\\n\\n> How does your method compare to other approaches that use both protein sequence embeddings and attention maps? Can you provide additional baselines or comparisons to isolate the contribution of the topological data analysis approach versus simply leveraging attention patterns?\\n\\nOur method is sequence-only, ensuring fair comparisons with ESM-2 embeddings. We excluded structural methods like Rao et al. (2020), which incorporate 3D data. 
However, we agree that adding baselines leveraging attention maps with sequence embeddings would strengthen our findings and plan to include these in future work.\"}", "{\"title\": \"Post-Response\", \"comment\": [\"Thanks for the follow-up experiments. However, It's unclear how exactly the authors used attention maps in \\\"summing attention maps over rows. Thus, I cannot say whether this is a valid baseline. Some of the unanswered questions are:\", \"What do you mean by \\\"summing attention maps over rows.\\\"? -In which axis (row or column), is the Softmax computed for the attention maps?\", \"How do you deal with the multiple attention maps in each layer and those across different layers? For example, Rao et al. trained a logistic regression on top of attention maps. Have you trained a Py-boost classifiers to aggregate information within multiple attention maps directly?\"]}", "{\"summary\": \"This paper performs a topological data analysis (TDA) of the attention maps produced by\\nESM2 protein language models. Inspired by TDA of natural language model attention maps \\nand TDA of protein structures, this work leverages the apparent relationship between \\nattention and 3D structure in ESM models. The authors demonstrate that some topological \\nfeatures are correlated with structural features of proteins and show that adding \\ntopological features improves per-residue performance on a variety of downstream tasks. \\nI would recommend to accept this paper. It is difficult to understand the precise \\ncontribution of the TDA methods, but the approach is interesting, and the experiments are \\nthorough.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2022 The method is an interesting way to probe what ESM models are attending to and\\nhow this relates to its knowledge of 3D structure. \\n\\u2022 The figures (both diagrams and renderings) are very clear and helpful. 
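The per-residue feature construction described in steps (a)-(c) of this response can be sketched as follows. This is a hedged reconstruction for one attention map: the exact feature ordering, the quasi-distance formula, and the `residue_features` helper are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def residue_features(attn):
    """Per-residue feature vector from one attention map.

    Sketch of steps (a)-(c) above: MST statistics of incident edges
    (min, max, sum, mean weight, degree) concatenated with direct
    attention statistics (self-attention, row sum, column sum).
    """
    L = attn.shape[0]
    dist = 1.0 - np.maximum(attn, attn.T)       # assumed quasi-distance
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist).toarray()
    mst = mst + mst.T                           # undirected view

    feats = []
    for i in range(L):
        w = mst[i][mst[i] > 0]                  # incident edge weights
        mst_part = [w.min(), w.max(), w.sum(), w.mean(), len(w)]
        attn_part = [attn[i, i],                # self-attention
                     np.abs(attn[i]).sum(),     # row sum
                     np.abs(attn[:, i]).sum()]  # column sum
        feats.append(mst_part + attn_part)
    return np.asarray(feats)                    # (L, 8)

rng = np.random.default_rng(2)
logits = rng.normal(size=(7, 7))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
X = residue_features(attn)
print(X.shape)  # (7, 8)
```

Per step (c), vectors like these would be concatenated across the selected attention maps before being passed to a standard classifier, as in step (d).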
\\n\\u2022 The analysis of the relationship between TDA features and 3D structure sheds some \\nlight on the utility of the method. \\n\\u2022 The results show a consistent benefit from the method and generally provide a fair \\ncomparison to other state of the art sequence methods.\", \"weaknesses\": \"The analysis of the TDA in section 3.3 feels somewhat incomplete. Is this just based\\non the one example from figure 5? Can some of these descriptions such as \\n\\u201cchaotic\\u201d vs \\u201cstar\\u201d or \\u201clinear\\u201d be quantified? What is the significance of each of \\nthese stages? \\n\\u2022 (small) Figure 7 would be clearer if the ymin was set to 0 \\n\\u2022 LMetalSite, another (strong) sequence-based method from Yuan et al (2024) is \\nmissing from the metal-binding table. Also, it may be appropriate to include \\nESMFold-derived structural methods, since this is another sequence \\n\\u201cpreprocessing\\u201d step. \\n\\u2022 The provided source code is incomplete. There was substantial use of a package \\ncalled bio_tda which was not provided.\", \"questions\": \"\\u2022 Figures 6-9 are interesting, but it is not immediately clear what the takeaway is. It\\nseems to me that figure 6, 8, and 9 can be explained by: \\u201cESM2 attends more to \\nlinear positional encoding in the early and late layers\\u201d. \\n\\u2022 What are the specific features included in the RES-MST (*) methods? The \\nperformance of these methods is suspiciously good for just the features listed in \\nsection 3.1 - in particular, the models don\\u2019t seem to need residue types? \\n\\u2022 How expensive is the MST preprocessing compared to structure prediction with \\nESMFold or AlphaFold2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response by Authors. Part 1\", \"comment\": \"Thank you for your detailed review and constructive feedback. 
We have improved the presentation according to your suggestions. Below, we address the key points to clarify and strengthen our contribution one by one.\\n\\n> Suitability of attention maps for topology: Attention maps represent learned relationships between sequence tokens (amino acids) based on the model's training objective, which is primarily language-based. These relationships are not necessarily grounded in spatial or physical proximity, which are crucial for understanding protein structure and function. Attention matrices are often dense and noisy, with attention spread across many tokens, which might make topological methods like persistent homology less informative or even misleading when applied naively. The paper would need to convincingly demonstrate that the topology derived from attention maps has a meaningful connection to physical or functional protein properties.\\n\\nAs mentioned in our paper, several approaches have been proposed for analyzing the attention maps of models trained on protein sequences (Bhattacharya et al., 2021; Vig et al., 2020). According to the findings of Vig et al. (2020), the attention maps generated by these models highlight amino acid pairs that are distant in sequence but close in structure (as indicated by correlations with pairwise contacts), highlight binding sites within proteins, and capture local secondary structure, revealing patterns corresponding to structural motifs like alpha-helices and beta-sheets. These results suggest that protein language models can infer structural proximity from sequence data alone, recognize functionally important sites essential for protein activity, and detect common structural motifs inherent in protein sequences. This demonstrates the capability of attention maps to uncover intricate structural features solely from sequence information. Based on this analysis, we conducted topological data analysis of the attention maps. 
Our results in Tables 1-2 empirically demonstrate the strong correlation between topological features derived from attention maps and structural/functional properties of proteins. For example, nodes with high degrees in the MST often correspond to highly conserved or biologically critical residues.\\n\\nBhattacharya N. et al. Interpreting potts and transformer protein models through the lens of simplified attention //PACIFIC SYMPOSIUM ON BIOCOMPUTING 2022. \\u2013 2021.\\n\\nVig J. et al. Bertology meets biology: Interpreting attention in protein language models//Ninth International Conference on Learning Representations (ICLR) 2021. \\u2013 2021.\\n\\n> While the authors analyze MST structures across layers, they don\\u2019t provide a clear theoretical or empirical justification for why these specific patterns (chaotic, star, linear) are meaningful in terms of protein functionality or how these differences are expected to relate to biological significance.\\n\\n> Could the authors elaborate on the theoretical or empirical rationale behind analyzing MST structures in terms of chaotic, star, and linear patterns? How do these specific patterns relate to protein functionality and biological significance?\\n\\nFigures 6-9 illustrate evolving topological patterns across layers, from chaotic to star-like and linear configurations. The patterns align with changes in the functional and structural focus of the transformer layers. For example, the star-like topology highlights the centralization of structural features in intermediate layers, as described in Section 3.3. Figure 6 presents a mean maximum degree of a node in MST and confirms a \\\"star\\\" pattern in middle layers (a high maximum degree) and a \\\"linear\\\" pattern in early and late layers (very low maximum degree). Figure 8 presents a mean distance between tokens corresponding to incident nodes of edges in MST. This value is low in early and late layers, proving a \\\"linear\\\" pattern. 
See also a visualization of a protein in Figure 5. We are including these explanations in the main text for clarity.\\n\\n> The transformation of attention scores into a quasi-distances matrix is a key step, but the reasoning behind this particular transformation is under-explained. Why the maximum of the bidirectional attention scores is chosen, or how this approach compares with others, isn\\u2019t detailed.\\n\\nThe choice of the maximum bidirectional attention ensures symmetry in the quasi-distance matrix, helps to reduce noise, and aligns with the symmetric nature of physical residue interactions. We leave an evaluation of alternatives, such as averaging or treating the upper- and lower-diagonal parts of the attention matrices independently, to further research.\"}
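The symmetrization choice debated here is easy to state concretely. The sketch below is illustrative only: it compares the maximum-of-bidirectional-attention construction against the averaging alternative the reviewer raises, on a hypothetical softmax attention matrix.

```python
import numpy as np

# Hedged illustration of the symmetrization choice discussed above:
# both the elementwise maximum of A and A^T and their average yield a
# symmetric matrix; which works better downstream is an empirical
# question (left to future work in the response).
rng = np.random.default_rng(3)
logits = rng.normal(size=(5, 5))
A = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

sym_max = np.maximum(A, A.T)     # max bidirectional attention
sym_avg = 0.5 * (A + A.T)        # averaging alternative

assert np.allclose(sym_max, sym_max.T) and np.allclose(sym_avg, sym_avg.T)
# max keeps the stronger direction, so it never underestimates a link:
print(bool((sym_max >= sym_avg).all()))  # True
```

Since max(a, b) >= (a + b) / 2 holds elementwise, the maximum variant always preserves the stronger of the two attention directions, which is one plausible reading of the "reduces noise" claim.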
Drawing on prior experience, the potential for performance enhancement with attention map integration appears plausible.\", \"The benchmarks assessed are less widely used (especially for the conservation prediction task), which challenges the demonstration of the new method's practical applicability.\", \"The baseline comparisons for the binding experiment are limited in diversity and omit the latest methods (e.g., [1,2]), thereby reducing the persuasiveness of the findings.\", \"There are many typos in the manuscript. e.g., wrong citation format (e.g., \\\"Several unique properties of proteins can be derived from their 3D structure Wang et al. (2022a); Zhang et al. (2022); Kucera et al. (2024); Sun et al. (2024).\\\" -- the references should be included in a parentheses.) and repetitive figures (e.g., Figure 4 and Figure 10).\", \"[1] Qianmu Yuan, Chong Tian, and Yuedong Yang. Genome-scale annotation of protein binding sites via language model and geometric deep learning. eLife, 13:RP93695, 2024.\", \"[2] Pengpai Li and Zhi-Ping Liu. Geobind: segmentation of nucleic acid binding interface on protein surface with geometric deep learning. Nucleic Acids Research, 51(10):e60\\u2013e60, 2023.\"], \"questions\": [\"What's the empirical advantage of the MST-based method in comparison to other deep learning-based methods for topological feature extraction in downstream tasks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you.\", \"comment\": \"We appreciate the opportunity to address these questions and are glad we could provide clarifications. 
Thank you for the insightful review and constructive feedback.\"}", "{\"title\": \"Response to Follow-up Questions\", \"comment\": \"We are glad that we had the opportunity to address your concerns and would also like to offer further insights regarding your follow-up question.\\n\\n> However, I remain unconvinced regarding the novelty of the method and its performance.\\n\\nTo the best of our knowledge, our approach is novel, as it represents the first application of topological data analysis (TDA) to protein language model attention maps for per-token classification. \\n\\n> Considering that accessing protein structural information is no longer a significant challenge (e.g., with structure prediction models), I see no compelling reason to differentiate between \\\"methods with structure input\\\" and \\\"methods without structure input\\\" for the same task.\\n\\nWe understand your perspective regarding the use of structural information, especially with recent advancements in structure prediction models. However, our goal in differentiating between methods that use structural input and those that do not is to emphasize the unique capability of TDA to capture graph-like, structural relationships that are not explicitly encoded in protein embeddings. While integrating structural data could indeed enhance the method, we aim to demonstrate the potential of TDA in extracting structural information independently from explicit structural inputs.\\n\\nRegarding the comparison with methods utilizing structural data, we recognize that the GPSite model by Yuan et al. (2024) is state-of-the-art in the binding site prediction tasks employed in our experiments. Unfortunately, the training code for GPSite is unavailable, which limits our ability to train GPSite directly on our features rather than on ProtTrans embeddings. Nonetheless, we conducted a feasible comparison test by averaging the prediction scores from both our model and GPSite model. 
The results show an increase in performance.\\n\\n|Model | Parameters | DNA | RNA | HEM | ATP | CA | MN | MG | ZN | PEP | PRO |\\n|-------------|:-------------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|\\n|GPSite | | 92.06 | 89.94 | 97.15 | 97.48 | 92.21 | 97.35 | 89.14 | 98.06 | 83.64 | 83.61 |\\n|GPSite + RES-MST| 650M | 92.11 | 89.98 | 97.23 | 97.53 | 92.26 | 97.29 | 89.40 | 98.13 | 83.65 | 83.67 |\"}", "{\"title\": \"Response by Authors. Part 1\", \"comment\": \"Thank you for the thoughtful review and constructive feedback. We appreciate the recognition of our method\\u2019s innovative approach in applying topological analysis to protein language models and its potential to enhance the interpretability and utility of attention maps. Below, we address the key points raised to provide further clarity and reinforce the contributions of our work.\\n\\n> The paper lacks a more comprehensive discussion of approaches for utilizing the graph topological information in the attention map. While the MST approach is one option, the authors need to provide stronger motivation for choosing this method to capture topological information. For example, why was the MST method chosen? What are its advantages? These questions require further clarification. On this basis, the paper should also offer more comparative and reference experiments, such as evaluating the impact of using different methods to model topological information on performance. The MST inherently tends to capture high-weight edges between each node and its neighbors, akin to capturing information about nodes strongly associated with each node. But what if alternative modeling methods with similar properties were used? For example, one could identify the top k nearest nodes for each node by distance and index, then construct features for downstream tasks. How would this approach differ? 
I believe this discussion is essential.\\n\\nOur paper focuses on the application of topological data analysis (TDA) to attention maps derived from protein language models. TDA examines multi-scale structural patterns within data; in our case, this involves analyzing graphs generated from attention maps. Specifically, we study $H_0$ persistence barcodes, which capture clustering and connectivity patterns at multiple scales. Essentially, these barcodes depict how the clustering patterns of a graph evolve as edges with weights exceeding a varying threshold are removed. We have shown that $H_0$ persistence barcodes are essentially equivalent to minimum spanning trees (MSTs); detailed explanations are provided on lines 166\\u2013181 in the revised version of the paper. MSTs are efficient to compute, and while many other graph representations exist, they are beyond the scope of our current research. \\n\\nWe have found that certain properties of MSTs, such as maximum node degree, exhibit significant correlation with protein conservation values (Figure 7). This is supported by the consistent performance improvements observed across downstream tasks when integrating MST features with embeddings (Tables 1\\u20132). Although our current approach focuses on $H_0$ barcodes, it can potentially be extended to $H_k, k \\\\ge 1$ barcodes, which capture topological patterns like cycles, 3D voids, etc. A comparative analysis with the $H_1$-based method RES-LT, presented in Appendix A.2, demonstrates the advantages of MST-based features, including enhanced predictive performance in binding, conservation, and secondary structure prediction tasks.\\n\\n> The paper\\u2019s organization needs improvement. The second section introduces substantial background knowledge on topological information, yet this part has little relevance to the content in the following third section. Even if removed, this background section would not impact the understanding of the paper's main content. 
Furthermore, while defining RES-LT in the appendix, this paper references topological background knowledge from the second section; however, as RES-LT is only used in the appendix, the background knowledge could be moved there as well. In other words, I find the paper's structure to be flawed, with insufficient logical cohesion between different parts.\\n\\nPlease refer to a previous answer regarding the relevance of Topological Data Analysis. RES-LT is an alternative method based on topological features. The decision to include the RES-LT performance results in the Appendix is based on the presence of an ablation study section in the supplementary materials, which specifically examines how various features influence performance. Since similar ablation studies for other non-MST feature influences are also presented in the Appendix, placing the RES-LT results there ensures consistency in the paper's structure and presentation. \\n\\n> More details regarding the experiments should be provided. For instance, what is the difference between RES-MST (ESM-2 650M all) + ESM-2 (650M) and the RES-MST (ESM-2 650M all) model? I couldn\\u2019t find any explanation of this in the paper.\\n\\nThank you for highlighting this point. We have significantly revised the description of the method (see lines 212-255) and experiments (see lines 384-391) to improve clarity and ensure a more comprehensive explanation. Additionally, we have updated the method visualization figure to better align with the revised text.\"}", "{\"title\": \"Response by Authors.\", \"comment\": \"Thank you for your thoughtful and insightful review recognizing the novelty of our approach. We will improve the presentation according to your suggestions. 
Below we address specific concerns one by one.\\n> Evaluation is limited to binding and conservation.\\n\\nWhile our evaluation primarily focuses on binding and conservation prediction tasks (10 types of binding and 2 types of conservation), these were selected as representative examples of biologically significant applications to showcase the utility of our approach. Importantly, the method's flexibility makes it suitable for other tasks, such as protein secondary structure prediction. To demonstrate this, we include experiments on secondary structure prediction in the Ablation Studies section, specifically for the RES-MST and RES-LT methods (see Table 8 in the revised version of our paper).\\n> The proposed models, RES-MST (ESM2-650M all) and RES-MST (ESM2-650M avg), perform comparably with ESM2 across the benchmarks. Specifically, ESM achieves stronger performance in 5 of the 12 benchmarks. \\n Since the topological features are extracted solely from ESM2, ESM2 already contains topological features, albeit in a rich latent representation. The similar performance of the proposed approach and ESM2 seems to suggest that one can implicitly decode these topological features from ESM2. Thus, what is the significance of this approach? Is there anything besides being \\u201cthe first time that topological data analysis has been applied to classification on a per-token basis\\u201d? What are some cases in which the proposed topological features capture information that is not easily accessible from ESM2 embeddings alone? In other words, what are some potential advantages of topological approach over the ESM embedding?\\n\\nOur method provides a unique and interpretable perspective by leveraging topological data analysis of attention maps, specifically through minimum spanning trees (MSTs), which enriches traditional embeddings. 
Notably, our approach outperforms ESM-2 embeddings in several binding prediction tasks, such as identifying protein-metal ion interactions, peptide binding, and protein-protein interactions, demonstrating its practical utility. This success stems from the fact that while ESM-2 embeddings capture rich latent features, they do not explicitly encode the structured, graph-like information present in attention maps. By distilling this information, our method captures localized structural relationships and highlights residues that are critical for biological functions, making it a valuable addition to the toolkit for protein sequence analysis and prediction tasks.\\n\\n> However, it remains unclear to me what the utility of such an approach is. \\n However, the motivation for topological data analysis is not clear, as ESM seems to perform fine.\\n\\nTopological features extracted from attention maps contain independent information that is missing from ESM-2 embeddings. In all of the experiments (Tables 1-2), combining ESM-2 embeddings with the proposed features (RES-MST) is better than using ESM-2 embeddings alone. The utility of our approach is the improved performance in a wide range of tasks (10 types of binding, 2 types of conservation prediction). The improvement is up to +4.9% for MG binding prediction.\"}", "{\"summary\": \"The paper proposes a method that applies the topological information embedded in the attention maps of protein language models (PLMs) to downstream tasks. Specifically, the paper treats the information in the attention maps as a fully connected undirected graph, where each node represents an amino acid. It then extracts a minimum spanning tree (MST) from this graph and further derives effective topological information from the MST to be used in downstream property prediction tasks. 
In summary, this work represents an effective attempt to mine structure-related topological information from the attention layers in PLMs, offering a new perspective for further analysis and understanding of PLM behavior.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The research question addressed in this paper is both interesting and significant. Understanding and interpreting the behavior and knowledge learned by PLMs is an essential research direction.\\n2. The idea of treating the attention map as a fully connected graph and modeling structure-related knowledge by capturing its topological information is innovative and worth investigating.\", \"weaknesses\": \"1. The paper lacks a more comprehensive discussion of approaches for utilizing the graph topological information in the attention map. While the MST approach is one option, the authors need to provide stronger motivation for choosing this method to capture topological information. For example, why was the MST method chosen? What are its advantages? These questions require further clarification. On this basis, the paper should also offer more comparative and reference experiments, such as evaluating the impact of using different methods to model topological information on performance. The MST inherently tends to capture high-weight edges between each node and its neighbors, akin to capturing information about nodes strongly associated with each node. But what if alternative modeling methods with similar properties were used? For example, one could identify the top k nearest nodes for each node by distance and index, then construct features for downstream tasks. How would this approach differ? I believe this discussion is essential.\\n\\n2. The paper\\u2019s organization needs improvement. 
The second section introduces substantial background knowledge on topological information, yet this part has little relevance to the content in the following third section. Even if removed, this background section would not impact the understanding of the paper's main content. Furthermore, while defining RES-LT in the appendix, this paper references topological background knowledge from the second section; however, as RES-LT is only used in the appendix, the background knowledge could be moved there as well. In other words, I find the paper's structure to be flawed, with insufficient logical cohesion between different parts.\\n\\n3. More details regarding the experiments should be provided. For instance, what is the difference between RES-MST (ESM-2 650M all) + ESM-2 (650M) and the RES-MST (ESM-2 650M all) model? I couldn\\u2019t find any explanation of this in the paper.\\n\\n4. The chosen downstream tasks primarily focus on per-residue scale tasks. However, it would be valuable to discuss structure-related tasks on a larger scale (e.g., protein function annotation), as this could reveal whether this MST-based topological modeling approach can capture more global protein property information.\\n\\n5. A more detailed comparison of the method\\u2019s runtime is needed. Compared to traditional full-parameter fine-tuning approaches, your method requires first calculating the MST, then extracting features and training a Pyboost classifier, which incurs significant time costs and may reduce algorithmic efficiency. Therefore, a discussion of the time costs of this approach compared to traditional full-parameter fine-tuning is necessary. However, in Appendix A.5, you did not provide runtime comparisons with baseline models.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
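To make the MST-based feature extraction debated in the record above concrete, here is a minimal sketch (our own illustration, not the authors' released implementation) of deriving per-residue degree features, including the max-degree statistic mentioned in the rebuttal, from a single attention map. The conversion of an attention weight w to a distance 1 - w is an assumption of this sketch.

```python
import numpy as np

def attention_mst_degrees(attn: np.ndarray) -> np.ndarray:
    """Per-residue node degrees in the MST of one (L, L) attention map."""
    L = attn.shape[0]
    dist = 1.0 - (attn + attn.T) / 2.0      # symmetrize; strong attention -> short edge
    degree = np.zeros(L, dtype=int)
    in_tree = np.zeros(L, dtype=bool)
    best = np.full(L, np.inf)               # cheapest known edge into the growing tree
    parent = np.full(L, -1)
    best[0] = 0.0
    for _ in range(L):                      # Prim's algorithm: add one node per step
        u = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[u] = True
        if parent[u] >= 0:                  # record the MST edge (parent[u], u)
            degree[u] += 1
            degree[parent[u]] += 1
        closer = ~in_tree & (dist[u] < best)
        best[closer] = dist[u][closer]
        parent[closer] = u
    return degree

rng = np.random.default_rng(0)
attn = rng.random((8, 8))                   # stand-in for a real attention map
deg = attention_mst_degrees(attn)
# a spanning tree on 8 nodes has 7 edges, so the degrees sum to 14
```

In practice such features would be computed per attention head and concatenated with the language-model embeddings before training the downstream classifier, as the rebuttal describes.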
DWWwGlPMFr
LEMoN: Label Error Detection using Multimodal Neighbors
[ "Haoran Zhang", "Aparna Balagopalan", "Nassim Oufattole", "Hyewon Jeong", "Yan Wu", "Jiacheng Zhu", "Marzyeh Ghassemi" ]
Large repositories of image-caption pairs are essential for the development of vision-language models. However, these datasets are often extracted from noisy data scraped from the web, and contain many mislabeled instances. In order to improve the reliability of downstream models, it is important to identify and filter images with incorrect captions. However, beyond filtering based on image-caption embedding similarity, no prior works have proposed other methods to filter noisy multimodal data, or concretely assessed the impact of noisy captioning data on downstream training. In this work, we propose, theoretically justify, and empirically validate LEMoN, a method to automatically identify label errors in image-caption datasets. Our method leverages the multimodal neighborhood of image-caption pairs in the latent space of contrastively pretrained multimodal models to automatically identify label errors. Through empirical evaluations across eight datasets and ten baselines, we find that LEMoN outperforms the baselines by over 3% in label error detection, and that training on datasets filtered using our method improves downstream captioning performance by 2 BLEU points.
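As a toy illustration of the multimodal-neighborhood idea sketched in this abstract (a simplified, hypothetical score with made-up 2-D embeddings, not the paper's exact LEMoN formulation), one can flag an image-caption pair when its own cross-modal similarity is low and the captions of its nearest image neighbors disagree with its caption, down-weighting neighbors whose own pairs already look mismatched:

```python
import numpy as np

def toy_neighbor_mislabel_score(img, txt, k=2):
    """img, txt: (n, d) unit-normalized image / caption embeddings.

    Score = own image-caption distance + mean caption distance to the
    captions of the k nearest *images*, each weighted by that neighbor's
    own image-caption similarity (an unreliable neighbor counts less).
    Higher score = more likely mislabeled. Simplified illustration only.
    """
    pair_sim = np.sum(img * txt, axis=1)          # cosine sim of each pair
    base = 1.0 - pair_sim                         # d(x_i, y_i)
    img_dist = 1.0 - img @ img.T                  # pairwise image distances
    np.fill_diagonal(img_dist, np.inf)            # never pick yourself
    nbrs = np.argsort(img_dist, axis=1)[:, :k]
    w = np.clip(pair_sim, 0.0, None)              # distrust already-mismatched neighbors
    nbr = np.array([np.mean(w[j] * (1.0 - txt[i] @ txt[j].T))
                    for i, j in enumerate(nbrs)])
    return base + nbr

def unit(angles):                                  # 2-D unit vectors for the toy demo
    a = np.asarray(angles)
    return np.stack([np.cos(a), np.sin(a)], axis=1)

img = unit([0.0, 0.1, 0.2, 1.6])                   # three similar images + one distinct
txt = unit([0.0, 2.5, 0.2, 1.6])                   # caption 1 is wrong on purpose
scores = toy_neighbor_mislabel_score(img, txt)
# the deliberately mislabeled pair (index 1) receives the highest score
```

The real method operates on CLIP-style embeddings with additional terms and tuned hyperparameters; this sketch only conveys why a reliability-weighted neighborhood vote can catch mismatches that raw image-caption similarity alone misses.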
[ "label error detection", "noisy labels", "image captions" ]
Reject
https://openreview.net/pdf?id=DWWwGlPMFr
https://openreview.net/forum?id=DWWwGlPMFr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xxvnZzka1D", "uELqlshtvl", "tuqI9fBeJp", "tOguYsBwy1", "s3xFspdX7c", "qxEN33zxyL", "pqxwb9YxBh", "oh85spzRS6", "jQBcRNSUf3", "hVzdriIGzM", "ccoDcMX0Tt", "cZIuWMlAY0", "cLUgfyrxxE", "bXdElyLHAi", "YrB1BnEjDG", "VJvmIPtD7l", "V5umM7xwn1", "UaVYITtXhc", "T7UMyzViO2", "SMCTK3IA13", "QQ2K9U5pAe", "Lo3avz3FfI", "KNU3NJEdQf", "EQLz9fNso2", "DhA6afeyyJ", "7p0Qf2mi8R", "5wpGmUL5FA", "4IELPXmAtz" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733145209259, 1732988079079, 1737523565747, 1732790851694, 1729695416274, 1732171299964, 1729735718387, 1732171103614, 1732171178362, 1730711526358, 1732171156883, 1733215096895, 1730359301022, 1732171421907, 1733261148381, 1732171451683, 1732171242392, 1732505138113, 1733261081856, 1733261006506, 1732171321769, 1732671248484, 1733974179845, 1733261136157, 1733000407985, 1732171384987, 1732702295923, 1732467959064 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_Ux12" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_hPEV" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_uuuA" ], [ 
"ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_Ux12" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_psZs" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_uuuA" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_hPEV" ], [ "ICLR.cc/2025/Conference/Submission3263/Area_Chair_LMot" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_psZs" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ], [ "ICLR.cc/2025/Conference/Submission3263/Reviewer_Ux12" ], [ "ICLR.cc/2025/Conference/Submission3263/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer Ux12,\\n\\nWe have now implemented VDC for our captioning problem setup and conducted experiments on the four captioning datasets used in our paper. As VDC has only been implemented and evaluated on classification datasets, we make the following adaptations to our problem setup:\\n\\n1. As we only utilize open-source models in our method, to preserve fairness, we also implement VDC with only open-source models. Specifically, we use Llama-3.1-8B-Instruct for the LLM in the Visual Question Generation (VQG) and Visual Answer Evaluation (VAE) stages (note that the VDC paper uses the OpenAI API), and InstructBLIP-Vicuna-7b as the VLLM in the Visual Question Answering (VQA) stage (as in the VDC paper). \\n\\n2. In the VQG stage, instead of generating specific questions for each class, we generate six specific questions for each caption. 
We slightly modify the VQG prompt (Table 8 in the VDC paper) to omit providing the label set, as the set of all possible captions is very large. We keep the two general questions used in the VDC paper.\\n\\n\\nWe compare the performance of VDC versus LEMoN below.\\n\\n\\n| | *flickr30k* | | *mscoco* | | *mmimdb* | | *mimiccxr* | |\\n| :--------- | --------------: | --------------: | --------------: | --------------: | --------------: | --------------: | --------------: | ----------: |\\n| | **AUROC** | **F1** | **AUROC** | **F1** | **AUROC** | **F1** | **AUROC** | **F1** |\\n| VDC | 92\\\\.9 (1.0) | 85\\\\.5 (0.6) | 94\\\\.1 (0.2) | 88\\\\.6 (0.3) | 80\\\\.5 (0.3) | 70\\\\.6 (2.2) | 50\\\\.8 (0.4) | 29\\\\.3 (1.3) |\\n| $\\\\text{LEMoN}\\\\_{\\\\text{fix}}$ | 93\\\\.6 (0.2) | - | 92\\\\.0 (0.1) | - | 84\\\\.3 (0.3) | - | 66\\\\.5 (0.2) | - |\\n| $\\\\text{LEMoN}\\\\_{\\\\text{opt}}$ | **94\\\\.5** (0.2) | **87\\\\.7** (0.9) | **95\\\\.6** (0.2) | **89\\\\.3** (0.2) | **86\\\\.0** (0.1) | **76\\\\.3** (0.1) | **70\\\\.4** (2.3) | **57\\\\.0** (1.6) |\\n\\n\\nWe find that $\\\\text{LEMoN}\\\\_{\\\\text{opt}}$ outperforms VDC in all cases. Further, we note that the per-sample runtime (shown below in milliseconds) of VDC is two orders of magnitude larger than all other methods for the same hardware. This is because VDC requires multiple sequential queries to billion-parameter scale LLMs, each of which are roughly 50x the size of the CLIP model used in LEMoN.\\n\\n\\n| | mscoco | flickr30k | mimiccxr | mmimdb |\\n| :-------- | ----------: | ----------: | -----------: | ----------: |\\n| LEMoN | 18\\\\.8 (1.8) | 35\\\\.9 (1.2) | 52\\\\.2 (2.7) | 21\\\\.1 (1.4) |\\n| CLIP Sim. 
| 20\\\\.3 (0.0) | 15\\\\.6 (0.0) | 16\\\\.8 (0.0) | 30\\\\.5 (0.0) |\\n| Deep kNN | 19\\\\.9 (0.9) | 10\\\\.6 (1.2) | 47\\\\.1 (12.7) | 20\\\\.5 (1.9) |\\n| Datamap | 39\\\\.7 (0.1) | 38\\\\.1 (4.8) | 41\\\\.4 (1.3) | 62\\\\.6 (9.5) |\\n| VDC | 4460 (880) | 5160 (503) | 7932 (3\\\\.4) | 4672 (357) |\\n\\n\\nAs we are unable to update the revision on OpenReview at this time, we will add these results to Table 3 and Table I.11 in a future revision.\\n\\nPlease let us know if our responses have sufficiently addressed all of your concerns. We sincerely appreciate all of the detailed feedback you have provided! If there are any remaining questions or comments, we would be happy to discuss.\"}", "{\"title\": \"Have we addressed your concerns?\", \"comment\": \"Dear Reviewer psZs,\\n\\nThank you again for your time and valuable feedback. Since there are a few days left in the rebuttal period, we were wondering if our response has adequately addressed your concerns. If so, we would appreciate it if you could update your review and raise your score accordingly. If there are any remaining questions or comments, we would be happy to discuss.\\n\\nThank you!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your detailed response and additional questions!\\n\\n> W1.1: The statement implies that the distance from the incorrect label $y'$ can be equal to the distance from the correct label $y$.\\n\\n\\nWe agree with this characterization, and admit that the lower bound is weak as a result. We stand by our statement that *\\\"when $L_Y$ is small, the score for the mislabeled sample cannot be much lower than the score for the positive pair with high probability.\\\"* \\n\\nAdditionally, we emphasize that Theorem 4.1 is a relatively minor contribution of our paper. It largely serves to justify prior works which already utilize the CLIP score for label error detection (Kang et al., 2023; Liang et al., 2023). 
We believe that Theorem 4.2 sufficiently justifies the main methodological novelties of our work (the scores $s_m$ and $s_n$). \\n\\n\\n> W1.2: In the proof, $p$ is described as a probability, meaning the exact number of relevant neighbors is unknown. Consequently, $S_m$ cannot be directly decomposed into two sums with a fixed number of terms in each.\\n\\nThank you for pointing this out! We have now clarified this (in L258) by defining $p$ to be the fraction of mislabeled examples in the nearest neighbors set:\\n\\n*Suppose that $\\\\frac{1}{k}|\\\\\\\\{i: (X_{m_i}, Y_{m_i}) \\\\text{ is mislabeled}\\\\\\\\}| = p$ is constant for all samples in the support of $(X, Y)$.*\\n\\n> W1.3: The variable over which the expectation is computed appears to be missing. \\n Additionally, why are nested expectations of the form E[E[\\u2026]] necessary in this context?\\n\\nThe nested expectations originate from the law of iterated expectations, i.e. $\\\\mathbb{E}[S_m(X, Y)] = \\\\mathbb{E}[\\\\mathbb{E}[S_m(X, Y) | \\\\zeta_Y]]$. Here, we have used the standard notation that an expectation (without subscript) is computed with respect to the joint distribution of all random variables in the expectation.\\n\\nConditioned on $\\\\zeta_Y$, the term $k\\\\zeta_Y(1-p)$ (the exact number of relevant neighbors) is a known constant. There is one implicit assumption that we have made, which is that $\\\\zeta_Y$ is distributed such that the random variable $k\\\\zeta_Y(1-p)$ has support only over the integers $\\\\\\\\{0, 1, ..., k\\\\\\\\}$. 
We have clarified this assumption in L838.\\n\\n\\n\\n> W2: The claim that \\\"we believe we are the first to apply it to the setting of label error detection\\\" must be revised in light of VDC.\\n\\nTo clarify, Line 151 in our paper states: *\\\"Although prior works have utilized the idea of multimodal neighbors in other settings, we believe we are the first to apply it to the setting of label error detection.\\\"* We believe this is still true since VDC does not utilize multimodal neighbors in any form.\\n\\nWe have cited VDC in our updated revision. We are working on adapting VDC to our problem setting now with open-source models, and hope to provide results for it before the end of the rebuttal period.\\n\\n\\n\\n\\n> W3: The appendix currently includes only the ranges of hyperparameters. For reproducibility and transparency, could you provide the exact hyperparameter values used during the evaluation process?\\n\\nWe have added Tables G.1 and G.2 in the appendix, which lists the optimal hyperparameters for all baselines.\\n\\n\\n\\nPlease let us know if this sufficiently addresses your concerns. We very much welcome any additional feedback that can further strengthen the paper!\"}", "{\"summary\": \"The paper presents LEMoN, a novel approach designed to identify inconsistencies in image-caption pairs, with a focus on label noise detection. The proposed method demonstrates improved detection performance in classification and captioning tasks. Additionally, the paper offers theoretical justifications to support the proposed cross-modal scoring method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The work introduces an original scoring method for label noise detection.\\n2. The proposed method is intuitive and clearly explained.\\n3. The method exhibits notable improvements across most experiments involving synthetic label noise, especially among training-free approaches.\", \"weaknesses\": \"1. 
The theoretical justifications provided in the paper contain several significant flaws, limiting their effectiveness as a core claim:\\n - Theorem 4.1 appears to contain contradictory conditions, where the variable $\\\\eta$ is defined as normal but also subject to the constraint $|\\\\eta| > \\\\epsilon$.\\n - In Proposition A.1, Part 3 (line 885), the inequality does not hold, as the condition on $y' \\\\ne y$ cannot be omitted. This leads to the conclusion $\\\\mathbb{E} \\\\le p\\\\mathbb{E}$ with $p < 1$, implying that $\\\\mathbb{E} = 0$.\\n - There is frequent interchange between labels and embeddings, which results in incorrect conclusions. For example, in line 913, the embedding of $y'$ is replaced with label $y$ and an additive term $\\\\eta$. However, according to Assumption 1 (line 855), both $y'$ and $y$ should be labels, not embeddings.\\n - The expectation operator is lost when transitioning from the equation in line 915 to the one in line 917.\\n - In the proof of Theorem 4.2 (line 954), the variance of $\\\\frac{1}{k} E$ should be expressed as $\\\\frac{1}{k^2} Var E$, rather than $\\\\frac{1}{k} Var E$.\\n2. While the paper claims novelty in applying multi-modal scoring to label noise detection, this approach has recently been explored in [1].\\n3. The provided source code does not include implementations of the baseline methods. As a result, it remains unclear how the hyperparameters for these baseline methods were tuned, particularly given that the authors introduced a new set of synthetic datasets.\\n4. Some previous works [2] evaluate the area under the accuracy/filter-out-rate curve on real datasets, which provides a better understanding of filtering quality in real-world applications. The authors address this metric in Appendix I.12, where the results suggest that Deep k-NN may be a more effective alternative. 
However, these results are presented for only two datasets, limiting the generalizability of the findings.\\n\\n[1] Zhu, Zihao, et al. \\\"VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimodal Large Language Models.\\\" ICLR 2024.\\n\\n[2] Bahri, Dara, et al. \\\"Deep k-nn for noisy labels.\\\" ICML 2020.\", \"questions\": \"1. Which implementations of the baseline methods were used, and how were their hyperparameters selected?\\n2. How does LEMoN compare to VDC [1] in terms of strengths and weaknesses?\\n3. What is the inference speed of LEMoN relative to other methods? How does it scale with larger datasets, and is it feasible to apply this method to billion-scale datasets?\\n4. Why does LEMoN not show improvements in terms of the area under the accuracy/filter-out-rate curve?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed review and constructive suggestions!\\n\\n> W1: Their approach seems to be mainly designed to detect label errors in image-caption datasets. However, the effectiveness of applying filtering to such a dataset seems to be marginal according to Table 4.\\n\\n\\nWe highlight that in Table 4, even training with a fully clean dataset only outperforms training with a fully noisy (i.e. no filtering) dataset by 2-3 BLEU-4 points. Thus, the range of potential improvement for any filtering method is bounded by this difference. Adding on the variance associated with model training (standard deviations up to 1.0), it is difficult for any method to outperform another with significance. This is an interesting result, since it means that some pre-trained captioning models are stable or have small-sized performance drops in the presence of noisy captioning data. We interpret the result from Table 4 to be that both LEMoN and CLIP Sim. 
are nearly able to recover the performance of clean data for both datasets. In addition, we note that LEMoN performs slightly better in mscoco, partly because its larger dataset size results in a smaller variance.\\n\\nIn addition, we highlight that in the classification setting, LEMoN is consistently among the top-2 methods across all four datasets in downstream performance. Lastly, in addition to filtering out incorrect data, such label error detection methods are also useful in and of itself. For example, identifying mislabeled samples (and potentially annotators responsible for these mislabels) can lead to improvements in labeling processes. Thus, we strongly believe that label error detection using LEMoN has real-world utility. We will also demonstrate the utility of LEMoN to effectively filter data for downstream training in Datacomp below.\\n\\n> W2: However, there can be more diverse types of noise in real image-caption data collected from the web. \\n\\n> W3: Also, according to their appendix, their metric does not show much gain in the CC3M, which is webly collected. I actually feel that the authors had to conduct more analysis on this kind of dataset since their main motivation seems to detect errors on this kind of dataset, rather than image-classification dataset.\\n\\nThank you for this suggestion. We have conducted a new experiment of $\\\\text{LEMoN}\\\\_{\\\\text{fix}}$ on Datacomp [1]. We use the small dataset from the filtering track, which originally consisted of 12.8M images. As these images are accessed directly from the web, only 9.96M images were able to be downloaded as of 2024/11/14. We apply $\\\\text{LEMoN}\\\\_{\\\\text{fix}}$ to this dataset using OpenAI CLIP ViT-L/14 embeddings provided by Datacomp. We select the 3.5M images with lowest mislabel scores, and use the default hyperparameters from Datacomp to train a CLIP model, and evaluate it on the same 38 zero-shot classification datasets. 
We compare with filtering using only the CLIP score to the same number of images.\\n\\n| | Method | ImageNet | ImageNet Dist. Shifts | VTAB | Retrieval | Avg (38 Datasets) |\\n| :----------------------------------------- | :----------------------- | ---------: | --------------------------: | ---------: | ---------: | -------------------------: |\\n| Data Currently Available (9\\\\.96M Samples) | LEMoN | **0\\\\.045** | **0\\\\.053** | **0\\\\.188** | 0\\\\.116 | **0\\\\.168** |\\n| | CLIP score | 0\\\\.043 | 0\\\\.049 | 0\\\\.177 | **0\\\\.119** | 0\\\\.160 |\\n| From Datacomp Paper (12\\\\.8M Samples) | No filtering | 0\\\\.025 | 0\\\\.033 | 0\\\\.145 | 0\\\\.114 | 0\\\\.132 |\\n| | Basic filtering | 0\\\\.038 | 0\\\\.043 | 0\\\\.150 | 0\\\\.118 | 0\\\\.142 |\\n| | Text-based | 0\\\\.046 | 0\\\\.052 | 0\\\\.169 | **0\\\\.125** | 0\\\\.157 |\\n| | Image-based | 0\\\\.043 | 0\\\\.047 | 0\\\\.178 | 0\\\\.121 | 0\\\\.159 |\\n| | LAION-2B filtering | 0\\\\.031 | 0\\\\.040 | 0\\\\.136 | 0\\\\.092 | 0\\\\.133 |\\n| | CLIP score | **0\\\\.051** | **0\\\\.055** | **0\\\\.190** | 0\\\\.119 | **0\\\\.173** |\\n| | Image-based + CLIP score | 0\\\\.039 | 0\\\\.045 | 0\\\\.162 | 0\\\\.094 | 0\\\\.144 |\\n\\n\\nWe find that given the available images, LEMoN outperforms the baseline on average, and on three of four individual evaluations. However, neither method outperforms the scores reported in the original paper due to their dataset being larger.\\n\\nWe have added this table and discussion to Appendix I.10.\"}", "{\"summary\": \"This paper presents an approach to detect misalignment between image-text pairs to clean image-text datasets. To achieve it, they propose a new metric that considers the distance between the image-text pairs and neighbors in image and text space.\\n\\nThe high-level idea of their metric to compute label-error score is as follows. \\n1. 
Given an image-text pair that is neighboring a target pair, if the text is far from the target text, but the image is close to the target image, the target pair can be inconsistent. \n2. The neighboring pair used above can also be mismatched. To account for such a case, they weight the score using the similarity between the neighboring pair. \n\nEmpirically, they evaluate the effectiveness of the proposed metric on image-classification datasets, image-caption datasets such as COCO, and a medical image-report dataset. For evaluation in the image-text dataset, they randomly inject noise into the supervision of the dataset, e.g., replacing object names. Overall, their metric seems to be better than existing metrics in detecting noise, but the improvement was marginal in image captioning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. They provide a new metric to detect label noise in image-text data, which is reasonable. Also, their experiments verify that the proposed metric outperforms an existing metric in detecting errors in their settings.\n2. Their writing and presentation are clear, and mostly easy to follow. \n3. Their approach includes some hyper-parameters, but the robustness to such parameters is also investigated. \n4. They conduct a wide range of experiments, which can be insightful for readers.\", \"weaknesses\": \"I am concerned that the experiments are not so focused on noisy image-caption datasets, although their motivation is to handle issues of noisy data collected from the web.\n\n1. Their approach seems to be mainly designed to detect label errors in image-caption datasets. However, the effectiveness of applying filtering to such a dataset seems to be marginal according to Table 4. I think label-error identification is proven to be effective by improving performance on downstream tasks. In this sense, the effectiveness of the proposed approach is not proven enough. \n\n2. 
They conduct experiments on COCO and Flickr to show the effectiveness on image-caption datasets. Then, the effectiveness of their metric is verified only on the synthetic noise they created. However, there can be more diverse types of noise in real image-caption data collected from the web. For example, some captions might focus on a specific aspect of the image while others include more detail. According to the experiments, it is not clear how their metric behaves in such cases. \n\n3. Also, according to their appendix, their metric does not show much gain in the CC3M, which is web-collected. I actually feel that the authors had to conduct more analysis on this kind of dataset since their main motivation seems to detect errors on this kind of dataset, rather than image-classification datasets.\", \"questions\": \"1. It took some time to understand the intuition behind Eq. 2. I think it is better to provide high-level ideas of what Eq. 2 is computing.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank all reviewers for their time and valuable feedback! We are glad that reviewers found the research problem to be \\\"important\\\" (uuuA), the method to be \\\"well-motivated\\\" (uuuA), \\\"intuitive and clearly explained\\\" (Ux12), with \\\"both theoretical justification and empirical validation\\\" (psZs), and the writing \\\"clear and mostly easy to follow\\\" (hPEV). Based on their comments and suggestions, we have made several improvements to the paper. We highlight the major additions here, and provide detailed responses to each reviewer below.\n\n1. **Experiments on Datacomp.** To address a concern from Reviewer hPEV, we have run our method along with a baseline on the Datacomp filtering track small subset, which consists of 12.8M images from the web. 
We find that training CLIP models with LEMoN-filtered data outperforms the CLIP similarity baseline on downstream zero-shot classification tasks. \n\n2. **Extended Ablations.** Following a suggestion from Reviewer psZs, we have run an extended ablation study measuring the performance of each of the three terms within LEMoN and all of their possible combinations, for all datasets. We find that $d_{mm}$ is the most critical term. Of the two nearest-neighbor terms, we find that $s_n$ (nearest image neighbors) is more important for most datasets.\n\n3. **Writing and Theory Clarifications.** We have improved the clarity of the writing in response to questions raised by the reviewers. We have also corrected the proof of Theorem 4.1 following flaws astutely pointed out by Reviewer Ux12.\n\n\nAll of these changes have also been added to the updated revision. Please feel free to follow up with us if you have additional feedback, questions, or concerns. We very much welcome any feedback that can further strengthen the paper. Thank you again to all reviewers.\n\n\n[1] Datacomp: In search of the next generation of multimodal datasets. NeurIPS 2024.\"}", "{\"comment\": \"> W3: In the downstream captioning task, the improvements over CLIP similarity seem trivial. Can the authors provide some analysis of the possible reason?\n\nThis is a good point. We highlight that in Table 4, even training with a fully clean dataset only outperforms training with a fully noisy (i.e., no filtering) dataset by 2-3 BLEU-4 points. Thus, the range of potential improvement for any filtering method is bounded by this difference. Adding in the variance associated with model training (standard deviations up to 1.0), it is difficult for any method to outperform another by a large margin. This is an interesting result, since it means that some pre-trained captioning models are stable or have small performance drops in the presence of noisy captioning data. 
We interpret the result from Table 4 to be that both LEMoN and CLIP Sim. are nearly able to recover the performance of clean data for both datasets. In addition, we note that LEMoN performs slightly better in mscoco, partly because its larger dataset size results in a smaller variance.\n\nFurther, we have conducted an additional experiment with $\\text{LEMoN}\\_{\\text{fix}}$ on Datacomp [1]. We use the small dataset from the filtering track, which originally consisted of 12.8M images. As these images are accessed directly from the web, only 9.96M images could be downloaded as of 2024/11/14. We apply $\\text{LEMoN}\\_{\\text{fix}}$ to this dataset using OpenAI CLIP ViT-L/14 embeddings provided by Datacomp. We select the 3.5M images with the lowest mislabel scores, use the default hyperparameters from Datacomp to train a CLIP model, and evaluate it on the same 38 zero-shot classification datasets. We compare against filtering the same number of images using only the CLIP score.\n\n| | Method | ImageNet | ImageNet Dist. Shifts | VTAB | Retrieval | Avg (38 Datasets) |\n| :----------------------------------------- | :----------------------- | ---------: | --------------------------: | ---------: | ---------: | -------------------------: |\n| Data Currently Available (9\\.96M Samples) | LEMoN | **0\\.045** | **0\\.053** | **0\\.188** | 0\\.116 | **0\\.168** |\n| | CLIP score | 0\\.043 | 0\\.049 | 0\\.177 | **0\\.119** | 0\\.160 |\n| From Datacomp Paper (12\\.8M Samples) | No filtering | 0\\.025 | 0\\.033 | 0\\.145 | 0\\.114 | 0\\.132 |\n| | Basic filtering | 0\\.038 | 0\\.043 | 0\\.150 | 0\\.118 | 0\\.142 |\n| | Text-based | 0\\.046 | 0\\.052 | 0\\.169 | **0\\.125** | 0\\.157 |\n| | Image-based | 0\\.043 | 0\\.047 | 0\\.178 | 0\\.121 | 0\\.159 |\n| | LAION-2B filtering | 0\\.031 | 0\\.040 | 0\\.136 | 0\\.092 | 0\\.133 |\n| | CLIP score | **0\\.051** | **0\\.055** | **0\\.190** | 0\\.119 | **0\\.173** |\n| | Image-based + CLIP score | 0\\.039 | 0\\.045 | 0\\.162 | 0\\.094 | 0\\.144 |\n\n\nWe find that given the available images, LEMoN outperforms the baseline on average, and on three of four individual evaluations. However, neither method outperforms the scores reported in the original paper, because their dataset is larger.\n\nWe have added this table and discussion to Appendix I.10.\n\n[2] Learning Transferable Visual Models From Natural Language Supervision. ICML 2021.\"}", "{\"summary\": \"This paper proposes a label error detection method for multimodal datasets. Specifically, the authors first use pre-trained vision-language models to extract image-caption embeddings. Then they leverage the distance of multi-modal neighborhoods to detect label errors in image-caption datasets. This paper also provides a theoretical analysis of the feasibility of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The research problem is important. Image-caption datasets are widely used to train multimodal models. Detecting label errors in these datasets is important for downstream tasks.\n2. The idea of using pre-trained multimodal models is well-motivated and the theoretical analyses are reasonable. \n3. This paper is very well organized and written in general.\", \"weaknesses\": \"1. The details of the application on the unimodal dataset need to be clarified. How to define the nearest neighbors of text in unimodal datasets like CIFAR10/100?\n2. Figure 3 can be improved. These lines overlap too much and are difficult to distinguish.\n3. In the downstream captioning task, the improvements over CLIP similarity seem trivial. Can the authors provide some analysis of the possible reason?\", \"questions\": \"see the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed review and constructive suggestions!\n\n> W1: The details of the application on the unimodal dataset need to be clarified. How to define the nearest neighbors of text in unimodal datasets like CIFAR10/100?\n\nSimilar to the original CLIP paper [2], to generate the text modality for these datasets, we use the name or description associated with each particular label. For example, class 0 in cifar10 is \\\"airplane\\\", and this is the caption that we associate with all images of that class. The representation corresponding to this text modality is then used to define text-based neighbors.\nWe have clarified this further in Appendix D.1 of our revised paper. \n\n\n> W2: Figure 3 can be improved. These lines overlap too much and are difficult to distinguish.\n\nThank you \u2013 the small size of the image was due to the lack of space. We have now updated the figures in Appendix I.13 with an enlarged scale. 
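Returning to the W1 answer above, the class-name-as-caption construction for unimodal datasets can be sketched as follows. The label names and embeddings here are toy stand-ins, not the real cifar10 list or an actual CLIP text encoder:

```python
import numpy as np

# Toy label names for a cifar10-style dataset (illustrative only).
class_names = ["airplane", "automobile", "bird"]
labels = np.array([0, 2, 0, 1])

# Every image of a class shares the caption built from its label name.
captions = [class_names[y] for y in labels]
print(captions)  # -> ['airplane', 'bird', 'airplane', 'automobile']

# The per-sample text embedding is therefore the embedding of the class name,
# so text-space nearest neighbors are first the same-class samples (distance 0),
# then the classes whose name embeddings are closest.
rng = np.random.default_rng(0)
name_emb = rng.normal(size=(len(class_names), 4))
name_emb /= np.linalg.norm(name_emb, axis=1, keepdims=True)
text_emb = name_emb[labels]
assert np.allclose(text_emb[0], text_emb[2])  # same class -> identical caption embedding
```

In a real setup the name embeddings would come from the CLIP text encoder, often with prompt templates such as "a photo of a {label}".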
We have also computed the area under the test-error vs. %-data-retained curves for each of these figures, as Reviewer Ux12 suggested, in Appendix Table I.17, where we find that LEMoN has better AUC than all other baselines on cifar10 and cifar100 (i.e., the plots in Figure 3).\"}", "{\"comment\": \"I appreciate the authors' efforts in conducting experiments with VDC and providing both the revised version of the paper and the associated source code. After reviewing both, I have the following observations.\n\n**Theory.** The theoretical section of the paper has significant issues and should be reconsidered.\n- **Theorem 1** claims that \\\"Contrastive Multimodal Embedding Models Detect Noisy Labels,\\\" but the conclusion does not support this claim. The distances in the final inequality can remain equal even with multiple strong constraints. \n- **Theorem 2** is trivial, as it simply states that random variables with different distributions can be distinguished to some extent. \n- **Assumption 2** is not properly validated in Appendix A.3. The appendix presents dataset-level statistics, whereas Assumption 2 pertains to individual image-label pairs. \n\n**Motivation and Novelty.** The paper lacks a thorough analysis of the distinction between cross-modal retrieval and label noise detection. \n- It uses CLIP, a retrieval model, for label noise detection, but similar approaches have been explored in prior work. For instance, label noise detection via retrieval has been discussed in Bahri et al. [1], and multimodal retrieval has been studied in [2,3,4]. This overlap makes the novelty unclear.\n- Appendix B argues that second-order captions can be similar to the original ones for misaligned images, but this claim seems counterintuitive. Misaligned images are more likely to produce captions that differ from the original.\n\n**Baselines.** The paper does not fairly evaluate the proposed method against its closest baseline [3]. 
\n- Only the discrepancy (DIS) score for a single modality is implemented, while the original baseline combines both discrepancy (DIS) and divergence (DIV) scores in a single framework (Equations (7) and (8) from the original paper). \n- LEMoN evaluates scores from both modalities, whereas the baseline is tested only for one. This creates an unfair comparison.\n- For a fair evaluation, the baseline must be implemented using combined DIS and DIV scores across both modalities, consistent with how LEMoN is evaluated.\n\nBased on these observations, I will retain my original score.\n\n[1] Bahri D. et al., \\\"Deep k-nn for noisy labels,\\\" ICML 2020 \n\n[2] Rafailidis D. et al., \\\"A unified framework for multimodal retrieval,\\\" Pattern Recognition 2013 \n\n[3] Thomas C. et al., \\\"Emphasizing Complementary Samples for Non-literal Cross-modal Retrieval,\\\" CVPR 2022 \n\n[4] Yi C. et al., \\\"Leveraging Cross-Modal Neighbor Representation for Improved CLIP Classification,\\\" CVPR 2024\"}", "{\"summary\": \"This paper proposed a new way to filter noisy multimodal data. Besides using image-caption embedding similarity, the new approach leverages the multimodal neighborhood of image-caption pairs to identify label errors. The method demonstrates improvements in label error detection and enhances performance on downstream captioning tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper introduces a novel approach that uses multimodal nearest neighbors to assess the relevance between images and captions, providing both theoretical justification and empirical validation for the proposed method. The experiments evaluate its effectiveness in detecting label errors and its impact on downstream classification and captioning models.\", \"weaknesses\": \"The current experiments lack the breadth needed to fully demonstrate the impact of adding nearest neighbor terms. 
It would be beneficial to include a comparison using only single-side nearest neighbor term, and to present the actual values of all three terms for clearer insight.\", \"questions\": \"1. In section 3, a few symbols are not explained, like r, D, k etc. It's better to split the section 3 into multiple sub sections to explain each term separately\\n2. Is there any explanation that why using pure CLIP similarity performs better than LEMoN on Flickr30k?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> W2: While the paper claims novelty in applying multi-modal scoring to label noise detection, this approach has recently been explored in [1].\\n\\n> Q2: How does LEMoN compare to VDC [1] in terms of strengths and weaknesses?\\n\\nThank you for this reference! First, we would like to emphasize that VDC is only evaluated on classification datasets, while our method is able to detect label errors in classification and captioning datasets.\\n\\nConceptually, our method is also very different from VDC: VDC relies on an LLM to generate questions about the consistency of a label to a given image, uses a multimodal large language model to automatically generate answers for each image and question, and then evaluates how well the generated caption matches the actual label. Thus, VDC entirely relies on prompting LLMs and VLLMs. In contrast, our method does not utilize any prompt engineering, and instead utilizes the neighborhood information in contrastively trained representations of image and text representations. 
In addition, our method is more principled in that we have theoretical guarantees for the performance of our multimodal neighborhood score (Theorem 4.2).\n\nThird, our method outperforms SimiFeat-R, a method with which VDC performs comparably (Table 4 of the VDC paper), by a significant margin (3-6 AUROC points across different noise types; Tables 2 and I.2 in our submission). In principle, the idea of prompting a VLLM is similar to our LLaVa baseline, which LEMoN also far outperforms.\n\nFourth, our method uses fully open-source models, while VDC relies on ChatGPT based on GPT-3.5-turbo for their question generation and answer evaluation portions, and they do not evaluate any open-source alternatives. \n\nFinally, in order to achieve the prompting abilities required, VDC utilizes models that are much larger than ours. For example, the InstructBLIP used in VDC contains 7B parameters, which is 46x as large as the CLIP ViT-B-32 (151M parameters) used in our paper. This certainly has negative implications for their inference time, which is important in applying such methods to billion-scale datasets, as the reviewer pointed out. As VDC also utilizes ChatGPT, this would be extremely costly as well.\n\nWe are currently working on adapting VDC to our problem setup of detecting mislabeled image-caption pairs, utilizing only open-source models. Results for this baseline will be added in a future revision.\n\n\n> W3: The provided source code does not include implementations of the baseline methods. As a result, it remains unclear how the hyperparameters for these baseline methods were tuned, particularly given that the authors introduced a new set of synthetic datasets.\n> Q1: Which implementations of the baseline methods were used, and how were their hyperparameters selected?\n\nThanks for raising this point! We highlight that all the hyperparameter settings for the baselines are in Appendix G. 
Based on the reviewer\\u2019s suggestion, we have updated the supplementary code to include baselines, including the hyperparameter grids used for running each baseline in experiments.py. For all baselines, the hyperparameters were selected based on the validation set F1-score, matching the $\\\\text{LEMoN}\\\\_{\\\\text{opt}}$ setting. For Simifeat, we use the open-sourced implementation directly from https://github.com/Docta-ai/docta/tree/master. We have clarified this in Appendix G in the revised paper.\"}", "{\"comment\": \"We believe the only remaining concern is regarding the weakness of the bound in Theorem 1. We largely agree with you on this point, but highlight that this theorem is a relatively minor contribution and a supplementary result of our paper, which largely serves to justify prior works which already utilize the CLIP score for label error detection. As such, we believe this is not sufficient reason to warrant such a negative assessment on its own. We would be open to moving this theorem to the Appendix if suggested so by the reviewer. We emphasize that our key contributions are independent of this theorem.\\n\\nGiven this and the factual errors in your response, we kindly ask that you reconsider your assessment of the paper. We thank you again for your feedback, and your active participation during the rebuttal period.\"}", "{\"comment\": \"> W4: Some previous works [2] evaluate the area under the accuracy/filter-out-rate curve on real datasets, which provides a better understanding of filtering quality in real-world applications. The authors address this metric in Appendix I.12, where the results suggest that Deep k-NN may be a more effective alternative. However, these results are presented for only two datasets, limiting the generalizability of the findings.\\n\\n> Q4: Why does LEMoN not show improvements in terms of the area under the accuracy/filter-out-rate curve?\\n\\nThank you for raising this point! 
We highlight that we have indeed shown results for all four classification datasets (Figure 3 in main paper and Appendix I.14). \\n\\nWe have now computed the AUC using the test error (i.e., 1-accuracy) vs filter-out-rate curves as the reviewer suggested (and similar to deepknn). These AUC scores are shown below. Note that the minimum %data retained is 20% (i.e., the minimum amount of data required for training the downstream model).\\n\\nOn both cifar10 and cifar100, we observe that LEMoN performs the best in terms of AUC (i.e., lowest test error). On stanfordCars and miniImagenet, deep kNN performs better. However, the gap in performance is low between LEMoN and the best method (less than 0.9% on stanfordCars and 1.2% on miniImagenet).\\n\\n\\n\\n(Area under test error vs %Data filtered curve, lower is better)\\n| Method | cifar10 | cifar100 | stanfordCars | miniImagenet |\\n| :---------- | :------ | :------- | :----------- | :----------- |\\n| CLIP Sim. | 5\\\\.85 | 18\\\\.41 | 46\\\\.81 | 26\\\\.02 |\\n| CLIP Logits | 5\\\\.56 | 17\\\\.07 | 47\\\\.34 | 25\\\\.48 |\\n| Discrepancy | 8\\\\.45 | 20\\\\.82 | 48\\\\.30 | 30\\\\.03 |\\n| Deepknn | 5\\\\.34 | 17\\\\.74 | **46\\\\.19** | **24\\\\.69** |\\n| Ours | **4\\\\.98** | **16\\\\.60** | 46\\\\.29 | 25\\\\.95 |\\n\\nWe have added this table and discussion to Appendix I.17.\\n\\n\\n\\n\\n> Q3: What is the inference speed of LEMoN relative to other methods? How does it scale with larger datasets, and is it feasible to apply this method to billion-scale datasets?\\n\\nWe have added the per-sample runtime (wall clock time) in ms for LEMoN relative to (1) Clip similarity, (2) Deep k-nn, and (3) Datamap (training-dependent) baselines in Appendix Table I.11. All experiments were conducted using an NVIDIA RTX A6000 GPU and 8 CPU cores. 
Standard deviations across 3 random data seeds are shown in parentheses.\n\n\n| | cifar10 | cifar100 | miniImageNet | stanfordCars | mscoco | flickr30k | mimiccxr | mmimdb |\n| :-------- | ----------: | ----------: | -----------: | -----------: | ----------: | ----------: | -----------: | ----------: |\n| LEMoN | 10\\.1 (0.5) | 9\\.6 (0.5) | 7\\.8 (1.6) | 11\\.0 (2.0) | 18\\.8 (1.8) | 35\\.9 (1.2) | 52\\.2 (2.7) | 21\\.1 (1.4) |\n| CLIP Sim. | 1\\.8 (0.0) | 1\\.8 (0.0) | 2\\.7 (0.4) | 3\\.5 (0.5) | 20\\.3 (0.0) | 15\\.6 (0.0) | 16\\.8 (0.0) | 30\\.5 (0.0) |\n| Deep kNN | 7\\.0 (1.3) | 5\\.1 (0.1) | 8\\.7 (1.2) | 6\\.0 (0.1) | 19\\.9 (0.9) | 10\\.6 (1.2) | 47\\.1 (12.7) | 20\\.5 (1.9) |\n| Datamap | 37\\.6 (0.2) | 37\\.5 (0.3) | 37\\.7 (1.6) | 37\\.2 (0.3) | 39\\.7 (0.1) | 38\\.1 (4.8) | 41\\.4 (1.3) | 62\\.6 (9.5) |\n\n\n\nWe observe that LEMoN generally has comparable runtime to deep kNN, and significantly lower runtime than Datamap. Note that we use Datamap with LoRA on the captioning datasets, which is why runtime differences between Datamap and LEMoN are lower in these datasets. \n\nFinally, we note that LEMoN is embarrassingly parallelizable, and can easily be distributed across multiple processes and servers, whereas trying to distribute training-dependent methods across multiple servers is a bigger challenge.\"}", "{\"comment\": \"Thank you for the detailed review and constructive suggestions!\n\n> W1: The current experiments lack the breadth needed to fully demonstrate the impact of adding nearest neighbor terms. It would be beneficial to include a comparison using only single-side nearest neighbor term, and to present the actual values of all three terms for clearer insight.\n\nThank you for the suggestion. We have conducted a full ablation study of the performance of each of the three terms in LEMoN, for all datasets. 
The following table shows the AUROC of each score and all possible combinations:\n\n\n| | cifar10 | cifar100 | miniImageNet | stanfordCars | flickr30k | mscoco | mmimdb | mimiccxr |\n| :------------------------- | --------------: | --------------: | --------------: | --------------: | --------------: | --------------: | --------------: | --------------: |\n| $d\\_{mm}$ (CLIP Sim.) | 93\\.8 (0.1) | 78\\.5 (0.6) | 89\\.3 (0.2) | 69\\.8 (0.6) | 94\\.8 (0.5) | 93\\.8 (0.2) | 85\\.1 (0.3) | 64\\.1 (0.4) |\n| $s\\_m$ | 79\\.3 (2.8) | 65\\.4 (2.0) | 80\\.8 (0.3) | 66\\.0 (0.9) | 76\\.3 (1.8) | 75\\.8 (0.3) | 60\\.1 (0.4) | 59\\.0 (0.6) |\n| $s\\_n$ | 98\\.1 (0.0) | 88\\.4 (0.1) | 84\\.3 (0.2) | 72\\.8 (0.7) | 71\\.4 (1.6) | 76\\.5 (0.5) | 55\\.1 (0.3) | 57\\.9 (2.1) |\n| $d\\_{mm} + s\\_m$ | 92\\.5 (0.5) | 81\\.3 (1.1) | 89\\.6 (0.2) | 69\\.7 (0.5) | **95\\.0** (0.5) | 94\\.6 (0.3) | **86\\.0** (0.4) | 64\\.5 (0.6) |\n| $s\\_n + s\\_m$ | 98\\.0 (0.2) | 88\\.8 (0.2) | 84\\.5 (0.4) | 72\\.8 (0.7) | 83\\.5 (0.5) | 86\\.1 (0.6) | 67\\.6 (0.9) | 63\\.6 (0.6) |\n| $d\\_{mm} + s\\_n$ | **98\\.2** (0.1) | **90\\.8** (0.1) | 89\\.9 (0.3) | **73\\.9** (0.7) | 94\\.9 (0.3) | 94\\.9 (0.2) | 85\\.3 (0.3) | 66\\.4 (2.4) |\n| $d\\_{mm} + s\\_n + s\\_m$ (LEMoN) | 98\\.1 (0.0) | **90\\.8** (0.0) | **90\\.2** (0.2) | 73\\.1 (0.5) | 94\\.5 (0.2) | **95\\.6** (0.2) | **86\\.0** (0.1) | **70\\.4** (2.3) |\n\n\n\nWe find that $d_{mm}$ is the most critical term. Of the two nearest-neighbor terms, we find that $s_n$ (nearest image neighbors) is more important in general, though this is highly dataset dependent; e.g., error detection in mmimdb relies much more on neighbors in the text space than the image space, while the opposite is true for mscoco.\n\nWe have added this table to the appendices, and discussed it briefly in Section 6.3. 
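An ablation of this shape can be reproduced in miniature: compute AUROC (here via the Mann-Whitney rank statistic) for every nonempty subset of the three terms. The synthetic scores and the unweighted sum below are purely illustrative; the actual LEMoN combination uses tuned weights on real embeddings:

```python
import itertools
import numpy as np

def auroc(scores: np.ndarray, is_mislabeled: np.ndarray) -> float:
    """AUROC via the Mann-Whitney U statistic: the probability that a
    mislabeled sample scores above a clean one (assumes no ties)."""
    ranks = scores.argsort().argsort()          # 0-based ranks
    pos = is_mislabeled.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    u = ranks[pos].sum() - n_pos * (n_pos - 1) / 2
    return float(u / (n_pos * n_neg))

rng = np.random.default_rng(0)
n = 2000
flags = rng.random(n) < 0.2                     # ground-truth mislabel indicators
# Synthetic stand-ins for the three terms (higher = more suspicious); the mean
# shifts are arbitrary choices that make d_mm the strongest single signal.
terms = {name: rng.normal(size=n) + shift * flags
         for name, shift in [("d_mm", 1.5), ("s_m", 0.5), ("s_n", 1.0)]}

for r in range(1, len(terms) + 1):              # ablate all nonempty subsets
    for combo in itertools.combinations(terms, r):
        score = sum(terms[k] for k in combo)
        print("+".join(combo), round(auroc(score, flags), 3))
```

The rank-based AUROC avoids any threshold choice, which is why it is the natural metric for comparing raw mislabel scores.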
\\n\\n\\n> Q1: In section 3, a few symbols are not explained, like r, D, k etc. It's better to split the section 3 into multiple sub sections to explain each term separately\\n\\nThank you for pointing this out. The symbol $\\\\mathcal{D}$ was defined at the start of Section 3, and we have clarified the notation in Section 3 by adding several explanations:\\n\\n- Define $B(\\\\mathbf{x}, r) := \\\\\\\\{x' \\\\in \\\\mathcal{X}: d_{\\\\mathcal{X}}(\\\\mathbf{x}, \\\\mathbf{x}') \\\\leq r \\\\\\\\}$, the ball of radius $r$ around $\\\\mathbf{x}$, and $B(\\\\mathbf{y}, r)$ similarly.\\n\\n- Let $r_k(\\\\mathbf{x}) := \\\\inf\\\\\\\\{ r: | B(\\\\mathbf{x}, r) \\\\cap \\\\mathcal{D}| \\\\geq k \\\\\\\\} $, the minimum radius required to encompass at least $k$ neighbors.\\n\\n\\nFurther, we have added Table C.1 in Appendix C, which provides a summary of all the notation used in this section. Please let us know if there are any remaining issues!\\n\\n\\n\\n> Q2: Is there any explanation that why using pure CLIP similarity performs better than LEMoN on Flickr30k?\\n\\nWe note that due to the large error bars, CLIP similarity does not actually outperform LEMoN with statistical significance on flickr30k (p=0.63 for AUROC and p=0.20 for F1 from paired t-tests). Since LEMoN is a generalization of CLIP similarity, we would expect $\\\\text{LEMoN}\\\\_{\\\\text{opt}}$ to outperform CLIP similarity when sufficient validation data is available. As the dataset size for flickr30k is small, this results in (1) larger variance in test-set metrics as seen in Table 3, and (2) hyperparameters of $\\\\text{LEMoN}\\\\_{\\\\text{opt}}$ overfitting to the smaller validation set. We explore the second phenomenon further in Appendix I.7., where we vary the validation set size.\"}", "{\"comment\": \"Thanks to the authors for their response and further experiments. They address my concerns. 
Overall, this paper is above the acceptance threshold, I will maintain my rating.\"}", "{\"comment\": \"> Appendix B argues that second-order captions can be similar to the original ones for misaligned images, but this claim seems counterintuitive. Misaligned images are more likely to produce captions that differ from the original.\\n\\n\\nFor context, we assume the reviewer is referring to our discussion of the $\\\\Upsilon\\\\_Y^{DIS}$ term (L903-L908). We note that the point of contention is not whether *\\\"Misaligned images are more likely to produce captions that differ from the original.\\\"*, but whether the distance between *second-order neighbors in text space and the original caption* is larger for mislabeled samples. Note that the authors of [3] \\\"compute neighbors in text space because the text domain provides the cleanest semantic representation of the image-text pair\\\" [3 Section 3.2]. As $\\\\Upsilon\\\\_Y^{DIS}$ does not utilize the image at all in its computation, it cannot possibly give any signal as to whether a particular (image, text) pair is mislabeled.\\n\\nWe have provided an intuitive argument of this in L903-L908, and have demonstrated it empirically across eight datasets in Table I.12, where $\\\\Upsilon\\\\_Y^{DIS}$ achieves chance performance at label error detection (average AUROC of 49.5). We believe this sufficiently addresses this point.\"}", "{\"comment\": \"Thank you for your response.\\n\\n> Theorem 1 claims that \\\"Contrastive Multimodal Embedding Models Detect Noisy Labels,\\\" but the conclusion does not support this claim. The distances in the final inequality can remain equal even with multiple strong constraints.\\n\\nWe have already addressed this point in a previous response. 
Specifically, we admit that this is a weak lower bound, and that:\\n\\n*We stand by our statement that \\\"when $L_Y$ is small, the score for the mislabeled sample cannot be much lower than the score for the positive pair with high probability.\\\"*\\n\\n*Additionally, we emphasize that Theorem 4.1 is a relatively minor contribution of our paper. It largely serves to justify prior works which already utilize the CLIP score for label error detection (Kang et al., 2023; Liang et al., 2023). We believe that Theorem 4.2 sufficiently justifies the main methodological novelties of our work (the scores $s_m$ and $s_n$).*\\n\\nAs such, the primary contributions of our paper (L87-L97) hold even without this theorem.\\n\\n\\n> Theorem 2 is trivial, as it simply states that random variables with different distributions can be distinguished to some extent.\\n\\nWe completely disagree that a theorem is trivial just because \\\"it simply states that random variables with different distributions can be distinguished to some extent\\\". For example, a large portion of the field of statistical hypothesis testing deals exactly with designing and analyzing methods to distinguish between random variables with different distributions. \\n\\nRegardless, we note that (a) we derive an analytical solution for exactly the extent to which the distributions can be distinguished, as a function of the parameters of the distributions, and (b) Theorem 2 provides theoretical justification for our proposed method (the scores $s_m$ and $s_n$). For these reasons, we do not believe this theorem is \\\"trivial\\\", and we believe it is unfair to dismiss it as so.\\n\\n\\n> Assumption 2 is not properly validated in Appendix A.3. The appendix presents dataset-level statistics, whereas Assumption 2 pertains to individual image-label pairs.\\n\\n\\nThis is simply incorrect. 
First, Assumption 2 pertains to the *distribution* of distances between images and their \\\"paraphrases\\\", not individual image-label pairs. Second, this distribution is precisely what we show in Appendix A.3 (Figure A.1), and we even conduct a test for normality, showing that this assumption does indeed hold.\\n\\n\\n> The paper lacks a thorough analysis of the distinction between cross-modal retrieval and label noise detection.\\n\\n> It uses CLIP, a retrieval model, for label noise detection, but similar approaches have been explored in prior work. For instance, label noise detection via retrieval has been discussed in Bahri et al. [1], and multimodal retrieval has been studied in [2,3,4]. This overlap makes the novelty unclear.\\n\\nWe respectfully disagree with the reviewer\\u2019s point that \\u201csimilar approaches have been explored in prior work\\\" for label noise detection. As we stated in the prior response (L151): *\\\"Although prior works have utilized the idea of multimodal neighbors in other settings, we believe we are the first to apply it to the setting of label error detection.\\\"* Our novelty in methodology does not lie in using CLIP, but rather proposing *a novel multimodal neighborhood-based score for the task of label noise detection*. As the reviewer themselves notes in their original review, we propose an \\u201coriginal scoring method for label noise detection\\u201d.\\n\\nFurthermore, we have *already compared* against the baselines the reviewer specified [1,3], and have shown that LEMoN outperforms them: deep kNN [1] in Table 2 and Table 3, and Discrepancy/Diversity [3] in Table 2 and Table I.12. The other references [2,4] that the reviewer has identified do not propose a score that can be used for label noise detection, but focus on improving multimodal matching and retrieval. Thus, they are not directly applicable to this setting. 
We will note this in our revised paper.\\n\\nLastly, the only connection between our work and cross-modal retrieval [2,3,4] is that these methods are multimodal [2,3,4], and may use neighborhood-based strategies [1,4]. Importantly, we *already cite and describe how our works differ from prior multimodal neighborhood-based works in L142-L152 and Appendix B* (which the reviewer already references). Additionally, works in cross-modal retrieval [2,4] are orthogonal and potentially complementary to our work: better multimodal retrieval could lead to more accurate construction of multimodal neighborhoods, and thus better scores for label noise detection under our framework. Thus, we strongly disagree with the reviewer\\u2019s point that our work \\u201clacks distinction between cross-modal retrieval and label noise detection\\u201d.\"}", "{\"comment\": \"> Q1: It took some time to understand the intuition behind Eq. 2. I think it is better to provide high-level ideas of what Eq. 2 is computing.\\n\\n\\nThank you for this suggestion. We have clarified the following plain text description of our method in Appendix C, and added a pointer to it from Section 3.\\n\\n\\n_For each image-caption pair in the dataset, we first compute how similar the image and caption are to each other using a pre-trained CLIP model ($d\\\\_{mm}$), which gives a basic measure of how well they match. To compute $s\\\\_m$, we compute the nearest neighbors of the caption among other captions in the dataset. For each neighbor, we look at how similar their corresponding image is to the original image. The intuition is that if a sample is correctly labeled, the image should be similar to images of other samples with similar captions. We weight each neighbor based on how close it is to our original sample and how well-matched the neighboring pairs themselves are. Finally, we repeat this for nearest neighbors in the image space to get $s\\\\_n$. 
LEMoN is then the weighted sum of these three scores._\"}", "{\"title\": \"response\", \"comment\": \"Thanks for your response.\\nFrom their new results, I understand that their proposed approach is somewhat effective in filtering noisy image-caption data. \\n\\nI would like to raise my rating to 6 since I still do not think this submission is a must-accept one.\"}", "{\"metareview\": \"The paper proposes a mechanism for coping with label/caption noise. The reviewers praise the intuitive method and extensive experiments. However, they raise concerns about effectiveness (e.g. improvement over using CLIP, results in Table 4), applicability (to real-world noise) and novelty (e.g. w.r.t. an ICLR 2024 paper). No reviewer score exceeds 6 and some 6-scoring reviewers voice numerous substantial concerns (even after the rebuttal).\", \"additional_comments_on_reviewer_discussion\": \"All reviewers participated in the discussion and responded to the rebuttal. Some Reviewer Ux12 concerns were discounted due to failing to respond to the authors' last few comments.\"}", "{\"comment\": \"> The paper does not fairly evaluate the proposed method against its closest baseline [3].\\n\\n> For a fair evaluation, the baseline must be implemented using combined DIS and DIV scores across both modalities, consistent with how LEMoN is evaluated.\\n\\nAs we have argued both intuitively (in Appendix B) and empirically (in Table I.12), $\\\\Upsilon\\\\_X^{DIS}$ is the only score from [3] that contributes any signal to label error detection. In fact, the average AUROC (across eight datasets) of the remaining three terms $\\\\Upsilon\\\\_Y^{DIS}$, $\\\\Upsilon\\\\_X^{DIV}$, $\\\\Upsilon\\\\_Y^{DIV}$ are, respectively: 49.5 (1.7), 51.8 (5.0), and 49.3 (2.1), where parentheses show standard deviation across the eight datasets. 
Clearly, these remaining three terms are no better than random chance at detecting label errors.\\n\\nRegardless, we have implemented the ensembling of the four scores as the reviewer suggested. For the $\\\\text{Comb-Val}$ strategy, as there are four terms, we sweep over weights in $\\\\\\\\{1, 2, 3, 4, 5\\\\\\\\}^4$, following [3], selecting the best combination using a labeled validation set, identically to LEMoN. For the $\\\\text{Comb-Stat}$ strategy, we use the mean and standard deviations, as in Equation (8) in [3]. We find that none of the combined scores significantly outperform $\\\\Upsilon\\\\_X^{DIS}$. This is because in both combination strategies, a non-zero weight is placed on the other terms, which essentially adds noise to the final score without contributing any signal.\\n\\n\\n| | **AUROC** | | | | | | | F1 | | | | | | |\\n| :----------- | -------------------: | -------------------: | -------------------: | -------------------: | ----------: | ----------: | --------------: | -------------------: | :------------------- | :------------------- | :------------------- | ----------: | ----------: | :-------------- |\\n| | $\\\\Upsilon_{X}^{DIS}$ | $\\\\Upsilon_{Y}^{DIS}$ | $\\\\Upsilon_{X}^{DIV}$ | $\\\\Upsilon_{Y}^{DIS}$ | $\\\\text{Comb-Val}$ | $\\\\text{Comb-Stat}$ | $\\\\text{LEMoN}\\\\_{\\\\text{opt}}$ | $\\\\Upsilon_{X}^{DIS}$ | $\\\\Upsilon_{Y}^{DIS}$ | $\\\\Upsilon_{X}^{DIV}$ | $\\\\Upsilon_{Y}^{DIS}$ | $\\\\text{Comb-Val}$ | $\\\\text{Comb-Stat}$ | $\\\\text{LEMoN}\\\\_{\\\\text{opt}}$ | \\n| cifar10 | 77\\\\.1 (1.9) | 48\\\\.2 (1.2) | 50\\\\.3 (1.9) | 45\\\\.0 (1.9) | 77\\\\.1 (1.9) | 77\\\\.1 (1.9) | **98\\\\.1** (0.0) | 68\\\\.2 (1.9) | 29\\\\.2 (0.4) | 29\\\\.2 (0.4) | 29\\\\.2 (0.4) | 68\\\\.2 (1.9) | 68\\\\.2 (1.9) | **93\\\\.1** (0.2) |\\n| cifar100 | 66\\\\.0 (1.5) | 49\\\\.4 (1.1) | 49\\\\.9 (1.4) | 49\\\\.7 (1.9) | 65\\\\.9 (1.5) | 65\\\\.9 (1.5) | **90\\\\.8** (0.0) | 51\\\\.9 (1.8) | 29\\\\.4 (1.4) | 32\\\\.5 (5.5) | 29\\\\.4 (0.4) 
| 51\\\\.9 (1.8) | 51\\\\.9 (1.8) | **81\\\\.3** (0.2) |\\n| miniImageNet | 79\\\\.4 (0.3) | 47\\\\.4 (0.5) | 64\\\\.6 (0.2) | 48\\\\.0 (0.5) | 75\\\\.3 (0.3) | 76\\\\.6 (0.3) | **90\\\\.2** (0.2) | 69\\\\.8 (0.4) | 28\\\\.0 (2.3) | 55\\\\.8 (2.3) | 27\\\\.0 (0.9) | 63\\\\.7 (0.4) | 65\\\\.5 (0.4) | **82\\\\.3** (0.1) |\\n| stanfordCars | 65\\\\.7 (0.7) | 50\\\\.8 (1.1) | 51\\\\.9 (0.9) | 50\\\\.1 (0.5) | 61\\\\.2 (0.8) | 63\\\\.4 (0.9) | **73\\\\.1** (0.5) | 59\\\\.9 (0.4) | 20\\\\.6 (1.3) | 25\\\\.3 (5.6) | 20\\\\.6 (1.4) | 48\\\\.9 (0.4) | 53\\\\.1 (0.4) | **67\\\\.3** (1.0) |\\n| flickr30k | 73\\\\.0 (0.6) | 53\\\\.3 (1.4) | 49\\\\.9 (2.9) | 52\\\\.9 (0.2) | 60\\\\.6 (0.6) | 64\\\\.4 (0.6) | **94\\\\.5** (0.2) | 64\\\\.7 (1.7) | 26\\\\.2 (0.8) | 27\\\\.4 (1.7) | 26\\\\.1 (1.0) | 48\\\\.7 (1.7) | 54\\\\.9 (1.7) | **87\\\\.7** (0.9) |\\n| mimiccxr | 60\\\\.0 (0.8) | 49\\\\.6 (0.4) | 50\\\\.0 (1.3) | 49\\\\.1 (1.3) | 55\\\\.7 (0.8) | 57\\\\.9 (0.8) | **70\\\\.4** (2.3) | 32\\\\.8 (2.8) | 28\\\\.5 (0.0) | 28\\\\.5 (0.0) | 28\\\\.5 (0.0) | 30\\\\.5 (2.8) | 31\\\\.4 (2.8) | **57\\\\.0** (1.6) |\\n| mmimdb | 57\\\\.4 (0.4) | 49\\\\.8 (0.4) | 48\\\\.6 (0.4) | 50\\\\.0 (0.5) | 52\\\\.6 (0.4) | 54\\\\.6 (0.4) | **86\\\\.0** (0.1) | 40\\\\.2 (1.7) | 28\\\\.6 (0.1) | 29\\\\.1 (0.5) | 28\\\\.9 (0.6) | 29\\\\.6 (1.7) | 30\\\\.4 (1.7) | **76\\\\.3** (0.1) |\\n| mscoco | 72\\\\.7 (0.3) | 48\\\\.5 (0.8) | 52\\\\.9 (0.8) | 48\\\\.7 (0.3) | 59\\\\.8 (0.3) | 65\\\\.8 (0.3) | **95\\\\.6** (0.2) | 67\\\\.3 (0.9) | 29\\\\.7 (0.1) | 29\\\\.0 (0.2) | 28\\\\.9 (0.4) | 37\\\\.4 (0.9) | 58\\\\.1 (0.9) | **89\\\\.3** (0.2) |\\n\\n\\nWe will add this result to Table I.12 in the revision.\"}", "{\"comment\": \"Thank you for the detailed response and clarification. 
Most of my questions are now resolved, and I\\u2019ve updated my score accordingly.\"}", "{\"comment\": \"Thank you for the detailed review and constructive suggestions!\\n\\n> W1: The theoretical justifications provided in the paper contain several significant flaws, limiting their effectiveness as a core claim.\\n\\n\\nWe appreciate your valuable feedback on our theoretical analysis. We acknowledge that there were flaws in our initial proof. However, the underlying intuition \\u2013 that contrastive multimodal embedding models can detect noisy labels due to increases in embedding distances caused by label noise \\u2013 remains valid. We have now revised our theorem and proof (Theorem 4.1 and Appendix A) to correct these errors.\\n\\n\\n**Theorem 4.1** [Contrastive Multimodal Embedding Models Detect Noisy Labels]\\nLet $\\\\mathcal{Y} = \\\\mathbb{R}$ and consider a training dataset $\\\\mathcal{D}$. Suppose that $\\\\hat{h}^{\\\\mathcal{X}}\\\\_{\\\\theta}: \\\\mathcal{X} \\\\rightarrow \\\\mathbb{R}^d$ is an embedding function, and $\\\\hat{h}^{\\\\mathcal{Y}}\\\\_{\\\\theta}: \\\\mathcal{Y} \\\\rightarrow \\\\mathbb{R}^d$ is a Lipschitz continuous embedding function with constants $L\\\\_{\\\\mathcal{Y}} > 0$, meaning that for all $y, y' \\\\in \\\\mathcal{Y}$,\\n$$ \\\\left\\\\|\\\\left\\\\| \\\\hat{h}^{\\\\mathcal{Y}}\\\\_{\\\\theta}(y) - \\\\hat{h}^{\\\\mathcal{Y}}\\\\_{\\\\theta}(y') \\\\right\\\\|\\\\right\\\\|\\\\_2 \\\\leq L\\\\_{\\\\mathcal{Y}} | y - y' |. $$\\nFor an input $x \\\\in \\\\mathcal{X}$ and its corresponding positive label $y \\\\in \\\\mathcal{Y}$, let $\\\\eta$ be a random variable drawn from a normal distribution: $ \\\\eta \\\\sim \\\\mathcal{N}(0, \\\\sigma^2). $\\nDefine a noisy label $y' = y + \\\\eta$. Let $d_{mm}(u, v) = ||u - v||\\\\_2$, which is proportional to $\\\\sqrt{d\\\\_{cos}(u, v)}$ when $||u||\\\\_2 = ||v|\\\\|_2 = 1$. 
Then, with probability at least $\\\\delta(\\\\epsilon) = 1 - 2 \\\\Phi\\\\left( -\\\\dfrac{\\\\epsilon}{\\\\sigma} \\\\right)$, the following inequality holds:\\n$$ d\\\\_{mm}\\\\left( \\\\hat{h}^{\\\\mathcal{X}}\\\\_{\\\\theta}(x), \\\\hat{h}^{\\\\mathcal{Y}}\\\\_{\\\\theta}(y') \\\\right) \\\\geq d\\\\_{mm}\\\\left( \\\\hat{h}^{\\\\mathcal{X}}\\\\_{\\\\theta}(x), \\\\hat{h}^{\\\\mathcal{Y}}\\\\_{\\\\theta}(y) \\\\right) - L\\\\_{\\\\mathcal{Y}} \\\\epsilon , $$ \\nwhere $\\\\Phi$ is the cumulative distribution function of the standard normal distribution, and $\\\\epsilon > 0$ is a threshold.\\n\\nThus, when $L_{\\\\mathcal{Y}} $ is small, the score for the mislabeled sample cannot be much lower than the score for the positive pair with high probability.\\n\\nThe revised proof can be found in Appendix A.1.\\n\\n\\n> In the proof of Theorem 4.2 (line 954), the variance of 1/k E should be expressed as 1/k\\u00b2 VarE, rather than 1/k VarE.\\n\\n\\nWe believe our original proof is actually correct -- there is indeed a $1/k^2$ term, but one of these factors gets canceled out by the summation over k iid random variables. Concretely:\\n\\n$\\\\\\\\mathbb{E}[Var(S_m(X, Y) | \\\\\\\\zeta_Y)] $\\n\\n$= \\\\\\\\mathbb{E}[\\\\\\\\frac{1}{k^2}Var\\\\\\\\left(d(X, \\\\\\\\bar{X}_1) + d(X, \\\\\\\\bar{X}\\\\_\\\\{k \\\\\\\\zeta_Y (1-p)\\\\})+ d(X, X'_1) + ... + d(X, X'\\\\_{k - k\\\\\\\\zeta_Y(1 - p)})\\\\\\\\mid \\\\\\\\zeta_Y \\\\\\\\right)]$\\n\\n$ = \\\\\\\\mathbb{E}[\\\\\\\\frac{1}{k}(\\\\\\\\zeta_Y (1-p) \\\\\\\\sigma_2^2 + (1 - \\\\\\\\zeta_Y (1-p)) \\\\\\\\sigma_1^2 )] $\\n\\n\\nWe have added this intermediate step in the proof in Appendix A.2. Please let us know if this addresses your concerns.\"}", "{\"comment\": \"Thank you for addressing the concerns raised earlier. However, there are still several areas that require further clarification and elaboration. Please consider the following questions and comments:\\n\\n### W1. **Theory**\\n1. 
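As a quick numeric sanity check (ours, not from the rebuttal): the probability $\delta(\epsilon) = 1 - 2\Phi(-\epsilon/\sigma)$ in the revised Theorem 4.1 is exactly $P(|\eta| \leq \epsilon)$ for $\eta \sim \mathcal{N}(0, \sigma^2)$, which a short simulation confirms.

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def delta(eps, sigma):
    """The bound from the revised Theorem 4.1: 1 - 2*Phi(-eps/sigma)."""
    return 1.0 - 2.0 * phi(-eps / sigma)

# delta(eps) should equal P(|eta| <= eps), the probability that the
# Lipschitz slack L_Y * |eta| stays below L_Y * eps.
random.seed(0)
sigma, eps, n = 2.0, 1.5, 200_000
hits = sum(abs(random.gauss(0.0, sigma)) <= eps for _ in range(n))
assert abs(hits / n - delta(eps, sigma)) < 0.01
```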
**Line 236 (Theorem 1):** The statement implies that the distance from the incorrect label $y'$ can be equal to the distance from the correct label $y$, even in a carefully designed case. It follows that Lemon may NOT be able to distinguish between correct and incorrect labels.\\n \\n2. **Line 839 (Proof of Theorem 2):** In the proof, $p$ is described as a probability, meaning the exact number of relevant neighbors is unknown. Consequently, $S_m$ cannot be directly decomposed into two sums with a fixed number of terms in each.\\n\\n3. **Line 844 (Proof of Theorem 2):** The variable over which the expectation is computed appears to be missing. Additionally, why are nested expectations of the form $\\\\mathbb{E}[\\\\mathbb{E}[\\\\dots]]$ necessary in this context?\\n\\n### W2. **Novelty**\\n- **Line 151:** The claim that \\\"we believe we are the first to apply it to the setting of label error detection\\\" must be revised in light of VDC.\\n\\n### W3. **Hyperparameters**\\n- The appendix currently includes only the ranges of hyperparameters. For reproducibility and transparency, could you provide the exact hyperparameter values used during the evaluation process?\\n\\nWe appreciate your effort and look forward to seeing the updated evaluation results and responses to these questions.\"}", "{\"title\": \"Have we addressed your concerns?\", \"comment\": \"Dear Reviewers,\\n\\nThank you again for your thorough reviews and constructive feedback. With the limited time remaining in the rebuttal phase, we were wondering if our responses have adequately addressed your concerns. If so, we would appreciate it if you could update your review and your score accordingly. If there are any remaining questions or comments, we would be happy to discuss.\\n\\nThank you!\"}" ] }
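For readers following the thread above, the three scores it discusses — the direct CLIP distance $d_{mm}$ and the neighborhood scores $s_m$ and $s_n$ — can be sketched roughly as follows. This is a simplified reconstruction based only on the plain-text description in the Q1 response; the weighting details are illustrative and may differ from LEMoN's exact formulation.

```python
import numpy as np

def lemon_scores(img_emb, txt_emb, k=10, tau=1.0):
    """img_emb, txt_emb: (N, d) L2-normalized image / caption
    embeddings from a pre-trained CLIP-style model."""
    # d_mm: direct image-caption distance (1 - cosine similarity).
    d_mm = 1.0 - np.sum(img_emb * txt_emb, axis=1)

    def neighborhood_score(query, other):
        # k nearest neighbors in `query` space; distances measured in
        # the `other` modality. Each neighbor is weighted by closeness
        # in `query` space and by how well-matched its own pair is.
        sims = query @ query.T
        np.fill_diagonal(sims, -np.inf)  # exclude self-matches
        scores = np.empty(len(query))
        for i in range(len(query)):
            nbrs = np.argsort(-sims[i])[:k]
            w = np.exp(sims[i, nbrs] / tau) * np.exp(-d_mm[nbrs])
            d = 1.0 - other[i] @ other[nbrs].T
            scores[i] = np.sum(w * d) / np.sum(w)
        return scores

    s_m = neighborhood_score(txt_emb, img_emb)  # neighbors among captions
    s_n = neighborhood_score(img_emb, txt_emb)  # neighbors among images
    return d_mm, s_m, s_n
```

The final score is then a weighted sum of the three components, with mixing weights chosen on a validation set.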
DWLlTNhig1
Sparse Rewards Can Self-Train Dialogue Agents
[ "Barrett Martin Lattimer", "Varun Prashant Gangal", "Ryan McDonald", "Yi Yang" ]
Recent advancements in state-of-the-art (SOTA) Large Language Model (LLM) agents, especially in multi-turn dialogue tasks, have been primarily driven by supervised fine-tuning and high-quality human feedback. However, as base LLM models continue to improve, acquiring meaningful human feedback has become increasingly challenging and costly. In certain domains, base LLM agents may eventually exceed human capabilities, making traditional feedback-driven methods impractical. In this paper, we introduce a novel self-improvement paradigm that empowers LLM agents to autonomously enhance their performance without external human feedback. Our method, Juxtaposed Outcomes for Simulation Harvesting (JOSH), is a self-alignment algorithm that leverages a sparse reward simulation environment to extract ideal behaviors and further train the LLM on its own outputs. We present ToolWOZ, a sparse reward tool-calling simulation environment derived from MultiWOZ. We demonstrate that models trained with JOSH, both small and frontier, significantly improve tool-based interactions while preserving general model capabilities across diverse benchmarks. Our code and data are publicly available on GitHub.
[ "Self-training", "LLM", "Simulation", "Benchmark", "Tool-calling", "Dataset", "Task Oriented Dialogue", "Dialogue", "User Simulation", "Beam Search", "Algorithm", "Preference Tuning", "Supervised Finetuning", "JOSH", "ToolWOZ" ]
Reject
https://openreview.net/pdf?id=DWLlTNhig1
https://openreview.net/forum?id=DWLlTNhig1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vyBlZa245c", "tnbLyMnNg4", "n2cRFXvYSF", "myc1ENr8Ww", "mA49UI4LFH", "a70CFy8uU7", "WZIeCbjETY", "V7xIGi15Im", "TQqnFXP99E", "Ogeqhv2KYj", "IcSJ54bU2e", "EYlqUPY7Kp", "AEUpv6MeHl", "8AB4ISvpWD", "7hiYIK6Rc9", "6Yn57ZdVFy", "4tjHaHhSrm", "3ex7mrmeLr", "3FFnd8IHsd", "0NKLjQXXGC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1732636336544, 1733193209692, 1732745220844, 1730779029322, 1730258823154, 1732301314207, 1732745528203, 1732504698426, 1730679451640, 1732745234085, 1732593076051, 1732736383004, 1732303396998, 1731118859140, 1732638594661, 1737524107646, 1732395423276, 1734473629758, 1732301177070, 1732304557515 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_qkhY" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_qkhY" ], [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_e3Ly" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_e3Ly" ], [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_YKJK" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_YKJK" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_dqjx" ], [ "ICLR.cc/2025/Conference/Submission11167/Reviewer_dqjx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Area_Chair_8HRo" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ], [ "ICLR.cc/2025/Conference/Submission11167/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your response. However, after reading the response, I feel that the work will need further modification before publication. Even earlier methods did not 'force an ordering on goals' on the MultiWOZ dataset for strategy planning or response generation. Such a process would largely affect the generality of the method to other datasets or settings. Also, it feels that this work needs further improvements via comparing with better baselines.\"}", "{\"title\": \"Final Paper Draft Comments\", \"comment\": \"Thank you to all of the reviewers for your insightful feedback and ongoing support. We've revised our paper to incorporate your suggestions and enhance its quality. Let us know if you have any further comments ahead of the comment deadline today.\"}", "{\"title\": \"Update on Paper\", \"comment\": \"We would like to thank you for your continued review of this paper! We have updated the paper in a number of places including addressing the comment on the cost of training gpt-4o in Lines 307-308. We have also expanded the definition of JOSH in Section 2, trimmed down Section 3 ToolWOZ and provided even further analysis in Section 5 (see specific lines addressing different reviewer comments in above comments).\"}", "{\"summary\": \"This paper introduces a self-alignment approach called Juxtaposed Outcomes for Simulation Harvesting (JOSH), designed to improve dialogue agents in multi-turn, tool-calling tasks by leveraging sparse rewards. The authors propose ToolWOZ, a new simulation environment derived from MultiWOZ, for training agents to make correct API calls based on sparse reward feedback. 
The JOSH method aims to allow models, including smaller LLMs, to improve autonomously without relying on extensive human feedback, which is increasingly challenging to obtain as models advance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The JOSH approach is a new solution for self-training dialogue agents, effectively utilizing sparse rewards to build a self-improvement feedback loop without external human evaluation.\\n2. By adapting MultiWOZ into ToolWOZ with a sparse reward structure, the paper provides a valuable benchmark tailored for tool-using task-oriented dialogue systems, which can benefit further research.\\n3. Results indicate that JOSH significantly improves models across benchmarks, demonstrating its potential as a scalable solution for optimizing agent interactions in multi-turn dialogue settings.\", \"weaknesses\": \"1. The concept of the \\\"goal set\\\" in sparse rewards is insufficiently defined, particularly how it influences the agent\\u2019s behavior and the implications of duplicating actions in a path.\\n2. The choice to branch at the turn level rather than the agent action level lacks a comprehensive rationale, leaving questions about its impact on computational efficiency and performance outcomes. In the MultiWOZ dataset, the agent predicts a dialogue act in each turn. The delexicalized response is then generated. The slot values are then filled in the delexicalized response to yield the final response. This process is clearly different from the one illustrated in Figure 2. \\n3. While considerable effort is spent on detailing ToolWOZ, the sparse reward process and its precise mechanics within JOSH are not thoroughly elaborated, reducing clarity around its contribution to the results.\\n4. The baseline comparisons are primarily limited to supervised fine-tuning (SFT) and variants of the sparse reward approach itself. 
To better contextualize the efficacy of JOSH, comparisons with other RL-based methods, particularly those designed for dialogue or tool-calling tasks, would be beneficial.\", \"questions\": \"As in weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces JOSH, a self-training framework designed to enable agentic models to achieve self-alignment. The core component of JOSH is the data rollout pipeline, where an agent first interacts with a GPT-based simulator to generate multi-turn conversations that involve tool-calling responses. A critical aspect of this process is the use of beam search to create a tree-structured trajectory. From this trajectory tree, they extract SFT and preference data for subsequent fine-tuning. To evaluate JOSH, they have also curated a multi-turn tool-calling benchmark called ToolWOZ.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed method is evaluated on different model backbones, even gpt-4o\\n\\nThe curated benchmark is useful to the community.\\n\\nThe presentation is mostly clear and easy to follow.\", \"weaknesses\": \"The primary weaknesses of this paper lie in its novelty and the experimental validation.\", \"novelty\": \"The proposed framework is not particularly novel, as it builds upon concepts that have been extensively studied within the community. Techniques such as data rollouts, beam search, and supervised/preference fine-tuning have all been well-explored in prior works.\", \"experiments\": \"This paper evaluates JOSH using only a single benchmark and does not provide comparisons with other robust baselines. 
There are numerous multi-turn tool-calling and agentic benchmarks available, such as WebLinx[1] and MINT[2]; conducting experiments on multiple benchmarks would significantly strengthen the validity of the results. Furthermore, there are several highly similar methods in this domain, such as [3,4], which should be considered as baselines to effectively demonstrate the true performance of JOSH.\\n\\n\\n[1] WebLINX: Real-World Website Navigation with Multi-Turn Dialogue\\n[2] MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback\\n[3] Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents\\n[4] V-STaR: Training Verifiers for Self-Taught Reasoners\", \"questions\": \"How much does it cost to finetune 4o?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Addressing the weaknesses pointed out (R3)\", \"comment\": \"We would like to thank the reviewer for their valuable feedback, we have addressed the questions/weaknesses stated below\\n\\n1. Thank you for the helpful comments! We have spent considerable time writing out comments that further define and justify the choices made in the design of JOSH. We plan on shortening section 3 ToolWOZ and incorporating these additional definitions into the final version of the paper. A list of topics that we have further fleshed out include:\\n\\na. Reviewer 1, point 3 further defining JOSH\\u2019s average reward function and comparing this to other types of reward functions\\n\\nb. Reviewer 2, point 1 further defining what a goal set is, how we use the goal set to enforce chronological execution, and the positive implications and insights of the way we track how goals have been executed by the agent.\\n\\nc. 
Reviewer 2, point 2 explores the different ways JOSH could perform branching and why the design decisions that we made are optimal for the problem that we are trying to solve.\\n\\nWe hope that these additional insights further illuminate why certain methodological decisions were made and how they compare to alternative approaches.\\n\\n2. Thank you for this point, as you can see in the reply to Reviewer 2, point 4, we have seen good results using other RL approaches such as KTO on smaller open source models. These results can be seen in Table 2 with the meta-llama-8B-JOSH-KTO model and are further explored in the Analysis sections 5.2 and 5.3. However, for closed-source models (GPT) we are limited to only training using supervised fine tuning through their website. Thus to experiment with JOSH on larger or frontier models, we needed to use only SFT baselines. We supplemented this by including some experiments on different branching factors and how that affected SFT training (also seen in Table 2); however, we agree that larger companies should allow more exploratory training procedures and it is likely that using KTO, models such as GPT would see better results.\"}", "{\"title\": \"Clarification on Goal Ordering and Paper Update\", \"comment\": \"We would like to thank the reviewer for thoughtfully considering our comment!\\n\\n1. We would like to provide some further clarification on the subject of goal ordering in ToolWOZ and JOSH in general. JOSH does not require any ordering for the goals in its goal set. JOSH is built to be flexible and any constraints that one wants to enforce (or not) could be implemented. In reference to ToolWOZ, goal api calls are not strictly ordered either. Rather, for some \\\"booking\\\" api calls, some of the information needed for the booking must be found by using a \\\"search\\\" api call to gather information. 
This is similar to many real world scenarios where the agent has imperfect information and must use tools to gather more information. However, the reward that an agent gets from JOSH when executing a ToolWOZ tool call that is still in the goal set still has no ordering and is counted regardless of what other tool calls have been executed thus far. We have not fundamentally changed the task in this way from any prior works. \\nWe apologize for any confusion our above comment caused in terms of this matter. We have updated the paper to also address this point in Lines 267-269 and Lines 156-157.\\n\\n4. Thank you for your insightful point on other RL baselines such as PPO vs the use of SFT in our paper. We use SFT as a baseline in this paper rather than other online reinforcement learning based methods (PPO or REINFORCE) for three reasons. For dialogue and dialogue understanding, it has been documented that some form of SFT (via offline RL example selection) does as well or better than PPO (https://arxiv.org/pdf/2307.12425). Additionally, we explore preference tuning methods over PPO due to our own computational constraints as well as our target user's. Finally, SFT is the only form of training available for closed source frontier models, and so for larger models it is not possible to experiment with other training paradigms. Thus we leave online reinforcement learning to future work. \\n\\nAdditionally, we have updated the paper to address your other comments:\\n2. Figure 2 has been updated to better reflect the process of a MultiWOZ style agent. We have also updated Lines 162-171 to reflect the math and design decisions made with respect to branching on actions vs turns.\\n3. 
Section 3 has been significantly compressed to add more room for further defining JOSH in section 2 and additional analysis in section 5.\"}", "{\"comment\": \"Thank you for the clarification, I will raise my score.\"}", "{\"summary\": \"They propose both a benchmark and a method for training multi-turn tool use dialogue agents. Their method uses beam search to find successful trajectories and uses failed paths in the beam search as negative examples. They finetune an LM on these successful/unsuccessful pairs with KTO. The benchmark is called ToolWOZ, which re-purposes the popular dialogue systems benchmark MultiWOZ to a more native LM tool use format. They evaluate their method on both ToolWOZ and another standard benchmark tau-bench. They find that their method substantially improves LLaMA 3-8B's success rate on both benchmarks. They also conduct evaluations of the robustness of each benchmark by analyzing the standard deviation of the results, finding ToolWOZ with the goal simulator to give the lowest standard deviation. Finally, they conduct some error analysis of their approach on ToolWOZ.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"They propose both a novel method and a benchmark, but they also make sure to evaluate on an existing benchmark to enable more robust comparisons.\", \"They conduct good analysis to demonstrate the robustness and viability of their benchmark.\", \"Their method demonstrates good performance gains on the tasks they study.\", \"The paper is overall well written and easy to follow.\"], \"weaknesses\": [\"Their method feels a little ad-hoc. 
Yes, it makes sense to build off-policy preference pairs for training these models, but there are numerous ways this could be achieved and it's unclear why the specific methodological decisions made in this paper are the correct ones.\", \"They compare to an SFT baseline, but not other RL-inspired approaches for finetuning agents, so it is unclear how well their approach compares against stronger baselines.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updates to the paper and addressing baseline concerns\", \"comment\": \"Thank you for continuing to provide insightful feedback! We have added a significant amount to the paper to address all of the concerns and further experiments that have been raised in this review process. Specifically relating to your comments about expanding our definition of JOSH, we've added a significant amount around the design choices that we've made and what some alternative approaches are:\\n1. a. We've updated Lines 133-142 to address how we picked our reward function and what other options there are.\\n1. b. We added to Lines 267-269 and Lines 156-157 to further define how goal sets work in JOSH and ToolWOZ.\\n1. c. We added Lines 162-171 to explore exactly why we picked branching by turn and the associated math to compare with branching by action.\\n\\nThank you for your insightful point on other RL baselines such as PPO vs the use of SFT in our paper. We use SFT as a baseline in this paper rather than other online reinforcement learning based methods (PPO or REINFORCE) for three reasons. For dialogue and dialogue understanding, it has been documented that some form of SFT (via offline RL example selection) does as well or better than PPO (https://arxiv.org/pdf/2307.12425). 
Additionally, we explore preference tuning methods over PPO due to our own computational constraints as well as our target user's. Finally, SFT is the only form of training available for closed source frontier models, and so for larger models it is not possible to experiment with other training paradigms. Thus we leave online reinforcement learning to future work. \\n\\nThank you again for your continued support, we hope this is helpful explanation!\"}", "{\"comment\": \"Thank you for responding to my concerns! I think if you can clarify the points you mentioned in the paper, that would help it a lot. As for baselines, I was wondering how your approach would compare against, say REINFORCE or PPO. Or just generally some simple baseline that is stronger than SFT, but more straightforward and obvious than the method presented in the paper.\"}", "{\"title\": \"Expanding the comments and experiments into the paper\", \"comment\": \"You are absolutely correct, and so we have just revised the PDF version to address all of the comments from reviewers! In reference to your comments, we have updated the paper in the following places:\\n1. Lines 410-418 to provide analysis relating to the goal based user simulator compared to ground truth humans\\n2. Lines 432-444 to provide additional analysis of API errors\\n3. Lines 133-142 defining why we chose our average reward functions and showing other options\\n4. Lines 486-494 adding relevant advancements in language agents for multi-turn dialogues into the related works.\\n\\nWe would like to thank the reviewer for their ongoing efforts!\"}", "{\"title\": \"Addressing questions and weaknesses 1 and 2 stated (R1)\", \"comment\": \"q1. JOSH provides a baseline for sparse reward based self-improvement in multi-turn dialogue agents. 
Particularly when considering the improvement in tool-calling capabilities, we provide experiments using multiple types of training and across various model sizes in order to thoroughly explore this baseline approach so that others may compare to it in future works. Other self-alignment approaches do exist for dialogue agents; however, most notable approaches are not built for a multi-turn setting nor for dialogue. We also target improvement of tool use in this multi-turn dialogue setting, which further distinguishes us from any other baseline approaches.\\n\\nq2. We provide experiments with reflection using ReACT based prompts in both ToolWOZ and TauBench results, across all model sizes. We do find that reflection techniques enhance JOSH's effectiveness, as all ReACT based models outperform themselves when trained on JOSH data.\\n\\n1. We perform an additional analysis comparing the goal based user simulator to the ground truth human conversations in MultiWOZ. We evaluate across three dimensions (naturalness, conciseness, and redundancy) using prompts from the paper LLM-RUBRIC: A Multidimensional, Calibrated Approach to Automated Evaluation of Natural Language Texts. The prompts evaluate the user messages in an entire conversation, assigning a score 1-4 where 4 is the best. We take the average over all 450 conversations in the ToolWOZ test set. We use Claude Sonnet 3.5 as the evaluator. The results are as follows:\\n\\n| dimension | human | bot |\\n| --- | --- | --- |\\n| naturalness | 4.00 | 4.00 |\\n| conciseness | 3.98 | 3.94 |\\n| redundancy | 3.59 | 3.42 |\\n\\nAs we see, both humans and the user simulator are scored as very natural. The conciseness of the user simulator is slightly worse than the human score, which we attribute to the tendency for the user simulator to be verbose in its replies. Finally, the redundancy score for the user simulator is worse than that of humans, but still achieves a score of 3.42. 
Our analysis shows that this drop is due to agent errors where information is re-requested: the user simulator is willing to reiterate information, whereas humans are less likely to repeat critical information.\\n\\n2. We have performed a deeper analysis of the API calls and where errors arise with different models, and we plan to include this in the final version of our paper. The table below shows the number of failed API calls, split by API. Notably, the search_train and search_attraction APIs still have a large gap when comparing sft and kto trained models, where sft is far more likely to fail.\\n\\n| Method | book_train | search_hotel | search_train | book_hotel | search_attraction | book_restaurant | search_restaurant |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| base | 83 | 77 | 84 | 59 | 96 | 49 | 47 |\\n| sft | 56 | 44 | 81 | 36 | 77 | 32 | 36 |\\n| kto | 45 | 49 | 50 | 41 | 40 | 27 | 31 |\\n\\nTo further investigate this phenomenon, we measured the frequency of required argument groups for search_train and search_attraction that sft failed to call. We observe that while search_train failed calls with the \\u201cleaveAt\\u201d argument steadily decrease from base to sft, calls with the \\u201carriveBy\\u201d argument actually slightly increase in failures from base to sft. This pattern does not hold for kto training, however, where the failures in both groups decrease significantly from base. We find that this phenomenon is due to sft training commonly leaving out arguments when writing API calls; in the case of \\u201carriveBy\\u201d API calls, the \\u201cdeparture\\u201d parameter is commonly left out. 
KTO, however, avoids this pitfall by training on API calls with too few parameters as negative examples, and is thus far more likely to include all parameters.\\n\\nsearch_train failure\\n\\n(base) [(['day', 'departure', 'destination', 'leaveAt'], 37), (['arriveBy', 'day', 'departure', 'destination'], 47)]\\n\\n(sft) [(['day', 'departure', 'destination', 'leaveAt'], 31), (['arriveBy', 'day', 'departure', 'destination'], 50)]\\n\\n(kto) [(['day', 'departure', 'destination', 'leaveAt'], 22), (['arriveBy', 'day', 'departure', 'destination'], 28)]\\n\\nWe observe a similar phenomenon in the search_attraction API, where failures involving the \\u201carea\\u201d argument almost never drop for sft. This is due to two reasons. First, the sft model often neglected to use the \\u201ctype\\u201d argument alongside the \\u201carea\\u201d argument, choosing to often only fill in the \\u201carea\\u201d. Also, the \\u201carea\\u201d argument was commonly filled in as \\u201call\\u201d in many sft conversations even though this is not a valid value for the area parameter. The KTO-trained model manages to avoid many of these pitfalls as well, since these invalid API calls are commonly found in the negative examples.\\n\\nsearch_attraction failure\\n\\n(base) [(['area'], 9), (['area', 'type'], 32), (['name'], 29), (['type'], 26)]\\n\\n(sft) [(['area'], 8), (['area', 'type'], 30), (['name'], 21), (['type'], 18)]\\n\\n(kto) [(['area'], 2), (['area', 'type'], 15), (['name'], 12), (['type'], 11)]\\n\\nWe plan to expand the error analysis in section 5.2 with these findings.\"}", "{\"summary\": \"The paper introduces JOSH (Juxtaposed Outcomes for Simulation Harvesting), a self-alignment framework for large language model (LLM) agents to enhance multi-turn dialogue capabilities without human feedback, addressing the impracticality of traditional feedback-driven methods. 
JOSH leverages sparse reward signals within simulated dialogues to allow the model to self-improve, specifically targeting multi-turn tool-calling skills in task-oriented dialogues. The authors also introduce ToolWOZ, a dataset and benchmark based on MultiWOZ 2.0, designed to evaluate tool-usage in dialogue settings. Experimental results demonstrate that a fine-tuned LLaMA-3B model exhibits a 74% increase in Success Rate, and gpt-4o also shows improvements following JOSH self-alignment. Additional experiments on other public benchmarks indicate that JOSH does not degrade the model\\u2019s general performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper presents a novel approach to self-alignment in dialogue agents using sparse rewards, reducing reliance on costly human feedback.\", \"ToolWOZ fills a gap in existing evaluation frameworks by focusing on tool usage in multi-turn dialogue settings, adapting MultiWOZ to emphasize real-world API interactions.\", \"JOSH demonstrates significant improvements in success rates and tool-call accuracy, particularly for smaller models, validating its effectiveness.\"], \"weaknesses\": [\"The paper does not assess how well the user simulator aligns with real human interactions.\", \"The evaluation of API calls lacks depth, as it does not separate analyses of API names and parameters.\", \"The design of the average reward function is not thoroughly examined, missing a discussion of alternative reward structures and their potential effects on agent behavior.\", \"The related work section does not cover relevant advancements in language agents for multi-turn dialogues.\"], \"questions\": [\"How does JOSH compare to other sparse reward-based alignment or self-improvement approaches?\", \"Could strategies like reflection, which are often beneficial for tree-search and multi-turn tasks, enhance JOSH\\u2019s effectiveness if integrated?\"], \"flag_for_ethics_review\": \"['No 
ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Reviewer dqjx\", \"comment\": \"Thank you for your detailed response and the additional experiments addressing questions 1 and 2. While I appreciate the effort put into these analyses and the clarifications provided, I believe that the additional experiments and findings should be carefully expanded and integrated into the paper before publication. Therefore, I will keep my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Addressing questions and concerns (R4)\", \"comment\": \"We would like to thank the reviewer for the feedback. We address each of the stated weaknesses below\\n\\n1. Novelty. While it's true that the individual components like data rollouts, beam search, and preference fine-tuning have been studied, our work introduces a novel combination of these elements in the context of self-training dialogue agents using sparse rewards. Specifically, JOSH leverages sparse reward simulations to autonomously generate preference-annotated data without external human feedback, which distinguishes it from existing methods that often rely on dense rewards or human annotations.\\n\\nIn our paper, we discuss how JOSH:\\n\\n- Integrates sparse rewards with beam search simulations to explore and harvest optimal conversation trajectories effectively.\\n- Generates both supervised and preference data from the agent's own simulations, enabling self-improvement in a way that hasn't been extensively explored.\\n- Demonstrates significant performance gains across models of varying sizes, including frontier models like GPT-4, showcasing the scalability and effectiveness of our approach.\\n\\nWe acknowledge that we could have more explicitly highlighted the novelty of our method in comparison to prior work. 
In the revised version, we will emphasize how JOSH differentiates itself and contributes uniquely to the field, providing a clearer articulation of its innovative aspects.\\n\\n2. Experiments. \\nIn this paper we evaluate JOSH against **two** benchmarks: one introduced in this paper, ToolWOZ (Table 2), and one external benchmark, Tau-bench (Table 3). These two particular datasets were the only benchmarks used because they have critical aspects needed for JOSH to function: scenarios with multiple ground truth actions that can serve as sparse rewards. Even though the MINT[2] benchmark takes place in a tool-calling multi-turn dialogue setting, it lacks a goal set of individual tool calls that can serve as sparse rewards for JOSH to function on; rather, there is only a single solution that the bot is iterating towards. While it would be interesting to adapt MINT so JOSH can be applied, this task is beyond the scope of this paper. While we could adapt JOSH to perform on a web-based tool-use benchmark such as WebLinx[1], we aim in this paper to improve the ability of dialogue agents, thus making web navigation also out of scope for this paper.\\n\\nBaselines.\\nJOSH introduces a novel approach to sparse reward-based alignment and self-improvement specifically tailored for multi-turn dialogue agents utilizing tools. While there are existing self-alignment methods, most are not designed for dialogue agents or do not operate effectively in multi-turn settings. To the best of our knowledge, JOSH is the first method that enables dialogue agents to self-improve their tool-calling capabilities in a multi-turn conversational context without relying on external human feedback.\\n\\nThe approach outlined in \\\"Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents\\\" (ETO) involves a behavioral cloning step that requires initial training on ground truth examples (human input). 
However, the core objective of our paper and JOSH is to \\\"autonomously enhance LLM agent performance without external human feedback.\\\" Additionally, ETO's contrastive training phase cannot be executed on closed-source LLMs such as GPT-4, which limits the models that can be trained with ETO.\\n\\nSimilarly, V-STaR: Training Verifiers for Self-Taught Reasoners offers a method for advanced self-training based on output success but is designed for single-turn outputs like GSM8K or MATH. Our paper focuses on techniques for solving multi-turn dialogue problems. While adapting V-STaR for multi-step self-improvement would be interesting, it is also beyond this paper's scope. Additionally, we were not able to find a code implementation of V-STaR, which is another impediment to reproduction.\\n\\nIn our work, we provide a comprehensive baseline by experimenting with multiple training methods across various model sizes, including both small and frontier models. This thorough exploration demonstrates the scalability and effectiveness of JOSH, setting a new standard for future research in this area. Our focus on enhancing tool use within multi-turn dialogues distinguishes JOSH from other approaches and highlights its unique contribution to the field.\\n\\nBy establishing this baseline, we aim to facilitate comparisons in future studies and encourage the development of more advanced self-alignment techniques for dialogue agents. We believe that JOSH paves the way for new possibilities in creating autonomous, efficient, and capable dialogue systems.\", \"questions\": \"The price to finetune gpt-4o is $25.00/1M training tokens and gpt-4o-mini is $3.00/1M training tokens. 
On average we trained from 3 million training tokens to 10 million training tokens in our experiments depending on the dataset and how many examples were used, so it cost anywhere from $75 to $250 to finetune gpt-4o.\"}", "{\"metareview\": \"This work proposes an LLM self-training approach based on sparse reward simulation. The authors also plan to release their code and data. The reviewers appreciate the general idea of this work, the value of the ToolWOZ benchmark to the community, the performance improvements achieved, and the clarity in presentation. Concerns are also raised, however, primarily regarding experimental validation, the design of the simulator (validity, reward function and API call design) as well as concerns around novelty and relation to existing works. The authors provide extensive and detailed responses but the reviewers are not convinced.\", \"additional_comments_on_reviewer_discussion\": \"The discussions mostly focus on clarifying design choices around JOSH. The authors agree that more comparisons to stronger baselines are needed but also emphasize the positive results already in the paper. The authors also conducted a deeper analysis on the API calls issue and present new results (that, however do not seem to convince the reviewer). Overall, while many of the concerns seem to be clarifications, some are more fundamental (e.g. comparing against stronger baselines) and therefore I believe this work is not yet ready for publication.\"}", "{\"title\": \"Addressing the weaknesses pointed out (R2)\", \"comment\": \"1. Thank you for pointing this out, indeed it is not clear. For the purpose of this work, our goal set per simulation is a set of APIs (or tools) that must be called with corresponding parameters. 
E.g., if a simulation for canceling a flight requires the \\u2018retrieve_reservation(confirmation_code=ABCDEF)\\u2019 API to be called with the corresponding confirmation code, that is a goal, and it is only achieved once the API is called with the correct parameters. There are a number of considerations here with respect to agent behavior, the goal set, and the interaction with our beam search. First, our beam search is designed to follow paths once goals are hit, so this naturally will select for trajectories where goals are achieved earlier in the conversation. It is an open question whether this can be suboptimal, but changing the beam search strategy could potentially account for this. Once a goal is achieved, it is removed from the set, so the agent cannot continue to obtain rewards from making duplicate calls. Furthermore, we force an ordering on goals to ensure they are called in the right order. This ordering is enforced by ensuring that some APIs require information that can only be found by making other API calls correctly. For example, the \\u201cbook_train\\u201d API call requires a train_id, which can only be found by making a correct call to \\u201csearch_train\\u201d. This way, we ensure that a \\u201cbook_train\\u201d API call cannot be made correctly without first searching for the correct train. Extensions to the goal set and its dynamics are an interesting topic of future work.\\n\\n2. We agree that the internal action breakup of an episode [conversation] happening between an agent and a user in ToolWOZ may differ significantly and not have a clear one-to-one correspondence to action trajectories as they used to take place in MultiWOZ. 
\\nCasting the interaction in terms of alternating natural language token sequence generation by the agent and the user naturally requires turn-level branching for ToolWOZ: if one were to enforce a strict agent-action-based framework that decomposes the notion of a turn, each token generation step by the agent would have to be considered an \\u201caction\\u201d; this would both lead to unnaturally long \\u201caction\\u201d sequences and make the intuitive passing of control between agent and user a lower-frequency intermediate event that happens once every k actions, rather than being in natural lockstep with the turn granularity. Also, the user here is a part of the environment, so the user turn that follows an agent turn can be seen as a natural part of the environment\\u2019s state transition function. With ToolWOZ, we were looking for an approximate example-level correspondence to MultiWOZ in terms of the initial information as well as the goals, rather than exactly creating a one-to-one mappable replication of the action dynamics or trajectories that would take place in MultiWOZ.\\n\\nA binary tree has 2^(h-1) leaf nodes, where h is the height of the tree; since JOSH splits at the turn level, we can expect t = log_2(max_branches)+1 to be the number of turns t before JOSH can no longer expand the tree. There are roughly 3 actions per turn on average, so the number of actions is a \\u2248 3t, and thus the number of turns allowed before branching would stop when splitting on actions is t = (log_2(max_branches)+1)/3. Thus when max_branches=8, which is used throughout the paper to keep costs reasonable (around $100), we could perform either t=4 turns while still splitting, or t=4/3 turns when splitting on actions. While splitting on actions may provide more diversity, over the course of a multi-turn dialogue we can explore more possible paths deep in the tree for the same max_branches when splitting on turns. 
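As a quick numeric check of the depth arithmetic above (an illustrative sketch of our own; the function names and the 3-actions-per-turn figure come from the discussion, not from released JOSH code):

```python
import math

def turns_when_splitting_on_turns(max_branches: int) -> float:
    # Branch once per dialogue turn: a binary tree with 2^(h-1) leaves
    # exhausts a budget of max_branches leaves at depth
    # h = log2(max_branches) + 1 turns.
    return math.log2(max_branches) + 1

def turns_when_splitting_on_actions(max_branches: int, actions_per_turn: int = 3) -> float:
    # Branch once per agent action (~3 actions per turn on average), so
    # the same leaf budget is spent roughly 3x faster measured in turns.
    return (math.log2(max_branches) + 1) / actions_per_turn

print(turns_when_splitting_on_turns(8))    # 4.0 turns of branching
print(turns_when_splitting_on_actions(8))  # ~1.33 turns of branching
```

With the paper's budget of max_branches=8, turn-level splitting sustains branching for 4 turns versus roughly 1.3 turns under action-level splitting, matching the trade-off described above.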
\\n\\nIn reference to Figure 2, thank you for pointing this out; we have revised the figure to reflect the correct system design and will include the corrected figure in the final version of the paper.\\n\\n3. We plan to compress Section 3 (ToolWOZ) in order to spend more time rigorously defining the JOSH process, incorporating the definitions given in these replies and in replies to the other reviewers into the final version of the paper. For a deep dive into the reward process, we have further outlined many details in points 1 and 2 of this comment, as well as the comment to Reviewer 1.\\n\\n4. Thank you for this point, and we agree wholeheartedly that other RL-based methods should be explored further using this approach. For open-source models we include the exploration of other RL approaches (meta-llama-8B-JOSH-KTO in Table 2) and find better results than variants of supervised fine-tuning. We also further explore the benefits of using KTO in the Analysis sections 5.2 and 5.3. However, for closed-source models (GPT) we are limited to training only via supervised fine-tuning through their website. Ideally, companies such as OpenAI would allow for more exploratory training techniques, but in order to show that JOSH works for models of all sizes, including frontier models, we chose to take advantage of the limited training that was available to us.\"}", "{\"title\": \"Addressing weaknesses 3 and 4 stated (R1)\", \"comment\": \"3. **Design of the Average Reward Function**\\n\\nWe chose the average reward function to balance efficiency and effectiveness in multi-turn dialogues. By averaging rewards over the total number of goals, the agent is incentivized to accomplish all objectives while minimizing the number of API calls and dialogue turns. 
This approach discourages unnecessary actions and promotes concise, goal-oriented behavior.\\n\\n**Alternative Reward Structures and Their Implications**\", \"we_considered_several_alternative_reward_structures\": [\"**Cumulative Reward**: This approach sums all rewards without normalization. While straightforward, it may encourage the agent to make excessive API calls to maximize the total reward, leading to inefficient interactions. Our goal is to have the agent resolve customer issues with the minimal necessary API calls, so cumulative rewards are less suitable.\", \"**Per-Turn Reward**: Assigning rewards at each turn provides dense feedback, potentially accelerating learning. However, it requires per-turn level annotations, which are expensive to obtain. Although leveraging an LLM as a judge to approximate per-turn rewards is possible, it demands significant resources to develop effectively. We leave this exploration for future work.\", \"**Sparse Goal-Based Reward**: Similar to our method, this rewards the agent only upon achieving specific goals. The key difference is that traditional sparse rewards grant a single reward at the end of the conversation upon completing all goals. In contrast, our average reward function provides partial rewards as each goal (API call) is achieved during the conversation. This offers earlier feedback, helping the agent adjust its behavior in real-time.\", \"**Shaped Reward**: Incorporating intermediate rewards can guide the agent toward goals more effectively. However, designing appropriate shaping rewards is complex and may require an LLM judge to evaluate intermediate actions, adding to the development time and resource requirements. 
We consider this an area for future investigation.\", \"**Potential Effects on Agent Behavior**\", \"**Cumulative Reward**: May encourage inefficient behavior by incentivizing the agent to perform unnecessary actions to accumulate more rewards, leading to longer and less efficient dialogues.\", \"**Per-Turn Reward**: Could cause the agent to prioritize immediate, potentially low-value actions that yield instant rewards, detracting from achieving the overall conversation goals.\", \"**Sparse Goal-Based Reward (Our Approach)**: By providing partial rewards for each achieved goal, the agent is motivated to focus on completing all tasks efficiently, enhancing both effectiveness and dialogue conciseness.\", \"**Shaped Reward**: While potentially improving learning speed, it risks overcomplicating the reward structure and may inadvertently encourage the agent to optimize for intermediate rewards rather than the final objectives.\", \"By adopting the average reward function with partial sparse rewards, we effectively promote efficient goal completion without the complexities and potential drawbacks of alternative reward structures. This design choice aligns with our objectives of fostering efficient, goal-oriented dialogues while maintaining a straightforward and effective reward mechanism.\", \"The related work section does not cover relevant advancements in language agents for multi-turn dialogues.\", \"4. We are adding a section to the final paper covering the strides multi-turn dialogue has taken. We will again reference the advent of LLMs as stated in the introduction, the rise of tool use in agents (Gorilla LLM, ToolBench), and approaches that have been used to improve the function of LLM agents, such as ReACT and Chain-of-Thought.\"]}
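The average reward function with partial sparse rewards discussed in the thread above can be sketched minimally as follows (an illustration of our own; the goal representation and function name are assumptions, not ToolWOZ's released implementation): a goal is an API name plus its required arguments, each achieved goal grants 1/total_goals of credit, and an achieved goal is removed so duplicate calls earn nothing.

```python
# Illustrative sketch (assumed names, not the JOSH/ToolWOZ codebase):
# partial sparse rewards under the average reward function.

def step_reward(call, remaining_goals, total_goals):
    # A goal is (api_name, frozenset of (argument, value) pairs).
    if call in remaining_goals:
        remaining_goals.remove(call)   # consume the goal once achieved
        return 1.0 / total_goals       # partial credit toward the average
    return 0.0

goals = {
    ("search_train", frozenset({("day", "monday"), ("departure", "cambridge")})),
    ("book_train", frozenset({("train_id", "TR1234")})),
}
remaining = set(goals)
call = ("search_train", frozenset({("day", "monday"), ("departure", "cambridge")}))

print(step_reward(call, remaining, len(goals)))  # 0.5 on the first success
print(step_reward(call, remaining, len(goals)))  # 0.0 for the duplicate call
```

Note that the goal ordering described in the reply to R2 needs no extra machinery here: book_train can only be matched after its train_id argument has been obtained from a successful search_train call.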